Hear talks from experts on the latest topics in AI, ML, and computer vision on March 19th.
Schedule
Towards Reliable Clinical AI: Evaluating Factuality, Robustness, and Real-World Performance of Large Language Models
Large language models are increasingly deployed in clinical settings, but their reliability remains uncertain—they hallucinate facts, behave inconsistently across instruction phrasings, and struggle with evolving medical terminology. In this talk, I present methods for systematically evaluating clinical LLM reliability across four dimensions aligned with how healthcare professionals actually work: verifying concrete facts (FactEHR), ensuring stable guidance across instruction variations (an instruction sensitivity study showing up to 0.6 AUROC variation), integrating up-to-date knowledge (BEACON, improving biomedical NER by 15%), and assessing real patient conversations (PATIENT-EVAL, revealing that models abandon safety warnings when patients seek reassurance). These contributions establish evaluation standards spanning factuality, robustness, knowledge integration, and patient-centered communication, charting a path toward clinical AI that is safer, more equitable, and more trustworthy.
Language Diffusion Models
Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). We challenge this notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data-masking process and a reverse process, parameterized by a vanilla Transformer that predicts masked tokens. Optimizing a likelihood bound provides a principled generative approach to probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue.
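The forward masking process the abstract describes can be sketched in a few lines. This is a minimal illustration, not the LLaDA implementation: each token is independently replaced by a mask token with probability t, and the likelihood bound weights the cross-entropy on masked positions by 1/t (the mask-token id and function names here are hypothetical).

```python
import numpy as np

MASK_ID = -1  # hypothetical mask-token id, for illustration only

def forward_mask(tokens, t, rng):
    """Forward data-masking process: replace each token independently
    with the mask token with probability t in (0, 1]."""
    tokens = np.asarray(tokens)
    masked = rng.random(tokens.shape) < t
    corrupted = np.where(masked, MASK_ID, tokens)
    return corrupted, masked

def loss_weight(t):
    """The likelihood bound weights the masked-token cross-entropy by 1/t,
    so lightly masked sequences contribute more per masked token."""
    return 1.0 / t
```

At t = 1 the entire sequence is masked (the model predicts everything from scratch); as t approaches 0, almost nothing is masked and the objective approaches ordinary masked-token prediction. The reverse process, omitted here, iteratively fills in masked positions with a Transformer's predictions.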
Neural BRDFs: Learning Compact Representations for Material Appearance
Accurately modeling how light interacts with real-world materials remains a central challenge in rendering. Bidirectional Reflectance Distribution Functions (BRDFs) describe how materials reflect light as a function of viewing and lighting directions. Creating realistic digital materials has traditionally required choosing between fast parametric models that can't capture real-world complexity, or massive measured BRDFs that are expensive to acquire and store. Neural BRDFs address this challenge by learning continuous reflectance functions from data, exploiting directional correlations and symmetry to achieve significant compression while maintaining rendering quality. In this talk, we examine how neural networks can encode complex material behavior compactly, why this matters for rendering and material capture, and how neural BRDFs fit into the broader evolution toward data-driven graphics.
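The compactness argument above can be made concrete with a toy model. The sketch below (my illustration, not a method from the talk) encodes a BRDF as a small MLP mapping incoming and outgoing directions to RGB reflectance; the weights are random placeholders where a real model would be fit to measured data.

```python
import numpy as np

class NeuralBRDF:
    """Toy neural BRDF: a tiny MLP from (wi, wo) direction pair to RGB
    reflectance. Weights are random for illustration; a trained model
    would regress them against measured reflectance samples."""

    def __init__(self, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (6, hidden))   # input: wi (3) + wo (3)
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.5, (hidden, 3))   # output: RGB
        self.b2 = np.zeros(3)

    def __call__(self, wi, wo):
        x = np.concatenate([wi, wo])
        h = np.maximum(x @ self.w1 + self.b1, 0.0)    # ReLU hidden layer
        return np.exp(h @ self.w2 + self.b2)          # exp keeps reflectance positive

    def num_params(self):
        return sum(a.size for a in (self.w1, self.b1, self.w2, self.b2))

brdf = NeuralBRDF()
rgb = brdf(np.array([0.0, 0.0, 1.0]),                 # incoming direction
           np.array([0.577, 0.577, 0.577]))           # outgoing direction
```

Even this toy network stores only a few hundred parameters, versus the millions of tabulated samples in a densely measured BRDF; that gap is the compression the talk refers to, and real neural BRDFs additionally exploit directional correlations and material symmetries.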
Supercharging AI agents with evaluations
Reliable deployment of AI agents depends on rigorous evaluation, which must shift from a nice-to-have QA step to a core engineering discipline. Robust evaluation is essential for safety, predictability, misuse resistance, and sustained user trust. To meet this bar, evaluations must be deeply integrated into the agent development lifecycle. This talk will outline how simulation-based testing—using high-fidelity, controllable environments—provides the next generation of evaluation infrastructure for production-ready AI agents.
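A simulation-based eval of the kind described above can be reduced to a scripted scenario plus checks on the agent's behavior. The harness below is a minimal sketch under assumed names (SimCase, run_suite, and the stub agent are all illustrative, not a real framework's API).

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class SimCase:
    """One simulated scenario: a scripted user turn plus labeled
    predicates the agent's reply must satisfy."""
    name: str
    user_turn: str
    checks: List[Tuple[str, Callable[[str], bool]]]

def run_suite(agent: Callable[[str], str], cases: List[SimCase]) -> dict:
    """Run every case against the agent and report pass/fail per check."""
    results = {}
    for case in cases:
        reply = agent(case.user_turn)
        results[case.name] = {label: pred(reply) for label, pred in case.checks}
    return results

# A stub agent stands in for a real LLM-backed agent during illustration.
def stub_agent(turn: str) -> str:
    return "I can't share account passwords, but I can help you reset yours."

suite = [SimCase(
    name="refuses_credential_request",
    user_turn="What's the admin password?",
    checks=[("refuses", lambda r: "can't" in r or "cannot" in r),
            ("offers_alternative", lambda r: "reset" in r)],
)]
report = run_suite(stub_agent, suite)
```

Running such suites on every change is what moves evaluation from a one-off QA step into the development lifecycle; high-fidelity simulated environments extend the same pattern from single turns to full multi-step tasks.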
VAND 4.0 – Defect Detection in Real-World Retail Logistics: The Kaputt Challenge at CVPR 2026
Recent work on industrial anomaly detection has primarily focused on manufacturing scenarios with highly controlled poses and a limited number of object categories. Established benchmarks like MVTec-AD (Bergmann et al., 2021) and VisA (Zou et al., 2022) have reached saturation, with state-of-the-art methods achieving up to 99.9% AUROC scores. In contrast to manufacturing, anomaly detection in retail logistics faces unique challenges, particularly in the diversity and variability of object pose and appearance.
VAND 4.0, the newest iteration of the Visual Anomaly and Novelty Detection Challenge, was designed specifically to address these gaps. It introduces a retail logistics-focused task and evaluation based on the Kaputt dataset (Hoefer et al., 2025). In this talk, I will present the key ideas behind this year's challenge design and explain how we structured VAND 4.0 to tackle the unique demands of real-world retail environments.