Women in AI Meetup - March 19, 2026
Mar 19, 2026
9 - 11 AM Pacific
Online. Register for Zoom!
Speakers
About this event
Hear talks from experts on the latest topics in AI, ML, and computer vision on March 19th.
Schedule
Towards Reliable Clinical AI: Evaluating Factuality, Robustness, and Real-World Performance of Large Language Models
Large language models are increasingly deployed in clinical settings, but their reliability remains uncertain—they hallucinate facts, behave inconsistently across instruction phrasings, and struggle with evolving medical terminology. In my talk, I address methods to systematically evaluate clinical LLM reliability across four dimensions aligned with how healthcare professionals actually work: verifying concrete facts (FactEHR), ensuring stable guidance across instruction variations (instruction sensitivity study showing up to 0.6 AUROC variation), integrating up-to-date knowledge (BEACON improving biomedical NER by 15%), and assessing real patient conversations (PATIENT-EVAL revealing models abandon safety warnings when patients seek reassurance). These contributions establish evaluation standards spanning factuality, robustness, knowledge integration, and patient-centered communication, charting a path toward clinical AI that is safer, more equitable, and more trustworthy.
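To make the instruction-sensitivity dimension concrete, here is a minimal sketch (not the speaker's actual protocol) of measuring how AUROC shifts across paraphrased instructions; `score_with_model` is a hypothetical stand-in for a clinical LLM call.

```python
# Illustrative sketch of an instruction-sensitivity check (not the speaker's code).
# The same labeled clinical questions are scored under several instruction
# paraphrases; a large AUROC spread indicates unstable guidance.
from sklearn.metrics import roc_auc_score

INSTRUCTIONS = [
    "Answer whether the statement is supported by the patient note.",
    "Decide if the claim below is consistent with the clinical note.",
    "Is the following assertion entailed by the note? Answer yes or no.",
]

def score_with_model(instruction: str, example: dict) -> float:
    """Hypothetical stand-in for a clinical LLM call; should return the
    model's probability that the claim is supported by the note."""
    raise NotImplementedError("plug in your model here")

def auroc_per_instruction(examples: list[dict]) -> dict[str, float]:
    """AUROC of the model's scores for each instruction phrasing."""
    labels = [ex["label"] for ex in examples]  # 1 = supported, 0 = not supported
    return {
        instruction: roc_auc_score(
            labels, [score_with_model(instruction, ex) for ex in examples]
        )
        for instruction in INSTRUCTIONS
    }

def auroc_spread(examples: list[dict]) -> float:
    """Max minus min AUROC across phrasings, i.e. the sensitivity being measured."""
    aurocs = auroc_per_instruction(examples).values()
    return max(aurocs) - min(aurocs)
```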
Language Diffusion Models
Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). This talk challenges that notion by introducing LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens. Optimizing a likelihood bound provides a principled generative approach for probabilistic inference. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive instruction-following abilities in case studies such as multi-turn dialogue.
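For readers who want the masked-diffusion objective in concrete terms, here is a minimal sketch under simplifying assumptions; it is not the LLaDA implementation, and `model` and `MASK_ID` are placeholders for a Transformer-style callable and the mask token id.

```python
# Sketch of a masked-diffusion training objective of the kind described above.
# Illustration only, not the LLaDA implementation; `model` is any callable
# mapping token ids to per-position logits over the vocabulary.
import torch
import torch.nn.functional as F

MASK_ID = 0  # placeholder id for the [MASK] token

def forward_masking(tokens: torch.Tensor, t: torch.Tensor):
    """Forward process: each token is independently replaced by [MASK]
    with probability t (a per-sequence 'time' in (0, 1])."""
    mask = torch.rand(tokens.shape, device=tokens.device) < t.unsqueeze(-1)
    noisy = torch.where(mask, torch.full_like(tokens, MASK_ID), tokens)
    return noisy, mask

def masked_diffusion_loss(model, tokens: torch.Tensor) -> torch.Tensor:
    """Reverse process: the model predicts the original tokens at masked
    positions; weighting the cross-entropy by 1/t gives a bound on the
    negative log-likelihood (details simplified for illustration)."""
    t = torch.rand(tokens.size(0), device=tokens.device).clamp_min(1e-3)
    noisy, mask = forward_masking(tokens, t)
    logits = model(noisy)                                # (batch, seq_len, vocab)
    ce = F.cross_entropy(logits.transpose(1, 2), tokens, reduction="none")
    per_seq = (ce * mask).sum(dim=1) / (t * tokens.size(1))
    return per_seq.mean()
```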
Neural BRDFs: Learning Compact Representations for Material Appearance
Accurately modeling how light interacts with real-world materials remains a central challenge in rendering. Bidirectional Reflectance Distribution Functions (BRDFs) describe how materials reflect light as a function of viewing and lighting directions. Creating realistic digital materials has traditionally required choosing between fast parametric models that can't capture real-world complexity, or massive measured BRDFs that are expensive to acquire and store. Neural BRDFs address this challenge by learning continuous reflectance functions from data, exploiting directional correlations and symmetry to achieve significant compression while maintaining rendering quality. In this talk, we examine how neural networks can encode complex material behavior compactly, why this matters for rendering and material capture, and how neural BRDFs fit into the broader evolution toward data-driven graphics.
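As a concrete illustration of the idea, the sketch below shows a tiny MLP mapping light and view directions to RGB reflectance; real neural BRDFs use richer parameterizations (e.g. half-angle coordinates) and more careful output activations, which are omitted here.

```python
# Minimal sketch of a neural BRDF: an MLP that maps incoming/outgoing light
# directions to RGB reflectance. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralBRDF(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # Input: concatenated unit vectors for light and view directions (6 values).
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),          # RGB reflectance
        )

    def forward(self, wi: torch.Tensor, wo: torch.Tensor) -> torch.Tensor:
        x = torch.cat([wi, wo], dim=-1)
        return F.softplus(self.net(x))     # keep reflectance non-negative

# Usage: fit to measured samples (wi, wo, rgb) with a simple regression loss.
model = NeuralBRDF()
wi = F.normalize(torch.randn(1024, 3), dim=-1)
wo = F.normalize(torch.randn(1024, 3), dim=-1)
rgb = model(wi, wo)                        # (1024, 3) predicted reflectance
```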
Supercharging AI agents with evaluations
Reliable deployment of AI agents depends on rigorous evaluation, which must shift from a nice-to-have QA step to a core engineering discipline. Robust evaluation is essential for safety, predictability, misuse resistance, and sustained user trust. To meet this bar, evaluations must be deeply integrated into the agent development lifecycle. This talk will outline how simulation-based testing, using high-fidelity, controllable environments, provides the next generation of evaluation infrastructure for production-ready AI agents.
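As a rough illustration of simulation-based testing, the sketch below pairs controllable scenarios with success checks and reports a pass rate; the `agent_step` callable is a hypothetical stand-in for an agent acting in the simulated environment, not any specific product's API.

```python
# Illustrative sketch of simulation-based agent evaluation: each scenario pairs
# a controllable environment state with a success check, and the harness
# reports a pass rate across scenarios.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    name: str
    initial_state: dict
    success: Callable[[dict], bool]   # check on the final environment state

def run_evaluation(agent_step: Callable[[dict], dict],
                   scenarios: list[Scenario],
                   max_steps: int = 20) -> float:
    """Runs the agent in each simulated scenario and returns the pass rate."""
    passed = 0
    for scenario in scenarios:
        state = dict(scenario.initial_state)
        for _ in range(max_steps):
            state = agent_step(state)      # agent acts; environment updates
            if scenario.success(state):
                passed += 1
                break
    return passed / len(scenarios)
```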