Visual AI in Healthcare

June 27, 2025 | 9 AM Pacific

Join us for the first of several virtual events focused on the latest research, datasets and models at the intersection of visual AI and healthcare.

When

June 27 at 9 AM Pacific

Where

Online. Register for the Zoom!
MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders

Aswin Kumar

Stanford

Maya Varma

Stanford

We present MedVAE, a family of six generalizable 2D and 3D variational autoencoders trained on over one million images from 19 open-source medical imaging datasets using a novel two-stage training strategy. MedVAE downsizes high-dimensional medical images into compact latent representations, reducing storage by up to 512× and accelerating downstream tasks by up to 70× while preserving clinically relevant features. We demonstrate across 20 evaluation tasks that these latent representations can replace high-resolution images in computer-aided diagnosis pipelines without compromising performance. MedVAE is open-source with a streamlined finetuning pipeline and inference engine, enabling scalable model development in resource-constrained medical imaging settings.
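
The core idea — replacing high-resolution images with compact autoencoder latents in downstream pipelines — can be pictured with a few lines of code. The sketch below is a minimal, generic PyTorch illustration of that pattern, not the MedVAE architecture or API; the layer sizes, latent shape, and class names (ToyVAEEncoder, ToyLatentClassifier) are assumptions for illustration only.

```python
# Minimal sketch of the latent-compression idea: encode a high-resolution image
# into a small latent volume, then run downstream models on the latent instead
# of the full image. NOT the MedVAE implementation; sizes are illustrative only.
import torch
import torch.nn as nn

class ToyVAEEncoder(nn.Module):
    """Downsamples a 1x512x512 image to a compact latent (hypothetical sizes)."""
    def __init__(self, latent_channels: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),  nn.ReLU(),   # 512 -> 256
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
        )
        self.to_mu = nn.Conv2d(128, latent_channels, 1)
        self.to_logvar = nn.Conv2d(128, latent_channels, 1)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return z

class ToyLatentClassifier(nn.Module):
    """Downstream diagnosis head that consumes the latent, not the full image."""
    def __init__(self, latent_channels: int = 4, num_classes: int = 2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(latent_channels, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
        )

    def forward(self, z):
        return self.head(z)

image = torch.randn(1, 1, 512, 512)   # stand-in for a high-resolution scan
z = ToyVAEEncoder()(image)            # compact latent, e.g. 1x4x64x64
logits = ToyLatentClassifier()(z)     # classifier runs entirely on the latent
print(z.shape, logits.shape)
```

Because every downstream task reads the small latent rather than the original image, storage and compute scale with the latent size — which is where the storage and speed gains reported in the abstract come from.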

Leveraging Foundation Models for Pathology: Progress and Pitfalls

Heather (Dunlop) Couture

PixelScientia

How do you train ML models on pathology slides that are thousands of times larger than standard images? Foundation models offer a breakthrough approach to these gigapixel-scale challenges. This talk explores how self-supervised foundation models trained on broad histopathology datasets are transforming computational pathology. We’ll examine their progress in handling weakly-supervised learning, managing tissue preparation variations, and enabling rapid prototyping with minimal labeled examples. However, significant challenges remain: increasing computational demands, the potential for bias, and questions about generalizability across diverse populations. This talk will offer a balanced perspective to help separate foundation model hype from genuine clinical value.
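
A common pattern behind these results is to tile a gigapixel slide, embed each tile with a frozen self-supervised foundation model, and train only a lightweight weakly-supervised aggregator on slide-level labels. The sketch below illustrates that pattern with a simple attention-based multiple-instance-learning head in PyTorch; the embedding dimension, tile count, and class name (AttentionMIL) are assumptions, not the interface of any particular pathology foundation model.

```python
# Sketch of weakly-supervised aggregation over frozen foundation-model tile
# embeddings (attention-based multiple-instance learning). The random
# "embeddings" are placeholders; a real pipeline would tile the slide and run
# each tile through a pretrained histopathology encoder.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Pools a variable number of tile embeddings into one slide-level prediction."""
    def __init__(self, embed_dim: int = 768, hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, tile_embeddings):           # (num_tiles, embed_dim)
        scores = self.attention(tile_embeddings)  # (num_tiles, 1)
        weights = torch.softmax(scores, dim=0)    # attention over tiles
        slide_embedding = (weights * tile_embeddings).sum(dim=0)
        return self.classifier(slide_embedding), weights

# Stand-in for ~10k tiles from one slide, embedded by a frozen encoder.
tile_embeddings = torch.randn(10_000, 768)
logits, tile_weights = AttentionMIL()(tile_embeddings)
print(logits.shape, tile_weights.shape)
```

Because only the small aggregator is trained, a handful of labeled slides can be enough to prototype a new task — the "rapid prototyping with minimal labeled examples" benefit mentioned above — while the attention weights give a rough indication of which tiles drove the prediction.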

LesionLocator: Zero-Shot Universal Tumor Segmentation and Tracking in 3D Whole-Body Imaging

Maximilian Rokuss

German Cancer Research Center (DKFZ)

Recent advances in promptable segmentation have transformed medical imaging workflows, yet most existing models are constrained to static 2D or 3D applications. This talk presents LesionLocator, the first end-to-end framework for universal 4D lesion segmentation and tracking using dense spatial prompts. The system enables zero-shot tumor analysis across whole-body 3D scans and multiple timepoints, propagating a single user prompt through longitudinal follow-ups to segment and track lesion progression. Trained on over 23,000 annotated scans and supplemented with a synthetic time-series dataset, LesionLocator achieves human-level performance in segmentation and outperforms state-of-the-art baselines in longitudinal tracking tasks. The presentation also highlights advances in 3D interactive segmentation, including our open-set tool nnInteractive, showing how spatial prompting can scale from user-guided interaction to clinical-grade automation.
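
The longitudinal workflow described above can be pictured as a short loop: segment the lesion in the baseline scan from a single spatial prompt, then carry that prompt forward into each follow-up scan and segment again. The sketch below is a hypothetical illustration of that loop only; segment_from_prompt and register_to_baseline are placeholder functions standing in for a promptable 3D segmentation model and a registration step, and are not LesionLocator's actual interface.

```python
# Hypothetical sketch of prompt propagation across longitudinal scans.
# The two helper functions are placeholders, NOT LesionLocator's API.
import numpy as np

def segment_from_prompt(scan: np.ndarray, prompt_xyz: tuple) -> np.ndarray:
    """Placeholder: a promptable 3D model would return a lesion mask here."""
    mask = np.zeros(scan.shape, dtype=bool)
    mask[tuple(slice(max(c - 5, 0), c + 5) for c in prompt_xyz)] = True  # dummy blob
    return mask

def register_to_baseline(baseline: np.ndarray, followup: np.ndarray,
                         point_xyz: tuple) -> tuple:
    """Placeholder: map a baseline coordinate into follow-up space (identity here)."""
    return point_xyz

def track_lesion(scans: list, prompt_xyz: tuple) -> list:
    """Segment the baseline from one prompt, then propagate it through follow-ups."""
    baseline = scans[0]
    masks = [segment_from_prompt(baseline, prompt_xyz)]
    for followup in scans[1:]:
        propagated = register_to_baseline(baseline, followup, prompt_xyz)
        masks.append(segment_from_prompt(followup, propagated))
    return masks

# Three synthetic volumes standing in for a baseline scan plus two follow-ups.
scans = [np.random.rand(64, 64, 64) for _ in range(3)]
masks = track_lesion(scans, prompt_xyz=(32, 32, 32))
print([int(m.sum()) for m in masks])  # segmented lesion volume per timepoint
```

The single user prompt at baseline is the only manual input; everything after it — follow-up segmentation and change measurement over time — is automated, which is the zero-shot longitudinal behavior the talk describes.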

Find a Meetup Near You

Join the AI and ML enthusiasts who have already become members

The goal of the AI, Machine Learning, and Computer Vision Meetup network is to bring together a community of data scientists, machine learning engineers, and open-source enthusiasts who want to share and expand their knowledge of AI and complementary technologies. If that’s you, we invite you to join the Meetup closest to your time zone.