AI, Machine Learning and Computer Vision Meetup

June 27, 2024 at 10 AM Pacific

Register for the Zoom

By submitting you (1) agree to Voxel51’s Terms of Service and Privacy Statement and (2) agree to receive occasional emails.

Leveraging Pre-trained Text2Image Diffusion Models for Zero-Shot Video Editing

Barışcan Kurtkaya
KUIS AI Fellow at Koc University

Text-to-image diffusion models demonstrate remarkable editing capabilities in the image domain, especially since Latent Diffusion Models made diffusion models more scalable. In contrast, video editing still has much room for improvement, particularly given the relative scarcity of video datasets compared to image datasets. In this talk, we will discuss whether pre-trained text-to-image diffusion models can be used for zero-shot video editing without any fine-tuning stage. Finally, we will explore possible future work and interesting research ideas in the field.

About the Speaker

Barışcan Kurtkaya is a KUIS AI Fellow and a graduate student in the Department of Computer Science at Koc University. His research interests lie in exploring and leveraging the capabilities of generative models in the realm of 2D and 3D data, encompassing scientific observations from space telescopes.

Stay tuned!