
Recapping the Computer Vision Meetup — May 11, 2023

We just wrapped up the May 11, 2023 Computer Vision Meetup, and if you missed it or want to revisit it, here’s a recap! In this blog post you’ll find the playback recordings, highlights from the presentations and Q&A, as well as the upcoming Meetup schedule so that you can join us at a future event.

First, Thanks for Voting for Your Favorite Charity!

In lieu of swag, we gave Meetup attendees the opportunity to help guide our monthly donation to charitable causes. The charity that received the highest number of votes this month was BRAC! We are sending this month’s charitable donation of $200 to BRAC, an organization empowering people to rise above poverty, on behalf of the computer vision community.


Missed the Meetup? No problem. Here are playbacks and talk abstracts from the event.

Lightning Talk: Visualizing Defects in Amazon’s ARMBench Dataset Using Embeddings and OpenAI’s CLIP Model

In this lightning talk, machine learning engineer Allen Lee from Voxel51 gave us a quick tour of Amazon’s recently released ARMBench dataset for training “pick and place” robots. You can learn more about how to create embeddings on the dataset using the FiftyOne Brain to derive interesting insights in the companion blog and notebook on GitHub.

The Role of Symmetry in Human and Computer Vision

Symmetry is one of the most ubiquitous regularities in our natural world. For almost 100 years, human vision researchers have studied how the human vision system has evolved to exploit this powerful regularity as a basis for grouping image features. While computer vision is a much younger discipline, the trajectory is similar, with symmetry playing a major role in both perceptual grouping and object representation. After briefly reviewing some of the milestones in symmetry-based perceptual grouping and object representation/recognition in both human and computer vision, I will review our efforts that draw on computer vision to understand the role that symmetry plays in human scene perception. Conversely, I will also look at how these results in human scene perception can strengthen the performance of modern deep learning computer vision systems for scene perception.

Sven Dickinson is Professor of Computer Science at the University of Toronto, and is also Vice President and Head of the new Samsung Toronto AI Research Center. Learn more about his research and publications.

Q&A from the talk included:

  • How do you think these learnings can inform future deep learning foundation models like SAM, Symmetry-Net, etc.?
  • MAT is very susceptible to noise in the edges (which are likely present); how do you handle that to get a “reasonable” medial axis?
  • How can you tell whether it’s our ability to use symmetry vs our ability to extrapolate lines to the implied junction point that explains why removing the middle is so important?
  • Is there a way to remove different types of symmetry and does it have an effect on human categorization?
  • Perhaps the same experiment would show a different result if it used ResNet instead?
  • The separation score showed a reverse trend for VGG16 vs. humans; any insights on why this is the case?
  • Have similar tests been made on a Transformer architecture? Some symmetry is “enforced” by convolution (translational), so could the transformer architecture be less rigid for symmetry and if so, would this have less of an effect?
  • Is there a role for something like “amount of expected symmetry”? That is, might the visual system weight the importance of symmetries more in scenes that are expected to have them?
  • What software was used to create the visualizations?

Machine Learning for Fast, Motion-Robust MRI

Magnetic resonance imaging (MRI) is a powerful imaging modality that enables detailed visualization of tissue content and structure. However, MRI suffers from long acquisition times and significant susceptibility to motion artifacts. This talk will explore deep learning approaches that incorporate imaging physics to produce high-quality MR images from highly accelerated and/or motion-corrupted data.

Nalini Singh is a Ph.D. student at the Harvard-MIT Program in Health Sciences and Technology, working in the Medical Vision Group at the MIT Computer Science and Artificial Intelligence Laboratory. Her primary research interests are in medical image reconstruction and analysis, signal processing, and inverse problems.

Q&A from the talk included:

  • With adjacent pixels being of the same portion, how is occlusion by other vessel/body parts handled?
  • Can this method be implemented in real time, for example by attaching EEG electrodes to the head?
  • Would it be preferable to incorporate ordinary cameras (presumably using mirrors) to recover the motion directly, rather than hoping that MRI-only reconstruction is accurate (rather than merely plausible)?
  • When imposing consistency, would it be possible to account for the possible motion, not enforce that they match exactly, but that the change can be explained by a motion?

Join the Computer Vision Meetup!

Computer Vision Meetups Worldwide, sponsored by Voxel51

Computer Vision Meetup membership has grown to over 4,000 members in just under a year! The goal of the Meetups is to bring together communities of data scientists, machine learning engineers, and open source enthusiasts who want to share and expand their knowledge of computer vision and complementary technologies. 

Join one of the 13 Meetup locations closest to your timezone.

What’s Next?

We have exciting speakers already signed up over the next few months! Become a member of the Computer Vision Meetup closest to you, then register for the Zoom. 

Up next on May 25 at 10 AM IST we have the APAC-timezone-friendly Computer Vision Meetup happening with talks including:

  • YOLO-NAS – SOTA Object Detection Generated by NAS – Ofri Masad (Deci.ai)
  • Wildlife Watcher: A Smart Wildlife Camera – Victor Anton (Wildlife.ai)
  • Applying Computer Vision to Real Estate at Opendoor – Shashwat Srivastava (Opendoor)

You can find a complete schedule of upcoming Meetups on the Voxel51 Events page.

Get Involved!

There are a lot of ways to get involved in the Computer Vision Meetups. Reach out if you identify with any of these:

  • You’d like to speak at an upcoming Meetup
  • You have a physical meeting space in one of the Meetup locations and would like to make it available for a Meetup
  • You’d like to co-organize a Meetup
  • You’d like to co-sponsor a Meetup

Reach out to Meetup co-organizer Jimmy Guerrero on Meetup.com or ping him over LinkedIn to discuss how to get you plugged in.


The Computer Vision Meetup network is sponsored by Voxel51, the company behind the open source FiftyOne computer vision toolset. FiftyOne enables data science teams to improve the performance of their computer vision models by helping them curate high quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster. It’s easy to get started in just a few minutes.