Recapping the Computer Vision Meetup – April 27, 2023

We just wrapped up the April 27, 2023 Computer Vision Meetup, and if you missed it or want to revisit it – here’s a recap! In this blog post you’ll find the playback recordings, highlights from the presentations and Q&A, as well as the upcoming Meetup schedule so that you can join us at a future event. 

First, Thanks for Voting for Your Favorite Charity!

In lieu of swag, we gave Meetup attendees the opportunity to help guide our monthly donation to charitable causes. The charity that received the most votes this month was once again Wildlife AI! We were first introduced to Wildlife AI through the FiftyOne community; they use FiftyOne to make it easy for their users to analyze camera data and build their own models. We are sending this month’s charitable donation of $200 to Wildlife AI on behalf of the computer vision community.

Missed the Meetup? No problem. Here are playbacks and talk abstracts from the event.

Leveraging Attention for Improved Accuracy and Robustness

In recent years, the naturally interpretable attention mechanism has become one of the most common building blocks of neural networks, allowing us to produce explanations intuitively and easily. However, the applications of such explanations beyond the scope of accountability and interpretability remain limited.

In this talk, Hila presents her latest research on leveraging attention to significantly improve the accuracy and robustness of state-of-the-art large neural networks with limited resources. This is achieved by directly manipulating the attention maps based on intuitive objectives and can be applied to a variety of tasks ranging from object classification to image generation.
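
For readers who want a concrete picture (a generic illustration, not code from the talk): the attention maps such methods manipulate are the softmax-normalized token-to-token weights computed inside every transformer layer. A minimal PyTorch sketch of scaled dot-product attention that also returns those maps:

    import torch
    import torch.nn.functional as F

    def attention_with_maps(q, k, v):
        # q, k, v: (batch, heads, tokens, head_dim)
        scores = torch.matmul(q, k.transpose(-2, -1)) / q.size(-1) ** 0.5
        maps = F.softmax(scores, dim=-1)  # (batch, heads, tokens, tokens)
        return torch.matmul(maps, v), maps

    # Toy example: 1 image, 4 heads, 16 patches, 64-dim heads
    q = k = v = torch.randn(1, 4, 16, 64)
    out, maps = attention_with_maps(q, k, v)
    print(maps.shape)  # torch.Size([1, 4, 16, 16])

Roughly speaking, approaches in this space define differentiable objectives directly over maps like these (for example, encouraging attention to concentrate on foreground patches) rather than treating them purely as post-hoc explanations.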

Hila Chefer is a PhD student and lecturer at Tel Aviv University, and an intern at Google Research. Her research focuses on constructing faithful explainable AI algorithms and on leveraging explanations to improve model accuracy and robustness.

Q&A from the talk included:

  • Is the attention matrix specific to one self-attention head? Or is it aggregated in some manner across all heads?
  • In ViT, is the attention score (q,k,v) for a patch initialized from some unsupervised dataset?
  • Could you give us some intuition on how to create a differentiable loss function that can help to differentiate between foregrounds and backgrounds using vision transformers?
  • Why do stopwords have “strong” maps since they might not be affecting the output?
  • If we change the question to “A crown on the lion”, will the error be the same?
  • What are some real-world scenarios where the attention mechanism could be used?

You can jump straight to the Q&A here and here.

Breaking the Bottleneck of AI Deployment at the Edge with OpenVINO

In this workshop, you will learn how to build performant AI models with less data. You will see this in action through real-world computer vision implementations, such as object detection and anomaly detection use cases, optimization processes, and deployment at the edge. You will also learn how the open source OpenVINO toolkit can help close the gap between theoretical models and real-world implementations.

  • Train & optimize for the edge with less data
  • Improve the performance of your model regardless of hardware
  • Learn how OpenVINO can accelerate AI models
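
If you want a taste before watching, here is a minimal sketch of the typical OpenVINO inference flow in Python (the model filename is a placeholder, and a static input shape is assumed):

    import numpy as np
    from openvino.runtime import Core  # pip install openvino

    core = Core()
    # Placeholder path: an OpenVINO IR file; an ONNX file also works here
    model = core.read_model("model.xml")
    compiled = core.compile_model(model, device_name="CPU")  # or "GPU", "AUTO"

    # Dummy input matching the model's first input (assumes a static shape)
    shape = list(compiled.input(0).shape)
    image = np.random.rand(*shape).astype(np.float32)

    result = compiled([image])[compiled.output(0)]
    print(result.shape)

The workshop goes beyond raw inference, covering training with less data and post-training optimization before deployment at the edge.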

Zhuo Wu is an AI software evangelist at Intel focusing on the OpenVINO toolkit. Her work spans deep learning and 5G wireless communication technologies, and she has delivered end-to-end machine learning and deep learning solutions to business customers across a range of industries.

Q&A from the talk included:

  • Are the detections 360-degree or front-facing?
  • Which detectors does Anomalib use?
  • Is OpenVINO similar to OpenCV?
  • Does OpenVINO support training and inference on CUDA?
  • Is it possible to fine-tune the model to detect different levels of anomalies?
  • Are we able to deploy Anomalib to tablet/mobile devices?
  • For anomaly detection, do the training and testing images have to use the same camera angle and lighting conditions?
  • Does Anomalib work with synthetic data generation?

You can jump straight to the Q&A here.

Computer Vision Meetup Locations

Computer Vision Meetup membership has grown to over 3,800 members in just under a year! The goal of the Meetups is to bring together communities of data scientists, machine learning engineers, and open source enthusiasts who want to share and expand their knowledge of computer vision and complementary technologies. 

Join one of the 13 Meetup locations closest to your timezone.

What’s Next?

We have exciting speakers already signed up over the next few months! Become a member of the Computer Vision Meetup closest to you, then register for the Zoom. 

Up next, on May 11 at 10 AM Pacific, is the US- and EU-timezone-friendly Computer Vision Meetup, with talks including:

  • The Role of Symmetry in Human and Computer Vision – Sven Dickinson (University of Toronto & Samsung)
  • Machine Learning for Fast, Motion-Robust MRI – Nalini Singh (MIT)

Register for the Zoom here. You can find a complete schedule of upcoming Meetups on the Voxel51 Events page.

Get Involved!

There are a lot of ways to get involved in the Computer Vision Meetups. Reach out if you identify with any of these:

  • You’d like to speak at an upcoming Meetup
  • You have a physical meeting space in one of the Meetup locations and would like to make it available for a Meetup
  • You’d like to co-organize a Meetup
  • You’d like to co-sponsor a Meetup

Reach out to Meetup co-organizer Jimmy Guerrero on Meetup.com or ping him on LinkedIn to discuss how to get plugged in.


The Computer Vision Meetup network is sponsored by Voxel51, the company behind the open source FiftyOne computer vision toolset. FiftyOne enables data science teams to improve the performance of their computer vision models by helping them curate high quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster. It’s easy to get started in just a few minutes.
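
For example, FiftyOne’s quickstart loads a small sample dataset and opens the App in a few lines:

    import fiftyone as fo
    import fiftyone.zoo as foz

    # Download a small sample dataset with images, labels, and predictions
    dataset = foz.load_zoo_dataset("quickstart")

    # Launch the FiftyOne App to browse samples and inspect predictions
    session = fo.launch_app(dataset)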