Recapping the AI, Machine Learning, and Data Science Meetup — Sept 7, 2023

We just wrapped up the Sept 7, 2023 AI, Machine Learning, and Data Science Meetup, and if you missed it or want to revisit it, here’s a recap! In this blog post you’ll find the playback recordings, highlights from the presentations and Q&A, as well as the upcoming Meetup schedule so that you can join us at a future event.

First, Thanks for Voting for Your Favorite Charity!

In lieu of swag, we gave Meetup attendees the opportunity to help guide a $200 donation to charitable causes. The charity that received the highest number of votes this month was Coalition for Rainforest Nations, an organization on a mission to save the world’s last great rainforests and achieve environmental and social sustainability. We are sending this event’s charitable donation of $200 to Coalition for Rainforest Nations on behalf of the computer vision community!

Missed the Meetup? No problem. Here are playbacks and talk abstracts from the event.

Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos

The success of Neural Radiance Fields (NeRFs) for modeling and free-view rendering of static objects has inspired numerous attempts on dynamic scenes. Current techniques that utilize neural rendering for facilitating free-viewpoint videos (FVVs) are either restricted to offline rendering or capable of processing only brief sequences with minimal motion. This talk presents a novel technique, the Residual Radiance Field (ReRF), a highly compact neural representation that achieves real-time FVV rendering on long-duration dynamic scenes.

Minye Wu is a postdoctoral researcher at KU Leuven.

Q&A

  • Why is a PCA linear encoder used? Why not a nonlinear one?

Resource links

EgoSchema: A Dataset for Truly Long-Form Video Understanding

Introducing EgoSchema, a very long-form video question-answering dataset and benchmark for evaluating the long-video understanding capabilities of modern vision and language systems. Derived from Ego4D, EgoSchema consists of over 5,000 human-curated multiple-choice question-answer pairs, spanning over 250 hours of real video data and covering a very broad range of natural human activity and behavior.

Karttikeya Mangalam is a PhD student in Computer Science at the Department of Electrical Engineering & Computer Sciences (EECS) at the University of California, Berkeley, advised by Prof. Jitendra Malik. Earlier, he held a visiting researcher position at Meta AI, where he collaborated with Dr. Christoph Feichtenhofer and team.

Q&A

  • Could you explain the difference between reconstruction and generation on your first couple of slides?
  • Can we detect face gestures?
  • Could the EgoSchema Generation Process be suitable with real estate video data? For example: “What is the curb appeal of this property?”
  • Why is it called EgoSchema?

Resource links

Monitoring Large Language Models (LLMs) in Production

Just like with any machine learning model, once you put an LLM in production you’ll probably want to keep an eye on how it’s performing. Observing key language metrics about user interactions and responses can help you craft better prompt templates and guardrails for your applications. This talk takes a look at what you might want to monitor once you deploy your LLMs.
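To make the idea of "key language metrics" concrete, here is a minimal, illustrative sketch of collecting a few per-interaction metrics (lengths and an approximate Flesch reading-ease score). This is not LangKit's actual implementation; the function names and the crude syllable counter are assumptions for illustration only.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch reading-ease score (higher = easier to read)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    word_count = max(1, len(words))
    # Crude syllable estimate: count vowel groups per word, minimum one.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (word_count / sentences) - 84.6 * (syllables / word_count)

def response_metrics(prompt: str, response: str) -> dict:
    """Collect a few simple metrics one might log for each LLM interaction."""
    return {
        "prompt_length": len(prompt),
        "response_length": len(response),
        "response_word_count": len(response.split()),
        "response_reading_ease": flesch_reading_ease(response),
    }

metrics = response_metrics(
    "What is observability?",
    "Observability means measuring a system's internal state from its outputs.",
)
print(metrics)
```

In practice you would log metrics like these over time and alert on drift, rather than inspecting individual interactions.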

Sage Elliott is a Technical Evangelist – Machine Learning & MLOps at WhyLabs. He enjoys breaking down the barrier to AI observability and talking to amazing people in the AI community.

Q&A

  • Could you explain how a reading score is constructed for understanding?
  • When and where is LangKit used in an ML pipeline for LLMs (in production)?
  • What is data sketching?
  • Is this applicable to non-conversational AI?

Resource links

Join the AI, Machine Learning, and Data Science Meetup!

The Meetup’s membership has grown to more than 10,000 members! The goal of the Meetups is to bring together communities of data scientists, machine learning engineers, and open source enthusiasts who want to share and expand their knowledge of AI, machine learning, data science, and complementary technologies.

Join one of the 12 Meetup locations closest to your timezone.

We have exciting speakers already signed up over the next few months! Become a member of the AI, Machine Learning, and Data Science Meetup closest to you, then register for the Zoom.

What’s Next?

Up next on Sept 14 at 10 AM Pacific we have a great lineup of speakers, including:

  • ARMBench: An Object-Centric Benchmark Dataset for Robotic Manipulation – Amazon Robotics team
  • From Model to the Edge, Putting Your Model into Production – Joy Timmermans, Secury360
  • Optimizing Distributed Fine-Tuning Workloads for Stable Diffusion with the Intel Extension for PyTorch on AWS – Eduardo Alvarez, Intel

Register for the Zoom here. You can find a complete schedule of upcoming Meetups on the Voxel51 Events page.

Get Involved!

There are a lot of ways to get involved in the Computer Vision Meetups. Reach out if you identify with any of these:

  • You’d like to speak at an upcoming Meetup
  • You have a physical meeting space in one of the Meetup locations and would like to make it available for a Meetup
  • You’d like to co-organize a Meetup
  • You’d like to co-sponsor a Meetup

Reach out to Meetup co-organizer Jimmy Guerrero on Meetup.com or ping me on LinkedIn to discuss how to get you plugged in.


The Computer Vision Meetup network is sponsored by Voxel51, the company behind the open source FiftyOne computer vision toolset. FiftyOne enables data science teams to improve the performance of their computer vision models by helping them curate high-quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster. It’s easy to get started in just a few minutes.