September AI, Machine Learning & Data Science Meetup


When

September 7, 2023 – 10:00 AM PDT (1:00 PM EDT)

Where

Virtual / Zoom

Agenda

  • Monitoring Large Language Models (LLMs) in Production – Sage Elliott, Technical Evangelist – Machine Learning & MLOps at WhyLabs
  • Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos – Minye Wu, Postdoctoral Researcher, KU Leuven
  • EgoSchema: A Dataset for Truly Long-Form Video Understanding – Karttikeya Mangalam, PhD student at UC Berkeley

 

Monitoring Large Language Models (LLMs) in Production

Just like with all machine learning models, once you put an LLM in production you’ll probably want to keep an eye on how it’s performing. Observing key language metrics about user interactions and responses can help you craft better prompt templates and guardrails for your applications. This talk looks at what you might want to monitor once you deploy your LLMs.
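As a rough illustration of the kind of language metrics the abstract mentions, here is a toy sketch that computes a few simple statistics over a prompt/response pair. The metric names and the refusal heuristic are illustrative assumptions, not the WhyLabs API; production setups would use a dedicated observability library.

```python
# Toy sketch of per-response language metrics (illustrative only; not the
# WhyLabs/whylogs API). Real monitoring would aggregate these over time.

def response_metrics(prompt: str, response: str) -> dict:
    tokens = response.split()
    return {
        "prompt_length": len(prompt.split()),
        "response_length": len(tokens),
        # Type-token ratio: a crude proxy for repetitiveness.
        "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),
        # Flag likely refusals so prompt/guardrail changes can be tracked.
        "looks_like_refusal": response.lower().startswith(
            ("i can't", "i cannot", "sorry")
        ),
    }

m = response_metrics("Summarize this article.", "Sorry, I cannot help with that.")
print(m["looks_like_refusal"])  # True
print(m["response_length"])     # 6
```

Tracking distributions of metrics like these across deployments is one way to notice drift in user behavior or model output quality.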

Speaker: Sage Elliott is a Technical Evangelist – Machine Learning & MLOps at WhyLabs. He enjoys breaking down the barrier to AI observability and talking to amazing people in the AI community.

 

Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos

The success of Neural Radiance Fields (NeRFs) for modeling and free-view rendering of static objects has inspired numerous attempts to extend them to dynamic scenes. Current techniques that use neural rendering for free-viewpoint videos (FVVs) are restricted to offline rendering, or can process only brief sequences with minimal motion. In this paper, we present a novel technique, the Residual Radiance Field (ReRF), a highly compact neural representation that achieves real-time FVV rendering of long-duration dynamic scenes.
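The core compression idea behind a residual representation can be sketched in a few lines. This is a deliberately simplified toy, not the ReRF method itself: it just shows how storing a keyframe plus a sparse per-frame residual is far more compact than storing a dense grid for every frame.

```python
import numpy as np

# Toy illustration of the residual idea (NOT the actual ReRF pipeline):
# store a dense keyframe grid once, then only the sparse differences
# for subsequent frames of a mostly-static scene.

rng = np.random.default_rng(0)
keyframe = rng.standard_normal((32, 32, 32))   # dense base feature grid
frame = keyframe.copy()
frame[10:12, 10:12, 10:12] += 0.5              # small localized change

residual = frame - keyframe
mask = np.abs(residual) > 1e-6                 # only the changed voxels
sparse = (np.argwhere(mask), residual[mask])   # compact per-frame storage

# Reconstruction: base grid plus scattered residuals.
recon = keyframe.copy()
idx, vals = sparse
recon[tuple(idx.T)] += vals
print(np.allclose(recon, frame))               # True
print(mask.sum(), "of", frame.size, "voxels stored")  # 8 of 32768
```

The actual paper learns compact neural residual fields rather than raw voxel differences, but the storage intuition is the same.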

Speaker: Minye Wu – Postdoctoral researcher, KU Leuven

 

EgoSchema: A Dataset for Truly Long-Form Video Understanding

Introducing EgoSchema, a very long-form video question-answering dataset and benchmark for evaluating the long-video understanding capabilities of modern vision and language systems. Derived from Ego4D, EgoSchema consists of over 5,000 human-curated multiple-choice question-answer pairs spanning over 250 hours of real video data, covering a very broad range of natural human activity and behavior.
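Benchmarks like this are typically scored by multiple-choice accuracy. The sketch below shows that evaluation loop; the field names and the example items are hypothetical, not the official EgoSchema schema.

```python
# Hypothetical sketch of scoring a model on multiple-choice QA pairs
# (field names are illustrative, not the official EgoSchema format).

def accuracy(examples, predict):
    """Fraction of examples where the predicted option index is correct."""
    correct = sum(predict(ex) == ex["answer_idx"] for ex in examples)
    return correct / len(examples)

examples = [
    {"question": "What is the person assembling?",
     "options": ["a shelf", "a bike", "a tent", "a chair", "a desk"],
     "answer_idx": 1},
    {"question": "What happens after cooking?",
     "options": ["eating", "cleaning", "leaving", "shopping", "reading"],
     "answer_idx": 0},
]

# A trivial baseline that always picks the first option.
always_first = lambda ex: 0
print(accuracy(examples, always_first))  # 0.5
```

With five options per question, random guessing lands near 20% accuracy, which is why long-video benchmarks report how far models rise above that floor.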

Speaker: Karttikeya Mangalam is a PhD student in the Department of Electrical Engineering & Computer Sciences (EECS) at the University of California, Berkeley, advised by Prof. Jitendra Malik. Earlier, he held a visiting researcher position at Meta AI, where he collaborated with Dr. Christoph Feichtenhofer and his team.

 

Don’t Forget

  • Voxel51 will make a donation on behalf of the Meetup members to the charity that gets the most votes this month.
  • Can’t make the date and time? No problem! Just make sure to register here so we can send you links to the playbacks.

Register now to receive your invite link

By submitting you (1) agree to Voxel51’s Terms of Service and Privacy Statement and (2) agree to receive occasional emails.

Find a Meetup Near You

Join 9,400+ AI/ML enthusiasts who have already become members

The goal of the AI, Machine Learning, and Data Science Meetup network is to bring together a community of data scientists, machine learning engineers, and open source enthusiasts who want to share and expand their knowledge of AI and complementary technologies. If that’s you, we invite you to join the Meetup closest to your timezone: