Recapping the AI, Machine Learning and Data Science Meetup — April 18, 2024

We just wrapped up the April ‘24 AI, Machine Learning and Data Science Meetup, and if you missed it or want to revisit it, here’s a recap! In this blog post you’ll find the playback recordings, highlights from the presentations and Q&A, as well as the upcoming Meetup schedule so that you can join us at a future event.

First, Thanks for Voting for Your Favorite Charity!

In lieu of swag, we gave Meetup attendees the opportunity to help guide a $200 donation to charitable causes. The charity that received the highest number of votes this month was Oceana, which is focused on ocean conservation — protecting and restoring marine life and the world’s abundant and biodiverse oceans. We are sending this event’s charitable donation of $200 to Oceana on behalf of the Meetup members!

Missed the Meetup? No problem. Here are playbacks and talk abstracts from the event.

Towards Resource Efficient Robust Text-to-Image Generative Models

Text-to-image (T2I) diffusion models (such as Stable Diffusion XL, DALL-E 3, etc.) achieve state-of-the-art (SOTA) performance on various compositional T2I benchmarks, at the cost of significant computational resources. For instance, the unCLIP (i.e., DALL-E 2) stack comprises a T2I prior and a diffusion image decoder. The T2I prior model alone adds a billion parameters, increasing the computational and high-quality data requirements. To combat these issues, Maitreya proposes ECLIPSE, a novel contrastive learning method that is both parameter- and data-efficient.
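To give a flavor of the contrastive learning at the heart of the talk, here is a minimal sketch of an InfoNCE-style contrastive loss over anchor, positive, and negative embeddings (the actual ECLIPSE objective and architecture differ; this illustrates only the general principle of pulling a positive pair together while pushing negatives apart):

```python
import math

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the positive toward the anchor and
    push the negatives away, measured by cosine similarity."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    # Similarity of the anchor to the positive and to each negative
    logits = [cos(anchor, positive) / temperature] + [
        cos(anchor, n) / temperature for n in negatives
    ]
    # Cross-entropy with the positive as the "correct class"
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

A well-trained embedding space drives this loss toward zero: when the positive is close to the anchor and the negatives are far, the loss is small; when a negative sits closer than the positive, it grows.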

Speaker: Maitreya Patel is a PhD student at Arizona State University focusing on model performance and efficiency. Whether in model training or inference, Maitreya works on optimizations that make AI more accessible and powerful.


  • Has focusing on Projection Priors proven beneficial when building task-specific generative models?
  • Does this approach replace the concept of LoRAs?
  • Prior to training the ECLIPSE model, does it rely upon the training of the prior model?
  • If the example was video creation, how might this work?
  • How modular is it? Could we swap out the CLIP?
  • What does it mean to have your own dataset with the “anchor”, “positive” and “negative” images?

Resource links

GraphRAG with a Knowledge Graph

Knowledge Graphs place information in context using graph structures to express local and global semantics. When used in a RAG context, particular access patterns emerge that map natural language to different graph data patterns. We’ll review both the model and the matching code.
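As a toy illustration of the retrieval side of GraphRAG (the talk uses Neo4j and Cypher; this sketch uses a hypothetical in-memory triple store to show the idea of mapping a natural-language question to graph patterns):

```python
# Toy knowledge graph as (subject, relation, object) triples.
TRIPLES = [
    ("Neo4j", "is_a", "graph database"),
    ("Neo4j", "supports", "Cypher"),
    ("Cypher", "is_a", "query language"),
    ("GraphRAG", "combines", "Knowledge Graphs"),
    ("GraphRAG", "combines", "retrieval-augmented generation"),
]

def retrieve_context(question, triples=TRIPLES):
    """Match entities mentioned in the question, then pull every
    triple touching those entities as textual context for the prompt."""
    q = question.lower()
    entities = {s for s, _, _ in triples} | {o for _, _, o in triples}
    hits = {e for e in entities if e.lower() in q}
    return [f"{s} {r.replace('_', ' ')} {o}"
            for s, r, o in triples if s in hits or o in hits]
```

In a real GraphRAG pipeline this string matching is replaced by entity linking plus Cypher queries (or vector search over node embeddings), and the retrieved facts are fed into the LLM prompt as grounded context.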

Speaker: Andreas Kollegger is a founding member of Neo4j, now responsible for researching the use of Knowledge Graphs for GenAI applications.


  • How do you manage graph updates?
  • How is security managed?
  • Should semantic chunks, or the size of chunks, be given per node in Neo4j?
  • How do you do the index search or pattern matching after you get the required chunks using distance-based similarity search?
  • Do you put the embeddings in the nodes that are chunked or do they stay separate?

Resource links

Optimizing Training Data with the Voxel51 and V7 Darwin Integration

One of the most expensive parts of a machine learning project is obtaining high-quality training data. In this talk, Mark discusses how the integration between Voxel51 and the V7 Darwin platform can help you optimize the subset of data to be labeled, with the goal of reducing costs while maintaining quality.
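One common way to pick a labeling subset (not necessarily the method shown in the talk) is greedy farthest-point sampling over embeddings, so the labeling budget covers diverse samples rather than near-duplicates. A minimal sketch:

```python
def farthest_point_subset(embeddings, k):
    """Greedily pick k maximally spread-out samples: each new pick is
    the point farthest from everything already chosen."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    chosen = [0]  # seed with the first sample
    while len(chosen) < k:
        best = max(
            (i for i in range(len(embeddings)) if i not in chosen),
            key=lambda i: min(dist(embeddings[i], embeddings[j])
                              for j in chosen),
        )
        chosen.append(best)
    return chosen
```

The returned indices would then be the samples sent to a labeling platform such as V7 Darwin, while redundant near-duplicates stay unlabeled.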

Speaker: Mark Cox-Smith is a Principal Solutions Architect at V7 where he helps customers to connect their labeling workflows into their MLOps stack.

Resource links

Exploring Multimodal Models: LLaVA-NeXT and the TextQA Dataset

In this session, you’ll get hands-on with the newest LLaVA model, LLaVA-NeXT! You’ll learn how to use FiftyOne to visually vibe check the performance of both the Vicuna-7B and Mistral-7B backbone models on the TextQA dataset.
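Alongside the visual vibe check in FiftyOne, a quick quantitative complement is a normalized exact-match rate between each backbone's answers and the ground truth. A minimal sketch, with hypothetical predictions (the talk's actual workflow runs the models and browses results in the FiftyOne App):

```python
def exact_match_rate(predictions, ground_truth):
    """Fraction of questions where the model's answer matches the
    reference, after stripping whitespace and lowercasing."""
    hits = sum(
        p.strip().lower() == g.strip().lower()
        for p, g in zip(predictions, ground_truth)
    )
    return hits / len(ground_truth)

# Hypothetical answers from two backbones on the same questions
truth   = ["stop", "coca-cola", "exit"]
vicuna  = ["Stop", "Pepsi", "Exit"]
mistral = ["stop ", "Coca-Cola", "enter"]
```

Comparing the two rates side by side helps decide which backbone to inspect more closely in the App.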

Speaker: Harpreet Sahota is a hacker-in-residence and machine learning engineer with a passion for deep learning and generative AI. He’s got a deep interest in RAG, Agents, and Multimodal AI.

Resource links

Join the AI, Machine Learning and Data Science Meetup!

The combined membership of the Computer Vision and AI, Machine Learning and Data Science Meetups has grown to over 20,000 members! The goal of the Meetups is to bring together communities of data scientists, machine learning engineers, and open source enthusiasts who want to share and expand their knowledge of AI and complementary technologies. 

Join one of the 12 Meetup locations closest to your timezone.

What’s Next?

Up next, on May 2 at 10 AM (India time), we have two great speakers lined up!

  • Who Needs RLHF When You Have SFT? — wait, keep the list style: Srishti Gureja – Georgia Institute of Technology & Writesonic
  • Develop a Legal Search Application from Scratch using Milvus and DSPy! Mert Bozkir – LLM Engineer

Register for the Zoom here. You can find a complete schedule of upcoming Meetups on the Voxel51 Events page.

Get Involved!

There are a lot of ways to get involved in the Computer Vision Meetups. Reach out if you identify with any of these:

  • You’d like to speak at an upcoming Meetup
  • You have a physical meeting space in one of the Meetup locations and would like to make it available for a Meetup
  • You’d like to co-organize a Meetup
  • You’d like to co-sponsor a Meetup

Reach out to Meetup co-organizer Jimmy Guerrero over LinkedIn to discuss how to get you plugged in.

These Meetups are sponsored by Voxel51, the company behind the open source FiftyOne computer vision toolset. FiftyOne enables data science teams to improve the performance of their computer vision models by helping them curate high-quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster. It’s easy to get started in just a few minutes.