SF AI, Machine Learning and Computer Vision Meetup
Nov 20, 2024 | 5:30 to 8:00 PM PT
Register for the event at GitHub's offices in San Francisco. RSVPs are limited!
Date and Time
Nov 20, 2024 from 5:30 PM to 8:00 PM Pacific
Location
The Meetup will take place at GitHub’s offices in San Francisco. Note that pre-registration is mandatory.
88 Colin P Kelly Jr St, San Francisco, CA 94107
Why Speed Matters in Compound AI Systems: Making Models Go Vroom at Fireworks
Mikiko Bazeley
Fireworks.ai
In this talk, we will analyze the critical role of speed in compound AI systems and how it impacts overall performance and user experience. Attendees will learn about some of the strategies we’ve implemented at Fireworks to optimize model efficiency, reduce latency, and enhance responsiveness.
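As a rough, client-side illustration of the kind of latency the talk is concerned with, the sketch below times a streamed chat completion and reports time-to-first-token and total response time. It assumes an OpenAI-compatible endpoint; the base URL, model name, and API key are illustrative placeholders, not details from the talk.

```python
import time

from openai import OpenAI  # pip install openai

# Assumption: an OpenAI-compatible inference endpoint; the base URL,
# model name, and API key below are illustrative placeholders.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_API_KEY",
)

start = time.perf_counter()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",
    messages=[{"role": "user", "content": "Describe a compound AI system in one sentence."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # time to first token
        chunks += 1

total = time.perf_counter() - start
if first_token_at is not None:
    print(f"time to first token: {first_token_at - start:.3f}s")
print(f"total latency: {total:.3f}s ({chunks} streamed chunks)")
```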
About the Speaker
Mikiko Bazeley is a Developer Relations Engineer at Fireworks.ai, specializing in MLOps and data science. With a passion for building high-performance AI systems, she collaborates with leading companies to drive innovation in generative AI. Mikiko loves empowering developers through hands-on workshops and engaging content, making complex AI concepts accessible and exciting.
DIY LLMs
Charles Frye
Modal Labs
In this talk, Charles will give a guided tour through the components of a self-hosted LLM service, from hardware considerations to engineering tools like ‘evals,’ all the way to the application layer. He’ll consider the open-weights models, open source software, and infrastructure that power LLM applications, and he will heavily shill the open source vLLM project.
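As a taste of the “DIY” part, here is a minimal offline-inference sketch using vLLM with an open-weights model. The model name is an illustrative assumption (not one prescribed by the talk), and a CUDA-capable GPU is assumed.

```python
# Minimal self-hosted inference sketch with vLLM (pip install vllm).
# Model choice is illustrative; a CUDA-capable GPU is assumed.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain what an eval suite is for an LLM service."], params)
for out in outputs:
    print(out.outputs[0].text)
```

For a network-facing service, vLLM also ships an OpenAI-compatible HTTP server, which is closer to what a production deployment looks like.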
About the Speaker
Charles Frye builds applications of neural networks at Modal. He got his PhD at Berkeley for work on neural network optimization. He previously worked at Weights & Biases and Full Stack Deep Learning.
From C Student to C-Suite without C++: AI Engineering for Entrepreneurs
Christos Magganas
AI Makerspace and Toro
Building an 8-microservice LLM infrastructure with a team of three taught me that successful AI-assisted development isn’t about complex prompting – it’s about clear specifications and solid engineering practices. I’ll share how we use Cursor’s AI capabilities effectively: writing descriptive comments, maintaining clean architecture, and knowing when to break down complex tasks. By combining these fundamentals with well-documented frameworks like FastAPI and Pydantic, we’ve achieved 8-10x faster development while keeping code reliable. Learn from our experiences to separate what actually works from the AI hype.
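For readers unfamiliar with the frameworks mentioned above, here is a minimal sketch of the “clear specifications” idea: a typed FastAPI microservice whose request and response contracts are Pydantic models. The service name, endpoint, and fields are hypothetical, not taken from Toro’s codebase.

```python
# A minimal sketch: typed request/response contracts as Pydantic models
# on a FastAPI endpoint. Names below are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="summarizer-service")

class SummarizeRequest(BaseModel):
    text: str
    max_sentences: int = 3

class SummarizeResponse(BaseModel):
    summary: str

@app.post("/summarize", response_model=SummarizeResponse)
def summarize(req: SummarizeRequest) -> SummarizeResponse:
    # Placeholder logic; a real service would call an LLM backend here.
    sentences = req.text.split(".")[: req.max_sentences]
    return SummarizeResponse(summary=". ".join(s.strip() for s in sentences if s.strip()))
```

Saved as app.py, it runs locally with, for example, `uvicorn app:app --reload`.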
About the Speaker
Christos Magganas is an AI Engineer at AI Makerspace and a Founding Engineer at Toro, specializing in Python and AI/ML Ops. A firm believer in the collaborative spirit of innovation, he dedicates his time to community building and mentorship within the tech space. Through volunteering and consulting, Christos empowers others to explore the potential of AI/ML and contribute to solving real-world problems. He is driven by a vision of a future where technology serves humanity.
Find a Meetup Near You
Join the AI and ML enthusiasts who have already become members
The goal of the AI, Machine Learning, and Computer Vision Meetup network is to bring together a community of data scientists, machine learning engineers, and open source enthusiasts who want to share and expand their knowledge of AI and complementary technologies. If that’s you, we invite you to join the Meetup closest to your time zone.