
AI, Machine Learning and Computer Vision Meetup

Oct 24, 2024 at 2 PM BST (UTC +1)

Register for the Zoom


Accelerating Machine Learning Research and Development for Autonomy

Guillaume Rochette
Oxa (MetaDriver)

At Oxa (Autonomous Vehicle Software), we designed an automated workflow for building machine vision models at scale, from data collection to in-vehicle deployment. The workflow involves a number of steps, such as intelligent route planning to maximise visual diversity; sampling of the sensor data with respect to visual and semantic uniqueness; language-driven automated annotation tools and a multi-modal search engine; and sensor data expansion using generative methods.
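One of these steps, sampling sensor data for visual uniqueness, can be illustrated with a short sketch: greedily keep a frame only if its image embedding is far enough from everything already kept. The embedding source, distance metric, and threshold below are illustrative assumptions, not Oxa's actual pipeline.

```python
# Greedy uniqueness-based frame sampling: keep a frame only if its embedding
# is sufficiently far from every frame already kept.
# The embedding model, metric, and threshold are illustrative, not Oxa's pipeline.
import numpy as np

def sample_unique_frames(embeddings: np.ndarray, min_distance: float = 0.15) -> list[int]:
    """Return indices of frames whose (L2-normalised) embeddings are at least
    `min_distance` in cosine distance from every frame kept so far."""
    embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept: list[int] = []
    for i, emb in enumerate(embeddings):
        if not kept:
            kept.append(i)
            continue
        # Cosine distance to the closest already-kept frame.
        nearest = 1.0 - np.max(embeddings[kept] @ emb)
        if nearest >= min_distance:
            kept.append(i)
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_embeddings = rng.normal(size=(1000, 512))  # stand-in for CLIP-style image embeddings
    print(f"kept {len(sample_unique_frames(fake_embeddings))} of 1000 frames")
```

In practice the same idea is usually run over embeddings from a pretrained vision or vision-language model, so "unique" can mean visually or semantically distinct depending on the encoder used.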

About the Speaker

Guillaume Rochette is a Staff Engineer working on Oxa MetaDriver, a suite of tools that combines generative AI, digital twins, and simulation to accelerate machine learning and testing of self-driving technology before and during real-world use. Prior to that, he earned a PhD in Machine Vision at the University of Surrey with a thesis on “Pose Estimation and Novel View Synthesis of Humans”. He currently works on Machine Vision and 3D Geometric Understanding for autonomous driving.

Pixels Are All You Need: Utilizing 2D Image Representations in Applied Robotics

Brent Griffin
Voxel51

Many vision-based robot control applications (like those in manufacturing) require 3D estimates of task-relevant objects, which can be realized by training a direct 3D object detection model. However, obtaining 3D annotation for a specific application is expensive relative to 2D object representations like segmentation masks or bounding boxes.

In this talk, Brent will describe how we achieve mobile robot manipulation using inexpensive pixel-based object representations combined with known 3D environmental constraints and robot kinematics. He will also discuss how recent Visual AI developments show promise to further reduce the cost of 2D training data, thereby increasing the practicality of pixel-based object representations in robotics.
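The general idea can be sketched with a little camera geometry: given a 2D detection and a known environmental constraint, such as a ground plane a fixed height below the camera, the pinhole model recovers a 3D position without any 3D annotation. The intrinsics, plane assumption, and helper function below are illustrative, not Brent's exact method.

```python
# Recover a 3D point from a 2D detection using a known environmental constraint:
# the object is assumed to rest on a ground plane a fixed height below the camera.
# Intrinsics and geometry here are illustrative, not from the talk.
import numpy as np

def pixel_to_ground_point(u: float, v: float, K: np.ndarray, cam_height: float) -> np.ndarray:
    """Intersect the camera ray through pixel (u, v) with the plane y = cam_height
    (camera frame: x right, y down, z forward). Returns the 3D point in metres."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-projected ray direction
    if ray[1] <= 1e-6:
        raise ValueError("Ray does not intersect the ground plane in front of the camera.")
    scale = cam_height / ray[1]                      # stretch the ray until it hits the plane
    return scale * ray

if __name__ == "__main__":
    K = np.array([[600.0,   0.0, 320.0],   # fx,  0, cx
                  [  0.0, 600.0, 240.0],   #  0, fy, cy
                  [  0.0,   0.0,   1.0]])
    # Bottom-centre of a bounding box, i.e. where the object meets the floor.
    point = pixel_to_ground_point(u=400.0, v=300.0, K=K, cam_height=1.2)
    print(point)  # [x, y, z] in the camera frame
```

Combined with the robot's kinematics (the camera pose in the robot frame), a cheap 2D mask or box is then enough to place the object in the workspace.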

About the Speaker

Brent Griffin, PhD, is a Principal Machine Learning Scientist at Voxel51. Previously, he was the Perception Lead at Agility Robotics and an assistant research scientist at the University of Michigan, conducting research at the intersection of computer vision, control, and robot learning. He is lead author on publications in all of the top IEEE conferences for computer vision, robotics, and control, and his work has been featured in Popular Science, in IEEE Spectrum, and on the Big Ten Network.

PostgreSQL for Innovative Vector Search

Steve Pousty
Voxel51

There is a plethora of datastores that can work with vector embeddings, but you are probably already running one that allows for innovative uses of your data alongside those embeddings: PostgreSQL! This talk shows examples of how features already present in the PostgreSQL ecosystem let you tackle cutting-edge use cases, with live demos and lively discussion throughout. You will go home with the foundation to do more impressive vector similarity searches.
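As a taste of what this looks like in practice, here is a minimal sketch that mixes ordinary SQL predicates with a pgvector similarity search from Python; the schema, connection string, and query embedding are placeholder assumptions rather than the talk's actual demos.

```python
# A minimal sketch of combining plain relational filters with a pgvector
# similarity search in PostgreSQL. The schema, table, and connection details
# are illustrative assumptions, not necessarily the demos from the talk.
import psycopg

QUERY_EMBEDDING = "[0.1, 0.2, 0.3]"  # stand-in for a real image/text embedding

with psycopg.connect("dbname=demo user=postgres") as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS images (
            id bigserial PRIMARY KEY,
            label text,
            captured_at timestamptz,
            embedding vector(3)          -- real embeddings are 512+ dimensions
        )
        """
    )
    # Vector similarity (<-> is pgvector's L2 distance operator) combined with
    # ordinary SQL predicates: the point of keeping embeddings in PostgreSQL.
    rows = conn.execute(
        """
        SELECT id, label
        FROM images
        WHERE label = %s AND captured_at > now() - interval '30 days'
        ORDER BY embedding <-> %s::vector
        LIMIT 5
        """,
        ("pedestrian", QUERY_EMBEDDING),
    ).fetchall()
    print(rows)
```

With an ivfflat or HNSW index on the embedding column, the same query pattern scales to much larger tables.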

About the Speaker

Steve Pousty is a dad, partner, son, founder, and principal developer advocate at Voxel51. He can teach you about Computer Vision, Data Analysis, Java, Python, PostgreSQL, Microservices, and Kubernetes. He has deep expertise in GIS/Spatial, Remote Sensing, Statistics, and Ecology. Steve has a Ph.D. in Ecology and can be bribed with offers of bird watching or fly fishing.