
AI, ML and Computer Vision Meetup – March 5, 2026
Mar 5, 2026
9–11 AM Pacific
Online. Register for the Zoom!
About this event
Join our virtual meetup to hear talks from experts on cutting-edge topics across AI, ML, and computer vision.
Schedule
MOSPA: Human Motion Generation Driven by Spatial Audio
Enabling virtual humans to dynamically and realistically respond to diverse auditory stimuli remains a key challenge in character animation. This problem demands the tight integration of perceptual modeling and motion synthesis, yet despite its importance, it remains largely unexplored.
Most prior work has focused on mapping modalities such as speech, audio, or music to generate human motion. However, these approaches typically overlook the role of spatial features encoded in spatial audio signals, and how those features influence human movement.
To bridge this gap and enable high-quality modeling of human motion in response to spatial audio, we will introduce the first comprehensive Spatial Audio-Driven Human Motion (SAM) dataset. SAM contains diverse, high-quality spatial audio paired with corresponding human motion data.
For benchmarking, we will develop a simple yet effective diffusion-based generative framework for human motion generation driven by spatial audio, termed MOSPA. MOSPA faithfully captures the relationship between body motion and spatial audio through an effective multimodal fusion mechanism. Once trained, the model can generate diverse and realistic human motions conditioned on varying spatial audio inputs.
Finally, we will conduct a thorough investigation of the proposed dataset and perform extensive benchmarking experiments. Our approach achieves state-of-the-art performance on this task, demonstrating the effectiveness of both the dataset and the proposed framework.
Securing the Autonomous Future: Navigating the Intersection of Agentic AI, Connected Devices, and Cyber Resilience
With billions of connected devices in our infrastructure, and AI systems increasingly acting as autonomous agents, we face a very real question: how can we create intelligent systems that are both secure and trusted? This talk will explore the intersection of agentic AI and IoT, and demonstrate how the same AI systems that expand our attack surface can also provide robust defense mechanisms. At its core, however, this is a challenge of trusting people with technology, ensuring their safety, and providing accountability. It requires a new way of thinking, one in which security is built in, autonomous action has oversight, and, ultimately, innovation leads to greater human well-being.
Transforming Business with Agentic AI
Agentic AI is reshaping business operations by employing autonomous systems that learn, adapt, and optimize processes independently of human input. This session examines the essential differences between traditional AI agents and Agentic AI, emphasizing their significance for project professionals overseeing digital transformation initiatives. Real-world examples from eCommerce, insurance, and healthcare illustrate how autonomous AI achieves measurable outcomes across industries. The session addresses practical orchestration patterns in which specialized AI agents collaborate to resolve complex business challenges and enhance operational efficiency. Attendees will receive a practical framework for identifying high-impact use cases, developing infrastructure, establishing governance, and scaling Agentic AI within their organizations.
Plugins as Products: Bringing Visual AI Research into Real-World Workflows with FiftyOne
Visual AI research often introduces new datasets, models, and analysis methods, but integrating these advances into everyday workflows can be challenging. FiftyOne is a data-centric platform designed to help teams explore, evaluate, and improve visual AI, and its plugin ecosystem is how the platform scales beyond the core. In this talk, we explore the FiftyOne plugin ecosystem from both perspectives: how users apply plugins to accelerate data-centric workflows, and how researchers and engineers can package their work as plugins to make it easier to share, reproduce, and build upon. Through practical examples, we show how plugins turn research artifacts into reusable components that integrate naturally into real-world visual AI workflows.