Virtual
Americas
Launch Event
Building Feedback-Driven Annotation Pipelines for End-to-End ML Workflows – February 18, 2026
Feb 18, 2026
10–11 AM PST
Online. Register for the Zoom!
About this event
One of the most critical yet overlooked bottlenecks in visual AI development is the absence of systematic annotation workflows for data selection, error detection, and quality validation across 2D and 3D tasks. Without these workflows, teams face heavy coordination overhead across annotation services, domain experts, and tools, and the resulting delays compound at every ML stage, from curation to model evaluation.
In this technical workshop, we’ll show how to build a feedback-driven annotation pipeline for perception models using FiftyOne. We’ll dig into real model failures and data gaps, turn them into focused annotation tasks, and route those tasks through a repeatable workflow for labeling and QA. The result is an end-to-end pipeline that keeps annotators, tools, and models aligned, closing the loop from annotation and curation back to model training and evaluation.
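For a concrete flavor of that loop, here is a minimal sketch of one iteration using FiftyOne’s evaluation, brain, and annotation APIs. The dataset name, the "predictions" and "ground_truth" field names, and the CVAT backend are illustrative assumptions, not the workshop’s exact setup.

    import fiftyone as fo
    import fiftyone.brain as fob

    # Load a dataset that already has model predictions and labels
    # ("perception-dataset", "predictions", "ground_truth" are assumed names)
    dataset = fo.load_dataset("perception-dataset")

    # Evaluate predictions against current labels to expose failure modes
    dataset.evaluate_detections(
        "predictions", gt_field="ground_truth", eval_key="eval"
    )

    # Estimate which ground-truth labels are most likely wrong
    fob.compute_mistakenness(dataset, "predictions", label_field="ground_truth")

    # Turn the worst offenders into a focused annotation task
    suspects = dataset.sort_by("mistakenness", reverse=True).limit(200)
    suspects.annotate(
        "fix_suspect_labels",   # anno key for this relabeling run
        backend="cvat",         # any supported annotation backend works here
        label_field="ground_truth",
    )

    # ...after labeling and QA, pull the corrections back and re-evaluate
    dataset.load_annotations("fix_suspect_labels")

Each iteration narrows the labeling queue to the samples most likely to change model behavior, rather than sending everything out for bulk relabeling.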
What you’ll learn:

  • Label the data that matters most, cutting annotation time and cost
  • Structure human-in-the-loop workflows that find and fix model errors and data gaps through targeted relabeling instead of bulk labeling
  • Combine auto-labeling and human review in a single, feedback-driven pipeline for perception models
  • Use label schemas and metadata as “data contracts” that enforce consistency between annotators, models, and tools, especially for multimodal data (see the first sketch after this list)
  • Detect and manage schema drift, and tie schema versions to dataset and model versions for reproducibility (see the second sketch after this list)
  • Build QA and review steps that surface label issues early and tie label changes back to model behavior
  • Design an annotation architecture that accommodates new perception tasks and feedback signals without rebuilding your entire data stack

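To make the “data contract” idea concrete, here is a hedged sketch of a label schema passed to FiftyOne’s annotation API, which constrains every run to the same label type, classes, and attributes. The class list, the occluded attribute, and the anno key are invented for illustration.

    import fiftyone as fo

    dataset = fo.load_dataset("perception-dataset")  # assumed dataset name

    # The schema is the contract: annotators, models, and tools all
    # agree on one label type, class list, and attribute vocabulary
    label_schema = {
        "ground_truth": {
            "type": "detections",
            "classes": ["car", "pedestrian", "cyclist"],
            "attributes": {
                "occluded": {
                    "type": "radio",
                    "values": [True, False],
                    "default": False,
                },
            },
        },
    }

    # Every run launched with this schema is constrained to it, so
    # labels stay consistent across batches, vendors, and tools
    dataset.annotate(
        "contract_run",
        backend="cvat",
        label_schema=label_schema,
    )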
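And a minimal drift check under the same assumptions: compare the classes that actually appear in the dataset against the declared contract, then pin the schema version next to the data. The EXPECTED_CLASSES set and the version string are hypothetical.

    import fiftyone as fo

    dataset = fo.load_dataset("perception-dataset")  # assumed dataset name

    # Classes the current schema version allows (hypothetical contract)
    EXPECTED_CLASSES = {"car", "pedestrian", "cyclist"}

    # Classes actually present in the labels right now
    observed = set(dataset.distinct("ground_truth.detections.label"))

    # Any surplus class is schema drift: a new or renamed class slipped
    # in from some annotation batch or auto-labeling run
    drifted = observed - EXPECTED_CLASSES
    if drifted:
        print(f"Schema drift detected: {sorted(drifted)}")

    # Record the schema version alongside the dataset so any model
    # trained on it can be traced back to exact label semantics
    dataset.info["label_schema_version"] = "v2"
    dataset.save()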
Who should attend:

ML and perception engineers, data scientists, AV/ADAS and robotics practitioners, computer vision researchers, data platform and MLOps engineers, and technical leads responsible for labeling, developing multimodal datasets and models, or maintaining consistent label semantics across projects and tools.