Introducing FiftyOne Enterprise Workflows to Accelerate Visual AI Innovation
March 25, 2025 – Written by Brian Moore

I’m excited to announce the general availability of three powerful visual AI workflows in FiftyOne Enterprise—a major step forward in helping organizations streamline and scale data curation and model analysis processes with greater efficiency and precision.
With the release of FiftyOne Enterprise 2.7.0, we are introducing powerful built-in workflows that simplify the labor-intensive process of curating high-quality data and analyzing model performance during the data preparation and model evaluation phases of visual AI development:
- Data Quality: a powerful workflow that identifies common quality issues in your dataset, such as near and exact duplicates, blurry or overly bright samples, unusual aspect ratios, and abnormal entropy, to mitigate the downstream impact of poor-quality data on AI model performance.
- Data Lens: streamlines data curation by unlocking direct and fast access to billions of data samples stored in your data lake to query, preview, and select interesting samples to import directly into FiftyOne.
- Model Evaluation: an out-of-the-box experience for users to quickly and intuitively understand model strengths and weaknesses down to the sample level by analyzing model metrics and comparing results across models.
This release also includes several other performance enhancements and updates. Check out the full rundown in the release notes.
You can benefit from these workflows yourself by upgrading to the latest version of FiftyOne Enterprise. And if you’ve never experienced the power of FiftyOne Enterprise, we’d love to show you a demo!
New Data and Model Workflows
We built FiftyOne Enterprise to help organizations working with production-grade visual AI workloads achieve the scalability and efficiency required to develop dependable visual AI systems. By putting data at the core of development, we’ve helped thousands of organizations eliminate guesswork and unwieldy manual processes to maximize AI model accuracy and development efficiency. Our mission to enable every organization to bring the value of visual AI into their applications has never been stronger, and these built-in workflows further strengthen the experience.
Data Quality
Model failures due to poor-quality data are pervasive, and addressing these issues upfront can save projects significant time and effort. However, uncovering bad-quality samples and identifying the ones to fix isn’t trivial, especially with continually evolving large-scale datasets. Data Quality tackles these challenges in FiftyOne Enterprise with a systematic and scalable approach. You can now easily compute, identify, and take action against common data quality issues such as:
- Aspect ratio: detect images with unusual aspect ratios
- Blurriness: identify images that lack sharpness or clarity
- Brightness: find images with extreme illumination and saturation
- Entropy: detect unusually simple or complex images, i.e., images containing abnormally low or high amounts of information
- Exact duplicates: identify duplicate images using a hash function
- Near duplicates: leverage vector embeddings to identify near-duplicate samples
With this workflow, you can easily view and filter samples based on adjustable thresholds to achieve your desired quality standard for each metric, and review the results collaboratively with your ML team.
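If you prefer to work programmatically, here is a minimal sketch of the kinds of computations the Data Quality workflow automates, using the open-source FiftyOne and FiftyOne Brain APIs; the `brightness` and `aspect_ratio` field names are illustrative choices for this example:

```python
# Illustrative sketch only: approximates the metrics the Data Quality workflow
# surfaces, using the open-source FiftyOne and FiftyOne Brain APIs
import fiftyone as fo
import fiftyone.brain as fob
import fiftyone.zoo as foz
from fiftyone import ViewField as F
from PIL import Image, ImageStat

dataset = foz.load_zoo_dataset("quickstart")  # any image dataset works

for sample in dataset.iter_samples(autosave=True, progress=True):
    with Image.open(sample.filepath) as img:
        w, h = img.size
        sample["aspect_ratio"] = w / h
        # Mean grayscale intensity as a simple brightness proxy
        sample["brightness"] = sum(ImageStat.Stat(img.convert("L")).mean)

# Exact duplicates via a hash function; near duplicates via embeddings
exact_dups = fob.compute_exact_duplicates(dataset)
index = fob.compute_near_duplicates(dataset)
print(index.duplicate_ids)  # samples flagged as near duplicates

# Filter to unusually dark samples for review in the App
dark_view = dataset.match(F("brightness") < 40)
session = fo.launch_app(dark_view)
```

In FiftyOne Enterprise, these computations are scheduled and surfaced for you in the App, so your team can review and act on the flagged samples without writing any code.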

Data Lens
A high-quality training dataset is an essential ingredient for accurate and reliable visual AI models. However, building a diverse and well-balanced dataset poses many unique challenges, from understanding how much data to use and how it should be distributed, to determining whether real or synthetically generated data is the best choice.
ML teams often have questions like the following during the data curation and model development process:
- What samples do I need to mitigate my model’s poor performance under specific scenarios (e.g., weather conditions or scenes)?
- My current training dataset feels incomplete—how can I identify and augment the dataset to improve model performance?
Through dozens of customer conversations, we identified a common theme: the typical process of collecting and importing data from external sources, curating and visualizing them, and then selecting the right samples for training dataset creation/augmentation is time-consuming and inefficient. We built Data Lens to bridge this gap, simplify the curation process, and remove bottlenecks like complex dependencies on cross-functional teams when sourcing data.
With Data Lens, FiftyOne Enterprise users can write a simple connector to query billions of data samples from their configured data sources, preview the results in an intuitive visual interface, and then seamlessly import the samples into new or existing FiftyOne datasets. What used to take teams of data engineers multiple days of effort can now be done within seconds, all within the context of an ML engineer’s model development and analysis work!
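The connector API itself is part of FiftyOne Enterprise, so the sketch below only gestures at the idea: `query_data_lake()` is a hypothetical stand-in for whatever query your connector issues against your data source, and the selected samples land in an ordinary FiftyOne dataset via the open-source SDK:

```python
# Hypothetical sketch: query_data_lake() stands in for a Data Lens connector
# query; the dataset name and record fields are illustrative
import fiftyone as fo

def query_data_lake(filters):
    """Placeholder for a connector query against your data lake.

    Imagined to yield records with at least a media filepath, plus any
    metadata your source exposes.
    """
    yield {"filepath": "/mnt/lake/images/0001.jpg", "scene": "rainy_night"}
    yield {"filepath": "/mnt/lake/images/0002.jpg", "scene": "rainy_night"}

dataset = fo.load_dataset("training-data")  # existing dataset to augment

samples = [
    fo.Sample(filepath=rec["filepath"], scene=rec["scene"])
    for rec in query_data_lake({"scene": "rainy_night"})
]
dataset.add_samples(samples)
```

With Data Lens, the query, preview, and import steps above happen directly in the FiftyOne Enterprise App, backed by the connector your team configures once.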

Model Evaluation
Model Evaluation in FiftyOne provides ML engineers the data-centric insights they need to effectively evaluate and improve model performance. When engineers deeply understand a model’s performance in the context of the underlying data it processes, taking the next step to improve performance becomes easier. But more often than not, the lack of visibility into the root causes of underperforming models leaves teams guessing and experimenting.
FiftyOne’s model evaluation workflow was built from the ground up to present information in a way that makes it easier to discover areas of model strengths and weaknesses, data inaccuracies, blind spots, and corner cases.
Kick off model evaluation directly from the FiftyOne App by providing your predictions, ground truth fields, and desired evaluation method. With FiftyOne Enterprise, you can schedule evaluations to run in the background and continue with other work while FiftyOne automatically calculates industry-standard metrics such as precision, recall, F1 score, confusion matrices, and more.
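For those who prefer the SDK, here is a minimal sketch using the open-source evaluation API that powers these metrics, assuming a detection dataset with `ground_truth` and `predictions` fields (the zoo’s quickstart dataset ships with both):

```python
# Minimal sketch of running a COCO-style detection evaluation in FiftyOne
import fiftyone as fo
import fiftyone.zoo as foz
from fiftyone import ViewField as F

dataset = foz.load_zoo_dataset("quickstart")

results = dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
    compute_mAP=True,
)

results.print_report()        # per-class precision, recall, F1
print("mAP:", results.mAP())  # mean average precision

# Drill down to the sample level: view only the false positive predictions
fp_view = dataset.filter_labels("predictions", F("eval") == "fp")
session = fo.launch_app(fp_view)
```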
You can visualize the performance of each metric down to the class level and drill down to understand the best-performing and worst-performing scenarios. Importantly, all evaluation charts in FiftyOne are interactive: simply click on a histogram bar, confusion matrix cell, or numeric table entry and FiftyOne will automatically load the corresponding samples in the grid, allowing you to dive deep to understand and diagnose the model’s performance.
Comparing two model versions is also easy. FiftyOne’s Model Evaluation UI provides a side-by-side view that helps you understand the strengths and weaknesses of each model and the specific samples contributing to those metrics. As you perform your analysis, you can leave notes and collaborate with other members of your team to choose the best model for deployment or further work.
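Continuing the sketch above, a second model’s predictions can be evaluated under their own `eval_key` so that both runs persist on the dataset and can be compared side by side in the Model Evaluation panel or programmatically; the `predictions_v2` field name is hypothetical:

```python
# Assumes a second predictions field, e.g. "predictions_v2" (hypothetical name)
results_v2 = dataset.evaluate_detections(
    "predictions_v2",
    gt_field="ground_truth",
    eval_key="eval_v2",
    compute_mAP=True,
)

print(dataset.list_evaluations())       # e.g. ["eval", "eval_v2"]
print(results.mAP(), results_v2.mAP())  # compare headline metrics

# Previously stored runs can be reloaded at any time
results_v1 = dataset.load_evaluation_results("eval")
```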

What’s Next
We’re confident these new workflows will streamline and scale your visual AI work, making your development and validation faster and more efficient.
Ready to take the next step with FiftyOne? You can:
- See these workflows in action at our upcoming workshop, “Building Visual AI in the Enterprise”
- Not a FiftyOne user yet? Learn more about FiftyOne Enterprise and get a personalized demo from our team of experts
- Join the FiftyOne Community on Discord to ask questions and get involved