Normally, in our weekly FiftyOne tips and tricks blog we recap interesting questions and answers that have recently popped up on Slack, GitHub, Stack Overflow, and Reddit. However, this time we are going to recap the questions that were asked in last week’s “Getting Started with FiftyOne” workshop.
Missed the workshop?
No need to experience FOMO! We’ve got two more upcoming workshops before the end of the year.
FiftyOne Plugins Workshop: Authoring Data-Centric AI Applications
Are you ready to take your computer vision tooling to the next level? Open source FiftyOne is the most flexible computer vision toolkit on the planet. By tapping into its built-in Plugin framework, you can extend your FiftyOne experience and streamline your workflows, building Gradio-like applications with data at their core.
From concept interpolation to image deduplication, optical character recognition, and even curating your own AI art gallery by adding generated images directly into a dataset, your imagination is the only limit. Join us to discover how you can unleash your creativity and interact with data like never before.
The workshop will cover the following topics:
- FiftyOne Plugins – what are they?
- Installing a plugin
- Creating your own Python plugin
- Python plugin tips
- Creating your own JavaScript plugin
- Publishing your plugin
Register for the FiftyOne Plugins workshop on Nov 15.
Getting Started with FiftyOne Workshop
Want greater visibility into the quality of your computer vision datasets and models? Then join Dan Gural, Machine Learning Engineer at Voxel51, for this free 90-minute, hands-on workshop to learn how to leverage the open source FiftyOne computer vision toolset.
In the first part of the workshop we’ll cover:
- FiftyOne Basics (terms, architecture, installation, and general usage)
- An overview of useful workflows to explore, understand, and curate your data
- How FiftyOne represents and semantically slices unstructured computer vision data
The second half will be a hands-on introduction to FiftyOne, where you will learn how to:
- Load datasets from the FiftyOne Dataset Zoo
- Navigate the FiftyOne App
- Programmatically inspect attributes of a dataset
- Add new sample and custom attributes to a dataset
- Generate and evaluate model predictions
- Save insightful views into the data
Register for the Getting Started with FiftyOne workshop on Dec 6.
Working with annotations in FiftyOne
Does data imported into FiftyOne need to be already annotated? Or does FiftyOne help facilitate annotations in some way?
FiftyOne provides a powerful annotation API that makes it easy to add or edit labels on your datasets or specific views into them.
The basic workflow to use the annotation API to add or edit labels on your FiftyOne datasets is as follows:
- Load a labeled or unlabeled dataset into FiftyOne
- Explore the dataset using the App or dataset views to locate either unlabeled samples that you wish to annotate or labeled samples whose annotations you want to edit
- Use the annotate() method on your dataset or view to upload the samples and optionally their existing labels to the annotation backend
- In the annotation tool, perform the necessary annotation work
- Back in FiftyOne, load your dataset and use the load_annotations() method to merge the annotations back into your FiftyOne dataset
- If desired, delete the annotation tasks and the record of the annotation run from your FiftyOne dataset
Additional links to check out regarding annotations:
Working with video datasets in FiftyOne
Does FiftyOne support video datasets?
Yes! Before working with video datasets in FiftyOne, make sure to add FFmpeg to your base FiftyOne install. The FiftyOne App has a built-in visualizer for working with video data.
The video visualizer offers all of the same functionality as the image visualizer, as well as some convenient actions and shortcuts for navigating through a video and its labels. Also, the video visualizer streams frame data on-demand, which means that playback begins as soon as possible and even heavyweight label types like segmentations are supported!
To get started with a sample video dataset, check out the Quickstart Video dataset, which consists of 10 video segments with dense object detections generated by human annotators.
Grouping images and/or videos together
Does FiftyOne have a way of grouping images or videos together? For example, if you have synchronized videos of the same scene.
Yes! The best way to accomplish this in FiftyOne is to use grouped datasets and dynamic group views. Grouped datasets contain multiple slices of samples of possibly different modalities (image, video, or point cloud) that are organized into groups. Grouped datasets can be used, for example, to represent multiview scenes, where data for multiple perspectives of the same scene can be stored, visualized, and queried in ways that respect the relationships between the slices of data. You can also create dynamic group views into your datasets based on a field or expression of interest.
Here’s an example of an image being grouped with its associated sensor data.
Finally, here are some tips and tricks on how to group data using dynamic groups.
Auditing and versioning data
If we have multiple people annotating our data at the same time, is there a way in FiftyOne to see who annotated what so that we can evaluate each person’s annotation accuracy?
One way to accomplish this is to make use of FiftyOne Teams’ collaboration and data versioning capabilities.
With FiftyOne Teams’ data versioning you are able to capture the state of your dataset at any given time so that it can be referenced in the future. This enables workflows like recalling particular important events in the dataset’s lifecycle (model trained, annotation added, etc) as well as helping to prevent accidental data loss.
Evaluating ground truth annotations
If we have multiple people annotating our data at the same time, is there a way in FiftyOne to see who annotated what so that we can evaluate each person’s annotation accuracy?
Yes! Check out this tutorial that shows you how to evaluate your ground truth annotations for errors/weaknesses that might need to be corrected before any subsequent model training.
Next, check out this tutorial to learn how to use FiftyOne to evaluate and understand the strengths and weaknesses of both a classification model and the underlying ground truth annotations.
Support for point clouds and instance segmentation
Does FiftyOne support point clouds? What about instance segmentation?
Yes! FiftyOne natively supports point clouds.
Check out this blog on how to visualize point clouds, create orthographic projections, and evaluate detections with the latest release of FiftyOne. Also, check out this tutorial on how to build a 3D self-driving dataset from scratch with OpenAI’s Point-E and FiftyOne.
Regarding segmentation, check out how to pre-label your computer vision data with CLIP, SAM, and other zero-shot models using the Zero-Shot Prediction plugin for FiftyOne.