
Webinar Recap: What’s New in FiftyOne 0.20 for Computer Vision

We recently released FiftyOne 0.20, which is packed with exciting new features to help you organize, visualize, search, and explore your computer vision datasets. Voxel51 Co-Founder and CTO Brian Moore walked us through the new features in a live webinar, with plenty of live demos and code examples, so that you can see all the awesomeness in action. 

You can watch the video playback on YouTube, take a look at the slides, read the transcript, and read the recap below for the highlights. Enjoy!

First, Thanks for Voting for Your Favorite Charity!

In lieu of swag, we gave attendees the opportunity to help guide our monthly donation to charitable causes. The charity that received the highest number of votes was Wildlife AI. We were first introduced to Wildlife AI through the FiftyOne community! They are using FiftyOne to enable their users to easily analyze the camera data and create their own models. We are sending a charitable donation of $200 to Wildlife AI on behalf of the computer vision community.

What Is FiftyOne?

Brian starts with a quick overview of what FiftyOne is for those who might be new to it: “think of FiftyOne as glue between your datasets and your models, and also between your data-centric tools and your model-centric tools.”

FiftyOne helps you curate high quality datasets on the left and feed them to your models on the right. Nestled at the center of your data and models, FiftyOne unlocks dozens of computer vision workflows so you can continuously build high quality data, high performing models, and production-grade AI.

FiftyOne sits in the center of your data and models

Here’s a snapshot of some of the computer vision workflows made possible by FiftyOne:

  • Visualize, query, and analyze computer vision datasets
  • Streamline data annotation workflows
  • Identify and correct labeling mistakes
  • Analyze model performance, both visually and programmatically
  • And dozens more workflows!

You can get the full FiftyOne overview in the first five minutes of the presentation recap video.

What Are the New Features in FiftyOne 0.20?

Here are the new features in FiftyOne 0.20 that Brian demonstrated in the webinar; you can read the highlights in the sections below:

  • Natural language search: you can now perform arbitrary search-by-text queries natively in the FiftyOne App and Python SDK, leveraging multimodal vector indexes on your datasets under the hood
  • Qdrant and Pinecone integrations: new integrations with Qdrant and Pinecone to power text/image similarity queries
  • Similarity API: significant upgrades to the FiftyOne Brain’s similarity API, including configurable vector database backends and the ability to modify existing indexes
  • Point cloud-only datasets: you can now create datasets composed only of point cloud samples and visualize them in the App’s grid view

Prerequisites

Before diving into the features, Brian first installed FiftyOne with pip install fiftyone, loaded a 200-image dataset from the Dataset Zoo, loaded a model from the Model Zoo that can generate embeddings for both language and images, and then computed embeddings (specifically, a similarity index, which is at the heart of the new features being demonstrated). Now that there's a dataset and an index on it, Brian launches the FiftyOne App to show us the new features in action!
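
For reference, a minimal sketch of those prerequisite steps might look like the following. The specific zoo dataset, model name, and brain key here are assumptions for illustration; the Jupyter notebook linked below contains the exact code Brian ran.

    import fiftyone as fo
    import fiftyone.brain as fob
    import fiftyone.zoo as foz

    # Load a small dataset from the Dataset Zoo (the "quickstart" dataset has 200 images)
    dataset = foz.load_zoo_dataset("quickstart")

    # Build a similarity index using a multimodal (image + text) model from the Model Zoo
    fob.compute_similarity(
        dataset,
        model="clip-vit-base32-torch",  # embeds both images and natural language
        brain_key="img_sim",            # name used to reference this index later
    )

    # Launch the FiftyOne App to explore the dataset
    session = fo.launch_app(dataset)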

You can find the prerequisite steps here in the Jupyter notebook, or consider joining us for a Getting Started with FiftyOne Workshop – you can find a variety of upcoming dates and times listed on our events page. The workshop is half lecture and half lab, and you'll walk away with everything you need to get up and running with open source FiftyOne.

Natural Language Search

To use natural language search now that the prerequisite steps are done, Brian clicks the magnifying glass icon in the FiftyOne App's samples grid bar and first searches for "puppies," then tries a slightly more complicated string: "kites high in the air." By default, the 25 most closely matching images in the dataset are returned (25 is the default setting, but it can be configured).

This dataset may have annotations, but this search feature is completely unsupervised. It’s not relying on annotations; it’s only based on the embeddings that we added to the dataset in the prerequisite step.
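
The same kind of query can also be run programmatically. Here is a rough sketch using the SDK's sort_by_similarity() method, assuming the index from the prerequisite step was stored under the brain key "img_sim":

    # Return the 25 samples whose embeddings best match the text prompt
    view = dataset.sort_by_similarity(
        "kites high in the air",
        k=25,                 # number of matches to return (the App's default)
        brain_key="img_sim",  # similarity index to query
    )
    session.view = view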

You can see the demo of the natural language search feature from ~08:37 – 17:42 in the presentation, including using a subset of the COCO dataset (the validation split) with 5,000 samples. Brian even goes under the hood to explain how this all works: vector search! Furthermore, not only can you search by natural language, you can also perform an image-to-image search. Simply use the image similarity icon in the sample grid to find images in the dataset similar to the one you selected.
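
Programmatically, an image-to-image search looks much the same: you pass a sample ID instead of a text prompt. A sketch, again assuming the "img_sim" brain key from above:

    # Pick a query image (here, simply the first sample) and find its 25 nearest neighbors
    query_id = dataset.first().id
    similar_view = dataset.sort_by_similarity(query_id, k=25, brain_key="img_sim")
    session.view = similar_view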

So far, Brian has demonstrated vector searches using the built-in vector database, which runs in memory. But if you want to scale this workflow to very large datasets, say millions of images, and quickly run searches across those millions of data points, it is more performant to use a dedicated vector database solution, which is described in the next section!

Qdrant and Pinecone Integrations

As of the 0.20 release, FiftyOne integrates with two vector databases: Qdrant and Pinecone. Earlier in the presentation, Brian ran the compute_similarity() method. The method now supports a backend parameter: you can either use the built-in database or specify one of the two supported backends. Assuming you've configured your Qdrant or Pinecone vector database, you can generate indexes for your FiftyOne datasets whose vectors are stored in that separate database. Performing similarity searches in the FiftyOne App will then query that database rather than the built-in one.
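
A sketch of what that might look like with the Qdrant backend; this assumes a running Qdrant server, the corresponding client installed, and an arbitrary brain key name:

    import fiftyone.brain as fob

    # Store the vectors in Qdrant instead of the built-in in-memory database
    fob.compute_similarity(
        dataset,
        model="clip-vit-base32-torch",
        backend="qdrant",        # or "pinecone"
        brain_key="qdrant_sim",
    )

    # Similarity searches against "qdrant_sim" now query the Qdrant index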

Brian dives deep into both integrations and shows multiple compelling computer vision workflows. You can watch it all from ~17:42 – 36:28 in the presentation, and you can learn more about them in the integration docs: Qdrant and Pinecone.

Enhanced Similarity API

Brian notes that the upgrades to the FiftyOne Brain's similarity API have already appeared throughout the presentation, because they are what make all of this possible:

  • Use default backend or configure a custom one (Qdrant, Pinecone, or add your own)
  • Initialize an empty index
  • Add vectors to an existing index
  • Retrieve vectors from an index
  • Remove vectors from an index

Previously, similarity indexes were static objects that could not be edited once created. In FiftyOne 0.20, the similarity indices are now mutable! 
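
For example, here is a rough sketch of initializing an empty index and then adding and removing vectors; the model, view selections, and brain key are illustrative:

    import fiftyone.brain as fob
    import fiftyone.zoo as foz

    # Initialize an empty index (no embeddings computed up front)
    index = fob.compute_similarity(
        dataset,
        model="clip-vit-base32-torch",
        embeddings=False,
        brain_key="incremental_sim",
    )

    # Compute embeddings for a subset of samples and add them to the index
    model = foz.load_zoo_model("clip-vit-base32-torch")
    view = dataset.limit(100)
    embeddings = view.compute_embeddings(model)
    index.add_to_index(embeddings, view.values("id"))

    # Later, remove some vectors from the index
    index.remove_from_index(sample_ids=view.take(10).values("id"))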

Get this quick summary of the similarity API updates from ~36:28 – 37:02 in the presentation, and here in the Similarity API docs.

Upgrades to Point Cloud Support

Previously, the only way to work with point clouds in the App was to add point cloud samples as slices of grouped datasets that also contain other media modalities (image, video, etc.). However, in FiftyOne 0.20 you can now create datasets that contain only point cloud samples and work with them natively in the App's grid and modal views.
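
A minimal sketch of creating such a dataset (the .pcd file paths are placeholders; FiftyOne infers the point cloud media type from the file extension):

    import fiftyone as fo

    # Paths to your point cloud files (placeholders)
    pcd_paths = ["/path/to/scene1.pcd", "/path/to/scene2.pcd"]

    dataset = fo.Dataset("point-clouds-only")
    dataset.add_samples([fo.Sample(filepath=p) for p in pcd_paths])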

Brian explains how: “You can run a utility that exists in the tool to generate projection images. So, what it’s doing is taking each of the point clouds in the dataset and generating what we’re calling an orthographic projection image, like a top-down projection of that point cloud, and then storing those in a new field of the dataset. And the reason that’s useful is when you open up the grid view, you have a fast rendering representation of each of those point clouds so you can quickly scan through to find the one of interest. And then if you want to click into it and work with it in three dimensions, you can then open the modal and work with the full 3D scene.”
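
A rough sketch of what that workflow might look like, assuming the projection utility in fiftyone.utils.utils3d and a placeholder output directory and image size:

    import fiftyone as fo
    import fiftyone.utils.utils3d as fou3d

    # Generate top-down orthographic projection images for each point cloud sample
    # and store them in a new field of the dataset so the App's grid can render them
    fou3d.compute_orthographic_projection_images(
        dataset,
        (-1, 512),           # (width, height); -1 preserves the aspect ratio
        "/tmp/proj_images",  # directory in which to write the projection images
    )

    # Open the App: the grid shows the projections; the modal shows the full 3D scene
    session = fo.launch_app(dataset)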

Watch Brian’s demo of the new point cloud features from ~37:02 – 45:18 in the presentation. Also check out the docs for more information about adding point cloud samples and orthographic projections to your FiftyOne datasets and visualizing them in the App.

Other Notes

After demonstrating the new features, Brian shares a few additional points before concluding the presentation.

Open source software like FiftyOne doesn’t happen without an amazing community supporting it. Brian gives a shout out to the community members who contributed to FiftyOne 0.20. 

Also, we’re always open to new contributions and contributors! Check out the good first issue label on GitHub. If you are interested in working on one of those, feel free to leave a comment there and we will be happy to help you out in any way.