# Evaluating Object Detections with FiftyOne

This walkthrough demonstrates how to use FiftyOne to perform hands-on evaluation of your detection model.

It covers the following concepts:

• Loading a dataset with ground truth labels into FiftyOne

• Adding model predictions to your dataset

• Evaluating model accuracy with FiftyOne’s evaluation API

• Viewing the best and worst performing samples in your dataset

So, what’s the takeaway?

Aggregate measures of performance like mAP don’t give you the full picture of your detection model. In practice, the limiting factor on your model’s performance is often data quality issues that you need to see to address. FiftyOne is designed to make it easy to do just that.

Running the workflow presented here on your ML projects will help you understand the current failure modes (edge cases) of your model and how to fix them, including:

• Identifying scenarios that require additional training samples in order to boost your model’s performance

• Deciding whether your ground truth annotations have errors/weaknesses that need to be corrected before any subsequent model training will be profitable

## Setup

If you haven’t already, install FiftyOne:

[ ]:

!pip install fiftyone


In this tutorial, we’ll use an off-the-shelf Faster R-CNN detection model provided by PyTorch. To use it, you’ll need to install torch and torchvision, if you haven’t already:

[ ]:

!pip install torch torchvision


[1]:

import torch
import torchvision

# Run the model on GPU if it is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Load a pre-trained Faster R-CNN model
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.to(device)
model.eval()


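If you haven’t used torchvision’s detection models before, here’s a minimal sketch of running inference on a single image (the image path is hypothetical; later in this tutorial, predictions are generated for every sample in the dataset):

```python
from PIL import Image
from torchvision.transforms import functional as F

# Load an image and convert it to a float tensor in [0, 1] (C x H x W)
image = Image.open("/path/to/image.jpg").convert("RGB")  # hypothetical path
image_tensor = F.to_tensor(image).to(device)

# The model takes a list of 3D tensors and returns one dict per image
with torch.no_grad():
    outputs = model([image_tensor])

boxes = outputs[0]["boxes"]    # absolute pixel coordinates: [x1, y1, x2, y2]
labels = outputs[0]["labels"]  # integer COCO class IDs
scores = outputs[0]["scores"]  # confidences, sorted in decreasing order
```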


We’ll perform our analysis on the validation split of the COCO dataset, which is conveniently available for download via the FiftyOne Dataset Zoo.

[2]:

import fiftyone as fo
import fiftyone.zoo as foz

"coco-2017",
split="validation",
dataset_name="evaluate-detections-tutorial",
)
dataset.persistent = True

Split 'validation' already downloaded
100% |███████████████| 5000/5000 [28.5s elapsed, 0s remaining, 166.8 samples/s]
Dataset 'evaluate-detections-tutorial' created
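
Loading all 5,000 validation images takes a little while. If you’d rather experiment on a smaller slice of the data first, the zoo loader also accepts max_samples and shuffle arguments. A minimal sketch (the dataset name below is hypothetical):

```python
small_dataset = foz.load_zoo_dataset(
    "coco-2017",
    split="validation",
    max_samples=100,  # load only 100 samples...
    shuffle=True,     # ...chosen at random
    dataset_name="evaluate-detections-tutorial-small",  # hypothetical name
)
```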


[3]:

# Print some information about the dataset
print(dataset)

Name:           evaluate-detections-tutorial
Media type:     image
Num samples:    5000
Persistent:     True
Tags:           ['validation']
Sample fields:
    filepath:     fiftyone.core.fields.StringField
    tags:         fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)
    ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)

[4]:

# Print a ground truth detection
sample = dataset.first()
print(sample.ground_truth.detections[0])

<Detection: {
    'id': '6065d1e04976aab284081d83',
    'attributes': BaseDict({
        'area': <NumericAttribute: {'value': 531.8071000000001}>,
        'iscrowd': <NumericAttribute: {'value': 0.0}>,
    }),
    'tags': BaseList([]),
    'label': 'potted plant',
    'bounding_box': BaseList([
        0.37028125,
        0.3345305164319249,
        0.038593749999999996,
        0.16314553990610328,
    ]),
}>

Note that the ground truth detections are stored in the ground_truth field of the samples. Each bounding box is stored in [top-left-x, top-left-y, width, height] format, with coordinates expressed relative to the image’s dimensions in [0, 1].

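To make those relative coordinates concrete, here’s a minimal sketch of converting the first detection’s bounding box to absolute pixel coordinates (it opens the image with PIL just to read its size):

```python
from PIL import Image

# Read the image size so the relative coordinates can be rescaled
width, height = Image.open(sample.filepath).size

# FiftyOne boxes are [top-left-x, top-left-y, width, height] in [0, 1]
rel_x, rel_y, rel_w, rel_h = sample.ground_truth.detections[0].bounding_box

abs_box = [rel_x * width, rel_y * height, rel_w * width, rel_h * height]
print(abs_box)  # absolute pixel coordinates in the original image
```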
[5]:

session = fo.launch_app(dataset)
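
The App lets you visually browse samples and their labels. Alongside it, you can query the dataset programmatically; as a quick sketch, FiftyOne’s count_values aggregation tallies the ground truth classes:

```python
# Count the ground truth detections per class across the dataset
counts = dataset.count_values("ground_truth.detections.label")

# Print the five most common classes
print(sorted(counts.items(), key=lambda kv: -kv[1])[:5])
```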