Welcome to our weekly FiftyOne tips and tricks blog where we recap interesting questions and answers that have recently popped up on Slack, GitHub, Stack Overflow, and Reddit.
Wait, what’s FiftyOne?
FiftyOne is an open source machine learning toolset that enables data science teams to improve the performance of their computer vision models by helping them curate high quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster.
Get started! We’ve made it easy to get up and running in a few minutes
Join the FiftyOne Slack community, we’re always happy to help
Ok, let’s dive into this week’s tips and tricks!
Filtering labels with ViewField
Community Slack member Geoffrey Keating asked,
“I have a function that takes a bounding box and margin of error to determine if the box is on the border of an image; could I use this in conjunction with ViewField to filter labels?”
First, a little background on ViewField. When you create a ViewField using a string field like ViewField("embedded.field.name"), the meaning of this field is interpreted relative to the context in which the ViewField object is used. For example, when passed to the ViewExpression.map() method, this object will refer to the embedded.field.name field of the array element being processed.
In other cases, you may wish to create a ViewField that always refers to the root document. You can do this by prepending "$" to the name of the field, as in ViewField("$embedded.field.name").
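As a rough illustration of the difference, here is a plain-Python analogue (the resolve() helper below is hypothetical, not part of FiftyOne): a relative path resolves against the current array element, while a "$"-prefixed path resolves against the root document.

```python
def resolve(path, element, root):
    """Resolve a field path the way ViewField does (hypothetical sketch):
    paths starting with "$" are read from the root document, all other
    paths are read from the current element being processed."""
    if path.startswith("$"):
        doc, path = root, path[1:]
    else:
        doc = element
    for part in path.split("."):
        doc = doc[part]
    return doc

root = {
    "threshold": 0.5,
    "detections": [{"confidence": 0.9}, {"confidence": 0.3}],
}

# Inside a map() over root["detections"], a relative path reads the element...
elem = root["detections"][0]
resolve("confidence", elem, root)  # 0.9

# ...while a "$"-prefixed path always reads the root document
resolve("$threshold", elem, root)  # 0.5
```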
Here are two options that could work. The first one uses relative coordinates:
```python
import fiftyone as fo
import fiftyone.zoo as foz
from fiftyone import ViewField as F


def is_bordering_box(margin=0.05):
    bbox = F("bounding_box")
    margins = [
        bbox[0],
        bbox[1],
        1 - bbox[0] - bbox[2],
        1 - bbox[1] - bbox[3],
    ]
    return F.any([m < margin for m in margins])
```
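For intuition, the same margin arithmetic can be written in plain Python (a hypothetical helper, not FiftyOne's expression language). Bounding boxes here follow FiftyOne's format: [x, y, width, height] in relative [0, 1] coordinates.

```python
def is_bordering_box_py(bbox, margin=0.05):
    """Plain-Python version of the margin check above (illustrative only)."""
    x, y, w, h = bbox
    # Distances from the box to the left, top, right, and bottom image edges
    margins = [x, y, 1 - x - w, 1 - y - h]
    return any(m < margin for m in margins)

is_bordering_box_py([0.01, 0.4, 0.2, 0.2])  # True: left edge within margin
is_bordering_box_py([0.4, 0.4, 0.2, 0.2])   # False: well inside the image
```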
Computing mistakenness for missing objects
“For fiftyone.brain.compute_mistakenness, how are missing objects calculated? Is there a certain probability threshold that a prediction has to reach? Also, is there a certain IoU or IoA threshold that a ground truth detection and a predicted bounding box need to meet before the prediction is marked as missing/spurious?”
The confidence threshold for predictions to be marked as missing is currently hard-coded at 0.95, and the IoU threshold at 0.5. In the future, it may make sense to expose these as parameters.
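To make those thresholds concrete, here is an illustrative plain-Python sketch of the matching rule (the iou() and is_possible_missing() helpers are hypothetical, not the actual fiftyone.brain implementation): a high-confidence prediction that overlaps no ground truth box at IoU ≥ 0.5 is flagged as a possible missing object.

```python
def iou(a, b):
    """IoU of two [x, y, w, h] boxes in relative coordinates."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

CONF_THRESH = 0.95  # confidence a prediction must exceed
IOU_THRESH = 0.5    # overlap below which a prediction is unmatched

def is_possible_missing(pred_box, pred_conf, gt_boxes):
    """A confident prediction with no ground truth match may indicate
    an object missing from the annotations (illustrative only)."""
    unmatched = all(iou(pred_box, gt) < IOU_THRESH for gt in gt_boxes)
    return unmatched and pred_conf > CONF_THRESH
```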
Setting per-class label colors
“I’d like to annotate a bounding box dataset and use the same colors for each class every time. So, dogs are blue, cats are red, etc. Can someone point me to how to set this up in the configs?”
At the moment, you can only provide a color pool to the App, from which colors are randomly pulled. However, this is a popular request! You can track this feature’s progress here.
If you are using our draw_labels() functionality to render images to disk with labels drawn on them, then you could iteratively draw one label class at a time with a set color:
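As a rough sketch of that pattern, the loop below processes detections one class at a time with a fixed color. Everything here is a hypothetical stand-in: class_colors and draw_one_class() are illustrative, and in practice each pass would filter the dataset to a single class and call draw_labels() with the desired color configured.

```python
# Hypothetical fixed color assignments, one per class
class_colors = {"dog": "blue", "cat": "red"}

def draw_one_class(detections, label_class, color):
    """Collect (box, color) pairs for a single class, mimicking one
    rendering pass with a fixed color (stand-in for draw_labels())."""
    return [
        (d["bounding_box"], color)
        for d in detections
        if d["label"] == label_class
    ]

detections = [
    {"label": "dog", "bounding_box": [0.1, 0.1, 0.2, 0.2]},
    {"label": "cat", "bounding_box": [0.5, 0.5, 0.3, 0.3]},
]

# One pass per class, each with its own set color
rendered = []
for label_class, color in class_colors.items():
    rendered.extend(draw_one_class(detections, label_class, color))
```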