Classification Metrics
What you’ll learn
- What side accuracy measures and why it matters
- How the metric is calculated from human corrections
- What low accuracy indicates about your classification model
Overview
After a detection model locates a feature in an image, a classification model determines which side of the animal it belongs to — left or right. This applies primarily to dorsal fin annotations, where side determination is essential for matching individuals across encounters. The classification metrics page tracks how often the model gets this right.
Side Accuracy
What it measures: Of all confirmed ML annotations, how many had the correct side assignment without needing a human correction.
Accuracy = (Confirmed − SideChanged) / Confirmed × 100
Every time a researcher reviews an ML annotation and changes its side from left to right (or vice versa) before confirming, that generates a SideChanged revision. The metric subtracts those corrections from the total confirmed count to determine how many the model got right on its own.
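As a minimal sketch, the calculation above can be expressed as a small function (the parameter names are assumptions taken from the terms in the formula, not actual field names):

```python
def side_accuracy(confirmed: int, side_changed: int) -> float:
    """Percentage of confirmed ML annotations whose side was correct as predicted.

    confirmed    -- total count of confirmed ML annotations
    side_changed -- how many of those received a SideChanged revision
                    (a reviewer flipped left/right before confirming)
    """
    if confirmed == 0:
        # The metrics page shows a placeholder in this case; see below.
        raise ValueError("no confirmed annotations yet")
    return (confirmed - side_changed) / confirmed * 100

# Example: 200 confirmed annotations, 6 side corrections
print(side_accuracy(200, 6))  # 97.0
```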
Interpreting the values
- High accuracy (e.g., 97%) — The model reliably distinguishes left from right. Researchers rarely need to flip the side assignment.
- Moderate accuracy (e.g., 85-95%) — The model is mostly correct but makes occasional errors. You may notice certain angles or lighting conditions cause more confusion.
- Low accuracy (below 85%) — The model frequently confuses left and right sides. This adds significant manual effort to the review process and may warrant model retraining.
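The bands above could be sketched as a simple lookup; the exact cutoffs (95 and 85) are illustrative, taken from the examples in this list rather than from any fixed specification:

```python
def interpret_side_accuracy(pct: float) -> str:
    """Map a side-accuracy percentage to the bands described above.
    Cutoffs are illustrative, not authoritative."""
    if pct >= 95:
        return "high"      # reliable; side flips are rare
    if pct >= 85:
        return "moderate"  # mostly correct, occasional errors
    return "low"           # frequent confusion; consider retraining
```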
What counts and what does not
- Only IsMachine = true annotations are included — human-created annotations are excluded.
- Only dorsal fin annotations (or other categories where side is relevant) contribute to this metric.
- If no annotations have been confirmed yet (Confirmed = 0), the metric displays as "---".
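Putting the counting rules together, a sketch of the filtering and display logic might look like this. The field names on the annotation record are hypothetical, chosen to mirror the rules listed above:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    # Hypothetical fields mirroring the counting rules above
    is_machine: bool     # the IsMachine flag
    side_relevant: bool  # e.g. a dorsal fin annotation
    confirmed: bool
    side_changed: bool   # a SideChanged revision was recorded

def side_accuracy_display(annotations: list[Annotation]) -> str:
    # Only confirmed, machine-created, side-relevant annotations count
    eligible = [a for a in annotations
                if a.is_machine and a.side_relevant and a.confirmed]
    if not eligible:
        return "---"  # nothing confirmed yet (Confirmed = 0)
    changed = sum(a.side_changed for a in eligible)
    accuracy = (len(eligible) - changed) / len(eligible) * 100
    return f"{accuracy:.1f}%"
```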
Common causes of low side accuracy
Side classification errors are often caused by:
- Ambiguous images — When the animal is photographed from directly above or behind, the distinction between left and right becomes difficult even for ML models.
- Unusual body posture — Animals rolling, breaching, or turning can present a side view that contradicts the actual orientation.
- Training data imbalance — If the model was trained on significantly more left-side images than right-side (or vice versa), it may be biased toward predicting the more common class.
Reviewing the specific encounters where side corrections were made can help identify whether errors follow a pattern.
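One way to look for such a pattern is to tally corrections by some attribute of the encounter. The sketch below groups hypothetical correction records by a made-up "viewpoint" key; the record shape is an assumption, not an actual export format:

```python
from collections import Counter

def correction_pattern(corrections: list[dict]) -> Counter:
    """Count SideChanged corrections by viewpoint to see whether
    errors cluster around particular shooting angles.
    The 'viewpoint' key is hypothetical."""
    return Counter(c["viewpoint"] for c in corrections)

corrections = [
    {"encounter": "E1", "viewpoint": "rear"},
    {"encounter": "E2", "viewpoint": "rear"},
    {"encounter": "E3", "viewpoint": "lateral"},
]
print(correction_pattern(corrections))  # Counter({'rear': 2, 'lateral': 1})
```

A cluster like the one above (most flips on rear-view shots) would point at the ambiguous-angle cause rather than training-data imbalance.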
Related
- ML Center Overview — How the ML Center works and how metrics are refreshed
- Detection Metrics — Precision, recall, and bounding box quality
- Identification Metrics — Accuracy of individual ID predictions
- Revision Rate — Overall human correction effort
- Confirming Annotations — The review workflow that generates metric data