COCO Metrics

Precision vs Recall
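
Both quantities are computed from true positives (TP, correct detections), false positives (FP, spurious detections), and false negatives (FN, missed objects): precision = TP / (TP + FP) is the fraction of predicted boxes that are correct, while recall = TP / (TP + FN) is the fraction of ground-truth objects that were found. A detection counts as a true positive only if it overlaps a ground-truth box sufficiently, which is where IoU comes in.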

Summary of Key Concepts:

  • AP (Average Precision): Measures the model's precision across various IoU thresholds and object sizes.

  • AR (Average Recall): Measures how well the model can recall (find) all object instances, given a fixed maximum number of detections per image.

  • IoU (Intersection over Union): Indicates how much overlap exists between the predicted bounding box and the ground truth. Different IoU thresholds (e.g., 0.50 or 0.75) correspond to different degrees of overlap required for a correct detection.
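
As a concrete illustration, here is a minimal sketch of the IoU computation for two axis-aligned boxes in (x1, y1, x2, y2) corner format; the `iou` helper name is hypothetical, not part of any particular library.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: two partially overlapping boxes.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143, below the 0.5 threshold
```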

Average Precision Metrics

The metric reflects the average precision across all object categories. Aside from the main AP metric, additional variants are reported to give a better picture of how the model performs (an evaluation sketch follows the list):

  • For the IoU threshold: evaluation is repeated at several required degrees of overlap.

    • bbox_mAP: averaged over IoU thresholds from 0.5 to 0.95 in steps of 0.05.

    • bbox_mAP_50: at IoU = 0.5.

    • bbox_mAP_75: at IoU = 0.75.

  • For the bbox size:

    • bbox_mAP_s: for small objects (with an area below 32² pixels).

    • bbox_mAP_m: for medium objects (with an area between 32² and 96² pixels inclusive).

    • bbox_mAP_l: for large objects (with an area above 96² pixels).
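
To make the above concrete, the sketch below runs bbox evaluation with pycocotools, the reference implementation of these metrics. The file paths are placeholders, and setting maxDets to [100, 300, 1000] is an assumption made so the recall numbers match the AR@100/300/1000 names used in the next section (the pycocotools default is [1, 10, 100]).

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: COCO-format ground truth and detection results.
coco_gt = COCO('annotations/instances_val2017.json')
coco_dt = coco_gt.loadRes('detections.json')

evaluator = COCOeval(coco_gt, coco_dt, iouType='bbox')
# Assumption: report AR at 100/300/1000 detections per image, matching
# the metric names on this page (the default is [1, 10, 100]).
evaluator.params.maxDets = [100, 300, 1000]

evaluator.evaluate()    # per-image, per-category matching at each IoU threshold
evaluator.accumulate()  # build precision/recall curves
evaluator.summarize()   # print the 12 standard AP/AR numbers
```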

Average Recall Metrics

Average recall captures the model's ability to find all ground-truth objects; higher values mean fewer false negatives (missed detections). The variants below differ in the detection budget and object size considered (a mapping to evaluator output is sketched after the list).

  • bbox_AR@100: considering up to 100 detections per image.

  • bbox_AR@300: considering up to 300 detections per image.

  • bbox_AR@1000: considering up to 1000 detections per image.

  • bbox_AR_s@1000: recall for small objects, considering up to 1000 detections per image.

  • bbox_AR_m@1000: recall for medium objects, considering up to 1000 detections per image.

  • bbox_AR_l@1000: recall for large objects, considering up to 1000 detections per image.
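
Continuing the pycocotools sketch above, the 12 values printed by summarize() are also stored in evaluator.stats; under the assumed maxDets of [100, 300, 1000], they line up with the metric names on this page as follows.

```python
# Index -> metric mapping of evaluator.stats, assuming maxDets=[100, 300, 1000].
names = [
    'bbox_mAP', 'bbox_mAP_50', 'bbox_mAP_75',
    'bbox_mAP_s', 'bbox_mAP_m', 'bbox_mAP_l',
    'bbox_AR@100', 'bbox_AR@300', 'bbox_AR@1000',
    'bbox_AR_s@1000', 'bbox_AR_m@1000', 'bbox_AR_l@1000',
]
for name, value in zip(names, evaluator.stats):
    print(f'{name}: {value:.3f}')
```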
