callcut.evaluation.EventMetrics
- class callcut.evaluation.EventMetrics(n_ground_truth, n_predicted, tp, fp, fn, precision, recall, f1)
Event-level detection metrics.
These metrics evaluate detection at the call/event level: each ground truth call is either matched to a prediction (true positive) or missed (false negative), and each prediction is either matched (true positive) or a false alarm (false positive).
- Parameters:
- n_ground_truth
int Total number of ground truth events.
- n_predicted
int Total number of predicted events.
- tp
int True positives (correctly matched predictions).
- fp
int False positives (predictions without a matching ground truth).
- fn
int False negatives (ground truth events without a matching prediction).
- precision
float Precision = TP / (TP + FP). Of the predicted calls, what fraction are real.
- recall
float Recall = TP / (TP + FN). Of the real calls, what fraction were detected.
- f1
float F1 score = 2 * precision * recall / (precision + recall). Harmonic mean of precision and recall.
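The derived fields follow directly from the counts. A minimal standalone sketch of those relationships, with zero-denominator guards for the degenerate cases (the helper name is hypothetical, not part of callcut):

```python
def event_metrics_from_counts(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from event-level counts.

    Guards against zero denominators, which occur when there are no
    predictions (precision undefined) or no ground truth events
    (recall undefined); both are reported as 0.0 here.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (
        2 * precision * recall / (precision + recall)
        if (precision + recall)
        else 0.0
    )
    return precision, recall, f1

# Counts from the example below: 7 matched, 1 false alarm, 3 misses.
precision, recall, f1 = event_metrics_from_counts(tp=7, fp=1, fn=3)
print(precision, recall, round(f1, 3))  # 0.875 0.7 0.778
```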
Examples
>>> metrics = EventMetrics(
...     n_ground_truth=10,
...     n_predicted=8,
...     tp=7,
...     fp=1,
...     fn=3,
...     precision=0.875,
...     recall=0.7,
...     f1=0.778,
... )
>>> metrics.precision
0.875