callcut.training.TverskyLoss

class callcut.training.TverskyLoss(alpha=0.5, beta=0.5, smooth=1.0)[source]

Tversky loss with adjustable false positive/negative penalties.

A generalization of Dice loss that weights false positives and false negatives separately. Useful when recall matters more than precision (or vice versa).

The Tversky index is:

\[TI = \frac{TP}{TP + \alpha \cdot FP + \beta \cdot FN}\]
Parameters:
alpha : float

Weight for false positives. Higher values penalize FP more.

beta : float

Weight for false negatives. Higher values penalize FN more. Use beta > alpha to favor recall over precision.

smooth : float

Smoothing factor for numerical stability when the denominator is near zero.

Attributes

alpha

False positive weight.

beta

False negative weight.

smooth

Smoothing factor.

Methods

forward(logits, targets)

Compute Tversky loss.

Notes

  • alpha = beta = 0.5 recovers Dice loss

  • alpha = beta = 1.0 recovers Tanimoto coefficient

  • beta > alpha favors recall (fewer missed calls)

  • alpha > beta favors precision (fewer false alarms)
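
The library's actual implementation is not reproduced here, but the special cases above can be checked against a minimal sketch of the soft Tversky loss consistent with the formula (plain Python on probability/label lists, rather than the Tensor-based forward; `smooth` is assumed to be added to both numerator and denominator):

```python
def tversky_loss(probs, targets, alpha=0.5, beta=0.5, smooth=1.0):
    """Soft Tversky loss over flat lists of probabilities and {0, 1} labels."""
    tp = sum(p * t for p, t in zip(probs, targets))        # soft true positives
    fp = sum(p * (1 - t) for p, t in zip(probs, targets))  # soft false positives
    fn = sum((1 - p) * t for p, t in zip(probs, targets))  # soft false negatives
    ti = (tp + smooth) / (tp + alpha * fp + beta * fn + smooth)
    return 1.0 - ti
```

With alpha = beta = 0.5 the denominator becomes (2·TP + FP + FN) / 2, so TI reduces to the Dice coefficient 2·TP / (2·TP + FP + FN); raising beta inflates the loss whenever false negatives dominate, which is what pushes training toward recall.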

Examples

>>> # Favor recall (fewer missed calls)
>>> loss_fn = TverskyLoss(alpha=0.3, beta=0.7)
>>> loss = loss_fn(logits, targets)
forward(logits, targets)[source]

Compute Tversky loss.

Parameters:
logits : Tensor

Raw model output (before sigmoid).

targets : Tensor

Ground truth binary labels.

Returns:
loss : Tensor

Scalar loss value.
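
Note that forward expects raw logits, not probabilities. A hypothetical dependency-free sketch of this step (the real forward operates on Tensors; the sigmoid applied here is an assumption based on the "before sigmoid" note above):

```python
import math

def forward(logits, targets, alpha=0.5, beta=0.5, smooth=1.0):
    # Map raw logits to probabilities in (0, 1) before computing the index.
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    tp = sum(p * t for p, t in zip(probs, targets))
    fp = sum(p * (1 - t) for p, t in zip(probs, targets))
    fn = sum((1 - p) * t for p, t in zip(probs, targets))
    # Loss is 1 minus the smoothed Tversky index.
    return 1.0 - (tp + smooth) / (tp + alpha * fp + beta * fn + smooth)
```

Confident predictions with the correct sign (large positive logits on positives, large negative logits on negatives) drive the loss toward zero.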

property alpha

False positive weight.

Type:

float

property beta

False negative weight.

Type:

float

property smooth

Smoothing factor.

Type:

float