I found a code snippet that implements a custom metric for TensorBoard (PyTorch training):
import torch

def specificity(output, target, t=0.5):
    tp, tn, fp, fn = tp_tn_fp_fn(output, target, t)
    if fp == 0:
        return 1
    s = tn / (tn + fp)
    if s != s:
        s = 1
    return s

def tp_tn_fp_fn(output, target, t):
    with torch.no_grad():
        preds = output > t  # torch.argmax(output, dim=1)
        preds = preds.long()
        num_true_neg = torch.sum((preds == target) & (target == 0), dtype=torch.float).item()
        num_true_pos = torch.sum((preds == target) & (target == 1), dtype=torch.float).item()
        num_false_pos = torch.sum((preds != target) & (target == 0), dtype=torch.float).item()  # predicted 1, actual 0
        num_false_neg = torch.sum((preds != target) & (target == 1), dtype=torch.float).item()  # predicted 0, actual 1
        return num_true_pos, num_true_neg, num_false_pos, num_false_neg
In terms of the calculation itself it is easy enough to understand.
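For reference, this is roughly how such a metric gets fed to TensorBoard during training. The SummaryWriter setup and the sample tensors are my own minimal illustration, not part of the found snippet:

import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()  # logs to ./runs by default

# made-up sigmoid outputs and binary labels for one validation batch
output = torch.tensor([0.9, 0.2, 0.7, 0.1])
target = torch.tensor([1, 0, 0, 0])

# log the scalar under a tag; the step would normally be the epoch or batch index
writer.add_scalar("val/specificity", specificity(output, target), global_step=0)
writer.close()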
What I don't understand is s != s. What does that check do, and how can s even differ from itself?
Since it's ML-related, I'll assume the data are all numbers. The only value for which s != s is true is the special not-a-number value nan. nan compares as false under ==, <, <=, > and >=, even against itself, so nan == nan is false, and consequently nan != nan is true: nan is the one number that is not equal to itself. The check s != s is therefore a NaN test.
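A quick sketch of the check in isolation; the 0/0 tensor division below is just one common way a nan can appear:

import math
import torch

s = float("nan")
print(s != s)         # True: nan compares unequal to everything, including itself
print(s == s)         # False
print(math.isnan(s))  # True: the clearer, idiomatic way to spell the same check

# in torch, dividing 0 by 0 quietly yields nan instead of raising an error
t = torch.tensor(0.0) / torch.tensor(0.0)
print((t != t).item())  # True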