There's always a tradeoff between precision and recall. I'm dealing with a multi-class problem, where for some classes I have perfect precision but really low recall.
Since false positives are less of an issue for my problem than missed true positives, I want to trade some precision for higher recall on specific classes, while keeping everything else as stable as possible. What are some ways to trade precision for better recall?
You can apply a threshold to the confidence scores of your classifier's output layer and plot precision and recall at different threshold values. You can also use a different threshold for each class: lowering a class's threshold makes that class easier to predict, which raises its recall at the cost of its precision.
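A minimal sketch of per-class thresholding with NumPy (the scores, labels, and threshold values below are made up for illustration): instead of taking a plain argmax over the softmax scores, take the argmax of each score's margin above its class-specific threshold, so a lowered threshold favors that class.

```python
import numpy as np

def predict_with_class_thresholds(probs, thresholds):
    """Predict the class whose score exceeds its own threshold by the
    largest margin. Lowering one class's threshold trades precision
    for recall on that class only."""
    margins = probs - thresholds          # shape (n_samples, n_classes)
    return margins.argmax(axis=1)

# Hypothetical softmax outputs for a 3-class problem.
scores = np.array([
    [0.70, 0.20, 0.10],   # true class 0
    [0.45, 0.15, 0.40],   # true class 2, but class 0 scores highest
    [0.10, 0.20, 0.70],   # true class 2
])

# Equal thresholds behave like plain argmax: sample 1 is misclassified
# as class 0, so class 2 has low recall.
equal = np.full(3, 1 / 3)
print(predict_with_class_thresholds(scores, equal))    # [0 0 2]

# Lowering only the class-2 threshold recovers the missed positive.
tuned = np.array([1 / 3, 1 / 3, 0.25])
print(predict_with_class_thresholds(scores, tuned))    # [0 2 2]
```

Sweeping each threshold over a validation set and plotting the resulting precision/recall pairs lets you pick the operating point you want per class.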
You can also take a look at TensorFlow's weighted cross entropy (`tf.nn.weighted_cross_entropy_with_logits`) as a loss function. As the documentation states, it uses a weight to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.
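To show the idea behind that loss without pulling in TensorFlow, here is a NumPy sketch of the same formula: a `pos_weight > 1` makes missed positives (false negatives) cost more than false positives, which pushes training toward higher recall.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def weighted_bce(logits, labels, pos_weight):
    """Weighted binary cross entropy, per element:
    -(pos_weight * y * log(p) + (1 - y) * log(1 - p)), p = sigmoid(logit).
    pos_weight > 1 penalizes false negatives more, favoring recall."""
    p = sigmoid(logits)
    return -(pos_weight * labels * np.log(p)
             + (1 - labels) * np.log(1 - p))

logits = np.array([0.0, 0.0])
labels = np.array([1.0, 0.0])

# With pos_weight=1 this is standard cross entropy: log(2) for both.
print(weighted_bce(logits, labels, pos_weight=1.0))

# With pos_weight=3 the error on the positive example costs 3x as much,
# while the negative example's cost is unchanged.
print(weighted_bce(logits, labels, pos_weight=3.0))
```

In a multi-class setting you would apply this per class (one-vs-rest), giving a larger `pos_weight` only to the classes whose recall you want to raise.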