I have a binary classification problem.
I am using tf.losses.log_loss from TensorFlow.
To check the result, I also compute sklearn.metrics.log_loss. Most of the time, the two functions give the same result (differing only in dtype). In some instances, however, the sklearn function returns NaN while tf.losses.log_loss returns a correct value.
data is here: https://pastebin.com/BvDgDnVT
code:
import sklearn.metrics
import tensorflow as tf
y_true = [... see pastebin link]
y_pred = [... see pastebin link]
loss_sk = sklearn.metrics.log_loss(y_true, y_pred, labels=[0, 1]) # -> returns NaN
with tf.Session() as sess:
    loss_tf = tf.losses.log_loss(y_true, y_pred).eval(session=sess) # -> returns 0.0549
There seems to be some log(0) happening, but why does TensorFlow not have this problem?
Changing the dtype of both arrays to a 64-bit float fixes it, for example by adding

y_pred = y_pred.astype(np.float64)

before calling the loss function.
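A minimal sketch of that fix, with small made-up arrays standing in for the pastebin data (the exact 1.0 prediction is the kind of value that triggers the edge case):

```python
import numpy as np
import sklearn.metrics

# Made-up stand-ins for the pastebin data; the exact 1.0 is the risky value
y_true = [0, 1, 1, 0]
y_pred = np.array([0.1, 0.9, 1.0, 0.2], dtype=np.float32)

# Casting predictions to float64 before calling log_loss makes sklearn's tiny
# default clipping epsilon representable, so no log(0) sneaks in
loss_sk = sklearn.metrics.log_loss(y_true, y_pred.astype(np.float64), labels=[0, 1])
print(loss_sk)  # finite, ~0.1085
```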
Another way of fixing the issue is to pass eps=1e-7 to sklearn's log_loss, which is a more appropriate epsilon for float32 and is what TensorFlow uses internally.
scikit-learn, however, defaults to eps=1e-15 (expecting float64): in float32 arithmetic, 1 - 1e-15 rounds back to exactly 1.0, so the clipping has no effect and log(0) can still occur.
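The underflow is easy to see in isolation. Below, manual_log_loss is a hypothetical helper that mimics the clip-then-log computation, not scikit-learn's actual implementation:

```python
import numpy as np

# In float32, 1 - 1e-15 rounds back to exactly 1.0, so clipping to
# [1e-15, 1 - 1e-15] cannot pull a prediction of 1.0 away from 1.0
print(np.float32(1.0) - np.float32(1e-15) == np.float32(1.0))  # True
print(np.float64(1.0) - np.float64(1e-15) == np.float64(1.0))  # False

def manual_log_loss(y_true, y_pred, eps):
    """Hypothetical helper mimicking clip-then-log; not sklearn's real code."""
    y_true = np.asarray(y_true, dtype=np.float64)
    p = np.asarray(y_pred)
    p = np.clip(p, eps, 1 - eps).astype(p.dtype)  # clip in the input's precision
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

y_true = np.array([0, 1, 1], dtype=np.float32)
y_pred = np.array([0.1, 0.9, 1.0], dtype=np.float32)  # exact 1.0 triggers it

with np.errstate(divide='ignore', invalid='ignore'):
    print(manual_log_loss(y_true, y_pred, eps=1e-15))  # nan, from 0 * log(0)
print(manual_log_loss(y_true, y_pred, eps=1e-7))       # finite, ~0.0702
```

With eps=1e-7 the clipped value 1 - 1e-7 survives float32 rounding, so the log stays finite.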