I want to use the histogram of the output of a CNN to compute the loss. I am wondering whether tf.histogram_fixed_width() supports gradients flowing back to the previous layer. Only if it does can I add a loss layer after computing the histogram.
tf.histogram_fixed_width() does not support automatic differentiation, since a histogram is not a continuously differentiable function of its inputs. You can see this in the following example, which returns a gradient of [None].
import numpy as np
import tensorflow as tf
import keras.backend as K

# Bins cover [0.0, 5.0); out-of-range values are counted in the first/last bin
value_range = [0.0, 5.0]
a = np.array([-1.0, 0.0, 1.5, 2.0, 5.0, 15.0])
x = K.variable(a)

# Hard binning: this op has no registered gradient
hist = tf.histogram_fixed_width(x, value_range, nbins=5, dtype=tf.float32)

gradient = K.gradients(hist, x)
# output is [None]
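If you still want to build a loss on a histogram, one common workaround (not something tf.histogram_fixed_width provides) is to replace the hard binning with a soft, kernel-based approximation so that gradients can flow back to x. The sketch below is only an illustration of that idea in the same graph-mode setup as the example above; the soft_histogram function and its sigma bandwidth parameter are assumptions of this sketch, not part of the TensorFlow or Keras API.

def soft_histogram(x, value_range, nbins, sigma=0.25):
    # Differentiable histogram approximation: each sample contributes a
    # Gaussian-weighted vote to every bin centre instead of a hard 0/1 count.
    # sigma is a free bandwidth parameter you would need to tune.
    lo, hi = value_range
    bin_width = (hi - lo) / nbins
    centres = tf.constant(lo + bin_width * (np.arange(nbins) + 0.5), dtype=tf.float32)
    diff = tf.expand_dims(x, -1) - centres          # shape: (num_samples, nbins)
    weights = tf.exp(-0.5 * tf.square(diff / sigma))
    return tf.reduce_sum(weights, axis=0)           # shape: (nbins,)

soft_hist = soft_histogram(x, value_range, nbins=5)
soft_gradient = K.gradients(K.sum(soft_hist), x)
# soft_gradient is a real tensor rather than [None], so a loss built on
# soft_hist can be backpropagated through to the CNN output.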