I am trying to implement a custom loss function in Keras with the TensorFlow backend, based on the Laplacian of two images.
from keras import backend as K
from keras import losses

def blur_loss(y_true, y_pred):
    # weighting of blur loss
    alpha = 1
    mae = losses.mean_absolute_error(y_true, y_pred)
    lapKernel = K.constant([0, 1, 0, 1, -4, 1, 0, 1, 0], shape=[3, 3])
    trueLap = K.conv2d(y_true, lapKernel)
    predLap = K.conv2d(y_pred, lapKernel)
    trueBlur = K.var(trueLap)
    predBlur = K.var(predLap)
    blurLoss = alpha * K.abs(trueBlur - predBlur)
    loss = (1 - alpha) * mae + alpha * blurLoss
    return loss
When I try to compile the model, I get this error:
Traceback (most recent call last):
File "kitti_train.py", line 65, in <module>
model.compile(loss='mean_absolute_error', optimizer='adam', metrics=[blur_loss])
File "/home/ubuntu/.virtualenvs/dl4cv/lib/python3.5/site-packages/keras/engine/training.py", line 924, in compile
handle_metrics(output_metrics)
File "/home/ubuntu/.virtualenvs/dl4cv/lib/python3.5/site-packages/keras/engine/training.py", line 921, in handle_metrics
mask=masks[i])
File "/home/ubuntu/.virtualenvs/dl4cv/lib/python3.5/site-packages/keras/engine/training.py", line 450, in weighted
score_array = fn(y_true, y_pred)
File "/home/ubuntu/prednet/blur_loss.py", line 14, in blur_loss
trueLap = K.conv2d(y_true, lapKernel)
File "/home/ubuntu/.virtualenvs/dl4cv/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 3164, in conv2d
data_format='NHWC')
File "/home/ubuntu/.virtualenvs/dl4cv/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 655, in convolution
num_spatial_dims, strides, dilation_rate)
File "/home/ubuntu/.virtualenvs/dl4cv/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 483, in _get_strides_and_dilation_rate
(len(dilation_rate), num_spatial_dims))
ValueError: len(dilation_rate)=2 but should be 0
After reading other questions, my understanding is that this problem stems from compilation using placeholder tensors for y_true and y_pred. I've tried checking whether the inputs are placeholders and replacing them with zero tensors, but that gives me other errors.
How do I use a convolution (the image processing function, not a layer) in my loss function without getting these errors?
The problem here was a misunderstanding of the conv2d function, which is not simply a 2-dimensional convolution. It is a batched 2-D convolution over multiple channels. So while you might expect a *2d function to accept 2-dimensional tensors, the input should actually have 4 dimensions (batch_size, height, width, channels) and the filter should also have 4 dimensions (filter_height, filter_width, input_channels, output_channels). Details can be found in the TF docs.
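Here is a minimal sketch of the loss with the kernel reshaped to the 4-D layout conv2d expects, assuming single-channel (grayscale) images in NHWC format; for multi-channel inputs the kernel would need to be tiled across the input channels:

def blur_loss(y_true, y_pred, alpha=1.0):
    # Assumes y_true / y_pred are 4-D tensors: (batch, height, width, 1)
    mae = losses.mean_absolute_error(y_true, y_pred)

    # Kernel must be 4-D: (filter_height, filter_width, in_channels, out_channels)
    lapKernel = K.constant([0, 1, 0,
                            1, -4, 1,
                            0, 1, 0], shape=[3, 3, 1, 1])

    trueLap = K.conv2d(y_true, lapKernel, padding='same')
    predLap = K.conv2d(y_pred, lapKernel, padding='same')

    # Variance of the Laplacian response as a sharpness/blur measure
    trueBlur = K.var(trueLap)
    predBlur = K.var(predLap)

    blurLoss = K.abs(trueBlur - predBlur)
    return (1 - alpha) * mae + alpha * blurLoss

With this change the kernel and inputs match the shapes conv2d expects, so the function can be used both as a loss and as a metric (e.g. metrics=[blur_loss]) without the dilation_rate error.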