In Keras, Dense layers accept an activity_regularizer parameter. In TensorFlow, there is no obviously equivalent parameter.
Keras:
from keras.layers import Input, Dense
from keras.models import Model
from keras import regularizers
encoding_dim = 32
input_img = Input(shape=(784,))
# add a Dense layer with a L1 activity regularizer
encoded = Dense(encoding_dim, activation='relu', activity_regularizer=regularizers.l1(10e-5))(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)
How can I make an activity_regularizer in TensorFlow?
The Keras documentation is not very precise here, but from what I have read, activity regularization is simply an L1 or L2 penalty on the output of a specific layer, added to the model's loss function.
So let's say you have some loss, for example the MSE between your labels and the model output:
loss = tf.losses.mean_squared_error(labels, model_output)
To add L1 activity regularization for a certain layer, you simply add the L1 term for that layer's output to your loss, scaled by some regularization strength (I'll use the 10e-5 from your question). Note that TensorFlow provides tf.nn.l2_loss but no L1 counterpart, so the L1 term is written out explicitly:
loss += 10e-5 * tf.reduce_sum(tf.abs(layer_output))
where layer_output is the output of the layer you want to regularize.
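To make the arithmetic concrete, here is a minimal NumPy sketch of what that combined loss computes. All of the values (activations, labels, predictions) are made up purely for illustration:

```python
import numpy as np

# Hypothetical activations of the regularized layer and some toy targets.
layer_output = np.array([0.5, -1.0, 2.0, 0.0])
labels = np.array([1.0, 0.0, 1.5, 0.5])
model_output = np.array([0.8, 0.2, 1.0, 0.4])

# The base loss: mean squared error between labels and predictions.
mse = np.mean((labels - model_output) ** 2)

# The L1 activity penalty: sum of absolute activations,
# i.e. what tf.reduce_sum(tf.abs(layer_output)) evaluates to.
l1_penalty = np.sum(np.abs(layer_output))

# Total loss with regularization strength 10e-5.
loss = mse + 10e-5 * l1_penalty
```

The penalty grows with the magnitude of the activations, so minimizing the total loss pushes the layer toward sparse outputs, which is exactly the intent of L1 activity regularization.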
If you did the same with the layer's weights instead of its output, you would have what the Keras documentation calls kernel regularization; doing it for the layer's bias vector gives you Keras's bias regularization.
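The distinction between the three is just which tensor the penalty is applied to. A small NumPy sketch with a made-up dense layer (all numbers hypothetical) shows the difference:

```python
import numpy as np

# Toy dense layer: y = relu(W @ x + b). Values are invented for illustration.
W = np.array([[0.5, -0.3],
              [1.0,  0.2]])
b = np.array([0.1, -0.1])
x = np.array([1.0, 2.0])
y = np.maximum(W @ x + b, 0.0)

reg = 10e-5  # regularization strength from the question

activity_penalty = reg * np.sum(np.abs(y))  # penalizes the layer's output
kernel_penalty = reg * np.sum(np.abs(W))    # penalizes the weight matrix
bias_penalty = reg * np.sum(np.abs(b))      # penalizes the bias vector
```

Same formula in each case; only the tensor being penalized changes.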
Keras is also shipped as tf.keras, so technically, if something is defined in Keras, it is (or should be) available in TensorFlow. In addition, tf.layers.Dense has kernel_regularizer and bias_regularizer arguments in its constructor, to which you can pass a regularizer function (built with the 10e-5 factor from your example).