
Activity regularizer in TensorFlow

In Keras, dense layers accept an activity_regularizer parameter. In TensorFlow, there is no similar parameter.

Keras:

from keras.layers import Input, Dense
from keras.models import Model
from keras import regularizers

encoding_dim = 32
input_img = Input(shape=(784,))
# add a Dense layer with an L1 activity regularizer
encoded = Dense(encoding_dim, activation='relu',
                activity_regularizer=regularizers.l1(10e-5))(input_img)
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)

How can I implement an activity_regularizer in TensorFlow?

asked by nairouz mrabah

2 Answers

The Keras documentation is not too precise, but from what I've read, activity regularization is simply an L1 or L2 penalty on the output of a specific layer, added to the model's loss function.

So let's say you have some loss, for example the MSE between some labels and the model output:

loss = tf.losses.mean_squared_error(labels, model_output)

To add L1 activity regularization to a certain layer, you simply add the L1 norm of that layer's output to your loss, scaled by some regularization strength (I'll take 10e-5 as given in your question). TensorFlow provides tf.nn.l2_loss but no L1 counterpart, so the L1 term is written out explicitly:

loss += 10e-5 * tf.reduce_sum(tf.abs(layer_output))

where layer_output is the output of the layer you want to regularize.

If you did the same with the layer's weights instead of its output, you would have what the Keras documentation calls kernel regularization; doing it for the layer's bias vector gives you Keras's bias regularization. A sketch wiring all three together follows.
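For concreteness, here is a minimal TF1-style sketch combining all three penalties in one loss. The shapes, the relu activation, and the placeholder setup are illustrative assumptions, not part of the answer above:

import tensorflow as tf

# a dense layer built by hand: layer_output = relu(x @ W + b)
x = tf.placeholder(tf.float32, [None, 784])
labels = tf.placeholder(tf.float32, [None, 32])
W = tf.Variable(tf.random_normal([784, 32]))
b = tf.Variable(tf.zeros([32]))
layer_output = tf.nn.relu(tf.matmul(x, W) + b)

loss = tf.losses.mean_squared_error(labels, layer_output)
loss += 10e-5 * tf.reduce_sum(tf.abs(layer_output))  # activity regularization
loss += 10e-5 * tf.reduce_sum(tf.abs(W))             # kernel regularization
loss += 10e-5 * tf.reduce_sum(tf.abs(b))             # bias regularization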

answered by Alexander Harnisch

  • TensorFlow implements the Keras API in tf.keras, so technically, anything defined in Keras is (or should be) available in TensorFlow as well (see the sketch after this list).
  • Other high-level APIs in TensorFlow behave similarly. For example, tf.layers.Dense has kernel_regularizer and bias_regularizer arguments in its constructor.
  • If you do not want to use high-level APIs but rather implement everything yourself, you can add the regularizer to your loss directly. An L2 regularizer on a parameter, for example, is obtained by adding the sum of its squared elements to your loss, multiplied by a constant that sets the strength of the constraint (the 10e-5 factor in your example).
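A minimal sketch of all three options in TF1-style graph mode; the layer size and the 10e-5 strength are taken from the question, everything else is illustrative:

import tensorflow as tf

# 1. tf.keras exposes the same activity_regularizer argument as Keras:
encoder = tf.keras.layers.Dense(
    32, activation='relu',
    activity_regularizer=tf.keras.regularizers.l1(10e-5))

# 2. tf.layers.Dense takes kernel/bias regularizers in its constructor;
#    the penalties it creates are collected in the graph and can be
#    retrieved with tf.losses.get_regularization_loss():
dense = tf.layers.Dense(
    32, activation=tf.nn.relu,
    kernel_regularizer=tf.keras.regularizers.l2(10e-5))
x = tf.placeholder(tf.float32, [None, 784])
y = dense(x)
reg_loss = tf.losses.get_regularization_loss()

# 3. fully manual: an L2 penalty is the sum of squared elements of a
#    parameter, scaled by the regularization strength
manual_l2 = 10e-5 * tf.reduce_sum(tf.square(dense.kernel))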
answered by P-Gn