TensorFlow seems to implement at least 3 versions of batch normalization:
- tf.nn.batch_normalization
- tf.layers.batch_normalization
- tf.contrib.layers.batch_norm

These all have different arguments and documentation.
What is the difference between these, and which one should I use?
They are actually very different.
- nn.batch_normalization performs the basic operation (i.e. a simple normalization, given the mean, variance, offset and scale).
- layers.batch_normalization is a batchnorm "layer", i.e. it takes care of creating and managing the trainable parameters, etc. At the end of the day, it is a wrapper around nn.batch_normalization. Chances are this is the one you want to use, unless you want to set up the variables yourself. It's similar to the difference between nn.conv2d and layers.conv2d, for example.
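To make the split concrete, here is a minimal NumPy sketch (the function name `batch_norm_op` is my own, not TensorFlow's) of the basic operation that nn.batch_normalization performs, and of the extra bookkeeping (batch statistics, trainable gamma/beta) that a layer such as layers.batch_normalization handles for you:

```python
import numpy as np

# Sketch of the core op behind tf.nn.batch_normalization:
# normalize with the *given* mean/variance, then scale and shift.
def batch_norm_op(x, mean, variance, offset, scale, epsilon=1e-3):
    return scale * (x - mean) / np.sqrt(variance + epsilon) + offset

# A batchnorm *layer* additionally computes the batch statistics and
# owns the trainable parameters -- roughly what the layers version wraps:
x = np.random.randn(32, 4)   # batch of 32 examples, 4 features
mean = x.mean(axis=0)        # statistics the layer would compute per batch
variance = x.var(axis=0)
gamma = np.ones(4)           # trainable scale, initialized to 1
beta = np.zeros(4)           # trainable offset, initialized to 0

y = batch_norm_op(x, mean, variance, beta, gamma)
# With gamma=1, beta=0 the output has ~zero mean and ~unit variance
print(np.allclose(y.mean(axis=0), 0, atol=1e-6))  # True
print(np.allclose(y.var(axis=0), 1, atol=1e-2))   # True
```

The layer version also has to track moving averages of mean/variance for inference and hook gamma/beta into the optimizer, which is exactly the boilerplate you avoid by not calling the low-level op directly.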
As for the contrib version, I can't say for sure, but it seems to me like an experimental version with some extra parameters not available in the "regular" layers one.