It seems a bit cumbersome to account for the batch dimension in every layer of a neural network. Why doesn't TensorFlow have some functionality that can just set the batch size for an entire model?
In TensorFlow you do not have to account for the batch size yourself.
The MNIST tutorial explains how TensorFlow handles batches of any size.
Quoting the tutorial:
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
The input images x will consist of a 2d tensor of floating point numbers. Here we assign it a shape of [None, 784], where 784 is the dimensionality of a single flattened MNIST image, and None indicates that the first dimension, corresponding to the batch size, can be of any size.
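To make this concrete, here is a minimal sketch (assuming the TensorFlow 1.x API used by the tutorial; the single dense layer, the zero-filled dummy batches, and the variable names W and b are illustrative) showing that one graph built with a None batch dimension accepts feeds of any batch size:

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x, as in the tutorial

# Batch dimension left as None, exactly as in the tutorial
x = tf.placeholder(tf.float32, shape=[None, 784])

# A single dense layer; note the weights never mention the batch size
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # The same graph handles a batch of 32 ...
    small = sess.run(y, feed_dict={x: np.zeros((32, 784), np.float32)})
    # ... and a batch of 100, with no change to the model
    large = sess.run(y, feed_dict={x: np.zeros((100, 784), np.float32)})
    print(small.shape, large.shape)  # prints (32, 10) (100, 10)

Because x has shape [None, 784], every downstream operation (here tf.matmul) infers the batch dimension at run time, so the batch size is effectively set for the entire model by whatever you feed in.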