Data Augmentation Layer in Keras Sequential Model

I'm trying to add data augmentation as a layer to a model, but I'm running into what I assume is a shape issue. I also tried specifying the input shape in the augmentation layer, i.e. `preprocessing.RandomFlip('horizontal', input_shape=(224, 224, 3))`. When I take the data_augmentation layer out of the model, it runs fine.

```python
data_augmentation_layer = keras.Sequential([
    preprocessing.RandomFlip('horizontal'),
    preprocessing.RandomRotation(0.2),
    preprocessing.RandomZoom(0.2),
    preprocessing.RandomWidth(0.2),
    preprocessing.RandomHeight(0.2),
    preprocessing.RandomContrast(0.2)
], name='data_augmentation')
```



```python
model = keras.Sequential([
    data_augmentation_layer,
    Conv2D(filters=32,
           kernel_size=1,
           strides=1,
           input_shape=(224, 224, 3)),
    Activation(activation='relu'),
    MaxPool2D(),
    Conv2D(filters=32,
           kernel_size=1,
           strides=1),
    Activation(activation='relu'),
    MaxPool2D(),
    Flatten(),
    Dense(1, activation='sigmoid')
])
```

The model fails with:

```
The last dimension of the inputs to a Dense layer should be defined. Found None. Full input shape received: (None, None)

Call arguments received:
  • inputs=tf.Tensor(shape=(None, 224, 224, 3), dtype=float32)
  • training=True
  • mask=None
```

Jordan Anderson asked Oct 20 '25 05:10

1 Answer

The layers RandomWidth and RandomHeight are causing this error, since they lead to None dimensions. See the comment here:

[...]RandomHeight will lead to a None shape on the height dimension, as not all outputs from the layer will be the same height (by design). That is ok for things like the Conv2D layer, which can accept variable shaped image input (with None shapes on some dimensions).

This will not work for then calling into a Flatten followed by a Dense, because the flattened batches will also be of variable size (because of the variable height), and the Dense layer needs a fixed shape for the last dimension. You could probably pad the output of Flatten before the Dense, but if you want this architecture, you may just want to avoid image augmentation layers that lead to a variable output shape.
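You can see this directly by asking the layer for its static output shape. The snippet below is a minimal sketch, assuming a TF 2.x version where `tf.keras.layers.RandomHeight` and its `compute_output_shape` method are available:

```python
import tensorflow as tf

# RandomHeight stretches or shrinks the height by a random factor on
# every call, so Keras can only report None for the static height
# dimension, while the width and channel dimensions stay fixed.
layer = tf.keras.layers.RandomHeight(0.2)
out_shape = layer.compute_output_shape((None, 224, 224, 3))
print(out_shape)  # height dimension is None, width stays 224
```

That None height is what propagates through Flatten and trips up the Dense layer.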

So instead of using a Flatten layer, you could, for example, use a GlobalMaxPool2D layer, which does not need to know the other dimensions beforehand:

```python
import tensorflow as tf

data_augmentation_layer = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal',
                               input_shape=(224, 224, 3)),
    tf.keras.layers.RandomRotation(0.2),
    tf.keras.layers.RandomZoom(0.2),
    tf.keras.layers.RandomWidth(0.2),
    tf.keras.layers.RandomHeight(0.2),
    tf.keras.layers.RandomContrast(0.2)
], name='data_augmentation')

model = tf.keras.Sequential([
    data_augmentation_layer,
    tf.keras.layers.Conv2D(filters=32,
                           kernel_size=1,
                           strides=1),
    tf.keras.layers.Activation(activation='relu'),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(filters=32,
                           kernel_size=1,
                           strides=1),
    tf.keras.layers.Activation(activation='relu'),
    tf.keras.layers.GlobalMaxPool2D(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

print(model.summary())
```
```
Model: "sequential_4"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 data_augmentation (Sequenti  (None, None, None, 3)    0         
 al)                                                             
                                                                 
 conv2d_8 (Conv2D)           (None, None, None, 32)    128       
                                                                 
 activation_8 (Activation)   (None, None, None, 32)    0         
                                                                 
 max_pooling2d_6 (MaxPooling  (None, None, None, 32)   0         
 2D)                                                             
                                                                 
 conv2d_9 (Conv2D)           (None, None, None, 32)    1056      
                                                                 
 activation_9 (Activation)   (None, None, None, 32)    0         
                                                                 
 global_max_pooling2d_1 (Glo  (None, 32)               0         
 balMaxPooling2D)                                                
                                                                 
 dense_4 (Dense)             (None, 1)                 33        
                                                                 
=================================================================
Total params: 1,217
Trainable params: 1,217
Non-trainable params: 0
_________________________________________________________________
None
```
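If you would rather keep the original Flatten + Dense head, another option (a sketch of my own, not from the quoted comment, assuming `tf.keras.layers.Resizing` is available, i.e. TF ≥ 2.6) is to put a Resizing layer after the size-changing augmentations, so the spatial dimensions are fixed again before the convolutional stack:

```python
import tensorflow as tf

# Follow the size-changing augmentations with a Resizing layer so the
# static spatial shape is restored to a fixed 224x224 and Flatten +
# Dense can be kept downstream.
data_augmentation_resized = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal', input_shape=(224, 224, 3)),
    tf.keras.layers.RandomWidth(0.2),
    tf.keras.layers.RandomHeight(0.2),
    tf.keras.layers.Resizing(224, 224),  # fixed height/width again
], name='data_augmentation_resized')

print(data_augmentation_resized.compute_output_shape((None, 224, 224, 3)))
```

Note that Resizing interpolates the randomly stretched images back to 224x224, which slightly changes the character of the augmentation, but it gives every layer after it a fully defined shape.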
AloneTogether answered Oct 21 '25 18:10

