In this code snippet from TensorFlow tutorial Basic text classification,
model = tf.keras.Sequential([
    layers.Embedding(max_features + 1, embedding_dim),
    layers.Dropout(0.2),
    layers.GlobalAveragePooling1D(),
    layers.Dropout(0.2),
    layers.Dense(1)])
As far as I understand, max_features is the vocabulary size (with index 0 reserved for padding and index 1 for OOV tokens).
I also ran an experiment with layers.Embedding(max_features, embedding_dim), and the tutorial still ran through successfully.
So why do we need input_dim=max_features + 1 here?

The example is misleading, and arguably wrong, even though the code does not actually fail in that particular execution context.
Per the Embedding layer documentation, the layer's input dimension is the maximum integer index + 1, not the vocabulary size + 1, which is what the author of the example you cite used.
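To make that rule concrete, here is a minimal sketch using plain NumPy as a stand-in for the layer's internal lookup table (the names are illustrative, not the actual Keras implementation): an embedding matrix needs one row per possible index, so its first dimension must be the maximum integer index + 1.

```python
import numpy as np

# An Embedding layer is essentially a lookup table with one row per index.
max_index = 2                  # highest token index that will appear
input_dim = max_index + 1      # rows needed: maximum integer index + 1
embedding_dim = 4

table = np.zeros((input_dim, embedding_dim))
tokens = np.array([0, 1, 2])   # 0-based indices, including the maximum
vectors = table[tokens]        # every lookup stays in range
print(vectors.shape)           # (3, 4)
```

With input_dim set to max_index rather than max_index + 1, the lookup for the largest index would fall off the end of the table.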

Frankly, it looks like the writer just got lucky: because the tutorial uses the Sequential model type and never needs to serialize the model, the discrepancy goes unnoticed and the example code happens to work in this special case.
The toy example below shows how the 0-based integer indexing works out:
Vocabulary Size = Maximum Integer Index + 1

Example:
a[0] = 'item 1'
a[1] = 'item 2'
a[2] = 'item 3'

Maximum Integer Index = 2
Vocabulary Size = 3
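The same arithmetic can be checked directly. In this illustrative NumPy sketch, a table sized to the vocabulary (max index + 1) accepts every index, while a table sized one row short fails on the largest one:

```python
import numpy as np

table_ok = np.zeros((3, 4))     # 3 rows = maximum index 2 + 1
_ = table_ok[2]                 # index 2 is valid

table_short = np.zeros((2, 4))  # one row short of the maximum index
try:
    _ = table_short[2]          # out of range: only rows 0 and 1 exist
except IndexError:
    print("index 2 needs at least 3 rows")
```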