I am new to ML and TensorFlow and am trying to train and use a standard text-generation model. When I go to train the model, I get this error:
Train for 155 steps
Epoch 1/5
2/155 [..............................] - ETA: 4:49 - loss: 2.5786
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-133-d70c02ff4270> in <module>()
----> 1 model.fit(dataset, epochs=epochs, callbacks=[checkpoint_callback])
11 frames
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: indices[58,87] = 63 is not in [0, 63)
[[node sequential_12/embedding_12/embedding_lookup (defined at <ipython-input-131-d70c02ff4270>:1) ]]
[[VariableShape/_24]]
(1) Invalid argument: indices[58,87] = 63 is not in [0, 63)
[[node sequential_12/embedding_12/embedding_lookup (defined at <ipython-input-131-d70c02ff4270>:1) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_distributed_function_95797]
Errors may have originated from an input operation.
Input Source operations connected to node sequential_12/embedding_12/embedding_lookup:
sequential_12/embedding_12/embedding_lookup/92192 (defined at /usr/lib/python3.6/contextlib.py:81)
Input Source operations connected to node sequential_12/embedding_12/embedding_lookup:
sequential_12/embedding_12/embedding_lookup/92192 (defined at /usr/lib/python3.6/contextlib.py:81)
Function call stack:
distributed_function -> distributed_function
Data
data['title'] = [['Sentence'],['Sentence2'], ...]
Data Prep
tokenizer = keras.preprocessing.text.Tokenizer(num_words=209, lower=False, char_level=True)
tokenizer.fit_on_texts(df['title'])
df['encoded_with_keras'] = tokenizer.texts_to_sequences(df['title'])
dataset = df['encoded_with_keras']
dataset = tf.keras.preprocessing.sequence.pad_sequences(dataset, padding='post')
dataset = dataset.flatten()
dataset = tf.data.Dataset.from_tensor_slices(dataset)
sequences = dataset.batch(seq_len+1, drop_remainder=True)
def create_seq_targets(seq):
    input_txt = seq[:-1]
    target_txt = seq[1:]
    return input_txt, target_txt
dataset = sequences.map(create_seq_targets)
batch_size = 128
buffer_size = 10000
dataset = dataset.shuffle(buffer_size).batch(batch_size, drop_remainder=True)
Model:
vocab_size = len(tokenizer.word_index)
embed_dim = 128
rnn_neurons = 256
epochs = 5
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_prefix,
    save_weights_only=True)
def create_model(vocab_size, embed_dim, rnn_neurons, batch_size):
    model = Sequential()
    model.add(Embedding(vocab_size, embed_dim, batch_input_shape=[batch_size, None], mask_zero=True))
    model.add(LSTM(rnn_neurons, return_sequences=True, stateful=True))
    model.add(Dropout(0.2))
    model.add(LSTM(rnn_neurons, return_sequences=True, stateful=True))
    model.add(Dropout(0.2))
    model.compile(optimizer='adam', loss="sparse_categorical_crossentropy")
    return model
model = create_model(vocab_size, embed_dim, rnn_neurons, batch_size)
model.fit(dataset, epochs=epochs, callbacks=[checkpoint_callback])
I have tried changing almost all of the model settings and playing around with custom tokenization and data prep, but training starts and then fails with this error on the 2nd step of 155. I'm not sure where to start; any help is appreciated.
Try changing the batch_size to something like 32, 16, or 8. Apparently, for RTX 2060/2070/2080 cards there is a TensorFlow bug that makes them run out of memory.
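For reference, a minimal sketch of what that change looks like, reusing the variables and functions from the question (sequences, create_seq_targets, create_model, vocab_size, embed_dim, rnn_neurons, epochs, checkpoint_callback). Because the LSTMs are stateful and the Embedding layer fixes batch_input_shape, the smaller batch size also has to be passed to create_model:

batch_size = 32  # try 32, 16, or 8
buffer_size = 10000

dataset = sequences.map(create_seq_targets)
dataset = dataset.shuffle(buffer_size).batch(batch_size, drop_remainder=True)

# Rebuild the model so its batch_input_shape matches the new batch size
model = create_model(vocab_size, embed_dim, rnn_neurons, batch_size)
model.fit(dataset, epochs=epochs, callbacks=[checkpoint_callback])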