I am running a Deep Learning model, but sometimes I use my work PC and sometimes my Mac. The problem is that whenever using the Mac (with an M2 chip), I get this warning message:
WARNING:absl:At this time, the v2.11+ optimizer `tf.keras.optimizers.Adam` runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at `tf.keras.optimizers.legacy.Adam`.
I would like to know whether there is a way to switch to tf.keras.optimizers.legacy.Adam on my Mac. As a side question: is it beneficial at all? I suspect so, because training is taking much longer than I expected given the problem's simplicity.
As a minimal working example, here is a script I adapted from the TensorFlow website:
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# The if-else should go here:
# if on a Mac with an M1 / M2 chip:
#     opt = tf.keras.optimizers.legacy.Adam(learning_rate=0.0005)
# else:
opt = tf.keras.optimizers.Adam(learning_rate=0.0005)
model.compile(optimizer=opt,
              loss=loss_fn,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
Thanks in advance, and apologies if this has already been answered; I could not find a similar question.
I believe I found a way to solve my first problem by looking at the Keras optimizer code.
An example of the solution is below:
import tensorflow as tf
import platform
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
if platform.system() == "Darwin" and platform.processor() == "arm":
    opt = tf.keras.optimizers.legacy.Adam(learning_rate=0.0005)
else:
    opt = tf.keras.optimizers.Adam(learning_rate=0.0005)
model.compile(optimizer=opt,
              loss=loss_fn,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
The way I solved it:
if platform.system() == "Darwin" and platform.processor() == "arm":
This check needs the platform module, which I imported on the second line of my MVE above. On macOS, platform.system() returns "Darwin", and on an M1/M2 Mac a native (arm64) Python build reports platform.processor() as "arm".
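To see what those calls return on your machine, you can run a quick check like this (just a sketch; note that a Python interpreter running under Rosetta may report a different processor string):

import platform
print(platform.system())     # "Darwin" on macOS
print(platform.processor())  # "arm" on an M1/M2 Mac with a native arm64 Python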
Then, inside the if/else, I create whichever optimizer matches the detected platform.
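If you need this in more than one script, one option is to wrap the whole choice in a small helper function. This is only a sketch; the name make_adam is mine and not part of TensorFlow:

import platform
import tensorflow as tf

def make_adam(learning_rate=0.0005):
    # Hypothetical helper: use the legacy Adam on Apple-silicon Macs
    # (where the v2.11+ optimizer warns that it runs slowly) and the
    # default Adam everywhere else.
    if platform.system() == "Darwin" and platform.processor() == "arm":
        return tf.keras.optimizers.legacy.Adam(learning_rate=learning_rate)
    return tf.keras.optimizers.Adam(learning_rate=learning_rate)

opt = make_adam(0.0005)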
I hope this helps.