The documentation says:
You can run Estimators-based models on a local host or on a distributed multi-server environment without changing your model. Furthermore, you can run Estimators-based models on CPUs, GPUs, or TPUs without recoding your model.
Is there documentation to explain how to run an Estimator in a distributed multi-server environment?
The TensorFlow documentation for tf.estimator.train_and_evaluate explains one method for running a tf.estimator model in a distributed environment: each process in the cluster runs the same training code, and you set the TF_CONFIG environment variable appropriately on each machine to describe the cluster and that process's role in it.
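As a minimal sketch of what that looks like, the snippet below builds a TF_CONFIG value for a hypothetical cluster (the host:port addresses and worker/ps counts are placeholders, not from the docs) and shows, in comments, where the unchanged train_and_evaluate call would go:

```python
import json
import os

# TF_CONFIG describes the whole cluster plus this process's role in it.
# Hypothetical topology: one chief, two workers, one parameter server.
# The addresses below are placeholders; use your machines' real host:port.
tf_config = {
    "cluster": {
        "chief": ["host0:2222"],
        "worker": ["host1:2222", "host2:2222"],
        "ps": ["host3:2222"],
    },
    # This particular process runs as worker index 0 (i.e. host1:2222).
    # On each other machine you set a different "task" entry.
    "task": {"type": "worker", "index": 0},
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)

# With TF_CONFIG set before TensorFlow reads it, the same single-machine
# Estimator code runs distributed without changes (sketch, not executed here;
# model_fn, train_input_fn, and eval_input_fn are assumed to exist):
#
#   import tensorflow as tf
#   estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir="/tmp/model")
#   train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10000)
#   eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)
#   tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

print(os.environ["TF_CONFIG"])
```

The key point is that the model code itself does not change between the local and distributed cases; only the per-process TF_CONFIG differs.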