I got the results by running the code provided in this link Neural Network – Predicting Values of Multiple Variables. I was able to compute losses, accuracy, etc. However, every time I run this code, I get a new result. Is it possible to get the same (consistent) result?
The code is full of random.randint()
calls everywhere! Furthermore, the weights are usually randomly initialized as well, and the batch_size also has an influence (although a pretty minor one) on the result.
Using adam
as the optimizer means you'll be performing stochastic gradient descent, which starts its iterations from a random point in order to converge. Solution:
np.random.seed()
If I find a way to have consistent sampling methods for the batch_size
/epoch
issue, I will edit my answer.
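As a rough illustration (a minimal sketch, assuming a TensorFlow 2.x / Keras setup; the exact calls depend on your backend and version), you can seed all the relevant random number generators before building the model:

import random
import numpy as np
import tensorflow as tf

# Seed Python's built-in RNG, NumPy, and TensorFlow so that weight
# initialization and data shuffling start from the same state each run.
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)

This does not guarantee bit-for-bit identical results on GPU hardware, but it removes the most common sources of run-to-run variation.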
There are lots of random arrays in there. Use np.random.seed()
to get the same ones each time. For example:
import numpy as np

np.random.seed(42)
for _ in range(3):
    print(np.random.random(3))
Every time you run this code, you'll get the same result. On my machine:
[0.37454012 0.95071431 0.73199394]
[0.59865848 0.15601864 0.15599452]
[0.05808361 0.86617615 0.60111501]
Note that lots of other bits of the machine learning pipeline use randomization too. For example:
train_test_split().
Most ML functions allow you to pass a seed as an argument; have a look in the documentation. Depending on what you are doing and which libraries you're using, you may or may not be able to make the entire pipeline reproducible.
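For instance (a short sketch assuming scikit-learn's train_test_split, where the seed argument is called random_state; the toy X and y arrays are just placeholders):

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # toy feature matrix
y = np.arange(10)                 # toy labels

# Fixing random_state makes the shuffled split identical on every run.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)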
You might also like this article or this one about getting reproducible results with Keras.