I am completely new to machine learning and to TensorFlow.js. I am trying to predict the values of the next set, but the result is "NaN". What am I doing wrong?
I am following this GitHub example.
async function myFirstTfjs(arr) {
  // Create a simple model.
  const model = tf.sequential();
  model.add(tf.layers.dense({units: 1, inputShape: [2]}));
  // Prepare the model for training: specify the loss and the optimizer.
  model.compile({
    loss: 'meanSquaredError',
    optimizer: 'sgd'
  });
  const xs = tf.tensor([
    [1, 6],  [2, 0],  [3, 1],  [4, 2],  [5, 3],  [6, 4],  [7, 5],
    [8, 6],  [9, 0],  [10, 1], [11, 2], [12, 3], [13, 4], [14, 5],
    [15, 6], [16, 0], [17, 1], [18, 2], [19, 3], [20, 4], [21, 5],
    [22, 6], [23, 0], [24, 1], [25, 2], [26, 3]
  ]);
  const ys = tf.tensor([104780, 30280, 21605, 42415, 32710, 30385, 35230,
                        97795, 31985, 34570, 35180, 30095, 36175, 57300,
                        104140, 30735, 28715, 36035, 34515, 42355, 38355,
                        110080, 26745, 35315, 40365, 30655], [26, 1]);
  // Train the model using the data.
  await model.fit(xs, ys, {epochs: 500});
  // Use the model to do inference on a data point the model hasn't seen.
  model.predict(tf.tensor(arr, [1, 2])).print();
}

myFirstTfjs([28, 5]);
What's happening is that the large values in ys are producing a very large squared error. That large error, in combination with the (default) learning rate, is causing the model to overcorrect and become unstable: the weights grow without bound until they overflow, at which point the loss and the predictions turn into NaN. The model will converge if you lower the learning rate.
const learningRate = 0.0001;
const optimizer = tf.train.sgd(learningRate);

model.compile({
  loss: 'meanSquaredError',
  optimizer: optimizer,
});