I am trying to run a simple autoencoder where all of the training inputs are identical. The training data has 3 features, and the hidden layer has 3 nodes. I train the autoencoder on that input and then try to predict it (encode/decode) again, so if the autoencoder simply passed everything through unchanged, it should work.
That's not the case, however, and I am struggling a bit to understand why. I am not sure whether the problem is in my code or in my understanding of the autoencoder implementation. Here is the code for reference.
P.S. I played around with the number of epochs, the number of examples in the training set, and the batch size, scaled the training data values to the 0-1 range, and kept track of the loss value, but none of that helped.
```python
from keras.layers import Input, Dense
from keras.models import Model
import numpy as np

# this is the size of our encoded representations
encoding_dim = 3

x_train = np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3],
                    [1, 2, 3], [1, 2, 3], [1, 2, 3],
                    [1, 2, 3], [1, 2, 3], [1, 2, 3]])

in = Input(shape=(3,))
encoded = Dense(encoding_dim, activation='relu')(in)
decoded = Dense(3, activation='sigmoid')(encoded)

# this model maps an input to its reconstruction
autoencoder = Model(in, decoded)
autoencoder.compile(optimizer='adadelta', loss='mse')
autoencoder.fit(x_train, x_train,
                epochs=100,
                batch_size=4)
autoencoder.predict(x_train)
```
The output should be the same as the input (or at least close to it), but instead I get this:
```
Out[180]:
array([[ 0.80265796,  0.89038897,  0.9100889 ],
       [ 0.80265796,  0.89038897,  0.9100889 ],
       [ 0.80265796,  0.89038897,  0.9100889 ],
       ...,
       [ 0.80265796,  0.89038897,  0.9100889 ],
       [ 0.80265796,  0.89038897,  0.9100889 ],
       [ 0.80265796,  0.89038897,  0.9100889 ]], dtype=float32)
```
Any help would be appreciated. Most likely I have misunderstood something, so hopefully this question is not that hard to answer.
The error is here: `decoded = Dense(3, activation='sigmoid')(encoded)`.
You shouldn't use a sigmoid activation, because it limits the output to the range (0, 1), while your targets (1, 2, 3) lie outside that range. Replace the sigmoid with `linear` (or just remove the activation argument), and train for more epochs, e.g. 1000. With this setup, I get what you need:
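To see why the sigmoid can never hit those targets, here is a small NumPy sketch (my addition, not part of the original answer): the logistic function stays strictly between 0 and 1, so a decoder with sigmoid output can never reproduce values like 2 or 3, and the MSE can never reach zero.

```python
import numpy as np

def sigmoid(x):
    """Standard logistic function, 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

# The decoder's targets are 1, 2 and 3, but the sigmoid output is
# capped below 1 even for large pre-activations, so the
# reconstruction error cannot go to zero.
for x in [-5.0, 0.0, 5.0, 10.0]:
    print(f"sigmoid({x}) = {sigmoid(x):.6f}")

assert 0.0 < sigmoid(10.0) < 1.0
```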
```
[[ 0.98220336  1.98066235  2.98398876]
 [ 0.98220336  1.98066235  2.98398876]
 [ 0.98220336  1.98066235  2.98398876]
 [ 0.98220336  1.98066235  2.98398876]
 [ 0.98220336  1.98066235  2.98398876]
 [ 0.98220336  1.98066235  2.98398876]
 [ 0.98220336  1.98066235  2.98398876]
 [ 0.98220336  1.98066235  2.98398876]
 [ 0.98220336  1.98066235  2.98398876]]
```
In addition, you should rename the variable `in`, since `in` is a reserved keyword in Python :-).
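As a quick check (my addition), Python's standard `keyword` module confirms that `in` cannot be used as a variable name, while an alternative like `inp` is fine:

```python
import keyword

# 'in' is a reserved keyword (loops, membership tests), so
# assigning to it raises a SyntaxError before the code even runs.
print(keyword.iskeyword("in"))   # → True
print(keyword.iskeyword("inp"))  # → False, safe to use as a name
```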