Neural Network for MNIST digits is not learning at all - problem with backpropagation

After a long time, I am still not able to run my NN without bugs. The accuracy of this toy NN is an astonishing 1-2% (60 neurons in the hidden layer, 100 epochs, 0.3 learning rate, tanh activation, MNIST dataset downloaded via TF), so basically it is not learning at all. After all this time watching videos and reading posts about backpropagation, I am still not able to fix it, so my bug must be somewhere in the part marked with the two ##### lines. I think my understanding of derivatives in general is good, but I just cannot connect this knowledge with backpropagation. If the backpropagation itself is correct, then the mistake must be in the axis = 0/1 choices, because I also cannot figure out which axis I should be working on.

Also, I have a strong feeling that dZ2 = A2 - Y might be wrong and that it should be dZ2 = Y - A2, but after that change the NN starts guessing only one number.

(And yes, I did not write the backpropagation part myself; I found it on the internet.)

#importing data and normalizing it
#"x_test" will be my X
#"y_test" will be my Y

import numpy as np
import tensorflow as tf
(traindataX, traindataY), (testdataX, testdataY) = tf.keras.datasets.mnist.load_data()
x_test = testdataX.reshape(testdataX.shape[0], testdataX.shape[1]**2).astype('float32')
x_test = x_test / 255

y_test = testdataY
y_test = np.eye(10)[y_test]
#Activation functions:
def tanh(z):  # hyperbolic tangent written out explicitly (equivalent to np.tanh)
    a = (np.exp(z)-np.exp(-z))/(np.exp(z)+np.exp(-z))
    return a
###############################################################################START
def softmax(z):
    smExp = np.exp(z - np.max(z, axis=0))
    out = smExp / np.sum(smExp, axis=0)
    return out
###############################################################################STOP
def neural_network(num_hid, epochs, 
                  learning_rate, X, Y):
    #num_hid - number of neurons in the hidden layer
    #X - dataX - shape (10000, 784)
    #Y - labels - shape (10000, 10)

    #initialization
    W1 = np.random.randn(784, num_hid) * 0.01
    W2 = np.random.randn(num_hid, 10) * 0.01
    b1 = np.zeros((1, num_hid))
    b2 = np.zeros((1, 10))
    correct = 0

    for x in range(1, epochs+1):
        #feedforward
        Z1 = np.dot(X, W1) + b1
        A1 = tanh(Z1)
        Z2 = np.dot(A1, W2) + b2
        A2 = softmax(Z2)


        ###############################################################################START
        m = X.shape[1] #-> 784
        loss = - np.sum((Y * np.log(A2)), axis=0, keepdims=True)
        cost = np.sum(loss, axis=1) / m

        #backpropagation
        dZ2 = A2 - Y
        dW2 = (1/m)*np.dot(A1.T, dZ2)
        db2 = (1/m)*np.sum(dZ2, axis = 1, keepdims = True)
        dZ1 = np.multiply(np.dot(dZ2, W2.T), 1 - np.power(A1, 2))
        dW1 = (1/m)*np.dot(X.T, dZ1)
        db1 = (1/m)*np.sum(dZ1, axis = 1, keepdims = True)
        ###############################################################################STOP


        #parameters update - gradient descent
        W1 = W1 - dW1*learning_rate
        b1 = b1 - db1*learning_rate
        W2 = W2 - dW2*learning_rate
        b2 = b2 - db2*learning_rate


        for i in range(np.shape(Y)[1]):
            guess = np.argmax(A2[i, :])
            ans = np.argmax(Y[i, :])
            print(str(x) + " " + str(i) + ". " +"guess: ", guess, "| ans: ", ans)
            if guess == ans:
                correct = correct + 1;

    accuracy = (correct/np.shape(Y)[0]) * 100
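
For the array shapes used above, one way to settle the axis = 0/1 question is to check which dimension a reduction removes. A minimal sketch (the array here is purely illustrative):

import numpy as np

A = np.ones((10000, 10))           # same layout as Y or A2 above: (samples, classes)

print(np.sum(A, axis=0).shape)     # (10,)    -> axis=0 collapses the sample dimension,
                                   #             leaving one value per class / output neuron
print(np.sum(A, axis=1).shape)     # (10000,) -> axis=1 collapses the class dimension,
                                   #             leaving one value per sample

So with rows as samples, per-neuron quantities (such as a bias gradient) are sums over axis=0, while per-sample quantities (such as each example's loss) are sums over axis=1.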

Asked Oct 25 '25 by Lukas


2 Answers

Lukas, good problem for refreshing the fundamentals. I made a few fixes to your code:

  • fixed the calculation of m (it should be the number of samples in the batch, not the number of features);
  • transposed all the weights and biases (I can't explain it properly, but it was not working otherwise);
  • changed the calculation of accuracy (and of the loss, which is not used).

See the corrected code below. It reaches about 90% accuracy with your original parameters:

def neural_network(num_hid, epochs, learning_rate, X, Y):
    #num_hid - number of neurons in the hidden layer
    #X - dataX - shape (10000, 784)
    #Y - labels - shape (10000, 10)

    #initialization
    # W1 = np.random.randn(784, num_hid) * 0.01
    # W2 = np.random.randn(num_hid, 10) * 0.01
    # b1 = np.zeros((1, num_hid))
    # b2 = np.zeros((1, 10))
    W1 = np.random.randn(num_hid, 784) * 0.01
    W2 = np.random.randn(10, num_hid) * 0.01
    b1 = np.zeros((num_hid, 1))
    b2 = np.zeros((10, 1))

    for x in range(1, epochs+1):
        correct = 0  # moved inside the loop
        #feedforward
        # Z1 = np.dot(X, W1) + b1
        Z1 = np.dot(W1, X.T) + b1
        A1 = tanh(Z1)
        # Z2 = np.dot(A1, W2) + b2
        Z2 = np.dot(W2, A1) + b2
        A2 = softmax(Z2)

        ###############################################################################START
        # m = X.shape[1] #-> 784
        m = X.shape[0]  # should be the number of samples in the batch (10000), not 784
        # loss = - np.sum((Y * np.log(A2)), axis=0, keepdims=True)
        loss = - np.sum((Y.T * np.log(A2)), axis=0, keepdims=True)
        cost = np.sum(loss, axis=1) / m

        #backpropagation
        # dZ2 = A2 - Y
        # dW2 = (1/m)*np.dot(A1.T, dZ2)
        # db2 = (1/m)*np.sum(dZ2, axis = 1, keepdims = True)
        # dZ1 = np.multiply(np.dot(dZ2, W2.T), 1 - np.power(A1, 2))
        # dW1 = (1/m)*np.dot(X.T, dZ1)
        dZ2 = A2 - Y.T
        dW2 = (1/m)*np.dot(dZ2, A1.T)
        db2 = (1/m)*np.sum(dZ2, axis = 1, keepdims = True)
        dZ1 = np.multiply(np.dot(W2.T, dZ2), 1 - np.power(A1, 2))
        dW1 = (1/m)*np.dot(dZ1, X)
        db1 = (1/m)*np.sum(dZ1, axis = 1, keepdims = True)
        ###############################################################################STOP

        #parameters update - gradient descent
        W1 = W1 - dW1*learning_rate
        b1 = b1 - db1*learning_rate
        W2 = W2 - dW2*learning_rate
        b2 = b2 - db2*learning_rate

        guess = np.argmax(A2, axis=0)  # axis fixed
        ans = np.argmax(Y, axis=1)  # axis fixed
        # print(guess.shape, ans.shape)
        correct += np.sum(guess == ans)

        #     #print(str(x) + " " + str(i) + ". " +"guess: ", guess, "| ans: ", ans)
        #     if guess == ans:
        #         correct = correct + 1
        accuracy = correct / X.shape[0]
        print(f"Epoch {x}. accuracy = {accuracy*100:.2f}%")


neural_network(64, 100, 0.3, x_test, y_test)

Epoch 1. accuracy = 14.93%
Epoch 2. accuracy = 34.70%
Epoch 3. accuracy = 47.41%
(...)
Epoch 98. accuracy = 89.29%
Epoch 99. accuracy = 89.33%
Epoch 100. accuracy = 89.37%
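
For comparison, the same gradients can also be kept in the question's original row-major (samples, features) layout instead of transposing everything; the essential changes are then m = X.shape[0], a softmax taken along axis=1, and bias gradients summed over axis=0. A minimal sketch of one epoch under those assumptions (the helper names are only illustrative, and np.tanh is used in place of the hand-written tanh):

import numpy as np

def softmax_rows(z):
    # softmax along axis=1: one probability distribution per sample (row)
    e = np.exp(z - np.max(z, axis=1, keepdims=True))
    return e / np.sum(e, axis=1, keepdims=True)

def one_epoch(W1, b1, W2, b2, X, Y, lr):
    # X: (m, 784), Y: (m, 10), W1: (784, hid), b1: (1, hid), W2: (hid, 10), b2: (1, 10)
    m = X.shape[0]                                  # number of samples, not 784

    # forward pass
    Z1 = X @ W1 + b1                                # (m, hid)
    A1 = np.tanh(Z1)
    Z2 = A1 @ W2 + b2                               # (m, 10)
    A2 = softmax_rows(Z2)

    # backward pass for softmax + cross-entropy
    dZ2 = A2 - Y                                    # (m, 10)
    dW2 = A1.T @ dZ2 / m                            # (hid, 10)
    db2 = np.sum(dZ2, axis=0, keepdims=True) / m    # (1, 10): sum over the sample axis
    dZ1 = (dZ2 @ W2.T) * (1 - A1**2)                # (m, hid), since tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dZ1 / m                             # (784, hid)
    db1 = np.sum(dZ1, axis=0, keepdims=True) / m    # (1, hid)

    # gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

In this orientation the sample dimension is axis=0, so anything that is "one value per neuron" (the biases) is summed over axis=0, which is exactly the axis choice the question was unsure about.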
Answered Oct 26 '25 by Poe Dator


It might be because you should normalize your inputs to values between 0 and 1 by dividing X by 255 (255 is the max pixel value). You should also have Y one-hot encoded as a series of size-10 vectors. I think your backprop is right, but you should implement gradient checking to double-check.
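
A minimal numerical gradient check along those lines, comparing the analytic softmax + cross-entropy gradient dZ2 = A2 - Y against centered finite differences of the loss (the helper names are only illustrative; this also settles the A2 - Y vs Y - A2 doubt from the question):

import numpy as np

def softmax_rows(z):
    e = np.exp(z - np.max(z, axis=1, keepdims=True))
    return e / np.sum(e, axis=1, keepdims=True)

def cross_entropy(z, y):
    # mean cross-entropy for logits z of shape (m, 10) and one-hot labels y of shape (m, 10)
    a = softmax_rows(z)
    return -np.sum(y * np.log(a)) / z.shape[0]

rng = np.random.default_rng(0)
z = rng.normal(size=(5, 10))             # small batch of fake logits
y = np.eye(10)[rng.integers(0, 10, 5)]   # matching fake one-hot labels

analytic = (softmax_rows(z) - y) / z.shape[0]   # dL/dz, i.e. (A2 - Y) / m

numeric = np.zeros_like(z)
eps = 1e-6
for i in range(z.shape[0]):
    for j in range(z.shape[1]):
        zp, zm = z.copy(), z.copy()
        zp[i, j] += eps
        zm[i, j] -= eps
        numeric[i, j] = (cross_entropy(zp, y) - cross_entropy(zm, y)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))   # should be tiny, on the order of 1e-9 or less

If the analytic gradient were Y - A2 instead, this check would fail with the sign flipped, which is consistent with dZ2 = A2 - Y being correct when the update subtracts learning_rate * dW.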

