
Computing the loss of a function of predictions with PyTorch

I have a convolutional neural network that predicts 3 quantities: Ux, Uy, and P. These are the x-velocity, y-velocity, and pressure fields. They are all 2D arrays of size [100, 60], and my batch size is 10.

I want to compute the loss and update the network by comparing the CURL of the predicted velocity with the CURL of the target velocity. I have a function that does this: v = curl(Ux_pred, Uy_pred). Given the predicted Ux and Uy, I want to compute the loss against the ground-truth curl that I have: true_curl = curl(Ux_true, Uy_true). I have already computed the true curl and added it to my Y data as the fourth channel.

However, I want my network to only predict Ux, Uy, and P. I want my NN parameters to update based on the LOSS between the curls, so that the accuracy of Ux and Uy improves; the curl loss therefore has to be expressed in terms of Ux and Uy. I have been trying to do this using PyTorch autograd, and have already read many similar questions, but I just can't get it to work. This is my code so far:

        print("pred_Curl shape:", np.shape(pred_curl))
        print("pred_Ux shape:", np.shape(pred[:,0,:,:]))
        print("pred_Uy shape:", np.shape(pred[:,1,:,:]))
        true_curl = torch.from_numpy(y[:,3,:,:]) # not sure where to use the true curl?

        pred_curl = Variable(pred_curl, requires_grad=True)
        
        pred_ux = pred[:,0,:,:]
        pred_uy = pred[:,1,:,:]

        pred_ux = Variable(pred_ux, requires_grad=True)
        pred_uy = Variable(pred_uy, requires_grad=True)

        grad_tensor = torch.autograd.grad(outputs=pred_curl, inputs=(pred_ux, pred_uy), 
                       grad_outputs=torch.ones_like(pred_curl), 
                       retain_graph=True,create_graph=True)

        loss = torch.sum(grad_tensor)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

This has the following output:

pred_Curl shape: torch.Size([10, 100, 60])
pred_Ux shape: torch.Size([10, 100, 60])
pred_Uy shape: torch.Size([10, 100, 60])

RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. 
Set allow_unused=True if this is the desired behavior.

Any help would be appreciated!

Edit: Here is my curl function:

    def discrete_curl(self, x, y, new_arr):
        for m in range(100):
            for n in range(60):
                if n <= 58 and m <= 98:
                    if x[m, n] != 0 and y[m, n] != 0:
                        new_arr[m, n] = ((y[m+1, n] - y[m-1, n]) / 2*1) - ((x[m, n+1] - x[m, n-1]) / 2*1)
        return new_arr

Where x and y are Ux and Uy, and new_arr is the curl output.

asked Jan 23 '26 by user3611

1 Answer

You could try something like this:

    def discrete_curl(pred):
        # pred: [batch, channels, 100, 60]; channel 0 is Ux, channel 1 is Uy
        new_arr = torch.zeros((pred.shape[0], 100, 60), dtype=pred.dtype, device=pred.device)
        for pred_idx in range(pred.shape[0]):
            for m in range(100):
                for n in range(60):
                    if n <= 58 and m <= 98:
                        if pred[pred_idx, 0, m, n] != 0 and pred[pred_idx, 1, m, n] != 0:
                            # central differences: d(Uy)/dx - d(Ux)/dy, grid spacing 1
                            duy_dx = (pred[pred_idx, 1, m+1, n] - pred[pred_idx, 1, m-1, n]) / 2
                            dux_dy = (pred[pred_idx, 0, m, n+1] - pred[pred_idx, 0, m, n-1]) / 2
                            new_arr[pred_idx, m, n] = duy_dx - dux_dy
        return new_arr

    pred_curl = discrete_curl(pred)
    true_curl = torch.from_numpy(y[:, 3, :, :]).to(pred_curl)  # match the prediction's dtype/device
    loss = torch.nn.functional.mse_loss(pred_curl, true_curl)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

I think the curl computation can be optimized, but I tried to stick to your structure for the most part.
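
If it helps, here is one way the loops could be vectorized with tensor slicing. It is a sketch rather than a drop-in replacement: it only fills the interior points (so the m = 0 and n = 0 edges stay zero instead of wrapping around via negative indices, as the loop version implicitly does), and the != 0 check is applied as a mask afterwards.

    import torch

    def discrete_curl_vectorized(pred):
        # pred: [batch, channels, 100, 60]; channel 0 is Ux, channel 1 is Uy
        ux, uy = pred[:, 0], pred[:, 1]
        curl = torch.zeros_like(ux)

        # central differences on the interior points, grid spacing 1
        duy_dx = (uy[:, 2:, 1:-1] - uy[:, :-2, 1:-1]) / 2
        dux_dy = (ux[:, 1:-1, 2:] - ux[:, 1:-1, :-2]) / 2

        # same zero-velocity mask as the loop version
        mask = (ux[:, 1:-1, 1:-1] != 0) & (uy[:, 1:-1, 1:-1] != 0)
        curl[:, 1:-1, 1:-1] = torch.where(mask, duy_dx - dux_dy, torch.zeros_like(duy_dx))
        return curl

This stays differentiable, so it can be used with the same mse_loss / backward() / step() lines above, and it avoids the Python-level triple loop per batch element.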

answered Jan 25 '26 by GoodDeeds

