 

How to use multi-gpu during inference in pytorch framework

I am trying to run prediction from a unet3D model built on the PyTorch framework, using multiple GPUs:

import torch
import os
import torch.nn as nn
os.environ['CUDA_DEVICE_ORDER']='PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES']='0,1,2'

model = unet3d()
model = nn.DataParallel(model)
model = model.to('cuda')

result = model.forward(torch.tensor(input).to('cuda').float())

But the model still uses only one GPU (the first one), and I get a memory error:

CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 11.00 GiB total capacity; 8.43 GiB already allocated; 52.21 MiB free; 5.17 MiB cached) 

How should I use multiple GPUs during the inference phase? What is the mistake in my script above?

asked Oct 28 '25 07:10 by AKSHAYAA VAIDYANATHAN


1 Answer

DataParallel handles sending the data to the GPUs for you, so you should not move the input to CUDA yourself.

import os
# Set these before any CUDA call so all three devices are visible to PyTorch
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2'

import torch
import torch.nn as nn

model = unet3d()
model = nn.DataParallel(model.cuda())  # replicate the model on all visible GPUs

result = model(torch.tensor(input).float())  # call the module, not .forward()

Note that DataParallel splits the input along dimension 0, so your batch size must be at least the number of GPUs for all of them to be used. If this doesn't work, please provide more details about the input.
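As a rough illustration (plain Python, no GPUs needed), here is how a batch gets scattered along dimension 0 across devices; the chunk sizes below mirror the near-equal split PyTorch's scatter performs, with earlier devices taking the remainder (this is a sketch of the splitting behavior, not PyTorch's actual implementation):

```python
def scatter_sizes(batch_size, num_devices):
    """Approximate DataParallel's split of a batch across devices:
    near-equal chunks along dim 0, earlier devices taking the remainder."""
    base, rem = divmod(batch_size, num_devices)
    return [base + (1 if i < rem else 0) for i in range(num_devices)]

# A batch of 8 samples over the 3 GPUs from CUDA_VISIBLE_DEVICES='0,1,2'
print(scatter_sizes(8, 3))  # -> [3, 3, 2]

# A batch of 1 leaves two of the three GPUs idle
print(scatter_sizes(1, 3))  # -> [1, 0, 0]
```

This is also why a batch of a single large volume (common with 3D U-Nets) will still run on only one GPU: there is nothing to split.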

[EDIT]:

Try running inference under torch.no_grad() so autograd does not store activations for a backward pass, which saves a lot of GPU memory:

with torch.no_grad():
    result = model(torch.tensor(input).float())
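For completeness, a minimal end-to-end sketch of the pattern (using a tiny nn.Linear as a stand-in for unet3d so it runs anywhere; nn.DataParallel falls back to calling the wrapped module directly when no GPU is visible):

```python
import torch
import torch.nn as nn

# Stand-in for unet3d(); any nn.Module is wrapped the same way.
model = nn.DataParallel(nn.Linear(4, 2))
model.eval()  # put layers like dropout/batch-norm into inference mode

batch = torch.randn(8, 4)  # dim 0 is the batch axis DataParallel splits on
with torch.no_grad():      # skip autograd bookkeeping to save memory
    out = model(batch)

print(tuple(out.shape))  # -> (8, 2)
```

With GPUs available you would additionally call `.cuda()` on the module before wrapping, as in the answer above.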
answered Oct 31 '25 10:10 by thedch


