CUDA Out of memory when there is plenty available

I'm having trouble using PyTorch and CUDA. Sometimes it works fine; other times it tells me RuntimeError: CUDA out of memory. However, I am confused, because nvidia-smi shows that the used memory on my card is 563 MiB / 6144 MiB, which should in theory leave over 5 GiB available.

[screenshot: output of nvidia-smi]

However, upon running my program, I am greeted with the message: RuntimeError: CUDA out of memory. Tried to allocate 578.00 MiB (GPU 0; 5.81 GiB total capacity; 670.69 MiB already allocated; 624.31 MiB free; 898.00 MiB reserved in total by PyTorch)

It looks like PyTorch is reserving ~1 GiB, knows that ~700 MiB are allocated, and is trying to assign ~600 MiB to the program, yet it claims the GPU is out of memory. How can this be? There should be plenty of GPU memory left given these numbers.
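For reference, here is a minimal sketch of querying those same numbers from inside Python rather than from nvidia-smi (it assumes a CUDA device is visible; the "cuda:0" index is an assumption matching the GPU 0 in the error message):

import torch

device = torch.device("cuda:0")                   # assumed device index
allocated = torch.cuda.memory_allocated(device)   # bytes held by live tensors
reserved = torch.cuda.memory_reserved(device)     # bytes reserved by the caching allocator
total = torch.cuda.get_device_properties(device).total_memory

print(f"allocated: {allocated / 2**20:.0f} MiB")
print(f"reserved:  {reserved / 2**20:.0f} MiB")
print(f"total:     {total / 2**20:.0f} MiB")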

asked Jan 20 '26 by Jeff Chen

1 Answer

You need to empty the torch cache at some point before the allocation that fails. PyTorch's caching allocator keeps freed blocks reserved for reuse rather than returning them to the driver, so torch.cuda.empty_cache() releases those cached blocks:

torch.cuda.empty_cache()
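For example, a sketch of releasing cached memory explicitly (the large tensor below is a placeholder for whatever your program actually allocates, not code from the question):

import gc
import torch

x = torch.empty(512, 1024, 1024, device="cuda")   # ~2 GiB of float32, stands in for the real workload
del x                        # drop the Python reference so the block becomes freeable
gc.collect()                 # collect anything else still holding GPU memory
torch.cuda.empty_cache()     # hand the cached blocks back to the driver
print(torch.cuda.memory_reserved() / 2**20, "MiB still reserved")

Note that empty_cache() only releases blocks no longer referenced by any tensor; it does not free memory your tensors are still using.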
answered Jan 21 '26 by stahh


