I use Google Colab to train the model. As the screenshot shows, when I run 'torch.cuda.is_available()' the output is 'True'. But when I then run my training code, it fails with: RuntimeError: No CUDA GPUs are available.
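For reference, this is roughly the check I am doing (a minimal sketch; torch is already installed in Colab):

import torch

print(torch.cuda.is_available())   # prints True in my session
print(torch.cuda.device_count())   # Colab normally exposes a single GPU, index 0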

Try installing the cudatoolkit version you want to use: "conda install pytorch torchvision cudatoolkit=10.1 -c pytorch"
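After installing, you can sanity-check that the PyTorch build matches the toolkit (the exact version strings will depend on your install):

import torch

print(torch.__version__)         # installed PyTorch version
print(torch.version.cuda)        # CUDA version PyTorch was built against, e.g. 10.1
print(torch.cuda.is_available()) # should be True if the driver and toolkit line up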
I think I have found a solution that fixes this; I used to have the same error. When you run this:
import tensorflow as tf
tf.test.gpu_device_name()
it will give you the GPU device name, which in my case was:
/device:GPU:0
I realized that I was passing the GPU argument as:
parser.add_argument('--gpu', type=str, default="1", help='choose GPU')
so I replaced the "1" with "0", the GPU index that Colab gave me, and then it worked.
I hope this fixes your problem too.
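For completeness, here is a rough sketch of how the --gpu value often gets used. Your script's internals aren't shown, so this assumes the common pattern of setting CUDA_VISIBLE_DEVICES from that argument; if --gpu is "1" but only GPU 0 exists, every GPU gets hidden and PyTorch raises exactly this error:

import argparse
import os
import torch

parser = argparse.ArgumentParser()
parser.add_argument('--gpu', type=str, default="0", help='choose GPU')  # "0" is the only index Colab exposes
args = parser.parse_args()

# Restrict visibility to the chosen GPU. With a non-existent index (e.g. "1"),
# no GPU remains visible and torch.cuda calls raise
# "RuntimeError: No CUDA GPUs are available".
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)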