I have two GPUs.
How can I use them for inference with a Hugging Face pipeline?
The Hugging Face documentation seems to say that we can easily use the DataParallel class with a Hugging Face model, but I haven't seen any example.
For example, with PyTorch it's very easy to just do the following:
import torch

net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
output = net(input_var)  # input_var can be on any device, including CPU
Is there an equivalent with Hugging Face?
I found that this isn't possible with the pipelines directly, so here are two ways to do it:
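One way is to skip the pipeline call and wrap the underlying model in torch.nn.DataParallel yourself, just like the plain PyTorch example above. A minimal sketch, assuming a sentiment-analysis checkpoint (the model name is only an example; return_dict=False is used so the model returns plain tuples, which DataParallel's gather step handles cleanly):

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Replicate the model on both GPUs; inputs are scattered along the batch dimension.
net = torch.nn.DataParallel(model, device_ids=[0, 1])
net.to("cuda:0")  # DataParallel expects the parameters on the first device
net.eval()

texts = ["I love this!", "This is terrible."]
inputs = tokenizer(texts, return_tensors="pt", padding=True).to("cuda:0")

with torch.no_grad():
    # return_dict=False -> tuple output; element 0 is the logits
    logits = net(**inputs, return_dict=False)[0]
print(logits.argmax(dim=-1))

Note that DataParallel splits along the batch dimension, so this only helps when you feed batches of more than one example at a time.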
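The other way is to build one pipeline per GPU and split your inputs between them; a thread pool lets the two devices run concurrently. Again a sketch, with an example checkpoint and a naive even split of the data as assumptions:

from concurrent.futures import ThreadPoolExecutor
from transformers import pipeline

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint

# One pipeline per device; device=0 and device=1 pin each copy to a GPU.
pipes = [pipeline("sentiment-analysis", model=model_name, device=i) for i in range(2)]

texts = ["I love this!", "This is terrible."] * 8
chunks = [texts[: len(texts) // 2], texts[len(texts) // 2:]]  # naive even split

# Run both halves at the same time, one thread per pipeline.
with ThreadPoolExecutor(max_workers=2) as ex:
    halves = list(ex.map(lambda pc: pc[0](pc[1]), zip(pipes, chunks)))

results = halves[0] + halves[1]
print(results)

This costs one full copy of the weights per GPU, but it keeps the pipeline's built-in pre- and post-processing, which the DataParallel route loses.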