When I try to convert a PyTorch bfloat16 tensor to a NumPy array, it raises a TypeError:
import torch
x = torch.Tensor([0]).to(torch.bfloat16)
x.numpy() # TypeError: Got unsupported ScalarType BFloat16
import numpy as np
np.array(x) # same error
Is there a work-around to make this conversion?
Currently, NumPy does not support bfloat16. One work-around is to upcast the tensor from bfloat16 to single-precision (float32) before making the conversion:
x.float().numpy()
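A minimal sketch of this work-around, using example values that are exactly representable in bfloat16 so the upcast is lossless:

```python
import numpy as np
import torch

x = torch.tensor([0.5, 1.25], dtype=torch.bfloat16)

# Upcast to float32 first; NumPy supports that dtype natively.
arr = x.float().numpy()

print(arr.dtype)  # float32
```

Note the resulting array holds float32 values, so it uses twice the memory of the original tensor.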
The PyTorch maintainers are also considering adding a force=True option to the Tensor.numpy method to do this automatically.
There is also ml_dtypes, a NumPy extension that adds support for bfloat16.
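With ml_dtypes installed, one way to get a true bfloat16 NumPy array (a sketch, assuming the package is available as `ml_dtypes` via pip) is to reinterpret the tensor's raw 16-bit payload rather than cast its values:

```python
import numpy as np
import torch
import ml_dtypes  # pip install ml_dtypes

x = torch.tensor([0.5, 1.25], dtype=torch.bfloat16)

# Reinterpret the raw 16-bit payload as int16, convert that to NumPy,
# then view the same bytes as ml_dtypes' bfloat16 -- no values change.
arr = x.view(torch.int16).numpy().view(ml_dtypes.bfloat16)

print(arr.dtype)  # bfloat16
```

Because this is a bit-level reinterpretation in both directions, the array keeps the 2-bytes-per-element footprint, unlike the float32 upcast above.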