When I run a PyTorch matmul, the following error is thrown:
Traceback (most recent call last):
  File "/home/omrastogi/Desktop/Side/One-Class-Classification-Customer-Complaints/pattern.py", line 71, in <module>
    print(obj.infer(list([df.text[0]]), list([df.reason[0]])))
  File "/home/omrastogi/Desktop/Side/One-Class-Classification-Customer-Complaints/pattern.py", line 45, in infer
    cos_sm = self.batch_cosine_similarity(enc1, enc2)
  File "/home/omrastogi/Desktop/Side/One-Class-Classification-Customer-Complaints/pattern.py", line 51, in batch_cosine_similarity
    dot_prd = torch.matmul(inp1, inp2.transpose(0, 1))
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
inp1 --> [1256]
inp2 --> [1256]
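For reference, a minimal sketch that reproduces the same error on a PyTorch build without CPU half-precision matmul support (the [1, 1256] shapes and the tensor names are assumptions based on the snippet above):

import torch

# Assumed stand-ins for the two encodings in the question, as float16 on the CPU
inp1 = torch.randn(1, 1256, dtype=torch.float16)
inp2 = torch.randn(1, 1256, dtype=torch.float16)

# On PyTorch versions without a CPU Half matmul kernel this raises:
# RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
dot_prd = torch.matmul(inp1, inp2.transpose(0, 1))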
The error was thrown because the operands were float16. Casting them back to float32 solved the problem. I guess float16 matmul is only implemented for the GPU.
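A minimal sketch of that workaround, casting to float32 before the CPU matmul (the tensor names and the [1, 1256] shape are assumptions, not the asker's actual code):

import torch

# Assumed float16 encodings on the CPU, matching the shapes in the question
enc1 = torch.randn(1, 1256, dtype=torch.float16)
enc2 = torch.randn(1, 1256, dtype=torch.float16)

# Workaround: cast the operands to float32 so a CPU matmul kernel is available
dot_prd = torch.matmul(enc1.float(), enc2.float().transpose(0, 1))

# Alternative: keep float16 but run the matmul on the GPU, where Half is supported
if torch.cuda.is_available():
    dot_prd = torch.matmul(enc1.cuda(), enc2.cuda().transpose(0, 1))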