I have an RNN encoder that is part of a larger language model, where the pipeline is encode -> rnn -> decode.
In the __init__ of my RNN class I have the following:
self.encode_this = nn.Embedding(self.vocab_size, self.embedded_vocab_dim)
Now I am trying to implement a forward method, which takes in batches and performs encoding then decoding:
def f_calc(self, batch):
    # Here, batch.shape[0] is the batch size and batch.shape[1] is the sequence length
    hidden_states = torch.zeros(self.num_layers, batch.shape[0], self.hidden_vocab_dim).to(device)
    embedded_states = torch.zeros(batch.shape[0], batch.shape[1], self.embedded_vocab_dim).to(device)
    o1, h = self.encode_this(embedded_states)
However, my problem is always with the encoder, which gives me the following error:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1465 # remove once script supports set_grad_enabled
1466 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1467 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1468
1469
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
Anyone have any idea how to fix this? I am completely new to PyTorch, so please excuse me if this is a basic question. I know there is some form of type casting involved, but I am not sure how to go about doing it.
Much appreciated!
The embedding layer expects integer (Long) indices as its input, not floats:
import torch as t
emb = t.nn.Embedding(embedding_dim=3, num_embeddings=26)
emb(t.LongTensor([0,1,2]))
Add .long() in your code:
embedded_states = (torch.zeros(batch.shape[0], batch.shape[1], self.embedded_vocab_dim).to(device)).long()
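For context, here is a minimal runnable sketch of the whole encode -> rnn pipeline. Note that an embedding layer is normally fed the token indices themselves (your `batch`, cast to Long) rather than a zero tensor; the dimensions and the `rnn` attribute below are hypothetical, chosen just to make the example self-contained:

```python
import torch
import torch.nn as nn

class TinyRNNModel(nn.Module):
    # Hypothetical dimensions, for illustration only
    def __init__(self, vocab_size=26, embedded_vocab_dim=8,
                 hidden_vocab_dim=16, num_layers=1):
        super().__init__()
        self.num_layers = num_layers
        self.hidden_vocab_dim = hidden_vocab_dim
        self.encode_this = nn.Embedding(vocab_size, embedded_vocab_dim)
        self.rnn = nn.RNN(embedded_vocab_dim, hidden_vocab_dim,
                          num_layers=num_layers, batch_first=True)

    def f_calc(self, batch):
        # batch: (batch_size, seq_len) token indices; must be Long for nn.Embedding
        hidden = torch.zeros(self.num_layers, batch.shape[0], self.hidden_vocab_dim)
        embedded = self.encode_this(batch.long())  # (batch, seq, embedded_vocab_dim)
        out, h = self.rnn(embedded, hidden)        # out: (batch, seq, hidden_vocab_dim)
        return out, h

model = TinyRNNModel()
batch = torch.randint(0, 26, (4, 10))  # 4 sequences of length 10
out, h = model.f_calc(batch)
```

`out` here has shape `(4, 10, 16)`, one hidden vector per token, which is what you would hand to a decoder.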