
New posts in huggingface-transformers

How do I use BertForMaskedLM or BertModel to calculate perplexity of a sentence?
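The perplexity question reduces to a small formula: perplexity is the exponential of the average negative log-likelihood of the tokens. A minimal sketch, assuming you have already collected per-token log-probabilities (for example from BertForMaskedLM by masking one token at a time; the list below is a hypothetical stand-in):

```python
import math

def perplexity(log_probs):
    """Perplexity = exp(mean negative log-likelihood) of the token log-probs."""
    return math.exp(-sum(log_probs) / len(log_probs))

# Hypothetical per-token log P(token | context) values; with a uniform
# probability of 0.25 per token, perplexity is exactly 1/0.25 = 4.
print(round(perplexity([math.log(0.25)] * 4), 6))  # 4.0
```

Note that BERT is not a left-to-right language model, so the masked-LM score this yields is a "pseudo-perplexity", not a true autoregressive perplexity.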

How to fine-tune BERT on unlabeled data?

Downloading transformers models to use offline

How exactly should the input file be formatted for the language model finetuning (BERT through Huggingface Transformers)?

Save only best weights with huggingface transformers

BERT tokenizer & model download

Huggingface transformer model returns string instead of logits

How to reconstruct text entities with Hugging Face's transformers pipelines without IOB tags?

Huggingface ALBERT tokenizer NoneType error with Colab

How do I train an encoder-decoder model for a translation task using Hugging Face transformers?

Why take the first hidden state for sequence classification (DistilBertForSequenceClassification) in HuggingFace?

Transformer: Error importing packages. "ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler'"

Use of attention_mask during the forward pass in LM fine-tuning

HuggingFace BERT `inputs_embeds` giving unexpected result

Understanding BERT vocab [unusedxxx] tokens

PyTorch torch.no_grad() versus requires_grad=False
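The `torch.no_grad()` versus `requires_grad=False` question comes down to scope: `no_grad()` is a context manager that disables gradient tracking for everything computed inside it, while `requires_grad=False` is a per-tensor flag, typically used to freeze parameters during fine-tuning. A minimal sketch of the difference (assumes PyTorch is installed):

```python
import torch

x = torch.ones(3, requires_grad=True)

# Inside no_grad(), results are detached from the autograd graph
# even though x itself still has requires_grad=True.
with torch.no_grad():
    y = x * 2
assert y.requires_grad is False

# Outside the context, the same operation is tracked again.
z = x * 2
assert z.requires_grad is True

# requires_grad=False is a property of the tensor itself; for freezing
# model layers the in-place form param.requires_grad_(False) is typical.
w = torch.ones(3, requires_grad=False)
assert (w * 2).requires_grad is False
```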

How to make a Trainer pad inputs in a batch with huggingface-transformers?

Named Entity Recognition with Huggingface transformers, mapping back to complete entities
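Both NER questions above come down to the same post-processing step: grouping token-level B-/I- predictions back into complete entities. A self-contained sketch in plain Python (the token and tag lists are hypothetical stand-ins for the per-token labels a token-classification model returns; real output would also need WordPiece subwords re-joined rather than space-separated):

```python
def merge_iob(tokens, tags):
    """Group consecutive B-/I- tagged tokens into (text, label) entities."""
    entities = []
    current_tokens, current_label = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always starts a new entity, closing any open one.
            if current_tokens:
                entities.append((" ".join(current_tokens), current_label))
            current_tokens, current_label = [token], tag[2:]
        elif tag.startswith("I-") and current_label == tag[2:]:
            # An I- tag continues the open entity of the same type.
            current_tokens.append(token)
        else:
            # An "O" tag (or an inconsistent I- tag) ends the entity.
            if current_tokens:
                entities.append((" ".join(current_tokens), current_label))
            current_tokens, current_label = [], None
    if current_tokens:
        entities.append((" ".join(current_tokens), current_label))
    return entities

tokens = ["Hugging", "Face", "is", "in", "New", "York"]
tags = ["B-ORG", "I-ORG", "O", "O", "B-LOC", "I-LOC"]
print(merge_iob(tokens, tags))  # [('Hugging Face', 'ORG'), ('New York', 'LOC')]
```

Recent versions of the transformers token-classification pipeline can do this grouping for you via its aggregation options, but the manual version above makes the IOB logic explicit.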