
New posts in huggingface-transformers

Strange results with huggingface transformer[marianmt] translation of larger text

Transformers model from Hugging Face throws error that specific classes couldn't be loaded

How to load a fine-tuned peft/lora model based on llama with Huggingface transformers?

huggingface transformers: truncation strategy in encode_plus

Stuck at downloading shards when loading an LLM model from huggingface

resize_token_embeddings on a pretrained model with different embedding size

Huggingface GPT2 and T5 model APIs for sentence classification?

What is the recommended number of threads for PyTorch relative to available CPU cores?

Difference between from_config and from_pretrained in HuggingFace

How to save a tokenizer after training it?

Efficiently using Hugging Face transformers pipelines on GPU with large datasets

BioBERT for the Keras version of huggingface transformers

Speeding up load time of LLMs

Loading a HuggingFace model on multiple GPUs using model parallelism for inference

AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_tensor'

huggingface longformer memory issues

ImportError: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`