Questions tagged [huggingface]

The huggingface tag can be used for all libraries made by Hugging Face. Please ALWAYS use the more specific tags huggingface-transformers, huggingface-tokenizers, or huggingface-datasets if your question concerns one of those libraries.

606 questions
0
votes
0 answers

Twitter Sentiment Analysis: TypeError: dropout(): argument 'input' (position 1) must be Tensor, not tuple in DistilBERT using the huggingface library

The following is my Sentiment Analyser: from transformers import DistilBertTokenizer, DistilBertModel PRE_TRAINED_MODEL_NAME = 'distilbert-base-cased' db_model = DistilBertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict = False) tokenizer…
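This error usually means the whole output of DistilBertModel was handed to nn.Dropout: with return_dict=False the model returns a tuple whose first element is the hidden states. A minimal sketch of a head that unpacks it first (class and attribute names below are illustrative, not taken from the question):

import torch.nn as nn
from transformers import DistilBertModel

PRE_TRAINED_MODEL_NAME = 'distilbert-base-cased'

class SentimentClassifier(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.bert = DistilBertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict=False)
        self.drop = nn.Dropout(p=0.3)
        self.out = nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        last_hidden_state = outputs[0]      # unpack the tuple: (batch, seq_len, hidden)
        pooled = last_hidden_state[:, 0]    # take the [CLS] position as a pooled representation
        return self.out(self.drop(pooled))  # dropout now receives a Tensor, not a tuple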
0
votes
0 answers

Train mobileBERT from scratch for other languages

I am thinking of training a mobileBERT model from scratch for the German language. Can I use the English mobileBERT model from HuggingFace to apply it to a dataset in another language? It makes sense that I would have to adapt the teacher model of…
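One common approach is to keep only the English model's architecture: instantiate MobileBERT from its config with a German tokenizer and train from random weights (the English checkpoint's weights themselves are not reused). A sketch under that assumption, using bert-base-german-cased purely as a stand-in German WordPiece tokenizer:

from transformers import AutoTokenizer, MobileBertConfig, MobileBertForMaskedLM

# Stand-in German tokenizer; in practice you would train your own on a German corpus.
tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")

config = MobileBertConfig.from_pretrained("google/mobilebert-uncased")
config.vocab_size = tokenizer.vocab_size      # match the new vocabulary

model = MobileBertForMaskedLM(config)         # randomly initialised, not from_pretrained
print(model.num_parameters())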
0
votes
1 answer

Multiple threads of the Hugging Face Stable Diffusion inpainting pipeline slow down inference on the same GPU

I am using the Stable Diffusion inpainting pipeline to generate some inference results on an A100 (40 GB) GPU. For a 512x512 image it takes approx. 3 s per image and about 5 GB of space on the GPU. In order to have faster inference, I am trying…
hsuyaa • 79 • 1 • 1 • 8
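Running several Python threads against one GPU usually just makes the kernels queue up on the same device; a batched call tends to use the A100 better. A minimal sketch assuming the diffusers inpainting pipeline and the runwayml/stable-diffusion-inpainting checkpoint (dummy images only, for shape):

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.new("RGB", (512, 512), "white")   # placeholder input image
mask_image = Image.new("L", (512, 512), 255)         # white mask = repaint everything
prompts = ["a red sofa in a living room"] * 4        # one batched call, four images

results = pipe(prompt=prompts, image=[init_image] * 4, mask_image=[mask_image] * 4).images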
0
votes
0 answers

huggingface transformers installation problem - ar? -cq? cargo rustc?

Any suggestions appreciated: MacOs Ventura 13.0.1 pip install transformers ... ... running: "cc" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64" "-arch" "x86_64" "-I" "bzip2-1.0.8" "-D_FILE_OFFSET_BITS=64" "-DBZ_NO_STDIO" "-o"…
dreamer • 69 • 3
0
votes
1 answer

Why is the tflite model output shape different from the original model converted from T5ForConditionalGeneration?

T5ForConditionalGeneration Model to translate English to German from transformers import T5TokenizerFast, T5ForConditionalGeneration tokenizer = T5TokenizerFast.from_pretrained("t5-small") model =…
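The shape difference is often just the difference between one forward pass (which is what gets exported) and the generate() loop. A sketch contrasting the two, assuming t5-small:

from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: I love coffee.", return_tensors="pt")

# A single forward pass returns vocabulary logits for every decoder position.
out = model(**inputs, decoder_input_ids=inputs.input_ids)
print(out.logits.shape)      # (batch, decoder_seq_len, vocab_size)

# generate() runs many forward passes and returns token ids instead.
ids = model.generate(**inputs, max_new_tokens=40)
print(ids.shape)             # (batch, generated_seq_len)
print(tokenizer.decode(ids[0], skip_special_tokens=True))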
0
votes
0 answers

Accessing layers within the base HuggingFace model in TensorFlow

I have used the "microsoft/resnet-50" for a vision task but I want to access the resnet layers inside the base model for Evaluation/Explainability purposes. Code: from transformers import AutoFeatureExtractor, TFResNetModel model =…
Imperial_J • 306 • 1 • 7 • 23
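One way to look inside the wrapper is plain Keras/TensorFlow introspection, which avoids relying on version-specific attribute names. A sketch, assuming only that TFResNetModel behaves as a regular Keras model:

from transformers import TFResNetModel

model = TFResNetModel.from_pretrained("microsoft/resnet-50")

# Top-level Keras layers of the wrapper.
for layer in model.layers:
    print(layer.name, type(layer).__name__)

# Walk every nested sub-module tracked by TensorFlow.
for sub in model.submodules:
    print(type(sub).__name__)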
0
votes
0 answers

How to use 'run_glue.py' in HuggingFace to finetune for classification?

Here is all the "documentation" I could find https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification I honestly don't see how you're supposed to know how to use this thing with the resources I found online unless…
znb • 3 • 2
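The script is driven entirely by command-line flags; a sketch of a typical invocation from a local clone of the transformers repo, shown here via subprocess so it stays in Python (model, task and hyperparameters are placeholders):

import subprocess

subprocess.run([
    "python", "examples/pytorch/text-classification/run_glue.py",
    "--model_name_or_path", "bert-base-cased",
    "--task_name", "sst2",               # or --train_file/--validation_file for your own CSV/JSON
    "--do_train", "--do_eval",
    "--max_seq_length", "128",
    "--per_device_train_batch_size", "32",
    "--learning_rate", "2e-5",
    "--num_train_epochs", "3",
    "--output_dir", "./sst2-finetuned",
], check=True)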
0
votes
0 answers

Error while downloading a repo from Hugging Face: Read timed out

I’m trying to download a repo from huggingface using this code: from huggingface_hub import snapshot_download snapshot_download(repo_id="openclimatefix/era5-land", repo_type="dataset", cache_dir="/home/saben1/scratch/o/slurms/data_4") After 3…
Saben1 • 1 • 1
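Since snapshot_download skips files that are already in the cache, a simple retry loop with a longer metadata timeout is often enough for a flaky connection. A sketch under that assumption:

import time
from huggingface_hub import snapshot_download

for attempt in range(5):
    try:
        path = snapshot_download(
            repo_id="openclimatefix/era5-land",
            repo_type="dataset",
            cache_dir="/home/saben1/scratch/o/slurms/data_4",
            etag_timeout=60,             # allow slower metadata responses
        )
        break
    except Exception as err:             # e.g. requests.exceptions.ReadTimeout
        print(f"attempt {attempt} failed: {err}")
        time.sleep(30)
else:
    raise RuntimeError("download did not complete after 5 attempts")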
0
votes
1 answer

How to convert Flair Hugging Face output to a dataframe

I am new to huggingface and I am working on the Flair (NER) module, which gives me the output below: from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-german-large") # make example…
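A sketch of collecting the tagged spans into a pandas DataFrame; the attribute access follows the current Flair API (get_spans / get_label), which can differ slightly between Flair versions:

import pandas as pd
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-german-large")

sentence = Sentence("George Washington ging nach Washington.")
tagger.predict(sentence)

rows = []
for span in sentence.get_spans("ner"):
    label = span.get_label("ner")
    rows.append({"text": span.text, "entity": label.value, "score": label.score})

df = pd.DataFrame(rows)
print(df)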
0
votes
0 answers

Huggingface TFRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=1) gives ValueError

Reproducible on google colab (transformers 4.24.0). from transformers import TFAutoModelForSequenceClassification model = TFRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=1) I would like to…
kawingkelvin • 3,649 • 2 • 30 • 50
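The ValueError is most likely the checkpoint's existing multi-label classification head clashing with num_labels=1; ignore_mismatched_sizes tells from_pretrained to re-initialise the mismatched head instead of failing. A sketch under that assumption:

from transformers import TFRobertaForSequenceClassification

model = TFRobertaForSequenceClassification.from_pretrained(
    "cardiffnlp/twitter-roberta-base-emotion",
    num_labels=1,
    ignore_mismatched_sizes=True,   # re-init the classifier head for the new label count
)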
0
votes
2 answers

How to display image data returned from dreambooth / stable-diffusion model?

I'm querying a dreambooth model from Hugging Face using the inference API and am getting a huge data response string back which starts with: ÿØÿà\x00\x10JFIF\x00\x01\x01\x00\x00\x01\x0... Content-type is: image/jpeg. How do I decode this and display…
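The response body is simply the JPEG bytes, so it can be decoded with Pillow. A minimal sketch, where the model id and token are placeholders:

import io
import requests
from PIL import Image

API_URL = "https://api-inference.huggingface.co/models/your-user/your-dreambooth-model"  # placeholder
headers = {"Authorization": "Bearer hf_xxx"}                                             # your token

response = requests.post(API_URL, headers=headers, json={"inputs": "a photo of sks dog"})
image = Image.open(io.BytesIO(response.content))   # decode the raw JPEG bytes
image.save("result.jpg")
image.show()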
0
votes
0 answers

What are some ways to deal with large slug size in Heroku?

I’m trying to deploy my backend on Heroku and running into the 500 MB slug size limit because my code downloads two tokenizers from Huggingface. For reference, the two tokenizers are BertTokenizerFast.from_pretrained('bert-base-uncased') and…
0
votes
1 answer

Data collation step causing "ValueError: Unable to create tensor..." due to unnecessary padding attempts on extra inputs

I am trying to fine-tune a Bart model from the huggingface transformers framework on a dialogue summarisation task. The Bart model by default takes in the conversations as a monolithic piece of text as the input and takes the summaries as the…
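The usual culprit is that the raw string columns are still on the tokenized dataset, so the collator tries to pad them. A sketch of dropping them during map(), with hypothetical "dialogue"/"summary" column names:

from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

raw = Dataset.from_dict({
    "dialogue": ["A: hi B: hello, how are you?"],
    "summary": ["Greetings are exchanged."],
})

def preprocess(batch):
    model_inputs = tokenizer(batch["dialogue"], truncation=True)
    labels = tokenizer(text_target=batch["summary"], truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# remove_columns drops the string fields the collator cannot turn into tensors.
tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)
print(tokenized.column_names)   # e.g. ['input_ids', 'attention_mask', 'labels']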
0
votes
1 answer

Target size (torch.Size([8])) must be the same as input size (torch.Size([8, 15])), multi-class classification using hugging face Roberta

I am using hugging face Roberta to classify a multi-class dataset, but now I got an error “Target size (torch.Size([8])) must be the same as input size (torch.Size([8, 15]))”. I am not sure what I should do now; could anyone provide some…
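This mismatch typically appears when the labels are floats: the model then falls back to the multi-label BCE loss, which expects (batch, num_labels) targets. Passing integer class indices (or setting problem_type explicitly) selects cross-entropy instead. A sketch under that assumption:

import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=15,
    problem_type="single_label_classification",   # be explicit: multi-class, not multi-label
)

enc = tokenizer(["example one", "example two"], padding=True, return_tensors="pt")
labels = torch.tensor([3, 7], dtype=torch.long)   # class indices, not one-hot floats

out = model(**enc, labels=labels)
print(out.loss)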
0
votes
0 answers

GCP does not see the huggingface-hub installation

I successfully ran !pip3 install huggingface-hub in a GCP notebook instance but I'm still not able to import it. Where am I going wrong?
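A frequent cause is that pip installed into a different interpreter than the one the notebook kernel uses, or that the import name (huggingface_hub, with an underscore) was typed with a hyphen. A sketch of checking both:

import sys
print(sys.executable)            # which Python the kernel is actually running

# Install into that exact interpreter from inside the notebook:
# %pip install huggingface_hub

import huggingface_hub           # note: underscore, not huggingface-hub
print(huggingface_hub.__version__)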