Questions tagged [roberta]

Roberta is a graphical open-source IDE designed for multiple robot systems, such as the Calliope mini, LEGO Mindstorms, or the NAO. Its main audience is children taking their first steps in programming.

37 questions
0
votes
0 answers

Fine-tuning RoBERTa for sentiment analysis

I am trying to fine-tune a RoBERTa model for sentiment analysis. I have downloaded the model locally from Hugging Face. Below is my code for fine-tuning: # dataset is Amazon reviews; the rating goes from 1 to 5 electronics_reivews = …
sin0x1
  • 105
  • 1
  • 3
  • 13
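Before fine-tuning on the Amazon review data mentioned in the question, the 1–5 star ratings have to be mapped to sentiment class ids. A minimal sketch of one such mapping (the three-class negative/neutral/positive scheme is an assumption; the question only says the rating goes from 1 to 5):

```python
def stars_to_sentiment(stars: int) -> int:
    """Map a 1-5 Amazon star rating to a sentiment class id.

    Assumed scheme: 1-2 -> negative (0), 3 -> neutral (1), 4-5 -> positive (2).
    """
    if not 1 <= stars <= 5:
        raise ValueError(f"star rating must be 1-5, got {stars}")
    if stars <= 2:
        return 0  # negative
    if stars == 3:
        return 1  # neutral
    return 2      # positive
```

The resulting integer labels can then be used directly as the `labels` column of the fine-tuning dataset.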
0
votes
1 answer

Hugging Face not able to reload all weights after training

I have recently been using a RobertaLarge model, on which I perform downstream training using the "Trainer" package. All goes well: I see the loss going down and manually compare some results against the validation dataset. The problem comes when I try to save the model…
0
votes
1 answer

Error message when trying to use huggingface pretrained Tokenizer (roberta-base)

I am pretty new at this, so there might be something I am missing completely, but here is my problem: I am trying to create a Tokenizer class that uses the pretrained tokenizer models from Hugging Face. I would then like to use this class in a larger…
0
votes
0 answers

Is it possible to increase token limit in RoBERTa from 512?

So I was trying out EmoRoBERTA for emotion classification; however, some of the strings in my data exceed the 512-token limit. Is there any way to increase this limit? I read somewhere about setting max_length = 1024, but I am not sure if this…
Shrumo
  • 47
  • 7
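RoBERTa's 512-token limit comes from its learned position embeddings, so simply setting max_length = 1024 will not work for a pretrained checkpoint. A common workaround is to split long inputs into overlapping windows that each fit the limit and aggregate the per-window predictions. A torch-free sketch (the window and stride sizes are illustrative assumptions):

```python
def chunk_token_ids(ids, max_len=512, stride=256):
    """Split a long token-id sequence into overlapping windows of at most
    max_len tokens, so each window fits RoBERTa's 512-token limit.

    The overlap (max_len - stride tokens) gives each window some context
    from its neighbour; both values here are illustrative, not prescribed.
    """
    if max_len <= 0 or not 0 < stride <= max_len:
        raise ValueError("need 0 < stride <= max_len")
    chunks = []
    start = 0
    while start < len(ids):
        chunks.append(ids[start:start + max_len])
        if start + max_len >= len(ids):
            break  # last window already covers the end of the sequence
        start += stride
    return chunks
```

Each chunk can then be classified separately, with the per-chunk scores averaged (or max-pooled) into one prediction for the full string.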
0
votes
1 answer

Can I use a different transformer model for the tokenizer and the model?

Can I use a RoBERTa tokenizer with a BERT model? from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained("./bert_tokenizer", max_len=512) from transformers import BertForMaskedLM config =…
0
votes
1 answer

PyTorch: "TypeError: Caught TypeError in DataLoader worker process 0."

I am trying to implement a RoBERTa model for sentiment analysis. First, I declared GPReviewDataset to create a PyTorch Dataset. MAX_LEN = 160 class GPReviewDataset(Dataset): def __init__(self, reviews, targets, tokenizer, max_len): self.reviews…
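A TypeError raised inside a DataLoader worker usually means `__getitem__` returned something the default collate function cannot batch, such as None or variable-length raw strings. A torch-free sketch of the Dataset pattern from the question, returning only fixed-length numeric fields (the character-level "tokenizer" is a placeholder assumption standing in for the real one):

```python
class GPReviewDataset:
    """Minimal torch-free sketch of the Dataset pattern in the question."""

    def __init__(self, reviews, targets, max_len=160):
        self.reviews = reviews
        self.targets = targets
        self.max_len = max_len

    def __len__(self):
        return len(self.reviews)

    def __getitem__(self, idx):
        # Placeholder "tokenization": map characters to small ints. The key
        # point is that every item returns fixed-length numeric sequences,
        # which the default collate_fn can stack into a batch; returning
        # None or raw strings of varying shape is a common cause of the
        # "TypeError: Caught TypeError in DataLoader worker" failure.
        ids = [ord(c) % 100 for c in self.reviews[idx]][: self.max_len]
        ids += [0] * (self.max_len - len(ids))  # pad to max_len
        return {"input_ids": ids, "target": int(self.targets[idx])}
```

With the real tokenizer, the equivalent fix is to call it with padding="max_length" and truncation=True so every item has the same shape.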
-1
votes
1 answer

RoBERTa is not able to learn and predict the positive class in sentence-pair classification

I'm training a sentence-pair binary classification model using RoBERTa, but the model is not able to learn the positive class (the class with label 1). My dataset is imbalanced: training data: 0: 140623, 1: 5537; validation data: 0…
Sonu Gupta
  • 11
  • 3
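With roughly 25× more negatives than positives, the model can reach a low loss by always predicting class 0. One common remedy is inverse-frequency class weighting, so misclassifying a rare positive costs more. A sketch using the counts from the question (how the weights are then passed to the loss function, e.g. as a weight argument to a cross-entropy loss, is an assumption here):

```python
# Class counts from the question: heavy imbalance toward label 0.
counts = {0: 140623, 1: 5537}
total = sum(counts.values())

# Inverse-frequency weights: each class's weight is inversely proportional
# to its count, normalized so a perfectly balanced dataset would give 1.0.
weights = {label: total / (len(counts) * n) for label, n in counts.items()}
```

The rare positive class ends up with a weight well above 1, and the dominant negative class below 1, nudging the model away from the all-zeros solution.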