Questions tagged [squad]

18 questions
5
votes
1 answer

Implementing channels in haskell -- Tackling the awkward squad

In the paper Tackling the Awkward Squad, Simon Peyton Jones provides a "possible implementation" of a Channel: type Channel a = (MVar (Stream a), -- Read end MVar (Stream a)) -- Write end (the hole) type Stream a = MVar…
user2388535
3
votes
1 answer

How to map token indices from the SQuAD data to tokens from BERT tokenizer?

I am using the SQuAD dataset for answer span selection. After using the BertTokenizer to tokenize the passages, for some samples the start and end indices of the answer no longer match the real answer span position in the passage tokens. How to…
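Mismatches like this are usually resolved with the character offsets that fast tokenizers can return (e.g. `return_offsets_mapping=True` on `BertTokenizerFast`). A minimal sketch of the mapping step; the offsets below are hand-written for illustration, not real tokenizer output:

```python
# Sketch: map a character-level answer span to token indices using the
# (start, end) character offsets that a fast tokenizer reports per token.
# The offsets below are hand-written for illustration, not produced by a
# real tokenizer.

def char_span_to_token_span(offsets, answer_start, answer_end):
    """Return (token_start, token_end) for the tokens covering
    the character range [answer_start, answer_end)."""
    token_start = token_end = None
    for i, (start, end) in enumerate(offsets):
        if start == end:  # special tokens like [CLS]/[SEP] have empty offsets
            continue
        if token_start is None and end > answer_start:
            token_start = i
        if start < answer_end:
            token_end = i
    return token_start, token_end

# Context: "BERT was released in 2018", answer "2018" at chars [21, 25).
offsets = [(0, 0), (0, 4), (5, 8), (9, 17), (18, 20), (21, 25), (0, 0)]
print(char_span_to_token_span(offsets, 21, 25))  # -> (5, 5)
```

With a real fast tokenizer, `tokenizer(..., return_offsets_mapping=True)` supplies the `offsets` list, so the same loop applies unchanged.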
3
votes
0 answers

Fail to run trainer.train() with Hugging Face Transformers

I am trying to set up a TensorFlow fine-tuning framework for a question-answering project, using huggingface/transformers as the prototype, but I cannot get the Trainer to run. The experiment is conducted on Databricks, and the pre-trained model loaded is…
2
votes
1 answer

Running a BERT SQuAD model on GPU

I am using the BERT SQuAD model to ask the same question of a collection of documents (>20,000). The model currently runs on my CPU and takes around a minute to process a single document, which means I'll need several days to complete the…
vineeth venugopal
2
votes
1 answer

What does the appearance of BERT's special characters in SQuAD QA answers mean?

I'm running fine-tuned BERT and ALBERT models for question answering, and I'm evaluating their performance on a subset of questions from SQuAD v2.0 using SQuAD's official evaluation script. I use Huggingface…
1
vote
1 answer

Why do we need to write a function to "Compute Metrics" with Huggingface Question Answering Trainer when evaluating SQuAD?

Currently, I'm trying to build an extractive QA pipeline, following the Hugging Face course on the matter. There, they show how to create a compute_metrics() function to evaluate the model after training. However, I was wondering if there's a way to…
1
vote
0 answers

Fine-tuning with the Hugging Face Trainer when adding a layer on an Electra model

I'm trying to fine-tune my own model with the Hugging Face Trainer module. There was no problem when just training ElectraForQuestionAnswering; however, I added an additional layer to the model and tried the same process. And there comes this…
장준서
1
vote
1 answer

How can I build a custom context-based question answering (SQuAD) model using DeepPavlov?

I have the following queries: dataset format (how to split train, test, and validation data); where to place the dataset; how to change the path for the dataset reader; how to save the model in my own directory; and how to use the trained…
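Several of these DeepPavlov questions (dataset path, save directory) come down to editing the model's JSON config before training. A minimal sketch; the key names follow DeepPavlov's usual config layout but are written from memory, so check them against the actual squad config:

```python
# Sketch: DeepPavlov models are driven by JSON configs, so pointing a model
# at a custom dataset and a custom save directory amounts to editing a few
# keys. The key names below are assumed from DeepPavlov's usual layout.
import json

config = {
    "dataset_reader": {
        "class_name": "squad_dataset_reader",
        "data_path": "~/.deeppavlov/downloads/squad/",
    },
    "metadata": {
        "variables": {"MODEL_PATH": "~/.deeppavlov/models/squad/"},
    },
}

# Point the reader at a directory holding train/dev files in SQuAD format,
# and save the trained model in a directory of our own choosing.
config["dataset_reader"]["data_path"] = "./my_squad_data/"
config["metadata"]["variables"]["MODEL_PATH"] = "./my_model_dir/"

print(json.dumps(config["dataset_reader"], indent=2))
```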
1
vote
1 answer

DeepPavlov model training: no module found

I'm trying to start deeppavlov model training on GoogleColab: with configs.ner.ner_ontonotes_bert_mult.open(encoding='utf8') as f: nerconfig = json.load(f) nerconfig['dataset_reader']['data_path'] =…
sigma
1
vote
1 answer

Multiple answer spans in context, BERT question answering

I am writing a question answering system using pre-trained BERT with a linear layer and a softmax layer on top. When following the templates available on the net, the labels of one example usually consist of only one answer_start_index and one…
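When a context contains the answer more than once, one option is to enumerate every occurrence and then decide which span(s) to label. A minimal sketch; the example strings are made up for illustration:

```python
# Sketch: find every occurrence of an answer string in a context, so each
# occurrence can be turned into its own (start, end) character span.

def find_all_spans(context, answer):
    """Return a list of (start, end) character spans for every match."""
    spans, start = [], context.find(answer)
    while start != -1:
        spans.append((start, start + len(answer)))
        start = context.find(answer, start + 1)
    return spans

context = "The cat sat. The cat slept."
print(find_all_spans(context, "The cat"))  # -> [(0, 7), (13, 20)]
```

Each character span can then be mapped to token indices the same way a single span is.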
1
vote
0 answers

How to fine-tune BERT on SQuAD 2.0

I am really new to BERT and I would like to fine-tune the BERT base model on Google Colab. Basically, I set up the GPU, downloaded the data, and tried to call python run_squad.py: !git clone https://github.com/google-research/bert.git !wget…
Trent
1
vote
1 answer

Understanding the Hugging Face Transformers

I am new to the Transformers concept and I am going through some tutorials and writing my own code to understand question answering on the SQuAD 2.0 dataset using transformer models. On the Hugging Face website, I came across 2 different…
0
votes
0 answers

Using a BERT Q&A model (SQuAD) to answer questions from a dataset

I am developing a custom BERT Q&A model (in the same format as SQuAD) with a view to posing questions to a dataset for answers (the dataset is a large collection of reports). Is it possible to use the BERT model directly on the dataset, or would I…
Jon
0
votes
0 answers

bert-cased from Portuguese to English

I'm trying to build a BERT question-answering model. This is where I'm trying to build it from: https://nbviewer.org/github/piegu/language-models/blob/master/question_answering_BERT_large_cased_squad_v11_pt.ipynb model_name_or_path =…
0
votes
1 answer

How to understand the answer_start parameter of the SQuAD dataset for training a BERT QA model + practical implications for creating a custom dataset?

I am in the process of creating a custom dataset to benchmark the accuracy of the 'bert-large-uncased-whole-word-masking-finetuned-squad' model for my domain, to understand if I need to fine-tune further, etc. When looking at the different Question…
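The answer_start field in SQuAD-format data is a character offset into the context, so slicing the context at that offset must reproduce the answer text exactly. A minimal self-check worth running on any custom dataset before training; the sample below is made up for illustration:

```python
# Sketch: answer_start is a *character* index into the context string.
# The sample record below is invented, but it follows the SQuAD layout.

sample = {
    "context": "Normandy is a region in France.",
    "question": "Where is Normandy?",
    "answers": {"text": ["France"], "answer_start": [24]},
}

start = sample["answers"]["answer_start"][0]
text = sample["answers"]["text"][0]

# Sanity check: the slice at answer_start must equal the answer text.
assert sample["context"][start:start + len(text)] == text
print(sample["context"][start:start + len(text)])  # -> France
```

Records that fail this check are the usual cause of misaligned labels after tokenization.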