
I am a bit confused about the difference between, and the relative advantages of, fine-tuning BERT or another large language model for text classification versus simply using BERT embeddings inside a spaCy pipeline.

I believe the spaCy pipeline offers two advantages: speed and flexibility (you can combine different pipes, e.g. a POS tagger). But it seems to be the industry standard to simply fine-tune an LLM like BERT to do the classification directly. What would be the main benefits of approaching the problem that way?
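
For concreteness, the fine-tuning approach I keep seeing looks roughly like the sketch below, using Hugging Face transformers. This is just my understanding of the pattern, not code from any specific project; the model name, the IMDB dataset, and the hyperparameters are placeholders.

```python
# Rough sketch of fine-tuning BERT end-to-end for classification with Hugging Face
# transformers. Dataset (IMDB), model name and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # fresh classification head on top of BERT
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = load_dataset("imdb").map(tokenize, batched=True)
train_ds = dataset["train"].shuffle(seed=42).select(range(2000))  # small subset to keep it quick
eval_ds = dataset["test"].shuffle(seed=42).select(range(500))

args = TrainingArguments(output_dir="bert-textcat", num_train_epochs=3,
                         per_device_train_batch_size=16)

# All of BERT's weights get updated during training, not just the classifier head.
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```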

I use the spaCy pipeline myself and add the embedding layer, but since I see so many others just fine-tuning LLMs, I am curious what the advantage is there.
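
To make the comparison concrete, the kind of spaCy setup I mean is roughly the sketch below (simplified, not my exact pipeline). It assumes spaCy v3 with spacy-transformers and the en_core_web_trf model installed; the architecture names are the ones documented for wiring a text classifier onto a transformer component, and the labels are placeholders.

```python
# Simplified sketch of the "transformer embeddings inside a spaCy pipeline" route.
# Assumes spaCy v3 + spacy-transformers + the en_core_web_trf model; labels are placeholders.
import spacy

nlp = spacy.load("en_core_web_trf")  # pretrained transformer pipeline: tagger, parser, NER, etc.

# Text classifier whose tok2vec layer listens to the existing transformer pipe,
# so it reuses those contextual embeddings instead of learning its own encoder.
textcat_config = {
    "model": {
        "@architectures": "spacy.TextCatEnsemble.v2",
        "tok2vec": {
            "@architectures": "spacy-transformers.TransformerListener.v1",
            "grad_factor": 1.0,
            "pooling": {"@layers": "reduce_mean.v1"},
            "upstream": "transformer",
        },
        "linear_model": {
            "@architectures": "spacy.TextCatBOW.v2",
            "exclusive_classes": True,
            "ngram_size": 1,
            "no_output_layer": False,
        },
    }
}
textcat = nlp.add_pipe("textcat", config=textcat_config, last=True)
textcat.add_label("POSITIVE")  # placeholder labels
textcat.add_label("NEGATIVE")

# The other pipes (POS tagger, parser, NER) stay available alongside the classifier.
# Training the textcat then follows spaCy's usual workflow (nlp.initialize / nlp.update
# on Example objects, or the config-driven `spacy train` CLI).
```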

kbmmoran
  • What have you tried so far? The question needs sufficient code for a minimal reproducible example: https://stackoverflow.com/help/minimal-reproducible-example – D.L Apr 03 '23 at 09:54

0 Answers