Before LLMs, when I wanted to build a text classifier (e.g., a sentiment analysis model that, given an input text, returns "positive", "neutral", or "negative"), I had to gather tons of data, choose a model architecture, and spend resources training the model.
Now that LLMs like ChatGPT and Google Bard are so capable, I'm wondering whether it's possible to build the same kind of text classifier on top of them. (I'm assuming this would require less data and fewer resources.)
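To make it concrete, here's a rough sketch of what I'm imagining: just prompting an LLM to do zero-shot classification. This uses the OpenAI Python client; the model name and prompt wording are only placeholders, and I don't know if this is actually the recommended approach.

```python
# Rough sketch of what I have in mind: zero-shot sentiment classification by prompting an LLM.
# Assumes the OpenAI Python client; model name and prompt are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_sentiment(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the sentiment of the user's text. "
                    "Reply with exactly one word: positive, neutral, or negative."
                ),
            },
            {"role": "user", "content": text},
        ],
        temperature=0,  # deterministic-ish output for classification
    )
    return response.choices[0].message.content.strip().lower()

print(classify_sentiment("The battery life is great, but the screen is dim."))
```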
Is this possible? Is there a walkthrough or tutorial I can follow? Thanks.