What are the most suitable open source LLMs and frameworks for fine-tuning? I intend to use the model in a fairly specific domain, perhaps as a physics mentor for a school. How long might it take (on a 3070 Ti with 11 GB) to reach acceptable accuracy for this purpose? I also assume that fine-tuning on a new language works the same way as fine-tuning on any other data, or is that not the case?
I couldn't find any open source LLMs that support the language I need, or that are even partially trained on it, which would have made fine-tuning less complex. There are LLMs that support languages from the same family, but I suspect starting from one of those is more likely to cause issues and confusion, since it would be harder for the model to distinguish between the languages.
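
For context, this is roughly the kind of setup I had in mind: a minimal sketch assuming a 4-bit quantized ~7B base model with LoRA adapters via Hugging Face transformers/peft/bitsandbytes/datasets. The model name, the `physics_tutor.jsonl` file, and the hyperparameters are placeholders I haven't validated, not recommendations.

```python
# Minimal QLoRA-style fine-tuning sketch for limited VRAM.
# Assumptions: transformers, peft, bitsandbytes, and datasets are installed,
# and physics_tutor.jsonl holds one JSON object per line with a "text" field
# containing a full prompt/answer example in the target language.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder base model

# Load the base model in 4-bit so a ~7B model fits in consumer VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Train small LoRA adapters instead of updating all of the base weights.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Tokenize the (hypothetical) instruction dataset for causal LM training.
dataset = load_dataset("json", data_files="physics_tutor.jsonl", split="train")

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    args=TrainingArguments(
        output_dir="physics-tutor-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,  # effective batch size of 16
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
        optim="paged_adamw_8bit",
        logging_steps=10,
        save_strategy="epoch",
    ),
)
trainer.train()
trainer.save_model("physics-tutor-lora")
```

My thinking was that 4-bit loading plus LoRA keeps only the small adapter weights trainable, which is what would make this feasible on a single consumer GPU at all, but I'd appreciate corrections if this approach is wrong for teaching the model a new language rather than a new domain.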