I am experimenting with a new UI where typical profile settings are updated via chat instead of through frontend components. For example, instead of showing a billing page where users cancel their subscription, they can just chat with the bot.
I am wondering if it's possible to get my LLM (let's say GPT-3) to generate the GraphQL query necessary to run these operations. I am thinking I could ingest my GraphQL schema into a vector database like Pinecone, then feed the relevant schema chunks into the LLM as context so that it can generate the appropriate GQL query/mutation.
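To make the idea concrete, here is a minimal sketch of the retrieval-then-prompt step I have in mind. Everything here is hypothetical: the schema chunks are made up, and naive substring scoring stands in for real embeddings + Pinecone similarity search, just to show the shape of the pipeline before the prompt is sent to the LLM.

```python
# Hypothetical schema, pre-split into chunks (one per operation) the way
# it would be stored in a vector DB like Pinecone.
SCHEMA_CHUNKS = [
    "type Mutation { cancelSubscription(userId: ID!): Subscription }",
    "type Mutation { updateEmail(userId: ID!, email: String!): User }",
    "type Query { invoices(userId: ID!): [Invoice] }",
]

def retrieve(user_message: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank schema chunks by crude word overlap with the user's message.

    In production this would be an embedding lookup against Pinecone;
    substring matching is only a self-contained stand-in.
    """
    words = user_message.lower().split()
    def score(chunk: str) -> int:
        lowered = chunk.lower()
        return sum(1 for w in words if w in lowered)
    return sorted(chunks, key=score, reverse=True)[:k]

def build_prompt(user_message: str) -> str:
    """Assemble the context-augmented prompt for the LLM."""
    context = "\n".join(retrieve(user_message, SCHEMA_CHUNKS))
    return (
        "You write GraphQL operations for a billing/profile API.\n"
        f"Relevant schema:\n{context}\n"
        f"User request: {user_message}\n"
        "Respond with a single GraphQL operation only."
    )
```

The resulting string would then go to the completion endpoint, and the returned query/mutation would be validated against the schema before execution (you would not want to run model output blindly against your API).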
Is this feasible, or a bad idea?
I have only theorized this so far.