No, that's not what I mean. Let's say a user types: "Who's the worst performing employee of the month?" The LLM turns this prompt into a query and extracts the required values from a DB. So far the data remains private (the schema doesn't matter), but when the extracted data is returned to the LLM for explanation, like "Mr. X performed the worst.", at that point I'd consider it a data risk. I want to stop the data from being returned to the LLM for elaboration.
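Concretely, something like this is what I'm after. A rough sketch, assuming LlamaIndex's NLSQLTableQueryEngine and its synthesize_response flag (names may differ across versions; the DB file and table are made up):

```python
from sqlalchemy import create_engine
from llama_index.core import SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine

# Hypothetical database with an "employees" table.
engine = create_engine("sqlite:///company.db")
sql_database = SQLDatabase(engine, include_tables=["employees"])

# synthesize_response=False: the LLM only writes the SQL; the raw rows
# come straight back from the DB and are never sent to the LLM for wording.
query_engine = NLSQLTableQueryEngine(
    sql_database=sql_database,
    synthesize_response=False,
)

response = query_engine.query("Who's the worst performing employee of the month?")
print(response.metadata["sql_query"])  # the generated SQL
print(response.response)               # raw query result, no LLM elaboration
```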
Oh nice, that's excellent! BTW, while we're at it, let me ask one last question: is it possible for that explanation to be done by another LLM? (For example, OpenAI's GPT does the query translation, but the returned raw results are handed over to something like Vicuna.) Is that possible with LlamaIndex, or should I define a function or a LangChain chain to do it?
Yeah, for that I would use LangChain with a local LLM to interpret the result. I actually have a demo of exactly that process (but using OpenAI, of course lol)
I need to update the demo to use the latest version of the LlamaIndex library, but the pieces are all there
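If you wire it yourself with LangChain, it looks roughly like this. A sketch, not the demo itself, assuming create_sql_query_chain plus a local Vicuna GGUF served through llama.cpp (the model path, DB, and prompt wording are placeholders):

```python
from langchain_openai import ChatOpenAI
from langchain_community.utilities import SQLDatabase
from langchain_community.llms import LlamaCpp
from langchain.chains import create_sql_query_chain
from langchain_core.prompts import PromptTemplate

db = SQLDatabase.from_uri("sqlite:///company.db")
question = "Who's the worst performing employee of the month?"

# LLM #1 (OpenAI): only translates the question into SQL.
sql_chain = create_sql_query_chain(ChatOpenAI(temperature=0), db)
sql = sql_chain.invoke({"question": question})
rows = db.run(sql)  # raw result; OpenAI never sees it

# LLM #2 (local Vicuna via llama.cpp): explains the raw rows.
local_llm = LlamaCpp(model_path="models/vicuna-13b.Q4_K_M.gguf")  # hypothetical path
explain = PromptTemplate.from_template(
    "Question: {question}\nSQL result: {rows}\nAnswer in one sentence:"
)
print((explain | local_llm).invoke({"question": question, "rows": rows}))
```

The key design choice is the same either way: the remote model stops at SQL generation, and only the local model ever touches the returned rows.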