
SQL index

No, that's not what I mean. Let's say a user types: "who's the worst performing employee of the month?". The LLM turns this prompt into a query and extracts the required values from a DB. So far the data remains private (the schema doesn't matter), but when the extracted data is returned to the LLM for explanation, like "Mr. X performed the worst.", at that point I'd consider it a data risk. I want to stop the data from being returned to the LLM for elaboration.
Yeah, the SQL index in llama_index doesn't do any elaboration; it's a little basic right now
So the explanation never happens
The raw sql result is returned and that's it πŸ‘Œ
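In other words, the pipeline stops at the raw SQL result. A minimal sketch of that flow, with a stubbed text-to-SQL step (`fake_text_to_sql` is a hypothetical stand-in for the real LLM call, not a llama_index API) and an in-memory SQLite database:

```python
import sqlite3

def fake_text_to_sql(question: str) -> str:
    # Hypothetical stand-in for the LLM's text-to-SQL translation;
    # a real system would call the model here with the question + schema.
    return ("SELECT name, score FROM employees "
            "ORDER BY score ASC LIMIT 1")

def query_db(question: str, conn: sqlite3.Connection):
    """Translate the question to SQL, run it locally, and return the
    raw rows WITHOUT sending them back to the LLM for elaboration."""
    sql = fake_text_to_sql(question)
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, score REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Alice", 0.9), ("Bob", 0.4), ("Carol", 0.7)])

rows = query_db("who's the worst performing employee of the month?", conn)
print(rows)  # the raw result is the final output; the data never reaches an LLM
```

The privacy boundary here is that the model only ever sees the question (and schema), never the rows that come back.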
Oh nice! That's excellent. BTW, while we're at it, let me ask one last question: is it possible for that explanation to be done by another LLM? (For example, OpenAI's GPT does the query translation, but the returned raw results are handed over to something like Vicuna.) Is it possible to do so with LlamaIndex, or should I define a function or a LangChain chain to do so?
I think since you said it's basic, I already have my answer, but that'd be an interesting feature to add. Thank you very much for your time!
Yeah, for that I would use LangChain with a local LLM to interpret the result. I actually have a demo of exactly that process (but using OpenAI, of course lol)

I need to update the demo to use the latest version of the llama_index library, but the pieces are all there

https://huggingface.co/spaces/llamaindex/llama_index_sql_sandbox
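The split suggested above, one model for query translation and a local one for phrasing the answer, can be sketched like this. Both LLM calls are hypothetical stubs (`remote_text_to_sql`, `local_explain` are illustrative names, not LangChain or llama_index APIs); in practice the first would be an API model and the second a self-hosted one:

```python
import sqlite3

def remote_text_to_sql(question: str) -> str:
    # Hypothetical stand-in for the API model (e.g. GPT), which only
    # ever sees the question, never the data.
    return "SELECT name, score FROM employees ORDER BY score ASC LIMIT 1"

def local_explain(question: str, rows) -> str:
    # Hypothetical stand-in for a self-hosted model (e.g. Vicuna)
    # that turns the raw rows into prose; the data stays on your machine.
    name, score = rows[0]
    return f"{name} performed the worst (score {score})."

def answer(question: str, conn: sqlite3.Connection) -> str:
    sql = remote_text_to_sql(question)    # remote model: question -> SQL
    rows = conn.execute(sql).fetchall()   # executed locally
    return local_explain(question, rows)  # local model: rows -> prose

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, score REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Alice", 0.9), ("Bob", 0.4)])
print(answer("who's the worst performing employee of the month?", conn))
```

The design point is simply that the two model calls sit on opposite sides of the database: only the local one receives query results.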