SQL correctness

Hi, I have a question about using the LlamaIndex GPTSQLIndex to call an LLM and generate SQL queries.
  1. With the text-to-SQL prompt, the SQL query generated by the LLM is sometimes inaccurate. To improve accuracy, can a refine prompt be added on top of the text-to-SQL prompt? Would that make it more accurate?
  2. Is there any way to add a SQL validator after the LLM generates the SQL, but before it is actually executed?
  3. To build the SQL index for querying, we need to wrap the LLM with a SQLDatabase, and connecting takes a long time. Is there a way to keep the connection open once it is established?
  1. That's an interesting idea! It may help.
  1. Have you looked into guardrails yet? https://gpt-index.readthedocs.io/en/latest/how_to/output_parsing.html#guardrails
I haven't had a chance to use it, but from their docs they do SQL validation.
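As a rough idea of what such pre-execution validation could look like without any extra package: you can ask the database itself to *plan* the statement without running it. The sketch below assumes a SQLite backend and uses `EXPLAIN`, which parses and compiles the query (so syntax errors and missing tables surface) but reads and writes no rows; `validate_sql` is a hypothetical helper, not a LlamaIndex API.

```python
import sqlite3

def validate_sql(conn: sqlite3.Connection, sql: str) -> bool:
    """Compile the statement without executing it.

    EXPLAIN makes SQLite parse and plan the query, so syntax errors
    and references to missing tables/columns raise here, while the
    underlying SELECT/INSERT is never actually run.
    """
    try:
        conn.execute(f"EXPLAIN {sql}")
        return True
    except sqlite3.Error:
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city_stats (city_name TEXT, population INTEGER)")

print(validate_sql(conn, "SELECT city_name FROM city_stats"))  # valid -> True
print(validate_sql(conn, "SELEC city_name FROM city_stats"))   # typo -> False
```

Other databases have an equivalent (e.g. a parse-only `EXPLAIN`), so the same gate works wherever the LLM-generated SQL is about to be executed.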

  1. Oh, how long does it take?
Hi @Logan M, thanks. I looked into guardrails, but it's a third-party tool that we're not allowed to use in our case. Is there any way to add a custom validator, without guardrails, before the LLM actually executes the SQL query?
For example, if we have some common query templates, we could use vector similarity to compare whatever the LLM generates against them, and only execute when the similarity is high. But how do we hook this into the SQL index so it validates before actually executing the query?
  1. As for the database connection, it can sometimes take 2 minutes.
Currently I just make an async call to the async query run. But for long-term use, in order for SQL index queries to return a response in under 10 seconds, how can I make the SQL connection faster and keep it from expiring?
There are two guardrails packages that come up in Google. Did you see this one? https://github.com/ShreyaR/guardrails

I'm not sure why it wouldn't be allowed. It's another python package like llama index πŸ€”
@Logan M Thanks. It's an open-source package, and I'm not sure we're able to use it. Is there any other way to support validating the LLM-generated query? 2. And what about my 3rd question, regarding the SQL database connection?
Currently guardrails seems like the only option. Very open to PRs on this!

For 3, I think the best option is connecting during index initialization, and then keeping the index object in memory between queries? πŸ€”
I see. @Logan M, I think connecting during index initialization can help. However, when building the SQL index we need to build context_builder = SQLContextContainerBuilder(db), query the index for context based on each user query, and then build the SQL index with the db and the context container. How can we keep the index in memory if we need to build a different context container for each user query?
It's been a second since I've done this, but can't you keep the engine in memory, since that's the part that makes the connection?

https://gpt-index.readthedocs.io/en/latest/guides/tutorials/sql_guide.html#setup
Do you mean keeping the SQLAlchemy engine open in memory?
Yea, that's the one. Then you can create tables and whatnot as needed for each user, and pass the engine to the index
Thanks Logan. How do I keep the SQLAlchemy engine open in memory? Could you share more tips? I just followed this guide before: https://gpt-index.readthedocs.io/en/latest/guides/tutorials/sql_guide.html#setup. But it doesn't show how to keep the engine open in memory.
Are you running an api server? When you initialize the server, I would create the engine as a sort of singleton/global variable.

You might have to check the connection and reconnect as needed, but at least it would limit the time spent waiting

For example, in this demo, I keep a vector index as a global, shared across all functions (a similar principle)

https://github.com/logan-markewich/llama_index_starter_pack/blob/main/flask_react/index_server.py
Thanks. I will have a try!