The community members are discussing how to use the LangChain SQLDatabase Chain with an embedded table schema index, similar to the feature provided by Llamaindex. The main challenges they face are:
1. The SQL index query can only return the SQL result, which needs to be converted into a customized summary. However, when the SQL result is large, directly using the QuestionAnswer prompt to convert it to natural language hits the token limit.
2. Passing the entire SQL result as one document throws format errors, while splitting each row as a separate document results in the prompt only returning one row record.
The community members suggest creating a list index on the fly and querying it to generate the summary, since Llamaindex can handle the token limits. However, they are unsure how well it will perform with raw SQL results and may need to customize the prompt template.
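A minimal sketch of that suggestion, assuming an older Llamaindex release where indices are queried directly via `index.query` (newer releases use `as_query_engine()` instead); the rows, column names, and question below are hypothetical placeholders:

```python
from llama_index import Document, GPTListIndex

# sql_rows stands in for the raw result of the SQL index query (hypothetical data).
sql_rows = [("Tokyo", 13960000), ("Seoul", 9776000), ("Toronto", 2930000)]

# Wrap each row in its own Document so no single chunk exceeds the token limit.
documents = [
    Document(text=f"city: {city}, population: {population}")
    for city, population in sql_rows
]

# Build a list index on the fly; a list index visits every node,
# so the summary covers all rows instead of only the top-matching one.
index = GPTListIndex.from_documents(documents)

# tree_summarize folds per-chunk answers together, keeping each LLM call
# under the token limit even when the result set is large.
response = index.query(
    "Summarize these query results in natural language.",
    response_mode="tree_summarize",
)
print(response)
```

If the default wording isn't the customized summary you want, the query prompt itself can also be overridden, in line with the note above about possibly customizing the prompt template.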
Additionally, the community members discuss whether the SQLIndex query can generate a natural language result directly, instead of just presenting the SQL dataset. The response indicates that this feature is not currently available, but the Llamaindex team would like to add it in the future.
Finally, the community members inquire about Llamaindex's support for GPT-4, and receive guidance on how to set the model name and ensure access to the GPT-4 model.
Hi, how do I use the langchain SQLDatabase Chain with an embedded table schema index? Llamaindex provides an option to store an embedded table schema index from the service context. How can I use a similar one from Langchain?
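For context, a bare-bones LangChain SQLDatabase Chain setup looks roughly like the sketch below. As far as I know LangChain doesn't ship an embedded table schema index like Llamaindex's, but `include_tables` and `custom_table_info` on `SQLDatabase` let you hand it trimmed or hand-written schema descriptions. The connection string, table name, and descriptions are placeholders, and import paths shift between LangChain versions:

```python
from langchain.llms import OpenAI
from langchain import SQLDatabase, SQLDatabaseChain  # import paths differ across versions

# Connection string and table details here are placeholders.
db = SQLDatabase.from_uri(
    "sqlite:///example.db",
    include_tables=["city_stats"],   # limit the schema the LLM sees
    custom_table_info={              # hand-written schema description for the prompt
        "city_stats": "city_stats(city_name TEXT, population INT) -- one row per city"
    },
)

llm = OpenAI(temperature=0)
chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)
result = chain.run("Which city has the largest population?")
print(result)
```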
@Logan M Thanks Logan. This is similar to what I'm trying now. However, the challenge I currently face is that the sql index query can only return the sql result, and we need to convert that sql result back into a customized summary. Sometimes the sql result is large, so directly using the QuestionAnswer prompt to convert it back to natural language hits the token limit. Then I tried using an index, but passing the whole sql result as one document still throws a format error, and if I split each row into its own document, the prompt always returns only 1 row record. Do you have any better suggestion for this?
@Logan M And one more question: does llamaindex currently support gpt4? I tried switching to gpt4, but when I use vector index.query, it shows an error saying it needs an engine or deployment id, even though I've already passed the api key.
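For the GPT-4 question, a minimal sketch of wiring the model name through a service context, assuming an older Llamaindex release with `LLMPredictor`/`ServiceContext` (newer releases configure this differently); the data path and query are placeholders. The "needs engine or deployment id" error usually means the calls are going to Azure OpenAI, which addresses models by deployment name rather than just a model name and API key:

```python
from langchain.chat_models import ChatOpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor, ServiceContext, SimpleDirectoryReader

# Plain OpenAI: setting the model name is enough, provided your API key has gpt-4 access.
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-4", temperature=0))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

documents = SimpleDirectoryReader("data").load_data()  # "data" is a placeholder path
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)

response = index.query("What does the document say about revenue?")
print(response)

# On Azure OpenAI (the usual source of "needs engine or deployment id"), the model is
# addressed by your deployment name instead, e.g. something like:
#   from langchain.chat_models import AzureChatOpenAI
#   llm = AzureChatOpenAI(deployment_name="my-gpt4-deployment", temperature=0)
# and the embedding model used by the vector index also needs its own deployment configured.
```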