Hi @jerryjliu0 you've done a stellar job on the GPT_index library. I've been trying it for a few days now and it is truly remarkable.
I don't have much experience in this domain (I've been studying LLMs for only a month now), but with this library and langchain I've gotten a lot of clarity on how to work with this technology.
However, I do have a few questions that it would be great if you could answer:

  1. Does querying from an index mean that we are totally cutting off the knowledge of the LLM? If our index only covers one aspect of a huge subject and the query is about another aspect, does the response from the index get combined with the LLM's own knowledge? If not, is there a way to gauge whether the response from the index is satisfactory and divert the query to use the LLM's knowledge instead?
  2. How do we create NLP-to-SQL queries by giving GPT Index knowledge of an existing database schema and data?
  3. Langchain has an SQLite example. Is it similar to what you've done with the latest release?
Any additional literature on these topics would be really appreciated along with your response. Thanks!!

Sorry for the long question.
thanks @bivob for your support and feedback!

  1. That's a really interesting question, and probably something left to the prompt. The set of default prompts is here: https://github.com/jerryjliu/gpt_index/blob/main/gpt_index/prompts/default_prompts.py. It does explicitly say "ignore prior knowledge" in the response synthesis prompt, but that's something you can customize as well! https://gpt-index.readthedocs.io/en/latest/how_to/custom_prompts.html
  2. Yeah that's it, we just give GPT Index the table schema. It'll probably help performance if we gave it example datapoints too, haven't gotten to that yet 🙂
  3. yes! we use langchain's SQLAlchemy wrapper, which can connect to SQLite or any other data source (rough sketch below)
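For anyone landing here later, here is a rough sketch of that setup, assuming the gpt_index API from around that release (GPTSQLStructStoreIndex plus the SQLDatabase wrapper over langchain/SQLAlchemy); class names and import paths have changed in later llama_index versions, and the "city_stats" table is just an illustrative schema:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, String, Integer
from gpt_index import GPTSQLStructStoreIndex, SQLDatabase  # gpt_index 0.2.x-era imports

# Define (or reflect) the schema of an existing database -- here an in-memory SQLite DB.
engine = create_engine("sqlite:///:memory:")
metadata_obj = MetaData()
city_stats_table = Table(
    "city_stats",
    metadata_obj,
    Column("city_name", String(16), primary_key=True),
    Column("population", Integer),
)
metadata_obj.create_all(engine)

# GPT Index only needs the table schema (exposed via langchain's SQLAlchemy wrapper)
# to translate natural-language questions into SQL against that table.
sql_database = SQLDatabase(engine, include_tables=["city_stats"])
index = GPTSQLStructStoreIndex(
    [],  # no unstructured documents; query the existing table directly
    sql_database=sql_database,
    table_name="city_stats",
)

response = index.query("Which city has the highest population?")
print(response)
```

Since the connection goes through SQLAlchemy, swapping SQLite for Postgres, MySQL, etc. should only be a matter of changing the create_engine URL.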
Understood...Thank you very much.
Yes, for Q1 you can try altering the response prompt to add some qualifiers to the instruction. For example, "Consider the following information if helpful" is much softer guidance. I tried this soft guidance and provided completely irrelevant reference material, and it still answered correctly.
If we restrict the prompt to the provided info, it follows the instruction and can't answer.
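To make that concrete, here is a minimal sketch of the two prompt variants, assuming the QuestionAnswerPrompt / text_qa_template API from the custom prompts guide linked above (template wording paraphrased, and the example query is purely illustrative; names may differ in newer releases):

```python
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader, QuestionAnswerPrompt

# Strict variant: close to the default prompt, which tells the model to ignore prior knowledge.
STRICT_QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question: {query_str}\n"
)

# Soft variant: the context is offered as optional help, so the model can
# fall back on its own knowledge when the index has nothing relevant.
SOFT_QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Consider the above information if helpful, and "
    "answer the question: {query_str}\n"
)

documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex(documents)

# Pass whichever template you want at query time.
strict_response = index.query("Who wrote Hamlet?", text_qa_template=QuestionAnswerPrompt(STRICT_QA_TMPL))
soft_response = index.query("Who wrote Hamlet?", text_qa_template=QuestionAnswerPrompt(SOFT_QA_TMPL))
```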