Prompts

Hi all, I have a question regarding graphs. I am creating a composable graph like this:
Plain Text
graph = ComposableGraph.from_indices(
    GPTTreeIndex,
    [index_set[y] for y in countries + top_level_subfolders],
    index_summaries=index_summaries,
    service_context=service_context,
)

How and where can I customize my prompts? I want the answers to be specifically detailed. Should I do this in query_configs? I tried, but it did not give any results. Thanks in advance!
Yea you can customize the prompts in the query configs. You'll want to create both a text_qa_template and a refine_template.
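Something like this might work (a rough sketch against the 0.5-era llama_index API; the exact import paths and the "tree" index_struct_type key are assumptions that may differ in your version):
Plain Text
from llama_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt

# Custom QA prompt -- {context_str} and {query_str} get filled in at query time
QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the question in as much detail as possible: {query_str}\n"
)

# Custom refine prompt -- used when the answer is refined over multiple chunks
REFINE_TMPL = (
    "The original question is: {query_str}\n"
    "We have an existing answer: {existing_answer}\n"
    "Here is more context:\n{context_msg}\n"
    "Refine the existing answer with this context, keeping it detailed.\n"
)

text_qa_template = QuestionAnswerPrompt(QA_TMPL)
refine_template = RefinePrompt(REFINE_TMPL)

# Pass both templates through query_configs when querying the graph
query_configs = [
    {
        "index_struct_type": "tree",
        "query_mode": "default",
        "query_kwargs": {
            "text_qa_template": text_qa_template,
            "refine_template": refine_template,
        },
    },
]
response = graph.query("your question", query_configs=query_configs)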

What did you try that didn't work well?
@Logan M thanks for answering!
I think I tried text_qa_template and it did not have any effect.
Regarding those, I have a question: from an index query, is there a way to retrieve the {context_str} that was passed into the prompt, for instance?
Yea, you'll also want to define a refine template. Check out the FAQ near the bottom for some more instructions and links on how to create prompts
https://discord.com/channels/1059199217496772688/1059200010622873741/1088122994251010139
You can either check response.source_nodes

Or you can use the llama logger to see what was sent to OpenAI (bottom of this notebook)
https://github.com/jerryjliu/llama_index/blob/main/examples/vector_indices/SimpleIndexDemo.ipynb
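For reference, something like this (a rough sketch; source node attribute names changed between versions, so treat .source_text and .similarity as assumptions):
Plain Text
# Inspect which chunks were retrieved as context for the answer
response = index.query("your question")
for source_node in response.source_nodes:
    print(source_node.source_text[:200])  # first 200 chars of the chunk
    print(source_node.similarity)         # retrieval score, if available

# Or capture the raw prompts sent to the LLM with the llama logger
from llama_index import ServiceContext
from llama_index.logger import LlamaLogger

llama_logger = LlamaLogger()
service_context = ServiceContext.from_defaults(llama_logger=llama_logger)
# ...build and query the index with this service_context, then:
print(llama_logger.get_logs())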
Thanks @Logan M . Also, I always seem to get only 1 node as a source. Is there a parameter that allows it to fetch more nodes, or is it just the way I am using my vector index?
Yea the default is one, you can adjust this with something like this

index.query(..., similarity_top_k=3, response_mode="compact")

The response mode will just stuff as much text as possible into each llm call, rather than making one call per top k chunk
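Putting that together, something like this (sketch; the question text is just a placeholder):
Plain Text
# Fetch the top 3 chunks and pack them into as few LLM calls as possible
response = index.query(
    "your question",
    similarity_top_k=3,
    response_mode="compact",
)
print(len(response.source_nodes))  # should now show up to 3 source nodes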
Okay I just tried it and it does in fact return more nodes.

The reason I asked about the context initially is that I added a QnA prompt as per the examples, and then I started getting answers referring to the context, which is not known by the user. So the response alone is lacking, and I wanted to be able to show the context of the query
Yeaaa gpt-3.5 likes to mention the context a lot, which is annoying. I've tried playing around with the prompt templates but tbh it never fully follows my instructions lol