Has anyone figured out how to do custom prompts with ComposableGraph.query()?
Yea you'll need to set the prompts in the query_configs

Plain Text
query_configs = [
    {
        "index_struct_type": "simple_dict",
        "query_mode": "default",
        "query_kwargs": {
            "similarity_top_k": 3,
            "text_qa_template": my_qa_template,
            "refine_template": my_refine_template,
        }
    },
    ...
]
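For reference, the templates passed above can be built from plain format strings. This is a sketch: in llama_index these strings are wrapped in prompt classes (e.g. `QuestionAnswerPrompt`; exact class names vary by version), and the variable names `{context_str}` and `{query_str}` follow the library's built-in QA prompt.

```python
# Sketch of a custom QA template string. The placeholders {context_str}
# and {query_str} are the variables the library substitutes at query time.
my_qa_template_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information, answer the question: {query_str}\n"
)

# Plain-string demonstration of the substitution the library performs:
filled = my_qa_template_str.format(
    context_str="Paris is the capital of France.",
    query_str="What is the capital of France?",
)
print(filled)
```

In llama_index you would wrap this string in the appropriate prompt class and pass the result as `text_qa_template` in `query_kwargs`.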
thanks @Logan M. So how do you control which prompt is used for the index lookup for each index, and which prompt is used for the final text generation with the instructions?
Depends on what your graph looks like

if your bottom layer is a vector index, then the above will control the final text generation.

if your top level is a tree or list, you can set the templates in the same way I think 🤔
@Logan M I have two GPTSimpleVectorIndex instances put together in a single graph. I want the graph to do a vector lookup on both indexes and return relevant context, which will then be used to generate the final text along with the instructions around that context
So the top level index is a list?
@Logan M like this:
graph = ComposableGraph.from_indices(
    GPTListIndex,
    [global_index, index_2],
    index_summaries=[
        "global summary",
        "index 2 summary"
    ],
)
Cool!

So your query config would look like this


Plain Text
query_configs = [
    {
        "index_struct_type": "simple_dict",
        "query_mode": "default",
        "query_kwargs": {
            "similarity_top_k": 3,
            "text_qa_template": my_qa_template,
            "refine_template": my_refine_template,
        }
    },
    {
        "index_struct_type": "list",
        "query_mode": "default",
        "query_kwargs": {}
    }
]
So I think how it works is: your query runs in both sub-indexes. Then, using those responses, the query is run again using the list index
What is the problem you are facing then? 🤔 you can also set the prompt template for the list index in the config, similar to how it's set for the vector index
thanks. What does my_refine_template do? the qa_template has the instructions for GPT right?
Yes. But if the text doesn't all fit into the first LLM call, llama index refines the answer across a few LLM calls.

The refine template presents the LLM with new context, the original query, and the previous answer, and asks the LLM to either update its previous answer using the new context or repeat the previous answer if the new context is not helpful
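As a rough illustration of the shape of such a template: the variable names `{query_str}`, `{existing_answer}`, and `{context_msg}` follow llama_index's built-in refine prompt, but check the defaults shipped with your version.

```python
# Sketch of a refine template string. At each refine step the library
# fills in the original question, the answer so far, and new context.
my_refine_template_str = (
    "The original question is: {query_str}\n"
    "We have an existing answer: {existing_answer}\n"
    "Here is some new context:\n"
    "---------------------\n"
    "{context_msg}\n"
    "---------------------\n"
    "Using the new context, refine the existing answer. "
    "If the context is not useful, repeat the existing answer.\n"
)

# Demonstration of the substitution performed per refine step:
print(my_refine_template_str.format(
    query_str="What is the capital of France?",
    existing_answer="Paris.",
    context_msg="Paris has been France's capital since 987.",
))
```

As with the QA template, you would wrap this string in the library's refine prompt class and pass it as `refine_template` in `query_kwargs`.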
thanks @Logan M, where do I find examples of my_refine_template?
It will look slightly different depending on if you use gpt-3.5 or davinci

See the faq for some links to the ones inside llama index 💪

https://discord.com/channels/1059199217496772688/1059200010622873741/1088122994251010139