
Plain Text
from llama_index.langchain_helpers.agents import LlamaToolkit, create_llama_chat_agent, IndexToolConfig, GraphToolConfig

agent_chain = create_llama_chat_agent(toolkit, llm)
while True:
    query = input("What do you want to ask? ")
    print('question: ', query)
    response = agent_chain.run(input=query)
    print(f'Agent: {response}')

When integrating a language model with an agent_chain, is it possible to obtain the resources or chunks used as the context by the agent_chain?
You'll want to use the lower-level integrations, like this notebook: https://github.com/jerryjliu/llama_index/blob/main/examples/langchain_demo/LangchainDemo.ipynb

Then in the func for the tool, you can put your own wrapper function instead of a lambda, one that calls query and gets the source nodes from the response object
(Also, a PR to make the llama index agent wrappers accommodate this would be super cool!)
Thanks, I'll read the notebook
Plain Text
toolkit = LlamaToolkit(
    index_configs=index_configs,
    graph_configs=graph_configs
)
memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True
)


I'm using a graph and the toolkit; can I still get the resources?
sadly, not with the toolkit right now 😦
I see. Does that mean I cannot compose my indices into a graph?
you can still do that with the notebook I shared above 💪 Create your graph as usual, and the func property of the Tool object will query your graph
I see. Thanks.

func=lambda q: str(graph.query(q))

how can I get the resources used when querying with my question?
something like this, using a wrapper function around query

Plain Text
graph = [create your graph]
query_configs = [create your query configs]

def query_index(q):
  response = graph.query(q, query_configs=query_configs)
  source_nodes = response.source_nodes
  source_texts = [x.node.get_text() for x in source_nodes]
  source_scores = [x.score for x in source_nodes]
  # Do something with the texts/scores?
  ...
  return str(response)

...
func=lambda q: query_index(q),
...
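(Side note: since query_index already takes a single argument, passing func=query_index directly works just as well as wrapping it in a lambda.)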
I tried using Tool and initialize_agent to build an agent_chain, but it does not give answers as accurate as create_llama_chat_agent does.
Plain Text
graphs = []
tools = []
for idx, file in enumerate(files):
    print(file)
    graph = get_graph('indexing/' + file + '.json')
    graphs.append(graph)

    desc = graph_desc[idx] if idx < len(graph_desc) else 'others'
    tool = Tool(
        name=file + ' Graph',
        # bind graph as a default argument; a bare lambda would capture
        # only the last graph from the loop
        func=lambda q, graph=graph: str(graph.query(q)),
        description="useful for when you want to answer questions about the " + desc,
        return_direct=True
    )
    tools.append(tool)


memory = ConversationBufferMemory(memory_key="chat_history")
llm=OpenAI(temperature=0.2, model_name="gpt-3.5-turbo", max_tokens=512)
agent_chain = initialize_agent(tools, llm, agent="conversational-react-description", memory=memory)
return agent_chain


This is my code using initialize_agent
Plain Text
memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True
)

It seems this way the agent gives much more accurate answers. How can I improve the other approach, since only that way can I get the resources?
When you created it with the toolkit, did you use the same index descriptions?

Did you define a query config for the toolkit method?
Plain Text
graph_desc = [' general gurufocus tutorials questions, such as some key page tutorials or stock summary page', ' warren buffet and BERKSHIRE HATHAWAY INC SHAREHOLDER LETTERS', ' 10K sec filing for some popular stocks']
graphs = []
graph_configs = []
for idx, file in enumerate(files):
    print(file)
    graph = ComposableGraph.load_from_disk('indexing/' + file + '.json', llm_predictor=llm_predictor, prompt_helper=prompt_helper)
    graphs.append(graph)
    # graph config (note: every graph gets the same tool name here)
    graph_config = GraphToolConfig(
        graph=graph,
        name="Graph Index",
        description="useful for when the user asks about gurufocus" + graph_desc[idx],
        query_configs=query_configs,
        tool_kwargs={"return_direct": True}
    )
    graph_configs.append(graph_config)

Plain Text
# load index
index_list = []
index_configs = []
index_desc = { 'getting-started.json': ', including homepage dashboard, stock summary page, guru pages, insider trades, all-in-one screener, excel add-in, and google sheets add-on', 'stock-summary-page.json': ', including stock summary page, warning signs, gf score, gf value, performance charts, peter lynch chart, segment data charts' }
for file in files:
    file_list = os.listdir('indexing/' + file)
    for filename in file_list:
        file_path = f'indexing/{file}/{filename}'
        cur_index = GPTSimpleVectorIndex.load_from_disk(file_path, llm_predictor=llm_predictor, prompt_helper=prompt_helper)
        index_list.append(cur_index)

        desc = index_desc[filename] if filename in index_desc else ' '
        tool_config = IndexToolConfig(
            index=cur_index,
            name=f"Vector Index {filename}",
            description=f"useful for when you want to answer queries about the {filename[:-5]} " + desc,
            index_query_kwargs={"similarity_top_k": 3},
            tool_kwargs={"return_direct": True}
        )
        index_configs.append(tool_config)

toolkit = LlamaToolkit(
    index_configs=index_configs,
    graph_configs=graph_configs
)
This is how I defined the toolkit
https://github.com/jerryjliu/llama_index/blob/3189e32ef2f97547147f476d2b3402d4b2cdd34d/gpt_index/langchain_helpers/agents/agents.py#L48

I just looked at the source code for the toolkit's create_llama_chat_agent. I wonder if it's possible to modify this to pass func into the tool, so that I can get the resources while still using the toolkit.
It might be possible to pass it in 🤔

But back to the original problem, you aren't passing in the query configs in the raw tool approach.

You'll want to do something like this
lambda q: str(graph.query(q, query_configs=query_configs))
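
Putting that together, a minimal sketch of the corrected tool setup (assuming the graphs, files, graph_desc, and query_configs variables from the snippets above; note the default-argument binding so each tool keeps its own graph):

Plain Text
tools = []
for idx, graph in enumerate(graphs):
    desc = graph_desc[idx] if idx < len(graph_desc) else 'others'
    tools.append(Tool(
        name=files[idx] + ' Graph',
        # bind graph as a default argument so each tool queries its own
        # graph; a bare lambda would capture only the last loop value
        func=lambda q, graph=graph: str(graph.query(q, query_configs=query_configs)),
        description="useful for when you want to answer questions about the " + desc,
        return_direct=True
    ))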
I see. I'll modify the code and try again.
Plain Text
# define LLM
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.2, model_name="gpt-3.5-turbo", max_tokens=num_outputs))
prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)

decompose_transform = DecomposeQueryTransform(
    llm_predictor, verbose=True
)

# define query configs for graph
query_configs = [
    {
        "index_struct_type": "simple_dict",
        "query_mode": "default",
        "query_kwargs": {
            "similarity_top_k": 1,
            "include_summary": True,
            "refine_template": CHAT_REFINE_PROMPT
        },
        "query_transform": decompose_transform
    },
    {
        "index_struct_type": "list",
        "query_mode": "default",
        "query_kwargs": {
            "response_mode": "tree_summarize",
            "verbose": True,
            "refine_template": CHAT_REFINE_PROMPT
        }
    },
]

# define query configs for index
index_query_configs = [
    {
        "index_query_kwargs": {"similarity_top_k": 3},
        "tool_kwargs": {"return_direct": True}
    }
]

I added these query_configs, but it often answers "Sorry, the provided knowledge source context is not related to the topic"
Plain Text
# add graph tools
graph_desc = [' general gurufocus tutorials questions, such as some key page tutorials or stock summary page', ' warren buffet and BERKSHIRE HATHAWAY INC SHAREHOLDER LETTERS', ' 10K sec filing for some popular stocks']
graphs = []
tools = []
for idx, file in enumerate(files):
    print(file)
    graph = get_graph('indexing/' + file + '.json')
    graphs.append(graph)

    desc = graph_desc[idx] if idx < len(graph_desc) else 'others'
    tool = Tool(
        name=file + ' Graph',
        # bind graph as a default argument so each tool queries its own graph
        func=lambda q, graph=graph: str(query_index(q, graph, query_configs)),
        description="useful for when you want to answer questions about the " + desc,
        return_direct=True
    )
    tools.append(tool)

# add all index tools
index_desc = { 'getting-started.json': ', including homepage dashboard, stock summary page, guru pages, insider trades, all-in-one screener, excel add-in, and google sheets add-on', 'stock-summary-page.json': ', including stock summary page, warning signs, gf score, gf value, performance charts, peter lynch chart, segment data charts' }
for file in files:
    file_list = os.listdir('indexing/' + file)
    for filename in file_list:
        file_path = f'indexing/{file}/{filename}'
        cur_index = GPTSimpleVectorIndex.load_from_disk(file_path, llm_predictor=llm_predictor, prompt_helper=prompt_helper)

        desc = index_desc[filename] if filename in index_desc else ' '

        tool = Tool(
            name=file + ' index',
            # fix: the original referenced an undefined `index`; bind
            # cur_index as a default argument instead
            func=lambda q, index=cur_index: str(query_index(q, index, index_query_configs)),
            description="useful for when you want to answer questions about the " + desc,
            return_direct=True
        )
        tools.append(tool)

This is how I add Tools for both the graph and the indices
πŸ˜΅β€πŸ’«

Maybe try changing the query configs to this for the non-toolkit apporach

Plain Text
query_configs = [
    {
        "index_struct_type": "simple_dict",
        "query_mode": "default",
        "query_kwargs": {
            "similarity_top_k": 3,   # updated to match the index query config?
            "include_summary": True,
            "refine_template": CHAT_REFINE_PROMPT
        },
        "query_transform": decompose_transform
    },
    {
        "index_struct_type": "list",
        "query_mode": "default",
        "query_kwargs": {
            "response_mode": "tree_summarize",
            "verbose": True,
            "refine_template": CHAT_REFINE_PROMPT
        }
    },
]
I'm sorry I was so confused and took up so much of your time. I'll try again. Thanks
hahaha no worries, hopefully we are getting somewhere 😅 🙏
The first screenshot is the result using the query_configs and the second is the result using the toolkit. The first one does not answer the question at all. Is it because I didn't set a proper query_config? I use this config:
Plain Text
query_configs = [
    {
        "index_struct_type": "simple_dict",
        "query_mode": "default",
        "query_kwargs": {
            "similarity_top_k": 3,   # updated to match the index query config?
            "include_summary": True,
            "refine_template": CHAT_REFINE_PROMPT
        },
        "query_transform": decompose_transform
    },
    {
        "index_struct_type": "list",
        "query_mode": "default",
        "query_kwargs": {
            "response_mode": "tree_summarize",
            "verbose": True,
            "refine_template": CHAT_REFINE_PROMPT
        }
    },
]
Attachments
Screen_Shot_2023-04-13_at_5.05.55_PM.png
Screen_Shot_2023-04-13_at_5.09.51_PM.png
hmmm, I'm really not sure at this point 😅

How about this. Use the toolkit for now, and I'll see if I can figure out a way to provide sources using the toolkit and make a PR 🙏
Thank you so much.
@JW I've landed the PR to show sources from the create_llama_agent setup; it's in the latest version

Check out this notebook. There's a new item to add to tool_kwargs

Then, it will spit out some JSON that you can parse with json.loads 💪

https://github.com/jerryjliu/llama_index/blob/main/examples/chatbot/Chatbot_SEC.ipynb
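
A rough sketch of what that looks like, reusing the cur_index/filename variables from the toolkit snippet above. The return_sources flag and the parsed JSON keys here are assumptions, so check the linked notebook for the exact names:

Plain Text
import json

tool_config = IndexToolConfig(
    index=cur_index,
    name=f"Vector Index {filename}",
    description="useful for when you want to answer queries about the index",
    index_query_kwargs={"similarity_top_k": 3},
    # assumption: the new tool_kwargs item is return_sources; verify
    # against the Chatbot_SEC notebook linked above
    tool_kwargs={"return_direct": True, "return_sources": True},
)

# with return_direct=True the agent returns the tool output verbatim,
# so the response should be a JSON string we can parse
raw = agent_chain.run(input="What does the 10-K say about risk factors?")
parsed = json.loads(raw)  # assumption: contains the answer plus source info
print(parsed)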