
Updated 2 years ago

Langchain print sources

At a glance

The community member is asking if it is possible to print the response.source_node using the llama_index as a tool in a LangChain agent. Other community members respond by providing suggestions on how to set up the tool and agent, including using a custom function instead of a lambda function. They provide an example of how to define the tool and the function, but there is no explicitly marked answer in the comments.

Useful resources
Is it possible to print the response.source_node when using llama_index as a tool in a LangChain agent?
5 comments
How are you setting up the tool and agent now?
I'm using an agent with, for now, a single tool (where I specified the gpt_index). Furthermore, I'm using agent="conversational-react-description" and ConversationBufferMemory as the memory.
Right! And I'm guessing you are using a lambda function for the tool func?

You can replace the lambda with your own function that calls the index query, prints the source nodes, and returns the response
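A minimal sketch of such a function might look like the following. The `index` object here is a stand-in stub for a real llama_index index (so the pattern runs without the library installed); the attribute names `response.response` and `get_formatted_sources()` match what the question below refers to, but check them against your installed llama_index version.

```python
class _FakeResponse:
    """Stand-in for a llama_index query response (hypothetical stub)."""
    def __init__(self, text, sources):
        self.response = text
        self._sources = sources

    def get_formatted_sources(self):
        return "\n".join(self._sources)


class _FakeIndex:
    """Stand-in for a loaded llama_index index (hypothetical stub)."""
    def query(self, query_str):
        return _FakeResponse(f"answer to: {query_str}", ["> Source (doc 1): ..."])


index = _FakeIndex()  # in real code: load or build your llama_index index


def query_index(query_str):
    """Query the index, print the sources as a side effect, return the answer text."""
    response = index.query(query_str)
    print(response.get_formatted_sources())  # prints source nodes
    return str(response.response)


print(query_index("What is LangChain?"))
```

The key point is that the function does the printing itself, so the agent only ever sees the returned answer string.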
Can you give me an example? I created a function which prints `response.response` and `response.get_formatted_sources()` and returns `str(response.response)`.

However, when I use it in the func of Tool I get the following error:
ValidationError: 1 validation error for Tool func
It also tells me the error type: type=type_error.callable

I cannot figure out what it is
Ohhh, you need to set func to a function still. So either a lambda that calls your new function, or pass the function itself
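In other words, the `type_error.callable` validation error suggests `func` received the *result* of calling the function (a string) rather than the function object itself, which fails Tool's callable check. A runnable sketch of the difference (with a hypothetical `query_index` standing in for the real tool function):

```python
def query_index(query_str):
    """Hypothetical tool function: returns the answer text for a query."""
    return f"answer to: {query_str}"


# Wrong: query_index("...") is evaluated immediately and yields a string,
# which is not callable -- passing this as func raises type_error.callable.
assert not callable(query_index("some question"))

# Right: pass the function object itself (or a lambda that wraps it).
assert callable(query_index)
assert callable(lambda q: query_index(q))
```

So in the Tool definition, write `func=query_index` (or `func=lambda q: query_index(q)`), not `func=query_index(...)`.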

Here's an example I have

Here is the tool definition
https://github.com/logan-markewich/llama_index_starter_pack/blob/main/streamlit_sql_sandbox/streamlit_demo.py#L52

Here is the function definition
https://github.com/logan-markewich/llama_index_starter_pack/blob/main/streamlit_sql_sandbox/utils.py#L5