The community member asks whether it is possible to print response.source_nodes when using llama_index as a tool in a LangChain agent. Other community members suggest how to set up the tool and the agent, in particular using a named custom function instead of a lambda, and provide an example of defining the tool and the function, but no comment is explicitly marked as the answer.
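A minimal sketch of the suggested setup, assuming an already-built index object whose query() method returns a response with response and source_nodes attributes (the names query_index and index_tool are illustrative, not from the thread):

```python
from langchain.agents import Tool

# `index` is assumed to be an already-built llama_index / gpt_index index
# whose .query() returns a response object with .response and .source_nodes.
def query_index(query: str) -> str:
    response = index.query(query)
    # Print the source nodes before handing the answer text back to the agent.
    print(response.source_nodes)
    return str(response.response)

# Pass the function object itself, not a lambda and not the result of calling it.
index_tool = Tool(
    name="Index Query",
    func=query_index,
    description="Useful for answering questions about the indexed documents.",
)
```

Because Tool.func expects a callable, the function is passed without parentheses.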
I'm using an agent with, for now, a single tool (where I specified the gpt_index). Furthermore, I'm using agent="conversational-react-description" and ConversationBufferMemory as the memory.
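A sketch of that agent setup, assuming the index_tool from the example above and an llm object (e.g., a LangChain OpenAI wrapper) are already available:

```python
from langchain.agents import initialize_agent
from langchain.memory import ConversationBufferMemory

# "chat_history" is the memory key the conversational-react-description
# agent's prompt expects.
memory = ConversationBufferMemory(memory_key="chat_history")

agent_chain = initialize_agent(
    tools=[index_tool],
    llm=llm,
    agent="conversational-react-description",
    memory=memory,
    verbose=True,
)

agent_chain.run("What do the indexed documents say about X?")
```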
Can you give me an example? I created a function which prints “response.response” and “response.get_formatted_sources” and returns “str(response.response)”.
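That function presumably looks roughly like this sketch (again assuming the same index object); note that get_formatted_sources must be called with parentheses, otherwise print shows the bound method rather than the formatted sources:

```python
def query_index(query: str) -> str:
    response = index.query(query)
    print(response.response)
    print(response.get_formatted_sources())  # call it, don't just reference it
    return str(response.response)
```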
However, when I use it as the func of Tool I get the following error: “ValidationError: 1 validation error for Tool func”, with error type type=type_error.callable.
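That pydantic error means func received something that is not callable. A common cause, shown in this hypothetical sketch, is passing the result of calling the function instead of the function object itself:

```python
from langchain.agents import Tool

# Wrong: query_index("...") is evaluated immediately and returns a string,
# so Tool.func receives a str and pydantic raises type=type_error.callable.
bad_tool = Tool(
    name="Index Query",
    func=query_index("some question"),
    description="Query the index.",
)

# Right: pass the callable itself; the agent supplies the query at call time.
good_tool = Tool(
    name="Index Query",
    func=query_index,
    description="Query the index.",
)
```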