Hi all,

I'm running the chat engine ReAct notebook. I have not made any changes to the notebook/cells.

When running the cell under "Chat with your data":

response = chat_engine.chat("Use the tool to answer what did Paul Graham do in the summer of 1995?")

The output is:

Added user message to memory: Use the tool to answer what did Paul Graham do in the summer of 1995?
=== Calling Function ===
Calling function: query_engine_tool with args: {"input":"What did Paul Graham do in the summer of 1995?"}
Got output: In the summer of 1995, Paul Graham started working on a new version of Arc with Robert. This version of Arc was compiled into Scheme, and to test it, Paul Graham wrote Hacker News. Initially meant to be a news aggregator for startup founders, it was later renamed to Hacker News with a broader topic to engage intellectual curiosity.
========================

Rather than the more verbose, step-by-step LLM reasoning shown in the example:

Thought: I need to use a tool to help me answer the question.
Action: query_engine_tool
Action Input: {'input': 'What did Paul Graham do in the summer of 1995?'}
Observation: In the summer of 1995, Paul Graham worked on building a web application for making web applications. He recruited Dan Giffin, who had worked for Viaweb, and two undergrads who wanted summer jobs, and they got to work trying to build what it's now clear is about twenty companies and several open source projects worth of software. The language for defining applications would of course be a dialect of Lisp.
Response: In the summer of 1995, Paul Graham worked on building a web application for making web applications. He recruited Dan Giffin, who had worked for Viaweb, and two undergrads who wanted summer jobs, and they got to work trying to build what it's now clear is about twenty companies and several open source projects worth of software. The language for defining applications would of course be a dialect of Lisp.

Anyone know how to show each step?
6 comments
Passing verbose=True in index.as_chat_engine(verbose=True...) should give you more details
This is an OpenAI agent, it looks like, so that's as verbose as it gets.
All of the "react" loop is hidden behind OpenAI's API.
Thanks for the response. verbose=True is already passed to as_chat_engine, as that's the default in the example Jupyter notebook.
Ah ok, the Jupyter notebook on the website has more LLM reasoning surfaced, whereas when I run it (with verbose=True), it doesn't show as much.
Yeah, that specifically is for the ReAct agent.
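If the goal is the full Thought / Action / Observation trace from the docs example, explicitly forcing the ReAct chat mode should surface it. A minimal sketch, reusing the index from the sketch above (chat_mode="react" selects LlamaIndex's ReActAgent rather than the OpenAI function-calling agent):

chat_engine = index.as_chat_engine(chat_mode="react", verbose=True)
response = chat_engine.chat(
    "Use the tool to answer what did Paul Graham do in the summer of 1995?"
)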