
Updated 2 years ago

Help. When I call `response.get_formatted_sources()` I get blank on `index.as_chat_engine()` ... but `index.as_query_engine()` returns formatted sources

At a glance

The community member is having an issue with the get_formatted_sources() method in the llama_index library. When using the default language model "text-davinci-003" as a chat engine, the get_formatted_sources() method returns a blank response, while the as_query_engine() method returns the formatted sources. The community members discuss potential workarounds, such as using GPT-3.5-turbo or GPT-4, or using a different chat mode like "context mode". They also note that the response object differs between chat engines and query engines, and that the library still needs to bring them to feature parity.

Help. When I call response.get_formatted_sources() I get blank on index.as_chat_engine() ... but index.as_query_engine() returns formatted sources.
Plain Text
chat_engine = index.as_chat_engine()
response = chat_engine.query("What did the author do growing up?")
print("get_formatted_sources():", response.get_formatted_sources())
# get_formatted_sources():  <-- blank here
print("metadata:", response.metadata)
# metadata: None
print("response:", response.response)
# response: Growing up, the author wrote short stories, programmed on an IBM 1401, and eventually convinced his father to buy him a TRS-80 microcomputer. He wrote simple games, a program to predict how high his model rockets would fly, and a word processor. He studied philosophy in college, but eventually switched to AI. He wrote essays and published them online, and worked on spam filters and painting. He also hosted dinners for a group of friends every Thursday night and bought a building in Cambridge.
print("source_nodes:", response.source_nodes)
# source_nodes: []
7 comments
hmm what LLM are you using? What version of llama_index?
llama_index version: 0.7.18
ah, so you are using the default LLM ("text-davinci-003"), which, when used as a chat engine, uses the ReAct agent. But the ReAct agent doesn't have sources implemented just yet

So the workaround is either to set the LLM to gpt-3.5-turbo or gpt-4, or to use a different chat mode, like "context" mode
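Setting the LLM explicitly might look like this. This is a hedged sketch assuming the llama_index 0.7.x API (`ServiceContext`, `VectorStoreIndex`, and the `OpenAI` LLM wrapper); `build_chat_engine` is a made-up helper name, and an `OPENAI_API_KEY` environment variable is assumed:

```python
def build_chat_engine(documents, model="gpt-3.5-turbo"):
    """Build a chat engine over `documents` with an explicit OpenAI chat model.

    Sketch only: assumes llama_index 0.7.x (ServiceContext, VectorStoreIndex,
    llms.OpenAI) and an OPENAI_API_KEY in the environment.
    """
    # Imports are deferred so the sketch reads without llama_index installed.
    from llama_index import ServiceContext, VectorStoreIndex
    from llama_index.llms import OpenAI

    service_context = ServiceContext.from_defaults(llm=OpenAI(model=model))
    index = VectorStoreIndex.from_documents(
        documents, service_context=service_context
    )
    # With a chat model set, as_chat_engine() no longer falls back to the
    # ReAct-over-davinci path described above.
    return index.as_chat_engine()
```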
thanks. I wasn't able to immediately figure out how to "set gpt-3.5-turbo" but I changed things around.

By just adding chat_mode="context", a dummy memory, and a system prompt, I was able to get a response.

However there is no "metadata" and no "get_formatted_sources()"
BUT there is a response.source_nodes
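The "context" mode setup described above might look roughly like this (a sketch assuming llama_index 0.7.x keyword names `chat_mode`, `memory`, and `system_prompt`, and its `ChatMemoryBuffer`; `build_context_chat_engine` and the prompt text are made up for illustration):

```python
def build_context_chat_engine(index):
    """Chat engine in "context" mode with a dummy memory and system prompt.

    Sketch only: assumes llama_index 0.7.x; keyword names may differ in
    other versions.
    """
    # Import deferred so the sketch reads without llama_index installed.
    from llama_index.memory import ChatMemoryBuffer

    memory = ChatMemoryBuffer.from_defaults(token_limit=1500)  # the "dummy" memory
    return index.as_chat_engine(
        chat_mode="context",
        memory=memory,
        system_prompt="You are a helpful assistant answering from the given context.",
    )

# Assumed usage:
# chat_engine = build_context_chat_engine(index)
# response = chat_engine.chat("What did the author do growing up?")
# response.source_nodes is populated here, even though metadata and
# get_formatted_sources() are not available on this response object.
```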
Yea, the response object itself differs between chat engines and query engines. Still need to bring them to feature parity πŸ™‚
(basically, the chat engine/agents use a special response object, that generalizes to sources from all types of "tools")
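Until that parity lands, a formatted-sources string can be approximated from `response.source_nodes` directly. A hedged sketch: the attribute shapes (`.score`, `.node`, `get_content()`) follow llama_index 0.7.x's `NodeWithScore`, and `format_sources` / `trim_length` are made-up names:

```python
def format_sources(response, trim_length=100):
    """Rough stand-in for get_formatted_sources(), built from source_nodes.

    Assumes each entry looks like llama_index 0.7.x's NodeWithScore:
    a `.score` float and a `.node` exposing `get_content()`.
    """
    lines = []
    for node_with_score in response.source_nodes:
        snippet = node_with_score.node.get_content()[:trim_length]
        lines.append(f"> Source (Score: {node_with_score.score}): {snippet}")
    return "\n".join(lines)
```

This works on the chat-engine response object as long as `source_nodes` is populated (e.g. in "context" mode), since it never touches `metadata` or `get_formatted_sources()`.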