Quentin
Offline, last seen 3 months ago
Joined September 25, 2024
Hi, how do I stream the response output when using an agent built via initialize_agent()? I can't do it by following your notebook, which was written for as_chat_engine(), because agent.run() only returns a string.
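For reference, a minimal sketch of the setup in question, assuming LangChain's initialize_agent with a streaming callback handler on the LLM (the echo tool and the choice of agent type are placeholders, not from the post); agent.run() still returns a plain string, so token-level output comes from the callback:

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI

# Stream tokens through a callback on the LLM; the agent's return value stays a str.
llm = ChatOpenAI(
    temperature=0,
    streaming=True,                                # emit tokens as they are generated
    callbacks=[StreamingStdOutCallbackHandler()],  # print each token to stdout
)
tools = [
    Tool(
        name="echo",                               # hypothetical placeholder tool
        func=lambda q: q,
        description="Returns the input unchanged.",
    )
]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
answer = agent.run("hello")                        # final answer is still a plain string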
27 comments
How do I update a document's extra_info in an existing GPTSimpleVectorIndex?
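Not an official recipe, just a sketch under the legacy llama_index API: since extra_info lives on the ingested Document, one workaround is to rebuild the document with the new extra_info and re-insert it (the doc_id and metadata values below are made up):

from llama_index import Document

# index is the existing GPTSimpleVectorIndex.
new_doc = Document(
    "same text as before",                # original document text
    doc_id="my-doc-id",                   # hypothetical id used at insert time
    extra_info={"source": "report.pdf"},  # updated metadata
)
index.delete("my-doc-id")                 # remove the stale copy (legacy delete-by-id)
index.insert(new_doc)                     # re-insert with the new extra_info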
2 comments
Once I assign graph.index_struct.summary a value, the next time I load the graph from disk the value is None.
Is there a bug in save, or does this field have any effect at all?
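For context, a small reproduction of the behavior being described, assuming the legacy save_to_disk / load_from_disk round trip (the file name and summary text are arbitrary):

from llama_index import ComposableGraph

# Assign a summary, persist the graph, and reload it from disk.
graph.index_struct.summary = "Summary of my documents"
graph.save_to_disk("graph.json")                   # legacy persistence helper

loaded = ComposableGraph.load_from_disk("graph.json")
print(loaded.index_struct.summary)                 # reported to come back as None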
9 comments
I added this parameter, but it doesn't seem to work very well; the response is still often truncated. Could it be language-related?
2 comments
Quentin · num_output

I created a chatbot by following your document "How to Build a Chatbot", but the response is always truncated. How do I fix it?
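For reference, a sketch of the usual truncation levers in the legacy API, assuming an OpenAI LLM: num_output in PromptHelper only reserves room in the prompt, while max_tokens on the LLM controls how much can actually be generated (all values are illustrative):

from langchain.llms import OpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor, PromptHelper

# Reserve space for a longer answer and raise the generation limit to match.
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, max_tokens=512))
prompt_helper = PromptHelper(max_input_size=4096, num_output=512, max_chunk_overlap=20)

# documents is the list already loaded for the chatbot.
index = GPTSimpleVectorIndex(
    documents,
    llm_predictor=llm_predictor,
    prompt_helper=prompt_helper,
)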
1 comment
Quentin · hello bros

Hello bros,
Can I pass two prompts in one query? How do I do that?
index = GPTListIndex(documents)
response = index.query(prompt, response_mode="tree_summarize")
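As far as I know, query() takes a single prompt string, so a sketch of the two obvious workarounds (prompt_1 and prompt_2 are placeholders) is to either issue two queries or merge both questions into one prompt:

# Option 1: run the two prompts as separate queries.
response_1 = index.query(prompt_1, response_mode="tree_summarize")
response_2 = index.query(prompt_2, response_mode="tree_summarize")

# Option 2: combine both questions into a single prompt string.
combined = f"{prompt_1}\n\nAlso answer the following: {prompt_2}"
response = index.query(combined, response_mode="tree_summarize")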
11 comments
@kapa.ai can I specify separators for TokenTextSplitter?
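For reference, a sketch assuming the legacy llama_index TokenTextSplitter, which as far as I recall accepts separator and backup_separators arguments (values below are illustrative):

from llama_index.langchain_helpers.text_splitter import TokenTextSplitter

# Split primarily on blank lines, falling back to single newlines.
splitter = TokenTextSplitter(
    separator="\n\n",              # primary separator
    backup_separators=["\n"],      # used when a chunk is still too large
    chunk_size=512,
    chunk_overlap=20,
)
chunks = splitter.split_text(long_text)   # long_text is a placeholder variable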
2 comments
Hi. I got the following error when chatting with the agent: openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, you requested 4918 tokens (3894 in the messages, 1024 in the completion). Please reduce the length of the messages or completion.

It still raises that error even when the history messages are [].

Here are my settings:
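For context, the arithmetic in the error is 3894 prompt tokens + 1024 reserved completion tokens = 4918 > 4097, so one of the two has to shrink; a minimal sketch of capping the completion budget on the LLM (value illustrative):

from langchain.chat_models import ChatOpenAI

# 3894 (messages) + 1024 (completion) = 4918 > 4097 tokens.
# Capping the completion at 200 fits: 3894 + 200 = 4094 <= 4097, though trimming
# the retrieved context (smaller or fewer chunks) is the other lever.
llm = ChatOpenAI(temperature=0, max_tokens=200)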
99 comments
Quentin · Agent

@Logan M I created an agent using create_llama_chat_agent, based on a ComposableGraph, and added a GPTSimpleVectorIndex containing my document to the ComposableGraph. Then I started chatting with the agent, but its answers were rarely relevant to my document. How can I improve the hit rate of the index against my documents?
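Not a definitive fix, just a sketch of one lever as I recall the legacy API: graph routing is driven by each sub-index's summary, so a descriptive summary plus a larger similarity_top_k can raise the hit rate (the from_indices call, query_configs keys, and summary text are assumptions, not taken from the post):

from llama_index import ComposableGraph, GPTListIndex, GPTSimpleVectorIndex

# A descriptive summary tells the graph which questions belong to this index.
doc_index = GPTSimpleVectorIndex(documents)   # documents is a placeholder
graph = ComposableGraph.from_indices(
    GPTListIndex,
    [doc_index],
    index_summaries=["Detailed information about <the topic of my document>."],
)

# Retrieve more chunks per question so relevant passages are less likely to be missed.
query_configs = [
    {
        "index_struct_type": "simple_dict",   # legacy type name for GPTSimpleVectorIndex
        "query_mode": "default",
        "query_kwargs": {"similarity_top_k": 3},
    }
]
response = graph.query("What does my document say about X?", query_configs=query_configs)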
35 comments