cyberandy
Joined September 25, 2024
Hi all, I am having an issue today with the OpenAI Agent using gpt-4-1106-preview; everything worked fine yesterday. When the tool responds (the results come back correctly as JSON from the external API), the agent seems to hit a shorter timeout and doesn't process the tool's response. Everything works when the external API responds faster (and with less data).
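One thing worth checking is the client-side request timeout (in langchain's ChatOpenAI this is the request_timeout parameter, in seconds). The stdlib sketch below only illustrates the failure mode described above, with a hypothetical slow_tool standing in for the external API: the same tool call succeeds or is dropped depending purely on the timeout budget.

```python
import concurrent.futures
import time

def slow_tool(delay: float) -> str:
    """Stand-in for an external API that returns JSON after `delay` seconds."""
    time.sleep(delay)
    return '{"status": "ok"}'

def call_with_timeout(delay: float, timeout: float) -> str:
    """Run the tool, but give up if it takes longer than `timeout` seconds."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(slow_tool, delay)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            # The tool's response is discarded even though it eventually arrives
            return "TIMED OUT"

print(call_with_timeout(0.1, 1.0))  # fast API: '{"status": "ok"}'
print(call_with_timeout(1.0, 0.2))  # slow API: "TIMED OUT"
```

If raising the timeout helps, the agent was cutting off the slow tool call rather than failing to parse the JSON.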
23 comments
Hi all, I have a question: I built a graph index on top of two indices and I am using langchain to build a chatbot with memory (using create_llama_chat_agent()).

Things are working (though there is still room for optimization), but how can I get the metadata from the agent response? I would like the same metadata I get when I run graph.query() directly on the underlying graph llama index. Thanks in advance.
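One workaround, sketched below, follows from the fact that langchain tools return plain strings to the agent: capture the full response object on a side channel when the tool runs, then read its metadata after the agent answers. Here fake_graph_query is a hypothetical stand-in for graph.query(), not the llama_index API.

```python
# Hedged sketch: stash the full query response so its metadata survives,
# even though the agent only ever sees the answer string.

class MetadataCapturingTool:
    def __init__(self, query_fn):
        self.query_fn = query_fn
        self.last_response = None  # full response; inspect after the agent answers

    def run(self, query: str) -> str:
        response = self.query_fn(query)
        self.last_response = response  # keep metadata on the side
        return response["answer"]      # the agent only receives this string

def fake_graph_query(query: str) -> dict:
    """Hypothetical stand-in for graph.query(); returns answer plus metadata."""
    return {"answer": f"answer to: {query}", "metadata": {"source": "doc1.txt"}}

tool = MetadataCapturingTool(fake_graph_query)
print(tool.run("what is a graph index?"))
print(tool.last_response["metadata"])
```

A real version would call the graph index inside run() and store response.extra_info (or the whole response object) instead of a dict.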
6 comments
Hi @Logan M, I created the agent as follows:

Plain Text
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from llama_index.langchain_helpers.agents import create_llama_chat_agent

# Conversation memory so the agent prompt can reference the chat history
memory = ConversationBufferMemory(memory_key="chat_history", ai_prefix=system_message)
llm = ChatOpenAI(temperature=0, model_name="gpt-4")
agent_chain = create_llama_chat_agent(
    toolkit,
    llm,
    memory=memory,
    verbose=True,
    agent_kwargs={"prefix": system_message},
)


Unfortunately, when the agent uses the llama-index tool, it doesn't receive the system_message. Should I customize the prompt templates for each of the indices? Thanks in advance for your help.
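Since the agent prefix is not forwarded into the tool, one option is to bake the system message into each index's QA prompt. The sketch below is plain string formatting, not the llama_index API itself; the {context_str} and {query_str} placeholders mirror the variables that llama_index's QuestionAnswerPrompt expects (the resulting template string would be wrapped in a QuestionAnswerPrompt and passed to the index query as text_qa_template).

```python
# Hedged sketch: prepend the agent's system message to the QA template
# so every index query carries it, independent of the agent prefix.

system_message = "You are a helpful assistant for the ACME documentation."

qa_template = (
    system_message + "\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information, answer the question: {query_str}\n"
)

prompt = qa_template.format(
    context_str="ACME ships widgets worldwide.",
    query_str="What does ACME ship?",
)
print(prompt.splitlines()[0])  # the system message now leads every query
```

With a separate template per index, each tool can carry its own instructions even though the agent prefix never reaches them.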
50 comments