Icksir
Hey, does anyone know how to solve this error? Right now I am using the Vertex AI LLM, and when I try to use a refine response synthesizer to query a ReActAgent, it throws the error below. I tried adding "response_validation = False" to the query to the top agent, but it doesn't work. Any ideas?

Plain Text
...
> Running step 8f16654d-e689-4366-a3c0-8e7a397a94cf. Step input: estacionamientos y evaluación ambiental
Thought: The current language of the user is: Spanish. I need to determine if the question is specific enough to use the vector_tool or if it requires a more general summary using the summary_tool. The question "estacionamientos y evaluación ambiental" (parking lots and environmental assessment) is quite broad. I will use the summary_tool to get a general overview of how Decreto 40 of 2013 addresses parking lots and environmental assessments.
Action: summary_tool
Action Input: {'input': 'Decreto 40 de 2013, estacionamientos y evaluación ambiental'}
> Refine context: Decreto 30,
g ter) Observación ciudadana: Toda ...
> Refine context: c)   Centrales generadoras de energía mayores a...

------------ ERROR ---------
Observation: Error: The model response did not complete successfully.
Finish reason: 2.
Finish message: .
Safety ratings: [].
To protect the integrity of the chat session, the request and response were not added to chat history.
To skip the response validation, specify `model.start_chat(response_validation=False)`.
Note that letting blocked or otherwise incomplete responses into chat history might lead to future interactions being blocked by the service.
------------ --- ---------
...
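
For context, finish reason 2 in the Vertex AI FinishReason enum is MAX_TOKENS, and the empty safety ratings list suggests the response was truncated rather than blocked. Since the refine synthesizer drives the chat session inside the LLM wrapper, a response_validation flag passed to the agent query likely never reaches the underlying model.start_chat() call. Below is a minimal sketch of both workarounds against the raw vertexai SDK; the project, region, model name, and token limit are placeholders, not values from the original setup.

Plain Text
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project/region
model = GenerativeModel("gemini-1.5-pro")  # assumed model; substitute the one in use

# Workaround 1: skip response validation, as the error message itself suggests.
# Incomplete responses will then be kept in chat history.
chat = model.start_chat(response_validation=False)

# Workaround 2: raise the output token budget, since finish reason 2 (MAX_TOKENS)
# means the model stopped at the limit rather than finishing its answer.
response = chat.send_message(
    "estacionamientos y evaluación ambiental",
    generation_config=GenerationConfig(max_output_tokens=2048),
)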
13 comments
Hey! I am currently facing issues with the use of memory inside a workflow, and I don't know where else to ask. I am creating a chatbot to chat with multiple documents, and my workflow now looks like the image attached to this message. The "ingest" path just creates the top agent to retrieve the documents, and the "ask" path is meant to query the LLM with the indexes.

My ask step looks like this, but the chat store just overwrites itself after the top agent call. It doesn't remember the chat history, and I don't know if I am doing something wrong or if I simply shouldn't use SimpleChatStore (I just wanted to do a proof of concept).

Any advice is welcome.

Plain Text
import os

from llama_index.agent.openai import OpenAIAgent
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.storage.chat_store import SimpleChatStore
from llama_index.core.workflow import StartEvent, StopEvent, step


@step
async def ask(self, ev: StartEvent) -> StopEvent | None:
    obj_index = ev.get("obj_index")
    query = ev.get("query")
    user = ev.get("user")
    if not obj_index or not query:
        return None

    user_file = f"./conversations/{user}.json"

    # Load this user's chat store from disk, or start a fresh one on first use.
    if not os.path.exists(user_file):
        chat_store = SimpleChatStore()
    else:
        chat_store = SimpleChatStore.from_persist_path(persist_path=user_file)

    # Memory buffer keyed by user, backed by the persisted chat store.
    chat_memory = ChatMemoryBuffer.from_defaults(
        token_limit=3000,
        chat_store=chat_store,
        chat_store_key=user,
    )

    top_agent = OpenAIAgent.from_tools(
        tool_retriever=obj_index.as_retriever(similarity_top_k=3),
        system_prompt=PROMPT,
        memory=chat_memory,
        verbose=True,
    )

    response = top_agent.query(query)

    # Write the chat history back to disk for the next call.
    chat_store.persist(persist_path=user_file)

    return StopEvent(result={"response": response, "source_nodes": response.source_nodes})
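
One possible culprit, worth verifying against the installed llama-index version: the agent's query() is implemented on top of chat() but passes an empty chat_history, so each query() call wipes the memory buffer before the new exchange is written back, and the persist afterwards saves only the latest turn. If that is the case here, calling chat() instead should let the SimpleChatStore accumulate history. A minimal sketch of just the changed lines, assuming the same setup as the step above:

Plain Text
# agent.chat() threads the existing ChatMemoryBuffer through the exchange;
# agent.query() starts from an empty chat history on every call.
response = top_agent.chat(query)

chat_store.persist(persist_path=user_file)

return StopEvent(result={"response": response, "source_nodes": response.source_nodes})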