Hello everyone. I've been working with an OpenAI agent. The app I'm developing integrates with Notion and uses custom prompts that clients can change on their own. To work with the agent I created a BaseToolSpec, and it works correctly. However, the final answer from the LLM takes a long time to generate and ignores the custom prompt, even though the tools don't return a large amount of data. There have also been cases where the LLM doesn't generate an answer at all on previously tested cases, using the same tools.
My question is: how can I make the agent respond faster, and how can I fix the cases where it generates no answer or doesn't consider the system_prompt?
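One common cause of a "custom prompt being ignored" is that the client-editable prompt never actually reaches the LLM as the system message on each call. A minimal sketch of that idea, with hypothetical names (this is not the app's or llama-index's actual API):

```python
# Hypothetical sketch: always prepend the client-editable system prompt
# so the LLM sees it on every agent turn. Function and variable names
# are illustrative, not from the original app.

def build_messages(system_prompt: str, history: list[dict], user_query: str) -> list[dict]:
    """Assemble chat messages with the custom prompt as the system message."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # prior turns, if any
    messages.append({"role": "user", "content": user_query})
    return messages

msgs = build_messages("Answer using Notion data only.", [], "Find the contact for ACME")
# msgs[0] is the system message; the user query is last
```

If the agent framework accepts a system prompt at construction time, verifying it is set there (rather than only stored in the app's database) is the first thing to check.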
5 comments
Logging from a case where the LLM does not generate an answer:
Attachment: photo_2023-11-21_17-22-52.jpg
What version of llama-index do you have? And what did the function call do here to take that long? This might just be an OpenAI timeout issue.
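If the tool call itself is what hangs, one way to fail fast instead of waiting on the OpenAI client's timeout is to put a hard deadline around the tool. A stdlib-only sketch (assumed approach, not from the thread):

```python
# Sketch: run a (possibly slow) tool call with a hard timeout so the agent
# gets a usable error string instead of hanging. Stdlib only; the fallback
# message is illustrative.
import concurrent.futures

def call_with_timeout(fn, *args, timeout_s: float = 30.0, **kwargs):
    """Run fn(*args, **kwargs); return its result or a timeout message."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn, *args, **kwargs)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            return "Tool timed out; try a narrower query."

result = call_with_timeout(lambda q: "ok:" + q, "contacts", timeout_s=5.0)
```

One caveat: the executor's shutdown still waits for the background thread, so this bounds what the LLM sees, not the underlying request itself; cancelling the HTTP call would need the Notion/OpenAI client's own timeout settings.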
I'm on the latest now.
If I'm not mistaken, I was using version 0.7 when this problem occurred.
The function takes a query and a sub_query as arguments to search data in Notion, retrieve it, and transform it to text. It can search the contacts database or any Notion page.
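A rough sketch of the tool shape described above, with the Notion call stubbed out (all names here are hypothetical; swap the stub for the real Notion client):

```python
# Illustrative sketch of the described tool: query selects where to search,
# sub_query filters the retrieved pages, and the result is flattened to text
# for the LLM. notion_search is a stand-in, NOT a real Notion API call.

def notion_search(query: str) -> list[dict]:
    """Stub for the real Notion search; returns fake page records."""
    return [
        {"text": "Contacts: Alice, alice@example.com"},
        {"text": "Project notes for Q3"},
    ]

def search_notion(query: str, sub_query: str) -> str:
    """Search Notion, filter by sub_query, and return plain text."""
    pages = notion_search(query)
    matches = [p for p in pages if sub_query.lower() in p["text"].lower()]
    return "\n".join(p["text"] for p in matches) or "No results found."
```

Returning a short, explicit "no results" string (rather than an empty one) also tends to help the agent produce an answer instead of stalling on empty tool output.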