I am working on building a RAG on pdfs

I am building a RAG pipeline over PDFs and running into a lack of repeatability in the responses, even with the OpenAI temperature set to 0. I literally copy the same code into two different Jupyter notebooks and get different responses every time.

Any insights here? Is the chunking randomized for the same file? Help would be appreciated.
9 comments
The chunking is not randomized. But people have reported that OpenAI has trouble with repeatability (likely due to them using mixture-of-experts).
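If you want to rule chunking out yourself, you can split the same text twice and compare. A minimal sketch, assuming llama_index's SentenceSplitter (swap in whatever splitter you actually use):

```python
from llama_index.core.node_parser import SentenceSplitter

splitter = SentenceSplitter(chunk_size=512, chunk_overlap=50)

# Placeholder: text extracted from one of your PDFs
text = "..."

# The splitter itself is deterministic: same input -> same chunks
assert splitter.split_text(text) == splitter.split_text(text)
```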
Found another thing with chat_engine: I sent a query to the chat_engine and it gave a response. When I print the response I get one thing, and when I just display the response it is something else. Not sure what to make of this kind of scenario; could you explain this behaviour?
Look at the screenshot for your reference
Attachment: Screenshot_2024-03-07_at_11.54.51_AM.png
So this is an agent. On each message, an agent will (rough sketch below):
  • read the latest message + chat history, along with a list of tools
  • decide if it needs to call a tool or not. If so, it writes the tool input
  • the tool runs
  • the model interprets the tool response, and returns a final response
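A rough sketch of the setup being described (the names are illustrative, and chat_mode="openai" is an assumption about how your chat_engine is built):

```python
# Hypothetical setup: `index` is the vector index built over your PDFs.
# In an agent chat mode, the index is exposed to the LLM as a query tool,
# and the steps above run on every message.
chat_engine = index.as_chat_engine(chat_mode="openai", verbose=True)

response = chat_engine.chat("What does the report say about Q3 revenue?")
print(response)
```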
Any way to use the seed parameter? It was introduced for repeatability.
Logan, I am not clear about your message on agents. If you look at the screenshot, how would getting a response and then printing it be different?
hmm, I think if you did OpenAI(..., additional_kwargs={"seed": 1234}) or something similar it might work
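Something like this (a sketch assuming the llama_index OpenAI wrapper and v0.10+ import paths; keep in mind the seed is OpenAI's best-effort determinism feature, not a hard guarantee):

```python
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

# additional_kwargs is forwarded to the chat completions call,
# so the seed rides along with every request.
Settings.llm = OpenAI(
    model="gpt-3.5-turbo",             # whatever model you're on
    temperature=0,
    additional_kwargs={"seed": 1234},
)
```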
Because it gets a response from a tool and then re-interprets it in the context of the current chat history.
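If you want to see both sides of that, the response object keeps the raw tool outputs around; attribute names here assume an AgentChatResponse:

```python
response = chat_engine.chat("your question here")

# The agent's final answer, written after it reads the tool output
print(str(response))

# The raw outputs that came back from each tool call
for tool_output in response.sources:
    print(tool_output.content)
```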
I'll give it a shot