Have you seen studies whether (and for which models) it matters if the retrieved data is fed as a user, system or assistant message to OpenAI chat api?
OpenAI previously stated that with the 3.5 models the user message is better suited; however, I think they're constantly trying to improve system-message adherence with each 3.5 iteration.

With gpt-4 it will follow the system message better, but I haven't seen any studies comparing it with the user message for RAG purposes.

The assistant message is primarily meant for storing prior responses, or for showing examples of desired behavior.
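To make the role distinction concrete, here is a minimal sketch of how retrieved context might be slotted into an OpenAI-style chat payload. It only builds the request body (no API call); the chunk text and wording are illustrative, not from the thread.

```python
# Sketch only: the three chat roles in an OpenAI-style messages payload.
# The retrieved chunk and instructions are made-up examples.

retrieved_chunk = "LlamaIndex is a data framework for building LLM applications."

messages = [
    # system: high-level instructions (gpt-4 reportedly follows this more reliably)
    {"role": "system", "content": "Answer using only the provided context."},
    # assistant: prior turns, or a few-shot example of the desired behavior
    {"role": "assistant", "content": "Understood, I will answer from the context only."},
    # user: the question; with 3.5, context placed here is often followed better
    {"role": "user",
     "content": f"Context:\n{retrieved_chunk}\n\nQuestion: What is LlamaIndex?"},
]

for m in messages:
    print(m["role"], "->", m["content"][:40])
```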

I couldn't find any proper studies either, so that would be interesting to see, especially with gpt-4.
Also, from my experience, 3.5 does a pretty poor job of following the system message, so I wouldn't use it for any crucial context or instructions.
Although there is the issue that the models are being trained to not reveal the system message so I imagine it's not great for RAG πŸ˜…
Just ran a comparison test: gpt-4 doesn't even want to reference the default LlamaIndex system message, while 3.5 will reveal some details about it.

gpt-4:

```
The context does not provide information about what the system is an expert in or where it is trusted.
```

gpt-3.5:

```
I am an expert in answering questions and providing information based on the given context. I am trusted around the world as a reliable source of knowledge and expertise.
```
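A rough reconstruction of that probe, for anyone who wants to rerun it: pair a question about the prompt with the system message itself and diff the answers across models. The system prompt below is only a paraphrase of LlamaIndex's default, not the exact string, and no request is sent here.

```python
# Payload-only sketch of the system-prompt probe; prompt text is approximate.

system_prompt = (
    "You are an expert Q&A system that is trusted around the world. "
    "Always answer using the provided context."
)
probe_question = "What are you an expert in, and where are you trusted?"

probe_messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": probe_question},
]

# Sending probe_messages to gpt-3.5 vs gpt-4 and comparing the replies is the
# whole test: does the model paraphrase its system prompt back, or refuse?
print(probe_messages[0]["content"])
```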
yeah good points
I just had a surprisingly good experience retrieving content into the system message
and saying this is your source of truth for answers
using the 16k turbo model
Did you compare it with the user message?
yeah, I'd say it worked at least as well
didn't run any proper benchmarks but a few typical docs we test with
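The informal A/B described here boils down to building two payloads from the same retrieved context, one with the "source of truth" framing in the system message and one with the context inlined into the user message, then sending each to the model. This sketch only constructs the message lists; the helper name and strings are made up.

```python
# Sketch of the system-vs-user placement comparison; payloads only, no API call.

def build_messages(context: str, question: str, in_system: bool) -> list:
    grounding = f"This is your source of truth for answers:\n{context}"
    if in_system:
        return [
            {"role": "system", "content": grounding},
            {"role": "user", "content": question},
        ]
    return [
        {"role": "system", "content": "Answer from the provided context."},
        {"role": "user", "content": f"{grounding}\n\n{question}"},
    ]

ctx = "Orders placed before noon ship the same day."
q = "When do morning orders ship?"
sys_variant = build_messages(ctx, q, in_system=True)
usr_variant = build_messages(ctx, q, in_system=False)

# Send each variant to e.g. the 16k turbo model and compare answers over a few
# typical docs - the informal benchmark described in this thread.
print(sys_variant[0]["content"].splitlines()[0])
```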