
Hey guys, recently gpt-3.5-turbo started making up too much stuff and saying 'suppose' too many times.

Maybe you guys are aware of what has changed recently, and maybe have suggestions for me?

Here's what I am using:

# Import paths match llama_index 0.8.x; callback_manager, project_name,
# and testrail_project_index are assumed to be defined elsewhere.
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import OpenAI
from llama_index.memory import ChatMemoryBuffer
from llama_index.prompts.default_prompts import DEFAULT_REFINE_PROMPT
from llama_index.response_synthesizers import get_response_synthesizer

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
service_context = ServiceContext.from_defaults(
    llm=llm, chunk_size=1024, callback_manager=callback_manager
)
set_global_service_context(service_context)
response_synthesizer = get_response_synthesizer(response_mode="refine")

memory = ChatMemoryBuffer.from_defaults(token_limit=2000)
chat_engine = testrail_project_index.as_chat_engine(
    chat_mode="context",
    memory=memory,
    service_context=service_context,
    refine_template=DEFAULT_REFINE_PROMPT,
    # Double quotes here: a single-quoted f-string would break on the
    # apostrophe in "don't".
    system_prompt=(
        f"Your name is Mantis, and you assist only with questions related "
        f"to {project_name}. You are able to have normal interactions as "
        f"long as they don't discuss something outside the context. Only "
        f"talk about the product {project_name} based on its test cases, "
        f"through the data provided to you in context. If something is not "
        f"related to the context, explain that you can't answer."
    ),
    response_synthesizer=response_synthesizer,
    similarity_top_k=5,
)
11 comments
Have you tried using a custom prompt template? 3.5 doesn't follow the system message that well; you should probably try the user-message prompts. You can view the templates here: https://github.com/jerryjliu/llama_index/blob/main/llama_index/prompts/default_prompts.py
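A minimal sketch of that idea, not the thread author's code: gpt-3.5-turbo tends to follow instructions placed in the *user* message more reliably than the system prompt, so the guardrails can be baked into a custom QA template instead. The template text below is adapted from llama_index's default_prompts.py; the `{context_str}` and `{query_str}` placeholders are the two variables LlamaIndex fills in at query time.

```python
# Custom user-message QA template (adapted from llama_index's defaults).
# The "refuse if not in context" instruction lives in the user message
# instead of the system prompt.
QA_TMPL = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, answer the "
    "query. If the answer is not in the context, say you cannot answer.\n"
    "Query: {query_str}\n"
    "Answer: "
)

# LlamaIndex substitutes the two placeholders at query time, e.g.:
rendered = QA_TMPL.format(
    context_str="Test case TC-1: login succeeds with valid credentials.",
    query_str="What does TC-1 cover?",
)
```

To actually wire this in, the string would be wrapped in LlamaIndex's prompt class (`PromptTemplate` in 0.8.x) and passed as `text_qa_template` when building the engine; the exact class name and import path vary by version.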
Thanks, so far switched to GPT4 and it's working well, but will definitely check those out. Thanks!
Yeah, gpt-4 follows system message well
It's much more steerable than 3.5
Off topic: the chat memory doesn't seem to work. It's supposed to remember my previous question (just a basic example), right? Maybe you know about it?
# AgentChatResponse lives in llama_index.chat_engine.types in 0.8.x.
import logging
from llama_index.chat_engine.types import AgentChatResponse

response = chat_engine.chat(text)
logging.debug(f"Response: {response}")

if isinstance(response, AgentChatResponse):
    message_text = response.response
else:
    message_text = "Unexpected response type."
The memory should be working πŸ˜… I've definitely tested with the context chat engine.
Yeah, I had to define the memory outside the function; it was being reset every time. 😄
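The pattern behind that fix can be sketched in plain Python (a minimal stand-in memory class is used here so it runs without llama_index; with LlamaIndex the same applies to `ChatMemoryBuffer`): if the memory object is created inside the request handler, a fresh empty buffer is built on every call and the history is lost.

```python
# Minimal stand-in for a chat memory buffer.
class MemoryBuffer:
    def __init__(self):
        self.history = []

    def put(self, message):
        self.history.append(message)

# Wrong: memory is re-created on every call, so it only ever holds
# the current message.
def handle_broken(text):
    memory = MemoryBuffer()
    memory.put(text)
    return len(memory.history)

# Right: memory is created once, outside the handler, and reused,
# so the history accumulates across calls.
memory = MemoryBuffer()

def handle_fixed(text):
    memory.put(text)
    return len(memory.history)
```

The same reasoning suggests creating the chat engine itself once rather than per request, since the engine holds a reference to its memory.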