Hmmm, and what type of index are you using?
llm_predictor = ChatGPTLLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, chunk_size_limit=512)
index = GPTSimpleVectorIndex.load_from_disk(input_index, service_context=service_context)
As soon as I change this:
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.1, model_name=set_model, max_tokens=num_outputs))
to:
llm_predictor = ChatGPTLLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo"))
It only refers to the context information and has no idea what Italy is. 😦
So it knows "Italy" with LLMPredictor but not with ChatGPTLLMPredictor. The index knows in both situations.
Hmmm yea the first example uses davinci, the second uses ChatGPT...
And you used a prompt similar to the one I linked above? I also use chatgpt in my demo (the same codebase I linked above) and it seems to work great 🤔
I took all your templates and used this:
response = index.query(
    query_str,
    service_context=service_context,
    similarity_top_k=3,
    response_mode="compact",
    text_qa_template=TEXT_QA_TEMPLATE,
    refine_template=CHAT_REFINE_PROMPT,
)
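For reference, here is a minimal sketch of the kind of QA template being discussed. The wording is my own assumption, not the actual template from the linked demo; in llama_index of that era you would wrap the string in `QuestionAnswerPrompt` before passing it as `text_qa_template`. The `{context_str}` / `{query_str}` placeholder names are the ones llama_index fills in.

```python
# Sketch of a QA prompt that explicitly allows falling back on general
# knowledge when the retrieved context is not relevant. The instruction
# wording here is an assumption -- tune it for your use case.
TEXT_QA_TEMPLATE_STR = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question using the context above when it is relevant. "
    "If the context does not contain the answer, answer from your own "
    "general knowledge instead of saying the context lacks information.\n"
    "Question: {query_str}\n"
    "Answer: "
)

# What the rendered prompt would look like for the "Italy" question:
prompt = TEXT_QA_TEMPLATE_STR.format(
    context_str="(retrieved chunks would go here)",
    query_str="What is Italy?",
)
print(prompt)
```

The key part is the explicit permission to fall back on general knowledge; without it, gpt-3.5-turbo tends to refuse anything not covered by the context.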
I have no idea why this chatbot lost its connection to general knowledge. It is so weird, and it happens only when I switch to ChatGPTLLMPredictor
Hmm. Maybe try this instead
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo"))
Literally:
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.1, model_name="text-davinci-003"))
Answer: Italy is a country in Europe. It is located in the southern part of the continent and is bordered by France, Switzerland, Austria, and Slovenia. It is home to the capital city of Rome and is known for its rich culture, art, and cuisine.
llm_predictor = ChatGPTLLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo"))
Answer: The context information does not provide any information about Italy.
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo"))
Answer: There is no information provided about Italy in the given context.
Unreal shit... I am soooo tired!
Oof. Let me try something quick, I have a chance to run some code
This is very weird now, look:
Italy    -> #llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.1, model_name="text-davinci-003"))
No Italy -> #llm_predictor = ChatGPTLLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo"))
No Italy -> #llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo"))
No Italy -> #llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.1, model_name="gpt-3.5-turbo"))
The only difference is "gpt-3.5-turbo"
Hmm I'm actually having some similar troubles. Something about the prompt is causing chatgpt to focus on the context 😅
One workaround is using llama index as a tool in langchain.
Then it will only query llama index when the LLM thinks it needs to invoke the tool (based on the tool description)
Otherwise, I think it just needs more prompt engineering
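To make that workaround concrete, here is a toy sketch of the routing pattern with the LLM and the index stubbed out as plain functions. Everything here (function names, the keyword check) is invented for illustration; in practice you would register the index as a LangChain `Tool` with a description and let the agent decide when to invoke it.

```python
# Toy illustration of the "index as a tool" pattern: the query only goes
# to the document index when the router decides the question needs it;
# otherwise the base LLM answers from general knowledge.
# All names and logic below are stand-ins, not real library calls.

def query_index(question: str) -> str:
    # Stand-in for llama_index's index.query(...)
    return "Answer grounded in the indexed documents."

def ask_llm(question: str) -> str:
    # Stand-in for a direct ChatOpenAI call (general knowledge).
    return "Answer from the model's general knowledge."

def route(question: str) -> str:
    # A real agent picks the tool based on its description; this toy
    # router just checks for a keyword that marks document questions.
    if "document" in question.lower():
        return query_index(question)
    return ask_llm(question)

print(route("What is Italy?"))
print(route("What does the document say about Italy?"))
```

The upside is that general-knowledge questions never touch the index, so the strict QA prompt can't get in their way; the downside is an extra LLM round-trip for the routing decision.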
Do you know how I can "see" the prompt content before it is sent to OpenAI?
Enable debug logs
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
It is a great "feature" to be strict about the index, but it could be a parameter to turn on and off. Right now, I need to turn it off. 😅 I will debug, thanks for your help. You are a great person!
I have only this, still no prompt:
INFO:llama_index.token_counter.token_counter:> [query] Total LLM token usage: 224 tokens
INFO:llama_index.token_counter.token_counter:> [query] Total embedding token usage: 4 tokens
The context information does not provide any information about Italy.
Whaaa, where are the debug logs?? 😅
It doesn't work either... Time to give up for today. 😴 Thanks a lot for your help, have a good one!
No way... sorry man 😞 get some good rest!
I couldn't sleep. Got up, rebooted the host, fixed the logs, found the prompts, and you were right. They simply were the cause of my problem. Fixed them as per your example and now... Italy is again Italy!
Thanks a lot for your help today, you did great. Now I can sleep. 😁 😴
Hahaha I can't sleep either with unsolved problems. Glad you got it!! 🎉💪🫡
@TomPro are you now using these prompts?
Not 1:1, I have changed them since. Still trying to build a good one. It is not so easy. Now I understand what the problem is with GPT-3.5-Turbo. Davinci-003 in my case is way better, but maybe prompt tuning is needed. I will see tomorrow... I should have been asleep like 2 hours ago. Hahaha...
If you do settle on a good one, would be great to share it here, I've been having a helluva time reining 3.5 in as well...