
Updated last year


At a glance
Here is my actual test code. I want to limit the GPT responses to only my docs. For example, if I ask "what is Vesuvio?" it gives me a correct answer, but that information was not in my docs.
16 comments
  1. Make sure you pass in the service context when loading from disk
query_engine = load_index_from_storage(storage_context, service_context=service_context).as_query_engine()

  1. Your LLM definition is not quite correct. Temperature can only be between 0 and 1, and you are using the incorrect LangChain class for gpt-3.5. Try this instead
from llama_index import LLMPredictor
from llama_index.llms import OpenAI

# num_outputs is defined elsewhere in the original code
_llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.5, model="gpt-3.5-turbo", max_tokens=num_outputs))
Really, thanks for the help Logan
I will try this now
I really appreciate it
If it still doesn't help, we can set an extra system prompt in the llm predictor to try and control the LLM a bit more πŸ™‚
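The "extra system prompt" idea can be sketched as a QA prompt template that forbids answers from outside the retrieved context. The template text and `build_prompt` helper below are illustrative, not the llama_index built-ins:

```python
# Illustrative context-only QA prompt (not the llama_index default template).
# The wording instructs the model to refuse when the answer is not in the docs.
CONTEXT_ONLY_PROMPT = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the question using ONLY the context above. "
    "If the answer is not in the context, reply exactly: "
    "'I don't know based on the provided documents.'\n"
    "Question: {query_str}\n"
    "Answer: "
)

def build_prompt(context_str: str, query_str: str) -> str:
    # Fill the template with the retrieved chunks and the user question.
    return CONTEXT_ONLY_PROMPT.format(context_str=context_str, query_str=query_str)
```

A prompt like this is what makes questions such as "what is Vesuvio?" come back with a refusal when Vesuvio is not in the indexed documents.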
Here is my actual code. I have merged your suggestions + GPT-4 + a custom prompt, and now it works as expected.
If you have time and want to suggest any ideas to improve it, I would offer you a coffee hahaha
The next steps I will follow are:
  • online DB for the index (Mongo + Pinecone)
  • build a REST API with Flask
  • do the frontend with React
Heads up, are you using the LLM class from llama-index? The correct kwarg in that case is model="gpt-4" not model_name="gpt-4"
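For the REST API step, a minimal sketch of the endpoint shape is below. It uses only the standard library so it runs without extra dependencies (a Flask version would mirror the same request/response flow), and `query_index` is a stub standing in for the real `query_engine.query` call:

```python
# Hypothetical sketch of a JSON query endpoint; query_index is a stub for
# query_engine.query(question).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def query_index(question: str) -> str:
    # Placeholder: the real app would call the llama_index query engine here.
    return f"(answer from docs for: {question})"

class QueryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, e.g. {"question": "what is vesuvio?"}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        answer = query_index(payload.get("question", ""))
        body = json.dumps({"answer": answer}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# Bind to a free port; call serve_forever() to actually serve requests.
server = HTTPServer(("127.0.0.1", 0), QueryHandler)
# server.serve_forever()
```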
ok sure thanks
what model was the app using with the wrong args?
it was defaulting to gpt-3.5
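The silent fallback to gpt-3.5 happens because an unrecognized kwarg like `model_name` can be ignored, leaving the default model in place. An illustrative (not llama_index) sketch of a constructor that surfaces the mistake instead:

```python
# Illustrative strict constructor (not the llama_index API): unknown kwargs
# raise instead of being silently dropped, so model_name="gpt-4" fails loudly
# rather than quietly defaulting to gpt-3.5-turbo.
def make_llm(**kwargs):
    allowed = {"model", "temperature", "max_tokens"}
    unknown = set(kwargs) - allowed
    if unknown:
        raise TypeError(f"unknown kwargs: {sorted(unknown)}")
    # Default model applies only when no valid model kwarg is given.
    return {"model": "gpt-3.5-turbo", **kwargs}
```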