service_context = ServiceContext.from_defaults(llm='local', chunk_size_limit=3000)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/llama_cpp/llama.py", line 900, in _create_completion raise ValueError( ValueError: Requested tokens (3993) exceed context window of 3900
I set llm=ChatOpenAI. Even though I indexed my entire data set, it seems it is not added to the context. Any ideas how I can get it to answer more accurately?

Answer: The context provided is about ... Therefore, the original answer remains the same.
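At query time only the top-k most similar chunks are inserted into the prompt, not the whole index, so raising similarity_top_k is one way to give the model more of the indexed data. A sketch against the pre-0.6 GPTSimpleVectorIndex query API used below (the question string is hypothetical):

# only the top-k most similar chunks are stuffed into the prompt;
# a larger k gives the LLM more of the indexed data to reason over
response = index.query(
    "What does my data set say about X?",  # hypothetical question
    similarity_top_k=5,                    # default is 1 in this API version
)
print(response)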
text-davinci-003. Here's the gist of the code:

# define LLM
from langchain import OpenAI
from llama_index import LLMPredictor, ServiceContext, GPTSimpleVectorIndex

# note: gpt-3.5-turbo is a chat model; LangChain's ChatOpenAI is the usual wrapper for it
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo", max_tokens=512))
# prompt_helper is a PromptHelper instance assumed to be defined earlier
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
index = GPTSimpleVectorIndex.from_documents(
    documents, service_context=service_context
)
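The snippet assumes documents was created earlier; in the same API version that is typically done with SimpleDirectoryReader, and the built index can be persisted so nothing is re-embedded on the next run (paths here are illustrative):

from llama_index import SimpleDirectoryReader, GPTSimpleVectorIndex

# load source files into Document objects (directory path is illustrative)
documents = SimpleDirectoryReader("./data").load_data()

# save the built index and reload it later without re-embedding
index.save_to_disk("index.json")
index = GPTSimpleVectorIndex.load_from_disk("index.json", service_context=service_context)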