@Logan M Have you seen this error before?

Calculated available context size -4316 was not non-negative
I just updated my llama-index and am getting this.
Usually that's related to some odd llm/service context settings
we are using openai and everything was working yesterday
but it broke just now when we updated to 0.9.39
can you share some llm/service context code?
I can probably spot the issue pretty quickly
yep just a second
import os

from llama_index import DocumentSummaryIndex, ServiceContext, get_response_synthesizer
from llama_index.embeddings import OpenAIEmbedding
from llama_index.llms import OpenAI


def get_llm(openai_api_key, max_tokens=8192):
    os.environ["OPENAI_API_KEY"] = openai_api_key
    return OpenAI(
        temperature=0.0, model="gpt-3.5-turbo", max_tokens=max_tokens
    )


def get_ds_index1(documents, llm, c_m, api_key):
    # Define the service context.
    embed_model = OpenAIEmbedding()
    service_context = ServiceContext.from_defaults(
        llm=llm,
        chunk_size=384,
        chunk_overlap=128,
        embed_model=embed_model,
        callback_manager=c_m,
    )

    response_synthesizer = get_response_synthesizer(
        response_mode="tree_summarize", use_async=False
    )

    # Build a summary index over the entire 'documents' list.
    temp_index = DocumentSummaryIndex.from_documents(
        documents,
        service_context=service_context,
        response_synthesizer=response_synthesizer,
    )

    return temp_index
so you've set max_tokens=8192 for gpt-3.5-turbo, but that model only has a 4096-token context window 🤔
That would be the issue
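
For context on where the -4316 comes from: llama-index reserves the completion budget (max_tokens) out of the model's context window before packing the prompt, so an oversized max_tokens drives the remaining space negative. A rough sketch of that check, with illustrative names rather than the exact library internals:

def available_context_size(context_window, num_prompt_tokens, num_output_tokens):
    # Space left for retrieved chunks after reserving the prompt
    # scaffolding and the requested completion budget (illustrative
    # names, not the exact llama-index internals).
    available = context_window - num_prompt_tokens - num_output_tokens
    if available < 0:
        raise ValueError(
            f"Calculated available context size {available} was not non-negative."
        )
    return available

# gpt-3.5-turbo has a 4096-token window; with max_tokens=8192 and
# roughly 220 prompt tokens this reproduces the error above:
# 4096 - 220 - 8192 == -4316
available_context_size(4096, 220, 8192)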
ahhh man sorry @Logan M
no worries 😎
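
A minimal fix, assuming you stay on gpt-3.5-turbo, is to cap max_tokens well inside the 4096-token window (the 512 below is an illustrative default, not a required value); switching to a larger-window model such as gpt-3.5-turbo-16k would also work:

def get_llm(openai_api_key, max_tokens=512):
    # Keep the completion budget small enough that prompt + output
    # fit inside gpt-3.5-turbo's 4096-token context window.
    os.environ["OPENAI_API_KEY"] = openai_api_key
    return OpenAI(
        temperature=0.0,
        model="gpt-3.5-turbo",
        max_tokens=max_tokens,
    )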