@Logan M Have you seen this error before
BioHacker
9 months ago
Have you seen this error before?
Calculated available context size -4316 was not non-negative
I just updated my llama-index and am getting this.
Logan M
9 months ago
Usually that's related to some odd llm/service context settings
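One way to see what "odd" means here is to inspect the LLM's metadata and compare the model's context window with the configured completion budget. A minimal sketch, not from the thread, assuming the llama-index 0.9.x API where OpenAI.metadata exposes context_window and num_output:

from llama_index.llms import OpenAI

# Illustrative check (llama-index 0.9.x API assumed): these two values
# feed the available-context-size calculation.
llm = OpenAI(model="gpt-3.5-turbo", max_tokens=8192)
print(llm.metadata.context_window)  # 4096 for gpt-3.5-turbo
print(llm.metadata.num_output)      # 8192, already larger than the window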
BioHacker
9 months ago
we are using openai and everything was working yesterday
BioHacker
9 months ago
but it got messed up just now when we updated to 0.9.39
Logan M
9 months ago
can you share some llm/service context code?
Logan M
9 months ago
I can probably spot the issue pretty quickly
BioHacker
9 months ago
yep just a second
ddashed
9 months ago
import os

# Imports assumed for llama-index 0.9.x, the version mentioned above.
from llama_index import DocumentSummaryIndex, ServiceContext, get_response_synthesizer
from llama_index.embeddings import OpenAIEmbedding
from llama_index.llms import OpenAI

def get_llm(openai_api_key, max_tokens=8192):
    os.environ["OPENAI_API_KEY"] = openai_api_key
    return OpenAI(
        temperature=0.0, model="gpt-3.5-turbo", max_tokens=max_tokens
    )

def get_ds_index1(documents, llm, c_m, api_key):
    # Defining the service context
    embed_model = OpenAIEmbedding()
    service_context = ServiceContext.from_defaults(
        llm=llm,
        chunk_size=384,
        chunk_overlap=128,
        embed_model=embed_model,
        callback_manager=c_m,
    )
    response_synthesizer = get_response_synthesizer(
        response_mode="tree_summarize", use_async=False
    )
    # Processing the entire 'documents'
    temp_index = DocumentSummaryIndex.from_documents(
        documents,
        service_context=service_context,
        response_synthesizer=response_synthesizer,
    )
    return temp_index
Logan M
9 months ago
so you've set max_tokens=8192 for gpt-3.5-turbo, but that model only has a 4096 context window 🤔
Logan M
9 months ago
That would be the issue
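The negative number is roughly the room left for the prompt: context_window − max_tokens − prompt tokens, i.e. 4096 − 8192 − ~220 ≈ −4316 here. A minimal sketch of the fix, based on the code above (the 512 value is illustrative, not from the thread): shrink max_tokens so the completion budget plus the prompt fits inside gpt-3.5-turbo's 4096-token window.

import os

from llama_index.llms import OpenAI

def get_llm(openai_api_key, max_tokens=512):
    # Illustrative budget: any value that keeps prompt tokens plus
    # max_tokens under the 4096-token context window avoids the error.
    os.environ["OPENAI_API_KEY"] = openai_api_key
    return OpenAI(
        temperature=0.0, model="gpt-3.5-turbo", max_tokens=max_tokens
    )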
BioHacker
9 months ago
ahhh man sorry @Logan M
Logan M
9 months ago
no worries 🙂