I'm overrunning the token count after increasing my chunk size and adding more system prompts
Darthus · last year
I'm overrunning the token count after increasing my chunk size and adding more system prompts. Can I see an exact output of the request to OpenAI and total token count so I can troubleshoot?
Logan M · last year
https://docs.llamaindex.ai/en/stable/module_guides/observability/observability.html#simple-llm-inputs-outputs
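
The linked page describes the "simple" global handler, which prints every LLM prompt and response so you can inspect exactly what is sent to OpenAI. A minimal sketch combining that with per-request token counting, assuming llama_index 0.10+ (older releases import from `llama_index` rather than `llama_index.core`) and a model name that `tiktoken` recognizes:

```python
import tiktoken
from llama_index.core import Settings, set_global_handler
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler

# Print every LLM input/output to stdout, so the exact prompt
# (system prompts plus retrieved chunks) sent to OpenAI is visible.
set_global_handler("simple")

# Count tokens with the same tokenizer the target model uses.
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
Settings.callback_manager = CallbackManager([token_counter])

# ... build your index and run a query as usual, then:
print("prompt tokens:", token_counter.prompt_llm_token_count)
print("completion tokens:", token_counter.completion_llm_token_count)
print("total tokens:", token_counter.total_llm_token_count)
```

If the prompt token count alone exceeds the model's context window, the printed prompt should show whether the overrun comes from the larger chunks or from the added system prompts.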