

Are there any plans for standardizing the metadata in responses?

Are there any plans for standardizing the metadata in responses? Parsing the raw response, for example to count tokens, will not scale across the many different providers.
18 comments
I would suggest capturing input_tokens_count, output_tokens_count,
stop_reason, and stop_sequence (to capture which stop word ended generation).
On the same thought, a stop-words parameter could be part of BaseLLM, as there are some deviations in parameter names when passed to some providers.
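The standardized fields suggested above could be collected in a small container like the following sketch (the class name and structure are illustrative, not an existing API in the library):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResponseMetadata:
    """Provider-independent response metadata, per the fields suggested above."""
    input_tokens_count: Optional[int] = None
    output_tokens_count: Optional[int] = None
    stop_reason: Optional[str] = None
    stop_sequence: Optional[str] = None  # which stop word ended generation, if any

    @property
    def total_tokens_count(self) -> Optional[int]:
        # Only meaningful when both counts were reported by the provider.
        if self.input_tokens_count is None or self.output_tokens_count is None:
            return None
        return self.input_tokens_count + self.output_tokens_count
```

Fields default to None so providers that report only a subset still fit the same shape.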
Not really any plans beyond what's already there (capturing the text/message)

It would be a large effort, and not every API provides those items in the response.
OK. I was going through the Bedrock API docs.

I thought to add token-count capturing for whichever providers support it, but after seeing the current implementation I thought an issue might arise.
Is it OK if I still add those and do a slight refactor?
Also, just so we are clear, the usage dict capture is for Anthropic, right?
which usage dict capture? In the token counter?
Mainly meant for OpenAI, plus any other API that returns usage in a similar format.
For Anthropic it's input_tokens and output_tokens
but in the same dict
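To illustrate the format difference being discussed: OpenAI-style usage dicts use prompt_tokens/completion_tokens, while Anthropic's use input_tokens/output_tokens. A hypothetical normalizer (names are mine, not the library's) could map both onto the suggested common keys:

```python
def normalize_usage(usage: dict) -> dict:
    """Map provider-specific usage dicts onto common keys.

    Handles the OpenAI-style schema (prompt_tokens/completion_tokens)
    and the Anthropic-style schema (input_tokens/output_tokens).
    Missing fields come back as None, as proposed in the thread.
    """
    return {
        "input_tokens_count": usage.get("prompt_tokens", usage.get("input_tokens")),
        "output_tokens_count": usage.get("completion_tokens", usage.get("output_tokens")),
    }

# OpenAI-style and Anthropic-style dicts normalize to the same shape:
normalize_usage({"prompt_tokens": 12, "completion_tokens": 34})
normalize_usage({"input_tokens": 12, "output_tokens": 34})
```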
Or maybe we can expose the TokenCounter in the handler and each integration can have its own implementation. wdyt?
Or this; defaults can be None or NaN for whichever fields are not provided.
That's not a bad idea tbh
I gave this some thought. I think the metadata one will be easier to set up and much more useful long term, e.g. for a directed ReAct agent.
We can have some glue code in the token counter in the interim to use the metadata if it is available, or else fall back to the original logic.
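That interim glue — prefer provider-reported metadata when present, otherwise fall back to the original tokenizer-based counting — could be sketched like this (function and key names are hypothetical):

```python
def count_output_tokens(response_metadata: dict, text: str, tokenizer) -> int:
    """Prefer the provider-reported token count; fall back to local tokenization."""
    reported = response_metadata.get("output_tokens_count")
    if reported is not None:
        return reported
    # Fallback: the original logic, counting tokens with a local tokenizer.
    return len(tokenizer(text))

# With metadata present, the reported count wins (whitespace split stands in
# for a real tokenizer here):
count_output_tokens({"output_tokens_count": 7}, "hello world", str.split)  # -> 7
# Without it, fall back to tokenizing the text:
count_output_tokens({}, "hello world", str.split)  # -> 2
```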