Updated 3 months ago

Hi everyone

Hi everyone,
I passed a LLMPredictor() in the service_context when instantiating a ComposableGraph. But no matter how many times I query using that graph, the "last_token_usage" of the LLMPredictor remains 0. Why is this? Is there a better way that I can keep track of token usage?
8 comments
Which index are you using?
Hi Mikko,
I'm currently using a ComposableGraph with a GPTPineconeIndex over a collection of GPTListIndex instances. I have also tried other configurations before, but whenever a ComposableGraph is involved, the counter has never updated.
the LLMPredictor is in the service_context argument
[Attachment: image.png]
Hmm hard to say, but you've confirmed the individual vector indices work?
Yes, the individual vector indices work. As you can see here, I first queried an individual document in "index_set", which I had already passed to ComposableGraph.from_indices in the previous screenshot. Then I queried the composable graph again, but the token usage did not change at all.
[Attachment: image.png]
Hi @Logan M ,
Any thoughts on why "last_token_usage" doesn't seem to be working when I query a graph? Any pointers would be greatly appreciated.
heh, was just talking about this in another thread. I think it has something to do with how last_token_usage gets reset, and the graph technically querying many indexes

What if you do llm_predictor.total_tokens_used? (This is the accumulated count for the lifetime of the llm_predictor, but it should work for graphs)
Thanks for the prompt response. I tried it out, and it works perfectly! Thanks πŸ™‚