Hi everyone, I passed an LLMPredictor() in the service_context when instantiating a ComposableGraph, but no matter how many times I query using that graph, the "last_token_usage" of the LLMPredictor remains 0. Why is this? Is there a better way to keep track of token usage?
Hi Mikko, I'm currently using a ComposableGraph with a GPTPineconeIndex root over a collection of GPTListIndexes. I have also tried other configurations before, but as long as there is a ComposableGraph involved, it has never worked.
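Here's a rough sketch of my setup (index names, summaries, API keys, and the Pinecone index name are placeholders; imports assume an older llama_index release where LLMPredictor, ServiceContext, and GPTPineconeIndex are exposed like this):

```python
import pinecone
from langchain.llms import OpenAI
from llama_index import GPTPineconeIndex, LLMPredictor, ServiceContext
from llama_index.indices.composability import ComposableGraph

pinecone.init(api_key="...", environment="...")
pinecone_index = pinecone.Index("my-index")

llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# index_set: previously built GPTListIndex objects, keyed by document name
graph = ComposableGraph.from_indices(
    GPTPineconeIndex,
    list(index_set.values()),
    index_summaries=[f"Summary for {name}" for name in index_set],
    pinecone_index=pinecone_index,
    service_context=service_context,
)

response = graph.query("Compare the documents")
print(llm_predictor.last_token_usage)  # always prints 0, which is the problem
```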
Yes, the individual vector indices work. As you can see here, I first queried an individual document in "index_set", which I had already passed to ComposableGraph.from_indices in the previous screenshot. Then I queried the composable graph again, but the token usage did not change at all.
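Roughly what the screenshots show (the document key and query text are placeholders):

```python
# Querying a single sub-index updates the predictor's counter...
response = index_set["doc_2021"].query("What are the main risk factors?")
print(llm_predictor.last_token_usage)  # non-zero here

# ...but querying the composable graph leaves it untouched.
response = graph.query("Compare risk factors across the documents")
print(llm_predictor.last_token_usage)  # unchanged / still 0
```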
heh, I was just talking about this in another thread. I think it has something to do with how last_token_usage gets reset, plus the fact that the graph technically queries many indexes.
What if you use llm_predictor.total_tokens_used? (This is the accumulated count over the lifetime of the llm_predictor, but it should work for graphs.)
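For example, something like this (graph is the ComposableGraph from above, the query text is a placeholder):

```python
# Take a before/after difference of the lifetime counter to get
# the tokens consumed by a single graph query.
before = llm_predictor.total_tokens_used
response = graph.query("Compare risk factors across the documents")
print(llm_predictor.total_tokens_used - before)
```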