Hi all, I am noticing some issues after upgrading from 0.6.26 to the latest version that I could use some advice on.

When I persist my newly created VectorStoreIndex with a new TokenCountingHandler set on the CallbackManager of the index's ServiceContext, the TokenCountingHandler is missing from the list of handlers when the index is loaded back. Is this expected behaviour?
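
Roughly what I am doing, as a minimal sketch (the paths and model name are just placeholders):

Plain Text
import tiktoken
from llama_index import (
    ServiceContext,
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)
from llama_index.callbacks import CallbackManager, TokenCountingHandler

# Handler that counts tokens during indexing and querying
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
service_context = ServiceContext.from_defaults(
    callback_manager=CallbackManager([token_counter])
)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
index.storage_context.persist(persist_dir="./storage")

# On reload, the callback manager no longer contains the TokenCountingHandler
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)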

Also, when I now use ResponseEvaluator or QueryResponseEvaluator on the response of the chat engine, I am getting an error about the missing source_nodes property on AgentChatResponse. The response type no longer appears to be shared between a query engine and a chat engine. Is there going to be a way to evaluate the response of the chat engine?
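
For reference, the failing pattern looks roughly like this (a sketch; the question text is a placeholder):

Plain Text
from llama_index.evaluation import ResponseEvaluator

evaluator = ResponseEvaluator(service_context=service_context)

# Evaluating a query engine response works: it carries source_nodes
query_response = index.as_query_engine().query("What is in the docs?")
print(evaluator.evaluate(query_response))

# Evaluating a chat engine response errors: AgentChatResponse has no source_nodes
chat_response = index.as_chat_engine().chat("What is in the docs?")
print(evaluator.evaluate(chat_response))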
5 comments
We could patch the evaluator to work with both, yeah. Good catch.

I'm not sure what you mean by the first issue. Do you mean when you load the index from disk? When you load, you need to make sure you pass in the service context again. Or just set a global service context to make things easier:

Plain Text
from llama_index import set_global_service_context

# service_context here is the ServiceContext you built with your CallbackManager
set_global_service_context(service_context)
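
With the global set, you shouldn't need to pass it explicitly when loading (rough sketch; the persist dir is a placeholder):

Plain Text
from llama_index import StorageContext, load_index_from_storage

# No explicit service_context: the global one is picked up on load
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)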
Thanks Logan for the answers. Should I file an issue on GitHub? As for the first issue, I was wrongly assuming that the ServiceContext was getting persisted to disk when saving the index, because I was mostly using the defaults and they were re-populated on index load. I tested changing some of the defaults in the LLM and PromptHelper and it did indeed revert back to the defaults. Thanks for the tip on the global service context, I will give that a try.
There is no way to persist the ServiceContext, is there? It does seem like a nice feature to have for things like token usage and LLM and prompt settings.
There is! 💪

Plain Text
from llama_index import load_index_from_storage

index = load_index_from_storage(storage_context, service_context=service_context)
Perfect! You rock 😁