
Updated last year

I noticed that my `service_context` does NOT update unless I restart the entire application

I noticed that my service_context does NOT update unless I restart the entire application, has anyone run into this issue before?

Here's my code:

Plain Text
    set_global_service_context(service_context)
    vector_store = PineconeVectorStore(
        pinecone_index=PINECONE_INDEX, namespace=namespace
    )
    storage_context = StorageContext.from_defaults(
        docstore=DOCUMENT_STORE,
        index_store=INDEX_STORE,
        vector_store=vector_store,
    )
    print(service_context)
    return VectorStoreIndex.from_vector_store(
        vector_store=vector_store,
        storage_context=storage_context,
        service_context=service_context,
    )
45 comments
hmmm kind of confused.

You set a global service context, and then also pass it directly into the index.

What are you expecting to happen?
Hey @Logan M so if I run this code twice and in the 2nd run I update the service_context with a different context_window, then it doesn't update. I've tried it with and without setting it globally.
Bit of background: we create another index if someone requests a query for another namespace. For free users we want it to use GPT-3.5 and for paid users GPT-4. But it seems the 2nd time I'm creating the index, the service_context isn't updated. It stays whatever was used on the 1st creation of the index.
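For context, the free/paid tier logic described above can be sketched in plain Python (pick_model_for_user is a hypothetical helper for illustration, not part of llama_index; model names are the ones mentioned in this thread):

```python
def pick_model_for_user(is_paid: bool) -> str:
    """Return the model name for a user's tier, as described above."""
    # Paid users get GPT-4, free users get GPT-3.5
    return "gpt-4-1106-preview" if is_paid else "gpt-3.5-turbo"

print(pick_model_for_user(True))   # gpt-4-1106-preview
print(pick_model_for_user(False))  # gpt-3.5-turbo
```

The per-request service_context would then be built with whichever model this returns, which only works if the new context actually takes effect on the second index creation.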
Is there any caching or something done internally?
No caching, probably a bug I'm guessing
context_window is the one arg not passed properly in the global context LOL
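To illustrate the shape of that bug, here is a library-free sketch of the global-fallback pattern (all names here are illustrative, not the actual llama_index internals): `from_defaults()` fills each omitted argument from the global context, and the reported bug amounts to `context_window` being left out of that fallback list, so the first value stuck.

```python
# Hypothetical sketch of a global-default fallback like set_global_service_context.
_global_ctx = None

def set_global_ctx(ctx):
    """Store a dict of defaults globally (stand-in for set_global_service_context)."""
    global _global_ctx
    _global_ctx = ctx

def from_defaults(context_window=None):
    """Build a context, falling back to the global value for omitted args."""
    if context_window is None and _global_ctx is not None:
        # The fix: include context_window in the fallback lookup.
        # Without this branch, the hardcoded default below always wins.
        context_window = _global_ctx.get("context_window")
    return {"context_window": context_window if context_window is not None else 3900}

set_global_ctx({"context_window": 8192})
print(from_defaults())      # picks up the global value
print(from_defaults(2048))  # an explicit arg still overrides
```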
Merged -- you can install from source to get the fix now, but should have it on pypi sometime tomorrow or Sunday
AHHH thanks a lot! : )
@Logan M just had a look at your code, is the llm kwarg not missing as well?
Actually just tried it with your fix and the model of the service_context is still not correct 😦
:PSadge: I only tested with setting context window lol
will make another pr
i cannot WAIT to remove the service context -- so messy :PSadge:
it's a real headache lol
ah you made a pr nice
yep 🙂
but idk if that actually fixes the full thing, I feel like if I pass an llm arg to service_context.from_defaults it still doesn't work
seems to work
Plain Text
from llama_index import set_global_service_context, ServiceContext
from llama_index.llms import MockLLM

# setup llm w/ 2
ctx = ServiceContext.from_defaults(llm=MockLLM(max_tokens=2))
print(ctx.llm)
set_global_service_context(ctx)

# validate 2
test_ctx = ServiceContext.from_defaults()
print(test_ctx.llm)

# setup new llm
new_ctx = ServiceContext.from_defaults(llm=MockLLM(max_tokens=4))
print(new_ctx.llm)
set_global_service_context(new_ctx)

# validate llm w/ 4
test_ctx = ServiceContext.from_defaults()
print(test_ctx.llm)


Which prints

Plain Text
... max_tokens=2
... max_tokens=2
... max_tokens=4
... max_tokens=4
huh one sec, then idk what I'm doing wrong
lemme know if my test is not representative
your tests look good. Testing again on my end, one sec.
for some reason when I print service_context.llm I get this:

Plain Text
callback_manager=<llama_index.callbacks.base.CallbackManager object at 0x14ef35b10> model='gpt-3.5-turbo' temperature=0.0 max_tokens=None additional_kwargs={} max_retries=3 timeout=60.0 default_headers=None api_key='sk-XXX' api_base='https://api.openai.com/v1' api_version=''
is that to be expected? 😮
yea that looks right
all the LLM kwargs
so for example. if you modified the temperature or model, you'd see that change
it's odd because I initialize the service context with GPT-4 like this:

Plain Text
    service_context = ServiceContext.from_defaults(
        llm=OpenAI(temperature=0, model_name="gpt-4-1106-preview"),
        callback_manager=CALLBACK_MANAGER,
        embed_model=embed_model,
        context_window=context_window,
    )
model_name= ---> model=
and I still see gpt-3.5-turbo in the print
lol :cryingskull: Easy mistake to make (I know langchain uses model_name)
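As an aside, a hypothetical sketch of why the misspelled keyword fails silently rather than raising an error (FakeOpenAI is purely illustrative, not the real llama_index class): constructors that accept `**kwargs` swallow the unknown `model_name=` keyword, so the default `model` survives untouched.

```python
class FakeOpenAI:
    """Illustrative stand-in for an LLM class that tolerates extra kwargs."""

    def __init__(self, model="gpt-3.5-turbo", **kwargs):
        self.model = model               # the real field
        self.additional_kwargs = kwargs  # typo'd kwargs land here unnoticed

# The caller thinks they asked for GPT-4, but the keyword is wrong:
llm = FakeOpenAI(model_name="gpt-4-1106-preview")
print(llm.model)  # gpt-3.5-turbo -- the intended model was silently dropped
```

Printing the llm (as done above with `service_context.llm`) is what surfaces this kind of mistake.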
well at least we fixed some bugs on the way.
works now! I'm so happy. Thanks a lot!
Nice! Yea happy to fix bugs along the way haha, glad we got it working
Yep, it all seems to work now. Finally. Happy thanksgiving!