Can I use service_context if I use index = VectorStoreIndex([])

Please stand by @Raz as I search the knowledge base. We are experiencing especially high traffic right now, so this might take longer than usual. :wave:
----------------------
Yes, you can use a ServiceContext when creating a VectorStoreIndex. The ServiceContext object bundles configuration such as the LLM and chunk size, and you can pass it as an argument when constructing the index.

Here is an example of how you can do it:

Plain Text
from llama_index import ServiceContext
from llama_index.llms import OpenAI

# define the chunk size and LLM that the service context will use
chunk_size = 1024
llm = OpenAI(temperature=0, model="gpt-4", streaming=True)
service_context = ServiceContext.from_defaults(chunk_size=chunk_size, llm=llm)


Then, you can use the service_context when creating the VectorStoreIndex:

Plain Text
from llama_index import VectorStoreIndex

# create an (initially empty) index that uses the service context
index = VectorStoreIndex([], service_context=service_context)
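
Starting from an empty index like this, you can insert documents afterwards, and they will be chunked and embedded according to the service_context settings. Here is a minimal sketch, assuming a legacy (pre-0.10) llama_index install:

Plain Text
from llama_index import Document

# insert a document into the initially empty index;
# chunking and embedding follow the service_context settings
index.insert(Document(text="Hello, world!"))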


Please note that VectorStoreIndex also accepts other parameters, such as storage_context, use_async, store_nodes_override, and show_progress, as well as additional keyword arguments. Provide these if your setup needs them; a sketch using storage_context follows below.
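
For example, here is a minimal sketch of passing a storage_context backed by Pinecone. This assumes a legacy (pre-0.10) llama_index with the v2 pinecone client; the index name "quickstart" and the credentials are placeholders:

Plain Text
import pinecone
from llama_index import VectorStoreIndex
from llama_index.storage import StorageContext
from llama_index.vector_stores import PineconeVectorStore

# connect to an existing Pinecone index ("quickstart" is a placeholder)
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
pinecone_index = pinecone.Index("quickstart")

# wrap the Pinecone index in a vector store and a storage context
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# pass both contexts to the index; show_progress displays a progress bar
index = VectorStoreIndex(
    [],
    service_context=service_context,
    storage_context=storage_context,
    show_progress=True,
)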

For more details, you can refer to the LlamaIndex documentation.
----------------------
Note: you can ask me a follow-up question by @ mentioning me again :speech_balloon:
----------------------