@Logan M - can you please help

there is no service context anymore (or at least, it's going to be removed), so we can't use that

The prompt helper is only used in response synthesizers. You can pass it there (and imo, changing these values isn't really helpful)

Or, you can set the chunk overlap etc. in the text splitter/node parser you are using for ingestion
my use case is: I let the user configure all those things if they choose to, so I need a more flexible way to set them
I was able to do that in the earlier version very easily. In which version will the service context be removed?
most likely v0.11.x

you can still pass it

index.as_query_engine(..., prompt_helper=prompt_helper)

Or for node parsers/text splitters
VectorStoreIndex.from_documents(..., transformations=[SentenceSplitter(chunk_size=512)])
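To make that concrete, here is a minimal sketch of passing both explicitly (the chunk_size, chunk_overlap, and PromptHelper values are illustrative, not recommendations, and `documents` is assumed to be already loaded):

Plain Text
from llama_index.core import VectorStoreIndex, PromptHelper
from llama_index.core.node_parser import SentenceSplitter

# user-configurable values, passed per component instead of via a service context
splitter = SentenceSplitter(chunk_size=512, chunk_overlap=64)
prompt_helper = PromptHelper(context_window=4096, num_output=256, chunk_overlap_ratio=0.1)

# splitter is applied at ingestion, prompt helper at query/synthesis time
index = VectorStoreIndex.from_documents(documents, transformations=[splitter])
query_engine = index.as_query_engine(prompt_helper=prompt_helper)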
@Logan M - How do you suggest llama_index could be used in a multithreaded context? It was easy to wrap everything in a service context and pass it into a thread; now I will have to rewrite a lot of code
because it seems all the components are now individual and will have to be passed to the base API
I had this line: SummaryIndex(documents, service_context). How do I change it now?
not sure why we keep going back and forth between these design choices.
the service context carried a lot of tech debt, and forced the initialization of stuff that wasn't even used. It was also never clear which components were using which models/settings. Tbh it has been like this for several months now.

Plain Text
from llama_index.core import SummaryIndex, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter

# summary index uses neither embeddings nor llm to construct
# transformations optional
index = SummaryIndex.from_documents(documents, transformations=[SentenceSplitter()])

# query engine for a summary index uses an llm
query_engine = index.as_query_engine(llm=llm)

# a vector index uses embeddings at construction time
# again, transformations optional
index = VectorStoreIndex.from_documents(documents, embed_model=embed_model, transformations=[...])

# query engine for a vector index uses llm and embeddings
query_engine = index.as_query_engine(llm=llm, embed_model=embed_model)


Whenever a model is not supplied, it pulls from whatever is set on the global Settings object
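For the multithreading concern above, one option is to set shared defaults once on the global Settings object and only pass per-request overrides explicitly. A minimal sketch, assuming OpenAI models (any LLM/embedding classes work) and that `documents` is already loaded:

Plain Text
from llama_index.core import Settings, VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

# set once at startup; any component not given a model explicitly
# falls back to these globals
Settings.llm = OpenAI(model="gpt-3.5-turbo")
Settings.embed_model = OpenAIEmbedding()

# per-thread/per-user overrides are passed directly to the component
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(llm=OpenAI(model="gpt-4"))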
Okay, thanks. I think I need to update my code more frequently to avoid this. Thanks a lot