
Updated 2 years ago

Is there any way to speed up the building

At a glance

The post asks if there are ways to speed up the building of documents, indexing, and querying. The comments suggest the following:

For indexing using a vector store, community members recommend increasing the embed_batch_size from the default of 10 to 50.
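To see why batching helps: each embedding request is one API round-trip, so the number of requests scales inversely with the batch size. A quick back-of-the-envelope check (pure Python, no llama_index required; the chunk count of 1,000 is an illustrative assumption):

```python
import math

def embedding_requests(num_chunks: int, batch_size: int) -> int:
    """Number of embedding API round-trips needed for num_chunks text chunks."""
    return math.ceil(num_chunks / batch_size)

# 1,000 chunks: default batch size of 10 vs. the suggested 50.
default_calls = embedding_requests(1000, 10)  # 100 requests
batched_calls = embedding_requests(1000, 50)  # 20 requests
```

Going from 10 to 50 cuts the request count (and the per-request latency overhead) by a factor of five; the total tokens embedded stay the same.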

For querying, community members suggest looking into the optimizer and enabling streaming to make the query feel faster.

One community member also mentions that if using a list index, setting use_async=True for tree_summarize may also help, though they are not certain.

There is no explicitly marked answer in the comments.

Useful resources
Is there any way to speed up the building of documents, index, and querying? Any of those parts?
5 comments
indexing (using a vector store) -> you can increase the embed_batch_size (default is 10)

Python
from llama_index import ServiceContext
from llama_index.embeddings.openai import OpenAIEmbedding
embed_model = OpenAIEmbedding(embed_batch_size=50)
service_context = ServiceContext.from_defaults(...., embed_model=embed_model)
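Putting it together, the batched embed model is passed in when the index is built. A minimal sketch, assuming a legacy (pre-0.10) llama_index release that still ships ServiceContext, with `documents` standing in for documents you have loaded elsewhere and an OpenAI API key configured:

```python
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding

# Embed 50 chunks per API request instead of the default 10.
embed_model = OpenAIEmbedding(embed_batch_size=50)
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# Assumed setup: documents loaded from a local directory.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
```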


querying, you could look into trying the optimizer -> https://gpt-index.readthedocs.io/en/latest/examples/node_postprocessor/OptimizerDemo.html

Enabling streaming will also make the query feel faster
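Streaming does not reduce total query time, but the first tokens appear almost immediately instead of after the full response is generated. A sketch under the same assumptions as above (`index` is a previously built index, OpenAI key configured):

```python
# Hypothetical sketch: stream tokens as they arrive rather than waiting
# for the complete answer.
query_engine = index.as_query_engine(streaming=True)
streaming_response = query_engine.query("What did the author do?")
streaming_response.print_response_stream()  # prints tokens incrementally
```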
It's a list index @Logan M
Thank you, will look into the optimizer.
Right right, you have the list index.

If you are still using tree_summarize, setting use_async=True may also help? I can't remember though, it's been a while since I tested that
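The suggestion above can be sketched as follows (assuming a legacy llama_index release where as_query_engine accepts these keyword arguments, and `index` is an existing list index; the query string is a placeholder):

```python
# Hypothetical sketch: run the per-node summarization LLM calls
# concurrently instead of one at a time.
query_engine = index.as_query_engine(
    response_mode="tree_summarize",  # summarize retrieved nodes bottom-up
    use_async=True,                  # issue the LLM calls concurrently
)
response = query_engine.query("Summarize the document.")
```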
Ooh, will give it a shot.