
Hey guys! Does anyone have any suggestions on how to add multiple documents to a simple vector index in parallel?
In other words, is it safe to call index.insert() asynchronously? If not, does index.from_documents() generate the document embeddings asynchronously, and is it possible to combine multiple indexes formed via index.from_documents?
I'm wondering because currently, the synchronous calls to OpenAI that create document embeddings (made inside index.insert()) are by far the rate-limiting operation in the app I'm building. I'd imagine I could speed up the whole process of updating an index by 5x or more if I could do this async 🙂
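For context, the slow path being described looks roughly like this: each index.insert() blocks on a separate OpenAI embeddings request, so N documents cost N sequential round trips. A minimal sketch, assuming the same pre-0.6 llama_index API used later in this thread:

from llama_index import Document, GPTSimpleVectorIndex

# each insert() embeds the new document with a blocking OpenAI call,
# so this loop makes one sequential API round trip per document
index = GPTSimpleVectorIndex.from_documents(documents=[])
for i, text in enumerate(["foo", "bar", "baz"]):
    index.insert(Document(text=text, doc_id=f"doc-{i}"))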
@jerryjliu0 is there async support for inserts? Or would it even be thread-safe to do async inserts? Trying to look at the code but I can't decide if that's feasible lol
Was trying to look through the code too haha. Do you know if there's a way to generate embeddings and then add them to the index after the fact, by linking each embedding with its document beforehand or something like that? If so, maybe I could generate the embeddings async, link them with their corresponding documents, and then add them to the index synchronously.
That's not a bad idea. Create space for the embeddings and then insert them under the hood when they are ready 🤔
If you are open to making a PR, that would be amazing! 👍
@Logan M after a deep dive into the codebase, I figured out how to do this!

from llama_index import Document, GPTSimpleVectorIndex, ServiceContext, LLMPredictor
from llama_index.embeddings.openai import OpenAIEmbedding
from langchain.chat_models import ChatOpenAI

txt1 = "foo"
txt2 = "bar"

# generate embeddings async; aget_queued_text_embeddings takes (id, text)
# pairs and returns the (ids, embeddings) tuple in queue order
embedder = OpenAIEmbedding()
ids, embeddings = await embedder.aget_queued_text_embeddings([("txt1", txt1), ("txt2", txt2)])

# create empty index
llm_predictor = LLMPredictor(llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
index = GPTSimpleVectorIndex.from_documents(documents=[], service_context=service_context)

# add docs as needed (they already carry embeddings, so insert() can
# skip the blocking OpenAI call)
docs = [
    Document(text=txt1, doc_id="txt1", embedding=embeddings[0]),
    Document(text=txt2, doc_id="txt2", embedding=embeddings[1]),
]
]
for d in docs:
    index.insert(d)
looks like all the tools are there already haha
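One caveat when running the snippet above: a bare await only works in an async context such as a Jupyter notebook. In a plain script you'd wrap the same flow in a coroutine; a sketch under the same API assumptions (build_index is just a hypothetical helper name, and aget_queued_text_embeddings is assumed to return its (ids, embeddings) tuple in queue order):

import asyncio

from llama_index import Document, GPTSimpleVectorIndex
from llama_index.embeddings.openai import OpenAIEmbedding

async def build_index(texts: dict) -> GPTSimpleVectorIndex:
    # embed all texts concurrently, then insert docs with embeddings attached
    embedder = OpenAIEmbedding()
    ids, embeddings = await embedder.aget_queued_text_embeddings(list(texts.items()))
    index = GPTSimpleVectorIndex.from_documents(documents=[])
    for doc_id, emb in zip(ids, embeddings):
        index.insert(Document(text=texts[doc_id], doc_id=doc_id, embedding=emb))
    return index

index = asyncio.run(build_index({"txt1": "foo", "txt2": "bar"}))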
Wow, that's crazy haha! Very cool!