Async

Does this call the OpenAI API asynchronously? It doesn't throw an error, but it's hard to tell locally whether it's truly async. If this doesn't do it, how can I use the AsyncOpenAI client with a LlamaIndex query engine?

Plain Text
model = OpenAI(model=self.LLM, async_http_client=openai.AsyncOpenAI())
query_engine = self.create_engine_from_index(index, model)
8 comments
The LLM is only called async if you use the async entry points (i.e. query_engine.aquery(...))
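For example, a minimal untested sketch of the async entry point (the toy document, question, and model name are placeholders; import paths assume llama-index >= 0.10):

Plain Text
import asyncio
from llama_index.core import Document, VectorStoreIndex
from llama_index.llms.openai import OpenAI

async def main():
    # toy in-memory index just to have something to query
    index = VectorStoreIndex.from_documents(
        [Document(text="Paris is the capital of France.")]
    )
    llm = OpenAI(model="gpt-4o-mini")
    query_engine = index.as_query_engine(llm=llm)
    # aquery() awaits the OpenAI call instead of blocking like query()
    response = await query_engine.aquery("What is the capital of France?")
    print(response)

asyncio.run(main())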
Will LlamaIndex create the AsyncOpenAI client under the hood?
thanks Logan! got it to work
@Logan M quick follow-up: how would I do index.insert_nodes(nodes) in an async way? I have an async Qdrant client, and if I try this right now it throws an error because client is not set - we use aclient instead

Plain Text
vector_store = QdrantVectorStore(
    aclient=self.client,
    collection_name=collection_name,
    enable_hybrid=enable_hybrid,
    sparse_doc_fn=sparse_doc_fn,
)
I don't see an async insert_nodes method on the index
I think the index is missing async methods tbh

You can embed and insert directly with async calls, though:

Plain Text
# pull the text that would be embedded for each node
texts = [node.get_content(metadata_mode="embed") for node in nodes]
# batch-embed asynchronously
embeddings = await embed_model.aget_text_embedding_batch(texts)
for node, embedding in zip(nodes, embeddings):
    node.embedding = embedding

# insert the pre-embedded nodes into the vector store asynchronously
await vector_store.async_add(nodes)
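Putting it all together, a rough end-to-end sketch (the in-memory Qdrant client, collection name, and sample text are placeholders; assumes llama-index >= 0.10 with the Qdrant vector store and OpenAI embedding packages installed):

Plain Text
import asyncio
import qdrant_client
from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.vector_stores.qdrant import QdrantVectorStore

async def main():
    # async Qdrant client; ":memory:" keeps the example self-contained
    aclient = qdrant_client.AsyncQdrantClient(location=":memory:")
    vector_store = QdrantVectorStore(aclient=aclient, collection_name="demo")
    embed_model = OpenAIEmbedding()

    # split a toy document into nodes
    nodes = SentenceSplitter().get_nodes_from_documents(
        [Document(text="Qdrant supports async inserts.")]
    )

    # embed asynchronously, then attach the embeddings to the nodes
    texts = [node.get_content(metadata_mode="embed") for node in nodes]
    embeddings = await embed_model.aget_text_embedding_batch(texts)
    for node, embedding in zip(nodes, embeddings):
        node.embedding = embedding

    # insert the pre-embedded nodes without touching the sync client
    await vector_store.async_add(nodes)

asyncio.run(main())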
will try it, thanks!