How do I use an AsyncOpenAI client with a LlamaIndex query engine?

```python
model = OpenAI(model=self.LLM, async_http_client=openai.AsyncOpenAI())
query_engine = self.create_engine_from_index(index, model)
```
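As far as I know, in recent `llama-index-llms-openai` releases `async_http_client` expects an `httpx.AsyncClient` (the HTTP transport), not an `openai.AsyncOpenAI` instance; the LLM constructs its own `openai.AsyncOpenAI` internally. A minimal sketch under that assumption, where the model name, document text, and query string are placeholders:

```python
import asyncio

import httpx
from llama_index.core import Document, VectorStoreIndex
from llama_index.llms.openai import OpenAI

async def main() -> None:
    # Assumption: `async_http_client` takes an httpx.AsyncClient, which the
    # LLM wraps in its own openai.AsyncOpenAI under the hood.
    llm = OpenAI(model="gpt-4o-mini", async_http_client=httpx.AsyncClient())

    # Toy index; in the original snippet this comes from
    # self.create_engine_from_index(index, model).
    index = VectorStoreIndex.from_documents([Document(text="hello world")])
    query_engine = index.as_query_engine(llm=llm)

    # aquery() drives the async code path, which is where the async
    # HTTP client is actually exercised.
    response = await query_engine.aquery("What does the document say?")
    print(response)

asyncio.run(main())
```

The key point is the last call: the async client only comes into play when you `await query_engine.aquery(...)` rather than calling `query_engine.query(...)` synchronously.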
How do I run `index.insert_nodes(nodes)` in an async way? I have an async Qdrant client, and if I try this right now it throws an error because the sync client is not set; we pass `aclient` instead:

```python
vector_store = QdrantVectorStore(
    aclient=self.client,
    collection_name=collection_name,
    enable_hybrid=enable_hybrid,
    sparse_doc_fn=sparse_doc_fn,
)
```
You can bypass the index and write to the vector store directly: embed the nodes yourself with the async embedding API, then call the store's async add:

```python
texts = [node.get_content(metadata_mode="embed") for node in nodes]
embeddings = await embed_model.aget_text_embedding_batch(texts)
for node, embedding in zip(nodes, embeddings):
    node.embedding = embedding
await vector_store.async_add(nodes)
```
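For completeness, a self-contained sketch of that flow, assuming a local Qdrant instance at `localhost:6333` and OpenAI embeddings; the URL, collection name, and node texts are placeholders:

```python
import asyncio

from qdrant_client import AsyncQdrantClient
from llama_index.core.schema import TextNode
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.vector_stores.qdrant import QdrantVectorStore

async def main() -> None:
    # Async-only setup: pass the async client via `aclient`, as in the
    # question above. Sync operations on this store will fail by design.
    vector_store = QdrantVectorStore(
        aclient=AsyncQdrantClient(url="http://localhost:6333"),
        collection_name="my_collection",
    )
    embed_model = OpenAIEmbedding()
    nodes = [TextNode(text="hello world"), TextNode(text="goodbye world")]

    # Embed outside the index, then write straight to the vector store.
    texts = [node.get_content(metadata_mode="embed") for node in nodes]
    embeddings = await embed_model.aget_text_embedding_batch(texts)
    for node, embedding in zip(nodes, embeddings):
        node.embedding = embedding
    await vector_store.async_add(nodes)

asyncio.run(main())
```

Since the nodes already carry embeddings when `async_add` runs, this avoids the sync-client code path that `index.insert_nodes(nodes)` would otherwise hit.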