garnizzle

Async

Does this call the OpenAI API asynchronously? It does not throw an error, but it's hard to tell locally whether it's truly async. If that does not do it, how can I use the AsyncOpenAI client with a LlamaIndex query engine?

Plain Text
model = OpenAI(model=self.LLM, async_http_client=openai.AsyncOpenAI())
query_engine = self.create_engine_from_index(index, model)
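
A minimal sketch of the async path, assuming llama-index 0.10+ import paths: the engine's aquery() coroutine is what exercises the async client LlamaIndex constructs internally, so a plain query() call stays synchronous regardless of which client you pass in. Note also that in recent versions the async_http_client parameter appears to expect an httpx.AsyncClient rather than an openai.AsyncOpenAI, which may be why the snippet above runs without throwing while not changing the execution model.

Plain Text
import asyncio

from llama_index.core import Document, VectorStoreIndex
from llama_index.llms.openai import OpenAI

async def main() -> None:
    # Tiny in-memory index so the sketch is self-contained.
    index = VectorStoreIndex.from_documents(
        [Document(text="LlamaIndex query engines expose async variants.")]
    )
    llm = OpenAI(model="gpt-4o-mini")  # model name is illustrative
    query_engine = index.as_query_engine(llm=llm)

    # aquery() awaits the async OpenAI client that LlamaIndex manages
    # internally; the synchronous query() never touches the async path.
    response = await query_engine.aquery("What do query engines expose?")
    print(response)

asyncio.run(main())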
8 comments
I am running into an issue trying to add nodes to a Qdrant vector store. Apparently the nodes need to already be embedded, but I cannot find where in the source code or the documentation it says how to use LlamaIndex to embed them. Here is my code; can someone please help me fix it? It's probably a one-liner I am missing. It fails at index.vector_store.add(nodes)


Plain Text
def create_engine_with_nodes(self, nodes):
    index = self.create_index()
    index.vector_store.add(nodes)

    model = OpenAI(model=self.LLM)
    query_engine = self.create_engine_from_index(index, model)
    return query_engine

def create_engine(self):
    index = self.create_index()
    model = OpenAI(model=self.LLM)
    query_engine = self.create_engine_from_index(index, model)
    return query_engine

def create_index(self):
    embedding_model = OpenAIEmbedding(
        model=OpenAIEmbeddingModelType.TEXT_EMBED_3_LARGE, dimensions=1024
    )

    if self.USE_QDRANT:
        client = qdrant_client.QdrantClient(
            os.environ.get("QDRANT_CLOUD_ENDPOINT"),
            api_key=os.environ.get("QDRANT_API_KEY"),
            grpc_port=6334,
            prefer_grpc=True,
            timeout=30,
        )

        print("Creating Qdrant Vector Store Index from nodes")
        vector_store = QdrantVectorStore(
            client=client, collection_name=self.collection_name, parallel=5
        )
        index = VectorStoreIndex.from_vector_store(
            vector_store=vector_store, embed_model=embedding_model
        )
    else:
        index = VectorStoreIndex(embed_model=embedding_model)

    return index
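
The missing one-liner is likely index.insert_nodes(nodes): it runs each node through the index's embed model before writing to the store, whereas index.vector_store.add(nodes) bypasses embedding entirely. If you do want to call the store directly, here is a minimal sketch of embedding first, assuming the nodes are TextNodes and reusing the OpenAIEmbedding from create_index (add_nodes_embedded is a hypothetical helper, not a library function):

Plain Text
from llama_index.core.schema import MetadataMode

def add_nodes_embedded(vector_store, nodes, embed_model):
    # Embed what the index would embed: node content plus
    # embed-visible metadata.
    texts = [node.get_content(metadata_mode=MetadataMode.EMBED) for node in nodes]
    embeddings = embed_model.get_text_embedding_batch(texts)

    for node, embedding in zip(nodes, embeddings):
        node.embedding = embedding

    # The nodes now carry embeddings, so the store accepts them.
    vector_store.add(nodes)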
29 comments
Is there a way to create a VectorStore that can query a collection/index in a vector DB that is already populated with data which was not ingested via LlamaIndex?

For example, something like this:
Plain Text
vector_store = WeaviateVectorStore(
    weaviate_client=client, index_name="test", text_key="source_text"
)

# setting up the storage for the embeddings
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# set up the index
index = VectorStoreIndex([], storage_context=storage_context)

but this tries to create the index for me under the hood. What I want is to point at an already-existing index called "test" that is already full of vectors, and to query against it using a query engine or a retriever.

If I do something like vector_store.index_name = "Test", it will point at my index, but it won't return any results. I suspect this is because the index is tracking nodes and I initialized the VectorStoreIndex without any.

Any help would be greatly appreciated!
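
A minimal sketch of the usual pattern for this, assuming llama-index 0.10+ import paths: VectorStoreIndex.from_vector_store() wraps an already-populated store and retrieves against it directly, so nothing is re-ingested and no local node tracking gets in the way. (Weaviate capitalizes class names, which is likely why "Test" points at the data while "test" does not.)

Plain Text
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.weaviate import WeaviateVectorStore

# Point at the existing, already-populated Weaviate class.
vector_store = WeaviateVectorStore(
    weaviate_client=client,  # your existing Weaviate client
    index_name="Test",       # Weaviate capitalizes class names
    text_key="source_text",
)

# from_vector_store() builds an index view over the store without
# ingesting anything, so queries hit the vectors already there.
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

query_engine = index.as_query_engine()
retriever = index.as_retriever(similarity_top_k=5)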
10 comments