Hi guys, I saw the new 0.11 update last night. Is it possible to do a similarity search by vector, just like the search-by-vector functions LangChain exposes for its respective vector databases?
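Not an answer from the thread, but a minimal sketch of how search-by-vector can be done in recent LlamaIndex versions: pass a QueryBundle that already carries the embedding so the retriever skips re-embedding the query string. The OpenAI embedding model and the toy documents here are assumptions for illustration only.

```python
# Minimal sketch (assumed setup): retrieve by a precomputed embedding.
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.schema import QueryBundle
from llama_index.embeddings.openai import OpenAIEmbedding  # assumed embedding model

embed_model = OpenAIEmbedding()
index = VectorStoreIndex.from_documents(
    [Document(text="hello world"), Document(text="goodbye world")],
    embed_model=embed_model,
)

query_embedding = embed_model.get_query_embedding("a greeting")  # your own vector
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve(QueryBundle(query_str="", embedding=query_embedding))
for n in nodes:
    print(n.score, n.node.get_content())
```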
Hi Team, I am using LanceDB. I have created an index and I want to query the LanceDB vector index using an embedding rather than a string, so I am using the following code. Please help me resolve the error:
db = lancedb.connect(lancedb_path)
# create a QueryBundle for the query and attach the precomputed embedding
embed_query = QueryBundle(query_str="unused", embedding=query)
# pass this object to the retriever to get nodes
return retriever.retrieve(embed_query)
ERROR --->> AttributeError: 'LanceDBVectorStore' object has no attribute 'vector_store'
The line that throws the error is: retriever = VectorIndexRetriever(index=self.db, similarity_top_k=3)
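One likely cause, sketched below under assumptions: VectorIndexRetriever's `index=` argument expects a VectorStoreIndex, not the raw LanceDBVectorStore, so wrapping the store in an index first lets the QueryBundle retrieval work. The table name and connection details here are placeholders, not from the thread.

```python
# Hedged sketch: build a VectorStoreIndex on top of the existing LanceDB table,
# then retrieve with the precomputed embedding. `lancedb_path` and `query`
# come from the question; the table name is a placeholder.
from llama_index.core import VectorStoreIndex
from llama_index.core.schema import QueryBundle
from llama_index.vector_stores.lancedb import LanceDBVectorStore

vector_store = LanceDBVectorStore(uri=lancedb_path, table_name="vectors")  # hypothetical table
# If no default embed model is configured, pass embed_model=... here as well.
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

retriever = index.as_retriever(similarity_top_k=3)
embed_query = QueryBundle(query_str="unused", embedding=query)  # embedding supplied, no re-embedding
nodes = retriever.retrieve(embed_query)
```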
Hi Team, in LangChain we have the following Runnable interface. Do we have anything similar to these runnable functions in LlamaIndex?
@abstractmethod
def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
    """Transform a single input into an output. Override to implement.

    Args:
        input: The input to the Runnable.
        config: A config to use when invoking the Runnable. The config supports
            standard keys like 'tags' and 'metadata' for tracing purposes,
            'max_concurrency' for controlling how much work to do in parallel,
            and other keys. Please refer to RunnableConfig for more details.
    """
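Not a one-to-one equivalent, but as a hedged pointer: LlamaIndex's QueryPipeline exposes a comparable "compose components, then run" interface. The import paths assume llama-index 0.10/0.11, and the model choice below is an assumption for illustration.

```python
# Hedged sketch of LlamaIndex's QueryPipeline, a rough analogue of chaining
# Runnables and calling invoke(); the LLM and prompt here are placeholders.
from llama_index.core import PromptTemplate
from llama_index.core.query_pipeline import QueryPipeline
from llama_index.llms.openai import OpenAI

prompt = PromptTemplate("Give a one-line summary of the topic: {topic}")
llm = OpenAI(model="gpt-3.5-turbo")

pipeline = QueryPipeline(chain=[prompt, llm], verbose=True)
output = pipeline.run(topic="vector databases")  # analogous to Runnable.invoke(input)
print(output)
```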
Guys, how do I query with a vector embedding?
llama_docs = []
for doc in self.documents:
    llama_docs.append(Document.from_langchain_format(doc))
self.db = VectorStoreIndex.from_documents(
    llama_docs,
    storage_context=storage_context,
    embed_model=self.embeddings,
)
This is my index, self.db. I want to search the db using a vector, something like self.db.query(query_embedding), where query_embedding is the embedding my application has already computed.
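A hedged sketch of one way to do that with the index above, going through the lower-level vector-store query interface instead of a query string. The `vector_store` property and which result fields are populated can vary by store and version, so treat this as a sketch rather than the official API.

```python
# Hedged sketch: query the underlying vector store of `self.db` with a raw
# embedding. `query_embedding` is the list[float] the application already has.
from llama_index.core.vector_stores.types import VectorStoreQuery

vs_query = VectorStoreQuery(query_embedding=query_embedding, similarity_top_k=3)
result = self.db.vector_store.query(vs_query)  # VectorStoreQueryResult

# Depending on the backing store, nodes and/or ids/similarities are populated.
for node_id, score in zip(result.ids or [], result.similarities or []):
    print(node_id, score)
```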
Hi guys, I am in the process of building a RAG application using LlamaIndex, and I already have a reference RAG application built by peers using LangChain. In both cases we use ChromaDB as the vector store. In LangChain there is a function VECTOR_STORE.search_by_vector(question_embeddings, k=3) to search by vector, where question_embeddings are the embedded values of the input. We now want a similarity search within our vector store that returns the closest 3 matches. How do I achieve this using LlamaIndex? #❓py-issues-and-help
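A hedged sketch of the closest LlamaIndex analogue to search_by_vector: querying the ChromaVectorStore directly with a VectorStoreQuery. The client path and collection name below are placeholders, not from the question.

```python
# Hedged sketch: nearest-neighbour search by a raw embedding against Chroma
# through LlamaIndex's vector-store interface. `question_embeddings` is the
# list[float] produced by your own embedding step.
import chromadb
from llama_index.core.vector_stores.types import VectorStoreQuery
from llama_index.vector_stores.chroma import ChromaVectorStore

client = chromadb.PersistentClient(path="./chroma_db")        # hypothetical path
collection = client.get_or_create_collection("rag_docs")      # hypothetical collection
vector_store = ChromaVectorStore(chroma_collection=collection)

query = VectorStoreQuery(query_embedding=question_embeddings, similarity_top_k=3)
result = vector_store.query(query)  # the 3 closest matches
for node, score in zip(result.nodes or [], result.similarities or []):
    print(score, node.get_content()[:80])
```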