Find answers from the community

MKhere
Joined September 25, 2024
Hi guys, I saw the new version 0.11 update last night. Is it possible to do a similarity search by vector, just like the search-by-vector functions available in LangChain for the respective vector databases?
27 comments
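For intuition, "search by vector" is just a nearest-neighbor ranking over stored embeddings. Here is a minimal pure-Python sketch of that operation (brute-force cosine similarity, top-k); the class and method names are made up for illustration and are not the API of LlamaIndex, LangChain, or any vector database.

```python
import math

class VectorTable:
    """Toy in-memory vector store for illustration only."""

    def __init__(self):
        self._rows = []  # list of (doc_id, embedding) pairs

    def add(self, doc_id, embedding):
        self._rows.append((doc_id, embedding))

    def search_by_vector(self, query_embedding, k=3):
        # Rank every stored row by cosine similarity to the query vector.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0
        scored = sorted(self._rows, key=lambda row: cosine(query_embedding, row[1]), reverse=True)
        return [doc_id for doc_id, _ in scored[:k]]

table = VectorTable()
table.add("a", [1.0, 0.0])
table.add("b", [0.0, 1.0])
table.add("c", [0.7, 0.7])
print(table.search_by_vector([1.0, 0.1], k=2))  # ['a', 'c']
```

Real vector databases replace the brute-force scan with an approximate index (HNSW, IVF, etc.), but the input/output shape is the same: a query vector in, the k closest document ids out.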
Hi team, I am using LanceDB. I have created an index and I want to query the LanceDB vector index using an embedding rather than a string, so I am using the following code. Help me resolve the error:

db = lancedb.connect(lancedb_path)

vector_store = LanceDBVectorStore(
    uri=lancedb_path,
    table_name=self.index_name,
    query_type="vector",
    connection=db,
    embedding=self.embeddings,
)

storage_context = StorageContext.from_defaults(vector_store=vector_store)

self.db = VectorStoreIndex.from_documents(
    self.documents,
    storage_context=storage_context,
)

retriever = VectorIndexRetriever(index=self.db, similarity_top_k=3)

# create a QueryBundle for the query and attach the precomputed embedding
embed_query = QueryBundle(query_str="unused", embedding=query)

# pass this object to the retriever to get nodes
return retriever.retrieve(embed_query)

ERROR: AttributeError: 'LanceDBVectorStore' object has no attribute 'vector_store', thrown at this line: retriever = VectorIndexRetriever(index=self.db, similarity_top_k=3)
3 comments
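The retrieval pattern the question is reaching for, retrieve by a precomputed embedding instead of a query string, can be sketched in plain Python. The QueryBundle and Retriever classes below only mirror the shape of that pattern; they are illustrative stand-ins, not the actual LlamaIndex classes.

```python
class QueryBundle:
    """Carries either a query string, a precomputed embedding, or both."""

    def __init__(self, query_str="", embedding=None):
        self.query_str = query_str
        self.embedding = embedding

class Retriever:
    def __init__(self, embed_fn, store, top_k=3):
        self.embed_fn = embed_fn
        self.store = store  # list of (doc, embedding) pairs
        self.top_k = top_k

    def retrieve(self, bundle):
        # Use the caller's embedding if supplied; otherwise embed the string.
        vec = bundle.embedding if bundle.embedding is not None else self.embed_fn(bundle.query_str)
        scored = sorted(self.store, key=lambda p: sum(a * b for a, b in zip(vec, p[1])), reverse=True)
        return [doc for doc, _ in scored[:self.top_k]]

fake_embed = lambda s: [float(len(s)), 1.0]  # stand-in for a real embedding model
store = [("short", [1.0, 1.0]), ("longer doc", [10.0, 1.0])]
r = Retriever(fake_embed, store, top_k=1)
print(r.retrieve(QueryBundle(embedding=[10.0, 1.0])))  # ['longer doc']
```

The key point is the branch in retrieve(): when an embedding is already attached to the bundle, the embedding model is skipped entirely, which is exactly what passing a precomputed vector is meant to achieve.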
@kapa.ai How do I pass an encoding to CSVReader(), e.g. encoding = 'utf-8-sig'?
5 comments
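Whether a given CSVReader exposes an encoding argument depends on the library and version, but here is a stdlib demo of what 'utf-8-sig' actually does: it strips the UTF-8 byte-order mark (BOM) that tools like Excel prepend, so the first header name is read cleanly.

```python
import csv
import os
import tempfile

# Write a CSV file that starts with a UTF-8 BOM, as Excel often does.
path = os.path.join(tempfile.gettempdir(), "bom_demo.csv")
with open(path, "wb") as f:
    f.write(b"\xef\xbb\xbfname,age\r\nalice,30\r\n")

# utf-8-sig silently consumes the BOM.
with open(path, newline="", encoding="utf-8-sig") as f:
    rows = list(csv.reader(f))
print(rows[0])  # ['name', 'age']

# Plain utf-8 leaks the BOM into the first header cell.
with open(path, newline="", encoding="utf-8") as f:
    rows_plain = list(csv.reader(f))
print(rows_plain[0])  # ['\ufeffname', 'age']
```

If the reader class does not accept an encoding parameter directly, opening the file yourself with encoding="utf-8-sig" and handing the file object (or the decoded text) to the reader achieves the same result.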
Hi team, in LangChain we have the following Runnable functions. Do we have anything similar to this, like Runnable functions, in LlamaIndex?

@abstractmethod
def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
    """Transform a single input into an output. Override to implement.

    Args:
        input: The input to the Runnable.
        config: A config to use when invoking the Runnable.
            The config supports standard keys like 'tags' and 'metadata' for
            tracing purposes, 'max_concurrency' for controlling how much work
            to do in parallel, and other keys. Please refer to RunnableConfig
            for more details.

    Returns:
        The output of the Runnable.
    """
2 comments
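For reference, the interface shape quoted above can be reproduced in a few lines of plain Python: an abstract invoke(input, config) that subclasses override. This is a sketch of the pattern only, not the LlamaIndex equivalent the question asks about.

```python
from abc import ABC, abstractmethod
from typing import Any, Optional

class Runnable(ABC):
    """Minimal stand-in for the Runnable pattern: one abstract invoke method."""

    @abstractmethod
    def invoke(self, input: Any, config: Optional[dict] = None) -> Any:
        """Transform a single input into an output. Override to implement."""

class Upper(Runnable):
    def invoke(self, input: Any, config: Optional[dict] = None) -> Any:
        # config could carry 'tags'/'metadata' for tracing; unused in this toy
        return str(input).upper()

print(Upper().invoke("hello"))  # HELLO
```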
Hi team, how do I translate the following LangChain code into LlamaIndex?

final_prompt = ChatPromptTemplate.from_template(context_prompt)

chain = final_prompt | MODEL.llm | StrOutputParser()

response = chain.invoke(
    {"input": question, "history": "\n".join(memory)},
    config={"callbacks": [MODEL.callback]},
)
7 comments
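The pipe style in that snippet is just operator overloading: each stage implements __or__ so that prompt | llm | parser builds a composed chain whose invoke() threads the value through the stages in order. A minimal sketch, with an illustrative Stage class and a fake LLM standing in for the real components:

```python
class Stage:
    """Toy composable stage: chain two stages with the | operator."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # (self | other).invoke(x) == other.fn(self.fn(x))
        return Stage(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Stage(lambda d: f"Q: {d['input']}\nHistory: {d['history']}")
llm = Stage(lambda text: {"content": f"answer to [{text.splitlines()[0]}]"})
parser = Stage(lambda msg: msg["content"])  # plays the role of StrOutputParser

chain = prompt | llm | parser
print(chain.invoke({"input": "hi", "history": ""}))  # answer to [Q: hi]
```

This also shows why the composed chain is called with invoke() rather than run(): the chain is itself just another stage exposing the same single entry point.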
@kapa.ai I see self.db.similarity_search(query, k, filter=filter) used in LangChain. How do I do the same similarity search in LlamaIndex?
6 comments
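Conceptually, similarity_search(query, k, filter=...) applies the metadata filter first and then ranks the surviving rows by similarity. A pure-Python sketch of those semantics; the function name and row layout are illustrative, not an API from either library:

```python
def similarity_search(rows, query_vec, k=3, filter=None):
    """rows: list of (doc, embedding, metadata) triples."""
    # 1. Keep only rows whose metadata passes the filter predicate.
    candidates = [r for r in rows if filter is None or filter(r[2])]
    # 2. Rank the survivors by dot-product similarity to the query vector.
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    candidates.sort(key=lambda r: dot(query_vec, r[1]), reverse=True)
    return [doc for doc, _, _ in candidates[:k]]

rows = [
    ("a", [1.0, 0.0], {"lang": "en"}),
    ("b", [0.9, 0.1], {"lang": "fr"}),
    ("c", [0.5, 0.5], {"lang": "en"}),
]
print(similarity_search(rows, [1.0, 0.0], k=2, filter=lambda m: m["lang"] == "en"))  # ['a', 'c']
```

Note that "b" is the second-closest vector overall but is excluded by the language filter, which is exactly the behavior the filter argument exists to provide.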
Guys, how do I query by vector embedding?

llama_docs = []
for doc in self.documents:
    llama_docs.append(Document.from_langchain_format(doc))

self.db = VectorStoreIndex.from_documents(
    llama_docs,
    storage_context=storage_context,
    embed_model=self.embeddings,
)

This is my index, self.db. I want to search the db using a vector, something like self.db.query(query_embedding), where query_embedding is an embedding my application has already computed from the input.
14 comments
Hi guys, I am in the process of building a RAG application using LlamaIndex, and I already have a reference RAG application built by peers using LangChain. In both cases we use ChromaDB as the vector store. LangChain has a function VECTOR_STORE.search_by_vector(question_embeddings, k = 3) to search by vector, where question_embeddings are the embedded values of the input. We now want a similarity search within our vector store that returns the closest 3 matches. How do I achieve this using LlamaIndex? #❓py-issues-and-help
6 comments