You can pass your query as a plain string, e.g. `retriever.retrieve("query")` or `query_engine.query("query")`. If you already have an embedding, wrap it in a `QueryBundle` instead:

```python
from llama_index.core import QueryBundle

# When an embedding is supplied, it is used directly for retrieval
query = QueryBundle(query_str="unused", embedding=[0.1, ...])

nodes = retriever.retrieve(query)
response = query_engine.query(query)
```
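For intuition, retrieval with a precomputed embedding boils down to a cosine-similarity top-k search over the stored vectors. A minimal pure-Python sketch of that idea (illustrative only, not LlamaIndex code):

```python
import math


def top_k_by_cosine(query_emb, stored, k=3):
    """Return the k stored (id, embedding) pairs most similar to query_emb."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    scored = [(node_id, cosine(query_emb, emb)) for node_id, emb in stored]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]


stored = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.7, 0.7])]
print(top_k_by_cosine([1.0, 0.1], stored, k=2))  # "a" ranks first
```

This is what `similarity_top_k` controls: how many of the highest-scoring nodes come back.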
You can use the `as_query_engine` method of the `VectorStoreIndex` class. This method returns a `QueryEngine` object that you can use to perform queries. Here's an example of how you can do this:

```python
from llama_index.core import VectorStoreIndex

# Assuming you have already created your VectorStoreIndex
index = VectorStoreIndex.from_vector_store(
    vector_store,
    storage_context=storage_context,
    embed_model=self.embeddings,
)

# Create a QueryEngine
query_engine = index.as_query_engine(similarity_top_k=3)

# Perform a query
response = query_engine.query("Your query here")
```

Replace `"Your query here"` with your actual query. The `similarity_top_k=3` argument specifies that you want the top 3 results.

Note that the `query` method expects a string query. If you have an embedding and want to perform a similarity search with it, you might need to use a different method or convert the embedding back to a string query.

You can use the `VectorStoreQuery` class to perform a similarity search in your vector store. Here's an example:

```python
from llama_index.core.vector_stores import VectorStoreQuery

# Assuming you have your query_embedding
query_embedding = ...  # Your query embedding here

# Create a VectorStoreQuery
vector_store_query = VectorStoreQuery(
    query_embedding=query_embedding,
    similarity_top_k=3,
)

# Perform the query
response = index.as_query_engine().query(vector_store_query)
```

Replace `...` with your actual query embedding. The `similarity_top_k=3` argument specifies that you want the top 3 results. The `query` method of the `QueryEngine` class can accept either a string or a `VectorStoreQuery` object. If you provide a `VectorStoreQuery` object, it will perform a similarity search using the provided embedding.
It seems you're trying to use the `query` method of the `QueryEngine` class with a `VectorStoreQuery` object. The error message suggests that the `query` method is expecting a string or a `QueryBundle` object, not a `VectorStoreQuery` object.

Based on the knowledge sources, it's not clear how to use a `VectorStoreQuery` object with the `query` method of the `QueryEngine` class; the examples provided in the knowledge sources use a string as the query. It could be that the `VectorStoreQuery` class is not compatible with the `query` method of the `QueryEngine` class, or that there's a specific way to use it that's not covered in the provided knowledge sources.

You might want to check the documentation for the `VectorStoreQuery` and `QueryEngine` classes for more information on how to use them. If the documentation doesn't help, you might want to reach out to the developers or the community for assistance.

In the meantime, you can try creating a `QueryEngine` from your `VectorStoreIndex` and then using the `query` method of the `QueryEngine`:

```python
from llama_index.core import VectorStoreIndex

# Assuming you have already created your VectorStoreIndex
index = VectorStoreIndex.from_vector_store(
    vector_store,
    storage_context=storage_context,
    embed_model=self.embeddings,
)

# Create a QueryEngine
query_engine = index.as_query_engine()

# Perform a query
response = query_engine.query("Your query here")
```

Replace `"Your query here"` with your actual query. Remember that the `query` method expects a string query. If you have an embedding and want to perform a similarity search with it, you might need to use a different method or convert the embedding back to a string query.
It seems that the `TrafilaturaWebReader` is not able to extract text from some of the URLs you're trying to parse. This is causing the `Document` creation to fail because it's receiving `None` as the text value, which is not a valid string.

You can guard against this by checking the extracted text before adding each document:

```python
import uuid

from llama_index.readers.web import TrafilaturaWebReader
from llama_index.core import Document


def new_uuids(file_name: str):
    """Generate fresh id metadata for a document."""
    return {"id_": uuid.uuid4().hex}


reader = TrafilaturaWebReader()
docs = []

for url in to_parse[:50]:
    try:
        loaded_docs = reader.load_data([url], show_progress=True)
        if loaded_docs and loaded_docs[0].text:  # Check if text was successfully extracted
            loaded_docs[0].metadata.update(new_uuids(url))
            docs.append(loaded_docs[0])
        else:
            print(f"Failed to extract text from {url}")
    except Exception as e:
        print(f"Error processing {url}: {str(e)}")

print(f"Successfully loaded {len(docs)} documents")
```

This skips any URL whose text could not be extracted instead of adding an invalid entry to the `docs` list. For more details on the `TrafilaturaWebReader`, you can refer to the LlamaIndex documentation on `TrafilaturaWebReader`.