Hey! Has anyone faced problems with the Weaviate vector store? I'm trying to query the database, but it always returns an empty response. I created the embeddings from LlamaIndex and query with the same parameters. Uploading the embeddings worked, but querying doesn't.
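For reference, this is roughly my setup (a simplified sketch; the URL and index name are placeholders, and I'm assuming the v0.10 package layout):

```python
import weaviate
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.weaviate import WeaviateVectorStore

client = weaviate.Client("http://localhost:8080")  # URL is a placeholder
vector_store = WeaviateVectorStore(weaviate_client=client, index_name="MyDocs")

# indexing: this part worked, the vectors show up in Weaviate
documents = SimpleDirectoryReader("./data").load_data()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# querying against the same store comes back empty
query_index = VectorStoreIndex.from_vector_store(vector_store)
response = query_index.as_query_engine().query("What is in the docs?")
print(response)  # -> empty response
```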
Hello! Is it possible to add an existing, locally saved vector database (created with a LlamaIndex vector index) to an external vector db provider? I saved the index as files with storage_context.persist, but I would like to transfer the vector db to an external provider without needing to recompute the whole index. Is that somehow possible, or not yet?
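Concretely, this is the direction I'm imagining (a sketch, not tested; I'm assuming the default__vector_store.json filename and the embedding_dict key from a v0.10 persist dir, and Weaviate is just an example target):

```python
import json

import weaviate
from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage
from llama_index.vector_stores.weaviate import WeaviateVectorStore

# 1. load the locally persisted index (the nodes live in the docstore)
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)
nodes = list(index.docstore.docs.values())

# 2. recover the stored embeddings from the persisted simple vector store
#    (keyed by node id; filename may differ between versions)
with open("./storage/default__vector_store.json") as f:
    embedding_dict = json.load(f)["embedding_dict"]
for node in nodes:
    node.embedding = embedding_dict[node.node_id]

# 3. push the nodes (embeddings already attached) into the external store,
#    so nothing needs to be re-embedded
client = weaviate.Client("http://localhost:8080")
external_store = WeaviateVectorStore(weaviate_client=client, index_name="Migrated")
external_store.add(nodes)

new_index = VectorStoreIndex.from_vector_store(external_store)
```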
Hey! I think there is an error with the sentence splitter. I'm trying to use the hierarchical node parser, but I repeatedly get this error. I tried modifying the chunk sizes, but that didn't help. What is the reason for this error?

```
RecursionError                            Traceback (most recent call last)
<ipython-input-16-dcee5b2f043b> in <cell line: 1>()
----> 1 nodes = node_parser.get_nodes_from_documents(documents)

9 frames
... last 1 frames repeated, from the frame below ...
```
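For context, this is roughly how I'm building the parser (the chunk sizes are just the combination I last tried):

```python
from llama_index.core import SimpleDirectoryReader
from llama_index.core.node_parser import HierarchicalNodeParser

documents = SimpleDirectoryReader("./data").load_data()

# three levels; I also tried other size combinations with the same result
node_parser = HierarchicalNodeParser.from_defaults(chunk_sizes=[2048, 512, 128])
nodes = node_parser.get_nodes_from_documents(documents)  # -> RecursionError
```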
@Logan M Hey! I saw you did some updates regarding the tool output of agents (return_direct). Is there a simple way to handle streaming responses of query engine tools? The ToolOutput pydantic model does not seem to be compatible with them (at least it just outputs the whole response as a string for now). I'm wondering if you guys already did something for that purpose; otherwise I'm happy to contribute.
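For context, this is the shape of the setup (simplified; `index` stands for my existing vector index):

```python
from llama_index.core.tools import QueryEngineTool

# streaming query engine, so .query() returns a StreamingResponse
query_engine = index.as_query_engine(streaming=True)

tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="docs",
    description="Answers questions over my documents",
    return_direct=True,  # the agent should hand the tool output straight back
)
# ToolOutput currently stringifies the streaming response
# instead of yielding the tokens
```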
Hey! I've finally decided to migrate my project to v0.10, but I can't find the cause of the following error message. When I try to run `llamaindex-cli upgrade <target folder>`, I get this:

```
"/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/core/utilities/token_counting.py", line 6, in <module>
    from llama_index.core.llms import ChatMessage, MessageRole
ImportError: cannot import name 'ChatMessage' from 'llama_index.core.llms' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/core/llms/__init__.py)
```

I tried creating a new env from scratch (with conda, pyenv, and also python venv), but I repeatedly get this error. Is llamaindex-cli broken, or is there another way to resolve this problem? I couldn't find anything here or in the GitHub repo.
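One quick check I've been using (this is just the failing import from the traceback, run directly):

```python
# run in the same environment as llamaindex-cli; if this raises the same
# ImportError, the environment itself is broken and the problem isn't
# specific to the upgrade command
from llama_index.core.llms import ChatMessage, MessageRole

print(ChatMessage, MessageRole)
```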
Is there an easy way to run several instances of the same LlamaIndex object in parallel? I'm trying to run 3 instances of sub-question query engines at the same time, but I always run into errors or infinite loops with the event loops. Defining an async function and running it with asyncio.run made it just as slow as sequential execution.
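Here's roughly what I expected to work; I assumed `asyncio.gather` over `aquery` would overlap the three calls (`engine_a/b/c` stand for my three sub-question engines, setup omitted):

```python
import asyncio

import nest_asyncio

nest_asyncio.apply()  # needed in notebooks where a loop is already running


async def run_all(engines, question):
    # launch all aquery calls at once instead of awaiting them one by one
    tasks = [engine.aquery(question) for engine in engines]
    return await asyncio.gather(*tasks)


responses = asyncio.run(run_all([engine_a, engine_b, engine_c], "my question"))
```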
Did anyone try to use the recursive retriever with embedded tables on several hundred documents? If someone has a large number of complex documents, each containing different embedded tables, I don't think the recursive retriever can be used effectively. Am I right, or have I just misunderstood something? We could create a complex index structure for each of the docs separately, but it wouldn't be efficient to use an LLM to decide among hundreds of possibilities which one to use. Some kind of embedding-based routing would be a great idea in my opinion. I'm currently working on it (rough sketch below), but let me know if there is a better way.
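This is the embedding-based routing I have in mind, hand-rolled rather than an existing LlamaIndex API (`doc_engines` is a hypothetical dict mapping a one-paragraph summary of each document to that document's query engine):

```python
import numpy as np
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding()

# embed one summary per document once, up front, and normalize
summaries = list(doc_engines.keys())
summary_embs = np.array([embed_model.get_text_embedding(s) for s in summaries])
summary_embs /= np.linalg.norm(summary_embs, axis=1, keepdims=True)


def route(question: str):
    # cosine similarity between the query and every document summary
    q = np.array(embed_model.get_query_embedding(question))
    q /= np.linalg.norm(q)
    best = summaries[int(np.argmax(summary_embs @ q))]
    # only the selected document's (table-aware) engine actually runs
    return doc_engines[best].query(question)
```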
@Logan M Hey! Is there a method I'm not yet familiar with that uses the MultiStepQueryEngine approach (transform the query, then retrieve and synthesize again), but where we can plug in several different retrievers? I was thinking about modifying the prompt templates of the Guidance question generator in a SubQuestionQueryEngine, but I'm curious whether that approach is on the right track.
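What I have so far with the sub-question route (a sketch; `vector_index` and `keyword_index` are placeholders, each tool wrapping a different retriever via its own query engine):

```python
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.core.tools import QueryEngineTool

tools = [
    QueryEngineTool.from_defaults(
        query_engine=vector_index.as_query_engine(),
        name="vector",
        description="Semantic search over the docs",
    ),
    QueryEngineTool.from_defaults(
        query_engine=keyword_index.as_query_engine(),
        name="keyword",
        description="Keyword lookup over the docs",
    ),
]

# the question generator decomposes the query across the tools;
# this is where I'd swap in the Guidance generator with modified prompts
engine = SubQuestionQueryEngine.from_defaults(query_engine_tools=tools)
response = engine.query("my multi-step question")
```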
Hey! Is anyone using LanceDB as a vector store? I'm struggling to keep the metadata of my documents. I can find it in the LanceDB table as separate columns, but when querying the db with LlamaIndex, the metadata fields are missing. I saw others had this problem before, but I couldn't find the solution.
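Minimal version of what I'm doing (paths and field names are placeholders):

```python
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.vector_stores.lancedb import LanceDBVectorStore

vector_store = LanceDBVectorStore(uri="./lancedb", table_name="docs")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

docs = [Document(text="quarterly report text...", metadata={"source": "report.pdf", "page": 3})]
index = VectorStoreIndex.from_documents(docs, storage_context=storage_context)

# the metadata shows up as columns in the LanceDB table,
# but here it comes back empty
nodes = index.as_retriever().retrieve("report")
print(nodes[0].node.metadata)
```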
Hey! Does anyone have experience with the retrieval speed of the AutoMergingRetriever? It takes about 1 min/query for me, with a Pinecone vector db. I'm wondering whether it's only this slow for me.
Hey! Has anyone faced speed issues with custom embedding models? I use the instructor-xl model to create a vector db with LlamaIndex, but it is extremely slow: 23 vectors take about 8 minutes. I'm running on Colab and using the HF LangChain wrapper.
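This is the setup, roughly (via the LangChain wrapper; I'm assuming the current package layout here, the import paths may differ on older versions):

```python
from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.langchain import LangchainEmbedding

# instructor-xl via the LangChain wrapper, plugged into LlamaIndex
Settings.embed_model = LangchainEmbedding(
    HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)  # ~8 min for 23 vectors
```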
Is anyone facing problems with custom embedding models? I have tried instructor embedding models and now sentence-transformers, but none of them worked properly, even when running the example notebooks. I always get different kinds of value errors, like: `ValueError: "HuggingFaceHubEmbeddings" object has no field "callback_manager"`. If you have a currently working notebook, I would really appreciate the code.
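For reference, the direction I'm trying next is the native HuggingFace wrapper instead of going through LangChain (a sketch, assuming the llama-index-embeddings-huggingface package; I haven't confirmed it avoids the error):

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# native wrapper around sentence-transformers, bypassing the LangChain
# HuggingFaceHubEmbeddings object that triggers the callback_manager error
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
```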