Find answers from the community

akvn
Joined September 25, 2024
Hi all, I'm getting the error ModuleNotFoundError: No module named 'llama_index.agent' when running get_nodes_from_documents with a markdown parser. It used to work fine, but I'm switching from an LLM served through Ollama to Azure OpenAI. Note that the LLM itself works fine, as I tested it. Any idea what might be causing this?
4 comments
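A minimal sketch for a setup like the one above, assuming llama-index >= 0.10 with its namespaced packages (a ModuleNotFoundError on 'llama_index.agent' often points to a mix of the legacy monolithic package and the newer namespaced ones in the same environment). The deployment name, endpoint, and data path are placeholders, and the markdown parser itself does not call the LLM, so the Azure switch alone should not break node parsing.

```python
# Sketch, not the asker's code: clean namespaced install, Azure OpenAI as the LLM,
# then markdown node parsing.
# pip install llama-index-core llama-index-llms-azure-openai
from llama_index.core import Settings, SimpleDirectoryReader
from llama_index.core.node_parser import MarkdownNodeParser
from llama_index.llms.azure_openai import AzureOpenAI

Settings.llm = AzureOpenAI(
    engine="my-gpt4-deployment",                       # hypothetical Azure deployment name
    model="gpt-4",
    api_key="...",                                     # placeholder key
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_version="2024-02-15-preview",
)

documents = SimpleDirectoryReader("./data").load_data()  # hypothetical data directory
nodes = MarkdownNodeParser().get_nodes_from_documents(documents)
print(len(nodes))
```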
Any idea why this is taking 18 minutes for a simple query? I'm using Mistral through Ollama locally.
5 comments
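A small timing sketch, assuming llama-index-llms-ollama and a local Ollama server, to separate raw model latency (often CPU-bound local inference) from retrieval and indexing time; the prompt and timeout are arbitrary.

```python
# Hedged sketch: time a bare completion against the local Ollama server so the
# raw LLM latency can be compared with the full query-engine latency.
import time
from llama_index.llms.ollama import Ollama

llm = Ollama(model="mistral", request_timeout=600.0)  # generous timeout for CPU-only machines

start = time.time()
response = llm.complete("Summarize the benefits of unit testing in two sentences.")
print(response.text)
print(f"Raw LLM round trip: {time.time() - start:.1f}s")
```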
Hey @Logan M, I saw that you tackled a similar issue to mine in https://github.com/run-llama/llama_index/issues/9277. Any idea where I can find messages_to_prompt & completion_to_prompt when using Llama 3 & Mistral through the HF Inference API?
22 comments
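For context, messages_to_prompt and completion_to_prompt are not shipped per model; they are plain callables you write yourself and pass to the LLM constructor. Below is a hedged sketch for Llama 3 using Meta's published chat template; the model name and token are placeholders, Mistral would need its own [INST]-style formatter, and whether the chat path applies these callables depends on how the inference task is configured, so treat this as an assumption rather than confirmed behavior.

```python
# Hedged sketch: hand-written prompt formatters for Llama 3, passed to the LLM.
from llama_index.llms.huggingface import HuggingFaceInferenceAPI

def messages_to_prompt(messages):
    # Llama 3 chat template (per Meta's published format)
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m.role.value}<|end_header_id|>\n\n{m.content}<|eot_id|>"
        )
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

def completion_to_prompt(completion):
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{completion}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

llm = HuggingFaceInferenceAPI(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed hosted variant
    token="hf_...",                                    # placeholder HF token
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
)
```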
I'm using FlagEmbeddingReranker but I am getting an error:
Cannot import FlagReranker package, please install it: pip install git+https://github.com/FlagOpen/FlagEmbedding.git
Note that I installed the reranker earlier and also tried the GitHub install command provided, but I'm still getting the same error.
14 comments
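A hedged sanity check: this import error frequently means FlagEmbedding landed in a different environment or kernel than the one running LlamaIndex, so importing it directly in the same interpreter is a quick way to tell. The model name and top_n below are placeholders.

```python
# Sketch: verify FlagEmbedding is importable in the *same* environment that runs
# LlamaIndex, then build the reranker.
# pip install FlagEmbedding llama-index-postprocessor-flag-embedding-reranker
from FlagEmbedding import FlagReranker  # fails here if the install went to another env
from llama_index.postprocessor.flag_embedding_reranker import FlagEmbeddingReranker

reranker = FlagEmbeddingReranker(
    model="BAAI/bge-reranker-large",  # assumed model; any FlagReranker checkpoint works
    top_n=5,
)
```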
Hey guys, I created an index for each document and saved each one in its own directory. How do I load them all into a single index or query engine?
5 comments
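One hedged option, assuming each index was persisted with index.storage_context.persist(persist_dir=...): reload every index, pull its nodes out of the docstore, and build one combined index (which will re-embed those nodes). The directory layout and names below are assumptions.

```python
# Sketch: merge several persisted per-document indexes into one index/engine.
import os
from llama_index.core import StorageContext, VectorStoreIndex, load_index_from_storage

base_dir = "indexes"  # hypothetical layout: indexes/<doc_name>/
all_nodes = []
for name in os.listdir(base_dir):
    storage_context = StorageContext.from_defaults(persist_dir=os.path.join(base_dir, name))
    index = load_index_from_storage(storage_context)
    all_nodes.extend(index.docstore.docs.values())  # pull the stored nodes back out

combined_index = VectorStoreIndex(nodes=all_nodes)
query_engine = combined_index.as_query_engine()
```

An alternative that avoids re-embedding is to keep each loaded index as its own query engine and compose them behind a router or sub-question query engine.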
@kapa.ai I'm getting a parse error (I used LlamaParse and now get_nodes_from_documents fails): Error tokenizing data. C error: EOF inside string starting at row 0.
10 comments
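A hedged sketch of the usual LlamaParse-to-nodes pipeline. That "EOF inside string" message is a pandas tokenizer error and typically surfaces when a malformed table in the parsed markdown is being read, so printing the raw markdown of the failing document is a reasonable first step. The file name is a placeholder, LLAMA_CLOUD_API_KEY is assumed to be set, and an LLM is assumed to be configured in Settings for the table summaries.

```python
# Sketch: parse with LlamaParse, inspect the markdown, then build nodes.
from llama_parse import LlamaParse
from llama_index.core.node_parser import MarkdownElementNodeParser

documents = LlamaParse(result_type="markdown").load_data("report.pdf")  # hypothetical file
print(documents[0].text[:2000])  # eyeball the tables that will be tokenized

nodes = MarkdownElementNodeParser().get_nodes_from_documents(documents)
print(len(nodes))
```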
@kapa.ai I have a chat engine based on a RAG framework. How do I extract the references the bot used in its response?
5 comments
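A hedged sketch, assuming a standard LlamaIndex chat engine: the retrieved chunks ride along on response.source_nodes, which is the usual place to pull references from. The data directory, chat mode, and question are placeholders.

```python
# Sketch: build a context chat engine, then read the source nodes off the response.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

index = VectorStoreIndex.from_documents(SimpleDirectoryReader("./data").load_data())
chat_engine = index.as_chat_engine(chat_mode="context")

response = chat_engine.chat("What does the report say about Q3 revenue?")  # hypothetical question
for node_with_score in response.source_nodes:
    source_file = node_with_score.node.metadata.get("file_name")
    print(f"{source_file} (score={node_with_score.score})")
    print(node_with_score.node.get_content()[:200])
```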
Hey, I am trying to use the HuggingFaceInferenceAPI through LlamaIndex. It works fine for Mistral, but for Llama I'm getting the error below. Note that I have access to the model and my API key is sent with the request.
403 Forbidden: None.
Cannot access content at: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-8B.
If you are trying to create or update content, make sure you have a token with the write role.
The model meta-llama/Meta-Llama-3-8B is too large to be loaded automatically (16GB > 10GB). Please use Spaces (https://huggingface.co/spaces) or Inference Endpoints (https://huggingface.co/inference-endpoints).
27 comments
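A hedged workaround sketch, based only on the error text above: the base Meta-Llama-3-8B checkpoint is too large for the serverless Inference API, so pointing at a hosted variant such as the Instruct model (with a token from an account that has accepted the Llama 3 license), or a dedicated Inference Endpoint, is the usual path. The model name and token below are assumptions.

```python
# Sketch: swap the base checkpoint for an instruct variant that the serverless API
# can serve; the token must belong to an account with Llama 3 access granted.
from llama_index.llms.huggingface import HuggingFaceInferenceAPI

llm = HuggingFaceInferenceAPI(
    model_name="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed hosted variant
    token="hf_...",                                    # placeholder token
)
print(llm.complete("Hello!").text)
```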
Why are there no screenshots in the repo showing what most of these look like?
2 comments