Sebitas
Joined September 25, 2024
I have a problem when loading more than one PDF document. I don't know if my implementation is correct, since every time I have run the program it gives me the same error: with a single PDF document it works, but implemented this way with two files it stops working.
9 comments
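
A minimal sketch of the multi-file case, using the legacy llama_index API; the file names are placeholders, and the key point is passing every PDF to one SimpleDirectoryReader call so a single index is built over all of them:

from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load both PDFs in one reader call so they end up in one document list
documents = SimpleDirectoryReader(input_files=["doc1.pdf", "doc2.pdf"]).load_data()

# Build a single index over the combined documents
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("Summarize both documents."))
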
I am trying to implement the "Metadata References: Summaries + Generated Questions referring to a bigger chunk" demo while loading PDF documents. This is my implementation; however, when the code executes it stays waiting and never moves past loading the PDF data.
3 comments
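
For reference, a hedged sketch of that demo's extractor setup under the legacy llama_index API (module paths may differ in newer releases). Each chunk triggers several LLM calls, so large PDFs can sit apparently frozen for a long time; show_progress makes that visible:

from llama_index import SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.node_parser.extractors import (
    MetadataExtractor,
    QuestionsAnsweredExtractor,
    SummaryExtractor,
)

# Summaries and generated questions are produced by LLM calls per chunk
metadata_extractor = MetadataExtractor(
    extractors=[
        SummaryExtractor(summaries=["self"]),
        QuestionsAnsweredExtractor(questions=3),
    ],
)
node_parser = SimpleNodeParser.from_defaults(metadata_extractor=metadata_extractor)

documents = SimpleDirectoryReader(input_files=["report.pdf"]).load_data()
# show_progress surfaces the per-chunk LLM calls instead of silent waiting
nodes = node_parser.get_nodes_from_documents(documents, show_progress=True)
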
I am trying to print the references of the documents the chat bases its responses on using display_source_node(); however, it does not print correctly. Does anyone know why this is?
4 comments
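
One possible cause, offered as a guess: display_source_node() renders Markdown through IPython, so it only shows output inside a notebook; in a plain script nothing visible is printed. A sketch of both styles, with the PDF name as a placeholder:

from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.response.notebook_utils import display_source_node

documents = SimpleDirectoryReader(input_files=["doc.pdf"]).load_data()
query_engine = VectorStoreIndex.from_documents(documents).as_query_engine()
response = query_engine.query("What is this document about?")

for source_node in response.source_nodes:
    # Renders via IPython display, so it only appears in a notebook
    display_source_node(source_node, source_length=500)
    # Script-friendly fallback: print the node text and score directly
    print(source_node.node.get_content()[:500], source_node.score)
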
I am trying to make a simple bot that bases its answers on a PDF file that I provide; however, when I run the code the bot gives me wrong answers, as if it had not loaded the information. I suspect the error may be in the indexing process, but I have tried different ways to do it and the problem still occurs.
3 comments
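
A sketch of a sanity check worth running before blaming the index: print what the reader actually extracted, since a scanned (image-only) PDF can load as empty text, and the bot then answers from the LLM's own knowledge instead. File name and question are placeholders:

from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader(input_files=["manual.pdf"]).load_data()
# If this prints 0 documents or empty text, the PDF was never really read
print(len(documents), repr(documents[0].text[:200]))

index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(similarity_top_k=3)
print(query_engine.query("A question whose answer is in the PDF"))
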
Sebitas · Storage

Hello, I have a question about StorageContext: is there some way to use a local StorageContext instead of the default, which uses an OpenAI key?
5 comments
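
For what it's worth, StorageContext itself persists locally; the OpenAI key is pulled in by the default LLM and embedding model in ServiceContext. A sketch of a fully local setup under the legacy API, where the embedding model name is just one possible choice:

from llama_index import (
    ServiceContext,
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

# Local embeddings and no LLM, so no OpenAI key is needed
service_context = ServiceContext.from_defaults(
    llm=None,
    embed_model="local:BAAI/bge-small-en-v1.5",
)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
index.storage_context.persist(persist_dir="./storage")  # save index to disk

# Later runs: reload from disk instead of rebuilding
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)
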
Hello everyone, I'm trying to use the Ollama model mistral with llama-index, but I'm getting this error when I try to run a simple script:

from llama_index.llms import Ollama

llm = Ollama(model='mistral')

resp = llm.complete("Why did Rome grow? Be concise.")
print(resp)
7 comments
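
Without the traceback it's hard to say for sure, but two common causes are the Ollama server not running (or the model not pulled) and the default request timeout being too short. A hedged variant of the same script:

from llama_index.llms import Ollama

# Requires a running server: `ollama serve`, then `ollama pull mistral`
llm = Ollama(model="mistral", request_timeout=120.0)

resp = llm.complete("Why did Rome grow? Be concise.")
print(resp)
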
Hello, I was developing a chatbot application with a local LLM, focused on acquiring data from PDF documents. I was wondering whether it is possible through llama-index to configure this chat model so that the responses provide the location or the name of the document from which the answer to a given question was obtained, or whether, on the contrary, I would need some other framework. I'd appreciate your advice.
3 comments
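
llama-index can do this without another framework: the reader can attach the originating file name to each document's metadata, and that metadata travels with the source nodes returned alongside every response. A minimal sketch under the legacy API (newer versions may populate file_name by default):

from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Attach the file name explicitly so it is present regardless of version defaults
documents = SimpleDirectoryReader(
    "pdfs",
    file_metadata=lambda path: {"file_name": path},
).load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

response = query_engine.query("Where does the document define the term?")
print(response)
for source_node in response.source_nodes:
    # Each supporting chunk reports the file it came from, with its score
    print(source_node.node.metadata.get("file_name"), source_node.score)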