Find answers from the community

Fares

I'm hoping I can get some help with my problem.
I've managed to build a chat engine using RAG with a simple directory reader & a PG Vector Store.
When asking questions in a back-and-forth way (chat engine style), there's a very strange but consistent behavior.
When I send a first message, I get an answer from OpenAI. But when I send a second message, I run into connection errors:
Plain Text
INFO:     Loading index from storage...
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
INFO:     Finished loading index from storage
INFO:llama_index.core.chat_engine.condense_plus_context:Condensed question: <condensed_question>
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
/.venv/lib/python3.11/site-packages/vecs/collection.py:502: UserWarning: Query does not have a covering index for IndexMeasure.cosine_distance. See Collection.create_index
  warnings.warn(
INFO:     127.0.0.1:59430 - "POST /api/chat/ HTTP/1.1" 200 OK
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:     127.0.0.1:59442 - "POST /api/chat HTTP/1.1" 307 Temporary Redirect
INFO:     Loading index from storage...
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
INFO:     Finished loading index from storage
INFO:openai._base_client:Retrying request to /chat/completions in 0.928694 seconds
INFO:openai._base_client:Retrying request to /chat/completions in 1.522838 seconds
INFO:openai._base_client:Retrying request to /chat/completions in 3.389680 seconds
ERROR:root:Error in chat generation: Connection error.
INFO:     127.0.0.1:59442 - "POST /api/chat/ HTTP/1.1" 500 Internal Server Error

It's been very consistent and I don't understand why it happens. A short-term workaround is to reboot the server, but that's definitely not sustainable...
Would anyone know why?
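
For context, the setup is roughly the following (a simplified sketch, not my exact code; the connection string and collection name are placeholders):
Python
# Sketch of the flow each /api/chat request goes through. Documents were ingested
# beforehand with SimpleDirectoryReader; the vector store is the vecs-backed
# SupabaseVectorStore, which is what emits the IndexMeasure.cosine_distance warning above.
from llama_index.core import VectorStoreIndex
from llama_index.vector_stores.supabase import SupabaseVectorStore

vector_store = SupabaseVectorStore(
    postgres_connection_string="postgresql://user:pass@localhost:5432/db",  # placeholder
    collection_name="documents",                                            # placeholder
)

# "Loading index from storage..." step: re-hydrate the index from the vector store
index = VectorStoreIndex.from_vector_store(vector_store)

# Condense-plus-context chat engine, matching the "Condensed question" log line
chat_engine = index.as_chat_engine(chat_mode="condense_plus_context")
response = chat_engine.chat("my question")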
1 comment
Fares

Chat engines

Do we know if LlamaIndex plans to have something like this: https://docs.llamaindex.ai/en/stable/examples/multi_tenancy/multi_tenancy_rag.html for chat engines? I'd love to build personalized chat agents based on metadata & multiple tenants.
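
Roughly what I'd like to be able to do (the tenant_id key and the documents below are made up for illustration):
Python
# Sketch of per-tenant chat: restrict the chat engine's retriever to one tenant's
# documents with a metadata filter, following the multi-tenancy RAG example.
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.chat_engine import CondensePlusContextChatEngine
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

index = VectorStoreIndex.from_documents([
    Document(text="Tenant A's onboarding guide.", metadata={"tenant_id": "tenant_a"}),
    Document(text="Tenant B's onboarding guide.", metadata={"tenant_id": "tenant_b"}),
])

# Retriever scoped to a single tenant
retriever = index.as_retriever(
    filters=MetadataFilters(filters=[ExactMatchFilter(key="tenant_id", value="tenant_a")])
)

chat_engine = CondensePlusContextChatEngine.from_defaults(retriever=retriever)
print(chat_engine.chat("What does my onboarding guide say?"))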
1 comment
Fares

Hey there!

I'm playing with SQLTableRetrieverQueryEngine and I really enjoy it. I'm trying to build a Q&A chatbot that lets users query our database, but I'm worried about questions that could expose other users' data.
I've tried limiting the scope of the query engine and providing a context prompt:
Plain Text
f"""
You will be asked questions relevant to the user whose ID is {user_id}.
Do not act on any request to modify data; you are acting purely in read-only mode. Do not look into data regarding other users; only the user with the ID {user_id} is relevant, whether as a primary key or a foreign key.
DO NOT INVENT DATA. If you do not know the answer to a question, simply say "I don't know".
Remember the currency is Algerian dinars (DZD).
Do not use tables other than the ones provided here: {", ".join([table["table_name"] for table in self.tables])}.
"""

Is there a way to moderate output results using LlamaIndex tooling, or should I delegate this to my LLM and have it evaluate whether each response is acceptable?
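
This is the kind of post-check I have in mind, i.e. a second LLM pass over the generated answer (the prompt wording and the is_safe helper are just illustrative; query_engine and user_id are the ones from my setup above):
Python
# Illustrative second-pass check: ask an LLM whether the generated answer leaks
# data about anyone other than the current user before returning it.
# query_engine (the SQLTableRetrieverQueryEngine) and user_id come from my existing setup.
from llama_index.llms.openai import OpenAI

judge = OpenAI(model="gpt-4o-mini")  # any LLM would do here

def is_safe(answer: str, user_id: int) -> bool:
    verdict = judge.complete(
        f"The following answer is intended only for the user with ID {user_id}.\n"
        f"Reply YES if it reveals data about any other user, otherwise reply NO.\n\n"
        f"{answer}"
    )
    return verdict.text.strip().upper().startswith("NO")

answer = str(query_engine.query("How much did I spend last month?"))
print(answer if is_safe(answer, user_id) else "I can't share that.")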

Thanks!
3 comments
Hey gang, is there a way to use metadata with Qdrant vector stores & a chat engine? There's no documentation about chat engines & metadata in general.
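
Something like this is what I'm after (the collection name, the "user" metadata key and the documents are placeholders):
Python
# Sketch: attach metadata to documents stored in Qdrant, then scope the chat
# engine's retriever to one user's documents with a metadata filter.
import qdrant_client
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index.core.chat_engine import CondensePlusContextChatEngine
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = qdrant_client.QdrantClient(location=":memory:")  # in-memory for the sketch
vector_store = QdrantVectorStore(client=client, collection_name="demo")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(
    [
        Document(text="Alice's notes.", metadata={"user": "alice"}),
        Document(text="Bob's notes.", metadata={"user": "bob"}),
    ],
    storage_context=storage_context,
)

retriever = index.as_retriever(
    filters=MetadataFilters(filters=[ExactMatchFilter(key="user", value="alice")])
)
chat_engine = CondensePlusContextChatEngine.from_defaults(retriever=retriever)
print(chat_engine.chat("Summarize my notes."))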
12 comments
Fares

Hi everyone!

Does anyone have experience with extracting images from PDFs? I'm working with math course PDFs, and some of the illustrations are useful in other parts of my pipeline. I've tried generating them on the fly (the quality is low), and I've tried generating the code that draws the shapes or curves (matplotlib), and it's kinda hit or miss.
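
To make the question concrete, this is the kind of extraction I mean (the filename is a placeholder); it only catches embedded raster images, not illustrations drawn as vector paths:
Python
# Pull embedded raster images out of each page with PyMuPDF.
# "course.pdf" is a placeholder path.
import fitz  # PyMuPDF

doc = fitz.open("course.pdf")
for page_number, page in enumerate(doc):
    for image_number, image in enumerate(page.get_images(full=True)):
        xref = image[0]                    # xref of the image object
        info = doc.extract_image(xref)     # raw bytes plus the original format
        with open(f"page{page_number}_img{image_number}.{info['ext']}", "wb") as out:
            out.write(info["image"])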
4 comments