bmjcoding
Offline, last seen 2 weeks ago
Joined September 25, 2024
Hi all, having an issue with a project I'm building for a RAG Pipeline with a single PDF as a knowledge store. For some reason when I ask it anything about food or macros and anything it says it doesn't have knowledge though it's in the PDF and should be loaded in the index.

This is my app.py
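A minimal debugging sketch (not the original app.py, which isn't shown here): before blaming retrieval, it can help to confirm the PDF text actually made it into the loaded documents. The `docs` list below is a hypothetical stand-in for what a loader such as LlamaIndex's `SimpleDirectoryReader.load_data()` would return.

```python
# Hypothetical sanity check: does any loaded chunk mention the topic at all?
# If this returns False, the problem is ingestion, not the query engine.
def contains_topic(doc_texts, keyword):
    """Return True if any loaded document text mentions the keyword."""
    return any(keyword.lower() in t.lower() for t in doc_texts)

# Stand-in for the text of documents returned by a PDF loader:
docs = ["Protein macros: 4 kcal per gram.", "Chapter 2: meal planning."]
print(contains_topic(docs, "macros"))  # → True
```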
1 comment
Hey y'all. I've done quite a bit of refactoring of my code and I just noticed that the responses in my chatbot are no longer being streamed; instead they're sent as a whole message at once. Any thoughts? I have
Plain Text
query_engine = index.as_query_engine(llm=llm, streaming=True)
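One common cause, sketched below with a stand-in generator rather than a real query engine: with `streaming=True`, the response must be consumed incrementally (e.g. by iterating `response.response_gen` or calling `response.print_response_stream()`); printing or returning the response object in one go materializes it as a single message. `fake_response_gen` here is hypothetical and only illustrates the consumption pattern.

```python
# Stand-in for the token generator a streaming response exposes
# (analogous to response.response_gen in LlamaIndex):
def fake_response_gen():
    for token in ["Hello", " ", "world"]:
        yield token

# Consume token by token, as a streaming UI would:
chunks = []
for token in fake_response_gen():
    chunks.append(token)
print("".join(chunks))  # → Hello world
```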
2 comments
Hi all, getting a ReadTimeout(HTTPSConnectionPool(host=<endpoint>, port=443)) error when trying to use the OpensearchVectorClient. Using LlamaIndex 0.10.12 and llama-index-vector-stores-opensearch==0.1.7, if that matters.

Apologies if this has been covered here, but I couldn't find anything specific when searching the web other than someone who needed to include the port (which I have already specified)

I don't believe this to be an issue with credentials as I am able to authenticate with AWS Secrets Manager and AzureOpenAI in a couple code blocks above this.

Config is as follows:
Plain Text
client = OpensearchVectorClient(
    endpoint,
    idx,
    1536,
    embedding_field=embedding_field,
    text_field=text_field,
    http_auth=awsauth,
    port=443,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
    timeout=30,  # I've tried everything from 15 to 300 seconds
)
8 comments
I have metadata stored in my vector store for each file like so:
documents in the docs directory have the "document_link" metadata
documents in the pdfs directory have the "file_name" metadata

At the end of my chatbot's response, I need it to give either the document_link or the file_name it used to answer the question. Where would I start with this, and are there any docs I can follow?
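A hedged sketch of the post-processing idea: after a query, a response object typically exposes its source nodes, whose metadata dicts carry either key. The `meta` dict below is a stand-in for `node.metadata` on a LlamaIndex source node; `citation_for` is a hypothetical helper, not a library API.

```python
# Hypothetical helper: pick the citation field that exists on this node.
def citation_for(metadata):
    # docs-directory files carry "document_link"; pdfs carry "file_name".
    return metadata.get("document_link") or metadata.get("file_name")

# Stand-in for node.metadata from a retrieved source node:
meta = {"file_name": "guide.pdf"}
print(f"Source: {citation_for(meta)}")  # → Source: guide.pdf
```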
1 comment

Hey team.

I have a use case where I need to add a line to the response from my chatbot. The response right now is pulling from Vector Embeddings stored in OpenSearch. The records in OpenSearch have the file path in the metadata, but the use case is:

If the file path is x/x/pdf, then at the end of the chatbot's response it needs to say something along the lines of "For more information, please refer to {file-name-from-metadata.pdf}".

How would I go about implementing this?
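One way this is often done, sketched as plain string post-processing on the final answer (the `add_footer` function and the `.pdf` rule are illustrative assumptions, not a LlamaIndex API; the file path would come from the OpenSearch record's metadata):

```python
# Hypothetical post-processor: append a pointer line for PDF-backed answers.
def add_footer(answer, file_path):
    if file_path.endswith(".pdf"):
        # Take just the file name from the metadata path.
        name = file_path.rsplit("/", 1)[-1]
        return f"{answer}\n\nFor more information, please refer to {name}"
    return answer

print(add_footer("Instances are stopped via the console.", "x/x/guide.pdf"))
```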
1 comment

Hi team, I recently upgraded from LlamaIndex 0.10.12 to 0.10.55, and now I am getting a Python error when trying to retrieve a response from the LLM application.

Plain Text
for node in response.source_nodes:

Plain Text
AttributeError: 'coroutine' object has no attribute 'source_nodes'


cc @Logan M
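That AttributeError usually means an async query method (e.g. an `aquery`-style call) returned a coroutine that was never awaited. A hedged sketch of the fix, using a stand-in coroutine rather than a real query engine:

```python
import asyncio

# Stand-in for an async query call such as query_engine.aquery(...):
async def aquery():
    class Response:
        source_nodes = ["node-1"]
    return Response()

async def main():
    # Without `await`, `response` would be a bare coroutine object,
    # which has no .source_nodes attribute.
    response = await aquery()
    for node in response.source_nodes:
        print(node)

asyncio.run(main())  # → node-1
```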
5 comments
Is there a way to add hyperlinks to text in vector embeddings? Or, when a message is sent, add a hyperlink to a word, for example "Click here" linking back to a URL extracted from metadata in the embedding?
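A sketch of the second option, assuming the chat UI renders markdown: build the link text from a URL stored in the node's metadata at response time, rather than embedding the hyperlink itself. The metadata dict and `click_here_link` helper are stand-ins, not a library API.

```python
# Hypothetical helper: turn a metadata URL into a markdown hyperlink.
def click_here_link(metadata, label="Click here"):
    url = metadata.get("document_link")
    return f"[{label}]({url})" if url else label

# Stand-in for node.metadata on a retrieved embedding:
print(click_here_link({"document_link": "https://example.com/doc"}))
# → [Click here](https://example.com/doc)
```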
1 comment
Quick question on prompt setup.
How would I go about wording the prompt template to set a specific context?
i.e. when the user asks "How do I turn off an instance?" it can't respond based on the indices, but if the query is worded "How do I turn off an instance in AWS?" it will generate a response. So I need the prompt template to always assume the user's queries relate to AWS.
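One way to word it, shown as a plain format string with the usual `{context_str}` / `{query_str}` placeholders; in LlamaIndex this text would typically be wrapped in a `PromptTemplate` and passed as the QA template (assumed usage, sketch only):

```python
# Hypothetical QA prompt that pins every question to AWS:
QA_TEMPLATE = (
    "You are an assistant for AWS documentation. Assume every question "
    "refers to AWS services, even when AWS is not mentioned explicitly.\n"
    "Context information is below.\n"
    "{context_str}\n"
    "Question: {query_str}\n"
    "Answer:"
)

prompt = QA_TEMPLATE.format(context_str="(retrieved chunks)",
                            query_str="How do I turn off an instance?")
print("Assume every question" in prompt)  # → True
```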
9 comments