chaitanya
Offline, last seen 3 months ago
Joined September 25, 2024
Hi, I need help writing messages_to_prompt and completion_to_prompt for Mistral 7B. I tried the GitHub examples, but they aren't clear. Any reference or guide would be appreciated. Also, do we need a separate output parser as well?
1 comment
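Not an official answer, but here is a minimal sketch of what these two callables usually look like for Mistral-7B-Instruct's [INST] chat template. The Msg dataclass below is only a stand-in for the ChatMessage objects llama_index passes in (which expose .role and .content); check the special tokens against the model card for your exact checkpoint. For plain-text output a separate output parser is usually unnecessary unless you want structured (e.g. JSON) responses.

```python
from dataclasses import dataclass

# Stand-in for llama_index's ChatMessage (same .role/.content attributes);
# the real messages_to_prompt callable receives ChatMessage objects.
@dataclass
class Msg:
    role: str
    content: str

def messages_to_prompt(messages):
    """Render a chat history into the Mistral-7B-Instruct [INST] format."""
    prompt = ""
    system = ""
    for m in messages:
        if m.role == "system":
            # Mistral has no system slot; fold it into the next user turn.
            system = m.content.strip() + "\n\n"
        elif m.role == "user":
            prompt += f"[INST] {system}{m.content.strip()} [/INST]"
            system = ""
        elif m.role == "assistant":
            prompt += f" {m.content.strip()}</s>"
    return "<s>" + prompt

def completion_to_prompt(completion: str) -> str:
    """Wrap a bare completion string in a single instruction block."""
    return f"<s>[INST] {completion.strip()} [/INST]"
```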
Bedrock

@WhiteFang_Jr Is there any support for Claude 3 models through the Bedrock API? I can only see foundation models up to Claude 2 in the AWS Bedrock integration.
2 comments
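While this waits for an answer, one relevant detail: Claude 3 on Bedrock uses the Anthropic Messages API rather than the older text-completion prompt format, so even outside any framework you can reach it through the Bedrock Runtime directly. A hedged sketch (model id and anthropic_version are taken from AWS documentation; verify availability in your region and account):

```python
import json

# Claude 3 Sonnet model id on Bedrock (verify in your region/account).
CLAUDE3_SONNET = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_claude3_body(prompt: str, max_tokens: int = 512) -> str:
    """Build the Messages-API request body Claude 3 expects on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# Usage (requires AWS credentials; illustrative only):
# import boto3
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(modelId=CLAUDE3_SONNET,
#                            body=build_claude3_body("Hello"))
```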
My setup is like this: the user sends a query to a RAG-based chatbot (the condense-plus-context one). There's a similarity cutoff that keeps irrelevant context from being passed to the LLM. If the number of documents retrieved is zero, is there any way I can skip the LLM call entirely and return a generic response? @Logan M
9 comments
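One pattern that fits this question (a sketch, not the official condense-plus-context API): run the retriever yourself first, apply the cutoff, and only invoke the chat engine when something survives. The retriever and chat_engine arguments here are duck-typed stand-ins for the llama_index objects; the cutoff value is an assumption.

```python
FALLBACK = "Sorry, I couldn't find anything relevant to that question."

def answer(query, retriever, chat_engine, cutoff=0.75):
    """Skip the LLM entirely when no retrieved node clears the cutoff."""
    nodes = retriever.retrieve(query)
    kept = [n for n in nodes if (n.score or 0.0) >= cutoff]
    if not kept:
        return FALLBACK          # zero relevant docs -> generic response
    return chat_engine.chat(query).response
```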
How can I get all the nodes and embeddings saved in a vector store at any point in time? Is there a low-level function for this?
1 comment
@kapa.ai I want to use RAG over my data, with OpenSearch as the vector store. I built an index and stored the content along with its embeddings. Now I want to add more documents, so I used the index.insert method and observed that the new data shows up in the index, but it is not added to the vector store. I tried index.refresh_ref_docs() as well. What am I missing here?
14 comments
Reader

I want to understand in detail how SimpleDirectoryReader works, especially load_data: what arguments can I pass, etc.?
3 comments
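A sketch of typical usage, pending a fuller answer: the file-selection options go to the SimpleDirectoryReader constructor, and load_data() then performs the read and returns Document objects. The option names below are the commonly documented ones (an assumption; double-check against your llama_index version), and the actual library calls are commented out so the snippet stands alone.

```python
# Commonly documented SimpleDirectoryReader constructor options
# (assumption: names match current llama_index; verify for your version).
reader_kwargs = dict(
    input_dir="./data",            # or input_files=["a.pdf", "notes.md"]
    recursive=True,                # descend into subdirectories
    required_exts=[".md", ".pdf"], # only load these extensions
    exclude_hidden=True,           # skip dotfiles
    num_files_limit=100,           # cap how many files are read
)

# from llama_index.core import SimpleDirectoryReader
# docs = SimpleDirectoryReader(**reader_kwargs).load_data()
# load_data() returns a list of Document objects, one or more per file.
```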
Hi all. I searched the docs and couldn't find a way to measure the time taken by each step in a RAG or agent pipeline. Can you guide me on how to time each step, such as retrieval, LLM completion, etc.?
2 comments
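Absent built-in per-step numbers, a plain decorator can time any callable stage (retrieval, LLM completion, and so on). llama_index also has a callback system with debug handlers that record event durations, but the sketch below is framework-agnostic:

```python
import time
from functools import wraps

def timed(label, sink=print):
    """Decorator: report how long one pipeline step took via `sink`."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            sink(f"{label}: {time.perf_counter() - start:.3f}s")
            return result
        return wrapper
    return deco

# Usage: wrap each stage once, e.g.
# retrieve = timed("retrieval")(retriever.retrieve)
# complete = timed("llm_completion")(llm.complete)
```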
Can we run the evaluation pipeline without using OpenAI models? For example, with Amazon Bedrock, Llama 2, Mistral, etc.?
13 comments
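In principle yes: the evaluators take a judge LLM, so any integration (Bedrock, a local Llama 2 or Mistral, etc.) can fill that role, with result quality depending on the judge model. A hedged sketch with the library calls commented out so it stands alone; the class and argument names reflect recent llama_index and should be verified, and the model id is only an example.

```python
# Assumption: FaithfulnessEvaluator accepts `llm=` in your llama_index
# version; older releases wired the model through a service_context instead.
judge_model = "mistral.mistral-7b-instruct-v0:2"  # example Bedrock model id

# from llama_index.llms.bedrock import Bedrock
# from llama_index.core.evaluation import FaithfulnessEvaluator
# judge = Bedrock(model=judge_model)
# evaluator = FaithfulnessEvaluator(llm=judge)
# result = evaluator.evaluate_response(query=query, response=response)
# print(result.passing, result.feedback)
```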