Find answers from the community

adeelhasan
Joined September 25, 2024
How do I change the prompt format for the Zephyr 7B Beta model in LlamaCPP?
3 comments
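A minimal sketch of one way to do this, assuming the 0.8.x-era LlamaIndex import path and a hypothetical local GGUF path: the LlamaCPP wrapper accepts `messages_to_prompt` and `completion_to_prompt` callables, which can implement Zephyr's `<|system|>` / `<|user|>` / `<|assistant|>` template.

```python
from llama_index.llms import LlamaCPP

# Zephyr 7B Beta chat template: role tags, each turn terminated with </s>.
def messages_to_prompt(messages):
    prompt = ""
    for message in messages:
        if message.role == "system":
            prompt += f"<|system|>\n{message.content}</s>\n"
        elif message.role == "user":
            prompt += f"<|user|>\n{message.content}</s>\n"
        elif message.role == "assistant":
            prompt += f"<|assistant|>\n{message.content}</s>\n"
    # end with the assistant tag so the model continues as the assistant
    return prompt + "<|assistant|>\n"

def completion_to_prompt(completion):
    return f"<|system|>\n</s>\n<|user|>\n{completion}</s>\n<|assistant|>\n"

llm = LlamaCPP(
    model_path="./zephyr-7b-beta.Q4_K_M.gguf",  # hypothetical path
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
)
```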
Hello everyone, I am using Mistral 7B Instruct as my LLM, but when I use it in a chat engine I get incomplete responses compared to the Llama 2 model.
10 comments
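Truncated answers often come from hitting the generation limit, and Mistral Instruct also expects the `[INST] ... [/INST]` wrapper rather than Llama 2's template. A sketch of both fixes, with an illustrative model path:

```python
from llama_index.llms import LlamaCPP

llm = LlamaCPP(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical path
    max_new_tokens=1024,   # raise the generation budget so answers aren't cut off
    context_window=4096,   # leave room for chat history plus retrieved context
    completion_to_prompt=lambda p: f"<s>[INST] {p} [/INST]",
)
```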
I am using TitleExtractor to extract metadata, passing documents that contain 20 different PDFs, but I am getting the same title for all the PDFs.
1 comment
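TitleExtractor produces one title per source document, so all 20 PDFs get one title if they are ingested under a single document ID. A sketch, assuming the 0.8.x-era extractor imports and an illustrative directory path:

```python
from llama_index import SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.node_parser.extractors import MetadataExtractor, TitleExtractor

# filename_as_id keeps each PDF a distinct Document, so each gets its own title
documents = SimpleDirectoryReader("./pdfs", filename_as_id=True).load_data()

metadata_extractor = MetadataExtractor(
    extractors=[TitleExtractor(nodes=5)],  # use the first 5 nodes of each doc
)
node_parser = SimpleNodeParser.from_defaults(metadata_extractor=metadata_extractor)
nodes = node_parser.get_nodes_from_documents(documents)
```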
Hey everyone, I have around 20k PDF files of 2-4 pages each. I am using MetadataExtractor to extract metadata with the Llama 2 LLM, but my kernel crashed. How can I resolve this?
1 comment
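Loading 20k PDFs in one pass can exhaust RAM and kill the kernel. One approach is batching the files so memory stays bounded; the paths and batch size below are illustrative:

```python
import os
from llama_index import SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser

node_parser = SimpleNodeParser.from_defaults()  # attach your MetadataExtractor here

pdf_dir = "./pdfs"  # illustrative path
files = sorted(os.path.join(pdf_dir, f) for f in os.listdir(pdf_dir))

all_nodes = []
for i in range(0, len(files), 100):  # 100 files per batch; tune to your RAM
    docs = SimpleDirectoryReader(input_files=files[i : i + 100]).load_data()
    all_nodes.extend(node_parser.get_nodes_from_documents(docs))
    # optionally persist nodes to disk here so a crash doesn't lose progress
```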
"What are the key distinctions between using 'as_chat_engine' and 'as_query_engine' in terms of the responses they generate when the same question is asked?"
4 comments
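A short sketch of the difference, using an illustrative data directory: a query engine is stateless and answers each call independently, while a chat engine keeps conversation history and (in condense mode) rewrites the question from prior turns before retrieving, so the same question can produce different answers.

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./data").load_data()  # illustrative path
index = VectorStoreIndex.from_documents(documents)

# Query engine: stateless, one-shot retrieve-and-answer.
query_engine = index.as_query_engine()
print(query_engine.query("What is covered in chapter 2?"))

# Chat engine: stateful; history feeds into question condensation.
chat_engine = index.as_chat_engine(chat_mode="condense_question")
print(chat_engine.chat("What is covered in chapter 2?"))
print(chat_engine.chat("And the one after that?"))  # resolved via history
```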
How can I pass the key and value to ExactMatchFilter dynamically? That is, if a user is using my RAG chatbot, how can I define a key and value that are relevant to their query?
3 comments
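One pattern is to build the filter per request rather than at index time. A sketch, assuming an existing `index`, a vector store that supports metadata filters, and an illustrative `department` key derived from the user's session:

```python
from llama_index.vector_stores.types import ExactMatchFilter, MetadataFilters

def make_query_engine(index, department: str):
    # key must match a field in your node metadata; value comes from the user
    filters = MetadataFilters(
        filters=[ExactMatchFilter(key="department", value=department)]
    )
    return index.as_query_engine(filters=filters)

# e.g. derive the value from the logged-in user, then query as usual
engine = make_query_engine(index, department="finance")
response = engine.query("What is our travel reimbursement policy?")
```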
What are the default parameter values, such as temperature and context window, when using the Llama 2 LLM?
1 comment
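Rather than guessing, the resolved defaults can be inspected on the LLM instance, since LlamaIndex LLMs are pydantic models; the explicit values below reflect that era's documented LlamaCPP defaults but are worth verifying against your installed version. The model path is hypothetical.

```python
from llama_index.llms import LlamaCPP

llm = LlamaCPP(model_path="./llama-2-7b-chat.Q4_K_M.gguf")  # hypothetical path

# Print the effective values instead of guessing
print(llm.temperature, llm.context_window, llm.max_new_tokens)

# Or set them explicitly so nothing is left to the defaults
llm = LlamaCPP(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",
    temperature=0.1,
    context_window=3900,
    max_new_tokens=256,
)
```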
Hello, is there any way in LlamaIndex to find out whether llama.cpp is running on the GPU?
1 comment
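A sketch of the usual check, with a hypothetical model path: turn on verbose load logging and request GPU offload, then look in the llama.cpp startup log for lines like "BLAS = 1" and "offloaded X/Y layers to GPU".

```python
from llama_index.llms import LlamaCPP

llm = LlamaCPP(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical path
    model_kwargs={"n_gpu_layers": -1},  # -1 = offload all layers (needs a CUDA build)
    verbose=True,  # prints the llama.cpp load log, including GPU offload info
)
```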
You guys did a fabulous job on the query engine; now you need to shift your focus to the chat engine, because implementing it in production is very painful.
16 comments
I have implemented RAG using llama-cpp-python with the Mistral 7B OpenOrca model, but the response time is too high, even though the API is hosted on a server with 2 NVIDIA RTX A4000 GPUs. Can someone help me out?
16 comments
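Slow responses on a GPU box usually mean the layers stayed on the CPU: llama-cpp-python must be installed with CUDA support (at that time, `CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python`) and told to offload. A sketch with an illustrative model path, splitting the weights across the two A4000s:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-openorca.Q4_K_M.gguf",  # hypothetical path
    n_gpu_layers=-1,          # offload every layer to the GPUs
    tensor_split=[0.5, 0.5],  # share weights across both A4000s
    n_ctx=4096,
    verbose=True,             # check the log for "offloaded ... layers to GPU"
)
```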
I am using RAGs with LlamaCPP as the LLM, but I am getting this error.
2 comments
Hey everyone, why am I getting an empty response when running a query engine on a "doc_summary_index"?
3 comments
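One possible cause: the document summary index's default LLM-based retriever selects no document summaries, leaving the synthesizer nothing to answer from. Swapping in the embedding-based retriever isolates this. A sketch, assuming the 0.8.x-era imports and an existing `doc_summary_index`:

```python
from llama_index.indices.document_summary import (
    DocumentSummaryIndexEmbeddingRetriever,
)
from llama_index.query_engine import RetrieverQueryEngine

# Embedding retrieval always returns the top-k closest summaries, avoiding
# the failure mode where the LLM retriever picks none.
retriever = DocumentSummaryIndexEmbeddingRetriever(
    doc_summary_index, similarity_top_k=3
)
query_engine = RetrieverQueryEngine.from_args(retriever)
response = query_engine.query("your question here")
```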
I have developed a RAG system and now I want to integrate it into my website as a chatbot. What are the ways I can do that? For example, I am thinking of creating an API in Django, but the problem is: how can I return the response in streaming mode?
8 comments
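A sketch of the Django side, assuming an existing LlamaIndex `index`: with `streaming=True`, `query()` returns a streaming response whose `response_gen` yields text chunks, and Django's `StreamingHttpResponse` can serve that generator directly.

```python
from django.http import StreamingHttpResponse

query_engine = index.as_query_engine(streaming=True)

def chat_view(request):
    question = request.GET.get("q", "")
    streaming_response = query_engine.query(question)
    return StreamingHttpResponse(
        streaming_response.response_gen,  # generator of text chunks
        content_type="text/plain",
    )
```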
Why am I getting these warnings after updating the LlamaIndex version to 0.8.5.post1?
3 comments
Hello folks, I am new to RAG, so can you help me out? I have built a RAG system using only 2 text files and a Weaviate vector database. I am getting pretty decent responses, but it usually takes approximately 2 minutes to get the whole response. What should I do to decrease the response time, since I need to scale this system to 1000k text files?
8 comments
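With a local LLM, most of those 2 minutes is usually generation rather than retrieval. Two cheap wins, sketched below against an existing `index` (the top-k value is illustrative): keep the number of retrieved chunks small so the LLM reads less context, and stream tokens so the user sees output immediately.

```python
query_engine = index.as_query_engine(
    similarity_top_k=2,  # fewer retrieved chunks = shorter prompt = faster
    streaming=True,      # tokens appear as they are generated
)
streaming_response = query_engine.query("your question here")
for token in streaming_response.response_gen:
    print(token, end="", flush=True)
```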