Find answers from the community

Chiken1
Joined September 25, 2024
Is there an example of using ReActAgent without the OpenAI LLM? I set one up with HuggingFaceLLM, but the response time is much longer and it fails to produce correct results compared to OpenAI.
5 comments
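ReActAgent is LLM-agnostic; the quality gap with a local model usually comes down to how reliably it follows the ReAct output format (Thought / Action / Observation). As a self-contained sketch of that loop, with a scripted fake LLM standing in for HuggingFaceLLM (all names below are illustrative, not the LlamaIndex internals):

```python
# Minimal ReAct loop with a scripted fake LLM standing in for a local model.
# In LlamaIndex you would instead pass llm=HuggingFaceLLM(...) when building
# the agent; this mock only shows the control flow the agent depends on.

def multiply(a: int, b: int) -> int:
    """Tool: multiply two integers."""
    return a * b

TOOLS = {"multiply": multiply}

# Scripted replies imitating a well-behaved instruction-following model.
SCRIPT = [
    "Thought: I need to multiply.\nAction: multiply\nAction Input: 6, 7",
    "Thought: I have the result.\nAnswer: 42",
]

def fake_llm(prompt: str, step: int) -> str:
    return SCRIPT[step]

def react_agent(question: str, max_steps: int = 5) -> str:
    scratchpad = question
    for step in range(max_steps):
        reply = fake_llm(scratchpad, step)
        if "Answer:" in reply:
            return reply.split("Answer:", 1)[1].strip()
        # Parse the Action / Action Input lines and call the matching tool.
        action = reply.split("Action:", 1)[1].split("\n", 1)[0].strip()
        arg_text = reply.split("Action Input:", 1)[1].strip()
        args = [int(x) for x in arg_text.split(",")]
        observation = TOOLS[action](*args)
        scratchpad += f"\n{reply}\nObservation: {observation}"
    return "no answer"

print(react_agent("What is 6 times 7?"))  # → 42
```

If a local model drifts from this format even slightly, the agent cannot parse the Action lines, which is a common cause of wrong results with smaller HuggingFace models.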
I have a basic question since I am not familiar with Python: which Python version should I use for Llama-index?
2 comments
Is it possible to use local LLMs such as llama.cpp for DocumentSummaryIndex? I keep getting the error llama_tokenize_with_model: too many tokens.
2 comments
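That llama.cpp error usually means a single chunk exceeds the model's context window, so the fix is tighter chunking when the index is built (in LlamaIndex this is the chunk-size setting on the node parser; exact names vary by version, so verify against yours). The underlying idea is just a token budget per chunk, e.g.:

```python
# Token-budget chunking sketch (words used as a rough token proxy).
# In LlamaIndex the analogous knob is the node parser's chunk_size;
# this standalone function only illustrates the splitting behavior.

def chunk_words(text: str, max_tokens: int = 8) -> list[str]:
    """Split text into consecutive word chunks of at most max_tokens words."""
    words = text.split()
    return [
        " ".join(words[i : i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]

parts = chunk_words("one two three four five six seven eight nine ten", 4)
print(len(parts))  # → 3
```

With small local context windows, a chunk size well below the model's limit leaves room for the summary prompt that DocumentSummaryIndex wraps around each chunk.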
I have a question: instead of getting a response from the LLM via query_engine, how do I get the retrieved context from the step before it is sent to the LLM for response generation?
4 comments
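In LlamaIndex the retrieval step is exposed separately from synthesis, typically via index.as_retriever().retrieve(query), which returns scored nodes without any LLM call (check your version's API for exact names). The two-step pattern, sketched with pure-Python stand-ins so it runs anywhere:

```python
# Retrieve-then-synthesize split, with pure-Python stand-ins.
# In LlamaIndex the equivalent is roughly:
#   retriever = index.as_retriever(similarity_top_k=2)
#   nodes = retriever.retrieve("...")   # context only, no LLM involved
# (illustrative; the real retriever uses embeddings, not word overlap)

DOCS = [
    "LlamaIndex supports local LLMs.",
    "ReActAgent plans tool calls step by step.",
    "Retrievers return scored nodes.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Score docs by word overlap with the query and return the top_k."""
    q = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:top_k]

def synthesize(query: str, context: list[str]) -> str:
    """Stand-in for the LLM call a query engine would make last."""
    return f"Answer to {query!r} using {len(context)} context chunks."

query = "Which retrievers return nodes?"
context = retrieve(query)          # inspect or log this before any LLM call
answer = synthesize(query, context)
print(context[0])
```

Splitting the pipeline this way also lets you filter or rerank the retrieved nodes before paying for the generation call.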