At a glance

The community member is having trouble with the LlamaIndex library: the LLM (HuggingFaceH4/zephyr-7b-alpha) works fine with regular requests, but when used through the QueryEngine it sometimes starts making up and asking its own questions after producing a result. Another community member suggests this could be an issue with the LLM itself, since smaller models are more prone to hallucinations and random outputs.

Hey, I have some trouble with LlamaIndex. The LLM works fine with regular requests, but with the QueryEngine it sometimes starts making up and asking its own questions after getting a result. What could it be?
3 comments
Which LLM are you using? What type of data does your index include?
Using HuggingFaceH4/zephyr-7b-alpha, loaded with HuggingFaceLLM. The data is just some info from a website for QA about the website, loaded from .txt files.
Hmm, it could be an issue with the LLM then. Those smaller models are more prone to hallucinations and random outputs.
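A common symptom of this failure mode is that the model keeps generating past its answer and emits a self-asked follow-up (e.g. a line starting with "Question:"). One pragmatic workaround, independent of the model, is to truncate the completion at the first such marker before showing it to the user. Below is a minimal sketch; the function name and marker strings are hypothetical assumptions, not part of the LlamaIndex API, and you would adjust the markers to whatever your prompt template uses.

```python
# Hypothetical post-processing guard: cut the model's answer off at the
# first point where it starts asking its own follow-up question.
# The marker strings below are assumptions; match them to your prompt format.
DEFAULT_MARKERS = ("\nQuestion:", "\nQuery:", "\nQ:", "\nUser:")

def truncate_at_followup(text: str, markers=DEFAULT_MARKERS) -> str:
    """Return `text` cut off before the earliest marker, trailing whitespace stripped."""
    cut = len(text)
    for marker in markers:
        idx = text.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()

raw = "Paris is the capital of France.\nQuestion: What is the capital of Spain?"
print(truncate_at_followup(raw))  # -> Paris is the capital of France.
```

Alternatively, at generation time you can pass stop sequences to the model (many backends accept stop strings or stop token IDs) so the runaway text is never produced in the first place; post-processing is just the simpler fallback when stop sequences are awkward to configure.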