
What if the LLM doesn't want to use the query_engine_tool?

What if the LLM doesn't want to use the query_engine_tool and instead gives me answers about some imaginary book, not the one I embedded?


Response: Observation: query_engine_tool response
Title: The Da Vinci Code (WRONG!!!)
Summary: This book is a thriller that follows Robert Langdon, a Harvard symbologist, as he unravels ancient secrets and solves codes to save the life of a British Royal Family member.

Main characters include:
  1. Robert Langdon - A Harvard professor of symbology.
....blabla


My code:
Python
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex

# llm is the local GGUF model, loaded elsewhere
service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")
# Load the docx and build a vector index over it
data = SimpleDirectoryReader(input_dir="C:/temp_my/text_embeddings").load_data()
index = VectorStoreIndex.from_documents(data, service_context=service_context)
# ReAct mode: the agent decides on its own whether to call the query engine tool
chat_engine = index.as_chat_engine(service_context=service_context, chat_mode="react", verbose=True)
response = chat_engine.chat("What this book is about? List the names of main characters. And tell the story short.")
print(response)


The directory contains one big .docx file, converted from FB2 and translated to English with Google Translate.
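
One way to narrow this down (a minimal sketch under the same setup; the query text here is just illustrative) is to query the index directly, bypassing the ReAct agent, and check whether the embedded book is actually retrievable:

Python
# Query the index directly (no agent involved) to verify retrieval works
query_engine = index.as_query_engine()
check = query_engine.query("What is this book about?")
print(check)
# Inspect which chunks were pulled from the docx
for source in check.source_nodes:
    print(source.node.get_content()[:200])

If this direct query already answers with The Da Vinci Code, the problem is in indexing/retrieval rather than in the agent skipping the tool.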
4 comments
Wait, so is it the observation that it hallucinates? Or the final response? Which LLM are you using?
I'm using openorca-platypus2-13b.Q4_K_M.gguf
Hmm, that might be the reason. The react chat mode can be prone to hallucinations, especially with less powerful LLMs.
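If the agent keeps skipping the tool with a small local model, one workaround (a sketch, assuming a llama_index version that supports the context chat mode) is chat_mode="context", which retrieves from the index on every turn instead of letting a ReAct agent decide whether to call the tool:

Python
# "context" mode always retrieves from the index before answering,
# so the model cannot skip retrieval and invent a book from memory
chat_engine = index.as_chat_engine(service_context=service_context, chat_mode="context")
print(chat_engine.chat("What is this book about?"))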