Disabled

I'm new to LlamaIndex, I have an issue that I'm worried about.
I always get this "LLM is explicitly disabled. Using MockLLM." Whenever I want to get a response, what would be the reason?
(Note that the query is: "When was Stephen Hawking born?", and I got all of the text file back as an answer)
Attachment: image.png
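For context, that message is printed when LlamaIndex resolves the LLM to its MockLLM fallback; the mock essentially echoes the prompt (which includes the retrieved context), which is why the whole text file comes back as the answer. Below is a minimal sketch of the difference, assuming llama_index 0.8.x, an OPENAI_API_KEY in the environment, and a hypothetical hawking.txt file:

Plain Text
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import OpenAI

documents = SimpleDirectoryReader(input_files=["hawking.txt"]).load_data()  # hypothetical file

# Explicitly disabling the LLM reproduces the warning; the MockLLM echoes the
# retrieved text instead of answering the question.
mock_ctx = ServiceContext.from_defaults(llm=None)
mock_index = VectorStoreIndex.from_documents(documents, service_context=mock_ctx)
print(mock_index.as_query_engine().query("When was Stephen Hawking born?"))

# Passing a real LLM avoids the MockLLM fallback.
openai_ctx = ServiceContext.from_defaults(llm=OpenAI(model="gpt-3.5-turbo"))
real_index = VectorStoreIndex.from_documents(documents, service_context=openai_ctx)
print(real_index.as_query_engine().query("When was Stephen Hawking born?"))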
7 comments
That is concerning lol

What version do you have? What does your code look like?
I'm facing a similar issue when initializing StructuredLLMPredictor() with no params, as in the docs. I tried versions 0.8.11 through 0.8.11.post3, and 0.8.7.

Plain Text
# Imports assume llama_index 0.8.x
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.llm_predictor import StructuredLLMPredictor
from llama_index.output_parsers import GuardrailsOutputParser
from llama_index.prompts.default_prompts import DEFAULT_REFINE_PROMPT_TMPL
from llama_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt


def llama_index_boiler(vector_input, dir_or_file, multiple_or_not, rail_template, prompt_template, params_needed=None):
    llm_predictor = StructuredLLMPredictor()

    if vector_input == "":
        return None  # nothing to index

    # Build the index from a directory, a single file, or a list of files
    if dir_or_file and not multiple_or_not:
        reader = SimpleDirectoryReader(input_dir=vector_input)
    elif not dir_or_file and not multiple_or_not:
        reader = SimpleDirectoryReader(input_files=[vector_input])
    elif not dir_or_file and multiple_or_not:
        reader = SimpleDirectoryReader(input_files=vector_input)
    else:
        return None  # unsupported flag combination
    documents = reader.load_data()
    index = VectorStoreIndex.from_documents(documents)

    # Guardrails parses the LLM output according to the rail file
    output_parser = GuardrailsOutputParser.from_rail(
        rail_template, llm=llm_predictor.llm
    )

    # Wrap the QA and refine templates with the parser's format instructions
    fmt_qa_tmpl = output_parser.format(prompt_template)
    fmt_refine_tmpl = output_parser.format(DEFAULT_REFINE_PROMPT_TMPL)

    qa_prompt = QuestionAnswerPrompt(fmt_qa_tmpl, output_parser=output_parser)
    refine_prompt = RefinePrompt(fmt_refine_tmpl, output_parser=output_parser)

    # Initialize the query engine with the custom prompts and predictor
    query_engine = index.as_query_engine(
        text_qa_template=qa_prompt,
        refine_template=refine_prompt,
        llm_predictor=llm_predictor,
    )
    if dir_or_file:
        return [query_engine, index]
    return query_engine
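For reference, the failure described here can be checked in isolation. This is just a diagnostic sketch assuming llama_index 0.8.x; the exact result depends on whether you are on one of the affected releases:

Plain Text
from llama_index.llm_predictor import StructuredLLMPredictor

# On the affected 0.8.x releases, constructing the predictor with no arguments
# resolves its LLM to MockLLM, which is what later produces the
# "LLM is explicitly disabled. Using MockLLM." message.
predictor = StructuredLLMPredictor()
print(type(predictor.llm).__name__)  # "MockLLM" on affected versions, "OpenAI" otherwise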


It was actually working before but suddenly stopped. I've reverted to older builds of my project, but it keeps occurring now.
Attachment: image.png
@honkylonky yea not sure if @Adel was doing something similar, but I have a fix coming for your specific case

For now, pass in an LLM

Plain Text
from llama_index.llms import OpenAI

llm_predictor = StructuredLLMPredictor(llm=OpenAI(model="gpt-3.5-turbo", temperature=0.1))
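As a quick sanity check (a hypothetical snippet, not from the original thread), you can confirm the predictor is no longer falling back to the mock:

Plain Text
print(type(llm_predictor.llm).__name__)  # expect "OpenAI" rather than "MockLLM"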
It is working with minor issues in Guardrails for now (not sure if the passed LLM adheres to Guardrails), but the output has changed and I'm getting errors in JSON parsing. I'll be on standby for the fix, ty
This is my snippet of code that shows that error, @Logan M
Attachment: code.png
I would try updating to the latest version of llama-index 🤔 Looks fine initially
Oh, it worked!
You are a legend @Logan M, many thanks 🤩