
Updated 10 months ago

Can we run Evaluation pipeline without using OpenAI models?

Can we run Evaluation pipeline without using OpenAI models? For example, Amazon Bedrock, llama2 , mistral etc?
13 comments
You just need to set the LLM to the model you want to use.
Plain Text
from llama_index.llms.ollama import Ollama
from llama_index.core import Settings

Settings.llm = Ollama(model="llama2", request_timeout=60.0)

This will then use llama2 as the LLM.


https://docs.llamaindex.ai/en/stable/module_guides/models/llms/modules.html#bedrock
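Following the same pattern as the Ollama snippet above, a minimal sketch of pointing `Settings.llm` at Bedrock instead. The model ID and region here are assumptions; substitute whichever Bedrock model and AWS configuration your account uses:

```python
# Requires: pip install llama-index-llms-bedrock
from llama_index.llms.bedrock import Bedrock
from llama_index.core import Settings

# Example model ID and region only; credentials are picked up from
# your AWS environment (env vars, ~/.aws/credentials, etc.).
Settings.llm = Bedrock(
    model="meta.llama2-13b-chat-v1",
    region_name="us-east-1",
)
```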


Try with a fresh env
How do I run this? Do you have an example? I am trying to run the faithfulness evaluator using Bedrock.
Okay, will try this. I received a NotImplementedError.
https://docs.llamaindex.ai/en/stable/examples/evaluation/faithfulness_eval.html

You'll need to change the llm in this section of the doc:

Plain Text
# gpt-4 || CHANGE it to your llm
gpt4 = OpenAI(temperature=0, model="gpt-4")
# PASS YOUR llm object here
evaluator_gpt4 = FaithfulnessEvaluator(llm=gpt4)
# attach to the same event loop
import nest_asyncio

nest_asyncio.apply()

from llama_index.core.evaluation import FaithfulnessEvaluator

evaluator = FaithfulnessEvaluator(llm=llm)
evaluator.evaluate_response(response=response)
llm is the Bedrock instance. Received a NotImplementedError:
Did you install bedrock: %pip install llama-index-llms-bedrock?
yes. that is working.
complete traceback
ah, async is not implemented for bedrock yet
Oh okay. But it is implemented for other LLMs, I guess.
Yeah, it happens. Someone just needs to make a PR to add it for Bedrock.
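For illustration, a rough sketch of the pattern such a PR could use: delegate the async entry point to the existing blocking call by running it in a thread. `SyncOnlyLLM` and `AsyncWrappedLLM` below are hypothetical stand-ins, not llama-index classes:

```python
import asyncio


class SyncOnlyLLM:
    """Stand-in for an LLM client that only implements a blocking call."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


class AsyncWrappedLLM(SyncOnlyLLM):
    """Adds an async entry point by running the sync call in a worker
    thread -- one simple way to back acomplete() with a sync-only client."""

    async def acomplete(self, prompt: str) -> str:
        return await asyncio.to_thread(self.complete, prompt)


print(asyncio.run(AsyncWrappedLLM().acomplete("hi")))  # prints "echo: hi"
```

This keeps the event loop free while the blocking request runs, which is what evaluators calling the async API need.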