Updated 10 months ago
Hello, I have what is hopefully a simple high-level question. What is the best way to define an instance of BaseTool which simply takes in a string and executes a prompt against an LLM? e.g. I assume I could do something like
Plain Text
def run_prompt(input: str):
    prompt = f"Considering that the user is allergic to {input}, what are some alternatives?"
    response = someLlamaIndexFunctionToInvokeLLMDirectly.execute(prompt)
    return response.text

tool = FunctionTool.from_defaults(fn=run_prompt)

But I'm wondering if there is a better way to accomplish this. The goal is to define an agent that uses some RetrieverTools and FunctionTools, but some actions I want to define simply need to execute a pre-defined prompt template, so I'm not sure what the best practice is here.
2 comments
You could use llm.predict()

Plain Text
from llama_index.core.prompts import PromptTemplate

llm.predict(PromptTemplate("{input}"), input=input)
Ah, OK cool. I was otherwise planning to use the above with llm.complete(), but predict looks more useful. Thank you!