Hi. I already found this class, but the problem is that I don't know how to use Elasticsearch with it. llama-index only has an Elasticsearch reader, not an Elasticsearch index.
I think the best solution here would be to convert the Elasticsearch reader into a function tool, and then use that tool with an agent.

Small example (note that the docstring is used as the tool description; this is how the agent decides whether or not to use the tool):

Plain Text
from llama_index.tools import FunctionTool
from llama_index.agent import OpenAIAgent
from llama_index.llms import OpenAI


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result integer."""
    return a * b


multiply_tool = FunctionTool.from_defaults(fn=multiply)

llm = OpenAI(model="gpt-3.5-turbo")
agent = OpenAIAgent.from_tools([multiply_tool], llm=llm, verbose=True)
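
Applying the same pattern to the Elasticsearch reader might look roughly like this. This is a minimal sketch, not a tested recipe: the endpoint, index name, and "content" field are placeholders, and I'm assuming the ElasticsearchReader import path and its load_data(field, query=...) signature.

Plain Text
from llama_index.readers import ElasticsearchReader
from llama_index.tools import FunctionTool
from llama_index.agent import OpenAIAgent
from llama_index.llms import OpenAI

# Placeholder endpoint and index name -- swap in your own cluster details
reader = ElasticsearchReader(endpoint="http://localhost:9200", index="my-index")


def search_elasticsearch(query: str) -> str:
    """Search the Elasticsearch index and return matching documents as text."""
    # Assumes load_data(field, query=...) where query is a standard
    # Elasticsearch query body; "content" is a placeholder field name
    docs = reader.load_data(
        "content", query={"query": {"match": {"content": query}}}
    )
    return "\n\n".join(doc.text for doc in docs)


es_tool = FunctionTool.from_defaults(fn=search_elasticsearch)

llm = OpenAI(model="gpt-3.5-turbo")
agent = OpenAIAgent.from_tools([es_tool], llm=llm, verbose=True)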
Hi!
I have another question: if I use this chat_engine mode, will the generated questions go to the ElasticsearchReader?
chat_engine = index.as_chat_engine(chat_mode="condense_question")
The method I detailed will really only work for agents -- the chat engines all rely on indexes, but you aren't using an index πŸ€”
but functionally they are extremely similar
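
The agent exposes a chat interface much like a chat engine does, so the usage barely changes. A small sketch reusing the agent from above (the questions are just placeholders):

Plain Text
# Used conversationally, much like index.as_chat_engine(...)
response = agent.chat("What do the docs say about index settings?")
print(response)

# Follow-ups reuse the conversation history, similar to how
# condense_question mode resolves follow-ups against prior turns
response = agent.chat("And what about replicas?")
print(response)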
How do I use an agent with a custom LLM? E.g. one from Hugging Face.
You can use the ReActAgent, which works with any LLM. But keep in mind open-source LLMs kind of suck for agents right now
the ReAct agent needs pretty structured LLM outputs, and that's where open-source kind of fails
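
A minimal sketch of that, assuming the HuggingFaceLLM wrapper; the model name is just a placeholder, and multiply_tool is the one defined earlier in the thread:

Plain Text
from llama_index.agent import ReActAgent
from llama_index.llms import HuggingFaceLLM

# Placeholder model -- any chat-tuned HF model works mechanically, but
# smaller models often fail to produce the structured ReAct output format
llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
)

# Reuses the multiply_tool defined in the earlier example
agent = ReActAgent.from_tools([multiply_tool], llm=llm, verbose=True)
response = agent.chat("What is 12 multiplied by 4?")
print(response)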