Find answers from the community

Wyrine
...
llama_index/agent/openai_agent.py", line 344, in chat
    chat_response = self._chat(
                    ^^^^^^^^^^^
...
python3.11/site-packages/llama_index/agent/openai_agent.py", line 40, in get_function_by_name
    raise ValueError(f"Tool with name {name} not found")


I'm using the OpenAIAgent for chat, and I have quite a few tools at the moment (>30). It usually finds the right one, but sometimes it slightly mangles the tool name, which raises this error. What can I do about this?
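Until the agent handles this for you, one pragmatic workaround is to repair near-miss names with fuzzy matching before the lookup that raises. This is a sketch using only the standard library's difflib; `resolve_tool_name` is a hypothetical helper, not part of llama_index, and the 0.8 cutoff is an assumption you may want to tune:

```python
import difflib

def resolve_tool_name(name: str, registered: list[str], cutoff: float = 0.8) -> str:
    """Map a possibly-mangled tool name to a registered one.

    Exact matches pass through; otherwise the closest registered name
    above `cutoff` similarity is returned. Raises ValueError when nothing
    is close enough, mirroring the original lookup's behavior.
    """
    if name in registered:
        return name
    matches = difflib.get_close_matches(name, registered, n=1, cutoff=cutoff)
    if not matches:
        raise ValueError(f"Tool with name {name} not found")
    return matches[0]
```

You could call this on the function name the model emits before dispatching to your tool registry, so a one-character slip no longer aborts the chat turn.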
34 comments
I have a hierarchy of N (more than 10) RouterQueryEngines, each composed of the following QueryEngineTools: Summary, Keyword, and Vector. The documents they index are large. The selector being used is the PydanticSingleSelector.

Ideally, I want to put another RouterQueryEngine on top of all of these RouterQueryEngines and use PydanticMultiSelector to answer potentially complex questions. The problem I'm facing is the summarizer: it's far too expensive in both time and compute to be practical.

Even the summarizer on a single sub-RouterQueryEngine with the SingleSelector and a small set of nodes (under 100) is slow (I can't use async, since I run into rate limits with gpt-3.5). I can't imagine what would happen with a larger search space.

I'm not sure what to do, because this seems like the ideal architecture for my use case (I can't really test it to confirm). What are my options? I tried the SimpleSummarizer, but that just led to errors. Compact wasn't useful either.

Thank you for fielding all my questions recently. You guys are the real MVPs.
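To see why the summarizer dominates the cost here, a back-of-envelope model helps: tree-style summarization repeatedly collapses groups of node summaries until one remains, so the number of LLM calls grows roughly linearly with node count. This is a sketch with a stub call counter, not llama_index's actual implementation, and the fanout of 10 is an assumed value:

```python
import math

def tree_summarize_calls(num_nodes: int, fanout: int = 10) -> int:
    """Count how many (stub) LLM calls it takes to reduce num_nodes
    per-node summaries to a single summary, combining `fanout` at a time."""
    calls = 0
    level = num_nodes
    while level > 1:
        level = math.ceil(level / fanout)  # one call per combined group
        calls += level
    return calls
```

With ~100 nodes this model predicts on the order of 11 sequential LLM calls per summary, and roughly 111 for 1,000 nodes, which is why stacking routers multiplies the pain when async is off the table.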
32 comments
from llama_index import (
    ServiceContext,
    SimpleDirectoryReader,
    set_global_service_context,
)
from llama_index.indices.document_summary import DocumentSummaryIndex
from llama_index.llms import HuggingFaceLLM
from llama_index.node_parser import SimpleNodeParser
from llama_index.response_synthesizers import ResponseMode, get_response_synthesizer
from langchain.embeddings import HuggingFaceEmbeddings

# Local BGE embeddings on CPU, normalized for cosine similarity
hf = HuggingFaceEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},
)
llm = HuggingFaceLLM(model_name="Deci/DeciLM-6b")

# callback_manager is defined elsewhere
service_context = ServiceContext.from_defaults(
    callback_manager=callback_manager, embed_model=hf, llm=llm
)
set_global_service_context(service_context)

# Load in the documents (input_files is defined elsewhere)
documents = SimpleDirectoryReader(input_files=input_files).load_data()
parser = SimpleNodeParser.from_defaults()
nodes = parser.get_nodes_from_documents(documents, show_progress=True)

response_synthesizer = get_response_synthesizer(
    response_mode=ResponseMode.COMPACT, use_async=True, verbose=True,
)
doc_summary_index = DocumentSummaryIndex(
    nodes, show_progress=True, response_synthesizer=response_synthesizer,
)
42 comments