Hi!
Does anyone get good results using "gpt-35-turbo-16k" with Agents / Query Engine Tools?
I'm not getting good quality when the LLM generates the query to call the query engine.
Are there any tips for prompting the ToolMetadata description?
Using "gpt-4" works excellently (but it's slower and more expensive).

from llama_index.tools import QueryEngineTool, ToolMetadata

query_engine_tools = [
    QueryEngineTool(
        query_engine=doc_summary_index.as_query_engine(
            vector_store_query_mode="hybrid",
            service_context=service_context,
            use_async=True,
            verbose=True,
        ),
        metadata=ToolMetadata(
            name="doc_summary_index",
            description=(
                "Answers questions about the Health Care program. "
                "Extract a well-formed question with a lot of detail."
            ),
        ),
    )
]

agent = OpenAIAgent.from_tools(
    llm=llm,
    tools=query_engine_tools,
    verbose=True,
)
Hi @frandagostino,

It looks like you're using a query_engine as a tool. That is where you would want to play around with the prompt.

https://docs.llamaindex.ai/en/stable/module_guides/models/prompts.html#modify-prompts-used-in-query-engine
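As a sketch of what that looks like: the idea is to write a stricter QA template so gpt-3.5 gets explicit instructions instead of relying on the defaults. The `{context_str}` / `{query_str}` placeholders are the ones LlamaIndex fills in; the wording around them is just example phrasing to tune for your use case.

```python
# Hedged sketch: a stricter text QA template for gpt-3.5.
# {context_str} and {query_str} are the placeholders LlamaIndex fills in;
# everything else is illustrative wording you should adapt.
qa_tmpl_str = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using only the context above and no prior knowledge, answer the "
    "question about the Health Care program. Be specific and quote "
    "details from the context.\n"
    "Question: {query_str}\n"
    "Answer: "
)

# Quick sanity check that the placeholders line up:
preview = qa_tmpl_str.format(
    context_str="<retrieved chunks>",
    query_str="<user question>",
)
print(preview)
```

To install it, wrap the string with `PromptTemplate(qa_tmpl_str)` (imported from `llama_index.prompts` on older versions) and pass it to `query_engine.update_prompts({"response_synthesizer:text_qa_template": ...})`; `query_engine.get_prompts()` lists the exact prompt keys your version uses, as described in the linked docs.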
You may also consider fine-tuning gpt-3.5 to try to get gpt-4 levels of performance.

Here's an example: https://docs.llamaindex.ai/en/stable/examples/finetuning/knowledge/finetune_retrieval_aug.html
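The core idea behind that approach, sketched very roughly: record good gpt-4 question/answer pairs from your agent and turn them into an OpenAI chat fine-tuning dataset for gpt-3.5. The Q/A pair below is made up, and the system prompt is an assumption; the `"messages"` shape is OpenAI's chat fine-tuning format.

```python
import json

# Hedged sketch: convert a captured Q/A pair into one line of an
# OpenAI chat fine-tuning dataset (JSONL, one record per line).
# The system prompt and example Q/A are illustrative, not from the thread.
def to_finetune_record(question: str, answer: str) -> dict:
    return {
        "messages": [
            {"role": "system",
             "content": "You answer questions about the Health Care program."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

records = [to_finetune_record(
    "What does the program cover?",
    "According to the documents, it covers ...",
)]
with open("finetune_events.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

The linked notebook automates the capture side (logging the gpt-4 calls as they happen) rather than building records by hand, but the resulting dataset has this shape either way.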