frandagostino

Hey everyone, how are you?

I'm new to using LlamaIndex!
I have the following problem and would appreciate it if you could help me think it through:

I have a VectorStoreIndex, and when I run a query the retriever returns a Node. The information I need to synthesize the answer is split between that Node and another Node (which is not returned) that immediately follows it in the Document.

How can I make the synthesizer take into account the Nodes surrounding the one the retriever returned, so that no information is missing when generating the answer?
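
One way to handle this in LlamaIndex is a node postprocessor such as PrevNextNodePostprocessor, which looks up each retrieved node's neighbors in the docstore and passes them to the synthesizer as well. A minimal sketch, assuming the index keeps the default prev/next node relationships in its docstore (an external vector store that bypasses the docstore won't have them) and that "index" is your existing VectorStoreIndex:

from llama_index.postprocessor import PrevNextNodePostprocessor

# Pull in the chunks surrounding each retrieved node before synthesis.
postprocessor = PrevNextNodePostprocessor(
    docstore=index.docstore,  # used to look up prev/next relationships
    num_nodes=1,              # how many neighbors to fetch per retrieved node
    mode="both",              # "previous", "next", or "both"
)

query_engine = index.as_query_engine(
    node_postprocessors=[postprocessor],
)
response = query_engine.query("your question here")

If you control ingestion, another option is a sentence-window style parser that stores each chunk's surrounding context in metadata and swaps it in at query time.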
Hi!
Does anyone get good results using "gpt-35-turbo-16k" with Agents / Query Engine Tools?
I'm not getting good quality when the LLM generates the query to call the query engine.
Are there any tips for prompting the ToolMetadata description?
Using "gpt-4" works excellently (but it's slower and more expensive).

from llama_index.agent import OpenAIAgent
from llama_index.tools import QueryEngineTool, ToolMetadata

query_engine_tools = [
    QueryEngineTool(
        # Query engine built from the document summary index
        query_engine=doc_summary_index.as_query_engine(
            vector_store_query_mode="hybrid",
            service_context=service_context,
            use_async=True,
            verbose=True,
        ),
        metadata=ToolMetadata(
            name="doc_summary_index",
            description=(
                "Answers questions about the Health Care program. "
                "Extract a well-formed question with a lot of detail."
            ),
        ),
    ),
]

agent = OpenAIAgent.from_tools(
    llm=llm,
    tools=query_engine_tools,
    verbose=True,
)
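
For what it's worth, gpt-3.5-class models tend to follow a tool description better when it states exactly what the input should look like. A hedged sketch of a more explicit description; the wording below is illustrative, not an official recommendation:

from llama_index.tools import QueryEngineTool, ToolMetadata

query_engine_tools = [
    QueryEngineTool(
        query_engine=doc_summary_index.as_query_engine(use_async=True),
        metadata=ToolMetadata(
            name="doc_summary_index",
            description=(
                # Illustrative wording: state what the tool covers and
                # exactly what the model must pass as input.
                "Provides information about the Health Care program. "
                "The input must be a complete, standalone question "
                "containing all relevant details; do not pass keywords "
                "or partial phrases."
            ),
        ),
    ),
]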