----------------------
To customize the output format of LlamaIndex responses, you can use the structured output parsing feature. It lets the query engine return the answer as organized key/value fields rather than a free-form paragraph.
Here is an example based on the Langchain output parsing approach (note: this uses the older, ServiceContext-based LlamaIndex API, and the "Education"/"Work" response schemas are illustrative placeholders):

from llama_index import ServiceContext
from llama_index.llm_predictor import StructuredLLMPredictor
from llama_index.output_parsers import LangchainOutputParser
from llama_index.prompts.default_prompts import (
    DEFAULT_TEXT_QA_PROMPT_TMPL,
    DEFAULT_REFINE_PROMPT_TMPL,
)
from llama_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

# define the output schema -- these names become the keys of the structured response
# (placeholder schemas; replace them with fields relevant to your data)
response_schemas = [
    ResponseSchema(name="Education", description="The author's educational background."),
    ResponseSchema(name="Work", description="The author's work experience."),
]
lc_output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
output_parser = LangchainOutputParser(lc_output_parser)
llm_predictor = StructuredLLMPredictor()

# format each prompt with output parser instructions
fmt_qa_tmpl = output_parser.format(DEFAULT_TEXT_QA_PROMPT_TMPL)
fmt_refine_tmpl = output_parser.format(DEFAULT_REFINE_PROMPT_TMPL)
qa_prompt = QuestionAnswerPrompt(fmt_qa_tmpl, output_parser=output_parser)
refine_prompt = RefinePrompt(fmt_refine_tmpl, output_parser=output_parser)

# obtain a structured response (`index` is an index you have already built over your documents)
query_engine = index.as_query_engine(
    service_context=ServiceContext.from_defaults(llm_predictor=llm_predictor),
    text_qa_template=qa_prompt,
    refine_template=refine_prompt,
)
response = query_engine.query(
    "What are the three items the author did growing up?",
)
print(response)
In this example, the response is parsed into a dictionary whose keys come from the ResponseSchema definitions above (here "Education" and "Work"), which is more organized than a free-form paragraph. You can change the schema names and descriptions to control which keys come back, as sketched below.
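For instance, here is a minimal sketch of schemas tailored to the example question above (the "Item1"–"Item3" names are purely illustrative):

# hypothetical schemas matching the "three items" question
response_schemas = [
    ResponseSchema(name="Item1", description="First thing the author did growing up."),
    ResponseSchema(name="Item2", description="Second thing the author did growing up."),
    ResponseSchema(name="Item3", description="Third thing the author did growing up."),
]

Rebuild the output parser and the two prompts from these schemas, and the printed response will carry those three keys.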
For more details, you can refer to the Output Parsing section in the LlamaIndex documentation.
----------------------