The SubQuestionQueryEngine in LlamaIndex is a powerful tool that allows you to answer complex queries using multiple data sources. It breaks the complex query down into sub-questions, one for each relevant data source, gathers the intermediate responses, and synthesizes a final response.
Here's a basic example of how to set up a SubQuestionQueryEngine:
```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from llama_index.tools import QueryEngineTool, ToolMetadata
from llama_index.query_engine import SubQuestionQueryEngine
from llama_index.callbacks import CallbackManager, LlamaDebugHandler

# Use the LlamaDebugHandler to print the trace of the sub questions
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
callback_manager = CallbackManager([llama_debug])
service_context = ServiceContext.from_defaults(callback_manager=callback_manager)

query_engine_tools = [
    QueryEngineTool(
        query_engine=lyft_engine,
        metadata=ToolMetadata(
            name="lyft_10k",
            description="Provides information about Lyft financials for year 2021",
        ),
    ),
    QueryEngineTool(
        query_engine=uber_engine,
        metadata=ToolMetadata(
            name="uber_10k",
            description="Provides information about Uber financials for year 2021",
        ),
    ),
]

s_engine = SubQuestionQueryEngine.from_defaults(
    question_gen=question_gen,  # use the guidance-based question_gen defined above
    query_engine_tools=query_engine_tools,
)
```
(Source: LlamaIndex Documentation)
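Once constructed, the engine is queried like any other query engine. A short sketch of usage follows; the comparison question is only illustrative, and it assumes the `s_engine` above has been built with working `lyft_engine` and `uber_engine` query engines and a configured LLM:

```python
# Ask a question that spans both data sources; the engine decomposes it
# into sub-questions, routes each to the matching tool, and synthesizes
# a final answer from the intermediate responses.
response = s_engine.query(
    "Compare and contrast the revenue growth of Uber and Lyft in 2021"
)
print(response)
```

With the LlamaDebugHandler attached, the generated sub-questions and their intermediate answers are printed to the console as the query runs, which is useful for checking that the query decomposition matches your tool descriptions.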
However, the extracts provided do not contain specific information on how to use the SubQuestionQueryEngine with Streamlit. Streamlit is a framework for building machine learning and data science web applications, and while it's likely that you can use LlamaIndex's SubQuestionQueryEngine within a Streamlit app, the specifics would depend on the structure and requirements of your application.
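That said, a typical integration simply wraps the engine in Streamlit's input and output widgets. The sketch below is an assumption-laden outline, not from the extracts: it presumes the `SubQuestionQueryEngine` is constructed as shown earlier (the `build_sub_question_engine` helper is hypothetical) and uses Streamlit's `st.cache_resource` so the engine is built once rather than on every rerun:

```python
import streamlit as st

@st.cache_resource  # cache the engine across Streamlit reruns
def get_engine():
    # Hypothetical helper: construct indices, tools, and the
    # SubQuestionQueryEngine exactly as in the example above.
    return build_sub_question_engine()

st.title("Financials Q&A")
question = st.text_input("Ask about Uber or Lyft 2021 financials")
if question:
    with st.spinner("Breaking the question into sub-questions..."):
        response = get_engine().query(question)
    st.write(str(response))
```

You would run this with `streamlit run app.py`; the caching decorator matters because Streamlit re-executes the whole script on every interaction, and rebuilding indices each time would be slow and costly.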
For more detailed information on how to use LlamaIndex and its SubQuestionQueryEngine, you may want to refer to the official LlamaIndex documentation.