Based on the information provided in the extracts, there isn't a specific example of creating a query engine for data from an API. However, I can guide you on how to create a query engine in general using LlamaIndex.
A query engine in LlamaIndex is a generic interface that allows you to ask questions over your data. It takes in a natural language query and returns a rich response. It is often built on one or many indexes via retrievers, and you can compose multiple query engines to achieve more advanced capabilities.
Here is a high-level example of how to build and configure a query engine from an index in one line of code:
query_engine = index.as_query_engine(
    response_mode="tree_summarize",
    verbose=True,
)
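You can then query it in the same way as any other query engine. This is just a usage sketch, and the question string is a placeholder:
response = query_engine.query("Summarize the loaded documents.")
print(response)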
Note that the high-level API optimizes for ease of use, but it does not expose the full range of configurability. You can see the full list of response modes and what they do in the LlamaIndex documentation.
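For example, assuming the same index, you could switch to the "compact" response mode through the same one-line call:
query_engine = index.as_query_engine(response_mode="compact")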
If you need more granular control, you can use the low-level composition API. Here is an example:
from llama_index import (
    VectorStoreIndex,
    get_response_synthesizer,
)
from llama_index.retrievers import VectorIndexRetriever
from llama_index.query_engine import RetrieverQueryEngine

# build index
index = VectorStoreIndex.from_documents(documents)

# configure retriever
retriever = VectorIndexRetriever(
    index=index,
    similarity_top_k=2,
)

# configure response synthesizer
response_synthesizer = get_response_synthesizer(
    response_mode="tree_summarize",
)

# assemble query engine
query_engine = RetrieverQueryEngine(
    retriever=retriever,
    response_synthesizer=response_synthesizer,
)

# query
response = query_engine.query("What did the author do growing up?")
print(response)
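Coming back to the original question, data from an API can be handled by converting the responses into Document objects and then building the index exactly as above. The sketch below is an assumption rather than something from the extracts: the endpoint URL, the use of the requests library, and the "text"/"id" fields are placeholders you would adapt to your own API.
import requests
from llama_index import Document, VectorStoreIndex

# fetch records from a hypothetical JSON API (placeholder URL and schema)
records = requests.get("https://api.example.com/articles").json()

# wrap each record in a Document so LlamaIndex can index it
documents = [
    Document(text=record["text"], metadata={"id": record["id"]})
    for record in records
]

# build the index and query engine as shown earlier
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(response_mode="tree_summarize")
response = query_engine.query("What topics do these articles cover?")
print(response)
From there, the low-level composition API shown above (custom retriever and response synthesizer) applies unchanged.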