----------------------
LlamaIndex can be integrated with OpenAI's models to enhance its capabilities. For example, OpenAI's function calling API can be used to produce structured (Pydantic) outputs, and it also makes the sub-question query engine's question generation more robust (see the sketch after the example below).
Here's an example of how to use OpenAI with LlamaIndex:
from pydantic import BaseModel
from llama_index import SimpleDirectoryReader, ServiceContext, VectorStoreIndex
from llama_index.llms import OpenAI

# Structured output schema; these fields are illustrative
class Biography(BaseModel):
    name: str
    best_known_for: str
    extra_info: str

llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
service_context = ServiceContext.from_defaults(llm=llm)

# Load source documents; the "data" directory is a placeholder
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(
    documents, service_context=service_context
)
query_engine = index.as_query_engine(
    output_cls=Biography, response_mode="compact"
)
response = query_engine.query("Who is Paul Graham?")
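Because output_cls is a Pydantic class, the result can be read as structured fields rather than free text. A brief usage sketch (the attribute names follow the illustrative Biography schema above; depending on the LlamaIndex version, the structured object may also be exposed as response.response):

# Access the structured fields of the Biography result
print(response.name)
print(response.best_known_for)
print(response.extra_info)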
In this example, the OpenAI class is used to create a language model (llm) with the specified model and temperature. This llm is then used to create a ServiceContext, which is used to create a VectorStoreIndex. The VectorStoreIndex is then used to create a query_engine, which can be used to query the data.
You can find more details in the LlamaIndex documentation and the GitHub examples.
----------------------