----------------------
To use a vector index for chat with an OpenAI agent in LlamaIndex, you would first need to install the necessary packages. You can do this with the following commands:
%pip install llama-index-llms-openai
%pip install llama-index
(Source: GitHub)
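The OpenAI integration reads your API key from the OPENAI_API_KEY environment variable. One way to set it from Python, assuming you have a key (the value below is a placeholder):
import os

os.environ["OPENAI_API_KEY"] = "sk-..."  # replace with your actual key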
Next, you would load your data and build the vector index:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI

# Use an OpenAI model that supports the function calling API
llm = OpenAI(model="gpt-3.5-turbo-0613")

# Load documents from the data directory and embed them into a vector index
data = SimpleDirectoryReader(input_dir="../data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(data)
(Source: GitHub)
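Note that from_documents re-embeds your documents on every run. As an optional sketch, you can persist the index to disk and reload it later; the ./storage path here is just an example:
# Save the index so it doesn't have to be rebuilt each run
index.storage_context.persist(persist_dir="./storage")

# Later, reload it instead of re-embedding the documents
from llama_index.core import StorageContext, load_index_from_storage

storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)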
Then, you would configure the chat engine:
chat_engine = index.as_chat_engine(chat_mode="openai", llm=llm, verbose=True)
(Source: GitHub)
Finally, you can chat with your data:
response = chat_engine.chat("Hi")
print(response)
(Source: GitHub)
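The chat engine keeps the conversation history between calls, so follow-up questions are answered in context. A small usage sketch (the question is just an example for the Paul Graham essay data):
# The agent remembers earlier turns in the conversation
response = chat_engine.chat("What did the author do growing up?")
print(response)

# Clear the history to start a fresh conversation
chat_engine.reset()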
Please note that the OpenAIAgent is used under the hood when the chat mode is set to "openai". This mode only works with OpenAI models that support the function calling API.
(Source: LlamaIndex Documentation)
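For illustration, here is a rough sketch of the equivalent setup built by hand: the index is wrapped in a query engine tool and handed to an OpenAIAgent. The tool name and description below are made up for this example, and the agent class lives in the llama-index-agent-openai package (bundled with the starter llama-index install):
from llama_index.core.tools import QueryEngineTool
from llama_index.agent.openai import OpenAIAgent

# Expose the vector index to the agent as a callable tool
query_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(llm=llm),
    name="essay_search",  # hypothetical tool name
    description="Answers questions about the loaded documents.",
)

# The agent invokes the tool via OpenAI function calling when needed
agent = OpenAIAgent.from_tools([query_tool], llm=llm, verbose=True)
print(agent.chat("Hi"))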
----------------------
Note: you can ask me a follow-up question by @-mentioning me again :speech_balloon:
----------------------