Vector index for chat with OpenAIAgent

Please stand by @andysingal as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
To use a vector index for chat with an OpenAI agent in LlamaIndex, you would first need to install the necessary packages. You can do this with the following commands:

Plain Text
%pip install llama-index-llms-openai
%pip install llama-index

(Source: GitHub)

Next, you would load your data and build the vector index:

Plain Text
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-0613")
data = SimpleDirectoryReader(input_dir="../data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(data)

(Source: GitHub)

Then, you would configure the chat engine:

Plain Text
chat_engine = index.as_chat_engine(chat_mode="openai", llm=llm, verbose=True)

(Source: GitHub)
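
If you prefer token-by-token output, the chat engine also supports streaming. A minimal sketch, assuming the chat_engine built above:

Plain Text
# Stream the response token by token instead of waiting for the full answer
streaming_response = chat_engine.stream_chat("Tell me a joke.")
for token in streaming_response.response_gen:
    print(token, end="")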

Finally, you can chat with your data:

Plain Text
response = chat_engine.chat("Hi")
print(response)

(Source: GitHub)

Please note that the OpenAIAgent is used under the hood when the chat mode is set to "openai". This mode only works with OpenAI models that support the function calling API.
(Source: LlamaIndex Documentation)
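
Because the engine is agent-based, it keeps the conversation history across turns. A minimal sketch of a follow-up turn, assuming the chat_engine from above:

Plain Text
# The agent remembers earlier turns in the conversation
response = chat_engine.chat("What did the author do growing up?")
print(response)

# Clear the conversation history to start over
chat_engine.reset()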
----------------------
Note: you can ask me a follow-up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai how to use FunctionTool.from_defaults and OpenAIAgent.from_tools with a vector index in chat
Let me check the knowledge base for you. One moment! :books:
----------------------
To use FunctionTool.from_defaults and OpenAIAgent.from_tools with a vector index in a chat, you would first define your functions and create FunctionTools from them. Here's an example with two simple functions, add and multiply:

Plain Text
from llama_index.core.tools import FunctionTool

def add(a: int, b: int) -> int:
    """Add two integers and returns the result integer"""
    return a + b

add_tool = FunctionTool.from_defaults(fn=add)

def multiply(a: int, b: int) -> int:
    """Multiply two integers and returns the result integer"""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

(Source: GitHub)
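
Note that FunctionTool.from_defaults infers the tool's name and description from the function's signature and docstring; this is what the LLM sees when deciding which tool to call. A quick sketch to inspect the metadata, assuming the tools defined above:

Plain Text
# Tool metadata is derived from the function name and docstring
print(add_tool.metadata.name)         # "add"
print(add_tool.metadata.description)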

Next, you would create an OpenAIAgent using these tools:

Plain Text
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo-0613")
agent = OpenAIAgent.from_tools([add_tool, multiply_tool], llm=llm, verbose=True)

(Source: GitHub)

To use a vector index, you would first create an ObjectIndex from your tools:

Plain Text
from llama_index.core import VectorStoreIndex
from llama_index.core.objects import ObjectIndex

obj_index = ObjectIndex.from_objects(
    [add_tool, multiply_tool],
    index_cls=VectorStoreIndex,
)

(Source: LlamaIndex Documentation)
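
Under the hood, the ObjectIndex embeds each tool's metadata so that tools can be fetched by semantic similarity. A small sketch to illustrate query-time retrieval directly, assuming the obj_index from above (the query string is just an example):

Plain Text
# Retrieve the tools most relevant to a query
retriever = obj_index.as_retriever(similarity_top_k=1)
tools = retriever.retrieve("multiply two numbers")
print([t.metadata.name for t in tools])  # expect ["multiply"]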

Then, you would create an OpenAIAgent that uses this index to retrieve relevant tools at query time:

Plain Text
agent = OpenAIAgent.from_tools(
    tool_retriever=obj_index.as_retriever(similarity_top_k=2), verbose=True
)

(Source: LlamaIndex Documentation)

Finally, you can chat with your agent:

Plain Text
response = agent.chat("What's 212 multiplied by 122? Make sure to use Tools")
print(response)

(Source: LlamaIndex Documentation)
----------------------
Note: you can ask me a follow-up question by @ mentioning me again :speech_balloon:
----------------------