How can I use multi-agents in LlamaIndex?

----------------------
In LlamaIndex, you can combine multiple agents to query and summarize different documents within your LLM application. This is particularly useful when building an agent-based app over many data sources. Here's a step-by-step guide on how you can set up multi-agents using LlamaIndex:

  1. **Setup and Download Data**: First, import the necessary modules and download the data you want to work with. In the example provided, Wikipedia articles about different cities are downloaded with the `requests` library, each article is written to its own text file, and the files are then loaded into a dictionary of documents. Here's the code snippet for this step:
```python
from llama_index import (
    VectorStoreIndex,
    SummaryIndex,
    SimpleKeywordTableIndex,
    SimpleDirectoryReader,
    ServiceContext,
)
from llama_index.schema import IndexNode
from llama_index.tools import QueryEngineTool, ToolMetadata
from llama_index.llms import OpenAI

wiki_titles = [
    "Toronto",
    "Seattle",
    "Chicago",
    "Boston",
    "Houston",
    "Tokyo",
    "Berlin",
    "Lisbon",
    "Paris",
    "London",
    "Atlanta",
    "Munich",
    "Shanghai",
    "Beijing",
    "Copenhagen",
    "Moscow",
    "Cairo",
    "Karachi",
]

from pathlib import Path
import requests

# Download each Wikipedia article and save it as a text file under data/
for title in wiki_titles:
    response = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "format": "json",
            "titles": title,
            "prop": "extracts",
            "explaintext": True,
        },
    ).json()
    page = next(iter(response["query"]["pages"].values()))
    wiki_text = page["extract"]

    data_path = Path("data")
    if not data_path.exists():
        Path.mkdir(data_path)

    with open(data_path / f"{title}.txt", "w") as fp:
        fp.write(wiki_text)

# Load each article into its own list of Document objects
city_docs = {}
for wiki_title in wiki_titles:
    city_docs[wiki_title] = SimpleDirectoryReader(
        input_files=[f"data/{wiki_title}.txt"]
    ).load_data()

llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm)
```
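A quick way to confirm the downloads and loading worked is to inspect the `city_docs` dictionary (this check is illustrative and not part of the original example):
```python
# Illustrative sanity check (assumes the loop above completed without errors)
print(len(city_docs))  # one entry per city title, 18 in this example
print(city_docs["Toronto"][0].text[:200])  # first 200 characters of the Toronto article
```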
  2. **Build Document Agent for each Document**: Next, define "document agents" for each document. Each document gets both a vector index (for semantic search) and a summary index (for summarization). The two query engines are then converted into tools that are passed to an OpenAI function-calling agent, so each document agent can dynamically choose to perform semantic search or summarization within its document. Here's the code snippet for this step (the tool and agent construction is sketched after the block):
```python
from llama_index.agent import OpenAIAgent
from llama_index import load_index_from_storage, StorageContext
from llama_index.node_parser import SimpleNodeParser
import os

node_parser = SimpleNodeParser.from_defaults()

agents = {}
query_engines = {}
all_nodes = []

for idx, wiki_title in enumerate(wiki_titles):
    # Parse each city's documents into nodes
    nodes = node_parser.get_nodes_from_documents(city_docs[wiki_title])
    all_nodes.extend(nodes)

    # Build the vector index, or load it from disk if it was already persisted
    if not os.path.exists(f"./data/{wiki_title}"):
        vector_index = VectorStoreIndex(nodes, service_context=service_context)
        vector_index.storage_context.persist(persist_dir=f"./data/{wiki_title}")
    else:
        vector_index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=f"./data/{wiki_title}"),
            service_context=service_context,
        )
```
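To finish step 2, here is a minimal sketch of how the rest of the loop body typically looks, following LlamaIndex's multi-document agents pattern. It assumes it runs inside the same `for` loop and reuses `nodes`, `wiki_title`, `llm`, and `service_context` from above:
```python
    # ...continuing inside the same for loop over wiki_titles

    # Summary index over the same nodes (for summarization queries)
    summary_index = SummaryIndex(nodes, service_context=service_context)

    # One query engine per index
    vector_query_engine = vector_index.as_query_engine()
    summary_query_engine = summary_index.as_query_engine()

    # Wrap both query engines as tools for the document agent
    query_engine_tools = [
        QueryEngineTool(
            query_engine=vector_query_engine,
            metadata=ToolMetadata(
                name="vector_tool",
                description=f"Useful for questions about specific aspects of {wiki_title}.",
            ),
        ),
        QueryEngineTool(
            query_engine=summary_query_engine,
            metadata=ToolMetadata(
                name="summary_tool",
                description=f"Useful for requests that need a holistic summary of {wiki_title}.",
            ),
        ),
    ]

    # OpenAI function-calling agent that picks between the two tools
    agent = OpenAIAgent.from_tools(
        query_engine_tools,
        llm=llm,
        verbose=True,
        system_prompt=f"You are a specialized agent designed to answer queries about {wiki_title}.",
    )

    agents[wiki_title] = agent
    query_engines[wiki_title] = vector_index.as_query_engine()
```
Each per-document agent can then itself be exposed as a tool to a top-level agent, which is what turns the setup into a multi-agent system.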
----------------------
Note: you can ask me a follow-up question by @-mentioning me again :speech_balloon:
----------------------