Agent Instruction Generation

Hello LlamaIndex Community,
I'm looking to create an agent workflow using LlamaIndex to analyze task descriptions, suggest relevant instruction steps, and generate a structured workflow based on those descriptions. My goal is to integrate this functionality into an application that manages and automates various document-related tasks using AI.
Here are my specific requirements:
  1. Task Description Analysis: The system should interpret and analyze text-based task descriptions.
  2. Instruction Generation: Based on the analysis, the system should suggest actionable instructions or steps.
  3. Workflow Creation: The final objective is to formulate a structured workflow or task list derived from the task descriptions.
Could you please provide recommendations or best practices on:
• Which components of LlamaIndex are most suited for these requirements?
• Any specific strategies or modules within LlamaIndex that can efficiently handle these tasks?
• Examples or case studies where similar implementations were successfully achieved?
Any insights, examples, or pointers to relevant documentation would be greatly appreciated.
Thank you in advance!
@DrSebastianK @Logan M What I was thinking:
- Step 1: Instruction Generation: Initially, create actionable instructions or steps from the task description, potentially using a structured prompt for clarity and precision.
- Step 2: Task Resolution by Agent: Following this, an agent is tasked with addressing each item on the task list, which is derived from the task descriptions. The agent will provide responses or solutions for each item.
- Step 3: Cumulative Final Response: Conclude the process with a final answer or outcome, which is compiled based on the resolutions provided by the agent for each task item.

Any recommendations on which components are best suited for this?
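The three steps above can be sketched as a plain pipeline. This is a minimal sketch, not a LlamaIndex API: `llm` stands in for any callable that takes a prompt and returns text (e.g. a wrapper around a real model's `complete` call), and names like `run_pipeline` are hypothetical:

```python
from typing import Callable, List


def run_pipeline(task_description: str, llm: Callable[[str], str]) -> str:
    """Step 1: generate instructions; Step 2: resolve each; Step 3: combine."""
    # Step 1: ask the LLM for a numbered list of actionable steps.
    raw_steps = llm(f"Break this task into numbered steps:\n{task_description}")
    steps: List[str] = []
    for line in raw_steps.splitlines():
        line = line.strip()
        if line and line[0].isdigit() and "." in line:
            steps.append(line.split(".", 1)[1].strip())

    # Step 2: have the LLM (acting as the agent) resolve each step.
    resolutions = [llm(f"Resolve this step: {step}") for step in steps]

    # Step 3: compile a cumulative final answer from the resolutions.
    return llm("Combine these resolutions into a final answer:\n" + "\n".join(resolutions))
```

Swapping `llm` for a real model call (OpenAI, Bedrock Claude, etc.) is the only integration point, which is what makes the workflow adjustable by prompt engineering alone.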
I would go with AutoGen for this. I think it is by far the fastest way to accomplish it. Just use a group chat and set up your user proxy and assistant agents according to your needs. You can easily adjust the workflow with simple prompt engineering. If needed, you can add LlamaIndex for data retrieval (vector DB, etc.) as a function call. When going to production, consider AutoGen's token usage, which is pretty high with OpenAI, but you can log the first few chats and then use that data to fine-tune an open-source model; there are plenty of tutorials available for this. Theoretically you could achieve something similar with LlamaIndex, but it was not built for this purpose, so it would require much more work and custom code.
How can I do this with LlamaIndex? I know that AutoGen works only with OpenAI, and I intended to use Bedrock with Claude.
You can use agents and customise parts of them. Besides this, custom prompting and response synthesizers can be helpful.
But is there a way to orchestrate these steps?
I don't understand how I can first generate the task list and then take each task and assign it for a response.
That's something I can't help you with. I would go with AutoGen, as it was created for this type of task.
Isn't generating the task list as simple as calling llm.chat() / llm.complete()?

Like, it feels like all of this can be achieved by mostly setting up prompts and processing outputs tbh
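The "just prompts and output processing" approach can be made concrete with a small sketch. This is illustrative only: `parse_numbered_list` is a hypothetical helper, and the `completion` string is hard-coded to stand in for what `llm.complete(prompt).text` would return:

```python
import re
from typing import List


def parse_numbered_list(text: str) -> List[str]:
    """Pull items like '1. Do X' or '2) Do Y' out of a raw completion string."""
    return [m.group(1).strip()
            for m in re.finditer(r"^\s*\d+[.)]\s*(.+)$", text, re.MULTILINE)]


# In practice `completion` would come from an LLM call;
# it is hard-coded here so the parsing logic is visible.
completion = """1. Read the task description
2) Draft the instruction steps
3. Assemble the workflow"""

tasks = parse_numbered_list(completion)
print(tasks)
# → ['Read the task description', 'Draft the instruction steps', 'Assemble the workflow']
```

Each parsed task can then be fed back into the LLM one at a time, which is the assignment step asked about above.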
@Logan M Could you provide any practical examples where llm.chat() or llm.complete() have been used to generate and manage tasks? I'm interested in seeing how these functions can be applied, particularly in terms of task creation, processing, and assignment.
Are there any integrated tools or features within LlamaIndex that facilitate sequencing tasks, handling dependencies, and automating workflows?
We have a (slightly dated/unmaintained) repo here from when auto-gpt was big
https://github.com/run-llama/llama-lab/tree/main/llama_agi

If I were going to re-write this, I would use OpenAI's function/tool calling to create easy-to-parse pydantic outputs

For example

```python
from pydantic import BaseModel
from typing import List

from llama_index import SummaryIndex


class Task(BaseModel):
    """Data model for a single task."""

    action: str
    expected_result: str


class TaskList(BaseModel):
    """Data model for a list of tasks."""

    tasks: List[Task]
    overall_objective: str


# task_documents: your already-loaded Document objects
index = SummaryIndex.from_documents(task_documents)

query_engine = index.as_query_engine(output_cls=TaskList)

response = query_engine.query("Generate a task list to achieve XXX based on existing information.")

print(response.tasks)
print(response.overall_objective)
```


You could also use a pydantic program directly for more control
https://docs.llamaindex.ai/en/stable/examples/output_parsing/openai_pydantic_program.html
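Whichever route produces the structured output, the pydantic models above give validation for free. This sketch uses plain pydantic with no LLM call; the `payload` dict is a made-up stand-in for what a pydantic program (or a query engine with `output_cls=TaskList`) would hand back:

```python
from typing import List

from pydantic import BaseModel


class Task(BaseModel):
    """A single actionable step."""

    action: str
    expected_result: str


class TaskList(BaseModel):
    """A validated list of tasks plus the objective they serve."""

    tasks: List[Task]
    overall_objective: str


# Stand-in for structured LLM output; real data would come from the program.
payload = {
    "overall_objective": "Automate document triage",
    "tasks": [
        {"action": "Classify each document", "expected_result": "A label per file"},
        {"action": "Route by label", "expected_result": "Documents in the right queue"},
    ],
}

task_list = TaskList(**payload)  # raises a ValidationError if fields are missing
print(task_list.tasks[0].action)  # → Classify each document
```

Because the output is a typed object rather than free text, the downstream agent loop can iterate over `task_list.tasks` without any fragile string parsing.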
What's the best LlamaIndex GPT for searching for things about LlamaIndex?