Find answers from the community

Jeff
Joined September 25, 2024
Hello, how can I extract, for a ReActAgent, the step-by-step reasoning process that is displayed on the console?
5 comments
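One library-agnostic way to get at reasoning that is only printed to the console (e.g. when the agent runs with verbose output) is to redirect stdout while the agent runs. The sketch below uses a hypothetical `run_agent` stand-in for the real agent call; only the capture technique is the point here.

```python
import io
from contextlib import redirect_stdout

def run_agent(query: str) -> str:
    # Hypothetical stand-in for a verbose agent call such as agent.chat(query),
    # which prints its Thought/Action/Observation steps to the console.
    print("Thought: I need to look up the answer.")
    print("Action: search")
    print("Observation: found it.")
    return "final answer"

buffer = io.StringIO()
with redirect_stdout(buffer):
    answer = run_agent("What did Paul Graham work on?")

reasoning = buffer.getvalue()  # the step-by-step trace, now a plain string
print(reasoning)
```

The captured `reasoning` string can then be logged, parsed, or returned alongside the final answer.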
Hello, following this recursive_retriever example https://docs.llamaindex.ai/en/stable/examples/query_engine/recursive_retriever_agents/ can I query a composable retriever and route queries to multiple agents, not only to one? Will changing similarity_top_k=2 (or higher) query more of them at the same time, or should I use another technique?
7 comments
Hi, how can I control how many questions are generated for the sub-question query engine? And for the QueryEngineTool metadata, is the name used in this process in any way, or is only the description taken into consideration?
Plain Text
# setup base query engine as tool
query_engine_tools = [
    QueryEngineTool(
        query_engine=vector_query_engine,
        metadata=ToolMetadata(
            name="pg_essay",
            description="Paul Graham essay on What I Worked On",
        ),
    ),
]
18 comments
Jeff
·

Guidance

4 comments
Hello, I'm getting the error below when I try to run this in Docker.
aws_session_token is not provided, but the error is about aws_region_name.
How can I fix the problem? Thanks

llm = Bedrock(
^^^^^^^^
TypeError: Bedrock.__init__() got an unexpected keyword argument 'aws_region_name'

Plain Text
from llama_index.llms import Bedrock
import os
from dotenv import load_dotenv
load_dotenv()

llm = Bedrock(
    model="amazon.titan-text-express-v1",
    aws_access_key_id=os.getenv('AWS_ACCESS_KEY_ID'),
    aws_secret_access_key=os.getenv('AWS_SECRET_ACCESS_KEY'),
    aws_region_name="us-west-2",
)

resp = llm.complete("Paul Graham is ")
3 comments
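The traceback says the installed `Bedrock` class simply does not accept a keyword named `aws_region_name`; in some llama-index versions the parameter is spelled `region_name` instead, so checking which names the installed constructor actually accepts is a quick diagnostic. A generic sketch of that check, using a stand-in class rather than the real `Bedrock` (the same `inspect.signature` call works on any class):

```python
import inspect

class FakeBedrock:
    # Stand-in for the real Bedrock class, assumed here to use
    # region_name rather than aws_region_name.
    def __init__(self, model, aws_access_key_id=None,
                 aws_secret_access_key=None, region_name=None):
        self.model = model
        self.region_name = region_name

# List the keyword names __init__ really accepts before calling it.
accepted = set(inspect.signature(FakeBedrock.__init__).parameters) - {"self"}
print(sorted(accepted))
```

Running the same one-liner against your installed `Bedrock` class will show whether to pass `region_name`, `aws_region_name`, or something else entirely for your version.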
Jeff
·

Hello,

Hello, I'm exploring chat engine usage. What are the distinct roles of memory and chat_history? Are they intended for different purposes? Thank you!
6 comments
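A conceptual sketch of the usual distinction (these are illustrative stand-in classes, not the library's actual API): `chat_history` is typically the raw list of messages, while a memory object owns that history and decides how much of it fits into the model's context each turn, e.g. by trimming to a budget.

```python
from dataclasses import dataclass, field

@dataclass
class ChatMemoryBuffer:
    # Conceptual stand-in: memory OWNS the chat history and decides
    # how much of it to hand back to the model each turn.
    token_limit: int = 12
    chat_history: list = field(default_factory=list)  # raw (role, text) messages

    def put(self, role: str, text: str) -> None:
        self.chat_history.append((role, text))

    def get(self) -> list:
        # Return only the most recent messages that fit the budget,
        # counting one "token" per word for simplicity.
        out, used = [], 0
        for role, text in reversed(self.chat_history):
            cost = len(text.split())
            if used + cost > self.token_limit:
                break
            out.append((role, text))
            used += cost
        return list(reversed(out))

memory = ChatMemoryBuffer()
memory.put("user", "hello there")
memory.put("assistant", "hi how can I help you today")
memory.put("user", "what is a chat engine")

full = memory.chat_history  # everything ever said
window = memory.get()       # the trimmed view the model actually sees
```

Under this model, passing an explicit `chat_history` seeds or replaces the raw transcript, while `memory` governs retention and truncation policy on top of it.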