Find answers from the community

axentar
Joined September 25, 2024
Hey guys, I have a question regarding a query pipeline on an agent. Can the agent be a ContextRetrieverOpenAIAgent? The examples in the documentation always use a ReAct agent. Does anybody know if this is possible? Is there any documentation on this?
3 comments
Hey! Does anyone know if a query pipeline can take different paths or routes via an if condition or similar? Or does the same chain have to run over the whole set of functions?
9 comments
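A conceptual sketch of the branching idea the question asks about, in plain Python. This is not the LlamaIndex query-pipeline API; the function names are purely illustrative of routing a query down one of two paths via an if condition:

```python
# Two hypothetical pipeline branches (illustrative stand-ins, not real components).
def summarize(query: str) -> str:
    return f"[summary path] {query}"

def lookup(query: str) -> str:
    return f"[retrieval path] {query}"

def route(query: str) -> str:
    # The "if condition": inspect the query and pick which branch runs,
    # instead of sending every query through the same set of functions.
    if "summarize" in query.lower():
        return summarize(query)
    return lookup(query)

print(route("Please summarize the essay"))  # takes the summary path
print(route("Who wrote the essay?"))        # takes the retrieval path
```

In pipeline frameworks this pattern usually appears as a router or selector component that chooses a downstream node at runtime, rather than a literal `if` in user code.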
Hello. I'm trying to output an intermediate step of a query pipeline as one of the outputs of that same pipeline. Do you know if this is possible? If yes, could you point me to an example? Not the streaming one. I didn't understand that one 😦
5 comments
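A minimal sketch of exposing an intermediate step alongside the final output. Again this is generic Python, not the LlamaIndex API; `retrieve` and `synthesize` are hypothetical stages:

```python
def retrieve(query: str) -> list[str]:
    # Intermediate step: fetch supporting documents (stand-in logic).
    return [f"doc about {query}"]

def synthesize(docs: list[str]) -> str:
    # Final step: combine documents into an answer (stand-in logic).
    return " | ".join(docs)

def pipeline(query: str) -> dict:
    docs = retrieve(query)     # capture the intermediate result...
    answer = synthesize(docs)
    # ...and return it next to the final answer, so callers can see both.
    return {"retrieved": docs, "answer": answer}
```

The design idea is simply to capture the intermediate value where it is produced and include it in the pipeline's return structure, rather than letting it disappear between stages.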
Hello. I'm trying to replicate the SimpleDirectoryReader and VectorStoreIndex examples (the Paul Graham essay one) with my OpenAI API key. However, I get this error: BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 16723 tokens (16723 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}. Do you know if there's a modification or something I need to do on the OpenAI side for the VectorStoreIndex to run?
1 comment
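A rough illustration of what the 400 error above means: the prompt alone (16723 tokens) exceeds the model's 8192-token context window, so the request is rejected before any generation happens. The usual fix is to split the source text into smaller chunks so each request stays under budget. The token counts below use a crude words-based estimate, not a real tokenizer:

```python
MAX_CONTEXT = 8192  # the model's limit from the error message

def rough_token_count(text: str) -> int:
    # ~0.75 words per token is a common rule of thumb; this is only an estimate.
    return int(len(text.split()) / 0.75)

def fits(prompt: str, completion_budget: int = 256) -> bool:
    # The prompt plus the reserved completion space must fit in the window.
    return rough_token_count(prompt) + completion_budget <= MAX_CONTEXT

def chunk(text: str, max_tokens: int = 1024) -> list[str]:
    # Split the document into pieces small enough that any retrieved
    # subset keeps the final prompt under the model's limit.
    words = text.split()
    step = int(max_tokens * 0.75)
    return [" ".join(words[i:i + step]) for i in range(0, len(words), step)]
```

In practice, libraries like LlamaIndex handle this with a text splitter / chunk-size setting at indexing time; the sketch just shows why shrinking the chunks makes the error go away.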
Hello. Has anyone tried to add memory to a query pipeline without using agent handlers? If so, is there a tutorial or anything that might work for this? My query pipeline consists of multiple functions and retrievers. I tried to transform it into an agent-handled pipeline but was unable to.
1 comment
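A minimal sketch of the memory idea without any agent machinery: keep recent turns in a buffer and prepend them to each pipeline input. This is generic Python, not a LlamaIndex memory class; `run_pipeline` is a hypothetical stand-in for the real pipeline call:

```python
from collections import deque

class ChatMemory:
    """Fixed-size buffer of (user, assistant) turns."""

    def __init__(self, max_turns: int = 5):
        self.turns = deque(maxlen=max_turns)  # oldest turns fall off automatically

    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))

    def render(self) -> str:
        # Flatten the history into text that can be prepended to a prompt.
        return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)

def run_pipeline(query: str, memory: ChatMemory) -> str:
    prompt = f"{memory.render()}\nUser: {query}"  # history + new question
    answer = f"(answer to: {query})"              # stand-in for the real pipeline
    memory.add(query, answer)
    return answer
```

The point is that memory does not have to live inside an agent: any caller can own a history buffer, inject it into the pipeline input, and record the output afterwards.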
Hello guys. I've recently been working with the AzureOpenAI service as the LLM in LlamaIndex, trying to create a custom agent. I followed the tutorial at https://docs.llamaindex.ai/en/stable/examples/agent/custom_agent.html, changing only the llm part to AzureOpenAI instead of OpenAI. At the end of the tutorial, when I initialize the agent, I get the following error (shown in thread):
29 comments