Find answers from the community

Hi,
I'm trying out the new AgentWorkflow with some return_direct tools, and I'm wondering why the final AgentOutput.raw converts the tool's raw_output into a string: https://github.com/run-llama/llama_index/blob/98a1a4bfd4d54e108aaff52a550ff0763b1dab68/llama-index-core/llama_index/core/agent/workflow/multi_agent_workflow.py#L465
I understand that I could listen for ToolCallResult events on the context's event stream to get the raw output; I was just expecting it to be available from the StopEvent.
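For reference, this is roughly how I'm pulling the raw output off the event stream today (a minimal sketch; the OpenAI LLM, the lookup_user tool, and the model name are just placeholders, and the import paths assume a recent llama_index release):

```python
import asyncio

from llama_index.core.agent.workflow import AgentWorkflow, ToolCallResult
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def lookup_user(user_id: str) -> dict:
    """Toy tool; the dict is the raw_output I'd like back un-stringified."""
    return {"user_id": user_id, "plan": "pro"}


async def main():
    tool = FunctionTool.from_defaults(fn=lookup_user, return_direct=True)
    workflow = AgentWorkflow.from_tools_or_functions(
        [tool], llm=OpenAI(model="gpt-4o-mini")
    )

    handler = workflow.run(user_msg="Look up user 42")

    raw_result = None
    async for event in handler.stream_events():
        # ToolCallResult carries the ToolOutput, including raw_output,
        # before it is stringified into AgentOutput.raw / the StopEvent.
        if isinstance(event, ToolCallResult):
            raw_result = event.tool_output.raw_output

    final_output = await handler  # stringified version, as described above
    print(raw_result, final_output)


if __name__ == "__main__":
    asyncio.run(main())
```

It works, but it means keeping extra state alongside the run just to recover what the tool already returned.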
10 comments

Hi,

Hi,
I'm trying to use the new Instrumentation module, but I'm having difficulty adding a @dispatcher.span decorator to a generator function, since the span ends as soon as the first value is yielded.
I started implementing my custom span without the decorator, as shown in the module guide here: https://docs.llamaindex.ai/en/stable/module_guides/observability/instrumentation/#enteringexiting-a-span
However, looking at the source code, it seems some extra logic was added to the decorator for handling threads, and I'm not sure whether I need to replicate those locks in my own code: https://github.com/run-llama/llama_index/blob/baa3e82e56a647d0281135c8c279fa1c386e8f6c/llama-index-core/llama_index/core/instrumentation/dispatcher.py#L261-L265
It would be nice if LlamaIndex had a context manager for creating spans, or if the decorator worked with generator functions.
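For context, this is roughly what my decorator-free version looks like so far (a minimal sketch; the span_enter/span_exit/span_drop keyword arguments are my best reading of the guide and the current source and may differ between versions, and I haven't reproduced the thread-handling locks from the linked decorator code):

```python
import inspect
import uuid

import llama_index.core.instrumentation as instrument

dispatcher = instrument.get_dispatcher(__name__)


def stream_chunks(query: str):
    # Open the span by hand so it stays alive across yields; the decorator
    # would close it at the first `yield`. Note that because this runs inside
    # the generator body, span_enter fires on the first next(), not at call time.
    span_id = f"stream_chunks-{uuid.uuid4()}"
    bound_args = inspect.signature(stream_chunks).bind(query)

    dispatcher.span_enter(id_=span_id, bound_args=bound_args, instance=None)
    try:
        for chunk in ["chunk-1 ", "chunk-2 ", "chunk-3"]:
            yield chunk
    except Exception as err:
        # Mark the span as dropped if the consumer or the body raises.
        dispatcher.span_drop(id_=span_id, bound_args=bound_args, instance=None, err=err)
        raise
    else:
        dispatcher.span_exit(id_=span_id, bound_args=bound_args, instance=None, result=None)


for piece in stream_chunks("hello"):
    print(piece, end="")
```

A context manager (or generator-aware decorator) in the library would let me delete most of this boilerplate.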
8 comments

Error

Hi, I'm using StreamingAgentChatResponse's async_response_gen to stream generated text to users through a FastAPI endpoint, and I'd like to return a 5xx status code when something goes wrong. However, it seems that any errors raised during the request to the LLM get swallowed here: https://github.com/run-llama/llama_index/blob/0ee041efadeccb9884052cb393ed5e1dd7b83678/llama-index-core/llama_index/core/chat_engine/types.py#L196
I see a PR was merged recently to allow re-raising exceptions for synchronous calls, but nothing was added for async: https://github.com/run-llama/llama_index/pull/10407/files
Any ideas for how to work around this?
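The best workaround I've come up with so far is the sketch below (the index/chat engine setup is just a stand-in for my real one; since the exception is swallowed inside the background writer task, I'm treating an immediately empty stream as a proxy for failure, which obviously isn't ideal):

```python
from fastapi import FastAPI, HTTPException
from fastapi.responses import StreamingResponse
from llama_index.core import Document, VectorStoreIndex

app = FastAPI()

# Stand-in chat engine; mine is built from real documents elsewhere.
index = VectorStoreIndex.from_documents([Document(text="hello world")])
chat_engine = index.as_chat_engine()


@app.get("/chat")
async def chat(message: str):
    streaming_response = await chat_engine.astream_chat(message)
    token_gen = streaming_response.async_response_gen()

    try:
        # Pull the first token before committing to a 200. When the LLM call
        # fails and the error is swallowed, the generator typically finishes
        # without yielding anything, so an immediately empty stream is the
        # only signal I have to turn into a 5xx here.
        first_token = await token_gen.__anext__()
    except StopAsyncIteration:
        raise HTTPException(status_code=502, detail="LLM request failed or returned nothing")

    async def stream():
        yield first_token
        async for token in token_gen:
            yield token

    return StreamingResponse(stream(), media_type="text/plain")
```

It can't distinguish a genuine empty completion from a swallowed error, and once streaming has started the status code is fixed, so an async equivalent of the re-raise behavior from that PR would still be very welcome.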
4 comments