Updated 2 months ago

Events

At a glance

The community member wants to modify a tutorial to collect events for each request independently and achieve independent event streaming for each request. They are concerned that the event handler of the dispatcher will be applied to all requests, so when requests are processed in parallel, events will be returned to the wrong user. The community member asks how they can distinguish events per user.

In the comments, another community member suggests using the tagging mechanism to attach tags to local events, such as with instrument_tags({"user": "me"}): .... The original community member says this is one possible solution, but asks if there is an easier way to implement it, as it may be difficult to identify the user's events from the tag and return them in the stream.

Another community member suggests that the instrumentation stuff is not meant to be user-facing, and it may be easier to break the process into more low-level steps and log information as needed. The original community member mentions that they have implemented a custom event handler and a queue, but it did not work.

The community members discuss different approaches, including using workflows and the streaming API provided by the llama_index library. The original community member says they will follow the suggested approach and utilize llama deploy if they use a workflow to implement the functionality.

Useful resources
I want to modify this tutorial to collect events for each request independently,
and achieve independent event streaming for each request.
https://github.com/rsrohan99/rag-stream-intermediate-events-tutorial/tree/b5062c31d0c4a9cf619f673de84967f2f7c12e35

Based on my understanding, an event handler attached to the dispatcher is applied to all requests, so when requests are processed in parallel, events will be returned to the wrong user.
How can I distinguish events per user?

Events that I want to utilize:
https://docs.llamaindex.ai/en/stable/api_reference/instrumentation/event_types/
9 comments
You can use the tagging mechanism to attach tags to local events

Plain Text
from llama_index.core.instrumentation.dispatcher import instrument_tags

with instrument_tags({"user": "me"}):
    ...  # code in this block will emit events with this tag
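
For illustration, a minimal sketch (not from the thread) of how such a tag might be consumed on the handler side: subclass BaseEventHandler, register it on the dispatcher, and filter on the tag. The class and field names are made up, and whether emitted events actually expose the active tags as an event.tags attribute depends on the llama_index version, so treat that part as an assumption to verify.

Plain Text
from llama_index.core.instrumentation import get_dispatcher
from llama_index.core.instrumentation.event_handlers import BaseEventHandler
from llama_index.core.instrumentation.events import BaseEvent


class UserTaggedEventHandler(BaseEventHandler):
    """Only reacts to events carrying a given user tag (illustrative)."""

    user: str = "me"

    @classmethod
    def class_name(cls) -> str:
        return "UserTaggedEventHandler"

    def handle(self, event: BaseEvent, **kwargs) -> None:
        # Assumption: events expose the tags set via instrument_tags; check
        # your llama_index version before relying on this attribute.
        tags = getattr(event, "tags", None) or {}
        if tags.get("user") == self.user:
            print(f"[{self.user}] {event.class_name()}")


dispatcher = get_dispatcher()
dispatcher.add_event_handler(UserTaggedEventHandler(user="me"))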
Thank you very much. That will be one possible solution.

Is there any other way that would be easier to implement, since it is a bit difficult to identify the user's events from the tag and return them in the stream?

I wanted to implement a request-scoped dispatcher as follows, but it does not work


Plain Text
from queue import Queue
from llama_index.core.instrumentation import get_dispatcher

event_q = Queue()
event_handler = RequestEventHandler(event_q)
dispatcher = get_dispatcher()
dispatcher.add_event_handler(event_handler)
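
For reference, a rough sketch (assumed, not from the thread) of what such a RequestEventHandler could look like on top of BaseEventHandler. Note that get_dispatcher() returns the shared root dispatcher, so a handler registered this way still receives events from every in-flight request; without something like the tag check above there is no per-request isolation.

Plain Text
from typing import Any

from llama_index.core.instrumentation.event_handlers import BaseEventHandler
from llama_index.core.instrumentation.events import BaseEvent


class RequestEventHandler(BaseEventHandler):
    """Pushes every event it receives onto a per-request queue (sketch)."""

    # Typed as Any so the pydantic model accepts a plain queue.Queue instance.
    event_q: Any = None

    @classmethod
    def class_name(cls) -> str:
        return "RequestEventHandler"

    def handle(self, event: BaseEvent, **kwargs) -> None:
        self.event_q.put(event)


# With a pydantic-based handler, the queue is passed as a keyword argument:
# event_handler = RequestEventHandler(event_q=event_q)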

@Logan M
I'm not sure what modules you are using, but it sounds like there is an easier way to accomplish what you are doing

If you are just trying to attach data to a response from a fastapi endpoint, it should be much easier πŸ˜…
Tbh the instrumentation stuff isn't really meant to be user facing, more so for people like arize to hook into.

In most cases, it's probably easier to break whatever you are doing into more low level steps and log info as needed
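
As one illustration of that suggestion (a sketch under assumptions, not code from the thread): run retrieval and synthesis as explicit low-level steps and yield your own progress messages from a per-request generator, which something like a FastAPI StreamingResponse can then consume. The function name and message shapes below are made up for illustration.

Plain Text
from llama_index.core import VectorStoreIndex, get_response_synthesizer


def answer_with_progress(index: VectorStoreIndex, query: str):
    """Yield progress messages and answer tokens for a single request."""
    # Step 1: retrieval, announced explicitly
    retriever = index.as_retriever()
    yield {"type": "progress", "msg": "retrieving"}
    nodes = retriever.retrieve(query)
    yield {"type": "progress", "msg": f"retrieved {len(nodes)} nodes"}

    # Step 2: streaming synthesis over the retrieved nodes
    synthesizer = get_response_synthesizer(streaming=True)
    response = synthesizer.synthesize(query, nodes=nodes)
    for token in response.response_gen:
        yield {"type": "token", "text": token}

Because the generator is created inside the request handler, its events can only ever belong to that request.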
I see.

My implementation is mostly the same as this

Only the following changes are introduced:
  • A CustomEventHandler is created and registered within the chat function
  • A Queue is created within the chat function
I think returning internal events to the frontend is a common requirement, but does LlamaIndex not provide such functionality?

Do I need to write my own implementation, as perhaps you have done here, or should I try a callback?
@Logan M
What information are you trying to get?
Tbh I would write this myself with a workflow: https://docs.llamaindex.ai/en/stable/module_guides/workflow/#workflows

There's a very nice streaming API
https://docs.llamaindex.ai/en/stable/module_guides/workflow/#streaming-events

And many examples
https://docs.llamaindex.ai/en/stable/module_guides/workflow/#examples

I love workflows because they make it easy to expose and customize lower-level operations.
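
For context, a minimal sketch of that streaming API (ProgressEvent and ChatFlow are illustrative names, not from the thread): each call to .run() returns its own handler, so events streamed from it belong to exactly one request.

Plain Text
import asyncio

from llama_index.core.workflow import (
    Context, Event, StartEvent, StopEvent, Workflow, step,
)


class ProgressEvent(Event):
    msg: str


class ChatFlow(Workflow):
    @step
    async def answer(self, ctx: Context, ev: StartEvent) -> StopEvent:
        # Intermediate events go to this run's stream only
        ctx.write_event_to_stream(ProgressEvent(msg="retrieving"))
        ctx.write_event_to_stream(ProgressEvent(msg="synthesizing"))
        return StopEvent(result=f"answer to: {ev.query}")


async def main():
    handler = ChatFlow(timeout=60).run(query="hello")
    async for ev in handler.stream_events():  # only this request's events
        if isinstance(ev, ProgressEvent):
            print(ev.msg)
    print(await handler)  # final result


asyncio.run(main())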
I see. I will follow your approach.
I can also utilize llama deploy if I use a workflow to implement the functionality, so it seems good.
Thank you! πŸ™