
Updated last year

Hi everyone, as you may already know, we have a new language, Mojo (a slightly "better" version of Python). Mojo can easily use Python modules.

So, I have a question for the LlamaIndex developers. Right now LlamaIndex is rather slow (a query can take something like 20 seconds). Maybe we could try using Mojo to make it faster?
2 comments
LlamaIndex is not slow because of computation. Most queries take 3-5 seconds, closer to 10s for more complicated query engines.

The majority of this wait time is spent waiting for OpenAI to respond. That said, I've only seen a few blog posts about Mojo, so maybe you had something specific in mind.

In LlamaIndex, the largest wait times come either from API calls or, if you are running an LLM locally, from generating text.
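The point above is easy to check for yourself: if a query is I/O-bound (waiting on a remote LLM API), then speeding up the local compute with a faster language barely moves the total. Here is a minimal, self-contained Python sketch; `fake_llm_call` and `local_processing` are hypothetical stand-ins, not LlamaIndex APIs, with `time.sleep` simulating the network round trip.

```python
import time

def fake_llm_call() -> str:
    # Stand-in for a network round trip to a hosted LLM API (e.g. OpenAI).
    # The client process mostly blocks here, waiting on the server.
    time.sleep(0.2)
    return "the model's answer"

def local_processing(text: str) -> str:
    # Stand-in for local work: prompt assembly, parsing the response, etc.
    return text.upper()

t0 = time.perf_counter()
result = fake_llm_call()
api_time = time.perf_counter() - t0

t0 = time.perf_counter()
processed = local_processing(result)
local_time = time.perf_counter() - t0

print(f"API wait: {api_time:.3f}s, local compute: {local_time:.6f}s")
```

Even if Mojo made `local_processing` 100x faster, the total time would still be dominated by `api_time` — which is why rewriting LlamaIndex's internals in Mojo wouldn't address the latency people notice.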