How has everyone dealt with speed issues

How has everyone dealt with speed issues? Our dataset isn't that large but querying an index seems to take a while (possibly because of the number of network requests in our API).

Has switching to a vector specific DB helped?
6 comments
Have you tried our async support?

index.query(..., response_mode="tree_summarize", use_async=True)

(looking into making some of this default behavior)
@jerryjliu0 I have not. Does that make the calls asynchronously and then handle the response with a callback once the network replies (similar to promises in JS)?
With the above command it's still called from a synchronous wrapper; the async calls happen internally.
We do have a native async entry point in `aquery` too.
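Conceptually, yes: with `use_async=True` the per-chunk LLM calls are awaited concurrently rather than one after another, much like `Promise.all` in JS, and `aquery` exposes that as an awaitable you can call from your own event loop. A minimal sketch of the pattern using only the standard library (`fake_llm_call` and `summarize_tree` are hypothetical stand-ins for the network round trips the index makes, not LlamaIndex APIs):

```python
import asyncio
import time

async def fake_llm_call(chunk: str) -> str:
    # Stand-in for one network round trip to the LLM API.
    await asyncio.sleep(0.1)
    return f"summary of {chunk}"

async def summarize_tree(chunks: list[str]) -> list[str]:
    # Fire all per-chunk calls concurrently and wait for them
    # together, like Promise.all in JS. Sequentially this would
    # take ~0.1s per chunk; concurrently it takes ~0.1s total.
    return await asyncio.gather(*(fake_llm_call(c) for c in chunks))

start = time.perf_counter()
results = asyncio.run(summarize_tree(["a", "b", "c"]))
elapsed = time.perf_counter() - start
```

Since tree summarization fans out many independent LLM calls, this is where concurrency recovers most of the latency lost to network requests.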
hey @jerryjliu0!

My partner @Tenzin | Tali AI and I have been using LlamaIndex for a few weeks now. We have been incredibly impressed with the library, and it has been instrumental in our development process.

We are currently building a project called Minutescribe that started out as a meeting transcription application, but now includes an LLM plugged into many data sources, acting as a personal assistant. The project is still in the development phase and is changing rapidly.

We believe that LlamaIndex will be an integral part of our project, and we are committed to building on top of it. We would love to discuss how we can collaborate and work together to further develop both of our projects. Specifically, we are interested in learning how we can support you and LlamaIndex.

In general, we are trying to build relationships in the space.

Would you be open to jumping on a quick call to discuss?
@Ali | Tali AI sure! Feel free to DM me on discord