In this example: https://docs.llamaindex

in the vector store in the storage context

Kind of janky. I can't remember if I mentioned this before, but some new knowledge graph stuff is in the pipeline that is hopefully much less janky
I just wound up coding it myself. One question though: can I get program = LLMTextCompletionProgram.from_defaults() to asyncify? I sort of charged through the project on faith of that being possible without knowing
from_defaults() isn't doing anything blocking though, right?

To use the program in async, you can do await program.acall() though
yeah that's what I meant, for the calls to the llm
should be able to use acall() then (assuming you are running your LLM over an API, and not directly in the process)
do I need to change my llm instance to an async version, or will it sort that out for me since I'm calling acall() on a class that uses an llm?
btw I could probably contribute a new storage adapter that streams live graphs (creation and subscription to change feeds). I'd need to see an existing repo of something similar; maybe llama-index-graph-stores-nebula is most fitting?
I only used llama-index for the pydantic programs
It sorts that out for you, assuming the LLM supports async (most do)
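
For reference, a minimal sketch of the async pattern discussed above. It assumes an API-backed LLM (e.g. OpenAI) is configured as the default; the Song class and prompt are placeholder examples, not part of the original thread.

```python
import asyncio

from pydantic import BaseModel
from llama_index.core.program import LLMTextCompletionProgram


class Song(BaseModel):
    """Placeholder structured output for the program."""
    title: str
    length_seconds: int


# from_defaults() only builds the program object; nothing blocking happens here.
program = LLMTextCompletionProgram.from_defaults(
    output_cls=Song,
    prompt_template_str="Write a song about {topic}.",
)


async def main() -> None:
    # acall() awaits the LLM call; the same LLM instance is reused,
    # as long as the underlying LLM class supports async.
    song = await program.acall(topic="the ocean")
    print(song.title, song.length_seconds)


asyncio.run(main())
```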