```python
program = LLMTextCompletionProgram.from_defaults()
await program.acall()
```

Is that enough to asyncify? I sort of charged through the project on faith of that being possible without knowing. Do I need to switch the `llm` instance to an async version, though, or is it going to sort that out for me since I used `acall`? And what about the `llm` of a class that uses an llm?

On a separate note, is `llama-index-graph-stores-nebula` the package that is most fitting?
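For context on the `acall` question: LlamaIndex programs route `acall()` through the LLM's async entry point, and LLM classes that only implement a blocking call typically fall back to running it off the event loop, so the caller usually does not have to swap the instance. Here is a minimal self-contained sketch of that sync/async dual-interface pattern — the class names below are illustrative stand-ins, not the actual llama-index internals:

```python
import asyncio


class SyncOnlyLLM:
    """Illustrative LLM that only implements a blocking complete()."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

    async def acomplete(self, prompt: str) -> str:
        # Fallback: run the blocking call in a worker thread so the
        # event loop is not blocked (a common framework default).
        return await asyncio.to_thread(self.complete, prompt)


class Program:
    """Illustrative stand-in for an LLMTextCompletionProgram-style object."""

    def __init__(self, llm) -> None:
        self.llm = llm

    def __call__(self, prompt: str) -> str:
        return self.llm.complete(prompt)

    async def acall(self, prompt: str) -> str:
        # acall() always goes through the llm's async entry point,
        # so the caller does not need to swap the llm instance.
        return await self.llm.acomplete(prompt)


program = Program(SyncOnlyLLM())
print(asyncio.run(program.acall("hi")))  # echo: hi
```

The same reasoning applies one level up: if your own class holds an `llm` attribute, exposing an `async` method that awaits the llm's async entry point is enough; the sync-only llm still works through the thread fallback.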