Is there any way to make the below code an asynchronous call?
Plain Text
index = PropertyGraphIndex.from_documents(
    documents=[document],
    llm=llm_model,
    embed_model=embedding_model,
    kg_extractors=[
        ImplicitPathExtractor(),
        LLM_extractor,
    ],
    show_progress=True,
)
17 comments
Python constructors can't be async. But you could run what it's doing manually and async. I think it's a tad hacky at the moment, though.

Plain Text
from llama_index.core.ingestion import arun_transformations
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.indices.property_graph import ImplicitPathExtractor
from llama_index.core.schema import TransformComponent

# this will insert the kg nodes into the metadata of each text node
nodes = await arun_transformations(
    [document],
    transformations=[SentenceSplitter(), ImplicitPathExtractor(), LLM_extractor],
)

# need some dummy extractor I think, but then throw it into the index
class DummyTransform(TransformComponent):
    def __call__(self, nodes, **kwargs):
        return nodes

index = PropertyGraphIndex(nodes=nodes, kg_extractors=[DummyTransform()], ...)
@Logan M it seems that the index becomes the type
Plain Text
Coroutine[Any, Any, PropertyGraphIndex]
and I can't call other functions on it, such as index.property_graph_store (it says
Plain Text
'coroutine' object has no attribute 'property_graph_store'
). Is there any way to keep most of the original functionality?
And I don't see how to specify the embedding_model there...?
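As an aside, the Coroutine type and the attribute error above are plain asyncio behaviour rather than anything llama-index specific: calling an async function without await yields a coroutine object instead of its result. A minimal stand-in sketch (the Index class here is hypothetical, not a llama-index type):

```python
import asyncio

async def make_index():
    # stand-in for an async index factory
    class Index:
        property_graph_store = "store"
    return Index()

async def main():
    coro = make_index()      # missing "await": this is just a coroutine object
    try:
        coro.property_graph_store
    except AttributeError as exc:
        error_message = str(exc)
    index = await coro       # awaiting the coroutine yields the real object
    return error_message, index.property_graph_store

error_message, store = asyncio.run(main())
print(error_message)  # 'coroutine' object has no attribute 'property_graph_store'
```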
What, how 😅 What line of code makes it become this?

And you can specify the embedding model in the constructor
@Logan M Actually, nvm on that 😅 Just, even after implementing this, I got
Plain Text
asyncio.run() cannot be called from a running event loop
error, and if I use nest_asyncio, the error becomes
Plain Text
this event loop is already running
instead. Is there any way to resolve this?
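For context, the first error can be reproduced with the standard library alone; it is raised whenever asyncio.run() is invoked while an event loop is already running in the same thread:

```python
import asyncio

async def main():
    try:
        # a sync code path that calls asyncio.run() while main() is running
        asyncio.run(asyncio.sleep(0))
    except RuntimeError as exc:
        return str(exc)

message = asyncio.run(main())
print(message)  # asyncio.run() cannot be called from a running event loop
```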
What version of llama-index-core do you have? Pretty sure I fixed that
@Logan M I currently have llama-index-core 0.10.57. In which version was this fixed?
Hmm, pretty sure that should have had it fixed 🤔 I'm guessing you are running in FastAPI or some async server?
@Logan M Yeah, I'm calling the above with FastAPI through an async function... 😅 Has anyone had this case solved?
I think it probably requires some change to the library
Although longer term we really shouldn't be nesting async
But Python constructors can't be async 😅
So there needs to be an async insert function (there currently isn't)
So you mean this problem is currently unsolved...? Or is there any way I can change that...?
Unsolved (although tbh I've never had an issue either so unsure actually)

Some possible solutions
  • set the loop type to asyncio if you haven't already: uvicorn.run(..., loop="asyncio")
  • run the ingestion in a separate python thread so that it can use its own async loop
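The second suggestion can be sketched with the standard library alone; build_index below is a hypothetical stand-in for the real ingestion coroutine:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def build_index():
    # stand-in for the async llama-index ingestion work
    await asyncio.sleep(0)
    return "index"

async def endpoint():
    # even though a loop is already running here (as it would be in FastAPI),
    # the worker thread has no loop of its own, so asyncio.run() is safe there
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=1) as pool:
        result = await loop.run_in_executor(pool, asyncio.run, build_index())
    return result

print(asyncio.run(endpoint()))  # index
```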
Got it. Thank you!