Can anyone help check this? Is it only me facing this issue?
You can see in the logs that it technically didn't extract any relationships (0 items when generating embeddings)
Probably the default schema doesn't match your data well. Really, it's made for you to create your own schema; I don't expect the default to work well in every case
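For context, a minimal illustration of why a schema mismatch can yield zero extracted items: if every triplet the LLM proposes uses entity or relation types outside the allowed schema, strict validation filters them all out. This is a plain-Python sketch with made-up type names, not the actual llama-index extractor internals.

```python
# Hypothetical allowed types -- a real schema would be tailored to the data.
ALLOWED_ENTITIES = {"PERSON", "ORGANIZATION", "PLACE"}
ALLOWED_RELATIONS = {"WORKS_AT", "LOCATED_IN"}

def validate_triplets(triplets):
    """Keep only (subject_type, relation, object_type) triplets that fit the schema."""
    return [
        t for t in triplets
        if t[0] in ALLOWED_ENTITIES
        and t[1] in ALLOWED_RELATIONS
        and t[2] in ALLOWED_ENTITIES
    ]

# If the LLM labels everything with types outside the schema,
# strict validation drops every triplet -- hence "0 items" in the logs.
proposed = [("PRODUCT", "MADE_BY", "COMPANY"), ("EVENT", "HELD_IN", "CITY")]
print(validate_triplets(proposed))  # []
```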
Hi Logan, thanks again for your work. The problem is that I am not using my own data... I strictly followed the tutorial, using the default schema and the default data from the tutorial
The only two settings different from the tutorial are: 1) I used Neo4j Desktop, not Docker, but I don't think that really matters? 2) the LLM model is "gpt-4o" rather than "gpt-3.5-turbo", as the latter is deprecated... I tried gpt-3.5 as well and faced the same issue.
wowza this is a bonkers bug
been debugging for like the last 20 mins
v0.11 and pydantic v2 definitely broke it
should have some kind of a fix shortly in a PR
actually, the LLM is completely hallucinating the pydantic structure 😭
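To illustrate what "hallucinating the pydantic structure" can look like in practice: the LLM returns JSON whose keys don't match the expected model, so structured parsing finds nothing and the extractor silently ends up with zero items. A self-contained stdlib sketch (the key names are hypothetical, not the actual llama-index internals):

```python
import json

EXPECTED_KEYS = {"triplets"}  # keys the structured-output parser expects

def parse_llm_output(raw: str):
    """Return the triplet list, or [] when the JSON shape is wrong."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []
    if not isinstance(data, dict) or set(data) != EXPECTED_KEYS:
        return []  # hallucinated structure: silently yields zero items
    return data["triplets"]

good = '{"triplets": [["Alice", "WORKS_AT", "Acme"]]}'
bad = '{"entities": ["Alice"], "relations": ["WORKS_AT"]}'  # wrong shape
print(len(parse_llm_output(good)), len(parse_llm_output(bad)))  # 1 0
```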
I don't think this will be a quick fix
that's okay, please keep me updated...
I began with my own dataset and it didn't work well, so I came back to the tutorial.
all the best with the debugging.
the GraphRAG v2 tutorial also does not work on my own dataset... you may also want to try that one later
fyi though, you almost certainly do not need a graph, just my hot take/2 cents πŸ™‚
GraphRAG v1 works well on my own dataset... so I am quite curious about the performance of GraphRAG v2.
Is it fixed now? Which packages should I update?
I need to merge the above first and publish a new version of llama-index-core
waiting for CI/CD to pass, but just wanted to give you a heads up 🙂