New to LlamaIndex and experimenting with PropertyGraphIndex using Kuzu and Ollama

Hey everyone - new to LlamaIndex and just experimenting with PropertyGraphIndex using Kuzu and Ollama with a local model. I'm basically running the example here, but subbing in Ollama. While the local model seems to extract paths from the text without issue, the following code never successfully finishes generating embeddings -- it just seems to fail silently:
Plain Text
from llama_index.core import PropertyGraphIndex
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor

# documents and graph_store are defined elsewhere (not shown);
# embed_model and extract_llm are configured further down
index = PropertyGraphIndex.from_documents(
    documents,
    embed_model=embed_model,
    kg_extractors=[SchemaLLMPathExtractor(extract_llm)],
    property_graph_store=graph_store,
    show_progress=True,
)

which outputs this:
Plain Text
Extracting paths from text with schema: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 22/22 [00:39<00:00,  1.81s/it]
Generating embeddings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:03<00:00,  1.07s/it]
Generating embeddings: 0it [00:00, ?it/s]


When I run the actual code from the LlamaIndex example (using OpenAI), the embeddings finish:
Plain Text
Extracting paths from text with schema: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 22/22 [00:28<00:00,  1.28s/it]
Generating embeddings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00,  1.04it/s]
Generating embeddings: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00,  2.78it/s]


I'm probably doing something really obviously dumb, but I've been at it for a while and can't figure it out -- would really appreciate any help!

The config for the models is this:
Plain Text
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama
from llama_index.core import Settings

embed_model = OllamaEmbedding(model_name="llama3.2:1b", use_async=True)
extract_llm = Ollama(model="mistral-nemo", temperature=0.0, json_mode=True, request_timeout=3600)
generate_llm = Ollama(model="llama3.2:1b", temperature=0.3, request_timeout=3600)
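
(Side note: Settings is imported above but never used in the snippet. If the intent was to register these models as process-wide defaults, the usual wiring would be something like the sketch below -- this is an assumption, not something from the original post:)
Plain Text
from llama_index.core import Settings

# assumed wiring: make these the global defaults
Settings.embed_model = embed_model
Settings.llm = generate_llm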
4 comments
Notice the

Plain Text
Generating embeddings: 0it [00:00, ?it/s]


The 0it is the tell -- looking at the code, it means zero entities/relations were extracted, so there was nothing to embed.
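One quick way to confirm that (a rough sketch -- it assumes the Kuzu store implements the generic get_triplets() interface from PropertyGraphStore) is to ask the store what it holds after the build:
Plain Text
# count what actually landed in the graph store after from_documents()
triplets = index.property_graph_store.get_triplets()
print(f"{len(triplets)} triplets extracted")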
Pretty standard for an open-source model, tbh
try the dynamic path extractor (DynamicLLMPathExtractor) -- the schema extractor is probably too strict
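For later readers, swapping that in looks roughly like this (a sketch, not verified against this exact setup; the entity/relation type seeds are made-up examples and are optional):
Plain Text
from llama_index.core import PropertyGraphIndex
from llama_index.core.indices.property_graph import DynamicLLMPathExtractor

kg_extractor = DynamicLLMPathExtractor(
    llm=extract_llm,
    max_triplets_per_chunk=10,
    # optional seeds -- the LLM can still invent types beyond these,
    # which is what makes it more forgiving than the schema extractor
    allowed_entity_types=["PERSON", "ORGANIZATION", "PLACE"],
    allowed_relation_types=["WORKED_AT", "LOCATED_IN", "PART_OF"],
)

index = PropertyGraphIndex.from_documents(
    documents,
    embed_model=embed_model,
    kg_extractors=[kg_extractor],
    property_graph_store=graph_store,
    show_progress=True,
)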
That's super helpful, Logan - thx. It runs with SimpleLLMPathExtractor but doesn't get much. Trying now with an actual schema, and it's taking its time... (not surprising). I'll experiment and see if there's a sweet spot.
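(For anyone who lands here later, a schema-constrained setup looks roughly like the sketch below -- the Literal types are placeholder examples, and the extractor's strict=False flag keeps triples that fall outside the schema, which tends to help weaker local models:)
Plain Text
from typing import Literal
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor

kg_extractor = SchemaLLMPathExtractor(
    llm=extract_llm,
    # placeholder types -- swap in whatever fits the documents
    possible_entities=Literal["PERSON", "ORGANIZATION", "PLACE"],
    possible_relations=Literal["WORKED_AT", "LOCATED_IN", "PART_OF"],
    strict=False,  # keep triples that fall outside the schema
)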