tsc67
Hey everyone - new to LlamaIndex and just experimenting with PropertyGraphIndex using Kuzu and Ollama with a local model. I'm basically running the example here, but subbing in Ollama. While paths seem to be extracted from the text by the local model without issue, the following code never seems to finish generating embeddings -- it just seems to fail silently:
Plain Text
from llama_index.core import PropertyGraphIndex
from llama_index.core.indices.property_graph import SchemaLLMPathExtractor

index = PropertyGraphIndex.from_documents(
    documents,
    embed_model=embed_model,
    kg_extractors=[SchemaLLMPathExtractor(extract_llm)],
    property_graph_store=graph_store,
    show_progress=True,
)

which outputs this:
Plain Text
Extracting paths from text with schema: 100%|██████████| 22/22 [00:39<00:00,  1.81s/it]
Generating embeddings: 100%|██████████| 3/3 [00:03<00:00,  1.07s/it]
Generating embeddings: 0it [00:00, ?it/s]


When I run the actual code from the LlamaIndex example (using OpenAI), it finishes generating the embeddings:
Plain Text
Extracting paths from text with schema: 100%|██████████| 22/22 [00:28<00:00,  1.28s/it]
Generating embeddings: 100%|██████████| 1/1 [00:00<00:00,  1.04it/s]
Generating embeddings: 100%|██████████| 2/2 [00:00<00:00,  2.78it/s]


I'm probably doing something really obviously dumb, but I've been at it for a while and can't figure it out -- would really appreciate any help!

The config for the models is this:
Plain Text
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama
from llama_index.core import Settings

# embedding model served by the local Ollama instance
embed_model = OllamaEmbedding(model_name="llama3.2:1b", use_async=True)
# LLM for schema-guided path extraction (JSON mode)
extract_llm = Ollama(model="mistral-nemo", temperature=0.0, json_mode=True, request_timeout=3600)
# LLM intended for query-time generation (not used in the snippet above)
generate_llm = Ollama(model="llama3.2:1b", temperature=0.3, request_timeout=3600)
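
In case it's useful context, a minimal standalone check of the embedding model (outside the index build entirely) would be something like the sketch below -- it just reuses the same model name as the config above and asks for a single embedding, assuming the local Ollama server is running and the model has been pulled:
Plain Text
from llama_index.embeddings.ollama import OllamaEmbedding

# same embedding config as above; assumes the local Ollama server is running
# and the llama3.2:1b model has been pulled
embed_model = OllamaEmbedding(model_name="llama3.2:1b", use_async=True)

# request a single embedding outside of PropertyGraphIndex to confirm the model responds
vector = embed_model.get_text_embedding("hello world")
print(len(vector))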
4 comments