Ollama

Hi, I have 2 problems.
1. I was trying to use Mistral from Ollama locally. The model was downloaded, yet the service context fails while trying to connect via the API:

llm = Ollama(model="mistral", request_timeout=30.0)


callback_manager = CallbackManager([LlamaDebugHandler()])
service_context = ServiceContext.from_defaults(
    llm=llm, callback_manager=callback_manager, chunk_size=256,
    embed_model="local")


File ~/anaconda3/lib/python3.11/site-packages/httpx/_transports/default.py:84 in map_httpcore_exceptions
raise mapped_exc(message) from exc

ConnectError: [Errno 61] Connection refused

2. Using FAISS for indexing

# create faiss index

d = 100
faiss_index = faiss.IndexFlatL2(d)

# construct vector store

vector_store = FaissVectorStore(faiss_index)

While trying to test indexing some wiki articles:

# add documents to index

for wiki_title in wiki_titles:
    index.insert(docs_dict[wiki_title])

I am getting:
File ~/anaconda3/lib/python3.11/site-packages/faiss/class_wrappers.py:228 in replacement_add
assert d == self.d

AssertionError

Is this normal?

Please help with guidance if you have any. Thanks in advance
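
The assertion compares the hard-coded dimension (d = 100 above) with the dimension of the vectors being inserted, so the FAISS index presumably has to be sized to match the embedding model's output. A minimal sketch, assuming the same "local" embed model as in the service context above:

import faiss
from llama_index.embeddings import resolve_embed_model

# Resolve the same "local" embedding model and read off its output dimension
# instead of hard-coding d = 100.
embed_model = resolve_embed_model("local")
d = len(embed_model.get_text_embedding("dimension probe"))

faiss_index = faiss.IndexFlatL2(d)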
17 comments
Are you done with the first point mentioned in the setup process here: https://docs.llamaindex.ai/en/stable/examples/llm/ollama.html#setup ?
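Once that step is done and the Ollama server is running locally (default port 11434), a quick sanity check might look like this; the import path assumes the legacy llama_index package used elsewhere in this thread:

from llama_index.llms import Ollama

# If the server is reachable, this prints a completion instead of raising
# ConnectError: [Errno 61] Connection refused.
llm = Ollama(model="mistral", request_timeout=30.0)
print(llm.complete("Reply with a single word."))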
Hi, I haven't done it. Will try and update, thanks.
Thanks very much, the LLM problem is solved now.
Now getting ValueError: Metadata filters not implemented for Faiss yet. The only remaining issue is with the FAISS indexer; could you please help with that?
Seems like you are trying metadata filtering, which is not supported by this index.
Ah ok, got it. I'm just looking for an open-source indexer for metadata indexing. Do you have any suggestions?
You can try the LlamaIndex VectorStoreIndex; you can do metadata filtering with that.

https://docs.llamaindex.ai/en/stable/module_guides/indexing/metadata_extraction.html
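
A minimal sketch of metadata filtering with the default VectorStoreIndex; the documents and the "wiki_title" key here are illustrative, and service_context is assumed to be the Ollama/local one from above:

from llama_index import Document, VectorStoreIndex
from llama_index.vector_stores import ExactMatchFilter, MetadataFilters

docs = [
    Document(text="An article about Berlin.", metadata={"wiki_title": "Berlin"}),
    Document(text="An article about Tokyo.", metadata={"wiki_title": "Tokyo"}),
]
index = VectorStoreIndex.from_documents(docs, service_context=service_context)

# Restrict retrieval to nodes whose wiki_title is exactly "Berlin".
filters = MetadataFilters(filters=[ExactMatchFilter(key="wiki_title", value="Berlin")])
retriever = index.as_retriever(filters=filters)
print(retriever.retrieve("What does the article say about the city?"))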
North remembers 🥶
Lol 😆
Hi, using the Ollama LLM:

llm = Ollama(model="mistral", request_timeout=30.0)

service_context = ServiceContext.from_defaults(
    llm=llm, callback_manager=callback_manager, chunk_size=256,
    embed_model="local")


retriever = VectorIndexAutoRetriever(
    index,
    vector_store_info=vector_store_info,
    service_context=service_context,
    max_top_k=10000,
)

query_engine = RetrieverQueryEngine.from_args(retriever, llm=llm)

The problem is that the LLM resolver is still looking for OpenAI keys. When I checked the resolver function, it checks for LangChain and OpenAI and not directly for Ollama.


File ~/anaconda3/lib/python3.11/site-packages/llama_index/llms/utils.py:31 in resolve_llm
raise ValueError(

ValueError:
**
Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

To disable the LLM entirely, set llm=None.
**

Could you please help check whether the issue is from my code? Are there documents available for using Ollama? Thanks.
Try setting the service_context globally:
from llama_index import set_global_service_context


llm = Ollama(model="mistral", request_timeout=30.0)

service_context = ServiceContext.from_defaults(
    llm=llm, callback_manager=callback_manager, chunk_size=256,
    embed_model="local")
set_global_service_context(service_context)
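
With the global set, components created without an explicit llm/service_context should resolve to the Ollama LLM instead of falling back to OpenAI. A usage sketch, reusing the index and vector_store_info from the earlier snippet:

from llama_index.retrievers import VectorIndexAutoRetriever
from llama_index.query_engine import RetrieverQueryEngine

# No llm/service_context passed here; both resolve through the global
# service context set above.
retriever = VectorIndexAutoRetriever(
    index,
    vector_store_info=vector_store_info,
    max_top_k=10000,
)
query_engine = RetrieverQueryEngine.from_args(retriever)
print(query_engine.query("What do the indexed articles cover?"))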
Perfect, works. Thank you.
One more question, please.

dataset_path ="data/deeplake"

vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=True)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex(
    [], storage_context=storage_context, service_context=service_context
)


# add documents to index

for wiki_title in wiki_titles:
    index.insert(docs_dict[wiki_title])

Is it possible to index one time, save it, and load it whenever needed? Is there any documentation available, please?
I think once you ingest the docs, you don't have to ingest them again in the case of a third-party vector store like the DeepLake vector store.
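A minimal sketch of reconnecting to an already-ingested DeepLake dataset on a later run, assuming the same dataset_path as above; overwrite=False is the key difference, so the existing data is reused rather than wiped:

from llama_index import VectorStoreIndex
from llama_index.vector_stores import DeepLakeVectorStore

# Re-open the persisted dataset instead of re-creating and re-ingesting it.
vector_store = DeepLakeVectorStore(dataset_path="data/deeplake", overwrite=False)
index = VectorStoreIndex.from_vector_store(
    vector_store, service_context=service_context
)
query_engine = index.as_query_engine()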
The program would be closed several times, and each time, if the number of documents is in the millions, it will take time to ingest. Is there any possible way to save the ingested index and load it whenever necessary?