404 Not Found for http://localhost:11434/api/chat

Hi, I am getting a 404 error ('404 Not Found') for the URL http://localhost:11434/api/chat. The link doesn't work in the browser either, but when I open http://localhost:11434/ the home page shows that Ollama is running.
11 comments
Hi,
Hmm πŸ€” by default Ollama serves API calls on that endpoint. Can you check whether Ollama provides a Swagger/OpenAPI spec to identify which endpoint it is serving the model call on?
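For reference, /api/chat only accepts POST requests, so opening it in a browser (which issues a GET) will typically return 404 even when the server is healthy. The API can also return 404 when the requested model hasn't been pulled yet. A minimal sketch of a valid call, assuming the default port and that the mistral model has already been pulled:
Plain Text
import requests

# POST is required; a browser GET to this URL returns 404
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "mistral",  # assumes `ollama pull mistral` has been run
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,  # return one JSON object instead of a stream
    },
)
print(resp.json()["message"]["content"])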
I tried with LangChain and I get the same 404 error.
I fixed it: I was using a venv and had to run "ollama pull mistral" and "ollama run mistral". But now I am getting httpx.ReadTimeout: timed out, because my CPU is slow. How can I give the query function a longer timeout, or disable the timeout entirely?
Plain Text
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama2", request_timeout=60.0)  # raise this value (in seconds) if requests time out on slow hardware
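As for disabling the timeout entirely: request_timeout is passed through to the underlying httpx client, and httpx treats a timeout of None as "no timeout", so request_timeout=None may work; whether the Ollama wrapper accepts None depends on your llama-index version, so a very large value is the safer option.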
How can I use Ollama on Google Colab? How do I install it there?
Plain Text
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.core.embeddings import resolve_embed_model
from llama_index.llms.ollama import Ollama


documents = SimpleDirectoryReader("data").load_data()

# bge embedding model
Settings.embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")

# ollama
Settings.llm = Ollama(model="mistral", request_timeout=30.0)

index = VectorStoreIndex.from_documents(
    documents, show_progress=True
)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
You'll have to check on the Ollama GitHub whether that can be done or not.
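For what it's worth, a common pattern is to install Ollama with its official Linux install script and run the server in the background from the notebook. A rough sketch, assuming a standard Linux Colab runtime (the install script URL is Ollama's official one; the rest is untested here):
Plain Text
# run in a Colab cell
!curl -fsSL https://ollama.com/install.sh | sh

import subprocess, time

# start the Ollama server in the background
server = subprocess.Popen(["ollama", "serve"])
time.sleep(5)  # give the server a moment to come up

# pull the model before querying it
!ollama pull mistral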
Can I use an LLM directly with Transformers, without Ollama?
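Yes, LlamaIndex has a HuggingFace integration that loads models locally through transformers, with no Ollama server involved. A minimal sketch, assuming the llama-index-llms-huggingface package is installed and the model fits in memory (the model name here is just an example):
Plain Text
from llama_index.core import Settings
from llama_index.llms.huggingface import HuggingFaceLLM

# loads the weights locally via transformers instead of calling Ollama
Settings.llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
    max_new_tokens=256,
)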