Find answers from the community

mr.niko.la
Offline, last seen last month
Joined September 25, 2024

mr.niko.la
·

Ollama

Is there a change to this loader?

Plain Text
ollama_pack = OllamaQueryEnginePack(model="stablelm-zephyr", documents=documents)

TypeError: __init__() takes 1 positional argument but 2 were given
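This TypeError typically means the pack's `__init__` now expects keyword-only arguments, so any positional argument is rejected. A minimal pure-Python sketch (the `Pack` class here is a hypothetical stand-in, not the real `OllamaQueryEnginePack`) that reproduces the message:

```python
# Hypothetical stand-in for a pack whose __init__ takes keyword-only arguments.
class Pack:
    def __init__(self, *, model, documents):  # '*' makes both params keyword-only
        self.model = model
        self.documents = documents

# Pack("stablelm-zephyr")  -> TypeError: __init__() takes 1 positional
# argument but 2 were given  (the extra "1" is self)
pack = Pack(model="stablelm-zephyr", documents=["invoice.pdf"])
```

If the installed pack version changed its signature, passing everything by keyword (as in the original snippet) should work; otherwise the downloaded `base.py` may be out of date.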
7 comments
Using TogetherAI

Plain Text
    embedding_model: str = "togethercomputer/m2-bert-80M-8k-retrieval",
    generative_model: str = "mistralai/Mixtral-8x7B-Instruct-v0.1",


For chatting with a folder full of invoices, PDFs, and datasheets.

This was from a February script.
Is there a better embedding_model and generative_model to use?
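For reference, a sketch (untested here) of wiring those two Together AI models into llama-index via the global `Settings` object. It assumes the `llama-index-llms-together` and `llama-index-embeddings-together` integration packages are installed and `TOGETHER_API_KEY` is set in the environment:

```python
# Assumption: llama-index Together integrations installed, TOGETHER_API_KEY set.
from llama_index.core import Settings
from llama_index.embeddings.together import TogetherEmbedding
from llama_index.llms.together import TogetherLLM

# The same models as the February script; any Together-hosted models can be
# swapped in here without touching the rest of the pipeline.
Settings.embed_model = TogetherEmbedding(
    model_name="togethercomputer/m2-bert-80M-8k-retrieval"
)
Settings.llm = TogetherLLM(model="mistralai/Mixtral-8x7B-Instruct-v0.1")
```

Which models are "better" depends on the documents; for mixed invoices/PDFs, a longer-context embedding model like the 8k-retrieval one above is a reasonable default.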
2 comments
mr.niko.la
·

LLM

Is it possible to use the together.ai API endpoint with llama-index?
25 comments
mr.niko.la
·

Plain Text
from llama_index.llama_pack import download_llama_pack

# download and install dependencies; comment out if already downloaded
OllamaQueryEnginePack = download_llama_pack("OllamaQueryEnginePack", "./ollama_pack")

Anyone know how to re-initialize the OllamaQueryEnginePack after editing base.py?
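One option (a sketch, not the pack's documented workflow): after editing the downloaded `base.py`, import the class from the local directory instead of calling `download_llama_pack` again, and use `importlib.reload` so the edits take effect in an already-running session. The `mypack` module below is a hypothetical stand-in for the downloaded pack:

```python
import importlib
import pathlib
import sys
import tempfile

# Hypothetical stand-in for a downloaded pack file (e.g. ./ollama_pack/base.py).
pack_dir = tempfile.mkdtemp()
module_file = pathlib.Path(pack_dir) / "mypack.py"
module_file.write_text("MODEL = 'stablelm-zephyr'\n")

sys.path.insert(0, pack_dir)
import mypack  # the first import caches this module

# Simulate editing the file on disk, then reload so Python re-executes it.
module_file.write_text("MODEL = 'dolphin-phi'\n")
importlib.reload(mypack)
```

With the real pack this would look like reloading `ollama_pack.base` and then doing `from ollama_pack.base import OllamaQueryEnginePack` again, rather than re-running `download_llama_pack`.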
64 comments
The llamapack code just needs to be updated.

Does anyone know how to download the pack and edit the code…

Plain Text
llm = Ollama(model=self._model, base_url=self._base_url)
17 comments
Very easy to prototype, but it can get complicated for production. Also curious to see how others are doing it, whether they are internal or external devs.
6 comments
mr.niko.la
·

Does this happen when you run out of memory?

I'm running dolphin-phi on a 5600G AMD APU using Ollama.
21 comments