Plain Text
from llama_index.llama_pack import download_llama_pack

# download and install dependencies; comment this out if already downloaded
OllamaQueryEnginePack = download_llama_pack("OllamaQueryEnginePack", "./ollama_pack")


Anyone know how to re-initialize the OllamaQueryEnginePack after editing base.py?
from ollama_pack import OllamaQueryEnginePack
should work, I think, since the ollama_pack directory should have an __init__.py file in it
if not, it might be from ollama_pack.base import OllamaQueryEnginePack ?
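Something like this should pick up your edits to base.py without restarting the kernel (a rough sketch, assuming the pack was downloaded to ./ollama_pack as above; importlib.reload just re-executes the already-imported module):

Plain Text
import importlib

import ollama_pack.base

# re-run base.py so the edits take effect in the current kernel
importlib.reload(ollama_pack.base)

# re-import the class from the freshly reloaded module
from ollama_pack.base import OllamaQueryEnginePack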
I'm running llama-index 0.9.21

Plain Text
---> 27 embed_model = OllamaEmbedding(model_name=self._model, base_url=self._base_url)


ValueError: "OllamaEmbedding" object has no field "_verbose"


Tried it on a fresh notebook container install...
Are we supposed to edit model_name=self._model to just model=self._model for embed_model, just like for the llm...?
It's model_name
And no idea where verbose is coming from, it's not in the latest code at least
Are you sure you have 0.9.21? Maybe restart your notebook or make a fresh venv?

This is the source code for v0.9.21 for OllamaEmbedding -- no mention of _verbose in the file. Very sus
https://github.com/run-llama/llama_index/blob/0c972242ab7de29f601de443d0747492e9cb9bc6/llama_index/embeddings/ollama_embedding.py#L9
I will try to uninstall the package and reinstall it.


Btw, is there another way to run llama to get an interactive chat-like feature? Jupyter with its cells seemed like a good fit.
If you create an agent or chat engine, you can use .chat_repl()
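For example, something roughly like this (assuming llama-index 0.9.x and a ./docs folder of documents; the Ollama LLM / service-context wiring is left out here):

Plain Text
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# build a simple index over local documents
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents)

# a chat engine (or an agent) exposes an interactive loop
chat_engine = index.as_chat_engine()
chat_engine.chat_repl()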
@Logan M ok, I recreated my Jupyter container fresh, with no prior pip installs.


Plain Text
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[16], line 2
      1 # You can use any llama-hub loader ooto get documents!
----> 2 ollama_pack = OllamaQueryEnginePack(model="phi", documents=documents)

File ~/work/SPEED/ollama_pack/base.py:27, in OllamaQueryEnginePack.__init__(self, model, base_url, documents)
     23 self._base_url = base_url
     25 llm = Ollama(model=self._model, base_url=self._base_url)
---> 27 embed_model = OllamaEmbedding(model_name=self._model, base_url=self._base_url)
     29 service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
     31 self.llm = llm

File ~/work/SPEED/ollama_pack/base.py:69, in OllamaEmbedding.__init__(self, model_name, base_url, verbose, **kwargs)
     57 def __init__(
     58     self,
     59     model_name: str,
   (...)
     62     **kwargs: Any,
     63 ) -> None:
     64     super().__init__(
     65         model_name=model_name,
     66         **kwargs,
     67     )
---> 69     self._verbose = verbose
     70     self._base_url = base_url

File /opt/conda/lib/python3.11/site-packages/pydantic/v1/main.py:357, in BaseModel.__setattr__(self, name, value)
    354     return object_setattr(self, name, value)
    356 if self.__config__.extra is not Extra.allow and name not in self.__fields__:
--> 357     raise ValueError(f'"{self.__class__.__name__}" object has no field "{name}"')
    358 elif not self.__config__.allow_mutation or self.__config__.frozen:
    359     raise TypeError(f'"{self.__class__.__name__}" is immutable and does not support item assignment')

ValueError: "OllamaEmbedding" object has no field "_verbose"



This is bizarre. Do you think it's because I am using !pip in the cell?

I don't see how that would be the issue...
So that is the freshly recreated container, no prior pip packages...
Somehow you still have an outdated installation lol

Plain Text
pip uninstall -y llama-index
pip install llama-index==0.9.21
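After restarting the kernel, it's worth sanity-checking which version the notebook actually sees, e.g.:

Plain Text
import llama_index

# should print 0.9.21 if the reinstall took effect in this kernel
print(llama_index.__version__)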
This is wild, because it's an official Jupyter Docker image; recreating it should be as fresh as it gets.
I'm not sure what is happening,
but also it's Xmas,

so merry Christmas.

I'll post my docker-compose yml file,

and we can pick this up when you guys are back to work.
I imagine using Jupyter with llama-index is an important use-case, so I'm happy to debug it and get it working.

Maybe someone in the community has it working.
Do you restart the runtime after installing that?

Tbh I would just run outside of docker, unless you absolutely need it πŸ˜…

With python, you really need to be using virtual environments to manage dependencies nicely; not 100% sure if you are or not. Or maybe you need to build a dockerfile with this image to properly install dependencies 🤷‍♂️
Plain Text
  jupyter-main:
    image: jupyter/datascience-notebook:latest
    ports:
      - 8888:8888
    volumes:
      - jupyter_volume:/home/jovyan/work
    container_name: jupyter-main
    environment:
      - JUPYTER_ENABLE_LAB=yes
      - JUPYTER_TOKEN=docker


my docker-compose
tbh I've been using Docker as my way to isolate packages between my projects, but I'll install conda. Please let me know the proven conda commands to set up an env that works fine with llama-index and I'll try it without Docker.
Yea conda works (I usually use poetry πŸ˜‰ )

Pretty sure it would just be

Plain Text
conda create --name llama-index python=3.11
conda activate llama-index
pip install llama-index ...
Should be fine
Then you just need to register the env with ipykernel and use it in your notebook
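Roughly (a command-line sketch; the kernel name here is just an example):

Plain Text
conda activate llama-index
pip install ipykernel
python -m ipykernel install --user --name llama-index --display-name "Python (llama-index)"

Then pick "Python (llama-index)" as the kernel in your notebook.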
True, I will use this break to learn poetry and pydantic.

Ok, I'll follow up with my results.
Anyhow, cheers mate, have a great Xmas holiday.
πŸ§‘β€πŸŽ„ πŸŽ„ 🎁 πŸ’
Merry Christmas! πŸŽ„ :dotsCATJAM:
hmmm
[Attachment: image.png]
Here is my llama-index RAG notebook.
What if you run a normal python script
Instead of the notebook
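Something like this as a standalone script, roughly following the hub instructions (the ./docs path and the query string are just placeholders):

Plain Text
# run_ollama_pack.py -- rough standalone version of the notebook cells
from llama_index import SimpleDirectoryReader
from ollama_pack.base import OllamaQueryEnginePack

documents = SimpleDirectoryReader("./docs").load_data()

ollama_pack = OllamaQueryEnginePack(model="phi", documents=documents)
response = ollama_pack.run("What is this document about?")
print(response)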
When you get a chance, please run it with your working environment.
Maybe it's the code... but I got it from the hub instructions.
It's not the hub
The error is coming from inside the llama-index package, but it's coming from a line of code that doesn't exist in the latest releases. Which means (somehow) your env is still borked
Try running in Colab lol. I will try when I get home from the gym.
This is mental, conda is supposed to be for this....

Ok, will try Colab, but after that I'm defeated. Will wait for you to test the notebook.

Just make sure to create a docs folder and put a PDF in it.
Ok bro. Do you need to run ollama run phi? But I don't think this error is related to that.

Anyhow, I'm all out of comp architectures, envs, and notebooks; I'm gonna leave it with you.

Feel free to do it when you're back to work. No rush.

Cheers
[Attachment: image.png]
πŸ€•
[Attachment: image.png]
LMAO I should really learn how to read
For whatever reason
the llama-pack defines its own OllamaEmbedding class at the bottom, instead of using the one from llama-index
and I never noticed because jupyter cuts off the traceback :PSadge:
Just update the ollama pack code to use the proper embeddings
Delete the class OllamaEmbedding from the pack

And then add to the top from llama_index.embeddings import OllamaEmbedding
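Roughly, the top of ollama_pack/base.py would then look like this (just a sketch; the rest of the pack stays the same, and the local class OllamaEmbedding at the bottom of the file gets deleted):

Plain Text
# ollama_pack/base.py (top of file, after the fix)
from llama_index.embeddings import OllamaEmbedding  # the library class, instead of the pack's local copy
from llama_index.llms import Ollama

# ...the existing line in OllamaQueryEnginePack.__init__ keeps working unchanged:
# embed_model = OllamaEmbedding(model_name=self._model, base_url=self._base_url)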
lmao, Jupyter and its truncating of exceptions... I dislike Jupyter a ton 🙂
merged the fix, published a new version of llama-hub πŸ‘
https://pypi.org/project/llama-index/#history

How long does it take for PyPI to update? I'll just wait for it to be pushed.
That was quirky for sure, but Jupyter with its Lab IDE and cell interaction works nicely for a chat bot interface.
Oh the error was in the llama pack

I updated llama-hub, not llama-index. If you download the latest pack it shooould be there
running `ollama run dolphin-phi`
[Attachment: image.png]
Thank you @Logan M for this Xmas gift.

I was losing my mind when you insisted the code didn't have verbose.

Glad we were able to fix it, and hopefully this will help other Jupyter users.
Do we have a chat / RAG channel?

I did my full research and this is the cleanest way to get it up and going.
yea sorry for not noticing that the llama-pack had this code earlier πŸ˜… Glad it works now
No worries, this is the best part of OSS: it's built in public and debugged in public.
If there is no RAG doc-chat focused channel, may I suggest we create one? It would be nice to have a dedicated place for it.
Given that it's a llamaindex discord, "rag doc chat" seems pretty well-suited to general or ai-discussion πŸ‘