The community member is trying to install the "llama-index-embeddings-huggingface" package, but is running into version conflicts. They have tried installing a version less than 0.1.3, but encountered another issue. The community member is trying to follow the "3_rag_agent.py" example from the "python-agents-tutorial" repository, but wants to use non-OpenAI embeddings and run LlamaIndex locally.
The comments suggest trying a version less than 0.3.0 for the HuggingFace embeddings, or adjusting the version of llama-index-core. Another community member suggests leaving the version off and letting pip or poetry handle the resolution. However, the original community member says this still leads to version conflicts, with pip asking them to either pin the version or remove the constraints to resolve the conflict.
The community members suggest reviewing the original poster's list of requirements, trying a fresh virtual environment, and checking the Torch version. The original community member tries a fresh virtual environment and notices their llama-index version differs from the one suggested. They also share their poetry shell configuration file and observe that "pip freeze | grep torch" returns nothing.
The answer is that the original community member was able to resolve the issue by changing their .toml file to use python = "^3.11" instead of "3.13" under [tool.poetry.dependencies] and recreating the virtual environment.
When installing "pip install llama-index-embeddings-huggingface", I ran into version conflict issues. I tried installing a package version <0.1.3 as well and ran into another issue, not sure why. I'm just trying to run "https://github.com/run-llama/python-agents-tutorial/blob/main/3_rag_agent.py" with non-OpenAI embeddings, since I am running LlamaIndex locally. Can anyone guide me?
When I leave it to pip to handle the versions, it ends up with a version conflict and asks me to either specify or remove versions to resolve it. Just now I tried pip install "llama-index-embeddings-huggingface<0.3.0" and got another subprocess-exited-with-error message.
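One easy thing to check with a pin like `<0.3.0`: the specifier must be quoted in the shell (e.g. `pip install "llama-index-embeddings-huggingface<0.3.0"`), otherwise `<` is treated as input redirection. As a sketch of what that pin actually matches, the `packaging` library (which implements the same version logic pip uses, and normally ships alongside pip/setuptools) can evaluate the specifier directly:

```python
# Sketch: which candidate versions a "<0.3.0" pin allows, using the
# `packaging` library. Assumes `packaging` is installed, which it
# usually is in any environment that has pip.
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("<0.3.0")

for candidate in ["0.1.3", "0.2.5", "0.3.0", "0.3.1"]:
    status = "allowed" if spec.contains(candidate) else "excluded"
    print(candidate, status)
```

Note that `<0.3.0` excludes 0.3.0 itself; to allow a specific release exactly, `==` is the specifier to use.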
I can try to start a fresh venv, but how do I check my Torch version? I don't remember specifying it anywhere. I'll try a fresh venv and keep you posted tomorrow. Thank you.
I did try in a fresh venv and got this error on embeddings, and my llama-index version is different from yours. Please take a look at my screenshot: it says llama-index==0.11.23 whereas yours is 0.12.0. Does this make any difference? I am also attaching my poetry shell config file. Can we force certain versions of llama-index to be installed? Another observation: when I ran "pip freeze | grep torch" in my poetry shell, nothing was returned. Just wanted to share.
Thank you. All I did was change my .toml to have [tool.poetry.dependencies] python = "^3.11" instead of "3.13", then recreate the venv, and everything worked fine. Thank you, issue closed for me.
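The fix described above amounts to relaxing the Python constraint in pyproject.toml so Poetry's resolver has room to pick llama-index releases with compatible wheels. A minimal sketch of the relevant fragment (only the `python` line is from the thread; any other dependencies would sit alongside it):

```toml
[tool.poetry.dependencies]
# "^3.11" means >=3.11,<4.0 — unlike pinning "3.13", this lets the
# resolver choose versions of llama-index and torch that publish
# wheels for an available interpreter
python = "^3.11"
```

After editing, the environment needs to be recreated for the change to take effect, e.g. with `poetry env remove --all` followed by `poetry install`.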