
Dear @Logan M

Context: I am an AI expert of "one week" lol, using Windows OS.

Question: does this feedback interest you? The learning curve for getting started with llama-index is good, but where I lost a lot of time was trying to configure things to use a local embedding model and LLM. Lots of errors, dependency problems, and configuration/installation issues. What do you think: is there a way to improve this experience (reduce dependencies, use the same embedding model/LLM by default for the index and the engine), simplify the local setup, and simplify the concepts so it can run 100% offline?
Just wondering if this is on your radar, or if I should change careers 🙂
The easiest thing to do is to change the global defaults.
We have a guide for exactly this.
Ollama is by far the easiest way to run locally.
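For illustration, here is a minimal sketch of what "changing the global defaults" looks like. It assumes a recent llama-index install with the Ollama integration packages; the `./data` directory and the model names are placeholders, not part of the original thread:

```python
# Minimal sketch (not official docs): point LlamaIndex's global defaults at
# local models served by Ollama, so every index and query engine created
# afterwards uses the same LLM and embedding model, fully offline.
#
# Assumes: pip install llama-index llama-index-llms-ollama llama-index-embeddings-ollama
# and a local Ollama server with the models already pulled
# (`ollama pull llama3` and `ollama pull nomic-embed-text`).
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

# Global defaults: anything that needs an LLM or embeddings picks these up,
# so the index and the query engine stay consistent without extra config.
Settings.llm = Ollama(model="llama3", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# "./data" is a placeholder directory of documents to index.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

print(index.as_query_engine().query("Summarize these documents."))
```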
Thank you, I am aware of these docs. I succeeded using Ollama and the llama3 model, but success came after a lot of effort compared to the other concepts / easy-to-use code. (It is just feedback, thanks.)
Setting up local models will always take a bit more work compared to just using an API like OpenAI's -- I think the above is as easy as it gets.