Dear @Logan M. Context: I am an AI expert of "one week" lol, using Windows OS.
Question: would this feedback interest you? The learning curve for getting started with llama-index is good, but where I lost a lot of time was configuring things to use a local embedding model and LLM: a lot of errors, dependency problems, and configuration/installation issues. What do you think: is there a way to improve this experience, e.g. reduce dependencies, use the same embedding model/LLM for the index and the query engine by default, simplify the local setup, and simplify the concepts so it can run 100% offline? Just wondering if this is on your radar, or if I should change jobs 🙂
Thank you, I am aware of these docs. I succeeded using Ollama and the llama3 model, but success came only after a lot of effort compared to the other concepts and the otherwise easy-to-use code. (It is just feedback, thanks!)
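For reference, here is roughly the fully local setup that ended up working for me. This is a minimal sketch, not official guidance: it assumes the `llama-index-core`, `llama-index-llms-ollama`, and `llama-index-embeddings-huggingface` packages are installed, an Ollama server is running locally with the llama3 model pulled, and a `data/` folder with documents exists; the embedding model name is just the one I picked, any local HuggingFace embedding model should work.

```python
# Minimal fully-local LlamaIndex configuration (a sketch under the
# assumptions above, not a definitive setup):
#   pip install llama-index-core llama-index-llms-ollama llama-index-embeddings-huggingface
#   ollama pull llama3   (with the Ollama server running locally)
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Set the LLM and the embedding model once, globally, so the index and the
# query engine share the same local models (no API key needed).
Settings.llm = Ollama(model="llama3", request_timeout=120.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Build an index over local files and query it, entirely offline.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
response = index.as_query_engine().query("What do my documents say?")
print(response)
```

Something like this as a copy-pasteable "100% offline" quickstart in the docs would have saved me most of the effort.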