Dear @Logan M, context: I am an AI expert of "one week" lol, using Windows OS.
Question: does this feedback interest you? The learning curve for getting started with llama-index is good, but where I lost a lot of time was trying to configure things to use a local embedding model and LLM: lots of errors, dependency problems, and configuration/installation issues. What do you think, is there a way to improve this experience, e.g. reduce dependencies, use the same embedding model/LLM by default for both the index and the query engine, simplify the local setup, and make the concepts easier to understand so everything can run 100% offline (see the sketch below for what I mean)? Just wondering if this is on your radar or if I should change jobs 🙂
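To be concrete, this is roughly the kind of fully local setup I was hoping would work out of the box. It's only a sketch under my assumptions: it relies on the Ollama and HuggingFace integration packages (llama-index-llms-ollama, llama-index-embeddings-huggingface), and the model names and the "data" folder are just placeholders I picked, not something prescribed by the docs.

```python
# Hypothetical 100% offline setup (sketch, not an official recipe):
# assumes llama-index-llms-ollama and llama-index-embeddings-huggingface are installed,
# an Ollama server is running locally, and ./data contains some documents.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# One local LLM and one local embedding model, shared by the index and the query engine.
Settings.llm = Ollama(model="llama3")
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Build the index and query it, with no remote API calls.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What is this document about?"))
```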