LlamaIndex always wants to use OpenAI, even though I've told it not to in my app. I'm assuming I'm requesting the models incorrectly. Can somebody look at my code and let me know what I'm doing wrong?
https://pastebin.com/9ddUR9mb

As of now, the only way I can get it to work is by modifying both llms/utils.py and embeddings/utils.py inside the installed llama_index package itself, which obviously isn't a real fix.