
LlamaIndex always wants to use OpenAI, despite me specifying not to use it in my app. I'm assuming that I'm setting up the models incorrectly. Can somebody look at my code and let me know what I'm doing wrong? https://pastebin.com/9ddUR9mb As of now, the only way I can get it to work is by modifying both llms/utils.py and embeddings/utils.py within the llama_index module.
At the top of your code, you are either loading or creating a new index without specifying a service context in either case (both from_documents() and load_index_from_storage() need the service context as a kwarg).
You can probably just get away with a global service context here at the top:

Plain Text
from llama_index import set_global_service_context

# service_context is the ServiceContext you already built with your local LLM and embedding model
set_global_service_context(service_context)
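For reference, here is a rough sketch of the per-call alternative, passing the service context as a kwarg to both calls. The documents, persist directory, and local model objects below are placeholders for whatever your app already builds:

Plain Text
from llama_index import (
    ServiceContext,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

# Placeholder: build the service context from your local LLM and embedding model
service_context = ServiceContext.from_defaults(llm=my_local_llm, embed_model=my_local_embed_model)

# Creating a new index from documents
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# Or loading a previously persisted index
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)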
Oi, that makes perfect sense lol. Thanks for helping me gain a deeper understanding. Unfortunately, while making those changes did solve my original issue, it's made the app considerably less performant. I'm going to hit the docs and possibly build back up from scratch with the knowledge I've gained thus far.
Thanks again (as always) Logan!
Yea, running local models is both hard and usually slower 😅 But there are hosting options like vLLM or text-generation-inference to help speed things up. Not sure if ollama has any tricks for this as well.
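If you do go the hosted-server route, one possible setup (just a sketch, assuming a vLLM or text-generation-inference server exposing an OpenAI-compatible endpoint at http://localhost:8000/v1 and a hypothetical model name) is to point the OpenAILike LLM at it and keep a local embedding model:

Plain Text
from llama_index import ServiceContext
from llama_index.llms import OpenAILike

# Hypothetical local endpoint and model name -- change these to match your server
llm = OpenAILike(
    model="my-local-model",
    api_base="http://localhost:8000/v1",
    api_key="not-needed",
    is_chat_model=True,
)

service_context = ServiceContext.from_defaults(llm=llm, embed_model="local")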