A community member is having trouble setting up LlamaIndex entirely locally, since the examples in the documentation still require an OpenAI API key. A comment suggests that, to avoid falling back to OpenAI defaults, both the llm and the embed_model must be passed explicitly. The comment also links to a Colab notebook that may help with a fully local LlamaIndex setup.
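The suggestion above can be sketched as follows. This is a minimal example assuming the legacy `ServiceContext` API and a locally running Ollama server; the model names shown (`llama2`, `BAAI/bge-small-en-v1.5`) are illustrative choices, not requirements:

```python
# Sketch: configure LlamaIndex with a local LLM and a local embedding model,
# so no call ever falls back to the OpenAI defaults.
from llama_index import ServiceContext, VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms import Ollama  # assumes an Ollama server is running locally
from llama_index.embeddings import HuggingFaceEmbedding  # local HF embeddings

# Passing BOTH llm and embed_model is the key step: if either is omitted,
# LlamaIndex falls back to OpenAI and asks for an API key.
service_context = ServiceContext.from_defaults(
    llm=Ollama(model="llama2"),
    embed_model=HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),
)

# Build an index over local documents using the fully local context.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
```

Newer LlamaIndex releases deprecate `ServiceContext` in favor of setting `Settings.llm` and `Settings.embed_model` globally, but the principle is the same: override both defaults so nothing routes to OpenAI.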
Would anyone be able to help me set up LlamaIndex entirely locally? I tried to follow the examples in the docs, but it still asks me for an OpenAI API key from the example ServiceContext line