Updated 4 months ago

Local

At a glance

The community member is having trouble setting up LlamaIndex entirely locally, as the examples in the documentation still require an OpenAI API key. A comment suggests that, to avoid the OpenAI default, the community member needs to pass both the llm and embed_model. The comment also provides a link to a Colab notebook that may help the community member set up LlamaIndex locally.

Useful resources
Would anyone be able to help me set up LlamaIndex entirely locally? I tried to follow the examples in the docs, but it still asks me for an OpenAI API key at the example ServiceContext line.
1 comment
You'll need to pass both the llm and embed_model in order to avoid the OpenAI defaults.

This will help you set up LlamaIndex locally: https://colab.research.google.com/drive/16Ygf2IyGNkb725ZqtRmFQjwWBuzFX_kl?usp=sharing