Yes, you'll need an OpenAI key to generate embeddings by default, but you can optionally run a local embedding model instead.
(It might still complain about openai because the LLM, the other model llama-index uses, also defaults to OpenAI. You can just set the key to a random string, or point the LLM at a local model / disable it.)
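Rough sketch of the local setup (assuming llama-index >= 0.10 with the llama-index-embeddings-huggingface package installed; the bge model name is just an example, swap in whatever you like):

```python
# a minimal sketch, not the only way to do it
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# use a local HuggingFace embedding model instead of OpenAI
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
# explicitly disable the default OpenAI LLM so nothing asks for a key
Settings.llm = None

# build an index over a local folder of docs (path is just an example)
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
```

With that you shouldn't need the random-string hack at all, since nothing ever tries to hit OpenAI.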