LlamaIndex does not use the embedding model from Chroma; it always uses the one from the service context. By default that's OpenAI, yes, and it's not bad.
If you set `embed_model="local"` in the service context, it will use BAAI/bge-small-en running locally, which is also really good and fast in my experience (especially if you have CUDA installed).
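Here's a rough sketch of what that looks like, assuming the pre-0.10 `llama_index` API with `ServiceContext` and an in-memory Chroma client (the `./data` folder and collection name are just placeholders):

```python
import chromadb
from llama_index import (
    ServiceContext,
    StorageContext,
    VectorStoreIndex,
    SimpleDirectoryReader,
)
from llama_index.vector_stores import ChromaVectorStore

# Embeddings come from the service context, not from Chroma.
# "local" resolves to BAAI/bge-small-en run via HuggingFace on your machine.
service_context = ServiceContext.from_defaults(embed_model="local")

# Chroma is only used as the vector store here.
chroma_client = chromadb.Client()
collection = chroma_client.create_collection("quickstart")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    service_context=service_context,
)
```

Even with Chroma attached, the vectors stored in the collection are produced by whatever `embed_model` the service context points at.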