A community member asks whether a self-hosted custom LLM (Large Language Model) or embedding API can be used as the underlying model in the LlamaIndex library. Another community member confirms that this is possible and links to the relevant LlamaIndex documentation on using custom LLM and embedding models. The original poster then expresses gratitude for the information.
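For reference, below is a minimal sketch of what the documented approach looks like: subclassing LlamaIndex's `CustomLLM` to wrap a self-hosted completion endpoint. The import paths assume llama-index >= 0.10 (`llama_index.core`); the endpoint URL, the JSON request/response shape, and the `SelfHostedLLM` name are hypothetical placeholders for whatever API the community member has deployed.

```python
from typing import Any

import requests
from llama_index.core.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback


class SelfHostedLLM(CustomLLM):
    """Wraps a self-hosted completion API as a LlamaIndex LLM."""

    api_url: str = "http://localhost:8000/generate"  # hypothetical endpoint
    context_window: int = 4096
    num_output: int = 256

    @property
    def metadata(self) -> LLMMetadata:
        # Tells LlamaIndex how much context the model supports.
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name="self-hosted-model",
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Assumes the API accepts {"prompt": ...} and returns {"text": ...}.
        resp = requests.post(self.api_url, json={"prompt": prompt}, timeout=60)
        resp.raise_for_status()
        return CompletionResponse(text=resp.json()["text"])

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # Simple fallback: yield the full completion as a single chunk.
        yield self.complete(prompt, **kwargs)
```

A custom embedding API can be plugged in the same way by subclassing `BaseEmbedding` and implementing its query/text embedding hooks. Again, the endpoint and payload shape here are assumptions, not part of any real deployment:

```python
from typing import List

import requests
from llama_index.core.embeddings import BaseEmbedding


class SelfHostedEmbedding(BaseEmbedding):
    """Wraps a self-hosted embedding API as a LlamaIndex embedding model."""

    api_url: str = "http://localhost:8000/embed"  # hypothetical endpoint

    def _get_text_embedding(self, text: str) -> List[float]:
        # Assumes the API accepts {"text": ...} and returns {"embedding": [...]}.
        resp = requests.post(self.api_url, json={"text": text}, timeout=60)
        resp.raise_for_status()
        return resp.json()["embedding"]

    def _get_query_embedding(self, query: str) -> List[float]:
        return self._get_text_embedding(query)

    async def _aget_query_embedding(self, query: str) -> List[float]:
        return self._get_query_embedding(query)
```

Once defined, both can be registered globally via `Settings` (the llama-index 0.10+ pattern), e.g. `Settings.llm = SelfHostedLLM()` and `Settings.embed_model = SelfHostedEmbedding()`, after which indexes and query engines use the self-hosted models by default.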