The community members are discussing the use of large language models (LLMs) for embeddings. The original post asks whether an LLM can be passed directly into the Hugging Face code path for fine-tuning embeddings. Commenters suggest this might work, but note that it depends on whether the sentence-transformers library can accept an arbitrary Hugging Face checkpoint as its backbone. Some community members are experimenting with the approach, while others caution that unverified claims ("bro science") could spread if follow-up questions go unanswered. There is no explicitly marked answer, but the discussion suggests that using LLMs for embeddings is a special case that requires careful consideration.
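For readers who want to try this themselves, the sketch below shows one way a Hugging Face checkpoint can be wrapped as a sentence-transformers backbone and then fine-tuned. This is a minimal illustration under assumptions, not a confirmed recipe from the thread: the model name is a placeholder, decoder-only tokenizers often need a padding token set by hand, and large LLMs will need substantial GPU memory.

```python
# Minimal sketch: wrapping a Hugging Face model as a sentence-transformers
# backbone. "gpt2" is a small placeholder; any checkpoint that loads with
# AutoModel should work in principle, though large decoder-only LLMs may
# need far more memory than a typical embedding model.
from sentence_transformers import SentenceTransformer, InputExample, losses, models
from torch.utils.data import DataLoader

# Load the Hugging Face checkpoint as the token-embedding backbone.
backbone = models.Transformer("gpt2", max_seq_length=256)  # placeholder model

# Decoder-only tokenizers often lack a padding token; reusing EOS is a
# common workaround so that batched encoding can pad sequences.
if backbone.tokenizer.pad_token is None:
    backbone.tokenizer.pad_token = backbone.tokenizer.eos_token

# Mean-pool token embeddings into a single fixed-size sentence vector.
pooling = models.Pooling(
    backbone.get_word_embedding_dimension(),
    pooling_mode="mean",
)

model = SentenceTransformer(modules=[backbone, pooling])

# Toy fine-tuning data: positive (query, passage) pairs. Real training
# would use a full dataset rather than this single illustrative pair.
train_examples = [
    InputExample(texts=["how do I bake bread?", "Steps for baking bread at home."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=1)

# In-batch negatives loss, a standard choice for embedding fine-tuning.
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)

# Sanity check: encode a couple of sentences after training.
embeddings = model.encode(["An example sentence.", "Another example."])
print(embeddings.shape)  # (2, hidden_dim)
```

Whether this works well for a given LLM is exactly the open question in the thread: the wrapping mechanics are straightforward, but embedding quality from a decoder-only model without contrastive pretraining is not guaranteed.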