llamaindex-embedding-lora/finetune_embed...

if we are using an LLM for embeddings, do we just pass the LLM into the huggingface_…?
https://github.com/marib00/llamaindex-embedding-lora/blob/main/finetune_embedding_lora.ipynb
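(For context, the stock LlamaIndex embedding-finetune flow looks roughly like this; a minimal sketch assuming the `llama_index.finetuning` API, with a placeholder dataset path. This isn't necessarily what the linked notebook does, and whether a decoder LLM checkpoint can just be dropped into `model_id` is exactly the open question here.)

```python
from llama_index.finetuning import (
    EmbeddingQAFinetuneDataset,
    SentenceTransformersFinetuneEngine,
)

# "train_dataset.json" is a placeholder path to a generated QA finetune dataset
train_dataset = EmbeddingQAFinetuneDataset.from_json("train_dataset.json")

engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="BAAI/bge-small-en-v1.5",  # could you swap in an LLM checkpoint here?
    model_output_path="finetuned_model",
)
engine.finetune()
embed_model = engine.get_finetuned_model()
```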
(why are you using an LLM for embeddings? πŸ˜… )

It might work? I'm really not sure actually, I think it depends on what sentence-transformers supports
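(sentence-transformers can wrap more or less any Hugging Face checkpoint in a Transformer + Pooling stack, so in principle a decoder model loads too; a minimal sketch, with "gpt2" as a stand-in checkpoint:)

```python
from sentence_transformers import SentenceTransformer, models

# wrap an arbitrary HF checkpoint ("gpt2" is just a stand-in) with mean pooling
word = models.Transformer("gpt2", max_seq_length=256)
# decoder tokenizers often lack a pad token; reuse EOS so batched encoding works
word.tokenizer.pad_token = word.tokenizer.eos_token
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word, pool])

emb = model.encode(["does sentence-transformers support this model?"])
print(emb.shape)  # (1, hidden_dim)
```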
I'm just experimenting lol
and too lazy to do retriever agent memory, and I also want to add memory to the embedding model for science lol..
Interesting idea!
this could be bro science: if you ask follow-up questions like "explain more", the retriever model isn't gonna take the previous question into account, right
I would think so, if I understand LLM embeddings properly πŸ‘€
so what if it's an LLM
LLMs are decoders, other embed models are encoders, so embeddings with LLMs are a special case
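(to make that concrete: with an encoder you'd typically mean-pool or take the [CLS] token, but with a decoder you usually take the hidden state of the last real token, since causal attention means only that position has seen the whole input. A hand-rolled sketch with plain transformers, "gpt2" again as a stand-in:)

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token                    # decoder tokenizers often lack a pad token
model = AutoModel.from_pretrained("gpt2").eval()

texts = ["explain more", "what does LoRA change in the embedding model?"]
batch = tok(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state    # (batch, seq, dim)

# last-token pooling: index each sequence's final non-padding position
last = batch["attention_mask"].sum(dim=1) - 1
emb = hidden[torch.arange(hidden.size(0)), last] # (batch, dim)
```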
oh yeah i know lol.. i'm referring to LLMs that have been modified to do both... I think they are doing both lol https://huggingface.co/GritLM/GritLM-7B
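(GritLM is trained for both modes with one set of weights; per its model card, the gritlm package exposes roughly this interface. A sketch from memory of the README, so treat the exact names as approximate:)

```python
from gritlm import GritLM

model = GritLM("GritLM/GritLM-7B", torch_dtype="auto")

# embedding mode: encode with an instruction-formatted prefix
def gritlm_instruction(instruction):
    return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"

doc_reps = model.encode(
    ["GritLM handles generation and embedding with the same weights"],
    instruction=gritlm_instruction(""),
)
```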