drewskidang
8 months ago
If we are using an LLM for embeddings, do we just pass the LLM into the huggingface_
https://github.com/marib00/llamaindex-embedding-lora/blob/main/finetune_embedding_lora.ipynb
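A minimal sketch of what the question seems to be asking, assuming the truncated "huggingface_" refers to LlamaIndex's HuggingFaceEmbedding wrapper (import path varies by LlamaIndex version; the model id here is only a placeholder):

```python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Assumption: point the HuggingFace embedding wrapper at a decoder-style
# checkpoint by model id. Whether the pooled hidden states make useful
# embeddings depends on the model, as discussed below.
embed_model = HuggingFaceEmbedding(model_name="gpt2")
vector = embed_model.get_text_embedding("hello retrieval world")
print(len(vector))  # embedding dimensionality of the underlying model
```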
Logan M
8 months ago
(why are you using an LLM for embeddings?)
It might work? I'm really not sure actually, I think it depends on what sentence-transformers supports
drewskidang
8 months ago
I'm just experimenting lol
drewskidang
8 months ago
and too lazy to do retriever agent memory, and I also want to add memory to the embedding model for science lol..
Logan M
8 months ago
Interesting idea!
drewskidang
8 months ago
this could be bro science, but if you ask follow-up questions like "explain more", the retriever model isn't gonna take the previous question into account, right?
Logan M
8 months ago
I would think so, if I understand LLM embeddings properly
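For reference, the usual workaround for follow-ups like "explain more" is to condense the follow-up plus the chat history into a standalone query before it reaches the retriever, so the embedding model never has to carry memory itself. A minimal sketch with LlamaIndex's condense-question chat mode (assumes `docs` is already loaded; import paths vary by version):

```python
from llama_index.core import VectorStoreIndex

index = VectorStoreIndex.from_documents(docs)

# chat_mode="condense_question" rewrites each follow-up into a self-contained
# question using the conversation history, then retrieves with that rewrite.
chat_engine = index.as_chat_engine(chat_mode="condense_question")

chat_engine.chat("What does LoRA finetuning of the embedding model change?")
response = chat_engine.chat("explain more")  # condensed before retrieval
```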
drewskidang
8 months ago
so what if it's an LLM?
Logan M
8 months ago
LLMs are decoders, other embed models are encoders, so embeddings with LLMs are a special case
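A minimal sketch (not from the thread) of why the decoder case is special: a causal LM has no [CLS] token, so you pool the last hidden states yourself, e.g. last-token or mean pooling. The model id is only a placeholder:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder decoder checkpoint
model = AutoModel.from_pretrained("gpt2")

inputs = tokenizer("hello retrieval world", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)

last_token_embedding = hidden[:, -1, :]  # last-token pooling
mean_embedding = hidden.mean(dim=1)      # or mean pooling over the sequence
```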
drewskidang
8 months ago
oh yeah I know lol.. I'm referring to LLMs that have been modified to do both... I think they are doing both lol
https://huggingface.co/GritLM/GritLM-7B
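For context, GritLM is trained to do both generation and embedding from the same weights. A rough sketch, assuming the small `gritlm` wrapper package described on the model card (exact API details may differ between versions):

```python
from gritlm import GritLM

# Assumption: the GritLM wrapper loads the 7B checkpoint and exposes an
# encode() method for embeddings; the same weights also generate text.
model = GritLM("GritLM/GritLM-7B", torch_dtype="auto")

doc_vectors = model.encode(["LoRA adds low-rank adapters on top of frozen weights."])
query_vectors = model.encode(["what is LoRA finetuning?"])
```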