geoHeil · last year

Can we use llama_cpp embeddings as well? I.e., instead of HuggingFaceEmbedding, use
https://github.com/abetlen/llama-cpp-python/blob/main/examples/high_level_api/high_level_api_embedding.py
(`llm.create_embedding("Hello world!")`)? Does it make sense to standardize on one serving layer rather than mixing them?
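For context, a minimal sketch of the call the question refers to. Assumptions: llama-cpp-python is installed and a local GGUF model file is available; since loading a model isn't possible here, the real calls are commented out and the response shape is illustrated with dummy values.

```python
# Minimal sketch, assuming llama-cpp-python and a local GGUF model.
# The model must be loaded with embedding=True for create_embedding to work:
#
#   from llama_cpp import Llama
#   llm = Llama(model_path="models/your-model.gguf", embedding=True)
#   result = llm.create_embedding("Hello world!")
#
# create_embedding returns an OpenAI-style response dict; dummy values below
# show where the vector actually lives in that structure:
result = {
    "object": "list",
    "data": [{"object": "embedding", "index": 0,
              "embedding": [0.01, -0.02, 0.03]}],
}
vector = result["data"][0]["embedding"]
print(vector)  # → [0.01, -0.02, 0.03]
```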
3 comments
Logan M · last year

Usually, for the best embeddings, you want a model actually trained for embeddings/retrieval (like bge-base-en-v1.5).
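The reason a dedicated embedding model matters: retrieval ranks documents by vector similarity (typically cosine), and embedding-trained models are optimized so semantically similar texts land near each other. A self-contained sketch with dummy vectors standing in for real embeddings; the commented lines show (as an assumption about the LlamaIndex API) how bge-base-en-v1.5 would be loaded:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Assumption: llama-index with its HuggingFace embeddings package installed.
#   from llama_index.embeddings.huggingface import HuggingFaceEmbedding
#   embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")
#   q = embed_model.get_text_embedding("my query")

# Dummy vectors standing in for real embeddings:
q  = [1.0, 0.0, 0.0]
d1 = [0.9, 0.1, 0.0]   # a "close" document
d2 = [0.0, 0.0, 1.0]   # a "far" document
print(cosine(q, d1) > cosine(q, d2))  # → True
```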
Logan M · last year

But otherwise, we would have to add an embeddings integration for llama.cpp.
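Such an integration would essentially be a thin adapter exposing llama_cpp's `create_embedding` behind the method retrieval code expects. A hypothetical sketch, not an existing LlamaIndex class; a stub stands in for `llama_cpp.Llama` so it runs without a model file:

```python
class LlamaCppEmbedding:
    """Hypothetical adapter (illustrative only): wraps a llama_cpp Llama
    handle loaded with embedding=True behind a single embedding method."""

    def __init__(self, llm):
        self._llm = llm  # anything exposing create_embedding(text) -> dict

    def get_text_embedding(self, text):
        # create_embedding returns an OpenAI-style dict; unwrap the vector.
        out = self._llm.create_embedding(text)
        return out["data"][0]["embedding"]

class _StubLlama:
    # Stand-in for llama_cpp.Llama so the sketch runs without a model file.
    def create_embedding(self, text):
        return {"data": [{"embedding": [float(len(text)), 0.0]}]}

emb = LlamaCppEmbedding(_StubLlama())
print(emb.get_text_embedding("Hello world!"))  # → [12.0, 0.0]
```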
geoHeil · last year

Understood. But for serving, would you then suggest the native HuggingFaceEmbedding or
https://github.com/huggingface/optimum?