Updated 2 months ago

Custom Embeddings

Is it possible to make a wrapper around an LLM and use my own API for embedding and prediction?
You can use a local embedding model:

Python
from llama_index import ServiceContext

# "local" runs an embedding model on your own machine instead of calling OpenAI
service_context = ServiceContext.from_defaults(embed_model="local")


or implement your own custom embedding class:

https://gpt-index.readthedocs.io/en/stable/examples/embeddings/custom_embeddings.html
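The guide linked above has you subclass llama_index's `BaseEmbedding` and implement a few embedding hooks. Here is a minimal self-contained sketch of that shape; the `MyApiEmbedding` class name and the `call_embedding_api` stub are hypothetical stand-ins for your own API client, and in real code you would inherit from llama_index's `BaseEmbedding` rather than a plain class:

```python
from typing import List


def call_embedding_api(text: str) -> List[float]:
    # Hypothetical stand-in for a request to your own embedding endpoint.
    # Replace with a real HTTP call (e.g. requests.post) to your service.
    return [float(ord(c) % 7) for c in text[:8]]


class MyApiEmbedding:
    """Sketch of the interface llama_index expects from a custom embedding.

    In practice, subclass llama_index.embeddings.base.BaseEmbedding and
    implement these same hooks (plus their async variants).
    """

    def _get_query_embedding(self, query: str) -> List[float]:
        # Embedding used for the user's query at retrieval time.
        return call_embedding_api(query)

    def _get_text_embedding(self, text: str) -> List[float]:
        # Embedding used for each document chunk at index time.
        return call_embedding_api(text)

    def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        # Optional batch hook; a real client would send one batched request.
        return [call_embedding_api(t) for t in texts]


embed = MyApiEmbedding()
vector = embed._get_text_embedding("hello world")
```

Once the real subclass is in place, you can pass an instance of it as `embed_model` to `ServiceContext.from_defaults` just like the `"local"` string above.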