Hey @Logan M, is there any way we can use the SageMaker model endpoints for building the LLM and embedding in llama_index?
We don't have a SageMaker integration -- looks like LangChain does, though. You could use that over in llama-index:

Plain Text
from llama_index import ServiceContext
from llama_index.llms import LangChainLLM
from llama_index.embeddings import LangchainEmbedding

# ensure LLM inputs are formatted for your model
# this example is for llama2
def completion_to_prompt(completion: str) -> str:
  return f"[INST] {completion} [/INST] "

llm = LangChainLLM(llm=<lc_llm>, completion_to_prompt=completion_to_prompt)
embed_model = LangchainEmbedding(<lc_embedding>)

service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
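
If you want to wire this up to SageMaker specifically, LangChain's SagemakerEndpoint / SagemakerEndpointEmbeddings classes are one way to build the <lc_llm> and <lc_embedding> objects. Rough sketch below -- the endpoint names, region, and JSON payload shapes are assumptions and depend on how your model is deployed:

Plain Text
import json

from langchain.llms.sagemaker_endpoint import SagemakerEndpoint, LLMContentHandler
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

# content handlers translate between your code and the endpoint's JSON contract;
# the request/response keys below follow the HF text-generation containers and
# will likely differ for other model servers
class Llama2Handler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        return json.loads(output.read().decode("utf-8"))[0]["generated_text"]

class EmbeddingHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: list, model_kwargs: dict) -> bytes:
        return json.dumps({"inputs": inputs, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> list:
        return json.loads(output.read().decode("utf-8"))["vectors"]

lc_llm = SagemakerEndpoint(
    endpoint_name="my-llama2-endpoint",  # hypothetical endpoint name
    region_name="us-east-1",             # hypothetical region
    model_kwargs={"temperature": 0.7, "max_new_tokens": 256},
    content_handler=Llama2Handler(),
)

lc_embedding = SagemakerEndpointEmbeddings(
    endpoint_name="my-embedding-endpoint",  # hypothetical endpoint name
    region_name="us-east-1",
    content_handler=EmbeddingHandler(),
)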


Then use the service context or LLM wherever you need it.
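
For example (the "./data" directory and the query string are just placeholders), you can pass it when building and querying an index:

Plain Text
from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

query_engine = index.as_query_engine()
print(query_engine.query("What do these documents cover?"))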

If you are up to it, contributing SageMaker LLMs/embeddings to llama-index would be cool too 🙂