How might I use PaLM with a simple vector index?

Throw it in the service context and off you go:

Python
from llama_index import ServiceContext, set_global_service_context

llm = <llm>  # e.g. a PaLM LLM instance
embed_model = <embed_model>  # e.g. a PaLM embedding model

# Every index and query engine created afterwards picks these up by default
set_global_service_context(ServiceContext.from_defaults(llm=llm, embed_model=embed_model))
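
For PaLM specifically, a minimal sketch might look like the following. It assumes the legacy llama_index package, which ships a PaLM LLM and a GooglePaLMEmbedding; the model name and the API-key handling are assumptions, so check the docs for your version:

Python
import os

from llama_index import (
    ServiceContext,
    SimpleDirectoryReader,
    VectorStoreIndex,
    set_global_service_context,
)
from llama_index.embeddings import GooglePaLMEmbedding
from llama_index.llms import PaLM

# Assumes a Google PaLM API key is available in the environment
api_key = os.environ["GOOGLE_API_KEY"]

llm = PaLM(api_key=api_key)
embed_model = GooglePaLMEmbedding(
    model_name="models/embedding-gecko-001",  # assumed PaLM embedding model
    api_key=api_key,
)

set_global_service_context(
    ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
)

# Any vector index built from here on uses PaLM for both completions
# and embeddings ("data" is a hypothetical documents folder)
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What is this about?"))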