
Yes, that one, but I want to understand something: what's the role of ServiceContext in a LlamaIndex RAG pipeline? When the tutorial sets GPT-4 as service_context.predictor, will we be using the custom embedding or the OpenAI embedding when retrieving the documents during query decomposition?
It can be used to configure resources like the embedding model, the LLM, the node parser, etc.
By default it will use OpenAI embeddings.
You need to define a different embedding model if you don't want to use OpenAI.
The default LLM is also OpenAI (gpt-3.5-turbo); by setting the service context's LLM to GPT-4, you'll use that instead.
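To make this concrete: during query decomposition, the LLM on the service context (here GPT-4) generates the sub-questions, while retrieval for each sub-question uses whatever embedding model the service context holds, so that's OpenAI embeddings unless you override them. Below is a minimal sketch of configuring both, assuming a pre-0.10 llama_index release where ServiceContext still exists (newer versions replaced it with a global Settings object); the HuggingFace model name and the "data" directory are illustrative placeholders.
```python
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index.llms import OpenAI

# LLM used for query decomposition (generating sub-questions) and answer synthesis
llm = OpenAI(model="gpt-4")

# Custom embedding model; omit this to fall back to the OpenAI default
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)

# An index built with this service context embeds its nodes using embed_model
documents = SimpleDirectoryReader("data").load_data()  # "data" is a placeholder path
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

# At query time, each (sub-)question is embedded with embed_model for retrieval;
# the LLM is only used to generate sub-questions and synthesize the final answer
query_engine = index.as_query_engine()
response = query_engine.query("What does the service context control?")
print(response)
```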