Yes, that one. But I want to understand something: what's the role of ServiceContext in a LlamaIndex RAG pipeline? When the tutorial sets GPT-4 as service_context.predictor, will we be using the custom embedding or the OpenAI embedding when retrieving documents during query decomposition?