With recent versions of LlamaIndex, we've deprecated and removed ServiceContext.
You can now either 1) not specify the LLM or embedding model at all (the defaults are used), 2) set the LLM/embedding model globally via the Settings object, or 3) pass the LLM/embedding model directly into the relevant modules.
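Here's a minimal sketch of option 2 (assuming the llama-index-llms-openai and llama-index-embeddings-openai packages are installed):

```python
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

# set the default LLM and embedding model used by all modules
Settings.llm = OpenAI(model="gpt-4o")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
```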
So in your example, you can pass gpt-4o directly into the query engine, as in the sketch below.
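Something like this (a sketch, assuming a vector index built from local files in ./data -- swap in however you're building your index):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# override the LLM for just this query engine,
# without touching the global defaults
query_engine = index.as_query_engine(llm=OpenAI(model="gpt-4o"))
response = query_engine.query("What does the data say?")
```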
For more details, check out the LLM customization docs:
https://docs.llamaindex.ai/en/stable/module_guides/models/llms/usage_custom/#example-changing-the-underlying-llm