Hi - sorry for another one.
I am trying to use GPT-4 via LlamaIndex and have made the changes in my code, but it doesn't seem to be working. Any idea why?

See below my code and then the OpenAI usage data.

Plain Text
llm = OpenAI(model="gpt-4", temperature=0.1, max_tokens=256)

# editing prompt & building index
QA_PROMPT_TMPL = (
    "XXX."
    "---------------------\n"
    "{context_str}"
    "\n---------------------\n"
    "Given this information, please answer the question: {query_str}\n"
)
QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)

index = GPTVectorStoreIndex.from_documents(documents)
retriever = VectorIndexRetriever(
    index=index,
    similarity_top_k=5,
)
query_engine = RetrieverQueryEngine.from_args(
    retriever,
    response_mode="compact",
    text_qa_template=QA_PROMPT,
)
4 comments
You'll need to set up the service context

Usually this is the easiest way

Plain Text
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import OpenAI

llm = OpenAI(...)
service_context = ServiceContext.from_defaults(llm=llm)
set_global_service_context(service_context)
thanks - does that impact my prompt / retriever / query engine?
it will change the LLM used in the query engine (and also use the prompt you provided)

Since the embed_model is not changed, the retriever will still retrieve the same nodes
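Putting the two pieces together, the corrected flow looks roughly like this. This is a minimal sketch assuming the legacy (pre-v0.10) llama_index API; exact import paths vary between versions, and `documents` is assumed to have been loaded already (e.g. with a directory reader):

Plain Text
# Sketch only - import paths differ across legacy llama_index versions.
from llama_index import (
    GPTVectorStoreIndex,
    ServiceContext,
    set_global_service_context,
)
from llama_index.llms import OpenAI

# 1. Register GPT-4 in the global service context *before* building the
#    index, so the query engine picks it up automatically.
llm = OpenAI(model="gpt-4", temperature=0.1, max_tokens=256)
service_context = ServiceContext.from_defaults(llm=llm)
set_global_service_context(service_context)

# 2. Build the index, retriever, and query engine exactly as in the
#    original snippet. The retriever still uses the default embed_model,
#    so it retrieves the same nodes; only the answer-synthesis LLM changes.
index = GPTVectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine(
    similarity_top_k=5,
    response_mode="compact",
    text_qa_template=QA_PROMPT,  # the custom prompt defined earlier
)

Running a query through `query_engine` should now show `gpt-4` calls in the OpenAI usage data instead of the default model.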
thank you very much