Updated 2 years ago

Trying to use the sub question query engine

At a glance
Trying to use the sub question query engine: I can see that internally LlamaIndex is still using davinci. How can I change this to gpt-3.5-turbo?

Plain Text
\nGiven the new context, refine the original answer to better answer the question. If the context isn\'t useful, return the original answer.", "stream": false, "model": "text-davinci-003", "temperature": 0.0, "max_tokens": 1740}' message='Post details'
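For reference, the "text-davinci-003" in that log is the legacy default completion LLM. In the LlamaIndex versions that still used ServiceContext, switching to gpt-3.5-turbo is typically done by building a service context around an OpenAI chat LLM. The sketch below assumes a legacy (pre-0.10) install; exact import paths may differ slightly by version.

from llama_index import ServiceContext
from llama_index.llms import OpenAI

# Build a service context that uses gpt-3.5-turbo instead of the
# default text-davinci-003 completion model.
service_context = ServiceContext.from_defaults(
    llm=OpenAI(model="gpt-3.5-turbo", temperature=0.0)
)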
15 comments
Here is my full code @Logan M @WhiteFang_Jr
You never passed the service context into the vector index 👀
from llama_index.query_engine import SubQuestionQueryEngine

sub_query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools,
    service_context=service_context,
    use_async=True,
)
I passed it into the sub query engine
Can you help me correct it?
This code is still working btw, it's just using davinci-003 internally
index = VectorStoreIndex.from_vector_store(vector_store=vector_store, storage_context=storage_context, service_context=service_context)
It was missing the service context on that line
Eagle eyes 😅
So the service context needs to go into BOTH the query engine and the vector index?
Yes, or you can set a global service context so that you don't have to pass it in anywhere.
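For anyone landing here later, the global approach mentioned above looks roughly like this in legacy LlamaIndex (assuming a version that still ships ServiceContext and set_global_service_context): set it once at startup, and every index and query engine created afterwards will pick it up unless you pass an explicit one.

from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import OpenAI

# Register the gpt-3.5-turbo service context globally so it no longer
# has to be passed to each index or query engine by hand.
service_context = ServiceContext.from_defaults(
    llm=OpenAI(model="gpt-3.5-turbo", temperature=0.0)
)
set_global_service_context(service_context)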