Ok, thanks, I thought so too.
On another note: how can I stream the output to give a better user experience?
I have tried a few options from this discussion as well as the official docs, but none of them work in my case.
synthesizer = get_response_synthesizer(llm=llm, response_mode="compact", streaming=True)
synth = get_response_synthesizer(streaming=True)
query_engine = SQLTableRetrieverQueryEngine(
    sql_database,
    obj_index.as_retriever(streaming=True, similarity_top_k=3),
    llm=llm,
    # synthesize_response=False,
    response_synthesizer=synthesizer,
    # sql_only=True
)
response = query_engine.query("How many farmers do we have?")
for token in response.response_gen:
    print(token, end="", flush=True)
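For context, this is the interface my loop assumes: a streaming response object exposing a `response_gen` generator of tokens. A minimal pure-Python sketch of that pattern (the `FakeStreamingResponse` class is a hypothetical stand-in for illustration, not part of llama_index):

```python
# Hypothetical stand-in mimicking the interface the loop above expects:
# a streaming response exposes `response_gen`, a generator of text tokens.
class FakeStreamingResponse:
    def __init__(self, tokens):
        self._tokens = tokens

    @property
    def response_gen(self):
        # Yield tokens one at a time, as a streaming LLM response would.
        yield from self._tokens


response = FakeStreamingResponse(["We ", "have ", "42 ", "farmers."])
streamed = []
for token in response.response_gen:
    streamed.append(token)
    print(token, end="", flush=True)  # prints each token as it arrives
print()
full_answer = "".join(streamed)  # "We have 42 farmers."
```

My query engine instead returns a plain `Response` with the final text only, so there is no generator to iterate.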
I get this output:
AttributeError Traceback (most recent call last)
Cell In[24], line 1
----> 1 for token in response.response_gen:
2 print(token, end="", flush=True)
AttributeError: 'Response' object has no attribute 'response_gen'
I tried another option from the official docs:
response.print_response_stream()
Another error output:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[25], line 1
----> 1 response.print_response_stream()
AttributeError: 'Response' object has no attribute 'print_response_stream'
I am using llama_index version