When trying to integrate LlamaIndex with Chainlit

When trying to integrate LlamaIndex with Chainlit, it gives me this error message:

File "app.py", line 152, in main
    for token in response.response_gen:
AttributeError: 'Response' object has no attribute 'response_gen'


def load_service_context(llm):
    chunk_size = 1024
    service_context = ServiceContext.from_defaults(
        chunk_size=chunk_size,
        llm=llm,
        callback_manager=CallbackManager([cl.LlamaIndexCallbackHandler()]),
    )
    return service_context

def load_sql_auto_vector_query_engine(sql_tool, vector_tool, service_context):
    query_engine = SQLAutoVectorQueryEngine(
        sql_tool, vector_tool, service_context=service_context
    )
    return query_engine


Any guidance and help will be appreciated. 😄
Both the SQL and vector query engines need to be created with streaming=True, but I'm not sure how you are creating them.
Yup, I created them with streaming=True. The code works when I remove the for loop and if statement and instead pass response_message.content = response.response.

Any idea?
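A minimal, library-free sketch of what's going on: a non-streaming response object carries only the full answer text, while a streaming one exposes a token generator, so looping over response_gen on the wrong type raises the AttributeError above. The Response and StreamingResponse classes below are illustrative stand-ins for LlamaIndex's response objects, not the real API; checking for response_gen before looping handles both shapes.

```python
class Response:
    """Stand-in for a non-streaming response: full text, no generator."""
    def __init__(self, text):
        self.response = text

class StreamingResponse:
    """Stand-in for a streaming response: exposes a token generator."""
    def __init__(self, tokens):
        self._tokens = tokens

    @property
    def response_gen(self):
        yield from self._tokens

def collect(response):
    # Stream tokens when available; fall back to the full answer text.
    if hasattr(response, "response_gen"):
        return "".join(response.response_gen)
    return response.response

print(collect(Response("hello")))                 # -> hello
print(collect(StreamingResponse(["he", "llo"])))  # -> hello
```

In real code the equivalent guard would let the same handler accept either response type, instead of assuming response_gen always exists.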


Also, another issue, @Logan M: my vector database holds Llama 2 files (documentation in PDF) and Postgres has a dummy employees table.

When I run the query "how many tables are there in the database and tell me about llama and its history", it returns: "The context does not provide information on the number of tables in a database. As for Llama, it is a collection of pre-trained and fine-tuned large language models (LLMs) ......"

But I want information on the tables as well as Llama 2. Is that possible? I've switched to gpt-4 for this task.
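For reference, this is the behavior SQLAutoVectorQueryEngine is intended to provide for a compound question: break it into sub-questions, route each part to the right backend (SQL vs. vector), and combine the partial answers. Here is a library-free sketch of that routing idea; the keyword router and the split on "and" are toy stand-ins for the LLM-driven selection the real engine performs, and all function names are illustrative.

```python
def route(sub_question):
    # Toy router: keyword match stands in for the LLM-based tool selector.
    if "table" in sub_question or "database" in sub_question:
        return "sql"
    return "vector"

def answer_compound(question, sql_answerer, vector_answerer):
    # Naive split on " and " stands in for LLM-driven sub-question generation.
    parts = [p.strip() for p in question.split(" and ")]
    answers = []
    for part in parts:
        backend = sql_answerer if route(part) == "sql" else vector_answerer
        answers.append(backend(part))
    # Combine the per-backend answers into one response.
    return " ".join(answers)

result = answer_compound(
    "how many tables are there in the database and tell me about llama",
    sql_answerer=lambda q: "There are 1 tables.",
    vector_answerer=lambda q: "Llama 2 is a collection of LLMs.",
)
print(result)
```

If the real engine answers only one half of the question, it usually means the question was routed to a single tool rather than decomposed, which is worth checking in the engine's verbose output.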
HELP: my chatbot is not giving generalized answers for general queries, e.g. "Tell me about Sony and consoles?"

Response: I'm sorry, but there is no information available about Sony in the database.