Hi guys! We are trying to make a chatbot

Hi guys! We are trying to make a chatbot using OpenAI (GPT-4o) and llama-index for RAG. We already defined some function tools and created the OpenAIAgent, but every time we try to stream the agent's response to any input text, the following exception is raised:
"2024-06-25 16:35:14,002 - root - ERROR - models/chatbot. An error occurred while streaming the answer to Buenas
Error: list index out of range (chatbot.py:174) "

We cannot figure out why there is a list index out of range error. Maybe it is related to the handling of the chat history? Any help you can give us will be welcome! Thanks!
This is the function that streams the response, in case you need more info about it:
async def stream_chat(self, input_text):
    """
    The stream_chat function asynchronously streams chat responses and source URLs, handling exceptions when presented.
    """
    try:
        response = self.agent.stream_chat(input_text)
        sources = self.get_sources_url(response.source_nodes)
        stream_response = response.response_gen

        self.chat_history = self.agent.chat_history

        for content in stream_response:
            yield (content, None)

        sources = self.get_sources_url(response.source_nodes)
        yield (None, sources)

    except Exception as e:
        logging.error(f"models/chatbot. An error occurred while streaming the answer to {input_text} \n Error: {e}")
It would probably be more helpful to remove the try/except so that we could see the full error/traceback
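A minimal sketch of an alternative, in case the handler needs to stay in place for production: logging.exception (equivalent to logging.error with exc_info=True) records the full traceback, so the log shows the exact failing line instead of only str(e). The RuntimeError below is just a placeholder for the streaming code above.

import logging

try:
    raise RuntimeError("placeholder for the streaming code above")
except Exception:
    # logging.exception attaches the active traceback to the log record,
    # so the exact line that raised shows up in the logs, not just the message.
    logging.exception("models/chatbot. An error occurred while streaming the answer")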
Sure! Here is the error/traceback I can get from AWS. I don't really know much about what the origin of this error could be, but I'm trying my best to debug it on my own (of course, any help will always be well received!)
If you guys need any more information, let me know so I can help you help me 🤗
The weird thing is that this only happens when I deploy my app on AWS. If I run some tests and interact with the chatbot in a local environment, it works perfectly.
Hmmm, pretty weird 😅 That's a tricky one
I guess I would make sure your deployment has the latest versions of the packages?
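A quick way to compare the two environments is to print the installed versions in each and diff the output. This is just a minimal sketch; the package names listed are the likely suspects, not an exhaustive list.

from importlib.metadata import version, PackageNotFoundError

# Print the installed version of each relevant package; run this both
# locally and in the AWS deployment and compare the output.
for pkg in ("llama-index", "llama-index-core", "openai"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")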
I'll check it again. Anyways, do you know how I can debug that "messages" collection? I mean, how can I access that data so I can check whether the problem is with the data from the db instead? (openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid type for 'messages[3].content[0]': expected an object, but got a string instead.", 'type': 'invalid_request_error', 'param': 'messages[3].content[0]', 'code': 'invalid_type'}})
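A minimal debugging sketch, assuming the stored history is a list of llama-index ChatMessage objects (the helper name is hypothetical). Printing the type of each message's content should reveal the entry that reaches the API as a bare string where an object is expected, e.g. the messages[3].content[0] from the error above.

def inspect_chat_history(chat_history):
    """Print the role and content type of every stored message."""
    for i, msg in enumerate(chat_history):
        role = getattr(msg, "role", "?")
        content = getattr(msg, "content", msg)
        # The API error points at messages[3].content[0], so look for a
        # message whose content is not the shape the OpenAI client expects.
        print(f"messages[{i}]: role={role} content_type={type(content).__name__} content={content!r}")

Calling it with self.agent.chat_history, or with the raw messages loaded from the db before the agent is rebuilt, should show which entry differs between the local run and the AWS deployment.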