File ".../python3.10/site-packages/llama_index/core/callbacks/token_counting.py", line 91, in get_llm_token_counts raise ValueError( ValueError: Invalid payload! Need prompt and completion or messages and response.
```
pip install -U llama-index-core llama-index-llms-openai llama-index-llms-azure-openai
```
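The failure path can be reproduced with a setup along these lines (a sketch: the deployment name, endpoint, key, and the filtered prompt are placeholders, not values from this report):

```python
import tiktoken
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler
from llama_index.llms.azure_openai import AzureOpenAI

# Wire a token counter into the LLM's callback manager.
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-4").encode
)

llm = AzureOpenAI(
    engine="my-deployment",  # placeholder deployment name
    azure_endpoint="https://my-resource.openai.azure.com/",  # placeholder
    api_key="...",  # placeholder
    api_version="2024-02-01",
    callback_manager=CallbackManager([token_counter]),
)

# A prompt rejected by Azure's content filter raises BadRequestError inside
# the call; the token-counting callback then fails with the ValueError above.
llm.complete("<prompt that triggers the content filter>")
```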
When this happens, the payload handed to `get_llm_token_counts` carries only the exception:

```
{<EventPayload.EXCEPTION: 'exception'>: BadRequestError('Error code: 400 - {\'error\': ...}
```
It contains neither `EventPayload.PROMPT` nor `EventPayload.MESSAGES`, so in the else statement the `ValueError` is raised. The original exception was Azure OpenAI's content filter rejecting the prompt:

```
File ".../python3.10/site-packages/openai/_base_client.py", line 1040, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': True, 'severity': 'high'}, 'jailbreak': {'filtered': False, 'detected': False}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'medium'}}}}}
```
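For reference, the branching the traceback points at looks roughly like this (a paraphrase of the logic implied by the error message, not the library's exact source):

```python
from llama_index.core.callbacks.schema import EventPayload

def get_llm_token_counts(token_counter, payload, event_id=""):
    if EventPayload.PROMPT in payload:
        ...  # count tokens for the string prompt/completion pair
    elif EventPayload.MESSAGES in payload:
        ...  # count tokens for the chat messages/response pair
    else:
        # An exception-only payload falls through to here.
        raise ValueError(
            "Invalid payload! Need prompt and completion or messages and response."
        )
```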
Instead, `get_llm_token_counts` should check for `EventPayload.EXCEPTION` and return 0 for the token counters (input, completion, etc.). (This seems to have been the default fallback result before the merge of the PR.)
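A minimal sketch of that fallback, assuming `TokenCountingEvent` (defined in the same module) accepts the fields below; the exact constructor is an assumption, not the library's confirmed API:

```python
from llama_index.core.callbacks.schema import EventPayload
from llama_index.core.callbacks.token_counting import TokenCountingEvent

def get_llm_token_counts(token_counter, payload, event_id=""):
    # Assumed guard: a failed LLM call has nothing to count, so return a
    # zeroed event instead of raising and masking the original exception.
    if EventPayload.EXCEPTION in payload:
        return TokenCountingEvent(
            event_id=event_id,
            prompt="",
            completion="",
            prompt_token_count=0,
            completion_token_count=0,
        )
    ...  # existing prompt/completion and messages/response branches
```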