Brandon
Joined September 25, 2024
hi πŸ‘‹. are NVIDIA Inference Microservices (NIM) available in LlamaIndex yet? I see the blog post, but I can't seem to find anything in the codebase or docs related to actual support
1 comment
hi πŸ‘‹. I'm having issues with claude 3 streaming. I seem to only be getting the first delta. this is with a variety of engines, but can be replicated with SimpleChatEngine. I'm on the latest version of llama-index (0.10.18).
4 comments
hi πŸ‘‹. we want to provide users of our product with more information when things go wrong in a chat. for example, if they trigger content filtering by a provider like Azure, we want to tell them that.

We're using streaming and a variety of agents and engines to support chat, including OpenAIAgent, ReActAgent, SimpleChatEngine, and CondensePlusContextChatEngine.

When content filters get triggered, I see the following warning; however, the response I get is empty. Is there something I can do to get the error message itself? I've even tried accumulating the streaming response the way I normally do on successful calls, but I'm not seeing anything.
Plain Text
2024-04-15T18:52:19.426862Z [warning  ] Encountered exception writing response to history: Error code: 400 - {'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': True, 'severity': 'high'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}}} [llama_index.core.chat_engine.types]
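Since the warning shows the exception is caught while writing to history, one possible approach is to catch it at the call site instead: wrap the `stream_chat` call in a `try`/`except openai.BadRequestError` (the openai 1.x client exposes the error payload above as the exception's `body` attribute) and translate the payload into a user-facing message. The helper below is a hypothetical sketch, not part of llama-index; it parses the same error shape shown in the log.

```python
# Hypothetical helper: turn the Azure OpenAI content-filter error payload
# (the dict shown in the warning above) into a user-facing message.
# With the openai 1.x client, this payload is available as the `body`
# attribute of a raised BadRequestError.
from typing import Optional


def describe_content_filter(error_body: dict) -> Optional[str]:
    err = (error_body or {}).get("error") or {}
    if err.get("code") != "content_filter":
        return None  # some other 400; let the caller handle it
    results = (err.get("innererror") or {}).get("content_filter_result") or {}
    triggered = sorted(name for name, r in results.items() if r.get("filtered"))
    msg = "Your message was blocked by the provider's content filter"
    if triggered:
        msg += f" (categories: {', '.join(triggered)})"
    return msg + "."
```

Applied to the payload in the warning above, this would report the `hate` category as the one that triggered. Whether the exception propagates out of `stream_chat` or is swallowed by the engine (as the warning suggests) likely depends on the engine in use, so this is a sketch of the parsing step, not a confirmed fix.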
20 comments