**Bug Description**

At a glance

A community member is encountering an issue while trying to access the "achat" interface to chat with the Gemini model using Vertex AI. The error message indicates an "Unknown field for GenerationConfig: safety_settings" error. The community member suspects this may be an issue with the llamaindex library, potentially due to a recent Google update. They have provided version details for the libraries they are using.

In the comments, another community member suggests that downgrading the llama-index-llms-vertex library to version 0.3.4 may fix the issue. The original poster tries this and says they will let the community know if it works.

The original poster turns out to be the one who raised the issue on GitHub and was hoping to get help here so they could close it there. They also hit a "ResponseValidationError" when using the "achat" interface; the error message suggests skipping the response validation as a possible workaround.

The community members discuss whether the integration was working before, and whether the issue is with the code or a recent Google update. Eventually, one community member determines that setting the MAX_TOKENS too low was the cause of the issue, and the integration now works.

There is no explicitly marked answer in the comments, but the community members provide suggestions and troubleshooting steps to resolve the issue.

Bug Description

I'm encountering an issue while trying to access the "achat" interface to chat with the Gemini model (gemini-1.5-flash-002) using Vertex AI. The error message I'm receiving is as follows:

Plain Text
ERROR: Unknown field for GenerationConfig: safety_settings

 File "/workspaces/CORTEX/.venv/lib/python3.10/site-packages/llama_index/llms/vertex/base.py", line 384, in achat
   generation = await acompletion_with_retry(
 
 File "/workspaces/CORTEX/.venv/lib/python3.10/site-packages/llama_index/llms/vertex/utils.py", line 148, in acompletion_with_retry
   return await _completion_with_retry(**kwargs)


I suspect this may be an issue with llamaindex, potentially due to a recent Google update that affected some configurations. However, I am unsure of the root cause.
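For context, the error text points at `safety_settings` being folded into `GenerationConfig`, while the Vertex AI SDK treats it as a separate argument to `generate_content`. Below is a minimal sketch of that call shape, assuming the standard `vertexai` SDK and an already-initialized project (the category/threshold choice is arbitrary):

Python
from vertexai.generative_models import (
    GenerationConfig,
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
)

# Assumes vertexai.init(project=..., location=...) has already been called.
model = GenerativeModel("gemini-1.5-flash-002")
response = model.generate_content(
    "Hello",
    # Sampling/output parameters belong in GenerationConfig...
    generation_config=GenerationConfig(max_output_tokens=256),
    # ...while safety_settings is its own keyword argument, not a config field.
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)
print(response.text)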

Version Details:
  • llama-index==0.11.14
  • llama-index-llms-vertex==0.3.6
  • google-ai-generativelanguage==0.6.4
  • google-generativeai==0.5.4
Steps to Reproduce:
Plain Text
llm = Vertex(...)
chat = await llm.achat(...)


Error: See above.

Relevant Logs/Tracebacks: No response.
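For anyone trying to reproduce this, here is a fleshed-out version of the snippet above. It is only a sketch: the project id and prompt are placeholders, and credentials are assumed to come from the environment.

Python
import asyncio

from llama_index.core.llms import ChatMessage
from llama_index.llms.vertex import Vertex

async def main() -> None:
    # Placeholder project id; auth comes from application-default credentials.
    llm = Vertex(model="gemini-1.5-flash-002", project="my-gcp-project")
    # achat is the call that raises "Unknown field for GenerationConfig:
    # safety_settings" on llama-index-llms-vertex 0.3.6.
    response = await llm.achat([ChatMessage(role="user", content="Hello")])
    print(response.message.content)

asyncio.run(main())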
10 comments
Yeah, someone raised an issue on GitHub for this. Would love a PR; otherwise I'll try to get to it at some point

I think downgrading may actually fix it... `pip install -U llama-index-llms-vertex==0.3.4`
It was me; I was hoping to get some help here so I could close it there
Do I have to pass any extra parameters?

Plain Text
ResponseValidationError: The model response did not complete successfully.
Finish reason: 2.
Finish message: .
Safety ratings: [].
To protect the integrity of the chat session, the request and response were not added to chat history.
To skip the response validation, specify `model.start_chat(response_validation=False)`.
Note that letting blocked or otherwise incomplete responses into chat history might lead to future interactions being blocked by the service.
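For reference, the workaround named in that error message applies to the raw Vertex AI SDK chat session rather than to the llama-index wrapper. A minimal sketch, assuming `vertexai` is already initialized:

Python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-1.5-flash-002")
# Skip response validation so blocked/incomplete replies do not raise
# ResponseValidationError (at the cost of possibly polluting chat history).
chat = model.start_chat(response_validation=False)
print(chat.send_message("Hello").text)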
Was this integration working before? Or is it a problem on my end?
Seems like Google maybe updated and broke something? I really have no idea actually 😥
Actually, I think this was a problem on my end
I set MAX_TOKENS too low; after raising it, it worked
As for the first problem, though, I still have no idea why it wasn't working
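Worth noting for future readers: in the Vertex AI SDK's `FinishReason` enum, 2 is `MAX_TOKENS`, which matches the too-low `MAX_TOKENS` diagnosis above. A sketch of the fix via the `max_tokens` parameter of the llama-index `Vertex` wrapper (the value here is arbitrary):

Python
from llama_index.llms.vertex import Vertex

# FinishReason 2 == MAX_TOKENS: the model hit its output-token budget.
# A larger max_tokens lets responses complete instead of being cut off.
llm = Vertex(model="gemini-1.5-flash-002", max_tokens=2048)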