
Updated 4 months ago

I've been trying Gemini Pro using Vertex AI

At a glance
The community member is trying to use Gemini Pro, an LLM provided by Vertex AI, through the LlamaIndex integration. They are encountering issues with Gemini's response validation, which blocks potentially "risky" content from being output. The community member wants to disable this response validation by setting model.start_chat(response_validation=False), but they are unsure how to do this within the LlamaIndex integration. The comments indicate that the community member tried setting response_validation=False directly through the Vertex AI Python API, but this did not resolve the issue and instead resulted in a different error. The comments also note that the response_validation=False setting does not seem to be implemented in LlamaIndex. There is no explicitly marked answer in the post or comments.
I've been trying Gemini Pro on Vertex AI, using LlamaIndex's integration of Vertex AI. I've been defining it as "llm" and setting Settings.llm=llm. I use Gemini Pro as the LLM for LlamaIndex's Tree Summarize.

Is there a way to specify a "model.start_chat(response_validation=False)" parameter for Gemini? This should disable Gemini's response validation, which blocks potentially "risky" material from appearing as output by raising an error. For example, see this article: https://medium.com/@its.jwho/errorhandling-vulnerability-tests-on-gemini-19601b246b52.

The response validation of Gemini is oversensitive. I'm summarising a court judgment in which there was a "harassment" claim, but Gemini is giving me a ResponseValidationError ("category: HARM_CATEGORY_HARASSMENT, probability: MEDIUM"), with the error message stating "To skip the response validation, specify `model.start_chat(response_validation=False)`".

I see that in LlamaIndex's utils.py for the Vertex integration there is a "start_chat" call in the code: https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/llms/llama-index-llms-vertex/llama_index/llms/vertex/utils.py

So, in short: using LlamaIndex, how do I set "model.start_chat(response_validation=False)" as a parameter for Vertex AI's Gemini Pro? If there is no easy way, is there a way of calling Vertex AI's Gemini Pro through Vertex AI's own API, and then having it take effect as usual via LlamaIndex's "Settings.llm=llm"? Thanks!
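
For context, this is roughly my setup; the project ID, location, and judgment text below are placeholders, not my real values:

```python
# Rough sketch of my setup (project/location/text are placeholders)
from llama_index.core import Settings
from llama_index.core.response_synthesizers import TreeSummarize
from llama_index.llms.vertex import Vertex

# Gemini Pro served through Vertex AI, via LlamaIndex's Vertex integration
llm = Vertex(
    model="gemini-pro",
    project="my-gcp-project",  # placeholder
    location="us-central1",    # placeholder
)
Settings.llm = llm

judgment_text = "...full text of the court judgment..."  # placeholder

# Tree Summarize uses the llm passed in (or Settings.llm by default)
summarizer = TreeSummarize(llm=llm)
summary = summarizer.get_response("Summarise this court judgment.", [judgment_text])
print(summary)
```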
1 comment
Niche problem, but just adding this as a record.

The response_validation=False setting does not help in cases where Gemini is being oversensitive to 'risky' topics.

I tried Vertex AI's Python API directly (i.e. not calling the API through LlamaIndex). If I add the response_validation=False parameter, I don't get the ResponseValidationError, but I get a different error instead (ValueError: Content has no parts).
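
Roughly what I tried with the Vertex AI SDK (project and location are placeholders):

```python
# Rough sketch of the direct Vertex AI call (project/location are placeholders)
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-pro")
chat = model.start_chat(response_validation=False)  # skip response validation

response = chat.send_message("Summarise this court judgment: ...")
# With response_validation=False the ResponseValidationError goes away, but the
# blocked response comes back with no candidates/parts, so reading it still fails:
print(response.text)  # ValueError: Content has no parts
```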

So while the response_validation=False setting doesn't seem to be implemented in LlamaIndex, it wouldn't have helped anyway.