I've been trying Gemini Pro using Vertex AI

I've been trying Gemini Pro on Vertex AI, using LlamaIndex's Vertex AI integration. I've been defining it as "llm", setting Settings.llm = llm, and using Gemini Pro as the LLM for LlamaIndex's Tree Summarize.
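For context, here is a minimal sketch of that setup (assuming the llama-index-llms-vertex package; the project id is a placeholder):

```python
# Minimal sketch of the setup described above. Assumes the
# llama-index-llms-vertex integration; project id is a placeholder.
from llama_index.core import Settings
from llama_index.core.response_synthesizers import TreeSummarize
from llama_index.llms.vertex import Vertex

llm = Vertex(model="gemini-pro", project="my-project")
Settings.llm = llm

# Tree Summarize picks up the LLM from Settings.llm:
summarizer = TreeSummarize()
# summary = summarizer.get_response("Summarise the judgment.", text_chunks)
```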

Is there a way to specify a "model.start_chat(response_validation=False)" parameter for Gemini? This should disable Gemini's response validation, which blocks potentially "risky" material from appearing in the output by raising an error. For an example, see this article: https://medium.com/@its.jwho/errorhandling-vulnerability-tests-on-gemini-19601b246b52.

Gemini's response validation is oversensitive. I'm summarising a court judgment in which there was a "harassment" claim, but Gemini gives me a ResponseValidationError ("category: HARM_CATEGORY_HARASSMENT, probability: MEDIUM"), with the error message stating: "To skip the response validation, specify `model.start_chat(response_validation=False)`"

I see that in LlamaIndex's utils.py there is a "start_chat" call in the code: https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/llms/llama-index-llms-vertex/llama_index/llms/vertex/utils.py

So in short: using LlamaIndex, how do I set "model.start_chat(response_validation=False)" as a parameter for Vertex AI's Gemini Pro? If there is no easy way, is there a way of calling Gemini Pro through Vertex AI's API directly, and then having it take effect as usual via LlamaIndex's "Settings.llm = llm"? Thanks!
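One route I'm considering, in case there is no built-in option: wrap a direct Vertex AI call in LlamaIndex's CustomLLM base class and assign that to Settings.llm. A rough, untested sketch (the class name is made up and the project details are placeholders):

```python
# Rough, untested sketch: wrap a direct Vertex AI call (with response
# validation disabled) in a CustomLLM so it can back Settings.llm.
from typing import Any

import vertexai
from vertexai.generative_models import GenerativeModel

from llama_index.core import Settings
from llama_index.core.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback

vertexai.init(project="my-project", location="us-central1")  # placeholders


class GeminiNoValidation(CustomLLM):  # hypothetical name
    model_name: str = "gemini-pro"

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(model_name=self.model_name)

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Disable response validation, as Gemini's error message suggests.
        chat = GenerativeModel(self.model_name).start_chat(response_validation=False)
        return CompletionResponse(text=chat.send_message(prompt).text)

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # No real streaming here; yield the full completion once.
        yield self.complete(prompt, **kwargs)


Settings.llm = GeminiNoValidation()
```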
1 comment
Niche problem, but just adding this as a record.

The response_validation=False setting does not help in cases where Gemini is being oversensitive to "risky" topics.

I tried Vertex AI's Python API directly (i.e. not calling the API through LlamaIndex). If I add the response_validation=False parameter, I don't get the ResponseValidationError, but I get a different error instead (ValueError: Content has no parts), presumably because the safety filter still strips the blocked candidate's content, leaving nothing for the response text to return.
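For reference, roughly what I tried (assuming the vertexai SDK; project details are placeholders):

```python
# Direct Vertex AI call with response validation disabled.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # placeholders

model = GenerativeModel("gemini-pro")
chat = model.start_chat(response_validation=False)
response = chat.send_message("Summarise this court judgment: ...")

# send_message() no longer raises ResponseValidationError, but if the
# safety filter blocked the candidate, its content has no parts, so this
# raises ValueError("Content has no parts"):
print(response.text)
```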

So while the response_validation=False setting doesn't seem to be implemented in LlamaIndex, it wouldn't have helped anyway.