I've been trying Gemini Pro on Vertex AI, via LlamaIndex's Vertex AI integration. I define the model as `llm`, set `Settings.llm = llm`, and use Gemini Pro as the LLM for LlamaIndex's Tree Summarize.
Is there a way to specify the `model.start_chat(response_validation=False)` parameter for Gemini? This should disable Gemini's response validation, which blocks potentially "risky" material from appearing in the output by raising an error. For background, see this article:
https://medium.com/@its.jwho/errorhandling-vulnerability-tests-on-gemini-19601b246b52
Gemini's response validation is oversensitive. I'm summarising a court judgment in which there was a "harassment" claim, but Gemini gives me a `ResponseValidationError` (`category: HARM_CATEGORY_HARASSMENT, probability: MEDIUM`), with the error message stating: "To skip the response validation, specify `model.start_chat(response_validation=False)`".
I see there is a `start_chat` call in LlamaIndex's utils.py for the Vertex integration:
https://github.com/run-llama/llama_index/blob/main/llama-index-integrations/llms/llama-index-llms-vertex/llama_index/llms/vertex/utils.py

So, in short: using LlamaIndex, how do I set `model.start_chat(response_validation=False)` as a parameter for Vertex AI's Gemini Pro? If there is no easy way, can I instead call Gemini Pro through Vertex AI's own API and still have it work as usual with LlamaIndex's `Settings.llm = llm`? Thanks!