Anyone have experience using create-llama and gemini-pro?
I believe I have most of it set up, but every time I test a query I get this: `ValueError: Could not parse output: Thought: I need to use a tool to help me answer the question. Action: query_engine_tool({"input": "what is your name?"})`
For awareness: I copied that output parser into my own code and still couldn't get the ReAct agent to work (I experimented with temperature too, but no luck).
What does seem to work is anything other than the ReAct agent: `condense_plus_context` gets at least most of a proof of concept for gemini-pro working with LlamaIndex.
The Gemini embeddings model seems to work with no issue too.
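For anyone else hitting this, here's a minimal sketch of the setup that worked for me (`condense_plus_context` chat engine instead of the ReAct agent). This assumes the `llama_index` Gemini integration packages; module paths, model names, and the `./data` directory are from my setup and may differ by version:

```python
import os

def build_chat_engine(docs_dir: str):
    # Sketch only: assumes llama-index-llms-gemini and
    # llama-index-embeddings-gemini are installed.
    from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
    from llama_index.llms.gemini import Gemini
    from llama_index.embeddings.gemini import GeminiEmbedding

    llm = Gemini(model="models/gemini-pro")
    embed_model = GeminiEmbedding(model_name="models/embedding-001")

    docs = SimpleDirectoryReader(docs_dir).load_data()
    index = VectorStoreIndex.from_documents(docs, embed_model=embed_model)

    # condense_plus_context worked where the ReAct agent's
    # output parsing failed for me
    return index.as_chat_engine(chat_mode="condense_plus_context", llm=llm)

if __name__ == "__main__" and os.environ.get("GOOGLE_API_KEY"):
    engine = build_chat_engine("./data")
    print(engine.chat("what is your name?"))
```

No idea yet why the ReAct agent's output parser chokes on gemini-pro's responses specifically, so treat this as a workaround rather than a fix.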