
Updated 4 months ago

Anyone have experience using create-llama and gemini-pro?

I believe I have most of it set up, but every time I test a query I get this error:

```
ValueError: Could not parse output: Thought: I need to use a tool to help me answer the question.
Action: query_engine_tool({"input": "what is your name?"})
```
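For context, here is a minimal sketch of the kind of setup that can hit this error. It assumes llama_index 0.9.x with the Gemini integration installed and GOOGLE_API_KEY set; the "data" directory and tool name are illustrative, not from the original post:

```python
# Minimal sketch (assumed setup, not the poster's exact code):
# pip install llama-index google-generativeai, GOOGLE_API_KEY in the environment.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.agent import ReActAgent
from llama_index.embeddings import GeminiEmbedding
from llama_index.llms import Gemini
from llama_index.tools import QueryEngineTool

llm = Gemini()  # gemini-pro was the default model at the time of this thread
service_context = ServiceContext.from_defaults(llm=llm, embed_model=GeminiEmbedding())

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

query_tool = QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="query_engine_tool",
    description="Answers questions about the indexed documents.",
)

# The ReAct agent expects strict "Thought:" / "Action:" / "Action Input:" lines;
# gemini-pro often writes the action as a function call instead, which is what
# the output parser rejects with the ValueError above.
agent = ReActAgent.from_tools([query_tool], llm=llm, verbose=True)
print(agent.chat("what is your name?"))
```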
5 comments
ha that sounds about right

Gemini is not following the instructions in the ReAct prompt, and is producing output that we can't parse

Although there is a PR here that might help improve this soon
https://github.com/run-llama/llama_index/pull/9575
I take it we probably just need gemini-ultra access if we're expecting a drop-in replacement for gpt-4 then 😀
seems like it! gemini-pro seems a little bit worse than gpt-3.5 in this case
For awareness, I pulled that output parser into my own code and still didn't get the ReAct agent to work (experimented with temperature too, but no luck).

What does seem to work is using anything besides the ReAct agent. The condense_plus_context chat mode gets at least a basic proof of concept for gemini-pro working with LlamaIndex.
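A rough sketch of that workaround, under the same assumed setup as the earlier snippet (not the poster's exact code): build the index with Gemini for both the LLM and embeddings, then ask for the condense_plus_context chat mode instead of an agent:

```python
# Sketch of the workaround: no ReAct agent, just a condense_plus_context chat engine.
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings import GeminiEmbedding
from llama_index.llms import Gemini

service_context = ServiceContext.from_defaults(
    llm=Gemini(),                   # gemini-pro
    embed_model=GeminiEmbedding(),  # models/embedding-001 by default
)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

chat_engine = index.as_chat_engine(chat_mode="condense_plus_context")
print(chat_engine.chat("what is your name?"))
```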

The Gemini embeddings model seems to work with no issues too 👍
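For example, a quick sanity check of the embedding model (again a sketch, not the poster's code; the default model name is assumed to be models/embedding-001):

```python
# Standalone check of the Gemini embeddings integration.
from llama_index.embeddings import GeminiEmbedding

embed_model = GeminiEmbedding()  # defaults to models/embedding-001
vector = embed_model.get_text_embedding("hello gemini")
print(len(vector))  # embedding dimensionality
```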
forgot to mention that streaming works well too
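Streaming can be checked the same way. This sketch streams from the LLM directly to stay self-contained; the equivalent chat-engine call would be chat_engine.stream_chat(...) on the engine from the snippet above:

```python
# Streaming sketch at the LLM level (assumes GOOGLE_API_KEY is set).
from llama_index.llms import Gemini

llm = Gemini()
for chunk in llm.stream_complete("Write one sentence about llamas."):
    print(chunk.delta, end="", flush=True)
```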