
Updated 9 months ago

Structured

Have you ever tried any implementation of LlamaIndex with the Google Gemma model?
The following examples give me a "json" error when using Gemma:

https://github.com/run-llama/llama_index/blob/main/docs/examples/query_engine/SQLAutoVectorQueryEngine.ipynb
https://github.com/run-llama/llama_index/blob/main/docs/examples/query_engine/SQLJoinQueryEngine.ipynb

Generally speaking, how can we tailor a custom LLM to work well with all the features of LlamaIndex, the way the GPT and Claude models do?

Thanks in advance for your response.
Features that require structured outputs generally don't work well with open-source models.
They need a fairly capable LLM.
Do you think fine-tuning the LLM would be helpful?
If you fine-tuned it to output JSON and to follow the schema given in the prompt, yes.
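For illustration, here is a minimal sketch (not tied to any particular trainer or to LlamaIndex itself) of how such fine-tuning pairs could be prepared: each prompt embeds the JSON schema, and the completion is the exact JSON the model should learn to emit. The `make_training_example` helper and the schema are hypothetical.

```python
import json

# Hypothetical schema: the kind of structured output a query router expects.
SCHEMA = {
    "type": "object",
    "properties": {
        "choice": {"type": "integer"},
        "reason": {"type": "string"},
    },
    "required": ["choice", "reason"],
}

def make_training_example(question: str, choice: int, reason: str) -> dict:
    """One (prompt, completion) pair in a generic JSONL fine-tuning format."""
    prompt = (
        f"{question}\n\n"
        f"Respond ONLY with a JSON object matching this schema:\n"
        f"{json.dumps(SCHEMA)}"
    )
    completion = json.dumps({"choice": choice, "reason": reason})
    return {"prompt": prompt, "completion": completion}

example = make_training_example(
    "Which tool answers this query: (1) SQL or (2) vector search?",
    1,
    "The question asks for an aggregate over a table.",
)

# The completion must itself parse as valid JSON with the required keys.
parsed = json.loads(example["completion"])
assert set(SCHEMA["required"]) <= parsed.keys()
```

The point of pairs like these is that the model learns to answer with bare, schema-conforming JSON rather than prose.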
Can you please elaborate on "outputting JSON"? I don't get it.
Right now, what is the default output of Gemma?
It means we prompt the LLM with a JSON schema, and it must output a JSON object that LlamaIndex can parse.
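To make "a JSON object that LlamaIndex can parse" concrete, here is a hedged sketch (this is not LlamaIndex's actual parser; the `extract_json` helper is hypothetical) showing why prose-wrapped output from a weaker model triggers a "json" parse error:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Parse the first JSON object found in a model reply.

    Less capable models often wrap the JSON in prose or code fences
    instead of returning it bare, which is where a 'json' parse error
    typically comes from.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object in model output")
    return json.loads(match.group(0))

# A capable model returns bare JSON:
assert extract_json('{"tool": "sql", "input": "SELECT ..."}')["tool"] == "sql"

# A weaker model wraps it in prose; the fallback still recovers it:
reply = 'Sure! Here is the answer:\n{"tool": "vector", "input": "city stats"}'
assert extract_json(reply)["tool"] == "vector"
```

If the model's reply contains no JSON object at all, no parser can recover, which is why the feature needs a fairly capable (or fine-tuned) model.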
Maybe enabling some debug logs on the model inputs/outputs would help.
Oh, OK.
Can you kindly share some resources on this?
It would be really appreciated.
```python
import llama_index.core

llama_index.core.set_global_handler("simple")
```
And another question; please accept my apologies for so many questions.

I have also used Mistral, and it looks like a better option for structured outputs.
However, when I query SQL tables, it can't provide correct responses in some cases: the answers are either wrong, or it fails to produce any at all.

Do you think fine-tuning would be helpful in this case?
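The thread doesn't answer this, but one mitigation sketch for wrong or empty SQL responses is to validate model-generated SQL before executing it, so malformed statements fail fast instead of returning garbage. The `run_generated_sql` helper and the SQLite setup below are hypothetical, not part of LlamaIndex:

```python
import sqlite3

def run_generated_sql(conn, sql: str):
    """Check model-generated SQL with EXPLAIN before executing it.

    EXPLAIN forces SQLite to prepare the statement, so syntax errors and
    unknown columns are caught without side effects.
    """
    try:
        conn.execute("EXPLAIN " + sql)
    except sqlite3.Error as exc:
        return None, f"invalid SQL: {exc}"
    return conn.execute(sql).fetchall(), None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT, population INTEGER)")
conn.execute("INSERT INTO city VALUES ('Berlin', 3600000)")

rows, err = run_generated_sql(
    conn, "SELECT name FROM city WHERE population > 1000000"
)
assert rows == [("Berlin",)] and err is None

# A model typo ('nme') is rejected instead of producing a wrong answer:
rows, err = run_generated_sql(conn, "SELECT nme FROM city")
assert rows is None and "invalid SQL" in err
```

Fine-tuning on schema-aware question/SQL pairs can also help, but catching invalid queries explicitly makes the failure mode visible either way.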