
Hello, I am seeking guidance on using RAG to ground a model's response format in the semantics of a DSL the LLM doesn't know. My task is to assess how RAG can help align model responses with a specific JSON schema, which can be complex. As I am new to building RAG-augmented LLM apps, any advice on evaluating the approach's effectiveness before building the entire pipeline would be greatly appreciated. I can't rely on function calling or the newly introduced OpenAI features (JSON mode, seed sampling). I am basing my approach on the findings of this paper: https://arxiv.org/pdf/2308.00675.pdf, which seems to indicate that you can teach an LLM to use a new tool/language by passing the documentation as input.
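For context, here is a minimal sketch of the kind of setup being asked about: chunks of DSL/schema documentation are retrieved per request and inlined into the prompt, and the output is parsed to check it stayed on-schema. Everything here (`DOC_CHUNKS`, `retrieve_docs`, `call_llm`, the example schema fields) is an illustrative placeholder, not a specific library's API; a real retriever would use embeddings and a vector store.

```python
# Sketch: RAG-style prompt grounding for an unfamiliar DSL / JSON schema.
# All names below are hypothetical placeholders.
import json

# Assumption: the DSL documentation has been split into small, self-contained chunks.
DOC_CHUNKS = [
    "Field `action` must be one of: create, update, delete.",
    "Field `target` is an object with keys `type` (string) and `id` (string).",
    "Every response must be a single JSON object with keys `action` and `target`.",
]

def retrieve_docs(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank chunks by word overlap with the query.
    In practice this would be an embedding / vector-store lookup."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))
    return scored[:k]

def build_prompt(user_message: str) -> str:
    """Inline the retrieved documentation so the model can imitate the DSL."""
    docs = "\n".join(retrieve_docs(user_message, DOC_CHUNKS))
    return (
        "You must answer with JSON that follows the documentation below.\n"
        f"Documentation:\n{docs}\n\n"
        f"User request: {user_message}\n"
        "Respond with JSON only."
    )

def call_llm(prompt: str) -> str:
    """Placeholder for whatever chat/completions client is in use."""
    raise NotImplementedError

def grounded_answer(user_message: str) -> dict:
    raw = call_llm(build_prompt(user_message))
    return json.loads(raw)  # fails loudly if the model drifted off-schema
```

Parsing (or schema-validating) every response like this also gives a cheap way to evaluate the approach before committing to a full pipeline: the fraction of outputs that parse/validate is a first-pass metric.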
2 comments
Like, you could prompt an LLM with documentation to then help it pick a tool

I think this is similar to user sends message -> retrieve documentation related to message -> show LLM docs + tools and ask it to pick a tool (I think this is essentially what this paper is describing)
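A rough sketch of that flow: retrieve documentation for the incoming message, then show the model the docs plus the tool list and ask it to pick one. `retrieve_docs`, `DOC_CHUNKS`, and `call_llm` are the same hypothetical placeholders as in the sketch earlier in the thread.

```python
# Hypothetical tool list the model can choose from.
TOOLS = {
    "query_builder": "Builds a DSL query from a natural-language request.",
    "schema_validator": "Checks a JSON payload against the target schema.",
}

def handle_message(user_message: str) -> str:
    # retrieve documentation related to the message
    docs = "\n".join(retrieve_docs(user_message, DOC_CHUNKS))
    tool_list = "\n".join(f"- {name}: {desc}" for name, desc in TOOLS.items())
    # show the LLM the docs + tools and ask it to pick a tool
    prompt = (
        f"Documentation:\n{docs}\n\n"
        f"Available tools:\n{tool_list}\n\n"
        f"User message: {user_message}\n"
        "Reply with the name of the single most appropriate tool."
    )
    return call_llm(prompt).strip()
```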
Thanks Logan. In this case I am trying to teach the model a new grammar/semantics rather than have it pick an existing tool.