Hello! I have a question: I'm using Llama 2 and I have a big JSON blob (~50 MB of text). It's quite nested and contains a lot of one-page documents, and I was wondering what the best way to index it is. Would it still be the JSON index (it's going to be tedious to come up with a full schema), or is there a way for me to turn it into documents somehow?
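For context, here's roughly what I mean by "turning it into documents": a minimal sketch using only the stdlib (the recursive walk and the `(json_path, text)` shape are just my own illustration, not a library API):

```python
import json

def flatten_to_docs(node, path="$"):
    """Recursively walk a nested JSON value and yield (json_path, text)
    pairs, one per leaf, so each leaf can become its own document while
    the path preserves the original structure as metadata."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from flatten_to_docs(value, f"{path}.{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            yield from flatten_to_docs(value, f"{path}[{i}]")
    else:
        # Leaf value: keep the JSONPath so structure isn't lost.
        yield path, str(node)

blob = json.loads('{"reports": [{"title": "Q1", "body": "Revenue grew."}]}')
docs = list(flatten_to_docs(blob))
# Each (path, text) pair could then be wrapped in a document object
# with the path stored as metadata.
```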
Gotcha, okay, yeah, that's what I was thinking too. Do you happen to know how "smart" the JSON loader is compared to the document parser? I'd love to preserve the structure since it's important, but I'm also wondering whether the JSON loader has capabilities beyond basic lookup/analytics on JSON data (e.g. summarization)?
Okay, confirming that this was the issue ^ I was able to force Llama 2 to output just the JSONPath and get the correct answer back. I wonder if a thin regex wrapper around json_path_response_str could be helpful in the future for other users running LLMs without function calling (let me know if you like the idea and I can open a feature request).
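Something like this is what I had in mind: a rough sketch of the wrapper (the pattern and function name are just illustrative; it only assumes the raw LLM response string is available, not any particular library internals):

```python
import re

# Matches a JSONPath expression such as $.store.book[0].title, even when
# the model wraps it in extra prose or backticks.
JSONPATH_RE = re.compile(r"\$(?:\.[A-Za-z_][\w-]*|\[\d+\]|\['[^']*'\])+")

def extract_json_path(raw_llm_output: str):
    """Pull the first JSONPath out of a chatty LLM response.

    Returns the bare path string, or None if no path is found, so the
    caller can retry or raise a clearer error.
    """
    match = JSONPATH_RE.search(raw_llm_output)
    return match.group(0) if match else None

extract_json_path("Sure! The path you want is `$.reports[0].title`.")
```

The idea is that models without function calling often surround the path with explanation, and a post-processing step like this would let the query engine recover anyway instead of failing to parse.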