Hey everyone, I recently used the SimpleCSV loader with LlamaIndex, and I've got to say it was incredibly easy. The results from even a simple query were astonishing given the little data I provided it.
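For context, here's roughly the kind of setup I mean. This is a minimal sketch using the older `download_loader` / `GPTSimpleVectorIndex` style of the LlamaIndex API; exact class names may differ depending on your version, and it needs an `OPENAI_API_KEY` set in the environment:

```python
from pathlib import Path
from llama_index import download_loader, GPTSimpleVectorIndex

# Fetch the community SimpleCSVReader loader from LlamaHub.
SimpleCSVReader = download_loader("SimpleCSVReader")

# Parse a small CSV file into LlamaIndex Document objects.
documents = SimpleCSVReader().load_data(file=Path("data.csv"))

# Build an in-memory vector index over the documents
# (this calls OpenAI for embeddings, so an API key is required).
index = GPTSimpleVectorIndex.from_documents(documents)

# Ask a natural-language question against the CSV contents.
print(index.query("What trends show up in this data?"))
```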
But how is this different from, say, fine-tuning a GPT-3 model directly? I tried that with about 50 prompt/completion pairs, and the results were nowhere near as good as the custom-knowledge LlamaIndex approach; in fact, they were quite poor.
And the reason I wanted to fine-tune the model was so I could use it in a MERN application (which is what I'm familiar with). How can I access this from my own requests, say with Node.js?