don't get too excited too fast ... it may take minutes before you get an answer from privateGPT ... I tested it myself with the provided source doc and was not impressed by the results ... but of course, this is only a "first step" ... this is evolving every day ... https://github.com/imartinez/privateGPT/issues/43
It's using CPU models, which will really drag down speed. If you have the resources, llama index supports any model from huggingface, and will run on GPU if you can
yes, I know ... for now I'm happy with LLIDX + openAI ... I'm trying to "teach" openAI a new programming language ... I experiment with different prompts and mixing techniques ... it's both fun and tedious!