
Updated 2 years ago

spend too much time waiting for the answ...

don't get too excited too fast ... it may take minutes before you get an answer from privateGPT ... I tested it myself with the provided source doc and was not impressed by the results ... but of course, this is only a "first step" ... this is evolving every day ...
https://github.com/imartinez/privateGPT/issues/43
2 comments
It's using CPU models, which will really drag down speed. If you have the resources, llama-index supports any model from Hugging Face, and it will run on GPU if you can 🙂
yes, I know 😉 for now I'm happy with LLIDX + OpenAI ... I'm trying to "teach" OpenAI a new programming language ... I'm experimenting with different prompts and mixing techniques ... it's both fun and tedious! 😄
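To illustrate the GPU suggestion above: a minimal sketch of picking a device and (hypothetically) wiring a Hugging Face model into llama-index. The `llama_index.llms.huggingface` import path and the `HuggingFaceLLM` constructor arguments are assumptions based on llama-index's documented API and may differ between versions; the device-selection helper is the only part exercised here.

```python
def pick_device(cuda_available: bool) -> str:
    """Return the torch device string to target: GPU if available, else CPU."""
    return "cuda" if cuda_available else "cpu"


def build_llm(model_name: str, device: str):
    """Hypothetical wiring for llama-index + Hugging Face.

    Requires `pip install llama-index` (and a CUDA-capable GPU for
    reasonable speed); the import path below is an assumption and
    varies between llama-index versions. Left unexecuted here.
    """
    from llama_index.llms.huggingface import HuggingFaceLLM  # assumed import path
    return HuggingFaceLLM(model_name=model_name, device_map=device)


if __name__ == "__main__":
    try:
        import torch
        device = pick_device(torch.cuda.is_available())
    except ImportError:
        # No torch installed: fall back to CPU, as privateGPT does by default.
        device = "cpu"
    print(f"would load the model on: {device}")
```

With a GPU present, `pick_device` returns `"cuda"` and the model weights are placed on the GPU, which is what closes the speed gap the comment describes.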