Custom LLM

the links you shared
Yea, that's usually the easiest way.

If you don't have access to the pipeline, you'll have to load the model and tokenizer yourself, tokenize the text, call the model, and return the newly generated text.

So if you have a model that you can already test by giving it inputs, you likely have all of this written somewhere already.
(Pipelines are all locally running models, btw)
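In case it helps, here's a minimal sketch of that manual path, assuming a Hugging Face causal LM (the model name is just a placeholder; swap in whatever you have locally):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; use your own local model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Tokenize the prompt, run the model, then decode and return
    # only the newly generated tokens (everything after the prompt).
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(generate("The easiest way to run a custom LLM is"))
```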
I'm trying to use GPT4All
can you help?
Yea, that thread I linked before had a GPT4All notebook, I'll grab the link
GitHub - autratec/GPT4ALL_Llamaindex: Notebook to integrate GPT4ALL...
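For reference, calling GPT4All directly from its Python bindings looks roughly like this (the model filename is just an example; the notebook above shows the full LlamaIndex wiring):

```python
from gpt4all import GPT4All

# Example model file; GPT4All downloads it on first use if it
# isn't already cached locally.
llm = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

# This generate() call is what a custom LLM wrapper would make
# inside its completion method.
response = llm.generate("What is a custom LLM?", max_tokens=128)
print(response)
```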