Is it possible to use local LLMs such as llama.cpp for DocumentSummaryIndex?

Is it possible to use local LLMs such as llama.cpp for DocumentSummaryIndex? I keep getting a llama_tokenize_with_model: too many tokens error.
2 comments
Hmm, that's kind of a weird error. Maybe decrease the context_window a bit on the LLM?
It might be due to token-counting errors.
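For example, something like this minimal sketch (assuming the llama-index v0.10+ package layout, a hypothetical local GGUF model path, and a local HuggingFace embedding model so nothing falls back to a remote API; adjust paths and model names for your setup):

```python
from llama_index.core import DocumentSummaryIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.llama_cpp import LlamaCPP

# Set context_window slightly below the model's true limit so prompt
# templates and token-count drift don't overflow llama.cpp's tokenizer.
Settings.llm = LlamaCPP(
    model_path="/path/to/model.gguf",  # hypothetical path to your GGUF model
    context_window=3900,               # headroom under a 4096-token model
    max_new_tokens=256,
)

# Local embeddings, so summaries are embedded without an external service.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

documents = SimpleDirectoryReader("./data").load_data()  # hypothetical data dir
index = DocumentSummaryIndex.from_documents(documents)
```

Keeping context_window a bit under the model's real context size (e.g. 3900 for a 4096-context model) leaves room for the prompt formatting, which is usually enough to avoid the too-many-tokens error.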