Below is what I got: "This model's maximum context length is 8192 tokens. However, your messages resulted in 14620 tokens. Please reduce the length of the messages."
And that's why there isn't any text splitter when I ingest. Like I said, the text splitter lives inside the node_parser. But I got the same error again even after adding the node_parser to the service_context.
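For reference, here's roughly how I set it up (a sketch; the import paths and chunk sizes are my assumptions and may differ between versions):

```python
from llama_index import ServiceContext
from llama_index.node_parser import SimpleNodeParser
from llama_index.langchain_helpers.text_splitter import TokenTextSplitter

# the node_parser wraps the text splitter; chunk sizes here are just examples
node_parser = SimpleNodeParser(
    text_splitter=TokenTextSplitter(chunk_size=512, chunk_overlap=20)
)
service_context = ServiceContext.from_defaults(node_parser=node_parser)
```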
How did you create the index? Did you use Document objects with the from_documents() and insert() functions?
It could be a language issue causing the documents not to split well into nodes. If so, you could switch to the recursive character text splitter instead
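Something like this, using LangChain's RecursiveCharacterTextSplitter directly (a sketch; the separators and sizes are just illustrative defaults, tune them for your language):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# splits on paragraphs, then lines, then sentences, falling back to
# characters, which often behaves better than pure token splitting
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=100,
    separators=["\n\n", "\n", ". ", " ", ""],
)
chunks = splitter.split_text(long_text)
```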
I've always used TokenTextSplitter() and it worked with the previous version. Btw I'm using Node() and then insert(). Should I change to from_documents()?
No! And I suspect I should. When I used Document() and insert() with GPTPineconeIndex, the splitting was automatic. Now I'm missing that step ahahah
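i.e. the old flow that just worked for me (a sketch from memory; `index` is an existing GPTPineconeIndex):

```python
from llama_index import Document

# with Document objects, insert() runs the node parser / text splitter for you
doc = Document(long_text)
index.insert(doc)
```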
Yea, if you create nodes directly, the splitting is not automatic. You'll want to call text_splitter.split_text_with_overlaps(text) before creating the nodes
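Roughly like this (a sketch; the Node import path and the node-insert method have moved around between versions, so check yours):

```python
from llama_index.data_structs.node import Node

# split first, then build one Node per chunk before inserting
text_splits = text_splitter.split_text_with_overlaps(long_text)
nodes = [Node(text=split.text_chunk) for split in text_splits]
index.insert_nodes(nodes)  # newer API; older versions may differ
```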