How do I solve this issue: "Token indices sequence length is longer than the specified maximum sequence length for this model (4846 > 1024)." The warning keeps appearing.
This is the function that I am running:
```python
def construct_index(directory_path):
    # set maximum input size
    max_input_size = 512
    # set number of output tokens
    num_outputs = 256
    # set maximum chunk overlap
    max_chunk_overlap = 20
    # set chunk size limit
    chunk_size_limit = 600
```
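The warning means the tokenized input is longer than the model's maximum context (4846 tokens vs. a 1024 limit), so the document needs to be split into chunks before indexing. A minimal, dependency-free sketch of that idea is below: it uses whitespace-separated words as a rough stand-in for tokens (a real pipeline would count tokens exactly, e.g. with a tokenizer), and the `split_into_chunks` helper name and its parameters are hypothetical, chosen to mirror the `max_input_size` and `max_chunk_overlap` settings above.

```python
# Hypothetical sketch: split long text into chunks that each stay under a
# model's maximum sequence length. Words are used here as a rough proxy
# for tokens; a real tokenizer would give exact counts.

def split_into_chunks(text, max_tokens=512, overlap=20):
    """Split `text` into word-based chunks of at most `max_tokens` words,
    repeating `overlap` words between consecutive chunks for context."""
    words = text.split()
    if not words:
        return []
    chunks = []
    step = max_tokens - overlap  # advance by chunk size minus overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break  # the final chunk already covers the end of the text
    return chunks

# Example: 1200 "tokens" split with a 512 limit and 20-word overlap.
long_text = " ".join(f"word{i}" for i in range(1200))
chunks = split_into_chunks(long_text, max_tokens=512, overlap=20)
print(len(chunks))                                   # → 3
print(all(len(c.split()) <= 512 for c in chunks))    # → True
```

Each chunk then fits within the model's limit, and the overlap keeps a little shared context across chunk boundaries.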
Hi @Nachos, I noticed you're passing "text-embedding-ada-002" to the LLMPredictor. That is an embedding model, not a completion model; the LLMPredictor needs a valid language model. See the list here: https://platform.openai.com/docs/models/gpt-3