I get this as output:
Plain Text
',} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the questions that this document can answer. \n,} of the'
That looks to me like falcon is not working well πŸ˜…

The max input size for falcon is 2048, right? That should be set on the predictor; I suspect a missing cap is causing the inference issues.


Plain Text
# Older llama_index API; HuggingFaceLLMPredictor was exported at the top level.
from llama_index import HuggingFaceLLMPredictor

llm_predictor = HuggingFaceLLMPredictor(
    max_input_size=2048,  # falcon's context window
    max_new_tokens=256,   # tokens reserved for the generated answer
    tokenizer_kwargs={"max_length": 2048},  # truncate inputs at the same limit
    ...
)
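For context, here is a minimal sketch of how that predictor plugs into a query pipeline. It assumes the older llama_index API (ServiceContext, VectorStoreIndex, SimpleDirectoryReader); the falcon checkpoint name and the ./data directory are placeholders, not from this thread.

Plain Text
# Hypothetical wiring based on the older llama_index API; the document
# loading, index type, and model names below are assumptions.
from llama_index import (
    HuggingFaceLLMPredictor,
    ServiceContext,
    SimpleDirectoryReader,
    VectorStoreIndex,
)

llm_predictor = HuggingFaceLLMPredictor(
    max_input_size=2048,
    max_new_tokens=256,
    tokenizer_kwargs={"max_length": 2048},
    tokenizer_name="tiiuae/falcon-7b-instruct",  # placeholder falcon checkpoint
    model_name="tiiuae/falcon-7b-instruct",
)

# Route all LLM calls through the capped predictor.
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)

response = index.as_query_engine().query("What questions can this document answer?")
print(response)

If the caps fix it, the repeated output above was likely the model choking on a prompt longer than its context window.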