What is the difference between defining `chunk_size_limit` within `ServiceContext.from_defaults` and defining it within `PromptHelper.from_llm_predictor`? So far I have only defined `chunk_size_limit` within `SimpleNodeParser`, as part of the `text_splitter`:

```python
splitter = TokenTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
node_parser = SimpleNodeParser(
    text_splitter=splitter,
    include_extra_info=False,
    include_prev_next_rel=True,
)
```
The `ServiceContext` docs (Key Components > Customization > ServiceContext) say about the kwargs that `ServiceContext(chunk_size=512)` passes this on to the default node parser, which is like doing `SimpleNodeParser(chunk_size=512, chunk_overlap=0)` - am I wrong?
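If that is right, the two setups below should split documents identically. Here is a sketch of my mental model (the 0.6.x import paths, and the assumption that `from_defaults` also accepts a prebuilt `node_parser`, are mine):

```python
from llama_index import ServiceContext
from llama_index.langchain_helpers.text_splitter import TokenTextSplitter
from llama_index.node_parser import SimpleNodeParser

# Option A: hand the chunk size to the service context and let it build
# the default node parser internally
ctx_a = ServiceContext.from_defaults(chunk_size_limit=512)

# Option B: build the node parser explicitly with the same numbers
splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=0)
parser = SimpleNodeParser(text_splitter=splitter)
ctx_b = ServiceContext.from_defaults(node_parser=parser)

# my question boils down to: do ctx_a and ctx_b behave the same?
```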
Then there is `LLM(max_tokens=256)` & `PromptHelper(num_output=256)`. The docs say "Number of outputs for the LLM" in one place and "set number of output tokens" somewhere else, but I don't understand what this means in practice. Does this define the length of the final answer?
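Here is how I currently read it, as a sketch (the `max_input_size=4096` for gpt-3.5-turbo, the `max_chunk_overlap` value, and the two matching 256s are my assumptions):

```python
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor, PromptHelper, ServiceContext

# max_tokens caps how many tokens the model may generate per completion
llm_predictor = LLMPredictor(
    llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", max_tokens=256)
)

# num_output, as I read the docs, reserves the same number of tokens in the
# context window when LlamaIndex packs retrieved chunks into the prompt,
# which is why the two values are supposed to match
prompt_helper = PromptHelper(max_input_size=4096, num_output=256, max_chunk_overlap=20)

service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor, prompt_helper=prompt_helper
)
```

So is `max_tokens` the hard cap on the answer length, and `num_output` only the budgeting figure? That is the part I would like confirmed.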
Also, was there something in 0.6.14 that changed how `prompt_helper` is used? I had:

```python
prompt_helper = PromptHelper.from_llm_predictor(
    llm_predictor=llm_predictor, chunk_size_limit=1024
)
```
I upgraded to 0.6.14 and kept langchain at version 0.0.205, but then choosing the newer gpt-3.5-turbo-16k model gives an error:

```
[ERROR] ValueError: Unknown model: gpt-3.5-turbo-16k. Please provide a valid OpenAI model name.
Known models are: gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo,
gpt-3.5-turbo-0301, text-ada-001, ada, text-babbage-001, babbage, text-curie-001,
curie, davinci, text-davinci-003, text-davinci-002, code-davinci-002,
code-davinci-001, code-cushman-002, code-cushman-001
```
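For completeness, this is the minimal change that triggers it - the only difference from my working setup below is the model name (my guess is that the known-models list comes from a hard-coded model-to-context-size table that predates the 16k model, but I have not verified that):

```python
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor, ServiceContext

# raises "ValueError: Unknown model: gpt-3.5-turbo-16k" in my environment
llm_predictor = LLMPredictor(
    llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k", streaming=True)
)
service_context = ServiceContext.from_defaults(
    chunk_size_limit=1024, llm_predictor=llm_predictor
)
```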
Going back to the gpt-3.5-turbo model suddenly gives me the error `nltk package not found, please run pip install nltk` - even though nltk is the first item in the requirements.txt file for my project - I'm confused. Here is my current setup:

```python
# define node parser and LLM
llm_predictor = LLMPredictor(
    llm=ChatOpenAI(temperature=0, model_name="gpt-4", streaming=True)
)
service_context = ServiceContext.from_defaults(
    chunk_size_limit=1024, llm_predictor=llm_predictor
)
text_splitter = TokenTextSplitter(chunk_size=service_context.chunk_size_limit)
node_parser = SimpleNodeParser(text_splitter=text_splitter)
```
Do I also need to pass `node_parser=node_parser` to `ServiceContext.from_defaults`?
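In other words, should the wiring look like this instead? (A sketch only - I am assuming `from_defaults` accepts `node_parser` directly.)

```python
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor, ServiceContext
from llama_index.langchain_helpers.text_splitter import TokenTextSplitter
from llama_index.node_parser import SimpleNodeParser

llm_predictor = LLMPredictor(
    llm=ChatOpenAI(temperature=0, model_name="gpt-4", streaming=True)
)
text_splitter = TokenTextSplitter(chunk_size=1024)
node_parser = SimpleNodeParser(text_splitter=text_splitter)
service_context = ServiceContext.from_defaults(
    chunk_size_limit=1024,
    llm_predictor=llm_predictor,
    node_parser=node_parser,  # passed explicitly this time
)
```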