
Updated 2 years ago

Is there a way of changing the limit on the # of tokens returned by the response?

At a glance

The community members discuss how to raise the limit on the number of tokens returned in a response. One community member suggests setting the max_tokens parameter on LangChain's OpenAI LLM class, but another says they tried it and it didn't work. The discussion turns to whether the GPTKeywordTableIndex and GPTPineconeIndex classes behave differently, and the asker eventually resolves the issue by passing the parameters into the GPTPineconeIndex constructor instead of the load_from_disk method.

Is there a way of changing the limit on the # of tokens returned by the response? Is it limited to 256 tokens or is there a way to increase it?
yeah we use langchain's llm, so you can change max_tokens in the OpenAI LLM class: https://gpt-index.readthedocs.io/en/latest/how_to/custom_llms.html
Thanks, yeah I was looking into that, I tried it and it didn't work
Initially I thought it might be because the example was for GPTKeywordTableIndex, but my code is a GPTPineconeIndex.
But both of them have the same internal architecture, so this shouldn't matter, right?
Ohh nvm I got it to work by feeding it into the GPTPineconeIndex constructor rather than the load_from_disk method, thank you!
oh weird. wait load_from_disk doesn't work?
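The fix described in the thread can be sketched as follows. This is a minimal sketch assuming the older pre-0.5 gpt-index API (GPTPineconeIndex, LLMPredictor, SimpleDirectoryReader) and the classic Pinecone client; class names, constructor signatures, and the "data" directory and index name are illustrative and may differ in current releases.

```python
# Sketch of the thread's fix: raise the response token cap by building an
# LLMPredictor around LangChain's OpenAI wrapper with a larger max_tokens,
# then pass it to the GPTPineconeIndex constructor (the poster reports that
# passing it to load_from_disk did not take effect).
import pinecone
from langchain.llms import OpenAI
from gpt_index import GPTPineconeIndex, LLMPredictor, SimpleDirectoryReader

# Lift the completion cap above the 256-token default.
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, max_tokens=512))

# Connect to an existing Pinecone index (credentials and index name are
# placeholders, not from the thread).
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")
pinecone_index = pinecone.Index("example-index")

documents = SimpleDirectoryReader("data").load_data()

# Feed llm_predictor into the constructor, not load_from_disk.
index = GPTPineconeIndex(
    documents,
    pinecone_index=pinecone_index,
    llm_predictor=llm_predictor,
)

response = index.query("Summarize the documents.")
print(response)
```

Requires an OpenAI API key and a provisioned Pinecone index, so it is shown as a non-runnable sketch of the pattern rather than a verified snippet.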