----------------------
To set up a chunk size limit and use it to query, you can follow these steps:
- Create a `ServiceContext` with a specified `chunk_size_limit`.
- Initialize an index using the `ServiceContext`.
- Query the index.
Here's an example using `GPTSimpleVectorIndex`:
```python
from gpt_index import GPTSimpleVectorIndex, ServiceContext, SimpleDirectoryReader

# Set up the chunk size limit
chunk_size_limit = 512

# Create a ServiceContext with the specified chunk size limit
service_context = ServiceContext.from_defaults(chunk_size_limit=chunk_size_limit)

# Load documents
documents = SimpleDirectoryReader('../paul_graham_essay/data').load_data()

# Initialize the index with the ServiceContext
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)

# Query the index
query = 'What is most interesting about this essay?'
answer = index.query(query)

print(answer.get_formatted_sources())
print('query was:', query)
print('answer was:', answer)
```
In this example, the `chunk_size_limit` is set to 512. The `ServiceContext` is created with this limit and used to initialize the `GPTSimpleVectorIndex`. Finally, the index is queried with a given question.
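To build intuition for what the chunk size limit controls, here is a minimal, library-free sketch of fixed-size chunking. Note this is an illustration only: the real splitter in gpt_index is token-aware and respects sentence boundaries, whereas this version naively counts characters, and `chunk_text` is a hypothetical helper, not part of the library.

```python
def chunk_text(text: str, chunk_size_limit: int) -> list[str]:
    """Naively split text into pieces of at most chunk_size_limit characters.

    Illustrative stand-in for the token-aware splitter used by gpt_index.
    """
    return [text[i:i + chunk_size_limit]
            for i in range(0, len(text), chunk_size_limit)]

# Stand-in for a loaded document of 1200 characters
essay = "x" * 1200
chunks = chunk_text(essay, 512)

print(len(chunks))              # number of chunks produced
print([len(c) for c in chunks])
```

With a 1200-character input and a limit of 512, this yields three chunks (512 + 512 + 176 characters). A smaller `chunk_size_limit` produces more, finer-grained chunks, which generally improves retrieval precision at the cost of more embedding calls.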
Source: Building a Chatbot (https://gpt-index.readthedocs.io/en/latest/guides/tutorials/building_a_chatbot.html)
----------------------
Note: you can ask me a follow-up question by @-mentioning me again :speech_balloon:
----------------------