Was the documentation just from one big file?
Something that usually helps is splitting large documents into chapters or sections before indexing. If the sections are distinct enough, they can even be separate indexes, fronted by something like a router query engine that picks the right one per query.
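To make the routing idea concrete, here is a toy sketch of the pattern (not the actual LlamaIndex RouterQueryEngine API): each section gets its own index, and a selector decides which one should answer. The index names and keyword lists are made up; a real router would use an LLM or embeddings to select.

```python
# Hypothetical router sketch: map each doc section to its own "index"
# and pick one per query. A naive keyword selector stands in for the
# LLM-based selector a real router query engine would use.
SECTION_KEYWORDS = {
    "install_index": ["install", "setup", "pip"],
    "api_index": ["function", "class", "endpoint"],
}

def route(query: str) -> str:
    """Return the name of the index that should handle this query."""
    q = query.lower()
    for index_name, keywords in SECTION_KEYWORDS.items():
        if any(kw in q for kw in keywords):
            return index_name
    return "default_index"  # fallback when no section matches
```

The point is just that narrower, per-section indexes tend to retrieve cleaner context than one big index over everything.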
You can also customize the chunking logic a bit. By default, documents are split into chunks of 1024 tokens, with a small overlap between consecutive chunks.
from llama_index import GPTVectorStoreIndex, ServiceContext

# Bump the chunk size above the 1024-token default
service_context = ServiceContext.from_defaults(chunk_size=1500)
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
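If it helps to see what chunk_size and overlap actually do, here is a toy illustration in plain Python. Real splitters work on tokens and try to respect sentence boundaries; this sketch just slices a list of words, and the function name is made up.

```python
# Toy fixed-size chunking with overlap: each chunk repeats the last
# `overlap` items of the previous one, so context isn't cut mid-thought.
def chunk(words, size, overlap):
    step = size - overlap  # how far the window advances each time
    return [words[i:i + size] for i in range(0, len(words), step)
            if words[i:i + size]]
```

Larger chunks mean fewer, broader pieces of context per query; more overlap reduces the chance a relevant sentence gets split across a chunk boundary.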
You can also increase the top_k instead of using larger chunks (the default is similarity_top_k=2):
query_engine = index.as_query_engine(similarity_top_k=3)
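For intuition on what similarity_top_k controls, here is a self-contained sketch of top-k retrieval: embed the query, score every chunk, keep the k best. The vectors below are made-up toy embeddings, not real model output.

```python
# Sketch of top-k similarity retrieval with cosine similarity.
# Toy 3-dim "embeddings" stand in for real embedding-model vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

chunks = {
    "chunk_a": [0.9, 0.1, 0.0],
    "chunk_b": [0.1, 0.9, 0.1],
    "chunk_c": [0.8, 0.2, 0.1],
}

def top_k(query_vec, chunks, k):
    """Return the names of the k chunks most similar to the query."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]),
                    reverse=True)
    return ranked[:k]
```

Raising k pulls more chunks into the prompt, which helps when the answer is spread across several places but costs more tokens per query.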