Find answers from the community

decentralizer
Joined September 25, 2024
Hi, I have 2 different simple vector indices, and I created a composable graph on top of these 2 indices. When I'm dealing with just one index, I'm able to put QA_PROMPT_TMPL in my query function; however, I couldn't find a way to do this for the composable graph index. Any suggestions?
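One legacy-API approach (hedged — the composable graph interface has changed across LlamaIndex versions, so treat this as a sketch rather than the definitive call) is to pass per-index query_configs and put the custom template into query_kwargs, the same way text_qa_template works for a single index:

# Sketch only: assumes the version in use still accepts query_configs,
# and that QA_PROMPT_TMPL is the template string already defined elsewhere.
from llama_index import QuestionAnswerPrompt

QA_PROMPT = QuestionAnswerPrompt(QA_PROMPT_TMPL)

query_configs = [
    {
        "index_struct_type": "simple_dict",   # struct type of the simple vector subindices
        "query_mode": "default",
        "query_kwargs": {
            "text_qa_template": QA_PROMPT,    # same template used on a single index
            "similarity_top_k": 1,
        },
    },
]

response = graph.query(query_str, query_configs=query_configs)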
1 comment
Hmm, this would work, but we are exporting multiple channels within a server. It might be a little tricky to create separate indices, as the time range of the conversations can span over two years in some cases. It's a shame that GPT is not really good with dates.
2 comments
Hi,

I have a simple vector index that I created with chunk_size_limit=1024.

The input prompt itself that I pass to the query function is ~5000 tokens. I tried using prompt_helper (below) to create chunks, but I think that is useful when you create an index, not while making the query call.



index = GPTListIndex.load_from_disk('./index.json')
max_input_size = 4096
num_output = 256
max_chunk_overlap = 50
prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-002"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
response = index.query(query_str, mode="default", response_mode="default", service_context=service_context)


The error that I'm getting is:
Got a larger chunk overlap (200) than chunk size (-2877), should be smaller.

Any suggestions?
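Two things stand out in the snippet above, hedged against how PromptHelper behaved in that version of LlamaIndex: prompt_helper is constructed but never passed to ServiceContext.from_defaults, which is consistent with the error reporting the default overlap of 200 rather than the configured 50; and even with it wired in, a ~5000-token prompt cannot fit into text-davinci-002's ~4k context, which is what drives the computed chunk size negative. A sketch of the wiring:

# Sketch: pass the PromptHelper into the ServiceContext so its settings apply.
service_context = ServiceContext.from_defaults(
    llm_predictor=llm_predictor,
    prompt_helper=prompt_helper,  # without this, the default overlap (200) is used
)

# Even then, the room left for context chunks is roughly
#   max_input_size - num_output - tokens(query_prompt)
# which is negative for a ~5000-token prompt against a 4096-token window,
# so the prompt itself must be shortened or a larger-context model used.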
1 comment
If I pass the documents directly:

PDFReader = download_loader("PDFReader")
loader = PDFReader()
document1 = loader.load_data(file=Path('./file1.pdf'))
document2 = loader.load_data(file=Path('./file2.pdf'))
graph_builder = QASummaryGraphBuilder(service_context=service_context_gpt4)
graph = graph_builder.build_graph_from_documents(documents=[document1, document2])

'list' object has no attribute 'get_text'
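The likely cause is not specific to the graph builder: loader.load_data returns a list of Document objects, so [document1, document2] is a list of lists, and the builder ends up calling get_text on an inner list. A minimal sketch that flattens the lists first (same names as above):

# load_data returns a list of Documents; concatenate the lists rather than nest them.
documents = document1 + document2  # one flat list of Document objects
graph = graph_builder.build_graph_from_documents(documents)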
2 comments