Updated 3 months ago

Over multiple indices

To compose over multiple indices, I summarized it like this:
Plain Text
indexList = []
indexSummaries = []
# loop over the per-document node lists (documents_nodes is assumed to be built elsewhere)
for nodes in documents_nodes:
    index = GPTTreeIndex.from_documents(nodes, service_context=service_context)
    indexList.append(index)
    index_summary = index.query(query_str, response_mode="tree_summarize")
    indexSummaries.append(index_summary)

graph = ComposableGraph.from_indices(
    GPTTreeIndex,
    indexList,
    index_summaries=indexSummaries,
)
17 comments
I think when generating summaries using a tree index, you should use mode="summarize" instead of response_mode="tree_summarize", although newer versions may have made the two options equivalent 😅

If your summaries are too similar, you might have better luck combining your nodes into fewer indexes in the graph?
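For reference, the suggested call on the older tree-index query API would look something like this (the prompt string is a placeholder):
Plain Text
# summarize a single tree index using the suggested mode
index_summary = index.query(
    "Summarize this document.",  # placeholder summary prompt
    mode="summarize",
)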
When I did it the way you said, the amount of information increased. Thank you.

By the way, running the program this way results in an error at query time. Recreating save_path.json solves it, but is this due to llama-index?
Plain Text
# imports for the older llama_index / gpt_index API used in this thread
from llama_index import GPTTreeIndex, LLMPredictor, PromptHelper, ServiceContext
from llama_index.composability import ComposableGraph
from llama_index.indices.query.schema import QueryConfig, QueryMode
from langchain.llms import OpenAI

graph = ComposableGraph.from_indices(
    GPTTreeIndex,
    indexList,
    index_summaries=indexSummaries,
)
graph.save_to_disk("save_path.json")

query_configs = [
    QueryConfig(
        index_struct_type="tree",
        query_mode=QueryMode.DEFAULT,
        query_kwargs={
            "child_branch_factor": 1
        }
    ),
]

max_input_size = 3500
num_output = 512
max_chunk_overlap = 50
prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
# define LLM (gpt-3.5-turbo via langchain's OpenAI wrapper)
llm_predictor_chatgpt = LLMPredictor(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo", max_tokens=1500))
# llama_logger is assumed to be defined elsewhere
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor_chatgpt, prompt_helper=prompt_helper, chunk_size_limit=1500, llama_logger=llama_logger)

query = ""
response = graph.query(query, query_configs=query_configs, service_context=service_context)
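Note: the snippet saves the graph but keeps querying the in-memory object. If the graph is reloaded in a later run, the older API pairs save_to_disk with load_from_disk; a sketch, not verified against this exact version:
Plain Text
# reload the composed graph from disk in a later run (older llama_index API)
graph = ComposableGraph.load_from_disk("save_path.json")
The error at query time: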
Plain Text
  File "C:\Users\\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\indices\composability\graph.py", line 145, in query
    return query_runner.query(query_str)
  File "C:\Users\\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\indices\query\query_runner.py", line 341, in query
    return query_combiner.run(query_bundle, level)
  File "C:\Users\\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\indices\query\query_combiner\base.py", line 66, in run
    return self._query_runner.query_transformed(
  File "C:\Users\\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\indices\query\query_runner.py", line 186, in query_transformed
    nodes = query_obj.retrieve(query_bundle)
  File "C:\Users\\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\indices\query\base.py", line 249, in retrieve
    nodes = self._retrieve(query_bundle, similarity_tracker=similarity_tracker)
  File "C:\Users\\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\indices\query\tree\leaf_query.py", line 382, in _retrieve
    return self._retrieve_level(
  File "C:\Users\\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\indices\query\tree\leaf_query.py", line 357, in _retrieve_level
    selected_nodes = self._select_nodes(
  File "C:\Users\\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\indices\query\tree\leaf_query.py", line 258, in _select_nodes
    self._service_context.prompt_helper.get_numbered_text_from_nodes(
  File "C:\Users\\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_index\indices\prompt_helper.py", line 214, in get_numbered_text_from_nodes
    node_text = " ".join(node.get_text().splitlines())
AttributeError: 'Response' object has no attribute 'splitlines'
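Judging from the traceback, one likely cause: index.query returns a Response object, and those Response objects were passed as index_summaries, so a summary node ends up holding a Response where a string is expected when get_numbered_text_from_nodes calls splitlines(). Casting each summary to str when collecting it may avoid this (a guess, not a confirmed fix):
Plain Text
# store plain strings as index summaries, not Response objects
indexSummaries.append(str(index_summary))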
Huh, that's kind of weird 🤔 but it only happens once, and then you can query again?
(One tip: num_output should be the same as max_tokens)
(And max input size should be 4097, the default πŸ‘Œ)
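Applied to the snippet above, those two tips would look like this (values taken from this thread):
Plain Text
max_input_size = 4097  # the model default, per the tip above
num_output = 1500      # matches max_tokens passed to the LLM
max_chunk_overlap = 50
prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)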
@kapa.ai I want to import QueryConfig
Let me check the knowledge base for you. One moment! :books:
@kapa.ai I want to import llm_predictor_chatgpt
Let me check the knowledge base for you. One moment! :books:
----------------------
To import QueryConfig from the GPT index library, you can use the following code:

Plain Text
from gpt_index.indices.query.schema import QueryConfig


You can find more information about QueryConfig in the documentation (https://gpt-index.readthedocs.io/en/latest/reference/query.html#gpt_index.indices.query.schema.QueryConfig).
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
Sorry @roxrow, I seem to be having a bit of trouble. Don't worry though, I've pinged my support and they will try to fix me up! :technologist:

Perhaps try again in 15mins?
@roxrow trying to use gpt-3.5?
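For the earlier question: llm_predictor_chatgpt is a local variable, not something you import. A minimal sketch of defining it for gpt-3.5-turbo, using langchain's chat-model wrapper (an assumption; the thread itself uses the plain OpenAI wrapper):
Plain Text
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor

# wrap gpt-3.5-turbo in an LLMPredictor; the variable name is your own choice
llm_predictor_chatgpt = LLMPredictor(
    llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", max_tokens=1500)
)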
@kapa.ai what are the different response_mode
Let me check the knowledge base for you. One moment! :books:
----------------------
There are three different response_mode options available in LlamaIndex:

  1. default: For the given index, it creates and refines an answer by sequentially going through each Node, making a separate LLM call per Node. This mode is suitable for more detailed answers.
Plain Text
response = index.query("What did the author do growing up?", response_mode="default")


  2. compact: For the given index, it compacts the prompt during each LLM call by stuffing as many Node text chunks that can fit within the maximum prompt size. If there are too many chunks to fit in one prompt, it creates and refines an answer by going through multiple prompts.
Plain Text
response = index.query("What did the author do growing up?", response_mode="compact")


  3. tree_summarize: Given a set of Nodes and the query, it recursively constructs a tree and returns the root node as the response. This mode is suitable for summarization purposes.
Plain Text
response = index.query("What did the author do growing up?", response_mode="tree_summarize")


You can find more information about these response modes in the Usage Pattern Guide (https://gpt-index.readthedocs.io/en/latest/guides/primer/usage_pattern.html#setting-response-mode).
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------