
New nodes

At a glance

The post describes a bug where the response is empty when new_nodes is empty. Community members discuss this issue and suggest potential solutions:

- If no nodes are added to the index, there is nothing to query.

- The community member needs GPT to provide meaningful responses even when no reference nodes are found.

- They suggest using second-stage processing and filtering nodes based on custom logic, but encounter an issue where the query engine returns None when all nodes are filtered out.

- Another community member points out a bug in the documentation, where the example code is missing a required parameter for the node_postprocessor.postprocess_nodes function.

- The community members discuss using LlamaIndex as a custom tool in a framework like Langchain to achieve a more chatbot-like experience, as LlamaIndex is currently focused on being a search tool.

There is no explicitly marked answer in the comments.

Useful resources
BUG!
Reference: https://gpt-index.readthedocs.io/en/latest/how_to/query/second_stage.html#using-as-independent-module-lower-level-usage
When new_nodes is empty, the response is empty.
Python
    # new_nodes comes from a node postprocessor and may be empty
    list_index = GPTListIndex(
        new_nodes,
        service_context=service_context
    )
    query_engine = list_index.as_query_engine(
        similarity_top_k=1, response_mode="compact",
        text_qa_template=TEXT_QA_TEMPLATE,
        refine_template=REFINE_TEMPLATE
    )
    response = query_engine.query(query_str)
    print(response)  # empty/None when new_nodes is empty
8 comments
If you didn't add nodes to the index, then there is nothing to query ...
I know that, but I need GPT to answer user questions whether or not a reference node is found. So what should I do?
I'm not sure I understand the full use case here 🤔
Ok, let's think step by step:
  1. I need to filter nodes (custom logic, e.g. node.score > 0.5), so I use the second-stage processing low-level usage (https://gpt-index.readthedocs.io/en/latest/how_to/query/second_stage.html#using-as-independent-module-lower-level-usage).
  2. Sometimes all nodes are filtered out, and then query_engine.query("hello!") returns None. BUT I need the query engine to return a meaningful result (like "hello, I'm a chatbot") instead of None (a possible fallback is sketched below).
  3. What should I do?
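A minimal sketch of one possible fallback, not from the thread: branch on whether any nodes survived filtering, and only build and query the index when they did. It reuses new_nodes, service_context, the templates, and query_str from the snippets above; the canned reply is illustrative, and a direct LLM call could replace it.
Python
if new_nodes:
    # Nodes survived filtering: ground the answer in them as usual.
    list_index = GPTListIndex(new_nodes, service_context=service_context)
    query_engine = list_index.as_query_engine(
        similarity_top_k=1,
        response_mode="compact",
        text_qa_template=TEXT_QA_TEMPLATE,
        refine_template=REFINE_TEMPLATE,
    )
    response = str(query_engine.query(query_str))
else:
    # All nodes were filtered out: return a canned reply instead of None.
    # A direct LLM call could go here instead for a free-form answer.
    response = "Hello! I'm a chatbot. I couldn't find anything relevant, but ask away."

print(response)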
Also, I found a bug in the documentation (https://gpt-index.readthedocs.io/en/latest/how_to/query/second_stage.html#using-as-independent-module-lower-level-usage):
the example does not pass the query bundle required by node_postprocessor.postprocess_nodes.
Plain Text
new_nodes = node_postprocessor.postprocess_nodes(resp_nodes)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...site-packages/llama_index/indices/postprocessor/node_recency.py", line 70, in postprocess_nodes
    raise ValueError("Missing query bundle in extra info.")
ValueError: Missing query bundle in extra info.
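For reference, a sketch of a call that should avoid the ValueError: build a QueryBundle from the query string and pass it through extra_info, which is what this version's node_recency.py appears to read, judging by the error message. The import path may differ across versions.
Python
from llama_index.indices.query.schema import QueryBundle

# The recency postprocessor reads the query bundle out of extra_info,
# so pass it explicitly rather than omitting it as the docs example does.
query_bundle = QueryBundle(query_str)
new_nodes = node_postprocessor.postprocess_nodes(
    resp_nodes,
    extra_info={"query_bundle": query_bundle},
)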
If you want a more chatbot-like experience, I would use llama index as a custom tool in something like langchain (rough sketch below).

Right now, llama index is focused on being a good search tool.

In the future, we have plans to add more native chat controls though 🙏🙏
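A rough sketch of that pattern with the 2023-era langchain agent API, wrapping the LlamaIndex query engine as a Tool so the agent can chit-chat on its own when the index has nothing relevant; the tool name and description are illustrative.
Python
from langchain.agents import Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# Expose the LlamaIndex query engine as one tool among the agent's options.
index_tool = Tool(
    name="document_search",
    func=lambda q: str(query_engine.query(q)),
    description="Searches the indexed documents. Use for factual questions.",
)

agent = initialize_agent(
    tools=[index_tool],
    llm=ChatOpenAI(temperature=0),
    agent="conversational-react-description",
    memory=ConversationBufferMemory(memory_key="chat_history"),
)

# Greetings are handled by the agent's LLM directly, not the index.
print(agent.run("hello!"))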
Maybe it's not an experience problem but a bug. When the search does not find a corresponding node, the second-stage engine should at least be able to reply based on the user's question, instead of returning None.
At the core of llama index, every llm response needs to be grounded by context (hence, it being a search tool)

PRs to improve this are very welcome 💪