Hello everyone, I'm doing a simple QA setup using the structure below:

0) Context text (~45,000 characters)

1) Indexing the context text with:
-- llama_index.indices.list.base.ListIndex (GPTListIndex)
-- service_context: OpenAI("GPT-3.5-Turbo") + callback_manager(LlamaDebugHandler)
-- a query engine built with QuestionAnswerPrompt + RefinePrompt

2) With the query engine created in step 1), performing a query (a simplified sketch of the whole setup is below)
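Roughly, the setup looks like this (a simplified sketch; `context_text` and the prompt template strings are placeholders, and the exact imports may differ slightly depending on the llama_index version):

```python
from llama_index import Document, GPTListIndex, ServiceContext
from llama_index.callbacks import CallbackManager, LlamaDebugHandler
from llama_index.llms import OpenAI
from llama_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt

# debug handler so I can inspect the events / nodes sent to the LLM
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
service_context = ServiceContext.from_defaults(
    llm=OpenAI(model="gpt-3.5-turbo"),
    callback_manager=CallbackManager([llama_debug]),
)

# 0) + 1) index the ~45k character context text with a list index
index = GPTListIndex.from_documents(
    [Document(text=context_text)],  # context_text = the 45,000 character string
    service_context=service_context,
)

# query engine with custom QA + refine prompts (template strings are placeholders)
query_engine = index.as_query_engine(
    text_qa_template=QuestionAnswerPrompt(qa_template_str),
    refine_template=RefinePrompt(refine_template_str),
)

# 2) run the query
response = query_engine.query("my question here")
```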
The response is not working as well as I hoped in some cases, so to understand it better I tried to dig into the structure and debug. One thing that caught my attention is that the response contains the nodes that were sent to GPT (14 nodes), and none of them has a score. So the question is: is it normal that the NodeWithScore objects returned by llama_index don't contain a score for any of the nodes?
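For reference, this is roughly how I'm looking at the nodes (assuming `response` is what `query_engine.query(...)` returns):

```python
# every NodeWithScore in the response comes back with score=None
for node_with_score in response.source_nodes:
    print(node_with_score.score)                 # -> None for all 14 nodes
    print(node_with_score.node.get_text()[:80])  # preview of the text sent to GPT
```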