I thought that llama-index was essentially a way to add extra information on top of ChatGPT or other models, so there's no way it could perform worse than not having the extra info :THONK:
When using llama-index, it's supposed to only answer using the top-k nodes it retrieved. So it might not be retrieving enough to answer the question 🤔
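If that's the issue, bumping the retrieval count is worth a try. Rough sketch (assuming a recent llama-index release and a local `data` folder with your docs; exact imports can differ by version):

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents and build a simple in-memory vector index
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Raise similarity_top_k so more nodes are retrieved and passed to the LLM
query_engine = index.as_query_engine(similarity_top_k=5)

response = query_engine.query("your question here")
print(response)
```

The default top-k is pretty small, so if the answer is spread across several chunks it can easily miss the relevant context.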