I am creating an OpenAI assistant in two ways: (a) through LlamaIndex and (b) through the native OpenAI API. In both cases I upload the same grounding documents, set the same instruction prompt, and enable the retrieval tool. However, variant (a) with LlamaIndex generates quite poor responses, while variant (b) with the native OpenAI API works like a charm. What could be causing this difference? I expected similar results, since LlamaIndex should be calling the same OpenAI APIs internally.
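Roughly what I'm doing in each case (a simplified sketch, not my exact code: the file path, model name, instructions, and question are placeholders, the `OpenAIAssistantAgent` import path may differ by LlamaIndex version, and this assumes the Assistants API version that still exposes the `retrieval` tool and `file_ids`):

```python
from llama_index.agent.openai import OpenAIAssistantAgent  # older releases: from llama_index.agent import OpenAIAssistantAgent
from openai import OpenAI

INSTRUCTIONS = "Answer questions using only the attached documents."  # placeholder prompt
DOC_PATH = "docs/grounding.pdf"                                       # placeholder document

# (a) LlamaIndex wrapper around the Assistants API
agent = OpenAIAssistantAgent.from_new(
    name="grounded-assistant",
    instructions=INSTRUCTIONS,
    openai_tools=[{"type": "retrieval"}],  # enable the built-in retrieval tool
    files=[DOC_PATH],                      # uploaded as assistant files
    verbose=True,
)
print(agent.chat("What does the document say about X?"))

# (b) native OpenAI Assistants API
client = OpenAI()
uploaded = client.files.create(file=open(DOC_PATH, "rb"), purpose="assistants")
assistant = client.beta.assistants.create(
    name="grounded-assistant",
    instructions=INSTRUCTIONS,
    tools=[{"type": "retrieval"}],
    file_ids=[uploaded.id],
    model="gpt-4-turbo-preview",  # placeholder model
)
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What does the document say about X?"
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
# ...then poll the run until it completes and read the answer from the thread messages
```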