Is it possible to use LlamaIndex as a utility toolkit? Meaning, instead of calling chains like LLMChain and PromptTemplate, call openai.ChatCompletion.create directly, and manually create my own context, memory, etc.?
no_text: Only runs the retriever to fetch the nodes that would have been sent to the LLM, without actually sending them. They can then be inspected by checking response.source_nodes. (The response object is covered in more detail in the LlamaIndex docs.)
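A minimal sketch of that flow, assuming the pre-1.0 openai SDK (which has openai.ChatCompletion.create) and a 0.x LlamaIndex where as_query_engine accepts response_mode="no_text". The "./data" path, model name, and prompt wording are placeholders:

```python
import openai  # pre-1.0 SDK; reads OPENAI_API_KEY from the environment
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Build an index over local files ("./data" is a placeholder path).
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# no_text: LlamaIndex runs only the retriever; no LLM call is made.
query = "What does the design doc say about caching?"
response = index.as_query_engine(response_mode="no_text").query(query)

# The retrieved chunks are on response.source_nodes (NodeWithScore objects).
# get_content() works on newer 0.x releases; older ones use get_text().
context = "\n\n".join(n.node.get_content() for n in response.source_nodes)

# Now call OpenAI directly, assembling the context and prompt by hand.
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ],
)
print(completion.choices[0].message["content"])
```

To add memory, you would keep appending to the messages list yourself (or truncate/summarize it) between turns, since nothing in this flow manages conversation state for you.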
Would using no_text and then calling openai directly just work for me?