The community members are discussing the possibility of using the LlamaIndex library as a utility toolkit, instead of relying on higher-level chain abstractions such as LangChain's LLMChain and PromptTemplate. They explore the idea of calling the OpenAI library's openai.ChatCompletion.create function directly and managing the context, memory, and other components by hand.
The comments indicate that the individual components of LlamaIndex can be used on their own, including the base-level LLM, retrievers, response synthesizers, memory, and other modules. However, the community members note that LlamaIndex still needs to know how to call the OpenAI library, and this is done through the LLM object; anyone who wants to customize that behavior can write their own LLM class.
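As a rough sketch of that idea, following the CustomLLM pattern from the LlamaIndex docs of that era (pre-0.10 import paths to match the legacy pre-1.0 openai client referenced in the thread; the DirectOpenAILLM name, model choice, and metadata values are placeholders):

```python
from typing import Any

import openai
from llama_index.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.llms.base import llm_completion_callback

MODEL = "gpt-3.5-turbo"  # placeholder model name


class DirectOpenAILLM(CustomLLM):
    """Routes every completion through a hand-rolled OpenAI call."""

    @property
    def metadata(self) -> LLMMetadata:
        # Tells LlamaIndex how much room it has when packing prompts.
        return LLMMetadata(context_window=4096, num_output=256, model_name=MODEL)

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        # Full control over the outgoing request: messages, parameters, etc.
        result = openai.ChatCompletion.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        return CompletionResponse(text=result["choices"][0]["message"]["content"])

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs: Any) -> CompletionResponseGen:
        # Streaming is omitted in this sketch.
        raise NotImplementedError
```

An instance of such a class can then be handed to LlamaIndex (in pre-0.10 releases, via ServiceContext.from_defaults(llm=...)) so that every internal LLM call goes through this code.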
The community members also discuss the no_text response mode in LlamaIndex, which runs the retriever to fetch the nodes that would have been sent to the LLM, without actually sending them. They suggest that using no_text and then calling the OpenAI library directly could be a viable approach.
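A minimal sketch of that approach, again assuming pre-0.10 LlamaIndex imports and the legacy openai.ChatCompletion.create API; the data/ directory, model name, and question are placeholders:

```python
import openai
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Build an index over local files ("data/" is a placeholder path).
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# no_text: retrieval runs, but nothing is sent to an LLM.
query_engine = index.as_query_engine(response_mode="no_text")
question = "What does the design doc say about caching?"
response = query_engine.query(question)

# These are the nodes that *would* have been sent to the LLM.
context = "\n\n".join(n.node.get_content() for n in response.source_nodes)

# Now call OpenAI directly, with hand-rolled context and prompt.
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only this context:\n" + context},
        {"role": "user", "content": question},
    ],
)
print(completion["choices"][0]["message"]["content"])
```

Memory could be layered on the same way, by appending prior turns to the messages list before each call.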
Is it possible to use LlamaIndex as a utility toolkit? Meaning, instead of calling chains like LLMChain and PromptTemplate, call OpenAI's openai.ChatCompletion.create directly, and manually create my context, memory, etc.?
no_text: Only runs the retriever to fetch the nodes that would have been sent to the LLM, without actually sending them. The retrieved nodes can then be inspected by checking response.source_nodes. The response object is covered in more detail in Section 5.
Would using no_text and then calling openai directly just work for me?