
At a glance

The community members discuss whether LlamaIndex can be used as a utility toolkit: instead of relying on higher-level abstractions such as LangChain's LLMChain and PromptTemplate, they want to call the OpenAI library's openai.ChatCompletion.create directly and manually create the context, memory, and other components.

The comments indicate that it is possible to use the individual components of LlamaIndex, including the base-level LLM, retrievers, response synthesizers, memory, and other modules. However, the community members note that LlamaIndex needs to know how to call the OpenAI library, and this is done through the LLM object. They suggest that one could write their own LLM class if they want to customize the behavior.

The community members also discuss the no_text response mode in LlamaIndex, which runs only the retriever to fetch the nodes that would have been sent to the LLM, without actually sending them. They suggest that using no_text and then calling the OpenAI library directly is a viable approach.

Is it possible to use LlamaIndex as a utility toolkit? Meaning, instead of calling chains (LLMChain and PromptTemplate), call openai.ChatCompletion.create directly, and manually create my context, memory, etc.?
You can use every single component in LlamaIndex individually.

You can call the base-level LLM directly:
https://gpt-index.readthedocs.io/en/stable/core_modules/model_modules/llms/usage_standalone.html
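
For example, here is a minimal sketch based on that docs page. It assumes a LlamaIndex version matching those stable docs (circa 0.9) and an OPENAI_API_KEY set in the environment:

```python
# Call the LLM on its own: no index, no query engine, no chains.
from llama_index.llms import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")
resp = llm.complete("Paul Graham is ")
print(resp)
```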

Basically any base-level component can be used like this (retrievers, response synthesizers, memory, node postprocessors, embeddings, and LLMs).
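
As an illustration, a retriever can run standalone, fetching nodes without any LLM call. This is a sketch with assumed placeholders (the "data" directory and the query); note that building the index still makes embedding calls:

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Build an index over local files ("data" is a placeholder directory).
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Use the retriever as a standalone component: it fetches nodes, no LLM call.
retriever = index.as_retriever(similarity_top_k=2)
nodes = retriever.retrieve("What did the author do growing up?")
for n in nodes:
    print(n.score, n.node.get_content()[:200])
```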
Thanks! Is it possible to use LlamaIndex without the base-level LLM and just use OpenAI's library?
Not really. LlamaIndex needs to know how to call OpenAI, and that happens through the LLM object.

You could just write your own LLM class if there's something you want to customize.
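
A rough sketch of what that could look like, using the CustomLLM base class from the same docs era. The class name, model choice, and error handling here are illustrative assumptions, not something from the thread:

```python
import openai  # pre-1.0 openai client, matching the question
from llama_index.llms import (
    CompletionResponse,
    CompletionResponseGen,
    CustomLLM,
    LLMMetadata,
)


class DirectOpenAI(CustomLLM):
    """Hypothetical LLM class that calls openai.ChatCompletion.create itself."""

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(model_name="gpt-3.5-turbo")

    def complete(self, prompt: str, **kwargs) -> CompletionResponse:
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return CompletionResponse(text=resp["choices"][0]["message"]["content"])

    def stream_complete(self, prompt: str, **kwargs) -> CompletionResponseGen:
        # Streaming is omitted from this sketch.
        raise NotImplementedError
```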
no_text: only runs the retriever to fetch the nodes that would have been sent to the LLM, without actually sending them. The retrieved nodes can then be inspected by checking response.source_nodes. (The response object is covered in more detail in Section 5 of the docs.)

Would using no_text and then calling OpenAI directly work for me?
Yeah, that works. Just take the text from the nodes and go from there.
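
Putting the pieces together, here is a hedged end-to-end sketch of the no_text approach. The directory, question, and prompt wording are placeholder assumptions, and it uses the pre-1.0 openai client from the original question:

```python
import openai
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# no_text: retrieval runs, but nothing is sent to LlamaIndex's LLM.
query_engine = index.as_query_engine(response_mode="no_text")
question = "What did the author do growing up?"
response = query_engine.query(question)

# Take the text from the retrieved nodes and build the context manually.
context = "\n\n".join(n.node.get_content() for n in response.source_nodes)

# Then call OpenAI directly instead of going through LlamaIndex's LLM.
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(completion["choices"][0]["message"]["content"])
```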