----------------------
Yes, it is possible to create multiple pipelines using the same Large Language Model (LLM). In LlamaIndex, LLMs are a core component: they can be used as standalone modules or plugged into other core LlamaIndex modules such as indices, retrievers, and query engines. They are always used during the response synthesis step, and depending on the type of index being used, they may also be used during index construction, insertion, and query traversal.
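For standalone use, you can call the LLM directly, outside of any pipeline. A minimal sketch, assuming the same legacy (pre-0.10) llama_index package and OpenAI wrapper used below; the prompt is illustrative:
from llama_index.llms import OpenAI

llm = OpenAI(temperature=0.1, model="gpt-4")

# Call the LLM directly, with no index, retriever, or query engine involved.
response = llm.complete("What is a retrieval-augmented pipeline?")
print(response.text)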
For instance, you can instantiate an LLM and pass it to a ServiceContext, which you then pass to other stages of the pipeline. Here is an example of how you can do this:
from llama_index.llms import OpenAI
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext

# Instantiate the LLM once.
llm = OpenAI(temperature=0.1, model="gpt-4")

# Bundle the LLM into a ServiceContext so downstream components use it.
service_context = ServiceContext.from_defaults(llm=llm)

# Load documents and build an index backed by this LLM.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(
    documents, service_context=service_context
)
You can repeat this process to create multiple pipelines that share the same LLM instance, as in the sketch below.
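A minimal sketch of two pipelines sharing one LLM, assuming the same legacy (pre-0.10) llama_index package as above; the SummaryIndex, the "data" directory, and the queries are illustrative choices:
from llama_index.llms import OpenAI
from llama_index import (
    VectorStoreIndex,
    SummaryIndex,
    SimpleDirectoryReader,
    ServiceContext,
)

# One LLM instance shared by every pipeline.
llm = OpenAI(temperature=0.1, model="gpt-4")
service_context = ServiceContext.from_defaults(llm=llm)

documents = SimpleDirectoryReader("data").load_data()

# Pipeline 1: semantic search over a vector index.
vector_index = VectorStoreIndex.from_documents(
    documents, service_context=service_context
)
vector_query_engine = vector_index.as_query_engine()

# Pipeline 2: summarization over a summary index, reusing the same LLM
# via the shared ServiceContext.
summary_index = SummaryIndex.from_documents(
    documents, service_context=service_context
)
summary_query_engine = summary_index.as_query_engine()

print(vector_query_engine.query("What does the document say about X?"))
print(summary_query_engine.query("Summarize the document."))
Both query engines inherit the ServiceContext from their index, so a single LLM instance serves both pipelines.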
----------------------
Note: you can ask me a follow-up question by @ mentioning me again :speech_balloon:
----------------------