Is it possible to create multiple pipelines using the same LLM?

----------------------
Yes, it is possible to create multiple pipelines using the same Large Language Model (LLM). In LlamaIndex, LLMs are a core component: they can be used as standalone modules or plugged into other core LlamaIndex modules such as indices, retrievers, and query engines. They are always used during the response synthesis step, and, depending on the type of index being used, they may also be used during index construction, insertion, and query traversal.

For instance, you can instantiate an LLM once and pass it to a ServiceContext, which you then pass to the other stages of the pipeline. Here is an example:

Plain Text
from llama_index.llms import OpenAI
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext

# Instantiate the LLM once
llm = OpenAI(temperature=0.1, model="gpt-4")

# Wrap it in a ServiceContext so downstream components use it
service_context = ServiceContext.from_defaults(llm=llm)

# Load documents and build an index backed by the shared LLM
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(
    documents, service_context=service_context
)


You can repeat this process, reusing the same llm object, to create multiple pipelines backed by a single LLM.
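For example, sharing one LLM instance across two different pipelines might look like the sketch below. It uses the same legacy ServiceContext-style API as the snippet above; the second index type (SummaryIndex) and the two query engines are illustrative assumptions, not something from the original answer:

```python
from llama_index.llms import OpenAI
from llama_index import (
    VectorStoreIndex,
    SummaryIndex,
    SimpleDirectoryReader,
    ServiceContext,
)

# One LLM instance, shared by everything below
llm = OpenAI(temperature=0.1, model="gpt-4")
service_context = ServiceContext.from_defaults(llm=llm)

documents = SimpleDirectoryReader("data").load_data()

# Two separate pipelines, both using the same LLM via the shared ServiceContext
vector_index = VectorStoreIndex.from_documents(documents, service_context=service_context)
summary_index = SummaryIndex.from_documents(documents, service_context=service_context)

# Each index gets its own query engine; response synthesis in both
# is performed by the single shared LLM
vector_engine = vector_index.as_query_engine()
summary_engine = summary_index.as_query_engine()
```

Because both pipelines hold a reference to the same llm object, you pay for only one client configuration, and changing the model or temperature in one place affects every pipeline that uses it.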
----------------------
Note: you can ask me a follow-up question by @ mentioning me again :speech_balloon:
----------------------