
How do I use LangChain document loaders with LlamaIndex?

At a glance

The post asks how to use LangChain document loaders with LlamaIndex. A community member responds that LlamaIndex lets you use any data loader, including those from LangChain, as an "on-demand" data query tool within a LangChain agent, and provides an example of wiring a LlamaIndex query engine into a LangChain agent.

In a follow-up comment, another community member asks how to use the E5-large embedding model with LlamaIndex. The response notes that the specific setup code for E5-large is not available, gives a general example of configuring an embedding model in LlamaIndex, and advises consulting the LlamaIndex documentation or the model's own documentation for detailed setup instructions.

@kapa.ai How do I use LangChain document loaders with LlamaIndex?
5 comments
Please stand by @cablecutter as I search the knowledge base. I normally take about 20 seconds or so. :wave:
----------------------
LlamaIndex allows you to use any data loader, including those from LangChain, as an "on-demand" data query Tool within a LangChain agent; the loader can live in the LlamaIndex core repository or in LlamaHub. The Tool loads data with the data loader, indexes it, and queries it, returning the response in an ad-hoc manner.
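
For example, any LlamaHub reader can be wrapped directly as such an on-demand tool. The following is a sketch based on the OnDemandLoaderTool tutorial; the Wikipedia reader is illustrative, and import paths may vary across llama_index versions:

Python
from llama_index import download_loader
from llama_index.tools.ondemand_loader_tool import OnDemandLoaderTool

# Download any LlamaHub reader; WikipediaReader is just an example.
WikipediaReader = download_loader("WikipediaReader")

tool = OnDemandLoaderTool.from_defaults(
    WikipediaReader(),
    name="Wikipedia Tool",
    description="A tool for loading and querying articles from Wikipedia",
)

# Convert it into a LangChain-compatible tool for use in an agent.
lc_tool = tool.to_langchain_tool()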

Alternatively, here is an example of how you can expose a LlamaIndex query engine as a Tool for a LangChain agent:

Python
from llama_index.langchain_helpers.agents import (
    IndexToolConfig,
    LlamaIndexTool,
)

# `query_engine` is assumed to already exist,
# e.g. query_engine = index.as_query_engine()
tool_config = IndexToolConfig(
    query_engine=query_engine,
    name="Vector Index",
    description="useful for when you want to answer queries about X",
    tool_kwargs={"return_direct": True},
)

tool = LlamaIndexTool.from_tool_config(tool_config)


In this example, a Tool is created directly from a query engine. The resulting tool can then be used within a LangChain agent, as sketched below.
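
A minimal sketch of plugging the tool into an agent, using LangChain's (since-deprecated) initialize_agent helper; the OpenAI LLM and the query string are illustrative and assume an API key in your environment:

Python
from langchain.agents import initialize_agent
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
agent = initialize_agent(
    [tool],  # the LlamaIndexTool created above
    llm,
    agent="zero-shot-react-description",
    verbose=True,
)
agent.run("Answer a query about X")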

For more detailed instructions and examples, you can refer to the LangChain integration guide and the OnDemandLoaderTool tutorial in the LlamaIndex documentation.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------
@kapa.ai Can you tell me how to use the E5-large embedding model with LlamaIndex?
Let me check the knowledge base for you. One moment! :books:
----------------------
To use the E5-large embedding model with LlamaIndex, you would typically configure it in the ServiceContext object. However, the specific setup code for the E5-large model is not provided in the extracts.

Here is a general example of how to set up an embedding model with LlamaIndex:

Python
from llama_index import ServiceContext, set_global_service_context
from llama_index.embeddings import OpenAIEmbedding

embed_model = OpenAIEmbedding()  # replace with your specific model setup
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# Optionally set a global service context to avoid passing it
# into other objects every time.
set_global_service_context(service_context)


In this example, OpenAIEmbedding() is used as a placeholder; you would replace it with the specific setup for the E5-large model, as sketched below.
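
As a concrete sketch (not from the extracts), E5-large can typically be loaded through LlamaIndex's HuggingFace embeddings wrapper; the model id and instruction kwargs below are assumptions to check against your llama_index version:

Python
from llama_index import ServiceContext
from llama_index.embeddings import HuggingFaceEmbedding

# E5 models expect "query: " / "passage: " prefixes on their inputs;
# the instruction kwargs are assumed from the legacy API.
embed_model = HuggingFaceEmbedding(
    model_name="intfloat/e5-large-v2",
    query_instruction="query: ",
    text_instruction="passage: ",
)
service_context = ServiceContext.from_defaults(embed_model=embed_model)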

Please refer to the LlamaIndex documentation or the model's own documentation for more detailed instructions on how to set up the E5-large model.
----------------------
Note: you can ask me a follow up question by @ mentioning me again :speech_balloon:
----------------------