

LlamaIndex

At a glance

The community member is using LlamaIndexTool with Langchain, but is having issues with the asynchronous functionality not working as expected. Other community members suggest that the LlamaIndex utilities are largely unmaintained, and recommend using the updated tool abstractions in LlamaIndex instead. They explain that LlamaIndex has its own agents to reduce dependencies and make it easier to maintain and develop. While Langchain compatibility will be supported, the community members advise against relying on Langchain for core functionality. Additionally, a community member inquires about contributing a PaLM agent implementation, and is advised to use the React agent instead of the OpenAIAgent, as the latter is specific to the function-calling API.

I'm using LlamaIndexTool with Langchain, but when I use async it doesn't seem to actually run asynchronously. Is this not supported fully yet, or is there something I missed?
Ohhhh these utilities are largely unmaintained. We have our own agents now
https://gpt-index.readthedocs.io/en/stable/core_modules/agent_modules/agents/modules.html

If you want to use an async query engine with langchain, create a custom async tool and use aquery()
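The pattern suggested above can be sketched as follows. This is a minimal, self-contained illustration of wrapping `aquery()` in a custom async tool; `StubQueryEngine` and `AsyncQueryTool` are hypothetical stand-ins (a real engine would come from `index.as_query_engine()`, and the tool would be registered with your LangChain agent), shown here only to demonstrate that the async path genuinely awaits instead of blocking.

```python
import asyncio

class StubQueryEngine:
    """Hypothetical stand-in for a LlamaIndex query engine; only the
    aquery() coroutine matters for this pattern."""
    async def aquery(self, query: str) -> str:
        await asyncio.sleep(0.01)  # simulate non-blocking I/O
        return f"answer to: {query}"

class AsyncQueryTool:
    """Custom async tool: exposes the engine's aquery() as a coroutine,
    so the agent's async code path truly awaits rather than calling the
    synchronous query() under the hood."""
    def __init__(self, engine):
        self.engine = engine

    async def arun(self, query: str) -> str:
        response = await self.engine.aquery(query)
        return str(response)

async def main():
    tool = AsyncQueryTool(StubQueryEngine())
    # Because arun() awaits, two queries run concurrently via gather()
    # instead of back to back.
    return await asyncio.gather(tool.arun("q1"), tool.arun("q2"))

print(asyncio.run(main()))
```

With a real LangChain tool you would supply this coroutine as the tool's async entry point, keeping the synchronous path separate.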
yeah that's a tad outdated - we have updated tool abstractions now that you can still plug into your langchain agent or llamaindex agents! https://gpt-index.readthedocs.io/en/latest/core_modules/agent_modules/tools/usage_pattern.html
Cool, thanks guys
Out of curiosity, was there a particular reason why LlamaIndex has its own agents that replace Langchain?
Fewer dependencies, easier to maintain and develop
Does that mean it's possible langchain compatibility may cease to be maintained in future?
We will always support langchain LLMs and embedding models. But depending on langchain for core functionality should probably be avoided when possible
Fair enough, thanks for the info
I'm currently working on a research paper, and part of it entails comparing different LLMs. I noticed that PaLM agents are not yet implemented. If I wanted to contribute, would I implement a PaLM version of openai_agent.py and test_openai_agent.py?
PaLM should just use the react agent actually

The only reason we have OpenAIAgent is because it uses the function-calling API rather than a react loop
(Or maybe PaLM added function calling lol not sure)
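To make the distinction concrete, here is a minimal sketch of the react loop a react agent runs, as opposed to the function-calling API: the LLM emits Thought/Action text that the agent parses, executes, and feeds back as an Observation. The scripted `scripted_llm` and `search` functions are hypothetical stand-ins for any text-only model (such as PaLM) and a tool, not LlamaIndex APIs.

```python
def scripted_llm(prompt: str) -> str:
    """Stand-in for a text-only LLM: first emits an Action, then, once an
    Observation appears in the prompt, emits a final Answer."""
    if "Observation:" not in prompt:
        return "Thought: I need to look this up.\nAction: search[llama]"
    return "Thought: I have the answer.\nAnswer: Llamas are camelids."

def search(query: str) -> str:
    """Stand-in tool the agent can call."""
    return f"results for {query}"

def react_loop(question: str, max_steps: int = 3) -> str:
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        output = scripted_llm(prompt)
        if "Answer:" in output:
            return output.split("Answer:", 1)[1].strip()
        # Parse the Action line, run the tool, feed the result back.
        action = output.split("Action:", 1)[1].strip()
        tool_input = action.split("[", 1)[1].rstrip("]")
        prompt += f"\n{output}\nObservation: {search(tool_input)}"
    return "no answer"

print(react_loop("What are llamas?"))
```

A function-calling agent like OpenAIAgent skips this text parsing entirely: the model returns a structured tool call, which is why that agent is tied to providers exposing a function-calling API.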
I see, I'll try that then