The community member is using LlamaIndexTool with Langchain, but is having issues with the asynchronous functionality not working as expected. Other community members note that the Langchain integration utilities in LlamaIndex are largely unmaintained, and recommend using LlamaIndex's updated tool abstractions instead. They explain that LlamaIndex provides its own agents to reduce dependencies and to ease maintenance and development. While Langchain compatibility will continue to be supported, the community members advise against relying on Langchain for core functionality. Additionally, a community member inquires about contributing a PaLM agent implementation, and is advised to base it on the ReAct agent rather than the OpenAIAgent, since the latter is specific to OpenAI's function-calling API.
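For reference, a minimal sketch of the LlamaIndex-native route being recommended; `QueryEngineTool` and `ReActAgent` are real LlamaIndex APIs, but the data path, model choice, and import locations shown here are illustrative and vary between llama_index versions:

```python
import asyncio

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import QueryEngineTool
from llama_index.llms.openai import OpenAI

# Build a query engine over local documents.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Wrap it with LlamaIndex's own tool abstraction rather than a Langchain tool.
docs_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="docs",
    description="Answers questions about the local document set.",
)

# ReActAgent works with any LLM, unlike OpenAIAgent, which depends on the
# function-calling API.
agent = ReActAgent.from_tools(
    [docs_tool], llm=OpenAI(model="gpt-3.5-turbo"), verbose=True
)

async def main() -> None:
    # Native async entry point on the agent.
    response = await agent.achat("Summarize the documents.")
    print(response)

asyncio.run(main())
```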
I'm using LlamaIndexTool with Langchain, but when I use async it doesn't seem to actually run asynchronously. Is this not fully supported yet, or is there something I missed?
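For context, the setup being described looks roughly like the sketch below. This is a hypothetical example assuming the legacy `llama_index.langchain_helpers` bridge; names, paths, and import locations may differ in your installed versions:

```python
import asyncio

from langchain.agents import AgentType, initialize_agent
from langchain.llms import OpenAI
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.langchain_helpers.agents import IndexToolConfig, LlamaIndexTool

# Build a LlamaIndex query engine and expose it to Langchain as a tool.
index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
tool_config = IndexToolConfig(
    query_engine=index.as_query_engine(),
    name="docs",
    description="Answers questions about the local document set.",
)
tool = LlamaIndexTool.from_tool_config(tool_config)

agent = initialize_agent(
    [tool], OpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

# Async entry point on the Langchain agent; per the discussion, the tool's
# async path may fall back to the synchronous query under the hood rather
# than running truly asynchronously.
asyncio.run(agent.arun("Summarize the documents."))
```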
I'm currently working on a research paper, and part of it entails comparing different LLMs. I noticed that PaLM agents are not yet implemented. If I wanted to contribute, would I implement a PaLM version of openai_agent.py and test_openai_agent.py?