Langchain tool access

@Logan M I have connected langchain with agents using llama hub + pinecone. How can I restrict langchain so it only replies to greeting-type messages and doesn't access public knowledge outside the agent's tools? Should I use the langchain prompt to control this?
7 comments
So you have an agent that also has tools?

Whether or not the agent uses a tool is entirely dependent on the description of the tool. So you'll need to write a good (and sometimes creative) tool description so that the agent only uses the tool when it needs it
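A minimal sketch of what that looks like, assuming the 2023-era llama_index and langchain APIs discussed in this thread; the index, data path, tool name, and description text are illustrative, not the original poster's setup:

```python
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# Index some private documents with LlamaIndex (path is illustrative)
documents = SimpleDirectoryReader("./data").load_data()
index = GPTVectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Wrap the query engine as a LangChain tool. The description is the only
# thing the agent sees when deciding whether to call the tool, so keep it
# narrow and explicit about when NOT to use it.
llama_tool = Tool(
    name="company_docs",
    func=lambda q: str(query_engine.query(q)),
    description=(
        "Useful ONLY for questions about the company's internal documents. "
        "Do not use this tool for greetings or general conversation."
    ),
)

agent = initialize_agent(
    [llama_tool],
    ChatOpenAI(temperature=0),
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=ConversationBufferMemory(memory_key="chat_history"),
    verbose=True,
)
```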
Basically, the llama index tool is working fine with the given description. And for general messages from users such as "Hi" / "Hello", langchain + the LLM takes care of the input. But users can also access public knowledge through this bot.

I want to restrict langchain to greetings and general AI conversation only. The Llama Index tool handles the other queries well.
I mean, to me that sounds like it works fine?

For general messages, it is not using llama index. But if the LLM decides it needs the tool during the conversation, it will be used.

You could change the description of the tool to something like "Useful for answering questions about X. Only use this tool if the user message contains 'hey llama index'"

This way, the user essentially has to choose to invoke the tool
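In code, that restriction is just a different description string on the same tool; a sketch continuing the one above (the trigger phrase and tool name are whatever you choose, and query_engine is the LlamaIndex query engine from the earlier sketch):

```python
from langchain.agents import Tool

# Same tool, but the description makes invocation opt-in: the agent is
# told to call it only when the user explicitly asks for it, so plain
# greetings stay with the LLM alone.
llama_tool = Tool(
    name="company_docs",
    func=lambda q: str(query_engine.query(q)),
    description=(
        "Useful for answering questions about X. Only use this tool if the "
        "user message contains 'hey llama index'."
    ),
)
```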
Hi
I have exactly the same problem as @intvijay
I created a LlamaIndex tool that I provided to langchain through create_llama_chat_agent / initialize_agent (similar to https://github.com/jerryjliu/llama_index/blob/main/examples/chatbot/Chatbot_SEC.ipynb); it works well for retrieving information from the tool
The problem is that if it doesn't find the information in the tool/index, it just uses its internal knowledge and still answers, instead of saying that it doesn't know
When using only LlamaIndex, asking a question for which no information is found in the indices makes the LLM answer something along the lines of "Based on the context information provided, there is no information available.", while with langchain it still responds with something (and possibly hallucinates)
I'm trying to get reliable information out of the indices, but with langchain plus the LlamaIndex indices it is currently possible to get a bad answer (and one that can look quite convincing if you're not an expert on the subject)
@jerryjliu0 already started this thread for my related question: https://discordapp.com/channels/1059199217496772688/1089188531949285406
This is my exact problem. I'm trying to stop langchain from answering out of its internal knowledge.
@iraadit got it. This sounds like something you could probably tweak in the outer langchain agent prompt.

In the llamaindex default prompts, we explicitly say something like "using only the context provided and without relying on prior knowledge, provide the answer". You may want to introduce some hard constraints into the langchain agent prompt as well
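One way to introduce such a constraint, as a sketch rather than the exact prompt meant here: pass a custom prefix to the conversational agent through initialize_agent's agent_kwargs, so the agent is told to rely only on its tools. The prefix wording and tool are illustrative:

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# Hypothetical hard constraint baked into the agent's system prefix.
PREFIX = (
    "You are an assistant that answers questions ONLY from the results of "
    "your tools. If the tools do not return the answer, say you do not "
    "know. Never answer from prior or internal knowledge. You may respond "
    "to greetings and small talk without using any tool."
)

agent = initialize_agent(
    [llama_tool],                      # the LlamaIndex-backed tool from the earlier sketch
    ChatOpenAI(temperature=0),
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=ConversationBufferMemory(memory_key="chat_history"),
    agent_kwargs={"prefix": PREFIX},
    verbose=True,
)
```

This doesn't make refusal guaranteed (the LLM can still ignore the instruction), but combined with a restrictive tool description it pushes the agent toward "I don't know" instead of answering from its own knowledge.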