Find answers from the community

Updated last year

Transition

Hello everyone. Hope everyones doing well.

I'm currently doing my internship and I've been tasked to do a chatbot which I've been trying to do for the past 2-3 months using LangChain but I struggling a lot understanding and implementing several things from their docs so decided to switch to LlamaIndex and try it out. Could someone help me out transition from what I had in LangChain to LlamaIndex if someone understood both?

I'm currently doing RAG that has custom prompt, memory, and retrieves from a DirectoryLoader. I've also implemented Constitutional AI to prevent Prompt Injection/Hacking.

If someone could help me out, I'd highly appreciate it as I'm currently left with 2 weeks to finish it. I can show my LangChain code as well if it helps.
W
S
18 comments
Sure, I'll try my best to help you in the transition.
Thanks for replying
Do you want me to share my code?
That would help me to understand your code and give better suggestions
created something from your code
you have two bots one to give response and other to verify if the first response is correct or not?
Thats 1 issue I'm just trying to fix but the 2nd response has constitutional AI that prevents prompt hacking
Is there a way to implement it using Llama index?
I think for this
response_2 = qa_chain({"retrieval_answer": llm_response["result"], "question": question})

you can tackle this with a OpenAI agent as well.
But arent agents known to have bad accuracy?
Thats why I've been avoiding them
You can use the LLM directly then

Plain Text
llm_response = chat_engine.chat(question)
response_2 = llm.complete("Define instructions for validating the answer:\n ADD_RETRIEVED_RESPONSE_HERE\n\nADD_QUESTION")
OpenAI agent have been pretty good. specially if you are using GPT-4-Preview model
Add a reply
Sign up and join the conversation on Discord