We are using GPTSimpleVectorIndex to retrieve responses from our indexed datasets. Our goal is to return only answers grounded in the vector database, using the OpenAI model ‘text-davinci-003’. To achieve this, we prepend the instruction ‘Match and display only the trained response.’ However, the model occasionally generates its own answers or falls back on the LLM’s general knowledge despite the added instruction. What steps can we take to resolve this, and what best practices should we follow to meet this requirement?
Note: We have indexed (“trained”) our organization’s data, and the system should not respond with anything outside that domain.
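For reference, here is a minimal sketch of how the organization data is indexed and wired to ‘text-davinci-003’. This assumes the llama_index 0.x API (where GPTSimpleVectorIndex is available); the directory and file names are illustrative, not our actual paths.

```python
# Minimal sketch (llama_index 0.x API assumed; paths are illustrative).
from langchain.llms import OpenAI
from llama_index import (
    GPTSimpleVectorIndex,
    LLMPredictor,
    ServiceContext,
    SimpleDirectoryReader,
)

# Load only the organization's own documents (directory name is hypothetical).
documents = SimpleDirectoryReader("org_data").load_data()

# Use text-davinci-003 with temperature 0 so the model improvises as little as possible.
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# Build the vector index over the organization data and persist it.
index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
index.save_to_disk("org_index.json")
```

The query is then issued against this index as shown below.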
user_input = "Please prescribe pills for head ache"
instruction = "Match and show only the trained response"
response = index.query(instruction + "\n" + user_input, response_mode="default")
This is the instruction I am using with gpt-3.5-turbo-instruct:
You are an expert Q&A system with the capability to respond like a human, to understand tone, and to reply with empathy, and you are trusted around the world. You should answer the query using the provided context information and not prior knowledge or information from the internet. Here are the rules you should follow: 1. Never directly reference the given context in your answer. 2. Never give statements like 'Based on the context', 'The context information', 'mentioned in the context information', or anything along those lines. 3. You should give answers based only on the contextual information and never give any medical advice, symptoms, or reference links. 4. If the context information has URLs, you should add them to the response. 5. You should not give responses drawn from internet data.
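For illustration, here is a sketch of how such an instruction can be plugged into the query as a custom question-answering prompt template instead of being concatenated to the user input. It assumes the llama_index 0.x QuestionAnswerPrompt API; the variable names and the org_index.json path are hypothetical.

```python
# Illustrative only: wiring the instruction above into a custom QA prompt
# template (llama_index 0.x API assumed; names and paths are hypothetical).
from langchain.llms import OpenAI
from llama_index import GPTSimpleVectorIndex, LLMPredictor, ServiceContext
from llama_index.prompts.prompts import QuestionAnswerPrompt

# The full instruction text quoted above.
SYSTEM_RULES = "You are an expert Q&A system ..."

# Standard context/question scaffold with the rules prepended;
# {context_str} and {query_str} are filled in by the index at query time.
QA_PROMPT_TMPL = (
    SYSTEM_RULES + "\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Answer the query using only the context above: {query_str}\n"
)
qa_prompt = QuestionAnswerPrompt(QA_PROMPT_TMPL)

# gpt-3.5-turbo-instruct is a completions-style model, so it can be used
# through the same LLMPredictor wrapper (temperature 0 to reduce improvisation).
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo-instruct")
)
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# Reload the previously saved index (file name is hypothetical).
index = GPTSimpleVectorIndex.load_from_disk(
    "org_index.json", service_context=service_context
)

user_input = "Please prescribe pills for head ache"
response = index.query(user_input, text_qa_template=qa_prompt, response_mode="default")
print(response)
```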
Even with this, I am still getting responses that go beyond the provided context. How can we restrict the model to answering only from the context information?