No, that's not what I mean. Let's say a user types: "who's the worst performing employee of the month?" The LLM turns this prompt into a query and extracts the required values from a DB. So far the data remains private (the schema doesn't matter), but when the extracted data is returned to the LLM for explanation, e.g. "Mr. X performed the worst.", that's the point where I'd consider it a data risk. I want to stop the data from being returned to the LLM for elaboration.
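To make the flow I have in mind concrete, here's a minimal sketch (all function names here are hypothetical placeholders, not a real library API): the LLM only ever sees the user's question and produces SQL; the query results are rendered locally with a template and never round-trip back to the model.

```python
def llm_generate_sql(question: str) -> str:
    # Placeholder: in a real system this call goes to the LLM.
    # The LLM sees only the question, never the data.
    return "SELECT name FROM employees ORDER BY score ASC LIMIT 1"

def run_query(sql: str) -> list[dict]:
    # Placeholder for the database call; mock rows stand in here.
    return [{"name": "Mr. X"}]

def answer(question: str) -> str:
    sql = llm_generate_sql(question)   # LLM involvement ends here
    rows = run_query(sql)              # raw data stays on our side
    # Render the final answer with a local template instead of
    # sending the rows back to the LLM for elaboration.
    return f"Worst performer this month: {rows[0]['name']}"
```

The trade-off is that the templated answer is less fluent than an LLM-written one, but the extracted values never enter the model's context.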