So for instance, I tell an AI that Joe got a new job working for EvilCorp.
The pre-query code would parse that statement to extract entities and relationships, then query a vector store to see whether any predefined relation or backpointer names map to what we're learning. In this case, we'd pull up "employer" and "employee". From there we can isolate the deltas that use those relations to refer to Joe, which will surface the example delta above. That gets serialized into the prompt, so your AI call could look something like the following (there's a code sketch of the whole flow after the prompt):
Prior knowledge:
As of <timestamp>, Joe works for Acme.
User statement:
Joe got a new job working for EvilCorp!
Please generate any new deltas required to update the knowledge base with respect to this user statement.
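Here's a minimal sketch of that pre-query flow, with loud caveats: embed() is a placeholder for a real embedding model, RELATION_INDEX and DELTAS are in-memory stand-ins for the vector store and the knowledge base, and the Delta shape plus every function name here is illustrative rather than a committed design.

```python
from dataclasses import dataclass

def embed(text: str) -> list[float]:
    """Placeholder: a real system would call an embedding model here."""
    return [float(ord(c)) for c in text[:16].ljust(16)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

# Stand-in for the vector store: predefined relation/backpointer names,
# each indexed by an embedding.
RELATION_INDEX = {name: embed(name)
                  for name in ["employer", "employee", "spouse", "birthplace"]}

@dataclass
class Delta:
    entity: str
    relation: str
    value: str
    timestamp: str

# Stand-in for the knowledge base: the prior delta about Joe's employer.
DELTAS = [Delta("Joe", "employer", "Acme", "2024-05-01T00:00:00Z")]

def match_relations(phrase: str, top_k: int = 2) -> list[str]:
    """Map an extracted relationship phrase onto predefined relation names."""
    q = embed(phrase)
    ranked = sorted(RELATION_INDEX,
                    key=lambda n: cosine(q, RELATION_INDEX[n]), reverse=True)
    return ranked[:top_k]

def build_prompt(statement: str, entity: str, relation_phrase: str) -> str:
    """Assemble the AI call: prior deltas for the matched relations, then the statement."""
    relations = match_relations(relation_phrase)
    prior = [d for d in DELTAS if d.entity == entity and d.relation in relations]
    # A real system would render relation-specific phrasing; this is generic.
    prior_lines = "\n".join(f"As of {d.timestamp}, {d.entity}'s {d.relation} is {d.value}."
                            for d in prior)
    return (f"Prior knowledge:\n{prior_lines}\n"
            f"User statement:\n{statement}\n"
            "Please generate any new deltas required to update the knowledge base "
            "with respect to this user statement.")

# Entity/relation extraction is assumed to have happened upstream,
# producing the entity "Joe" and the relationship phrase "works for".
print(build_prompt("Joe got a new job working for EvilCorp!", "Joe", "works for"))
```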
For complex stuff you really want schemas, though. An Employment schema, for example, could be a whole subgraph linking employer, employee, salary, reporting structure, and job responsibilities, and tracking job history over time, all in one compact summary (rough sketch below).
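Purely as an illustration of that shape (the field names and the change_job helper are hypothetical, not a proposed API):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EmploymentRecord:
    employer: str                 # "employer" relation target
    title: str                    # job responsibility / role
    salary: Optional[int]         # may be unknown
    reports_to: Optional[str]     # reporting structure, by entity name
    start: str                    # ISO timestamps for the validity interval
    end: Optional[str] = None     # None = current job

@dataclass
class EmploymentSchema:
    employee: str
    history: list[EmploymentRecord] = field(default_factory=list)

    def current(self) -> Optional[EmploymentRecord]:
        return next((r for r in self.history if r.end is None), None)

    def change_job(self, record: EmploymentRecord) -> None:
        """Close out the current job and append the new one, preserving history."""
        cur = self.current()
        if cur is not None:
            cur.end = record.start
        self.history.append(record)

joe = EmploymentSchema("Joe", [EmploymentRecord("Acme", "engineer", None, None, "2024-05-01")])
joe.change_job(EmploymentRecord("EvilCorp", "engineer", None, None, "2025-01-15"))
assert joe.current().employer == "EvilCorp"
```

The appeal is that a single statement like Joe's job change gets applied as one schema-level update, with the old record retired into history, instead of a pile of loose triples that have to stay mutually consistent.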
There are open questions about how best to implement some of these vector components and the actual flow, but do you think there's a there there?