Agentic AI

Autonomy makes the data-protection boundary sharper, not softer.

Agentic AI raises the ordinary AI questions, but it also adds drift: the system can initiate new tasks, infer new categories of personal data, cross service boundaries, and keep acting after the original user prompt has disappeared. OpenCompliance should surface those risks as typed boundaries, not bury them in policy prose.

What should become explicit

1. Purpose-bound autonomy

An agent should carry a declared task purpose, permitted action scope, and lawful-basis context. If it expands into a new processing purpose, that should trigger a fresh review path rather than silently inheriting the original prompt.
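A minimal sketch of what such a declared context could look like in code. The names here (TaskPurpose, PurposeExpansion, the example purposes and actions) are illustrative assumptions, not an existing OpenCompliance interface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskPurpose:
    purpose: str                   # declared processing purpose
    lawful_basis: str              # e.g. "legal_obligation", "contract"
    permitted_actions: frozenset   # action scope the agent may use

class PurposeExpansion(Exception):
    """Raised when the agent drifts beyond its declared purpose or scope."""

def check_action(ctx: TaskPurpose, action: str, purpose: str) -> None:
    # A new processing purpose triggers a fresh review path rather than
    # silently inheriting the original prompt's purpose.
    if purpose != ctx.purpose:
        raise PurposeExpansion(f"new purpose {purpose!r} needs review")
    if action not in ctx.permitted_actions:
        raise PurposeExpansion(f"action {action!r} outside declared scope")

def is_permitted(ctx: TaskPurpose, action: str, purpose: str) -> bool:
    try:
        check_action(ctx, action, purpose)
        return True
    except PurposeExpansion:
        return False

ctx = TaskPurpose("dsar_fulfilment", "legal_obligation",
                  frozenset({"search_crm", "draft_bundle"}))
```

The point of the frozen dataclass is that the declared scope cannot be mutated mid-run; expansion has to go through the review path.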

2. Data minimisation in motion

Real-time adaptation makes minimisation harder. OpenCompliance should model what data the agent is allowed to collect for a task, what is prohibited, and when special-category or inferred sensitive data triggers escalation instead of quiet reuse.
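A sketch of such a collection gate, under assumed category names; the allowed, prohibited, and special-category sets are hypothetical examples, not a real policy:

```python
# Per-task data-collection policy (illustrative assumptions).
ALLOWED = {"contact_details", "support_tickets"}
PROHIBITED = {"billing_history"}
SPECIAL_CATEGORY = {"health", "religion", "biometrics"}

def admit(category: str, escalations: list) -> bool:
    """Return True if the agent may collect this category for the task."""
    if category in PROHIBITED:
        return False
    if category in SPECIAL_CATEGORY:
        # Special-category or inferred sensitive data triggers escalation
        # instead of quiet reuse.
        escalations.append(category)
        return False
    # Anything not explicitly allowed is denied by default.
    return category in ALLOWED

escalations = []
ok = admit("support_tickets", escalations)
blocked = admit("billing_history", escalations)
sensitive = admit("health", escalations)
```

Note the fail-closed default: a category the policy has never seen is denied, which matches the trust model described later in this document.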

3. Rights and human intervention

If an agent can act on people, the package should show how access, rectification, erasure, objection, and human override can actually be exercised. Rights should not depend on reverse-engineering a pile of logs after the fact.

4. Explainability and reconstruction

For consequential actions, the system should be able to reconstruct what the agent saw, which tools it called, what policy gate it crossed, and where a human approved or overrode it. OpenCompliance should treat reconstruction as a first-class artifact problem.
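One way to make reconstruction a first-class artifact is a hash-chained, append-only trace, so that what the agent saw, which tools it called, and where a human intervened can be replayed and tamper-checked later. This is a sketch under assumed field names, not a prescribed format:

```python
import hashlib
import json

def append_event(trace: list, event: dict) -> None:
    """Append an event, chaining its digest to the previous entry."""
    prev = trace[-1]["digest"] if trace else ""
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    trace.append({"event": event, "digest": digest})

def verify(trace: list) -> bool:
    """Recompute the chain; any edited entry breaks every later digest."""
    prev = ""
    for entry in trace:
        body = json.dumps(entry["event"], sort_keys=True)
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

trace = []
append_event(trace, {"saw": "crm_record_17", "tool": "crm.search"})
append_event(trace, {"gate": "policy.minimisation", "result": "pass"})
append_event(trace, {"human": "reviewer_a", "decision": "approved"})
```

Because each digest covers the previous one, an after-the-fact edit to any event invalidates the rest of the chain, which is exactly the property reconstruction needs.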

5. Accuracy and hallucination control

If an autonomous system can chain outputs into later actions, one bad inference can become a series of bad decisions. The corridor should show validation, monitoring, escalation, and rollback points instead of assuming the model output is self-authenticating.
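A minimal sketch of such a corridor: each step's output is validated before the next step may consume it, and a failed check rolls the chain back instead of propagating. The steps and validators here are toy stand-ins:

```python
def run_chain(steps, validators):
    """Run steps in order; stop and roll back on the first failed check."""
    committed = []
    for step, check in zip(steps, validators):
        out = step()
        if not check(out):
            # Escalate rather than treating model output as
            # self-authenticating: nothing from this run is kept.
            return {"status": "rolled_back",
                    "kept": [],
                    "failed_at": len(committed)}
        committed.append(out)
    return {"status": "ok", "kept": committed, "failed_at": None}

# Toy chain: step 0 produces a valid value, step 1 a bad inference.
steps = [lambda: 41, lambda: "not-a-number"]
validators = [lambda v: isinstance(v, int), lambda v: isinstance(v, int)]
result = run_chain(steps, validators)
```

The design choice worth noting is that rollback discards the whole run's committed outputs, because a later step may have been planned on the strength of the bad one.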

6. Sensitive access and hostile manipulation

Agentic systems widen the attack surface. OpenCompliance should make sensitive-data access, prompt-injection resistance, adversarial testing, patching, and incident handling visible at the same level as ordinary configuration evidence.

7. Retention and deletion

Agent traces, memory, tool outputs, and generated inferences need explicit retention schedules. The site should make it clear that task continuity is not a blank cheque for indefinite storage.
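An explicit retention schedule can be as simple as a per-class table with a fail-closed default. The class names and periods below are illustrative assumptions, not regulatory guidance:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention schedule per trace class.
RETENTION = {
    "agent_memory":  timedelta(days=30),
    "tool_outputs":  timedelta(days=90),
    "inferred_data": timedelta(days=30),
    "action_trace":  timedelta(days=365),
}

def expired(kind: str, created: datetime, now: datetime) -> bool:
    """Task continuity is not a blank cheque: an unscheduled kind
    expires immediately rather than being stored indefinitely."""
    limit = RETENTION.get(kind, timedelta(0))
    return now - created >= limit

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
old = now - timedelta(days=120)
```

The deliberate default is zero retention for anything not in the schedule, so new trace kinds force an explicit decision.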

8. Supply-chain roles and transfers

Agentic systems routinely cross vendors, models, connectors, and jurisdictions. OpenCompliance should make controller, processor, recipient, and transfer boundaries explicit rather than treating the whole AI stack as one black box.
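A sketch of what an explicit role and transfer map might look like; the vendor names, regions, and contract labels are hypothetical examples:

```python
# Supplier registry with explicit roles and transfer context
# (all entries are illustrative, not real suppliers).
REGISTRY = [
    {"name": "model-host", "role": "processor", "region": "US",
     "contract": "SCC"},
    {"name": "crm-vendor", "role": "processor", "region": "EU",
     "contract": "DPA"},
    {"name": "regulator",  "role": "recipient", "region": "EU",
     "contract": None},
]

def cross_border(registry, home_region="EU"):
    """List entries that create a transfer outside the home region."""
    return [e["name"] for e in registry if e["region"] != home_region]

transfers = cross_border(REGISTRY)
```

Even this flat list is enough to make the black box inspectable: every hop the agent can take has a named role, a region, and a contract control attached.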

AI artifact checklist

Artifact 1: Task-purpose and lawful-basis record

A machine-readable statement of the agent’s declared purpose, permitted actions, lawful basis, and escalation conditions for new processing purposes or sensitive inferences.

Artifact 2: Rights-ready action log

A structured event log that makes it possible to answer access, rectification, erasure, objection, and review questions without exposing unnecessary personal data by default.
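One way to get "without exposing unnecessary personal data by default" is to key log entries by a pseudonymous subject identifier, so rights queries work while the raw identifier never enters the log. A simplistic sketch; real salt and key management is assumed away here:

```python
import hashlib

SALT = b"rotate-me"   # assumption: a real system manages salts/keys properly

def subject_key(subject_id: str) -> str:
    """Derive a pseudonymous key; the raw identifier stays out of the log."""
    return hashlib.sha256(SALT + subject_id.encode()).hexdigest()[:16]

def log_event(log: list, subject_id: str, action: str) -> None:
    log.append({"subject": subject_key(subject_id), "action": action})

def events_for(log: list, subject_id: str) -> list:
    """Answer an access/rectification question for one data subject."""
    key = subject_key(subject_id)
    return [e["action"] for e in log if e["subject"] == key]

log = []
log_event(log, "alice@example.com", "record_accessed")
log_event(log, "bob@example.com", "record_accessed")
log_event(log, "alice@example.com", "record_rectified")
```

Whoever holds the salt can still answer a data subject's question, but the log itself leaks nothing if read in isolation.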

Artifact 3: Human oversight and override trace

An explicit record of approvals, overrides, escalations, and blocked autonomous actions, especially where an outcome could have legal or similarly significant effects.

Artifact 4: Retention and deletion evidence

A task-specific retention policy plus deletion or expiry evidence for prompts, memory entries, tool outputs, inferred data, and related traces.

Artifact 5: Supplier role and transfer map

A registry of model providers, connector vendors, subprocessors, recipients, transfer paths, and contract controls so cross-border and third-party dependencies stay inspectable.

Artifact 6: Security and manipulation evidence

Access-control, prompt-injection resistance, adversarial testing, patching, and incident-response evidence tied to the agent’s actual tool and data access.

Worked scenario

Example: DSAR triage by an agentic case worker

A customer asks for access to all personal data held about them. An agentic case worker is allowed to search a support system, a CRM, and a product audit store, but not billing archives or unrelated employee systems. It can classify likely duplicates, prepare a draft bundle, and escalate legal edge cases.

What must be visible

The trace has to answer five questions:

  • What declared purpose and lawful-basis context applied to the run.
  • Which stores and tools were actually accessed, and which were blocked.
  • What the agent inferred or excluded, and where a human reviewer intervened.
  • How the resulting bundle supports access, rectification, erasure, and objection if challenged later.
  • How long the trace, memory, and exported package are retained before deletion or restricted archive.
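The five questions above can be mapped onto a structured run record, so "answerable" becomes a mechanical check rather than a forensic exercise. All field names and values here are illustrative assumptions:

```python
# One DSAR run, structured so each of the five questions maps to a field.
RUN = {
    "purpose":   {"declared": "dsar_fulfilment",
                  "lawful_basis": "legal_obligation"},
    "access":    {"used": ["support", "crm", "audit_store"],
                  "blocked": ["billing_archive", "employee_systems"]},
    "review":    {"inferred": ["likely_duplicate:t-142"],
                  "excluded": ["unrelated_ticket:t-9"],
                  "human_interventions": ["legal_edge_case:t-77"]},
    "rights":    {"supports": ["access", "rectification",
                               "erasure", "objection"]},
    "retention": {"trace_days": 365, "memory_days": 30, "export_days": 90},
}

REQUIRED = {"purpose", "access", "review", "rights", "retention"}

def answerable(run: dict) -> bool:
    """True only if the trace can answer all five questions."""
    return REQUIRED <= set(run) and all(run[k] for k in REQUIRED)
```

A run missing any one of the five fields fails the check up front, instead of the gap being discovered only when a data subject or regulator asks.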

What this protects against

Drift: purpose creep and hidden inference

Without a verifiable trace, the system can quietly expand into new tasks, data sources, or inferred sensitive categories while still looking operationally normal.

Opacity: unreviewable autonomous action

If there is no reconstructable trace, the organisation cannot show what the agent saw, did, or why it was permitted. That makes rights handling and oversight much weaker in practice.

Sprawl: silent vendor and transfer expansion

Agents that hop across models, connectors, and services can create hidden processor, recipient, and cross-border transfer paths unless those boundaries are recorded explicitly.

Propagation: error that turns into action

The danger is not only one hallucinated answer. It is a hallucinated answer that becomes a real downstream action: a workflow step, a data deletion, a denial, an escalation, or a disclosure.

What exists publicly

The current public site already makes some of this boundary visible: AI frameworks are treated as role-based and non-flattenable, the ExampleCo AI corridor is explicitly documentary-heavy, the trust model is fail-closed, and the public artifact model already distinguishes proof, attestation, and judgment.

What is still missing

OpenCompliance still needs richer live-evidence connectors, rights-operation evidence, real signer identities, sensitive-inference handling, and deeper controller/processor transfer surfaces before it can claim a strong public agentic-AI data-protection corridor.