Agentic AI
Autonomy makes the data-protection boundary sharper, not softer.
Agentic AI raises the ordinary AI data-protection questions, but it also adds drift: the system can initiate new tasks, infer new categories of personal data, cross service boundaries, and keep acting after the original user prompt has disappeared. OpenCompliance should surface those risks as typed boundaries, not bury them in policy prose.
1
Purpose-bound autonomy
An agent should carry a declared task purpose, permitted action scope, and lawful-basis context. If it expands into a new processing purpose, that should trigger a fresh review path rather than silently inheriting the original prompt.
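One way to make that concrete is a typed purpose record with a scope check. This is a minimal sketch: the names `TaskPurpose` and `requires_review` are illustrative assumptions, not part of any published OpenCompliance schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskPurpose:
    purpose: str                       # declared processing purpose
    lawful_basis: str                  # e.g. "contract", "legitimate_interests"
    permitted_actions: frozenset[str]  # action scope the agent may use

def requires_review(declared: TaskPurpose, attempted_action: str,
                    attempted_purpose: str) -> bool:
    """A new purpose or an out-of-scope action triggers a fresh review
    path instead of silently inheriting the original prompt."""
    return (attempted_purpose != declared.purpose
            or attempted_action not in declared.permitted_actions)

declared = TaskPurpose(
    purpose="invoice_reconciliation",
    lawful_basis="contract",
    permitted_actions=frozenset({"read_invoices", "draft_email"}),
)
assert not requires_review(declared, "read_invoices", "invoice_reconciliation")
assert requires_review(declared, "read_crm_contacts", "lead_scoring")
```

The point of the frozen record is that the agent cannot widen its own scope in place; any expansion has to go through the review path.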
2
Data minimisation in motion
Real-time adaptation makes minimisation harder. OpenCompliance should model what data the agent is allowed to collect for a task, what is prohibited, and when special-category or inferred sensitive data triggers escalation instead of quiet reuse.
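The allow/prohibit/escalate split could be sketched as a per-task collection policy. The category names and the default-escalate rule are assumptions made for illustration, not a recommended taxonomy.

```python
ALLOW, PROHIBIT, ESCALATE = "allow", "prohibit", "escalate"

# Illustrative special-category list; a real policy would track the
# statutory categories, including inferred ones.
SPECIAL_CATEGORIES = {"health", "biometric", "political_opinion"}

def collection_decision(category: str, allowed: set[str],
                        prohibited: set[str]) -> str:
    if category in prohibited:
        return PROHIBIT
    if category in SPECIAL_CATEGORIES:
        return ESCALATE   # sensitive or inferred data: human gate, not quiet reuse
    if category in allowed:
        return ALLOW
    return ESCALATE       # anything undeclared escalates by default

allowed = {"email_address", "invoice_amount"}
prohibited = {"location_history"}
assert collection_decision("invoice_amount", allowed, prohibited) == ALLOW
assert collection_decision("health", allowed, prohibited) == ESCALATE
assert collection_decision("location_history", allowed, prohibited) == PROHIBIT
```

Defaulting undeclared categories to escalation is the key design choice: real-time adaptation can propose new data, but it cannot quietly collect it.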
3
Rights and human intervention
If an agent can act on people, the package should show how access, rectification, erasure, objection, and human override can actually be exercised. Rights should not depend on reverse-engineering a pile of logs after the fact.
4
Explainability and reconstruction
For consequential actions, the system should be able to reconstruct what the agent saw, which tools it called, what policy gate it crossed, and where a human approved or overrode it. OpenCompliance should treat reconstruction as a first-class artifact problem.
5
Accuracy and hallucination control
If an autonomous system can chain outputs into later actions, one bad inference can become a series of bad decisions. The corridor should show validation, monitoring, escalation, and rollback points instead of assuming the model output is self-authenticating.
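The validation, escalation, and rollback points could be sketched as a gate between model output and downstream action. `validate`, `rollback`, and `escalate` are hypothetical hooks, not a prescribed interface.

```python
def run_step(output: str, validate, act, rollback, escalate):
    """Model output is not self-authenticating: it must pass validation
    before it becomes an action, and a failed action is rolled back
    rather than chained into the next step."""
    if not validate(output):
        escalate(output)     # stop the chain instead of propagating
        return None
    try:
        return act(output)
    except Exception:
        rollback(output)     # undo rather than let the error cascade
        raise

escalated = []
result = run_step(
    "refund £1,000,000",
    validate=lambda o: "£1,000,000" not in o,  # implausible-value check
    act=lambda o: "done",
    rollback=lambda o: None,
    escalate=escalated.append,
)
assert result is None and escalated == ["refund £1,000,000"]
```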
6
Sensitive access and hostile manipulation
Agentic systems widen the attack surface. OpenCompliance should make sensitive-data access, prompt-injection resistance, adversarial testing, patching, and incident handling visible at the same level as ordinary configuration evidence.
7
Retention and deletion
Agent traces, memory, tool outputs, and generated inferences need explicit retention schedules. The site should make it clear that task continuity is not a blank cheque for indefinite storage.
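An explicit schedule might look like the sketch below; the trace categories and periods are illustrative assumptions, not recommended values.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule keyed by trace type.
RETENTION = {
    "prompt": timedelta(days=30),
    "memory_entry": timedelta(days=90),
    "tool_output": timedelta(days=30),
    "inferred_data": timedelta(days=7),   # inferences expire fastest
}

def is_expired(kind: str, created_at: datetime, now: datetime) -> bool:
    """Every trace type has an explicit expiry; unknown kinds default
    to expired so nothing is kept by accident."""
    limit = RETENTION.get(kind)
    if limit is None:
        return True
    return now - created_at > limit

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
old = now - timedelta(days=40)
assert is_expired("prompt", old, now)            # past its 30-day window
assert not is_expired("memory_entry", old, now)  # still inside 90 days
```

Defaulting unknown kinds to expired is the cautious reading of "task continuity is not a blank cheque": storage has to be claimed, not assumed.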
8
Supply-chain roles and transfers
Agentic systems routinely cross vendors, models, connectors, and jurisdictions. OpenCompliance should make controller, processor, recipient, and transfer boundaries explicit rather than treating the whole AI stack as one black box.
Artifact 1
Task-purpose and lawful-basis record
A machine-readable statement of the agent’s declared purpose, permitted actions, lawful basis, and escalation conditions for new processing purposes or sensitive inferences.
Artifact 2
Rights-ready action log
A structured event log that makes it possible to answer access, rectification, erasure, objection, and review questions without exposing unnecessary personal data by default.
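A small sketch of the data-minimising default: answering a subject's access question from a structured log returns only that subject's events. The log fields are illustrative assumptions.

```python
# Hypothetical structured action log; each event is keyed to a subject.
log = [
    {"subject": "s-123", "action": "email.draft", "data_used": ["email"]},
    {"subject": "s-456", "action": "crm.update", "data_used": ["phone"]},
]

def access_report(subject_id: str) -> list[dict]:
    """Return only the requester's events; other people's data stays
    out of the response by default."""
    return [event for event in log if event["subject"] == subject_id]

assert access_report("s-123") == [
    {"subject": "s-123", "action": "email.draft", "data_used": ["email"]}
]
```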
Artifact 3
Human oversight and override trace
An explicit record of approvals, overrides, escalations, and blocked autonomous actions, especially where an outcome could have legal or similarly significant effects.
Artifact 4
Retention and deletion evidence
A task-specific retention policy plus deletion or expiry evidence for prompts, memory entries, tool outputs, inferred data, and related traces.
Artifact 5
Supplier role and transfer map
A registry of model providers, connector vendors, subprocessors, recipients, transfer paths, and contract controls so cross-border and third-party dependencies stay inspectable.
Artifact 6
Security and manipulation evidence
Access-control, prompt-injection resistance, adversarial testing, patching, and incident-response evidence tied to the agent’s actual tool and data access.
Drift
Purpose creep and hidden inference
Without a verifiable trace, the system can quietly expand into new tasks, data sources, or inferred sensitive categories while still looking operationally normal.
Opacity
Unreviewable autonomous action
If there is no reconstructable trace, the organisation cannot show what the agent saw, what it did, or why it was permitted to act. That makes rights handling and oversight much weaker in practice.
Sprawl
Silent vendor and transfer expansion
Agents that hop across models, connectors, and services can create hidden processor, recipient, and cross-border transfer paths unless those boundaries are recorded explicitly.
Propagation
Error that turns into action
The danger is not only one hallucinated answer. It is a hallucinated answer that becomes a real workflow step, data deletion, denial, escalation, or disclosure downstream.