Framework Depth

Go deep in order, not wide by slogan.

OpenCompliance now treats framework depth as a machine-readable planning surface. The core order is ISO 27001, then SOC 2, then GDPR, then IRAP. After that, the next meaningful wave is AI governance and AI security, prioritised by legal relevance, public-source reviewability, and fit with legal-tech deployer-style environments.

Core Order

1. ISO 27001

Still the strongest security-management baseline for customer diligence. OpenCompliance keeps ISO exact anchors honest by treating them as blocked until licensed review exists, while continuing private seed decomposition and public narrow-control promotion.

2. SOC 2

Still central for US buyer assurance. The current public corridor already exposes the technical overlap with ISO 27001, but exact criterion and point-of-focus publication remains explicitly blocked until licensed review exists.

3. GDPR

Public exact anchors are feasible, which makes GDPR the first place where OpenCompliance can move from family proxies toward a broader reviewed article-level layer without licensing blockers.

4. IRAP

IRAP and the public ISM controls matter because the Australian market needs a public exact-anchor path and because hosted shared-responsibility assumptions can be made explicit rather than buried in prose.

AI Priorities

Tier 1: Immediate AI frameworks

The next AI standards worth serious depth are the EU AI Act, UK ICO AI guidance, the UK AI Cyber Security Code of Practice, NIST AI RMF, NIST AI 600-1, NIST AI 100-4, ISO/IEC 5338, ISO/IEC 42001, and ISO/IEC 42005. That mix gives one binding EU regime, one UK privacy regulator lens, one UK AI security baseline, one mature voluntary control model, one GenAI-specific extension, one synthetic-content transparency layer, and three ISO lifecycle or governance candidates. The UK ICO, NIST AI 600-1, and now NIST AI 100-4 layers all have reviewed public anchors in the pilot.

Tier 2: Regional and technical follow-ons

ISO/IEC 5259-5, Australia’s Voluntary AI Safety Standard, ETSI EN 304 223, ETSI TS 104 008, ISO/IEC 23894, NIST SP 800-218A, and NIST AI 700-2 are the next meaningful layer. They are especially useful for AI data governance, AI security, continuous conformity, and evaluation-oriented operational practice. Australia and the two ETSI entries already have reviewed public anchors in the pilot, and NIST AI 700-2 now has its first reviewed evaluation anchor as well.

Tier 3: Watch list

NIST AI 800-1, ISO/IEC AWI 25704, and ISO/IEC 42006 matter, but they are lower-priority for the current OpenCompliance corridor. The first is still in draft and focused on misuse risk for dual-use foundation models. The second is useful as a process-assessment pointer but is still under development. The third is more about the certification ecosystem than first-order operator controls. The EU GPAI Code of Practice remains a narrow provider-oriented layer rather than the next operator-depth priority.

Open rule: Public review before fake completeness

Public exact anchors should only be published where the source is actually open enough to review responsibly. That means GDPR, IRAP, UK AI guidance, Australian AI guardrails, NIST, and ETSI can move faster than ISO 27001, SOC 2, or the ISO AI standards that still need licensed review.
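The open rule above is mechanical enough to express as a gate. Here is a minimal Python sketch of how such a publication check could work; the framework names are taken from this document, but the `openly_reviewable` flag and the function shape are illustrative assumptions, not the actual OpenCompliance schema.

```python
# Hypothetical sketch: gate exact-anchor publication on source openness.
# The "openly_reviewable" flag is an assumed field, not the real spec.

FRAMEWORKS = {
    "GDPR": {"openly_reviewable": True},
    "IRAP": {"openly_reviewable": True},
    "NIST AI RMF": {"openly_reviewable": True},
    "ISO 27001": {"openly_reviewable": False},  # licensed text
    "SOC 2": {"openly_reviewable": False},      # licensed criteria
}

def publishable_exact_anchors(frameworks):
    """Frameworks whose exact anchors may be published publicly."""
    return sorted(
        name
        for name, meta in frameworks.items()
        if meta["openly_reviewable"]
    )

print(publishable_exact_anchors(FRAMEWORKS))
# ['GDPR', 'IRAP', 'NIST AI RMF']
```

Anything that fails the gate stays in the private decomposition layer until licensed review exists.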

Coverage Snapshot

Current Counts: Core frameworks

ISO 27001 and SOC 2 each map to 35 public controls today, with 15 Lean-backed controls in the current public corridor. GDPR currently maps to 17 public controls with 17 reviewed exact-anchor entries. IRAP currently maps to 31 public controls with 36 reviewed exact-anchor entries and one candidate entry still marked as not yet reviewed; the reviewed entries include secure-baseline, configuration-exception, CI-policy, change-governance, access-review, and patch-exception slices.

Current Counts: AI frameworks

The AI layer now maps to eight public controls: five implemented and three planned. The exact-anchor layer now reaches across the EU AI Act, UK ICO AI guidance, the UK AI Cyber Security Code, NIST AI RMF, NIST AI 600-1, NIST AI 100-4, the Australian Voluntary AI Safety Standard, ETSI EN 304 223, ETSI TS 104 008, NIST AI 700-2, and candidate ISO AI standards including ISO/IEC 5338, ISO/IEC 42001, ISO/IEC 42005, ISO/IEC 5259-5, and ISO/IEC AWI 25704. It is still mostly documentary by design. That is honest, not a defect.
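A coverage snapshot like the counts above lends itself to a machine-readable form. The sketch below encodes the core-framework figures from this section as Python data and derives a corridor-wide total; the field names are illustrative assumptions (ISO 27001 and SOC 2 show zero reviewed exact anchors because their exact-anchor publication is blocked pending licensed review).

```python
# Hypothetical coverage-report structure; field names are assumed,
# counts are the ones quoted in the snapshot above.

COVERAGE = {
    "ISO 27001": {"public_controls": 35, "reviewed_exact_anchors": 0},
    "SOC 2":     {"public_controls": 35, "reviewed_exact_anchors": 0},
    "GDPR":      {"public_controls": 17, "reviewed_exact_anchors": 17},
    "IRAP":      {"public_controls": 31, "reviewed_exact_anchors": 36},
}

def total_public_controls(coverage):
    """Sum of public control mappings across all core frameworks."""
    return sum(entry["public_controls"] for entry in coverage.values())

print(total_public_controls(COVERAGE))
# 118
```

Keeping the counts in data rather than prose is what lets the published coverage report stay checkable as the corridor grows.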

Public Specs: Machine-readable planning layer

The public specs repo now includes both a framework priority list and a framework coverage report, so others can inspect not just what OpenCompliance already maps, but also the explicit order in which deeper work should happen.

See the repository directory.
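To make the idea concrete, here is a minimal sketch of what a machine-readable framework priority list could look like, and how a consumer might query it. The schema (`order`, `framework`, `exact_anchors` and its status values) is a hypothetical illustration, not the actual published spec format.

```python
# Illustrative priority-list entries; schema and status values are
# assumptions, the ordering mirrors the core order in this document.

PRIORITY_LIST = [
    {"order": 1, "framework": "ISO 27001", "exact_anchors": "blocked-licensed"},
    {"order": 2, "framework": "SOC 2",     "exact_anchors": "blocked-licensed"},
    {"order": 3, "framework": "GDPR",      "exact_anchors": "public"},
    {"order": 4, "framework": "IRAP",      "exact_anchors": "public"},
]

def next_public_work(priority_list):
    """First framework, in priority order, with publicly reviewable anchors."""
    for entry in sorted(priority_list, key=lambda e: e["order"]):
        if entry["exact_anchors"] == "public":
            return entry["framework"]
    return None

print(next_public_work(PRIORITY_LIST))
# GDPR
```

Publishing something in this shape is what lets contributors compute, rather than guess, where the next round of deep work should land.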

Interpretation: Useful for buyers and contributors

Buyers can see where the public proof corridor is already strongest. Contributors can see which frameworks are blocked by licensing, which are publicly reviewable now, and which AI standards are important enough to deserve the next round of detailed work.