# The Accountability Gap in AI Systems
Modern AI systems make thousands of decisions per day. A healthcare platform triages patients. A welfare system allocates benefits. An educational tool recommends learning paths. Each decision affects real people.
But when something goes wrong, a fundamental question emerges: Why did the AI make that decision?
Most organizations cannot answer this question reliably. They can show that a model was trained on certain data. They might have logs showing an API was called. But the complete chain — from input data through model inference to the final recommendation that a human acted upon — is rarely preserved in an auditable format.
This is the accountability gap. And as regulations like the EU AI Act, FDA's AI/ML guidance, and GDPR's automated decision-making provisions take effect, closing this gap is no longer optional. It is a compliance requirement.
## Introducing the Decision Proof Unit
A Decision Proof Unit (DPU) is a structured, cryptographically verifiable record that captures the complete context of an AI-assisted decision at a specific point in time.
Think of it as a "receipt" for an AI decision — but one that is tamper-evident, timestamped, and linked to every other decision in an unbreakable chain.
### Core Properties
A DPU has four fundamental properties:
- **Completeness:** It captures all inputs, model metadata, outputs, and human actions associated with a decision
- **Immutability:** Once finalized, a DPU cannot be modified without breaking the cryptographic chain
- **Traceability:** Every DPU links to its predecessor, creating a verifiable sequence of decisions
- **Portability:** DPUs export in a standardized format (JSON-LD) for cross-system verification
## Technical Architecture
### The Hash Chain
At the heart of DPU is a SHA-256 hash chain. Each decision record is linked to the previous one through a computed chain hash:
```
chainHash = SHA-256(content + previousHash + timestamp)
```
The first record in any chain uses a Genesis hash, establishing the chain's origin. Every subsequent record references the previous hash, creating an append-only sequence that is computationally infeasible to alter retroactively.
If anyone modifies a historical record, the chain breaks. The hash of the tampered record no longer matches the reference stored in the next record. This makes unauthorized modifications immediately detectable.
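The chaining and verification described above can be sketched in a few lines. The following is an illustrative TypeScript sketch built on Node's built-in `crypto` module; the record shape, function names, and the all-zeros genesis value are assumptions for illustration, not the `@cronozen/dpu-core` API.

```typescript
import { createHash } from "node:crypto";

// Hypothetical record shape; field names follow the article's formula.
interface ChainRecord {
  content: string;      // serialized decision payload
  previousHash: string; // chainHash of the preceding record
  timestamp: string;    // ISO-8601
  chainHash: string;
}

// Stand-in genesis value establishing the chain's origin.
const GENESIS_HASH = "0".repeat(64);

function computeChainHash(content: string, previousHash: string, timestamp: string): string {
  return createHash("sha256").update(content + previousHash + timestamp).digest("hex");
}

// Append-only: each new record references the chainHash of the last one.
function appendRecord(chain: ChainRecord[], content: string, timestamp: string): ChainRecord[] {
  const previousHash = chain.length ? chain[chain.length - 1].chainHash : GENESIS_HASH;
  const chainHash = computeChainHash(content, previousHash, timestamp);
  return [...chain, { content, previousHash, timestamp, chainHash }];
}

// Walk the chain and recompute every hash; any edit to a past record surfaces here.
function verifyChain(chain: ChainRecord[]): boolean {
  let expectedPrevious = GENESIS_HASH;
  for (const record of chain) {
    if (record.previousHash !== expectedPrevious) return false;
    if (record.chainHash !== computeChainHash(record.content, record.previousHash, record.timestamp)) return false;
    expectedPrevious = record.chainHash;
  }
  return true;
}
```

Because each record's hash covers the previous record's hash, flipping even one byte in an old record cascades: verification fails at the tampered record and everything after it.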
### Evidence Levels
DPUs progress through three evidence levels:
| Level | Name | Description |
|---|---|---|
| 0 | DRAFT | Initial capture, still editable |
| 1 | DOCUMENTED | Reviewed and structured, limited edits |
| 2 | AUDIT_READY | Locked, hash-chain sealed, no modifications |
This progression mirrors real-world decision workflows. A clinician drafts an assessment, reviews and documents it, then finalizes it for the official record. DPU enforces this progression at the system level: once a record reaches AUDIT_READY, the hash is sealed and any modification breaks the chain.
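The one-way progression can be sketched as a small state machine. This is an illustrative sketch, not the library's API; the type and function names are assumptions.

```typescript
// Levels mirror the table: 0 = DRAFT, 1 = DOCUMENTED, 2 = AUDIT_READY.
enum EvidenceLevel { DRAFT = 0, DOCUMENTED = 1, AUDIT_READY = 2 }

interface LeveledRecord {
  level: EvidenceLevel;
  sealed: boolean;
}

// Levels only move forward, one step at a time; AUDIT_READY is terminal.
function promote(record: LeveledRecord): LeveledRecord {
  if (record.level === EvidenceLevel.AUDIT_READY) {
    throw new Error("AUDIT_READY records are sealed and cannot change");
  }
  const next = (record.level + 1) as EvidenceLevel;
  return { level: next, sealed: next === EvidenceLevel.AUDIT_READY };
}

function canEdit(record: LeveledRecord): boolean {
  // DRAFT is freely editable, DOCUMENTED allows limited edits, AUDIT_READY none.
  return record.level !== EvidenceLevel.AUDIT_READY && !record.sealed;
}
```

Enforcing the progression in code, rather than by convention, is what makes the "no modifications after AUDIT_READY" guarantee auditable.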
### Five-Level Governance
DPU implements a five-level governance framework that evaluates every decision against organizational policies:
1. **Policy Existence:** Is there a documented policy governing this type of decision?
2. **Evidence Level:** Has sufficient evidence been collected and documented?
3. **Human Review:** Has a qualified human reviewed the AI's recommendation?
4. **Risk Threshold:** Does the decision fall within acceptable risk parameters?
5. **Dual Approval:** For high-stakes decisions, have two independent reviewers approved?
Each governance check produces a pass/fail result that is recorded in the DPU. This creates a verifiable governance trail alongside the decision trail.
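The five checks can be sketched as a function from a decision record to a list of pass/fail results. The check names follow the article; the record shape, field names, and threshold convention are assumptions for illustration.

```typescript
// Hypothetical record fields; not the @cronozen/dpu-core schema.
interface DecisionRecord {
  policyId?: string;      // governing policy, if one exists
  evidenceLevel: number;  // 0 = DRAFT, 1 = DOCUMENTED, 2 = AUDIT_READY
  reviewedBy: string[];   // identities of human reviewers
  riskScore: number;      // normalized 0..1
  highStakes: boolean;    // triggers the dual-approval check
}

interface GovernanceResult { check: string; passed: boolean; }

// Evaluate all five levels and return one result per check.
function runGovernance(record: DecisionRecord, riskThreshold: number): GovernanceResult[] {
  return [
    { check: "policy-existence", passed: record.policyId !== undefined },
    { check: "evidence-level",   passed: record.evidenceLevel >= 1 },
    { check: "human-review",     passed: record.reviewedBy.length >= 1 },
    { check: "risk-threshold",   passed: record.riskScore <= riskThreshold },
    // Dual approval only applies to high-stakes decisions, and the
    // two approvals must come from distinct reviewers.
    { check: "dual-approval",    passed: !record.highStakes || new Set(record.reviewedBy).size >= 2 },
  ];
}
```

Returning every result, rather than a single boolean, is what lets the full pass/fail trail be embedded in the DPU alongside the decision itself.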
## Why DPU Matters for Regulated Industries
### Healthcare
The FDA's guidance on AI/ML-based Software as a Medical Device (SaMD) requires manufacturers to demonstrate that their AI systems produce consistent, reliable, and traceable outcomes. DPU provides:
- **Clinical decision audit trails** linking AI recommendations to clinician actions
- **Model version tracking** ensuring reproducibility across software updates
- **Patient safety logging** that satisfies post-market surveillance requirements
### Public Sector
Government agencies deploying AI for benefit allocation, fraud detection, or case prioritization face public accountability requirements. Citizens have the right to understand how AI-influenced decisions affect them. DPU enables:
- **Algorithmic transparency** through exportable decision records
- **Audit readiness** for inspector general reviews and legislative inquiries
- **Bias documentation** showing what data and models influenced each decision
### Financial Services
Under the EU AI Act and existing financial regulations (MiFID II, PSD2), AI-assisted credit scoring, fraud detection, and investment recommendations require explainability. DPU delivers:
- **Regulatory reporting** with standardized decision proof exports
- **Customer explanation capabilities** for automated decision notifications
- **Internal audit support** for compliance and risk management teams
## Implementation: How DPU Works in Practice
### Step 1: Instrument Decision Points
Identify every point in your application where AI generates a recommendation, classification, or score that influences a human decision. These are your "decision points."
For a healthcare platform, decision points might include:
- Patient risk score generation
- Treatment recommendation display
- Triage priority assignment
- Automated alert triggering
### Step 2: Capture Decision Context
At each decision point, capture the complete context:
- **Input data:** What data was available to the model at decision time?
- **Model metadata:** Which model version, which parameters, which confidence level?
- **Output:** What did the model recommend or predict?
- **Human action:** What did the human operator do with the recommendation?
- **Timestamp:** When did each step occur?
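One possible shape for this captured context is sketched below. All field names here are illustrative assumptions, not the library's schema; the point is that every element in the list above gets a dedicated, typed slot rather than living in free-form log text.

```typescript
// Hypothetical context shape covering the five capture points above.
interface DecisionContext {
  inputs: Record<string, unknown>;   // data visible to the model at decision time
  model: { name: string; version: string; confidence: number };
  output: unknown;                   // recommendation, classification, or score
  humanAction: { actor: string; action: "accepted" | "overridden" | "deferred" };
  timestamps: { inference: string; humanAction: string }; // ISO-8601
}

// Serialize the context so it can be hashed into the chain. A production
// implementation should use a canonical JSON form so that equal contexts
// always produce identical bytes (JSON.stringify depends on key order).
function serializeContext(ctx: DecisionContext): string {
  return JSON.stringify(ctx);
}
```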
### Step 3: Build the Chain
Each captured decision is hashed and linked to the previous record:
```
Record N:
  content:      { inputs, model, output, humanAction }
  previousHash: hash(Record N-1)
  timestamp:    ISO-8601
  chainHash:    SHA-256(content + previousHash + timestamp)
```
The chain grows monotonically. New records append; old records never change.
### Step 4: Enforce Governance
Before a decision record is sealed, the five-level governance framework evaluates it:
- Does a relevant policy exist?
- Is the evidence level sufficient for this decision type?
- Has a qualified human reviewed the output?
- Is the risk within acceptable bounds?
- For high-stakes decisions, has a second reviewer approved?
Governance results are recorded within the DPU, creating a verifiable compliance trail.
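A minimal sketch of that sealing gate, assuming a simple pass/fail result shape (the names are illustrative): a record is sealed only when every check passes, and any failures travel with the record instead of being discarded.

```typescript
// Illustrative sealing gate; not the @cronozen/dpu-core API.
interface CheckResult { check: string; passed: boolean; }

function sealDecision(results: CheckResult[]): { sealed: boolean; failures: string[] } {
  const failures = results.filter(r => !r.passed).map(r => r.check);
  // Only a fully compliant record may be sealed to AUDIT_READY;
  // failed check names are kept for the compliance trail.
  return { sealed: failures.length === 0, failures };
}
```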
### Step 5: Export and Audit
DPU records export in JSON-LD format using the schema `schema.cronozen.com/decision-proof/v2`.
This standardized format allows:
- Cross-system verification by auditors
- Regulatory submission in a machine-readable format
- Integration with existing compliance and GRC (Governance, Risk, Compliance) platforms
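An export might look like the sketch below. The `@context` value comes from the schema named above (the `https://` scheme prefix is an assumption), while the `@type` name and record fields are illustrative, not the published schema.

```typescript
// Hypothetical exported record shape; field names are illustrative.
interface ProofRecord {
  id: string;
  chainHash: string;
  previousHash: string;
  timestamp: string;
  content: unknown;
}

// Emit a JSON-LD document so auditors and GRC tools can verify
// the record without knowing anything about the producing system.
function toJsonLd(record: ProofRecord): string {
  return JSON.stringify({
    "@context": "https://schema.cronozen.com/decision-proof/v2",
    "@type": "DecisionProofUnit",
    "@id": record.id,
    chainHash: record.chainHash,
    previousHash: record.previousHash,
    timestamp: record.timestamp,
    content: record.content,
  }, null, 2);
}
```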
## DPU vs. Traditional Logging
| Aspect | Traditional Logging | DPU |
|---|---|---|
| Integrity | Logs can be modified or deleted | Hash-chain makes tampering detectable |
| Completeness | Captures system events | Captures full decision context |
| Governance | Separate from logging | Embedded in every record |
| Portability | Proprietary formats | Standardized JSON-LD |
| Evidence levels | None | DRAFT → DOCUMENTED → AUDIT_READY |
| Regulatory alignment | Requires custom mapping | Built for EU AI Act, FDA, GDPR |
Traditional application logs record *what* happened in the system. DPU records *why* a decision was made, who reviewed it, and whether it complied with policy. This is the difference between system observability and decision accountability.
## The Zero-Dependency Design
Cronozen's DPU implementation (`@cronozen/dpu-core`) follows a zero-dependency, domain-independent architecture:
- **No database dependency:** DPU core logic operates independently of any specific database or storage layer
- **Domain agnostic:** The same DPU framework works for healthcare, public sector, finance, or education
- **Pluggable governance:** Organizations define their own policy rules; DPU enforces the framework
- **Lightweight integration:** A single function call captures and chains a decision proof
This design means DPU can be integrated into any existing application stack without requiring architectural changes to the host application.
## Getting Started
Whether you are preparing for EU AI Act compliance, building a healthcare SaaS that needs FDA-aligned documentation, or simply want to future-proof your AI governance, DPU provides a technical foundation that scales.
The key insight is this: AI accountability is not a feature you add later. It is an architectural decision you make from the start. Every AI-assisted decision your platform makes today without a proof layer is a decision that cannot be audited tomorrow.
Want to see DPU in action? Try Proof Layer Free and explore how decision proof units can transform your AI governance from reactive to proactive.