The Logging Illusion

Every AI governance platform promises audit trails. "We log every interaction." "Complete audit history." "Full traceability."

But there is a fundamental problem with logging: logs can be altered. A database record that says "Manager approved AI recommendation at 14:32" proves nothing if the database itself can be modified. Logs tell you what the system claims happened. They do not prove what actually happened.

As AI regulations tighten (enforcement of most EU AI Act obligations begins on August 2, 2026, and the Colorado AI Act takes effect on June 30, 2026), regulators will not accept claims. They will demand proof.

This is the gap between audit trails and Decision Proof Units.


What Audit Trails Do

Traditional audit trails record events in a database:

Component | What It Records
----------|--------------------------------------------
Timestamp | When an event occurred
Actor     | Who or what triggered the action
Action    | What was done (create, read, update, delete)
Data      | What data was involved
Result    | What outcome was produced

This is valuable. It provides visibility into system operations. But it has three critical limitations.

Limitation 1: Mutability

Database records can be altered — by administrators, by software bugs, by malicious actors. Even "append-only" logs can be truncated or overwritten at the infrastructure level. If someone modifies a log entry, there is no way to detect the change from the log itself.

Limitation 2: No Proof of Human Review

An audit trail can record that a human user clicked "Approve." It cannot prove that the human actually reviewed the AI's reasoning before clicking. The difference between genuine oversight and rubber-stamping is invisible to a log file.

Limitation 3: No Decision Context

Logs record what happened but rarely capture why. When an AI recommends a treatment plan for a child with developmental disabilities, the log might show "recommendation generated." It does not capture the input data that led to the recommendation, the alternative options considered, or the reasoning chain.


What Decision Proof Units Do Differently

A Decision Proof Unit (DPU) is a cryptographically signed, immutable record of an AI decision and its full context. It goes beyond logging in three fundamental ways.

1. Cryptographic Immutability

Every DPU is hashed using SHA-256 and chained to previous records. If any record is modified after creation, the hash chain breaks — and the tampering is immediately detectable. This is not a database feature that can be turned off. It is a mathematical guarantee.

DPU Record:
├── AI Input (hashed)
├── AI Output (hashed)
├── Model Version
├── Timestamp (RFC 3161)
├── Human Reviewer ID
├── Review Action (approved / modified / rejected)
├── Modification Details (if any)
├── Previous Record Hash
└── Record Hash (SHA-256 of all above)
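The record hash at the bottom of that layout is what makes a DPU tamper-evident: it covers every field plus the previous record's hash. A minimal sketch of how such a hash could be computed (field names here are illustrative, not Cronozen's actual schema):

```python
import hashlib
import json

def record_hash(record: dict, previous_hash: str) -> str:
    """Hash every field of the record together with the previous
    record's hash, chaining the records into a tamper-evident sequence."""
    # Serialize deterministically so any verifier derives the same bytes.
    payload = json.dumps({**record, "previous_hash": previous_hash},
                         sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

GENESIS = "0" * 64  # conventional placeholder hash for the first record

r1 = {"ai_input": "intake form v3", "ai_output": "2x/week speech therapy",
      "reviewer": "kim", "review_action": "approved"}
h1 = record_hash(r1, GENESIS)

# Changing any field after the fact yields a different hash, which no
# longer matches the "previous hash" stored in the next record.
tampered = {**r1, "review_action": "rejected"}
assert record_hash(tampered, GENESIS) != h1
```

Because the previous hash is folded into each new hash, altering an old record invalidates every record after it, not just one.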

2. Proof of Human Oversight

A DPU captures not just that a human reviewed the AI output, but how they reviewed it:

Scenario                                           | What DPU Records
---------------------------------------------------|-------------------------------------------------------
Manager approves AI recommendation without changes | Approval + timestamp + no modifications
Manager modifies AI recommendation                 | Original output + modified output + reason for change
Manager rejects AI recommendation                  | Rejection + reason + alternative action taken
No human review (automated)                        | Flagged as "auto-approved" with policy reference

This distinction matters for AI Act Article 14, which requires "effective human oversight" for high-risk AI systems. A DPU can prove that oversight was genuine, not perfunctory.
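The four review scenarios above map onto a small data model. The following is an illustrative sketch (field names are assumptions, not the platform's schema; a real DPU would also carry hashes, timestamps, and reviewer identity):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ReviewRecord:
    """What a DPU stores about the human-review step."""
    action: str  # "approved" | "modified" | "rejected" | "auto-approved"
    original_output: str
    modified_output: Optional[str] = None   # only when action == "modified"
    reason: Optional[str] = None            # required for "modified"/"rejected"
    policy_reference: Optional[str] = None  # required for "auto-approved"

    def __post_init__(self):
        # Enforce that each review outcome carries its required evidence.
        if self.action == "modified" and not (self.modified_output and self.reason):
            raise ValueError("a modification must keep both outputs and a reason")
        if self.action == "rejected" and not self.reason:
            raise ValueError("a rejection must record a reason")
        if self.action == "auto-approved" and not self.policy_reference:
            raise ValueError("auto-approval must cite the policy that allowed it")

# A plain approval needs no extra fields:
ReviewRecord(action="approved", original_output="2x/week speech therapy")
```

The point of the validation is that a record claiming "modified" without the original output, or "auto-approved" without a policy reference, cannot be created in the first place.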

3. Full Decision Context

A DPU preserves the complete context of an AI decision:

  • Input data: What information the AI received
  • Model state: Which model version produced the output
  • Output: The AI's recommendation or decision
  • Alternatives: Other options the AI considered (where applicable)
  • Confidence: The AI's confidence level in its output
  • Human judgment: What the human reviewer decided and why

Side-by-Side Comparison

Capability                  | Audit Trail        | Decision Proof Unit
----------------------------|--------------------|-------------------------
Records events              | Yes                | Yes
Tamper detection            | No (or limited)    | Yes (hash chain)
Proves human review quality | No                 | Yes
Captures decision context   | Partial            | Full
Regulatory evidence grade   | Supportive         | Primary evidence
Storage immutability        | Database-dependent | Cryptographic
Cross-system verification   | No                 | Yes (hash verification)
Cost                        | Low                | Moderate

Why This Matters for Compliance

EU AI Act (August 2, 2026)

Article 12 requires high-risk AI systems to automatically record events ("logs") over their lifetime. Article 14 mandates human oversight for high-risk systems. Article 50 imposes transparency obligations. And Article 19 goes further: it requires providers to retain those automatically generated logs so they remain available as evidence.

An audit trail satisfies Article 12. A DPU helps satisfy Articles 12, 14, 19, and 50 simultaneously.

Colorado AI Act (June 30, 2026)

Requires deployers of high-risk AI to provide "a description of the purpose of the AI system" and maintain records demonstrating compliance. DPUs provide this documentation automatically.

GDPR Article 22

Automated decision-making affecting individuals requires the ability to explain and contest decisions. DPUs preserve the full context needed for explanation and contestation.


Real-World Example: Rehabilitation Center

A rehabilitation center uses AI to recommend therapy schedules for children with developmental disabilities.

With Audit Trail Only:

2026-04-13 09:15 | AI generated schedule recommendation for Patient #1247
2026-04-13 09:22 | Therapist Kim approved recommendation

With DPU:

DPU #8847291
├── Input: Patient age 6, diagnosis ASD, prior 12 sessions of speech therapy,
│         progress score 67/100, parent preference: morning sessions
├── AI Model: schedule-optimizer v2.3.1
├── Output: Recommended 2x/week speech + 1x/week sensory integration
├── Alternatives Considered: 3x/week speech only (confidence 0.71),
│                            2x/week speech + 1x/week behavioral (confidence 0.68)
├── Selected: 2x/week speech + 1x/week sensory (confidence 0.82)
├── Reviewer: Therapist Kim (License #KR-ST-2019-4421)
├── Review Time: 7 minutes 14 seconds
├── Action: Approved with modification (changed Monday slot to Wednesday)
├── Reason: "Parent works Monday mornings - schedule conflict"
├── Hash: a3f8c92d...
└── Previous Hash: 7b1e4f0a...

The first record tells you something happened. The second record proves what happened, why it happened, that a qualified human reviewed it, and that the record hasn't been tampered with.


When Do You Need DPU vs Audit Trail?

Scenario                              | Audit Trail Sufficient | DPU Recommended
--------------------------------------|------------------------|----------------
Internal analytics dashboard          | Yes                    | No
Customer-facing AI chatbot            | Maybe                  | Yes
AI-assisted medical/therapy decisions | No                     | Yes
Automated financial decisions         | No                     | Yes
HR/recruitment AI screening           | No                     | Yes
Regulatory-reported AI outputs        | No                     | Yes
Public sector AI services             | No                     | Yes

If AI decisions affect people's rights, health, finances, or opportunities — you need proof, not just logs.


How Cronozen Implements DPU

Cronozen's Decision Proof Unit is built into every AI interaction on the platform.

  • Automatic: No manual steps required. Every AI recommendation generates a DPU automatically.
  • Hash-chained: SHA-256 hash chain ensures tamper detection across the entire record history.
  • Human oversight capture: Records not just approval/rejection but review duration, modifications, and reasoning.
  • Audit package export: One-click generation of compliance evidence packages for regulators.
  • 5-year retention: Automatic record lifecycle management aligned with regulatory retention requirements.
  • API accessible: DPU records are accessible via REST API for integration with external compliance systems.
