Structural Conformance Under the EU AI Act

Article 11 technical documentation as a replayable measurement, Article 43 conformity assessment as re-execution

The EU AI Act's high-risk regime asks for engineering-grade evidence. How KAIROS makes Annex IV documentation replayable, the conformity assessment re-executable, and the human oversight mandate satisfiable without alert fatigue.

The EU AI Act’s high-risk regime arrives at full applicability on 2 August 2026. Providers preparing the Annex IV documentation package now are discovering that Articles 9 through 15 were drafted with the explicit expectation that the underlying evidence would be engineering-grade. The existing AI stack does not produce engineering-grade evidence. KAIROS does.

What the Regime Asks For

Articles 9 through 15 set the substantive requirements for a high-risk AI system. Article 9 requires a documented risk management system maintained across the lifecycle. Article 10 requires data governance with provenance and quality discipline. Article 11 requires technical documentation in the shape of Annex IV. Article 12 requires automatic record-keeping that allows traceability over the system’s operation. Article 14 requires human oversight mechanisms specifically designed to prevent or minimise harm. Article 15 requires accuracy, robustness, and cybersecurity properties that can be demonstrated.

Article 43 then requires a conformity assessment. For systems that fall outside Annex I product-safety law and outside the available harmonised standards, the procedure runs through Annex VII: the notified body assesses the provider’s quality management system and reviews the technical documentation directly. The conformity verdict rests on what the notified body can read, replay, or measure.

The 2 August 2026 deadline is the publication-readiness gate for that evidence package.

Where the Existing Stack Falls Short

Most current high-risk AI system documentation answers Articles 9 to 15 with prose, attestation, and statistical model evaluation. The risk management system is a narrative document. The data governance evidence is a procedural record. The technical documentation is an architecture description with metrics from a holdout test set. The accuracy, robustness, and cybersecurity claims rest on benchmark scores and pen-test reports.

This is the instrument problem described in Auditability as a Measurement. Two notified bodies reading the same Annex IV file can reach different conclusions about the adequacy of the risk management system. A benchmark score from a static test set has no binding authority over the deployed system at runtime. A pen-test report describes a snapshot.

A provider can satisfy Article 11 in form and miss the substance. The Article asks for evidence the notified body can verify.

What KAIROS Adds to Each Requirement

Article 9 (risk management). The deployed engine evaluates structural margin per agent, per evaluation, against a signed deployment policy. The Kairos margin K_gate = gamma − gamma_floor is the signed buffer-units residual against the policy floor. Risk management acquires a numeric trajectory.
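The margin arithmetic itself is trivial; what carries the regulatory weight is that the floor comes from a signed policy rather than an operator's judgment. A minimal sketch, with `DeploymentPolicy` and `kairos_margin` as hypothetical names chosen for illustration (the real KAIROS schema is not reproduced here):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DeploymentPolicy:
    # Hypothetical stand-in for the signed deployment policy:
    # the floor is fixed at signing time, not adjustable at runtime.
    gamma_floor: float


def kairos_margin(gamma: float, policy: DeploymentPolicy) -> float:
    """Signed buffer-units residual against the policy floor:
    K_gate = gamma - gamma_floor. Negative means the floor is breached."""
    return gamma - policy.gamma_floor


policy = DeploymentPolicy(gamma_floor=0.25)
inside = kairos_margin(0.40, policy)   # positive: inside the autonomous envelope
breach = kairos_margin(0.20, policy)   # negative: structural escalation territory
```

Logging this residual per agent, per evaluation, is what turns the Article 9 risk management system from a narrative into a numeric trajectory.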

Article 11 (technical documentation). The calibration artifact and the deployment policy are signed engineering artifacts. The Annex IV package includes both, together with the determinism guarantees on the runtime that reads them. The notified body reviews artifacts the body can re-execute.

Article 12 (record-keeping). Each EvaluationEnvelope is hash-bound to the calibration artifact, the deployment policy, and the input snapshot. The log is the structured record that, with the artifacts, reproduces what happened.
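The hash-binding idea can be sketched in a few lines. `bind_envelope` is a hypothetical helper and the field names are illustrative, not the real EvaluationEnvelope schema; the point is that every envelope carries the SHA-256 fingerprints of the exact calibration artifact, deployment policy, and input snapshot it was computed from:

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """SHA-256 fingerprint of a serialized artifact, as a hex string."""
    return hashlib.sha256(data).hexdigest()


def bind_envelope(verdict: dict, calibration: bytes,
                  policy: bytes, snapshot: bytes) -> dict:
    """Illustrative envelope: the verdict plus the fingerprints of
    everything it depends on. Any later tampering with an input
    changes its fingerprint and breaks the binding."""
    return {
        "verdict": verdict,
        "calibration_sha256": sha256_hex(calibration),
        "policy_sha256": sha256_hex(policy),
        "input_snapshot_sha256": sha256_hex(snapshot),
    }
```

A reviewer holding the artifacts recomputes the three digests and compares; a match means the logged verdict is bound to exactly these inputs, which is the traceability Article 12 asks for.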

Article 14 (human oversight). The engine emits recommendation tiers, each tied to a structural margin reading. Human escalation fires when the margin compresses against the floor. Routine decisions remain inside the autonomous envelope. The Permissions Fallacy article describes this in detail: structural escalation satisfies the human oversight mandate without producing the alert fatigue that nullifies it.

Article 15 (accuracy, robustness, cybersecurity). The engine carries a CI-gated proof of correctness across six decision paths, with a 100 percent pass rate on the strict-mode scored scenario suite. Wilson 95% confidence intervals are computed on every reported rate. The CI-Gated Proof of Correctness article describes the harness.
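The Wilson interval itself is standard statistics. A self-contained sketch of the computation the reported rates assume, with z = 1.96 for approximately 95% coverage:

```python
import math


def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion.

    Unlike the naive normal approximation, the interval stays inside
    [0, 1] and remains informative at p = 0 or p = 1."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)
```

For a 100 percent pass rate on n scenarios the lower bound reduces to n / (n + z²), which is why the size of the scenario suite, not just the pass rate, determines how strong the Article 15 claim is.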

The Notified Body as a Replayer

The shift in the conformity assessment is small in vocabulary and large in consequence. A notified body has traditionally read technical documentation. With a KAIROS-equipped provider, the notified body re-executes the deployed control on the reference seed and verifies the verdict against the provider’s submitted envelope, field for field, within a determinism tolerance of ε = 10⁻⁶.
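The replay check can be sketched as follows. The envelope shape (a `verdict` string plus a `margins` trace) is assumed for illustration; the convention sketched here is that discrete fields must match exactly while numeric fields agree within the determinism tolerance:

```python
EPSILON = 1e-6  # determinism tolerance for numeric fields


def replay_matches(submitted: dict, recomputed: dict,
                   eps: float = EPSILON) -> bool:
    """Compare a re-executed run against the submitted envelope.

    The gate verdict must match exactly; the margin trace must agree
    element-wise within eps. Envelope shape is hypothetical:
    {"verdict": str, "margins": [float, ...]}."""
    if submitted["verdict"] != recomputed["verdict"]:
        return False
    a, b = submitted["margins"], recomputed["margins"]
    return len(a) == len(b) and all(abs(x - y) <= eps for x, y in zip(a, b))
```

A passing replay means the notified body has independently computed the verdict it is assessing, rather than taken the documentation's word for it.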

This changes the procedural shape of Annex VII. The assessment of the quality management system remains a process review. The assessment of the technical documentation acquires a numerical layer. The notified body’s verdict on the risk management system is grounded in numbers the body has independently computed.

The same logic applies to the post-market monitoring obligation in Article 72. The monitoring evidence is replayable. A regulator opening an Article 79 procedure under the suspicion of non-conformity can re-execute the deployed control against the provider’s recorded inputs and verify the operational record.

What Providers Preparing Now Should Include

The 2 August 2026 deadline is under three months away. For providers building the Annex IV documentation package now, three additions raise the evidence to measurement standard:

  • The signed calibration artifact as a referenced engineering object, with its SHA-256 fingerprint recorded in the technical documentation.
  • The signed deployment policy as a versioned engineering object, with its policy floor and resolved override bounds recorded.
  • A reference replay corpus: a representative seed, the input snapshots, and the resulting envelopes. The notified body re-executes this corpus under controlled conditions.

The first two are existing artifacts in any KAIROS deployment. The third is a packaging exercise the provider runs once and reuses across every notified body review.
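Producing the recorded fingerprints is a small, repeatable step. A sketch, assuming the signed artifacts are ordinary files on disk (`fingerprint` is an illustrative helper, not a KAIROS API):

```python
import hashlib
from pathlib import Path


def fingerprint(path: Path) -> str:
    """SHA-256 fingerprint of an artifact file, streamed in chunks so
    large calibration artifacts do not need to fit in memory. The hex
    digest is what gets recorded in the Annex IV documentation."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

Running this once over the calibration artifact, the deployment policy, and each replay-corpus input, and recording the digests in the documentation, is what lets a notified body confirm it is reviewing the same bytes the provider deployed.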

Limits Worth Stating

The deterministic envelope does not establish that the high-risk classification was correctly applied under Article 6. It does not adjudicate the Annex III use-case mapping. It does not replace the data governance obligations of Article 10, which remain a provenance and quality discipline distinct from the runtime measurement layer. The envelope is evidence for the substance of Articles 9, 11, 12, 14, and 15 once the provider’s classification and data governance positions are established.

The conformity assessment is also a process review. The quality management system under Article 17 remains within scope, and the engine surface does not produce evidence about the provider’s organisational discipline.

What the engine surface eliminates is interpretive drift on the substantive technical requirements. The notified body verifies the same gate verdict the provider submitted. The compliance question moves from “do the reviewers agree on the documentation” to “do the inputs justify the verdict the documentation reports.” The first depends on judgment. The second depends on data.

Direct Next Steps

Read the foundational claim that grounds this article: Auditability as a Measurement.

Read the validation harness that backs the Article 15 robustness claim: CI-Gated Proof of Correctness.

Read the structural-escalation argument that addresses the Article 14 human oversight mandate: The Permissions Fallacy.

The next article in this series addresses the analogous evidence shift under NIS2 Article 23.