
Key Takeaways
- 6.5%. That was the pass rate on the EBA-led DORA register dry-run in 2024. The other 93.5% failed on AI/SaaS vendor rows, sub-processor chains, and audit rights — not on infrastructure.
- BaFin's January 2026 guidance bridges DORA and the AI Act. Via Article 9(10) of the AI Act, AI vendors processing data for a financial entity are squarely Article 28 third parties — even when you bought them as SaaS.
- Senior managers face up to €1M personal liability under national implementations, on top of entity-level penalties capped at 2% of group turnover or €10M.
- Your OpenAI or Copilot DPA does not satisfy Article 28. Audit-on-request, URL-published sub-processor lists, and contractual region promises are not the evidence a competent authority is looking for.
I have spent the last three months reading DORA register rows. Most of them were drafted by senior, conscientious compliance officers at mid-tier banks, asset managers, insurers, and payment institutions. Most of them will fail on the next sample audit. Not because the compliance work is bad — it is often excellent — but because the underlying contracts with AI vendors were never designed to live inside a regime like DORA.
This piece is for the CTO, CIO, Head of Compliance or DPO at an EU financial entity who already knows DORA exists, has a register, and is now staring at the AI columns wondering whether OpenAI, Anthropic, Microsoft 365 Copilot, or Mistral La Plateforme will hold up when BaFin or the ACPR knocks on the door. Short answer: not in their current shape. Long answer: there is a path, and it goes through hardware-level evidence rather than another round of contract redlines.
Who DORA Actually Catches — And Why AI Got Pulled In
DORA's scope under Article 2 covers around 22,000 financial entities across the EU: credit institutions, payment and e-money institutions, investment firms, crypto-asset service providers, central counterparties and trade repositories, insurance and reinsurance undertakings, IORPs, credit rating agencies, crowdfunding service providers, and administrators of critical benchmarks. Plus — and this is where it widens — ICT third-party service providers when they are designated as critical.
Until late 2025 there was a debate about whether AI/SaaS vendors fell under DORA at all or only under sectoral guidance. The January 2026 BaFin Rundschreiben settled it for the German market and signalled the direction of travel for the rest of the EU: any third party whose service supports an "ICT-related function" is in scope of Article 28, and the AI Act's Article 9(10) explicitly cross-references DORA for risk-management purposes. The practical map for AI in financial services now looks like this:
- Customer-facing assistants. Chatbots that answer authenticated customer questions, suggest products, or surface account information. Always Article 28 third parties; usually classed as supporting an important function.
- Underwriting and credit-decision assist. LLMs that summarise applications, extract from loan files, or pre-score. Almost always supporting a critical or important function. Often also high-risk under AI Act Annex III.
- Trading-floor research and surveillance. LLMs that summarise filings, flag suspicious orders, or draft client communications. Critical-or-important when they inform regulated activity.
- Compliance, AML, KYC. LLMs reading documents, flagging risk, drafting SAR/STR narratives. Default to critical because of supervisory exposure.
- Internal "copilots." Microsoft 365 Copilot, Glean, Notion AI, and similar reading internal data on behalf of staff. In scope as soon as the data they touch supports a regulated function — which it almost always does.

If any of those describe your shop, you owe a fully populated Article 28 register row per vendor instance — with the sub-processor chain, the criticality classification, the contract clauses, the exit plan, and the supporting evidence. The fastest self-check most compliance teams use:
# DORA Article 28 — ICT third-party register self-check.
# If ANY answer is "no", your AI vendor entry will fail the BaFin sample audit.
# current_ai_vendor() and answered_yes() stand in for your own assessment helpers.
vendor = current_ai_vendor()  # OpenAI, Anthropic, Microsoft 365 Copilot, Mistral...
questions = [
    "Can you produce a SIGNED list of every sub-processor and physical location?",
    "Is the contract enforceable under EU law with a named EU controlling entity?",
    "Does the contract grant unrestricted on-site audit rights to YOU and YOUR regulator?",
    "Is there a documented exit plan testable within the supervisory window?",
    "Can the vendor evidence — cryptographically — that your data is processed in the region you contracted?",
    "Does the contract name a specific CRITICAL or IMPORTANT function status under DORA?",
]
if not all(answered_yes(vendor, q) for q in questions):
    print("FAIL: Article 28 register row will be flagged.")
    print("Expected outcome at next BaFin/ACPR/MFSA sample: formal finding,")
    print("remediation deadline, and management body sign-off liability.")
Why Standard AI Vendor Contracts Fail Article 28
Article 28(2) and 28(7) are where audits stall. Together they require: a written contract with a specific list of clauses, an enforceable EU-law forum, full audit rights for both the financial entity and its competent authority, geographic transparency including sub-processor locations, contractual security obligations proportionate to the function, and a tested exit plan. The standard enterprise AI DPA hits roughly three of those cleanly. The other three are where the 6.5% pass rate came from.
- Audit rights. The boilerplate is "audit reports made available on request, with reasonable notice, not more than once per year." DORA expects on-site audit access for the competent authority. Most enterprise AI vendors will not sign that unmodified, and the negotiated middle ground (third-party audit reports plus questionnaire) is exactly what BaFin started flagging in 2025 sample audits.
- Sub-processor transparency. The standard pattern is "current list at this URL, we will notify you of changes." A DORA register row needs a signed, point-in-time sub-processor chain, geographic pinning per sub-processor, and a mechanism to evidence that the contracted region was actually used at inference time (see the snapshot sketch after this list). URL change-logs do not produce evidence; they produce promises.
- Cybersecurity proportionality. Article 28(7)(d) wants security measures "proportionate to the criticality." For critical functions touching personal, regulated, or commercially-sensitive data, the operator-trust threat model matters. ISO 27001 and SOC 2 Type II are necessary but increasingly read by supervisors as floor, not ceiling. Hardware-rooted tamper resistance is now what differentiates a passing register row from a flagged one.
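To make "point-in-time" concrete, here is a minimal sketch that turns a vendor's disclosed sub-processor list into a dated, hashable artifact. The snapshot format and the example chain entry are assumptions for illustration; the substance is that your register row references a signed, dated record rather than a mutable web page.
import hashlib, json
from datetime import datetime, timezone

# Minimal sketch: a point-in-time sub-processor snapshot. Field names and the
# example entry are illustrative, not a vendor's actual disclosure format.
sub_processors = [
    {"name": "voltagegpu-eu-fr-1", "role": "inference", "location": "FR"},
]
snapshot = {
    "vendor": "VoltageGPU",
    "captured_at": datetime.now(timezone.utc).isoformat(),
    "chain": sub_processors,
}
canonical = json.dumps(snapshot, sort_keys=True).encode()
snapshot["sha256"] = hashlib.sha256(canonical).hexdigest()
# Store this with the register row; have the vendor countersign the hash so
# the chain is pinned to a date instead of a URL.
print(json.dumps(snapshot, indent=2))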
The good news: none of these are unfixable. The bad news: they are not fixable through another round of redlines on a SaaS contract. The contractual layer needs to be backed by a technical layer that produces evidence the supervisor can verify independently.
What an Audit-Grade Register Row Actually Looks Like
Article 28(3) requires financial entities to maintain a register of information at entity, sub-consolidated and consolidated level. The harmonised template the European Supervisory Authorities published in 2024 has 14 tables and around 400 fields. The painful columns for AI vendor rows are the function-criticality columns, the sub-processor chain columns, and the "ICT security measures" columns — the last of which has historically been a free-text paragraph that supervisors are now treating as an evidence claim.

Operationally, the workflow we ship to financial entities is roughly thirty lines of Python:
import requests, hashlib, json

# DORA Article 28 evidence — produced for every AI inference call.
# Goal: every register row points to a quote that can be replayed by the
# competent authority WITHOUT trusting VoltageGPU or the vendor.
quote = requests.get(
    "https://api.voltagegpu.com/v1/pods/POD_ID/attestation",
    headers={"Authorization": "Bearer vgpu_YOUR_KEY"},
).json()

# 1. Region pinning — DORA expects geographic transparency.
assert quote["region"] == "eu-fr-1"
assert quote["operator_jurisdiction"] == "FR"

# 2. Tamper-resistance — Article 28(7) "appropriate ICT security".
assert quote["tdx_version"] == "1.5"
assert quote["measurement_valid"] is True
assert quote["mr_td"] == EXPECTED_MR_TD  # golden measurement, pinned at deployment

# 3. Bind the model + system prompt to the register row.
with open("model.safetensors", "rb") as f:
    artifact_hash = hashlib.sha256(f.read()).hexdigest()
register_row = {
    "function": "FR-CRD-001-credit-decision-assist",
    "criticality": "CRITICAL",
    "vendor": "VoltageGPU",
    "vendor_jurisdiction": "FR",
    "sub_processor_chain": ["voltagegpu-eu-fr-1"],
    "evidence_type": "intel-tdx-quote",
    "evidence_artifact": quote["raw_quote"],
    "model_artifact_hash": artifact_hash,
    "exit_plan_ref": "DR-RUNBOOK-2026-Q2",
}

# 4. Persist into your DORA register store. This row is auditable
# end-to-end without any vendor cooperation.
post_to_register(json.dumps(register_row))
print("DORA Article 28 row written. Sample-audit-ready.")
Three things that workflow gives you that nothing else currently does. First, the attestation quote is replayable on the auditor's laptop without any cooperation from VoltageGPU — the Intel signature chain rolls up to the Intel root of trust, not to us. Second, the artifact-hash binding pins the specific model and prompt template to the inference call, which closes the "but the vendor could swap the model after the fact" objection that currently haunts AI register rows. Third, the region field is evidenced rather than promised — a competent authority cross-checking jurisdiction gets cryptographic, not contractual, comfort.
Penalty Math — Entity Caps and Personal Liability

Two practical notes. The 2% entity cap is calculated on the group worldwide turnover for the preceding financial year, not the relevant subsidiary's — for a mid-cap European bank that is a board-level number, not an operations line item. And the individual cap does not require proof of personal misconduct: the supervisor only has to show that the named senior manager failed to exercise the required oversight. The practical effect in 2026 has been to push AI vendor selection out of IT and onto the risk committee.
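A back-of-the-envelope illustration, assuming an €8bn preceding-year group turnover and reading the cap as the higher of the two amounts quoted above:
# Illustrative only: the turnover figure is an assumption, and the cap is
# read as the higher of 2% of group turnover or EUR 10M.
group_turnover_eur = 8_000_000_000
entity_cap = max(0.02 * group_turnover_eur, 10_000_000)
senior_manager_cap = 1_000_000  # per national implementations
print(f"Entity-level cap:   EUR {entity_cap:,.0f}")  # EUR 160,000,000
print(f"Senior-manager cap: EUR {senior_manager_cap:,.0f}")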
A Practical Roadmap For The Next 90 Days

- Inventory every AI vendor that touches a regulated function. Use the register self-check above. Rank by criticality. For most mid-market firms this is a one-week exercise that turns up two to three times more vendors than the official list.
- Triage the contracts. Group vendors into "contract is fixable with redlines," "contract needs technical compensating control," and "vendor cannot satisfy Article 28 at any criticality." The third bucket is where most of the surprise lives.
- Move critical-function inference behind an attested enclave. With VoltageGPU TDX pods the migration is essentially a one-line change to your inference base URL plus a quote-verification step in your register pipeline (sketched after this list). With Azure Confidential Computing or AWS Nitro Enclaves the lift is heavier, the pricing premium is significant, and the GPU options are narrower — which matters when the AI Act and DORA both want H100/H200-class evidence.
- Wire the register row to evidence, not narrative. Every register row for a critical AI function should reference an artifact you can replay: an attestation quote, a model-artifact hash, a region-pinning record. Free-text paragraphs are now what gets flagged.
- Test the exit plan. Article 28(8) requires a documented and tested exit strategy. "We could fall back to GPT-3.5" is not tested. Run a half-day game-day with the regulated function team and document the timing.
- Brief the management body. Article 5(2) makes ICT risk management a board-level responsibility. Personal liability is real. The brief should be in the language of the register findings, not in the language of MLOps.
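A minimal sketch of the base-URL swap from the enclave step, assuming the pod exposes an OpenAI-compatible endpoint; the endpoint URL and model name are illustrative:
from openai import OpenAI

client = OpenAI(
    base_url="https://api.voltagegpu.com/v1",  # illustrative attested-pod endpoint
    api_key="vgpu_YOUR_KEY",
)
resp = client.chat.completions.create(
    model="mistral-7b-instruct",  # illustrative model name
    messages=[{"role": "user", "content": "Summarise the attached loan file."}],
)
# Pull and verify the attestation quote (as in the register pipeline above)
# before the output feeds a critical or important function.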
What This Article Does Not Solve
I would rather you know the limitations now than discover them mid-audit:
- Confidential computing does not produce your exit plan. Article 28(8) is operational. If you cannot evidence a tested fallback, no attestation quote will fix it.
- It does not eliminate concentration risk. Article 29 wants you to think about supplier-level dependency. Moving every critical AI workload onto one TEE provider is its own concentration story; we expect supervisors to start asking about this in late 2026.
- BaFin guidance is not pan-EU yet. The Rundschreiben binds the German market and signals direction. ACPR, Bank of Italy, MFSA, and CSSF are visibly tracking it but will publish their own. Build for the spirit; track the letter as it lands.
- The CTPP designation regime is still ramping. If your AI vendor is eventually designated a Critical ICT Third-Party Provider under Article 31, the oversight-fee bill and lead supervisor regime will reshape the contract again.
Who Should Read This Twice
- Heads of Compliance, DPOs, and Risk Officers at EU credit institutions, insurers, asset managers, payment institutions, and crypto-asset service providers.
- CIOs, CTOs, and CISOs who own the ICT third-party register and now need to retrofit AI vendors into a regime that did not anticipate them.
- Founders selling AI products into EU financial services — your buyers will increasingly require DORA-grade evidence as part of vendor onboarding from mid-2026, and a vendor that ships register-ready evidence wins.
Two starting points if you want to go deeper: our piece on the EU AI Act August 2026 deadline for the high-risk AI angle, and the GDPR & AI 2026 piece for the privacy side of the same conversation. All three regimes converge on the same evidence pipeline.
FAQ
When does DORA actually apply to AI tools my bank or insurer uses?
As soon as the tool supports an "ICT-related function". Per BaFin's January 2026 guidance and Article 9(10) of the AI Act, that covers AI vendors processing data for a financial entity even when the product was bought as SaaS; criticality then depends on the function the tool supports.
Why does my OpenAI / Microsoft 365 Copilot contract fail Article 28?
The standard enterprise DPA offers audit reports on request rather than on-site audit rights for you and your competent authority, a URL-published sub-processor list rather than a signed point-in-time chain, and contractual region promises rather than inference-time evidence.
What does "personal liability" actually mean for senior managers?
Under national implementations, named senior managers face fines of up to €1M on top of entity-level penalties, and the supervisor only needs to show a failure of required oversight, not personal misconduct.
Does using a confidential computing provider automatically make me DORA-compliant?
No. Attestation evidences region pinning and tamper resistance, but the tested exit plan under Article 28(8), concentration risk under Article 29, and the contractual clause list remain your responsibility.
How does this relate to GDPR Article 28 and the EU AI Act?
They are separate regimes that converge on one evidence pipeline: GDPR Article 28 governs the processor relationship for personal data, the AI Act's Article 9(10) cross-references DORA for risk management, and DORA's Article 28 governs the ICT third-party relationship itself.
What does the 6.5% number actually refer to?
The pass rate in the EBA-led DORA register dry-run in 2024. The other 93.5% of submissions failed on AI/SaaS vendor rows, sub-processor chains, and audit rights, not on infrastructure.
Get a DORA-grade register row in under a day
Pull a TDX attestation quote yourself and see what supervisor-grade Article 28 evidence actually looks like. $5 free credit, no credit card, EU-pinned by default.