Key Takeaways
- A Data Processing Agreement is no longer enough for special-category data under GDPR Article 9. Regulators increasingly want technical evidence, not paperwork.
- Software-only privacy promises break the moment a sub-processor has root, a privileged hypervisor, or a subpoena. TDX moves the trust boundary into silicon.
- Article 32's "appropriate technical measures" standard is where CNIL and LfDI rulings now bite. Hardware-sealed processing is the highest bar currently shippable.
- VoltageGPU is EU-established (France, Solaize) and signs a GDPR-native DPA with TDX attestation as Schedule 2 evidence.
In late 2025, a mid-sized Paris litigation firm was publicly sanctioned by its bar council for putting confidential client NDAs into ChatGPT to draft a memo. The penalty was not trivial: a six-month suspension for the partner, a referral to the CNIL, and a public warning (sanction d'avertissement public) on the firm's registry entry. Their defense was the one every founder I talk to wants to believe in: "but we had a signed agreement with OpenAI." The bar council read the agreement and was unmoved.
That is the story of GDPR and AI in 2026. The Data Processing Agreement — the Article 28 contract that vendors love to wave around — has stopped being the finish line. It is the starting line. Regulators have caught up. Auditors ask for technical evidence. And boards are starting to ask "okay, but can the vendor actually see our data?" If the answer is yes, you are one breach — or one badly-worded subpoena — from a very expensive year.
The Article 28 Trap
GDPR Article 28 governs processors and sub-processors. It requires a written contract spelling out subject matter, duration, nature, purpose, type of personal data, categories of data subjects, and the controller's rights. Every reputable AI vendor has one ready. That is not the trap. The trap is what Article 28 does not guarantee:
- It does not guarantee the sub-processor cannot read your data. It only says they have promised not to.
- It does not stop a US sub-processor from receiving a Section 702 order (Schrems II reality).
- It does not protect against privileged-insider exfiltration, which is the modal data breach now.
- Under CNIL's 2024 guidance on generative AI, it does not, on its own, satisfy Article 32 for special-category data.
The Paris firm above had a signed enterprise DPA. It still got sanctioned. The bar council's reasoning, paraphrased from the public ruling: "a contractual promise not to look is not a guarantee of confidentiality. Counsel must adopt technical measures that make the promise verifiable."
Why Software-Only Promises Fail in 2026
Most AI vendors' privacy posture is software-only: encrypted in transit, encrypted at rest, role-based access, audit logs. That is good security hygiene. It is also not enough for the threat model GDPR Article 32 now expects you to defend against. Three structural holes:
- The hypervisor problem. A cloud operator's hypervisor sees every page of guest RAM. Your prompts, completions, and model weights live in plaintext in system memory and VRAM during inference. Encryption-at-rest does nothing here.
- The privileged-operator problem. A sub-processor's SRE with root can technically read every byte of your traffic. The DPA forbids it. Reality shows that forbidding does not equal preventing.
- The subpoena problem. A US-based sub-processor receiving an FBI Section 702 order can be legally compelled to hand over the data they technically have access to. The DPA cannot override US federal law.
The EU AI Act, which becomes broadly enforceable in 2026, makes this worse. High-risk systems — think legal AI, healthcare AI, HR AI — have to demonstrate "appropriate cybersecurity measures" (Art. 15). Both EDPB and CNIL have signaled that for special-category data, that bar now points squarely at confidential computing.
What Hardware Sealing Actually Fixes
Intel Trust Domain Extensions (TDX) is the third generation of confidential computing hardware to ship in volume. It creates a Trust Domain: a VM whose memory is encrypted with a per-TD AES-256-XTS key managed by the CPU itself. The hypervisor sees ciphertext. The host kernel sees ciphertext. Even the cloud operator sees ciphertext.
For GPU AI, the missing piece used to be the bus. Encrypted RAM is useless if the H100 receives plaintext over PCIe. Intel TEE-IO closes that gap: data traveling between the CPU and an attested H100/H200/B200 is encrypted end-to-end. The attacker model collapses to "the silicon is lying," which is several orders of magnitude harder than "someone has root."
Article 32 Evidence That Survives an Audit
GDPR Article 32 demands "appropriate technical and organisational measures." Until 2024, the vague wording let almost any reasonable security practice through. The 2024-2025 generation of CNIL and LfDI rulings tightened that interpretation considerably. For special-category data, "appropriate" now means measures that are verifiable by the data subject or supervisory authority. Confidential computing is the first technology to deliver that:
- Provision an attested TDX pod. The pod ships with a measured boot, recorded in the TD's identity (MRTD).
- Verify the attestation quote from your application before sending any personal data. If the quote does not match the expected MRTD — fail closed.
- Send the inference request to a TLS endpoint that terminates inside the enclave. The enclave is the only entity holding the cert key.
- Set confidential: true on the request to disable any residual logging and request VRAM scrubbing on completion.
- Keep the signed quote. That cryptographic blob is your Article 32 evidence. Stash it with the request id in your audit log.
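Pulled together, the five steps read as one fail-closed function. This is a minimal sketch: the inference path, request fields, and response shape are illustrative assumptions, not VoltageGPU's published API; only the attestation endpoint mirrors the snippet shown elsewhere in this post.

```python
import json
import urllib.request

BASE = "https://api.voltagegpu.com/v1"  # base URL as used elsewhere in this post

def verify_quote(quote: dict, expected_mr_td: str) -> bool:
    # Step 2: fail closed. A missing key counts as a failed attestation.
    return (
        quote.get("measurement_valid") is True
        and quote.get("mr_td") == expected_mr_td
    )

def attested_inference(pod_id: str, api_key: str, prompt: str,
                       expected_mr_td: str, audit_log: list) -> dict:
    headers = {"Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json"}
    # Step 2: fetch and verify the quote before any personal data leaves you.
    req = urllib.request.Request(f"{BASE}/pods/{pod_id}/attestation",
                                 headers=headers)
    with urllib.request.urlopen(req) as resp:
        quote = json.load(resp)
    if not verify_quote(quote, expected_mr_td):
        raise RuntimeError("Attestation mismatch: failing closed, nothing sent")
    # Steps 3-4: TLS terminates inside the enclave; confidential=True disables
    # residual logging and requests VRAM scrubbing (hypothetical /inference path).
    body = json.dumps({"prompt": prompt, "confidential": True}).encode()
    req = urllib.request.Request(f"{BASE}/pods/{pod_id}/inference",
                                 data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # Step 5: the signed quote, stored with the request id, is your
    # Article 32 evidence.
    audit_log.append({"request_id": result.get("id"), "quote": quote})
    return result
```

The design point is the order of operations: the quote is verified before any payload is constructed, so a failed attestation means zero bytes of personal data ever leave your perimeter.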
The DPIA self-check most law firms run quarterly:
```python
# Quick GDPR Article 35 DPIA self-check (very compressed).
# If any answer is "yes" and your inference vendor is software-only, stop.
questions = [
    "Are we processing special categories (Art. 9): health, legal, biometric, ethnic?",
    "Are we processing data of children, employees, or vulnerable people?",
    "Is the processing on a 'large scale' (LfDI 2024 guidance)?",
    "Could a breach cause material or reputational harm to the data subject?",
]

def any_yes(qs):
    # Ask each question on stdin; "y" or "yes" counts as a yes.
    return any(input(q + " [y/n] ").strip().lower() in ("y", "yes") for q in qs)

if any_yes(questions):
    print("DPIA required. Hardware-sealed processing strongly recommended.")
    print("Software-only DPAs will not survive a CNIL or LfDI audit.")
```

And the actual attestation verification is two dozen lines of Python:
```python
import requests

# Verify the Intel TDX attestation BEFORE you ever send personal data.
# A signed quote from Intel proves the enclave is real and unmodified.
EXPECTED_MR_TD = "..."  # the MRTD measurement you pinned at provisioning

quote = requests.get(
    "https://api.voltagegpu.com/v1/pods/POD_ID/attestation",
    headers={"Authorization": "Bearer vgpu_YOUR_KEY"},
).json()

assert quote["tdx_version"] == "1.5"
assert quote["measurement_valid"] is True
assert quote["mr_td"] == EXPECTED_MR_TD  # pinned at provisioning

# Article 32 "appropriate technical measure" satisfied at the silicon layer.
# Auditor evidence: the quote is cryptographically signed by Intel.
print("Enclave attested. Sub-processor cannot read personal data in memory.")
```

Real Numbers: GDPR-Grade GPU Inference in the EU
EU pinning is a single API flag. We default to France (Solaize) and Germany (Gravelines) regions. Azure Confidential Computing prices fluctuate; we last benchmarked these on April 23, 2026.
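As a sketch of what that flag looks like, here is a hypothetical pod-provisioning payload. The field names are illustrative assumptions, not taken from VoltageGPU's published API reference; only the region names reflect the France and Germany defaults above.

```python
import json

# Hypothetical pod-provisioning payload (field names are assumptions).
pod_request = {
    "gpu": "H100",
    "image": "vllm-openai:latest",
    "confidential": True,       # TDX trust domain + TEE-IO attested GPU
    "region": "eu-fr-solaize",  # pin to France; Germany would be Gravelines
}
print(json.dumps(pod_request, indent=2))
```

Pinning at provisioning time, rather than per request, means a misconfigured client cannot accidentally route a single inference call outside the EU.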
Where Hardware Sealing Doesn't Help
I am not going to pretend confidential computing is a magic compliance wand. Three honest limitations:
- It does not save you from a bad prompt. If your application leaks PII back to an unauthorized data subject through a clever jailbreak, TDX did its job — your prompt engineer didn't. Output filtering is still your problem.
- It does not waive the DPA. Article 28 is a contractual obligation; attestation is a technical one. You need both. Anyone selling you "TDX, no DPA needed" is selling vapor.
- It does not produce a SOC 2 Type II report. SOC 2 audits the organization; TDX attests the silicon. Some procurement teams want SOC 2 by default; we're mid-audit, due Q3 2026.
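The first limitation deserves a concrete control. A minimal, illustrative output filter that redacts obvious PII patterns before a completion reaches the end user; a real deployment needs a proper DLP layer, and the patterns here are toy examples, but it shows where the control sits: after the enclave, in your application.

```python
import re

# Toy post-inference PII filter. The enclave protects data in processing;
# what the model says back is still your application's responsibility.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(completion: str) -> str:
    # Replace each matched pattern with a labeled placeholder.
    for label, pattern in PII_PATTERNS.items():
        completion = pattern.sub(f"[REDACTED {label.upper()}]", completion)
    return completion

print(redact("Contact jean.dupont@cabinet.fr, IBAN FR7630006000011234567890189"))
```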
These are tradeoffs, not show-stoppers. For a typical EU law firm or compliance team, you trade a US-flagged SaaS vendor for verifiable Article 32 evidence and a 50–74% lower bill.
Who Should Care
- EU law firms processing client privilege material, M&A documentation, or litigation discovery through LLMs — the use-case the Paris sanction was about.
- Compliance and audit functions running confidential AI on financial, regulatory, or whistleblower data.
- Healthtech operating in the EU where Article 9 special-category data meets the EU AI Act's high-risk definitions.
- HR teams piloting AI-assisted hiring or performance review — high-risk under Annex III of the AI Act.
If any of those describe you, two starting points:
- The confidential computing primer for the architecture story.
- The Harvey AI comparison for the legal-specific version of this argument with side-by-side DPA terms.
FAQ
Is a signed DPA enough for GDPR-compliant AI in 2026?
No. Article 28 makes the contract necessary, but CNIL and LfDI rulings now expect verifiable Article 32 technical measures on top of it. For special-category data, that means evidence the sub-processor cannot read the data, not a promise that they won't.

Does VoltageGPU qualify as a sub-processor under Article 28?
Yes. VoltageGPU is EU-established (France, Solaize) and signs a GDPR-native DPA, with the TDX attestation quote attached as Schedule 2 evidence.

What about Schrems II and US sub-processors?
Processing stays pinned to EU regions (France and Germany), so no transfer to a US sub-processor occurs. Even under compulsion, an operator who only ever sees ciphertext has nothing usable to hand over.

Where exactly does the encryption happen?
In silicon. Trust Domain memory is encrypted with a per-TD AES-256-XTS key managed by the CPU itself, and TEE-IO encrypts the PCIe path between the CPU and the attested GPU.

What evidence can I show a CNIL auditor?
The signed attestation quote. Store it with the request id in your audit log; it is cryptographic proof, signed by Intel, that the enclave was genuine and unmodified when the data was processed.
Get a GDPR-grade pod in under 60 seconds
Don't take my word for it. Pull a TDX attestation quote yourself. $5 free credit, no credit card, EU-pinned by default.