Key Takeaways
- $9.31B → $15.15B. Mordor Intelligence sovereign cloud market sizing, 2024 to 2030. Forrester pegs European sovereign AI cumulative spend at €1.5 trillion through 2030.
- 62.74% CAGR. Combined Mordor and Gartner sourcing on the confidential AI sub-segment — the fastest-growing line item in the sovereign cloud category.
- Geopatriation. Gartner's emerging-trend term for moving workloads back from US-parent providers to sovereign regional clouds. Jurisdictional, not cost-driven. The DPO and audit committee are the buyers.
- August 2026. The EU AI Act extends general-purpose AI (GPAI) provider obligations to existing models. A procurement event for any EU-regulated buyer relying on US-parent inference.
- Technical maturity. Intel TDX 1.5, NVIDIA Confidential Computing on H100/H200/B200, and OpenAI-compatible APIs from EU controllers are production-ready simultaneously for the first time.
Five years ago, “sovereign cloud” was a French and German procurement preference and an analyst footnote. In 2026 it is a sized market, a Gartner emerging trend, and a regulatory event with a calendar date. This piece is a primary-source synthesis of where confidential AI stands in the middle of 2026 — what the numbers say, what the drivers are, what the technical stack looks like, and what to do about it if you are buying or building AI infrastructure in a regulated EU sector.
The shortest possible summary: a market that did not exist in 2018 is worth ~$9 billion in 2024 and projected to reach ~$15 billion by 2030, and the European sovereign AI category specifically attracts an estimated €1.5 trillion in cumulative spend through the end of the decade. The regulatory and geopolitical drivers are structural rather than cyclical. Gartner has given the trend a name — Geopatriation — and the technical stack to support it (Intel TDX, NVIDIA Confidential Computing, OpenAI-compatible APIs from EU controllers) has reached production maturity just as the August 2026 EU AI Act deadline arrives. That alignment is what makes this an inflection year rather than another sovereign-cloud announcement cycle.
Market sizing — what the primary sources actually say
Three syndicated reports anchor the numbers buyers reference in 2026. Mordor Intelligence sizes the sovereign cloud market at USD 9.31 billion in 2024, projecting USD 15.15 billion by 2030. The driver mix in the Mordor report is fairly evenly split between regulated industries (financial services, healthcare, public sector) and geopolitically exposed multinationals revisiting their cloud exposure to US-parent providers after the CLOUD Act and FISA 702 became persistent talking points in European tenders.
Forrester publishes a narrower estimate for European sovereign AI specifically — €1.5 trillion in cumulative spend through 2030. The category includes infrastructure (compute, storage, networking), platforms (vector stores, orchestration, agent runtimes), and applications (regulated industry agents, sovereign copilots). Forrester treats AI separately from generic sovereign cloud because the AI-specific compliance overhead — model cards, transparency notices, EU AI Act documentation packs — is itself a market driver that pulls procurement into the sovereign category even for buyers who would otherwise be comfortable with hyperscaler EU regions.
The confidential AI sub-segment — workloads running inside Intel TDX, AMD SEV-SNP or equivalent TEEs with per-session attestation — is growing at a 62.74% CAGR per combined Mordor and Gartner sourcing. This is the fastest-growing line item inside the sovereign cloud category and the one where VoltageGPU sits. The CAGR is higher than the headline sovereign cloud number because the confidential layer requires specific hardware and provider posture; supply is concentrated, demand is expanding rapidly, and the per-workload contract value is higher than commodity inference.
For context, the same period saw the global generative AI market itself grow from roughly $40 billion in 2024 toward an analyst-consensus $200-300 billion by 2030. The confidential AI slice is small in absolute terms but growing faster, which is the shape of a market moving from early-adopter regulated buyers into mainstream procurement.
Geopatriation — the term Gartner introduced
Gartner added Geopatriation to its emerging-trends taxonomy to describe a structural shift in cloud procurement: workloads moving back from globally distributed providers — typically US-parent hyperscalers — to sovereign regional clouds whose controlling legal entity is incorporated in the customer's regulatory zone. The term is deliberately distinct from the 2018-2022 cloud repatriation discussion.
Cloud repatriation in the 2018-2022 sense was a cost discussion. Buyers ran a TCO model against hyperscaler list price, decided that workloads in steady-state could run cheaper on bare-metal or colo, and moved them off. The buyer was the infrastructure team or FinOps, and the destination was typically on-premises or colocation.
Geopatriation is a jurisdictional discussion. Buyers run a legal exposure model against CLOUD Act, FISA 702, Schrems II, EU AI Act, DORA, NIS2 and SecNumCloud, decide that sensitive workloads cannot live in US-controlled inference, and move them to a sovereign regional cloud whose controller is incorporated in their regulatory zone. The buyer is the DPO, General Counsel and Audit Committee. The destination is a sovereign cloud, not on-premises.
The two trends share a verb (“move workloads back”) but the rest is different — drivers, buyers, destinations, success metrics. Confusing them leads to projects that optimise cost when the audit committee was worried about legal defensibility. The Geopatriation framing matters because it tells the project sponsor which department owns the decision and what the success criteria are.
Why 2026 is the inflection year — four compounding drivers
1. EU AI Act enforcement reaches existing models in August 2026. The Act phases in over three steps: February 2025 (prohibited practices), August 2025 (GPAI provider obligations on new models), August 2026 (GPAI obligations on existing models, plus most high-risk AI system obligations). From August 2026, any provider placing a general-purpose AI system on the EU market must publish model cards, transparency notices, copyright posture for training data, retention rules and post-market monitoring documentation. European buyers consistently report that they prefer a European controller who can respond directly under EU law rather than escalating to a US headquarters. The deadline is a hard forcing function.
2. DORA has been mandatory since January 2025. The Digital Operational Resilience Act requires controllable processors, EU sub-processor chains, demonstrable resilience, an ICT third-party register, and an exit plan. The 2024 EBA dry-run register collection had a 6.5% pass rate. The 2025-2026 enforcement cycle is forcing financial entities to redo vendor selection on AI workloads and prefer providers who can produce cryptographic evidence rather than vendor PDFs. We have written about this specifically in DORA Article 28 for AI vendors.
3. Schrems II remains structurally unresolved. The CJEU invalidated Privacy Shield in 2020. The 2023 EU-US Data Privacy Framework is under legal challenge and widely expected to be either invalidated or narrowed by the CJEU in the 2026-2027 window. Transferring personal data to US providers currently requires Standard Contractual Clauses plus a transfer impact assessment. An increasing number of European DPOs decline to sign that combination for sensitive workloads — privileged legal documents, patient records, financial models, public-sector data. The legal uncertainty pushes geopatriation up the priority list.
4. The technical stack reached production maturity simultaneously. Intel TDX 1.5 is shipping on Sapphire Rapids and Emerald Rapids. NVIDIA Confidential Computing is generally available on H100 and H200, with B200 rolling out through 2026. Per-session attestation flows are documented and integrated into the major cloud control planes. And the OpenAI SDK contract — chat completions, embeddings, images, function calling — accepts a custom base_url with no source code change. This last detail matters because the lock-in argument against moving off a US-parent provider no longer holds: SDK migration is a configuration change, not a rewrite.
Any one of these would push some workloads off US-parent inference. Together they make the trend structural rather than cyclical, and they line up around the August 2026 deadline as the natural cutover point.
What the technical stack looks like in 2026
Three layers, all production-ready.
Intel TDX (Trust Domain Extensions) handles the host enclave. Memory is encrypted with per-tenant AES-256 keys managed in silicon. The hypervisor, host operator and cloud provider are excluded from the trust boundary. The TDX module produces a signed quote on demand — an attestation report that proves which Trust Domain image was loaded on which physical CPU, with a measurement (MR_TD) that pins the workload identity.
NVIDIA Confidential Computing on Hopper (H100, H200) and Blackwell (B200) handles the GPU side. Protected PCIe encrypts the channel between the CPU and the GPU. Device memory is encrypted with keys held in the GPU's secure root of trust. The GPU produces its own attestation report binding device identity to a customer-controlled key. Combined with TDX, this means a sealed inference path from the user's API call through encrypted RAM to encrypted device memory and back, with no point at which a privileged cloud operator can read plaintext.
Per-session attestation binds the cryptographic evidence to the specific workload. The attestation quote includes the model image hash, the system prompt hash, the pod identity and the operator jurisdiction. An auditor can replay the quote against Intel's and NVIDIA's public roots of trust without trusting the cloud provider or the customer. This is what shifts compliance from contractual posture to technical proof.
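The measurement-replay step can be sketched in a few lines. This is a simplified illustration, assuming a quote whose signature has already been verified against Intel's and NVIDIA's public roots of trust; the field names (`mr_td`, `model_image_hash`, `prompt_hash`) are illustrative, not the actual TDX quote layout, and real verification uses Intel's DCAP tooling rather than hand-rolled hashing.

```python
import hashlib

def sha384_hex(data: bytes) -> str:
    # TDX measurement registers use SHA-384 digests.
    return hashlib.sha384(data).hexdigest()

def replay_quote(quote: dict, expected_td_image: bytes,
                 expected_model_image: bytes,
                 expected_system_prompt: bytes) -> bool:
    """Compare measurements embedded in a (signature-verified) attestation
    quote against locally computed hashes of the artifacts the auditor
    expects to have been loaded. Field names are illustrative."""
    expected = {
        "mr_td": sha384_hex(expected_td_image),
        "model_image_hash": sha384_hex(expected_model_image),
        "prompt_hash": sha384_hex(expected_system_prompt),
    }
    return all(quote.get(field) == value for field, value in expected.items())
```

The point of the sketch: the auditor recomputes hashes from artifacts they hold themselves, so neither the cloud provider nor the customer needs to be trusted for the comparison to be meaningful.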
On top of this stack sits an OpenAI-compatible REST API. The standard endpoints — chat.completions, embeddings, images, function calling — accept the same payloads as the OpenAI API. Migrating an existing integration is a base_url and API key swap in the SDK code, not a rewrite. We covered the attestation flow specifically in our TDX attestation verification guide and the underlying technology in Intel TDX deep dive.
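To make the "configuration change, not a rewrite" claim concrete, here is a minimal sketch of the request an OpenAI-compatible integration sends, using only the standard library. The sovereign endpoint URL below is a placeholder, not a real address; the only values that differ between providers are the base URL and the API key.

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str,
                       model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completions HTTP request. The payload shape is the
    same for any OpenAI-compatible provider; only base_url and api_key
    change when switching."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Same integration code, two providers — swapping the first two arguments
# is the entire migration. Endpoint and keys below are placeholders.
us_req = build_chat_request("https://api.openai.com/v1", "sk-us", "gpt-4o", "hello")
eu_req = build_chat_request("https://eu-provider.example/v1", "sk-eu", "qwen", "hello")
```

In practice the same swap is done by passing a custom `base_url` when constructing the OpenAI SDK client; the sketch above just makes visible that nothing else in the request changes.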
How hyperscalers are responding (and where they fall short)
Microsoft Cloud for Sovereignty, AWS European Sovereign Cloud, Google Cloud Sovereign Solutions, Azure Confidential VMs and AWS Nitro Enclaves are all responses to the same buyer signal. The hyperscaler offerings address some layers:
- Regional residency — EU regions exist and the data stays there for most standard operations.
- Contractual posture — sovereignty SKUs come with stricter DPAs, listed sub-processors and EU support staff.
- Partial hardware sealing — Azure offers Confidential VMs with AMD SEV-SNP and limited TDX preview, AWS offers Nitro Enclaves on CPU workloads, Google offers Confidential VMs with SEV-SNP.
What none of them solves is the parent company problem. Microsoft Ireland is a subsidiary of Microsoft Corporation (US). AWS EMEA Sàrl is a subsidiary of Amazon.com Inc (US). Google Belgium is a subsidiary of Alphabet Inc (US). The CLOUD Act applies to the parent regardless of which region holds the data. For tenders that explicitly screen out US-parent processors — and there are more of them every quarter — the hyperscaler sovereignty SKUs do not pass the legal filter.
The hyperscaler response confirms the trend without solving the underlying constraint. A buyer running a strict geopatriation policy needs an EU-incorporated provider, and the EU-incorporated providers running production-grade confidential AI in 2026 are a short list.
Where VoltageGPU positions
Disclosure: this is our piece, so we will be specific about where we sit. VoltageGPU is operated by VOLTAGE EI (France, SIREN 943 808 824). The controller is European. There is no US parent. Inference runs inside Intel TDX hardware enclaves on EU infrastructure with per-session attestation. The API is OpenAI-compatible (drop-in base_url swap). The documentation pack covers GDPR Article 28, the EU AI Act, DORA, NIS2, SecNumCloud, HDS and Gaia-X compatibility.
Plans run from $20/month (Plus, individual regulated user) through Starter ($349/month), Pro ($1,199/month, OpenAI-compatible API), Enterprise ($3,499+/month, SSO, SCIM, audit logs, named DPO contact) to Custom contracts up to $5,000+/month for dedicated regional clusters and Bring-Your-Own-Agent deployments. The full positioning is on Sovereign AI Cloud and the project methodology for moving workloads off US-parent providers is on Geopatriation.
The honest version: we are not the only EU-incorporated provider with hardware sealing, and we will not be the only one a year from now. Mistral and Aleph Alpha are real EU controllers with credible products, but neither offers per-session TEE attestation as a default product line. OVHcloud is SecNumCloud-qualified and Gaia-X compatible but does not ship a TEE inference API. We sit in the niche of EU controller plus hardware-sealed inference plus OpenAI-compatible API. That niche is small today, and it is the part of the market growing at 62.74% CAGR.
18-month forecast — what to actually expect
Three observable signals to track.
Procurement template change. RFPs in regulated EU sectors are starting to explicitly screen out US-parent processors for AI workloads, not just for traditional cloud. The language varies — “EU-incorporated controller”, “non-extraterritorial jurisdiction”, “SecNumCloud-aligned”, “Gaia-X compatible” — but the intent is consistent. By end of 2027 we expect this to be standard in financial services, healthcare and public sector tenders across the EU and in default policy for many European multinationals.
Audit findings. National competent authorities (BaFin, ACPR, CNIL, AEPD, Garante, MFSA) have published enough DORA and AI Act guidance that auditors now expect cryptographic evidence rather than vendor PDFs. The 6.5% DORA dry-run pass rate is the baseline; the next round will be higher, but the entities that move first will be the ones with attestation evidence in their register rows.
Workload mix. Chat completions, embeddings and retrieval flows are typically the first to geopatriate because the OpenAI SDK swap is mechanical. Fine-tuned models and agent workloads tend to migrate in a second wave on dedicated regional clusters, because the BYOA packaging takes longer to validate. The first wave runs through 2026; the second wave runs through 2027.
Net forecast: a multi-hundred-million-euro shift of AI inference spend out of US-parent providers into sovereign regional clouds by the end of 2027, with the largest absolute flow happening between Q3 2026 (EU AI Act GPAI deadline on existing models) and Q4 2027. The market that is $9 billion today reaches $15 billion by 2030 in the central Mordor case — which means roughly $1 billion of net new sovereign cloud spend per year through the decade, with confidential AI taking a disproportionate share because of its higher CAGR.
What to actually do — if you are buying or building
If you are buying: run an inventory. List your AI workloads on US-parent providers. Capture traffic volumes, data sensitivity, SLAs and existing DPA terms. Pick one workload — typically a chat completion or embedding flow — and run a parallel evaluation against an EU-incorporated confidential AI provider. Document the comparison. Take the evaluation pack to your DPO and audit committee. Most organisations discover 30-60% more US-controlled AI usage than the CISO assumed; the inventory itself is the most informative step.
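The inventory step above can start as something this simple. The workload records, provider names and parent-jurisdiction mapping below are illustrative examples, not real data; the point is that summing exposure by controlling jurisdiction is usually enough to surprise the CISO.

```python
from collections import Counter

# Toy inventory for the buying-side exercise. All values are illustrative.
WORKLOADS = [
    {"name": "support-chat",   "provider": "openai",       "monthly_tokens_m": 420},
    {"name": "doc-embeddings", "provider": "azure-openai", "monthly_tokens_m": 130},
    {"name": "kyc-agent",      "provider": "eu-sovereign", "monthly_tokens_m": 55},
]

# Map each provider to the jurisdiction of its controlling parent entity.
PARENT_JURISDICTION = {
    "openai": "US",
    "azure-openai": "US",
    "eu-sovereign": "EU",
}

def exposure_by_jurisdiction(workloads: list) -> dict:
    """Sum monthly token volume per controlling jurisdiction, showing
    where US-parent exposure concentrates."""
    totals = Counter()
    for w in workloads:
        totals[PARENT_JURISDICTION[w["provider"]]] += w["monthly_tokens_m"]
    return dict(totals)

# exposure_by_jurisdiction(WORKLOADS) → {"US": 550, "EU": 55}
```

A real inventory would also carry data sensitivity, SLA and DPA fields per workload, but even this shape is enough to pick the first candidate for a parallel evaluation.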
If you are building: the lock-in argument that kept teams on US-parent providers is gone. The OpenAI SDK accepts a custom base_url. Model quality on regulated workloads (legal, financial, medical, compliance) is within striking distance of GPT-4-class systems with Qwen3.5-397B-TEE and DeepSeek-R1-TEE. The marginal capability gap on the most exotic tasks is the trade against hardware confidentiality, EU jurisdiction and direct EU AI Act applicability. For most production workloads, the trade is the right one — and the buyer in your enterprise customer base will increasingly require it.
The honest summary of 2026: the sovereign cloud market is real, the Geopatriation trend is named and tracked, the regulatory deadlines are hard, the technical stack is mature, and the SDK lock-in is no longer a meaningful barrier. The question for every team running regulated AI workloads is not whether to move some workload off US-parent providers — it is which workload to move first.
Sources
- Mordor Intelligence, Sovereign Cloud Market 2024-2030 — sizing ($9.31B 2024 → $15.15B 2030), driver mix and regional split.
- Forrester, European sovereign AI commentary — €1.5 trillion cumulative spend through 2030.
- Gartner, emerging-trends taxonomy — Geopatriation as a named structural trend in cloud procurement.
- European Commission, EU AI Act phased enforcement calendar — February 2025, August 2025, August 2026.
- European Banking Authority, DORA dry-run register collection 2024 — 6.5% pass rate headline.
- ANSSI, SecNumCloud qualification register — current qualified providers as of 2026.
- Intel, TDX 1.5 specification and attestation flow — public documentation.
- NVIDIA, Confidential Computing on H100/H200/B200 — public technical documentation.
Continue reading — Sovereign AI Cloud pillar, Geopatriation methodology, DORA Article 28 for AI vendors, EU AI Act August 2026 compliance.