What confidential computing is really about
We’ve spent years obsessing over encrypting data at rest and in transit, then quietly accepting that as soon as an application actually uses that data, it’s sitting in plaintext in memory where the platform can see it. Confidential computing is basically an admission that this gap is no longer acceptable.
The Confidential Computing Consortium puts a formal label on it: protect data in use by running workloads inside a hardware-based, attested Trusted Execution Environment (TEE). The idea is simple: even if your code runs in someone else’s cloud, the cloud operator shouldn’t be able to look over its shoulder.
Trusted Execution Environments: the building block
All of this rests on TEEs. Different vendors describe them in slightly different language, but it’s the same mental model.
Azure talks about a segregated area of CPU and memory protected with encryption so that anything outside can’t read or tamper with it. Google calls TEEs secure, isolated environments that block unauthorized access or modification of applications and data while they run. IBM leans on the “secure enclave within a CPU” analogy.
Strip away the marketing and you get this: a hardware-isolated region of CPU and memory where:
- the hardware encrypts and integrity-protects everything inside it,
- only code inside that region can see the plaintext, and
- the TEE can produce a signed statement about what it’s running so others can verify it (attestation).
That last part is what turns this from “encrypted RAM” into something you can build real trust on.
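To make that concrete, here is a minimal sketch of what a signed attestation statement and its verification could look like. The report fields and the Ed25519 key are purely illustrative; real reports from SEV-SNP, TDX, or Nitro have vendor-specific formats and certificate chains. But the shape of the trust decision is the same: verify the signature before believing the measurements.

```python
# Minimal sketch of an attestation check (requires the 'cryptography'
# package). The report layout and field names are illustrative, not any
# vendor's actual report format.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- inside the TEE: hardware measures the code, then signs a report ---
hardware_key = Ed25519PrivateKey.generate()  # stands in for a fused CPU key
report = json.dumps({
    "launch_measurement": "9f2c...",  # hash of the loaded code and config
    "platform": "example-tee-v1",
    "nonce": "freshly-supplied-by-verifier",  # prevents replay of old reports
}).encode()
signature = hardware_key.sign(report)

# --- outside the TEE: check the signature before trusting the contents ---
try:
    hardware_key.public_key().verify(signature, report)
    print("report is authentic:", json.loads(report))
except InvalidSignature:
    print("report was tampered with; do not release secrets")
```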
How the big three clouds package TEEs
All three major public clouds build on these hardware features, but each packages them in its own way.
On Azure, the main entry point is confidential virtual machines. These run on processors with technologies like AMD SEV-SNP or Intel TDX. The VM’s memory is encrypted and isolated from the hypervisor and from Microsoft’s own operators. The sales pitch is straightforward: take an existing workload, put it in a confidential VM with minimal code changes, and now you have a data-in-use story that lines up better with “own your data” and regulatory requirements. Around that, Azure also has more fine-grained enclave options (historically Intel SGX) and services such as Confidential Ledger that themselves run inside TEEs.
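As a rough illustration of how small the change is, the switch lives in the VM's securityProfile. This fragment is a sketch, not a full template: the ConfidentialVM security type is the real setting, while the VM size and surrounding fields are assumptions for illustration.

```python
# Sketch of the relevant fragment of an Azure VM resource definition.
# securityProfile.securityType = "ConfidentialVM" is the actual switch;
# the size and the rest of the template are illustrative.
confidential_vm_fragment = {
    "properties": {
        "hardwareProfile": {"vmSize": "Standard_DC2as_v5"},  # AMD SEV-SNP series
        "securityProfile": {
            "securityType": "ConfidentialVM",
            "uefiSettings": {"secureBootEnabled": True, "vTpmEnabled": True},
        },
    }
}
```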
Google Cloud’s centre of gravity is also confidential VMs. From the outside they look like standard Compute Engine machines; under the covers they use hardware memory encryption and integrity protection, with keys generated and stored in dedicated hardware rather than exposed to the hypervisor. In a lot of cases, turning on confidential computing is literally a machine-type choice or a checkbox in the UI. On top of that you get confidential GKE nodes (Kubernetes nodes backed by confidential VMs) and Confidential Space, which is a fully managed “secure enclave in the cloud” aimed at multiparty analytics and joint machine learning. Confidential Space leans heavily on remote attestation so each party can verify the environment before sending in encrypted data.
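For a sense of how little that choice involves, here is a sketch of the Compute Engine API fields in play. The enableConfidentialCompute flag is the real one; the machine type and the rest of the request body are illustrative assumptions.

```python
# Sketch of a Compute Engine instance body with confidential computing
# enabled. Only confidentialInstanceConfig is the point here; everything
# else is a placeholder.
instance_body = {
    "name": "example-confidential-vm",
    "machineType": "zones/us-central1-a/machineTypes/n2d-standard-2",  # AMD-based
    "confidentialInstanceConfig": {"enableConfidentialCompute": True},
    "scheduling": {"onHostMaintenance": "TERMINATE"},  # live migration is restricted
}
```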
AWS takes a different route. Instead of pushing “confidential VMs” as the default, it focuses on Nitro Enclaves, built on the same Nitro system that isolates EC2 instances. A Nitro Enclave is a chunk of vCPUs and memory carved out of a parent EC2 instance. By design, it has no network, no persistent storage, and no direct shell access, not even from root on the parent. Data is passed in and out over a local secure channel; only the enclave code ever sees the plaintext. AWS points people at use cases like processing PII, healthcare or financial data, or running key management and cryptographic operations inside the enclave. Attestation comes from the Nitro hypervisor itself: an enclave can request a signed attestation document containing measurements of its image, and services like AWS KMS can require that proof before they release keys.
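That “no network, local secure channel” design shows up directly in code: parent and enclave talk over vsock. Here is a minimal sketch of the parent side, with an assumed CID and port (AF_VSOCK is Linux-only, and a real enclave would need to be listening).

```python
# Sketch of a parent EC2 instance handing data to a Nitro Enclave over
# vsock: no network, no disk, just a local channel. CID and port are
# illustrative; the CID is assigned when the enclave is launched.
import socket

ENCLAVE_CID = 16
ENCLAVE_PORT = 5000

with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
    s.connect((ENCLAVE_CID, ENCLAVE_PORT))
    s.sendall(b"ciphertext-or-request-bytes")
    reply = s.recv(4096)  # only the enclave code ever saw the plaintext
```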
Why attestation actually matters
The Confidential Computing Consortium definition emphasises “attested” TEEs for a reason. If you can’t prove what’s running inside the protected environment, you’re just hoping that some abstract hardware feature was configured correctly.
In a typical flow, the TEE starts up, measures its code and configuration, and produces an attestation report signed by hardware or a trusted service. Your verifier, often a key management or policy service, checks that report against an allow-list: correct CPU type, firmware, OS image, enclave hash, policy, region, and so on. Only if those checks pass does it release a decryption key or send sensitive data.
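In code, the verifier side of that flow boils down to something like the following sketch. The field names and allow-list values are invented for illustration; the point is that the key is only returned when every measured value matches a known-good one.

```python
# Sketch of the verifier side: compare measured values from an (already
# signature-checked) attestation report against an allow-list, and
# release the data key only if every check passes.
ALLOWED = {
    "platform": {"amd-sev-snp", "intel-tdx"},
    "launch_measurement": {"9f2c..."},  # known-good image hashes
    "region": {"eu-west-1"},
}

def release_key_if_trusted(report: dict, data_key: bytes) -> bytes | None:
    ok = all(report.get(field) in allowed for field, allowed in ALLOWED.items())
    return data_key if ok else None  # no match, no key, no data

# Example: a report measuring an unknown image gets nothing back.
assert release_key_if_trusted({"platform": "amd-sev-snp"}, b"key") is None
```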
Azure, Google Cloud, and AWS all build these flows into their confidential computing offerings. The details differ, but the trust shift is the same: you no longer just trust “this VM in this account”; you trust “this specific measured environment that I can verify on every run.”
What people actually use confidential computing for
Across the major platforms, the same patterns keep coming up. Organisations move regulated or highly sensitive data (PII, health records, financial transactions, proprietary models) into TEEs so that even cloud administrators cannot see it in use. They run key management and crypto operations inside enclaves or confidential VMs to keep private keys inside a small, well-defined boundary. In multiparty scenarios, they use things like Azure enclaves or Google's Confidential Space to combine datasets for analytics or machine learning without exposing raw data to any one party.
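The key-management pattern in particular reduces to a small interface: the private key is generated and held inside the TEE, and only signatures (never the key) cross the boundary. A hypothetical sketch, using the same 'cryptography' package as above:

```python
# Sketch of the key-management pattern: the private key lives only inside
# the TEE boundary. Hypothetical interface; real deployments run this as
# an enclave-side service reached over the local channel.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class EnclaveSigner:
    """Stands in for code running inside the TEE boundary."""

    def __init__(self) -> None:
        self._key = Ed25519PrivateKey.generate()  # never serialised out

    def sign(self, message: bytes) -> bytes:
        # Only signatures cross the boundary, never the private key.
        return self._key.sign(message)

    def verify_key(self) -> bytes:
        # The public half is safe to export for callers to verify against.
        return self._key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw
        )
```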
Independent write-ups often frame confidential computing as a privacy-enhancing technology: another tool, alongside encryption and differential privacy, for reducing how many people and systems can see the plain data.
Caveats and limits
None of this is free. TEEs introduce operational overhead: attestation flows to design, key-release policies to define, platform and region constraints to live with. Enclave-style models can require code changes and special SDKs; debugging and observability are trickier by design.
And TEEs don’t magically fix bad application security. If your code has logic bugs, injection flaws, or broken access control, those issues still exist inside the enclave. Confidential computing narrows who can see your data while it’s being processed; it does not remove the need for everything else you already know you should be doing.
What it does give you is a much clearer boundary: specific, verifiable environments, protected by hardware, that the big three clouds expose in slightly different ways but all built on the same core idea of a TEE.

