Confidential Computing: What is it and why do you need it?

Confidential Computing can significantly enhance enterprise security, but what is it, and what do you need to do about it?

Security remains one of the top three concerns for nearly every enterprise, whether it relies primarily on in-house data centers or has migrated to the cloud (fully public, hybrid, or multi-cloud). Application solution providers, machine suppliers, and cloud players have all gone to great lengths to enhance security.

But most past security efforts have centered on protecting data at rest or in transit through encryption. Indeed, encrypting data while it sits in a database, crosses a LAN/WAN, or moves through a 5G network is a key component of nearly every such system. Nearly every compute system, even a smartphone, has data encryption built in, accelerated by specialized compute engines in the processor chips. But one area that has been relatively ignored is that all of this encryption can be defeated if a bad actor can reach the device hardware, whether through a malicious app or a side-channel intrusion. Encrypted data must be in the clear while it is being processed, and this is a real vulnerability: anyone who can get to the machine memory at that point can view or copy all of the data. Eliminating this risk is the vision of Confidential Computing (CC). (I'm not a fan of the name, as it's pretty nondescript, but it appears we're stuck with it.)
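The gap is easy to demonstrate. In this minimal sketch (a toy XOR stream cipher for illustration only, not real cryptography), a record is safely opaque while encrypted at rest, but the moment it is decrypted for processing, the plaintext sits in ordinary memory where anything that can read that memory can see it:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream derived from the key; illustration only, NOT real crypto.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    # Same operation encrypts and decrypts (XOR with the keystream).
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"secret-key"
record = b"card=4111111111111111"

at_rest = xor(record, key)   # encrypted "at rest": ciphertext is opaque
assert record not in at_rest

in_use = xor(at_rest, key)   # decrypted so it can be processed
assert in_use == record      # plaintext now sits in RAM, readable by
                             # anything that can access this memory
```

Encryption at rest and in transit does nothing about that last step; that "data in use" window is exactly what CC targets.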

What is it?

In 2019, the Linux Foundation launched the Confidential Computing Consortium. Its stated goal is to define standards for confidential computing and to support and propagate the development of open-source CC tools and frameworks. Members include Alibaba, AMD, Arm, Facebook, Fortanix, Google, Huawei, IBM (Red Hat), Intel, Microsoft, Oracle, Swisscom, Tencent, and VMware. While several of these companies already have tools available, it's likely that, given the Linux Foundation's involvement, those will eventually be rolled up into a more open-source framework for CC. The foundation has stated that "the Consortium is concentrating on the area of 'data in use,' with the confidentiality of 'data in transit' and 'data at rest' as outside the scope of the Consortium." Contributions to the Confidential Computing Consortium, according to its website, already include:

  • Software Guard Extensions (Intel SGX) SDK, designed to help application developers protect select code and data from disclosure or modification at the hardware layer using protected enclaves in memory.
  • Open Enclave SDK, an open-source framework that allows developers to build Trusted Execution Environment (TEE) applications using a single enclaving abstraction, so an application can be built once and run across multiple TEE architectures.
  • Enarx, a project providing hardware independence for securing applications using TEEs.
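The "single enclaving abstraction" idea is worth unpacking. The conceptual sketch below (the names are hypothetical, not the actual Open Enclave SDK or Enarx API) shows the shape of it: application code targets one interface, and a backend maps that interface onto whichever TEE the hardware actually provides:

```python
from abc import ABC, abstractmethod

class EnclaveBackend(ABC):
    """One abstraction; each subclass would wrap a real TEE's primitives."""
    @abstractmethod
    def create(self, image: bytes) -> int: ...
    @abstractmethod
    def call(self, enclave_id: int, fn: str, arg: bytes) -> bytes: ...

class SgxBackend(EnclaveBackend):
    # Stub standing in for Intel SGX enclave creation and entry calls.
    def create(self, image): return 1
    def call(self, enclave_id, fn, arg): return b"sgx:" + arg

class TrustZoneBackend(EnclaveBackend):
    # Stub standing in for Arm TrustZone secure-world invocation.
    def create(self, image): return 1
    def call(self, enclave_id, fn, arg): return b"tz:" + arg

def run_protected(backend: EnclaveBackend, payload: bytes) -> bytes:
    # Application code is written once against the abstraction.
    eid = backend.create(b"enclave-image")
    return backend.call(eid, "process", payload)
```

The same `run_protected` logic runs unchanged whether it is handed `SgxBackend()` or `TrustZoneBackend()`, which is the portability these SDKs are after.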

As this is an ongoing "work in progress" standardization effort, more projects will likely emerge, but all should eventually be embedded into an open-source framework for CC.

Why is CC different?

Unlike data encryption, CC uses hardware-based functions to create a "Trusted Execution Environment" (TEE) for data, compute functions, or an entire application. CC isolates this vaulted area from access by the OS or the virtual machine, protecting against cross-contamination: nothing that isn't assigned to the TEE can gain access, and any attempt to alter the app code or tamper with the data will be blocked. This is especially critical in multi-user systems, such as virtualized and public-cloud systems, where cross-contamination of data is a real risk. Indeed, some potential users of public cloud compute have resisted moving for this specific reason. A "side-channel" attack remains a slight possibility for someone with physical access to the hardware, but that risk is trivial compared to the risks associated with non-CC systems.
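A key part of this trust model is remote attestation: before releasing data or secrets into a TEE, the data owner verifies a hardware-signed measurement (hash) of the code running inside it. The sketch below is a simplification under stated assumptions: it models the hardware root key with a shared-secret HMAC, whereas real TEEs use asymmetric keys and vendor certificate chains, and the names are illustrative:

```python
import hashlib
import hmac

HARDWARE_KEY = b"burned-into-the-chip"  # stand-in for the device root key

def measure(enclave_code: bytes) -> bytes:
    # The "measurement" is a cryptographic hash of the enclave's code.
    return hashlib.sha256(enclave_code).digest()

def attest(enclave_code: bytes) -> tuple[bytes, bytes]:
    # Hardware signs the measurement of whatever is actually loaded.
    m = measure(enclave_code)
    return m, hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest()

def release_secret(expected_code: bytes, measurement: bytes, sig: bytes) -> bool:
    # Data owner checks both the signature and that the measured code
    # is exactly the code it expects, before handing over anything.
    sig_ok = hmac.compare_digest(
        hmac.new(HARDWARE_KEY, measurement, hashlib.sha256).digest(), sig)
    return sig_ok and measurement == measure(expected_code)

code = b"trusted enclave code"
m, sig = attest(code)
assert release_secret(code, m, sig)             # expected code: release
assert not release_secret(b"tampered", m, sig)  # altered code: refuse
```

This is why tampering with the app code defeats itself: a modified enclave produces a different measurement, and the data owner simply never sends it the data.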


Why now?

Trusted Execution Environments are the key to making CC work. We've had TEEs for some time, on Arm-based chips (TrustZone) as well as in x86 chips (e.g., Intel SGX). Indeed, early versions of the concept go back more than a decade to the Trusted Platform Module (TPM) available in many PCs. The difference with modern TEEs is that they are built into the core of the chips, not bolted on as external add-ons that could be compromised over the interconnections.

Although TEE-enabled systems have been available for some time, few enterprises have sought to use them, and many app providers don't support them either. The reason is that they have always been difficult to implement: an application needed specific code to make use of a TEE. Further, TEEs were not universally available on all processors (e.g., some Intel Xeon chips support SGX and some don't), nor were TEEs compatible across chip families. The result is that many organizations did not implement what could be a very important security method.

What's changed?

With the move to off-premises, multi-tenant cloud computing, there is now a greater need to protect the processing integrity of customer data, as well as proprietary algorithms running in those processes. As a result, cloud providers are making it easy to spin up new CC instances for customers to use, which eliminates the need for organizations to run their own CC-enabled systems. It's a win-win: customers get the protection their data assets need, and cloud providers supply the hardware assets that customers don't necessarily own themselves. This new availability is being brought about by an increasing number of processors with the CC concepts built in. And because cloud providers generally obtain new high-end processing capability early in its availability, users gain access far more rapidly than if they had to acquire it on their own. It also enables app providers to design CC into their products quickly, given the availability of hardware and toolkits running in the cloud, and gives them a readier market in which to recover their development investment.

What should companies do?

The concepts behind CC are not new, but the availability of TEEs and CC in the cloud makes them much more attractive to organizations that need to secure their data from application vulnerabilities. I recommend that enterprises explore the use of CC techniques in the next 6-12 months, and specify to their key application solution providers that they expect them to comply with the CC strategy and offer technology implementations within the same time period. CC can significantly enhance enterprise security by virtually eliminating the ability of data in process to be exploited. While there is no 100% sure thing when it comes to security, CC is a major step forward and should be implemented whenever possible, particularly for organizations deploying applications in the cloud. I expect CC to become a standard approach to compute, especially in the cloud, within the next 1-2 years.