Intel SGX and why it matters


Security, privacy and trust have never been as important in enterprise computing as they are right now. Data has become the fuel powering organisations in their mission to compete and thrive, yet it also poses real challenges for enterprises. Through a single breach, valuable IP, personal information or sensitive business information can be exposed. Data privacy and security regulations give people sovereignty over their data, but also hold back projects that could transform healthcare or help in the battle against crime. There are situations where closer collaboration could benefit several organisations in a sector, yet they avoid it over fears that IP or data could leak out.

Over the last decade, the widespread adoption of stronger security practices, combined with developments in hardware and software, has enabled enterprises to mitigate some cybersecurity and privacy risks. System-level security protects platforms from attack from the BIOS level upwards, while accelerated encryption and hardware-based authentication offer more security for data in transit between systems and at rest. Yet there’s still one area where data can easily remain vulnerable and exposed – when it’s actually in use.

Here’s the problem: for data to be processed by an application, it needs to be decrypted in memory, pulled in for processing, and the results of that processing written back out. At all points during this cycle, the data is effectively in the clear: visible to the operating system or hypervisor, to other compute resources and to any party with system-level access to the memory and CPU. This might not sound like a huge security problem, but there are multiple scenarios where it is. Any vulnerability that could be exploited to provide low-level access to the OS, the hypervisor, the CPU or RAM could expose data while in use. Bad actors working within an organisation could potentially gain unauthorised access.

Most seriously, there is always the risk that data running on a system in a third-party data centre or the cloud could be visible and accessible to an insider attack. This is a deal-breaker for many enterprises that want to make the most of the cost efficiencies and enormous compute resources of the cloud, because it means trusting your most precious data – your IP, your security keys, your applications, your business data and your customers’ data – to systems where you don’t have full control.

This restricts organisations and prevents them from taking advantage of new services and applications that could help transform their business. It causes organisations in strongly regulated sectors, such as finance or healthcare, to abandon initiatives that might enable them to take advantage of cutting-edge AI technologies or collaborate with their peers. It’s a problem for companies working on driverless vehicles or running machine learning and analytics on data from IoT devices, where it makes more sense to run data-heavy compute operations nearer the edge.

As Richard Curran, Security Officer for the Datacenter Group at Intel puts it, ‘If data needs to be trusted, it needs to be protected at rest, it needs to be protected in flight, and it needs to be protected in use.’ The lack of security and privacy controls around data in use has become an impediment for cloud computing, while leaving vulnerabilities open within the data centre.

Why confidential computing is crucial

Confidential computing is all about removing these impediments and vulnerabilities, by using hardware-based techniques to isolate data and workloads from the rest of the system and running them within a trusted execution environment (TEE). Data processed within the TEE can’t be seen by the operating system, other applications or other resources, or by those who have access to the hardware. Curran likens the overall impact on security to the difference between storing valuables inside the safe in a hotel lobby and storing them inside the safe in your hotel room. With the former, there’s a high level of security, but always some risk of insider threat. With the latter, only you have the code to unlock the safe, so you can control who can access what’s inside.

This puts the enterprise back in control of its workloads and its data, whether running on-premises, in a third-party data centre or inside the cloud. What’s more, this capability is available right now, through Intel’s Software Guard Extensions technology, or SGX.

How SGX works

SGX works by splitting an application’s processes into two parts: secure and non-secure. The non-secure part runs as normal, but the secure part is encrypted and moved to what’s called an enclave: a protected, private region of system memory, isolated from other processes, other applications, the OS and the hypervisor. There’s no way of seeing what’s going on inside the enclave, but the processes within it can feed data back to the non-secure part of the application through specific channels. What’s more, the data can be sealed to ensure that it isn’t manipulated or tampered with.
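To make that split concrete, here is a minimal sketch of the non-secure (“untrusted”) side of an application built with the Intel SGX SDK. The enclave file name, the generated header name and the ecall_process_data() call are illustrative assumptions for this sketch; in a real project the developer defines that interface in an EDL file and the SDK’s edger8r tool generates the matching proxy.

```cpp
// Untrusted host side: loads the signed enclave and calls into it through a
// generated ECALL proxy. "enclave.signed.so" and ecall_process_data() are
// hypothetical names used here for illustration.
#include <cstdio>
#include <sgx_urts.h>
#include "enclave_u.h"   // generated untrusted proxy header (assumed name)

int main() {
    sgx_enclave_id_t eid = 0;
    sgx_launch_token_t token = {0};
    int token_updated = 0;

    // Load the signed enclave image into a protected region of memory (the enclave).
    sgx_status_t ret = sgx_create_enclave("enclave.signed.so", /*debug=*/1,
                                          &token, &token_updated, &eid, nullptr);
    if (ret != SGX_SUCCESS) {
        std::fprintf(stderr, "sgx_create_enclave failed: 0x%x\n", ret);
        return 1;
    }

    // ECALL: the only way into the enclave is through the narrow interface
    // declared in the EDL file. The OS and hypervisor cannot see what happens
    // inside; only the declared result crosses back out.
    int result = 0;
    ret = ecall_process_data(eid, &result);   // hypothetical ECALL for illustration
    if (ret == SGX_SUCCESS)
        std::printf("result from enclave: %d\n", result);

    sgx_destroy_enclave(eid);
    return 0;
}
```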

With SGX, protected data is never out in the open. It’s only decrypted while inside the physical processor core, using keys generated at boot time and stored within the CPU. The window of opportunity to inspect it or manipulate it is effectively closed. As Curran says, protecting data in the enclave ‘helps prevent many software attacks, minimises the trusted compute block, and provides increased protection for secrets. In other words, whether you put keys in there or data, even when the attacker has full control of the platform, they cannot see the contents.’
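Sealing looks something like the following trusted-side sketch, which uses the SGX SDK’s sealing API to wrap a secret with a key derived inside the CPU, so the resulting blob can be stored outside the enclave without being readable or silently modified. The ECALL name and buffer handling are assumptions for illustration; the sgx_tseal.h calls are the SDK’s own.

```cpp
// Trusted (in-enclave) side: seal a secret so it can be stored outside the
// enclave. The sealing key is derived inside the CPU and never leaves it.
// ecall_seal_secret is a hypothetical function name for this sketch.
#include <cstdint>
#include <sgx_tseal.h>

// Returns 0 on success; writes the sealed blob into the caller-supplied buffer.
int ecall_seal_secret(const uint8_t* secret, uint32_t secret_len,
                      uint8_t* sealed_out, uint32_t sealed_capacity) {
    uint32_t sealed_size = sgx_calc_sealed_data_size(0, secret_len);
    if (sealed_size == UINT32_MAX || sealed_size > sealed_capacity)
        return -1;

    // Encrypt-and-MAC the secret with a key bound to this CPU and this enclave's
    // identity; any tampering with the blob is detected when it is unsealed.
    sgx_status_t ret = sgx_seal_data(0, nullptr, secret_len, secret,
                                     sealed_size,
                                     reinterpret_cast<sgx_sealed_data_t*>(sealed_out));
    return (ret == SGX_SUCCESS) ? 0 : -1;
}
```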

The other key aspect of SGX is attestation, or the ability to verify that a specific piece of code ran or will run, unmodified, inside a specific, secure enclave. Through this, developers can guarantee that their application is communicating with the enclave, and that this enclave hasn’t been simulated or tampered with for the purposes of inspecting or modifying the data. What’s more, attestation proves that any results coming back from the enclave come unaltered from the same trusted source.
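For attestation between two enclaves on the same machine, the SDK exposes a hardware-signed REPORT mechanism along the lines of the sketch below; remote attestation builds on the same structure, with a quoting enclave turning the report into a quote that a remote party can verify. The ECALL names here are hypothetical; sgx_create_report() and sgx_verify_report() are SDK calls.

```cpp
// Local-attestation sketch (trusted side): one enclave produces a REPORT
// targeted at a verifying enclave, which checks that it came from genuine SGX
// hardware on this platform and has not been altered.
#include <cstring>
#include <sgx_utils.h>

// Produce a report, binding 64 bytes of user data (for example, a hash of a
// freshly generated public key) into it so later messages can be tied to it.
int ecall_make_report(const sgx_target_info_t* target,
                      const uint8_t user_data[64],
                      sgx_report_t* report_out) {
    sgx_report_data_t report_data;
    std::memcpy(report_data.d, user_data, sizeof(report_data.d));
    return (sgx_create_report(target, &report_data, report_out) == SGX_SUCCESS) ? 0 : -1;
}

// On the verifying enclave: confirm the report's integrity and origin.
int ecall_check_report(const sgx_report_t* report) {
    return (sgx_verify_report(report) == SGX_SUCCESS) ? 0 : -1;
}
```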

This attestation process provides confidence when applications are running with SGX protection on premises, but it’s even more important when organisations are using applications running from, say, a co-located data centre or a platform running in the public cloud. Through the use of secure enclaves and attestation, you can be sure that you’re sending data to an enclave where it’s fully secured and protected, and you can verify the integrity of data coming back. As Intel’s Richard Curran puts it, ‘You’ve got that foundation of trust built into the platform, and everything on top of that around resilience, visibility and having a trusted compute capability – in other words, an attestation level of trust associated with the infrastructure. You can now protect your data.’

How SGX evolves

Now, Intel SGX isn’t actually a new technology; Intel introduced it with the Skylake-based Xeon E3 processors back in 2015, and it’s already in use in corporate data centres and with cloud service providers around the world. What’s changing now, though, is that SGX is losing some of its biggest limitations and entering the data centre and high-performance computing mainstream.

For one thing, SGX is now a core part of Intel’s third-generation Xeon Scalable processors, based on Intel’s Ice Lake architecture. This puts SGX into two-socket capable systems for the first time and enables SGX to take advantage of the new hardware crypto-acceleration features built into Ice Lake. For another, where the enclave size was limited on previous generations – first to 128MB, then 256MB – enclaves can now reach 1TB in size. This, and the enhanced performance of the new Xeon Scalable processors, makes SGX a lot more compelling for workloads that involve large datasets, including medical imaging or machine learning with audio and video.
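If you’re curious what a given machine reports for SGX support and protected memory (the Enclave Page Cache that bounds those enclave sizes), a CPUID probe along the following lines gives a rough answer. The leaf and bit positions follow Intel’s documentation, but treat this as an illustrative sketch rather than a definitive capability check; SGX must also be enabled in the BIOS for the reported values to be meaningful.

```cpp
// Sketch: report SGX support and the total Enclave Page Cache (EPC) that
// CPUID exposes. Leaf/bit layouts follow the Intel SDM (leaves 07H and 12H);
// illustrative only, x86 with GCC/Clang assumed.
#include <cpuid.h>
#include <cstdint>
#include <cstdio>

int main() {
    unsigned eax, ebx, ecx, edx;

    // CPUID.(EAX=07H, ECX=0): EBX bit 2 indicates SGX support.
    if (!__get_cpuid_count(0x07, 0, &eax, &ebx, &ecx, &edx) || !(ebx & (1u << 2))) {
        std::puts("SGX not reported by CPUID");
        return 1;
    }

    // CPUID.(EAX=12H, ECX=2, 3, ...): each valid sub-leaf describes one EPC section.
    std::uint64_t total_epc = 0;
    for (unsigned sub = 2; ; ++sub) {
        if (!__get_cpuid_count(0x12, sub, &eax, &ebx, &ecx, &edx))
            break;
        if ((eax & 0xf) != 1)   // 1 = valid EPC section; 0 = end of the list
            break;
        // Size = EDX[19:0] as bits 51:32, ECX[31:12] as bits 31:12.
        std::uint64_t size = ((std::uint64_t)(edx & 0xfffff) << 32) | (ecx & 0xfffff000u);
        total_epc += size;
    }
    std::printf("SGX supported; EPC reported by CPUID: %llu MiB\n",
                (unsigned long long)(total_epc >> 20));
    return 0;
}
```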

There are still performance overheads and limitations, which might impact compute-intensive applications, but these are decreasing all the time, with the performance impact falling from 20 to 30% in the first generation to under 10% for many workloads now. What’s more, as the Xeon Scalable architecture evolves, SGX will be primed to take advantage of new memory protection features and new AI and cryptographic accelerators.

Using SGX

These technical improvements are welcome, but SGX is also becoming more accessible to enterprises of almost any size. The core functionality required – a supporting CPU and system BIOS – is now built into mainstream server and data centre hardware from the world’s leading manufacturers, with software support available through Intel and Microsoft SDKs. On top of that, there’s a whole ecosystem of Intel partners, including Anjuna, Graphene, Scontain and Fortanix, that can help enterprises rebuild their applications with SGX protection or ‘lift and shift’ them to SGX-protected containers, cloud services or virtual machines.

And if you want to use SGX in the cloud, most of the world’s biggest providers are now on board, including IBM with Cloud Data Shield, Alibaba, Swisscom and Microsoft with Azure Confidential Computing. Firms looking for a private cloud solution can even run SGX on virtual machines or bare metal servers from G Core Labs or OVH.

In other words, the hardware, the software, the tools and the services are now here to protect your data while in use. But to understand why this is so important, you have to look at the opportunities SGX is opening up. In part two of this series, we’ll do just that.

Learn more about Intel SGX and confidential computing
