Hardware Acceleration of Fully Homomorphic Encryption: Making Privacy-Preserving Machine Learning Practical

Ahmad Al Badawi, David Bruce Cousins, Yuriy Polyakov, and Kurt Rohloff

The Problem: Protecting Sensitive Data

Organizations are increasingly collaborating on data, often using the cloud, to address some of the most important challenges of our time. Protecting privacy is crucial, particularly when handling sensitive data such as personally identifiable information (PII), personal health information (PHI), intellectual property, and intelligence insights. 

Data has three basic states: at rest, in transit, and in use. Typically, sensitive data is encrypted or otherwise protected while being stored (at rest) and transmitted (in transit). However, whenever the data is processed (in use), it must first be decrypted, leaving it vulnerable to cyberattacks.

Fully Homomorphic Encryption

Often described as the “holy grail” of encryption technologies, Fully Homomorphic Encryption (FHE) enables arbitrary computations over encrypted data without decrypting it at all, potentially solving the data-in-use problem. 
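To make the idea concrete, here is a toy Python sketch of an *additively* homomorphic scheme (Paillier, a precursor to FHE) with deliberately tiny, insecure parameters: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can combine encrypted values it cannot read. Fully homomorphic schemes extend this to arbitrary computation, supporting both addition and multiplication on ciphertexts.

```python
import math, random

# Toy Paillier cryptosystem (additively homomorphic). Illustrative only:
# the primes below are far too small for security, and real FHE schemes
# (e.g., BGV/BFV/CKKS) are lattice-based rather than Paillier-style.

p, q = 293, 433                 # tiny primes, demonstration only
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = math.lcm(p - 1, q - 1)    # Carmichael function lambda(n)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(17), encrypt(25)
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))  # 42, computed without ever decrypting c1 or c2
```

The server that computes `c_sum` never sees 17, 25, or 42; only the holder of the secret key can decrypt the result.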

FHE for Machine Learning

One of the most promising application domains for FHE is machine learning (ML). ML has advanced rapidly in recent years, with the number of applications multiplying to include medicine, finance, natural language processing, and more. ML often requires collaboration among several parties, and the need to decrypt data for processing and analysis (the in-use state referred to above) creates vulnerability. Given the often sensitive nature of the data, performing ML with FHE is an effective privacy-preserving solution.

Challenges of FHE for ML

When FHE is used for intensive computations like those required for ML training, functional challenges arise that demand additional complex operations, such as bootstrapping, which enables arbitrarily long chains of computation on encrypted data. FHE with bootstrapping is required to perform most ML training tasks.

While bootstrapping addresses many functional challenges, it requires prohibitive amounts of compute power and time. For example, an unencrypted computation might take hundreds of milliseconds to complete on a standard laptop, yet could take many hours when running with FHE on a high-end server. 

The Solution: Hardware Acceleration

The most promising efforts to make bootstrapping in FHE practical are focused on acceleration via hardware platforms. FHE workloads exhibit a high level of task and data parallelism that can be exploited by parallel processors. A low-cost but computationally efficient and highly optimized hardware co-processor is an ideal platform for accelerating the execution of core FHE operations.
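One concrete source of that parallelism: most FHE runtime is spent in modular polynomial arithmetic, typically performed via the number-theoretic transform (NTT). The toy Python sketch below (tiny parameters; real schemes use ring dimensions in the tens of thousands and much larger moduli) shows the structure: every coefficient of the transform can be computed independently, which is exactly what GPUs, FPGAs, and ASICs exploit.

```python
# Toy NTT-based polynomial multiplication mod q. The NTT is the workhorse
# kernel of lattice-based FHE; parameters here are illustrative only.

Q = 257                      # prime with Q - 1 divisible by N
N = 8                        # transform size (power of two)
W = pow(3, (Q - 1) // N, Q)  # primitive N-th root of unity mod Q

def ntt(a, w):
    # Naive O(n^2) transform for clarity; production code uses
    # O(n log n) butterfly networks, which parallelize well in hardware.
    return [sum(a[j] * pow(w, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def intt(a):
    n_inv = pow(N, -1, Q)
    return [(x * n_inv) % Q for x in ntt(a, pow(W, -1, Q))]

def poly_mul(a, b):
    # Cyclic convolution: pointwise multiplication in the NTT domain.
    fa, fb = ntt(a, W), ntt(b, W)
    return intt([(x * y) % Q for x, y in zip(fa, fb)])

# (1 + x) * (1 + x) = 1 + 2x + x^2  (mod x^8 - 1, mod 257)
print(poly_mul([1, 1, 0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0]))
```

Because the pointwise products and the per-coefficient transform sums are independent, the kernel maps naturally onto the wide SIMD lanes and parallel processing elements of the hardware backends discussed below.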

Figure ES-1 shows the achieved and projected speedup from hardware acceleration of FHE using several different hardware backends, including FPGA, GPU, and ASIC, which are further discussed in this paper. The target speedup is the level we believe makes generic encrypted computation practical.


Figure ES-1: Achieved and projected speedup from hardware acceleration of FHE using different hardware backends. Multi-core CPU and GPU results are from [APAV+19], CPU-AVX-512 results are from [BKSD+21], and FPGA results are from [CRS16]. ASIC results are projected based on preliminary results of our ongoing research (to be published soon).

Hardware acceleration poses several significant challenges, including both computational and memory bottlenecks. Solving these problems has transformative potential for enabling privacy-preserving ML using FHE.

Cryptographic Software Framework: The OpenFHE Library

Hardware acceleration requires a reliable software implementation of the target functionalities on general-purpose CPUs. One prominent FHE software library is OpenFHE. A community-driven, open-source project, the OpenFHE library has a diverse group of contributors from both industry and academia, including Duality, Samsung, Intel, MIT, UCSD, and others. With simpler APIs, modularity, cross-platform support, and integration of hardware accelerators, it is a resource for organizations, including providers of advanced hardware capabilities, to engage with the expanding field of FHE. Providers do not need to be cryptography experts, as OpenFHE simplifies access to many of the complicated cryptographic capabilities.

Hardware Abstraction Layer

Unique among current FHE libraries, OpenFHE provides a standard Hardware Abstraction Layer (HAL) designed to support different hardware acceleration technologies, such as Advanced Vector Extensions (AVX), Graphics Processing Units (GPU), Field-Programmable Gate Arrays (FPGA), and Application-Specific Integrated Circuits (ASIC). The Intel HEXL library backend is a working example of a HAL instantiation in OpenFHE.
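The design idea behind a HAL can be sketched as follows. This is not OpenFHE's actual interface (which is defined in C++); it is a minimal illustrative Python sketch, with hypothetical names, of the pattern: core lattice operations are written against a common interface, and a backend is selected without changing any calling code.

```python
# Illustrative sketch of the hardware-abstraction-layer idea. All class and
# function names here are hypothetical, NOT OpenFHE's real API: the point is
# that callers depend only on the abstract interface, so a vendor can plug
# in an AVX/GPU/FPGA/ASIC backend behind it.

from abc import ABC, abstractmethod

class MathBackend(ABC):
    @abstractmethod
    def modadd(self, a, b, q):
        """Element-wise addition of coefficient vectors mod q."""

class ReferenceBackend(MathBackend):
    # Portable CPU implementation, always available.
    def modadd(self, a, b, q):
        return [(x + y) % q for x, y in zip(a, b)]

class AcceleratorBackend(MathBackend):
    # A real backend would dispatch to vendor hardware here; this stand-in
    # just reuses the reference arithmetic so the sketch stays runnable.
    def modadd(self, a, b, q):
        return [(x + y) % q for x, y in zip(a, b)]

def get_backend(name):
    return {"ref": ReferenceBackend, "accel": AcceleratorBackend}[name]()

backend = get_backend("ref")   # swap in "accel" with no other code changes
print(backend.modadd([1, 2, 3], [4, 5, 6], 7))  # prints: [5, 0, 2]
```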

The Opportunity for Hardware Acceleration Providers

Forward-looking providers of advanced hardware capabilities are increasingly exploring opportunities to accelerate FHE. Given the potential of hardware acceleration to make FHE practical in a variety of applications, this market trend is expected to grow. Such providers can refer to OpenFHE as the library to support their backends.

To learn more, download our white paper, co-written by Ahmad Al Badawi, David Bruce Cousins, Yuriy Polyakov, and Kurt Rohloff: Hardware Acceleration of Fully Homomorphic Encryption: Making Privacy-Preserving Machine Learning Practical.
