GAZELLE: A Low Latency Framework for Secure Neural Network Inference

Chiraag Juvekar, Vinod Vaikuntanathan, Anantha Chandrakasan

arXiv:1801.05507v1

Abstract:

The growing popularity of cloud-based machine learning raises a natural question about the privacy guarantees that can be provided in such a setting. Our work tackles this problem in the context where a client wishes to classify private images using a convolutional neural network (CNN) trained by a server. Our goal is to build efficient protocols whereby the client can acquire the classification result without revealing its input to the server, while guaranteeing the privacy of the server’s neural network. To this end, we design GAZELLE, a scalable and low-latency system for secure neural network inference, using an intricate combination of homomorphic encryption and traditional two-party computation techniques (such as garbled circuits). GAZELLE makes three contributions. First, we design the GAZELLE homomorphic encryption library, which provides fast algorithms for basic homomorphic operations such as SIMD (single instruction multiple data) addition, SIMD multiplication, and ciphertext permutation. Second, we implement the GAZELLE homomorphic linear algebra kernels, which map neural network layers to optimized homomorphic matrix-vector multiplication and convolution routines. Third, we design optimized encryption switching protocols which seamlessly convert between homomorphic and garbled circuit encodings to enable the implementation of complete neural network inference. We evaluate our protocols on benchmark neural networks trained on the MNIST and CIFAR-10 datasets and show that GAZELLE outperforms the best existing systems such as MiniONN (ACM CCS 2017) by 20× and Chameleon (Crypto Eprint 2017/1164) by 30× in online runtime. Similarly, when compared with fully homomorphic approaches such as CryptoNets (ICML 2016), we demonstrate three orders of magnitude faster online runtime.
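The abstract mentions packed SIMD homomorphic operations (slot-wise addition and multiplication plus ciphertext permutation) and linear algebra kernels built on top of them. The following is a minimal, plaintext-only sketch of how such a rotation-based packed matrix-vector product can be structured, using ordinary Python lists to stand in for ciphertext slot vectors. The function names and the generalized-diagonal layout are illustrative assumptions for exposition, not GAZELLE's actual library API or its optimized kernels.

```python
# Plaintext mock-up of packed (SIMD) homomorphic linear algebra.
# Lists stand in for ciphertext slot vectors; `rotate` stands in for a
# ciphertext permutation. Illustrative only -- not GAZELLE's API.

def simd_add(a, b):
    """Slot-wise addition (one SIMD homomorphic add)."""
    return [x + y for x, y in zip(a, b)]

def simd_mul(a, b):
    """Slot-wise multiplication (one SIMD homomorphic multiply)."""
    return [x * y for x, y in zip(a, b)]

def rotate(v, k):
    """Cyclic left rotation of the slot vector (a ciphertext permutation)."""
    k %= len(v)
    return v[k:] + v[:k]

def matvec_diagonal(matrix, vec):
    """n x n matrix-vector product in a rotation-based 'diagonal' layout:
    the i-th generalized diagonal of the matrix multiplies the input
    rotated by i slots, and the partial products are summed slot-wise.
    Costs n SIMD multiplies, n - 1 SIMD adds, and n - 1 rotations."""
    n = len(vec)
    acc = [0] * n
    for i in range(n):
        diag = [matrix[j][(j + i) % n] for j in range(n)]  # i-th diagonal
        acc = simd_add(acc, simd_mul(diag, rotate(vec, i)))
    return acc

if __name__ == "__main__":
    M = [[1, 2, 0],
         [0, 1, 3],
         [4, 0, 1]]
    x = [1, 2, 3]
    print(matvec_diagonal(M, x))  # [5, 11, 7]
```

In an actual secure-inference setting, `diag` would be a plaintext encoding held by the server and `vec` an encrypted packed input from the client, so the whole product is computed under encryption; this sketch only shows the slot/rotation structure that such kernels exploit.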