Thu 24 Jun 2021 01:55 - 02:00 at PLDI-A - Talks 2A: Machine Learning
As the demand for machine learning–based inference increases in tandem with concerns about privacy, there is a growing recognition of the need for secure machine learning, in which secret models can be used to classify private data without leaking either the model or the data.
Fully Homomorphic Encryption (FHE) allows arbitrary computation to be performed over encrypted data, making it an attractive approach to such secure inference.
While such computation is often orders of magnitude slower than its plaintext counterpart, the ability of FHE cryptosystems to perform ciphertext packing (encrypting an entire vector of plaintexts so that operations are evaluated elementwise on the vector) helps ameliorate this overhead, effectively creating a SIMD architecture in which computation can be vectorized for more efficient evaluation.
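To make the SIMD semantics of packing concrete, the sketch below models a packed ciphertext as a plain Python list and shows how a single homomorphic addition or multiplication acts elementwise on every slot. No actual encryption is performed; the PackedCiphertext class and its operations are illustrative stand-ins, not the API of any particular FHE library.

```python
# Conceptual model of ciphertext packing: one "ciphertext" holds a whole
# vector of values (slots), and each homomorphic operation is applied to
# every slot at once, SIMD-style. No encryption happens here; the class
# is an illustrative stand-in, not a real FHE library API.

class PackedCiphertext:
    def __init__(self, slots):
        self.slots = list(slots)          # one value per slot

    def add(self, other):
        # A single homomorphic addition updates all slots elementwise.
        return PackedCiphertext(a + b for a, b in zip(self.slots, other.slots))

    def multiply(self, other):
        # Likewise, a single homomorphic multiplication is elementwise.
        return PackedCiphertext(a * b for a, b in zip(self.slots, other.slots))


# "Encrypt" two vectors into single packed ciphertexts.
x = PackedCiphertext([1, 2, 3, 4])
y = PackedCiphertext([10, 20, 30, 40])

# One operation processes all four slots, which is what makes vectorizing
# the computation pay off under FHE's high per-operation cost.
print(x.add(y).slots)       # [11, 22, 33, 44]
print(x.multiply(y).slots)  # [10, 40, 90, 160]
```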
Most recent research in this area has targeted regular, easily vectorizable neural network models.
Applying similar techniques to irregular ML models such as decision forests, however, remains unexplored due to their complex, hard-to-vectorize structure.
In this paper we present COPSE, the first system that exploits ciphertext packing to perform decision-forest inference. COPSE consists of a staging compiler that automatically restructures and compiles decision forest models down to a new set of vectorizable primitives for secure inference.
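To give a flavor of what such vectorizable primitives might look like, here is a plaintext NumPy sketch of one way a decision tree can be flattened into vector operations: a single packed comparison decides every internal node at once, and a small sign matrix then determines which leaf fired. The decomposition, names, and data layout are illustrative assumptions, not COPSE's actual compilation pipeline or its secure primitives.

```python
import numpy as np

# Hypothetical sketch of vectorized decision-tree inference. Everything
# below is plaintext NumPy; the point is only that the irregular tree
# structure can be recast as data-parallel vector operations of the kind
# that map onto packed ciphertexts.

# A small complete tree flattened into vectors: node i compares feature
# feature_idx[i] against thresholds[i].
thresholds  = np.array([0.5, 1.5, 0.7])   # one threshold per decision node
feature_idx = np.array([0,   1,   2  ])   # which feature each node tests

# Each leaf is reached by a conjunction of node outcomes, encoded as a
# matrix: +1 means "node must be true", -1 "must be false", 0 "don't care".
leaf_paths = np.array([
    [+1, -1,  0],   # node0 true,  node1 false
    [+1, +1,  0],   # node0 true,  node1 true
    [-1,  0, -1],   # node0 false, node2 false
    [-1,  0, +1],   # node0 false, node2 true
])
leaf_labels = np.array([0, 1, 0, 1])

def classify(features):
    # Step 1: one vectorized comparison decides every node at once.
    node_bits = (features[feature_idx] > thresholds).astype(int)  # shape (nodes,)
    # Step 2: a leaf fires when all of its required node outcomes match.
    signed  = 2 * node_bits - 1                 # {0,1} -> {-1,+1}
    matches = (leaf_paths * signed) >= 0        # don't-cares always match
    fired   = matches.all(axis=1)               # one bit per leaf
    # Step 3: select the label of the unique firing leaf.
    return int(leaf_labels[np.argmax(fired)])

print(classify(np.array([0.9, 2.0, 0.1])))   # -> 1
```

Scaling this idea to a forest would amount to concatenating the trees' nodes and leaves into the same vectors, so the comparisons and leaf selection for every tree happen in the same packed operations.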
We find that COPSE's compiled models outperform the state of the art across a range of decision forest models, often by more than an order of magnitude, while still scaling well.