Distributed Differentially Private Computation of Functions with Correlated Noise
Abstract
Many applications of machine learning, such as human health research, involve processing private or sensitive information. Privacy concerns may impose significant hurdles to collaboration in scenarios where there are multiple sites holding data and the goal is to estimate properties jointly across all datasets. Differentially private decentralized algorithms can provide strong privacy guarantees. However, the accuracy of the joint estimates may be poor when the datasets at each site are small. This paper proposes a new framework, Correlation Assisted Private Estimation (CAPE), for designing privacy-preserving decentralized algorithms with better accuracy guarantees in an honest-but-curious model. CAPE can be used in conjunction with the functional mechanism for statistical and machine learning optimization problems. A tighter characterization of the functional mechanism is provided that allows CAPE to achieve the same performance as a centralized algorithm in the decentralized setting using all datasets. Empirical results on regression and neural network problems for both synthetic and real datasets show that differentially private methods can be competitive with non-private algorithms in many scenarios of interest.
1 Introduction
Privacy-sensitive learning is important in many applications: examples include human health research, business informatics, and location-based services, among others. Releasing any function of private data, even summary statistics and other aggregates, can reveal information about the underlying training data. Differential privacy (DP) [1] is a cryptographically motivated and mathematically rigorous framework for measuring the risk associated with performing computations on private data. More specifically, it measures the privacy risk in terms of the probability of identifying the presence of individual data points in a dataset from the results of computations performed on that data. As such, it has emerged as a de facto standard for privacy-preserving technologies in research and practice [2, 3, 4].
Differential privacy is also useful when the private data is distributed over different locations (sites). For example, a consortium for medical research on a particular disease may consist of several healthcare centers/research labs, each with their own dataset of human subjects [5, 6]. Data holders may be reluctant or unable to directly share “raw” data to an aggregator due to ethical (privacy) and technical (bandwidth) reasons. From a statistical standpoint, the number of samples held locally is usually not large enough for meaningful feature learning. Consider training a deep neural network to detect Alzheimer’s disease based on neuroimaging data from several studies [6]: training locally at one site is infeasible as the number of subjects in each study is small. Decentralized algorithms can allow data owners to maintain local control of the data while passing messages to assist in a joint computation across many datasets. If these computations are differentially private, they can measure and control privacy risks.
Differentially private algorithms introduce noise to guarantee privacy: conventional distributed DP algorithms often have poor utility due to excess noise compared to centralized analyses. In this paper we propose the Correlation Assisted Private Estimation (CAPE) framework, a novel distributed and privacy-preserving protocol that provides utility close to the centralized case. We achieve this by inducing (anti-)correlated noise in the differentially private messages. The CAPE protocol can be applied to computing loss functions that are separable across sites. This class includes optimization algorithms, such as empirical risk minimization (ERM) problems, common in machine learning (ML) applications.
Related Works. There is a vast literature [7, 8, 9, 10, 11, 12, 13, 14, 15] on solving optimization problems in distributed settings, both with and without privacy concerns. In the machine learning context, the works most relevant to ours are those using ERM and stochastic gradient descent (SGD) [16, 17, 18, 19, 20, 21, 22, 23, 24]. Additionally, several works studied distributed differentially private learning for locally trained classifiers [25, 26, 27]. One of the most common approaches for ensuring differential privacy in optimization problems is to employ randomized gradient computations [18, 22]. Another common approach is output perturbation [17], which adds noise to the output of the optimization problem according to the sensitivity of the optimization variable. Note that both of these approaches involve computing the sensitivity (of the gradient or the output variable) and then adding noise scaled to the sensitivity [1]. The problem with output perturbation is that the relationship between the data and the parameter set is often hard to characterize due to the complex nature of the optimization; as a result, the sensitivity is very difficult to compute. Differentially private gradient descent methods can circumvent this by bounding the gradients, at the expense of slowing down convergence. Finally, one can employ objective perturbation [17, 14], where we perturb the objective function and find the minimizer of the perturbed objective function. However, the objective function has to satisfy some strict conditions, which are not met in many practical optimization problems [28]. In addition to optimization problems, Smith [29] proposed a general approach for computing summary statistics using the sample-and-aggregate framework and both the Laplace and Exponential mechanisms [30].
Jing [31] proposed a unique approach that uses perturbed histograms for releasing a class of estimators in a non-interactive way.
Differentially private algorithms provide different guarantees than Secure Multiparty Computation (SMC) based methods (see [32, 33, 34, 35, 36, 37] for thorough comparisons between SMC and differential privacy based methods). Gade and Vaidya [38] applied a combination of SMC and DP for distributed optimization, in which each site adds and subtracts arbitrary functions to confuse the adversary. Bonawitz et al. [39] proposed a communication-efficient method for federated learning over a large number of mobile devices. The most recent work in this line is that of Heikkilä et al. [40], who also studied the relationship of additive noise and sample size in a distributed setting. In their model, data holders communicate their data to computation nodes to compute a function. Our work is inspired by the seminal work of Dwork et al. [41] that proposed distributed noise generation for preserving privacy. We employ a similar principle as Anandan and Clifton [42] to reduce the noise added for differential privacy.
Our application to distributed DP function computation in this paper builds on the functional mechanism [28], which uses functional approximation [43] and the Laplace mechanism [1] to create DP approximations for any continuous and differentiable function. Zhang et al.’s approach [28] does not scale well to decentralized problems. We provide a better analysis of the sensitivity of their approximation and adapt the approach to the decentralized setting.
Our Contribution. The goal of our work is to reduce the amount of noise in conventional distributed differentially private schemes for applications of machine learning to settings similar to those found in research consortia. We summarize our contributions here:

- We propose a novel distributed computation protocol, CAPE, that improves upon conventional distributed DP schemes and achieves the same level of utility as the pooled-data scenario in certain regimes. CAPE can be employed in a wide range of computations that frequently appear in machine learning problems.

- We propose an improved functional mechanism (FM) using a tighter sensitivity analysis. We show analytically that it guarantees less noisy function computation for linear and logistic regression problems at the expense of an approximate DP guarantee. Empirical validation on real and synthetic data supports our approach.

- We extend the FM to decentralized settings and show that CAPE can achieve the same utility as the pooled-data scenario in some regimes. To the best of our knowledge, this work proposes the first distributed functional mechanism.

- We demonstrate the effectiveness of our algorithms with varying privacy and dataset parameters. Our privacy analysis and empirical results on real and synthetic datasets show that the proposed algorithms can achieve much better utility than existing state-of-the-art algorithms.
Note that we presented a preliminary version of the CAPE protocol in [44]. The CAPE protocol in this paper is more robust against site dropout and does not require a trusted third party.
2 Data and Privacy Model
Notation. We denote vectors with bold lower-case letters (e.g., x), matrices with bold upper-case letters (e.g., X), scalars with regular letters (e.g., M), and indices with lower-case letters (e.g., m). Indices typically run from 1 to their upper-case versions (e.g., m ∈ {1, 2, …, M}). We denote the n-th column of the matrix X as x_n. We use ‖·‖_2, ‖·‖_F, and tr(·) for the Euclidean (or L2) norm of a vector or spectral norm of a matrix, the Frobenius norm, and the trace operation, respectively. Finally, we denote the inner product between two arrays as ⟨·, ·⟩. For example, if X and Y are two matrices then ⟨X, Y⟩ = tr(X⊤Y).
Distributed Data Setting. We consider a distributed data setting with S sites and a central aggregator node (see Figure 1). Each site s ∈ {1, 2, …, S} holds N_s samples and the total number of samples across all sites is N = ∑_{s=1}^{S} N_s. We assume that all parties are “honest but curious”. That is, the sites and the aggregator will follow the protocol, but a subset may collude to learn another site’s data/function output. Additionally, we assume that the data samples in the local sites are disjoint. We use the terms “distributed” and “decentralized” interchangeably in this paper.
Definition 1 ((ε, δ)-Differential Privacy [1]).
An algorithm A(D) taking values in a set T provides (ε, δ)-differential privacy if Pr[A(D) ∈ S] ≤ exp(ε) Pr[A(D′) ∈ S] + δ, for all measurable S ⊆ T and all data sets D and D′ differing in a single entry (neighboring datasets).
This definition essentially states that the probability of the output of an algorithm is not changed significantly if the corresponding database input is changed by just one entry. Here, ε and δ are privacy parameters, where lower ε and δ ensure more privacy. The parameter δ can be interpreted as the probability that the algorithm fails to provide privacy risk ε. Several mechanisms can be employed to ensure that an algorithm satisfies differential privacy. Additive noise mechanisms such as the Gaussian or Laplace mechanisms [1, 45] and random sampling using the exponential mechanism [30] are among the most common ones. For additive noise mechanisms, the standard deviation of the noise is scaled to the sensitivity of the computation.
Definition 2 (L2 Sensitivity [1]).
The L2 sensitivity of a vector-valued function f(D) is Δ = max_{D, D′} ‖f(D) − f(D′)‖_2, where D and D′ are neighboring datasets.
We will focus on the L2 sensitivity and the Gaussian mechanism in this paper.
Definition 3 (Gaussian Mechanism [45]).
Let f(D) be an arbitrary D-dimensional function with L2 sensitivity Δ. The Gaussian mechanism with parameter τ adds noise drawn from N(0, τ²) to each of the D components of the output and satisfies (ε, δ)-differential privacy if

τ ≥ (Δ/ε) √(2 log(1.25/δ)).  (1)
Note that, for any given (ε, δ) pair, we can calculate a noise variance τ² such that the addition of a noise term drawn from N(0, τ²) guarantees (ε, δ)-differential privacy. Since there are infinitely many (ε, δ) pairs that yield the same τ, we parameterize our methods using τ [44] in this paper.
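As a concrete illustration of this calibration, the following sketch computes the Gaussian-mechanism noise standard deviation from (ε, δ) and the L2 sensitivity; the function name and the restriction to ε < 1 (where the classical bound is valid) are our own assumptions, not part of the paper.

```python
import math

def gaussian_mech_sigma(sensitivity, epsilon, delta):
    """Noise standard deviation for the classical Gaussian mechanism:
    sigma >= sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon,
    valid for 0 < epsilon < 1."""
    if not 0.0 < epsilon < 1.0:
        raise ValueError("classical bound assumes 0 < epsilon < 1")
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

# Example: unit sensitivity, epsilon = 0.5, delta = 1e-5
sigma = gaussian_mech_sigma(sensitivity=1.0, epsilon=0.5, delta=1e-5)
```

Note that the noise scales linearly in the sensitivity and inversely in ε, which is why a smaller local sample size (larger sensitivity of the mean) forces larger noise in the distributed schemes discussed below.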
3 Correlation Assisted Private Estimation (CAPE)
3.1 Conventional Approach to Distributed DP Computations
We now describe the problem with conventional distributed DP and our approach to improve performance [44]. Suppose we want to compute the average of N data samples. Each sample is a scalar x_n with x_n ∈ [0, 1]. We denote the vector of data samples as x = [x_1, …, x_N]⊤. We are interested in computing the DP estimate of the mean function: f(x) = (1/N) ∑_{n=1}^{N} x_n. To compute the sensitivity [1] of the scalar-valued function f, we consider a neighboring data vector x′. We observe |f(x) − f(x′)| ≤ 1/N. Therefore, to compute the DP estimate of the average a = f(x), we can employ the Gaussian mechanism [1, 45] to release â = f(x) + e, where e ~ N(0, τ²) and τ = (1/(Nε)) √(2 log(1.25/δ)), which follows from the assumption x_n ∈ [0, 1].
Each site s holds N_s samples (see Figure 1(a)). We assume N_s = N/S for simplicity. To compute the global average non-privately, the sites can send f(x_s) = (1/N_s) ∑_{n=1}^{N_s} x_{s,n} to the aggregator, and the average computed by the aggregator, (1/S) ∑_{s=1}^{S} f(x_s), is exactly equal to the average we would get if all the data samples were available at the aggregator node. However, with the privacy concern, and considering that the aggregator is honest-but-curious, the sites can employ the conventional distributed DP computation technique. That is, the sites will release (send to the aggregator node) an (ε, δ)-DP estimate of the function of their local data f(x_s). More specifically, each site s will generate a noise e_s ~ N(0, τ_s²) and release/send â_s = f(x_s) + e_s to the aggregator, where τ_s = (1/(N_s ε)) √(2 log(1.25/δ)).
The aggregator can then compute the DP approximate average as â = (1/S) ∑_{s=1}^{S} â_s. We observe â = (1/S) ∑_{s=1}^{S} f(x_s) + (1/S) ∑_{s=1}^{S} e_s. The variance of this estimator is S τ_s²/S² = τ_s²/S. However, if we had all the data samples at the aggregator (pooled-data scenario), we could compute the DP estimate of the average as â_pool = f(x) + e_pool, where e_pool ~ N(0, τ_pool²) and τ_pool = (1/(Nε)) √(2 log(1.25/δ)). With N_s = N/S, we have τ_s = S τ_pool and hence τ_s²/S = S τ_pool². That is, the conventional distributed DP averaging scheme will always result in a noise variance S times worse than the DP pooled-data case.
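The S-fold penalty of the conventional scheme can be checked numerically. The sketch below (our own illustration, with arbitrary parameter choices) compares the aggregator's noise variance in the distributed scheme against the pooled-data variance for the mean function, whose sensitivities are 1/N (pooled) and 1/N_s (local).

```python
import numpy as np

S, Ns = 10, 100                 # sites and samples per site (illustrative)
N = S * Ns
eps, delta = 0.5, 1e-5
c = np.sqrt(2 * np.log(1.25 / delta)) / eps   # Gaussian-mechanism constant

tau_pool = c / N                # pooled data: sensitivity of the mean is 1/N
tau_site = c / Ns               # local release: sensitivity is 1/Ns

# Aggregator averages S independently perturbed local means, so its
# noise variance is S * tau_site^2 / S^2 = tau_site^2 / S.
var_dist = tau_site**2 / S
ratio = var_dist / tau_pool**2  # equals S when Ns = N/S
```

The constant c cancels in the ratio, so the S-fold gap holds for any (ε, δ) pair.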
3.2 Proposed Scheme: CAPE
Trust/Collusion Model. In our proposed scheme, we assume that all of the sites and the central node follow the protocol honestly. However, up to S_C sites can collude with an adversary to learn about some site’s data/function output. The central node is also honest-but-curious (and therefore, can collude with an adversary). An adversary can observe the outputs from each site, as well as the output from the aggregator. Additionally, the adversary can know everything about the colluding sites (including their private data). We denote the number of non-colluding sites with S_H such that S_H = S − S_C ≥ 1. Without loss of generality, we designate the non-colluding sites with indices s ∈ {1, …, S_H} (see Figure 1(b)).
Correlated Noise. We design the noise generation procedure such that: (i) we can ensure differential privacy of the algorithm output from each site and (ii) we achieve the noise level of the pooled-data scenario in the final output from the aggregator. We achieve this by employing a correlated noise addition scheme. Considering the same distributed averaging problem as in Section 3.1, we intend to release (and send to the aggregator) â_s = f(x_s) + e_s + g_s from each site s, where e_s and g_s are two noise terms. The variances of e_s and g_s are chosen to ensure that the sum e_s + g_s is sufficient to guarantee (ε, δ)-differential privacy to f(x_s). Here, each site generates the noise e_s locally and the noise g_s jointly with all other sites such that ∑_{s=1}^{S} g_s = 0. We employ the recently proposed secure aggregation protocol (SecureAgg) by Bonawitz et al. [39] to generate the g_s, which ensures ∑_{s=1}^{S} g_s = 0. The protocol utilizes Shamir’s t-out-of-n secret sharing [46] and is communication-efficient.
Detailed Description of the CAPE Protocol. In our proposed scheme, each site s generates a noise term ê_s ~ N(0, τ_s²) independently. The aggregator computes ∑_{s=1}^{S} ê_s according to the SecureAgg protocol and broadcasts it to all the sites. Each site then sets g_s = ê_s − (1/S) ∑_{s′=1}^{S} ê_{s′} to achieve ∑_{s=1}^{S} g_s = 0. We show the complete noise generation procedure in Algorithm 1. Note that the original SecureAgg protocol is intended for computing the sum of D-dimensional vectors in a finite field Z_p. However, we need to perform the summation of Gaussian random variables over the reals. To accomplish this, each site can employ a mapping to Z_p that performs a stochastic quantization [47] for large enough p. The aggregator can compute the sum in the finite field and then invoke a reverse mapping before broadcasting to the sites. Algorithm 1 can be readily extended to generate array-valued zero-sum noise terms. We observe that the variance of g_s is given by
τ_g² = (1 − 1/S) τ_s².  (2)
Additionally, we choose the variance of the local noise term as

τ_e² = τ_s²/S.  (3)
Each site then generates the noise e_s ~ N(0, τ_e²) independently and sends â_s = f(x_s) + e_s + g_s to the aggregator. Note that neither of the terms e_s and g_s has large enough variance to provide an (ε, δ)-DP guarantee to f(x_s) on its own. However, we chose the variances of e_s and g_s to ensure that the sum e_s + g_s is sufficient to ensure an (ε, δ)-DP guarantee to f(x_s) at site s. The chosen variance of e_s also ensures that the output from the aggregator has the same noise variance as the differentially private pooled-data scenario. To see this, observe that we compute the following at the aggregator (in Step 7 of Algorithm 2):
â = (1/S) ∑_{s=1}^{S} â_s = (1/S) ∑_{s=1}^{S} f(x_s) + (1/S) ∑_{s=1}^{S} e_s, where we used ∑_{s=1}^{S} g_s = 0. The variance of the estimator â is S τ_e²/S² = τ_s²/S² = τ_pool² (in the symmetric setting N_s = N/S), which is exactly the same as if all the data were present at the aggregator. This claim is formalized in Lemma 1. We show the complete algorithm in Algorithm 2. The privacy of Algorithm 2 is given by Theorem 1. The communication cost of the scheme is discussed in Appendix A in the Supplement.
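The cancellation of the correlated noise at the aggregator can be verified with a minimal Monte Carlo sketch (our own illustration; variable names are ours, and the secure aggregation step is replaced by a direct in-memory sum):

```python
import numpy as np

rng = np.random.default_rng(1)
S, Ns = 5, 200                       # illustrative sizes
N = S * Ns
tau_s = 1.0 / Ns                     # stand-in local noise std (constants absorbed)
tau_pool = 1.0 / N                   # stand-in pooled noise std
tau_e = tau_s / np.sqrt(S)           # local noise: tau_e^2 = tau_s^2 / S
T = 200_000                          # Monte Carlo trials

# Zero-sum correlated noise: g_s = e_hat_s - mean over sites of e_hat,
# with e_hat_s ~ N(0, tau_s^2); each row of g sums to zero.
e_hat = rng.normal(0.0, tau_s, size=(T, S))
g = e_hat - e_hat.mean(axis=1, keepdims=True)
e = rng.normal(0.0, tau_e, size=(T, S))

site_noise = e + g                   # per-site noise, variance tau_e^2 + tau_g^2 = tau_s^2
agg_noise = site_noise.mean(axis=1)  # g cancels; variance tau_e^2 / S = tau_pool^2
```

Each site's released noise has the full variance τ_s² required for its local DP guarantee, yet the aggregator's average retains only the small uncorrelated part.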
Theorem 1 (Privacy of the CAPE Algorithm (Algorithm 2)).
Consider Algorithm 2 in the distributed data setting of Section 2 with N_s = N/S and τ_s = τ for all sites s. Suppose that at most S_C = S − S_H sites can collude after execution. Then Algorithm 2 guarantees (ε, δ)-differential privacy for each site, where ε and δ satisfy the relation δ = (2σ_z/(ε − μ_z)) φ((ε − μ_z)/σ_z), φ(·) is the density for the standard Normal random variable, and μ_z and σ_z are given by (4) and (5), respectively:
(4)  
(5) 
Remark 1.
Proof.
As mentioned before, we identify the non-colluding sites with s ∈ {1, …, S_H} and the colluding sites with s ∈ {S_H + 1, …, S}. The adversary can observe the outputs from each site (including the aggregator). Additionally, the colluding sites can share their private data and the noise terms, e_s and g_s for s ∈ {S_H + 1, …, S}, with the adversary. For simplicity, we assume that all sites have an equal number of samples (i.e., N_s = N/S) and τ_s = τ.
To infer the private data of the sites s ∈ {1, …, S_H}, the adversary can observe the outputs â_s and the partial sum ∑_{s=1}^{S_H} g_s. Note that the adversary can learn this partial sum because they can get ∑_{s=1}^{S} g_s = 0 from the aggregator and the noise terms g_s from the colluding sites. Therefore, the vector x̂ = [â_1, …, â_{S_H}, ∑_{s=1}^{S_H} g_s]⊤ is what the adversary can observe to make inferences about the non-colluding sites. To prove the differential privacy guarantee, we must show that |log(p(x̂ | x)/p(x̂ | x′))| ≤ ε holds with probability (over the randomness of the mechanism) at least 1 − δ. Here, p(x̂ | x) and p(x̂ | x′) are the probability density functions of x̂ under x and x′, respectively. The vectors x and x′ differ in only one coordinate (neighboring). Without loss of generality, we assume that x and x′ differ in the first coordinate. We note that the maximum difference is 1/N_s, as the sensitivity of the function f(x_s) is 1/N_s. Recall that we release â_s = f(x_s) + e_s + g_s from each site. Conditioned on the data, x̂ is jointly Gaussian: the e_s contribute independent noise of variance τ_e², while the g_s are correlated across sites with variance τ_g² (their correlation structure involves the all-ones vector 1). Therefore, p(x̂ | x) is the density of a Gaussian N(μ, Σ), where μ collects the means of the entries of x̂ and Σ is their covariance matrix.
With some simple algebra, we can find the expression for the log-density ratio z = log(p(x̂ | x)/p(x̂ | x′)). If we denote the covariance matrix of x̂ by Σ then we observe
Using the matrix inversion lemma for block matrices [48, Section 0.7.3] and some algebra, we have
Note that z is a Gaussian random variable with mean μ_z and standard deviation σ_z given by (4) and (5), respectively. Now, we observe
Pr[|z| ≥ ε] ≤ 2 Q((ε − μ_z)/σ_z) ≤ (2σ_z/(ε − μ_z)) φ((ε − μ_z)/σ_z), where Q(·) is the Q-function [49] and φ(·) is the density for the standard Normal random variable. The last inequality follows from the bound Q(x) ≤ φ(x)/x [49]. Setting δ equal to this bound, the proposed CAPE scheme ensures (ε, δ)-DP for each site, assuming that the number of colluding sites is at most S − S_H. As the local datasets are disjoint and differential privacy is invariant under post-processing, the release of â also satisfies (ε, δ)-DP. ∎
Remark 2.
We use the SecureAgg protocol [39] to generate the zero-sum noise terms by mapping floating-point numbers to a finite field. Such mappings are shown to be vulnerable to certain attacks [50]. However, the floating-point implementation issues are out of scope for this paper. We refer the reader to the work of Balcer and Vadhan [51] for possible remedies. We believe a very interesting direction of future work would be to address this issue in our distributed data setting.
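As an illustration of the float-to-field mapping discussed above, here is a toy stochastic-quantization sketch (our own construction, not the exact scheme of [39] or [47]): values are scaled, stochastically rounded into Z_p, summed modulo p, and mapped back to the reals.

```python
import math
import random

random.seed(0)

def stoch_quantize(x, scale, p):
    """Unbiased stochastic rounding of x * scale to an element of Z_p
    (negative values wrap around, as in a signed-to-field embedding)."""
    y = x * scale
    lo = math.floor(y)
    q = lo + (1 if random.random() < (y - lo) else 0)
    return q % p

def from_field(q, scale, p):
    """Inverse map: interpret field elements above p // 2 as negative."""
    if q > p // 2:
        q -= p
    return q / scale

p, scale = 2**31 - 1, 2**16      # illustrative field size and precision
vals = [0.5, -1.25, 3.75]        # per-site values to be summed securely
total = 0
for v in vals:
    total = (total + stoch_quantize(v, scale, p)) % p
approx_sum = from_field(total, scale, p)   # close to sum(vals) = 3.0
```

In the actual protocol the modular additions would happen under secure aggregation, so the aggregator only ever sees the field-element sum, never the individual summands.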
3.3 Utility Analysis
The goal of CAPE is to ensure an (ε, δ)-DP guarantee for each site and achieve the noise variance of the pooled-data scenario at the aggregator, i.e., τ_cape² = τ_pool² (see Lemma 1). The CAPE protocol guarantees (ε, δ)-DP with δ given by Theorem 1. We claim that this δ guarantee is much better than the guarantee in the conventional distributed DP scheme. We empirically validate this claim by comparing δ with δ_conv in Appendix B in the Supplement. Here, δ_conv is the smallest δ we can afford in the conventional distributed DP scheme to achieve the same noise variance as the pooled-data scenario for a given ε. Additionally, we empirically compare δ and δ_conv for weaker collusion assumptions in Appendix C in the Supplement. In both cases, we observe that δ is always smaller than δ_conv. That is, for achieving the same noise level at the aggregator output (and therefore the same utility) as the pooled-data scenario, we ensure a much better privacy guarantee by employing the CAPE scheme over the conventional approach.
Lemma 1.
Consider the symmetric setting: N_s = N/S and τ_s = τ for all sites s. Let the variances of the noise terms e_s and g_s (Step 5 of Algorithm 2) be τ_e² = τ²/S and τ_g² = (1 − 1/S) τ², respectively. If we denote the variance of the additive noise (for preserving privacy) in the pooled-data scenario by τ_pool² and the variance of the estimator â (Step 7 of Algorithm 2) by τ_cape², then Algorithm 2 achieves the same expected error as the pooled-data scenario (i.e., τ_cape² = τ_pool²).
Proof.
The proof is given in Appendix D in the Supplement. ∎
Proposition 1.
(Performance gain) If the local noise variances are τ_s² = τ² for s ∈ {1, …, S}, then the CAPE algorithm achieves a performance gain of G = τ_conv²/τ_cape² = S, where τ_conv² and τ_cape² are the noise variances of the final estimate at the aggregator in the conventional distributed DP scheme and the CAPE scheme, respectively.
Proof.
The proof is given in Appendix E in the Supplement. ∎
Note that, even in the case of site dropout, we achieve ∑_s g_s = 0, as long as the number of active sites is above some threshold (see Bonawitz et al. [39] for details). Therefore, the performance gain of CAPE remains the same irrespective of the number of dropped-out sites.
Remark 3 (Unequal Sample Sizes at Sites).
Note that the CAPE algorithm achieves the same noise variance as the pooled-data scenario (i.e., τ_cape² = τ_pool²) in the symmetric setting: N_s = N/S and τ_s = τ for all sites s. In general, the ratio H(n) = τ_cape²/τ_pool², where n = [N_1, …, N_S]⊤, is a function of the sample sizes in the sites. We observe that H(n) ≥ 1, where the minimum is achieved for the symmetric setting. As H(n) is a Schur-convex function, it can be shown using majorization theory [52] that CAPE achieves the smallest noise variance at the aggregator in the symmetric setting. For distributed systems with unequal sample sizes at the sites and/or different ε and δ at each site, we compute a weighted sum at the aggregator. In order to achieve the same noise level as the pooled-data scenario, we need to choose the weights and the local noise variances appropriately. A scheme for doing so is shown in [44]. In this paper, we keep the analysis for the symmetric case for simplicity.
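The Schur-convexity claim can be spot-checked numerically. Assuming a sensitivity of the form Δ(n) = c/n (as for the mean function) and the aggregator variance derived in Section 3.2, the ratio H(n) below equals 1 for the symmetric split and exceeds 1 for any unequal split with the same total; the function H is our own illustrative encoding of the claim, with the constant c cancelling out.

```python
def H(ns):
    """Ratio tau_cape^2 / tau_pool^2 for sensitivity Delta(n) = c/n
    (the constant c cancels).  ns is the list of per-site sample sizes."""
    S, N = len(ns), sum(ns)
    return (N**2 / S**3) * sum(1.0 / n**2 for n in ns)

sym = H([100, 100, 100])   # symmetric split: ratio is exactly 1
skew = H([50, 100, 150])   # unequal split with the same total does worse
```

Any rearrangement of samples toward equality (a majorization step) can only decrease H, which is exactly the Schur-convexity argument invoked above.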
3.4 Scope of CAPE
CAPE is motivated by scientific research collaborations that are common in medicine and biology. Privacy regulations prevent sites from sharing the local raw data. Additionally, the data is often high dimensional (e.g., in neuroimaging) and sites have small sample sizes. Joint learning across datasets can yield discoveries that are impossible to obtain from a single site. CAPE can benefit functions with sensitivities satisfying some conditions (see Proposition 2). In addition to the averaging function, many functions of interest have sensitivities that satisfy such conditions. Examples include the empirical average loss functions used in machine learning and deep neural networks. Additionally, we can use the Stone–Weierstrass theorem [43] to approximate a loss function and apply CAPE, as we show in Section 4.5. Furthermore, we can use the nomographic representation of functions to approximate a desired function in a decentralized manner [53, 54, 55, 56] (for applications in communications [57, 58, 59, 60]), while keeping the data differentially private. More common applications include gradient-based optimization algorithms, k-means clustering, and estimating probability distributions.
Proposition 2.
Consider a distributed setting with S sites in which site s holds a dataset D_s of N_s samples and N = ∑_{s=1}^{S} N_s. Suppose the sites are computing a function f(D) with L2 sensitivity Δ(N) employing the CAPE scheme. Denote n = [N_1, …, N_S]⊤ and observe the ratio H(n) = τ_cape²/τ_pool². Then the CAPE protocol achieves H(n) = 1 if the sensitivity function Δ(·) satisfies the following:

- for convex Δ(·) we have: N_s = N/S for all s and Δ(N/S) = S Δ(N);

- for general Δ(·) we have: ∑_{s=1}^{S} Δ²(N_s) = S³ Δ²(N).
Proof.
We review some definitions and lemmas [52, Proposition C.2] necessary for the proof in Appendix F in the Supplement. As the sites are computing the function f with L2 sensitivity Δ(N_s), the local noise standard deviation for preserving privacy is proportional to Δ(N_s) by the Gaussian mechanism [1]. It can be written as τ_s = c Δ(N_s), where c is a constant for a given (ε, δ) pair. Similarly, the noise standard deviation in the pooled-data scenario can be written as τ_pool = c Δ(N). Now, the final noise variance at the aggregator for the CAPE protocol is τ_cape² = (1/S³) ∑_{s=1}^{S} c² Δ²(N_s). As we want to achieve the same noise variance as the pooled-data scenario, we need τ_cape² = τ_pool². Now, we observe the ratio H(n) = τ_cape²/τ_pool² = (1/S³) ∑_{s=1}^{S} Δ²(N_s)/Δ²(N). Therefore, H(n) = 1 when ∑_{s=1}^{S} Δ²(N_s) = S³ Δ²(N), which proves the case for a general sensitivity function Δ(·). Now, if Δ(·) is convex then by Lemma 2 (Supplement) the function ∑_{s=1}^{S} Δ²(N_s) is Schur-convex. Thus the minimum of H(n) is obtained when N_s = N/S for all s. As Δ(·) is convex, we achieve H(n) = 1 if Δ(N/S) = S Δ(N). ∎
4 Application of CAPE: Distributed Computation of Functions
As mentioned before, the CAPE framework can benefit any distributed differentially private function computation, as long as the sensitivity of the function satisfies the conditions outlined in Proposition 2. In this section, we propose an algorithm that is specifically suited for privacy-preserving computation of cost functions in distributed settings. Let us consider a cost function f(w) that depends on private data distributed across S sites. A central aggregator (see Figure 1) wishes to find the minimizer w* of f(w). This is a common scenario in distributed machine learning. Now, the aggregator is not trusted and the sites may collude with an adversary to learn information about the other sites. Since computing w* by minimizing the expected cost/loss involves the sensitive information of the local datasets, we need to ensure that w* is computed in a privacy-preserving way. In particular, we want to develop an algorithm to compute the differentially private approximation to w*, denoted w_priv, in a distributed setting that produces a result as close as possible to the non-private pooled w*.
In line with our discussions in the previous sections, let us assume that each site s holds a dataset D_s of N_s samples. The total sample size across all sites is N = ∑_{s=1}^{S} N_s. The cost incurred by a particular parameter vector w due to one data sample d_n is f(w, d_n). We need to minimize the average cost to find the optimal w*. The empirical average cost for a particular w over all the samples is expressed as
f(w, D) = (1/N) ∑_{n=1}^{N} f(w, d_n).  (6)
Therefore, we have w* = arg min_w f(w, D).
For centralized optimization, we can find a differentially private approximation to w* using output perturbation [17] or objective perturbation [17, 14]. We propose using the functional mechanism [28], which is a form of objective perturbation and is more amenable to distributed implementation. It uses a polynomial representation of the cost function and can be used for any differentiable and continuous cost function. We perturb each term in the polynomial representation of f(w, D) to get the modified cost function f_priv(w, D). The minimizer w_priv = arg min_w f_priv(w, D) guarantees differential privacy.
4.1 Functional Mechanism
We first review the functional mechanism [28] in the pooled-data case. Suppose φ(w) is a monomial function of the entries of w = [w_1, …, w_D]⊤: φ(w) = w_1^{c_1} w_2^{c_2} ⋯ w_D^{c_D}, for some set of exponents c_1, …, c_D. Let us define the set of all φ with degree j as Φ_j = {w_1^{c_1} ⋯ w_D^{c_D} : ∑_{d=1}^{D} c_d = j}.
For example, Φ_0 = {1}, Φ_1 = {w_1, …, w_D}, Φ_2 = {w_d w_{d′} : d, d′ ∈ {1, …, D}}, etc. Now, from the Stone–Weierstrass Theorem [43], any differentiable and continuous cost function f(w, d_n) can be written as a (potentially infinite) sum of monomials of w:
f(w, d_n) = ∑_{j=0}^{J} ∑_{φ ∈ Φ_j} λ_{φ n} φ(w), for some J ∈ {1, 2, …, ∞}. Here, λ_{φ n} denotes the coefficient of φ(w) in the polynomial and is a function of the n-th data sample d_n. Plugging the expression of f(w, d_n) in (6), we can express the empirical average cost over all samples as
f(w, D) = ∑_{j=0}^{J} ∑_{φ ∈ Φ_j} λ_φ φ(w), where λ_φ = (1/N) ∑_{n=1}^{N} λ_{φ n}.  (7)
The function f(w, D) depends on the data samples only through the coefficients {λ_φ}. As the goal is to approximate f(w, D) in a differentially private way, one can compute these λ_φ to satisfy differential privacy [28]. Let us consider two “neighboring” datasets: D and D′, differing in a single data point (i.e., the last one). Zhang et al. [28] computed the L1 sensitivity of the collection of coefficients as Δ = (2/N) max_{d} ∑_{j=0}^{J} ∑_{φ ∈ Φ_j} |λ_{φ d}|,
and proposed an ε-differentially private method that adds Laplace noise with scale Δ/ε to each λ_φ, for all φ ∈ Φ_j and for all j. In the following, we propose an improved functional mechanism that employs a tighter characterization of the sensitivities and can be extended to incorporate the CAPE protocol in distributed settings.
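To make the coefficient-perturbation idea concrete, the following sketch applies a functional-mechanism-style perturbation to a quadratic (linear regression) loss: the degree-0/1/2 coefficient arrays are computed, perturbed with Laplace noise, and the perturbed quadratic is minimized in closed form. The sensitivity constant DELTA is a placeholder rather than the value derived in the text, and the ridge-style regularization is our own addition to keep the perturbed quadratic well-posed.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 3
X = rng.uniform(-1.0, 1.0, size=(n, d)) / np.sqrt(d)   # enforce ||x_i||_2 <= 1
w_true = np.array([0.5, -0.3, 0.2])
y = np.clip(X @ w_true + 0.05 * rng.standard_normal(n), -1.0, 1.0)

# Coefficient arrays of the average quadratic loss (degrees 0, 1, 2):
# (1/n) sum_i (y_i - w.x_i)^2 = lam0 + lam1.w + w' Lam2 w
lam0 = float(np.mean(y**2))
lam1 = -2.0 * (X.T @ y) / n
Lam2 = (X.T @ X) / n

eps = 1.0
DELTA = 1.0 / n            # placeholder sensitivity (see the analysis in the text)
b = DELTA / eps            # Laplace scale per coefficient
lam1_p = lam1 + rng.laplace(0.0, b, size=d)
Lam2_p = Lam2 + rng.laplace(0.0, b, size=(d, d))
Lam2_p = 0.5 * (Lam2_p + Lam2_p.T)             # symmetrize the perturbed matrix
Lam2_p += 1e-3 * np.eye(d)                     # keep the quadratic well-posed

# Minimizer of the perturbed objective: gradient lam1_p + 2 Lam2_p w = 0
w_priv = np.linalg.solve(2.0 * Lam2_p, -lam1_p)
```

Only the coefficients touch the data, so releasing (or later, distributing) their perturbed versions is what carries the privacy guarantee; the minimization itself is post-processing.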
4.2 Improved Functional Mechanism
Our method is an improved version of the functional mechanism [28]. We use the Gaussian mechanism [45] for computing the (ε, δ)-DP approximate of f(w, D). This gives a weaker privacy guarantee than the original functional mechanism [28], which used Laplace noise for ε-DP. Our distributed function computation method (described in Section 4.5) benefits from the fact that linear combinations of Gaussians are Gaussian. In other words, the proposed CAPE scheme and the distributed functional mechanism rely on the Gaussianity of the noise. Now, instead of computing the sensitivity of f(w, D), we define an array Λ_j that contains λ_φ as its entries for all φ ∈ Φ_j. We used the term “array” instead of vector or matrix or tensor because the dimension of Λ_j depends on the cardinality of the set Φ_j. We can represent Λ_0 as a scalar for j = 0 (because |Φ_0| = 1), Λ_1 as a D-dimensional vector for j = 1 (because |Φ_1| = D), and Λ_2 as a D × D matrix for j = 2 (because