
Dynamics-Based Algorithm-Level Privacy Preservation for Push-Sum Average Consensus

Huqiang Cheng^a (huqiangcheng@126.com), Mengying Xie^{a,*} (xiemyscut@163.com), Xiaowei Yang^b (xwyang@scut.edu.cn), Qingguo Lü^a (qglv@cqu.edu.cn), Huaqing Li^c (huaqingli@swu.edu.cn)

^a Key Laboratory of Dependable Services Computing in Cyber Physical Society-Ministry of Education, College of Computer Science, Chongqing University, Chongqing, 401331, China
^b School of Software Engineering, South China University of Technology, Guangzhou, 510006, China
^c Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing, 400715, China
Abstract

Average consensus is fundamental to the functioning of multi-agent systems. In conventional average consensus algorithms, all agents reach an agreement by individual calculations and by sharing information with their respective neighbors. Nevertheless, the information interactions that occur over the communication network may reveal sensitive information. In this paper, we develop a new privacy-preserving average consensus method for unbalanced directed networks. Specifically, we ensure privacy preservation by carefully embedding randomness in the mixing weights to confuse communications and by introducing an extra auxiliary parameter to mask the state-update rule in the first several iterations. In parallel, we exploit the intrinsic robustness of consensus dynamics to guarantee that the average consensus is precisely achieved. Theoretical results demonstrate that the designed algorithms converge linearly to the exact average consensus value and guarantee privacy preservation of agents against both honest-but-curious and eavesdropping attacks. The designed algorithms are fundamentally different from differential-privacy-based algorithms, which achieve privacy preservation by sacrificing consensus performance. Finally, numerical experiments validate the correctness of the theoretical findings.

keywords:
Average consensus, privacy preservation, tailored weights, unbalanced directed networks

1 Introduction

Recently, multi-agent systems have been expanding rapidly in industrial development. A key characteristic of these systems is that the various agents collaborate to reach a consensus state. The average consensus state is a crucial evaluation metric for multi-agent systems, which has led to the development of numerous average consensus algorithms. For a network with N agents, the goal of such algorithms is to make the states of all agents converge asymptotically to the average of their initial values. Due to their inherently decentralized character, average consensus approaches are widely used in many areas, such as collaborative filtering [1], decision-making systems [2], social networks [3], UAV formation [4], online learning [5], etc.

In order to make the states of all agents reach the average of the initial values, most average consensus approaches [6, 7, 8, 9, 10, 11] demand that the agents share their true states with each other. This may reveal private information, which is highly inadvisable from the perspective of privacy protection. Privacy concerns in multi-agent systems are of great significance in daily life. A simple example is a group of individuals engaging in a discussion on a specific topic and reaching a common view while keeping each individual view confidential [12]. Another common example arises in power systems, where several generators need to agree on costs while keeping their respective generation information confidential [13]. As the frequency of privacy breaches continues to rise, it has become increasingly urgent to safeguard the privacy of every individual in multi-agent systems.

1.1 Related Works

Several approaches are available to tackle the growing privacy concerns in the average consensus literature. One of the most widespread non-encryption privacy-preserving techniques is differential privacy [14], which essentially injects uncorrelated noise into the transmitted state information. This strategy has already been applied in [15, 16, 17, 18, 19]. However, such approaches cannot achieve exact average consensus owing to their inherent privacy-accuracy compromise. This makes differentially private approaches unpalatable for sensor networks and cyber-physical systems with high requirements on consensus accuracy. To ensure computational accuracy, several improvements were developed in [20, 21, 22], which focus on the strategic addition of correlated noise to the transmitted information, as opposed to the uncorrelated noise typically utilized in differential privacy. Another strand of interest is observability-based privacy-preserving approaches [23, 24, 25], where privacy is guaranteed by minimizing the observation information of a certain agent. However, both the correlated-noise-based and the observability-based approaches are vulnerable to external eavesdroppers who have the ability to wiretap all communication channels.

Note that the above-mentioned approaches are only valid for undirected and balanced networks. In real-world scenarios, communication among agents is usually directed and unbalanced. For example, when agents broadcast at different power levels, the communication activity corresponds to a directed and unbalanced network. To preserve the privacy of nodes interacting on an unbalanced directed network, the authors in [26, 27, 28, 29] presented a series of encryption-based approaches utilizing homomorphic encryption techniques. However, this type of approach requires substantial computational and communication overhead, which is unfriendly to resource-limited systems. Recently, state-decomposition-based approaches [30, 31] have been favored by researchers. The idea of such approaches is to decompose the state of each agent into two sub-states, one containing insignificant information for communication with other agents and the other containing sensitive information used only for internal information exchange. Another extension of privacy-preserving consensus is dynamics-based approaches [32, 33, 34, 35], which are also the focus of this work. An important benefit of such approaches is that no trade-off exists between privacy and consensus performance, and they are easy to implement in conjunction with techniques like homomorphic encryption, differential privacy, etc. In contrast to the state-decomposition strategy, dynamics-based approaches have a simpler structure and are easier to understand and implement. Note that some of the above privacy-preserving strategies have also recently been applied in the decentralized learning literature [36, 37, 38, 39, 40, 41, 42].

1.2 Main Contributions

In this paper, our work enriches the dynamics-based privacy-preserving methods over unbalanced directed networks. Specifically, the contributions are as follows.

i)

    Based on the conventional push-sum structure, we design a novel push-sum average consensus method enabling privacy preservation. Specifically, during the first several iterations, we ensure privacy preservation by carefully embedding randomness in the mixing weights to confuse communications and by introducing an extra auxiliary parameter to mask the state-update dynamics. Meanwhile, to ensure consensus accuracy, we exploit the intrinsic robustness of consensus dynamics to uncertain changes in information exchange and carefully redesign the push-sum protocol so that the “total mass” of the system is invariant in the presence of the embedded randomness.

ii)

    We provide a formal and rigorous analysis of the convergence rate. Specifically, our analysis consists of two parts: one analyzes the consensus performance of the first several iterations with randomness embedded, and the other analyzes that of the remaining randomness-free dynamics, which have the same structure as the conventional push-sum method [6, 8, 9]. Our analysis exploits the properties of the mixing matrix product and norm relations to build the consensus contraction of each dynamic. The result shows that the designed algorithm attains a linear convergence rate and explicitly captures the effect of the mixing matrix and the network connectivity structure on the convergence rate.

iii)

    Relaxing the privacy notion of [22, 43, 44, 45], which considers only exact initial values, we present two new privacy notions for honest-but-curious attacks and eavesdropping attacks, respectively (see Definition 4). The basic idea is that the attacker faces an infinite number of uncertainties when estimating the initial value from the available information. These privacy notions are more general in the sense that the attacker is unable to determine not only the exact initial value but also any valid range of the initial value.

Notations: \mathbb{N} and \mathbb{R} are the natural and real number sets, respectively. \mathbf{0}, \mathbf{1}, and \mathbf{I} represent the all-zero vector, the all-one vector, and the identity matrix, respectively, whose dimensions are clear from the context. \mathbf{A}=[A_{ij}]_{N\times N} represents an N\times N matrix whose ij-th element is A_{ij}. [\cdot]^{\top} denotes the transpose. The symbol “\setminus” stands for set subtraction. The symbol |\cdot| represents the cardinality when applied to a set and the absolute value when applied to a scalar. The \ell_{2}-norm (resp. \ell_{1}-norm) is signified by \lVert\cdot\rVert (resp. \lVert\cdot\rVert_{1}).

2 Preliminaries

We recall several important properties and concepts associated with graph theory, the conventional push-sum protocol, and privacy.

2.1 Graph Theory

Consider a network consisting of N agents, modeled as a digraph \mathcal{G}=(\mathcal{V},\mathcal{E}), where \mathcal{V}=\{1,\cdots,N\} is the agent set and \mathcal{E} is the edge set, which comprises pairs of agents and characterizes the interactions between them; i.e., agent i affects the dynamics of agent j if a directed edge from i to j exists, expressed as (i,j)\in\mathcal{E}. Moreover, let (i,i)\notin\mathcal{E} for any i\in\mathcal{V}, i.e., no self-loops exist in the digraph. Let \mathcal{N}_{i}^{\text{in}}=\{j|(j,i)\in\mathcal{E}\} and \mathcal{N}_{i}^{\text{out}}=\{j|(i,j)\in\mathcal{E}\} be the in-neighbor and out-neighbor sets of agent i, respectively. Accordingly, D_{i}^{\text{in}}=|\mathcal{N}_{i}^{\text{in}}| and D_{i}^{\text{out}}=|\mathcal{N}_{i}^{\text{out}}| denote the in-degree and out-degree, respectively. For any i,j\in\mathcal{V}, a trail from i to j is a chain of consecutive directed edges. The digraph \mathcal{G} is strongly connected if at least one trail lies between any pair of agents. The associated incidence matrix \mathbf{R}=[R_{ie}]_{N\times|\mathcal{E}|} of \mathcal{G} is given by

R_{ie}=\begin{cases}1,&\text{if agent }i\text{ is the starting point of the }e\text{-th edge};\\ -1,&\text{if agent }i\text{ is the ending point of the }e\text{-th edge};\\ 0,&\text{otherwise}.\end{cases}

One can verify that the sum of each column of \mathbf{R} is zero, i.e., \sum_{i=1}^{N}R_{il}=0 for any l\in[1,|\mathcal{E}|]. The mixing matrix \mathbf{C}=[C_{ij}]_{N\times N} associated with \mathcal{G} is defined as: C_{ji}>0 if j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\} and C_{ji}=0 otherwise.
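As a concrete illustration (our own example, not part of the original analysis), consider the 3-agent ring digraph with \mathcal{E}=\{(1,2),(2,3),(3,1)\}. Its incidence matrix and one admissible mixing matrix are

\mathbf{R}=\begin{bmatrix}1&0&-1\\ -1&1&0\\ 0&-1&1\end{bmatrix},\qquad \mathbf{C}=\begin{bmatrix}\tfrac{1}{2}&0&\tfrac{1}{2}\\ \tfrac{1}{2}&\tfrac{1}{2}&0\\ 0&\tfrac{1}{2}&\tfrac{1}{2}\end{bmatrix},

where each column of \mathbf{R} sums to zero and each column of \mathbf{C} sums to one, i.e., \mathbf{C} satisfies the sum-one condition of Definition 1 below.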

Definition 1.

(Sum-one condition:) For an arbitrary matrix \mathbf{A}=[A_{ij}]_{N\times N}, if \sum_{i=1}^{N}A_{ij}=1 for all j\in\mathcal{V}, then \mathbf{A} is column-stochastic. We claim that each column of \mathbf{A} satisfies the sum-one condition.

Assumption 1.

The directed network \mathcal{G}=(\mathcal{V},\mathcal{E}) is strongly connected, and it holds |\mathcal{V}|=N>2. Each column of the mixing matrix \mathbf{C} satisfies the sum-one condition.

2.2 Conventional Push-Sum Method

Algorithm 1 Push-sum method
1:  Input: Initial states x_{i}(0)=z_{i}(0)=x_{i}^{0}\in\mathbb{R} and y_{i}(0)=1 for any i\in\mathcal{V}. The mixing matrix \mathbf{C}=[C_{ij}]_{N\times N} associated with \mathcal{G}.
2:  for k=0,1,\cdots do
3:     for i=1,\cdots,N in parallel do
4:        Agent i sends the computed C_{li}x_{i}(k) and C_{li}y_{i}(k) to l\in\mathcal{N}_{i}^{\text{out}}.
5:        Agent i uses C_{ij}x_{j}(k) and C_{ij}y_{j}(k) received from j\in\mathcal{N}_{i}^{\text{in}} to update x_{i} and y_{i} as follows:
x_{i}(k+1)=\sum_{j\in\mathcal{N}_{i}^{\text{in}}\cup\{i\}}C_{ij}x_{j}(k), (1)
y_{i}(k+1)=\sum_{j\in\mathcal{N}_{i}^{\text{in}}\cup\{i\}}C_{ij}y_{j}(k), (2)
6:        Agent i computes z_{i}(k+1)=x_{i}(k+1)/y_{i}(k+1).
7:        Until a stopping criterion is satisfied, e.g., agent i stops if |z_{i}(k+1)-\bar{x}^{0}|<\epsilon for some predefined \epsilon>0, where \bar{x}^{0}\triangleq\sum_{j=1}^{N}x_{j}(0)/N.
8:     end for
9:  end for

Regarding the investigation of average consensus, the push-sum algorithm [6, 8, 9] is a well-established protocol, which is summarized in Algorithm 1. All agents simultaneously update two state variables, x_{i}(k) and y_{i}(k), and the sensitive information of agent i is the initial value x_{i}(0). Define \mathbf{x}(k)=[x_{1}(k),\cdots,x_{N}(k)]^{\top}, \mathbf{y}(k)=[y_{1}(k),\cdots,y_{N}(k)]^{\top}, and \mathbf{C}=[C_{ij}]_{N\times N}. We can rewrite (1) and (2) in the compact form:

\mathbf{x}(k+1)=\mathbf{C}\mathbf{x}(k), (3)
\mathbf{y}(k+1)=\mathbf{C}\mathbf{y}(k), (4)

initialized with \mathbf{x}(0)=[x_{1}^{0},\cdots,x_{N}^{0}]^{\top} and \mathbf{y}(0)=\mathbf{1}.

Under Assumption 1, \mathbf{C}^{k} converges to a rank-one matrix at an exponential rate [46, 47]. Let \mathbf{C}^{\infty} be the infinite power of the matrix \mathbf{C}, i.e., \mathbf{C}^{\infty}=\lim_{k\rightarrow\infty}\mathbf{C}^{k}. Applying the Perron-Frobenius theorem [48] gives \mathbf{C}^{\infty}=\bm{\pi}\mathbf{1}^{\top}, where \bm{\pi}=[\pi_{1},\cdots,\pi_{N}]^{\top}. Recursively calculating (3) and (4) yields:

\mathbf{x}(k)=\mathbf{C}^{k}\mathbf{x}(0),\quad\mathbf{y}(k)=\mathbf{C}^{k}\mathbf{y}(0).

Then, it follows that

\lim_{k\rightarrow\infty}z_{i}(k)=\lim_{k\rightarrow\infty}\frac{x_{i}(k)}{y_{i}(k)}=\frac{[\mathbf{C}^{\infty}\mathbf{x}(0)]_{i}}{[\mathbf{C}^{\infty}\mathbf{y}(0)]_{i}}=\frac{\pi_{i}\sum_{j=1}^{N}x_{j}(0)}{\pi_{i}\sum_{j=1}^{N}y_{j}(0)}=\frac{\sum_{j=1}^{N}x_{j}(0)}{N}, (5)

where [\cdot]_{i} denotes the i-th element of [\cdot]. Thus, the ratio z_{i}(k) gradually converges to \bar{x}^{0}. More details of the analysis can be found in [6].
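For readers who prefer code, the following is a minimal NumPy sketch of Algorithm 1, run on the 3-agent ring digraph from the example above; the matrix \mathbf{C} and the initial values are our own illustrative choices.

```python
import numpy as np

def push_sum(C, x0, num_iters=100):
    """Minimal sketch of the conventional push-sum method (Algorithm 1).

    C  : (N, N) column-stochastic mixing matrix of a strongly
         connected digraph (C[j, i] > 0 iff j in N_i^out or j == i).
    x0 : length-N sequence of initial values x_i(0).
    """
    x = np.asarray(x0, dtype=float).copy()
    y = np.ones_like(x)              # y_i(0) = 1
    for _ in range(num_iters):
        x = C @ x                    # x(k+1) = C x(k), cf. (3)
        y = C @ y                    # y(k+1) = C y(k), cf. (4)
    return x / y                     # z_i(k) -> average of x0, cf. (5)

# The 3-agent ring digraph from the example above.
C = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
print(push_sum(C, [10.0, 20.0, 30.0]))   # -> approximately [20. 20. 20.]
```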

2.3 Privacy Concern

We introduce two prevalent attack types, namely, honest-but-curious attacks and eavesdropping attacks. Then, we explain that Algorithm 1 fails to preserve privacy due to the explicit sharing of state variables.

Definition 2.

An honest-but-curious attack is an attack in which some agents, who follow the state-update protocols properly, try to infer the initial values of other agents by using the received information.

Definition 3.

An eavesdropping attack is an attack in which an external eavesdropper is able to capture all sharing information by wiretapping communication channels so as to infer the private information about sending agents.

In general, in terms of information leakage, an eavesdropping attack is more devastating than an honest-but-curious attack, as the former can capture all transmitted information while the latter can only access the received information. Yet, the latter has the advantage that the initial values \{x_{j}^{0}\} of all honest-but-curious agents j are known to them, which is unavailable to external eavesdroppers.

For average consensus, the sensitive information to be protected is the initial value x_{i}(0), i\in\mathcal{V}. At the first iteration, agent i sends the computed values C_{ji}x_{i}(0) and C_{ji}y_{i}(0) to all of its out-neighbors j\in\mathcal{N}_{i}^{\text{out}}. Then, the initial value x_{i}(0) is uniquely inferable by an honest-but-curious agent j using x_{i}(0)=\frac{C_{ji}x_{i}(0)}{C_{ji}y_{i}(0)} and y_{i}(0)=1. Therefore, honest-but-curious agents are always able to infer the sensitive information of their in-neighbors. Likewise, one can readily check that external eavesdroppers are also able to easily infer the sensitive information of all agents. Therefore, the privacy concern is not addressed in the conventional push-sum method. In this work, we study this privacy concern and develop a privacy-preserving version of Algorithm 1 that achieves exact average consensus.
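A tiny numeric illustration of this inference (the weight and initial value below are hypothetical):

```python
# Agent j receives C_ji * x_i(0) and C_ji * y_i(0); the common weight cancels.
C_ji, x_i0, y_i0 = 0.37, 25.0, 1.0           # hypothetical values
sent_x, sent_y = C_ji * x_i0, C_ji * y_i0    # what agent i transmits
print(sent_x / sent_y)                       # 25.0, i.e., x_i(0) is recovered
```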

2.4 Performance Metric

Our task is to propose an average consensus algorithm that achieves exact convergence while guaranteeing privacy. According to the above discussion, the following two requirements for privacy-preserving push-sum algorithms must be satisfied.

i)

    Exact output: After the last iteration of the algorithm, each agent should converge to the average consensus point \bar{x}^{0}.

ii)

    Privacy preservation: During the entire algorithm implementation, the private information, i.e., the initial value x_{i}^{0}, of each legitimate agent i should be preserved against both honest-but-curious and eavesdropping attacks.

In order to respond to the above two requirements, two metrics are required to quantify them.

Output metric: To measure the accuracy of the output, we adopt the consensus error \lVert\mathbf{z}(k)-\bar{x}^{0}\mathbf{1}\rVert. The algorithm achieves exact consensus if \lim_{k\rightarrow\infty}\lVert\mathbf{z}(k)-\bar{x}^{0}\mathbf{1}\rVert=0. Furthermore, the algorithm is said to be elegant if \lVert\mathbf{z}(k)-\bar{x}^{0}\mathbf{1}\rVert=\mathcal{O}(\rho^{k}), \rho\in(0,1).

Privacy metric: For honest-but-curious attacks, we consider the presence of a set \mathcal{H} of honest-but-curious agents. The accessible information set of \mathcal{H} is represented as \mathcal{I}_{h}(k)=\{\mathcal{I}_{j}(k)|j\in\mathcal{H}\}, where \mathcal{I}_{j}(k) represents the information available to agent j\in\mathcal{H} at iteration k. Given a moment k^{\prime}\in\mathbb{N}, the information accessed by the agents \mathcal{H} over the time period 0-k^{\prime} is \mathcal{I}_{h}(0:k^{\prime})=\cup_{0\leq k\leq k^{\prime}}\mathcal{I}_{h}(k). For any information sequence \mathcal{I}_{h}(0:k^{\prime}), define \mathcal{S}_{0}^{i} as the set of all possible initial values of the legitimate agent i that leave the information accessed by the agents \mathcal{H} unchanged. That is to say, for any two initial values x_{i}^{0},\tilde{x}_{i}^{0}\in\mathcal{S}_{0}^{i} with x_{i}^{0}\neq\tilde{x}_{i}^{0}, it holds \tilde{\mathcal{I}}_{h}(0:k^{\prime})=\mathcal{I}_{h}(0:k^{\prime}). The diameter of \mathcal{S}_{0}^{i} is defined as

\mathbf{D}(\mathcal{S}_{0}^{i})=\sup_{x_{i}(0),\tilde{x}_{i}(0)\in\mathcal{S}_{0}^{i}}|x_{i}(0)-\tilde{x}_{i}(0)|.

For eavesdropping attacks, we consider the presence of an external eavesdropper whose available information is denoted as \mathcal{I}_{e}(k), k\in\mathbb{N}. Let \mathcal{I}_{e}(0:k^{\prime})=\cup_{0\leq k\leq k^{\prime}}\mathcal{I}_{e}(k). Similar to the honest-but-curious case, we define \mathcal{S}_{0} as the set of all possible initial values of all agents that leave the information accessed by the external eavesdropper unchanged. That is, for \mathbf{x}(0),\mathbf{\tilde{x}}(0)\in\mathcal{S}_{0} with \mathbf{x}(0)\neq\mathbf{\tilde{x}}(0), it holds \mathcal{I}_{e}(k)=\tilde{\mathcal{I}}_{e}(k). In addition, the diameter of \mathcal{S}_{0} is given as

\mathbf{D}(\mathcal{S}_{0})=\sup_{\mathbf{x}(0),\mathbf{\tilde{x}}(0)\in\mathcal{S}_{0}}\lVert\mathbf{x}(0)-\mathbf{\tilde{x}}(0)\rVert.

For honest-but-curious and eavesdropping attacks, we use \mathbf{D}(\mathcal{S}_{0}^{i}) for all legitimate agents i\in\mathcal{V}\setminus\mathcal{H} and \mathbf{D}(\mathcal{S}_{0}) for all agents to measure individual privacy and algorithm-level confidentiality, respectively. For more details, see the definition below.

Definition 4.

The algorithm is said to be elegant in terms of privacy preservation if \mathbf{D}(\mathcal{S}_{0}^{i})=\infty or \mathbf{D}(\mathcal{S}_{0})=\infty for any information sequence \mathcal{I}_{h}(0:k^{\prime}) or \mathcal{I}_{e}(0:k^{\prime}), k^{\prime}\in\mathbb{N}, respectively.

The privacy concept outlined in Definition 4 shares similarities with the uncertainty-based privacy concept presented in [49], which derives inspiration from the l-diversity principle [50]. Within the l-diversity framework, the diversity of any private information is gauged by the number of disparate estimates produced for the information: the higher this diversity, the more ambiguous the associated private information becomes. In our setting, the private information is the initial value x_{i}^{0} (resp. \mathbf{x}(0)), whose diversity is measured by the diameter \mathbf{D}(\mathcal{S}_{0}^{i}) (resp. \mathbf{D}(\mathcal{S}_{0})). Larger diameters imply greater uncertainty in the estimation of the initial values.

Remark 1.

Note that Definition 4 indicates that attackers cannot uniquely determine an exact value or even a valuable range of x_{i}^{0}, and hence it is more stringent than the notion defined in [22, 43, 44, 45], which only requires that the private information not be exactly inferred.

3 Privacy-Preserving Push-Sum Algorithm

Based on the discussion of Algorithm 1, one knows that adopting the same weight C_{ji} for both C_{ji}x_{i}(0) and C_{ji}y_{i}(0) causes privacy (i.e., initial value) leakage. To solve this issue, a dynamics-based weight generation mechanism was developed in [33], whose details are outlined in Protocol 1.

Protocol 1 Weight generation mechanism
1:  Required parameters: Parameters K\in\mathbb{N} and \eta\in(0,1) are known to each agent.
2:  Two sets of tailored mixing weights associated with any edge (j,i)\in\mathcal{E} are generated. Specifically, when k\leq K, two groups of mixing weights \{C_{ji}^{1}(k)\in\mathbb{R}|\,j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\}\} and \{C_{ji}^{2}(k)\in\mathbb{R}|\,j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\}\} associated with agent i are generated, which satisfy \sum_{j=1}^{N}C_{ji}^{1}(k)=1 and \sum_{j=1}^{N}C_{ji}^{2}(k)=1; when k>K, only one group of mixing weights \{C_{ji}(k)=C_{ji}^{1}(k)=C_{ji}^{2}(k)\in(\eta,1)|\,j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\}\}, satisfying \sum_{j=1}^{N}C_{ji}(k)=1, is generated. Note that \{C_{ji}^{1}(k)\} and \{C_{ji}^{2}(k)\} are mixed into x_{i} and y_{i}, respectively. Moreover, agent i always sets C_{ji}^{1}(k)=0 and C_{ji}^{2}(k)=0 for j\notin\mathcal{N}_{i}^{\text{out}}\cup\{i\}.

The main idea of the dynamics-based mechanism is to confuse the state variables of the agents by injecting randomness into the mixing matrix in the first few iterations. Fig. 1 briefly depicts the basic process. Evidently, the dynamics-based protocol has two stages. The first stage runs from iteration k=0 to k=K and can be regarded as a re-initialization operation on the initial values; this stage is key to privacy preservation. The second stage runs from k=K+1 to k=\infty and can be viewed as the normal execution of the conventional push-sum method; this stage is key to ensuring convergence.

Figure 1: Dynamics-based protocol: A brief computation process from the view of agent i over a simple 3-agent digraph.
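As a concrete sketch of Protocol 1 from the sending side, the following Python fragment generates the two weight groups for one agent; the sampling ranges are illustrative assumptions of ours (the protocol only requires the stated sum-one and (\eta,1) constraints).

```python
import numpy as np

rng = np.random.default_rng(0)

def protocol1_weights(i, out_neighbors, k, K, eta):
    """Weights agent i assigns to j in N_i^out union {i} at iteration k.

    Returns two dicts {j: C1_ji}, {j: C2_ji}; both groups satisfy the
    sum-one condition over agent i's column.
    """
    targets = list(out_neighbors) + [i]
    d = len(targets)
    if k <= K:
        # Arbitrary reals; the last entry enforces the sum-one condition.
        c1 = rng.uniform(-100, 100, size=d); c1[-1] = 1.0 - c1[:-1].sum()
        c2 = rng.uniform(-100, 100, size=d); c2[-1] = 1.0 - c2[:-1].sum()
    else:
        # One shared group with entries in (eta, 1) summing to one
        # (this construction needs eta < 1/d, cf. Remark 3).
        c1 = eta + (1.0 - d * eta) * rng.dirichlet(np.ones(d))
        c2 = c1
    return dict(zip(targets, c1)), dict(zip(targets, c2))
```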

The dynamics-based method has been proved to reach an exact consensus point, and it is shown in [33] that the sensitive information of legitimate agents cannot be inferred by honest-but-curious attackers. However, three significant challenges have not been addressed: I) in the initial K iterations, although each weight is arbitrary, the sum-one condition still imposes a constraint on the weight setting; II) the method cannot protect sensitive information from external eavesdropping attackers; III) only asymptotic convergence is discussed in most of the average consensus literature [30, 31, 32, 33, 34, 35], and analyses of the convergence speed are rare.

To solve the above issues, we carefully redesign the push-sum rule to address I) and II), while III) is tackled in Section 4. From Protocol 1, one knows that the dynamics-based method mainly operates on the first K iterations to preserve the private information. Specifically, the update rule of the x-variable is given as

x_{i}(k+1)=\sum_{j\in\mathcal{N}_{i}^{\text{in}}\cup\{i\}}C_{ij}^{1}(k)x_{j}(k),\quad k\leq K,

where C_{ij}^{1}(k) is generated by Protocol 1. Note that the sum-one condition is used to ensure that the sum of all variables is invariant at each iteration k\leq K, that is,

\sum_{i=1}^{N}x_{i}(k+1)=\sum_{i=1}^{N}x_{i}(k). (6)

Thus, if we wish to circumvent the sum-one constraint, the new update rule must still make (6) hold. Specifically, we take advantage of the fact that the amounts of messages sent and received are equal for the entire system (i.e., the total mass of the system is fixed) and modify the update of the x-variable as

x_{i}(k+1)=x_{i}(k)+\varXi_{i}(k) (7)

with

\varXi_{i}(k)\triangleq\sum_{j\in\mathcal{N}_{i}^{\text{in}}}C_{ij}^{1}(k)x_{j}(k)-\sum_{j\in\mathcal{N}_{i}^{\text{out}}}C_{ji}^{1}(k)x_{i}(k),

where C_{ij}^{1}(k) can take any value in \mathbb{R} (the sum-one condition is not required). One verifies that \sum_{i=1}^{N}\varXi_{i}(k)=0: each transmitted quantity C_{ji}^{1}(k)x_{i}(k) enters the sum once positively at the receiver and once negatively at the sender. Obviously, summing x_{i}(k+1) in (7) over i=1,\cdots,N yields (6). However, the update rule (7) is only valid against honest-but-curious attacks and is still ineffective against eavesdropping attacks; see Corollary 2. Thus, we further introduce an auxiliary parameter \sigma(k)\in\mathbb{R} for k\leq K, which is public information known to all agents but not to the external eavesdropper. Details of our method are summarized in Algorithm 2.
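The following short NumPy fragment (our own toy digraph and values) implements one masked update of the form x_i + \sigma(k)\varXi_i(k) and checks that the total mass is preserved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-agent ring digraph: agent i sends to out_nbrs[i].
out_nbrs = {0: [1], 1: [2], 2: [0]}

def masked_step(x, sigma):
    """One masked x-update: x_i <- x_i + sigma * Xi_i, cf. (7)-(8)."""
    N = len(x)
    xi = np.zeros(N)
    for i in range(N):
        for j in out_nbrs[i]:
            c = rng.uniform(-100, 100)   # arbitrary weight C1_ji, no sum-one
            xi[j] += c * x[i]            # receiver j gains C1_ji * x_i
            xi[i] -= c * x[i]            # sender i loses the same amount
    return x + sigma * xi

x = np.array([10.0, 20.0, 30.0])
x_next = masked_step(x, sigma=rng.normal(0, 10))
print(x.sum(), x_next.sum())   # both ~60.0: the total mass is invariant
```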

Algorithm 2 Secure average consensus algorithm
1:  Input: Initial states x_{i}(0)=z_{i}(0)=x_{i}^{0} and y_{i}(0)=1 for i\in\mathcal{V}; parameters K\in\mathbb{N}, \sigma(k)\in\mathbb{R} for k\in\mathbb{N}, and \eta\in(0,1); network \mathcal{G}.
2:  Weight generation: Two sets of random mixing weights associated with any edge (j,i)\in\mathcal{E} are generated. For the variable y_{i}(k) at any k\in\mathbb{N}, a group of mixing weights \{C_{ji}^{2}(k)\in(\eta,1)|\,j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\}\} is generated, which satisfies \sum_{j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\}}C_{ji}^{2}(k)=1. For the variable x_{i}(k), if k\leq K, a group of mixing weights \{C_{ji}^{1}(k)\in\mathbb{R}|\,j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\}\} is generated; otherwise, a group of mixing weights \{C_{ji}^{1}(k)=C_{ji}^{2}(k)\in(\eta,1)|\,j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\}\} is generated. Moreover, agent i always sets C_{ji}^{1}(k)=0 and C_{ji}^{2}(k)=0 for j\notin\mathcal{N}_{i}^{\text{out}}\cup\{i\}.
3:  for k=0,1,\cdots do
4:     for i=1,\cdots,N in parallel do
5:        Agent i sends the computed C_{li}^{1}(k)x_{i}(k) and C_{li}^{2}(k)y_{i}(k) to l\in\mathcal{N}_{i}^{\text{out}}.
6:        Agent i uses C_{ij}^{1}(k)x_{j}(k) and C_{ij}^{2}(k)y_{j}(k) received from j\in\mathcal{N}_{i}^{\text{in}} to update x_{i} and y_{i} as follows:
x_{i}(k+1)=\begin{cases}x_{i}(k)+\sigma(k)\varXi_{i}(k),&\text{if}\,\,k\leq K;\\ \sum_{j\in\mathcal{N}_{i}^{\text{in}}\cup\{i\}}C_{ij}^{1}(k)x_{j}(k),&\text{if}\,\,k\geq K+1.\end{cases} (8)
y_{i}(k+1)=\sum_{j\in\mathcal{N}_{i}^{\text{in}}\cup\{i\}}C_{ij}^{2}(k)y_{j}(k),\,\,k\geq 0. (9)
7:        Agent i computes z_{i}(k+1)=x_{i}(k+1)/y_{i}(k+1).
8:        Until a stopping criterion is satisfied, e.g., agent i stops if |z_{i}(k+1)-\bar{x}^{0}|<\epsilon for some predefined \epsilon>0.
9:     end for
10:  end for
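To make the two-stage rule concrete, here is a compact, centralized NumPy simulation of Algorithm 2; the ring topology, the sampling ranges, and the helper column_weights are our own illustrative assumptions, not the paper's prescribed choices.

```python
import numpy as np

rng = np.random.default_rng(2)

N, K, eta = 5, 2, 0.01
out_nbrs = [[(i + 1) % N] for i in range(N)]     # ring digraph: i -> i+1

def column_weights(support, n):
    """Column-stochastic weights in (eta, 1) on `support` (needs eta < 1/len(support))."""
    d = len(support)
    w = eta + (1.0 - d * eta) * rng.dirichlet(np.ones(d))
    col = np.zeros(n)
    col[support] = w
    return col

x = np.array([10.0, 15.0, 20.0, 25.0, 30.0])     # x_i(0), mean 20
y = np.ones(N)                                   # y_i(0) = 1
for k in range(60):
    C2 = np.column_stack(
        [column_weights(out_nbrs[i] + [i], N) for i in range(N)])
    if k <= K:                                   # masked stage of (8)
        sigma = rng.normal(0, 10)                # public, hidden from eavesdroppers
        C1 = rng.uniform(-100, 100, size=(N, N)) # arbitrary reals, no sum-one
        xi = np.zeros(N)
        for i in range(N):
            for j in out_nbrs[i]:                # i sends C1[j, i] * x[i] to j
                xi[j] += C1[j, i] * x[i]
                xi[i] -= C1[j, i] * x[i]
        x = x + sigma * xi                       # total mass of x is unchanged
    else:                                        # push-sum stage, C1(k) = C2(k)
        x = C2 @ x
    y = C2 @ y                                   # update (9), for all k
print(x / y)                                     # -> approximately [20 20 20 20 20]
```

The printout shows the ratios z_{i}(k) converging to \bar{x}^{0}=20, as Theorem 1 below predicts.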
Remark 2.

Note that we mainly embed randomness in \mathbf{C}_{1}(k) in the first K iterations and do not do so for \mathbf{C}_{2}(k). Embedding randomness in \mathbf{C}_{1}(k) alone guarantees that \mathbf{C}_{1}(k)\neq\mathbf{C}_{2}(k) for k\leq K, and the auxiliary variable y does not contain private information, so there is no need to embed randomness in \mathbf{C}_{2}(k). Of course, if embedding randomness in \mathbf{C}_{2}(k) is desired, the update of the y-variable in (9) is formulated as:

y_{i}(k+1)=y_{i}(k)+\sigma^{\prime}(k)\Big(\sum_{j\in\mathcal{N}_{i}^{\text{in}}}C_{ij}^{2}(k)y_{j}(k)-\sum_{j\in\mathcal{N}_{i}^{\text{out}}}C_{ji}^{2}(k)y_{i}(k)\Big),

where \sigma^{\prime}(k) and C_{ij}^{2}(k) are generated in a similar way as \sigma(k) and C_{ij}^{1}(k) in Algorithm 2.

4 Convergence Analysis

Following Algorithm 2, it holds from the dynamics (8)-(9) that

\mathbf{x}(k+1)=\mathbf{C}_{1}(k)\mathbf{x}(k),\,\,k\geq K+1, (10)
\mathbf{y}(k+1)=\mathbf{C}_{2}(k)\mathbf{y}(k),\,\,k\geq 0, (11)

where \mathbf{C}_{1}(k)=[C_{ij}^{1}(k)]_{N\times N} and \mathbf{C}_{2}(k)=[C_{ij}^{2}(k)]_{N\times N}. It is clear from the settings of Algorithm 2 that: i) \mathbf{C}_{2}(k) is time-varying and column-stochastic for all k\geq 0, as is \mathbf{C}_{1}(k) for k\geq K+1; and ii) \mathbf{C}_{1}(k)=\mathbf{C}_{2}(k) for k\geq K+1.

Define \mathbf{\Phi}_{1}(k:s)=\mathbf{C}_{1}(k)\cdots\mathbf{C}_{1}(s) and \mathbf{\Phi}_{2}(k:s)=\mathbf{C}_{2}(k)\cdots\mathbf{C}_{2}(s) for k\geq s\geq 0. In particular, \mathbf{\Phi}_{1}(k:k)=\mathbf{C}_{1}(k) and \mathbf{\Phi}_{2}(k:k)=\mathbf{C}_{2}(k). Recursively computing (10) and (11), we can obtain

\mathbf{x}(k+1)=\mathbf{\Phi}_{1}(k:K+1)\mathbf{x}(K+1),\,\,k\geq K+1, (12)
\mathbf{y}(k+1)=\mathbf{\Phi}_{2}(k:0)\mathbf{y}(0),\,\,k\geq 0, (13)

where it holds \mathbf{\Phi}_{1}(k:K+1)=\mathbf{\Phi}_{2}(k:K+1) for k\geq K+1. Then, it follows that

\mathbf{1}^{\top}\mathbf{x}(k+1)=\mathbf{1}^{\top}\mathbf{x}(K+1),\,\,k\geq K+1, (14)
\mathbf{1}^{\top}\mathbf{y}(k+1)=\mathbf{1}^{\top}\mathbf{y}(0)=N,\,\,k\geq 0, (15)

where we use the column stochasticity of \mathbf{\Phi}_{1}(k:K+1) and \mathbf{\Phi}_{2}(k:0). For the masked dynamics of x_{i} in (8) (k\leq K), using the fact that \sum_{i=1}^{N}\varXi_{i}(k)=0 gives

\mathbf{1}^{\top}\mathbf{x}(k+1)=\sum_{i=1}^{N}x_{i}(k+1)=\sum_{i=1}^{N}\big(x_{i}(k)+\sigma(k)\varXi_{i}(k)\big)=\sum_{i=1}^{N}x_{i}(k)=\mathbf{1}^{\top}\mathbf{x}(k)=\mathbf{1}^{\top}\mathbf{x}(0), (16)

which matches the relation (6). Combining (14) and (16) gives

\mathbf{1}^{\top}\mathbf{x}(k+1)=\mathbf{1}^{\top}\mathbf{x}(0),\,\,k\geq 0. (17)

Note that the dynamics of Algorithm 2 for iterations k\geq K+1 are analogous to the conventional push-sum method. In view of (17), the randomness injected in the masked iterations k\leq K has no impact on the consensus value. Next we show that Algorithm 2 guarantees a linear convergence rate. Let \mathbf{z}(k)=[z_{1}(k),\cdots,z_{N}(k)]^{\top}.

Theorem 1.

Let \{(z_{i}(k))_{i=1}^{N}\}_{k\in\mathbb{N}} be the sequence generated by Algorithm 2, and let the network \mathcal{G} satisfy Assumption 1. Then it holds, for all k\in\mathbb{N},

\lVert\mathbf{z}(k)-\bar{x}^{0}\mathbf{1}\rVert\leq c\rho^{k},

where \rho=(1-\eta^{N-1})^{\frac{1}{N-1}} and c is a constant given as

c=\max\big\{c_{1},\,(c_{2}+c_{3})\lVert\mathbf{x}(0)\rVert_{1},\,(c_{2}\rho^{-1}+c_{3})\lVert\mathbf{x}(1)\rVert_{1},\,\cdots,\,(c_{2}\rho^{-K-1}+c_{3})\lVert\mathbf{x}(K+1)\rVert_{1}\big\},

where c_{1}=2\sqrt{N}c_{0}\lVert\mathbf{x}(K+1)\rVert_{1}\eta^{-N}\rho^{-K-2}, c_{2}=2\sqrt{N}\eta^{-N}-(N-1)/\sqrt{N}, and c_{3}=N^{-1/2}\eta^{-N}c_{0}\rho^{-1}.

Proof.

Details of the analysis are outlined in Appendix A. ∎

Remark 3.

Theorem 1 indicates that Algorithm 2 achieves an \mathcal{O}(\rho^{k}) convergence rate with \rho=(1-\eta^{N-1})^{\frac{1}{N-1}}. Evidently, a smaller \rho yields a faster convergence rate. A straightforward way to obtain a smaller \rho is to increase \eta. However, it is essential to be aware that \eta cannot be arbitrarily close to 1 due to the nonnegativity and column stochasticity of the mixing matrix for k\geq K+1. To satisfy the weight generation mechanism in Algorithm 2, it must hold that 0\leq\eta\leq 1/(\max_{i}D_{i}^{\text{out}}+1).
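A quick numeric illustration of this trade-off (our own numbers; the bound \eta\leq 1/(\max_{i}D_{i}^{\text{out}}+1) permits \eta up to 1/3 when the maximum out-degree is 2):

```python
# rho = (1 - eta^(N-1))^(1/(N-1)) for N = 5 agents.
N = 5
for eta in (0.01, 0.1, 1 / 3):
    rho = (1 - eta ** (N - 1)) ** (1 / (N - 1))
    print(f"eta = {eta:.3f}  ->  rho = {rho:.8f}")
# Larger eta gives a smaller rho, i.e., faster linear convergence.
```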

5 Privacy Analysis

We now show that Algorithm 2 is resistant to both honest-but-curious and eavesdropping attacks.

5.1 Performance Against Honest-but-curious Attacks

For the honest-but-curious attacks, we make the following standard assumption.

Assumption 2.

Consider a strongly connected network \mathcal{G} in which some colluding honest-but-curious agents \mathcal{H} exist. We assume that each agent i\in\mathcal{V} has at least one legitimate neighbor, i.e., \mathcal{N}_{i}^{\text{out}}\cup\mathcal{N}_{i}^{\text{in}}\nsubseteq\mathcal{H}.

Remark 4.

Assumption 2 is common in the secure consensus literature [30, 31, 32, 33, 34, 35]. For example, suppose we hold an agent i in a real network. If all other agents in the network are untrustworthy, we can simply add another agent l that we hold to the network and connect it to agent i, so that agent i has a legitimate neighbor.

Theorem 2.

Under Assumptions 1-2, the initial value x_{i}^{0} of a legitimate agent i\in\mathcal{V} can be safely protected during the running of Algorithm 2 if \mathcal{N}_{i}^{\text{out}}\cup\mathcal{N}_{i}^{\text{in}}\nsubseteq\mathcal{H}.

Proof.

Recalling the definition of the privacy metric in Section 2.4, the privacy of agent i is safely protected insofar as \mathbf{D}(\mathcal{S}_{0}^{i})=\infty. The information available to \mathcal{H} is \mathcal{I}_{h}=\{\mathcal{I}_{j}|j\in\mathcal{H}\}, where \mathcal{I}_{j} denotes the information available to each individual agent j\in\mathcal{H}, given as

\mathcal{I}_{j}=\{\mathcal{I}_{j}^{\text{state}}(k)\cup\mathcal{I}_{j}^{\text{send}}(k)\cup\mathcal{I}_{j}^{\text{receive}}(k)|k\geq 0\}\cup\{\sigma(k)|0\leq k\leq K\}\cup\{y_{m}(0)=1|m\in\mathcal{V}\}\cup\{C_{nj}^{1}(k),C_{nj}^{2}(k)|n\in\mathcal{V},k\geq 0\}

with

\mathcal{I}_{j}^{\text{state}}(k)=\{x_{j}(k),y_{j}(k)\},
\mathcal{I}_{j}^{\text{send}}(k)=\{C_{nj}^{1}(k)x_{j}(k),C_{nj}^{2}(k)y_{j}(k)|n\in\mathcal{N}_{j}^{\text{out}}\cup\{j\}\},
\mathcal{I}_{j}^{\text{receive}}(k)=\{C_{jm}^{1}(k)x_{m}(k),C_{jm}^{2}(k)y_{m}(k)|m\in\mathcal{N}_{j}^{\text{in}}\}.

To prove \mathbf{D}(\mathcal{S}_{0}^{i})=\infty, it suffices to show that the agents in \mathcal{H} fail to judge whether the initial value of agent i is x_{i}^{0} or \tilde{x}_{i}^{0}=x_{i}^{0}+\delta, where \delta is an arbitrary value in \mathbb{R} and x_{i}^{0},\tilde{x}_{i}^{0}\in\mathcal{S}_{0}^{i}. Note that the agents in \mathcal{H} are only able to infer x_{i}^{0} using \mathcal{I}_{h}. In other words, if the initial value \tilde{x}_{i}^{0}=x_{i}^{0}+\delta leaves the information \tilde{\mathcal{I}}_{h} accessed by the agents of \mathcal{H} unchanged, i.e., \tilde{\mathcal{I}}_{h}=\mathcal{I}_{h}, then \mathbf{D}(\mathcal{S}_{0}^{i})=\infty. Hence, we only need to prove that \tilde{\mathcal{I}}_{h}=\mathcal{I}_{h} holds under the two different initial values \tilde{x}_{i}^{0} and x_{i}^{0}.

Since \mathcal{N}_{i}^{\text{out}}\cup\mathcal{N}_{i}^{\text{in}}\nsubseteq\mathcal{H}, there exists at least one agent l\in(\mathcal{N}_{i}^{\text{out}}\cup\mathcal{N}_{i}^{\text{in}})\setminus\mathcal{H}. Thus, there exist settings of the initial value of agent l and of the mixing weights associated with agent l, all satisfying the requirements of Algorithm 2, such that \tilde{\mathcal{I}}_{h}=\mathcal{I}_{h} holds for any variant \tilde{x}_{i}^{0}. More specifically, the initial settings are given as

\tilde{x}_{i}^{0}=x_{i}^{0}+\delta,\quad\tilde{x}_{l}^{0}=x_{l}^{0}-\delta,\quad\tilde{x}_{m}^{0}=x_{m}^{0},\,\,m\in\mathcal{V}\setminus\{i,l\}, (18)

where \delta is nonzero and equals neither -x_{i}(0) nor x_{l}(0). Apparently, such an initial value setting has no impact on the sum of the original initial values. Then, we properly choose the mixing weights such that \tilde{\mathcal{I}}_{h}=\mathcal{I}_{h}. Here, “properly” means that the chosen mixing weights obey the weight generation mechanism in Algorithm 2. Our analysis is continued in two cases, l\in\mathcal{N}_{i}^{\text{out}} and l\in\mathcal{N}_{i}^{\text{in}}, respectively.

Case I: We consider l\in\mathcal{N}_{i}^{\text{out}}. One derives \tilde{\mathcal{I}}_{h}=\mathcal{I}_{h} if the weights are set as

\tilde{C}_{mn}^{1}(0)=C_{mn}^{1}(0),\,\,m\in\mathcal{V},n\in\mathcal{V}\setminus\{i,l\}, (19a)
\tilde{C}_{mi}^{1}(0)=C_{mi}^{1}(0)x_{i}^{0}/\tilde{x}_{i}^{0},\,\,m\in\mathcal{V}\setminus\{i,l\}, (19b)
\tilde{C}_{li}^{1}(0)=(\sigma(0)C_{li}^{1}(0)x_{i}^{0}+\delta)/(\sigma(0)\tilde{x}_{i}^{0}), (19c)
\tilde{C}_{ml}^{1}(0)=C_{ml}^{1}(0)x_{l}^{0}/\tilde{x}_{l}^{0},\,\,m\in\mathcal{V}\setminus\{l\}, (19d)
\tilde{C}_{ii}^{1}(0),\tilde{C}_{ll}^{1}(0)\in\mathbb{R}, (19e)
\tilde{C}_{mn}^{1}(k)=C_{mn}^{1}(k),\,\,m,n\in\mathcal{V},k\geq 1, (19f)
\tilde{C}_{mn}^{2}(k)=C_{mn}^{2}(k),\,\,m,n\in\mathcal{V},k\geq 0. (19g)

Case II: We consider l\in\mathcal{N}_{i}^{\text{in}}. One derives \tilde{\mathcal{I}}_{h}=\mathcal{I}_{h} if the weights are set as

\tilde{C}_{mn}^{1}(0)=C_{mn}^{1}(0),\,\,m\in\mathcal{V},n\in\mathcal{V}\setminus\{i,l\}, (20a)
\tilde{C}_{mi}^{1}(0)=C_{mi}^{1}(0)x_{i}^{0}/\tilde{x}_{i}^{0},\,\,m\in\mathcal{V}\setminus\{i\}, (20b)
\tilde{C}_{ml}^{1}(0)=C_{ml}^{1}(0)x_{l}^{0}/\tilde{x}_{l}^{0},\,\,m\in\mathcal{V}\setminus\{i,l\}, (20c)
\tilde{C}_{il}^{1}(0)=(\sigma(0)C_{il}^{1}(0)x_{l}^{0}-\delta)/(\sigma(0)\tilde{x}_{l}^{0}), (20d)
\tilde{C}_{ii}^{1}(0),\tilde{C}_{ll}^{1}(0)\in\mathbb{R}, (20e)
\tilde{C}_{mn}^{1}(k)=C_{mn}^{1}(k),\,\,m,n\in\mathcal{V},k\geq 1, (20f)
\tilde{C}_{mn}^{2}(k)=C_{mn}^{2}(k),\,\,m,n\in\mathcal{V},k\geq 0. (20g)

Combining Cases I and II, it can be derived that \tilde{\mathcal{I}}_{h}=\mathcal{I}_{h} under the initial value \tilde{x}_{i}^{0}=x_{i}^{0}+\delta\in\mathcal{S}_{0}^{i}. Then

\mathbf{D}(\mathcal{S}_{0}^{i})\geq\sup_{\delta\in\mathbb{R}}|x_{i}^{0}-\tilde{x}_{i}^{0}|=\sup_{\delta\in\mathbb{R}}|\delta|=\infty.

Therefore, the initial value x_{i}^{0} of agent i is preserved against the agents \mathcal{H} if agent i has at least one legitimate neighbor l\in\mathcal{V}\setminus\mathcal{H}. ∎

Remark 5.

By (19e) and (20e), one knows that the privacy of Algorithm 2 does not impose any requirement on the weights \tilde{C}_{ii}^{1}(0) and \tilde{C}_{ll}^{1}(0). The reason is that each agent i in Algorithm 2 does not use such self-weights in the iterations k=0,1,\cdots,K. One benefit of this design is that the mixing weights of the transmitted information in the iterations k=0,1,\cdots,K need not satisfy the sum-one condition, which in turn provides better flexibility in the setting of mixing weights.

Corollary 1.

During the running of Algorithm 2, the initial value x_{i}^{0} of agent i\notin\mathcal{H} would be revealed if \mathcal{N}_{i}^{\text{out}}\cup\mathcal{N}_{i}^{\text{in}}\subset\mathcal{H} holds.

Proof.

Recursively computing the update of the x-variable for k\leq K yields

x_{i}(K+1)-x_{i}(0)=\sum_{t=0}^{K}\sigma(t)\Big(\sum_{n\in\mathcal{N}_{i}^{\text{in}}}C_{in}^{1}(t)x_{n}(t)-\sum_{m\in\mathcal{N}_{i}^{\text{out}}}C_{mi}^{1}(t)x_{i}(t)\Big). (21)

Then, using the column stochasticity of \mathbf{C}_{1}(k) for k\geq K+1 and of \mathbf{C}_{2}(k) for k\geq 0, we have

x_{i}(k)=C_{ii}^{1}(k)x_{i}(k)+\sum_{m\in\mathcal{N}_{i}^{\text{out}}}C_{mi}^{1}(k)x_{i}(k),\,\,k\geq K+1,
y_{i}(k)=C_{ii}^{2}(k)y_{i}(k)+\sum_{m\in\mathcal{N}_{i}^{\text{out}}}C_{mi}^{2}(k)y_{i}(k),\,\,k\geq 0.

Combining the above relations with (8) and (9), one arrives at

x_{i}(k)-x_{i}(K+1)=\sum_{t=K+1}^{k-1}\Big(\sum_{n\in\mathcal{N}_{i}^{\text{in}}}C_{in}^{1}(t)x_{n}(t)-\sum_{m\in\mathcal{N}_{i}^{\text{out}}}C_{mi}^{1}(t)x_{i}(t)\Big), (22)
y_{i}(k)-y_{i}(0)=\sum_{t=0}^{k-1}\Big(\sum_{n\in\mathcal{N}_{i}^{\text{in}}}C_{in}^{2}(t)y_{n}(t)-\sum_{m\in\mathcal{N}_{i}^{\text{out}}}C_{mi}^{2}(t)y_{i}(t)\Big). (23)

Further, combining the results in (21) and (22) gives

x_{i}(k)-x_{i}(0)=\sum_{t=K+1}^{k-1}\Big(\sum_{n\in\mathcal{N}_{i}^{\text{in}}}C_{in}^{1}(t)x_{n}(t)-\sum_{m\in\mathcal{N}_{i}^{\text{out}}}C_{mi}^{1}(t)x_{i}(t)\Big)+\sum_{t=0}^{K}\sigma(t)\Big(\sum_{n\in\mathcal{N}_{i}^{\text{in}}}C_{in}^{1}(t)x_{n}(t)-\sum_{m\in\mathcal{N}_{i}^{\text{out}}}C_{mi}^{1}(t)x_{i}(t)\Big). (24)

Note that each agent j\in\mathcal{H} has access to \mathcal{I}_{h}. If \mathcal{N}_{i}^{\text{out}}\cup\mathcal{N}_{i}^{\text{in}}\subset\mathcal{H} holds for a legitimate agent i, all the information involved on the right-hand sides of (23) and (24) is accessible to the honest-but-curious agents. Then, using y_{i}(0)=1 and (23), agent j can capture y_{i}(k) for all k. Further, as C_{ij}^{1}(k)=C_{ij}^{2}(k) for k\geq K+1, x_{i}(k) can be inferred correctly by agent j using

x_{i}(k)=\frac{C_{ji}^{1}(k)x_{i}(k)}{C_{ji}^{2}(k)y_{i}(k)}y_{i}(k).

Making use of (24), the desired initial value x_{i}(0)=x_{i}^{0} is revealed. ∎

5.2 Performance Against Eavesdropping Attacks

For the eavesdropping attacks, we make the following assumption.

Assumption 3.

Consider a strongly connected network \mathcal{G} where an external eavesdropper exists. We assume that the parameter \sigma(0) is not accessible to the eavesdropper.

Theorem 3.

Under Assumptions 1 and 3, the initial values \{x_{i}^{0}\}_{i\in\mathcal{V}} of all agents can be preserved.

Proof.

From the definition of the privacy metric in Section 2.4, the privacy of all agents is safely protected insofar as \mathbf{D}(\mathcal{S}_{0})=\infty. The information available to the external eavesdropper is given as

\mathcal{I}_{e}=\{C_{ij}^{1}(k)x_{j}(k),C_{ij}^{2}(k)y_{j}(k)|\forall i,j\in\mathcal{V},i\neq j,k\geq 0\}.

The dynamics in (8) can be reformulated as

\mathbf{x}(k+1)=\mathbf{x}(k)+\sigma(k)\mathbf{R}\Delta\mathbf{x}(k),\,\,k\leq K, (25)

where \mathbf{R} denotes the incidence matrix associated with network \mathcal{G}, and \Delta\mathbf{x}(k) is a stacked vector whose i-th element is C_{mn}^{1}(k)x_{n}(k), with (m,n) being the i-th edge in \mathcal{E}. Note that the external eavesdropper is only able to infer \{x_{i}(0)\}_{i\in\mathcal{V}} using \mathcal{I}_{e}. To prove \mathbf{D}(\mathcal{S}_{0})=\infty, it is required to show that any initial value \mathbf{\tilde{x}}(0)\triangleq\mathbf{x}(0)+\Delta\sigma(0)\mathbf{R}\Delta\mathbf{x}(0)\in\mathcal{S}_{0} leaves the information \tilde{\mathcal{I}}_{e} accessed by the external eavesdropper unchanged, i.e., \tilde{\mathcal{I}}_{e}=\mathcal{I}_{e}, where \Delta\sigma(0) is any value in \mathbb{R}. Hence, we only need to prove that \tilde{\mathcal{I}}_{e}=\mathcal{I}_{e} holds under the two different initial states \mathbf{\tilde{x}}(0) and \mathbf{x}(0). Specifically, one derives \tilde{\mathcal{I}}_{e}=\mathcal{I}_{e} if the weights are set as

\tilde{C}_{mn}^{1}(0)=C_{mn}^{1}(0)x_{n}^{0}/\tilde{x}_{n}^{0},\,\,m,n\in\mathcal{V},m\neq n, (26a)
\tilde{C}_{nn}^{1}(0)\in\mathbb{R},\,\,n\in\mathcal{V}, (26b)
\tilde{\sigma}(0)=\sigma(0)-\Delta\sigma(0), (26c)
\tilde{C}_{mn}^{1}(k)=C_{mn}^{1}(k),\,\,m,n\in\mathcal{V},k\geq 1, (26d)
\tilde{C}_{mn}^{2}(k)=C_{mn}^{2}(k),\,\,k\geq 0, (26e)
\tilde{\sigma}(k)=\sigma(k),\,\,k\geq 1. (26f)

Further, owing to the fact that the rank of \mathbf{R} is N-1 and the nullity of \mathbf{R} is |\mathcal{E}|-N+1, while \Delta\mathbf{x}(0) can be any vector in \mathbb{R}^{|\mathcal{E}|}, the probability of \Delta\mathbf{x}(0) landing in the null space of \mathbf{R} is zero. Thus, for any n\in\mathcal{V}, it holds

[\mathbf{R}\Delta\mathbf{x}(0)]_{n}=\sum_{m\in\mathcal{N}_{n}^{\text{in}}}C_{nm}^{1}(0)x_{m}(0)-\sum_{m\in\mathcal{N}_{n}^{\text{out}}}C_{mn}^{1}(0)x_{n}(0)\neq 0.

Naturally, \tilde{x}_{n}(0)-x_{n}(0)=[\Delta\sigma(0)\mathbf{R}\Delta\mathbf{x}(0)]_{n} can be any value in \mathbb{R}. Therefore,

\mathbf{D}(\mathcal{S}_{0})=\sup_{\mathbf{x}(0),\mathbf{\tilde{x}}(0)\in\mathcal{S}_{0}}\lVert\mathbf{x}(0)-\mathbf{\tilde{x}}(0)\rVert=\sup_{\Delta\sigma(0)\in\mathbb{R}}\lVert\Delta\sigma(0)\mathbf{R}\Delta\mathbf{x}(0)\rVert=\infty.

That is to say, all initial values \{x_{i}(0)\}_{i\in\mathcal{V}} are preserved against the external eavesdropper. ∎

Corollary 2.

If the update rule for k\leq K in (8) is substituted with (7), Algorithm 2 cannot preserve the initial value of each agent i against eavesdropping attacks.

Proof.

Recursively computing the update of the x-variable in (7) for k\leq K gives

x_{i}(K+1)-x_{i}(0)=\sum_{t=0}^{K}\Big(\sum_{n\in\mathcal{N}_{i}^{\text{in}}}C_{in}^{1}(t)x_{n}(t)-\sum_{m\in\mathcal{N}_{i}^{\text{out}}}C_{mi}^{1}(t)x_{i}(t)\Big). (27)

Note that (22) and (23) still hold in this setting. Combining (27) with (22), we have

x_{i}(k)-x_{i}(0)=\sum_{t=0}^{k-1}\Big(\sum_{n\in\mathcal{N}_{i}^{\text{in}}}C_{in}^{1}(t)x_{n}(t)-\sum_{m\in\mathcal{N}_{i}^{\text{out}}}C_{mi}^{1}(t)x_{i}(t)\Big). (28)

Since the external eavesdropper can capture all transmitted information, all terms on the right-hand sides of (23) and (28) are accessible to it. Then, using y_{i}(0)=1 and (23), the eavesdropper can recover y_{i}(k) for all k. Further, since C_{ij}^{1}(k)=C_{ij}^{2}(k) for k\geq K+1, x_{i}(k) can be inferred correctly by the eavesdropper using

x_{i}(k)=\frac{C_{ji}^{1}(k)x_{i}(k)}{C_{ji}^{2}(k)y_{i}(k)}y_{i}(k).

Making use of (28), the desired initial value x_{i}(0)=x_{i}^{0} is inferred. ∎

Remark 6.

According to the discussions above, it is evident that the first K-step perturbations are crucial for preserving privacy against honest-but-curious attacks, while the time-varying parameter \sigma(t) is pivotal in protecting privacy from eavesdropping attacks. Note that we only require that \sigma(0) be agnostic to the eavesdropper. Although this requirement is stringent in practice, it still has developmental significance. Specifically, due to the arbitrariness of \sigma(0), we can mask only \sigma(0) by privacy-preserving techniques such as encryption and obfuscation. Since these techniques act only on \sigma(0), they do not impact the convergence of the algorithm.

Remark 7.

Theorem 1 states that the randomness embedded in the first K iterations has no impact on the consensus performance. Besides, from the privacy analysis, we can see that changing the mixing weights and the auxiliary parameter only at iteration k=0 is already enough to mask the initial values. That is, we can make Algorithm 2 protect the initial value x_{i}(0) by simply embedding randomness in \mathbf{C}_{1}(0) (i.e., setting K=1). Here, our consideration of K\geq 1 is to preserve more intermediate states x_{i}(k), but this also delays the consensus process; see Fig. 3. Therefore, if the intermediate states are not privacy-sensitive, we directly set K=1 to obtain the best convergence performance.

Discussion: The update rules of Algorithm 2 can also be extended to the case of vector states. In fact, privacy (i.e., the agent's initial vector state) is naturally protected provided that each element of the vector state is assigned an independent mixing weight. The details are summarized in Algorithm 3.

Algorithm 3 Secure average consensus algorithm in the vector-state case
1:  Input: Initial states \mathbf{x}_{i}(0)=\mathbf{x}_{i}^{0}\in\mathbb{R}^{d} and y_{i}(0)=1 for i\in\mathcal{V}; parameters K\in\mathbb{N}, \mathbf{\Lambda}(k)\in\mathbb{R}^{d\times d} for k\in\mathbb{N}, and \eta\in(0,1); communication network \mathcal{G}.
2:  Weight generation: See Table 1.
3:  for k=0,1,k=0,1,\cdots do
4:     for i=1,\cdots,N in parallel do
5:        Agent i sends the computed \mathbf{C}_{li}^{1}(k)\mathbf{x}_{i}(k) and C_{li}^{2}(k)y_{i}(k) to l\in\mathcal{N}_{i}^{\text{out}}.
6:        Agent i uses \mathbf{C}_{ij}^{1}(k)\mathbf{x}_{j}(k) and C_{ij}^{2}(k)y_{j}(k) received from j\in\mathcal{N}_{i}^{\text{in}} to update \mathbf{x}_{i} and y_{i} as follows:
\mathbf{x}_{i}(k+1)=\begin{cases}\mathbf{x}_{i}(k)+\mathbf{\Lambda}(k)\bm{\varXi}_{i}(k),&\text{if}\,\,k\leq K;\\ \sum_{j\in\mathcal{N}_{i}^{\text{in}}\cup\{i\}}\mathbf{C}_{ij}^{1}(k)\mathbf{x}_{j}(k),&\text{if}\,\,k\geq K+1.\end{cases} (29)
y_{i}(k+1)=\sum_{j\in\mathcal{N}_{i}^{\text{in}}\cup\{i\}}C_{ij}^{2}(k)y_{j}(k),\,\,k\geq 0, (30)
where \bm{\varXi}_{i}(k)\triangleq\sum_{j\in\mathcal{N}_{i}^{\text{in}}}\mathbf{C}_{ij}^{1}(k)\mathbf{x}_{j}(k)-\sum_{j\in\mathcal{N}_{i}^{\text{out}}}\mathbf{C}_{ji}^{1}(k)\mathbf{x}_{i}(k).
7:        Agent i computes \mathbf{z}_{i}(k+1)=\mathbf{x}_{i}(k+1)/y_{i}(k+1).
8:        Until a stopping criterion is satisfied, e.g., agent i stops if \lVert\mathbf{z}(k)-\mathbf{1}\otimes\mathbf{\bar{x}}^{0}\rVert<\epsilon for some predefined \epsilon>0, where \mathbf{\bar{x}}^{0}=\sum_{j=1}^{N}\mathbf{x}_{j}(0)/N.
9:     end for
10:  end for
Table 1: Parameter design
Parameter | Iteration k\leq K | Iteration k\geq K+1
\mathbf{\Lambda}(k) | \mathbf{\Lambda}(k)=\text{diag}\{\sigma_{1}(k),\cdots,\sigma_{d}(k)\}, where each \sigma_{l}(k), l=1,\cdots,d, is chosen from \mathbb{R} independently | \setminus
C_{ij}^{2}(k) | Each C_{ij}^{2}(k) is chosen from [\eta,1] for j\in\mathcal{N}_{i}^{\text{in}}\cup\{i\}, satisfying \sum_{i=1}^{N}C_{ij}^{2}(k)=1 | (same rule for all k)
\mathbf{C}_{ij}^{1}(k) | \mathbf{C}_{ij}^{1}(k)=\text{diag}\{C_{ij,1}^{1}(k),\cdots,C_{ij,d}^{1}(k)\}, where each C_{ij,l}^{1}(k), l=1,\cdots,d, is chosen from \mathbb{R} independently for i\in\mathcal{N}_{j}^{\text{out}}\cup\{j\} | \mathbf{C}_{ij}^{1}(k)=C_{ij}^{1}(k)\mathbf{I}, where C_{ij}^{1}(k) is equal to C_{ij}^{2}(k)
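A small Python sketch of Table 1's per-edge design (the helper name, the sampling ranges, and the Gaussian choice for \sigma_{l}(k) are our own assumptions; Table 1 only requires the masked-stage entries to be drawn from \mathbb{R}):

```python
import numpy as np

rng = np.random.default_rng(3)

def table1_weights(k, K, d, c2_scalar):
    """Return (Lambda(k), C_ij^1(k)) as d x d diagonal matrices.

    c2_scalar : the scalar weight C_ij^2(k), generated as in Algorithm 2.
    """
    if k <= K:
        # Masked stage: every diagonal entry is an independent real draw.
        Lam = np.diag(rng.normal(0.0, 10.0, size=d))    # sigma_1 .. sigma_d
        C1 = np.diag(rng.uniform(-100.0, 100.0, size=d))
    else:
        # Push-sum stage: Lambda is unused and C_ij^1(k) = C_ij^2(k) * I.
        Lam = None
        C1 = c2_scalar * np.eye(d)
    return Lam, C1
```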
Corollary 3.

The following statements hold:

i)

    Let \{(\mathbf{z}_{i}(k))_{i=1}^{N}\}_{k\in\mathbb{N}} be the sequence generated by Algorithm 3. Define \mathbf{\bar{x}}^{0}=\sum_{j=1}^{N}\mathbf{x}_{j}^{0}/N as the average initial state. Under Assumption 1, it holds \lVert\mathbf{z}(k)-\mathbf{1}\otimes\mathbf{\bar{x}}^{0}\rVert\leq c_{v}\rho^{k} for all k\in\mathbb{N}, where c_{v}=\sqrt{d}c.

ii)

    Let \mathcal{H} denote the set of honest-but-curious agents. Under Assumptions 1-2, the initial value \mathbf{x}_{i}(0) of agent i\notin\mathcal{H} can be preserved against \mathcal{H} during the running of Algorithm 3;

iii)

    Under Assumptions 1 and 3, the initial values 𝐱i(0)\mathbf{x}_{i}(0) of all agents ii\notin\mathcal{H} can be preserved against eavesdropping attacks during the running of Algorithm 3.

Proof.

The proof of i) follows a path similar to that of Theorem 1; the only difference lies in the use of the Kronecker product, so it is omitted. By the setup of Table 1, each element of the vector state carries an independent coupling weight. According to the analysis of Theorems 2-3, each scalar element of the vector state can thus be preserved against both honest-but-curious and eavesdropping attacks, and hence the whole vector state can be preserved as well. ∎

6 Experimental Validation

We construct simulations to confirm the consensus and privacy performances of our methods. The two directed networks used are shown in Fig. 2.

[Figure 2 here]
Figure 2: Communication networks. (a) A simple directed network $\mathcal{G}_{1}$ with $5$ agents; (b) a large-scale directed network $\mathcal{G}_{2}$ consisting of $1000$ agents.

6.1 Consensus Performance

We pick the network $\mathcal{G}_{1}$ and set $\eta=0.01$. For Algorithm 2, at iterations $k\leq K$, the mixing weights $C_{ji}^{1}(k)$ for $j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\}$ are selected from $(-100,100)$. The initial states $x_{1}^{0},\cdots,x_{5}^{0}$ take the values $10,15,20,25,30$, respectively, so that $\bar{x}^{0}=20$. The parameter $\sigma(k)$ is generated from $\mathcal{N}(0,10)$ for all $k\leq K$. For Algorithm 3, at iterations $k\leq K$, the parameters $\sigma_{l}(k)$ are generated from $\mathcal{N}(0,10)$ for $l=1,\cdots,d$ with $d=3$, and the mixing weights $C_{ij,l}^{1}(k)$ are chosen from $(-100,100)$ for $l=1,\cdots,d$ and $j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\}$. Each component of the initial values $\mathbf{x}_{i}^{0}\in\mathbb{R}^{d}$, $i=1,\cdots,5$, is generated from a Gaussian distribution with mean $0$, $20$, or $40$, respectively. Fig. 3 plots the evolutionary trajectories of the state variables under $K=2$ and shows the evolution of $e(k)=\lVert\mathbf{z}(k)-\bar{x}^{0}\mathbf{1}\rVert$ for $K=1,2,3,4$. One observes that: i) each estimate $z_{i}(k)$ converges to the average value $\bar{x}^{0}$ at a linear rate; and ii) a larger $K$ delays convergence and thus yields worse consensus accuracy at a given iteration.
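For reproducibility, a minimal sketch of this scalar experiment is given below, reusing generate_weights from above. The adjacency of $\mathcal{G}_{1}$ and the iteration horizon are our assumptions (Fig. 2 specifies the topology only pictorially), and $\mathcal{N}(0,10)$ is read as variance $10$.

import numpy as np

def run_algorithm2(x0, out_neighbors, K=2, eta=0.01, iters=60, seed=0):
    rng = np.random.default_rng(seed)
    N = len(x0)
    x = np.array(x0, dtype=float)
    y = np.ones(N)
    xbar = x.mean()
    errors = []
    for k in range(iters):
        # Assemble the weight matrices column by column (one column per sender).
        C1 = np.zeros((N, N))
        C2 = np.zeros((N, N))
        for j in range(N):
            c1, c2 = generate_weights(out_neighbors, j, k, K, eta, rng)
            for i, w in c1.items():
                C1[i, j] = w
            for i, w in c2.items():
                C2[i, j] = w
        if k <= K:
            sigma = rng.normal(0.0, np.sqrt(10.0))  # Lambda(k), assumed N(0, 10)
            # Xi_i(k): masked inflow minus masked outflow (self terms excluded),
            # so the network-wide sum of x is preserved during the masking phase.
            inflow = C1 @ x - np.diag(C1) * x
            outflow = (C1.sum(axis=0) - np.diag(C1)) * x
            x = x + sigma * (inflow - outflow)
        else:
            x = C1 @ x  # standard push-sum step with column-stochastic weights
        y = C2 @ y
        errors.append(np.linalg.norm(x / y - xbar))
    return errors

# Assumed 5-agent strongly connected digraph standing in for G1.
G1 = {0: [1], 1: [2], 2: [3, 0], 3: [4], 4: [0]}
print(run_algorithm2([10, 15, 20, 25, 30], G1)[-1])  # e(k) after 60 iterations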

[Figure 3 here]
Figure 3: Consensus performance. (a)-(b) The trajectories of the states $\{z_{i}(k)\}$ and the evolution of $e(k)$ for Algorithm 2 (scalar case); (c)-(d) the trajectories of the states $\{\mathbf{z}_{i}(k)\}$ and the evolution of $e(k)$ for Algorithm 3 (vector case).

6.2 Comparison with Other Works

We compare our algorithms with three data-obfuscation based methods: the differential-privacy algorithm [17], the decaying-noise algorithm [21], and the finite-noise-sequence algorithm [22]. Here, we set $K=2$, and the mixing matrix $W$ is generated using the rule in [17]: the element $W_{ij}$ is set to $1/(|\mathcal{N}_{j}^{\text{out}}|+1)$ if $i\in\mathcal{N}_{j}^{\text{out}}\cup\{j\}$, and $W_{ij}=0$ otherwise. Because these algorithms are designed for the undirected and balanced networks adopted in [17, 21, 22], which are less general than unbalanced directed ones, they cannot achieve average consensus here, as reported in Fig. 4.
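The quoted weight rule is straightforward to reproduce; a sketch follows, with out_neighbors the same adjacency structure used above. On an unbalanced digraph the columns of $W$ sum to one but the rows generally do not, which is the root cause of the bias reported in Fig. 4.

import numpy as np

def build_W(out_neighbors, N):
    # W[i, j] = 1/(|N_j^out| + 1) for i in N_j^out or i == j; otherwise 0.
    W = np.zeros((N, N))
    for j in range(N):
        w = 1.0 / (len(out_neighbors[j]) + 1)
        for i in list(out_neighbors[j]) + [j]:
            W[i, j] = w
    return W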

[Figure 4 here]
Figure 4: Performance of the other works. (a)-(c) The trajectories of all states $\{x_{i}(k)\}$ under [17], [21], and [22], in order.

6.3 Effect of Network Degrees and Scalability

Since the proposed algorithms run on unbalanced directed networks, we use different networks to explore the effect of node degrees on the consensus rate. We simulate $8$ communication networks with $10$ agents each. Specifically, a directed ring connecting all agents is built first. Then, in the first $4$ networks, each agent arbitrarily selects $D_{i}^{\text{out}}-1=1,2,3,4$ additional out-neighbors in order; in the last $4$ networks, each agent arbitrarily selects $D_{i}^{\text{in}}-1=1,2,3,4$ additional in-neighbors in order. In this experiment, the initial values $x_{1}^{0},\cdots,x_{10}^{0}$ are taken sequentially from the interval $[10,55]$ at intervals of $5$. We set $K=1$ and $\eta=0.01$. For iterations $k\leq K$, the mixing weights $C_{ji}^{1}(k)$ for $j\in\mathcal{N}_{i}^{\text{out}}\cup\{i\}$ are selected from $(-5,5)$, and the parameter $\sigma(k)$ is generated from $\mathcal{N}(0,5)$. Moreover, we employ the network $\mathcal{G}_{2}$ to demonstrate the scalability of the proposed algorithms. Each initial value $x_{i}^{0}$ or $\mathbf{x}_{i}^{0}$ is generated i.i.d. from $\mathcal{N}(0,1)$. The parameters $\eta$, $K$, and $d$ take the values $0.05$, $3$, and $10$, respectively. The mixing weights and the parameter $\sigma(k)$ or $\mathbf{\Lambda}(k)$ are generated in the same way as in Section 6.1. As shown in Fig. 5, we observe that: i) as the out-degree or in-degree increases, Algorithm 2 achieves a faster consensus rate, plausibly because higher degrees mean more frequent communication between agents and thus more information available for state updates; and ii) the proposed algorithms still ensure that all agents converge linearly to the correct average value even on a large-scale network. A sketch of the digraph construction used here is given below.
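How the extra neighbors are drawn is not fully specified in the text, so uniform sampling without replacement is our assumption in the following sketch.

import numpy as np

def ring_plus_out_neighbors(N, extra, seed=0):
    # Directed ring 0 -> 1 -> ... -> N-1 -> 0, so D_i^out = 1 initially;
    # adding `extra` more out-neighbors gives D_i^out - 1 = extra.
    rng = np.random.default_rng(seed)
    out = {i: {(i + 1) % N} for i in range(N)}
    for i in range(N):
        candidates = [j for j in range(N) if j != i and j not in out[i]]
        for j in rng.choice(candidates, size=extra, replace=False):
            out[i].add(int(j))
    return {i: sorted(out[i]) for i in range(N)}

# Example: the four out-degree test networks with 10 agents.
nets = [ring_plus_out_neighbors(10, e, seed=e) for e in (1, 2, 3, 4)]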

[Figure 5 here]
Figure 5: Performance of the proposed algorithms over different networks. (a) The effect of out-degrees on the consensus rate; (b) the effect of in-degrees on the consensus rate; (c) the evolution of $e(k)$ over the large-scale network $\mathcal{G}_{2}$.

6.4 Privacy Performance

We evaluate the privacy-preserving performance of Algorithms 2 and 3. Under the network $\mathcal{G}_{1}$, we consider the setting in which the initial value of the legitimate agent $1$ is subject to joint inference by the honest-but-curious agents $4$ and $5$, while agent $2$ is also legitimate. In the scalar-state case, we set $x_{1}^{0}=40$, and $x_{2}^{0},\cdots,x_{N}^{0}$ are generated from a Gaussian distribution with zero mean and variance $50$; in the vector-state case, the initial value is $\mathbf{x}_{1}^{0}=[50,50]^{\top}$, and $\mathbf{x}_{2}^{0},\cdots,\mathbf{x}_{N}^{0}$ are generated i.i.d. from $\mathcal{N}(0,50)$. Moreover, we set $K=2$ and the maximal iteration $M=200$.

To infer $x_{1}^{0}$, agents $\mathcal{H}=\{4,5\}$ construct the linear equations below from their available information $\mathcal{I}_{h}=\{\mathcal{I}_{4},\mathcal{I}_{5}\}$:

$x_{1}(k+1)-x_{1}(k)+\sigma(k)C_{21}^{1}(k)x_{1}(k)=\sigma(k)\Delta x(k),\quad 0\leq k\leq K,$ (31a)
$x_{1}(k+1)-x_{1}(k)+C_{21}^{1}(k)x_{1}(k)=\Delta x(k),\quad K+1\leq k\leq M,$ (31b)
$y_{1}(k+1)-y_{1}(k)+C_{21}^{2}(k)y_{1}(k)=\Delta y(k),\quad 0\leq k\leq M,$ (31c)

where

$\Delta x(k)=\sum\nolimits_{m\in\{4,5\}}C_{1m}^{1}(k)x_{m}(k)-\sum\nolimits_{n\in\{4,5\}}C_{n1}^{1}(k)x_{1}(k),$
$\Delta y(k)=\sum\nolimits_{m\in\{4,5\}}C_{1m}^{2}(k)y_{m}(k)-\sum\nolimits_{n\in\{4,5\}}C_{n1}^{2}(k)y_{1}(k).$

Furthermore, agents $\mathcal{H}$ can also construct, for $k=K+1,K+2,\cdots,M$,

$x_{1}(k)-z_{1}(k)y_{1}(k)=0,$ (31d)

where $z_{1}(k)$ can be derived from

$z_{1}(k)=\frac{C_{41}^{1}(k)x_{1}(k)}{C_{41}^{2}(k)y_{1}(k)},$

since $C_{41}^{1}(k)=C_{41}^{2}(k)$ for $k\geq K+1$.

The number of linear equations is $3M-K+2$, while the number of unknowns to $\mathcal{H}$ is $4M+5$, namely $x_{1}(0),\cdots,x_{1}(M+1)$, $C_{21}^{1}(0)x_{1}(0),\cdots,C_{21}^{1}(M)x_{1}(M)$, $y_{1}(1),\cdots,y_{1}(M+1)$, and $C_{21}^{2}(0)y_{1}(0),\cdots,C_{21}^{2}(M)y_{1}(M)$. Consequently, the system admits infinitely many solutions, since the number of equations is less than the number of unknowns. The analysis of the vector-state case is similar and is not elaborated here. Since $x_{1}^{0}$ cannot be uniquely determined, we let $\mathcal{H}$ adopt the least-squares solution to infer it. In this experiment, the agents in $\mathcal{H}$ estimate $x_{1}^{0}$ or $\mathbf{x}_{1}^{0}$ $1000$ times. Figs. 6a-6b show the estimated results: the agents in $\mathcal{H}$ fail to obtain a usable estimate of $x_{1}^{0}$ or $\mathbf{x}_{1}^{0}$.
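The failure mode can be illustrated without reproducing the full system (31): the count argument alone shows the system is underdetermined, and the minimum-norm least-squares solution of a consistent underdetermined system is generically far from the truth. The toy system below (random $A$, not the actual coefficient structure of (31)) is meant only to illustrate this point.

import numpy as np

M, K = 200, 2
n_eqs, n_unknowns = 3 * M - K + 2, 4 * M + 5
assert n_eqs < n_unknowns  # underdetermined: infinitely many solutions

rng = np.random.default_rng(1)
A = rng.standard_normal((n_eqs, n_unknowns))
u_true = rng.standard_normal(n_unknowns)
b = A @ u_true                                # consistent by construction
u_hat = np.linalg.lstsq(A, b, rcond=None)[0]  # minimum-norm solution
print(np.linalg.norm(u_hat - u_true))         # typically far from zero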

Next, we consider the case of eavesdropping attacks. The parameter settings follow the above experiment. To infer the value $x_{1}^{0}$, the external eavesdropper constructs the linear equations below from its available information $\mathcal{I}_{e}$:

$x_{1}(k+1)-x_{1}(k)=\sigma(k)\Delta\hat{x}(k),\quad 0\leq k\leq K,$ (32a)
$x_{1}(k+1)-x_{1}(k)=\Delta\hat{x}(k),\quad K+1\leq k\leq M,$ (32b)
$y_{1}(k+1)-y_{1}(k)=\Delta\hat{y}(k),\quad 0\leq k\leq M,$ (32c)

where

$\Delta\hat{x}(k)=\sum\nolimits_{m\in\{4,5\}}C_{1m}^{1}(k)x_{m}(k)-\sum\nolimits_{n\in\{2,4,5\}}C_{n1}^{1}(k)x_{1}(k),$
$\Delta\hat{y}(k)=\sum\nolimits_{m\in\{4,5\}}C_{1m}^{2}(k)y_{m}(k)-\sum\nolimits_{n\in\{2,4,5\}}C_{n1}^{2}(k)y_{1}(k).$

Further, the external eavesdropper can deduce from (32) that

$x_{1}(K+1)-x_{1}(0)=\sum\nolimits_{t=0}^{K}\sigma(t)\Delta\hat{x}(t),$ (33a)
$x_{1}(k+1)-x_{1}(K+1)=\sum\nolimits_{t=K+1}^{k}\Delta\hat{x}(t),\quad K+1\leq k\leq M,$ (33b)
$y_{1}(k+1)-y_{1}(0)=\sum\nolimits_{t=0}^{k}\Delta\hat{y}(t),\quad 0\leq k\leq M.$ (33c)

Obviously, all terms on the right-hand side of (33) are accessible to the external eavesdropper. Consequently, using $y_{1}(0)=1$, the eavesdropper can recover all $y_{1}(k)$, $k\in\mathbb{N}$. Moreover, the external eavesdropper can capture $C_{21}^{1}(k)x_{1}(k)$ and $C_{21}^{2}(k)y_{1}(k)$ for $k=K+1,\cdots,M$. Then, $x_{1}(k)$ for $k=K+1,\cdots,M$ can be derived via

$x_{1}(k)=\frac{C_{21}^{1}(k)x_{1}(k)}{C_{21}^{2}(k)y_{1}(k)}y_{1}(k).$

This implies that all information in (32b) and (32c) is captured by the external eavesdropper, which differs considerably from the honest-but-curious case. Hence, only (32a) retains unknowns, namely $\sigma(k)$, $k=0,\cdots,K$, and $x_{1}(0)$. The vector-state case leads to the same conclusion along the same analysis path, so it is not repeated. In this experiment, we again use the least-squares solution to estimate $x_{1}^{0}$; the external eavesdropper estimates $x_{1}^{0}$ or $\mathbf{x}_{1}^{0}$ $1000$ times. Figs. 6c-6d show the estimated results: the external eavesdropper cannot obtain a usable estimate of $x_{1}^{0}$ or $\mathbf{x}_{1}^{0}$.

[Figure 6 here]
Figure 6: Privacy performance. (a) Estimation results of $x_{1}^{0}$ by $\mathcal{H}$; (b) estimation results of $\mathbf{x}_{1}^{0}$ by $\mathcal{H}$; (c) estimation results of $x_{1}^{0}$ by the external eavesdropper; (d) estimation results of $\mathbf{x}_{1}^{0}$ by the external eavesdropper.

7 Conclusion

We proposed a dynamics-based privacy-preserving push-sum algorithm over unbalanced digraphs. We theoretically established its linear convergence rate and proved that it preserves the privacy of agents against both honest-but-curious and eavesdropping attacks. Numerical experiments further confirmed the theoretical findings. Future research will consider defending against eavesdropping attacks under weaker assumptions, as well as preserving privacy even after the masking horizon $K$ is removed.

CRediT authorship contribution statement

Huqiang Cheng: Methodology, Formal analysis, Writing - original draft. Mengying Xie: Methodology, Writing - review & editing. Xiaowei Yang: Supervision, Writing - review & editing. Qingguo Lü: Formal analysis, Writing - review & editing. Huaqing Li: Writing - review & editing, Funding acquisition.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (62302068 and 61932006).

References

  • [1] H.-K. Bae, H.-O. Kim, W.-Y. Shin, S.-W. Kim, “How to get consensus with neighbors?”: Rating standardization for accurate collaborative filtering, Knowl.-Based Syst. 234 (2021) 107549.
  • [2] Y. Yang, W. Tan, T. Li, D. Ruan, Consensus clustering based on constrained self-organizing map and improved Cop-Kmeans ensemble in intelligent decision support systems, Knowl.-Based Syst. 32 (2012) 101–115.
  • [3] Z. Zhang, Y. Gao, Z. Li, Consensus reaching for social network group decision making by considering leadership and bounded confidence, Knowl.-Based Syst. 204 (2020) 106240.
  • [4] F. C. Souza, S. R. B. Dos Santos, A. M. de Oliveira, S. N. Givigi, Influence of network topology on UAVs formation control based on distributed consensus, in: Proceedings of the IEEE International Systems Conference, 2022, pp. 1–8.
  • [5] P. Wu, H. Huang, H. Lu, Z. Liu, Stabilized distributed online mirror descent for multi-agent optimization, Knowl.-Based Syst. 304 (2024) 112582.
  • [6] D. Kempe, A. Dobra, J. Gehrke, Gossip-based computation of aggregate information, in: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 2003, pp. 482–491.
  • [7] L. Xia, Q. Li, R. Song, Y. Feng, Dynamic asynchronous edge-based event-triggered consensus of multi-agent systems, Knowl.-Based Syst. 272 (2023) 110531.
  • [8] F. Bénézit, V. Blondel, P. Thiran, J. Tsitsiklis, M. Vetterli, Weighted gossip: Distributed averaging using non-doubly stochastic matrices, in: Proceedings of 2010 IEEE International Symposium on Information Theory, 2010, pp. 1753–1757.
  • [9] C. N. Hadjicostis, A. D. Domínguez-García, T. Charalambous, Distributed averaging and balancing in network systems: With applications to coordination and control, Found. Trends Syst. Control 5 (2–3) (2018) 99–292.
  • [10] Z. Liu, Y. Li, G. Lan, Z. Chen, A novel data-driven model-free synchronization protocol for discrete-time multi-agent systems via TD3 based algorithm, Knowl.-Based Syst. 287 (2024) 111430.
  • [11] W. Guo, H. Wang, W.-G. Zhang, Z. Gong, Y. Xu, R. Slowiński, Multi-dimensional multi-round minimum cost consensus models with iterative mechanisms involving reward and punishment measures, Knowl.-Based Syst. 293 (2024) 111710.
  • [12] J. N. Tsitsiklis, Problems in decentralized decision making and computation, Ph.D. thesis, MIT, 1984.
  • [13] Z. Zhang, M. Y. Chow, Incremental cost consensus algorithm in a smart grid environment, in: Proceedings of the IEEE Power and Energy Society General Meeting, 2011, pp. 1–6.
  • [14] C. Dwork, F. McSherry, K. Nissim, A. Smith, Calibrating noise to sensitivity in private data analysis, in: Proceedings of the 3rd Theory Cryptography Conference, 2006, pp. 265–284.
  • [15] Z. Huang, S. Mitra, N. Vaidya, Differentially private distributed optimization, in: International Conference on Distributed Computing and Networking, 2015, pp. 1–10.
  • [16] E. Nozari, P. Tallapragada, J. Cortés, Differentially private average consensus: Obstructions, trade-offs, and optimal algorithm design, Automatica, 81 (2017) 221–231.
  • [17] Z. Huang, S. Mitra, G. Dullerud, Differentially private iterative synchronous consensus, in: Proceedings of the 2012 ACM Workshop on Privacy in the Electronic Society, 2012, pp. 81–90.
  • [18] L. Gao, S. Deng, W. Ren, C. Hu, Differentially private consensus with quantized communication, IEEE Trans. Cybern. 51 (8) (2021) 4075–4088.
  • [19] D. Ye, T. Zhu, W. Zhou, S. Y. Philip, Differentially private malicious agent avoidance in multiagent advising learning, IEEE Trans. Cybern. 50 (10) (2020) 4214–4227.
  • [20] M. Kefayati, M. S. Talebi, B. H. Khalaj, H. R. Rabiee, Secure consensus averaging in sensor networks using random offsets, in: IEEE International Conference on Telecommunications and Malaysia International Conference on Communications, 2007, pp. 556–560.
  • [21] Y. Mo, R. M. Murray, Privacy preserving average consensus, IEEE Trans. Autom. Control 62 (2) (2017) 753–765.
  • [22] N. E. Manitara, C. N. Hadjicostis, Privacy-preserving asymptotic average consensus, in: European Control Conference (ECC), 2013, pp. 760–765.
  • [23] S. Pequito, S. Kar, S. Sundaram, A. P. Aguiar, Design of communication networks for distributed computation with privacy guarantees, in: Proceedings of the 53rd IEEE Conference on Decision and Control, 2014, pp. 1370–1376.
  • [24] I. D. Ridgley, R. A. Freeman, K. M. Lynch, Simple, private, and accurate distributed averaging, in: Proceedings of IEEE 57th Annual Allerton Conference on Communication, Control, and Computing, 2019, pp. 446–452.
  • [25] A. Alaeddini, K. Morgansen, M. Mesbahi, Adaptive communication networks with privacy guarantees, in: Proceedings of American Control Conference, 2017, pp. 4460–4465.
  • [26] M. Kishida, Encrypted average consensus with quantized control law, in: Proceedings of IEEE Conference on Decision and Control, 2018, pp. 5850–5856.
  • [27] C. N. Hadjicostis, A. D. Dominguez-Garcia, Privacy-preserving distributed averaging via homomorphically encrypted ratio consensus, IEEE Trans. Autom. Control, 65 (9) (2020) 3887–3894.
  • [28] W. Fang, M. Zamani, Z. Chen, Secure and privacy preserving consensus for second-order systems based on paillier encryption, Systems & Control Letters, 148 (2021) 104869.
  • [29] M. Ruan, H. Gao, Y. Wang, Secure and privacy-preserving consensus, IEEE Trans. Autom. Control 64 (10) (2019) 4035–4049.
  • [30] Y. Wang, Privacy-preserving average consensus via state decomposition, IEEE Trans. Autom. Control, 64 (11) (2019) 4711–4716.
  • [31] X. Chen, L. Huang, K. Ding, S. Dey, L. Shi, Privacy-preserving push-sum average consensus via state decomposition, IEEE Trans. Autom. Control 68 (12) (2023) 7974–7981.
  • [32] H. Cheng, X. Liao, H. Li, Q. Lü, Dynamic-based privacy preservation for distributed economic dispatch of microgrids, IEEE Trans. Control Netw. Syst. (2024) http://dx.doi.org/10.1109/TCNS.2024.3431730.
  • [33] H. Gao, C. Zhang, M. Ahmad, Y. Wang, Privacy-preserving average consensus on directed graphs using push-sum, in: IEEE Conference on Communications and Network Security (CNS), 2018, pp. 1–9.
  • [34] H. Gao, Y. Wang, Algorithm-level confidentiality for average consensus on time-varying directed graphs, IEEE Trans. Netw. Sci. Eng. 9 (2) (2022) 918–931.
  • [35] Y. Liu, H. Gao, J. Du, Y. Zhi, Dynamics based privacy preservation for average consensus on directed graphs, in: Proceedings of the 41st Chinese Control Conference, 2022, pp. 4955–4961.
  • [36] Y. Wei, J. Xie, W. Gao, H. Li, L. Wang, A fully decentralized distributed learning algorithm for latency communication networks, Knowl.-Based Syst. (2024) https://doi.org/10.1016/j.knosys.2024.112829.
  • [37] S. Gade, N. H. Vaidya, Private optimization on networks, In: 2018 Annual American Control Conference (ACC), 2018, pp. 1402–1409.
  • [38] H. Cheng, X. Liao, H. Li, Q. Lü, Y. Zhao, Privacy-preserving push-pull method for decentralized optimization via state decomposition, IEEE Trans. Signal Inf. Proc. Netw. 10 (2024) 513–526.
  • [39] D. Han, K. Liu, H. Sandberg, S. Chai, Y. Xia, Privacy-preserving dual averaging with arbitrary initial conditions for distributed optimization, IEEE Trans. Autom. Control 67 (6) (2022) 3172–3179.
  • [40] H. Gao, Y. Wang, A. Nedić, Dynamics based privacy preservation in decentralized optimization, Automatica, 151 (2023) 110878.
  • [41] Y. Lin, K. Liu, D. Han, and Y. Xia, Statistical privacy-preserving online distributed nash equilibrium tracking in aggregative games, IEEE Trans. Autom. Control 69 (2024) 323–330.
  • [42] H. Cheng, X. Liao, H. Li, Distributed online private learning of convex nondecomposable objectives, IEEE Trans. Netw. Sci. Eng. 11 (2) (2024) 1716–1728.
  • [43] K. Liu, H. Kargupta, J. Ryan, Random projection-based multiplicative data perturbation for privacy preserving distributed data mining, IEEE Trans Knowl. Data Eng. 18 (1) (2005) 92–106.
  • [44] S. Han, W. K. Ng, L. Wan, V. C. Lee, Privacy-preserving gradient-descent methods, IEEE Trans Knowl. Data Eng. 22 (6) (2009) 884–899.
  • [45] N. Cao, C. Wang, M. Li, K. Ren, W. Lou, Privacy-preserving multi-keyword ranked search over encrypted cloud data, IEEE Trans. Parallel Distrib. Syst. 25 (1) (2013) 222–233.
  • [46] E. Seneta, Non-negative Matrices and Markov Chains, Springer, 1973.
  • [47] J. A. Fill, Eigenvalue bounds on convergence to stationarity for nonreversible Markov chains with an application to the exclusion process, Ann. Appl. Probab. 1 (1) (1991) 62–87.
  • [48] R. A. Horn, C. R. Johnson, Matrix Analysis, Cambridge University Press, 2012.
  • [49] Y. Lu, M. Zhu, On privacy preserving data release of linear dynamic networks, Automatica 115 (2020) 108839.
  • [50] A. Machanavajjhala, D. Kifer, J. Gehrke, M. Venkitasubramaniam, $\ell$-diversity: Privacy beyond $k$-anonymity, ACM Trans. Knowl. Discovery Data 1 (1) (2007) 3–es.
  • [51] A. Nedić, A. Ozdaglar, P. A. Parrilo, Constrained consensus and optimization in multi-agent networks, IEEE Trans. Autom. Control 55 (4) (2010) 922–938.
  • [52] P. Paillier, Public-key cryptosystems based on composite degree residuosity classes, in: International conference on the theory and applications of cryptographic techniques, 1999, pp. 223–238.

Appendix A Proof of Theorem 1

Proof.

We divide the convergence analysis into two cases.

Case I: We consider the case of $k\geq K+2$, where $\mathbf{C}_{1}(k)=\mathbf{C}_{2}(k)$. Recalling (12) and (13), we have, for $l\geq 1$,

$\mathbf{x}(K+l+1)=\mathbf{\Phi}_{1}(K+l:K+1)\mathbf{x}(K+1),$ (34)
$\mathbf{y}(K+l+1)=\mathbf{\Phi}_{2}(K+l:K+1)\mathbf{y}(K+1).$ (35)

By [51, Corollary 2], there exists a sequence of stochastic vectors $\{\bm{\varphi}(k)\}_{k\in\mathbb{N}}$ such that, for any $i,j\in\mathcal{V}$,

$|[\mathbf{\Phi}_{1}(k:K+1)]_{ij}-\varphi_{i}(k)|\leq c_{0}\rho^{k-K-1},$

where $c_{0}=2(1+\rho^{-N+1})/(1-\rho^{N-1})$ and $\rho=(1-\eta^{N-1})^{\frac{1}{N-1}}$. Moreover, $\varphi_{i}(k)\geq\eta^{N}/N$. Thus, it follows that, for $l\geq 1$,

$|[\mathbf{M}(K+l:K+1)]_{ij}|\leq c_{0}\rho^{l-1},$ (36)

where $\mathbf{M}(K+l:K+1)\triangleq\mathbf{\Phi}_{1}(K+l:K+1)-\bm{\varphi}(K+l)\mathbf{1}^{\top}$. Since $\mathbf{C}_{1}(k)=\mathbf{C}_{2}(k)$, it holds that $\mathbf{\Phi}_{1}(K+l:K+1)=\mathbf{\Phi}_{2}(K+l:K+1)$ for $l\geq 1$. So (34) and (35) can be rewritten as

$\mathbf{x}(K+l+1)=\mathbf{M}(K+l:K+1)\mathbf{x}(K+1)+\bm{\varphi}(K+l)\mathbf{1}^{\top}\mathbf{x}(K+1),$ (37)
$\mathbf{y}(K+l+1)=\mathbf{M}(K+l:K+1)\mathbf{y}(K+1)+N\bm{\varphi}(K+l).$ (38)

It follows from [51, Corollary 2] that $y_{i}(k+1)=[\mathbf{M}(k:0)\mathbf{1}]_{i}+N\varphi_{i}(k)\geq\eta^{N}$ for any $k\in\mathbb{N}$. Using the relation (16), one arrives at

$\bar{x}^{0}=\frac{\sum\nolimits_{j=1}^{N}x_{j}(0)}{N}=\frac{\mathbf{1}^{\top}\mathbf{x}(0)}{N}=\frac{\mathbf{1}^{\top}\mathbf{x}(K+1)}{N}.$ (39)

Combining (37) and (38) with (39) yields

$\frac{x_{i}(K+l+1)}{y_{i}(K+l+1)}-\bar{x}^{0} = \frac{x_{i}(K+l+1)}{y_{i}(K+l+1)}-\frac{\mathbf{1}^{\top}\mathbf{x}(K+1)}{N}$
$= \frac{[\mathbf{M}(K+l:K+1)\mathbf{x}(K+1)]_{i}+\varphi_{i}(K+l)\mathbf{1}^{\top}\mathbf{x}(K+1)}{y_{i}(K+l+1)}-\frac{Q(K;i)}{Ny_{i}(K+l+1)}$
$= \frac{[\mathbf{M}(K+l:K+1)\mathbf{x}(K+1)]_{i}}{y_{i}(K+l+1)}-\frac{\mathbf{1}^{\top}\mathbf{x}(K+1)[\mathbf{M}(K+l:K+1)\mathbf{y}(K+1)]_{i}}{Ny_{i}(K+l+1)},$

where

$Q(K;i)\triangleq\mathbf{1}^{\top}\mathbf{x}(K+1)[\mathbf{M}(K+l:K+1)\mathbf{y}(K+1)]_{i}+N\varphi_{i}(K+l)\mathbf{1}^{\top}\mathbf{x}(K+1).$

Then, we can bound $|z_{i}(K+l+1)-\bar{x}^{0}|$ as

$|z_{i}(K+l+1)-\bar{x}^{0}| \leq \frac{|[\mathbf{M}(K+l:K+1)\mathbf{x}(K+1)]_{i}|}{y_{i}(K+l+1)}+\frac{|\mathbf{1}^{\top}\mathbf{x}(K+1)[\mathbf{M}(K+l:K+1)\mathbf{y}(K+1)]_{i}|}{Ny_{i}(K+l+1)}$
$\leq \frac{1}{\eta^{N}}\Big(\max_{j}|[\mathbf{M}(K+l:K+1)]_{ij}|\Big)\lVert\mathbf{x}(K+1)\rVert_{1}+\frac{1}{N\eta^{N}}|\mathbf{1}^{\top}\mathbf{x}(K+1)|\Big(\max_{j}|[\mathbf{M}(K+l:K+1)]_{ij}|\Big)\lVert\mathbf{y}(K+1)\rVert_{1}$
$\leq \frac{2}{\eta^{N}}\Big(\max_{j}|[\mathbf{M}(K+l:K+1)]_{ij}|\Big)\lVert\mathbf{x}(K+1)\rVert_{1},$

where the second inequality uses the relation $y_{i}(K+l+1)\geq\eta^{N}$, and the last inequality is based on $\lVert\mathbf{y}(K+1)\rVert_{1}=\sum\nolimits_{i=1}^{N}|y_{i}(K+1)|=\mathbf{1}^{\top}\mathbf{y}(K+1)=N$ and $|\mathbf{1}^{\top}\mathbf{x}(K+1)|\leq\lVert\mathbf{x}(K+1)\rVert_{1}$. Further taking (36) into account, one derives that

$|z_{i}(K+l+1)-\bar{x}^{0}|\leq 2\eta^{-N}c_{0}\lVert\mathbf{x}(K+1)\rVert_{1}\rho^{l-1}.$

Thus, we arrive at

$\lVert\mathbf{z}(K+l+1)-\bar{x}^{0}\mathbf{1}\rVert\leq c_{1}\rho^{K+l+1},$ (40)

where $c_{1}=2\sqrt{N}c_{0}\lVert\mathbf{x}(K+1)\rVert_{1}\eta^{-N}\rho^{-K-2}$. Consequently, for $k\geq K+2$, we have $\lVert\mathbf{z}(k)-\bar{x}^{0}\mathbf{1}\rVert\leq c_{1}\rho^{k}$.

Case II: We consider the case of $k\leq K+1$. Using $y_{i}(k+1)=[\mathbf{M}(k:0)\mathbf{1}]_{i}+N\varphi_{i}(k)\geq\eta^{N}$, one has

$\frac{x_{i}(k)}{y_{i}(k)}-\bar{x}^{k}=\frac{x_{i}(k)}{y_{i}(k)}-\frac{\mathbf{1}^{\top}\mathbf{x}(k)}{N} = \frac{x_{i}(k)}{y_{i}(k)}-\frac{\mathbf{1}^{\top}\mathbf{x}(k)([\mathbf{M}(k-1:0)\mathbf{1}]_{i}+N\varphi_{i}(k-1))}{Ny_{i}(k)}.$

Then, we compute $|z_{i}(k)-\bar{x}^{k}|$ as

$|z_{i}(k)-\bar{x}^{k}| \leq \frac{|x_{i}(k)|}{y_{i}(k)}+\frac{|\mathbf{1}^{\top}\mathbf{x}(k)[\mathbf{M}(k-1:0)\mathbf{1}]_{i}|}{Ny_{i}(k)}+\frac{|\mathbf{1}^{\top}\mathbf{x}(k)\varphi_{i}(k-1)|}{y_{i}(k)}$
$\leq \frac{1}{\eta^{N}}|x_{i}(k)|+\frac{1}{N\eta^{N}}|\mathbf{1}^{\top}\mathbf{x}(k)|\Big(\max_{j}|[\mathbf{M}(k-1:0)]_{ij}|\Big)+\frac{1}{\eta^{N}}|\mathbf{1}^{\top}\mathbf{x}(k)|\Big(\max_{i}\varphi_{i}(k-1)\Big)$
$\leq \frac{1}{\eta^{N}}\lVert\mathbf{x}(k)\rVert_{1}+\frac{1}{N\eta^{N}}\lVert\mathbf{x}(k)\rVert_{1}c_{0}\rho^{k-1}+\Big(\frac{1}{\eta^{N}}-\frac{N-1}{N}\Big)\lVert\mathbf{x}(k)\rVert_{1},$

where the last inequality uses the relation $\varphi_{i}(k-1)\geq\frac{\eta^{N}}{N}$ for all $i\in\mathcal{V}$ and $k\geq 1$. Specifically, as $\bm{\varphi}(k)$ is a stochastic vector, $\sum\nolimits_{i=1}^{N}\varphi_{i}(k)=1$ holds, which in turn gives $\max_{i\in\mathcal{V}}\varphi_{i}(k-1)\leq 1-(N-1)\eta^{N}/N$. Thus, it yields that

$\lVert\mathbf{z}(k)-\bar{x}^{k}\mathbf{1}\rVert \leq \sqrt{N}\eta^{-N}\lVert\mathbf{x}(k)\rVert_{1}+N^{-1/2}\eta^{-N}c_{0}\rho^{k-1}\lVert\mathbf{x}(k)\rVert_{1}+\big(\sqrt{N}\eta^{-N}-(N-1)N^{-1/2}\big)\lVert\mathbf{x}(k)\rVert_{1}$
$\leq c_{2}\lVert\mathbf{x}(k)\rVert_{1}+c_{3}\lVert\mathbf{x}(k)\rVert_{1}\rho^{k},$

where $c_{2}=2\sqrt{N}\eta^{-N}-(N-1)/\sqrt{N}$ and $c_{3}=N^{-1/2}\eta^{-N}c_{0}\rho^{-1}$.

Combining Cases I and II, and noting that $\bar{x}^{k}=\bar{x}^{0}$ for $k\leq K+1$ by (16), we define

$c\triangleq\max\big\{c_{1},\,(c_{2}+c_{3})\lVert\mathbf{x}(0)\rVert_{1},\,(c_{2}\rho^{-1}+c_{3})\lVert\mathbf{x}(1)\rVert_{1},\,\cdots,\,(c_{2}\rho^{-K-1}+c_{3})\lVert\mathbf{x}(K+1)\rVert_{1}\big\},$ (41)

and derive, for all $k\in\mathbb{N}$,

$\lVert\mathbf{z}(k)-\bar{x}^{0}\mathbf{1}\rVert\leq c\rho^{k},$

which is the desired result. ∎
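As a side note on the constants above, a quick numerical evaluation (a sketch, taking the experimental values $N=5$ and $\eta=0.01$ from Section 6.1 as an assumption) shows how conservative the worst-case rate $\rho$ is; the convergence observed in Section 6 is far faster than this bound suggests.

N, eta = 5, 0.01
rho = (1 - eta ** (N - 1)) ** (1 / (N - 1))              # worst-case rate from [51]
c0 = 2 * (1 + rho ** (-(N - 1))) / (1 - rho ** (N - 1))  # companion constant
print(rho, c0)  # rho is about 1 - 2.5e-9: a very loose worst-case bound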