
A privacy-preserving distributed credible evidence fusion algorithm for collective decision-making

Chaoxiong Ma chaoxiongma@mail.nwpu.edu.cn    Yan Liang liangyan@nwpu.edu.cn    Xinyu Yang yang17691231609@163.com    Han Wu wuhan@mail.nwpu.edu.cn    Huixia Zhang zhanghuixia@mail.nwpu.edu.cn

The theory of evidence reasoning has been applied to collective decision-making in recent years. However, existing distributed evidence fusion methods lead to participants’ preference leakage and fusion failures, as they directly exchange raw evidence and do not assess evidence credibility as centralized credible evidence fusion (CCEF) does. To address this, a privacy-preserving distributed credible evidence fusion method with three-level consensus (PCEF) is proposed in this paper. In evidence difference measure (EDM) neighbor consensus, an evidence-free equivalent expression of the EDM between neighboring agents is derived from the shared dot product protocol for pignistic probabilities and the identity test on the two events with maximal subjective probabilities, so that evidence privacy is guaranteed by the irreversibility of this transformation. In EDM network consensus, non-neighbored EDMs are inferred and neighbored EDMs are made uniform through the interaction of linear average consensus (LAC) and rank-adaptive low-rank matrix completion, which guarantees convergence of the EDM consensus while leaving the numerical iterations without a solution for recovering raw evidence. In fusion network consensus, a privacy-preserving LAC with a self-cancelling differential privacy term is proposed, in which each agent adds its own randomness to the shared content and cancels this randomness step by step over the consensus iterations. Besides, a sufficient condition for convergence to the CCEF result is derived, and it is proven that raw evidence cannot be inferred from such iterative consensus. Simulations show that PCEF is close to CCEF in both credibility and fusion results, and achieves higher decision accuracy with less time consumption than existing methods.

Keywords: Credibility calculation, Distributed fusion, Evidence reasoning, Network consensus, Privacy preservation, Collective decision-making.

1 Introduction

In recent years, the development of sensors and ad hoc network techniques has promoted research on distributed inference [19, 10]. As an important branch of distributed inference, collective decision-making [27] asks decentralized agents to collaboratively infer categorical variables of the global environment based on their local observations [9, 16]. For example, based on multi-view 3D data collected by sensors mounted on vehicles or infrastructure, road target categories are detected in a fusion center and then delivered to vehicles to cope with the problem of occluded vehicle vision [5]. A vehicular cooperative perception approach including communication network repair and a vehicle-to-vehicle (V2V) attention module is designed in [28] to reduce the negative impact of lossy communication on the perception task. In these distributed collaborative systems, low-accuracy sensors are widely used to reduce hardware costs, which leads to large measurement uncertainties and makes collective decision-making difficult.

As a common mathematical tool for representing and dealing with such uncertainty, the theory of evidence reasoning (ER) [33] has been widely applied to decision-level information fusion, such as cyber-attack detection [8, 49], risk analysis [6, 38], multi-criteria decision making [45, 32, 23], social learning [31, 35], clustering and classification [24, 60, 14, 17], and so on. It therefore naturally draws the attention of researchers who focus on distributed evidence fusion. In [29], the belief function is used to estimate the confidence level of the output information from collaborative perception participants. In [7], distributed agents transform local explorations into pieces of evidence to perceive the dominant color of a closed square environment, fusing evidence with eight evidence fusion rules driven by a designed three-step positive-feedback iterative modulation mechanism. In [62], ER is used to recognize vehicle types and construct dynamic maps to extend the perception of road vehicles. All these works are direct applications of traditional evidence fusion methods to distributed systems.

In fact, the landmark work on distributed evidence fusion is provided by [26], in which the basic fusion rule of ER, i.e., Dempster’s rule (DR), is naturally extended to distributed systems for the first time. This work directly fuses the evidence of all agents based on the linear average consensus (LAC) algorithm and the commonality function representation of evidence. It is proven to be equivalent to centralized DR in terms of fusion results and adapts well to both synchronous and asynchronous LAC mechanisms. However, the work leaves two important issues of distributed evidence fusion unaddressed. On the one hand, DR often produces counterintuitive results when fusing highly conflicting evidence [44, 1] and fails to cope gracefully with the cyclic propagation of evidence in networks. On the other hand, the direct sharing of raw evidence leads to participants’ preference leakage.

To the best of our knowledge, there are two types of available solutions to the counterintuitive issue. The first type designs alternative reasoning rules to reassign conflicting terms under various optimization criteria [46, 12, 57]. In [13], DR and the cautious rule (CR) are mixed to fuse neighborhood and non-neighborhood evidence, respectively. This strategy is verified to be self-stabilizing and solves the data incest triggered by cyclic propagation, but the idempotence of CR also means the fusion result does not change when multiple sources hold the same evidence. Although [18] designed three combinatorial layouts containing DR and CR to solve the data incest problem, the introduced alternative rules still carry inherent flaws. On the one hand, alternative fusion rules often lose commutativity and associativity [53], which are favorable for evidence fusion in a distributed manner. On the other hand, they are often invalid in dealing with the high conflicts resulting from sensor failures.

Therefore, another type of solution, so-called credible evidence fusion (CEF), is more favored by researchers in centralized evidence fusion. The CEF retains DR while pre-processing evidence to reduce conflicts according to evidence credibility [39, 58], thus being an extension of Bayesian theory and consistent with human reasoning. Available distributed evidence fusion assigns credibility to information sources based on a priori external factors. For example, in road icing monitoring scenarios, the credibility of evidence is considered to be positively correlated with its propagation distance [18]. In critical infrastructure operational status monitoring, the quality of a source’s historical evidence and the importance of network links are adopted as references for judging source credibility [40]. In [62], a fixed discount factor is imposed on the evidence to avoid high conflict leading to non-convergence of fusion. Although this subjective, a priori-information-dependent credibility partially solves the high-conflict problem, it suffers from poor timeliness, poor access to a priori information, and poor adaptability to real-time data, which leads to potentially low fusion accuracy. Recently, a class of distributed evidence fusion strategies based on anomaly detection can be regarded as a binarized evidence credibility assessment process. Based on the random sample consensus (RANSAC) algorithm, distributed evidence consensus algorithms for DR and CR are proposed in [11]. Also based on the idea of outlier detection, [61] employs the connectivity-based outlier factor (COF) method to remove disturbed evidence. These outlier-detection-based methods require tuning algorithm parameters to balance false positive and false negative rates: a high false positive rate detaches normal information sources and hurts fusion accuracy, while a high false negative rate treats interfering evidence as normal, leading to fusion failure.
Therefore, it is still imperative to investigate methods for credibility calculation in distributed systems. Traditional centralized CEF (CCEF) assesses credibility based on the evidence difference measure matrix (EDMM), which is built from pairwise comparisons between pieces of evidence and has better adaptability [58, 36, 54]. However, how to construct the EDMM and assess evidence credibility in a distributed system remains unresolved.

The privacy protection of raw data is also an important topic in collective decision-making [56, 4]. Although evidence only expresses the occurrence probabilities of events over the publicly available framework of discernment (FoD), it still indirectly reflects certain preferences in an agent’s data, such as the business status of enterprises in economic surveys, the content of voters’ ballots in social elections, and the details of a combat unit’s missions in military confrontations. Therefore, it is necessary to protect the privacy of agent evidence. Most existing distributed evidence fusion solutions require that evidence from neighbors be available, which definitely conflicts with privacy preservation. In other words, distributed CEF with privacy preservation remains an open and interesting problem.

In light of the aforementioned analysis, this paper aims to develop a privacy-preserving distributed credible evidence fusion method for collective decision-making with the premise of protecting agents’ raw evidence data. The main contributions include:

  1)

    The privacy-preserving distributed credible evidence fusion (DCEF) problem is formulated in response to the need to handle information uncertainty in collective decision-making: agents are required to fuse their evidence in a distributed manner while preventing their raw evidence from being known or inferred by others. The distributed fusion result is expected to converge to that of CCEF, which has shown good engineering adaptability.

  2)

    A fully privacy-preserving distributed credible evidence fusion algorithm (PCEF) is designed. It has a three-level consensus: the evidence difference measure (EDM) neighbor consensus, the EDM network consensus, and the fusion network consensus.

  3)

    Considering that the unavailability of raw evidence from other agents makes EDMM construction difficult, a privacy-preserving distributed construction strategy for the EDMM, covering EDM neighbor consensus and EDM network consensus, is given. The EDM neighbor consensus allows neighbors to calculate the EDM between their pieces of raw evidence without disclosure, while the EDM network consensus delivers the locally constructed EDMMs to every agent and recovers the EDMs between non-neighboring agents’ pieces of raw evidence using a rank-adaptive low-rank matrix completion technique.

  4)

    A distributed fusion strategy based on differential privacy and credibility compensation is developed to prevent agents’ raw evidence from being inferred during the fusion network consensus, and its capability for privacy preservation is discussed.

This paper is organized as follows: Section 2 analyzes the challenges of DCEF with privacy-preserving; Section 3 presents the details of the PCEF; Section 4 simulates and verifies the effectiveness of PCEF; and the work is finally summarized.

2 Problem formulation

Refer to caption
Figure 1: The requirements of DCEF with privacy preservation of raw evidence.

As shown in Fig.1, $N$ distributed agents constitute a distributed system that is modeled as a strongly connected undirected graph ${\cal G}=({\cal V},\varepsilon)$, where ${\cal V}=\{1,2,\cdots,N\}$ and $\varepsilon\subset{\cal V}\times{\cal V}$ are the sets of agents and edges, respectively. Two agents connected by an edge $e=(i,j)\in\varepsilon$ are called neighbors. The set of neighbors of Agent $i$ is expressed as ${\cal N}_{i}=\{j\,|\,(i,j)\in\varepsilon\}$. Let ${\cal A}=[a_{ij}]\in\mathbb{R}^{N\times N}$ and ${\cal A}^{i}=[a^{i}_{ij}]\in\mathbb{R}^{N\times N}$ be the adjacency matrix of ${\cal G}$ and the local adjacency matrix of Agent $i$, respectively, with $a_{ij}=a^{i}_{ij}=1$ for $(i,j)\in\varepsilon$ and $a_{ij}=a^{i}_{ij}=0$ otherwise.

The $N$ agents perform a collective decision-making task in which all complete and mutually exclusive potential decision results form a finite set, the FoD [30], written as $\Omega=\{\hat{A}_{1},\hat{A}_{2},\cdots,\hat{A}_{n}\}$. Here, $\hat{A}_{i}$ is a potential decision result. During the decision-making process, the agents’ information is uniformly characterized in the FoD space to cope with the heterogeneity and uncertainty of the source data. In the beginning, each of the $N$ agents holds a piece of evidence, also called a basic belief assignment (BBA) or mass function, defined on the FoD. Note that Agent $i$’s raw evidence $\bm{m}_{i}$ is a mapping $m_{i}:2^{\Omega}\to[0,1]$ such that:

$$\sum\nolimits_{A\subseteq\Omega}m_{i}(A)=1 \quad (1)$$

and, in general, $m_{i}(\emptyset)=0$.
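As a concrete illustration, a BBA and the normalization of Eq.(1) can be modeled in a few lines; the dictionary representation, the helper `is_valid_bba`, and the three-class FoD are illustrative assumptions, not from the paper:

```python
# A BBA over a small FoD, modeled as a dictionary from frozensets to masses.
def is_valid_bba(m, fod):
    """Check Eq.(1) plus the usual convention m(emptyset) = 0."""
    return (abs(sum(m.values()) - 1.0) < 1e-9
            and m.get(frozenset(), 0.0) == 0.0
            and all(set(A) <= set(fod) for A in m))

fod = ("A1", "A2", "A3")
m1 = {frozenset({"A1"}): 0.6, frozenset({"A1", "A2"}): 0.3, frozenset(fod): 0.1}
print(is_valid_bba(m1, fod))  # True
```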

Refer to caption
Figure 2: Overview of CCEF.

The communication constraints of agents and the task goals for the collective decision-making task are shown in Fig.1. Agents are required to collaboratively fuse all pieces of evidence. During this process, data sharing is limited to authenticated neighboring agents to prevent exposure of agent preferences. In other words, the raw evidence of an agent is neither shared with nor inferred by any other agent. As there is no fusion center, agents have to exchange and update data iteratively to reach consensus. The term “consensus” encompasses three aspects: first, the basis for credibility calculation, i.e., the EDMM, is consistent; second, all agents obtain the same fusion result; and third, the fusion result accurately converges to that of CCEF. As shown in Fig.2, the CCEF procedure is first outlined below:

  1)

    EDM calculation: Compute the EDMs among the $N$ pieces of evidence. Here, a commonly used EDM, $DismP$ [36], is shown:

    $$d_{ij}=DismP(\bm{m}_{i},\bm{m}_{j})\triangleq\frac{DistP(\bm{m}_{i},\bm{m}_{j})+ConfP(\bm{m}_{i},\bm{m}_{j})}{1+DistP(\bm{m}_{i},\bm{m}_{j})\,ConfP(\bm{m}_{i},\bm{m}_{j})} \quad (2)$$

    where $d_{ij}$ is the EDM between $\bm{m}_{i}$ and $\bm{m}_{j}$, and

    $$DistP(\bm{m}_{i},\bm{m}_{j})=\Big(\tfrac{1}{2}\sum\nolimits_{\hat{A}_{k}\in\Omega,|\hat{A}_{k}|=1}\big|BetP_{i}(\hat{A}_{k})-BetP_{j}(\hat{A}_{k})\big|^{\eta}\Big)^{\frac{1}{\eta}} \quad (3)$$

    $$ConfP(\bm{m}_{i},\bm{m}_{j})=\begin{cases}0, & \text{if } X_{\max}^{\bm{m}_{i}}\cap X_{\max}^{\bm{m}_{j}}\neq\emptyset\\ BetP_{i}(X_{\max}^{\bm{m}_{i}})\,BetP_{j}(X_{\max}^{\bm{m}_{j}}), & \text{otherwise}\end{cases} \quad (4)$$

    $$\text{s.t.}\quad X_{\max}^{\bm{m}_{l}}=\arg\max\nolimits_{\hat{A}_{k}\in\Omega}BetP_{l}(\hat{A}_{k}),\quad l=i,j$$

    where $\eta=2$; $DistP$ and $ConfP$ are the probability-based distance and the conflict coefficient, respectively. Both are calculated from the pignistic probability $\bm{BetP}_{i}=[BetP_{i}(\hat{A}_{1}),BetP_{i}(\hat{A}_{2}),\cdots,BetP_{i}(\hat{A}_{n})]^{T}$:

    $$BetP_{i}(\hat{A}_{k})=\sum\nolimits_{B\in 2^{\Omega},\hat{A}_{k}\subseteq B}\frac{1}{|B|}m_{i}(B) \quad (5)$$

    where $|B|$ is the cardinality of $B$.

  2)

    Credibility calculation: Compute the credibility $Cred_{i}$ of $\bm{m}_{i}$:

    $$Cred_{i}=\frac{\min\limits_{1\leq j\leq N}\sum\nolimits_{k=1,k\neq j}^{N}d_{jk}}{\sum\nolimits_{k=1,k\neq i}^{N}d_{ik}} \quad (6)$$
  3)

    Evidence pre-processing: Furthermore, the raw evidence is discounted in a credibility-dependent style to reduce conflict [59]:

    $$\begin{cases}m^{\prime}_{i}(A)=Cred_{i}\cdot m_{i}(A), & \text{if } A\neq\Omega\\ m^{\prime}_{i}(\Omega)=1-\sum\limits_{A\in 2^{\Omega},A\neq\Omega}m^{\prime}_{i}(A), & \text{otherwise}\end{cases} \quad (7)$$
  4)

    Evidence fusion: Fuse all pre-processed evidence to obtain the fusion result $\bm{m}_{\oplus}$:

    $$\bm{m}_{\oplus}=\bm{m}^{\prime}_{1}\oplus\bm{m}^{\prime}_{2}\oplus\cdots\oplus\bm{m}^{\prime}_{N} \quad (8)$$

    where $\oplus$ is the DR operator. Since DR is associative, $\oplus$ is shown with $\bm{m}_{i}$ and $\bm{m}_{j}$ as examples:

    $$(m_{i}\oplus m_{j})(A)=\begin{cases}0, & \text{if } A=\emptyset\\ \dfrac{\sum_{B\cap C=A}m_{i}(B)\,m_{j}(C)}{1-\sum_{B\cap C=\emptyset}m_{i}(B)\,m_{j}(C)}, & \text{if } A\in 2^{\Omega}\backslash\{\emptyset\}\end{cases} \quad (9)$$
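The four CCEF steps above can be sketched end-to-end on a toy two-class FoD. All helper names and the three example BBAs are illustrative assumptions; BBAs are dictionaries from frozensets to masses, and $\eta=2$ as in Eq.(3):

```python
from functools import reduce
from math import sqrt

def betp(m, fod):
    """Eq.(5): each focal set spreads its mass evenly over its singletons."""
    p = {a: 0.0 for a in fod}
    for B, mass in m.items():
        for a in B:
            p[a] += mass / len(B)
    return p

def dismp(mi, mj, fod):
    """Eqs.(2)-(4) with eta = 2."""
    pi, pj = betp(mi, fod), betp(mj, fod)
    dist = sqrt(sum((pi[a] - pj[a]) ** 2 for a in fod) / 2.0)  # Eq.(3)
    ki = max(fod, key=lambda a: pi[a])                         # X_max^{m_i}
    kj = max(fod, key=lambda a: pj[a])
    conf = 0.0 if ki == kj else pi[ki] * pj[kj]                # Eq.(4)
    return (dist + conf) / (1.0 + dist * conf)                 # Eq.(2)

def credibility(D):
    """Eq.(6): the least-different agent gets credibility 1."""
    sums = [sum(row) for row in D]
    return [min(sums) / s for s in sums]

def discount(m, cred, fod):
    """Eq.(7): scale non-Omega masses, move the removed mass to Omega."""
    omega = frozenset(fod)
    md = {A: cred * v for A, v in m.items() if A != omega}
    md[omega] = 1.0 - sum(md.values())
    return md

def dempster(mi, mj):
    """Eq.(9): conjunctive combination with conflict normalization."""
    out, conflict = {}, 0.0
    for B, vb in mi.items():
        for C, vc in mj.items():
            A = B & C
            if A:
                out[A] = out.get(A, 0.0) + vb * vc
            else:
                conflict += vb * vc
    return {A: v / (1.0 - conflict) for A, v in out.items()}

fod = ("a", "b")
bbas = [{frozenset({"a"}): 0.7, frozenset(fod): 0.3},
        {frozenset({"a"}): 0.6, frozenset(fod): 0.4},
        {frozenset({"b"}): 0.9, frozenset(fod): 0.1}]  # one conflicting source

D = [[dismp(mi, mj, fod) for mj in bbas] for mi in bbas]
creds = credibility(D)
pre = [discount(m, c, fod) for m, c in zip(bbas, creds)]
fused = reduce(dempster, pre)
print(creds)   # the conflicting third source gets the lowest credibility
print(fused)   # the fused BBA favors "a"
```

The conflicting third BBA receives a markedly smaller credibility via Eq.(6), so its influence on the DR fusion is discounted rather than allowed to dominate.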

As for distributed evidence fusion, available studies reach all-agent consensus on fusion result via LAC [11, 26]:

$$\bm{x}_{i}^{\bm{\omega}}(t+1)=\bm{x}_{i}^{\bm{\omega}}(t)+\sum\nolimits_{j\in{\cal N}_{i}}c_{ij}\big(\bm{x}_{j}^{\bm{\omega}}(t)-\bm{x}_{i}^{\bm{\omega}}(t)\big) \quad (10)$$
$$\bm{x}_{i}^{\bm{\omega}}(0)=\bm{\omega}_{i}$$

where $\bm{x}_{i}^{\bm{\omega}}(t)$ represents the state of Agent $i$ at iteration step $t$, and $c_{ij}$ is the Metropolis-Hastings weight [55]:

$$c_{ij}=\begin{cases}\dfrac{1}{\max\{|{\cal N}_{i}|,|{\cal N}_{j}|\}+1}, & \text{if } j\in{\cal N}_{i} \text{ and } j\neq i\\ 1-\sum\nolimits_{j\in{\cal N}_{i}}\dfrac{1}{\max\{|{\cal N}_{i}|,|{\cal N}_{j}|\}+1}, & \text{if } j=i\\ 0, & \text{otherwise}\end{cases} \quad (11)$$

where $|{\cal N}_{i}|$ is the cardinality of ${\cal N}_{i}$. The $\bm{\omega}_{i}$ is the weight assignment of $\bm{m}_{i}$ [47]; it is another representation of the evidence. For $\forall A\in 2^{\Omega}\backslash\{\emptyset,\Omega\}$, $\omega_{i}(A)$ is calculated by:

$$\omega_{i}(A)=\sum\limits_{B\supseteq A}(-1)^{|B|-|A|}\ln Q_{i}(B) \quad (12)$$

where $Q_{i}:2^{\Omega}\to[0,1]$ is the so-called commonality function converted from the mass function:

$$Q_{i}(A)=\sum\limits_{B\supseteq A}m_{i}(B) \quad (13)$$
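Eqs.(12)-(13) can be sketched directly; the helper names are illustrative, and the sketch assumes $m(\Omega)>0$ (non-dogmatic evidence) so that every $Q(B)>0$ and the logarithms in Eq.(12) are defined. A known sanity check: a simple BBA $m=\{A:1-w,\ \Omega:w\}$ has $\omega(A)=-\ln w$:

```python
from itertools import combinations
from math import log

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def commonality(m, fod):
    """Eq.(13): Q(A) is the total mass of all supersets of A."""
    return {A: sum(v for B, v in m.items() if A <= B) for A in powerset(fod)}

def weights(m, fod):
    """Eq.(12): alternating (Moebius-style) sum of ln Q over supersets,
    defined for A outside {emptyset, Omega}; requires m(Omega) > 0."""
    Q = commonality(m, fod)
    omega = frozenset(fod)
    return {A: sum((-1) ** (len(B) - len(A)) * log(Q[B])
                   for B in powerset(fod) if A <= B)
            for A in powerset(fod) if A and A != omega}

fod = ("a", "b")
m = {frozenset({"a"}): 0.6, frozenset(fod): 0.4}
w = weights(m, fod)
print(w)  # omega({a}) = -ln(0.4) ~ 0.916, omega({b}) = 0
```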

When ${\cal G}$ is a strongly connected graph, Eq.(10) drives the states of all agents to converge to $\sum\nolimits_{i=1}^{N}\bm{\omega}_{i}/N$. Since fusing mass functions with DR is equivalent to adding their weight assignments, i.e., the weight assignment corresponding to $\bm{m}_{i}\oplus\bm{m}_{j}$ is $\bm{\omega}_{i}+\bm{\omega}_{j}$, the limit $\sum\nolimits_{i=1}^{N}\bm{\omega}_{i}/N$ is exactly $1/N$ of the weight assignment of the DR fusion of all pieces of evidence. Let $\bm{\omega}_{\oplus}\triangleq\sum\nolimits_{i=1}^{N}\bm{\omega}_{i}$ be the weight assignment of $\bm{m}_{\oplus}$; then:

$$\bm{m}_{\oplus}=A_{1}^{\bm{\omega}_{\oplus}}\oplus A_{2}^{\bm{\omega}_{\oplus}}\oplus\cdots\oplus A_{r}^{\bm{\omega}_{\oplus}}\triangleq\mathop{\oplus}\limits_{k=1}^{r}A_{k}^{\bm{\omega}_{\oplus}} \quad (14)$$

where $A_{k}^{\bm{\omega}_{\oplus}}$ is the simple mass function [11] corresponding to $\omega_{\oplus}(A_{k})$, and $r=2^{n}-2$.
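The consensus machinery of Eqs.(10)-(11) can be checked numerically. In this sketch, scalars stand in for the weight-assignment vectors $\bm{\omega}_{i}$, and the three-agent path graph is an arbitrary example:

```python
import numpy as np

def mh_weights(adj):
    """Metropolis-Hastings weight matrix C from Eq.(11), given a 0/1
    symmetric adjacency matrix with zero diagonal."""
    n = len(adj)
    deg = adj.sum(axis=1)
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j] and i != j:
                C[i, j] = 1.0 / (max(deg[i], deg[j]) + 1)
        C[i, i] = 1.0 - C[i].sum()
    return C

# Path graph 1-2-3; scalar stand-ins for the omega_i vectors.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
C = mh_weights(adj)
x = np.array([1.0, 4.0, 7.0])
for _ in range(200):
    x = C @ x        # one step of Eq.(10): x_i + sum_j c_ij (x_j - x_i)
print(x)             # every entry approaches (1 + 4 + 7) / 3 = 4
```

Because the Metropolis-Hastings matrix is doubly stochastic, each agent's state converges to the network average, i.e., $\bm{\omega}_{\oplus}/N$ when the states are weight assignments.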

According to the analysis of CCEF and existing distributed evidence fusion above, there are several challenges to realizing the collective decision-making task shown in Fig.1:

  1)

    In EDM calculation, Eq.(2) requires an agent to access a pair of raw evidence simultaneously, which is hindered by the privacy requirements, i.e., the prohibition of sharing raw evidence and of sharing information with non-neighbors.

  2)

    In credibility calculation, all EDMs among pieces of evidence are involved in Eq.(6). Therefore, it is essential to ensure that all agents possess identical EDMs.

  3)

    In evidence fusion, uniform EDMs mean that evidence credibilities are publicly accessible to all agents, and hence the pre-processed evidence and raw evidence can be derived from each other according to Eq.(7). In other words, taking the pre-processed evidence as the initial state of Eq.(10) is not privacy-preserving.

3 The PCEF algorithm

Refer to caption
Figure 3: The flowchart of PCEF

To address the three challenges of CEF in a fully privacy-preserving distributed network, a method named PCEF that includes EDM neighbor consensus, EDM network consensus, and fusion network consensus is proposed in this section, as shown in Fig.3. In EDM neighbor consensus, neighboring agents compute the EDM between their raw evidence without knowledge of each other’s raw evidence and BetPBetP. In EDM network consensus, the EDM of non-neighboring agents’ raw evidence is estimated based on the correlation between the rows of the EDMM and the already-obtained EDMs between neighbors’ raw evidence. The result of this consensus is that each agent holds a complete, all-agent-consistent EDMM, which will be used for credibility computation. In fusion network consensus, differential privacy and credibility compensation are introduced into Eq.(10) for a finite time to protect the privacy of raw evidence and lead all agents to achieve consensus on fusion.

3.1 EDM neighbor consensus

Although the computation of $DismP$ does not require the raw evidence $\bm{m}_{i}$ and $\bm{m}_{j}$ themselves, $\bm{BetP}$ still reveals the agent’s preferences. Therefore, besides the raw evidence, the privacy of $\bm{BetP}$ must also be protected during the computation of $DismP$. To this end, this section utilizes two secure multi-party computation protocols, namely the inner product computation of pignistic probabilities and the identity test on the two events with maximal subjective probabilities, to compute the EDM between neighboring agents’ raw evidence while protecting the privacy of each agent’s raw evidence and $\bm{BetP}$.

The privacy-preserving calculation of $DistP$ is presented next. According to Eq.(3), $DistP$ can be written as:

$$\begin{array}{rl}DistP(\bm{m}_{i},\bm{m}_{j})&=\frac{1}{\sqrt{2}}\Big(\sum\limits_{\hat{A}_{k}\in\Omega,|\hat{A}_{k}|=1}\big|BetP_{i}(\hat{A}_{k})-BetP_{j}(\hat{A}_{k})\big|^{2}\Big)^{\frac{1}{2}}\\ &=\frac{1}{\sqrt{2}}\big[(\bm{BetP}_{i}-\bm{BetP}_{j})^{T}(\bm{BetP}_{i}-\bm{BetP}_{j})\big]^{\frac{1}{2}}\\ &=\frac{1}{\sqrt{2}}\big(\langle\bm{BetP}_{i},\bm{BetP}_{i}\rangle+\langle\bm{BetP}_{j},\bm{BetP}_{j}\rangle-2\langle\bm{BetP}_{i},\bm{BetP}_{j}\rangle\big)^{\frac{1}{2}}\end{array} \quad (15)$$

where $\langle\cdot,\cdot\rangle$ is the dot product operation. It is simple and safe for Agent $i$ to compute $\langle\bm{BetP}_{i},\bm{BetP}_{i}\rangle$ locally and send it to neighboring agents. But $\langle\bm{BetP}_{i},\bm{BetP}_{j}\rangle$ cannot be computed this way, as it requires one agent to know both $\bm{BetP}_{i}$ and $\bm{BetP}_{j}$. Fortunately, the available privacy-preserving two-party shared dot product protocol [22, 21] allows neighboring agents to obtain $\langle\bm{BetP}_{i},\bm{BetP}_{j}\rangle$ without learning anything about each other’s $\bm{BetP}$. For more details on the implementation of the two-party shared dot product protocol and a discussion of its mechanism, see [52].
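Eq.(15) means each pair can recover $DistP$ from three scalars alone. In the sketch below, the cross inner product is computed in the clear purely to check the algebra; in PCEF it would come from the shared dot product protocol, and the helper name is illustrative:

```python
import numpy as np

def distp_from_dots(gii, gjj, gij):
    """Eq.(15): DistP recovered from the three inner products alone;
    neither party needs the other's BetP vector."""
    return np.sqrt(max(gii + gjj - 2.0 * gij, 0.0) / 2.0)

betp_i = np.array([0.7, 0.2, 0.1])
betp_j = np.array([0.2, 0.6, 0.2])
d = distp_from_dots(betp_i @ betp_i, betp_j @ betp_j, betp_i @ betp_j)
print(d)  # equals ||betp_i - betp_j|| / sqrt(2)
```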

Remark 1.

Generally, $\eta$ is recommended to be a positive integer as small as possible [36]. Eq.(3) takes $\eta=2$ to enable the privacy-preserving computation of $DistP$ while keeping the computational complexity low. When $\eta=1$, $DistP$ is too simple to admit a calculation protocol that preserves $BetP$.

The $ConfP$ is a piecewise function of the maximal subjective probabilities of $\bm{BetP}$: it is zero when $X_{\max}^{\bm{m}_{i}}=X_{\max}^{\bm{m}_{j}}$, and $BetP_{i}(X_{\max}^{\bm{m}_{i}})BetP_{j}(X_{\max}^{\bm{m}_{j}})$ otherwise. Hence, it is critical to determine whether the serial numbers $\bar{k}_{i}$ and $\bar{k}_{j}$ of $X_{\max}^{\bm{m}_{i}}$ and $X_{\max}^{\bm{m}_{j}}$ are equal. In the millionaire problem [34, 42, 20], two rich men compare their assets without the help of a third party, each preventing the opponent from learning his wealth; this solution is therefore aptly utilized to compare whether $\bar{k}_{i}$ and $\bar{k}_{j}$ are equal while revealing as little as possible about the exact values to the neighboring agents. If the numbers are equal, Agent $i$ learns $\bar{k}_{j}$ and Agent $j$ learns $\bar{k}_{i}$, without exchanging information about $BetP_{i}(X_{\max}^{\bm{m}_{i}})$ and $BetP_{j}(X_{\max}^{\bm{m}_{j}})$. If the numbers are not equal, Agent $i$ informs Agent $j$ of $BetP_{i}(X_{\max}^{\bm{m}_{i}})$ and vice versa, but $\bar{k}_{i}$ and $\bar{k}_{j}$ are not exchanged. This ensures the privacy of $\bm{BetP}$, as the serial number and the maximal subjective probability are never disclosed to the other agent simultaneously.

The following Alg.1 shows how neighboring agents calculate the neighbored EDM.

Algorithm 1 EDM neighbor consensus.
Input: Evidence $\bm{m}_{i}$ held by Agent $i$ and $\bm{m}_{j}$ held by Agent $j$.
Output: EDM $d_{ij}$.
1:  if $i=j$ then
2:     $d_{ij}\leftarrow 0$.
3:  else if $(i,j)\in\varepsilon$ then
4:     Agent $i$ converts $\bm{m}_{i}$ to $\bm{BetP}_{i}$; Agent $j$ converts $\bm{m}_{j}$ to $\bm{BetP}_{j}$.
5:     Agent $i$ calculates and sends $\langle\bm{BetP}_{i},\bm{BetP}_{i}\rangle$ to Agent $j$; Agent $j$ calculates and sends $\langle\bm{BetP}_{j},\bm{BetP}_{j}\rangle$ to Agent $i$.
6:     Agents $i$ and $j$ jointly compute $\langle\bm{BetP}_{i},\bm{BetP}_{j}\rangle$ via the privacy-preserving dot product protocol [52].
7:     Agents $i$ and $j$ compute $DistP(\bm{m}_{i},\bm{m}_{j})$ with Eq.(15).
8:     Agent $i$ gets the serial number $\bar{k}_{i}$ of $X_{\max}^{\bm{m}_{i}}$; Agent $j$ gets the serial number $\bar{k}_{j}$ of $X_{\max}^{\bm{m}_{j}}$.
9:     Agents $i$ and $j$ jointly compare $\bar{k}_{i}$ and $\bar{k}_{j}$ according to the solution of the millionaire problem [34].
10:     if $\bar{k}_{i}\neq\bar{k}_{j}$ then
11:        Agent $i$ sends $BetP_{i}(X_{\max}^{\bm{m}_{i}})$ to Agent $j$, and Agent $j$ sends $BetP_{j}(X_{\max}^{\bm{m}_{j}})$ to Agent $i$.
12:        Agents $i$ and $j$ compute $ConfP(\bm{m}_{i},\bm{m}_{j})\leftarrow BetP_{i}(X_{\max}^{\bm{m}_{i}})\,BetP_{j}(X_{\max}^{\bm{m}_{j}})$.
13:     else
14:        Agents $i$ and $j$ compute $ConfP(\bm{m}_{i},\bm{m}_{j})\leftarrow 0$.
15:     end if
16:     Agents $i$ and $j$ compute $DismP(\bm{m}_{i},\bm{m}_{j})$ according to Eq.(2).
17:  end if

3.2 EDM network consensus

The absence of information sharing between non-neighbors hinders the computation of non-neighbored EDMs. Even though many evidence sources suffer from interference and measurement noise, only a few pieces of evidence contradict the correct event completely, which is why credibility is based on the “majority-minority” principle. In other words, the differences among the majority of pieces of evidence are limited. Therefore, the matrix formed by the EDMs is often row-correlated and hence exhibits an approximately low-rank property. On the principle that simpler is more reliable, the idea of low-rank/sparse matrix processing has been widely used in image processing [41], compressed sensing [48], and other fields. In this subsection, low-rank matrix completion is used to estimate the non-neighbored EDMs.

3.2.1 EDM collection

In EDM neighbor consensus, the neighbored EDMs obtained by Agent $i$ in Alg.1 can be represented as a matrix called the local EDMM (LEDMM): $\bar{D}_{i}\triangleq P_{{\cal A}^{i}}(D)={\cal A}^{i}\odot D\in\mathbb{R}^{N\times N}$, where $\odot$ is the Hadamard product and $D\triangleq[d_{ij}]\in\mathbb{R}^{N\times N}$ is the EDMM. In $\bar{D}_{i}$, the unknown non-neighbored EDMs are set to zero, indicating an optimistic assumption that non-neighbor evidence is completely consistent. Since more known elements improve the accuracy of matrix completion, all neighbored EDMs are first backed up to each agent based on LAC:

$$D_{i}(t+1)=D_{i}(t)+\sum\nolimits_{j\in{\cal N}_{i}}c_{ij}\big(D_{j}(t)-D_{i}(t)\big) \quad (16)$$
$$D_{i}(0)=\bar{D}_{i}$$

where $D_{i}(t)$ is the EDMM with missing elements held by Agent $i$ at the $t$-th iterative step, and $c_{ij}$ is determined by Eq.(11). As $\sum\nolimits_{i\in{\cal V}}{\cal A}^{i}=2{\cal A}$ holds for the fixed-topology undirected graph, we have $\sum\nolimits_{i\in{\cal V}}\bar{D}_{i}=\sum\nolimits_{i\in{\cal V}}P_{{\cal A}^{i}}(D)=2P_{\cal A}(D)$. According to the LAC, the state $D_{i}(t)$ of Agent $i$ converges to:

$$\lim\limits_{t\to\infty}D_{i}(t)=\frac{1}{N}\sum\limits_{i=1}^{N}D_{i}(0)=\frac{2}{N}P_{\cal A}(D) \quad (17)$$

Hence, after sufficient iterations, every agent obtains $P_{\cal A}(D)$ (up to the known scaling factor $2/N$), which includes all of the neighbored EDMs.

Remark 2.

It is important to note that another efficient approach for EDM collection is the maximum consensus algorithm. Agents take $P_{{\cal A}^{i}}(D)$ as their initial state and update it by retaining, at each iteration step, the element-wise maximum over the corresponding positions of the neighboring agents’ LEDMMs, until consensus is reached. In fact, this implementation not only ensures that all agents obtain a consistent $P_{\cal A}(D)$, but also reaches consensus at a faster rate.
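A minimal sketch of this alternative, under the assumption that EDMs are nonnegative and unobserved entries are zero; the three-agent path graph and the helper name are illustrative:

```python
import numpy as np

def max_consensus(local_mats, adj, iters):
    """Element-wise max consensus: each agent keeps the entrywise maximum
    of its own state and its neighbors' states. Since EDMs are nonnegative
    and missing entries are zero, all states reach P_A(D) within
    diameter-many steps on a connected graph."""
    states = [M.copy() for M in local_mats]
    for _ in range(iters):
        states = [np.maximum.reduce([states[i]] +
                                    [states[j] for j in range(len(states))
                                     if adj[i, j]])
                  for i in range(len(states))]
    return states

# Path graph 0-1-2: agent 0 knows d01, agent 1 knows d01 and d12, agent 2 knows d12.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
d01, d12 = 0.2, 0.5
L0 = np.array([[0, d01, 0], [d01, 0, 0], [0, 0, 0.0]])
L1 = np.array([[0, d01, 0], [d01, 0, d12], [0, d12, 0.0]])
L2 = np.array([[0, 0, 0], [0, 0, d12], [0, d12, 0.0]])
states = max_consensus([L0, L1, L2], adj, iters=2)  # 2 = network diameter
```

Unlike the LAC route of Eqs.(16)-(17), no rescaling by $N/2$ is needed and convergence is exact in finitely many steps.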

3.2.2 Non-neighbored EDMs estimation

Next, based on the network-wide consistent P𝒜(D)P_{\mathcal{A}}(D), the agents independently perform low-rank matrix completion to estimate non-neighbored EDMs:

minD~N×N,rank(D~)sf(D~)=1λP𝒜(DD~)F2+diag(D~)F2\mathop{\min}\limits_{\scriptstyle\tilde{D}\in{\mathbb{R}^{N\times N}},rank({\tilde{D}})\leqslant s}f({\tilde{D}})=\frac{1}{\lambda}\left\|{{P_{\cal A}}({D-\tilde{D}})}\right\|_{F}^{2}+\left\|\text{diag}(\tilde{D})\right\|_{F}^{2} (18)

where F{{\left\|\cdot\right\|}_{\text{F}}} is the Frobenius norm; D~\tilde{D} is the low-rank approximation of DD; diag(D~)\text{diag}(\tilde{D}) is the vector consisting of the main diagonal elements of D~\tilde{D}; and λ\lambda is the regularization parameter. This objective function explores the matrix space of rank(D~)srank(\tilde{D})\leqslant s to determine D~\tilde{D}, in which P𝒜(DD~)F2\left\|{{P_{\cal A}}({D-\tilde{D}})}\right\|_{F}^{2} guarantees that D~\tilde{D} approximates DD on the observed entries, and diag(D~)F2\left\|\text{diag}(\tilde{D})\right\|_{F}^{2} drives the main diagonal elements of D~\tilde{D} toward zero, since Dismp(𝒎i,𝒎i)=0Dismp(\bm{m}_{i},\bm{m}_{i})=0.
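As a numerical sanity check, the sketch below evaluates the objective of Eq.(18) and verifies its Euclidean gradient (stated later in Eq.(22)) against central finite differences; the problem size, the mask, and λ are illustrative assumptions:

```python
import numpy as np

# The completion objective of Eq.(18) and its Euclidean gradient
# (Eq.(22)), checked entry by entry with central finite differences.
rng = np.random.default_rng(1)
N, lam = 5, 0.5
A = (rng.random((N, N)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T             # symmetric binary mask, zero diagonal
D = rng.random((N, N)); D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

def f(Dt):                                 # objective of Eq.(18)
    return (np.linalg.norm(A * (D - Dt), 'fro') ** 2 / lam
            + np.linalg.norm(np.diag(Dt)) ** 2)

def grad_f(Dt):                            # Euclidean gradient, Eq.(22)
    return 2.0 / lam * A * (Dt - D) + 2.0 * np.eye(N) * Dt

Dt = rng.random((N, N))
G = grad_f(Dt)

eps = 1e-6                                 # central finite differences
G_num = np.zeros_like(Dt)
for i in range(N):
    for j in range(N):
        E = np.zeros_like(Dt); E[i, j] = eps
        G_num[i, j] = (f(Dt + E) - f(Dt - E)) / (2 * eps)
```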

Searching directly in the matrix space of rank(D~)srank(\tilde{D})\leq s tends to yield D~\tilde{D} with inaccurate rank, which reduces the precision of credibility. Hence, the optimization of Eq.(18) is decomposed into two iterative subtasks: optimization on the fixed-rank smooth Riemannian manifold k{\mathcal{M}}_{k} of rank ksk\leq s and rank adjustment of the optimization result.

3.2.3 Fixed-rank optimization

In fixed-rank manifold optimization, the objective function becomes:

minD~f(D~)=1λP𝒜(DD~)F2+diag(D~)F2\displaystyle\underset{\tilde{D}}{\mathop{\min}}\,f({\tilde{D}})=\frac{1}{\lambda}\left\|{{P}_{\mathcal{A}}}(D-\tilde{D})\right\|_{F}^{2}+\left\|diag({\tilde{D}})\right\|_{F}^{2} (19)
s.t.D~k={D~N×N:rank(D~)=k}\displaystyle s.t.\quad\tilde{D}\in{{\mathcal{M}}_{k}=\{\tilde{D}\in{{\mathbb{R}}^{N\times N}}:rank({\tilde{D}})=k\}}

According to the Riemannian gradient descent [15], Eq.(19) can be addressed by iteratively updating D~(t)\tilde{D}(t):

D~(t+1)=D~(t)(h(t)Z(t))\tilde{D}(t+1)={{\mathcal{R}}_{\tilde{D}(t)}}(h(t)Z(t)) (20)

with:

Z(t)\displaystyle Z(t) gradkf(D~(t))\displaystyle\triangleq-{{\operatorname{grad}}_{k}}f(\tilde{D}(t)) (21)
=(UUTf(D~(t))V(t)V(t)T+U(t)U(t)Tf(D~(t))V(t)V(t)T\displaystyle=-\left(UU^{T}\nabla f(\tilde{D}(t))V(t)V(t)^{T}+{{U(t)}_{\bot}}U(t)_{\bot}^{T}\nabla f(\tilde{D}(t))V(t){{V(t)}^{T}}\right.
+U(t)U(t)Tf(D~(t))V(t)V(t)T)\displaystyle\left.+U(t){{U(t)}^{T}}\nabla f(\tilde{D}(t)){{V}(t)_{\bot}}V(t)_{\bot}^{T}\right)
f(D~(t))=2λP𝒜(D~(t)D)𝒜+2ID~(t)\nabla f(\tilde{D}(t))=\frac{2}{\lambda}P_{\mathcal{A}}(\tilde{D}(t)-D)\mathcal{A}+2I\odot\tilde{D}(t) (22)

where \mathcal{R} is the retraction operator that guarantees D~(t+1)k\tilde{D}(t+1)\in{{\mathcal{M}}_{k}} [2, 50]; Z(t)Z(t) is the negative Riemannian gradient direction [3]; U(t)N×kU(t)\in{{\mathbb{R}}^{N\times k}} and V(t)N×kV(t)\in{{\mathbb{R}}^{N\times k}} are the left and right singular matrices of D~(t)\tilde{D}(t), respectively; U(t)N×(Nk){{U}(t)_{\bot}}\in{{\mathbb{R}}^{N\times(N-k)}} and V(t)N×(Nk){{V}(t)_{\bot}}\in{{\mathbb{R}}^{N\times(N-k)}} are the basis of the orthogonal complementary spaces of U(t)U(t) and V(t)V(t), respectively; f(D~(t))\nabla f(\tilde{D}(t)) is the Euclidean gradient of ff at D~(t)\tilde{D}(t). A classical retraction based on partitioned singular value decomposition is given here:

[Qu,Ru]\displaystyle\left[Q_{u},R_{u}\right] =qr(h(t)(IU(t)U(t)T)f(D(t)~)V(t))\displaystyle=\text{qr}\left(-h(t)(I-U(t)U(t)^{T})\nabla f(\tilde{D(t)})V(t)\right) (23)
[Qv,Rv]\displaystyle[Q_{v},R_{v}] =qr(h(t)(IV(t)V(t)T)f(D(t)~)TU(t))\displaystyle=\text{qr}\left(-h(t)(I-V(t)V(t)^{T})\nabla f(\tilde{D(t)})^{T}U(t)\right) (24)
[U,Σ,V]\displaystyle\left[{{U_{\cal R}},{\Sigma_{\cal R}},{V_{\cal R}}}\right] svd([Σ(t)h(t)U(t)Tf(D~(t))V(t)RvTRu   0],k)\displaystyle\leftarrow{\rm{svd}}\left({\left[{\begin{array}[]{*{20}{c}}{\Sigma(t)-h(t){U(t)^{T}}\nabla f(\tilde{D}(t))V(t)}&{\;\;\;{\kern 1.0pt}R_{v}^{T}}\\ {{R_{u}}}&{\;\;\;{\kern 1.0pt}{\bf{0}}}\end{array}}\right],k}\right) (27)
D~(t+1)\displaystyle{\tilde{D}}(t+1) =[U(t)Qu]UΣVT[V(t)Qv]T\displaystyle=[U(t)\quad Q_{u}]U_{\mathcal{R}}\Sigma_{\mathcal{R}}V_{\mathcal{R}}^{T}[V(t)\quad Q_{v}]^{T} (28)

where qr and svd denote the QR decomposition and the singular value decomposition, respectively. UN×kU_{\cal R}\in{{\mathbb{R}}^{N\times k}}, Σk×k\Sigma_{\cal R}\in{{\mathbb{R}}^{k\times k}}, and VN×kV_{\cal R}\in{{\mathbb{R}}^{N\times k}} are results of singular value decomposition. Σ(t)\Sigma(t) is the diagonal matrix formed by the first kk singular values of D~(t)\tilde{D}(t). The optimization step length h(t)=γ(t)δζh(t)=\gamma(t){{\delta}^{\zeta}} is determined by the non-monotonic Armijo line search method with Barzilai-Borwein (BB) step [25], which seeks the smallest non-negative integer ζ\zeta that satisfies:

f(D~(t+1))=f(D~(t)(h(t)Z(t)))c(t)+βγ(t)δζgradkf(D~(t)),Z(t)f(\tilde{D}(t+1))=f({\cal R}_{\tilde{D}(t)}(h(t)Z(t)))\leqslant c(t)+\beta\gamma(t){\delta^{\zeta}}\langle{{{\operatorname{grad}}_{k}}f({\tilde{D}(t)}),Z(t)}\rangle (29)

where β,δ(0,1)\beta,\delta\in(0,1) are the fixed step length and step length discount factors, respectively; and c(t)c(t) is determined by:

c(t+1)=θq(t)c(t)+f(D~(t+1))q(t+1)c(t+1)=\frac{\theta q(t)c(t)+f\left({\tilde{D}\left(t+1\right)}\right)}{q(t+1)} (30)

with θ[0,1]\theta\in[0,1], q(0)=1q(0)=1, and q(t+1)=θq(t)+1q(t+1)=\theta q(t)+1. The base step length γ(t){\gamma}(t) in Eq.(29) is computed by clipping the BB step to [γmin,γmax][\gamma_{\min},\gamma_{\max}]:

γ(t)=max(γmin,min(γ¯(t),γmax))\gamma(t)=\max({{\gamma_{\min}},\min({\bar{\gamma}(t),{\gamma_{\max}}})}) (31)

where γ¯(t)\bar{\gamma}(t) is:

γ¯(t)={S(t),S(t)|S(t),K(t)|,if t is odd,|S(t),K(t)|K(t),K(t),if t is even.\bar{\gamma}\left(t\right)=\begin{cases}\dfrac{{\left\langle{S\left(t\right),S\left(t\right)}\right\rangle}}{{\left|{\left\langle{S\left(t\right),K\left(t\right)}\right\rangle}\right|}}&,\text{if }t\text{ is odd,}\\[10.0pt] \dfrac{{\left|{\left\langle{S\left(t\right),K\left(t\right)}\right\rangle}\right|}}{{\left\langle{K\left(t\right),K\left(t\right)}\right\rangle}}&,\text{if }t\text{ is even.}\end{cases} (32)

with S(t)=h(t1)𝒯D~(t1)D~(t)(Z(t1))S(t)=h(t-1){{\cal T}_{\tilde{D}(t-1)\to\tilde{D}(t)}}({Z(t-1)}) and K(t)=𝒯D~(t1)D~(t)(Z(t1))Z(t)K(t)={{\cal T}_{\tilde{D}(t-1)\to\tilde{D}(t)}}(Z(t-1))-Z(t). The 𝒯\cal T is the vector transport on k{\cal M}_{k}, which transports Z(t1)Z(t-1) to Z(t)Z(t). Let:

Σ𝒯\displaystyle\Sigma_{\cal T} =U(t)TU(t1)U(t1)Tf(D~(t1))V(t1)V(t1)TV(t)\displaystyle=U(t)^{T}U(t-1)U(t-1)^{T}\nabla f(\tilde{D}(t-1))V(t-1)V(t-1)^{T}V(t) (33)
U𝒯\displaystyle U_{\cal T} =(IU(t))U(t)T(IU(t1)U(t1)T)f(D~(t1))V(t1)V(t1)TV(t)\displaystyle=(I-U(t))U(t)^{T}(I-U(t-1)U(t-1)^{T})\nabla f(\tilde{D}(t-1))V(t-1)V(t-1)^{T}V(t) (34)
V𝒯\displaystyle V_{\cal T} =(IV(t))V(t)T(IV(t1)V(t1)T)f(D~(t1))TU(t1)U(t1)TU(t)\displaystyle=(I\!-\!V(t))V(t)^{T}(I-V(t-1)V(t-1)^{T})\nabla f(\tilde{D}(t\!-\!1))^{T}U(t-1)U(t-1)^{T}U(t) (35)
TON\displaystyle T_{O}^{N} =U𝒯Σ𝒯V𝒯T\displaystyle=U_{\cal T}\Sigma_{\cal T}V_{\cal T}^{T} (36)

then:

S(t)\displaystyle S(t) =h(t1)gradsf(D~(t1)),gradsf(D~(t1))TON,TONTON\displaystyle=h(t-1)\frac{\langle\text{grad}_{s}f(\tilde{D}(t-1)),\text{grad}_{s}f(\tilde{D}(t-1))\rangle}{\langle T_{O}^{N},T_{O}^{N}\rangle}T_{O}^{N} (37)
K(t)\displaystyle K(t) =gradsf(D~(t))+TON\displaystyle=\text{grad}_{s}f(\tilde{D}(t))+T_{O}^{N} (38)

On the basis of the above formulation, a fixed-rank optimization algorithm is given as follows to recover the EDMs between non-neighbored agents’ pieces of evidence:

Algorithm 2 Fixed rank optimization.
0:  To-be-completed EDMM D~(t)\tilde{D}(t), the rank of EDMM kk, the adjacence matrix 𝒜\cal A, the maximum number of searches for non-negative integers IterζIter_{\zeta}, δ\delta, γmin\gamma_{\min}, γmax\gamma_{\max}, θ\theta, c(t)c(t), γ(t)\gamma(t), q(t)q(t).
0:  EDMM D~(t+1)\tilde{D}(t+1), c(t+1)c(t+1), γ(t+1)\gamma(t+1), q(t+1)q(t+1).
1:  ζ1\zeta\leftarrow 1.
2:  if γ(t)\gamma(t) is not given as an input then
3:     γ(t)P𝒜(gradsf(D~(t))),P𝒜(D~(t)D)+diag(D~(t))P𝒜(gradsf(D~(t)))F2\gamma(t)\leftarrow\dfrac{\langle P_{\cal A}(\text{grad}_{s}f(\tilde{D}(t))),P_{\cal A}(\tilde{D}(t)-D)+\text{diag}(\tilde{D}(t))\rangle}{||P_{\cal A}(\text{grad}_{s}f(\tilde{D}(t)))||_{F}^{2}}
4:  end if
5:  while ζIterζ\zeta\leq Iter_{\zeta} do
6:     D~(t+1)D~(t)(h(t)Z(t))\tilde{D}(t+1)\leftarrow{\mathcal{R}_{\tilde{D}(t)}}(h(t)Z(t))
7:     if Eq.(29) is satisfied then
8:        break.
9:     end if
10:     h(t)γ(t)δζh(t)\leftarrow\gamma(t)\delta^{\zeta}
11:     ζζ+1\zeta\leftarrow\zeta+1
12:  end while
13:  Calculate γ¯(t+1)\bar{\gamma}(t+1) with Eq.(32) and update γ(t+1)\gamma(t+1), c(t+1)c(t+1), and q(t+1)q(t+1).
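The descent loop of Algorithm 2 can be illustrated with the simplified sketch below. Two hedges apply: the retraction here is the metric projection by truncated SVD rather than the QR-based retraction of Eqs.(23)-(28), and the step length is a fixed h ≤ 1/L (L being the gradient's Lipschitz constant) instead of the non-monotonic Armijo/BB search of Eq.(29); the data are synthetic, not the paper's EDMMs.

```python
import numpy as np

# Simplified fixed-rank descent for Eq.(19) on synthetic data.
# Assumptions of this sketch: metric-projection retraction (truncated
# SVD) instead of Eqs.(23)-(28), and a fixed step length h.
rng = np.random.default_rng(2)
N, k, lam, h = 8, 2, 1.0, 0.2              # h <= 1/L with L = 2/lam here

L_factor = rng.random((N, k))
D = L_factor @ L_factor.T                  # (nearly) rank-k ground truth
np.fill_diagonal(D, 0.0)
A = (rng.random((N, N)) < 0.7).astype(float)
A = np.triu(A, 1); A = A + A.T             # symmetric observation mask

def f(Dt):                                 # objective of Eq.(19)
    return (np.linalg.norm(A * (D - Dt), 'fro') ** 2 / lam
            + np.linalg.norm(np.diag(Dt)) ** 2)

def grad_f(Dt):                            # Euclidean gradient, Eq.(22)
    return 2.0 / lam * A * (Dt - D) + 2.0 * np.eye(N) * Dt

def retract(X):                            # projection onto rank-k matrices
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k]

Dt = retract(A * D)                        # start from the observed part
f_start = f(Dt)
for _ in range(300):
    Dt = retract(Dt - h * grad_f(Dt))      # gradient step + retraction
f_end = f(Dt)
```

With h below the inverse Lipschitz constant, each projected gradient step is non-increasing in f, so the objective decreases monotonically while every iterate stays on the fixed-rank set.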

3.2.4 Rank adjustment

To initiate the optimization of Eq.(18), an initial guess for rank(D~)rank(\tilde{D}) is required. If the guessed rank is incorrect, then Eq.(19) is not an equivalent form of Eq.(18), which increases the estimation error of the credibility. Therefore, it is essential to adjust rank(D~)rank(\tilde{D}) to ensure high precision in the estimation of non-neighbored EDMs. The following inequality judges whether rank(D~)=k<srank(\tilde{D})=k<s is too small [15]:

Nsk(D~)F>ϵGk(D~)F\|N_{s-k}(\tilde{D})\|_{\text{F}}>\epsilon\|G_{k}(\tilde{D})\|_{\text{F}} (39)

where ϵ\epsilon is a positive number, and Nsk(D~)N_{s-k}(\tilde{D}) and Gk(D~)G_{k}(\tilde{D}) are projections of f(D~)-f({\tilde{D}}) in the normal subspace (TD~k)sk(T_{\tilde{D}}\mathcal{M}_{k})_{\leq s-k}^{\bot} and the tangent space TD~kT_{\tilde{D}}\mathcal{M}_{k}, respectively. If Eq.(39) holds, the following formula is used to obtain the matrix after rank increase:

D~~=[UW][Σ00αH][VY]T\tilde{\tilde{D}}=\begin{bmatrix}U&W\end{bmatrix}\begin{bmatrix}\Sigma&0\\ 0&\alpha H\end{bmatrix}\begin{bmatrix}V&Y\end{bmatrix}^{T} (40)

with:

W=U[u¯1,,u¯l~],H=diag(σ¯1,,σ¯l~),Y=V[v¯1,,v¯l~]W={U_{\bot}}[{{{\bar{u}}_{1}},\ldots,{{\bar{u}}_{\tilde{l}}}}],H=\operatorname{diag}({{{\bar{\sigma}}_{1}},\ldots,{{\bar{\sigma}}_{\tilde{l}}}}),Y={V_{\bot}}[{{{\bar{v}}_{1}},\ldots,{{\bar{v}}_{\tilde{l}}}}] (41)
α=(𝒜+I)WHYT,(𝒜+I)(D~D)(𝒜+I)WHYTF2\alpha=-\frac{{\langle{({{\cal A}+I})\odot WH{Y^{T}},({{\cal A}+I})\odot({\tilde{D}-D})}\rangle}}{{\|{({{\cal A}+I})\odot WH{Y^{T}}}\|_{\text{F}}^{2}}} (42)

where Σdiag(σ1,,σk)\Sigma\triangleq\operatorname{diag}({{\sigma}_{1}},\ldots,{{\sigma}_{k}}) is the diagonal matrix composed of the singular values of D~k\tilde{D}\in\mathcal{M}_{k}, i.e., D~=UΣVT\tilde{D}=U\Sigma{{V}^{T}}; U¯[u¯1,,u¯r¯]\bar{U}\triangleq\left[{{{\bar{u}}_{1}},\ldots,{{\bar{u}}_{\bar{r}}}}\right], Σ¯diag(σ¯1,,σ¯r¯)\bar{\Sigma}\triangleq\operatorname{diag}\left({{{\bar{\sigma}}_{1}},\ldots,{{\bar{\sigma}}_{\bar{r}}}}\right), and V¯[v¯1,,v¯r¯]\bar{V}\triangleq\left[{{{\bar{v}}_{1}},\ldots,{{\bar{v}}_{\bar{r}}}}\right] are the SVD result of Uf(D~)V-U_{\bot}^{\top}\nabla f({\tilde{D}}){V_{\bot}} with r¯=rank(UUf(D~)VV)\bar{r}=rank(U_{\bot}U_{\bot}^{\top}\nabla f({\tilde{D}}){V_{\bot}}V_{\bot}^{\top}); l~=min(l,r¯)\tilde{l}=\min(l,\bar{r}) is the value of the rank increment, where lskl\leq s-k is the upper bound of rank increase for the current operation.

Let σ1σk>0{{\sigma}_{1}}\geq\cdots\geq{{\sigma}_{k}}>0, then the condition for determining that rank(D~)rank(\tilde{D}) is too large is:

max((σiσi+1)σi)>Δ\max\left(\frac{(\sigma_{i}-\sigma_{i+1})}{\sigma_{i}}\right)>\Delta (43)

where Δ>0\Delta>0 is the threshold for rank reduction. If Eq.(43) holds, the rank-reduced matrix is obtained via singular value truncation. Let r~=maxi{i:σiΔσ1}<k\tilde{r}={\max}_{i}\{i:\sigma_{i}\geq\Delta\sigma_{1}\}<k be the target value of rank(D~)rank(\tilde{D}). Taking the first r~\tilde{r} columns of UU and VV as Ur~{{U}_{{\tilde{r}}}} and Vr~{{V}_{{\tilde{r}}}}, and the first r~\tilde{r} rows and r~\tilde{r} columns of Σ\Sigma as Σr~{{\Sigma}_{{\tilde{r}}}}, the rank-reduced matrix is D~=Ur~Σr~Vr~Tr~\tilde{D}={{U}_{{\tilde{r}}}}{{\Sigma}_{{\tilde{r}}}}V_{{\tilde{r}}}^{T}\in{{\mathcal{M}}}_{{\tilde{r}}}.
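A small numerical sketch of the rank-reduction test Eq.(43) and the subsequent singular value truncation; the value of Δ and the spectrum are illustrative assumptions:

```python
import numpy as np

# Rank reduction per Eq.(43): a large relative gap in the singular values
# triggers truncation to the leading singular values above Delta*sigma_1.
Delta = 0.5
sigma = np.array([10.0, 9.0, 0.5, 0.4])    # spectrum with a clear gap

gaps = (sigma[:-1] - sigma[1:]) / sigma[:-1]
reduce_rank = bool(gaps.max() > Delta)      # Eq.(43) holds here

# target rank: number of singular values with sigma_i >= Delta * sigma_1
r_tilde = int(np.sum(sigma >= Delta * sigma[0]))

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.random((4, 4)))     # a random orthogonal basis
Dt = Q * sigma @ Q.T                        # matrix with the spectrum above
U, s, Vt = np.linalg.svd(Dt)
Dt_reduced = U[:, :r_tilde] * s[:r_tilde] @ Vt[:r_tilde]   # truncation
```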

Remark 3.

By iteratively performing fixed-rank optimization and rank adjustment, the agents obtain a low-rank approximation D~\tilde{D} of the EDMM, based on which the evidence credibility can be computed. In this paper, we assume that all agents use the same optimization parameters and all take P𝒜(D)P_{\mathcal{A}}(D) as the initial value of D~\tilde{D}, thus guaranteeing that the completion of the EDMM is network-wide consistent.

3.3 Fusion network consensus

The globally consistent and element-complete EDMM provided in Section 3.2 allows agents to locally calculate the credibility of any given evidence, which not only facilitates the preprocessing of their own raw evidence but also introduces a new challenge: a neighbor can recover an agent’s raw evidence from the received preprocessed evidence by applying the obvious inverse operator of Eq.(7). It is therefore undesirable to share preprocessed evidence between agents directly.

Obviously, Eq.(10) fails to meet the requirements of credibility and security for evidence fusion, as it directly shares the weight assignment of the agent’s raw evidence as the initial state with its neighbors. To address this, a privacy-preserving term is introduced into Eq.(10):

𝒙i𝝎(t+1)\displaystyle\bm{x}_{i}^{\bm{\omega}}\left({t+1}\right) =𝒙i𝝎(t)+j𝒩iNcij(𝒙j𝝎(t)𝒙i𝝎(t))+𝐮i(t+1)\displaystyle=\bm{x}_{i}^{\bm{\omega}}\left(t\right)+\sum\limits_{j\in{{\cal N}_{i}}}^{N}{{c_{ij}}\left({\bm{x}_{j}^{\bm{\omega}}\left(t\right)-\bm{x}_{i}^{\bm{\omega}}\left(t\right)}\right)}+{\bf{u}}_{i}\left({t+1}\right) (44)
𝒙i𝝎(0)\displaystyle\bm{x}_{i}^{\bm{\omega}}\left({0}\right) =𝝎i+𝐮i(0)\displaystyle={{\bm{\omega}^{\prime}}_{i}}+{\bf{u}}_{i}\left({0}\right)

where 𝝎i{\bm{\omega}^{\prime}}_{i} is the weight assignment corresponding to the preprocessed evidence 𝒎i\bm{m}^{\prime}_{i}; 𝐮i(t){\bf{u}}_{i}\left({t}\right) is the privacy-preserving term that consists of a differential privacy term [43] 𝐮iR(t){\bf{u}}_{i}^{R}\left({t}\right) and a credibility compensation term 𝐮i𝝎(t){\bf{u}}_{i}^{\bm{\omega}}\left({t}\right):

𝐮i(t)=𝐮iR(t)+𝐮i𝝎(t){\bf{u}}_{i}\left(t\right)={\bf{u}}_{i}^{R}\left(t\right)+{\bf{u}}_{i}^{\bm{\omega}}\left(t\right) (45)

In Eq.(45), 𝐮i(t){\bf{u}}_{i}\left(t\right) masks the raw evidence, but it also perturbs the iterative states. To ensure that the iteration still converges consistently, the following theorem is presented:

Theorem 1.

(Evidence fusion consensus) If 𝐮iR(t){\bf{u}}_{i}^{R}\left({t}\right) and 𝐮i𝛚(t){\bf{u}}_{i}^{\bm{\omega}}\left({t}\right) satisfy the following consensus condition:

  1. 1)

    privacy-preserving-free condition:

    1. i.

𝐮iR(t){\bf{u}}_{i}^{R}(t) is self-canceling, i.e., t=0ti𝐮iR(t)=𝟎\sum\nolimits_{t=0}^{{t_{i}}}{{\bf{u}}_{i}^{R}(t)}=\bf{0}, where titmax{t_{i}}\leqslant{t_{\max}} is generated independently and randomly by Agent ii and is not less than the convergence time of the EDMM, with tmaxt_{\max} known to all agents.

    2. ii.

𝐮iR(t){\bf{u}}_{i}^{R}(t) has a finite-time effect, i.e., t=0ti𝐮iR(t)2>0\sum\nolimits_{t=0}^{t_{i}}{{{\left\|{{\bf{u}}_{i}^{R}(t)}\right\|}_{2}}}>0 and t>ti,𝐮iR(t)=𝟎\forall t>{t_{i}},{\bf{u}}_{i}^{R}(t)={\bf{0}}.

  2. 2)

credibility compensation condition: The uij𝝎(t)u_{ij}^{\bm{\omega}}(t), the jj-th component of 𝐮i𝝎(t){\bf{u}}_{i}^{\bm{\omega}}\left({t}\right), satisfies:

    uij𝝎(t)={AkAj(1)|Ak||Aj|ln1Credi(t)(1Qi(Ak))1Credi(t1)(1Qi(Ak)),t>00,t=0\begin{array}[]{l}u_{ij}^{\bm{\omega}}(t)=\left\{\begin{array}[]{ll}\sum\limits_{{A_{k}}\supseteq{A_{j}}}{{{(-1)}^{|A_{k}|-|A_{j}|}}\ln\frac{1-Cred_{i}(t)(1-Q_{i}(A_{k}))}{1-Cred_{i}(t-1)(1-Q_{i}(A_{k}))}}&,t>0\\ 0{}&,t=0\end{array}\right.\end{array} (46)

then, the following conclusions hold:

  1. 1)

    All agents’ states converge to i=1Nωi/N\sum\nolimits_{i=1}^{N}{\omega_{i}^{*}}/N in which ωi{\omega_{i}^{*}} is the weight assignment of CrediCred^{*}_{i}-discount of 𝒎i\bm{m}_{i} with CrediCred^{*}_{i} being the credibility of 𝒎i\bm{m}_{i} given the converged EDMM.

  2. 2)

    The iterative fusion is independent of the differential privacy term.

  3. 3)

    If the iterative EDMM converges to CCEF, then Eq.(45) converges to the fusion result of CCEF Eq.(8).

Proof 1.

Here, the convergence of Eq.(44) is proven using the jj-th component of 𝒙i𝝎(t)\bm{x}_{i}^{\bm{\omega}}\left(t\right) as an example, since there is no coupling among the components of 𝒙i𝝎(t)\bm{x}_{i}^{\bm{\omega}}\left(t\right). Define 𝒙¯j𝝎(t)=[x1j𝝎(t),x2j𝝎(t),,xNj𝝎(t)]T\underline{\bm{x}}_{\cdot j}^{\bm{\omega}}(t)=[x_{1j}^{\bm{\omega}}(t),x_{2j}^{\bm{\omega}}(t),\cdots,x_{Nj}^{\bm{\omega}}(t)]^{T}, 𝐮¯j𝝎(t)=[u1j𝝎(t),u2j𝝎(t),,uNj𝝎(t)]T\underline{\bf{u}}_{\cdot j}^{\bm{\omega}}(t)=[u_{1j}^{\bm{\omega}}(t),u_{2j}^{\bm{\omega}}(t),\cdots,u_{Nj}^{\bm{\omega}}(t)]^{T}, and 𝐮¯jR(t)=[u1jR(t),u2jR(t),,uNjR(t)]T\underline{\bf{u}}_{\cdot j}^{R}(t)={{[u_{1j}^{R}(t),u_{2j}^{R}(t),\cdots,u_{Nj}^{R}(t)]}^{T}} as vectors consisting of the jj-th components of the states, credibility compensation terms, and differential privacy terms of all agents, respectively. Then, there is:

𝒙¯j𝝎(t+1)\displaystyle\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}(t+1) =[c11c12c1Nc21c22c2NcN1cN2cNN]𝒙¯j𝝎(t)+𝐮¯j𝝎(t+1)+𝐮¯jR(t+1)\displaystyle=\left[{\begin{array}[]{*{20}{c}}{c_{11}}&{c_{12}}&\cdots&{c_{1N}}\\ {c_{21}}&{c_{22}}&\cdots&{c_{2N}}\\ \vdots&\vdots&\ddots&\vdots\\ {c_{N1}}&{c_{N2}}&\cdots&{c_{NN}}\end{array}}\right]\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}(t)+\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}(t+1)+\underline{\bf{u}}^{R}_{\cdot j}(t+1) (47)
=ΔC𝒙¯j𝝎(t)+𝐮¯j𝝎(t+1)+𝐮¯jR(t+1)\displaystyle\buildrel\Delta\over{=}C\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}(t)+\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}(t+1)+\underline{\bf{u}}^{R}_{\cdot j}(t+1)

The limit of 𝒙¯j𝝎(t)\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}(t) as the number of iterations tends to infinity is:

limt𝒙¯j𝝎(t)\displaystyle\mathop{\lim}\limits_{t\to\infty}\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}(t) =limtC𝒙¯j𝝎(t1)+𝐮¯j𝝎(t)+𝐮¯jR(t)\displaystyle=\mathop{\lim}\limits_{t\to\infty}C\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}(t-1)+\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}(t)+\underline{\bf{u}}^{R}_{\cdot j}(t) (48)
=limtC2(𝒙¯j𝝎(t2)+𝐮¯j𝝎(t1)+𝐮¯jR(t1))+𝐮¯j𝝎(t)+𝐮¯jR(t)\displaystyle=\mathop{\lim}\limits_{t\to\infty}{C^{2}}\left(\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}(t-2)+\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}(t-1)+\underline{\bf{u}}^{R}_{\cdot j}(t-1)\right)+\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}(t)+\underline{\bf{u}}^{R}_{\cdot j}(t)
\displaystyle\vdots
=limt(Ct𝒙¯j𝝎(0)+¯λ=1tCt¯λ+1𝐮¯j𝝎(¯λ)+¯λ=1tCt¯λ+1𝐮¯jR(¯λ))\displaystyle=\mathop{\lim}\limits_{t\to\infty}\left({C^{t}}\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}\left(0\right)+\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=1}^{t}{{C^{t-\mathchar 22\relax\mkern-10.0mu\lambda+1}}\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)}+\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=1}^{t}{{C^{t-\mathchar 22\relax\mkern-10.0mu\lambda+1}}\underline{\bf{u}}^{R}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)}\right)

Since 𝐮¯i𝝎(t)=𝐮¯iR(t)=𝟎\underline{\bf{u}}^{\bm{\omega}}_{i}(t)=\underline{\bf{u}}^{R}_{i}(t)=\bf{0} for all t>tit>t_{i}, it follows that:

limt𝒙¯j𝝎(t)\displaystyle\mathop{\lim}\limits_{t\to\infty}\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}(t) =limt(Ct𝒙¯j𝝎(0)+¯λ=1tCt¯λ+1𝐮¯j𝝎(¯λ)+¯λ=1tCt¯λ+1𝐮¯jR(¯λ))\displaystyle=\mathop{\lim}\limits_{t\to\infty}\left({C^{t}}\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}\left(0\right)+\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=1}^{t}{{C^{t-\mathchar 22\relax\mkern-10.0mu\lambda+1}}\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)}+\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=1}^{t}{{C^{t-\mathchar 22\relax\mkern-10.0mu\lambda+1}}\underline{\bf{u}}^{R}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)}\right) (49)
=limt(Ct𝒙¯j𝝎(0)+¯λ=1tiCt¯λ+1(𝐮¯j𝝎(¯λ)+𝐮¯jR(¯λ))+¯λ=ti+1tCt¯λ+1(𝐮¯j𝝎(¯λ)+𝐮¯jR(¯λ)))\displaystyle=\mathop{\lim}\limits_{t\to\infty}\left({C^{t}}\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}\left(0\right)+\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=1}^{t_{i}}{{C^{t-\mathchar 22\relax\mkern-10.0mu\lambda+1}}\left(\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)+\underline{\bf{u}}^{R}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)\right)}+\!\!\!\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=t_{i}+1}^{t}\!\!\!{{C^{t-\mathchar 22\relax\mkern-10.0mu\lambda+1}}\!\!\left(\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)+\underline{\bf{u}}^{R}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)\right)}\right)
=limt(Ct𝝎¯j+¯λ=0tiCt¯λ+1𝐮¯j𝝎(¯λ)+¯λ=0tiCt¯λ+1𝐮¯jR(¯λ))\displaystyle=\mathop{\lim}\limits_{t\to\infty}\left({C^{t}}\underline{\bm{\omega}}^{\prime}_{\cdot j}+\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=0}^{t_{i}}{{C^{t-\mathchar 22\relax\mkern-10.0mu\lambda+1}}\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)}+\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=0}^{t_{i}}{{C^{t-\mathchar 22\relax\mkern-10.0mu\lambda+1}}\underline{\bf{u}}^{R}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)}\right)

where 𝛚¯j=[ω1(Aj),ω2(Aj),,ωN(Aj)]T{\underline{\bm{\omega}}^{\prime}_{\cdot j}=\left[{\omega}^{\prime}_{1}\left(A_{j}\right),{\omega}^{\prime}_{2}\left(A_{j}\right),\cdots,{\omega}^{\prime}_{N}\left(A_{j}\right)\right]^{T}}. According to Eqs.(12)-(13), there is:

ωi(Aj)=AkAj(1)|Ak||Aj|ln(1Credi(0)(1Qi(Ak))){\omega^{\prime}_{i}}\left({{A_{j}}}\right)=\sum\limits_{{A_{k}}\supseteq{A_{j}}}{{{\left({-1}\right)}^{\left|{{A_{k}}}\right|-\left|{{A_{j}}}\right|}}\ln\left({1-Cred_{i}\left(0\right)\left({1-{Q_{i}}\left({{A_{k}}}\right)}\right)}\right)} (50)

As CC is a doubly stochastic matrix, it follows that:

limt𝒙¯j𝝎(t)=\displaystyle\mathop{\lim}\limits_{t\to\infty}\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}\left(t\right)= limtCt𝝎¯j+limt¯λ=0tiCt¯λ𝐮¯j𝝎(¯λ)+limt¯λ=0tiCt¯λ𝐮¯jR(¯λ)\displaystyle\mathop{\lim}\limits_{t\to\infty}{C^{t}}\underline{\bm{\omega}}^{\prime}_{\cdot j}+\mathop{\lim}\limits_{t\to\infty}\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=0}^{{t_{i}}}{{C^{t-\mathchar 22\relax\mkern-10.0mu\lambda}}\underline{\bf{u}}^{\bm{\omega}}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)}+\mathop{\lim}\limits_{t\to\infty}\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=0}^{{t_{i}}}{{C^{t-\mathchar 22\relax\mkern-10.0mu\lambda}}\underline{\bf{u}}^{R}_{\cdot j}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)} (51)
=\displaystyle= 1N(i=1Nωi(Aj)+i=1N¯λ=0tiuijω(¯λ)+i=1N¯λ=0tiuijR(¯λ))𝟏N\displaystyle\frac{1}{N}\left({\sum\limits_{i=1}^{N}{{{\omega}^{\prime}_{i}}\left(A_{j}\right)}+\sum\limits_{i=1}^{N}{\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=0}^{{t_{i}}}{u_{ij}^{\omega}(\mathchar 22\relax\mkern-10.0mu\lambda)}}+\sum\limits_{i=1}^{N}{\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=0}^{{t_{i}}}{u_{ij}^{R}(\mathchar 22\relax\mkern-10.0mu\lambda)}}}\right)\cdot{{\bf{1}}_{N}}
=\displaystyle= 1Ni=1N(ωi(Aj)+uijω(0)+¯λ=1tiuijω(¯λ))𝟏N\displaystyle\frac{1}{N}\sum\limits_{i=1}^{N}{\left({{{\omega^{\prime}}_{i}}\left({{A_{j}}}\right)+u_{ij}^{\omega}(0)+\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=1}^{{t_{i}}}{u_{ij}^{\omega}(\mathchar 22\relax\mkern-10.0mu\lambda)}}\right)}\cdot{{\bf{1}}_{N}}
=\displaystyle= 1Ni=1NAkAj(1)|Ak||Aj|ln(1Credi(0)(1Qi(Ak)))𝟏N\displaystyle\frac{1}{N}\sum\limits_{i=1}^{N}\sum\limits_{{A_{k}}\supseteq{A_{j}}}{{{\left({-1}\right)}^{\left|{{A_{k}}}\right|-\left|{{A_{j}}}\right|}}\ln\left({1-Cred_{i}\left(0\right)\left({1-{Q_{i}}\left({{A_{k}}}\right)}\right)}\right)}\cdot{{\bf{1}}_{N}}
+1Ni=1N¯λ=1tiAkAj(1)|Ak||Aj|ln1Credi(¯λ)(1Qi(Ak))1Credi(¯λ1)(1Qi(Ak))𝟏N\displaystyle+\frac{1}{N}\sum\limits_{i=1}^{N}\sum\limits_{\mathchar 22\relax\mkern-10.0mu\lambda=1}^{{t_{i}}}{\sum\limits_{{A_{k}}\supseteq{A_{j}}}{{{\left({-1}\right)}^{\left|{{A_{k}}}\right|-\left|{{A_{j}}}\right|}}\ln\frac{{1-Cred_{i}\left(\mathchar 22\relax\mkern-10.0mu\lambda\right)\left({1-{Q_{i}}\left({{A_{k}}}\right)}\right)}}{{1-Cred_{i}\left({\mathchar 22\relax\mkern-10.0mu\lambda-1}\right)\left({1-{Q_{i}}\left({{A_{k}}}\right)}\right)}}}}\cdot{{\bf{1}}_{N}}
=\displaystyle= 1Ni=1N(AkAj(1)|Ak||Aj|ln(1Credi(ti)(1Qi(Ak))))𝟏N\displaystyle\frac{1}{N}\sum\limits_{i=1}^{N}{\left({\sum\limits_{{A_{k}}\supseteq{A_{j}}}{{{\left({-1}\right)}^{\left|{{A_{k}}}\right|-\left|{{A_{j}}}\right|}}\ln\left({1-Cred_{i}\left({{t_{i}}}\right)\left({1-{Q_{i}}\left({{A_{k}}}\right)}\right)}\right)}}\right)}\cdot{{\bf{1}}_{N}}

Therefore, the fusion result is free from 𝐮𝐑\bf{u}^{R}. As the EDMM has converged at iteration step tit_{i}, it follows that Credi(ti)=CrediCred_{i}(t_{i})=Cred^{*}_{i}, and:

limt𝒙¯j𝝎(t)\displaystyle\mathop{\lim}\limits_{t\to\infty}\underline{\bm{x}}^{\bm{\omega}}_{\cdot j}\left(t\right) =1Ni=1N(AkAj(1)|Ak||Aj|ln(1Credi(ti)(1Qi(Ak))))𝟏N\displaystyle=\frac{1}{N}\sum\limits_{i=1}^{N}{\left({\sum\limits_{{A_{k}}\supseteq{A_{j}}}{{{\left({-1}\right)}^{\left|{{A_{k}}}\right|-\left|{{A_{j}}}\right|}}\ln\left({1-Cred_{i}\left({{t_{i}}}\right)\left({1-{Q_{i}}\left({{A_{k}}}\right)}\right)}\right)}}\right)}\cdot{{\bf{1}}_{N}} (52)
=1Ni=1N𝝎i(Aj)𝟏N\displaystyle=\frac{1}{N}\sum\limits_{i=1}^{N}{\bm{\omega}_{i}^{*}\left({{A_{j}}}\right)}\cdot{{\bf{1}}_{N}}

Eq.(52) is valid for each component of 𝒙i𝝎\bm{x}^{\bm{\omega}}_{i}. Therefore, after a sufficient number of iterations, all agents’ states converge consistently to i=1N𝝎i/N{\sum\nolimits_{i=1}^{N}{\bm{\omega}_{i}^{*}}}/{N}. When the EDMM is accurately recovered, the credibility computed by each agent is identical to that of centralized fusion. In this case, the fusion result i=1N𝝎i\sum\nolimits_{i=1}^{N}{\bm{\omega}_{i}^{*}} is precisely equivalent to that of CCEF.

Hence, Theorem 1 is proven.
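Conclusions 1) and 2) of Theorem 1 can be illustrated with a scalar sketch of Eq.(44) in serial execution mode (so 𝐮i𝝎 = 𝟎); the 5-agent ring topology, the consensus weights, and the Gaussian noise are assumptions of this sketch rather than the paper's setup:

```python
import numpy as np

# Scalar sketch of Eq.(44): each agent injects a self-cancelling noise
# sequence u_i^R (its sum over t = 0..t_i is zero), yet the network still
# converges to the exact average of the true initial weights.
rng = np.random.default_rng(4)
N = 5
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0   # ring topology
C = np.where(A > 0, 1.0 / 3.0, 0.0)                # Metropolis weights

omega = rng.random(N)                              # scalar stand-ins for omega_i'
t_cancel = rng.integers(3, 8, size=N)              # private cancellation times t_i
noise = [rng.normal(size=ti) for ti in t_cancel]
noise = [np.append(n, -n.sum()) for n in noise]    # sum_t u_i^R(t) = 0

x = omega + np.array([n[0] for n in noise])        # x_i(0) = omega_i' + u_i^R(0)
for t in range(1, 200):
    x = x + C @ x - C.sum(axis=1) * x              # LAC part of Eq.(44)
    x = x + np.array([n[t] if t < len(n) else 0.0 for n in noise])
```

Because the consensus matrix is doubly stochastic, the network average is preserved by every LAC step, and the injected noise contributes zero in total, so the limit is independent of the differential privacy term.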

In fact, Theorem 1 not only describes the consistency of fusion but also ensures the fusion process is credible. The 𝝎i{{\bm{\omega}^{\prime}}_{i}} is determined by CrediCred_{i}, whose value is related to the execution order of EDMM completion optimization and distributed fusion. Depending on whether Eq.(18) and Eq.(44) are executed simultaneously, the execution order is classified into two modes: serial and parallel. The serial execution mode does not start the iteration of Eq.(44) until the EDMM output from Eq.(18) has completely converged, so each agent already knows CrediCred_{i}^{*} before Eq.(44) is executed. Thus, 𝐮i𝝎(t)=𝟎{\bf{u}}_{i}^{\bm{\omega}}\left({t}\right)={\bf{0}}.

The parallel execution mode begins the iteration of Eq.(44) while the optimization of Eq.(18) is still ongoing. Indeed, once the neighbored EDMs are collected, the agents can already estimate the non-neighbored EDMs. Although this estimate is imprecise at the beginning, it is sufficient for the agents to compute Credi(t)Cred_{i}(t) from the element-complete EDMM and to start executing Eq.(44). Noting that 𝝎i(t)\bm{\omega}^{\prime}_{i}(t) is the Credi(t)Cred_{i}(t)-discount of 𝒎i\bm{m}_{i}, one accordingly has 𝝎i=𝝎i(0)\bm{\omega}^{\prime}_{i}=\bm{\omega}^{\prime}_{i}(0). Before the EDMM fully converges, Credi(t)Cred_{i}(t) keeps changing, creating a gap between 𝝎i(t)\bm{\omega}^{\prime}_{i}(t) and 𝝎i(t1)\bm{\omega}^{\prime}_{i}(t-1); this is why 𝐮i𝝎(t){\bf{u}}_{i}^{\bm{\omega}}(t) is introduced. As D~(t)\tilde{D}(t) approaches convergence, the amplitude of uij𝝎(t)u_{ij}^{\bm{\omega}}(t) becomes progressively smaller, and upon full convergence uij𝝎(t)=0u_{ij}^{\bm{\omega}}(t)=0. The difference between agents’ states at this step is smaller than the initial difference under serial execution because Eq.(44) has already been running for some time, so agreement is reached in fewer iterative steps, i.e., the parallel mode lowers PCEF’s elapsed time.

Regarding security, the following theorem ensures the privacy of evidence for each agent under Eq.(44).

Theorem 2.

(Privacy-preserving of evidence fusion) Agent ii’s raw evidence is not inferred by Agent j𝒩ij\in{\cal N}_{i} if and only if {𝒩i,i}𝒩j\left\{{\cal N}_{i},i\right\}\not\subset{{\cal N}_{j}}.

Proof 2.

As Agent ii shares its state at any iteration step with its neighbors, Agent jj is able to infer:

𝒙i𝝎(t+1)(1l𝒩icil)𝒙i𝝎(t)l𝒩i𝒩jcil𝒙l𝝎(t)\displaystyle\bm{x}_{i}^{\bm{\omega}}(t+1)-(1-\!\!\sum\limits_{l\in{{\cal N}_{i}}}{{c_{il}}})\bm{x}_{i}^{\bm{\omega}}(t)-\!\!\!\sum\limits_{l\in{{\cal N}_{i}}\cap{{\cal N}_{j}}}\!\!\!{{c_{il}}\bm{x}_{l}^{\bm{\omega}}(t)} =l𝒩i,l𝒩jcil𝒙l𝝎(t)+𝐮iR(t+1)+𝐮i𝝎(t+1)\displaystyle=\!\!\!\!\!\sum\limits_{l\in{{\cal N}_{i}},l\notin{{\cal N}_{j}}}\!\!\!\!\!{{c_{il}}\bm{x}_{l}^{\bm{\omega}}(t)}+{\bf{u}}_{i}^{R}(t+1)+{\bf{u}}_{i}^{\bm{\omega}}(t+1) (53)
𝒙i𝝎(0)\displaystyle\bm{x}_{i}^{\bm{\omega}}(0) =𝝎i+𝐮iR(0)+𝐮i𝝎(0)\displaystyle={\bm{\omega}^{\prime}_{i}}+{\bf{u}}_{i}^{R}(0)+{\bf{u}}_{i}^{\bm{\omega}}(0)

The left side of Eq.(53) represents the known quantity for Agent jj, while the right side represents the unknown. The recovery of other agents’ raw evidence by Agent jj is contingent upon eliminating the differential privacy term. According to Theorem 1, the way to eliminate the differential privacy term is to accumulate Eq.(53):

t=0ti+1𝒙i𝝎(t)t=0ti(1l𝒩icil)𝒙i𝝎(t)t=0til𝒩i𝒩jcil𝒙l𝝎(t)\displaystyle\sum\limits_{t=0}^{{t_{i}}+1}{\bm{x}_{i}^{\bm{\omega}}(t)}-\sum\limits_{t=0}^{{t_{i}}}{(1-\sum\limits_{l\in{{\cal N}_{i}}}{{c_{il}}})\bm{x}_{i}^{\bm{\omega}}(t)}-\sum\limits_{t=0}^{{t_{i}}}{\sum\limits_{l\in{{\cal N}_{i}}\cap{{\cal N}_{j}}}{{c_{il}}\bm{x}_{l}^{\bm{\omega}}(t)}} (54)
=t=0til𝒩i,l𝒩jcil𝒙l𝝎(t)+𝝎i+t=0ti𝐮i𝝎(t)+t=0ti𝐮iR(t)\displaystyle=\sum\limits_{t=0}^{{t_{i}}}{\sum\limits_{l\in{{\cal N}_{i}},l\notin{{\cal N}_{j}}}{{c_{il}}\bm{x}_{l}^{\bm{\omega}}(t)}}+{\bm{\omega}^{\prime}_{i}}+\sum\limits_{t=0}^{{t_{i}}}{{\bf{u}}_{i}^{\bm{\omega}}(t)}+\sum\limits_{t=0}^{{t_{i}}}{{\bf{u}}_{i}^{R}(t)}
=t=0til𝒩i,l𝒩jcil𝒙l𝝎(t)+𝝎i+t=0ti𝐮i𝝎(t)\displaystyle=\sum\limits_{t=0}^{{t_{i}}}{\sum\limits_{l\in{{\cal N}_{i}},l\notin{{\cal N}_{j}}}{{c_{il}}\bm{x}_{l}^{\bm{\omega}}(t)}}+{\bm{\omega}^{\prime}_{i}}+\sum\limits_{t=0}^{{t_{i}}}{{\bf{u}}_{i}^{\bm{\omega}}(t)}
=t=0til𝒩i,l𝒩jcil𝒙l𝝎(t)+𝝎i\displaystyle=\sum\limits_{t=0}^{{t_{i}}}{\sum\limits_{l\in{{\cal N}_{i}},l\notin{{\cal N}_{j}}}{{c_{il}}\bm{x}_{l}^{\bm{\omega}}(t)}}+{\bm{\omega}}_{i}^{*}

It is clear that if {𝒩i,i}𝒩j\left\{{\cal N}_{i},i\right\}\subset{{\cal N}_{j}}, then:

t=0ti+1𝒙i𝝎(t)t=0ti(1l𝒩icil)𝒙i𝝎(t)t=0til𝒩i𝒩jcil𝒙l𝝎(t)\displaystyle\sum\limits_{t=0}^{{t_{i}}+1}{\bm{x}_{i}^{\bm{\omega}}(t)}-\sum\limits_{t=0}^{{t_{i}}}{(1-\sum\limits_{l\in{{\cal N}_{i}}}{{c_{il}}})\bm{x}_{i}^{\bm{\omega}}(t)}-\sum\limits_{t=0}^{{t_{i}}}{\sum\limits_{l\in{{\cal N}_{i}}\cap{{\cal N}_{j}}}{{c_{il}}\bm{x}_{l}^{\bm{\omega}}(t)}} =𝝎i\displaystyle={\bm{\omega}}_{i}^{*} (55)

Therefore, Agent jj recovers 𝒎i\bm{m}_{i} from 𝝎i{\bm{\omega}}_{i}^{*}, as it is able to obtain CrediCred_{i}^{*} according to the EDMM given in Section 3.2.

Conversely, if there is an Agent l𝒩jl\not\in\mathcal{N}_{j} in 𝒩i\mathcal{N}_{i}, then Agent jj cannot obtain 𝒎i\bm{m}_{i} because it knows nothing about 𝒙l𝝎(t)\bm{x}_{l}^{\bm{\omega}}(t).

Thus, Theorem 2 is proven.

According to Theorem 2, we can conclude as follows:

Proposition 1.

The privacy of all raw evidence will be preserved during PCEF if every agent satisfies the condition of Theorem 2.

Proposition 2.

In a fully connected network, the privacy of raw evidence cannot be preserved.
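Proposition 2 can be demonstrated with a scalar sketch: in a fully connected network an observer sees every shared state, so accumulating the update equation over t = 0,…,t_i+1 cancels the self-cancelling noise and exposes agent i's weight assignment. Serial mode is assumed (𝐮i𝝎 = 𝟎, so 𝝎i* = 𝝎i′), and all numerical quantities below are illustrative:

```python
import numpy as np

# Fully connected network: the observer accumulates the shared states and
# recovers agent 0's weight assignment, even though each individual state
# is masked by self-cancelling noise.
rng = np.random.default_rng(5)
N = 4
C = np.full((N, N), 1.0 / N)
np.fill_diagonal(C, 0.0)                   # c_ij = 1/N for all neighbor pairs

omega = rng.random(N)                      # scalar stand-ins for omega_i'
t_i = 6                                    # agent 0's cancellation time
noise = rng.normal(size=t_i + 1)
noise[-1] = -noise[:-1].sum()              # self-cancelling: sum_t u_0^R(t) = 0

T = 30
x_hist = np.zeros((T + 1, N))
x_hist[0] = omega
x_hist[0, 0] += noise[0]                   # x_0(0) = omega_0' + u_0^R(0)
for t in range(T):
    x = x_hist[t]
    nxt = x + C @ x - C.sum(axis=1) * x    # LAC update of Eq.(44)
    if t + 1 <= t_i:
        nxt[0] += noise[t + 1]             # agent 0 keeps injecting noise
    x_hist[t + 1] = nxt

# Observer's accumulation over t = 0..t_i+1: every term is visible to it,
# and the noise telescopes away, leaving agent 0's weight assignment.
i = 0
lhs = (x_hist[: t_i + 2, i].sum()
       - (1 - C[i].sum()) * x_hist[: t_i + 1, i].sum()
       - sum(C[i, l] * x_hist[: t_i + 1, l].sum() for l in range(N) if l != i))
```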

Algorithm 3 PCEF.
0:  evidence {𝒎1,𝒎2,,𝒎N}\{{\bm{m}_{1}},{\bm{m}_{2}},\cdots,{\bm{m}_{N}}\}, local adjacency matrices {𝒜1,𝒜2,,𝒜N}\{{{\cal A}_{1}},{{\cal A}_{2}},\cdots,{{\cal A}_{N}}\}, a guess for the rank kk, a guess for the rank maximum ss, and an upper bound on a single rank increase ll, the threshold for rank reduction Δ>0\Delta>0, the parameter for rank increase ϵ>0\epsilon>0, the maximum iterative steps for rank adjustments IterRAIter_{RA}, the maximum iterative steps for LAC IterconsenIter_{consen}, a time for all agents to agree tmax<Iterconsent_{max}<Iter_{consen}, the maximum iterative steps for rank unchanged IterRUcIter_{RUc}.
0:  Fusion result 𝒎{\bm{m}_{\oplus}}.
1:  Agents construct their LEDMM D¯i=P𝒜i(D)\bar{D}_{i}=P_{{\cal A}_{i}}(D).
2:  Localize P𝒜(D)P_{\cal A}(D) and 𝒜\cal A to each agent with an iteration number of IterconsenIter_{consen}.
3:  [U,Σ,V]svd(P𝒜(D),k)\left[U,\Sigma,V\right]\leftarrow\text{svd}(P_{\cal A}(D),k).
4:  r~argmaxi{(σiσi+1)/σi}\tilde{r}\leftarrow{\arg\max}_{i}\left\{\left(\sigma_{i}-\sigma_{i+1}\right)\mathord{\left/\right.}{\sigma_{i}}\right\}.
5:  if k>r~k>\tilde{r} then
6:     kr~k\leftarrow\tilde{r}.
7:     [U,Σ,V]svd(P𝒜(D),k)[U,\Sigma,V]\leftarrow\text{svd}(P_{\cal A}(D),k).
8:     D~(0)UΣVT\tilde{D}(0)\leftarrow U\Sigma V^{T}.
9:  end if
10:  for each Agent i[1,N]i\in[1,N], in parallel do
11:     Generate a random number tit_{i} that satisfies IterRA<ti<IterconsenIter_{RA}<t_{i}<Iter_{consen}.
12:     t1t\leftarrow 1.
13:  while tIterconsen+IterRAt\leq Iter_{consen}+Iter_{RA} do
14:        if tIterRAt\leqslant Iter_{RA} and kk has not been constant for the past IterRUcIter_{RUc} iteration steps then
15:           Get D~(t)\tilde{D}(t) from Algorithm 2 with D~(t1)\tilde{D}(t-1) as the initial value.
16:           if r~=argmini{σiΔσ1}<k\tilde{r}={\arg\min}_{i}\{\sigma_{i}\geq\Delta\sigma_{1}\}<k and max{(σiσi+1)/σi}>Δ\max\left\{\left(\sigma_{i}-\sigma_{i+1}\right)\mathord{\left/\right.}{\sigma_{i}}\right\}>\Delta  then
17:              kr~k\leftarrow\tilde{r}.
18:              Construct Ur~U_{\tilde{r}}, Σr~\Sigma_{\tilde{r}}, and Vr~V_{\tilde{r}}.
19:              D~(t)Ur~Σr~Vr~T\tilde{D}(t)\leftarrow U_{\tilde{r}}\Sigma_{\tilde{r}}V_{\tilde{r}}^{T}.
20:           else if k<sk<s and Nsk(D~(t))F>ϵGk(D~(t))F{\|{{N_{s-k}}({{{\tilde{D}}(t)}})}\|_{\rm{F}}}>\epsilon{\|{{G_{k}}({{{\tilde{D}}(t)}})}\|_{\rm{F}}} then
21:              Construct WW, HH, YY and calculate α\alpha.
22:              D~(t)D~(t)+αWHYT\tilde{D}(t)\leftarrow\tilde{D}(t)+\alpha WH{Y^{T}}.
23:              Rearrange D~(t)\tilde{D}(t) in descending order of singular values.
24:              kk+l~k\leftarrow k+\tilde{l}.
25:           end if
26:           tRAtt_{RA}\leftarrow t.
27:        else
28:           if ttRA>Iterconsent-t_{RA}>Iter_{consen} then
29:              break.
30:           end if
31:        end if
32:        Calculate Credi(t)Cred_{i}(t) according to D~(t)\tilde{D}(t) and update agent state 𝒙i𝝎(t)\bm{x}^{\bm{\omega}}_{i}(t) with Eq.(44).
33:     end while
34:  end for
35:  Agents convert N𝝎i(t)N\bm{\omega}_{i}(t) to 𝒎{\bm{m}_{\oplus}} by Eq.(14).

In fact, Eq.(44) provides a two-stage privacy-preserving strategy for distributed evidence fusion. In the first stage, agents protect their individual information by injecting self-cancelling random noise into the system. In the second stage, the differential privacy term is set to zero to ensure the convergence of agent states once the agents' information is well mixed. While the introduction of the differential privacy term does not affect the fusion result, its magnitude influences the time consumption of consensus. Considering that the credibility compensation term decreases as the EDMM converges, it is suggested to progressively reduce the magnitude of the differential privacy term to facilitate a quicker consensus among agents.
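As a minimal sketch of this two-stage strategy, assuming a doubly-stochastic Metropolis weight matrix and scalar states (the paper's actual update Eq.(44) additionally carries the credibility compensation term), each agent can inject a noise sequence whose increments telescope to zero:

```python
import numpy as np

def metropolis_weights(A):
    """Doubly-stochastic consensus weights for a symmetric 0/1 adjacency matrix."""
    N = A.shape[0]
    deg = A.sum(axis=1)
    C = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if A[i, j]:
                C[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        C[i, i] = 1.0 - C[i].sum()
    return C

def noisy_lac(x0, C, t_noise=30, t_total=300, seed=0):
    """Stage 1 (t < t_noise): each agent adds u_i(t) = w_i(t) - w_i(t-1) with
    w_i(-1) = w_i(t_noise) = 0, so each agent's noise telescopes to zero and
    the network average is untouched. Stage 2: noise off, states converge."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    w_prev = np.zeros(len(x))
    for t in range(t_total):
        w = rng.normal(scale=0.5 * 0.9 ** t, size=len(x)) if t < t_noise else np.zeros(len(x))
        x = C @ x + (w - w_prev)  # consensus step plus self-cancelling perturbation
        w_prev = w
    return x
```

Because the column sums of the weight matrix are 1 and each agent's injected noise sums to zero over time, all states still converge to the initial average, while any finite prefix of an agent's shared states is masked by the not-yet-cancelled noise.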

The pseudo-code for PCEF is given as Alg.3, in which the EDMM completion parameters are set to be the same for all agents. Thus, the EDM collection is executed only once. The algorithm uses several maximum-iteration parameters to govern the termination of its iterative processes, because it contains multiple consensus and optimization stages (EDM collection, EDMM completion, and distributed credible evidence fusion), which makes it difficult to provide generic termination conditions for all distributed networks.

Remark 4.

Although PCEF is designed for undirected graphs, it is also suitable for directed graphs with a spanning tree. In this case, information is shared unidirectionally among some agents, which means that the EDM between these agents cannot be obtained in the EDM neighbor consensus.

3.4 Computational complexity

The computational complexity of PCEF is analyzed below:

  1. 1)

    EDM neighbor consensus: Agent ii needs to compute DismpDismp with its |𝒩i||{\cal N}_{i}| neighbors, an operation that involves inner product computation, privacy-preserving two-party dot-product protocols, and a Millionaire's-problem solving algorithm. The computational complexity of the vector inner product is negligible compared with that of the latter two. According to the literature [52, 34], the computational complexities of the privacy-preserving two-party dot-product protocol and the Millionaire's-problem solving algorithm are O(5(2n1)+1)O(5(2^{n}-1)+1) and O(6n+4)O(6n+4), respectively. Therefore, the computational complexity of EDM neighbor consensus is up to O(max{|𝒩i|}(52n+6n))O(\max\{|{\cal N}_{i}|\}(5\cdot 2^{n}+6n)).

  2. 2)

    EDM network consensus: The computational complexity of this module can be analyzed in the following parts:

    • P𝒜(D)P_{\cal A}(D) localization: At each iterative step, the computational complexity for agent ii to update its own state is O(|𝒩i|N2)O(|{\cal N}_{i}|N^{2}). The worst computational complexity for IterconsenIter_{consen} iterations is O(Iterconsenmax{|𝒩i|}N2)O(Iter_{consen}\max\{|{\cal N}_{i}|\}N^{2}).

    • Fixed-rank optimization: The computational complexity of both the matrix addition and the Hadamard product involved in computing f(D~(t))f(\tilde{D}(t)) is O(N2)O(N^{2}). The computational complexity of the singular value decomposition of D~(t)\tilde{D}(t) is O(kN2)O(kN^{2}). The computational complexity of the matrix multiplication included in the Frobenius norm computation of the gradient gradsf(D~(t))\text{grad}_{s}f(\tilde{D}(t)) and in updating γ\gamma, pp, and qq is O(N3)O(N^{3}). In the worst case, the computational complexity of searching for the optimal BB step is O(IterζN3)O(Iter_{\zeta}N^{3}). Therefore, the computational complexity of this part is O((Iterζ+2)N3+(k+1)N2)O((Iter_{\zeta}+2)N^{3}+(k+1)N^{2}).

    • Rank increase/reduction operation: The computational complexity of the rank-decrease operation is O(kN2)O(kN^{2}) because it mainly involves matrix multiplication. The complexity of the rank-increase operation is higher than that of the rank-decrease operation. The computational complexity of the singular value decomposition of UTf(D~)V-U_{\bot}^{T}\nabla f(\tilde{D})V_{\bot} is O((sk)N2)O((s-k)N^{2}). The operations with higher computational complexity for updating D~\tilde{D} are singular value decomposition, QR decomposition, and matrix multiplication, all of which are O(N3)O(N^{3}). Therefore, the computational complexity of this part is O(3N3+sN2)O(3N^{3}+sN^{2}).

    In the worst case, the fixed-rank completion optimization and the rank increase/decrease operations are performed IterRAIter_{RA} times. Therefore, the computational complexity of EDM network consensus is O(IterRA((Iterζ+5)N3+(Iterconsenmax|𝒩i|+k+s+1)N2))O(Iter_{RA}((Iter_{\zeta}+5)N^{3}+(Iter_{consen}\max{|{\cal N}_{i}|}+k+s+1)N^{2})).

  3. 3)

    Evidence fusion consensus: This consensus involves the computation of 𝐮\bf{u} and the updating of agent state xi𝝎\textbf{x}_{i}^{\bm{\omega}}. It is executed Iterconsen+IterRAIter_{consen}+Iter_{RA} times in the worst case, so the computational complexity is O((Iterconsen+IterRA)(max|𝒩i|+(2n1))(2n2))O((Iter_{consen}+Iter_{RA})(\max{|{\cal N}_{i}|}+(2^{n}-1))(2^{n}-2)).

  4. 4)

    Transformation of ω\bm{\omega} to m\bm{m}_{\oplus}: DR should be executed 2n32^{n}-3 times to transform 𝝎\bm{\omega} into 𝒎\bm{m}_{\oplus}, each transformation with a complexity of O(22n)O(2^{2n}). Consequently, the overall computational complexity of obtaining the global fusion result is O((2n3)22n)O((2^{n}-3)\cdot 2^{2n}).

In summary, the total computational complexity of PCEF is O((52n+6n)max{|𝒩i|}+(Iterζ+5)N3+(Iterconsenmax{|𝒩i|}+k+s+1)N2+(Iterconsen+IterRA)(max{|𝒩i|}+(2n1))(2n2)+(2n3)22n)O((5\cdot 2^{n}+6n)\max\{|{\cal N}_{i}|\}+(Iter_{\zeta}+5)N^{3}+(Iter_{consen}\max\{|{\cal N}_{i}|\}+k+s+1)N^{2}+(Iter_{consen}+Iter_{RA})(\max\{|{\cal N}_{i}|\}+(2^{n}-1))(2^{n}-2)+(2^{n}-3)\cdot 2^{2n}).

4 Simulation

In this section, the performance of PCEF on rank estimation, credibility estimation, and resistance to counterintuitive results is first measured based on a numerical simulation experiment. Afterwards, the proposed method is compared with two existing distributed evidence fusion methods (i.e., the RANSAC-Based and COF-Based methods) in a simulation that takes distributed unmanned-swarm radar signal sorting as the background.

4.1 Verification for the approximation of PCEF to CCEF

Refer to caption
Figure 4: The probability density functions of 5 categories.

This simulation aims to test the performance of PCEF, including whether the estimated EDMM rank is accurate, whether the computed evidence credibility approximates that of CCEF, and whether the fusion of highly conflicting evidence produces counterintuitive results. Consider a classification task involving five categories whose probability density curves are all normally distributed with variance 1, as shown in Fig.4. Corresponding to the categories {a}\{a\}, {b}\{b\}, {c}\{c\}, {d}\{d\}, {e}\{e\}, the probability density functions have mean values of -2, -1, 0, 1, 2, respectively.

In a strongly connected undirected graph with a connection density of 0.4, 100 agents independently observe a target TT of category {a}\{a\}. The observations of Agents 1∼90 are normal and follow a normal distribution with mean -2 and variance 1, while the observations of Agents 91∼100 are abnormal and follow a normal distribution with mean 1 or 2 and variance 1. The circular markers and asterisk markers in Fig.4 correspond to the 90 normal observations and the 10 abnormal observations, respectively. To transform these observations into pieces of evidence, denoted as 𝒎1𝒎100\bm{m}_{1}\sim\bm{m}_{100}, BKNN [37] is adopted as the basic classifier in the simulation. The parameters of the BKNN are set as γta=2\gamma_{t_{a}}=2, γtr=5\gamma_{t_{r}}=5, K=20K=20, and Ns=100N_{s}=100. See [37] for the exact meaning of these parameters. In addition, to guarantee that the agents are able to fuse evidence collaboratively, the parameters of PCEF are set as in Tab.LABEL:tab2:PCEFPara.
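The observation model above is easy to reproduce. The sketch below draws the 90 normal and 10 abnormal observations and builds normalised Gaussian likelihood vectors as an illustrative stand-in for the BKNN evidence construction (the BKNN classifier itself, with its parameters γta, γtr, K, and Ns, is not reimplemented here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Class-conditional means of the five categories {a},{b},{c},{d},{e}; variance 1.
class_means = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

# Agents 1-90 observe the true category {a}; Agents 91-100 are disturbed and
# observe from the densities centred at 1 or 2 instead.
normal_obs = rng.normal(-2.0, 1.0, size=90)
abnormal_obs = rng.normal(rng.choice([1.0, 2.0], size=10), 1.0)
observations = np.concatenate([normal_obs, abnormal_obs])

def gaussian_likelihoods(x):
    """Normalised Gaussian likelihoods over the 5 singleton categories; only an
    illustrative substitute for the BKNN-based basic belief assignments."""
    lik = np.exp(-0.5 * (x - class_means) ** 2)
    return lik / lik.sum()

evidence = np.array([gaussian_likelihoods(x) for x in observations])  # shape (100, 5)
```

Feeding such vectors into the fusion pipeline reproduces the conflict structure of the experiment: most rows favour {a}, while the last ten rows favour {d} or {e}.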

Refer to caption
(a) Singular values of DD (listed from largest to smallest).
Refer to caption
(b) The rank of D~\tilde{D}.
Refer to caption
(c) Credibility estimation results.
Refer to caption
(d) Estimation error of credibility.
Figure 5: Simulation results of PCEF

In low-rank matrix completion, it is appropriate to take the index of the last singular value that is not less than Δ\Delta times the largest singular value as the rank of the matrix. Here, the EDMM with 100 pieces of evidence is constructed using DismpDismp as the EDM. The singular values of this EDMM are computed and arranged in descending order, as shown in Fig.5(a). It can be seen that there are four singular values of DD greater than Δσ1\Delta\sigma_{1} when Δ\Delta is set to 0.1 according to Tab.LABEL:tab2:PCEFPara, i.e., rank(D)=4rank(D)=4.
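This rank-reading rule is a one-liner. The sketch below applies it to a synthetic rank-4 matrix of the same 100×100 size as the EDMM (the matrix here is random, not the actual DismpDismp-based EDMM):

```python
import numpy as np

def estimate_rank(singular_values, delta=0.1):
    """Rule used to read the rank off Fig.5(a): the number of singular values
    that are not less than delta times the largest one."""
    s = np.sort(np.asarray(singular_values))[::-1]  # descending order
    return int(np.sum(s >= delta * s[0]))

# A 100x100 matrix of exact rank 4 mimics the size and low rank of the EDMM.
rng = np.random.default_rng(1)
D = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 100))
sv = np.linalg.svd(D, compute_uv=False)
print(estimate_rank(sv, delta=0.1))  # -> 4, matching rank(D) = 4 in the text
```

For the exact-rank matrix the remaining singular values are numerically zero, so the Δ-threshold cleanly separates the four dominant ones.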

An accurate rank estimation balances the efficiency and accuracy of the optimization of Eq.(19). The rank of the low-rank approximation matrix of the EDMM is recorded as shown in Fig.5(b). It is observed that rank(D~)rank(\tilde{D}) is progressively corrected over 60 iteration steps and remains constant over successive IterRUc=20Iter_{RUc}=20 iteration steps. Namely, the estimation of rank(D)rank(D) by PCEF is accurate.

The purpose of completing the EDMM is to assess the credibility of the to-be-fused evidence. Since PCEF transplants the processing pipeline of CCEF to privacy-preserving distributed systems, this simulation evaluates the accuracy of the credibility calculated with PCEF by taking the evidence credibility given by CCEF as the benchmark. The credibility and credibility errors of each agent's evidence are shown in Fig.5(c) and Fig.5(d), respectively. In Fig.5(c), the evidence credibilities given by CCEF are marked by blue stars, and the credibility estimated by PCEF is indicated by red circles. In Fig.5(d), the credibility error is shown as the blue star marker. It is seen that the credibility estimation errors of all pieces of to-be-fused evidence fall in the interval [0.02,0.02][-0.02,0.02]. This suggests that the proposed strategy for estimating evidence credibility based on the low-rank matrix completion technique in a distributed system is feasible and highly accurate.

Table 1: Fusion results of three methods: DR, CCEF, and PCEF.
Fusion method {a}\{a\} {b}\{b\} {a,b}\{a,b\}
DR \cdot 1 \cdot
CCEF 0.9526 0.0441 0.0033
PCEF 0.9586 0.0385 0.0029

PCEF is compared with CCEF and DR to verify that it avoids counterintuitive results when fusing highly conflicting pieces of evidence. There are high conflicts among the 100 pieces of evidence since 𝒎91𝒎100\bm{m}_{91}\sim\bm{m}_{100} are set as disturbed pieces of evidence. The fusion results of the three methods are shown in Tab.1. It shows that DR assigns all the confidence to {b}\{b\} due to the high conflict between pieces of evidence, which deviates seriously from the true value {a}\{a\}. In contrast, PCEF and CCEF assign the highest confidence to {a}\{a\}, with 0.9586 and 0.9526, respectively, which indicates that the proposed method solves the problems of distributed evidence credibility assessment and high-conflict evidence fusion. In terms of confidence, PCEF gives fusion results that are very close to those of CCEF. It can be said that PCEF is a successful extension of CCEF to privacy-preserving distributed systems. Moreover, it is the credibility estimation error that makes PCEF assign a slightly higher support of 0.9586 to {a}\{a\} than CCEF.
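The DR row of Tab.1 reflects the classic veto effect of Dempster's rule under high conflict, which can be reproduced in a few lines. The masses below are illustrative (not the BKNN outputs of the experiment): ten pieces strongly supporting {a} are wiped out by a single disturbed piece that assigns no mass to {a}.

```python
from functools import reduce

def dempster(m1, m2):
    """Dempster's rule of combination for mass functions represented as
    dicts mapping frozenset focal elements to masses."""
    combined, conflict = {}, 0.0
    for A, x in m1.items():
        for B, y in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y  # mass falling on the empty set
    # Renormalise by the non-conflicting mass.
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

a, b, ab = frozenset("a"), frozenset("b"), frozenset("ab")
supporting = {a: 0.9, b: 0.05, ab: 0.05}  # ten agents strongly support {a}
disturbed = {b: 1.0}                      # one disturbed agent rules {a} out

fused = reduce(dempster, [supporting] * 10 + [disturbed])
print(fused)  # all mass collapses onto frozenset({'b'})
```

Because the disturbed piece leaves no focal element intersecting {a}, every combination path through {a} is discarded as conflict, so DR hands full confidence to {b} regardless of how many pieces supported {a}; credibility-weighted fusion avoids this by discounting the disturbed piece instead.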

4.2 Comparison experiment

Refer to caption
Figure 6: Collaborative battlefield reconnaissance using unmanned aerial vehicle swarm.

Subject to the influence of the complex electromagnetic environment and the limitations of reconnaissance and detection capabilities, a single unmanned aerial vehicle (UAV) often fails to meet the requirements of electronic warfare. In recent years, UAV swarms based on distributed network technology have been applied to large-scale battlefield reconnaissance and target detection [51]. A swarm is not only large in scale, low in single-aircraft cost, and difficult for detection equipment to discover, but also provides complementary information by carrying different kinds of equipment. This complementarity is reflected not only in the multi-view information redundancy brought by hetero-functional devices but also in the high-low precision matching of same-functional devices. Considering that a high-precision device is often preferentially struck, it is crucial to protect the security of the platform's information, which is also why this paper emphasizes the privacy protection of the agents' raw evidence. Of course, the same privacy preservation requirements also appear in applications such as wildlife population density monitoring (to prevent illegal poaching) and pollutant level detection (to prevent pollution dumping).

As shown in Fig.6, collaborative multi-UAV radar signal sorting [61] is utilized as the application background in this paper to compare the performance of PCEF with that of two existing distributed evidence fusion methods, RANSAC-Based and COF-Based. In the set scenario, a swarm of 100 UAVs collaborates to sort radar signals over a large-scale battlefield. These signals may come from five different transmitters labeled Ω={a,b,c,d,e}\Omega=\{a,b,c,d,e\}. The UAVs first perform signal sorting independently and then communicate over an ad hoc network to achieve collective decision-making, with special focus on the sorting of identical single-pulse signals. It is assumed that all UAVs have completed independent signal sorting, which is not the focus of this simulation.

Four methods, CCEF, PCEF, RANSAC-Based, and COF-Based, are applied to the sorting task, with their parameters set as follows:

  • CCEF: The CCEF, which has been widely validated, is used as the baseline in the same way as in 4.1. It follows the process of Eqs.(2)-(9).

  • PCEF: PCEF follows the flow of Algorithm 3 with the parameter settings of 4.1.

  • RANSAC-Based: According to [11], the parameters of RANSAC-Based are set as follows: the size of each random subsample ν=5\nu=5, success probability psuc=0.9999p_{suc}=0.9999, inlier probability pin=0.9p_{in}=0.9, and conflict threshold τ=0.5\tau=0.5, which is suggested to take a value in the range [0.4,0.6][0.4,0.6].

  • COF-Based: According to [61], the parameters of COF-Based are set as follows: the COF threshold τCOF=0.5\tau_{COF}=0.5 and the distance threshold τdist=0.7\tau_{dist}=0.7, which are suggested to take values in the ranges [0.4,0.9][0.4,0.9] and [0.4,0.7][0.4,0.7], respectively.

Refer to caption
Figure 7: The Pignistic probabilities of the sorting results given by the four methods in 100 Monte Carlo trials.
Refer to caption
Figure 8: Error of three methods in 100 Monte Carlo trials.

As in 4.1, the communication network constituted by the UAVs maintains a connection density of 0.4. Given that the result of a single trial may occur by chance, 100 Monte Carlo trials are executed. The Pignistic probabilities of the fusion results obtained by the four methods are shown in Fig.7. It is clear that the CCEF, PCEF, and RANSAC-Based methods assign almost zero confidence to the categories {c}\{c\}, {d}\{d\}, and {e}\{e\}, whereas the COF-Based method supports these categories to some extent in most of the trials. A possible reason for this phenomenon is that the initial fusion centers generated by the COF-Based method with the neighborhood evidence fusion strategy are easily anomalous when the network connection density is high, which affects the subsequent voting. Although also based on anomaly detection to identify disturbed evidence, RANSAC-Based samples randomly from all pieces of evidence and thus circumvents local fusion anomalies to some extent. In contrast, PCEF evaluates the evidence credibility in the interval [0,1] and adds untrustworthy information into m(Ω)m(\Omega) to reduce conflicts, which avoids a hard division of evidence into inliers and outliers and is more in line with the stochastic nature of information in the presence of interference. This is also the reason why credible evidence fusion is widely verified to be highly reliable in centralized fusion. Taking the Pignistic probability of CCEF as a baseline, the errors of PCEF, RANSAC-Based, and COF-Based are shown in Fig.8. It is observed that, based on high-precision credibility estimation, PCEF obtains confidence assignments very close to those of CCEF with errors no larger than 0.02, while the RANSAC-Based and COF-Based methods produce many results with large discrepancies. This suggests that PCEF is a more accurate extension of traditional centralized evidence fusion to distributed systems than the RANSAC-Based and COF-Based methods.

Refer to caption
Figure 9: The correctness of the sorting decisions given by the four methods.
Refer to caption
Figure 10: The frequencies of the sorting decision results given by the four methods and DR in 100 trials.
Refer to caption
Figure 11: Time-consuming of PCEF, RANSAC-Based, and COF-Based methods.

The maximum Pignistic probability decision rule is employed to complete the final radar signal sorting decision. The decisions given by the four methods in 100 trials are shown in Fig.9. It can be seen that PCEF makes fewer wrong decisions than RANSAC-Based and COF-Based. Fig.10 shows the frequency statistics of the sorting decision results under the CCEF, PCEF, RANSAC-Based, COF-Based, and DR fusion methods. It is observed that all three distributed fusion methods significantly alleviate the counterintuitive results produced by DR. In particular, the frequency of correctly sorting radar signals under PCEF is the highest and consistent with CCEF, which indicates that the strategy of estimating evidence credibility based on the matrix completion technique is effective.
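The decision rule itself is the pignistic transform followed by an argmax. A minimal sketch, using the PCEF masses from Tab.1 as input (the focal element {a,b} splits its mass between the two singletons):

```python
def pignistic(m):
    """Pignistic transform BetP: each focal set's mass is shared equally
    among its singletons; mass functions are dicts over frozensets."""
    bet = {}
    for focal, mass in m.items():
        for x in focal:
            bet[x] = bet.get(x, 0.0) + mass / len(focal)
    return bet

# PCEF fusion result from Tab.1: m({a}) = 0.9586, m({b}) = 0.0385, m({a,b}) = 0.0029.
m_fused = {frozenset("a"): 0.9586, frozenset("b"): 0.0385, frozenset("ab"): 0.0029}
bet = pignistic(m_fused)
decision = max(bet, key=bet.get)  # maximum Pignistic probability rule -> 'a'
```

Here BetP(a) = 0.9586 + 0.0029/2, so the rule decides for {a}, consistent with the true transmitter in the scenario.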

In this experiment, the runtimes of the three distributed fusion methods are tested. The simulation experiments are performed using MATLAB 2020b on a computer with an AMD 4800H CPU, measuring the execution time of an individual UAV for each fusion method. The runtime of each method over 100 independent trials is illustrated in Fig.11, which shows that the PCEF algorithm stands out with an average runtime of 0.1637 seconds, demonstrating the fastest processing speed. In comparison, the RANSAC-Based and COF-Based methods have average runtimes of 0.4694 seconds and 0.1896 seconds, respectively. Overall, PCEF achieves approximately a 12% improvement in decision accuracy while consuming less time than existing methods.

5 Conclusion

This paper focuses on the distributed computation of evidence credibility and the privacy preservation of agents' raw evidence, and proposes a privacy-preserving distributed credible evidence fusion method called PCEF for collective decision-making. It includes a three-level consensus mechanism to overcome the problems of preference leakage and false positive/negative rates that exist in available distributed evidence fusion methods, and it is proven to be equivalent to CCEF when credibility is accurately given. In EDM neighbor consensus, precise computation of the EDM between adjacent agents without revealing the raw evidence is accomplished by transforming it into two two-party secure computation subtasks. In EDM network consensus, the EDMM elements known by all agents are localized to each agent, and further, all missing elements of the EDMM are estimated with a rank-adaptive matrix completion technology. By doing so, credibility is estimated on the premise of privacy preservation. Leveraging the estimated credibility, agents are instructed to credibly fuse evidence relying on the LAC, where active perturbations are added to protect privacy. Two experiments were conducted to validate the effectiveness of PCEF, whose results illustrate that PCEF successfully approximates CCEF and outperforms the existing methods both in runtime and fusion accuracy.

In this paper, the calculation of neighbored EDMs is designed based on DismpDismp. In the future, we will explore privacy-preserving computation methods for other candidate EDMs. In addition, we will also introduce blockchain technology into PCEF to resist fusion failures caused by malicious attacks, which are very common in distributed swarm applications.

Acknowledgements

This work is supported by the National Natural Science Foundation of China under Grant 61873205. It is also supported by the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University under Grant CX2023063.

References