
Age of $k$-out-of-$n$ Systems on a Gossip Network

Erkan Bayram, Melih Bastopcu, Mohamed-Ali Belabbas, Tamer Başar
Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
Email: {ebayram2,bastopcu,belabbas,basar1}@illinois.edu
Research was supported in part by ARO MURI Grant AG285, NSF-CCF 2106358, ARO W911NF-24-1-0105, and AFOSR FA9550-20-1-0333.
Abstract

We consider an information update system on a gossip network consisting of a single source and $n$ receiver nodes. The source encrypts the information into $n$ distinct keys with version stamps, sending a unique key to each node. To decode the information in a $k$-out-of-$n$ system, each receiver node requires at least $k+1$ different keys with the same version, shared over peer-to-peer connections. Each node determines $k$ based on a given function, ensuring that as $k$ increases, the precision of the decoded information also increases. We consider two different schemes: a memory scheme (in which the nodes keep the source's current and previous encrypted messages) and a memoryless scheme (in which the nodes are allowed to keep only the source's current message). We measure the "timeliness" of information updates by the $k$-keys version age of information. Our work focuses on deriving closed-form expressions for the time average age of information in a heterogeneous random graph under both the memory and memoryless schemes.

I Introduction

In a peer-to-peer sensor or communication network, information spreading can be categorized into single-piece dissemination (when one node shares its information) and multicast dissemination (when all nodes share their information) [1]. Some applications, however, lie between these two categories, in which a node needs to collect a multitude of messages or observations generated at the same time to construct meaningful information. If any $k+1$ out of a total of $n$ messages are sufficient to construct the information, the system is called a $k$-out-of-$n$ system. Such systems find applications in various domains including robotics, cryptography, and communication systems.

For example, in cryptography, the information source can apply a $(k,n)$-Threshold Signature Scheme (TSS) [2] to the information to split it into $n$ distinct keys such that any subset of the $n$ keys of cardinality $k+1$ is sufficient to decode the encrypted message. Furthermore, $k$-out-of-$n$ systems enhance error correction performance in data transmission. For instance, relative positioning of a target object using the Time Difference of Arrival (TDoA) technique [3] requires at least three simultaneously generated Time of Flight measurements, while additional measurements increase the accuracy of the estimate of the object's relative position. Another common example is $(n,k+1)$-MDS error correction codes [4], in which any $k+1$ codewords suffice for message decoding, with additional redundant codewords facilitating error correction.

Motivated by these applications, we consider in this work an information source that generates updates and then encrypts (encodes) them by using a $k$-out-of-$n$ system, e.g., $(k,n)$-TSS. For the sake of simplicity, we assume the source uses $(k,n)$-TSS, but it is worth noting that our results apply to any $k$-out-of-$n$ system, such as the examples discussed above. The source is able to send the encrypted messages to the $n$ receiver nodes instantaneously. Upon receiving these updates, the nodes start to share their local messages with their neighboring nodes to decrypt the source's update. Each node determines the required number of keys $k$ to achieve a given precision rate $\alpha$ on the information for a given precision-rate function $D(k,n,\beta)$, which will be introduced later. The nodes that collect $k$ different keys of the same update from their neighbors can decode the source's information. We study two different settings: ($i$) the nodes have memory, in which case they can hold the current and also the previous keys received from the source, and ($ii$) the nodes do not have memory, in which case they can hold only the keys from the most recent update. For both of these settings, we study the information freshness achieved by the receivers as a result of applying $(k,n)$-TSS. Age of information (AoI) has been introduced as a performance metric to measure the freshness of information in communication networks [5]. Inspired by recent studies, advancements in AoI have been made in various gossip network scenarios, including those examining scalability, optimization, and security [6, 7, 8, 9, 10, 11, 12, 13]. In all these aforementioned works, the source sends its information to gossip nodes without using any encryption.

In this work, we consider for the first time the version age of information in a gossip network where the source encrypts the information. Our contributions are the following:

  • We derive closed-form expressions for the time average of the $k$-keys version age for an arbitrary non-homogeneous network (e.g., independently activated channels).

  • Through numerical results, we show that the time average of the $k$-keys version age for a node in both schemes decreases as the edge activation rate increases, as the number of keys required to decode the information decreases, or as the number of gossip pairs in the network increases.

  • We show that the memory scheme yields a lower time average of the $k$-keys version age than the memoryless scheme. However, the difference between the two schemes diminishes with infrequent source updates, frequent gossip between nodes, or a decrease in $k$ for a fixed number of nodes.

II System Model and Metric

We consider an information updating system consisting of a single source, labeled as node 0, and $n$ receiver nodes. The information at the source is updated at times distributed according to a Poisson counter, denoted by $N_{0}(t)$, with rate $\lambda_{s}$. We refer to the time interval between the $\ell$th and $(\ell+1)$th information updates (messages) as the $\ell$th version cycle and denote it by $U^{\ell}$. Each update is stamped with the current value of the process, $N_{0}(t)=\ell$, and the time of the $\ell$th update is labeled $\tau_{\ell}$ once it is generated. The stamp $\ell$ is called the version stamp of the information.

We assume that the source is able to instantaneously encrypt the information update by using $(k,n)$-TSS once it is generated. To be more precise, we assume that the source puts the information update into $n$ distinct keys and sends one of the unique keys to each receiver node at the time $\tau_{\ell}$, instantaneously. Once a node gets a unique key from the source at $\tau_{\ell}$ for version $\ell$, it is aware that there is new information at the source. Each node wishes its knowledge of the source to be as timely as possible. Timeliness is measured for an arbitrary node $j$ by the difference between the latest version of the message at the source node, $N_{0}(t)$, and the latest version of the message that can be decrypted at node $j$, denoted by $N^{k}_{j}(t)$. This metric has been introduced as the version age of information in [6, 14]. We call it the $k$-keys version age of node $j$ at time $t$ and denote it as

$A^{k}_{j}(t) := N_{0}(t) - N^{k}_{j}(t)$. (2)

Recall that in the $(k,n)$-TSS, a node needs to have $k+1$ keys with the version stamp $\ell$ in order to decrypt the information at the source generated at $\tau_{\ell}$. Since the source sends a unique key to all receiver nodes, a node needs $k$ additional distinct keys with version $\ell$ to decrypt the $\ell$th message. We denote by $\vec{G}=(V,\vec{E})$ the directed graph with node set $V$ and edge set $\vec{E}$. We let $\vec{G}$ represent the communication network according to which nodes exchange information. If there is a directed edge $e_{ij}\in\vec{E}$, we call node $j$ a gossiping neighbor of node $i$. We consider a precision-rate function $D(k,n,\beta): \mathbb{N}\times\mathbb{N}\times\mathbb{R}^{+}\to[0,1]$ that quantifies how precisely a node decodes the status update if it collects $k+1$ out of $n$ symbols, where $\beta$ represents a relevant system parameter. For a given precision rate $\alpha$, the required number of keys for node $j$ is defined as $k_{j}(\beta,\alpha) := \inf\{k\in\mathbb{N}: D(k,n,\beta)\geq\alpha\}$. For simplicity, we assume $D(k,n,\beta) := \sum_{i=0}^{\lfloor k\rfloor}\binom{n}{i}\beta^{i}(1-\beta)^{n-i}$. However, any function that is monotonically increasing in $k\leq n$ for fixed $n$ can serve as a precision-rate function. Here, the rate $\beta$ represents the amount of noise in each key. We call a communication network $(k,n)$-TSS feasible for rate $(\beta,\alpha)$ if node 0 has outbound connections to all other nodes and the smallest in-degree of the receiver nodes is greater than the smallest $k_{j}(\beta,\alpha)$ among all nodes. (In this work, the nodes may have different $k_{j}(\beta,\alpha)$'s. Our results are directly applicable to any arbitrary selection of $k_{j}(\beta,\alpha)$'s; for that reason, we may omit the user index $j$ and use $k$ directly.)
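As a concrete illustration, the threshold $k_{j}(\beta,\alpha)$ under the binomial precision-rate function above can be found by a direct search over $k$. The sketch below is our own Python illustration (the function names `precision` and `required_keys` are not from the paper), exploiting the monotonicity of $D$ in $k$:

```python
from math import comb

def precision(k, n, beta):
    # D(k, n, beta): the CDF of a Binomial(n, beta) evaluated at k,
    # as defined in the text; monotonically increasing in k.
    return sum(comb(n, i) * beta**i * (1 - beta)**(n - i)
               for i in range(k + 1))

def required_keys(n, beta, alpha):
    # k_j(beta, alpha) = inf{k in N : D(k, n, beta) >= alpha};
    # since D is monotone in k, a linear scan finds the infimum.
    for k in range(n + 1):
        if precision(k, n, beta) >= alpha:
            return k
    return n
```

For instance, `required_keys(30, 0.2, 0.9)` returns the smallest $k$ for which the binomial CDF with $n=30$, $\beta=0.2$ reaches $0.9$.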

We consider a $(k,n)$-TSS feasible network in which nodes are allowed to communicate and share with their gossiping neighbors only the keys received from the source. The edge $e_{ij}$ is activated at times distributed according to the Poisson counter $N^{ij}(t)$ with rate $\lambda_{ij}$; once the edge is activated, node $i$ sends a message to node $j$, instantaneously. All counters are pairwise independent. This process occurs under two distinct schemes: with memory and memoryless.

Refer to caption
Figure 1: Sample timeline of the source updates and the edge $e_{ij}$ activations. The last activation of $e_{ij}$ is marked by ($\bullet$) and the previous activations of $e_{ij}$ are marked by ($\circ$).

In the memory scheme, nodes can store (and send) the keys of the previous updates. For example, if the edge $e_{ij}$ is activated at $t$, node $i$ sends node $j$ all the keys that the source has sent to node $i$ since the last activation of $N^{ij}(t)$ before $t$. For the illustration in Fig. 1, node $i$ sends the set of keys with the versions $\{\ell,\ell+1,\ell+2\}$ to node $j$ in the memory scheme. Note that this can be implemented with finite memory in a finite-node network with probability 1; we will provide below the distribution of the number of keys in the message. In the memoryless scheme, nodes have no memory and store only the latest key obtained from the source. If the edge $e_{ij}$ is activated at $t$, node $i$ sends node $j$ only the last key that the source sent to node $i$ before $t$. Referring again to the illustration in Fig. 1, node $i$ in this case sends only the key with the version $\{\ell+2\}$ to node $j$.
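The difference between the two schemes reduces to what a node transmits upon an edge activation. A minimal sketch (the buffer representation and function name are our own illustration, not the paper's protocol specification):

```python
def message_to_send(key_buffer, scheme):
    # key_buffer: version stamps of keys received from the source since
    # the last activation of this edge, in increasing order.
    # memory scheme: forward every stored key;
    # memoryless scheme: forward only the latest key.
    if scheme == "memory":
        return list(key_buffer)
    return key_buffer[-1:]
```

For the Fig. 1 example, a buffer holding versions $\{\ell,\ell+1,\ell+2\}$ yields all three versions under the memory scheme and only $\ell+2$ under the memoryless one.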

Refer to caption
Refer to caption
Figure 2: Sample path of the $k$-keys version age (a) $A^{k}(t)$ for a node with memory and (b) $\bar{A}^{k}(t)$ for a node without memory.

Fig. 2(a) and Fig. 2(b) depict the sample path of the $k$-keys version age process $A^{k}(t)$ (resp. $\bar{A}^{k}(t)$) for a node with memory (resp. without memory). It is worth noting that we associate the notation $\bar{\cdot}$ with the memoryless scheme. In the figures, it is assumed that edge activations and source updates occur at the same times in both schemes. In the memory scheme, we define the service time of the information with version $\ell$ to an arbitrary node $j$, denoted by $S^{\ell}_{j}$, as the duration between $\tau_{\ell}$ and the time when node $j$ can decrypt the information with version $\ell$, as shown in Fig. 2(a). In the memoryless scheme, a node can miss an information update with version $\ell$ if it cannot get $k$ more distinct keys before the next update arrives at $\tau_{\ell+1}$. Thus, for a node without memory, we define $S^{\ell}_{j}$ as the duration between $\tau_{\ell}$ and the earliest time when the node can decrypt information with a version of at least $\ell$. In Fig. 2(b), the node could decode only the information with version $\ell+2$ while missing $\ell$ and $\ell+1$. Therefore, the service times $S^{\ell}$ and $S^{\ell+1}$ end at the same time as the service time $S^{\ell+2}$.

Let $\Delta^{k}_{j}(t)$ be the total $k$-keys version age of node $j$, defined as the $k$-keys version age of node $j$, $A^{k}_{j}(\tau)$, integrated until time $t$. For both schemes, the time average of the $k$-keys version age process of node $j$ is defined as follows:

$\Delta^{k}_{j} := \lim_{t\to\infty}\frac{\Delta^{k}_{j}(t)}{t} = \lim_{t\to\infty}\frac{1}{t}\int_{0}^{t} A^{k}_{j}(\tau)\,d\tau$. (4)

We interchangeably call $\Delta^{k}_{j}$ the version age of $k$-keys for node $j$ on $(\beta,\alpha)$. If nodes in the network have no memory, we denote the version age of $k$-keys for node $j$ by $\bar{\Delta}^{k}_{j}$.

III Age Analysis

In this section, we first introduce the main concepts that will be useful in deriving the age expressions and then provide closed-form expressions for the version age of $k$-keys with memory, $\Delta^{k}_{j}$, and without memory, $\bar{\Delta}^{k}_{j}$.

Consider a set of random variables $\mathcal{Y}=\{Y_{i}\}_{i=1}^{n}$. We denote the $k$th smallest variable in the set $\mathcal{Y}$ by $Y_{(k:n)}$. We call $Y_{(k:n)}$ the $k$th order statistic of $n$ samples ($k=1,2,\cdots,n$) in the set $\mathcal{Y}$. Let $\mathcal{N}^{+}_{j}=\{i\in V: e_{ij}\in\vec{E}\}$ be the set of nodes with outbound connections to node $j$; denote its cardinality by $n_{j}$. Let $X_{ij}$ be the time between successive activations of the edge $e_{ij}$. Then, $X_{ij}$ is an exponential random variable with mean $1/\lambda_{ij}$. Let $\mathcal{X}_{j}$ be the set of random variables $X_{ij}$, $\forall i\in\mathcal{N}^{+}_{j}$. We denote the $k$th order statistic of the set $\mathcal{X}_{j}$ by $\mathcal{X}_{(k:n_{j})}$.

III-A Nodes with Memory

In this section, we provide the closed-form expression for $\Delta^{k}_{j}$ in a $(k,n)$-TSS feasible network for a given rate $(\beta,\alpha)$.

Theorem 1

Let the precision-rate function $D(\cdot,\cdot,\beta)$ be given and assume that $\vec{G}$ is a $(k,n)$-TSS feasible network for a given $(\beta,\alpha)$. Consider an arbitrary node $j$ in $\vec{G}$. The version age of $k$-keys for node $j$ with memory (at $k=k_{j}(\beta,\alpha)$) obeys

$\Delta^{k}_{j} = \frac{\mathbb{E}[\mathcal{X}_{(k:n_{j})}]}{\mathbb{E}[U]}$ w.p. 1, (5)

where $U$ is the interarrival time of the source updates.

It is worth noting that Theorem 1 holds for any $(k,n)$-TSS feasible network with heterogeneous (possibly different) edge activation rates. One can easily obtain an explicit form of $\Delta^{k}_{j}$ for a given node $j$ and $\vec{G}$ by using the c.d.f. of order statistics provided in [15]. We need the following lemma for the proof of Theorem 1.

Lemma 1

If nodes have memory, then the service time of the information with version $\ell$ to node $j$ is the $k$th order statistic of the set of exponential random variables $\mathcal{X}_{j}$.

Proof:  We denote by $\mathcal{X}^{\ell}$ the set of the first activation times of the edges that are connected to the node after $\tau_{\ell}$. For the case $\mathcal{X}^{\ell}_{(k:n_{j})}\leq U^{\ell}$, the result follows trivially from the definitions. Consider the case $\mathcal{X}^{\ell}_{(k:n_{j})}> U^{\ell}$. By definition, a new status update arrives at all nodes at $\tau_{\ell+1}$ ($=\tau_{\ell}+U^{\ell}$). However, the structure of a message sent after $\tau_{\ell+1}$ from one node to another ensures that it contains the key with version stamp $\ell$. Therefore, the service time for the status update is also $\mathcal{X}^{\ell}_{(k:n_{j})}$ (regardless of the fact that a new update arrived). $\blacksquare$

We are now in a position to prove Theorem 1.

Proof:  [Proof of Theorem 1] Let $k$ be $k_{j}(\beta,\alpha)$ for a given $(\beta,\alpha)$. Let $T:=\{\tau_{\ell}\}_{\ell=0}^{\infty}$ be the monotonically increasing sequence of times when status updates occur at the source node 0, with $\tau_{0}:=0$. Let $T^{1}=\{\tau_{\ell_{a}}\}_{a=0}^{\infty}$ be a subsequence of $T$ such that $A^{k}(\tau_{\ell_{a}})=1$. Let $L_{a}$ be the time elapsed between two consecutive successful arrivals of the subsequence $T^{1}$. Let $R_{a}$ be the $k$-keys version age $A^{k}(t)$ integrated over the duration $[\tau_{\ell_{a}},\tau_{\ell_{a+1}})$ at a node. Then, we have:

$\frac{\mathbb{E}[R_{a}]}{\mathbb{E}[L_{a}]} = \frac{\mathbb{E}[\sum_{i=\ell_{a}}^{\ell_{a+1}-1} S^{i}_{j}]}{\mathbb{E}[\sum_{i=\ell_{a}}^{\ell_{a+1}-1} U^{i}]} = \frac{\mathbb{E}[\sum_{i=\ell_{a}}^{\ell_{a+1}-1} \mathcal{X}^{i}_{(k:n_{j})}]}{\mathbb{E}[\sum_{i=\ell_{a}}^{\ell_{a+1}-1} U^{i}]} = \frac{\mathbb{E}[\mathcal{X}_{(k:n_{j})}]}{\mathbb{E}[U]}.$

From Lemma 1, we have $S^{i}_{j}=\mathcal{X}^{i}_{(k:n_{j})}$ for any $i$. It is worth noting that the two random variables $\mathcal{X}^{i}_{(k:n_{j})}$ and $\mathcal{X}^{i+1}_{(k:n_{j})}$ are not independent in the memory scheme if $\ell_{a}\leq i\leq \ell_{a+1}-1$ for any $a\in\mathbb{N}$, but they are identically distributed. Thus, we have $\mathbb{E}[\mathcal{X}_{(k:n_{j})}]=\mathbb{E}[\mathcal{X}^{i}_{(k:n_{j})}]$ and $\mathbb{E}[U]=\mathbb{E}[U^{i}]$ for any $i$. By construction of the sequence $T^{1}$, the pairs $(L_{a},R_{a})$ and $(L_{b},R_{b})$ for any $a\neq b$ are i.i.d. Then, from [16, Thm. 6], we find the time average $\Delta^{k}_{j}$, which completes the proof. $\blacksquare$

The case of a fully connected directed graph $\vec{G}$ on $n+1$ nodes (including the source node) with $\lambda_{ij}=\lambda_{e}/(n-1)$ for all edges $e_{ij}$ in $\vec{E}$ is called a scalable homogeneous network (SHN), and $\lambda_{e}$ is the gossip rate.

Corollary 1

For a SHN, the version age of $k$-keys for a node with memory is:

$\Delta^{k} = \frac{\mathbb{E}[\mathcal{X}_{(k:n-1)}]}{\mathbb{E}[U]} = \frac{(n-1)\lambda_{s}}{\lambda_{e}}\left(\sum_{i=n-k}^{n-1}\frac{1}{i}\right)$ w.p. 1. (6)

Proof:  In a SHN, every node has $n-1$ outbound connections; thus, $n_{j}=n-1$ and the processes $A^{k}_{j}(t)$ are statistically identical for every node $j$. Then, the set $\mathcal{X}_{j}$ is a set of i.i.d. exponential random variables with rate $\frac{\lambda_{e}}{n-1}$ for $j\in V$. From [15], we have $\mathbb{E}[\mathcal{X}_{(k:n-1)}]=\frac{n-1}{\lambda_{e}}(H_{n-1}-H_{n-1-k})$ where $H_{n}=\sum_{i=1}^{n}\frac{1}{i}$. From Theorem 1, the result follows. $\blacksquare$
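Corollary 1 is easy to check numerically. The sketch below is our own Python illustration: it evaluates the closed form via harmonic numbers and, as a sanity check, estimates $\mathbb{E}[\mathcal{X}_{(k:n-1)}]$ by Monte Carlo (function names are ours, not the paper's):

```python
import random

def harmonic(m):
    # H_m = sum_{i=1}^{m} 1/i
    return sum(1.0 / i for i in range(1, m + 1))

def age_with_memory(k, n, lam_s, lam_e):
    # Corollary 1: (n-1) * lam_s / lam_e * (H_{n-1} - H_{n-1-k})
    return (n - 1) * lam_s / lam_e * (harmonic(n - 1) - harmonic(n - 1 - k))

def mc_order_stat_mean(k, n, lam_e, trials=200_000, seed=0):
    # Monte Carlo estimate of E[X_(k:n-1)], the k-th smallest of
    # n-1 i.i.d. Exp(lam_e / (n-1)) inter-activation times.
    rng = random.Random(seed)
    rate = lam_e / (n - 1)
    total = 0.0
    for _ in range(trials):
        total += sorted(rng.expovariate(rate) for _ in range(n - 1))[k - 1]
    return total / trials
```

For instance, with $n=8$, $k=4$, $\lambda_{s}=10$, $\lambda_{e}=50$, the closed form and $\lambda_{s}$ times the Monte Carlo estimate of $\mathbb{E}[\mathcal{X}_{(k:n-1)}]$ agree to within simulation noise.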

Corollary 2

For finite $k$ and a SHN with a countable memory, we have the following scalability result:

$\lim_{n\to\infty}\Delta^{k} = \frac{k\lambda_{s}}{\lambda_{e}}$ w.p. 1.

One can easily take the limit as $n$ goes to $\infty$ in (6) to obtain Corollary 2 from Theorem 1. We now elaborate on the number of keys in the message that is sent over an edge. Let $\mathcal{M}^{i}_{j}$ be the number of keys in the message sent over the edge $e_{ij}$. In each update cycle, either the source is updated before the edge $e_{ij}$ is activated, which increases $\mathcal{M}^{i}_{j}$ by 1, or the edge is activated before a source update, in which case node $i$ sends all its keys to node $j$ and $\mathcal{M}^{i}_{j}$ is reset to 0. From [17, Prob. 9.4.1], $\mathcal{M}^{i}_{j}$ has a geometric distribution with success probability $\lambda_{e_{ij}}/(\lambda_{e_{ij}}+\lambda_{s})$.
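This geometric law can be illustrated by simulation: by memorylessness, each source update wins the race against the next edge activation independently with probability $\lambda_{s}/(\lambda_{e_{ij}}+\lambda_{s})$, so the number of accumulated keys has mean $\lambda_{s}/\lambda_{e_{ij}}$. A minimal sketch (our own code, not from the paper):

```python
import random

def sample_message_size(lam_edge, lam_s, rng):
    # Number of source updates accumulated between two activations of
    # e_ij: count how many Exp(lam_s) update arrivals beat the
    # Exp(lam_edge) activation; by memorylessness each race is fresh.
    count = 0
    while rng.expovariate(lam_s) < rng.expovariate(lam_edge):
        count += 1
    return count
```

Averaging many samples with $\lambda_{s}=10$ and $\lambda_{e_{ij}}=50$ recovers the geometric mean $\lambda_{s}/\lambda_{e_{ij}}=0.2$.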

III-B Nodes without Memory

In this section, we analyze the memoryless scheme, in which a message contains only one key at any time. We provide a closed-form expression for $\bar{\Delta}^{k}_{j}$ in a $(k,n)$-TSS feasible network for a given rate $(\beta,\alpha)$.

Theorem 2

Let the precision-rate function $D(\cdot,\cdot,\beta)$ be given and assume that $\vec{G}$ is a $(k,n)$-TSS feasible network for a given $(\beta,\alpha)$. Consider an arbitrary node $j$ in $\vec{G}$. The version age of $k$-keys for node $j$ without memory (at $k=k_{j}(\beta,\alpha)$) obeys

$\bar{\Delta}^{k}_{j} = \frac{\mathbb{E}[\min(\mathcal{X}_{(k:n_{j})},U)]}{\Pr(\mathcal{X}_{(k:n_{j})}\leq U)\,\mathbb{E}[U]}$ w.p. 1, (7)

where $U$ is the interarrival time of the source updates.
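For a SHN, the ratio in Theorem 2 can be estimated directly by Monte Carlo. The sketch below is our own illustration (names are ours): it draws the $k$th order statistic of the $n-1$ inter-activation times together with an independent version cycle $U$, and forms the empirical version of (7):

```python
import random

def mc_memoryless_age(k, n, lam_s, lam_e, trials=200_000, seed=1):
    # Estimate E[min(X_(k:n-1), U)] and Pr(X_(k:n-1) <= U) for a SHN,
    # where the n-1 inter-activation times are i.i.d. Exp(lam_e/(n-1))
    # and U ~ Exp(lam_s), then form the ratio in Theorem 2.
    rng = random.Random(seed)
    rate = lam_e / (n - 1)
    num, hits = 0.0, 0
    for _ in range(trials):
        xk = sorted(rng.expovariate(rate) for _ in range(n - 1))[k - 1]
        u = rng.expovariate(lam_s)
        num += min(xk, u)
        hits += xk <= u
    return (num / trials) / ((hits / trials) * (1.0 / lam_s))
```

When gossip is much faster than source updates ($\lambda_{e}\gg\lambda_{s}$), this estimate approaches the memory-scheme value of Theorem 1, consistent with the observations in Section IV.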

We need the following Lemma for the proof of Theorem 2.

Lemma 2

Let $\bar{A}^{k}_{j}[\ell]=\bar{A}^{k}_{j}(\tau_{\ell})$ be the version age of information at $\tau_{\ell}$ for node $j$. The sequence $\bar{A}^{k}_{j}[\ell]$ is a homogeneous success-run chain with rate $\mu_{j}=\Pr(\mathcal{X}^{j}_{(k:n_{j})}\leq U)$.

Proof:  We drop the index $j$ and the number of keys $k$ from the notation, as it suffices to prove the result for an arbitrary node and $k>0$. We denote by $\mathcal{X}^{\ell}$ the set of the first arrival times of the edges that are connected to the node after $\tau_{\ell}$. One can easily show that $(\mathcal{X}^{\ell}_{(k:n-1)},U^{\ell})$ and $(\mathcal{X}^{\ell+1}_{(k:n-1)},U^{\ell+1})$ are independent for any $\ell$. This implies that the sequence $\bar{A}[\ell]$ has the Markov property and evolves as follows:

$\bar{A}[\ell+1] = \begin{cases} 1 & \text{if } \mathcal{X}^{\ell}_{(k:n-1)} \leq U^{\ell} \\ \bar{A}[\ell]+1 & \text{if } \mathcal{X}^{\ell}_{(k:n-1)} > U^{\ell} \end{cases}$ (8)

and $\bar{A}[0]$ is 1 by definition. Here, $\bar{A}[\ell]$ is a discrete-time Markov chain on the infinite state space $\mathcal{A}:=\{1,2,\cdots\}$ with initial distribution $\pi=[1,0,\cdots]$ and state transition matrix $P$

$P := \begin{bmatrix} \mu & 1-\mu & & \cdots \\ \mu & & 1-\mu & \cdots \\ \vdots & & & \ddots \end{bmatrix}$

where $\mu=\Pr(\mathcal{X}^{\ell}_{(k:n-1)}\leq U^{\ell})$. Hence, the sequence $\bar{A}[\ell]$ is a homogeneous success-run chain with rate $\mu$. $\blacksquare$

As an easy corollary to Lemma 2, the random variable $\bar{A}_{j}[\ell]$ has a geometric distribution truncated at $\ell+1$ with success rate $\mu_{j}$. Now, we can prove Theorem 2.
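The recursion (8) is straightforward to simulate, and the long-run mean of a success-run chain with rate $\mu$ is $1/\mu$, which gives a quick consistency check. A sketch under our own naming (not code from the paper):

```python
import random

def simulate_success_run(mu, steps, seed=2):
    # Simulate recursion (8): the age resets to 1 with probability mu
    # (enough keys arrive within the version cycle) and otherwise
    # increments by 1.  Returns the empirical long-run average age.
    rng = random.Random(seed)
    a, total = 1, 0
    for _ in range(steps):
        total += a
        a = 1 if rng.random() < mu else a + 1
    return total / steps
```

The stationary distribution of the chain is geometric on $\{1,2,\cdots\}$ with success rate $\mu$, so the empirical average converges to $1/\mu$.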

Proof:  [Proof of Theorem 2] Let $k$ be $k_{j}(\beta,\alpha)$ for given $\beta$ and $\alpha$. Let $T^{1}=\{\tau_{\ell_{a}}\}_{a=0}^{\infty}$ be a subsequence of $T$ such that $\bar{A}^{k}_{j}(\tau_{\ell_{a}})=1$. Let $L_{a}$ be the time elapsed between two consecutive successful arrivals of the subsequence $T^{1}$. Let $R_{a}$ be the $k$-keys version age $\bar{A}^{k}_{j}(t)$ integrated over the duration $[\tau_{\ell_{a}},\tau_{\ell_{a+1}})$ at a node. Then, $R_{a}$ is given by:

$R_{a} = \int_{\tau_{\ell_{a}}}^{\tau_{\ell_{a+1}}}\bar{A}^{k}_{j}(t)\,dt = \sum_{i=\ell_{a}}^{\ell_{a+1}-1}\bar{A}^{k}_{j}[i]\left(\mathds{1}_{E_{i}}\mathcal{X}^{i}_{(k:n_{j})} + \mathds{1}_{E_{i}^{c}}U^{i}\right)$

where $E_{i}=\{\mathcal{X}^{i}_{(k:n_{j})}\leq U^{i}\}$ and $E_{i}^{c}$ denotes its complement. Then, we have:

$\frac{\mathbb{E}[R_{a}]}{\mathbb{E}[L_{a}]} = \frac{\mathbb{E}[\sum_{i=\ell_{a}}^{\ell_{a+1}-1}\bar{A}[i]\min(\mathcal{X}^{i}_{(k:n_{j})},U^{i})]}{\mathbb{E}[\sum_{i=\ell_{a}}^{\ell_{a+1}-1}U^{i}]}$ (10)

The random variables $\mathcal{X}^{i}_{(k:n_{j})}$ and $\mathcal{X}^{i+1}_{(k:n_{j})}$ are identically distributed. Thus, we have $\mathbb{E}[\mathcal{X}_{(k:n_{j})}]=\mathbb{E}[\mathcal{X}^{i}_{(k:n_{j})}]$ and $\mathbb{E}[U]=\mathbb{E}[U^{i}]$ for any $i$. Let $M_{a}:=\bar{A}[\ell_{a+1}-1]$ be the number of information updates that the node has missed between $\tau_{\ell_{a}}$ and $\tau_{\ell_{a+1}}$ (that is, $M_{a}=\ell_{a+1}-\ell_{a}$). Then, we have:

$\frac{\mathbb{E}[R_{a}]}{\mathbb{E}[L_{a}]} = \frac{\mathbb{E}[\min(\mathcal{X}_{(k:n_{j})},U)]\,\mathbb{E}\left[\frac{M_{a}(M_{a}+1)}{2}\right]}{\mathbb{E}[U]\,\mathbb{E}[M_{a}]}$ (13)

From Lemma 2, we know that $\bar{A}[\ell]$ is a success-run chain with rate $\mu$; then $M_{a}$ has a geometric distribution with rate $\mu=\Pr(\mathcal{X}_{(k:n_{j})}\leq U)$ for sufficiently large $a\in\mathbb{N}$. Then, we have:

$\frac{\mathbb{E}[R_{a}]}{\mathbb{E}[L_{a}]} = \frac{\mathbb{E}[\min(\mathcal{X}_{(k:n_{j})},U)]\left(\frac{2-\mu}{2\mu^{2}}+\frac{1}{2\mu}\right)}{\frac{1}{\mu}\mathbb{E}[U]} = \frac{\mathbb{E}[\min(\mathcal{X}_{(k:n_{j})},U)]}{\mu\,\mathbb{E}[U]}$

By construction, the pairs $(L_{a},R_{a})$ and $(L_{b},R_{b})$ for any $a\neq b$ are i.i.d. From [16, Thm. 6], we find the time average $\bar{\Delta}^{k}_{j}$. $\blacksquare$

Corollary 3

For a SHN, the version age of $k$-keys for an individual node without memory is:

$\bar{\Delta}^{k} = \frac{\frac{\lambda_{s}}{\lambda_{e}+\lambda_{s}} + \sum\limits_{j=2}^{k}\left(\frac{\lambda_{s}}{\lambda_{e}\frac{n-j}{n-1}+\lambda_{s}}\prod\limits_{i=1}^{j-1}\frac{\lambda_{e}\frac{n-i}{n-1}}{\lambda_{e}\frac{n-i}{n-1}+\lambda_{s}}\right)}{\Pr(\mathcal{X}_{(k:n-1)}\leq U)}$ w.p. 1 (15)

Proof:  Let $Y^{k}$ be $\min\{\mathcal{X}_{(k:n-1)},U\}$. Let $\tilde{X}_{i:n-1}$ be the difference between the $i$th and $(i-1)$th order statistics of the set $\mathcal{X}$ for $i>1$, and let $\tilde{X}_{1:n-1}=\mathcal{X}_{(1:n-1)}$. Let $\tilde{Y}_{i}=\min\{\tilde{X}_{i:n-1},U^{\ell}\}$. One can see that, by the memoryless property, $\tilde{X}_{i:n-1}$ corresponds to the minimum of a set of $(n-i)$ i.i.d. exponential random variables (r.v.) with mean $\frac{n-1}{\lambda_{e}}$. Thus, the r.v. $\tilde{X}_{i:n-1}$ is also an exponential r.v. with mean $\frac{1}{\lambda_{e}\mathcal{B}(n,i)}$, where $\mathcal{B}(n,i)=\frac{n-i}{n-1}$. Thus, the r.v. $\tilde{Y}_{i}$ is the minimum of two independent exponentially distributed r.v.'s. From [17, Prob. 9.4.1], the r.v. $\tilde{Y}_{i}$ is exponentially distributed with mean $\frac{1}{\lambda_{e}\mathcal{B}(n,i)+\lambda_{s}}$ and we have:

$\Pr(U^{\ell}>\tilde{X}_{i:n-1}) = \frac{\lambda_{e}\mathcal{B}(n,i)}{\lambda_{e}\mathcal{B}(n,i)+\lambda_{s}}$ (17)

for $i\geq 1$. From the law of total expectation and the memoryless property of $U^{\ell}$, we have the following:

$\mathbb{E}[Y^{k}] = \Pr(U^{\ell}\leq\tilde{X}_{1:n-1})\,\mathbb{E}[\tilde{Y}_{1}] + \Pr(U^{\ell}\leq\tilde{X}_{2:n-1})\Pr(U^{\ell}>\tilde{X}_{1:n-1})\left(\mathbb{E}[\tilde{Y}_{1}]+\mathbb{E}[\tilde{Y}_{2}]\right) + \ldots + \prod_{i=1}^{k-1}\Pr(U^{\ell}>\tilde{X}_{i:n-1})\left(\sum_{i=1}^{k}\mathbb{E}[\tilde{Y}_{i}]\right)$ (19)

If we rearrange the sum above and plug $\mathbb{E}[\tilde{Y}_{i}]$ and $\Pr(U^{\ell}>\tilde{X}_{i:n-1})$ from (17) into (19), we obtain Corollary 3. $\blacksquare$
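The closed form in Corollary 3 can be evaluated directly; below is a sketch in our own notation, using $\mathcal{B}(n,i)=(n-i)/(n-1)$ from the proof. As a sanity check, for $k=1$ the expression collapses to $\lambda_{s}/\lambda_{e}$, which coincides with Corollary 1 at $k=1$:

```python
def age_memoryless(k, n, lam_s, lam_e):
    # Corollary 3 for a SHN.  b(i) = lam_e * B(n, i) is the rate of
    # the i-th inter-order-statistic gap X~_{i:n-1} from the proof.
    def b(i):
        return lam_e * (n - i) / (n - 1)

    # Numerator: lam_s * E[min(X_(k:n-1), U)], accumulated gap by gap;
    # gap j contributes only if the first j-1 gaps all beat U.
    num = lam_s / (b(1) + lam_s)
    for j in range(2, k + 1):
        term = lam_s / (b(j) + lam_s)
        for i in range(1, j):
            term *= b(i) / (b(i) + lam_s)
        num += term

    # Denominator: Pr(X_(k:n-1) <= U), i.e., all k gaps beat U.
    prob = 1.0
    for i in range(1, k + 1):
        prob *= b(i) / (b(i) + lam_s)
    return num / prob
```

As $\lambda_{e}\to\infty$ with $k$ and $n$ fixed, this expression approaches the memory-scheme value in (6), matching the discussion of the memory-critical gossip rate in Section IV.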

IV Numerical Results and Conclusion

In this section, we compare empirical results obtained from simulations to our analytical results.

Refer to caption
Refer to caption
Figure 3: (a) $\Delta^{k}$ and (b) $\bar{\Delta}^{k}$ as a function of $\lambda_{e}$ on a SHN when $\lambda_{s}=10$. Solid lines in Fig. 3(a) and Fig. 3(b) show the theoretical $\Delta^{k}$ and $\bar{\Delta}^{k}$, respectively. Simulation results for $(2,6)$, $(2,8)$, $(4,6)$, $(4,8)$ TSS are marked by $\blacklozenge$, $\bullet$, $\blacksquare$, $\times$, respectively.

We consider a SHN. Fig. 3 depicts the simulation and theoretical results for both $\Delta^{k}$ and $\bar{\Delta}^{k}$ as a function of the gossip rate $\lambda_{e}$ when $\lambda_{s}=10$. The simulation results for $\Delta^{k}$ and $\bar{\Delta}^{k}$ align closely with the theoretical calculations provided in Theorems 1 and 2, respectively. In both schemes, we observe that the version age of $k$-keys for a node decreases as the gossip rate $\lambda_{e}$ rises, while keeping the network size $n$ and $\lambda_{s}$ constant. We observe, in Fig. 3(a) and Fig. 3(b), that both $\Delta^{k}$ and $\bar{\Delta}^{k}$ increase with $k$ for fixed $\lambda_{e}$ and $n$, and they decrease as $n$ increases for fixed $\lambda_{e}$ and $k$. Also, we observe, in Fig. 3, that $\Delta^{k}$ is less than $\bar{\Delta}^{k}$ for the same values of $\lambda_{e}$, $\lambda_{s}$, and $(k,n)$. Fig. 4 depicts $\Delta^{k}$ as a function of the number of nodes $n$ for various gossip rates $\lambda_{e}$. We observe, in Fig. 4, that $\Delta^{k}$ converges to $\frac{k\lambda_{s}}{\lambda_{e}}$ as $n$ grows, which aligns with Corollary 2. Fig. 5(b) depicts $\bar{\Delta}^{k}$ and $\Delta^{k}$ as functions of $(\beta,\alpha)$ for the $D(k,n,\beta)$ given in the introduction. We observe in Fig. 5(b) that both $\bar{\Delta}^{k}$ and $\Delta^{k}$ increase as the required precision $\alpha$ increases. Additionally, for the same precision rate $\alpha$, higher measurement noise $\beta$ results in greater $\bar{\Delta}^{k}$ and $\Delta^{k}$.

To quantify the value of memory in a network, we define the memory-critical gossip rate of a $(k,n)$-TSS network for a margin $\varepsilon$, denoted by $\lambda^{\varepsilon}(k,n)$, as the smallest gossip rate $\lambda_{e}$ such that $|\Delta^{k}-\bar{\Delta}^{k}|\leq\varepsilon$. We first observe, in Fig. 5(a), that $\lambda^{\varepsilon}(k,n)$ increases exponentially with $k$ for fixed $n$. Now, consider the event $E:=\{\mathcal{X}_{(k:n-1)}\leq U\}$; one can easily see that $\Pr(E)$ converges to 1 as $\lambda_{e}$ increases (frequent gossiping between nodes) or $k$ decreases for fixed $n$. In this case, the expectation $\mathbb{E}[\min(\mathcal{X}_{(k:n-1)},U)]\to\mathbb{E}[\mathcal{X}_{(k:n-1)}]$ in (7). This implies that $\bar{\Delta}^{k}$ approaches $\Delta^{k}$ as $\Pr(E)$ goes to 1. These observations show that $\bar{\Delta}^{k}$ approaches $\Delta^{k}$ as $\lambda_{e}$ increases or $k$ decreases.

Refer to caption
Figure 4: $\Delta^{k}$ as a function of $n$ when $k=10$ and $\lambda_{s}=15$. Solid lines show the theoretical $\Delta^{k}$, while simulation results for $\lambda_{e}\in\{50,100,150\}$ are marked by $\blacklozenge$, $\bullet$, $\times$, respectively. Dashed lines show the theoretical asymptotic value of $\Delta^{k}$ in $n$.
Refer to caption
Refer to caption
Figure 5: (a) The memory-critical gossip rate $\lambda^{\varepsilon}(k,30)$ as a function of $k$ for $\varepsilon\in\{10^{-2},10^{-1},1\}$ and (b) $\bar{\Delta}^{k}$, $\Delta^{k}$, and $k(\beta,\alpha)$ as a function of $\alpha\in[0,1]$ for $\beta\in\{0.2,0.5,0.8\}$ when $\lambda_{s}=15$ and $n=30$.

Conclusion. In this work, we have provided closed-form expressions for the version age of $k$-keys for both the memory and memoryless schemes. In our setting, nodes send only the keys received from the source node, ensuring that no set of messages on the channels is sufficient to decrypt the message at any time. An alternative approach might be to consider the case where nodes can also share keys that they received from other nodes.

References

  • [1] E. Bayram, M.-A. Belabbas, and T. Başar, “Vector-valued gossip over $w$-holonomic networks,” arXiv preprint arXiv:2311.04455, 2023.
  • [2] M.-S. Hwang, E. J.-L. Lu, and I.-C. Lin, “A practical $(t,n)$ threshold proxy signature scheme based on the RSA cryptosystem,” IEEE Trans. on Knowledge and Data Engineering, vol. 15, no. 6, pp. 1552–1560, 2003.
  • [3] F. Gustafsson and F. Gunnarsson, “Positioning using time-difference of arrival measurements,” in Proc. IEEE ICASSP’03, vol. 6. IEEE, 2003, pp. VI–553.
  • [4] B. Buyukates and S. Ulukus, “Timely distributed computation with stragglers,” IEEE Trans. on Communications, vol. 68, no. 9, pp. 5273–5282, 2020.
  • [5] S. K. Kaul, R. D. Yates, and M. Gruteser, “Real-time status: How often should one update?” in IEEE Infocom, March 2012.
  • [6] R. D. Yates, “The age of gossip in networks,” in IEEE ISIT, July 2021, pp. 2984–2989.
  • [7] B. Buyukates, M. Bastopcu, and S. Ulukus, “Version age of information in clustered gossip networks,” IEEE Journal on Selected Areas in Information Theory, vol. 3, no. 1, pp. 85–97, March 2022.
  • [8] M. Bastopcu, B. Buyukates, and S. Ulukus, “Gossiping with binary freshness metric,” in 2021 IEEE Globecom, December 2021.
  • [9] M. Bastopcu, S. R. Etesami, and T. Başar, “The role of gossiping in information dissemination over a network of agents,” Entropy, vol. 26, no. 1, 2024.
  • [10] P. Mitra and S. Ulukus, “Age-aware gossiping in network topologies,” IEEE Trans. on Communications, 2024.
  • [11] E. Delfani and N. Pappas, “Version age-optimal cached status updates in a gossiping network with energy harvesting sensor,” in WiOpt.   IEEE, 2023, pp. 143–150.
  • [12] P. Kaswan and S. Ulukus, “Timestomping vulnerability of age-sensitive gossip networks,” IEEE Trans. on Communications, pp. 1–1, January 2024.
  • [13] ——, “Choosing outdated information to achieve reliability in age-based gossiping,” in IEEE, ICC 2024.   IEEE, 2024, pp. 2125–2130.
  • [14] B. Abolhassani, J. Tadrous, A. Eryilmaz, and E. Yeh, “Fresh caching for dynamic content,” in IEEE Infocom, May 2021, pp. 1–10.
  • [15] H. A. David and H. N. Nagaraja, Order statistics.   Wiley & Sons, 2004.
  • [16] R. G. Gallager, “Discrete stochastic processes,” Journal of the Operational Research Society, vol. 48, no. 1, pp. 103–103, 1997.
  • [17] R. D. Yates and D. J. Goodman, Probability and stochastic processes: a friendly introduction for electrical and computer engineers.   Wiley & Sons, 2014.