
Age of Computing: A Metric of Computation Freshness in Communication and Computation Cooperative Networks

Xingran Chen, Member, IEEE, Yi Zhuang, and Kun Yang*, Fellow, IEEE. Xingran Chen and Yi Zhuang are with the School of Information and Communication Engineering, University of Electronic Science and Technology of China, Sichuan, 611731, China (E-mail: xingranc@ieee.org, yizhuang265@163.com). Kun Yang is with the School of Computer Science and Electronic Engineering, University of Essex, Essex, CO4 3SQ, U.K. (E-mail: kunyang@essex.ac.uk). *: Corresponding author.
Abstract

In communication and computation cooperative networks (3CNs), timely computation is crucial but not always guaranteed. There is a strong demand for a computational task to be completed within a given deadline. The time taken involves processing time, transmission time, and the impact of the deadline. However, a measure of such timeliness in 3CNs is lacking. To address this gap, we propose the novel concept of Age of Computing (AoC) to quantify computation freshness in 3CNs. Built on task timestamps, AoC is a universally applicable metric for dynamic and complex real-world 3CNs. We evaluate AoC under two types of deadlines: (i) the soft deadline, where tasks delayed beyond the deadline can still be fed back to the source, but with additional latency; (ii) the hard deadline, where tasks delayed beyond the deadline are discarded. We investigate AoC in two distinct networks. In point-to-point, time-continuous networks, tasks are processed sequentially using a first-come, first-served discipline. We derive a general expression for the time-average AoC under both deadlines. Utilizing this expression, we obtain a closed-form solution for M/M/1-M/M/1 systems under the soft deadline and propose an accurate approximation under the hard deadline. These results are further extended to G/G/1-G/G/1 systems. Additionally, we introduce the concept of computation throughput, derive its general expression and an approximation, and explore the trade-off between freshness and throughput. In multi-source, time-discrete networks, tasks are scheduled for offloading to a computational node. For this scenario, we develop AoC-based Max-Weight policies for real-time scheduling under both deadlines, leveraging a Lyapunov function to minimize its drift.

Index Terms — Age of computing, computation freshness, communication and computation cooperative networks, time-average AoC, Max-Weight policy.

I Introduction

In the 6G era, emerging applications such as the Internet of Things (IoT), smart cities, and cyber-physical systems have significant demands for communication and computation cooperative networks (3CNs), which provide faster data processing, efficient resource utilization, and enhanced security [1]. 3CNs originated from mobile edge computing (MEC) technology, which aims to complete computation-intensive and latency-critical tasks by distributing billions of edge devices at the network edge [2]. Besides MEC, 3CNs include fog computing and computing power networks. Fog computing can be regarded as a generalization of MEC, where the definition of edge devices is broader than that in MEC [3]. Computing power networks refer to a broader concept of distributed computing networks, encompassing edge, fog, and cloud computing [4]. In all 3CNs, there is no established metric for capturing the freshness of computation. Recently, a notable metric called the Age of Information (AoI) has been proposed to describe information freshness in communication networks [5]. The AoI metric has broad applications in various communication and control contexts, including random access protocols [6], multiaccess protocols [7], remote estimation [8], wireless-powered relay-aided communication networks [9], and network coding [10, 11].

However, applying AoI in 3CNs is inappropriate because it only addresses communication latency and does not account for computation latency. In this paper, we propose a novel metric called the age of computing (AoC) to capture computation freshness in 3CNs. A primary requirement in 3CNs is that computational tasks are processed as promptly as possible and within a maximum acceptable deadline. The core idea of AoC is to combine communication delay, computation delay, and the impact of the maximum acceptable deadline. Communication and computation delays are caused by the transmission and processing of computational tasks, while the impact of the maximum acceptable deadline accounts for additional delays when task delays exceed the users’ acceptable threshold.

I-A Related Work

All related papers can be divided into two broad categories. The first category investigates information freshness in edge and fog computing networks. The second category focuses on freshness-oriented metrics.

I-A1 Information Freshness in Edge/Fog Computing Networks

In edge and fog computing networks, tasks or messages typically go through two phases: the transmission phase and the processing phase. The basic mathematical model for these networks is established as two-hop networks and tandem queues.

The first study to focus on AoI for edge computing applications is [12], which primarily calculated the average AoI. As an early work, [13] established an analytical framework for the peak age of information (PAoI), modeling the computing and transmission process as a tandem queue. The authors derived closed-form expressions and proposed a derivative-free algorithm to minimize the maximum PAoI in networks with multiple sensors and a single destination. Subsequently, [14] modeled the communication and computation delays as a generic tandem of two first-come, first-served queues, and analytically derived closed-form expressions for PAoI in M/M/1-M/D/1 and M/M/1-M/M/1 tandems. Building on [14], [15] went further by considering both average and peak AoI in general tandems with packet management. The packet management took two forms: no data buffer, and a one-unit data buffer with a last-come, first-served discipline. This work illustrated how computation and transmission times could be traded off to optimize AoI, revealing a tradeoff between average AoI and average peak AoI. Expanding on [15], [16] explored the information freshness of Gauss-Markov processes, defined as the process-related timeliness of information. The authors derived closed-form expressions for information timeliness at both the edge and fog tiers. These analytical formulas explicitly characterize the dependency among task generation, transmission, and execution, serving as objective functions for system optimization. In [17], a multi-user MEC network where a base station (BS) transmits packets to user equipment was investigated. The study derived the average AoI for two computing schemes—local computing and edge computing—under a first-come, first-served discipline.

There are other relevant works such as [18, 19, 20, 21]. [18] investigated information freshness in MEC networks from a multi-access perspective, where multiple devices use NOMA to offload their computing tasks to an access point integrated with an MEC server. Leveraging tools from queuing theory, the authors proposed an iterative algorithm to obtain the closed-form solution for AoI. In [19], an F-RAN with multiple senders, multiple relay nodes, and multiple receivers was considered. The authors analyzed the AoI performance and proposed optimal oblivious and non-oblivious policies to minimize the time-average AoI. [20] and [21] explored AoI performance in MEC networks using different mathematical tools. [20] considered MEC-enabled IoT networks with multiple source-destination pairs and heterogeneous edge servers. Using game-theoretical analysis, they proposed an age-optimal computation-intensive update scheduling strategy based on Nash equilibrium. Reinforcement learning is also a powerful tool in this context. [21] proposed a computation offloading method based on a directed acyclic graph task model, which models task dependencies. The algorithm combined the advantages of deep Q-network, double deep Q-network, and dueling deep Q-network algorithms to optimize AoI.

I-A2 Freshness-oriented Metrics

The AoI metric, introduced in [5], measures the freshness of information at the receiver side. AoI depends on both the frequency of packet transmissions and the delay experienced by packets in the communication network [6]. When the communication rate is low, the receiver’s AoI increases, indicating stale information due to infrequent packet transmissions. However, even with frequent transmissions, if the system design imposes significant delays, the receiver’s information will still be stale. Following the introduction of AoI, several related metrics were proposed to capture network freshness from different perspectives. Peak AoI, introduced in [7], represents the worst-case AoI. It is defined as the maximum time elapsed since the preceding piece of information was generated, offering a simpler and more mathematically tractable formulation.

Nearly simultaneously, the age of synchronization (AoS) [22] and the effective age [23] were proposed. AoS, as a complementary metric to AoI, drops to zero when the transmitter has no packets to send and grows linearly with time until a new packet is generated [22]. The effective age metrics in [23] include sampling age, tracking the age of samples relative to ideal sampling times, and cumulative marginal error, tracking the total error from the reception of the latest sample to the current time.

Later, the age of incorrect information (AoII) [24] and the urgency of information (UoI) [25] were introduced. AoII addresses the shortcomings of both AoI and conventional error penalty functions by extending the concept of fresh updates to “informative” updates—those that bring new and correct information to the monitor side [24]. UoI, a context-based metric, evaluates the timeliness of status updates by incorporating time-varying context information and dynamic status evolution [25], which enables analysis of context-based adaptive status update schemes and more effective remote monitoring and control.

Despite the variety of freshness-oriented metrics proposed, none are applicable for capturing computation freshness in 3CNs. None of these metrics simultaneously address the impact of both communication and computation delays, as well as the maximum acceptable deadline. Motivated by the need for a metric capturing freshness in 3CNs, we propose the AoC metric in this paper.

I-B Contributions

This paper introduces the Age of Computing (AoC), a novel metric designed to capture computation freshness in 3CNs (see Definition 3). The AoC concept is built on tasks’ arrival and completion timestamps, which makes it applicable to dynamic and complex real-world 3CNs. The AoC is defined under two types of deadlines: (i) soft deadline, i.e., if the task’s delay exceeds the deadline, the outcome is still usable but incurs additional latency; (ii) hard deadline, i.e., if the task’s delay exceeds the deadline, the outcome is discarded, and the task is considered invalid.

We then theoretically analyze the time-average AoC under both types of deadlines in a linear topology comprising a source, a transmitter, a receiver, and a computational node. Tasks arrive at the source at a constant rate and immediately enter a communication queue at the transmitter. After being transmitted/offloaded to the receiver, tasks are forwarded to the computational node for processing in a computation queue. The queuing discipline considered is first-come, first-served.

Under the soft deadline, we first derive a general expression for the average AoC (see Theorem 1). We then study a fundamental scenario where the task arrival process follows a Poisson distribution, and the transmission and computation delays adhere to exponential distributions—forming an M/M/1-M/M/1 system. In this case, we derive a closed-form expression for the average AoC (see Theorem 2). Subsequently, we extend our analysis to a more general scenario where the task arrival process follows a general distribution, and the transmission and computation delays also follow general distributions—resulting in a G/G/1-G/G/1 system. For this case, we derive the expression for the average AoC as well (see Theorem 3).

Under the hard deadline, we first derive a general expression for the average AoC (see Theorem 4). This expression involves intricate correlations, making it highly challenging to obtain a closed-form solution, even for an M/M/1-M/M/1 system. To address this, we provide an approximation of the average AoC and demonstrate its accuracy when the communication and computation rates are significantly larger than the task generation rate (see Theorem 5). We also define computation throughput as the number of tasks successfully fed back to the source per time slot (see Definition 4). A general expression for computation throughput is derived (see Lemma 1), along with an approximation (see Proposition 1). Furthermore, we explore the trade-off between computation freshness and computation throughput (see Lemma 2). In the end, we extend the accurate approximations for both the average AoC and computation throughput to a more generalized scenario involving G/G/1-G/G/1 systems (see Theorem 6 and Proposition 2).

Finally, we apply the AoC concept to develop optimal real-time scheduling strategies focused on enhancing computation freshness in multi-source networks. Recognizing the importance of recent computational tasks, we adopt a preemptive scheduling rule. For real-time scenarios, we propose AoC-based Max-Weight policies for both deadlines (see Algorithms 1 and 2). By constructing a Lyapunov function (see (28)) and its drift (see (29)), we show that these policies minimize the Lyapunov drift in each time slot (see Propositions 3 and 4).

The remaining parts of this paper are organized as follows. Section II proposes and discusses the novel concept AoC. Section III and Section IV derive theoretical results for the AoC under the soft and hard deadlines, respectively. Section V designs real-time AoC-based scheduling policies in multi-source networks. We numerically verify our theoretical results in Section VI and conclude this work in Section VII.

II Age of Computing

In this section, we introduce the mathematical formulation of the novel concept, Age of Computing (AoC), which quantifies the freshness of computations within 3CNs. Consider a line topology comprising a source, a transmitter, a receiver, and a computational node (sink), as depicted in Fig. 1. In this topology, both the receiver and the computational node are equipped with caching capabilities. The process begins with the source generating/offloading computational tasks, which are immediately available at the transmitter and enter a communication queue awaiting transmission. Once transmitted, the tasks are received and cached by the receiver, where they await processing. Afterward, tasks are handed off to the computational node (sink), where they are processed and eventually depart from the system.

Refer to caption
Figure 1: A line topology consisting of a source, a transmitter, a receiver, and a computational node (sink).

II-A Definition

In the network, the queuing discipline follows a first-come, first-served approach. For any task $k$, let $\tau_{k}$ denote the arrival time at the source, $\tau_{k}^{\prime\prime}$ the time when the computation starts at the computational node, and $\tau_{k}^{\prime}$ the time when the computation completes. The delay of task $k$ is then defined as $\tau_{k}^{\prime}-\tau_{k}$. A task $k$ is considered valid if its outcome can be fed back to the source (the feedback is an acknowledgment message with a small bit size, commonly used in communication systems to confirm the successful receipt of data); otherwise, it is deemed invalid.

Definition 1.

(informative, processing, and latest tasks). The index of the informative task during $[0,t]$, denoted by $N(t)$, is given by

$$N(t)=\max\{k\,|\,\tau_{k}^{\prime}\leq t,\ \text{and task }k\text{ is valid}\}. \quad(1)$$

The index of the processing task during $[0,t]$, denoted by $P(t)$, is given by

$$P(t)=\max\{k\,|\,\tau_{k}^{\prime\prime}\leq t\}. \quad(2)$$

The index of the latest task during $[0,t]$, denoted by $G(t)$, is given by

$$G(t)=\max\{k\,|\,\tau_{k}^{\prime}\leq t\}. \quad(3)$$

Based on Definition 1, an informative task is a valid task that brings the latest information. The processing task is the task currently being processed, and the latest task is the last completed one. The informative task and the latest task are not necessarily the same, i.e., $G(t)\geq N(t)$; they coincide ($G(t)=N(t)$) only when the latest task is also informative. At any time $t$, if the computational node is idle (no task is being processed), then the processing task is exactly the latest one, i.e., $P(t)=G(t)$. If the computational node is occupied (a task is being processed), then $P(t)=G(t)+1$. In summary, we have $P(t)\geq G(t)\geq N(t)$.
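For concreteness, the three index processes can be computed directly from recorded timestamps. The following Python sketch is ours, not the paper's; the task data in the usage note are illustrative:

```python
import bisect

def task_indices(t, completions, starts, valid):
    """Compute N(t), P(t), G(t) of Definition 1 from sorted timestamp lists.

    completions[k-1] = tau_k'  (computation completion of task k)
    starts[k-1]      = tau_k'' (computation start of task k)
    valid[k-1]       = True if task k is valid
    Tasks are 1-indexed; index 0 means "no such task yet".
    """
    # G(t): latest task whose computation completed by time t.
    G = bisect.bisect_right(completions, t)
    # N(t): latest *valid* completed task, so N(t) <= G(t).
    N = max((k for k in range(1, G + 1) if valid[k - 1]), default=0)
    # P(t): latest task whose computation has started by time t, so
    # P(t) equals G(t) (node idle) or G(t)+1 (node busy).
    P = bisect.bisect_right(starts, t)
    return N, P, G
```

For starts [1, 3, 7], completions [2, 5, 9], and validity flags (valid, invalid, valid), querying t = 8 gives (N, P, G) = (1, 3, 2): task 3 is in service, so P(t) = G(t) + 1.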

Definition 2.

A maximum acceptable deadline $w>0$ can be categorized into two types:

  • Soft deadline: A task is considered valid even if its delay $\tau_{k}^{\prime}-\tau_{k}>w$.

  • Hard deadline: A task is considered invalid if its delay $\tau_{k}^{\prime}-\tau_{k}>w$.

We define the index of the latest task within $[0,t]$ whose delay does not exceed the threshold $w$ as

$$A(t)=\max\{k\,|\,\tau_{k}^{\prime}\leq t,\ \tau_{k}^{\prime}-\tau_{k}\leq w\}. \quad(4)$$

Under the soft deadline, we have $A(t)\leq N(t)\equiv G(t)$, while under the hard deadline, we have $A(t)\equiv N(t)\leq G(t)$. The computation freshness at the computational node is formally defined as follows.
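The on-time index $A(t)$ in (4) admits the same timestamp-based computation as the other index processes; a minimal sketch of ours, with illustrative data in the test:

```python
import bisect

def latest_on_time(t, arrivals, completions, w):
    """A(t) in Eq. (4): the latest task completed by time t whose
    delay tau_k' - tau_k does not exceed the deadline w (0 if none)."""
    G = bisect.bisect_right(completions, t)  # G(t)
    return max((k for k in range(1, G + 1)
                if completions[k - 1] - arrivals[k - 1] <= w), default=0)
```

With arrivals [0, 1, 2] and completions [2, 5, 9] the task delays are 2, 4, 7; for w = 3 only task 1 is on time, so A(t) = 1 while G(t) = 3 at t = 10, consistent with A(t) ≤ G(t).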

Definition 3.

(AoC). Under the soft deadline, the age of computing (AoC) is defined as the random process

$$c_{\text{soft}}(t)=t-\tau_{N(t)}+1_{\{P(t)>G(t)\}}\cdot\frac{A(t)}{G(t)}\cdot\big(t-\tau_{P(t)}-w\big)^{+}. \quad(5)$$

Under the hard deadline, the AoC is defined as the random process

$$c_{\text{hard}}(t)=t-\tau_{N(t)}. \quad(6)$$

The key distinction in Definition 3 lies in how computation freshness is assessed under hard and soft deadlines:

  • Under the hard deadline, computation freshness in (6) is determined solely by the informative tasks, as tasks with delays exceeding the threshold $w$ are deemed invalid.

  • Under the soft deadline, computation freshness in (5) accounts for both the informative tasks (i.e., $t-\tau_{N(t)}$) and an additional latency incurred if the task being processed experiences a significant instantaneous delay (i.e., $1_{\{P(t)>G(t)\}}\frac{A(t)}{G(t)}(t-\tau_{P(t)}-w)^{+}$). In particular, the additional latency includes three components:

    • The indicator function $1_{\{P(t)>G(t)\}}$ denotes whether a task is being processed at the computational node.

    • The ratio $\frac{A(t)}{G(t)}$ represents the frequency of task delays exceeding the deadline up to time $t$, effectively quantifying the level/frequency of conflict with respect to the deadline (up to time $t$).

    • The term $(t-\tau_{P(t)}-w)^{+}$ quantifies the amount by which the delay of the task currently being processed exceeds the threshold.

    • This additional latency, calculated by the computational node, vanishes as soon as the current task is completed.

The concept of information freshness, known as AoI [5], reflects the cumulative delay over a given time period. Building on this foundation, AoC extends the notion to computational tasks, representing the cumulative delay associated with their processing. AoC provides a more comprehensive understanding of the timeliness of computations in a system. While related, AoC and AoI are fundamentally distinct in their definitions and physical interpretations: AoC evaluates the freshness of the informative task (possibly including an additional latency incurred by the task being processed), whereas AoI focuses on the freshness of the latest task.
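To make Definition 3 concrete, the sketch below evaluates both AoC variants on a recorded sample path. It is our illustration, not the paper's code; it uses the identities $N(t)\equiv G(t)$ (soft) and $N(t)\equiv A(t)$ (hard), and adopts the convention $c_x(t)=t$ before any valid task completes:

```python
import bisect

def aoc(t, arrivals, starts, completions, w, deadline="soft"):
    """Evaluate c_soft(t) or c_hard(t) of Definition 3 on one sample path."""
    G = bisect.bisect_right(completions, t)  # latest completed task
    P = bisect.bisect_right(starts, t)       # latest started task
    A = max((k for k in range(1, G + 1)      # latest on-time task
             if completions[k - 1] - arrivals[k - 1] <= w), default=0)
    if deadline == "hard":
        # Only on-time tasks are valid, so N(t) = A(t).
        return t - arrivals[A - 1] if A else t
    # Soft deadline: every completed task is valid, so N(t) = G(t).
    base = t - arrivals[G - 1] if G else t
    extra = 0.0
    if P > G and G > 0:                      # a task is in service
        extra = (A / G) * max(t - arrivals[P - 1] - w, 0.0)
    return base + extra
```

For arrivals [0, 1, 2], starts [1, 3, 7], completions [2, 5, 9], and w = 3, at t = 8 task 3 is in service with instantaneous delay 6 > w, so the soft AoC carries the extra term (A/G)·(8 − 2 − 3) = 1.5 on top of the base age 7.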

Since the AoC $c_{x}(t)$ with $x\in\{\text{soft},\text{hard}\}$ captures the computation freshness at time $t$, we often consider the time-average AoC over a period to measure the computation freshness of a network. As $T\to\infty$, we define the average AoC of a network as

$$\Theta_{x}\triangleq\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}c_{x}(t)\,dt,\quad x\in\{\text{soft},\text{hard}\}. \quad(7)$$
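Numerically, the limit in (7) can be approximated on a finite horizon by a Riemann sum; a small sketch of ours (the step size is an arbitrary accuracy/cost choice):

```python
def time_average(c, T, dt=1e-3):
    """Approximate (1/T) * integral_0^T c(t) dt with a left Riemann sum."""
    n = int(round(T / dt))
    return sum(c(i * dt) for i in range(n)) * dt / T
```

In practice `c` would be the sample-path AoC; for a sanity check, the time average of c(t) = t over [0, T] is T/2.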

Finally, we summarize the important notations and their descriptions in the paper in Table I.

$k$: the index of computational tasks
$w$: the threshold (maximum acceptable deadline)
$\tau_{k}$: the arrival time of task $k$ at the source
$\tau_{k}^{\prime\prime}$: the time when the computation starts
$\tau_{k}^{\prime}$: the time when the computation completes
$N(t)$: the index of the informative task during $[0,t]$
$P(t)$: the index of the processing task during $[0,t]$
$G(t)$: the index of the latest task during $[0,t]$
$A(t)$: the index of the latest task with a delay $\leq w$ during $[0,t]$
$c_{\text{soft}}(t)$: the AoC under the soft deadline at time $t$
$c_{\text{hard}}(t)$: the AoC under the hard deadline at time $t$
$\Theta_{\text{soft}}$: the time-average AoC under the soft deadline
$\Theta_{\text{hard}}$: the time-average AoC under the hard deadline
$\mu_{t}$: the communication rate
$\mu_{c}$: the computation rate
$X_{k+1}$: the inter-arrival time between task $k$ and task $k+1$
$T_{k}$: the delay of task $k$
$M$: the number of invalid tasks between two valid tasks
TABLE I: Important Notations

II-B Insights and Applications

From (5) and (6), we observe that the only notation used is the timestamp, specifically the arrival timestamp $\tau_{k}$ and the departure timestamp $\tau_{k}^{\prime}$. To establish this concept, we intentionally avoid incorporating physical parameters such as bandwidth, GPU cycles, or caching. Instead, we focus solely on the use of timestamps. The reason for this choice is that the timestamp is the most fundamental and universally applicable metric that can be recorded by the destination (i.e., computational nodes) in any realistic 3CN.

Let $m_{k}$ denote the timestamp when task $k$ reaches the receiver. The difference $m_{k}-\tau_{k}$ represents the transmission delay for task $k$, which is determined by the communication capability. This delay can be influenced by various network characteristics, such as bandwidth, information size, signal-to-noise ratio, channel capacity, network load, transmission protocols, and more. On the other hand, the difference $\tau_{k}^{\prime}-m_{k}$ represents the computation delay for task $k$, which depends on the computation capability. This delay is influenced by factors such as GPU cycles, task complexity, node load, memory size, access speed, I/O scheduling, and more. Finally, the total delay of task $k$, given by $\tau_{k}^{\prime}-\tau_{k}$, encompasses both the transmission and computation delays. As such, it is influenced by both the communication and computation capabilities. The AoC concept effectively captures the impact of both these factors on task performance.

The AoC concept is also applicable in dynamic and complex environments, such as mobile computing. In such settings, the communication network may experience interruptions or temporary failures due to factors like signal loss, interference, or mobility-related issues (e.g., moving out of network coverage). Tasks may experience temporary delays in a queue due to these disruptions. Nevertheless, the AoC concept remains applicable. To calculate the AoC, we only need the timestamps of tasks (i.e., $(\tau_{k},\tau_{k}^{\prime})$). As long as timestamps can be recorded, the AoC can be calculated, regardless of disruptions.

III AoC Analysis under the Soft Deadline

In this section, we first provide graphical insights into the curve of the AoC (see Fig. 2). Next, we derive a general expression for the time-average AoC, presented in Theorem 1 (Section III-A). Following this, we analyze the expression in a fundamental case: the M/M/1 - M/M/1 tandem system, as detailed in Theorem 2 (Section III-B). Finally, we extend the average AoC expression to more general cases, specifically G/G/1 - G/G/1 tandem systems, as discussed in Theorem 3 (Section III-C).

III-A General Expression for $\Theta_{\text{soft}}$

For analytical tractability, we consider a simplified model: (i) A task is represented by $(L,w,B)$, where $L$ denotes the task's input data size, $w$ represents the maximum acceptable deadline, and $B$ indicates the computation workload [1]. (ii) The network is characterized by $(R,F)$, where $R$ is the data rate of the communication channel at the transmitter, and $F$ is the CPU cycle frequency at the computational node. (iii) The expected transmission delay of a task is $L/R$, and the expected computation delay is $B/F$. (iv) Both delays each follow their respective distributions. We define $\mu_{t}=R/L$ and $\mu_{c}=F/B$ as the communication rate and computation rate, respectively.

Let $h(t)=t-\tau_{N(t)}$ and $\hat{\epsilon}(t)=\frac{A(t)}{G(t)}$. From [5], the AoI concept shares the same formula as $h(t)$. Using the AoI curve as a benchmark, the AoC curve is depicted in Fig. 2. In Fig. 2 (a), the delay of task $k$ exceeds the threshold ($T_{k}>w$), but any waiting time before task $k$ is processed does not surpass the threshold. When the (instantaneous) delay of a task is less than the deadline $w$, the AoC curve coincides with the AoI curve. However, when the (instantaneous) delay of a task exceeds the deadline $w$, the portion of the delay exceeding the deadline increases at a rate $1+\hat{\epsilon}(t)$. In Fig. 2 (b), both the delay of task $k$ and the waiting time before task $k$ is processed ($\tau_{k-1}^{\prime}-\tau_{k}>w$) exceed the threshold. When the processing of task $k$ begins, an additional latency is introduced. As a result, at time $\tau_{k-1}^{\prime}$, there is an upward jump (additional latency) of $\hat{\epsilon}(t)(\tau_{k-1}^{\prime}-\tau_{k}-w)$. Subsequently, the AoC curve increases at a rate $1+\hat{\epsilon}(t)$.

Refer to caption
(a) The waiting time for task $k$ in the computation queue does not exceed $w$.
Refer to caption
(b) The waiting time for task $k$ in the computation queue exceeds $w$.
Figure 2: The AoC curve under the soft deadline.

To uncover theoretical insights into the average AoC, we investigate it within queue-theoretic systems. We assume a stationary queuing discipline with a first-come, first-served approach. On the source side, computational tasks arrive according to a stationary stochastic process characterized by a constant average rate. The inter-arrival time between consecutive tasks is denoted by $X_{k+1}=\tau_{k+1}-\tau_{k}$. Upon arrival, each task experiences a transmission delay, denoted by $S_{k,t}$, at the transmitter, which follows a distribution with a constant expectation. Similarly, the computation delay, denoted by $S_{k,c}$, at the computational node also follows a distribution with a constant expectation. Let the delay of task $k$ be denoted by $T_{k}$. In queuing systems, this delay is referred to as the system time, and we use these terms interchangeably. Under the stationary queuing discipline, $\{X_{k}\}_{k}$, $\{S_{k,t}\}_{k}$, $\{S_{k,c}\}_{k}$, and $\{T_{k}\}_{k}$ are each i.i.d.
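This tandem can be simulated with two Lindley-type recursions, which also produce the timestamps $(\tau_k, m_k, \tau_k')$ needed to evaluate the AoC. Below is our sketch under exponential assumptions (the M/M/1-M/M/1 case studied later); the parameter values are illustrative:

```python
import random

def simulate_tandem(n, lam, mu_t, mu_c, seed=1):
    """Simulate n tasks through an FCFS M/M/1-M/M/1 tandem.

    Returns a list of (tau_k, m_k, tau_k') tuples: arrival at the source,
    departure from the transmitter (arrival at the receiver), and
    computation completion.  T_k = tau_k' - tau_k is the system time.
    """
    rng = random.Random(seed)
    arr = dep1 = dep2 = 0.0
    out = []
    for _ in range(n):
        arr += rng.expovariate(lam)                     # tau_k
        dep1 = max(arr, dep1) + rng.expovariate(mu_t)   # m_k
        dep2 = max(dep1, dep2) + rng.expovariate(mu_c)  # tau_k'
        out.append((arr, dep1, dep2))
    return out
```

Since the departure process of a stationary M/M/1 queue is again Poisson (Burke's theorem), the empirical mean system time should approach $1/(\mu_t-\lambda)+1/(\mu_c-\lambda)$.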

Theorem 1.

The average AoC can be calculated as

$$\Theta_{\text{soft}}=\frac{\mathbb{E}[X_{k}T_{k}]+\frac{1}{2}\mathbb{E}[X_{k}^{2}]}{\mathbb{E}[X_{k}]}+\epsilon_{w}\cdot\frac{\mathbb{E}\big[\big((T_{k}-w)^{+}\big)^{2}\big]-\mathbb{E}\big[\big((T_{k}-S_{k,c}-w)^{+}\big)^{2}\big]}{2\,\mathbb{E}[X_{k}]}, \quad(8)$$

where $\epsilon_{w}=\Pr(T_{k}>w)$.

Proof.

The proof is given in Appendix A. ∎
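Theorem 1 can be checked against simulation, or applied to measured traces, by replacing expectations with sample averages. A Monte Carlo sketch of ours; it assumes each sample tuple pairs task $k$'s inter-arrival time $X_k$ with its system time $T_k$ and computation service time $S_{k,c}$:

```python
def theta_soft_estimate(samples, w):
    """Estimate the right-hand side of Theorem 1 from (X_k, T_k, S_kc)
    samples: inter-arrival time, system time, computation service time."""
    n = len(samples)
    EX  = sum(x for x, _, _ in samples) / n          # E[X_k]
    EX2 = sum(x * x for x, _, _ in samples) / n      # E[X_k^2]
    EXT = sum(x * t for x, t, _ in samples) / n      # E[X_k T_k]
    eps_w  = sum(t > w for _, t, _ in samples) / n   # Pr(T_k > w)
    m_full = sum(max(t - w, 0.0) ** 2 for _, t, _ in samples) / n
    m_less = sum(max(t - sc - w, 0.0) ** 2 for _, t, sc in samples) / n
    return (EXT + 0.5 * EX2) / EX + eps_w * (m_full - m_less) / (2 * EX)
```

With a single deterministic sample the estimator is easy to verify by hand, which the tests below do.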

III-B Average AoC in M/M/1-M/M/1 Systems

In this section, we let the inter-arrival time $X_{k}$ follow an exponential distribution with parameter $\lambda$, meaning that computational tasks arrive at the source according to a Poisson process characterized by an average rate of $\lambda$. Let $S_{k,t}$ and $S_{k,c}$ have exponential distributions with parameters $\mu_{t}$ ($=R/L$) and $\mu_{c}$ ($=F/B$), respectively. In other words, the network forms an M/M/1-M/M/1 tandem.

Theorem 2.

Let $\rho_{t}=\lambda/\mu_{t}$ and $\rho_{c}=\lambda/\mu_{c}$. Denote

$$\delta_{t}=1-\rho_{t},\quad\delta_{c}=1-\rho_{c},\quad\zeta_{t}=e^{-\mu_{t}\delta_{t}w},\quad\zeta_{c}=e^{-\mu_{c}\delta_{c}w}.$$

The closed-form expression for $\Theta_{\text{soft}}$ is given by: if $\mu_{t}\neq\mu_{c}$, then

$$\Theta_{\text{soft}}=\frac{1}{\lambda}+\frac{1}{\mu_{t}}+\frac{1}{\mu_{c}}+\frac{\rho_{t}^{2}}{\mu_{t}\delta_{t}}+\frac{\rho_{c}^{2}}{\mu_{c}\delta_{c}}+\frac{\rho_{t}\rho_{c}}{\mu_{t}+\mu_{c}-\lambda}+\lambda\frac{\mu_{c}(\delta_{c}\mu_{t}\delta_{t})^{2}}{(\mu_{c}-\mu_{t})^{2}}\Big(\frac{\zeta_{t}}{\mu_{t}\delta_{t}}-\frac{\zeta_{c}}{\mu_{c}\delta_{c}}\Big)\Big(\frac{\zeta_{t}}{\mu_{t}^{2}\delta_{t}^{2}}-\frac{\zeta_{c}}{\mu_{c}^{2}\delta_{c}^{2}}\Big); \quad(9)$$

if $\mu_{t}=\mu_{c}$, then

$$\Theta_{\text{soft}}=\frac{1}{\lambda}+\frac{2}{\mu_{t}}+\frac{2\rho_{t}^{2}}{\mu_{t}\delta_{t}}+\frac{\rho_{t}^{2}}{\mu_{t}+\mu_{t}\delta_{t}}+\lambda\zeta_{t}^{2}(1+\mu_{t}\delta_{t}w)\Big(\frac{2}{\mu_{t}^{2}\delta_{t}}+\frac{w}{\mu_{t}}\Big). \quad(10)$$
Proof.

The proof of Theorem 2 is given in Appendix B. ∎
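Both branches of Theorem 2 are easy to evaluate; below is our sketch implementation, with the $\mu_t=\mu_c$ branch selected by a floating-point closeness test. As consistency checks, the $\mu_t\neq\mu_c$ branch approaches the $\mu_t=\mu_c$ branch as the rates coincide, and for a very large deadline $w$ the correction term vanishes, leaving only the AoI part:

```python
import math

def theta_soft_mm1(lam, mu_t, mu_c, w):
    """Average AoC under the soft deadline for an M/M/1-M/M/1 tandem
    (Theorem 2); sketch implementation."""
    rt, rc = lam / mu_t, lam / mu_c
    dt, dc = 1.0 - rt, 1.0 - rc
    zt, zc = math.exp(-mu_t * dt * w), math.exp(-mu_c * dc * w)
    if math.isclose(mu_t, mu_c):
        # Equal-rate branch, Eq. (10).
        base = 1/lam + 2/mu_t + 2*rt**2/(mu_t*dt) + rt**2/(mu_t + mu_t*dt)
        corr = lam * zt**2 * (1 + mu_t*dt*w) * (2/(mu_t**2*dt) + w/mu_t)
        return base + corr
    # Distinct-rate branch, Eq. (9).
    base = (1/lam + 1/mu_t + 1/mu_c + rt**2/(mu_t*dt) + rc**2/(mu_c*dc)
            + rt*rc/(mu_t + mu_c - lam))
    corr = (lam * mu_c * (dc*mu_t*dt)**2 / (mu_c - mu_t)**2
            * (zt/(mu_t*dt) - zc/(mu_c*dc))
            * (zt/(mu_t**2*dt**2) - zc/(mu_c**2*dc**2)))
    return base + corr
```

The first six terms of (9) are exactly the average AoI of the M/M/1-M/M/1 tandem, which is what the function returns when $w$ is large enough for $\zeta_t,\zeta_c$ to be negligible.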

III-C Extension: Average AoC in G/G/1-G/G/1 Systems

In this section, we let the inter-arrival time $X_{k}$ follow a general distribution with $\mathbb{E}[X_{k}]=1/\lambda$, meaning that computational tasks arrive at the source according to a general stochastic process characterized by an average rate of $\lambda$. Let $S_{k,t}$ and $S_{k,c}$ have general distributions with $\mathbb{E}[S_{k,t}]=1/\mu_{t}$ and $\mathbb{E}[S_{k,c}]=1/\mu_{c}$, respectively. In other words, the network forms a G/G/1-G/G/1 tandem.

To obtain the average AoC, it is necessary to know the stochastic information about the inter-arrival interval, the service delay at the transmission and computation nodes, and the system time of a task in both the transmission and computation queues. Let the system times of a task in the transmission and computation queues be denoted as $U_{k,t}$ and $U_{k,c}$, respectively.

Note that the sequences $\{S_{k,t}\}_{k}$, $\{S_{k,c}\}_{k}$, $\{U_{k,t}\}_{k}$, and $\{U_{k,c}\}_{k}$ are each i.i.d. with respect to $k$. The corresponding density functions are denoted by $f_{X}$, $f_{S_{t}}$, $f_{S_{c}}$, $f_{U_{t}}$, and $f_{U_{c}}$, respectively. The joint density function of $U_{k,t}$ and $U_{k,c}$ is denoted as $f_{U_{t},U_{c}}$. The following assumptions are then required.

Assumption 1.

The density functions $f_{X}$, $f_{S_{t}}$, $f_{S_{c}}$, and $f_{U_{t},U_{c}}$ are known.

Based on Assumption 1, the following steps can be taken:

  • By integrating fUt,Ucf_{U_{t},U_{c}}, we can obtain the marginal density functions fUtf_{U_{t}} and fUcf_{U_{c}}.

  • Since Sk,tS_{k,t} (respectively, Sk,cS_{k,c}) is independent of Uk,tSk,tU_{k,t}-S_{k,t} (respectively, Uk,cSk,cU_{k,c}-S_{k,c}), and the marginal density functions fUtf_{U_{t}} and fUcf_{U_{c}} are avaibale, we can derive the density functions of the waiting times in both queues, namely fUtStf_{U_{t}-S_{t}} and fUcScf_{U_{c}-S_{c}}, through inverse convolution.

  • Given that fUtStf_{U_{t}-S_{t}}, fUcScf_{U_{c}-S_{c}}, and fUt,Ucf_{U_{t},U_{c}} are known, we can again apply inverse convolution to derive the joint density functions fUt,UcScf_{U_{t},U_{c}-S_{c}} and fUc,UtStf_{U_{c},U_{t}-S_{t}}.
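The first of these steps can be sketched numerically. Purely for illustration, the snippet below assumes UtU_{t} and UcU_{c} are independent exponentials with rates mu_t and mu_c (so the marginal is known in closed form to check against); in an actual tandem, the joint density fUt,Ucf_{U_{t},U_{c}} would come from the queueing analysis, and the grid bounds and resolution here are arbitrary choices.

```python
import numpy as np

# Recover the marginal f_{U_t} from the joint density f_{U_t,U_c} by
# numerical integration. Independence of U_t and U_c is an assumption made
# only so that the exact marginal is available as a benchmark.
mu_t, mu_c = 2.0, 3.0
u = np.linspace(0.0, 10.0, 2001)   # grid for U_t
v = np.linspace(0.0, 10.0, 2001)   # grid for U_c
dv = v[1] - v[0]

# joint density on the grid (independent case): f(u, v) = f_{U_t}(u) f_{U_c}(v)
joint = (mu_t * np.exp(-mu_t * u))[:, None] * (mu_c * np.exp(-mu_c * v))[None, :]

# marginalize: f_{U_t}(u) = integral of f(u, v) over v, via the trapezoid rule
weights = np.full(v.shape, dv)
weights[0] = weights[-1] = dv / 2.0
f_Ut_num = joint @ weights

f_Ut_exact = mu_t * np.exp(-mu_t * u)
err = np.max(np.abs(f_Ut_num - f_Ut_exact))
print(err)   # small discretization error
```

The same grid-based recipe applies to the deconvolution steps, with the convolution replaced by its inverse operation (e.g., via transforms).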

Theorem 3.

When Assumption 1 is satisfied, let μt\mu_{t} and μc\mu_{c} be as in Theorem 2. The closed-form expression for Θsoft\Theta_{\text{soft}} is given by:

Θsoft=\displaystyle\Theta_{\text{soft}}= 1μt+1μc+λ(0x2fX(x)𝑑x2+g1+g2)\displaystyle\frac{1}{\mu_{t}}+\frac{1}{\mu_{c}}+\lambda\big{(}\frac{\int_{0}^{\infty}x^{2}f_{X}(x)dx}{2}+g_{1}+g_{2}\big{)}
+\displaystyle+ λ2w(τw)2(η1(τ)η2(τ))𝑑τ,\frac{\lambda}{2}\int_{w}^{\infty}(\tau-w)^{2}\big{(}\eta_{1}(\tau)-\eta_{2}(\tau)\big{)}d\tau, (11)

where ξ(x)=xfUt(u)𝑑u\xi(x)=\int_{x}^{\infty}f_{U_{t}}(u)du, η1(τ)=0τfUt,Uc(u,τu)𝑑u\eta_{1}(\tau)=\int_{0}^{\tau}f_{U_{t},U_{c}}(u,\tau-u)du, η2(τ)=0τfUt,UcSc(u,τu)𝑑u\eta_{2}(\tau)=\int_{0}^{\tau}f_{U_{t},U_{c}-S_{c}}(u,\tau-u)du, and

g1=\displaystyle g_{1}= 0xx(τx)fUt(τ)𝑑τfX(x)𝑑x,\displaystyle\int_{0}^{\infty}x\int_{x}^{\infty}(\tau-x)f_{U_{t}}(\tau)d\tau f_{X}(x)dx,
g2=\displaystyle g_{2}= 0x0τ0(ξ(x)fSt(y)+fSt(y)fUt(xy))\displaystyle\int_{0}^{\infty}x\int_{0}^{\infty}\tau\int_{0}^{\infty}\big{(}\xi(x)\cdot f_{S_{t}}(y)+f_{S_{t}}(y)\circledast f_{U_{t}}(x-y)\big{)}
fUc(τ+y)dydτfX(x)dx.\displaystyle\cdot f_{U_{c}}(\tau+y)dyd\tau f_{X}(x)dx.
Proof.

The proof is given in Appendix C. ∎

Remark 1.

Let XkX_{k}, Sk,tS_{k,t}, and Sk,cS_{k,c} be exponentially distributed with parameters λ\lambda, μt\mu_{t}, and μc\mu_{c}, respectively. The expression in (11) then reduces to the closed-form expressions for the M/M/1-M/M/1 system in Theorem 2.

Remark 2.

Assumption 1 provides the minimum sufficient conditions for the closed form of the average AoC. However, it does not guarantee the convergence of Θsoft\Theta_{\text{soft}}, which heavily depends on the distributions of XkX_{k}, Sk,tS_{k,t}, Sk,cS_{k,c}, Uk,tU_{k,t}, and Uk,cU_{k,c}.

IV AoC under the Hard Deadline

In this section, we first provide graphical insights into the curve of the AoC (see Fig. 3). Next, we derive a general expression for the time-average AoC, presented in Theorem 4 (Section IV-A). Following this, we accurately approximate the expression in a fundamental case: the M/M/1-M/M/1 system, as detailed in Theorem 5 (Section IV-B). In addition, we define computation throughput (see Definition 4 in Section IV-C) and derive its expression (see Lemma 1 in Section IV-C). Subsequently, we investigate the trade-off between computation freshness and computation throughput (see Lemma 2 in Section IV-C). Finally, we generalize the accurate approximations for both the average AoC and computation throughput to broader cases, specifically G/G/1-G/G/1 systems, as presented in Theorem 6 and Proposition 2 (Section IV-D).

IV-A General Expression for Θhard\Theta_{\text{hard}}

From (6) in Definition 3, under the hard deadline, chard(t)c_{\text{hard}}(t) is solely determined by informative tasks. When w=0w=0, all tasks are considered invalid, leading to chard(t)=tc_{\text{hard}}(t)=t, which increases linearly with time tt. Conversely, when w=w=\infty, there is no deadline, and all tasks are considered valid. In this case G(t)=N(t)G(t)=N(t) for all tt, so chard(t)=tG(t)c_{\text{hard}}(t)=t-G(t), which is only affected by the latest task.

Refer to caption
Figure 3: The curve of AoC under the hard deadline.

The AoC curve is depicted in Fig. 3, with the curve of AoI (see [5]) as a benchmark. In this figure, task k1k-1 is valid, so both the AoI and the AoC decrease at time τk1\tau_{k-1}^{\prime}. Suppose that after task k1k-1, the next valid task has the index k1+Mk-1+M. Here, MM is a random variable with the distribution

Pr{M=n}\displaystyle\Pr\{M=n\}
=Pr{Tk>w,,Tk+n1>w,Tk+nw}.\displaystyle=\Pr\{T_{k}>w,\cdots,T_{k+n-1}>w,T_{k+n}\leq w\}. (12)

From (IV-A), M1M\geq 1. Since the network is stationary, the random variable MM associated with every valid task has the same distribution as in (IV-A). The AoC does not decrease at times τk\tau_{k}^{\prime}, τk+1\tau_{k+1}^{\prime}, \cdots, τk+M1\tau_{k+M-1}^{\prime}, and decreases at time τk+M\tau_{k+M}^{\prime}. In the interval [τk1,τk+M)[\tau_{k-1}^{\prime},\tau_{k+M}^{\prime}), the AoC increases linearly with time tt.

Theorem 4.

The average AoC can be calculated as

Θhard=𝔼[TMj=1MXj]+12𝔼[(j=1MXj)2]𝔼[j=1MXj].\displaystyle\Theta_{\text{hard}}=\frac{\mathbb{E}[T_{M}\cdot\sum_{j=1}^{M}X_{j}]+\frac{1}{2}\mathbb{E}[\big{(}\sum_{j=1}^{M}X_{j}\big{)}^{2}]}{\mathbb{E}[\sum_{j=1}^{M}X_{j}]}. (13)
Proof.

The proof is given in Appendix D. ∎

Although the general expression for the average AoC is given by (13), further analysis of this expression is challenging due to several correlations involved. These correlations are detailed as follows.

  • (i)

    Correlation between delays: In the queuing system, if the delay of task kk, TkT_{k}, increases, the waiting time for task k+1k+1 also increases, leading to a larger delay for task k+1k+1, Tk+1T_{k+1}. Hence, the sequence {Tk}k\{T_{k}\}_{k} consists of identically distributed but positively correlated delays.

  • (ii)

    Correlation between delays and MM (defined in (IV-A)): According to (IV-A), MM is the first index nn such that TnwT_{n}\leq w while all previous Tk>wT_{k}>w. A higher value of TkT_{k} suggests a higher likelihood of subsequent TjT_{j} values (for j>kj>k) also being high, thus making MM larger because it takes longer for a TkT_{k} to be less than or equal to ww. This indicates that TkT_{k} and MM are positively correlated.

  • (iii)

    Correlation between the inter-arrival times XkX_{k} and delays TkT_{k}: If XkX_{k} is larger, meaning that the inter-arrival time between task k1k-1 and task kk is longer, then TkT_{k} is likely smaller because the waiting time for task kk is reduced. Therefore, TkT_{k} and XkX_{k} are negatively correlated.

  • (iv)

    Correlation between the inter-arrival times XkX_{k} and MM: Since XkX_{k} and TkT_{k} are negatively correlated, and TkT_{k} and MM are positively correlated, XkX_{k} and MM are negatively correlated.
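Correlations (i) and (iii) can be checked empirically with a short Monte Carlo experiment on a single M/M/1 queue, using the Lindley recursion for the system time. The parameters below are illustrative choices of ours, not values from the paper.

```python
import numpy as np

# Empirical check of correlations (i) and (iii) in an M/M/1 queue via the
# Lindley recursion T_k = max(T_{k-1} - X_k, 0) + S_k for the system time.
rng = np.random.default_rng(0)
lam, mu, n = 1.0, 1.5, 100_000

X = rng.exponential(1 / lam, n)   # inter-arrival times
S = rng.exponential(1 / mu, n)    # service times
T = np.empty(n)
T[0] = S[0]
for k in range(1, n):
    T[k] = max(T[k - 1] - X[k], 0.0) + S[k]   # Lindley recursion

corr_TT = np.corrcoef(T[:-1], T[1:])[0, 1]    # (i): successive delays
corr_XT = np.corrcoef(X[1:], T[1:])[0, 1]     # (iii): X_k versus T_k
print(corr_TT, corr_XT)
```

The first coefficient comes out positive and the second negative, matching (i) and (iii); (ii) and (iv) then follow through the definition of MM.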

IV-B Average AoC in M/M/1-M/M/1 Systems

Due to all these correlations, it is extremely challenging to derive the closed-form expression for Θhard\Theta_{\text{hard}} in (13). However, we can approximate it accurately under specific conditions.

Theorem 5.

When μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda, the average AoC defined in (13) can be accurately approximated by Θ^hard\hat{\Theta}_{\text{hard}},

Θ^hard=𝔼[TM]+𝔼[X12]2𝔼[X1]+(𝔼[M2]2𝔼[M]12)𝔼[X1].\displaystyle\hat{\Theta}_{\text{hard}}=\mathbb{E}[T_{M}]+\frac{\mathbb{E}[X_{1}^{2}]}{2\mathbb{E}[X_{1}]}+\big{(}\frac{\mathbb{E}[M^{2}]}{2\mathbb{E}[M]}-\frac{1}{2}\big{)}\mathbb{E}[X_{1}]. (14)

Let ρt\rho_{t}, ρc\rho_{c}, δt\delta_{t}, δc\delta_{c}, ζt\zeta_{t}, and ζc\zeta_{c} be as given in Theorem 2. If μtμc\mu_{t}\neq\mu_{c},

Θ^hard=\displaystyle\hat{\Theta}_{\text{hard}}= 1ζt(1+μtδtw)μt2δt21ζc(1+μcδcw)μc2δc2(1ζt)/μtδt(1ζc)/μcδc\displaystyle\frac{\frac{1-\zeta_{t}(1+\mu_{t}\delta_{t}w)}{\mu_{t}^{2}\delta_{t}^{2}}-\frac{1-\zeta_{c}(1+\mu_{c}\delta_{c}w)}{\mu_{c}^{2}\delta_{c}^{2}}}{(1-\zeta_{t})/\mu_{t}\delta_{t}-(1-\zeta_{c})/\mu_{c}\delta_{c}}
+\displaystyle+ μcμtλ(μcδc(1ζt)μtδt(1ζc));\displaystyle\frac{\mu_{c}-\mu_{t}}{\lambda\big{(}\mu_{c}\delta_{c}(1-\zeta_{t})-\mu_{t}\delta_{t}(1-\zeta_{c})\big{)}}; (15)

if μt=μc\mu_{t}=\mu_{c},

Θ^hard=\displaystyle\hat{\Theta}_{\text{hard}}= 2μtδt(2μtδt+2w+μtδtw2)ζt1ζt(1+μtδtw)\displaystyle\frac{\frac{2}{\mu_{t}\delta_{t}}-(\frac{2}{\mu_{t}\delta_{t}}+2w+\mu_{t}\delta_{t}w^{2})\zeta_{t}}{1-\zeta_{t}(1+\mu_{t}\delta_{t}w)}
+\displaystyle+ 1λ(1ζt(1+μtδtw)).\displaystyle\frac{1}{\lambda\big{(}1-\zeta_{t}(1+\mu_{t}\delta_{t}w)\big{)}}. (16)
Proof.

The proof of Theorem 5 is given in Appendix E. ∎
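Under these conditions, the exact expression (13) and the approximation (14) can be compared by Monte Carlo simulation of the tandem. The sketch below is illustrative only: the parameters are our own choices (with μt,μcλ\mu_{t},\mu_{c}\gg\lambda), not values from the paper.

```python
import numpy as np

# Monte Carlo sketch: exact average AoC from (13) versus the approximation
# (14) in an M/M/1-M/M/1 tandem under a hard deadline w.
rng = np.random.default_rng(1)
lam, mu_t, mu_c, w, n = 0.3, 3.0, 3.0, 1.2, 200_000

X = rng.exponential(1 / lam, n)           # inter-arrival times
a = np.cumsum(X)                          # arrival times
St = rng.exponential(1 / mu_t, n)         # transmission times
Sc = rng.exponential(1 / mu_c, n)         # computation times

d1 = np.empty(n)                          # departures from the two FCFS queues
d2 = np.empty(n)
d1[0] = a[0] + St[0]
d2[0] = d1[0] + Sc[0]
for k in range(1, n):
    d1[k] = max(d1[k - 1], a[k]) + St[k]  # transmission queue
    d2[k] = max(d2[k - 1], d1[k]) + Sc[k] # computation queue
T = d2 - a                                # total delay of each task

valid = np.flatnonzero(T <= w)            # indices of informative tasks
C = np.diff(a[valid])                     # sum of X_j over each cycle
TM = T[valid][1:]                         # T_M of the task closing each cycle
M = np.diff(valid)                        # number of tasks per cycle

theta = (np.mean(TM * C) + 0.5 * np.mean(C ** 2)) / np.mean(C)           # (13)
theta_hat = (np.mean(TM) + np.mean(X ** 2) / (2 * np.mean(X))
             + (np.mean(M ** 2) / (2 * np.mean(M)) - 0.5) * np.mean(X))  # (14)
print(theta, theta_hat)
```

With these rates the two values agree closely, illustrating that the approximation is nearly tight when μt,μcλ\mu_{t},\mu_{c}\gg\lambda, as stated in Remark 3.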

Remark 3.

(Lower Bound) Θ^hard\hat{\Theta}_{\text{hard}} in (14) captures an extreme case where the positive correlations among {Tk}k\{T_{k}\}_{k} are removed. Therefore, Θ^hard\hat{\Theta}_{\text{hard}} in (14) serves as a lower bound for Θhard\Theta_{\text{hard}}. This lower bound is approximately tight when μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda.

By exchanging μt\mu_{t} and μc\mu_{c} in (15) and (16), we observe that Θ^hard\hat{\Theta}_{\text{hard}} remains unchanged. This indicates that Θ^hard\hat{\Theta}_{\text{hard}} is symmetric with respect to (μt,μc)(\mu_{t},\mu_{c}). From a mathematical standpoint, this symmetry implies that both communication latency and computation latency equally affect Θ^hard\hat{\Theta}_{\text{hard}}. Therefore, in practical terms, to improve the computation freshness, one can reduce either the communication latency or the computation latency, as both have the same impact on the overall freshness.

IV-C Computation Throughput

Under the hard deadline, unlike the soft deadline, the frequency of informative tasks is influenced by two factors: the arrival rate λ\lambda and the deadline ww. We define the frequency of informative tasks as the computation throughput. Formally, we have the following definition.

Definition 4.

(Computation Throughput) The computation throughput is defined as

Ξ=limtN(t)t.\displaystyle\Xi=\lim_{t\to\infty}\frac{N(t)}{t}. (17)
Lemma 1.

The computation throughput is given by,

Ξ=1𝔼[k=1MXk].\displaystyle\Xi=\frac{1}{\mathbb{E}[\sum_{k=1}^{M}X_{k}]}. (18)
Proof.

The proof of Lemma 1 is given in Appendix F. ∎

In (18), the arrival rate λ\lambda is reflected in XkX_{k}, while the deadline ww is captured by MM. It is worth noting that Definition 4 can apply to the case with a soft deadline. Under the soft deadline, (17) implies that Ξ=limtN(t)t=limtG(t)t=λ\Xi=\lim_{t\to\infty}\frac{N(t)}{t}=\lim_{t\to\infty}\frac{G(t)}{t}=\lambda, which is a trivial case. Therefore, we did not investigate the computation throughput concept in Section III.

Proposition 1.

When μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda, the computation throughput defined in (18) can be accurately approximated by Ξ^\hat{\Xi},

Ξ^=1𝔼[M]𝔼[X1].\displaystyle\hat{\Xi}=\frac{1}{\mathbb{E}[M]\mathbb{E}[X_{1}]}. (19)

Let ρt\rho_{t}, ρc\rho_{c}, δt\delta_{t}, δc\delta_{c}, ζt\zeta_{t}, and ζc\zeta_{c} be as given in Theorem 2. If μtμc\mu_{t}\neq\mu_{c},

Ξ^=λμcδc(1ζt)μtδt(1ζc)μcμt,\displaystyle\hat{\Xi}=\lambda\cdot\frac{\mu_{c}\delta_{c}(1-\zeta_{t})-\mu_{t}\delta_{t}(1-\zeta_{c})}{\mu_{c}-\mu_{t}}, (20)

if μt=μc\mu_{t}=\mu_{c},

Ξ^=\displaystyle\hat{\Xi}= λ(1(1+μtδtw)ζt).\displaystyle\lambda(1-(1+\mu_{t}\delta_{t}w)\zeta_{t}). (21)
Proof.

The proof of Proposition 1 is given in Appendix G. ∎

Remark 4.

(Upper Bound) According to (IV-A), a higher value of TkT_{k} suggests a higher likelihood of subsequent TjT_{j} values (for j>kj>k) also being high, thus making MM larger. However, Ξ^\hat{\Xi} in (19) captures an extreme case where the positive correlations among {Tk}k\{T_{k}\}_{k} are removed, resulting in a smaller expectation for MM. Consequently, Ξ^\hat{\Xi} in (19) serves as an upper bound for Ξ\Xi. Additionally, this upper bound is approximately tight when μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda.

A Pareto-optimal point represents a state of resource allocation where improving one objective necessitates compromising the other. A pair (Θ^,Ξ^)(\hat{\Theta}^{*},\hat{\Xi}^{*}) is defined as a Pareto-optimal point if, for any (Θ^,Ξ^)(\hat{\Theta},\hat{\Xi}), both conditions (i) Θ^<Θ^\hat{\Theta}<\hat{\Theta}^{*} (or Θ^Θ^\hat{\Theta}\leq\hat{\Theta}^{*}) and (ii) Ξ^Ξ^\hat{\Xi}\geq\hat{\Xi}^{*} (or Ξ^>Ξ^\hat{\Xi}>\hat{\Xi}^{*}) cannot hold simultaneously [27]. This indicates that improving computation freshness is impossible without degrading computation throughput, and increasing computation throughput cannot occur without compromising computation freshness.

While closed-form expressions for computation freshness (see (13)) and computation throughput (see (18)) are unavailable, their relationship can be approximated using (14) and (19); accordingly, the analysis focuses on weakly Pareto-optimal points rather than strict Pareto-optimal points [28]. Consider the following optimization problem:

minλ:Ξ^>uΘ^.\displaystyle\min_{\lambda:\,\,\hat{\Xi}>u}\,\,\hat{\Theta}. (22)

Denote the corresponding Θ^\hat{\Theta} and Ξ^\hat{\Xi} by Θ^(u)\hat{\Theta}(u) and Ξ^(u)\hat{\Xi}(u), respectively. The tradeoff between computation freshness and computation throughput is explored in the following lemma.

Lemma 2.

The objective pair (Θ^(u),Ξ^(u))\big{(}\hat{\Theta}(u),\hat{\Xi}(u)\big{)} is a weakly Pareto-optimal point.

Proof.

The proof of Lemma 2 is given in Appendix H. ∎

IV-D Extension: Average AoC in G/G/1-G/G/1 Systems

In this section, we extend the average AoC in M/M/1-M/M/1 tandems (see Section IV-B) to the general case, i.e., G/G/1-G/G/1 tandems: Computational tasks arrive at the source via a random process characterized by an average rate of λ\lambda. Upon arrival, the transmission delay of each task follows a general distribution with an average rate of μt\mu_{t}, and the computation delay at the computational node follows a general distribution with an average rate of μc\mu_{c}.

Theorem 6.

When Assumption 1 is satisfied, let μt\mu_{t} and μc\mu_{c} be as in Theorem 2. When μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda, the average AoC defined in (13) can be accurately approximated by (14). In particular,

Θ^hard=\displaystyle\hat{\Theta}_{\text{hard}}= 0wτη1(τ)𝑑τFT(w)+0x2fX(x)𝑑x20xfX(x)𝑑x\displaystyle\frac{\int_{0}^{w}\tau\eta_{1}(\tau)d\tau}{F_{T}(w)}+\frac{\int_{0}^{\infty}x^{2}f_{X}(x)dx}{2\int_{0}^{\infty}xf_{X}(x)dx}
+\displaystyle+ 1FT(w)FT(w)0xfX(x)𝑑x.\displaystyle\frac{1-F_{T}(w)}{F_{T}(w)}\cdot\int_{0}^{\infty}xf_{X}(x)dx. (23)

where FT(w)=0wη1(τ)𝑑τF_{T}(w)=\int_{0}^{w}\eta_{1}(\tau)d\tau and η1(τ)\eta_{1}(\tau) is defined in Theorem 3.

Proof.

The proof is given in Appendix I. ∎

Proposition 2.

When Assumption 1 is satisfied, let μt\mu_{t} and μc\mu_{c} be as in Theorem 2. When μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda, the computation throughput defined in (18) can be accurately approximated by (19). In particular,

Ξ^=FT(w)0xfX(x)𝑑x,\displaystyle\hat{\Xi}=\frac{F_{T}(w)}{\int_{0}^{\infty}xf_{X}(x)dx}, (24)

where FT(w)F_{T}(w) is given in Theorem 6.

Proof.

The proof of Proposition 2 is given in Appendix J. ∎
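The recipe in Proposition 2 can be checked numerically in a case where everything is available in closed form. Purely as an assumption of this sketch, let UtU_{t} and UcU_{c} be independent exponentials with rates b_t and b_c, so that η1\eta_{1} is their convolution (a hypoexponential density) and FT(w)F_{T}(w) has an exact expression to compare against.

```python
import numpy as np

# Numerical sketch of Proposition 2: integrate eta_1 to get F_T(w), then
# Xi_hat = F_T(w) / E[X]. Independence of U_t and U_c is assumed here only
# so that an exact benchmark for F_T(w) exists.
b_t, b_c, lam, w = 2.0, 3.0, 0.5, 1.0

tau = np.linspace(0.0, w, 20_001)
dtau = tau[1] - tau[0]
# eta_1(tau): convolution of the two exponential marginals (b_t != b_c)
eta1 = b_t * b_c / (b_c - b_t) * (np.exp(-b_t * tau) - np.exp(-b_c * tau))

# F_T(w) = integral of eta_1 over [0, w], trapezoid rule
F_T = (eta1.sum() - 0.5 * (eta1[0] + eta1[-1])) * dtau
F_T_exact = 1 - (b_c * np.exp(-b_t * w) - b_t * np.exp(-b_c * w)) / (b_c - b_t)

xi_hat = lam * F_T          # (24): F_T(w) / E[X] with E[X] = 1 / lambda
print(F_T, F_T_exact, xi_hat)
```

The numerical F_T matches the exact hypoexponential CDF, and the resulting throughput approximation is strictly below the arrival rate λ\lambda, as expected since only a fraction FT(w)F_{T}(w) of tasks meet the deadline.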

V AoC-based Scheduling for Multi-Source Networks

The AoC concept is not only applicable in the continuous-time setting, but is also suitable for the discrete-time setting. In this section, we explore the application of the AoC concept to resource optimization and real-time scheduling in multi-source networks, providing an analysis of the resulting optimal scheduling policies.

V-A System Model in Multi-Source Networks

Consider a network with a computational node processing tasks from NN sources, as illustrated in Fig. 4. Time is slotted, indexed by k{1,2,,T}k\in\{1,2,\cdots,T\}, where TT is the time-horizon of this discrete-time system. In every time slot, computational tasks arrive at source ii with rate λi\lambda_{i}. Upon arrival, tasks are immediately available at the corresponding transmitter ii, where they enter a communication queue awaiting transmission. Transmitter ii transmits tasks at a rate μt,i\mu_{t,i}: if a task is under transmission during a slot, the transmission completes with probability μt,i\mu_{t,i} by the end of the slot. Once transmitted, the tasks are received and cached by the receiver, where they await processing. The computational node processes tasks at a rate μc\mu_{c}. Without loss of generality, we assume μc=1\mu_{c}=1; the framework can be straightforwardly extended to cases where μc<1\mu_{c}<1. After processing, tasks depart from the system. In most realistic scenarios, recent computational tasks hold greater importance than earlier ones, as they often contain fresher and more relevant information. To address this, we adopt a preemptive rule [6, 8]: newly arrived tasks can replace previously queued tasks already present in the transmitter.

Refer to caption
Figure 4: A 3CN with Multi sources/transmitters, a receiver, and a computational node.

At each slot, the receiver either idles or selects a transmitter to transmit its task. Let ai(k){0,1}a_{i}(k)\in\{0,1\} be an indicator function where ai(k)=1a_{i}(k)=1 if transmitter ii is selected for transmission in slot kk, and ai(k)=0a_{i}(k)=0 otherwise. Similarly, let di(k)=1d_{i}(k)=1 indicate that a task from transmitter ii successfully reaches the receiver at slot kk. Due to limited communication resources, at most one task can be transmitted across all transmitters in a given slot. Therefore, the following constraints hold:

i=1Nai(k)1,i=1Ndi(k)1,k.\displaystyle\sum_{i=1}^{N}a_{i}(k)\leq 1,\quad\sum_{i=1}^{N}d_{i}(k)\leq 1,\quad\forall k. (25)

Additionally, if a task is successfully transmitted (i=1Ndi(k)=1\sum_{i=1}^{N}d_{i}(k)=1), the receiver schedules a transmitter for the next time slot (i=1Nai(k+1)=1\sum_{i=1}^{N}a_{i}(k+1)=1). Conversely, if no task is delivered (i=1Ndi(k)=0\sum_{i=1}^{N}d_{i}(k)=0), the receiver does not schedule a transmitter for the next slot (i=1Nai(k+1)=0\sum_{i=1}^{N}a_{i}(k+1)=0). This relationship is formalized as: i=1Nai(k+1)=i=1Ndi(k)\sum_{i=1}^{N}a_{i}(k+1)=\sum_{i=1}^{N}d_{i}(k).

Let cix(k)c_{i}^{x}(k) with x{soft,hard}x\in\{\text{soft},\text{hard}\} be the AoC associated with source ii at the end of slot kk. The time-average AoC associated with source ii is given by 𝔼[k=1Tcix(k)]/T\mathbb{E}[\sum_{k=1}^{T}c_{i}^{x}(k)]/T. To capture the computation freshness of this network employing a scheduling policy πΠ\pi\in\Pi, we define the average sum AoC in the limit as the time-horizon grows to infinity as

Θx=limT1TNk=1Ti=1N𝔼[cix(k)],x{soft,hard}.\displaystyle\Theta_{x}=\lim_{T\to\infty}\frac{1}{TN}\sum_{k=1}^{T}\sum_{i=1}^{N}\mathbb{E}[c_{i}^{x}(k)],\,\,x\in\{\text{soft},\text{hard}\}. (26)

The AoC-optimal scheduling policy πΠ\pi^{*}\in\Pi is the one that minimizes the average sum AoC:

Θx=minπΠΘx.\displaystyle\Theta_{x}^{*}=\min_{\pi\in\Pi}\Theta_{x}. (27)

At the end of this subsection, we introduce the concept of instantaneous delay for transmitter ii, denoted as zi(k)z_{i}(k). In particular, zi(k)z_{i}(k) represents the instantaneous delay of the current task in time slot kk. If the transmitter is idle during slot kk, then the delay is defined as zi(k)=cix(k)z_{i}(k)=c_{i}^{x}(k).

V-B AoC-based Max-Weight Policies

Finding the globally optimal policy for the optimization problem (27) is challenging due to the real-time nature of scheduling decisions. Drawing inspiration from [29], we employ Lyapunov Optimization to develop AoC-based Max-Weight policies. Such policies have been shown to be near-optimal [29, 30]. The Max-Weight policy is designed to minimize the expected drift of the Lyapunov function in each time slot, thereby striving to reduce the AoC across the network.

We use the following linear Lyapunov Function:

L(k)1Ni=1Nβicix(k),x{soft,hard},\displaystyle L(k)\triangleq\frac{1}{N}\sum_{i=1}^{N}\beta_{i}c_{i}^{x}(k),\,\,x\in\{\text{soft},\text{hard}\}, (28)

where βi\beta_{i} is a positive hyperparameter that allows the Max-Weight policy to be tuned for different network configurations and queueing disciplines. The Lyapunov Drift is defined as

Δ(Ξ(k)):=𝔼[L(k+2)L(k)|Ξ(k)],\displaystyle\Delta\big{(}\Xi(k)\big{)}:=\mathbb{E}[L(k+2)-L(k)|\Xi(k)], (29)

where Ξ(k)\Xi(k) represents the network state at the beginning of time slot kk. (Here, we define Δ(Ξ(k))=𝔼[L(k+2)L(k)|Ξ(k)]\Delta\big{(}\Xi(k)\big{)}=\mathbb{E}[L(k+2)-L(k)|\Xi(k)] instead of Δ(Ξ(k))=𝔼[L(k+1)L(k)|Ξ(k)]\Delta\big{(}\Xi(k)\big{)}=\mathbb{E}[L(k+1)-L(k)|\Xi(k)] as in [29]; this adjustment is made because, when a transmitter is scheduled, the corresponding task requires at least two time slots to complete the computation.) The network state is defined by

Ξ(k)={cix(k),zi(k),{di(τ)}τk}i=1N,x{soft,hard}.\displaystyle\Xi(k)=\big{\{}c_{i}^{x}(k),z_{i}(k),\{d_{i}(\tau)\}_{\tau\leq k}\big{\}}_{i=1}^{N},\,\,x\in\{\text{soft},\text{hard}\}.

The Lyapunov Function L(k)L(k) increases with the AoC of the network, while the Lyapunov Drift Δ(Ξ(k))\Delta\big{(}\Xi(k)\big{)} represents the expected change in L(k)L(k) over a single slot. By minimizing the drift (29) in each time slot, the Max-Weight policy aims to maintain both L(k)L(k) and the network’s AoC at low levels.

V-B1 Under the Soft Deadline

To simplify (28), we derive the AoC in time slot k+2k+2 under the soft deadline assumption. Recall that μc=1\mu_{c}=1, meaning computational tasks are processed immediately upon arrival at the computational node without queuing.

Let i(zi(k))=1{zi(k)+1>w}Ai(k)Gi(k)(zi(k)+1w)\ell_{i}\big{(}z_{i}(k)\big{)}=1_{\{z_{i}(k)+1>w\}}\frac{A_{i}(k)}{G_{i}(k)}(z_{i}(k)+1-w), where Gi(k)G_{i}(k) is defined in (3) and Ai(k)A_{i}(k) is defined in (4). We can derive the expression for cisoft(k+2)c_{i}^{\text{soft}}(k+2) as follows (the proof is given in Appendix K):

cisoft(k+2)=1{i=1Ndi(k1)=0}(cisoft(k)+2)\displaystyle c_{i}^{\text{soft}}(k+2)=1_{\{\sum_{i=1}^{N}d_{i}(k-1)=0\}}\big{(}c_{i}^{\text{soft}}(k)+2\big{)}
+1{i=1Ndi(k1)=1,ai(k)=0}(cisoft(k)+2)\displaystyle+1_{\{\sum_{i=1}^{N}d_{i}(k-1)=1,a_{i}(k)=0\}}\big{(}c_{i}^{\text{soft}}(k)+2\big{)}
+1{i=1Ndi(k1)=1,ai(k)=1,di(k+1)=0}(cisoft(k)+2)\displaystyle+1_{\{\sum_{i=1}^{N}d_{i}(k-1)=1,a_{i}(k)=1,d_{i}(k+1)=0\}}\big{(}c_{i}^{\text{soft}}(k)+2\big{)}
+1{i=1Ndi(k1)=1,ai(k)=1,di(k+1)=1}\displaystyle+1_{\{\sum_{i=1}^{N}d_{i}(k-1)=1,a_{i}(k)=1,d_{i}(k+1)=1\}}
(zi(k)+2+i(zi(k)+1)).\displaystyle\cdot\Big{(}z_{i}(k)+2+\ell_{i}\big{(}z_{i}(k)+1\big{)}\Big{)}. (30)
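The recursion above translates directly into code. In the sketch below, the ratio Ai(k)/Gi(k)A_{i}(k)/G_{i}(k) appearing in i\ell_{i} is passed in as a plain number `ratio`, and the indicator events are booleans; both are simplifying assumptions of this illustration (in the full system these quantities are maintained by the scheduler).

```python
# A direct transcription of the recursion (30) for one source.
def ell(z, w, ratio):
    """Soft-deadline penalty l_i(z): extra age if the delay exceeds w."""
    return ratio * (z + 1 - w) if z + 1 > w else 0.0

def c_soft_next(c, z, w, ratio, any_delivery, scheduled, delivered):
    """c_i^soft(k+2) from the slot-k state.

    c: c_i^soft(k); z: instantaneous delay z_i(k);
    any_delivery: 1{sum_i d_i(k-1) = 1}; scheduled: a_i(k) = 1;
    delivered: d_i(k+1) = 1.
    """
    if any_delivery and scheduled and delivered:
        # last branch of (30): the age resets to the task's delay plus the
        # soft-deadline penalty l_i(z_i(k) + 1)
        return z + 2 + ell(z + 1, w, ratio)
    # first three branches of (30): the age simply grows by two slots
    return c + 2
```

For example, with w = 4 and ratio = 1, a delivered task with delay z = 1 resets the age to 3 (no penalty), while z = 5 yields 7 plus a penalty of 3.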

From (28), (29), and (V-B1), we propose the Max-Weight algorithm, Algorithm 1, and prove that it minimizes the Lyapunov Drift in Proposition 3.

Algorithm 1 Max-Weight Policy for Soft Deadline
1:TT, {βi}i=1N\{\beta_{i}\}_{i=1}^{N}, {μt,i}i=1N\{\mu_{t,i}\}_{i=1}^{N}, {cisoft(0)}i=1N\{c_{i}^{\text{soft}}(0)\}_{i=1}^{N},
2:for 1kT1\leq k\leq T do
3:     if i=1Ndi(k1)=0\sum_{i=1}^{N}d_{i}(k-1)=0 then
4:         ai(k)=0a_{i}(k)=0 for i{1,2,,N}i\in\{1,2,\cdots,N\}.
5:     else if i=1Ndi(k1)=1\sum_{i=1}^{N}d_{i}(k-1)=1 then
6:         Calculate wi(k)cisoft(k)zi(k)i(zi(k)+1)w_{i}(k)\triangleq c_{i}^{\text{soft}}(k)-z_{i}(k)-\ell_{i}\big{(}z_{i}(k)+1\big{)}.
7:         Set ai(k)=1a_{i^{*}}(k)=1 if i=argmaxi(βiμt,iwi(k))i^{*}=\arg\max_{i}\big{(}\beta_{i}\mu_{t,i}w_{i}(k)\big{)}.
8:         If multiple ii^{*} exist, select one randomly.
9:     end if
10:end for
11:Output Θsoft\Theta_{\text{soft}}.
Proposition 3.

Algorithm 1 minimizes the Lyapunov Drift (see (29)) in every time slot.

Proof.

The proof is given in Appendix L. ∎
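Lines 6-8 of Algorithm 1 amount to a single weighted argmax per slot. A minimal sketch follows, assuming the penalty i(zi(k)+1)\ell_{i}(z_{i}(k)+1) has already been computed by the caller (the array `penalty` below stands in for it; the numeric values are arbitrary).

```python
import numpy as np

# One scheduling decision of Algorithm 1: compute the weights
# beta_i * mu_{t,i} * w_i(k) and schedule the transmitter with the largest
# weight, breaking ties uniformly at random.
def max_weight_schedule(c, z, penalty, beta, mu_t, rng=None):
    """c, z, penalty, beta, mu_t: length-N arrays of the slot-k state."""
    rng = rng or np.random.default_rng()
    w_k = c - z - penalty                     # line 6: w_i(k)
    weights = beta * mu_t * w_k
    best = np.flatnonzero(weights == weights.max())
    return int(rng.choice(best))              # lines 7-8: argmax, random ties

# toy example with N = 3 sources (numbers are arbitrary)
c = np.array([8.0, 5.0, 9.0])
z = np.array([2.0, 2.0, 2.0])
i_star = max_weight_schedule(c, z, np.zeros(3), np.ones(3), np.full(3, 0.5))
a = np.zeros(3, dtype=int)
a[i_star] = 1        # the selected transmitter is scheduled
print(i_star)        # -> 2, the source with the largest age reduction
```

The weight wi(k)w_{i}(k) is the age reduction source ii would enjoy if its task were delivered, so the policy greedily schedules the source whose delivery shrinks the Lyapunov function the most.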

V-B2 Under the Hard Deadline

To obtain the Max-Weight policies, we utilize the recursion of cihard(k)c_{i}^{\text{hard}}(k) to minimize the Lyapunov Drift defined in (29).

By a process similar to that in Appendix K, the recursion for cihard(k+2)c_{i}^{\text{hard}}(k+2) can be derived as follows:

cihard(k+2)=1{i=1Ndi(k1)=0}(cihard(k)+2)\displaystyle c_{i}^{\text{hard}}(k+2)=1_{\{\sum_{i=1}^{N}d_{i}(k-1)=0\}}\big{(}c_{i}^{\text{hard}}(k)+2\big{)}
+1{i=1Ndi(k1)=1,ai(k)=0}(cihard(k)+2)\displaystyle+1_{\{\sum_{i=1}^{N}d_{i}(k-1)=1,a_{i}(k)=0\}}\big{(}c_{i}^{\text{hard}}(k)+2\big{)}
+1{i=1Ndi(k1)=1,ai(k)=1,di(k+1)=0}(cihard(k)+2)\displaystyle+1_{\{\sum_{i=1}^{N}d_{i}(k-1)=1,a_{i}(k)=1,d_{i}(k+1)=0\}}\big{(}c_{i}^{\text{hard}}(k)+2\big{)}
+1{i=1Ndi(k1)=1,ai(k)=1,di(k+1)=1}\displaystyle+1_{\{\sum_{i=1}^{N}d_{i}(k-1)=1,a_{i}(k)=1,d_{i}(k+1)=1\}}
(1{zi(k)+2>w}(cihard(k)+2)+1{zi(k)+2w}(zi(k)+2)).\displaystyle\cdot\Big{(}1_{\{z_{i}(k)+2>w\}}\big{(}c_{i}^{\text{hard}}(k)+2\big{)}+1_{\{z_{i}(k)+2\leq w\}}\big{(}z_{i}(k)+2\big{)}\Big{)}. (31)

From (28), (29), and (V-B2), we propose the Max-Weight algorithm, Algorithm 2, and prove that it minimizes the Lyapunov Drift in Proposition 4.

Algorithm 2 Max-Weight Policy for Hard Deadline
1:TT, {βi}i=1N\{\beta_{i}\}_{i=1}^{N}, {μt,i}i=1N\{\mu_{t,i}\}_{i=1}^{N}, {cihard(0)}i=1N\{c_{i}^{\text{hard}}(0)\}_{i=1}^{N},
2:for 1kT1\leq k\leq T do
3:     if i=1Ndi(k)=0\sum_{i=1}^{N}d_{i}(k)=0 then
4:         ai(k+1)=0a_{i}(k+1)=0 for i{1,2,,N}i\in\{1,2,\cdots,N\}.
5:     else if i=1Ndi(k)=1\sum_{i=1}^{N}d_{i}(k)=1 then
6:         Calculate wi(k)1{zi(k)+2w}(cihard(k)zi(k))w_{i}(k)\triangleq 1_{\{z_{i}(k)+2\leq w\}}\cdot\big{(}c_{i}^{\text{hard}}(k)-z_{i}(k)\big{)}.
7:         Set ai(k+1)=1a_{i^{*}}(k+1)=1 if i=argmaxi(βiμt,iwi(k))i^{*}=\arg\max_{i}\big{(}\beta_{i}\mu_{t,i}w_{i}(k)\big{)}.
8:         If multiple ii^{*} exist, select one randomly.
9:     end if
10:end for
11:Output Θhard\Theta_{\text{hard}}.
Proposition 4.

Algorithm 2 minimizes the Lyapunov Drift (see (29)) in every time slot.

Proof.

The proof is similar to Appendix L. ∎

VI Numerical Results

In this section, we verify our findings in Section III, Section IV, and Section V through simulations.

VI-A Simulations for AoC under the Soft Deadline

In Fig. 5, we compare theoretical and simulated average AoC under the soft deadline. The parameters are set as w=0.5w=0.5, μc=3\mu_{c}=3, μt{2,3}\mu_{t}\in\{2,3\}, and λ(0,1.8]\lambda\in(0,1.8]. The results show that the theoretical average AoC values derived in Theorem 2 closely match the simulation outcomes. This alignment implies that the theoretical expressions in Theorem 2 are accurate for both cases where μtμc\mu_{t}\neq\mu_{c} and μt=μc\mu_{t}=\mu_{c}. Additionally, in both scenarios, the average AoC initially decreases and then increases. When μt=2<3=μc\mu_{t}=2<3=\mu_{c}, the computation capability exceeds the communication capability. If λ0\lambda\to 0, the communication queue is almost empty, meaning that the communication resources (and thus the computation resources) are underutilized, resulting in a high average AoC. Conversely, if λμt\lambda\to\mu_{t}, the communication queue is busy, meaning that the communication capability is utilized almost to its total capacity, resulting in many tasks waiting in the communication queue to be transmitted, which also leads to a high average AoC. Since μt\mu_{t} and μc\mu_{c} are symmetric in the expressions of Theorem 2, this conclusion is valid for the case when μtμc\mu_{t}\geq\mu_{c}. Using numerical methods (e.g., gradient methods as described in [31]), we can find the optimal λ\lambda for the expressions in Theorem 2.

Refer to caption
Figure 5: Average AoC v.s. λ\lambda under the soft deadline
Refer to caption
(a) Approximated and simulated AoC under different cases.
Refer to caption
(b) Approximated and simulated computation throughput when μt=2,μc=3\mu_{t}=2,\mu_{c}=3.
Figure 6: Numerical results for the average AoC under the hard deadline

VI-B Simulations for AoC under the Hard Deadline

In Fig. 6, we investigate the average AoC under the hard deadline scenario. The comparisons between the approximated and simulated average AoC are presented in Fig. 6(a). We observe that when μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda, the approximations of the average AoC in (15) and (16) closely match the simulation results. This agreement verifies the theoretical expressions in Theorem 5. Furthermore, the curves of the approximated average AoC are below those of the simulated average AoC, implying that the approximations in (15) and (16) serve as lower bounds for the average AoC, confirming the validity of Remark 3. Additionally, in both scenarios, the average AoC initially decreases and then increases, similar to the trend observed under the soft deadline. This suggests that the computation freshness exhibits a convex relationship with respect to λ\lambda, indicating the existence of an optimal λ\lambda that minimizes the average AoC.

We analyze the computation throughput as defined in (18) in Fig. 6(b). The approximated computation throughput in (19) closely matches the simulated results, indicating the accuracy of the approximation. Although the positive correlations among {Tk}k\{T_{k}\}_{k} are removed in (19), making it an upper bound, the reduction in the expectation of MM is small. Consequently, the change in 𝔼[j=1MXj]\mathbb{E}[\sum_{j=1}^{M}X_{j}] is negligible. Therefore, (19) serves as a tight upper bound for the computation throughput, not only when μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda, but also across the entire range of λ\lambda.

Refer to caption
(a) Relationship between the average AoC and computation throughput.
Refer to caption
(b) Relationship between the optimal average AoC and computation throughput.
Figure 7: Relationships between computation freshness and computation throughput under the hard deadline.

The relationship between the computation freshness and the computation throughput is examined in Fig. 7. In Fig. 7(a), we plot Θ^\hat{\Theta} from (14) versus Ξ^\hat{\Xi} from (19) when λ(0,1.8)\lambda\in(0,1.8) and (μt,μc)=(2,3),(3,2),(2,5)(\mu_{t},\mu_{c})=(2,3),(3,2),(2,5). These curves are observed to be decreasing, indicating that both Θ^\hat{\Theta} and Ξ^\hat{\Xi} achieve their optimal values simultaneously. When the network is stable, i.e., λ<min{μt,μc}=μt\lambda<\min\{\mu_{t},\mu_{c}\}=\mu_{t}, the channel throughput, defined as the number of completed tasks per slot, is λ\lambda. Fig. 7(a) shows that the approximated AoC decreases consistently with the approximated computation throughput. This suggests that computation throughput serves as a reliable proxy for the average AoC. Practically, we can minimize the average AoC by maximizing the computation throughput.

Additionally, the comparison of Θ^\hat{\Theta} versus Ξ^\hat{\Xi} for (μt,μc)=(2,3)(\mu_{t},\mu_{c})=(2,3) (green solid line) aligns with that for (μt,μc)=(3,2)(\mu_{t},\mu_{c})=(3,2) (blue scatter line). This demonstrates that interchanging the communication and computation rates does not alter the relationship, highlighting the symmetry with respect to (μt,μc)(\mu_{t},\mu_{c}) as discussed below Theorem 5. Comparing Θ^\hat{\Theta} versus Ξ^\hat{\Xi} when (μt,μc)=(2,3)(\mu_{t},\mu_{c})=(2,3) (the green solid line) with Θ^\hat{\Theta} versus Ξ^\hat{\Xi} when (μt,μc)=(2,5)(\mu_{t},\mu_{c})=(2,5) (the red dashed line), we observe that, for any fixed λ\lambda, the approximated computation throughput under (μt,μc)=(2,5)(\mu_{t},\mu_{c})=(2,5) is larger than that under (μt,μc)=(2,3)(\mu_{t},\mu_{c})=(2,3). Similarly, the approximated average AoC under (μt,μc)=(2,5)(\mu_{t},\mu_{c})=(2,5) is smaller than that under (μt,μc)=(2,3)(\mu_{t},\mu_{c})=(2,3). This is as expected because larger communication or computation capabilities lead to better computation freshness and higher throughput. Both the optimal approximated average AoC and the approximated computation throughput for (μt,μc)=(2,5)(\mu_{t},\mu_{c})=(2,5) outperform those for (μt,μc)=(2,3)(\mu_{t},\mu_{c})=(2,3) due to the consistency between the AoC and the computation throughput.

Finally, we solve the optimization (22) when u=0u=0, and plot the optimal Θ^\hat{\Theta} versus the corresponding Ξ^\hat{\Xi} in Fig. 7(b). We set μt=2\mu_{t}=2, λ(0,2)\lambda\in(0,2), allowing μc\mu_{c} to vary from 33 to 66. As expected, as the computation rate μc\mu_{c} (representing the computation power of the network) increases, the optimal (approximated) average AoC decreases while the corresponding (approximated) computation throughput increases.

VI-C AoC-based Scheduling in Multi-Source Networks

In this subsection, we numerically validate the time-discrete and real-time Max-Weight scheduling policies proposed in Section V. As a benchmark, we compare them against stationary randomized policies [29]: Under these policies, when the receiver makes a scheduling decision, transmitter ii is selected with a fixed probability qiq_{i}, and the receiver remains idle with a fixed probability q0q_{0}. We consider a symmetric network, i.e., λi=λ\lambda_{i}=\lambda for all ii. The parameters are set as N=5N=5, μc=1\mu_{c}=1, μt=0.5\mu_{t}=0.5, w{4,10}w\in\{4,10\}, and λ[0.1,0.9]\lambda\in[0.1,0.9]. Given the network symmetry, we set qi=1Nq_{i}=\frac{1}{N} and βi=1\beta_{i}=1 for i{1,2,,N}i\in\{1,2,\cdots,N\}.

Refer to caption
(a) w=4w=4.
Refer to caption
(b) w=10w=10.
Figure 8: Max-Weight policies and stationary randomized policies in multi-source networks.

As shown in Fig. 8, the Max-Weight policy outperforms the benchmark stationary randomized policy, which is consistent with the findings in [29]. Unlike in Fig. 5 and Fig. 6(a), where the average AoC increases with λ\lambda, the average AoC in Fig. 8 decreases as λ\lambda increases. This difference arises because the queuing discipline in Fig. 5 and Fig. 6(a) follows a first-come, first-served (FCFS) rule, whereas Fig. 8 incorporates a preemptive rule: a newly arrived task can replace the task currently held at the transmitter, which improves freshness.

Comparing Fig. 8(a) and Fig. 8(b), we observe that when ww is small (i.e., w=4w=4), the average AoC under the hard deadline is higher than under the soft deadline. This is because fewer tasks are considered valid under the hard deadline, resulting in a higher AoC. However, when ww is large (i.e., w=10w=10), most tasks remain valid, reducing the impact of ww on the AoC. Consequently, the difference in average AoC between the hard and soft deadlines becomes less significant.

VII Conclusion and Future Directions

In this paper, we introduce a novel metric, AoC, to quantify computation freshness in 3CNs. The AoC metric relies solely on tasks’ arrival and completion timestamps, ensuring its applicability to dynamic and complex real-world 3CNs. We investigate AoC in two distinct settings. In point-to-point networks, tasks are processed sequentially with a first-come, first-served discipline; we derive closed-form expressions for the time-average AoC under both a simplified case involving M/M/1-M/M/1 systems and a general case with G/G/1-G/G/1 systems. Additionally, we define the concept of computation throughput and derive its corresponding expressions. In time-discrete multi-source networks, we apply the AoC metric to resource optimization and real-time scheduling, proposing AoC-based Max-Weight policies that leverage a Lyapunov function to minimize its drift. Simulation results comparing the proposed policies against benchmark approaches demonstrate their effectiveness.

There are two primary future research directions: (1) Deriving and analyzing time-average AoC in practical scenarios involving complex graph structures, such as sequential dependency graphs, parallel dependency graphs, and general dependency graphs [1]. (2) Exploring optimal AoC-based resource management policies in dynamic and complex 3CNs, such as task offloading in mobile edge networks.

References

  • [1] Y. Mao, C. You, J. Zhang, et al., “A Survey on Mobile Edge Computing: The Communication Perspective,” IEEE Communications Surveys and Tutorials, vol. 19, no. 4, pp. 2322–2358, 2017.
  • [2] W. Shi, J. Cao, Q. Zhang, et al., “Edge Computing: Vision and Challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, 2016.
  • [3] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and its role in the internet of things,” in Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing.   New York, NY, USA: Association for Computing Machinery, 2012.
  • [4] X. Tang, C. Cao, and Y. Wang, “Computing power network: The architecture of convergence of computing and networking towards 6G requirement,” China Communications, vol. 18, no. 2, pp. 175–185, 2021.
  • [5] A. Kosta, N. Pappas and V. Angelakis, “Age of Information: A New Concept, Metric, and Tool,” Foundations and Trends in Networking, vol. 12, no. 3, pp. 162 – 259, 2017.
  • [6] X. Chen, K. Gatsis, H. Hassani and S. Saeedi-Bidokhti, “Age of Information in Random Access Channels,” IEEE Transactions on Information Theory, vol. 68, no. 10, pp. 6548 – 6568, 2022.
  • [7] L. Huang and E. Modiano, “Optimizing age-of-information in a multiclass queueing system,” in IEEE International Symposium on Information Theory, 2015.
  • [8] X. Chen, X. Liao, and S. Saeedi-Bidokhti, “Real-time Sampling and Estimation on Random Access Channels: Age of Information and Beyond,” in IEEE International Conference on Computer Communications, 2021.
  • [9] Y. Zheng, J. Hu, and K. Yang, “Average Age of Information in Wireless Powered Relay Aided Communication Network,” IEEE Internet of Things Journal, vol. 9, no. 13, pp. 11 311 – 11 323, 2022.
  • [10] X. Chen, R. Liu, S. Wang, and S. Saeedi-Bidokhti, “Timely Broadcasting in Erasure Networks: Age-Rate Tradeoffs,” in IEEE International Symposium on Information Theory, 2021.
  • [11] X. Chen and S. Saeedi Bidokhti, “Benefits of Coding on Age of Information in Broadcast Networks,” in IEEE Information Theory Workshop, 2019.
  • [12] Q. Kuang, J. Gong, X. Chen, et al., “Age-of-information for computation-intensive messages in mobile edge computing,” in The 11th International Conference on Wireless Communications and Signal Processing, 2019.
  • [13] C. Xu, H. H. Yang, X. Wang, et al., “Optimizing information freshness in computing-enabled IoT networks,” IEEE Internet of Things Journal, vol. 7, no. 2, pp. 971 – 985, 2020.
  • [14] F. Chiariotti, O. Vikhrova, B. Soret, et al., “Peak age of information distribution for edge computing with wireless links,” IEEE Transactions on Communications, vol. 69, no. 5, pp. 3176 – 3191, 2021.
  • [15] P. Zou, O. Ozel, and S. Subramaniam, “Optimizing information freshness through computation–transmission tradeoff and queue management in edge computing,” IEEE/ACM Transactions on Networking, vol. 29, no. 2, pp. 949 – 963, 2021.
  • [16] X. Qin, Y. Li, X. Song, et al., “Timeliness of information for computation-intensive status updates in task-oriented communications,” IEEE Journal on Selected Areas in Communications, vol. 41, no. 3, pp. 623 – 638, 2023.
  • [17] Z. Tang, Z. Sun, N. Yang, et al., “Age of information analysis of multi-user mobile edge computing systems,” in IEEE Global Communications Conference, 2021.
  • [18] L. Liu, J. Qiang, Y. Wang, et al., “Age of information analysis of NOMA-MEC offloading with dynamic task arrivals,” in IEEE the 14th International Conference on Wireless Communications and Signal Processing, 2022.
  • [19] X. Chen, K. Li, and K. Yang, “Timely Requesting for Time-Critical Content Users in Decentralized F-RANs,” arXiv: 2407.02930, Jul 2024.
  • [20] J. He, D. Zhang, S. Liu, et al., “Decentralized updates scheduling for data freshness in mobile edge computing,” in IEEE International Symposium on Information Theory, 2022.
  • [21] K. Peng, P. Xiao, S. Wang, et al., “AoI-aware partial computation offloading in IIoT with edge computing: a deep reinforcement learning based approach,” IEEE Transactions on Cloud Computing, vol. 11, no. 4, pp. 3766 – 3777, 2023.
  • [22] J. Zhong, R. D. Yates, and E. Soljanin, “Two freshness metrics for local cache refresh,” in IEEE International Symposium on Information Theory, 2018.
  • [23] C. Kam, S. Kompella, G. D. Nguyen, et al., “Towards an effective age of information: Remote estimation of a Markov source,” in IEEE Conference on Computer Communications Workshops, 2018.
  • [24] A. Maatouk, S. Kriouile, M. Assaad, et al., “The Age of Incorrect Information: A New Performance Metric for Status Updates,” IEEE/ACM Transactions on Networking, vol. 28, no. 5, pp. 2215 – 2228, 2020.
  • [25] X. Zheng, S. Zhou, and Z. Niu, “Beyond Age: Urgency of Information for Timeliness Guarantee in Status Update Systems,” in 2020 2nd 6G Wireless Summit, 2020.
  • [26] S. Kaul, R. Yates, and M. Gruteser, “Real-Time Status: How Often Should One Update?” in IEEE International Conference on Computer Communications, 2012.
  • [27] J. Lou, X. Yuan, S. Kompella, et al., “AoI and throughput tradeoffs in routing-aware multi-hop wireless networks,” in IEEE Conference on Computer Communications, 2020.
  • [28] M. T. M. Emmerich and A. H. Deutz, “A tutorial on multiobjective optimization: fundamentals and evolutionary methods,” Natural Computing, vol. 17, pp. 585 – 609, 2018.
  • [29] I. Kadota and E. Modiano, “Minimizing the Age of Information in Wireless Networks with Stochastic Arrivals,” IEEE Transactions on Mobile Computing, vol. 20, no. 3, pp. 1173 – 1185, 2021.
  • [30] M. J. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems.   Morgan and Claypool Publishers, 2010.
  • [31] S. Boyd and L. Vandenberghe, Convex Optimization.   Cambridge University Press, 2004.
  • [32] R. Nelson, Probability, stochastic processes, and queueing theory: the mathematics of computer performance modeling.   Springer-Verlag New York, Inc., 1995.

Appendix A Proof of Theorem 1

Recall that ϵ^w(t)=A(t)G(t)\hat{\epsilon}_{w}(t)=\frac{A(t)}{G(t)} represents the frequency of valid tasks and that {Tk}k\{T_{k}\}_{k} is identically distributed across kk. The limit of ϵ^w(t)\hat{\epsilon}_{w}(t) exists and is denoted by ϵw\epsilon_{w}. We have:

limtϵ^w(t)=Pr(Tk>w)ϵw.\displaystyle\lim_{t\to\infty}\hat{\epsilon}_{w}(t)=\Pr(T_{k}>w)\triangleq\epsilon_{w}.

Additionally, ϵ^w(t)\hat{\epsilon}_{w}(t) remains unchanged until a new task is completed. Consider the kk-th valid task, where the corresponding inter-arrival time is XkX_{k} and the system time is TkT_{k}. Let the corresponding value of ϵ^w(t)\hat{\epsilon}_{w}(t) be ϵ^k\hat{\epsilon}_{k}. We then have:

limkϵ^k=limtϵ^w(t)=ϵw.\displaystyle\lim_{k\to\infty}\hat{\epsilon}_{k}=\lim_{t\to\infty}\hat{\epsilon}_{w}(t)=\epsilon_{w}.
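The convergence of the empirical fraction to Pr(T_k > w) is easy to verify by simulation. A minimal sketch, assuming for illustration that the system times T_k are i.i.d. Exp(2) (any i.i.d. model with a computable tail behaves the same way):

```python
import math
import random

random.seed(0)
w, rate = 1.0, 2.0                    # deadline and illustrative Exp(rate) system time
K = 200_000
exceed = 0
for _ in range(K):
    if random.expovariate(rate) > w:  # task delayed beyond the deadline
        exceed += 1
eps_hat = exceed / K                  # empirical fraction after K tasks
eps_w = math.exp(-rate * w)           # Pr(T_k > w) for an Exp(rate) system time
```

With K = 200,000 tasks the empirical fraction sits within a few tenths of a percent of the exact tail probability.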

To derive an expression for the average AoC, we use a graphical argument similar to that in [5]. Consider the sum of: (i) the area corresponding to the interval (τk1,τk](\tau_{k-1},\tau_{k}], i.e., the area of trapezoid ABCD¯\overline{ABCD}, and (ii) the area corresponding to the additional latency incurred by task kk, i.e., the area of triangle DEF¯\overline{DEF} in Fig. 2 (a), or the area of trapezoid HGDE¯\overline{HGDE} in Fig. 2 (b). Note that SHGDE¯=SDEF¯SFGH¯S_{\overline{HGDE}}=S_{\overline{DEF}}-S_{\overline{FGH}}. Denoting the corresponding total area by SkS_{k}, from Fig. 2 we can compute

Sk=\displaystyle S_{k}= XkTk+Xk22+ϵ^k2((Tkw)+)2\displaystyle X_{k}T_{k}+\frac{X_{k}^{2}}{2}+\frac{\hat{\epsilon}_{k}}{2}\big{(}(T_{k}-w)^{+}\big{)}^{2}
\displaystyle- ϵ^k2((TkSk,cw)+)2\displaystyle\frac{\hat{\epsilon}_{k}}{2}\big{(}(T_{k}-S_{k,c}-w)^{+}\big{)}^{2}
\displaystyle\triangleq Qk,1+ϵ^k2Qk,2.\displaystyle Q_{k,1}+\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}. (32)

Since {Xk}k\{X_{k}\}_{k}, {Sk,c}k\{S_{k,c}\}_{k} and {Tk}k\{T_{k}\}_{k} are i.i.d., the sequences {Qk,1}k\{Q_{k,1}\}_{k} and {Qk,2}k\{Q_{k,2}\}_{k} are also i.i.d., respectively.

Let the number of valid tasks be KK, and consider the limit as KK\to\infty. Using the areas computed above, the average AoC can be computed by

Θ^soft=\displaystyle\hat{\Theta}_{\text{soft}}= limKk=1K(Qk,1+ϵ^k2Qk,2)k=1KXk\displaystyle\lim_{K\to\infty}\frac{\sum_{k=1}^{K}\Big{(}Q_{k,1}+\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}\Big{)}}{\sum_{k=1}^{K}X_{k}}
=\displaystyle= limK1Kk=1K(Qk,1+ϵ^k2Qk,2)1Kk=1KXk.\displaystyle\lim_{K\to\infty}\frac{\frac{1}{K}\sum_{k=1}^{K}\Big{(}Q_{k,1}+\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}\Big{)}}{\frac{1}{K}\sum_{k=1}^{K}X_{k}}.

By the Law of Large Numbers, Θsoft\Theta_{\text{soft}} reduces to

Θsoft=\displaystyle\Theta_{\text{soft}}= 𝔼[Qk,1]𝔼[Xk]+limK1Kk=1Kϵ^k2Qk,2𝔼[Xk].\displaystyle\frac{\mathbb{E}[Q_{k,1}]}{\mathbb{E}[X_{k}]}+\lim_{K\to\infty}\frac{\frac{1}{K}\sum_{k=1}^{K}\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}}{\mathbb{E}[X_{k}]}. (33)

Now, let us focus on the term 1Kk=1Kϵ^k2Qk,2\frac{1}{K}\sum_{k=1}^{K}\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}. Since limkϵ^k=ϵw\lim_{k\to\infty}\hat{\epsilon}_{k}=\epsilon_{w}, for any small η>0\eta>0, there exists a large integer H(η)H(\eta) such that |ϵ^kϵw|η|\hat{\epsilon}_{k}-\epsilon_{w}|\leq\eta for all kH(η)k\geq H(\eta). Then:

limK1Kk=1Kϵ^k2Qk,2=\displaystyle\lim_{K\to\infty}\frac{1}{K}\sum_{k=1}^{K}\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}= limK1K(k=1H(η)+k=H(η)K)ϵ^k2Qk,2\lim_{K\to\infty}\frac{1}{K}\big{(}\sum_{k=1}^{H(\eta)}+\sum_{k=H(\eta)}^{K}\big{)}\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}
=(a)\displaystyle\overset{(a)}{=} limK1Kk=H(η)Kϵ^k2Qk,2.\lim_{K\to\infty}\frac{1}{K}\sum_{k=H(\eta)}^{K}\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}.

The equality (a) holds because limK1Kk=1H(η)ϵ^k2Qk,2=0\lim_{K\to\infty}\frac{1}{K}\sum_{k=1}^{H(\eta)}\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}=0, as there are only a finite number (H(η)H(\eta)) of terms and each term is a finite scalar.

Since |ϵ^kϵw|η|\hat{\epsilon}_{k}-\epsilon_{w}|\leq\eta for kH(η)k\geq H(\eta), the summation 1Kk=H(η)Kϵ^k2Qk,2\frac{1}{K}\sum_{k=H(\eta)}^{K}\frac{\hat{\epsilon}_{k}}{2}Q_{k,2} is bounded by

limKϵwη2Kk=H(η)KQk,2limK1Kk=H(η)Kϵ^k2Qk,2\displaystyle\lim_{K\to\infty}\frac{\epsilon_{w}-\eta}{2K}\sum_{k=H(\eta)}^{K}Q_{k,2}\leq\lim_{K\to\infty}\frac{1}{K}\sum_{k=H(\eta)}^{K}\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}
limKϵw+η2Kk=H(η)KQk,2.\displaystyle\leq\lim_{K\to\infty}\frac{\epsilon_{w}+\eta}{2K}\sum_{k=H(\eta)}^{K}Q_{k,2}.

By the Law of Large Numbers, we have:

ϵwη2𝔼[Qk,2]limK1Kk=H(η)Kϵ^k2Qk,2\displaystyle\frac{\epsilon_{w}-\eta}{2}\mathbb{E}[Q_{k,2}]\leq\lim_{K\to\infty}\frac{1}{K}\sum_{k=H(\eta)}^{K}\frac{\hat{\epsilon}_{k}}{2}Q_{k,2}
ϵw+η2𝔼[Qk,2].\displaystyle\leq\frac{\epsilon_{w}+\eta}{2}\mathbb{E}[Q_{k,2}]. (34)

Substituting (34) into (33), and taking η0\eta\to 0, we obtain (1).

Appendix B Proof of Theorem 2

First of all, we denote

Q1=𝔼[XkTk+Xk22]𝔼[Xk],Q2=ϵwq1q22𝔼[Xk]\displaystyle Q_{1}=\frac{\mathbb{E}[X_{k}T_{k}+\frac{X_{k}^{2}}{2}]}{\mathbb{E}[X_{k}]},\,\,Q_{2}=\epsilon_{w}\cdot\frac{q_{1}-q_{2}}{2\mathbb{E}[X_{k}]}
q1=𝔼[((Tkw)+)2]\displaystyle q_{1}=\mathbb{E}\big{[}\big{(}(T_{k}-w)^{+}\big{)}^{2}\big{]}
q2=𝔼[((TkSk,cw)+)2].\displaystyle q_{2}=\mathbb{E}\big{[}\big{(}(T_{k}-S_{k,c}-w)^{+}\big{)}^{2}\big{]}.

From [16, Proposition 1], Q1Q_{1} coincides with the average AoI, which has the following expression:

Q1=1λ+1μt+1μc+λ2μt2(μtλ)\displaystyle Q_{1}=\frac{1}{\lambda}+\frac{1}{\mu_{t}}+\frac{1}{\mu_{c}}+\frac{\lambda^{2}}{\mu_{t}^{2}(\mu_{t}-\lambda)}
+λ2μc2(μcλ)+λ2μtμc(μt+μcλ).\displaystyle+\frac{\lambda^{2}}{\mu_{c}^{2}(\mu_{c}-\lambda)}+\frac{\lambda^{2}}{\mu_{t}\mu_{c}(\mu_{t}+\mu_{c}-\lambda)}. (35)

To obtain the closed-form expression for Θsoft\Theta_{\text{soft}} in (1), it suffices to obtain a closed-form expression for Q2Q_{2}.

Step 1. We first obtain ϵw\epsilon_{w}. Let UtU_{t} and UcU_{c} denote the system times in the transmission queue and the computation queue, respectively. Since the tandem is a combination of M/M/1 queues, by the memoryless property, UtU_{t} and UcU_{c} are independent [32, 16]. The density functions of UtU_{t} and UcU_{c} are given by [32]:

fUt(x)=\displaystyle f_{U_{t}}(x)= (μtλ)e(μtλ)x\displaystyle(\mu_{t}-\lambda)e^{-(\mu_{t}-\lambda)x} (36)
fUc(x)=\displaystyle f_{U_{c}}(x)= (μcλ)e(μcλ)x.\displaystyle(\mu_{c}-\lambda)e^{-(\mu_{c}-\lambda)x}. (37)

Due to independence, the density function of the total system time, i.e., Tk=Ut+UcT_{k}=U_{t}+U_{c}, is obtained by convolution [32]:

fTk(x)={μtδtμcδcμcμt(eμtδtxeμcδcx),μtμcμt2δt2xeμtδtx,μt=μc.\displaystyle f_{T_{k}}(x)=\left\{\begin{aligned} &\frac{\mu_{t}\delta_{t}\mu_{c}\delta_{c}}{\mu_{c}-\mu_{t}}(e^{-\mu_{t}\delta_{t}x}-e^{-\mu_{c}\delta_{c}x}),\,\,\mu_{t}\neq\mu_{c}\\ &\mu_{t}^{2}\delta_{t}^{2}xe^{-\mu_{t}\delta_{t}x},\,\,\mu_{t}=\mu_{c}.\end{aligned}\right. (38)

Recall that ϵw=Pr(Tk>w)\epsilon_{w}=\Pr\big{(}T_{k}>w\big{)}. From (38), ϵw\epsilon_{w} can be computed as

ϵw={μcδceμtδtwμtδteμcδcwμcμt,μtμc(1+μtδtw)eμtδtw,μt=μc.\displaystyle\epsilon_{w}=\left\{\begin{aligned} &\frac{\mu_{c}\delta_{c}e^{-\mu_{t}\delta_{t}w}-\mu_{t}\delta_{t}e^{-\mu_{c}\delta_{c}w}}{\mu_{c}-\mu_{t}},&&\mu_{t}\neq\mu_{c}\\ &(1+\mu_{t}\delta_{t}w)e^{-\mu_{t}\delta_{t}w},&&\mu_{t}=\mu_{c}.\end{aligned}\right. (39)
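Formula (39) can be sanity-checked by Monte Carlo. Reading the rates mu_t*delta_t = mu_t - lambda and mu_c*delta_c = mu_c - lambda off (36)-(38) (an assumption of this sketch), T_k is a sum of two independent exponentials, and the empirical tail probability should match (39) in the case mu_t != mu_c:

```python
import math
import random

random.seed(42)
lam, mu_t, mu_c, w = 1.0, 2.0, 3.0, 0.8
a, b = mu_t - lam, mu_c - lam     # assumed rates mu_t*delta_t and mu_c*delta_c
# closed form (39) for mu_t != mu_c; note that b - a = mu_c - mu_t
eps_w = (b * math.exp(-a * w) - a * math.exp(-b * w)) / (mu_c - mu_t)
# Monte Carlo: T_k is the sum of two independent exponential system times
K = 400_000
hits = sum(random.expovariate(a) + random.expovariate(b) > w for _ in range(K))
eps_hat = hits / K
```

The agreement confirms that (39) is simply the tail of the hypoexponential density in (38).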

Step 2. We compute q1q_{1}, first for the case μtμc\mu_{t}\neq\mu_{c} and then for the case μt=μc\mu_{t}=\mu_{c}.

When μtμc\mu_{t}\neq\mu_{c}, according to (38), we have

𝔼[((Tkw)+)2]=\displaystyle\mathbb{E}\big{[}\big{(}(T_{k}-w)^{+}\big{)}^{2}\big{]}= μtδtμcδcμcμt(w(xw)2eμtδtxdx\displaystyle\frac{\mu_{t}\delta_{t}\mu_{c}\delta_{c}}{\mu_{c}-\mu_{t}}\Big{(}\int_{w}^{\infty}(x-w)^{2}e^{-\mu_{t}\delta_{t}x}dx
\displaystyle- w(xw)2eμcδcxdx).\displaystyle\int_{w}^{\infty}(x-w)^{2}e^{-\mu_{c}\delta_{c}x}dx\Big{)}.

By some algebra,

w(xw)2eμtδtx𝑑x=2eμtδtwμt3δt3\displaystyle\int_{w}^{\infty}(x-w)^{2}e^{-\mu_{t}\delta_{t}x}dx=\frac{2e^{-\mu_{t}\delta_{t}w}}{\mu_{t}^{3}\delta_{t}^{3}}
w(xw)2eμcδcx𝑑x=2eμcδcwμc3δc3.\displaystyle\int_{w}^{\infty}(x-w)^{2}e^{-\mu_{c}\delta_{c}x}dx=\frac{2e^{-\mu_{c}\delta_{c}w}}{\mu_{c}^{3}\delta_{c}^{3}}.

Therefore, when μtμc\mu_{t}\neq\mu_{c},

𝔼[((Tkw)+)2]=μtδtμcδcμcμt(2eμtδtwμt3δt32eμcδcwμc3δc3).\displaystyle\mathbb{E}\big{[}\big{(}(T_{k}-w)^{+}\big{)}^{2}\big{]}=\frac{\mu_{t}\delta_{t}\mu_{c}\delta_{c}}{\mu_{c}-\mu_{t}}\Big{(}\frac{2e^{-\mu_{t}\delta_{t}w}}{\mu_{t}^{3}\delta_{t}^{3}}-\frac{2e^{-\mu_{c}\delta_{c}w}}{\mu_{c}^{3}\delta_{c}^{3}}\Big{)}. (40)
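Both integrals above are instances of the identity \int_w^\infty (x-w)^2 e^{-ax} dx = 2e^{-aw}/a^3 (substitute u = x - w and use the second moment of an exponential). A quick numerical check of this identity with a trapezoid rule:

```python
import math

def tail_integral(a, w, n=100_000, upper=40.0):
    # trapezoid rule for (x - w)^2 * exp(-a*x) over [w, w + upper];
    # the tail beyond w + upper is negligible for the rates used here
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        x = w + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * (x - w) ** 2 * math.exp(-a * x)
    return total * h

a, w = 1.5, 0.7
closed = 2.0 * math.exp(-a * w) / a ** 3   # the claimed closed form
approx = tail_integral(a, w)
```

The same identity is reused, with a = mu_c*delta_c, for the second integral.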

Next, when μt=μc\mu_{t}=\mu_{c}, according to (38), we have

𝔼[((Tkw)+)2]=eμtδtw(6μt2δt2+2wμtδt).\displaystyle\mathbb{E}\big{[}\big{(}(T_{k}-w)^{+}\big{)}^{2}\big{]}=e^{-\mu_{t}\delta_{t}w}(\frac{6}{\mu_{t}^{2}\delta_{t}^{2}}+\frac{2w}{\mu_{t}\delta_{t}}). (41)

Step 3. We compute q2q_{2} by considering the waiting time in the computation queue, denoted as WW. We have the following relation:

TkSk,c=Ut+W.\displaystyle T_{k}-S_{k,c}=U_{t}+W.

According to [32], the waiting time in the computation queue, WW, follows the density function:

fW(x)=ρcμc(1ρc)eμc(1ρc)x+(1ρc)δ(x),\displaystyle f_{W}(x)=\rho_{c}\mu_{c}(1-\rho_{c})e^{-\mu_{c}(1-\rho_{c})x}+(1-\rho_{c})\delta(x),

where δ(x)\delta(x) is the Dirac delta function. Since UtU_{t} and UcU_{c} are independent, WW and UtU_{t} are also independent. By convolution, we can derive the density function of Ut+WU_{t}+W. When μtμc\mu_{t}\neq\mu_{c}, we have:

fUt+W(x)=\displaystyle f_{U_{t}+W}(x)= (1ρc)μtδteμtδtx\displaystyle(1-\rho_{c})\mu_{t}\delta_{t}e^{-\mu_{t}\delta_{t}x}
+\displaystyle+ ρcμcδcμtδt(eμtδtxeμcδcx)μcμt,\frac{\rho_{c}\mu_{c}\delta_{c}\mu_{t}\delta_{t}(e^{-\mu_{t}\delta_{t}x}-e^{-\mu_{c}\delta_{c}x})}{\mu_{c}-\mu_{t}},

and

q2=2ζtμt3δt3μcδcμtδtδcμt2δt2μcμt2ζcμc3δc3ρcμcδcμtδtμcμt.\displaystyle q_{2}=\frac{2\zeta_{t}}{\mu_{t}^{3}\delta_{t}^{3}}\frac{\mu_{c}\delta_{c}\mu_{t}\delta_{t}-\delta_{c}\mu_{t}^{2}\delta_{t}^{2}}{\mu_{c}-\mu_{t}}-\frac{2\zeta_{c}}{\mu_{c}^{3}\delta_{c}^{3}}\frac{\rho_{c}\mu_{c}\delta_{c}\mu_{t}\delta_{t}}{\mu_{c}-\mu_{t}}. (42)

When μt=μc\mu_{t}=\mu_{c}, we have:

fUt+W(x)=μtδt2eμtδtx+ρtμt2δt2xeμtδtx,\displaystyle f_{U_{t}+W}(x)=\mu_{t}\delta_{t}^{2}e^{-\mu_{t}\delta_{t}x}+\rho_{t}\mu_{t}^{2}\delta_{t}^{2}xe^{-\mu_{t}\delta_{t}x},

and

q2=2ζtμt2δt+ρtζt(2wμtδt+6μt2δt2).\displaystyle q_{2}=\frac{2\zeta_{t}}{\mu_{t}^{2}\delta_{t}}+\rho_{t}\zeta_{t}(\frac{2w}{\mu_{t}\delta_{t}}+\frac{6}{\mu_{t}^{2}\delta_{t}^{2}}). (43)
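The mixed waiting-time density f_W used above (an atom of mass 1 - rho_c at zero plus an exponential tail) is straightforward to sample from, which is useful for checking the tandem numerically. A minimal sketch (the helper name is ours); the empirical mean should match the standard M/M/1 waiting time E[W] = rho_c / (mu_c (1 - rho_c)):

```python
import random

def sample_waiting_time(rho_c, mu_c):
    # with probability 1 - rho_c the server is idle on arrival (W = 0);
    # otherwise W is exponential with rate mu_c * (1 - rho_c)
    if random.random() < 1.0 - rho_c:
        return 0.0
    return random.expovariate(mu_c * (1.0 - rho_c))

random.seed(7)
rho_c, mu_c = 0.4, 3.0
K = 200_000
mean_w = sum(sample_waiting_time(rho_c, mu_c) for _ in range(K)) / K
expected = rho_c / (mu_c * (1.0 - rho_c))   # E[W] for the M/M/1 queue
```

Sampling the atom and the tail separately avoids having to represent the Dirac component of the density explicitly.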

From Step 1, Step 2 and Step 3, substituting (35), (39), (40), (41), (42), and (43) into (1), we obtain the desired results.

Appendix C Proof of Theorem 3

The proof is inspired by the proof of [16, Proposition 1].

We first divide the delay of task kk into 44 components:

Tk=Wk,t+Sk,t+Wk,c+Sk,c,\displaystyle T_{k}=W_{k,t}+S_{k,t}+W_{k,c}+S_{k,c},

where Wk,t=Uk,tSk,tW_{k,t}=U_{k,t}-S_{k,t} and Wk,c=Uk,cSk,cW_{k,c}=U_{k,c}-S_{k,c} denote the waiting times of task kk in the transmission and computation queues, respectively, while Sk,tS_{k,t} and Sk,cS_{k,c} represent the service times of task kk in the transmission and computation queues, respectively. From (1), Θsoft\Theta_{\text{soft}} can be re-written as

Θsoft=\displaystyle\Theta_{\text{soft}}= 1𝔼[Xk](𝔼[XkWk,t]+𝔼[XkSk,t]\displaystyle\frac{1}{\mathbb{E}[X_{k}]}\Big{(}\mathbb{E}[X_{k}W_{k,t}]+\mathbb{E}[X_{k}S_{k,t}]
+\displaystyle+ 𝔼[XkWk,c]+𝔼[XkSk,c]+12𝔼[Xk2]\displaystyle\mathbb{E}[X_{k}W_{k,c}]+\mathbb{E}[X_{k}S_{k,c}]+\frac{1}{2}\mathbb{E}[X_{k}^{2}]
+\displaystyle+ Pr(Tk>w)2𝔼[((Tkw)+)2]\displaystyle\frac{\Pr(T_{k}>w)}{2}\mathbb{E}[\big{(}(T_{k}-w)^{+}\big{)}^{2}]
\displaystyle- Pr(Tk>w)2𝔼[((TkSk,cw)+)2]).\displaystyle\frac{\Pr(T_{k}>w)}{2}\mathbb{E}[\big{(}(T_{k}-S_{k,c}-w)^{+}\big{)}^{2}]\Big{)}. (44)

Note that XkX_{k} is independent of Sk,tS_{k,t} and Sk,cS_{k,c}, with 𝔼[Xk]=1λ\mathbb{E}[X_{k}]=\frac{1}{\lambda}, 𝔼[Sk,t]=1μt\mathbb{E}[S_{k,t}]=\frac{1}{\mu_{t}}, and 𝔼[Sk,c]=1μc\mathbb{E}[S_{k,c}]=\frac{1}{\mu_{c}}. Therefore, to compute Θsoft\Theta_{\text{soft}}, it suffices to determine 𝔼[XkWk,t]\mathbb{E}[X_{k}W_{k,t}], 𝔼[XkWk,c]\mathbb{E}[X_{k}W_{k,c}], and Pr(Tk>w)2𝔼[((Tkw)+)2]\frac{\Pr(T_{k}>w)}{2}\mathbb{E}[\big{(}(T_{k}-w)^{+}\big{)}^{2}]. Denote g1=𝔼[XkWk,t]g_{1}=\mathbb{E}[X_{k}W_{k,t}], g2=𝔼[XkWk,c]g_{2}=\mathbb{E}[X_{k}W_{k,c}], g3=Pr(Tk>w)2g_{3}=\frac{\Pr(T_{k}>w)}{2}, g4=𝔼[((Tkw)+)2]g_{4}=\mathbb{E}[\big{(}(T_{k}-w)^{+}\big{)}^{2}], and g5=𝔼[((TkSk,cw)+)2]g_{5}=\mathbb{E}[\big{(}(T_{k}-S_{k,c}-w)^{+}\big{)}^{2}].

Step 1. To obtain g1g_{1}, we proceed as follows:

g1=𝔼[XkWk,t]=0x𝔼[Wk,t|Xk=x]fX(x)𝑑x.\displaystyle g_{1}=\mathbb{E}[X_{k}W_{k,t}]=\int_{0}^{\infty}x\mathbb{E}[W_{k,t}|X_{k}=x]f_{X}(x)dx. (45)

Since Wk,tW_{k,t} represents the waiting time in the transmission queue, we have Wk,t=(Uk1,tXk)+W_{k,t}=\big{(}U_{k-1,t}-X_{k}\big{)}^{+}. Thus,

𝔼[Wk,t|Xk=x]=\displaystyle\mathbb{E}[W_{k,t}|X_{k}=x]= 𝔼[(Uk1,tXk)+|Xk=x]\displaystyle\mathbb{E}[\big{(}U_{k-1,t}-X_{k}\big{)}^{+}|X_{k}=x]
=\displaystyle= 𝔼[(Uk1,tx)+]\displaystyle\mathbb{E}[(U_{k-1,t}-x)^{+}]
=\displaystyle= x(τx)fUt(τ)𝑑τ.\displaystyle\int_{x}^{\infty}(\tau-x)f_{U_{t}}(\tau)d\tau. (46)

Substituting (46) into (45), we obtain:

g1=0xx(τx)fUt(τ)𝑑τfX(x)𝑑x.\displaystyle g_{1}=\int_{0}^{\infty}x\int_{x}^{\infty}(\tau-x)f_{U_{t}}(\tau)d\tau f_{X}(x)dx.
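For the exponential special case, the inner integral in g_1 evaluates in closed form: with X ~ Exp(lambda) and U_{k-1,t} ~ Exp(a) independent, E[(U - x)^+] = e^{-ax}/a, so g_1 = lambda / (a (a + lambda)^2). A Monte Carlo check of this worked instance (the parameter values are illustrative):

```python
import random

random.seed(3)
lam, a = 1.0, 1.5                    # X ~ Exp(lam), U_{k-1,t} ~ Exp(a)
K = 400_000
acc = 0.0
for _ in range(K):
    x = random.expovariate(lam)
    u = random.expovariate(a)
    acc += x * max(u - x, 0.0)       # X_k * W_{k,t} with W_{k,t} = (U - X)^+
g1_mc = acc / K
g1 = lam / (a * (a + lam) ** 2)      # closed form of the worked instance
```

The independence of X_k and U_{k-1,t} used here is exactly the property exploited in (46).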

Step 2. To obtain g2g_{2}, we proceed as follows:

g2=\displaystyle g_{2}= 𝔼[XkWk,c]=0x𝔼[Wk,c|Xk=x]fX(x)𝑑x\displaystyle\mathbb{E}[X_{k}W_{k,c}]=\int_{0}^{\infty}x\mathbb{E}[W_{k,c}|X_{k}=x]f_{X}(x)dx
=\displaystyle= 0x(0τfWk,c|Xk=x(τ)𝑑τ)fX(x)𝑑x.\displaystyle\int_{0}^{\infty}x\Big{(}\int_{0}^{\infty}\tau f_{W_{k,c}|X_{k}=x}(\tau)d\tau\Big{)}f_{X}(x)dx. (47)

The rest of this step is to find fWk,c|Xk=x(τ)f_{W_{k,c}|X_{k}=x}(\tau).

We introduce another random variable, Dk,tD_{k,t}, which represents the inter-departure time associated with task kk in the transmission queue. Note that XkDk,tWk,cX_{k}-D_{k,t}-W_{k,c} forms a Markov chain. By the chain rule,

Pr{Xk=x,Dk,t=y,Wk,c=a}=Pr{Xk=x}\displaystyle\Pr\{X_{k}=x,D_{k,t}=y,W_{k,c}=a\}=\Pr\{X_{k}=x\}
×Pr{Dk,t=y|Xk=x}Pr{Wk,c=a|Dk,t=y},\displaystyle\times\Pr\{D_{k,t}=y|X_{k}=x\}\Pr\{W_{k,c}=a|D_{k,t}=y\},

which implies

Pr{Xk=x,Dk,t=y,Wk,c=a}Pr{Xk=x}\displaystyle\frac{\Pr\{X_{k}=x,D_{k,t}=y,W_{k,c}=a\}}{\Pr\{X_{k}=x\}}
=Pr{Dk,t=y|Xk=x}Pr{Wk,c=a|Dk,t=y},\displaystyle=\Pr\{D_{k,t}=y|X_{k}=x\}\Pr\{W_{k,c}=a|D_{k,t}=y\},

and therefore,

Pr{Dk,t=y,Wk,c=a|Xk=x}\displaystyle\Pr\{D_{k,t}=y,W_{k,c}=a|X_{k}=x\}
=Pr{Dk,t=y|Xk=x}Pr{Wk,c=a|Dk,t=y}.\displaystyle=\Pr\{D_{k,t}=y|X_{k}=x\}\Pr\{W_{k,c}=a|D_{k,t}=y\}.

Using the definition of density functions, we replace the probabilities with their respective density functions. Integrating with respect to Dk,tD_{k,t} on both sides, we have:

fWk,c|Xk=x(τ)=0fDk,t|Xk=x(y)fWk,c|Dk,t=y(τ)𝑑y.\displaystyle f_{W_{k,c}|X_{k}=x}(\tau)=\int_{0}^{\infty}f_{D_{k,t}|X_{k}=x}(y)f_{W_{k,c}|D_{k,t}=y}(\tau)dy. (48)

To obtain (48), we need to obtain fDk,t|Xk=x(y)f_{D_{k,t}|X_{k}=x}(y) and fWk,c|Dk,t=y(τ)f_{W_{k,c}|D_{k,t}=y}(\tau).

Recall Wk,cW_{k,c} is the waiting time of task kk in the computation queue, we have Wk,c=(Uk1,cDk,t)+W_{k,c}=(U_{k-1,c}-D_{k,t})^{+}. Thus,

fWk,c|Dk,t=y(τ)=f(Uk1,cDk,t)+|Dk,t=y(τ)\displaystyle f_{W_{k,c}|D_{k,t}=y}(\tau)=f_{(U_{k-1,c}-D_{k,t})^{+}|D_{k,t}=y}(\tau)
=f(Uk1,cy)+(τ)=fUc(τ+y).\displaystyle=f_{(U_{k-1,c}-y)^{+}}(\tau)=f_{U_{c}}(\tau+y). (49)

At the end of this step, we obtain fDk,t|Xk=x(y)f_{D_{k,t}|X_{k}=x}(y). Define Pbusy,kP_{\text{busy},k} and Pidle,kP_{\text{idle},k} as the probabilities that the transmission queue is busy or idle, respectively, upon the arrival of task kk. Then:

fDk,t|Xk=x(y)=\displaystyle f_{D_{k,t}|X_{k}=x}(y)= Pbusy,kfDk,t|Wk,t>0,Xk=x(y)\displaystyle P_{\text{busy},k}\cdot f_{D_{k,t}|W_{k,t}>0,X_{k}=x}(y)
+\displaystyle+ Pidle,kfDk,t|Wk,t=0,Xk=x(y).\displaystyle P_{\text{idle},k}\cdot f_{D_{k,t}|W_{k,t}=0,X_{k}=x}(y). (50)

When Wk,t>0W_{k,t}>0, the inter-departure time of task kk and task k1k-1 in the transmission queue equals the service time of task kk. Thus:

fDk,t|Wk,t>0,Xk=x(y)=fSt(y)\displaystyle f_{D_{k,t}|W_{k,t}>0,X_{k}=x}(y)=f_{S_{t}}(y) (51)
Pbusy,k=Pr(Wk,t>0|Xk=x)\displaystyle P_{\text{busy},k}=\Pr(W_{k,t}>0|X_{k}=x)
=Pr(Uk1,t>x)=xfUt(τ)𝑑τ.\displaystyle=\Pr(U_{k-1,t}>x)=\int_{x}^{\infty}f_{U_{t}}(\tau)d\tau. (52)

When Wk,t=0W_{k,t}=0, Dk,tD_{k,t} consists of 22 components: the service time of task kk, Sk,tS_{k,t}, and the interval between the arrival of task kk and the departure of task k1k-1, denoted by Bk,tB_{k,t}. Hence:

fDk,t|Wk,t=0,Xk=x(y)=fSt(y)fBk,t|Wk,t=0,Xk=x(y),\displaystyle f_{D_{k,t}|W_{k,t}=0,X_{k}=x}(y)=f_{S_{t}}(y)\ast f_{B_{k,t}|W_{k,t}=0,X_{k}=x}(y), (53)

where \ast represents the convolution operation.

It follows that

Pidle,k=\displaystyle P_{\text{idle},k}= Pr(Wk,t=0|Xk=x)=Pr(Uk1,t<x)\displaystyle\Pr(W_{k,t}=0|X_{k}=x)=\Pr(U_{k-1,t}<x)
=\displaystyle= 0xfUt(τ)𝑑τ.\displaystyle\int_{0}^{x}f_{U_{t}}(\tau)d\tau. (54)

From the definition of Bk,tB_{k,t} as Bk,t=XkUk1,tB_{k,t}=X_{k}-U_{k-1,t}, and using the result from [16, Appendix A], we obtain:

fBk,t|Wk,t=0,Xk=x(y)=fBk,t|Xk=x(y)Pidle,k\displaystyle f_{B_{k,t}|W_{k,t}=0,X_{k}=x}(y)=\frac{f_{B_{k,t}|X_{k}=x}(y)}{P_{\text{idle},k}}
=\displaystyle= fXkUk1,t|Xk=x(y)Pidle,k=fUk1,t|Xk=x(Xky)Pidle,k\displaystyle\frac{f_{X_{k}-U_{k-1,t}|X_{k}=x}(y)}{P_{\text{idle},k}}=\frac{f_{U_{k-1,t}|X_{k}=x}(X_{k}-y)}{P_{\text{idle},k}}
=\displaystyle= fUk1,t(xy)Pidle,k\displaystyle\frac{f_{U_{k-1,t}}(x-y)}{P_{\text{idle},k}} (55)

Substituting (55) into (53), we get:

fDk,t|Wk,t=0,Xk=x(y)=fSt(y)fUk1,t(xy)0xfUt(τ)𝑑τ.\displaystyle f_{D_{k,t}|W_{k,t}=0,X_{k}=x}(y)=f_{S_{t}}(y)\ast\frac{f_{U_{k-1,t}}(x-y)}{\int_{0}^{x}f_{U_{t}}(\tau)d\tau}. (56)

Let ξ(x)=xfUt(u)du\xi(x)=\int_{x}^{\infty}f_{U_{t}}(u)du. Using (47)–(52), (54), and (56), we derive:

g2=\displaystyle g_{2}= 0x0τ0(ξ(x)fSt(y)+fSt(y)fUt(xy))\displaystyle\int_{0}^{\infty}x\int_{0}^{\infty}\tau\int_{0}^{\infty}(\xi(x)\cdot f_{S_{t}}(y)+f_{S_{t}}(y)\ast f_{U_{t}}(x-y)\big{)}
fUc(τ+y)dydτfX(x)dx.\displaystyle\cdot f_{U_{c}}(\tau+y)dyd\tau f_{X}(x)dx.

Step 3. We obtain g3g_{3}, g4g_{4}, and g5g_{5}. Note that Tk=Uk,t+Uk,cT_{k}=U_{k,t}+U_{k,c}. Then:

fTk(τ)=0τfUt,Uc(u,τu)𝑑u.\displaystyle f_{T_{k}}(\tau)=\int_{0}^{\tau}f_{U_{t},U_{c}}(u,\tau-u)du.

Substituting fTk(τ)f_{T_{k}}(\tau) into g3g_{3}, g4g_{4}, and g5g_{5}, we have

g3=w0τfUt,Uc(u,τu)𝑑u𝑑τ\displaystyle g_{3}=\int_{w}^{\infty}\int_{0}^{\tau}f_{U_{t},U_{c}}(u,\tau-u)dud\tau
g4=w(τw)20τfUt,Uc(u,τu)𝑑u𝑑τ\displaystyle g_{4}=\int_{w}^{\infty}(\tau-w)^{2}\cdot\int_{0}^{\tau}f_{U_{t},U_{c}}(u,\tau-u)dud\tau
g5=w(τw)20τfUt,UcSc(u,τu)𝑑u𝑑τ.\displaystyle g_{5}=\int_{w}^{\infty}(\tau-w)^{2}\cdot\int_{0}^{\tau}f_{U_{t},U_{c}-S_{c}}(u,\tau-u)dud\tau.

From Steps 1–3, we obtain the desired results.

Appendix D Proof of Theorem 4

Utilizing an idea similar to the calculation of the average AoI [5], the average AoC can be computed as the sum of the areas of the parallelograms (e.g., ABCD¯\overline{ABCD}) over the time horizon tt. The number of such parallelograms is equal to the number of informative tasks. As the time horizon approaches infinity, the rate of informative tasks is

limtN(t)t,\displaystyle\lim_{t\to\infty}\frac{N(t)}{t},

which is also the rate of parallelograms. Then, the average AoC is given by

Θhard=limtN(t)t𝔼[SABCD¯].\displaystyle\Theta_{\text{hard}}=\lim_{t\to\infty}\frac{N(t)}{t}\cdot\mathbb{E}[S_{\overline{ABCD}}]. (57)

From the distribution of MM in (IV-A), we know that each valid task is preceded by M1M-1 invalid tasks, so

limtN(t)t=limt1t/N(t)=1𝔼[j=1MXj].\displaystyle\lim_{t\to\infty}\frac{N(t)}{t}=\lim_{t\to\infty}\frac{1}{t/N(t)}=\frac{1}{\mathbb{E}[\sum_{j=1}^{M}X_{j}]}. (58)
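The renewal relation (58) can be illustrated numerically: thin a Poisson arrival stream with an i.i.d. validity flag (an assumption of this sketch, mimicking M being geometric), and the long-run rate of valid tasks approaches 1 / E[sum_j X_j] = (1 - eps_w) * lambda:

```python
import random

random.seed(11)
lam, p_valid = 1.0, 0.7        # arrival rate and P(task valid) = 1 - eps_w
T_end = 200_000.0
t, n_valid = 0.0, 0
while True:
    t += random.expovariate(lam)      # next inter-arrival X ~ Exp(lam)
    if t > T_end:
        break
    if random.random() < p_valid:     # task meets the hard deadline
        n_valid += 1
rate_mc = n_valid / T_end             # empirical N(T)/T
rate_th = p_valid * lam               # 1 / E[sum_{j=1}^{M} X_j]
```

This is the same renewal-reward argument used for the computation throughput Ξ in Lemma 1.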

In addition, the expected area of ABCD¯\overline{ABCD} can be calculated by

𝔼[SABCD¯]=\displaystyle\mathbb{E}[S_{\overline{ABCD}}]= 𝔼[(j=1MXj+TM)2/2TM2/2]\mathbb{E}\Big{[}\big{(}\sum_{j=1}^{M}X_{j}+T_{M}\big{)}^{2}/2-T_{M}^{2}/2\Big{]}
=\displaystyle= 𝔼[TMj=1MXj]+12𝔼[(j=1MXj)2].\displaystyle\mathbb{E}[T_{M}\cdot\sum_{j=1}^{M}X_{j}]+\frac{1}{2}\mathbb{E}[\big{(}\sum_{j=1}^{M}X_{j}\big{)}^{2}\big{]}. (59)

Substituting (58) and (59) into (57), we obtain (13).

Appendix E Proof of Theorem 5

As discussed before, there are 44 types of correlations underlying (13): (i) positive correlations among the delays {Tk}k\{T_{k}\}_{k} over kk, (ii) positive correlations between TkT_{k} (1kM1\leq k\leq M) and MM, (iii) negative correlations between TkT_{k} and XkX_{k}, and (iv) negative correlations between XkX_{k} and MM. The correlations in (ii), (iii), and (iv) are induced by the positive correlations in (i), so all of them can be neglected when the positive correlations in (i) are negligible.

In a tandem of two M/M/1 queues with parameters (λ,μt,μc)(\lambda,\mu_{t},\mu_{c}), when μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda, the positive correlations among {Tk}k\{T_{k}\}_{k} become negligible [32]. In other words, TkT_{k} and Tk+1T_{k+1} are approximately independent over kk. Consequently, due to the approximate independence among {Tk}k\{T_{k}\}_{k}, according to (IV-A), MM approximates a geometric distribution with parameter 1ϵw1-\epsilon_{w}, which is approximately independent of TkT_{k}. Additionally, when μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda, the delay TkT_{k} is dominated by the service times at the transmitter and the computational node. This implies that TkT_{k} and XkX_{k} are approximately independent. Hence, XkX_{k} and MM are also approximately independent.

Step 1. We prove (14). Recall that {Xk}k\{X_{k}\}_{k} are i.i.d. over kk. As discussed above, when μtλ\mu_{t}\gg\lambda and μcλ\mu_{c}\gg\lambda, we have the following: (i) Since MM is approximately independent of XkX_{k}, we have:

𝔼[j=1MXj]𝔼[M]𝔼[X1],\displaystyle\mathbb{E}[\sum_{j=1}^{M}X_{j}]\approx\mathbb{E}[M]\mathbb{E}[X_{1}], (60)
𝔼[(j=1MXj)2]𝔼[M]𝔼[X12]\displaystyle\mathbb{E}[(\sum_{j=1}^{M}X_{j})^{2}]\approx\mathbb{E}[M]\mathbb{E}[X_{1}^{2}]
+(𝔼[M2]𝔼[M])𝔼2[X1].\displaystyle+(\mathbb{E}[M^{2}]-\mathbb{E}[M])\mathbb{E}^{2}[X_{1}]. (61)

(ii) Since TkT_{k} is approximately independent of XkX_{k}, we have:

𝔼[TMj=1MXj]𝔼[TM]𝔼[j=1MXj].\displaystyle\mathbb{E}[T_{M}\cdot\sum_{j=1}^{M}X_{j}]\approx\mathbb{E}[T_{M}]\mathbb{E}[\sum_{j=1}^{M}X_{j}]. (62)

Substituting (60), (61), and (62) into (13), we get:

Θhard\displaystyle\Theta_{\text{hard}}\approx 𝔼[TM]+𝔼[X12]2𝔼[X1]+(𝔼[M2]2𝔼[M]12)𝔼[X1],\displaystyle\mathbb{E}[T_{M}]+\frac{\mathbb{E}[X_{1}^{2}]}{2\mathbb{E}[X_{1}]}+\big{(}\frac{\mathbb{E}[M^{2}]}{2\mathbb{E}[M]}-\frac{1}{2}\big{)}\mathbb{E}[X_{1}],

thereby completing the proof of (14).

Step 2. We prove (5). According to (IV-A),

𝔼[TM]=𝔼[T1|T1w].\displaystyle\mathbb{E}[T_{M}]=\mathbb{E}[T_{1}|T_{1}\leq w]. (63)

Since the density function of TkT_{k} is given by (38), substituting (38) into (63), we have

𝔼[TM]=11+μtδtweμtδtwμt2δt211+μcδcweμcδcwμc2δc21eμtδtwμtδt1eμcδcwμcδc.\displaystyle\mathbb{E}[T_{M}]=\frac{\frac{1-\frac{1+\mu_{t}\delta_{t}w}{e^{\mu_{t}\delta_{t}w}}}{\mu_{t}^{2}\delta_{t}^{2}}-\frac{1-\frac{1+\mu_{c}\delta_{c}w}{e^{\mu_{c}\delta_{c}w}}}{\mu_{c}^{2}\delta_{c}^{2}}}{\frac{1-e^{-\mu_{t}\delta_{t}w}}{\mu_{t}\delta_{t}}-\frac{1-e^{-\mu_{c}\delta_{c}w}}{\mu_{c}\delta_{c}}}. (64)
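Equation (64) is the conditional mean E[T | T <= w] of the hypoexponential delay (38). With the shorthand a = mu_t*delta_t and b = mu_c*delta_c (an assumption of this sketch), it can be cross-checked by simulation; the common factor ab/(b - a) cancels between numerator and denominator:

```python
import math
import random

random.seed(5)
a, b, w = 1.0, 2.0, 1.5        # a = mu_t*delta_t, b = mu_c*delta_c, deadline w
# closed form (64), written after cancelling the factor a*b/(b - a)
num = (1 - (1 + a * w) * math.exp(-a * w)) / a ** 2 \
    - (1 - (1 + b * w) * math.exp(-b * w)) / b ** 2
den = (1 - math.exp(-a * w)) / a - (1 - math.exp(-b * w)) / b
closed = num / den
# Monte Carlo: average the delay over samples that meet the deadline
acc, n = 0.0, 0
for _ in range(400_000):
    t = random.expovariate(a) + random.expovariate(b)
    if t <= w:
        acc += t
        n += 1
cond_mean = acc / n
```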

Since X1X_{1} has an exponential distribution with parameter λ\lambda, we have:

𝔼[X12]2𝔼[X1]=1λ.\displaystyle\frac{\mathbb{E}[X_{1}^{2}]}{2\mathbb{E}[X_{1}]}=\frac{1}{\lambda}. (65)

Recall that MM approximates a geometric distribution with parameter 1ϵw1-\epsilon_{w}. Therefore,

(𝔼[M2]2𝔼[M]12)𝔼[X1]=1λ(11ϵw1),\displaystyle\big{(}\frac{\mathbb{E}[M^{2}]}{2\mathbb{E}[M]}-\frac{1}{2}\big{)}\mathbb{E}[X_{1}]=\frac{1}{\lambda}(\frac{1}{1-\epsilon_{w}}-1), (66)

where ϵw\epsilon_{w} is given by (39). Substituting (39), (64), (65), and (66) into (14), we obtain (5).
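The moment combination in (66) follows directly from the first two moments of M ~ Geometric(1 - eps_w) on {1, 2, ...}: E[M] = 1/(1 - eps_w) and E[M^2] = (1 + eps_w)/(1 - eps_w)^2, so E[M^2]/(2 E[M]) - 1/2 = eps_w/(1 - eps_w). A direct check:

```python
def moment_gap(eps_w):
    # E[M^2]/(2 E[M]) - 1/2 for M ~ Geometric(1 - eps_w) on {1, 2, ...}
    p = 1.0 - eps_w                 # success probability
    em = 1.0 / p                    # E[M]
    em2 = (2.0 - p) / p ** 2        # E[M^2]
    return em2 / (2.0 * em) - 0.5

# the gap should equal 1/(1 - eps_w) - 1 = eps_w/(1 - eps_w), as used in (66)
gaps = [(eps, moment_gap(eps)) for eps in (0.1, 0.3, 0.6, 0.9)]
```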

Step 3. We prove (5). When $\mu_t=\mu_c$, substituting (38) into (63), we have

\mathbb{E}[T_{M}]=\frac{\frac{2}{\mu_{t}\delta_{t}}-\big(\frac{2}{\mu_{t}\delta_{t}}+2w+\mu_{t}\delta_{t}w^{2}\big)e^{-\mu_{t}\delta_{t}w}}{1-e^{-\mu_{t}\delta_{t}w}(1+\mu_{t}\delta_{t}w)}. \qquad (67)

Substituting (39), (65), (66), and (67) into (14), we obtain (5).
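The two expressions for $\mathbb{E}[T_M]$ are consistent: (67) is the $\mu_c\to\mu_t$ limit of (64). A small numerical sketch checking this, with illustrative parameters (not from the paper):

```python
import math

def e_tm_general(mu_t, d_t, mu_c, d_c, w):
    """E[T_M] from (64), valid when mu_t*d_t != mu_c*d_c."""
    a, b = mu_t * d_t, mu_c * d_c
    num = (1 - (1 + a * w) * math.exp(-a * w)) / a**2 \
        - (1 - (1 + b * w) * math.exp(-b * w)) / b**2
    den = (1 - math.exp(-a * w)) / a - (1 - math.exp(-b * w)) / b
    return num / den

def e_tm_equal(mu_t, d_t, w):
    """E[T_M] from (67), the equal-rate case mu_c = mu_t."""
    a = mu_t * d_t
    num = 2 / a - (2 / a + 2 * w + a * w**2) * math.exp(-a * w)
    den = 1 - math.exp(-a * w) * (1 + a * w)
    return num / den

# Illustrative parameters.
w, mu_t, d_t = 3.0, 1.0, 1.0
limit = e_tm_equal(mu_t, d_t, w)                       # (67)
near = e_tm_general(mu_t, d_t, 1.001 * mu_t, d_t, w)   # (64) with mu_c close to mu_t
print(limit, near)
```

The conditional mean also satisfies $0<\mathbb{E}[T_M]<w$, as expected for a delay truncated at the deadline.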

Appendix F Proof of Lemma 1

Consider a large time horizon $T$, during which there are $N(T)$ informative packets. For each informative task $j\in\{1,2,\cdots,N(T)\}$, denote the number of associated bad tasks by $M_j$. The sequence $\{M_j\}_j$ is identically distributed over $j$ and follows the distribution given in (IV-A). Since $\{X_k\}_k$ are i.i.d. over $k$, the sequence $\{\sum_{k=1}^{M_j}X_k\}_j$ is identically distributed over $j$. The remaining time in the interval $[0,T]$ is $T-\tau^{\prime}_{N(T)}$. Thus, the time horizon can be rewritten as

T=\sum_{j=1}^{N(T)}\sum_{k=1}^{M_{j}}X_{k+j-1}+\big(T-\tau^{\prime}_{N(T)}\big). \qquad (68)

Substituting (68) into (18), we have

\Xi=\lim_{T\to\infty}\frac{N(T)}{T}
=\lim_{T\to\infty}\frac{N(T)}{\sum_{j=1}^{N(T)}\sum_{k=1}^{M_{j}}X_{k+j-1}+T-\tau^{\prime}_{N(T)}}
=\lim_{T\to\infty}\frac{1}{\frac{1}{N(T)}\sum_{j=1}^{N(T)}\sum_{k=1}^{M_{j}}X_{k+j-1}+\frac{T-\tau^{\prime}_{N(T)}}{N(T)}}.

Since the sequence $\{\sum_{k=1}^{M_j}X_k\}_j$ is identically distributed over $j$, by the law of large numbers, we have

\Xi=\frac{1}{\mathbb{E}[\sum_{k=1}^{M}X_{k}]+\lim_{T\to\infty}\frac{T-\tau^{\prime}_{N(T)}}{N(T)}}=\frac{1}{\mathbb{E}[\sum_{k=1}^{M}X_{k}]},

where the last equality holds because the residual time $T-\tau^{\prime}_{N(T)}$ is bounded by a single renewal interval while $N(T)\to\infty$.

Appendix G Proof of Proposition 1

Recall that $M$ approximately follows a geometric distribution with parameter $1-\epsilon_{w}$, and $X$ follows an exponential distribution with parameter $\lambda$; then

\frac{1}{\mathbb{E}[M]\mathbb{E}[X_{1}]}=\lambda(1-\epsilon_{w}). \qquad (69)

Substituting (39) into (69), we obtain (20) and (21).
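Under the approximation that deadline outcomes are independent across tasks (the geometric model for $M$), the computation throughput $\lambda(1-\epsilon_w)$ in (69) can be checked by simulation; a minimal sketch with illustrative values for $\lambda$ and $\epsilon_w$ (not from the paper):

```python
import random

random.seed(1)

lam = 2.0       # task arrival rate: inter-arrival times X ~ Exponential(lam)
eps_w = 0.3     # probability a task misses the deadline (illustrative)
horizon = 50_000.0

t, informative = 0.0, 0
while True:
    t += random.expovariate(lam)   # next task arrival
    if t > horizon:
        break
    if random.random() > eps_w:    # task meets its deadline: informative
        informative += 1

xi_sim = informative / horizon     # empirical rate of informative tasks
xi_pred = lam * (1 - eps_w)        # computation throughput, as in (69)
print(xi_sim, xi_pred)
```

The empirical rate of informative tasks matches $\lambda(1-\epsilon_w)$ up to simulation noise.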

Appendix H Proof of Lemma 2

The proof is by contradiction. Assume that the pair $\big(\hat{\Theta}(u),\hat{\Xi}(u)\big)$ is not a weakly Pareto-optimal point. Then there exists another solution $\big(\hat{\Theta}^{\prime}(u),\hat{\Xi}^{\prime}(u)\big)$ such that $\hat{\Theta}^{\prime}(u)<\hat{\Theta}(u)$ and $\hat{\Xi}^{\prime}(u)>\hat{\Xi}(u)$. Given that $\hat{\Xi}^{\prime}(u)>\hat{\Xi}(u)>u$, the pair $\big(\hat{\Theta}^{\prime}(u),\hat{\Xi}^{\prime}(u)\big)$ must be a feasible solution to problem (22). However, since $\hat{\Theta}(u)$ is the minimum value of problem (22), it follows that $\hat{\Theta}(u)\leq\hat{\Theta}^{\prime}(u)$, which contradicts the assumption that $\hat{\Theta}^{\prime}(u)<\hat{\Theta}(u)$.

Appendix I Proof of Theorem 6

In a tandem of two G/G/1 queues with parameters $(\lambda,\mu_t,\mu_c)$, when $\mu_t\gg\lambda$ and $\mu_c\gg\lambda$, the positive correlations among $\{T_k\}_k$ become negligible [32]. In other words, $T_k$ and $T_{k+1}$ are approximately independent over $k$. Consequently, due to the approximate independence among $\{T_k\}_k$, and according to (IV-A), $M$ approximately follows a geometric distribution with parameter $1-\epsilon_w$, which is approximately independent of $T_k$. Additionally, when $\mu_t\gg\lambda$ and $\mu_c\gg\lambda$, the delay $T_k$ is dominated by the service times at the transmitter and the computational node. This implies that $T_k$ and $X_k$ are approximately independent. Hence, $X_k$ and $M$ are also approximately independent. Following the same process as Step 1 in Appendix E, we show that the average AoC defined in (13) can be accurately approximated by (14).

Step 1. The delay of task $k$, $T_k$, is the sum of the system times in the transmission and computation queues, i.e., $T_k=U_{k,t}+U_{k,c}$. Based on Assumption 1, the density functions $f_{U_t}$ and $f_{U_c}$ are known. Denote the density function and CDF of $T_k$ as follows,

f_{T}(\tau)=\int_{0}^{\tau}f_{U_{t},U_{c}}(u,\tau-u)\,du
F_{T}(w)=\int_{0}^{w}f_{T}(\tau)\,d\tau.

When $\mu_t\gg\lambda$ and $\mu_c\gg\lambda$, the sequence $\{T_k\}_k$ is approximately independent. Therefore, from (IV-A), we have

\Pr(M=n)\approx\big(\Pr(T_{k}>w)\big)^{n-1}\Pr(T_{k}\leq w)=\big(1-F_{T}(w)\big)^{n-1}F_{T}(w). \qquad (70)

Based on (70), it is straightforward to calculate,

\mathbb{E}[M]=\frac{1}{F_{T}(w)} \qquad (71)
\mathbb{E}[M^{2}]=\frac{2-F_{T}(w)}{F_{T}^{2}(w)}. \qquad (72)

Step 2. The random variable $T_M$ is the value of $T_k$ at the stopping time $M$, which is the first index $k$ at which $T_k\leq w$. Since $\{T_k\}_k$ are approximately i.i.d. over $k$, we have

\mathbb{E}[T_{M}]=\mathbb{E}[T_{k}|T_{k}\leq w]=\frac{\int_{0}^{w}\tau f_{T}(\tau)\,d\tau}{F_{T}(w)}. \qquad (73)

Substituting (71), (72), and (73) into (14), we obtain (6).
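Equations (71)–(73) reduce the G/G/1 case to two one-dimensional integrals of $f_T$. A sketch of the numerical evaluation, assuming (purely for illustration, not a case from the paper) independent exponential system times so that $f_T$ is the hypoexponential convolution:

```python
import math

# Illustrative assumption: U_t ~ Exp(a), U_c ~ Exp(b) independent,
# so f_T is the hypoexponential convolution density.
a, b, w = 1.0, 2.0, 3.0

def f_T(tau):
    return a * b / (b - a) * (math.exp(-a * tau) - math.exp(-b * tau))

# Trapezoidal rule for F_T(w) and the truncated first moment of T.
n = 20_000
h = w / n
F = 0.5 * h * (f_T(0.0) + f_T(w)) + h * sum(f_T(i * h) for i in range(1, n))
m1 = 0.5 * h * w * f_T(w) + h * sum(i * h * f_T(i * h) for i in range(1, n))

E_M = 1 / F              # (71)
E_M2 = (2 - F) / F**2    # (72)
E_TM = m1 / F            # (73)
print(F, E_M, E_M2, E_TM)
```

For this hypoexponential case $F_T(w)$ also has a closed form, which can be used to verify the quadrature.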

Appendix J Proof of Proposition 2

From (70) in Appendix I, $M$ approximately follows a geometric distribution with parameter $F_T(w)$. Therefore, we have

\frac{1}{\mathbb{E}[M]\mathbb{E}[X_{1}]}=\frac{F_{T}(w)}{\int_{0}^{\infty}xf_{X}(x)\,dx},

which yields the desired result.

Appendix K Proof of (V-B1)

We first derive the recursion of $c_i^{\text{soft}}(k)$ with respect to $d_i(k)$. If a task from source $i$ with instantaneous delay $z_i(k)$ is delivered to the receiver in time slot $k$ (i.e., $d_i(k)=1$), then from (3), we have

c_{i}^{\text{soft}}(k+1)=z_{i}(k)+1+1_{\{z_{i}(k)+1>w\}}\frac{A_{i}(k)}{G_{i}(k)}\big(z_{i}(k)+1-w\big),

where $G_i(k)$ is defined in (3) and $A_i(k)$ is defined in (4). Otherwise, if no task from source $i$ is delivered, we have:

c_{i}^{\text{soft}}(k+1)=c_{i}^{\text{soft}}(k)+1.

Thus, the recursion for $c_i^{\text{soft}}(k)$ is given by:

c_{i}^{\text{soft}}(k+1)=1_{\{d_{i}(k)=0\}}\big(c_{i}^{\text{soft}}(k)+1\big)+1_{\{d_{i}(k)=1\}}\Big(z_{i}(k)+1+\ell_{i}\big(z_{i}(k)\big)\Big), \qquad (74)

where $\ell_i\big(z_i(k)\big)=1_{\{z_i(k)+1>w\}}\frac{A_i(k)}{G_i(k)}\big(z_i(k)+1-w\big)$.

To schedule a transmitter in time slot $k$, the receiver needs the network state in time slot $k$:

  • If $\sum_{i=1}^{N}d_i(k-1)=0$, meaning no task reaches the receiver in time slot $k-1$, the receiver remains idle. Since a task takes at least 2 slots from the start of transmission to leaving the computational node, no task from source $i$ can leave the computational node. Thus, from (74), we have $c_i^{\text{soft}}(k+2)=c_i^{\text{soft}}(k)+2$.

  • If $\sum_{i=1}^{N}d_i(k-1)=1$, meaning a task is delivered to the receiver in time slot $k-1$, but $a_i(k)=0$ (i.e., transmitter $i$ is not scheduled in time slot $k$), then no task from source $i$ can leave the computational node. Thus, from (74), we have $c_i^{\text{soft}}(k+2)=c_i^{\text{soft}}(k)+2$.

  • If $\sum_{i=1}^{N}d_i(k-1)=1$ and $a_i(k)=1$, meaning transmitter $i$ is scheduled in time slot $k$, then:

    • If the transmission is completed in one time slot, the instantaneous delay of the current task in time slot $k+1$ increases to $z_i(k)+1$. From (74), we have $c_i^{\text{soft}}(k+2)=z_i(k)+2+\ell_i\big(z_i(k)+1\big)$.

    • If the transmission is not completed in one time slot, then from (74), we have $c_i^{\text{soft}}(k+2)=c_i^{\text{soft}}(k)+2$.

Based on the discussion above, we can write the expression for $c_i^{\text{soft}}(k+2)$ as follows,

c_{i}^{\text{soft}}(k+2)=1_{\{\sum_{i=1}^{N}d_{i}(k-1)=0\}}\big(c_{i}^{\text{soft}}(k)+2\big)
+1_{\{\sum_{i=1}^{N}d_{i}(k-1)=1,\,a_{i}(k)=0\}}\big(c_{i}^{\text{soft}}(k)+2\big)
+1_{\{\sum_{i=1}^{N}d_{i}(k-1)=1,\,a_{i}(k)=1,\,d_{i}(k+1)=0\}}\big(c_{i}^{\text{soft}}(k)+2\big)
+1_{\{\sum_{i=1}^{N}d_{i}(k-1)=1,\,a_{i}(k)=1,\,d_{i}(k+1)=1\}}\Big(z_{i}(k)+2+\ell_{i}\big(z_{i}(k)+1\big)\Big).
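The recursion (74) and the penalty $\ell_i(\cdot)$ amount to a simple one-slot update; a minimal sketch, where the ratio $A_i(k)/G_i(k)$ is passed in as a placeholder value since those quantities are defined in (3) and (4):

```python
def ell(z, w, a_over_g):
    """Penalty l_i(z) = 1{z + 1 > w} * (A_i(k)/G_i(k)) * (z + 1 - w)."""
    return a_over_g * (z + 1 - w) if z + 1 > w else 0.0

def update_c_soft(c, z, delivered, w, a_over_g):
    """One-slot recursion (74) for the soft-deadline AoC of source i."""
    if delivered:               # d_i(k) = 1: task delivered this slot
        return z + 1 + ell(z, w, a_over_g)
    return c + 1                # d_i(k) = 0: AoC grows by one slot

# Illustrative values: deadline w = 5, ratio A_i(k)/G_i(k) = 0.5.
print(update_c_soft(10.0, 7, True, 5, 0.5))   # 7 + 1 + 0.5*(8 - 5) = 9.5
print(update_c_soft(10.0, 7, False, 5, 0.5))  # 10 + 1 = 11.0
```

A delivered task resets the AoC to the task's delay plus the lateness penalty, while an idle slot increments it by one.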

Appendix L Proof of Proposition 3

Note that given $a_i(k)=1$,

\Pr\big(d_{i}(k+1)=1\,|\,a_{i}(k)=1\big)=\mu_{t,i}. \qquad (75)

Substituting the expression for $c_i^{\text{soft}}(k+2)$ in Appendix K and (75) into (28) and (29), we calculate the drift as follows:

\Delta(\Xi(k))=\frac{2}{N}\sum_{i=1}^{N}\beta_{i}-\frac{1}{N}\sum_{i=1}^{N}\beta_{i}\mu_{t,i}\mathbb{E}[a_{i}(k)|\Xi(k)]w_{i}(k),

where $w_i(k)$ is given in Algorithm 1. To minimize the drift $\Delta(\Xi(k))$, Algorithm 1 selects the transmitter with the maximum weight $w_i(k)$.
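The resulting scheduling rule is a simple argmax over the current weights; a minimal sketch, with hypothetical weight values since $w_i(k)$ is specified in Algorithm 1:

```python
def max_weight_schedule(weights):
    """Select the transmitter index with the largest weight w_i(k)."""
    return max(range(len(weights)), key=lambda i: weights[i])

# Hypothetical weights w_i(k) for three sources (illustration only).
weights = [3.6, 4.0, 3.6]
print(max_weight_schedule(weights))  # -> 1
```

Ties are broken by the smallest index, which does not affect the drift bound since any maximizer minimizes $\Delta(\Xi(k))$.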