
Communication-Efficient Distributed On-Device LLM Inference Over Wireless Networks

Kai Zhang, Hengtao He, Member, IEEE, Shenghui Song, Senior Member, IEEE,
Jun Zhang, Fellow, IEEE, and Khaled B. Letaief, Fellow, IEEE
Abstract

Large language models (LLMs) have demonstrated remarkable success across various application domains, but their enormous sizes and computational demands pose significant challenges for deployment on resource-constrained edge devices. To address this issue, we propose a novel distributed on-device LLM inference framework that leverages tensor parallelism to partition the neural network tensors (e.g., weight matrices) of one LLM across multiple edge devices for collaborative inference. A key challenge in tensor parallelism is the frequent all-reduce operations for aggregating intermediate layer outputs across participating devices, which incur significant communication overhead. To alleviate this bottleneck, we propose an over-the-air computation (AirComp) approach that harnesses the analog superposition property of wireless multiple-access channels to perform fast all-reduce steps. To utilize the heterogeneous computational capabilities of edge devices and mitigate communication distortions, we investigate a joint model assignment and transceiver optimization problem to minimize the average transmission error. The resulting mixed-timescale stochastic non-convex optimization problem is intractable, and we propose an efficient two-stage algorithm to solve it. Moreover, we prove that the proposed algorithm converges almost surely to a stationary point of the original problem. Comprehensive simulation results show that the proposed framework outperforms existing benchmark schemes, achieving up to 5x inference speed acceleration and improving inference accuracy.

Index Terms:
6G, distributed inference, large language models, over-the-air computation, tensor parallelism.
footnotetext: Part of this work has been accepted for presentation at the 2025 IEEE Int. Conf. Commun. (ICC), Montreal, Canada [1]. The authors are with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong (email: kzhangbn@connect.ust.hk, eehthe@ust.hk, eeshsong@ust.hk, eejzhang@ust.hk, eekhaled@ust.hk). (The corresponding author is Hengtao He.)

I Introduction

The advent of large language models (LLMs) has marked a significant breakthrough in artificial intelligence (AI), demonstrating superior performance and adaptability in a wide range of applications, such as natural language processing [2, 3, 4], embodied intelligence [5, 6, 7], and wireless communications [8, 9, 10]. The efficacy of LLMs is primarily attributed to the vast model scale with billions of parameters, which enables them to capture complex semantic relationships and contextual nuances, leading to superior performance across diverse tasks. However, the substantial computational and memory requirements of LLMs present significant challenges for deployment on resource-constrained edge devices. For instance, the LLaMA3 model [11] with 13 billion parameters requires 40GB of RAM, which far exceeds the capabilities of most edge devices. Consequently, most existing LLMs rely on cloud-based infrastructure, which limits the feasibility of LLM deployment and raises concerns about data privacy and inference latency, especially in sensitive domains like healthcare and finance. To address these challenges, distributed LLM inference has recently been proposed as a promising solution, which distributes the large models and computational workloads across multiple devices [12, 13, 14]. This strategy allows each device to handle smaller and more manageable model segments, thereby reducing the burden on individual devices and strengthening privacy protections. Furthermore, advancements in communication technologies, such as the 5G and future 6G wireless networks, enhance the feasibility of distributed LLM inference for real-time applications [15, 16].

Communication overhead is a critical factor affecting the performance of distributed LLM inference systems. To enhance communication efficiency, several recent studies have been conducted [17, 18, 19, 20, 21, 22, 23, 24]. In [17], Zhang et al. proposed a collaborative edge computing framework that distributes different layers of LLMs across the edge device and cloud server. They developed a joint device selection and model partitioning algorithm to minimize inference latency and maximize throughput. In [18], Yuan et al. considered splitting LLMs into several sub-models, where the resource-intensive components were offloaded to the server through non-orthogonal multiple-access (NOMA) channels. They further proposed a gradient descent-based algorithm to find the optimal trade-off between inference delay and energy consumption. In [19], He et al. developed an active inference method to address the joint task offloading and resource allocation problem for distributed LLM inference over cloud-edge computing frameworks. Similarly, Chen et al. [20] proposed a reinforcement learning algorithm that optimizes the splitting point of LLMs between the edge device and cloud server to reduce the communication overhead under varying wireless network conditions. Furthermore, task-oriented communications have been utilized to optimize end-to-end inference throughput, accuracy, and latency, which can further enhance the communication efficiency of distributed LLM inference systems [21, 22, 23, 24].

Despite significant advances in distributed LLM inference, most existing works [17, 18, 20, 19, 21, 22, 23, 24] primarily focus on the device-cloud collaborative inference. This architecture, however, faces substantial challenges in terms of feasibility and scalability due to its reliance on a powerful centralized cloud server with high computational capability. Moreover, prior works have generally employed the pipeline parallelism architectures, which are associated with inherent disadvantages such as pipeline bubbles [25]. These bubbles occur when downstream devices are forced to remain idle while waiting for upstream computations to complete, leading to poor utilization of computational resources. To address these limitations, distributed on-device LLM inference leveraging tensor parallelism has recently been proposed as a promising solution [26, 27, 28]. This approach divides large neural network tensors (e.g., weight matrices) of LLMs into smaller segments and distributes them across multiple edge devices. It not only eliminates the reliance on a powerful central server but also enables concurrent processing of model segments across devices, significantly improving the utilization of computation and communication resources. Nevertheless, a critical challenge in tensor parallelism is the frequent all-reduce operations required to aggregate intermediate layer outputs across devices. These communication-intensive all-reduce steps can cause substantial latency in practical wireless networks and hinder real-time inference, necessitating efficient communication strategies to fully achieve the benefits of tensor parallelism.

In this paper, we propose a communication-efficient framework for distributed on-device LLM inference with tensor parallelism. Specifically, we propose an over-the-air computation (AirComp) approach to facilitate fast all-reduce operations. AirComp leverages the superposition property of wireless multiple-access channels, allowing simultaneous transmissions from multiple devices to be naturally summed at the receiver [29, 30]. This method reduces the communication latency and bandwidth requirement compared to traditional techniques that treat communication and computation separately. Most recently, AirComp has gained popularity in various applications such as edge computing [31, 32, 33], federated learning [34, 35, 36], and distributed sensing [37, 38, 39]. Table I shows a thorough survey of recent state-of-the-art frameworks on distributed parallel computing and AirComp for both model training and inference tasks.

| Reference | Application Scenario | Parallelism Method | Antenna Configuration | Optimization Objective | Large-Scale LLM | Device Heterogeneity |
| K. Yang et al. (2020) [34] | Training | Data Parallelism | Multi-Antenna | Device Participation | ✗ | ✗ |
| X. Fan et al. (2021) [40] | Training | Data Parallelism | Single-Antenna | Convergence Rate | ✗ | ✓ |
| T. Sery et al. (2021) [36] | Training | Data Parallelism | Single-Antenna | Communication Distortion | ✗ | ✗ |
| Y. Liang et al. (2024) [41] | Training | Data Parallelism | Single-Antenna | Training Latency and Energy Consumption | ✗ | ✓ |
| H. Sun et al. (2024) [42] | Training | Data and Model Parallelism | Multi-Antenna | Convergence Rate | ✓ | ✓ |
| Z. Zhuang et al. (2023) [43] | Inference | Data and Model Parallelism | Multi-Antenna | Minimum Pair-Wise Discriminant Gain | ✗ | ✓ |
| D. Wen et al. (2023) [24] | Inference | Data Parallelism | Multi-Antenna | Discriminant Gain | ✗ | ✓ |
| P. Yang et al. (2024) [44] | Inference | Data and Model Parallelism | Multi-Antenna | Communication Distortion | ✗ | ✗ |
| This paper | Inference | Tensor Parallelism | Multi-Antenna | Communication Distortion | ✓ | ✓ |
TABLE I: Overview of Over-the-Air Computation for Distributed Learning and Inference
Fig. 1: An illustration of the distributed on-device LLM inference system, showing the system workflow and visualizing tensor parallelism for (a) MLP and (b) self-attention layers.

The performance of the proposed distributed LLM inference system, however, is heavily influenced by the communication efficiency, particularly given the limited energy resources of edge devices. Thus, to improve the inference performance, we investigate a joint model assignment and transceiver optimization problem aimed at minimizing the average transmission mean-squared error (MSE). The formulated joint optimization is crucial considering the heterogeneous computation capabilities of edge devices and varying wireless channel conditions. Optimal model assignment ensures that each device processes a suitable portion of the model based on its computational capability (e.g., memory size and compute power), while transceiver optimization minimizes the communication distortions during the AirComp process. To simplify the problem and gain key insights, we initially consider the scenario of single-antenna edge devices. We then extend the framework to a multi-antenna configuration, leveraging spatial multiplexing to further enhance communication efficiency and reduce inference latency. Furthermore, the formulated joint model assignment and transceiver optimization problem is intractable due to its mixed-timescale, stochastic, and non-convex property. Specifically, the model assignment policy should be determined at the beginning of inference based on long-term statistical channel state information (CSI), while the transceiver design adapts dynamically to the CSI in each all-reduce step. To address the mixed-timescale optimization problem, we develop an efficient two-stage algorithm by employing semidefinite relaxation (SDR) and stochastic successive convex approximation (SCA). We note that although existing wireless optimization techniques (e.g., SDR and SCA algorithms) have been well studied, their tailored application to distributed LLM inference brings unique challenges and technical requirements. Specifically, our framework addresses unique challenges arising from large-scale distributed LLM inference, including the frequent aggregation of high-dimensional tensors, mixed-timescale optimization involving long-term model assignment and short-term transceiver adaptation, handling of heterogeneous device capabilities, multi-antenna AirComp beamforming designs, and stringent energy constraints.

I-A Contributions

The main contributions of this paper are summarized as follows.

1) We propose a novel distributed on-device LLM inference framework by employing tensor parallelism and AirComp. While tensor parallelism effectively distributes computational workloads across edge devices, its frequent all-reduce operations incur significant communication overhead, which offsets the computational benefits and becomes a major bottleneck for inference performance. To address this challenge, we develop a communication-efficient AirComp all-reduce approach by exploiting the signal superposition property of wireless multiple-access channels.

2) To utilize the heterogeneous computational capabilities of edge devices and mitigate communication distortions, we investigate a joint model assignment and transceiver optimization problem to minimize the average transmission MSE. The formulated mixed-timescale stochastic non-convex optimization problem is inherently intractable. Thus, we develop an efficient two-stage algorithm that decomposes the original problem into short-term transceiver optimization and long-term model assignment optimization subproblems. The resulting subproblems are further solved by employing SDR and stochastic SCA, respectively. The proposed algorithm requires no prior knowledge of channel statistics, and it converges almost surely to a stationary point of the original problem.

3) We validate the effectiveness of the proposed framework through simulations with two state-of-the-art open-source LLMs and a real-world text dataset. Simulation results demonstrate that the proposed algorithm outperforms benchmark schemes across various network settings, achieving up to 5x inference speed acceleration and improving inference accuracy.

I-B Organization and Notations

The rest of this paper is organized as follows. In Section II, we elaborate on the system model and present the problem formulation. In Section III, we develop a two-stage algorithm and prove its convergence. In Section IV, we extend the algorithm for multi-antenna edge devices. Simulation results are presented in Section V, and we conclude the paper in Section VI.

Notations: Column vectors and matrices are denoted by boldface lowercase and boldface capital letters, respectively. $\mathbb{R}$ denotes the set of real numbers, and $\mathbb{C}^{M\times N}$ represents the space of $M\times N$ complex-valued matrices. $(\cdot)^{\mathsf{T}}$ and $(\cdot)^{\mathsf{H}}$ stand for the transpose and the conjugate transpose of their arguments, respectively. $\textup{tr}(\mathbf{A})$ denotes the trace of matrix $\mathbf{A}$. $\mathbb{E}[\cdot]$ denotes the expectation operation. $\nabla$ represents the gradient operator. $|\cdot|$ and $\|\cdot\|$ stand for the $\ell_{1}$ and $\ell_{2}$ norms of vectors, respectively.

II System Model and Problem Formulation

In this section, we first elaborate on the proposed distributed on-device LLM inference system, followed by proposing the communication-efficient AirComp all-reduce approach. To minimize the average transmission MSE, we then formulate a joint model assignment and transceiver optimization problem.

II-A Distributed On-Device LLM Inference System

To deploy LLMs on resource-limited edge devices, distributed on-device inference with tensor parallelism has been proposed. This method involves partitioning large neural network tensors (e.g., weight matrices) of LLMs into smaller segments and distributing them across multiple edge devices for simultaneous processing. The complete workflow of the proposed distributed on-device LLM inference system is illustrated in Fig. 1. When a device initiates an inference request, the edge server dynamically identifies available local devices and partitions the model parameters. Then, each device loads its assigned model segment into memory and performs forward computation. After each layer of the LLM is computed, an all-reduce operation aggregates the intermediate layer outputs from all devices, ensuring synchronization and consistency across devices during inference. In the proposed distributed inference framework, the device shares its input (typically token embeddings rather than raw data) with other participating devices. For scenarios demanding strict confidentiality, encryption schemes (e.g., homomorphic encryption) or secure enclaves can be adopted to mitigate privacy leakage. Furthermore, we highlight two typical scenarios illustrating real-world, trusted environments particularly suitable for our distributed inference framework, as shown in the following.

  • Organizational or HPC Clusters: Large institutions (e.g., corporate data centers, national labs, or university HPC centers) often host massive LLMs that exceed the capacity of a single node. In these clusters, multiple servers within the same security domain can distribute model segments or layers among them, securely exchanging raw input data via internal networks. Since all compute nodes reside in the same trusted infrastructure (with well-defined access control, encryption, and compliance policies), they can fully leverage parallelization to reduce per-inference latency and alleviate memory bottlenecks, without risking data exposure to external environments.

  • Single-User or Local Edge Scenarios: Individual users or small teams may possess multiple personal devices or localized edge servers (e.g., the home server or on-premises GPU node). These devices operate within a single-user network or closed local environment, allowing them to share raw inputs without breaching privacy. By splitting the LLM’s parameters or layers across these trusted devices, users can achieve faster response times and reduced memory load per device. These benefits are especially valuable for real-time applications (e.g., smart home assistants or AR/VR), where offloading data to external clouds may be undesirable or impractical.

II-B Tensor Parallelism

LLMs are primarily built on the Transformer architecture, which typically consists of dozens of Transformer layers [45]. Each Transformer layer includes a self-attention mechanism and a multi-layer perceptron (MLP). To achieve efficient distributed inference, tensor parallelism partitions both the self-attention and MLP layers within each Transformer block into smaller tensor segments, as shown in Fig. 1. We note that both pipeline parallelism and tensor parallelism are two prevalent model partitioning strategies widely adopted in distributed inference frameworks. While pipeline parallelism partitions the model across layers, tensor parallelism partitions computations within each layer across multiple devices. Tensor parallelism is particularly attractive for on-device inference due to its inherent advantages in significantly reducing idle times (pipeline bubbles), achieving finer-grained memory allocation, and, when combined with AirComp-based aggregation, greatly minimizing communication overhead. These properties highlight the practical benefits and superior suitability of tensor parallelism for resource-constrained and latency-sensitive inference scenarios considered in this work.

II-B1 Tensor Parallelism for MLP Layer


Fig. 2: Illustration of MLP matrix multiplication for conventional unpartitioned approach and tensor parallelism with two devices.

For a typical 2-layer MLP within the Transformer block, the forward computation involves two main linear transformations, separated by a non-linear activation function (e.g., ReLU or GeLU). We formulate the computation of the MLP layer by taking the ReLU activation as an example. Mathematically, it is expressed as follows,

\mathbf{Z}=\max(\mathbf{0},\mathbf{XW})\mathbf{U}, \qquad (1)

where 𝐗\mathbf{X} is the input to the MLP layer, 𝐙\mathbf{Z} is the output, and 𝐖\mathbf{W} and 𝐔\mathbf{U} are the weight matrices, respectively. Our framework can be readily generalized to other activation functions, such as GeLU function: 𝐙=GeLU(𝐗𝐖)𝐔,whereGeLU(x)=xΦ(x),\mathbf{Z}=\text{GeLU}(\mathbf{XW})\mathbf{U},~{}\text{where}~{}\text{GeLU}(x)=x\Phi(x), with Φ(x)\Phi(x) representing the cumulative distribution function of the standard Gaussian distribution. The traditional centralized inference approach loads the entire weight matrices 𝐖\mathbf{W} and 𝐔\mathbf{U} into memory and performs full matrix multiplications on a single device, which is usually impractical for resource-limited edge devices. To overcome this challenge, tensor parallelism distributes the weight matrices 𝐖\mathbf{W} and 𝐔\mathbf{U} across NN devices. The weight matrices 𝐖\mathbf{W} and 𝐔\mathbf{U} have dimensions d×dhidden\displaystyle d\times d_{\text{hidden}} and dhidden×d\displaystyle d_{\text{hidden}}\times d, respectively. As shown in Fig. 2, the weight matrix 𝐖\mathbf{W} is partitioned column-wise into multiple slices as

\mathbf{W}=[\mathbf{W}_{1}\in\mathbb{R}^{d\times d^{1}_{\text{hidden}}},\ \mathbf{W}_{2}\in\mathbb{R}^{d\times d^{2}_{\text{hidden}}},\ \ldots,\ \mathbf{W}_{N}\in\mathbb{R}^{d\times d^{N}_{\text{hidden}}}], \qquad (2)

where 𝐖n\mathbf{W}_{n} represents the portion of the weight matrix 𝐖\mathbf{W} assigned to device nn and dhidden=n=1Ndhiddennd_{\text{hidden}}=\sum_{n=1}^{N}d^{n}_{\text{hidden}}. Similarly, the weight matrix 𝐔\mathbf{U} is partitioned row-wise as

\mathbf{U}=\begin{bmatrix}\mathbf{U}_{1}\in\mathbb{R}^{d^{1}_{\text{hidden}}\times d}\\ \mathbf{U}_{2}\in\mathbb{R}^{d^{2}_{\text{hidden}}\times d}\\ \vdots\\ \mathbf{U}_{N}\in\mathbb{R}^{d^{N}_{\text{hidden}}\times d}\end{bmatrix}, \qquad (3)

where 𝐔n\mathbf{U}_{n} represents the portion of the weight matrix 𝐔\mathbf{U} assigned to device nn. Then, each device nn can perform the forward computation on its respective model segment as follows,

\mathbf{Z}_{n}=\max(\mathbf{0},\mathbf{X}\mathbf{W}_{n})\mathbf{U}_{n}, \qquad (4)

where 𝐙n\mathbf{Z}_{n} is the partial output produced by device nn. Once all devices obtain their local outputs 𝐙n\mathbf{Z}_{n}, an all-reduce operation is performed to aggregate the partial outputs from all devices as follows,

\mathbf{Z}=\sum_{n=1}^{N}\mathbf{Z}_{n}. \qquad (5)

The validity of this aggregation can be explained by considering how the model parameters are partitioned across the devices. Specifically, concatenating the column slices 𝐖n\mathbf{W}_{n} reproduces 𝐖\mathbf{W}, and stacking the row slices 𝐔n\mathbf{U}_{n} recovers 𝐔\mathbf{U}. Consequently, the original unpartitioned MLP output 𝐙\mathbf{Z} can be expressed as

\begin{aligned}\mathbf{Z}&=\max(\mathbf{0},\mathbf{XW})\mathbf{U}\\ &=\max\left(\mathbf{0},\mathbf{X}[\mathbf{W}_{1},\mathbf{W}_{2},\ldots,\mathbf{W}_{N}]\right)\begin{bmatrix}\mathbf{U}_{1}\\ \mathbf{U}_{2}\\ \vdots\\ \mathbf{U}_{N}\end{bmatrix}\\ &\stackrel{\text{(a)}}{=}\sum_{n=1}^{N}\max(\mathbf{0},\mathbf{X}\mathbf{W}_{n})\mathbf{U}_{n}\\ &=\sum_{n=1}^{N}\mathbf{Z}_{n},\end{aligned} \qquad (6)

where (a) follows from the element-wise property of activation functions (e.g., ReLU, GeLU). Therefore, aggregating partial results 𝐙n\mathbf{Z}_{n} reconstructs the original unpartitioned output 𝐙\mathbf{Z} (i.e., Eq. (5) holds). After aggregation, the final output 𝐙\mathbf{Z} of the MLP layer is broadcasted to all devices, ensuring synchronization and consistency across devices for the subsequent layer’s computation.
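To make the partition-and-aggregate procedure concrete, the following sketch reproduces Eqs. (1)-(6) for a toy MLP split evenly across $N$ devices; the NumPy setting, the dimensions, and the even split are illustrative assumptions rather than part of the system model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                      # number of edge devices (illustrative)
d, d_hidden = 8, 16                        # toy MLP dimensions
X = rng.standard_normal((2, d))            # input token embeddings
W = rng.standard_normal((d, d_hidden))     # first weight matrix
U = rng.standard_normal((d_hidden, d))     # second weight matrix

# Centralized reference: Z = max(0, X W) U, cf. Eq. (1)
Z_ref = np.maximum(0.0, X @ W) @ U

# Tensor parallelism: column-split W (Eq. (2)) and row-split U (Eq. (3))
W_parts = np.split(W, N, axis=1)           # W_n in R^{d x d_hidden^n}
U_parts = np.split(U, N, axis=0)           # U_n in R^{d_hidden^n x d}

# Each device computes its partial output Z_n locally, cf. Eq. (4)
Z_parts = [np.maximum(0.0, X @ Wn) @ Un for Wn, Un in zip(W_parts, U_parts)]

# All-reduce: summing the partial outputs recovers Z exactly, cf. Eqs. (5)-(6)
Z = sum(Z_parts)
assert np.allclose(Z, Z_ref)
```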

II-B2 Tensor Parallelism for Self-Attention Layer

For the self-attention layer, tensor parallelism similarly partitions its query (𝐐\mathbf{Q}), key (𝐊\mathbf{K}), value (𝐕\mathbf{V}), and transformation (𝐔\mathbf{U}) matrices across edge devices. In the traditional centralized computation of the self-attention layer, the output 𝐙\mathbf{Z} can be derived as follows,

\mathbf{Z}=\mathrm{softmax}\left(\frac{\mathbf{X}\mathbf{Q}(\mathbf{X}\mathbf{K})^{\mathsf{T}}}{\sqrt{d_{k}}}\right)\mathbf{V}\mathbf{U}, \qquad (7)

where $\mathbf{X}$ denotes the input and $d_{k}$ denotes the dimension of the key vectors. In tensor parallelism, the memory-intensive weight matrices are split and distributed across $N$ edge devices as follows,

\begin{aligned}\mathbf{Q}&=[\mathbf{Q}_{1},\ldots,\mathbf{Q}_{N}],\\ \mathbf{K}&=[\mathbf{K}_{1},\ldots,\mathbf{K}_{N}],\\ \mathbf{V}&=[\mathbf{V}_{1},\ldots,\mathbf{V}_{N}],\\ \mathbf{U}&=[\mathbf{U}_{1}^{\mathsf{T}},\ldots,\mathbf{U}_{N}^{\mathsf{T}}]^{\mathsf{T}}.\end{aligned} \qquad (8)

Then, each device nn performs local computation on its corresponding portion of the query, key, value, and transformation matrices as follows,

\mathbf{Z}_{n}=\mathrm{softmax}\left(\frac{\mathbf{X}\mathbf{Q}_{n}(\mathbf{X}\mathbf{K}_{n})^{\mathsf{T}}}{\sqrt{d_{k}}}\right)\mathbf{V}_{n}\mathbf{U}_{n}. \qquad (9)

Once all devices obtain their local outputs 𝐙n\mathbf{Z}_{n}, a similar all-reduce operation is required to gather and combine the partial outputs from devices as shown in (5).
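For completeness, a similar sketch of Eqs. (7)-(9) is given below, where the query, key, and value matrices are column-split and the output projection is row-split across devices, mirroring the head-wise partitioning used in Megatron-style tensor parallelism. As assumptions of this sketch, the value projection is applied to the input (i.e., $\mathbf{X}\mathbf{V}_{n}$), and all dimensions are illustrative.

```python
import numpy as np

def softmax(S):
    """Row-wise softmax with the usual max-subtraction for stability."""
    S = S - S.max(axis=-1, keepdims=True)
    E = np.exp(S)
    return E / E.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
N, d, d_k = 2, 8, 4                        # devices, model dimension, per-slice key dimension
X = rng.standard_normal((3, d))            # three input tokens
Q = rng.standard_normal((d, N * d_k))
K = rng.standard_normal((d, N * d_k))
V = rng.standard_normal((d, N * d_k))
U = rng.standard_normal((N * d_k, d))

# Column-split Q, K, V and row-split U across the N devices, cf. Eq. (8)
Qs, Ks, Vs = (np.split(M, N, axis=1) for M in (Q, K, V))
Us = np.split(U, N, axis=0)

# Each device evaluates its local attention output Z_n, cf. Eq. (9)
# (applying the value projection to the input, X V_n, is our interpretation)
Z_parts = [softmax((X @ Qn) @ (X @ Kn).T / np.sqrt(d_k)) @ (X @ Vn) @ Un
           for Qn, Kn, Vn, Un in zip(Qs, Ks, Vs, Us)]

# All-reduce over the partial outputs, as in Eq. (5)
Z = sum(Z_parts)
```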

II-C Over-the-Air All-Reduce

Employing tensor parallelism for distributed LLM inference requires frequent all-reduce operations, which cause significant communication overhead in practical wireless networks. To address this issue, we propose a communication-efficient AirComp all-reduce approach. The AirComp aggregates distributed data efficiently by leveraging the signal superposition property of wireless multiple-access channels, allowing simultaneous transmissions to compute nomographic functions (e.g., arithmetic mean) [46]. In the proposed distributed LLM inference system, the aggregation of intermediate layer outputs in the all-reduce step aligns with this operation, making the AirComp suitable to mitigate communication overhead. Note that edge devices performing AirComp must achieve symbol-level synchronization to ensure their transmitted signals arrive concurrently at the receiver, minimizing aggregation errors due to timing offsets. In our framework, synchronization among edge devices can be practically realized through the well-established timing advance (TA) mechanism. Specifically, the edge server estimates each device’s timing offset and instructs each device to adjust its signal transmission timing via dedicated TA commands. By aligning transmissions precisely, edge devices can ensure simultaneous arrival and accurate signal aggregation at the receiver.

We consider a wireless network consisting of an edge server with $N_{r}$ antennas and $N$ single-antenna edge devices. We further extend the proposed framework to a more general scenario involving multi-antenna edge devices in Section IV. The uplink channels from edge devices to the server are block-fading: channel statistics remain constant throughout the inference process, while channel states vary independently across different time intervals. Let $z_{n}$ denote the per-round transmitted entry of device $n$'s intermediate layer output $\mathbf{Z}_{n}$, which has a complete dimensionality of $L_{0}$. To reduce transmission power, the transmitted symbols are normalized to have zero mean and unit variance, i.e., $\mathbb{E}[\|z_{n}\|^{2}]=1$, where the normalization factor is uniform for all devices and can be inverted at the server. Given synchronized symbol boundaries, all devices transmit their intermediate layer outputs simultaneously. To mitigate the distortion of received signals caused by channel noise, aggregation beamforming is adopted. Let $\mathbf{a}\in\mathbb{C}^{N_{r}\times 1}$ denote the aggregation beamforming vector at the edge server. After the AirComp, the received signal at the server is given by,

\hat{z}=\mathbf{a}^{\mathsf{H}}\sum_{n=1}^{N}\mathbf{h}_{n}b_{n}z_{n}+\mathbf{a}^{\mathsf{H}}\mathbf{n}, \qquad (10)

where 𝐡nNr×1\mathbf{h}_{n}\in\mathbb{C}^{N_{r}\times 1} denotes the uplink channel from device nn to the server, bnb_{n} is the transmit power of device nn, and 𝐧𝒞𝒩(0,σ2𝐈)\mathbf{n}\sim\mathcal{CN}\left(0,\sigma^{2}\mathbf{I}\right) denotes the additive white Gaussian noise vector with σ2\sigma^{2} being the noise power. In the single-antenna setting, each device employs only a scalar transmit-power coefficient bnb_{n} (instead of a beamforming vector) to scale its transmitted scalar entry znz_{n}. The distortion of z^\hat{z} with respect to the desired target summation z=n=1Nznz=\sum_{n=1}^{N}z_{n} is measured by the MSE, which is defined as

\textup{MSE}(\hat{z},z)=\mathbb{E}\left[\|\hat{z}-z\|^{2}\right]. \qquad (11)

The MSE serves as a metric to evaluate the performance of the AirComp all-reduce operations. As shown in the simulations later, the inference accuracy of the distributed on-device LLM inference system is greatly influenced by the transmission error during the AirComp phase. By substituting (10) into (11), the MSE can be explicitly represented as a function of aggregation beamforming vector 𝐚\mathbf{a} and transmitter scalars {bn}n=1N\left\{b_{n}\right\}_{n=1}^{N} as follows,

\textup{MSE}(\mathbf{a},\{b_{n}\})=\sum_{n=1}^{N}\left\|\mathbf{a}^{\mathsf{H}}\mathbf{h}_{n}b_{n}-1\right\|^{2}+\sigma^{2}\mathbf{a}^{\mathsf{H}}\mathbf{a}. \qquad (12)
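As a sanity check on Eq. (12), the following sketch evaluates the closed-form MSE for one random channel draw and compares it with a Monte Carlo estimate of Eq. (11); the i.i.d. Rayleigh channel model, the arbitrary transceiver choices, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, Nr, sigma2 = 4, 8, 0.1                  # devices, server antennas, noise power

# Illustrative i.i.d. Rayleigh channels h_n and arbitrary transceiver choices
h = (rng.standard_normal((N, Nr)) + 1j * rng.standard_normal((N, Nr))) / np.sqrt(2)
a = rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)        # aggregation beamformer
b = 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # transmit scalars b_n

# Closed-form MSE of Eq. (12)
ah = a.conj() @ h.T                        # entries a^H h_n, one per device
mse_closed = np.sum(np.abs(ah * b - 1.0) ** 2) + sigma2 * np.vdot(a, a).real

# Monte Carlo estimate of Eq. (11) with zero-mean, unit-variance symbols z_n
T = 20000
z = (rng.standard_normal((T, N)) + 1j * rng.standard_normal((T, N))) / np.sqrt(2)
n = (rng.standard_normal((T, Nr)) + 1j * rng.standard_normal((T, Nr))) * np.sqrt(sigma2 / 2)
z_hat = (z * b) @ ah + n @ a.conj()        # a^H sum_n h_n b_n z_n + a^H n, cf. Eq. (10)
mse_mc = np.mean(np.abs(z_hat - z.sum(axis=1)) ** 2)
# mse_mc matches mse_closed up to Monte Carlo noise
```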

Edge devices involved in inference tasks typically have limited energy supply. Thus, we assume that for each device nn, the energy consumption for both the forward computation of each LLM layer and the transmission of the intermediate output cannot exceed the maximum power budget PnmaxP_{n}^{\textup{max}}. To model the computation energy consumption, we first introduce a model assignment vector 𝐦=[m1,,mN]\mathbf{m}=[m_{1},\ldots,m_{N}] with its entry mn[0,1]m_{n}\in[0,1] representing the proportion of model allocated to device nn. Consequently, the computation energy consumption for device nn is given by enmnstote_{n}m_{n}s^{\textup{tot}}, where ene_{n} denotes the device-specific energy coefficient that reflects the energy cost associated with accessing and processing each weight during computation, and stots^{\textup{tot}} is the number of parameters (weights) for each layer. The communication energy consumption of device nn can be derived as L0bn2L_{0}\|b_{n}\|^{2}. Accordingly, the power constraint is given by

e_{n}m_{n}s^{\textup{tot}}+L_{0}\|b_{n}\|^{2}\leq P_{n}^{\textup{max}},\quad \forall n. \qquad (13)

II-D Problem Formulation

In the proposed distributed LLM inference system, the overall performance is determined by the model assignment policy 𝐦\mathbf{m} and the transceiver design 𝐚\mathbf{a}, {bn}\left\{b_{n}\right\}. Optimal model assignment ensures that each device processes a suitable portion of the model based on its computational capability (e.g., memory size and compute power). Meanwhile, efficient transceiver optimization can reduce signal misalignment error and suppress channel noise, thereby improving inference accuracy. Thus, to improve inference performance, we formulate a joint model assignment and transceiver optimization problem that aims to minimize the average MSE, subject to the per-device power constraints. Importantly, the transceiver design can adapt dynamically to instantaneous CSI. In contrast, adapting the model assignment policy to instantaneous CSI in a real-time manner is impractical due to the significant latency caused by loading different model segments. Thus, model assignment should be finished before inference based on the long-term channel statistics.

The resulting problem is therefore formulated as a mixed-timescale joint optimization of the short-term transceiver variables 𝐚\mathbf{a}, {bn}\left\{b_{n}\right\} and the long-term model assignment policy 𝐦\mathbf{m} as follows,

\mathcal{P}_{1}:~\min_{\mathbf{m}}~~\mathbb{E}_{\mathbf{h}}\left[\min_{\mathbf{a},\{b_{n}\}}\textup{MSE}(\mathbf{a},\{b_{n}\})\right] \qquad (14)
\text{s.t.}~~e_{n}m_{n}s^{\textup{tot}}+L_{0}\|b_{n}\|^{2}\leq P_{n}^{\textup{max}},~\forall n,
~~\sum_{n=1}^{N}m_{n}=1,
~~0\leq m_{n}\leq 1,~\forall n,

where the expectation $\mathbb{E}_{\mathbf{h}}\left[\cdot\right]$ is taken over all random channel realizations $\mathbf{h}=\left\{\mathbf{h}_{n}\right\}_{n=1}^{N}$. However, problem $\mathcal{P}_{1}$ is challenging to solve for the following three reasons.

  • Non-convexity: The objective function is inherently non-convex due to the coupling between the receiver aggregation beamformer 𝐚\mathbf{a} and the transmitter scalars {bn}\left\{b_{n}\right\}.

  • Expectation over Random Channels: The objective involves an expectation over random CSI, which requires prior knowledge of channel statistics.

  • Interdependence of Timescales: The per-device power constraints link the short-term transceiver variables with the long-term model assignment policy, leading to a complex interplay between the two timescales.

To address these challenges, we develop a two-stage algorithm that separately solves the short-term transceiver optimization and the long-term model assignment optimization in the following section.

III Algorithm Development

In this section, we develop an efficient two-stage algorithm to solve the joint model assignment and transceiver optimization problem 𝒫1\mathcal{P}_{1}. Then, we show that the proposed algorithm can converge to a stationary point of the original problem 𝒫1\mathcal{P}_{1}.

III-A Problem Decomposition

We start by decomposing problem 𝒫1\mathcal{P}_{1} into a family of short-term transceiver optimization problems and a long-term model assignment optimization problem as follows.

III-A1 Short-term transceiver optimization for given model assignment policy 𝐦\mathbf{m} and channel condition 𝐡\mathbf{h}

\mathcal{P}_{s}:~\min_{\mathbf{a},\{b_{n}\}}~\textup{MSE}(\mathbf{a},\{b_{n}\}) \qquad (15)
\text{s.t.}~~e_{n}m_{n}s^{\textup{tot}}+L_{0}\|b_{n}\|^{2}\leq P_{n}^{\textup{max}},~\forall n.

III-A2 Long-term model assignment optimization based on the optimal solution 𝐚(𝐦),{bn(𝐦)}\mathbf{a}^{*}(\mathbf{m}),\left\{b_{n}^{*}(\mathbf{m})\right\} to problem 𝒫s\mathcal{P}_{s}

\mathcal{P}_{l}:~\min_{\mathbf{m}}~~\mathbb{E}_{\mathbf{h}}\left[\textup{MSE}(\mathbf{a}^{*}(\mathbf{m}),\{b_{n}^{*}(\mathbf{m})\})\right] \qquad (16)
\text{s.t.}~~e_{n}m_{n}s^{\textup{tot}}+L_{0}\|b_{n}^{*}(\mathbf{m})\|^{2}\leq P_{n}^{\textup{max}},~\forall n,
~~\sum_{n=1}^{N}m_{n}=1,
~~0\leq m_{n}\leq 1,~\forall n.

The short-term transceiver optimization problem 𝒫s\mathcal{P}_{s} remains non-convex, and we address it using the SDR technique. The long-term model assignment optimization problem 𝒫l\mathcal{P}_{l} is similarly challenging, as the optimal transceiver variables 𝐚(𝐦),{bn(𝐦)}\mathbf{a}^{*}(\mathbf{m}),\left\{b_{n}^{*}(\mathbf{m})\right\} cannot be derived in closed form. Additionally, the distribution of CSI is difficult to obtain in practical wireless systems. To address these challenges, we propose a stochastic SCA algorithm that operates without requiring prior knowledge of channel statistics. In the following subsections, we provide a detailed implementation of the proposed algorithms.

III-B Short-Term Transceiver Optimization for 𝒫s\mathcal{P}_{s}

The short-term problem $\mathcal{P}_{s}$ is challenging to solve due to the inherent non-convexity caused by the coupling between the receiver aggregation beamformer $\mathbf{a}$ and the transmitter scalars $\left\{b_{n}\right\}_{n=1}^{N}$. Thus, we first simplify problem $\mathcal{P}_{s}$ by demonstrating that channel inversion precoding is optimal conditioned on the aggregation beamformer.

Lemma 1.

For a given aggregation beamformer $\mathbf{a}$, the transmission MSE is minimized by using the zero-forcing precoders $b_{n}^{*}=\frac{1}{\mathbf{a}^{\mathsf{H}}\mathbf{h}_{n}},~\forall n$.

Proof.

Lemma 1 can be proved by following the same steps as in [47, Appendix A] and we omit it for brevity. ∎

Let 𝐠\mathbf{g} represent the normalized aggregation beamformer that satisfies 𝐠𝖧𝐠=1\mathbf{g}^{\mathsf{H}}\mathbf{g}=1, and consequently 𝐚=α𝐠\mathbf{a}=\sqrt{\alpha}\mathbf{g} where α\alpha is optimized to satisfy the power constraints of edge devices. By applying Lemma 1, problem 𝒫s\mathcal{P}_{s} can be reformulated as follows,

\min_{\alpha,\mathbf{g}}~~\alpha \qquad (17)
\text{s.t.}~~e_{n}m_{n}s^{\textup{tot}}+\frac{L_{0}}{\alpha\|\mathbf{g}^{\mathsf{H}}\mathbf{h}_{n}\|^{2}}\leq P_{n}^{\textup{max}},~\forall n,
~~\mathbf{g}^{\mathsf{H}}\mathbf{g}=1.

Then, by employing the equation 𝐠𝖧𝐡n2=tr(𝐡n𝐡n𝖧𝐠𝐠𝖧)\|\mathbf{g}^{\mathsf{H}}\mathbf{h}_{n}\|^{2}=\textup{tr}\left(\mathbf{h}_{n}\mathbf{h}_{n}^{\mathsf{H}}\mathbf{g}\mathbf{g}^{\mathsf{H}}\right), an equivalent formulation of problem (17) is obtained as follows,

\min_{\alpha,\mathbf{g}}~~\alpha \qquad (18)
\text{s.t.}~~e_{n}m_{n}s^{\textup{tot}}+\frac{L_{0}}{\alpha\,\textup{tr}\left(\mathbf{h}_{n}\mathbf{h}_{n}^{\mathsf{H}}\mathbf{g}\mathbf{g}^{\mathsf{H}}\right)}\leq P_{n}^{\textup{max}},~\forall n,
~~\mathbf{g}^{\mathsf{H}}\mathbf{g}=1.

The problem (18) remains intractable due to the non-convex norm constraint on 𝐠\mathbf{g}. To address this issue, we apply the SDR approach that relaxes the non-convex norm constraint by employing its convex hull.

Lemma 2.

(Convex Hull Relaxation [48]) Suppose the set Ω1={𝐘:𝐘=𝐗𝐗𝖧,𝐗𝖧𝐗=𝐈d}\Omega_{1}=\{\mathbf{Y}:\mathbf{Y}=\mathbf{X}\mathbf{X}^{\mathsf{H}},\mathbf{X}^{\mathsf{H}}\mathbf{X}=\mathbf{I}_{d}\} and set Ω2={𝐘:tr(𝐘)=d,0𝐘𝐈}\Omega_{2}=\left\{\mathbf{Y}:\textup{tr}(\mathbf{Y})=d,0\preceq\mathbf{Y}\preceq\mathbf{I}\right\}, where 𝐘\mathbf{Y} is of the size mm by mm while 𝐗\mathbf{X} is of the size mm by dd. The condition of the set Ω2\Omega_{2} indicates that both 𝐘\mathbf{Y} and 𝐈𝐘\mathbf{I}-\mathbf{Y} are positive semi-definite. Then, Ω2\Omega_{2} is the convex hull of Ω1\Omega_{1}, and Ω1\Omega_{1} is the set of extreme points of Ω2\Omega_{2}.

By applying Lemma 2, we can replace the non-convex norm constraint by its convex hull and reformulate a relaxed version of problem (18) as follows,

\min_{\alpha,\hat{\mathbf{G}}}~~\alpha \qquad (19)
\text{s.t.}~~e_{n}m_{n}s^{\textup{tot}}+\frac{L_{0}}{\alpha\,\textup{tr}\left(\mathbf{h}_{n}\mathbf{h}_{n}^{\mathsf{H}}\hat{\mathbf{G}}\right)}\leq P_{n}^{\textup{max}},~\forall n,
~~\textup{tr}(\hat{\mathbf{G}})=1,
~~0\preceq\hat{\mathbf{G}}\preceq\mathbf{I},

where 𝐆^=𝐠𝐠𝖧{\hat{\mathbf{G}}}=\mathbf{g}\mathbf{g}^{\mathsf{H}}. The problem (19) can be proved to be convex, and the globally optimal solution 𝐆^\hat{\mathbf{G}}^{*} can be obtained by using a convex solver (e.g., the CVX toolbox in MATLAB [49]).

We note that the optimal solution 𝐆^\hat{\mathbf{G}}^{*} has a high probability to satisfy the rank-one constraint [47]. If a rank-one solution 𝐆^\hat{\mathbf{G}}^{*} is obtained, the optimal solution 𝐠\mathbf{g}^{*} of the original problem (18) can be immediately achieved by extracting the dominant eigenvector of 𝐆^\hat{\mathbf{G}}^{*} as 𝐠=[𝐕𝐆^]:,1\mathbf{g}^{*}=[\mathbf{V}_{\hat{\mathbf{G}}^{*}}]_{:,1}. Otherwise, if the rank of 𝐆^\hat{\mathbf{G}}^{*} is larger than 1, we apply the Gaussian randomization algorithm [50] to map the solution to a feasible, near-optimal solution for the original non-convex problem.
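The relaxed problem (19) together with the rank-one extraction can be prototyped as follows. This sketch uses CVXPY rather than the MATLAB CVX toolbox mentioned above, and it eliminates $\alpha$ by noting that, for a fixed $\hat{\mathbf{G}}$, the smallest feasible $\alpha$ equals the largest per-device requirement $L_{0}/\big((P_{n}^{\textup{max}}-e_{n}m_{n}s^{\textup{tot}})\,\textup{tr}(\mathbf{h}_{n}\mathbf{h}_{n}^{\mathsf{H}}\hat{\mathbf{G}})\big)$; this reformulation and the helper name short_term_beamformer are our own assumptions.

```python
import numpy as np
import cvxpy as cp

def short_term_beamformer(h_list, m, e, s_tot, L0, p_max):
    """Sketch of the relaxed short-term problem (19) for single-antenna devices.

    For a fixed G, the smallest feasible alpha is the maximum over n of
    L0 / ((P_n^max - e_n m_n s_tot) * tr(h_n h_n^H G)), so minimizing alpha
    reduces to minimizing that maximum over G (an equivalent DCP form).
    Assumes a positive residual power budget P_n^max - e_n m_n s_tot.
    """
    Nr = h_list[0].shape[0]
    G = cp.Variable((Nr, Nr), hermitian=True)
    c = [L0 / (p - en * mn * s_tot) for p, en, mn in zip(p_max, e, m)]
    terms = [c[n] * cp.inv_pos(cp.real(cp.trace(np.outer(h, h.conj()) @ G)))
             for n, h in enumerate(h_list)]
    constraints = [cp.real(cp.trace(G)) == 1, G >> 0, np.eye(Nr) - G >> 0]
    prob = cp.Problem(cp.Minimize(cp.max(cp.hstack(terms))), constraints)
    prob.solve()

    # Rank-one extraction: the dominant eigenvector of G* gives g*; a Gaussian
    # randomization step would be applied here if G* were not rank one.
    _, eigvec = np.linalg.eigh(G.value)
    g = eigvec[:, -1]
    a = np.sqrt(prob.value) * g                        # a = sqrt(alpha) g
    b = 1.0 / (a.conj() @ np.stack(h_list).T)          # zero-forcing precoders (Lemma 1)
    return a, b
```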

III-C Long-Term Model Assignment Optimization for 𝒫l\mathcal{P}_{l}

In this subsection, we propose a stochastic SCA algorithm to solve the long-term model assignment problem 𝒫l\mathcal{P}_{l}. The proposed algorithm requires no prior knowledge of channel statistics. For clearer algorithmic description, we first reformulate the long-term problem 𝒫l\mathcal{P}_{l} into an equivalent form as follows,

\min_{\mathbf{m}}~~f_{0}(\mathbf{m})=\mathbb{E}_{\mathbf{H}}\left[\textup{MSE}(\mathbf{a}^{*}(\mathbf{m}),\{b_{n}^{*}(\mathbf{m})\})\right] \qquad (20)
\text{s.t.}~~f_{1}(\mathbf{m})=s^{\textup{tot}}\textup{diag}(\mathbf{e}\mathbf{m}^{\mathsf{T}})+L_{0}\mathbf{e}_{c}\left(\mathbf{m}\right)\leq\mathbf{p}^{\textup{max}},
~~\sum_{n=1}^{N}m_{n}=1,
~~0\leq m_{n}\leq 1,~\forall n,

where 𝐞c(𝐦)=[b1(𝐦)2,,bN(𝐦)2]𝖳,\mathbf{e}_{c}\!\left(\mathbf{m}\right)\!=\![\|b_{1}^{*}(\mathbf{m})\|^{2},\ldots,\|b_{N}^{*}(\mathbf{m})\|^{2}]^{\mathsf{T}}\!, 𝐩max=[P1max,,PNmax]𝖳\mathbf{p}^{\textup{max}}=[{P}_{1}^{\textup{max}},\ldots,{P}_{N}^{\textup{max}}]^{\mathsf{T}}, and 𝐞=[e1,,eN]𝖳\mathbf{e}=[e_{1},\ldots,e_{N}]^{\mathsf{T}}. The proposed stochastic SCA algorithm iteratively performs the following two steps: First, quadratic surrogate functions f^0(𝐦)\hat{f}_{0}(\mathbf{m}), f^1(𝐦)\hat{f}_{1}(\mathbf{m}) are constructed to approximate the non-convex components of the original objective and constraint functions f0(𝐦)f_{0}(\mathbf{m}), f1(𝐦)f_{1}(\mathbf{m}), respectively. Then, the resulting convex quadratic approximation problem is solved, and the long-term model assignment policy is updated based on the solution. The details of these two steps are illustrated as follows.

III-C1 Step 1

In each iteration τ\tau, the edge server first generates a channel sample 𝐡τ\mathbf{h}^{\tau}, and then calculates the short-term transceiver variables 𝐚(𝐦τ)\mathbf{a}^{*}(\mathbf{m}^{\tau}) and {bn(𝐦τ)}n=1N\left\{b_{n}^{*}(\mathbf{m}^{\tau})\right\}_{n=1}^{N} by solving the short-term problem 𝒫s\mathcal{P}_{s}. Then, the recursive convex approximation of the original objective function f0(𝐦)f_{0}(\mathbf{m}) can be derived as[51]

\hat{f}_{0}^{\tau}(\mathbf{m})=\bar{f}_{0}(\mathbf{m}^{\tau})+(\mathbf{u}_{0}^{\tau})^{\mathsf{T}}\left(\mathbf{m}-\mathbf{m}^{\tau}\right)+\eta_{0}\left\|\mathbf{m}-\mathbf{m}^{\tau}\right\|^{2}, \qquad (21)

where η0\eta_{0} is a constant that ensures convexity, f¯0(𝐦τ)=MSE(𝐚(𝐦τ),{bn(𝐦τ)})\bar{f}_{0}(\mathbf{m}^{\tau})=\textup{MSE}(\mathbf{a}^{*}(\mathbf{m}^{\tau}),\left\{b_{n}^{*}(\mathbf{m}^{\tau})\right\}) denotes the sample-wise approximation of the average MSE and is computed by using the specific channel realization 𝐡τ\mathbf{h}^{\tau}. Furthermore, 𝐮0τ\mathbf{u}_{0}^{\tau} is an approximation of the gradient f0(𝐦τ)\nabla f_{0}(\mathbf{m}^{\tau}), which is updated recursively as

\mathbf{u}_{0}^{\tau}=(1-\rho^{\tau})\mathbf{u}_{0}^{\tau-1}+\rho^{\tau}\nabla_{\mathbf{m}}\bar{f}_{0}(\mathbf{m};\mathbf{a}^{*}(\mathbf{m}^{\tau}),\{b_{n}^{*}(\mathbf{m}^{\tau})\}), \qquad (22)

and 𝐮01=𝟎\mathbf{u}_{0}^{-1}=\mathbf{0}[51]. The algorithm parameter ρτ\rho^{\tau} is decreasing in τ\tau, satisfying limτρτ=0\lim_{\tau\rightarrow\infty}\rho^{\tau}=0, τ=0ρτ=\sum_{\tau=0}^{\infty}\rho^{\tau}=\infty, τ=0(ρτ)2<\sum_{\tau=0}^{\infty}(\rho^{\tau})^{2}<\infty, and τ=0ρττ1/2<\sum_{\tau=0}^{\infty}\rho^{\tau}\tau^{-1/2}<\infty. Similarly, the recursive convex approximation of the power constraint function f1(𝐦)f_{1}(\mathbf{m}) is given by

\hat{f}_{1}^{\tau}(\mathbf{m})=f_{1}(\mathbf{m}^{\tau})+(\mathbf{u}_{1}^{\tau})^{\mathsf{T}}\left(\mathbf{m}-\mathbf{m}^{\tau}\right)+\eta_{1}\left\|\mathbf{m}-\mathbf{m}^{\tau}\right\|^{2}, \qquad (23)

where η1>0\eta_{1}>0 is a constant, and 𝐮1τ\mathbf{u}_{1}^{\tau} is updated recursively as follows[51],

\mathbf{u}_{1}^{\tau}=(1-\rho^{\tau})\mathbf{u}_{1}^{\tau-1}+\rho^{\tau}\nabla_{\mathbf{m}}f_{1}(\mathbf{m};\mathbf{a}^{*}(\mathbf{m}^{\tau}),\{b_{n}^{*}(\mathbf{m}^{\tau})\}). \qquad (24)

It is noted that the surrogate functions f^0τ(𝐦)\hat{f}_{0}^{\tau}(\mathbf{m}) and f^1τ(𝐦)\hat{f}_{1}^{\tau}(\mathbf{m}) are quadratic approximations of the original nonconvex objective f0(𝐦){f}_{0}(\mathbf{m}) and constraint f1(𝐦){f}_{1}(\mathbf{m}) around the current iterate 𝐦τ\mathbf{m}^{\tau}. Specifically, at iteration τ\tau, each surrogate function is constructed using first-order Taylor expansions of the corresponding function fi(𝐦)f_{i}(\mathbf{m}) (for i=0,1i=0,1), along with an additional quadratic regularization term controlled by convexity constants η0\eta_{0} and η1\eta_{1}. The convexity constants η0\eta_{0} and η1\eta_{1} serve to ensure strong convexity and numerical stability of the surrogate functions. Specifically, larger values of η0\eta_{0} and η1\eta_{1} enhance numerical stability but may slow convergence, whereas smaller values permit larger update steps but require careful tuning to prevent instability. In practice, setting these constants within the range 10210110^{-2}\sim 10^{-1} (e.g., around 0.05) achieves a favorable balance between stability and convergence speed.

III-C2 Step 2

After obtaining the convex approximations of the objective and constraint functions, we formulate a convex approximation of the original problem (20) to solve the optimal 𝐦^τ\hat{\mathbf{m}}^{\tau} as follows,

\hat{\mathbf{m}}^{\tau}=\min_{\mathbf{m}}~~\hat{f}_{0}^{\tau}(\mathbf{m}) \qquad (25)
\text{s.t.}~~\hat{f}_{1}^{\tau}(\mathbf{m})\leq\mathbf{p}^{\textup{max}},
~~\sum_{n=1}^{N}m_{n}=1,
~~0\leq m_{n}\leq 1,~\forall n.

If problem (25) turns out to be infeasible, the optimal solution 𝐦^τ\hat{\mathbf{m}}^{\tau} is obtained by solving the following feasibility problem,

\hat{\mathbf{m}}^{\tau}=\min_{\mathbf{m},\bm{\mu}}~~\bm{\mu} \qquad (26)
\text{s.t.}~~\hat{f}_{1}^{\tau}(\mathbf{m})\leq\mathbf{p}^{\textup{max}}+\bm{\mu},
~~\sum_{n=1}^{N}m_{n}=1,
~~0\leq m_{n}\leq 1,~\forall n.

After solving for 𝐦^τ\hat{\mathbf{m}}^{\tau}, the model assignment policy is updated as

\mathbf{m}^{\tau+1}=(1-\gamma^{\tau})\mathbf{m}^{\tau}+\gamma^{\tau}\hat{\mathbf{m}}^{\tau}, \qquad (27)

where γτ(0,1)\gamma^{\tau}\in(0,1) satisfies limτγτ=0\lim_{\tau\rightarrow\infty}\gamma^{\tau}=0, τ=0γτ=\sum_{\tau=0}^{\infty}\gamma^{\tau}=\infty, and τ=0(γτ)2<\sum_{\tau=0}^{\infty}(\gamma^{\tau})^{2}<\infty[51]. To facilitate practical implementation and reproducibility, we provide explicit heuristics for choosing the hyperparameters ρτ\rho^{\tau}and γτ\gamma^{\tau}. We suggest setting these hyperparameters as ρτ=1(τ+1)α\rho^{\tau}=\frac{1}{(\tau+1)^{\alpha}} and γτ=cτ+c\gamma^{\tau}=\frac{c}{\tau+c^{\prime}}, where the parameter α\alpha typically ranges from 0.5 to 1 to satisfy the convergence conditions outlined in Lemma 3. In practice, setting α0.8\alpha\approx 0.8, c=15c=15, and c=14c^{\prime}=14 has been found through simulations to achieve an effective balance between convergence speed and numerical stability.

Algorithm 1: Mixed-Timescale Model Assignment and Transceiver Optimization Algorithm
1: Initialize: model assignment policy $\mathbf{m}^{0}$, iteration index $\tau=0$, and convergence tolerance $\epsilon$;
2: Step 1 (long-term model assignment optimization at the beginning of the inference task):
3: repeat
4:    Obtain a channel sample $\mathbf{h}^{\tau}=\{\mathbf{h}_{1}^{\tau},\ldots,\mathbf{h}_{N}^{\tau}\}$ and calculate the short-term transceiver variables $\mathbf{a}^{*}(\mathbf{m}^{\tau})$, $\{b_{n}^{*}(\mathbf{m}^{\tau})\}$ by solving the short-term problem $\mathcal{P}_{s}$;
5:    Update the surrogate functions $\hat{f}_{i}^{\tau}(\mathbf{m})$ according to (21) and (23);
6:    if problem (25) is feasible then
7:        Solve problem (25) to obtain the optimal $\hat{\mathbf{m}}^{\tau}$;
8:    else
9:        Solve problem (26) to obtain the optimal $\hat{\mathbf{m}}^{\tau}$;
10:   end if
11:   Update $\mathbf{m}^{\tau}$ according to (27);
12:   $\tau\leftarrow\tau+1$;
13: until $\|\mathbf{m}^{\tau}-\mathbf{m}^{\tau-1}\|\leq\epsilon$;
14: Step 2 (short-term transceiver optimization at each all-reduce step):
15: Obtain the channel condition $\mathbf{h}$, and apply the short-term algorithm to solve for the optimal transceiver variables with the determined model assignment policy $\mathbf{m}$.

The above two steps iterate until convergence, i.e., 𝐦τ𝐦τ1ϵ\|\mathbf{m}^{\tau}-\mathbf{m}^{\tau-1}\|\leq\epsilon, where ϵ\epsilon is the convergence tolerance. The overall algorithm is outlined in Algorithm 1, and the block diagram of the proposed algorithm is illustrated in Fig. 3.

Fig. 3: Block diagram of Algorithm 1
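To connect Eqs. (21)-(27) with Algorithm 1, one possible implementation of the long-term loop is sketched below. The surrogate problem (25) is passed to a generic convex solver, the sample-wise MSE gradient is abstracted behind a callback, and the step sizes follow the heuristics $\rho^{\tau}=1/(\tau+1)^{0.8}$ and $\gamma^{\tau}=15/(\tau+14)$ suggested above; the helper names sample_channel, solve_short_term, and mse_and_grad are hypothetical placeholders.

```python
import numpy as np
import cvxpy as cp

def sca_model_assignment(sample_channel, solve_short_term, mse_and_grad,
                         N, p_max, e, s_tot, L0,
                         eta0=0.05, eta1=0.05, eps=1e-4, max_iter=200):
    """Sketch of the long-term stochastic SCA loop (Step 1 of Algorithm 1).

    sample_channel(), solve_short_term(m, h), and mse_and_grad(m, h, a, b) are
    hypothetical callbacks returning a channel draw, the short-term solution of
    P_s, and the sample-wise MSE with its gradient w.r.t. m, respectively.
    """
    m = np.full(N, 1.0 / N)                    # uniform initial assignment m^0
    u0, u1 = np.zeros(N), np.zeros((N, N))     # recursive gradient estimates, cf. (22), (24)
    for tau in range(max_iter):
        rho = 1.0 / (tau + 1) ** 0.8
        gamma = min(1.0, 15.0 / (tau + 14))    # clipped so that gamma stays in (0, 1]

        h = sample_channel()                   # one channel realization h^tau
        a, b = solve_short_term(m, h)          # short-term problem P_s
        f0_bar, g0 = mse_and_grad(m, h, a, b)  # sample-wise MSE and its gradient
        f1 = e * m * s_tot + L0 * np.abs(b) ** 2
        g1 = np.diag(e * s_tot)                # Jacobian of f1 with the transceiver
                                               # fixed at the current solution, cf. (24)

        u0 = (1 - rho) * u0 + rho * g0         # recursive update (22)
        u1 = (1 - rho) * u1 + rho * g1         # recursive update (24)

        # Convex quadratic approximation (25) around the current iterate m^tau
        x = cp.Variable(N)
        obj = f0_bar + u0 @ (x - m) + eta0 * cp.sum_squares(x - m)
        cons = [f1 + u1 @ (x - m) + eta1 * cp.sum_squares(x - m) <= p_max,
                cp.sum(x) == 1, x >= 0, x <= 1]
        cp.Problem(cp.Minimize(obj), cons).solve()
        # If (25) is infeasible, the paper solves the feasibility problem (26);
        # this sketch simply keeps the previous iterate in that case.
        m_hat = x.value if x.value is not None else m

        m_new = (1 - gamma) * m + gamma * m_hat    # update step (27)
        if np.linalg.norm(m_new - m) <= eps:
            return m_new
        m = m_new
    return m
```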
Remark 1.

The computational complexity of solving the short-term transceiver optimization problem $\mathcal{P}_{s}$ is at most $\mathcal{O}\left(NN_{r}^{3}+N^{2}N_{r}^{2}+N^{3}\right)$, and is usually much lower in practice [52, Theorem 3.12]. Moreover, the most computationally expensive steps for the long-term optimization problem $\mathcal{P}_{l}$ are solving the constructed convex quadratic approximation problems (25) and (26). Specifically, the computational complexity of solving problems (25) and (26) is at most in the order of $\mathcal{O}\left(2N^{4}+N^{3}\right)$. Then, the total computational complexity of Algorithm 1 is given by $\mathcal{O}\left(\tau^{\max}\left(NN_{r}^{3}+N^{2}N_{r}^{2}+2N^{4}+2N^{3}\right)\right)$, where $\tau^{\max}$ is the maximum iteration number of Algorithm 1. Both optimization and model assignment are performed only once at the beginning of inference (or after substantial changes in channel or device conditions). Hence, while solving the optimization and loading model segments introduce a non-negligible one-time cost, the subsequent benefits from parallelized forward computation significantly outweigh this initial overhead, resulting in increased inference speed in practical settings.

Remark 2.

In the proposed framework, the edge server, which possesses limited but non-negligible computational and memory resources, collects the channel state information and device capability information. Leveraging these data, the server solves the mixed-timescale optimization problem and assigns each participating edge device its respective portion of the model parameters.

III-D Convergence Analysis

In this subsection, we analyze the asymptotic convergence performance of Algorithm 1 to a stationary point of the original problem 𝒫1\mathcal{P}_{1}.

We first show the convergence of the surrogate functions in the following lemma.

Lemma 3.

Consider a sequence {𝐦τ}τ=0\{\mathbf{m}^{\tau}\}_{\tau=0}^{\infty} converging to a limiting point 𝐦\mathbf{m}^{*}, and define

\hat{f}_{i}(\mathbf{m})=f_{i}(\mathbf{m}^{*})+\nabla f_{i}(\mathbf{m}^{*})^{\mathsf{T}}\left(\mathbf{m}-\mathbf{m}^{*}\right)+\eta_{i}\left\|\mathbf{m}-\mathbf{m}^{*}\right\|^{2}, \qquad (28)

which satisfies f^i(𝐦)=fi(𝐦)\hat{f}_{i}(\mathbf{m}^{*})=f_{i}(\mathbf{m}^{*}) and f^i(𝐦)=fi(𝐦),i{0,1}\nabla\hat{f}_{i}(\mathbf{m}^{*})=\nabla f_{i}(\mathbf{m}^{*}),\forall i\in\{0,1\}. Then, if the algorithm parameter ρ\rho satisfies τ=0ρττ1/2<\sum_{\tau=0}^{\infty}\rho^{\tau}\tau^{-1/2}<\infty, we have

\lim_{\tau\to\infty}\hat{f}_{i}^{\tau}(\mathbf{m})=\hat{f}_{i}(\mathbf{m}), \qquad (29)

almost surely[51].

Proof.

The proof is presented in Appendix A. ∎

To elaborate on the convergence result, we need to introduce Slater's condition for the converged surrogate function in the following.

Definition 1.

(Slater’s Condition) Given the sequence {𝐦τ}τ=1\{\mathbf{m}^{\tau}\}_{\tau=1}^{\infty} converging to a limiting point 𝐦\mathbf{m}^{*} and let f^1(𝐦)\hat{f}_{1}(\mathbf{m}) be the converged surrogate function as defined in (28). The Slater’s condition holds at 𝐦\mathbf{m}^{*} if there exists a constant 𝐦\mathbf{m} such that

\hat{f}_{1}(\mathbf{m})<\mathbf{p}^{\textup{max}}, \qquad (30)
\sum_{n=1}^{N}m_{n}=1,
0\leq m_{n}\leq 1,~\forall n.

The Slater’s condition is widely used in constrainted optimization algorithms (e.g., the majorization-minimization algorithm [53] and virtual queue-based online convex optimization algorithm [54]). With Lemma 3 and the Slater’s condition, we are ready to show the main convergence result of Algorithm 1 in the following theorem.

Theorem 1.

Let {𝐦τ}τ=1\{\mathbf{m}^{\tau}\}_{\tau=1}^{\infty} denote the sequence of model assignment policies generated by Algorithm 1. If the Slater’s condition is satisfied at the limiting point 𝐦\mathbf{m}^{*} of the sequence {𝐦τ}τ=1\{\mathbf{m}^{\tau}\}_{\tau=1}^{\infty}, then 𝐦\mathbf{m}^{*} is a stationary point of problem 𝒫1\mathcal{P}_{1} almost surely.

Proof.

The proof is presented in Appendix B. ∎

In Section V-B, we further verify the convergence of Algorithm 1 through simulations.

IV Extension to Multi-Antenna Devices

In the previous sections, we analyzed the scenario of single-antenna edge devices to establish foundational insights for optimizing communication efficiency in distributed LLM inference. In this section, we extend the proposed framework and algorithms to the multi-antenna setting. By leveraging spatial multiplexing, the multi-antenna configuration further enhances communication efficiency and reduces inference latency, providing a more general and scalable solution.

IV-A Problem Formulation

Building upon the single-antenna setting, we now consider a more general scenario where edge devices in the distributed LLM inference system are equipped with multiple antennas, so that spatial diversity and spatial multiplexing can be exploited to further improve communication efficiency. Specifically, we consider that the server and each edge device are equipped with $N_{r}$ and $N_{t}$ antennas, respectively. Similar to the single-antenna case, all devices simultaneously upload their intermediate layer outputs through wireless multiple-access channels. Let $\mathbf{z}_{n}=[z_{n,1},\ldots,z_{n,L}]^{\mathsf{T}}$ denote the per-round transmitted $L$ entries of device $n$'s intermediate output $\mathbf{Z}_{n}$. Let $\mathbf{A}\in\mathbb{C}^{N_{r}\times L}$ and $\mathbf{B}_{n}\in\mathbb{C}^{N_{t}\times L}$ denote the aggregation beamforming matrix at the edge server and the data precoding matrix at device $n$, respectively. Then, the received signal at the server after the AirComp can be derived as follows,

\hat{\mathbf{z}}=\mathbf{A}^{\mathsf{H}}\sum_{n=1}^{N}\mathbf{H}_{n}\mathbf{B}_{n}\mathbf{z}_{n}+\mathbf{A}^{\mathsf{H}}\mathbf{n}, \qquad (31)

where 𝐇nNr×Nt\mathbf{H}_{n}\in\mathbb{C}^{N_{r}\times N_{t}} denotes the uplink MIMO channel from device nn to the edge server. In the multi-antenna setting, each device employs the precoding (beamforming) matrix to map its transmitted vector 𝐳n\mathbf{z}_{n} onto multiple antennas for simultaneous transmission. The distortion of 𝐳^\hat{\mathbf{z}} with respect to the desired target vector 𝐳=n=1N𝐳n\mathbf{z}=\sum_{n=1}^{N}\mathbf{z}_{n} is measured by the MSE, defined as

\textup{MSE}(\hat{\mathbf{z}},\mathbf{z})=\mathbb{E}\left[(\hat{\mathbf{z}}-\mathbf{z})^{\mathsf{H}}(\hat{\mathbf{z}}-\mathbf{z})\right]. \qquad (32)

By substituting (31) into (32), the MSE can be explicitly represented as a function of transceiver beamforming matrices as follows,

\textup{MSE}(\mathbf{A},\{\mathbf{B}_{n}\})=\sum_{n=1}^{N}\textup{tr}\left(\left(\mathbf{A}^{\mathsf{H}}\mathbf{H}_{n}\mathbf{B}_{n}-\mathbf{I}\right)\left(\mathbf{A}^{\mathsf{H}}\mathbf{H}_{n}\mathbf{B}_{n}-\mathbf{I}\right)^{\mathsf{H}}\right)+\sigma_{z}^{2}\,\textup{tr}\left(\mathbf{A}^{\mathsf{H}}\mathbf{A}\right). \qquad (33)

To effectively utilize the heterogeneous computational capabilities of edge devices and mitigate communication distortions, we similarly investigate a joint model assignment and transceiver optimization problem. Specifically, the joint optimization problem in the multi-antenna scenario can be formulated as follows,

𝒫2:min𝐦\displaystyle\mathcal{P}_{2}:~{}\min_{\mathbf{m}} 𝔼𝐇[min𝐀,{𝐁n}MSE(𝐀,{𝐁n})]\displaystyle~{}~{}\mathbb{E}_{\mathbf{H}}\left[\min_{\mathbf{A},\left\{\mathbf{B}_{n}\right\}}\textup{MSE}(\mathbf{A},\left\{\mathbf{B}_{n}\right\})\right] (34)
s.t. enmnstot+L0Ltr(𝐁n𝐁n𝖧)Pnmax,n,\displaystyle~{}~{}e_{n}m_{n}s^{\textup{tot}}+\frac{L_{0}}{L}\textup{tr}\left(\mathbf{B}_{n}\mathbf{B}_{n}^{\mathsf{H}}\right)\leq P_{n}^{\textup{max}},\forall n,
n=1Nmn=1,\displaystyle~{}~{}\sum_{n=1}^{N}m_{n}=1,
0mn1,n,\displaystyle~{}~{}0\leq m_{n}\leq 1,\forall n,

where the expectation 𝔼𝐇[]\mathbb{E}_{\mathbf{H}}\left[\cdot\right] is taken over all random channel realizations 𝐇={𝐇n}n=1N\mathbf{H}=\left\{\mathbf{H}_{n}\right\}_{n=1}^{N}.

IV-B Algorithm Development

In this subsection, we extend Algorithm 1 to a more general case involving multi-antenna edge devices. Similarly, we first decompose problem 𝒫2\mathcal{P}_{2} into a family of short-term subproblems and a long-term subproblem as follows.

IV-B1 Short-term transceiver optimization for given model assignment policy 𝐦\mathbf{m} and channel condition 𝐇\mathbf{H}

𝒫s:min𝐀,{𝐁n}\displaystyle\mathcal{P}_{s}:\min_{\mathbf{A},\left\{\mathbf{B}_{n}\right\}} MSE(𝐀,{𝐁n})\displaystyle~{}\textup{MSE}(\mathbf{A},\left\{\mathbf{B}_{n}\right\}) (35)
s.t. enmnstot+L0Ltr(𝐁n𝐁n𝖧)Pnmax,n.\displaystyle~{}e_{n}m_{n}s^{\textup{tot}}+\frac{L_{0}}{L}\textup{tr}\left(\mathbf{B}_{n}\mathbf{B}_{n}^{\mathsf{H}}\right)\leq P_{n}^{\textup{max}},\forall n.

IV-B2 Long-term model assignment optimization based on the optimal solution 𝐀(𝐦),{𝐁n(𝐦)}\mathbf{A}^{*}(\mathbf{m}),\left\{\mathbf{B}_{n}^{*}(\mathbf{m})\right\} to problem 𝒫s\mathcal{P}_{s}

𝒫l:min𝐦\displaystyle\mathcal{P}_{l}:\min_{\mathbf{m}} 𝔼𝐇[MSE(𝐀(𝐦),{𝐁n(𝐦)})]\displaystyle~{}~{}\mathbb{E}_{\mathbf{H}}\left[\textup{MSE}(\mathbf{A}^{*}(\mathbf{m}),\left\{\mathbf{B}_{n}^{*}(\mathbf{m})\right\})\right] (36)
s.t. enmnstot+L0Ltr(𝐁n(𝐦)𝐁n(𝐦)𝖧)Pnmax,n,\displaystyle~{}~{}e_{n}m_{n}s^{\textup{tot}}+\frac{L_{0}}{L}\textup{tr}\left(\mathbf{B}_{n}^{*}(\mathbf{m})\mathbf{B}_{n}^{*}(\mathbf{m})^{\mathsf{H}}\right)\leq P_{n}^{\textup{max}},\forall n,
n=1Nmn=1,\displaystyle~{}~{}\sum_{n=1}^{N}m_{n}=1,
0mn1,n.\displaystyle~{}~{}0\leq m_{n}\leq 1,\forall n.

To solve the short-term problem 𝒫s\mathcal{P}_{s}, we first simplify it by showing that the zero-forcing (channel-inversion) precoder is optimal for any given aggregation beamformer.

Lemma 4.

For a given aggregation beamformer 𝐀\mathbf{A}, the transmission MSE is minimized by using the zero-forcing precoders as follows,

𝐁n=(𝐀𝖧𝐇n)𝖧(𝐀𝖧𝐇n𝐇n𝖧𝐀)1,n.\displaystyle\mathbf{B}_{n}^{*}=\left(\mathbf{A}^{\!\mathsf{H}}\mathbf{H}_{n}\right)^{\!\mathsf{H}}\left(\mathbf{A}^{\!\mathsf{H}}\mathbf{H}_{n}\mathbf{H}_{n}^{\mathsf{H}}\mathbf{A}\right)^{\!-1},\forall n. (37)
Proof.

The proof of Lemma 4 is similar to that of Lemma 1 and thus omitted for brevity. ∎
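
As a quick sanity check of Lemma 4, the following sketch (with illustrative dimensions; it assumes N_t ≥ L so that A^H H_n has full row rank) verifies that the precoder in (37) perfectly aligns the effective channel, i.e., A^H H_n B_n = I, and that the resulting per-device power term equals tr((A^H H_n H_n^H A)^{-1}), the quantity that appears in the power constraint of problem (38) below.

```python
import numpy as np

rng = np.random.default_rng(1)
Nr, Nt, L = 8, 4, 4          # requires Nt >= L for the inverse in (37) to exist

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H_n = crandn(Nr, Nt)
A = crandn(Nr, L)

AhH = A.conj().T @ H_n                                   # L x Nt effective channel A^H H_n
B_n = AhH.conj().T @ np.linalg.inv(AhH @ AhH.conj().T)   # zero-forcing precoder from (37)

print(np.allclose(A.conj().T @ H_n @ B_n, np.eye(L)))    # True: effective channel is the identity
# Transmit-power term of device n: tr(B_n B_n^H) = tr((A^H H_n H_n^H A)^{-1})
print(np.trace(B_n @ B_n.conj().T).real,
      np.trace(np.linalg.inv(AhH @ AhH.conj().T)).real)  # the two values coincide
```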

Let 𝐆\mathbf{G} represent the normalized aggregation beamformer that satisfies tr(𝐆𝐆𝖧)=1\textup{tr}(\mathbf{G}\mathbf{G}^{\mathsf{H}})=1, and write 𝐀=α𝐆\mathbf{A}=\sqrt{\alpha}\mathbf{G}, where α\alpha denotes the squared Frobenius norm of 𝐀\mathbf{A}. By substituting (37), problem 𝒫s\mathcal{P}_{s} can be reformulated as follows,

minα,𝐆\displaystyle\min_{\alpha,\mathbf{G}} α\displaystyle~{}~{}\alpha (38)
s.t. enmnstot+L0αLtr((𝐆𝖧𝐇n𝐇n𝖧𝐆)1)Pnmax,n,\displaystyle~{}~{}e_{n}m_{n}s^{\textup{tot}}+\frac{L_{0}}{\alpha L}\textup{tr}\left(\left(\mathbf{G}^{\mathsf{H}}\mathbf{H}_{n}\mathbf{H}_{n}^{\mathsf{H}}\mathbf{G}\right)^{-1}\right)\leq P_{n}^{\textup{max}},\forall n,
tr(𝐆𝐆𝖧)=1.\displaystyle~{}~{}\textup{tr}\left(\mathbf{G}\mathbf{G}^{\mathsf{H}}\right)=1.

Problem (38) remains challenging to solve due to its non-convex constraints involving the term tr((𝐆𝖧𝐇n𝐇n𝖧𝐆)1)\textup{tr}((\mathbf{G}^{\mathsf{H}}\mathbf{H}_{n}\mathbf{H}_{n}^{\mathsf{H}}\mathbf{G})^{-1}). To address this issue, we develop a tractable approximation by employing the following inequality,

tr((𝐆𝖧𝐇n𝐇n𝖧𝐆)1)Lλmin(𝐇n𝖧𝐆𝐆𝖧𝐇n),\displaystyle\textup{tr}\left(\left(\mathbf{G}^{\mathsf{H}}\mathbf{H}_{n}\mathbf{H}_{n}^{\mathsf{H}}\mathbf{G}\right)^{-1}\right)\leq\frac{L}{\lambda_{\min}\left(\mathbf{H}_{n}^{\mathsf{H}}\mathbf{G}\mathbf{G}^{\mathsf{H}}\mathbf{H}_{n}\right)}, (39)

where equality holds when the channel is well-conditioned, i.e., when the singular values of 𝐇n\mathbf{H}_{n} are identical. By utilizing (39), we obtain the following approximate version of problem (38),

minα,𝐆\displaystyle\min_{\alpha,\mathbf{G}} α\displaystyle~{}~{}\alpha (40)
s.t. L0αλmin(𝐇n𝖧𝐆𝐆𝖧𝐇n)Pnmaxenmnstot,n,\displaystyle~{}~{}\frac{L_{0}}{\alpha\lambda_{\min}\left(\mathbf{H}_{n}^{\mathsf{H}}\mathbf{G}\mathbf{G}^{\mathsf{H}}\mathbf{H}_{n}\right)}\leq P_{n}^{\textup{max}}-e_{n}m_{n}s^{\textup{tot}},\forall n,
tr(𝐆𝐆𝖧)=1.\displaystyle~{}~{}\textup{tr}\left(\mathbf{G}\mathbf{G}^{\mathsf{H}}\right)=1.
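
Before reformulating further, the following sketch numerically illustrates the bound in (39); it assumes N_t = L so that G^H H_n H_n^H G and H_n^H G G^H H_n share the same set of eigenvalues and the right-hand side is finite (all dimensions are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
Nr, Nt, L = 8, 4, 4          # Nt = L assumed so that lambda_min on the right-hand side is nonzero

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H_n = crandn(Nr, Nt)
G = crandn(Nr, L)
G /= np.sqrt(np.trace(G @ G.conj().T).real)     # enforce tr(G G^H) = 1

X = G.conj().T @ H_n @ H_n.conj().T @ G         # L x L matrix on the left-hand side of (39)
Y = H_n.conj().T @ G @ G.conj().T @ H_n         # Nt x Nt matrix on the right-hand side of (39)
lhs = np.trace(np.linalg.inv(X)).real
rhs = L / np.linalg.eigvalsh(Y).min()
print(lhs, rhs, lhs <= rhs + 1e-9)              # the bound holds; it is tight when all eigenvalues of X are equal
```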

Then, by introducing a new variable 𝐆^=𝐆𝐆𝖧{\hat{\mathbf{G}}}=\mathbf{G}\mathbf{G}^{\mathsf{H}}, an equivalent formulation of problem (40) is obtained as follows,

minα,𝐆^\displaystyle\min_{\alpha,\hat{\mathbf{G}}} α\displaystyle~{}~{}\alpha (41)
s.t. L0αλmin(𝐇n𝖧𝐆^𝐇n)Pnmaxenmnstot,n,\displaystyle~{}~{}\frac{L_{0}}{\alpha\lambda_{\min}\left(\mathbf{H}_{n}^{\mathsf{H}}\hat{\mathbf{G}}\mathbf{H}_{n}\right)}\leq P_{n}^{\textup{max}}-e_{n}m_{n}s^{\textup{tot}},\forall n,
tr(𝐆^)=1,rank(𝐆^)=L,𝐆^0.\displaystyle~{}~{}\textup{tr}(\hat{\mathbf{G}})=1,~{}\textup{rank}(\hat{\mathbf{G}})=L,~{}\hat{\mathbf{G}}\succeq 0.

We observe that the only non-convex constraint in problem (41) is rank(𝐆^)=L\textup{rank}(\hat{\mathbf{G}})=L. Therefore, we remove this constraint to obtain a relaxed version of problem (41) as follows,

minα,𝐆^\displaystyle\min_{\alpha,\hat{\mathbf{G}}} α\displaystyle~{}~{}\alpha (42)
s.t. L0αλmin(𝐇n𝖧𝐆^𝐇n)Pnmaxenmnstot,n,\displaystyle~{}~{}\frac{L_{0}}{\alpha\lambda_{\min}\left(\mathbf{H}_{n}^{\mathsf{H}}\hat{\mathbf{G}}\mathbf{H}_{n}\right)}\leq P_{n}^{\textup{max}}-e_{n}m_{n}s^{\textup{tot}},\forall n,
tr(𝐆^)=1,𝐆^0.\displaystyle~{}~{}\textup{tr}(\hat{\mathbf{G}})=1,~{}\hat{\mathbf{G}}\succeq 0.

Problem (42) can be shown to be convex. After solving it with a convex solver (e.g., the CVX toolbox in MATLAB [49]) to obtain the globally optimal solution 𝐆^\hat{\mathbf{G}}^{*}, we apply the Gaussian randomization method [50] to map this solution to a feasible, near-optimal solution of the original non-convex problem (41).
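
As an illustration, the following sketch solves the relaxed problem (42) with CVXPY (the paper uses the CVX toolbox in MATLAB [49]). To obtain a standard SDP, it uses the substitution G_tilde = alpha * G_hat, under which minimizing alpha is equivalent to minimizing tr(G_tilde) subject to linear matrix inequalities; this substitution, the real-valued channels, and all numerical values are assumptions of this sketch rather than the paper's derivation.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
N, Nr, Nt, L0 = 3, 8, 4, 1.0
H = [rng.standard_normal((Nr, Nt)) for _ in range(N)]   # real-valued channels for simplicity
c = rng.uniform(1.0, 5.0, size=N)                       # c_n = P_n^max - e_n m_n s^tot, assumed positive

# SDP in the scaled variable G_tilde = alpha * G_hat:
#   minimize  tr(G_tilde)
#   s.t.      H_n^T G_tilde H_n >= (L0 / c_n) I  (in the PSD sense),  G_tilde >= 0.
G_tilde = cp.Variable((Nr, Nr), symmetric=True)
constraints = [G_tilde >> 0]
constraints += [H[n].T @ G_tilde @ H[n] >> (L0 / c[n]) * np.eye(Nt) for n in range(N)]
prob = cp.Problem(cp.Minimize(cp.trace(G_tilde)), constraints)
prob.solve(solver=cp.SCS)

alpha = prob.value                     # optimal objective value of (42)
G_hat = G_tilde.value / alpha          # recovered G_hat with tr(G_hat) = 1
print(alpha, np.linalg.matrix_rank(G_hat, tol=1e-6))
# If rank(G_hat) exceeds the target rank, Gaussian randomization [50] can be used to
# extract a feasible beamformer for the original problem (41).
```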

Next, we solve the long-term model assignment problem 𝒫l\mathcal{P}_{l}. The proposed stochastic SCA algorithm, initially introduced for the single-antenna case in Section III-B, can be directly extended to the multi-antenna scenario without further modifications. For a clearer algorithmic description, we first reformulate the long-term problem 𝒫l\mathcal{P}_{l} into the following equivalent form,

min𝐦\displaystyle\min_{\mathbf{m}} f0(𝐦)=𝔼𝐇[MSE(𝐀(𝐦),{𝐁n(𝐦)})]\displaystyle~{}~{}f_{0}(\mathbf{m})=\mathbb{E}_{\mathbf{H}}\left[\textup{MSE}(\mathbf{A}^{*}(\mathbf{m}),\left\{\mathbf{B}_{n}^{*}(\mathbf{m})\right\})\right] (43)
s.t. f1(𝐦)=stotdiag(𝐞𝐦𝖳)+L0L𝐞c(𝐦)𝐩max,\displaystyle~{}~{}f_{1}(\mathbf{m})=s^{\textup{tot}}\textup{diag}(\mathbf{e}\mathbf{m}^{\mathsf{T}})+\frac{L_{0}}{L}\mathbf{e}_{c}\left(\mathbf{m}\right)\leq\mathbf{p}^{\textup{max}},
𝐦T𝟏=1,𝐦0,\displaystyle~{}~{}\mathbf{m}^{\textsf{T}}\bm{1}=1,\mathbf{m}\geq 0,

where

𝐞c(𝐦)=[tr(𝐁1(𝐦)(𝐁1(𝐦))𝖧),,tr(𝐁N(𝐦)(𝐁N(𝐦))𝖧)]𝖳,\mathbf{e}_{c}\!\left(\mathbf{m}\right)\!=\![\textup{tr}\!\left(\mathbf{B}_{1}^{*}(\mathbf{m})(\mathbf{B}_{1}^{*}(\mathbf{m}))^{\mathsf{H}}\right)\!,\!\ldots\!,\textup{tr}\!\left(\mathbf{B}_{N}^{*}(\mathbf{m})(\mathbf{B}_{N}^{*}(\mathbf{m}))^{\mathsf{H}}\right)]^{\mathsf{T}}\!,

𝐩max=[P1max,,PNmax]𝖳\mathbf{p}^{\textup{max}}=[{P}_{1}^{\textup{max}},\ldots,{P}_{N}^{\textup{max}}]^{\mathsf{T}}, and 𝐞=[e1,,eN]𝖳\mathbf{e}=[e_{1},\ldots,e_{N}]^{\mathsf{T}}. The main structure of the proposed stochastic SCA algorithm remains intact, and it iteratively performs the following two steps: First, quadratic surrogate functions f^0(𝐦)\hat{f}_{0}(\mathbf{m}), f^1(𝐦)\hat{f}_{1}(\mathbf{m}) are constructed to approximate the non-convex components of the original objective and constraint functions f0(𝐦)f_{0}(\mathbf{m}), f1(𝐦)f_{1}(\mathbf{m}), respectively. Then, the resulting convex quadratic approximation problem is solved, and the long-term model assignment policy is updated based on the solution. Here, we omit the details of these two steps for brevity. In Section V-B, we also demonstrate the convergence of the proposed algorithm for the multi-antenna scenario through simulations.
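
For concreteness, the sketch below shows the generic structure of such a stochastic SCA (CSSCA) iteration, following [51]: a recursively averaged gradient estimate is used to build a convex quadratic surrogate, the surrogate problem is solved over the feasible set, and the assignment is updated with a diminishing step size. The gradient oracle grad_mse_sample, the surrogate curvature tau_reg, and the omission of the power constraint are placeholders and simplifications of this sketch, not the exact construction of Section III-B.

```python
import numpy as np
import cvxpy as cp

def cssca_model_assignment(grad_mse_sample, sample_channel, N, T=200, tau_reg=1.0):
    """Generic constrained stochastic SCA loop for the long-term assignment m."""
    m = np.full(N, 1.0 / N)                 # initial model assignment (uniform)
    u = np.zeros(N)                         # recursive estimate of grad f_0(m)
    for t in range(1, T + 1):
        rho = 1.0 / (t + 1) ** 0.8          # rho^tau (cf. Section V-B)
        gamma = 15.0 / (14.0 + t)           # gamma^tau (cf. Section V-B)
        H = sample_channel()                # one channel realization H^tau
        u = (1 - rho) * u + rho * grad_mse_sample(m, H)

        # Convex quadratic surrogate of f_0 around the current m, minimized over the
        # simplex; the surrogate power constraint f_1(m) <= p_max would be added here.
        m_var = cp.Variable(N)
        surrogate = u @ (m_var - m) + tau_reg * cp.sum_squares(m_var - m)
        cp.Problem(cp.Minimize(surrogate),
                   [cp.sum(m_var) == 1, m_var >= 0, m_var <= 1]).solve()
        m_hat = m_var.value

        m = (1 - gamma) * m + gamma * m_hat   # smoothed update of the assignment
    return m
```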

V Simulation Results

V-A Simulation Setups

V-A1 LLM Inference Model Setting

All simulations are performed on a desktop server equipped with an Nvidia GeForce RTX 4070 Ti GPU and an Intel Core i9 CPU, using PyTorch 2.0 with CUDA 11.7. We set up NN virtual machines (VMs), each simulating a distinct edge device. Each VM is allocated 4 CPU cores, 16 GB of RAM, and 128 GB of storage, ensuring efficient utilization of computational resources and parallel processing. For evaluation, we utilize the LLaMA2 [55] and LLaMA3 [11] models due to their state-of-the-art performance among open-source models. Additionally, we employ the WikiText-2 dataset [56], which is widely used for benchmarking LLM inference. We have released our implementation on GitHub: https://github.com/zklasd24/distributed_llama_AirComp, which builds upon the open-source project Distributed Llama [57].

The primary performance metric for inference accuracy is perplexity [58], a widely recognized measure of an LLM’s capability to predict the next word in a sequence. It is defined as follows,

Perplexity=exp(1Ltxtk=1Ltxtlog(wkw1,,wk1)),\displaystyle\mathrm{Perplexity}=\exp\!\left(\!-\frac{1}{L_{\textup{txt}}}\sum_{k=1}^{L_{\textup{txt}}}\log\mathbb{P}{}\left(w_{k}\!\mid\!w_{1},\ldots,w_{k-1}\right)\right)\!, (44)

where (wkw1,,wk1)\mathbb{P}{}\left(w_{k}\mid w_{1},\ldots,w_{k-1}\right) denotes the model’s predicted probability for the next word wkw_{k}, and LtxtL_{\textup{txt}} is the text length. Lower perplexity values indicate better inference performance, reflecting the model’s accuracy in generating subsequent tokens.
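
As a minimal illustration of (44), the following sketch computes perplexity from per-token log-probabilities; the array token_logprobs is assumed to hold log P(w_k | w_1, ..., w_{k-1}) for each position of the evaluated text (e.g., gathered from the model's output logits on WikiText-2).

```python
import numpy as np

def perplexity(token_logprobs):
    """Perplexity as defined in (44): exp of the negative mean token log-probability."""
    token_logprobs = np.asarray(token_logprobs, dtype=np.float64)
    return float(np.exp(-token_logprobs.mean()))

# Example: a model that assigns probability 0.25 to every next token has perplexity 4
print(perplexity(np.log(np.full(100, 0.25))))   # 4.0
```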

V-A2 Communication Model Setting

The number of antennas at the edge server is Nr=16N_{r}=16, and each edge device has either a single antenna or Nt=4N_{t}=4 antennas, depending on the scenario. The bandwidth between the edge server and the edge devices is B=10B=10 MHz. The uplink channels are assumed to follow independent and identically distributed (i.i.d.) Rician fading [59], modeled as i.i.d. complex Gaussian random variables with non-zero mean μ=1\mu=1 and variance σ2=1\sigma^{2}=1. Moreover, the maximum power budget is set to Pnmax=10P_{n}^{\textup{max}}=10, and the noise variance at the edge server is set to 1.
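
For reproducibility, the channel model described above can be sampled as follows (a sketch; the deterministic mean models the line-of-sight component and the zero-mean Gaussian part models the scattered component).

```python
import numpy as np

def rician_channel(Nr, Nt, mu=1.0, sigma2=1.0, rng=None):
    """Each entry is complex Gaussian with mean mu and variance sigma2 (Rician fading)."""
    if rng is None:
        rng = np.random.default_rng()
    scatter = np.sqrt(sigma2 / 2) * (rng.standard_normal((Nr, Nt))
                                     + 1j * rng.standard_normal((Nr, Nt)))
    return mu + scatter

# Example: N = 8 devices, each with Nt = 4 antennas, and Nr = 16 server antennas
H = [rician_channel(16, 4) for _ in range(8)]
```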

V-B Algorithm Convergence

Fig. 4: Convergence of Algorithm 1 for the scenarios of single-antenna devices (Top) and multi-antenna devices (Bottom).

In this subsection, we analyze the convergence behavior of the proposed algorithm for both single-antenna and multi-antenna scenarios. The parameters are set as ϵ=0.001\epsilon=0.001, ρτ=1/(τ+1)4/5\rho^{\tau}=1/(\tau+1)^{4/5}, and γτ=15/(14+τ)\gamma^{\tau}=15/(14+\tau). As illustrated in Fig. 4, the proposed algorithm demonstrates rapid convergence, reaching a stationary point within approximately 100 iterations. This swift convergence ensures that the distributed LLM inference system can quickly adapt to varying network conditions, enabling real-time inference, especially in latency-sensitive applications. Moreover, the consistent performance across both single-antenna and multi-antenna settings demonstrates the robustness of the proposed algorithm to various network scenarios.

Figure 5: The average MSE (a), perplexity (b), and average generation time (c) versus the number of edge devices for the scenario of single-antenna devices.

V-C Performance Evaluation

In this subsection, we compare the performance of the proposed AirComp all-reduce approach with the following two benchmark schemes.

  • Digital All-Reduce: All devices upload intermediate layer outputs using a traditional broadband digital multiple-access scheme, with each transmitted symbol quantized to Q=8Q=8 bits. To prevent multi-user interference, orthogonal frequency division multiple-access (OFDMA) is employed, assigning each sub-channel to one device [60].

  • Uncoded FDMA: This scheme similarly employs the OFDMA technique, with each device occupying a dedicated sub-channel to upload intermediate layer outputs in an uncoded analog manner.

Transmission Scheme | Transmission Time
Digital All-Reduce | \frac{NL_{0}Q}{B\log_{2}\left(1+\textup{SNR}_{\textup{rx}}N\right)}
Uncoded FDMA | \frac{NL_{0}}{B}
AirComp All-Reduce | \frac{L_{0}}{B}
TABLE II: Transmission time for different transmission schemes.
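
The per-round transmission times in Table II can be evaluated directly; the sketch below uses the bandwidth and quantization settings of Section V-A (B = 10 MHz, Q = 8 bits), while the payload size L0 and the receive SNR are illustrative placeholders.

```python
import numpy as np

def transmission_times(N, L0, B=10e6, Q=8, snr_rx=10.0):
    """Per-round transmission time (in seconds) for the three schemes in Table II."""
    digital = N * L0 * Q / (B * np.log2(1 + snr_rx * N))
    uncoded_fdma = N * L0 / B
    aircomp = L0 / B
    return digital, uncoded_fdma, aircomp

for N in (2, 4, 8):
    d, f, a = transmission_times(N, L0=4096)
    print(f"N={N}: digital={1e3*d:.3f} ms, uncoded FDMA={1e3*f:.3f} ms, AirComp={1e3*a:.3f} ms")
```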
Figure 6: The average MSE (a), perplexity (b), and average generation time (c) versus the number of edge devices for the scenario of multi-antenna devices.

In Fig. 5, we compare the inference performance of different transmission schemes using the LLaMA3 model with 8 billion parameters, across three key performance metrics: transmission MSE, perplexity, and average generation time. In Fig. 5(a), the proposed AirComp all-reduce approach consistently achieves low MSE across all device counts, significantly outperforming the uncoded FDMA scheme, which exhibits a near-linear increase in MSE as the number of devices grows. The digital all-reduce method achieves near-zero MSE across all configurations. However, it has significantly higher communication latency. In Fig. 5(b), perplexity follows the same trend as the transmission MSE. The AirComp all-reduce method maintains stable, low perplexity across all device configurations, while the perplexity of uncoded FDMA rises sharply with more devices. Digital all-reduce performs similarly to AirComp all-reduce, maintaining low perplexity. Turning to the average generation time in Fig. 5(c), we observe a notable distinction among the three methods. Here, the total inference time is defined as the sum of local computation time and the time taken to transmit the local outputs. The local computation time is obtained through experimental measurements, while the communication time is estimated based on different transmission methods as outlined in Table II, where SNRrx\textup{SNR}_{\textup{rx}} denotes the average receive signal-to-noise ratio (SNR). We observe that AirComp all-reduce consistently demonstrates the lowest latency, particularly as the number of edge devices grows. The digital all-reduce scheme shows a significant increase in generation time with more devices due to increased communication overhead, while the uncoded FDMA method provides moderate improvements but still lags behind AirComp all-reduce. The proposed AirComp all-reduce approach exhibits superior scalability compared to traditional communication strategies. Specifically, by exploiting analog signal superposition inherent in wireless channels, AirComp enables simultaneous aggregation of signals from multiple devices within a single communication slot. Consequently, unlike traditional communication schemes, whose overhead increases linearly with the number of participating devices, the AirComp all-reduce approach maintains low communication latency even as device count grows.

Fig. 6 expands on the simulation results by evaluating the performance of the proposed method in the more general setting of multi-antenna devices. In this scenario, the digital all-reduce scheme maintains the lowest MSE and perplexity. However, its average generation time grows considerably with an increasing number of devices, indicating scalability limitations in practice. The proposed AirComp all-reduce scheme, while exhibiting a slight increase in MSE compared to digital all-reduce, remains competitive in terms of perplexity and demonstrates the shortest generation time across all configurations. This makes it a promising choice for applications where low latency is critical and slight trade-offs in accuracy are acceptable. On the other hand, the uncoded FDMA scheme’s performance degrades significantly with more devices, reflected by steep increases in both MSE and perplexity.

Model | Scheme | 1 device | 2 devices | 4 devices | 8 devices
LLaMA2-7B | Digital All-Reduce | 114.2 | 85.2 | 79.5 | 108.3
LLaMA2-7B | AirComp All-Reduce | 114.2 | 69.7 | 45.7 | 37.8
LLaMA2-13B | Digital All-Reduce | 217.3 | 174.0 | 176.6 | 261.4
LLaMA2-13B | AirComp All-Reduce | 217.3 | 128.5 | 81.3 | 66.4
LLaMA2-70B | Digital All-Reduce | N/A* | 807.3 | 729.7 | 981.6
LLaMA2-70B | AirComp All-Reduce | N/A* | 660.9 | 423.0 | 354.2
LLaMA3-70B | Digital All-Reduce | N/A* | 893.2 | 783.8 | 1033.6
LLaMA3-70B | AirComp All-Reduce | N/A* | 746.8 | 477.1 | 406.0
*: Not available due to insufficient memory.
TABLE III: Average generation time per token (ms) for different models across varying device numbers; for every model, the shortest generation time is achieved by the AirComp all-reduce scheme.

To further validate the effectiveness of the proposed algorithm, we conduct additional experiments using larger models, including LLaMA2 with 7, 13, and 70 billion parameters, and LLaMA3 with 70 billion parameters. In Table III, it is observed that the AirComp all-reduce method consistently achieves shorter generation times, particularly as the number of devices increases. Across various device and model configurations, AirComp all-reduce achieves up to 4x faster generation, demonstrating its significant advantages for distributed LLM inference, especially with large-scale models.

Overall, the AirComp all-reduce approach emerges as a balanced and scalable solution, effectively managing the trade-offs between latency, accuracy, and scalability in both single-antenna and multi-antenna environments. This highlights its potential for deployment in practical, large-scale wireless scenarios.

V-D Comparison with Centralized Inference Approach

In this subsection, we compare the proposed AirComp-based distributed inference framework with the traditional centralized inference approach. Table IV compares the per-token generation latency of centralized and distributed LLM inference across different large models. As shown in the table, although centralized inference does not incur any communication overhead, it suffers from a significantly higher per-token computation time. In contrast, the proposed distributed inference approach partitions the model across multiple devices, substantially reducing each device’s computational load. Despite introducing modest communication overhead, the proposed distributed scheme achieves significantly lower total inference latency per token. Hence, for large-scale LLMs with billions of parameters, distributing both the model storage and the compute cost across multiple devices proves far more feasible and efficient than hosting the entire model on a single node. Moreover, both per-token local computation time and communication overhead increase substantially as the number of transformer layers grows. However, it is noteworthy that the distributed inference approach consistently maintains a significant latency advantage over centralized inference across all models, regardless of the number of transformer layers.

Model | Number of Transformer Layers | Method | Per-Token Local Computation Time (ms) | Per-Token Communication Time (ms) | Per-Token Total Generation Time (ms)
LLaMA2-7B | 32 | Centralized | 114.2 | 0 | 114.2
LLaMA2-7B | 32 | Distributed | 26.0 | 11.8 | 37.8
LLaMA2-13B | 40 | Centralized | 217.3 | 0 | 217.3
LLaMA2-13B | 40 | Distributed | 38.5 | 27.9 | 66.4
LLaMA2-70B | 80 | Centralized | 1152.6 | 0 | 1152.6
LLaMA2-70B | 80 | Distributed | 264.6 | 89.6 | 354.2
TABLE IV: Comparison of centralized vs. distributed inference across different models: per-token computation, communication, and total latency.

VI Conclusion

In this paper, we proposed a novel distributed on-device LLM inference framework employing tensor parallelism. To mitigate the communication overhead from frequent all-reduce steps in tensor parallelism, we proposed a communication-efficient AirComp all-reduce approach. Moreover, to minimize the average transmission MSE, we formulated a joint model assignment and transceiver design problem, which takes the form of a mixed-timescale stochastic non-convex optimization problem. We further developed an efficient two-stage algorithm that decomposed the original problem into short-term transceiver optimization and long-term model assignment subproblems, which were solved by leveraging SDR and stochastic SCA, respectively. We proved that the proposed algorithm converges almost surely to a stationary point of the original problem. Simulation results demonstrated that the proposed approach significantly reduced inference latency while improving inference accuracy, making distributed on-device LLM inference feasible for resource-constrained edge devices.

There are several promising directions for further advancing distributed on-device LLM inference systems. One important research direction is experimentally validating the proposed AirComp-based distributed inference framework using real-world wireless hardware setups, further assessing practical performance and robustness. In addition, exploring cluster-based hierarchical AirComp designs and distributed transceiver optimization methods can effectively address potential scalability bottlenecks arising from synchronization overhead, channel estimation complexity, and computational demands in large-scale device networks.

Appendix

VI-A Proof of Lemma 3

Under the assumption that the channel statistics remain constant throughout the inference process, the sample-wise approximation of the average MSE, f¯0(𝐦τ)\bar{f}_{0}(\mathbf{m}^{\tau}), satisfies

\displaystyle\bar{f}_{0}(\mathbf{m}^{\tau})\xrightarrow{\textup{~{}a.s.~{}}}f_{0}(\mathbf{m}^{\tau}), (45)
\displaystyle\mathbb{E}\left[\left\|\bar{f}_{0}(\mathbf{m}^{\tau})-f_{0}(\mathbf{m}^{\tau})\right\|\right]=\mathcal{O}\left(\frac{1}{\sqrt{\tau}}\right), (46)

which follow from the law of large numbers and the central limit theorem, respectively. Then, combining (45) and (46) into (21), we have

\displaystyle\lim_{\tau\to\infty}\left|\hat{f}_{i}^{\tau}(\mathbf{m}^{\tau})-f_{i}(\mathbf{m}^{\tau})\right|=0, (47)

for i=0,1i=0,1. Equation (47) indicates the convergence of f^iτ\hat{f}_{i}^{\tau}, and we then need to prove the convergence of f^iτ\nabla\hat{f}_{i}^{\tau} as follows,

\displaystyle\lim_{\tau\to\infty}\left|\nabla\hat{f}_{i}^{\tau}(\mathbf{m}^{\tau})-\nabla f_{i}(\mathbf{m}^{\tau})\right|=0. (48)

It is easy to verify that the MSE function f¯0\bar{f}_{0} and its gradient f¯0\nabla\bar{f}_{0} are Lipschitz continuous, since the channel samples are bounded in practice. Then, we can obtain that

𝔼[𝐮0τ]f0(𝐦τ)\displaystyle\|\mathbb{E}[\mathbf{u}_{0}^{\tau}]-\nabla f_{0}(\mathbf{m}^{\tau})\| (49)
\displaystyle\leq 𝔼[f¯0(𝐦τ)f0(𝐦τ)]\displaystyle\mathbb{E}[\|\nabla\bar{f}_{0}(\mathbf{m}^{\tau})-\nabla f_{0}(\mathbf{m}^{\tau})\|]
(a)\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny(a)}}}}{{\leq}}} 𝒪(1τ),\displaystyle\mathcal{O}\left(\frac{1}{\sqrt{\tau}}\right),

where (a) holds since f¯0\nabla\bar{f}_{0} is Lipschitz continuous. From τ=0ρττ1/2<\sum_{\tau=0}^{\infty}\rho^{\tau}\tau^{-1/2}<\infty, we can obtain that

τ=0ρτ𝔼[𝐮0τ]f0(𝐦τ)<,\displaystyle\sum_{\tau=0}^{\infty}\rho^{\tau}\|\mathbb{E}[\mathbf{u}_{0}^{\tau}]-\nabla f_{0}(\mathbf{m}^{\tau})\|<\infty, (50)

which indicates the convergence of 𝐮0τ\mathbf{u}_{0}^{\tau}. Then, according to [61, Lemma 1], equation (48) holds.

Next, according to the fact that f¯0\bar{f}_{0} is Lipschitz continuous, it directly follows that there exists a constant ll such that

limτ1,τ2f^iτ1(𝐦τ1)f^iτ2(𝐦τ2)l𝐦τ1𝐦τ2.\displaystyle\lim_{\tau_{1},\tau_{2}\to\infty}\hat{f}_{i}^{\tau_{1}}(\mathbf{m}^{\tau_{1}})-\hat{f}_{i}^{\tau_{2}}(\mathbf{m}^{\tau_{2}})\leq l\|\mathbf{m}^{\tau_{1}}-\mathbf{m}^{\tau_{2}}\|. (51)

Finally, from (47), (48) and (51), we can obtain that the sequences of the surrogate functions f^iτ(𝐦)\hat{f}_{i}^{\tau}(\mathbf{m}) converge to f^i(𝐦)\hat{f}_{i}(\mathbf{m}) almost surely.

VI-B Proof of Theorem 1

Let {𝐦τ}τ=1\{\mathbf{m}^{\tau}\}_{\tau=1}^{\infty} denote the sequence of model assignment policies generated by Algorithm 1. According to [51, Lemma 4], we have

limτf1(𝐦τ)𝐩max,\displaystyle\lim_{\tau\to\infty}f_{1}(\mathbf{m}^{\tau})\leq\mathbf{p}^{\textup{max}}, (52)
limτ𝐦τ𝐦^τ=0,\displaystyle\lim_{\tau\to\infty}\|\mathbf{m}^{\tau}-\hat{\mathbf{m}}^{\tau}\|=0, (53)

where 𝐦^τ\hat{\mathbf{m}}^{\tau} is obtained by solving problem (25) or (26). Then, we introduce an auxiliary variable 𝐦~τ\tilde{\mathbf{m}}^{\tau}, which is the optimal solution of the following problem,

𝐦~τ=min𝐦\displaystyle\tilde{\mathbf{m}}^{\tau}=\min_{\mathbf{m}} f^0τ(𝐦)\displaystyle~{}~{}\hat{f}_{0}^{\tau}(\mathbf{m}) (54)
s.t. f^1τ(𝐦)𝐩max+𝝁τ,\displaystyle~{}~{}\hat{f}_{1}^{\tau}(\mathbf{m})\leq\mathbf{p}^{\textup{max}}+\bm{\mu}^{\tau},
n=1Nmn=1,\displaystyle~{}~{}\sum_{n=1}^{N}m_{n}=1,
0mn1,n,\displaystyle~{}~{}0\leq m_{n}\leq 1,\forall n,

where limτ𝝁τ=0\lim_{\tau\to\infty}\bm{\mu}^{\tau}=0. Letting τ\tau\to\infty in (54) and combining (47) and (53) into (54), we have

𝐦=min𝐦\displaystyle\mathbf{m}^{*}=\min_{\mathbf{m}} f^0(𝐦)\displaystyle~{}~{}\hat{f}_{0}(\mathbf{m}) (55)
s.t. f^1(𝐦)𝐩max,\displaystyle~{}~{}\hat{f}_{1}(\mathbf{m})\leq\mathbf{p}^{\textup{max}},
n=1Nmn=1,\displaystyle~{}~{}\sum_{n=1}^{N}m_{n}=1,
0mn1,n.\displaystyle~{}~{}0\leq m_{n}\leq 1,\forall n.

Then, if 𝐦\mathbf{m}^{*} satisfies Slater’s condition, the KKT conditions of problem (55) hold, i.e., there exists λ\lambda such that

f^0(𝐦)+λf^1(𝐦)=𝟎.\displaystyle\nabla\hat{f}_{0}(\mathbf{m}^{*})+\lambda\nabla\hat{f}_{1}(\mathbf{m}^{*})=\mathbf{0}. (56)

Finally, it follows from Lemma 1 and (56) that 𝐦\mathbf{m}^{*} satisfies the KKT condition of the original problem 𝒫l\mathcal{P}_{l} as follows,

f0(𝐦)+λf1(𝐦)=𝟎.\displaystyle\nabla f_{0}(\mathbf{m}^{*})+\lambda\nabla f_{1}(\mathbf{m}^{*})=\mathbf{0}. (57)

This completes the proof.

References

  • [1] K. Zhang, H. He, S. Song, J. Zhang, and K. B. Letaief, “Distributed on-device LLM inference with over-the-air computation,” in Proc. IEEE Int. Conf. Commun. (ICC), Montreal, Canada, Jun. 2025, to appear, https://arxiv.org/abs/2502.12559.
  • [2] B. Min, H. Ross, E. Sulem, A. P. B. Veyseh, T. H. Nguyen, O. Sainz, E. Agirre, I. Heintz, and D. Roth, “Recent advances in natural language processing via large pre-trained language models: A survey,” ACM Comput. Surv., vol. 56, no. 2, pp. 1–40, 2023.
  • [3] Y. Chang, X. Wang, J. Wang, Y. Wu, L. Yang, K. Zhu, H. Chen, X. Yi, C. Wang, Y. Wang, et al., “A survey on evaluation of large language models,” ACM Trans. Intell. Syst. Technol., vol. 15, no. 3, pp. 1–45, 2024.
  • [4] Y. Gu, R. Tinn, H. Cheng, M. Lucas, N. Usuyama, X. Liu, T. Naumann, J. Gao, and H. Poon, “Domain-specific language model pretraining for biomedical natural language processing,” ACM Trans. Comput. Healthc., vol. 3, no. 1, pp. 1–23, 2021.
  • [5] H. Fan, X. Liu, J. Y. H. Fuh, W. F. Lu, and B. Li, “Embodied intelligence in manufacturing: leveraging large language models for autonomous industrial robotics,” J. Intell. Manuf., pp. 1–17, 2024.
  • [6] Y. Yang, T. Zhou, K. Li, D. Tao, L. Li, L. Shen, X. He, J. Jiang, and Y. Shi, “Embodied multi-modal agent trained by an LLM from a parallel textworld,” in IEEE Conf. Comput. Vis. Pattern Recognit., pp. 26275–26285, 2024.
  • [7] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al., “Palm-e: An embodied multimodal language model,” arXiv preprint arXiv:2303.03378, 2023.
  • [8] L. Bariah, Q. Zhao, H. Zou, Y. Tian, F. Bader, and M. Debbah, “Large generative AI models for telecom: The next big thing?,” IEEE Commun. Mag., 2024.
  • [9] J. Shao, J. Tong, Q. Wu, W. Guo, Z. Li, Z. Lin, and J. Zhang, “WirelessLLM: Empowering large language models towards wireless intelligence,” J. Commun. Inf. Netw., vol. 9, pp. 99–112, 2024.
  • [10] J. Tong, J. Shao, Q. Wu, W. Guo, Z. Li, Z. Lin, and J. Zhang, “WirelessAgent: Large language model agents for intelligent wireless networks,” arXiv preprint arXiv:2409.07964, 2024.
  • [11] A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Yang, A. Fan, et al., “The llama 3 herd of models,” arXiv preprint arXiv:2407.21783, 2024.
  • [12] B. Wu, Y. Zhong, Z. Zhang, G. Huang, X. Liu, and X. Jin, “Fast distributed inference serving for large language models,” arXiv preprint arXiv:2305.05920, 2023.
  • [13] A. Borzunov, M. Ryabinin, A. Chumachenko, D. Baranchuk, T. Dettmers, Y. Belkada, P. Samygin, and C. A. Raffel, “Distributed inference and fine-tuning of large language models over the internet,” Adv. Neural Inf. Process. Syst., vol. 36, 2024.
  • [14] C. Hu, H. Huang, L. Xu, X. Chen, J. Xu, S. Chen, H. Feng, C. Wang, S. Wang, Y. Bao, et al., “Inference without interference: Disaggregate llm inference for mixed downstream workloads,” arXiv preprint arXiv:2401.11181, 2024.
  • [15] K. B. Letaief, W. Chen, Y. Shi, J. Zhang, and Y.-J. A. Zhang, “The roadmap to 6G: AI empowered wireless networks,” IEEE Commun. Mag., vol. 57, no. 8, pp. 84–90, 2019.
  • [16] K. B. Letaief, Y. Shi, J. Lu, and J. Lu, “Edge artificial intelligence for 6G: Vision, enabling technologies, and applications,” IEEE J. Sel. Areas Commun., vol. 40, no. 1, pp. 5–36, 2021.
  • [17] M. Zhang, J. Cao, X. Shen, and Z. Cui, “Edgeshard: Efficient LLM inference via collaborative edge computing,” arXiv preprint arXiv:2405.14371, 2024.
  • [18] X. Yuan, N. Li, T. Zhang, M. Li, Y. Chen, J. F. M. Ortega, and S. Guo, “High efficiency inference accelerating algorithm for noma-based edge intelligence,” IEEE Trans. Wireless Commun., 2024.
  • [19] Y. He, J. Fang, F. R. Yu, and V. C. Leung, “Large language models inference offloading and resource allocation in cloud-edge computing: An active inference approach,” IEEE Trans. Mobile Comput., 2024.
  • [20] Y. Chen, R. Li, X. Yu, Z. Zhao, and H. Zhang, “Adaptive layer splitting for wireless LLM inference in edge computing: A model-based reinforcement learning approach,” arXiv preprint arXiv:2406.02616, 2024.
  • [21] J. Shao, Y. Mao, and J. Zhang, “Learning task-oriented communication for edge inference: An information bottleneck approach,” IEEE J. Sel. Areas Commun., vol. 40, no. 1, pp. 197–211, 2021.
  • [22] H. Li, W. Yu, H. He, J. Shao, S. Song, J. Zhang, and K. B. Letaief, “Task-oriented communication with out-of-distribution detection: An information bottleneck framework,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Kuala Lumpur, Malaysia, Dec. 2023.
  • [23] H. Li, J. Shao, H. He, S. Song, J. Zhang, and K. B. Letaief, “Tackling distribution shifts in task-oriented communication with information bottleneck,” arXiv preprint arXiv:2405.09514, 2024.
  • [24] D. Wen, X. Jiao, P. Liu, G. Zhu, Y. Shi, and K. Huang, “Task-oriented over-the-air computation for multi-device edge AI,” IEEE Trans. Wireless Commun., 2023.
  • [25] F. Brakel, U. Odyurt, and A.-L. Varbanescu, “Model parallelism on distributed infrastructure: A literature review from theory to LLM case-studies,” arXiv preprint arXiv:2403.03699, 2024.
  • [26] M. Shoeybi, M. Patwary, R. Puri, P. LeGresley, J. Casper, and B. Catanzaro, “Megatron-lm: Training multi-billion parameter language models using model parallelism,” arXiv preprint arXiv:1909.08053, 2019.
  • [27] H. Dong, T. Johnson, M. Cho, and E. Soroush, “Towards low-bit communication for tensor parallel LLM inference,” arXiv preprint arXiv:2411.07942, 2024.
  • [28] J. Hansen-Palmus, M. Truong-Le, O. Hausdörfer, and A. Verma, “Communication compression for tensor parallel LLM inference,” arXiv preprint arXiv:2411.09510, 2024.
  • [29] B. Nazer and M. Gastpar, “Computation over multiple-access channels,” IEEE Trans. Inf. Theory, vol. 53, no. 10, pp. 3498–3516, 2007.
  • [30] S. Cui, J.-J. Xiao, A. J. Goldsmith, Z.-Q. Luo, and H. V. Poor, “Energy-efficient joint estimation in sensor networks: Analog vs. digital,” in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 4, pp. 745–748, IEEE, 2005.
  • [31] F. Wang and V. K. Lau, “Multi-level over-the-air aggregation of mobile edge computing over d2d wireless networks,” IEEE Trans. Wireless Commun., vol. 21, no. 10, pp. 8337–8353, 2022.
  • [32] M. Frey, I. Bjelaković, and S. Stańczak, “Over-the-air computation in correlated channels,” IEEE Trans. Signal Process., vol. 69, pp. 5739–5755, 2021.
  • [33] X. Cao, G. Zhu, J. Xu, and K. Huang, “Optimized power control for over-the-air computation in fading channels,” IEEE Trans. Wireless Commun., vol. 19, no. 11, pp. 7498–7513, 2020.
  • [34] K. Yang, T. Jiang, Y. Shi, and Z. Ding, “Federated learning via over-the-air computation,” IEEE Trans. Wireless Commun., vol. 19, no. 3, pp. 2022–2035, 2020.
  • [35] J. Zhu, Y. Shi, Y. Zhou, C. Jiang, W. Chen, and K. B. Letaief, “Over-the-air federated learning and optimization,” IEEE Internet Things J., 2024.
  • [36] T. Sery, N. Shlezinger, K. Cohen, and Y. C. Eldar, “Over-the-air federated learning from heterogeneous data,” IEEE Trans. Signal Process., vol. 69, pp. 3796–3811, 2021.
  • [37] Z. Liu, Q. Lan, A. E. Kalor, P. Popovski, and K. Huang, “Over-the-air multi-view pooling for distributed sensing,” IEEE Trans. Wireless Commun., 2023.
  • [38] Z. Wang, A. E. Kalor, Y. Zhou, P. Popovski, and K. Huang, “Ultra-low-latency edge inference for distributed sensing,” arXiv preprint arXiv:2407.13360, 2024.
  • [39] C. Feres, B. C. Levy, and Z. Ding, “Over-the-air multi-sensor collaboration for resource efficient joint detection,” IEEE Trans. Signal Process., 2023.
  • [40] X. Fan, Y. Wang, Y. Huo, and Z. Tian, “Joint optimization of communications and federated learning over the air,” IEEE Trans. Wireless Commun., vol. 21, no. 6, pp. 4434–4449, 2021.
  • [41] Y. Liang, Q. Chen, G. Zhu, H. Jiang, Y. C. Eldar, and S. Cui, “Communication-and-energy efficient over-the-air federated learning,” IEEE Trans. Wireless Commun., 2024.
  • [42] H. Sun, H. Tian, W. Ni, J. Zheng, D. Niyato, and P. Zhang, “Federated low-rank adaptation for large models fine-tuning over wireless networks,” IEEE Trans. Wireless Commun., 2024.
  • [43] Z. Zhuang, D. Wen, Y. Shi, G. Zhu, S. Wu, and D. Niyato, “Integrated sensing-communication-computation for over-the-air edge AI inference,” IEEE Trans. Wireless Commun., vol. 23, no. 4, pp. 3205–3220, 2023.
  • [44] P. Yang, D. Wen, Q. Zeng, Y. Zhou, T. Wang, H. Cai, and Y. Shi, “Over-the-air computation empowered vertically split inference,” IEEE Trans. Wireless Commun., 2024.
  • [45] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” Adv. Neural Inf. Process. Syst., 2017.
  • [46] M. Goldenbaum, H. Boche, and S. Stańczak, “Harnessing interference for analog function computation in wireless sensor networks,” IEEE Trans. Signal Process., vol. 61, no. 20, pp. 4893–4906, 2013.
  • [47] X. Li, G. Zhu, Y. Gong, and K. Huang, “Wirelessly powered data aggregation for iot via over-the-air function computation: Beamforming and power control,” IEEE Trans. Wireless Commun., vol. 18, no. 7, pp. 3437–3452, 2019.
  • [48] M. L. Overton and R. S. Womersley, “On the sum of the largest eigenvalues of a symmetric matrix,” SIAM J. Matrix Anal. Appl., vol. 13, no. 1, pp. 41–45, 1992.
  • [49] M. Grant and S. Boyd, “CVX: MATLAB software for disciplined convex programming, version 2.1,” 2014.
  • [50] Z.-Q. Luo, W.-K. Ma, A. M.-C. So, Y. Ye, and S. Zhang, “Semidefinite relaxation of quadratic optimization problems,” IEEE Signal Process. Mag., vol. 27, no. 3, pp. 20–34, 2010.
  • [51] A. Liu, V. K. Lau, and B. Kananian, “Stochastic successive convex approximation for non-convex constrained stochastic optimization,” IEEE Trans. Signal Process., vol. 67, no. 16, pp. 4189–4203, 2019.
  • [52] I. M. Bomze, V. F. Demyanov, R. Fletcher, T. Terlaky, I. Pólik, and T. Terlaky, “Interior point methods for nonlinear optimization,” Nonlinear Optimization, pp. 215–276, 2010.
  • [53] M. Razaviyayn, Successive convex approximation: Analysis and applications. PhD thesis, University of Minnesota, 2014.
  • [54] K. Zhang and X. Cao, “Online power control for distributed multitask learning over noisy fading wireless channels,” IEEE Trans. Signal Process., 2023.
  • [55] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al., “Llama 2: Open foundation and fine-tuned chat models,” arXiv preprint arXiv:2307.09288, 2023.
  • [56] S. Merity, N. S. Keskar, and R. Socher, “Regularizing and optimizing lstm language models,” arXiv preprint arXiv:1708.02182, 2017.
  • [57] B. Tadych, “Distributed Llama,” https://github.com/b4rtaz/distributed-llama, 2024.
  • [58] G. Alon and M. Kamfonas, “Detecting language model attacks with perplexity,” arXiv preprint arXiv:2308.14132, 2023.
  • [59] D. Tse and P. Viswanath, Fundamentals of wireless communication. Cambridge university press, 2005.
  • [60] A. Goldsmith, Wireless communications. Cambridge university press, 2005.
  • [61] A. Ruszczyński, “Feasible direction methods for stochastic programming problems,” Math. Program., vol. 19, pp. 220–229, 1980.