Power Allocation for the Base Matrix of Spatially Coupled Sparse Regression Codes
Abstract
We investigate power allocation for the base matrix of a spatially coupled sparse regression code (SC-SPARC) for reliable communication over an additive white Gaussian noise channel. A conventional SC-SPARC allocates power uniformly to the non-zero entries of its base matrix. Yet, to achieve the channel capacity with uniform power allocation, the coupling width and the coupling length of the base matrix must satisfy regularity conditions and tend to infinity as the rate approaches the capacity. For a base matrix with a finite and arbitrarily chosen pair of coupling width and coupling length, we propose a novel power allocation policy, termed V-power allocation. V-power allocation allocates more power to the outer columns of the base matrix to jumpstart the decoding process and less power to the inner columns, resembling the shape of the letter V. We show that V-power allocation outperforms uniform power allocation: in the limit of large blocklength, it ensures successful decoding for a wider range of signal-to-noise ratios at a given code rate. In the finite-blocklength regime, we show by simulations that power allocations imitating the shape of the letter V improve the error performance of a SC-SPARC.
I Introduction
For reliable communications over an additive white Gaussian noise (AWGN) channel, Joseph and Barron [1] designed the sparse regression code (SPARC). It forms a codeword by multiplying a design matrix by a sparse message. The message is sparse as it is segmented into several sections and each section contains only one non-zero entry. The codeword is passed through an AWGN channel subject to an average power constraint. With uniform power allocation across the non-zero entries of a message and a maximum likelihood decoder, a SPARC asymptotically achieves the channel capacity of the AWGN channel [1]. To overcome the complexity barrier of the maximum likelihood decoder, the approximate message passing (AMP) decoder with polynomial complexity has been proposed [2]–[4]. Its decoding error is closely tracked by the state evolution (SE) and it outperforms other low-complexity decoders [5][6] in terms of the finite-blocklength error rates. By judiciously allocating power to the non-zero entries of a sparse message, SPARCs with AMP decoding continue to achieve the channel capacity [3]. For example, iterative power allocation [7] uses the asymptotic SE of the AMP decoder to decide the power allocation for a message section by section.
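To make the construction above concrete, the sketch below builds a toy SPARC codeword in Python: a sparse message with one non-zero entry per section is multiplied by an i.i.d. Gaussian design matrix and passed through an AWGN channel. All parameter values (L, M, n, the noise variance) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy parameters (for illustration only).
L, M = 4, 8    # number of sections, entries per section
n = 16         # codeword length

# Sparse message: exactly one non-zero entry per section.
# The information is carried by the index of that entry.
beta = np.zeros(L * M)
info = rng.integers(0, M, size=L)
beta[np.arange(L) * M + info] = 1.0

# i.i.d. Gaussian design matrix (the uncoupled SPARC of [1]).
A = rng.normal(0.0, np.sqrt(1.0 / n), size=(n, L * M))
x = A @ beta                                  # codeword
y = x + rng.normal(0.0, np.sqrt(0.5), n)      # AWGN channel output
```

Since each section contributes one of M indices, the message alphabet has size M per section, matching the description above.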
By introducing a spatial coupling structure to the design matrix, SC-SPARCs with AMP decoding not only achieve the channel capacity [8][9] but also display a better error performance compared to power-allocated SPARCs [4][10]. Similar to the graph-lifting of SC-LDPC codes [11][12], the design matrix of a SC-SPARC is constructed from a base matrix. Each entry of the base matrix is expanded as a Gaussian submatrix in the design matrix, and the variance of the Gaussian entry is determined by the corresponding entry in the base matrix. The coupling structure of the base matrix is determined by a coupling pair comprising a coupling width and a coupling length.
Existing works on SC-SPARCs commonly assume that the power is uniformly allocated to the non-zero entries of the message as well as the base matrix, e.g., [7][9][10][13]. For such uniform power allocation (UPA), a decoding phenomenon termed sliding window is observed [9][13]: the decoding propagates from the two sides to the middle of a message in a symmetric fashion. Once the outer parts of a message are successfully decoded, they act as perfect side information that facilitates the decoding of the inner parts of the message. This phenomenon is exploited by a decoding technique termed seeding to boost the decoding performance of SC-SPARCs [8].
While UPA is sufficient for a SC-SPARC with AMP decoding to achieve the channel capacity, the coupling pair of the base matrix must satisfy regularity conditions and tend to infinity as the rate approaches the channel capacity [8][9]. Yet, in practical implementations, the coupling pair is finite and arbitrary. Given a finite coupling pair, it has been observed that UPA may be inefficient and cause AMP decoding failure. Thus, it is of practical interest to design a power allocation policy for a base matrix with a finite coupling pair that ensures successful decoding for a wide range of powers and code rates.
We propose a novel power allocation policy, V-power allocation (VPA), for the base matrix of a SC-SPARC with AMP decoding. Its power allocation is non-increasing from the outer columns to the middle column of the base matrix, resembling the shape of the letter V. Similar to iterative power allocation [7], VPA leverages the asymptotic SE of the AMP decoder to determine whether a SC-SPARC ensures successful decoding in the limit of large blocklength. Dissimilar to conventional power allocation policies that vary the non-zero coefficients of a message, VPA only varies the non-zero entries of a base matrix. To measure the performance of a power allocation policy for the base matrix, we define a power-rate function (PRF). Given a finite coupling pair, a channel noise variance, and a rate, the PRF quantifies the minimum power so that a SC-SPARC with a power allocation policy ensures successful decoding for all power above it. We derive the PRFs for UPA and VPA, respectively, and we show that VPA outperforms UPA in terms of the PRF, meaning that VPA ensures successful decoding for a larger range of power. While VPA is designed in the infinite blocklength regime, we use simulations to show that a VPA-like power allocation improves the finite-blocklength block error rates of a SC-SPARC.
Notations: For a positive integer , we denote . For a matrix , we denote by the entry at the -th row and the -th column. For a sequence , we denote .
II Spatially coupled sparse regression codes
II-A Encoder
The encoder of a SC-SPARC forms a codeword by multiplying a message vector by a design matrix ,
(1)
and the codeword is subject to an average power constraint
(2)
The message is a sparse vector of length . It consists of length- sections. In each section , there is only one non-zero entry, whose value is set a priori. Since the information is carried only by the indices of the non-zero entries, the alphabet size of is . As we will vary the variances of the entries of design matrix by varying the power allocation for the base matrix, we set all the non-zero coefficients of to without loss of generality.
The design matrix , as shown in Fig. 1, is constructed from a base matrix . The base matrix serves as a protograph for the design matrix. Each entry of base matrix is expanded as an submatrix of design matrix , whose entries are i.i.d. Gaussian random variables . A column block in corresponds to a set of columns that are expanded from one column in . A row block in corresponds to a set of rows that are expanded from one row in . The design matrix contains column blocks and row blocks. It holds that , .
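The expansion from base matrix to design matrix can be sketched as follows. The block sizes and the variance convention (each Gaussian block's entry variance set by the corresponding base-matrix entry) are illustrative assumptions, since scaling conventions vary across works.

```python
import numpy as np

def expand_design_matrix(W, mr, mc, rng):
    """Expand a base matrix W into a design matrix: each entry W[r, c]
    becomes an (mr x mc) block of i.i.d. N(0, W[r, c]) variables, so the
    variance of each Gaussian block is set by the base-matrix entry.
    (The exact variance scaling is an illustrative choice.)"""
    LR, LC = W.shape
    A = np.zeros((LR * mr, LC * mc))
    for r in range(LR):
        for c in range(LC):
            if W[r, c] > 0:
                A[r*mr:(r+1)*mr, c*mc:(c+1)*mc] = rng.normal(
                    0.0, np.sqrt(W[r, c]), size=(mr, mc))
    return A
```

A zero entry in W yields an all-zero block, so the band-diagonal structure of the base matrix carries over to the design matrix.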
The rate of a SC-SPARC is defined as
(3)
In this work, we focus on a class of band-diagonal base matrices defined below, which is introduced in [9]. We denote by and the coupling width and the coupling length of the base matrix, respectively.
Definition 1.
An base matrix is specified by the following properties.
- i) The base matrix is of size , where , , ;
- ii) Given any column , the non-zero entries are only at rows ;
- iii) The entries of satisfy the average power constraint (2),
(4)
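A band-diagonal base matrix of this kind can be sketched as below. The dimensions (Λ columns, Λ + ω − 1 rows, ω non-zero entries per column) follow the construction of [9]; the normalization making the average entry equal P is an assumed reading of the elided power constraint (4), and the uniform non-zero value corresponds to UPA.

```python
import numpy as np

def band_base_matrix(omega, Lam, P):
    """Band-diagonal base matrix (sketch of Definition 1): Lam columns,
    Lam + omega - 1 rows; column c is non-zero exactly at rows
    c, ..., c + omega - 1. The non-zero entries are uniform (UPA) and
    scaled so that the average over all entries equals P (an assumed
    reading of the power constraint (4))."""
    LR = Lam + omega - 1
    W = np.zeros((LR, Lam))
    val = P * LR / omega     # makes W.mean() == P
    for c in range(Lam):
        W[c:c + omega, c] = val
    return W
```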
II-B Decoder
The codeword (1) is transmitted through an AWGN channel yielding , where is a vector of i.i.d. Gaussian random variables each with zero mean and variance . The AMP decoder iteratively estimates the message from the channel output as follows [9, Section III]. At iteration , the AMP decoder initializes the estimate of as and initializes two vectors , . At iterations , the AMP decoder calculates the estimate as
(5)
(6)
where denotes the entry-wise product; function is the minimum mean square error estimator for ; vector and matrix are determined by the SE parameters. In the asymptotic regime , the SE parameters [9, (23)–(24)] at iterations are given by
(7a)
(7b)
where . The SE parameter (7b) closely tracks the normalized mean-square error between the part of message and the part of the estimate corresponding to column block at iteration , i.e., for all . This is evidenced both by the simulations [9, Fig. 3] and the concentration inequality [9, Theorem 2].
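The fixed-point behavior of the asymptotic SE can be illustrated with a stylized recursion over the base matrix. Here psi[c] is a binary proxy for the normalized error of column block c; the effective-noise averaging and the decoding threshold 2R ln 2 are assumptions modeled on the form of [9, (23)–(24)], not the elided expressions (7a)–(7b) themselves.

```python
import numpy as np

def asymptotic_se(W, sigma2, R, max_iter=50):
    """Stylized large-section-length state evolution (sketch): psi[c] = 1
    (undecoded) or 0 (decoded). Column block c decodes once its effective
    SNR exceeds 2*R*ln(2); this threshold shape is an assumption following
    the form of [9, (23)-(24)]."""
    LR, LC = W.shape
    psi = np.ones(LC)
    for _ in range(max_iter):
        phi = sigma2 + (W @ psi) / LC     # per-row-block effective noise
        snr = (W.T @ (1.0 / phi)) / LR    # per-column-block effective SNR
        new_psi = (snr < 2.0 * R * np.log(2)).astype(float)
        if np.array_equal(new_psi, psi):  # fixed point reached
            break
        psi = new_psi
    return psi
```

Running the recursion to its fixed point mimics the wave-like decoding: the outer column blocks, which see less interference, decode first and lower the effective noise seen by the inner blocks.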
III Power allocation and performance metrics
We define power allocation policies for a base matrix as well as the performance metrics.
For an base matrix in Definition 1, a power allocation policy is a mapping that gives a set of non-negative values corresponding to the entries of the base matrix. The power allocation policy for the base matrix does not affect the non-zero coefficients of message .
We say that a SC-SPARC successfully decodes column block of the message, i.e., , if there exists a time such that (7b); we say that a SC-SPARC successfully decodes the entire message if there exists a time ,
(8)
We use the asymptotic SE parameter (7b) to define the performance metrics. The asymptotic SE parameter is fully determined by the coupling pair , the noise variance , the rate , the power , and the power allocation policy . Fixing the first three parameters, it becomes .
We measure the performance of a power allocation policy using the rate-power function (RPF) and the power-rate function (PRF) defined next.
Definition 2.
Fix a finite coupling pair , a noise variance of the AWGN channel , and a power . The RPF for power allocation policy is the largest rate so that for any rate , a SC-SPARC generated by an base matrix with power allocation ensures successful decoding,
(9)
Fix a finite coupling pair , a noise variance of the AWGN channel , and a rate . The PRF for power allocation policy is the minimum power so that for any power , a SC-SPARC generated by an base matrix with power allocation ensures successful decoding,
(10)
We aim to find a power allocation policy that leads to a large , or equivalently, a small .
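Since, by the definition of the PRF, successful decoding holds for all powers above it, the PRF can be approximated numerically by bisection over the power. The sketch below is generic; `decodes` is a placeholder predicate standing in for a run of the asymptotic SE (7) to a fixed point, and is an assumed interface rather than anything defined in the paper.

```python
def prf_estimate(decodes, p_lo, p_hi, tol=1e-6):
    """Bisection for the minimum power above which decoding succeeds,
    assuming decodes(P) is monotone in P (False below the PRF, True
    above it). `decodes` is a placeholder, e.g. a run of the asymptotic
    SE for a fixed coupling pair, noise variance, and rate."""
    assert not decodes(p_lo) and decodes(p_hi)
    while p_hi - p_lo > tol:
        mid = 0.5 * (p_lo + p_hi)
        if decodes(mid):
            p_hi = mid    # decoding succeeds: the PRF is at or below mid
        else:
            p_lo = mid    # decoding fails: the PRF is above mid
    return p_hi
```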
IV Uniform power allocation
We say that an base matrix in Definition 1 has uniform power allocation (UPA) if
(11)
Theorem 1.
Fix a finite coupling pair and an AWGN channel with noise variance . The RPF for UPA is given by
(12)
the PRF for UPA is given by
(13)
where is the inverse function of .
Proof.
Appendix -A. ∎
We compare (12) with the channel capacity of the AWGN channel with noise variance . Using a right-endpoint approximation, we upper bound (12) as
(14)
The right side of (14) is smaller than for a finite coupling pair, implying that a SC-SPARC with a finite coupling pair no longer achieves the channel capacity. The gap closes if and only if and .
For rates , a SC-SPARC fails to ensure successful decoding; the reason is given by Proposition 1, stated below. We denote the index of the middle column of the base matrix by .
Proposition 1.
Consider a SC-SPARC generated by an base matrix with UPA (11). At iteration , if the AMP decoder successfully decodes column blocks of the message,
(15)
for some , then at iterations , the AMP decoder continues to decode column blocks of the message,
(16)
Proof.
Appendix -B. ∎
Proposition 1 states that if , the decoder fails to decode even a single column block of the message; otherwise, the entire message is decoded within iterations. Here, it suffices to limit because means that the entire message is successfully decoded in the first iteration (Appendix -C). Proposition 1 indicates that a SC-SPARC with UPA fails to decode at because the power (11) allocated to columns and of the base matrix is smaller than the power needed to make the event in (7b) occur.
V V-power allocation
V-A VPA Algorithm
Fixing an AWGN channel with noise variance and a rate , we present VPA for an base matrix.
In the extreme, a power allocation policy can allocate a different power to every non-zero entry of the base matrix . The outputs of VPA satisfy:
- a) The power does not change with rows, i.e., ,
(17)
- b) The power is symmetric about the middle column index,
(18)
We define function , as

(19)

Footnote 1: Although the summation in the denominator of the right side of (19) may include , is still a function of variables only, due to the symmetry assumption (18).
Let be a sequence of positive numbers chosen arbitrarily.
Proposition 2, stated next, shows that VPA follows the shape of the letter V, namely, is non-increasing on and non-decreasing on by symmetry (18).
Proposition 2.
Power allocation that ensure (line 2 of Algorithm 1) for all are unique and satisfy
(20)
Proof.
Appendix -D. ∎
Although the sequence does not perfectly coincide with the sequence formed at the end of line 5, it reflects the trend of for arbitrarily small .
V-B VPA performance
Before we show the PRF for VPA, we introduce Lemma 1 below. It states that if a column block of the message is decoded at some iteration, then it remains decoded in the subsequent iterations, and that the asymptotic SE (7b) can be expressed in terms of (19) under some conditions.
Lemma 1.
Consider a SC-SPARC generated by an base matrix. Fix a noise variance and a rate .
- 1) If , , then , .
- 2) For a power allocation policy satisfying a)–b), at ,
(21)
if , then at iterations ,
(22)
Proof.
Appendix -E. ∎
We present the PRF for VPA.
Theorem 2.
Proof sketch.
The proof is divided into two steps.
(i) We show that VPA outputs , or equivalently it does not declare failure, if and only if and is less than the upper bound in (23). Appendix -F.
(ii) We show that the output of VPA ensures successful decoding. Appendix -G. ∎
The working principle of VPA is to allocate sufficient power to the outer columns of the base matrix in order to jumpstart the wave-like decoding process that propagates from the sides to the middle of the message, and to allocate lower power (but not too low that prohibits the decoding process) to the inner columns of the base matrix.
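The V shape described above can be illustrated with a toy column power profile. This is not Algorithm 1: the decay parameter `alpha` and the linear profile are assumed knobs for illustration, with the profile made symmetric, non-increasing toward the middle column, and scaled so the base-matrix average equals P.

```python
import numpy as np

def v_shaped_allocation(omega, Lam, P, alpha=0.5):
    """Toy V-shaped column power profile (illustrative, not Algorithm 1):
    non-increasing from the outer columns to the middle, symmetric, then
    scaled so that the base-matrix average equals P. alpha controls the
    decay and is an assumed knob, not a quantity from the paper."""
    LR = Lam + omega - 1
    half = (Lam - 1) / 2.0
    # distance from the middle column -> larger power at the edges
    profile = 1.0 + alpha * np.abs(np.arange(Lam) - half) / max(half, 1.0)
    W = np.zeros((LR, Lam))
    for c in range(Lam):
        W[c:c + omega, c] = profile[c]   # constant within each column's band
    W *= P * LR * Lam / W.sum()          # enforce the average power constraint
    return W
```

The extra power at the boundary columns plays the role described above: it jumpstarts the wave-like decoding at the two ends of the message.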
V-C VPA outperforms UPA
Proposition 3.
Fix a finite coupling pair and an AWGN channel with noise variance . The rate that ensures also ensures , i.e.,
(24)
For a rate that belongs to both sets in (24), it holds that
(25)
Proof.
Appendix -H. ∎
In fact, UPA is a special case of VPA, obtained by carefully selecting (Appendix -I).
VI Simulations
We use an example to illustrate (25). Consider , , , , and . For UPA, we have , and Proposition 1 implies that a SC-SPARC with UPA fails to decode the message. We now determine the power allocation using VPA. Choosing , , and following lines 1–5 of VPA, we obtain . We check that line 9 of VPA is satisfied, and we transfer the residual power to the boundary columns yielding . Since the power in Theorem 2, the output of VPA ensures successful decoding.
While VPA is designed in the limit of large section length , we show by simulations that power allocation imitating the shape of the letter V (20) also improves the finite-blocklength error performance of a SC-SPARC. We consider a SC-SPARC of parameters and an AWGN channel of variance . Fig. 2 compares the SC-SPARC with UPA (11) and that with a VPA-like power allocation chosen empirically in Table I. Fig. 2 shows that the BLER of the VPA-like power allocation is smaller than that of UPA, especially in the middle part of the waterfall region. Fig. 3 shows the convergence of the BLERs. To reduce complexity, we use a Hadamard design matrix as in [4][9] instead of an i.i.d. Gaussian design matrix. The simulations may not perfectly match our theoretical results since the asymptotic SE is exact only for an i.i.d. Gaussian design matrix and .
| SNR (dB) | Outer columns | Inner columns |
|---|---|---|
| 9.5 | | |
| 10.0 | | |
| 10.5 | | |
| 11.0 | | |
| 11.5 | | |
| 12.0 | | |
VII Conclusion
In this paper, we propose V-power allocation for the base matrix of a SC-SPARC with a finite coupling pair. It yields power allocation that descends from the outer columns to the inner columns of the base matrix, resembling the shape of the letter V. By analyzing the PRFs, we show that given a code rate, V-power allocation ensures successful decoding for a wider range of power compared to uniform power allocation. Numerical simulations indicate that power allocation following the shape of the letter V reduces the finite-blocklength block error rates of a SC-SPARC.
References
- [1] A. Joseph and A. R. Barron, “Least squares superposition codes of moderate dictionary size are reliable at rates up to capacity,” in IEEE Trans. Inf. Theory, vol. 58, no. 5, pp. 2541–2557, May 2012.
- [2] J. Barbier and F. Krzakala, “Replica analysis and approximate message passing decoder for superposition codes,” in 2014 IEEE Int. Symp. Inf. Theory, Honolulu, HI, USA, July 2014, pp. 1494–1498.
- [3] C. Rush, A. Greig and R. Venkataramanan, “Capacity-achieving sparse superposition codes via approximate message passing decoding,” in IEEE Trans. Inf. Theory, vol. 63, no. 3, pp. 1476–1500, March 2017.
- [4] J. Barbier and F. Krzakala, “Approximate message-passing decoder and capacity achieving sparse superposition codes,” in IEEE Trans. Inf. Theory, vol. 63, no. 8, pp. 4894–4927, Aug. 2017.
- [5] A. Joseph and A. R. Barron, “Fast sparse superposition codes have near exponential error probability for ,” in IEEE Trans. Inf. Theory, vol. 60, no. 2, pp. 919–942, Feb. 2014.
- [6] S. Cho and A. Barron, “Approximate iterative Bayes optimal estimates for high-rate sparse superposition codes,” in Sixth Workshop on Information-Theoretic Methods in Science and Engineering, 2013.
- [7] R. Venkataramanan, S. Tatikonda, and A. Barron, “Sparse regression codes,” in Foundations and Trends in Communications and Information Theory, vol. 15, nos. 1–2, pp. 1–195, 2019.
- [8] J. Barbier, M. Dia and N. Macris, “Proof of threshold saturation for spatially coupled sparse superposition codes,” in 2016 IEEE Int. Symp. Inf. Theory, Barcelona, Spain, 2016, pp. 1173–1177.
- [9] C. Rush, K. Hsieh and R. Venkataramanan, “Capacity-achieving spatially coupled sparse superposition codes with AMP decoding,” in IEEE Trans. Inf. Theory, vol. 67, no. 7, pp. 4446–4484, July 2021.
- [10] K. Hsieh, C. Rush and R. Venkataramanan, “Spatially coupled sparse regression codes: design and state evolution analysis,” in 2018 IEEE Int. Symp. Inf. Theory, Vail, CO, USA, 2018, pp. 1016–1020.
- [11] S. Kudekar, T. J. Richardson and R. L. Urbanke, “Threshold saturation via spatial coupling: why convolutional LDPC ensembles perform so well over the BEC,” in IEEE Trans. Inf. Theory, vol. 57, no. 2, pp. 803–834, Feb. 2011.
- [12] D. G. M. Mitchell, M. Lentmaier and D. J. Costello, “Spatially coupled LDPC codes constructed from protographs,” in IEEE Trans. Inf. Theory, vol. 61, no. 9, pp. 4866–4889, Sept. 2015.
- [13] C. Rush, K. Hsieh and R. Venkataramanan, “Spatially coupled sparse regression codes with sliding window AMP decoding,” in 2019 IEEE Inf. Theory Workshop, Visby, Sweden, 2019, pp. 1–5.
- [14] N. Guo, S. Liang, and W. Han, “Power allocation for the base matrix of spatially coupled sparse regression codes,” arXiv preprint, May 2023.
-A Proof of Theorem 1
-A1 Proof of
Before we prove in (12), we first show that a SC-SPARC with UPA successfully decodes the entire message if and only if . If , then in (15), and Proposition 1 implies that the decoding is successful. To prove the reverse direction, we prove its contrapositive, namely, if , then a SC-SPARC with UPA cannot decode successfully. Since (7b) is non-decreasing on , we conclude that implies for all . Thus, in (15), and Proposition 1 implies the decoding failure.
We proceed to prove (12). We write it as
(26a)
(26b)
(26c)
where (26a) holds as we have proved that a SC-SPARC with UPA decodes successfully if and only if ; (26b) holds since
(27)
is non-decreasing as increases; (26c) holds since (26b) is equivalent to the supremum of that makes the event in (27) occur. Thus, (12) follows.
-A2 Proof of
Before we prove (13), we calculate the derivative of the left side of the event in (27) with respect to as
(28a)
(28b)
Since the left side of the event in (27) increases as increases, we conclude that is non-increasing as increases.
To express in terms of the inverse function of , we show that the inverse function exists. We calculate the derivative of with respect to as
(29a)
(29b)
Since is differentiable and its derivative is positive, we conclude that is continuous and strictly monotone; thus, it is bijective and has an inverse function.
To demonstrate the domain and the range of the inverse function , we show the range of . Since increases as increases by (29), it holds that for ,
(30a)
(30b)
meaning that the inverse function satisfies
(31)
We proceed to show (13). We write it as
(32a)
(32b)
(32c)
(32d)
where (32a) holds as we have shown in Appendix -A1 that a SC-SPARC with UPA decodes successfully if and only if ; (32b) holds as we have shown that is non-increasing as increases below (28); (32c) holds since (32b) is equivalent to the infimum of that makes the event in (27) occur; (32d) holds by noticing that the objective functions in (32c) and (26c) are the same and by the fact that exists.
-B Proof of Proposition 1
We show (16) by mathematical induction. We denote by the non-zero value of a base matrix with UPA (11). Plugging (7a) into (7b), we write the asymptotic SE parameter as
(33)
where Definition 1 i)–ii) implies
(34)
(35)
Initial step: At iteration , by the assumption of Proposition 1, , . Plugging into (33) and using , we write , as
(36a)
(36b)
Induction step: Assuming that (16) holds at iteration , we show that it continues to hold at iteration . If , then Lemma 1 item 1) implies that (16) holds at iteration . If , the asymptotic SE can be upper bounded as
(37a)
(37b)
(37c)
(37d)
where (37a) holds by plugging , , and into (33); (37b) holds by the induction assumption and the fact ; (37c) holds by rewriting the summation in the denominator of (37b); (37d) holds by change of variable . Comparing (36) and (37d), we conclude that
(37e)
Using (37e), the induction assumption, and Lemma 1 item 1), we conclude that (16) holds at iteration .
-C is sufficient
-D Proof of Proposition 2
In Appendix -D1, we first show that the sequence of power allocation is unique, and we then show that it is non-increasing (20). In Appendices -D2–-D3, we prove the lemmas used in Appendix -D1.
-D1 Main proof
To show the uniqueness, we introduce Lemma 2 below.
Lemma 2.
Fixing , function is continuous in and is monotonically increasing as increases.
Proof.
Appendix -D2. ∎
Lemma 2 indicates that fixing , is a bijective function of . Thus, there exists a unique power allocation that satisfies .
We proceed to show that the sequence is non-increasing (20).
Lemma 3.
For any , given , if , it holds that .
Proof.
Appendix -D3. ∎
-D2 Proof of Lemma 2
We compute the derivative of with respect to . If ,
(39)
if ,
(40)
Since is differentiable and its derivative is positive, we conclude that Lemma 2 holds.
-D3 Proof of Lemma 3
Given , function can be written as
(41a)
(41b)
(41c)
where (41a) holds by definition (19); (41b) holds by change of variable ; (41c) holds by expanding the summation in the denominator of (41b). Function with can be written as
(42)
To compare (41c) and (42), it suffices to compare the summations in their denominators. We denote by and the summations in the denominators of (42) and (41c), respectively, i.e.,
(43)
(44)
Fix .
Case 1: If and , it holds that
(45)
(46)
where (46) holds by the fact and the fact .
Case 2: If and , it holds that
(47)
(48)
(49)
where (48) holds since the assumptions on in Case 2 imply .
Case 3: If and , it holds that
(50)
Since cases 1–3 indicate , we conclude that if , then .
-E Proof of Lemma 1
-E1 Proof of item 1)
We prove item 1) by mathematical induction. We denote the set of zero positions of by
(51) |
To show item 1), it suffices to show
(52) |
Initial step: At , for all , thus is an empty set. It is trivial to conclude .
Induction step: Assuming that , we proceed to show . The asymptotic SE (7b) at iteration is given by (33). The induction assumption posits that if , we have . As a result, the denominator of the event in (33) at iteration is larger than or equal to that at iteration , and we obtain . Since is binary for all , , we conclude (52).
-E2 Proof of item 2)
The asymptotic SE can be written as (21) by comparing (21) and (33) with , . It remains to show that (22) holds for under the assumption that for in Lemma 1. The SE parameter is given by (33) with and the left side of its event can be lower bounded as
(53a)
(53b)
(53c)
(53d)
where (53b) holds by (34)–(35), the assumption for , and the symmetry of (18); (53c) holds since , , and ; (53d) holds by definition (19). The equality of (53b) is achieved if and only if for all . Replacing the left side of the event of by its lower bound in (53d), we obtain (22).
-F Proof of Theorem 2: step (i)
Given power and rate , we show that VPA outputs , i.e., it does not declare failure, if and only if and is less than the upper bound in (23). To this end, we first introduce useful lemmas and notations in Appendix -F1; we prove the ‘if’ direction in Appendix -F2; we prove the ‘only if’ direction in Appendix -F3; the proofs of the lemmas in Appendix -F1 are presented in Appendices -F4–-F7.
-F1 Lemmas and notations
We introduce Lemmas 4–7. We denote by the upper bound on in (23), i.e.,
(54) |
Lemma 4, stated next, shows the existence of .
Lemma 4.
If and only if , there exists a sequence that satisfies (line 2 of VPA) simultaneously for all .
Proof.
Appendix -F4. ∎
Lemma 5 shows how changes with .
Proof.
Appendix -F5. ∎
Lemma 6 below shows how changes with , .
Lemma 6.
Fixing for , , , function is continuous in and is monotonically decreasing as increases.
Proof.
Appendix -F6. ∎
Lemma 7 below shows how changes with .
Lemma 7.
Proof.
Appendix -F7. ∎
We introduce notations that will be used in the following proof. Fixing , we denote the derivative of with respect to at by
(56) |
Given a sequence of positive numbers , we denote by a positive number that ensures
(57) |
for . Such always exists since is continuously differentiable with respect to , i.e., it is a Lipschitz function of , and it decreases as increases by Lemma 6. Given , we define sequence ,
(58a)
(58b)
(58c)
Given and an arbitrary positive number , we define a non-increasing sequence , , as
(59) |
We denote
(60) |
We define the minimum of (59) and (60) as
(61) |
We define a sequence of numbers as
(62) |
-F2 Proof of ‘If’ direction
We show that if and , then VPA does not declare a failure, equivalently, there exists a sequence that satisfies (lines 1–5 and line 9 of Algorithm 1):
(63)
(64)
For and , we set
(65) |
where exists due to Lemma 4; is defined in (62); defining (61) via (59) is an arbitrary positive number; the sequence defining (61) via (59) is chosen to be large enough so that
(66) |
and that satisfies (55) with and , .
We show that the power allocation in (65) satisfies (63)–(64), respectively. The power allocation (65) satisfies the power constraint (64) due to (60)–(61). To show that the power allocation satisfies (63), it suffices to show the following statement:
(67) |
Since for all by (61), this statement allows us to conclude that the condition (63) is satisfied for all .
It remains to prove the statement (67). The statement trivially holds for , since ensures (63) according to Lemma 2. We proceed to prove the statement for . Taking the difference between two with different , we obtain
(68a)
(68b)
(68c)
where (68a) holds by the Mean Value Theorem and by Lemma 5; (68b) holds due to and Lemma 5; (68c) holds due to the fact that for all , the choice of below (66), and Lemma 7. We then take the difference between two with different , , just like (57) with , . Summing (68) and (57) for all , we obtain
(69a)
(69b)
(69c)
(69d)
where (69c) holds by plugging (62) into (69b), and (69d) holds by plugging (58b) into (69c). Since the second term in (69a) is equal to , we conclude that (63) holds at iteration with the power allocation in (65).
-F3 Proof of ‘Only if’ direction
Given and , we show that if VPA does not declare failure, then and .
Not declaring failure implies that at the end of line 5 of Algorithm 1, VPA forms finite that ensure for all . We show by mathematical induction that there exist that satisfy
(70a)
(70b)
for all and for any yielded by VPA.
Initial step: Since , , and is continuous and increasing by Lemma 2, we conclude that there exists that satisfies (70) at .
-F4 Proof of Lemma 4
Fixing for all , we denote
(72) |
Before we prove Lemma 4, we show
(73) |
For ,
(74) |
for ,
(75) |
for ,
(76) |
If is even, , the first two cases (74)–(75) describe for all , and the minimum in (73) is achieved at , yielding
(77) |
If is odd, , the three cases (74)–(76) jointly describe for all , and the minimum in (73) is achieved at , yielding
(78) |
We begin to prove Lemma 4.
We show that if , there exist that satisfy for all . We prove this by mathematical induction.
Initial step: Since and is continuous and increasing in , there exists that satisfies .
Induction step: Assuming there exist that satisfy for all , we show that together with , there exists that satisfies . Since are finite by the induction assumption, it holds that
(79) |
Since and is continuous and increasing in , there exists that achieves .
-F5 Proof of Lemma 5
-F6 Proof of Lemma 6
We show that the derivative of with respect to for is negative. For ,
(81) |
For ,
(82) |
-F7 Proof of Lemma 7
-G Proof of Theorem 2: step (ii)
We show that the output power allocation of VPA ensures successful decoding. The power determined at the end of line 5 of Algorithm 1 satisfies
(85) |
since for all and Lemma 2, which states that increases as increases. Plugging (85) into Lemma 1 item 2), we conclude , , meaning that VPA ensures successful decoding within iterations.
Since the left sides of the inequalities in lines 6 and 9 are equal to the left side of (4), representing the resultant power, lines 6 and 9 check the satisfaction of the power constraint (4). After transferring the residual power to and in lines 9–13, the resultant power still satisfies (85) since does not depend on and , and by Lemma 2 monotonically increases as increases.
-H Proof of Proposition 3
We first show (24). It suffices to show that the upper bound on in (13) is smaller than or equal to that in (23). For clarity, we denote the upper bounds on in (13) and (23) by and , respectively. We upper bound as
(86a)
(86b)
(86c)
(86d)
where (86b) holds by lower bounding by for all .
Given rate that ensures and , we proceed to show (25). We denote by the UPA at . To show (25), it suffices to show
(87) |
To this end, from (21) and (32), we conclude
(88) |
Lemma 3 implies for all ,
(89) |
At , Lemma 2 and (89) imply (87). At , since Lemma 6 implies , we conclude from Lemma 2 and (89) that (87) holds at . Similarly, at , iteratively using Lemmas 2 and 6 and (88)–(89), we obtain (87).
-I UPA is a special case of VPA
We show that UPA is a special case of VPA. This is an alternative proof for (25). Consider any and with UPA . Due to the fact that in (88) increases as increases, , and Lemma 3, we conclude for all . VPA recovers UPA by choosing , where is the output of line 2 of Algorithm 1. The difference is positive for all since Lemma 2 implies that the output of line 2 satisfies .