
An Information Theoretic Analysis of Single Transceiver Passive RFID Networks

Yücel Altuğ, S. Serdar Kozat, M. Kıvanç Mıhçak. Y. Altuğ is with the School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA (e-mail: ya68@cornell.edu); M. K. Mıhçak is with the Electrical and Electronic Engineering Department, Boğaziçi University, Istanbul, 34342, Turkey (e-mail: kivanc.mihcak@boun.edu.tr); S. S. Kozat is with the Electrical and Electronic Engineering Department, Koç University, Rumeli Feneri Yolu, Sarıyer, Istanbul, 34450, Turkey (e-mail: skozat@ku.edu.tr). Y. Altuğ is partially supported by TÜBİTAK Career Award no. 106E117; M. K. Mıhçak is partially supported by TÜBİTAK Career Award no. 106E117 and the TÜBA-GEBIP Award.
Abstract

In this paper, we study single-transceiver passive RFID networks by modeling the underlying physical system as a special cascade of a broadcast channel (BCC) and a multiple access channel (MAC), connected through a “nested codebook” structure. The particular application differentiates this communication setup from an ordinary cascade of a BCC and a MAC, and requires certain structures such as nested codebooks, imperfection channels, and additional power constraints. We investigate this problem both for discrete alphabets, where we characterize an achievable rate region, and for continuous alphabets with additive Gaussian noise, where we provide the capacity region. We thereby establish the maximal achievable error-free communication rates for this problem: these constitute the fundamental limit achievable by any TDMA-based RFID protocol and, for the case of continuous alphabets under additive Gaussian noise, the achievable rate region of any RFID protocol.

I Introduction

In this paper, we deal with a multiuser communication setup which consists of a “cascade” of a broadcast channel (BCC) and a multiple access channel (MAC). The encoder of the BCC part and the decoder of the MAC part reside in the same transceiver, and the decoders of the BCC part and the encoders of the MAC part are the mobile units of the system. The ultimate goal of the communication system considered in this paper is the following: the transceiver¹ wants to “find out” some specific information possessed by the mobile units; for this purpose, it first broadcasts the “type” of the information it seeks to receive from each mobile unit. Then every mobile unit “sends” the corresponding information of the received type back to the transceiver. This specific-type-of-information phenomenon differentiates the system at hand from an ordinary cascade of a BCC and a MAC, because in order to model this situation we employ a nested codebook structure at the MAC encoders, i.e., at the mobile units, which will be explained in detail in Section II-B.

¹In practical RFID systems, the problem of reader collision is also considered, which amounts to having multiple transceivers in our setup. Here, we concentrate on the “single reader (transceiver)” setup as a first step.

Beyond its promise as a model for wireless communication networks, the problem at hand yields the fundamental limits of RFID protocols in two different ways, supposing that the transceiver is the RFID reader, the mobile units are RFID tags, and the RFID reader knows the set of IDs of the RFID tags in the environment:

  1. (i)

    The above-mentioned communication problem gives the fundamental limits achievable by TDMA-based RFID protocols: the transceiver assigns the TDMA time slots, which are designated to allow communication in a collision-free manner, using the BCC part, and then the mobile units use their corresponding time-slot information to transmit their data to the RFID reader. Supposing an equal information rate, say $R^{ID}$, at each BCC branch, the maximum number of RFID tags that can be handled is $2^{R^{ID}}$, and the maximum data rate from tags to reader is the maximum rate that can be achieved using TDMA at the MAC part of the communication system.

  2. (ii)

    The above-mentioned communication problem gives the fundamental limits of any RFID protocol: the RFID reader transmits an “on-off” message² over the BCC to the tags, and then the tags communicate their data back simultaneously through the MAC to the reader. The achievable rate region of the MAC part is the fundamental limit of any RFID protocol under the assumption that the receiver knows the set of IDs of the RFID tags in the environment.

    ²This on-off message is also meaningful in practice as far as passive RFID tags are concerned, since they need to harvest external energy in order to operate.
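To make the gap between interpretation (i) and the full MAC limits concrete, the following is a minimal numerical sketch of our own (not from the paper), assuming a two-user Gaussian MAC with powers $P_1, P_2$ and unit-variance noise: TDMA time-sharing without power pooling is compared against the Gaussian MAC sum-rate bound.

```python
import math

def gaussian_mac_sum_rate(p1, p2, n0):
    # Sum-rate bound of a two-user Gaussian MAC: 0.5*log2(1 + (P1+P2)/N).
    return 0.5 * math.log2(1 + (p1 + p2) / n0)

def tdma_sum_rate(p1, p2, n0, tau=0.5):
    # TDMA with time share tau for user 1 (no power pooling):
    # tau*0.5*log2(1+P1/N) + (1-tau)*0.5*log2(1+P2/N).
    return (tau * 0.5 * math.log2(1 + p1 / n0)
            + (1 - tau) * 0.5 * math.log2(1 + p2 / n0))

p1 = p2 = 10.0
n0 = 1.0
print(tdma_sum_rate(p1, p2, n0))        # TDMA sum rate
print(gaussian_mac_sum_rate(p1, p2, n0))  # strictly larger MAC sum-rate bound
```

For equal powers, TDMA achieves $0.5\log_2(1+P/N)$ while the sum-rate bound is $0.5\log_2(1+2P/N)$, illustrating why (i) is a TDMA-specific limit and (ii) the general one.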

The nested codebook structure used in the MAC part of this paper is similar to the “pseudo users” concept introduced in [4], where the authors investigate a special notion of capacity for time-slotted ALOHA systems by combining multiple-access rate splitting and broadcast codes. However, in [4], the authors explicitly investigate the ALOHA protocol over a degraded additive Gaussian noise channel, where users communicate over a common channel using data packets with a predefined collision probability. Unlike [4], our codes achieve capacity in the usual sense, where the codewords are sent with arbitrarily small error probability. We also investigate a cascade structure with a BCC at the front and a different MAC at the end. We study this setup both for discrete alphabets, using imperfection channels to model the impurities of the actual physical system, and for continuous alphabets over an additive Gaussian noise channel, by including appropriate power constraints.

We note that the nested codebook structure used in this paper differs from the nested codes defined in [5, 6]. In [5], nested codebooks, especially nested lattice codes, are explicitly defined from a multi-resolution point of view, where the nesting of codes provides progressively coarser-to-finer descriptions of the intended information. Here, our nested codebooks are independent of each other and convey different information.

The organization of the paper is as follows. In Section II, we state the notation followed throughout the paper and formulate the communication problem under consideration. Section III is devoted to deriving an achievable rate region of the problem for the case of discrete alphabets, also including “imperfection channels” in order to better model the practical phenomenon. In Section IV, we state the capacity region of the problem for the case of a Gaussian BCC and a Gaussian MAC, also incorporating suitable power constraints. The paper ends with the conclusions given in Section V.

II Notation and Problem Statement

II-A Notation

Boldface letters denote vectors; regular letters with subscripts denote individual elements of vectors. Furthermore, capital letters represent random variables and lowercase letters denote individual realizations of the corresponding random variable. The sequence $\{a_1, a_2, \ldots, a_N\}$ is compactly represented by $\mathbf{a}^N$. The abbreviations “i.i.d.”, “p.m.f.” and “w.l.o.g.” are shorthands for “independent identically distributed”, “probability mass function” and “without loss of generality”, respectively.

II-B Problem Statement

In this paper, our major concern is finding the maximum achievable error-free rates for the following multiuser communication problem (for the sake of simplicity, we define the problem for the case of two mobile units; however, all of the results can easily be generalized to $M$ users using the same arguments employed in the paper): A transceiver first acts as a transmitter and broadcasts a pair of messages, $(W_1, W_2) \in \mathcal{W}_1 \times \mathcal{W}_2$, to the mobile units through the first memoryless communication channel. The mobile units decode the messages intended for them, i.e., the first (resp. second) mobile unit decides $\hat{W}_1$ (resp. $\hat{W}_2$), and then choose their messages accordingly, i.e., the first (resp. second) mobile unit chooses $M_1 \in \mathcal{M}_1^{\hat{W}_1}$ (resp. $M_2 \in \mathcal{M}_2^{\hat{W}_2}$), and simultaneously sends it to the transceiver, which this time acts as a receiver, through the second memoryless communication channel.

Next, we give the quantitative definition of the communication system considered:

Definition II.1

The above-mentioned communication system consists of the following components:

  1. (i)

    Eight discrete finite sets $\mathcal{X}$, $\mathcal{Y}_1$, $\mathcal{Y}_2$, $\mathcal{Q}_1$, $\mathcal{Q}_2$, $\hat{\mathcal{Q}}_1$, $\hat{\mathcal{Q}}_2$, $\mathcal{S}$.

  2. (ii)

    A one-input two-output discrete memoryless communication channel, termed the “broadcast channel part”, or shortly the BCC part from now on, modeled by a conditional p.m.f. $p(y_1, y_2 | x)$ on $\mathcal{Y}_1 \times \mathcal{Y}_2$ for each $x \in \mathcal{X}$. Using the memoryless property, we have the following expression for the $n$-th extension of the BCC part:

    $$p(\mathbf{y}_1^n, \mathbf{y}_2^n | \mathbf{x}^n) = \prod_{k=1}^n p(y_{1k}, y_{2k} | x_k). \quad (1)$$
  3. (iii)

    The memoryless “imperfection channel”, which models the impurities and the instantaneous erroneous behavior at the mobile units (especially useful in the modeling of RFID tags), given by a conditional p.m.f. $p(\hat{q}_i | q_i)$ on $\hat{\mathcal{Q}}_i$ for each $q_i \in \mathcal{Q}_i$. Using the memoryless property, we have the following expression for the $n$-th extension of the $i$-th imperfection channel:

    $$p(\hat{\mathbf{q}}_i^n | \mathbf{q}_i^n) = \prod_{k=1}^n p(\hat{q}_{i,k} | q_{i,k}), \quad (2)$$

    for $i \in \{1, 2\}$.

  4. (iv)

    A two-input one-output discrete memoryless communication channel, termed the “multiple access channel part”, or shortly the MAC part from now on, given by a conditional p.m.f. $p(s | \hat{q}_1, \hat{q}_2)$ on $\mathcal{S}$ for each $(\hat{q}_1, \hat{q}_2) \in \hat{\mathcal{Q}}_1 \times \hat{\mathcal{Q}}_2$. Using the memoryless property, we have the following expression for the $n$-th extension of the MAC part:

    $$p(\mathbf{s}^n | \hat{\mathbf{q}}_1^n, \hat{\mathbf{q}}_2^n) = \prod_{k=1}^n p(s_k | \hat{q}_{1,k}, \hat{q}_{2,k}). \quad (3)$$
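The three memoryless components of Definition II.1 compose letter by letter: the $n$-th extensions (1)-(3) simply apply each per-letter channel independently. The sketch below is our own toy illustration, assuming binary alphabets, BSC-like transition probabilities, and a noisy-XOR MAC (none of which are specified by the paper):

```python
import random

random.seed(7)

def bsc_flip(bit, eps):
    # Pass one bit through a binary symmetric channel with crossover eps.
    return bit ^ (random.random() < eps)

def cascade(xn, q1n, q2n):
    # n-th extensions (1)-(3): every letter goes through its channel independently.
    y1 = [bsc_flip(x, 0.05) for x in xn]                    # BCC branch 1, eq. (1)
    y2 = [bsc_flip(x, 0.10) for x in xn]                    # BCC branch 2, eq. (1)
    q1h = [bsc_flip(q, 0.01) for q in q1n]                  # imperfection channel 1, eq. (2)
    q2h = [bsc_flip(q, 0.02) for q in q2n]                  # imperfection channel 2, eq. (2)
    s = [bsc_flip(a ^ b, 0.05) for a, b in zip(q1h, q2h)]   # MAC part, eq. (3)
    return y1, y2, s

y1, y2, s = cascade([0, 1, 1, 0], [1, 0, 1, 0], [0, 0, 1, 1])
print(y1, y2, s)
```

The memoryless property is what makes this per-letter loop a faithful simulation of the full block channel.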

Next, we state the code definition:

Definition II.2

A $\left(2^{nR_1^{ID}}, 2^{nR_2^{ID}}, 2^{nR_1^{Data}}, 2^{nR_2^{Data}}, n\right)$ code for the communication system given above consists of the following parts:

  1. (i)

    A pair of transmitter messages, termed “broadcast channel messages”, or shortly BCC messages from now on, to the mobile units, given as $(W_1, W_2) \in \mathcal{W}_1 \times \mathcal{W}_2$, where $\mathcal{W}_i \triangleq \{1, \ldots, 2^{nR_i^{ID}}\}$ for $i \in \{1, 2\}$.

  2. (ii)

    The transceiver’s encoding function, termed the “broadcast channel encoder”, or shortly the BCC encoder from now on, given as

    $$X^{BCC} : \mathcal{W}_1 \times \mathcal{W}_2 \rightarrow \mathcal{X}^n, \textrm{ such that } X^{BCC}(W_1, W_2) = \mathbf{x}^n(W_1, W_2). \quad (4)$$
  3. (iii)

    The mobile units’ decoding functions, termed the “broadcast channel decoders”, or shortly the BCC decoders from now on, given by $g_i^{BCC} : \mathcal{Y}_i^n \rightarrow \mathcal{W}_i \cup \{0\}$, such that $g_i^{BCC}(\mathbf{Y}_i^n) = \hat{W}_i$, for $i \in \{1, 2\}$, where $\{0\}$ corresponds to the “miss-type” error event.

  4. (iv)

    The mobile units’ messages corresponding to the decoded BCC messages $\hat{W}_i$, termed “multiple access channel messages”, or shortly MAC messages from now on: $M_i \in \mathcal{M}_i^{\hat{W}_i}$, where $\mathcal{M}_i^{\hat{W}_i} \triangleq \{1, \ldots, 2^{nR_i^{Data}}\}$, for $i \in \{1, 2\}$. Note that this is the message part of a “nested codebook structure” corresponding to the decoded message $\hat{W}_i$ at each mobile unit.

  5. (v)

    The mobile units’ encoding functions, termed “multiple access channel encoders”, or shortly MAC encoders from now on, given by $Q_i^{MAC} : \mathcal{M}_i^{\hat{W}_i} \rightarrow \mathcal{Q}_i^n$, for $i \in \{1, 2\}$, such that $Q_i^{MAC}(M_i) = \mathbf{q}_{\hat{W}_i}^n(M_i)$. Note that the $\mathbf{q}_{\hat{W}_i}^n(M_i)$ are the codewords of the “nested codebook structure” corresponding to the decoded message $\hat{W}_i$ at each mobile unit.

  6. (vi)

    The transceiver’s decoding function, termed the “multiple access channel decoder”, or shortly the MAC decoder from now on, given by $g^{MAC} : \mathcal{S}^n \rightarrow \mathcal{M}_1^{W_1} \times \mathcal{M}_2^{W_2}$.

  7. (vii)

    The decoded messages at the transceiver: $(\hat{M}_1, \hat{M}_2) \in \mathcal{M}_1^{W_1} \times \mathcal{M}_2^{W_2}$. Note that since the transceiver knows the pair $(W_1, W_2)$ and tries to “learn” the corresponding pair $(M_1, M_2)$ simultaneously, it chooses the decoded messages from the set $\mathcal{M}_1^{W_1} \times \mathcal{M}_2^{W_2}$.

Obviously, the communication system may be intuitively considered as a cascade of a two-user “broadcast channel” [1] and a two-user “multiple access channel” [1] with the following modifications: the employment of the nested codebook structure at the MAC encoders and the inclusion of the imperfection channels. The aforementioned modified cascade, including the encoders, codewords and decoders at both the BCC and MAC parts, is shown in Figure 1 below:

Figure 1: Block Diagram Representation of the multiuser communication system considered in the paper.

Now, we state the following “probability of error” definitions, which will be used throughout the paper.

Definition II.3


  • (i)

    The conditional probability of error, $\lambda_{w_1,w_2,m_1,m_2}$, for the communication system is defined by:

    $$\lambda_{w_1,w_2,m_1,m_2} \triangleq 1 - \Pr\left(\left[(\hat{W}_1, \hat{W}_2) = (w_1, w_2) | (W_1, W_2) = (w_1, w_2)\right] \wedge \left[(\hat{M}_1, \hat{M}_2) = (m_1, m_2) | (M_1, M_2) = (m_1, m_2)\right]\right), \quad (5)$$

    and the maximal probability of error, $\lambda^{(n)}$, for the communication system is defined by:

    $$\lambda^{(n)} \triangleq \max_{w_1, w_2, m_1, m_2} \lambda_{w_1,w_2,m_1,m_2}. \quad (6)$$
  • (ii)

    The conditional probability of error for the BCC part, $\lambda_{BCC}^{w_1,w_2}$, is defined by:

    $$\lambda_{BCC}^{w_1,w_2} \triangleq \Pr\left((\hat{W}_1, \hat{W}_2) \neq (w_1, w_2) | (W_1, W_2) = (w_1, w_2)\right), \quad (7)$$

    and the average probability of error for the BCC part, $P_{e,BCC}^{(n)}$, is defined by:

    $$P_{e,BCC}^{(n)} \triangleq \Pr\left((\hat{W}_1, \hat{W}_2) \neq (W_1, W_2)\right). \quad (8)$$
  • (iii)

    The conditional probability of error for the MAC part, $\lambda_{MAC}^{m_1,m_2}$, is defined by:

    $$\lambda_{MAC}^{m_1,m_2} \triangleq \Pr\left((\hat{M}_1, \hat{M}_2) \neq (m_1, m_2) | (M_1, M_2) = (m_1, m_2), (\hat{W}_1, \hat{W}_2) = (w_1, w_2)\right), \quad (9)$$

    and the average probability of error for the MAC part, $P_{e,MAC}^{(n)}$, is defined by:

    $$P_{e,MAC}^{(n)} \triangleq \Pr\left((\hat{M}_1, \hat{M}_2) \neq (M_1, M_2) | (\hat{W}_1, \hat{W}_2) = (w_1, w_2)\right). \quad (10)$$

Note that, using (5), (7) and (9), together with the independence of the BCC and MAC error events, we conclude that

$$\lambda_{w_1,w_2,m_1,m_2} = 1 - \left(1 - \lambda_{BCC}^{w_1,w_2}\right)\left(1 - \lambda_{MAC}^{m_1,m_2}\right). \quad (11)$$
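As a quick sanity check of (11) (our own verification sketch, not part of the paper), the identity and its underlying independence assumption can be exercised numerically:

```python
import random

random.seed(1)

def combined_error(p_bcc, p_mac):
    # Eq. (11): overall error probability when the BCC and MAC
    # error events are independent.
    return 1 - (1 - p_bcc) * (1 - p_mac)

# Algebraic form used later in (27): 1 - (1-a)(1-b) = a + b - a*b.
a, b = 0.03, 0.07
assert abs(combined_error(a, b) - (a + b - a * b)) < 1e-12

# Monte Carlo check under the independence assumption.
trials = 200_000
errs = sum((random.random() < a) or (random.random() < b) for _ in range(trials))
print(errs / trials)  # close to combined_error(a, b) = 0.0979
```

The short-circuit `or` draws the second event only when the first succeeds, which still yields overall error probability $a + (1-a)b = a + b - ab$.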

Next, achievability is defined as follows.

Definition II.4

A rate quadruple $\left(R_1^{ID}, R_2^{ID}, R_1^{Data}, R_2^{Data}\right)$ is said to be achievable if there exists a sequence of $\left(2^{nR_1^{ID}}, 2^{nR_2^{ID}}, 2^{nR_1^{Data}}, 2^{nR_2^{Data}}, n\right)$ codes such that $\lambda^{(n)} \rightarrow 0$ as $n \rightarrow \infty$.

III Discrete Case

In this section, we deal with the problem stated in Section II-B under the assumption of discrete random variables.

III-A Achievable Region for The General Case

The main result of this section is the following theorem:

Theorem III.1

(Achievability, Discrete Case) Any quadruple $\left(R_1^{ID}, R_2^{ID}, R_1^{Data}, R_2^{Data}\right) \in \mathcal{R}_0$ is achievable, where

$$\mathcal{R}_0 \triangleq \left\{(R_1^{ID}, R_2^{ID}, R_1^{Data}, R_2^{Data}) : R_1^{ID}, R_2^{ID}, R_1^{Data}, R_2^{Data} \geq 0,\; R_1^{ID} < I(U; Y_1),\; R_2^{ID} < I(V; Y_2),\right.$$
$$R_1^{ID} + R_2^{ID} < I(U; Y_1) + I(V; Y_2) - I(U; V),\; R_1^{Data} < I(Q_1; S | Q_2),\; R_2^{Data} < I(Q_2; S | Q_1),$$
$$R_1^{Data} + R_2^{Data} < I(Q_1, Q_2; S), \textrm{ for some } p(u, v, x) \textrm{ on } \mathcal{U} \times \mathcal{V} \times \mathcal{X} \textrm{ and } p(q_1, q_2, s) \textrm{ on } \mathcal{Q}_1 \times \mathcal{Q}_2 \times \mathcal{S},$$
$$\left.\textrm{where } p(q_1, q_2, s) \triangleq \sum_{\hat{q}_1, \hat{q}_2} p(s | \hat{q}_1, \hat{q}_2) p(\hat{q}_1 | q_1) p(\hat{q}_2 | q_2) p(q_1) p(q_2), \textrm{ for some } p(q_1), p(q_2) \textrm{ on } \mathcal{Q}_1, \mathcal{Q}_2, \textrm{ respectively}\right\}. \quad (12)$$
Proof:

The proof follows by combining arguments from [2] and [1] for the BCC and MAC parts, respectively, while also taking the imperfection channels and the nested codebook structure into account.

W.l.o.g. we suppose $\epsilon \in (0, 1)$.³

³Since we want to show that $\lambda^{(n)} \rightarrow 0$ as $n \rightarrow \infty$, this suffices. To see this, observe that in the proof of the theorem we show that, for any sufficiently large $n$ and any $\epsilon \in (0, 1)$, $\lambda^{(n)} \leq \epsilon$, which directly implies $\lambda^{(n)} \leq \epsilon'$ for any $\epsilon' \geq 1$.

First, define $A_\epsilon^{(n)}(U)$ (resp. $A_\epsilon^{(n)}(V)$) as the set of $\epsilon$-typical sequences [1] $\mathbf{u}^n \in \mathcal{U}^n$ (resp. $\mathbf{v}^n \in \mathcal{V}^n$) for any given $p(u)$ (resp. $p(v)$) on $\mathcal{U}$ (resp. $\mathcal{V}$).

Next, for $w_1 \in \{1, \ldots, 2^{nR_1^{ID}}\}$, we define the following cells:

$$B_{w_1} \triangleq \left[(w_1 - 1)\, 2^{n(I(U;Y_1) - R_1^{ID} - \epsilon)} + 1,\; w_1\, 2^{n(I(U;Y_1) - R_1^{ID} - \epsilon)}\right].$$

Similarly, for $w_2 \in \{1, \ldots, 2^{nR_2^{ID}}\}$, we define:

$$C_{w_2} \triangleq \left[(w_2 - 1)\, 2^{n(I(V;Y_2) - R_2^{ID} - \epsilon)} + 1,\; w_2\, 2^{n(I(V;Y_2) - R_2^{ID} - \epsilon)}\right],$$

w.l.o.g. supposing that $2^{n(I(U;Y_1) - R_1^{ID} - \epsilon)}, 2^{n(I(V;Y_2) - R_2^{ID} - \epsilon)} \in \mathbb{Z}^+$.
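The cells $B_{w_1}$ (and $C_{w_2}$) simply partition the codeword indices into consecutive, equal-sized bins, one bin per message; decoding a message then reduces to locating the bin of the decoded index. A minimal sketch of this binning (with a generic integer cell size standing in for $2^{n(I(U;Y_1) - R_1^{ID} - \epsilon)}$, our own illustration):

```python
def cell_of(index, cell_size):
    # Map a 1-based codeword index k to the bin w1 with k in B_{w1},
    # i.e. (w1 - 1) * cell_size < k <= w1 * cell_size.
    return (index - 1) // cell_size + 1

cell_size = 8  # stands in for 2^{n(I(U;Y1) - R1^ID - eps)}
print(cell_of(1, cell_size), cell_of(8, cell_size), cell_of(9, cell_size))  # 1 1 2
```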

Encoding at BCC part:

  • i)

    Generation of the codebook: Generate the codebook $\mathcal{C}_{BCC} \in \mathcal{X}^{2^{nR_1^{ID}}} \times \mathcal{X}^{2^{nR_2^{ID}}} \times \mathcal{X}^n$ such that the $(i, j, m)$-th element is $x_m(i, j)$, where the $x_m(i, j)$ are i.i.d. realizations of $X$ whose distribution is $p(x) = \sum_{u,v} p(u, v, x)$, for all $i, j, m$; reveal the codebook to both mobile units and the transceiver.

  • ii)

    Choose a pair $(W_1, W_2) \in \mathcal{W}_1 \times \mathcal{W}_2$ uniformly over $\mathcal{W}_1 \times \mathcal{W}_2$, i.e., $\Pr(W_1 = w_1, W_2 = w_2) = 1 / \left(2^{nR_1^{ID}} 2^{nR_2^{ID}}\right)$ for all $(w_1, w_2) \in \mathcal{W}_1 \times \mathcal{W}_2$.

  • iii)

    Next, generate $2^{n(I(U;Y_1) - \epsilon)}$ i.i.d. sequences $\mathbf{u}^n$ such that

    $$p(\mathbf{u}^n) = \begin{cases} \frac{1}{|A_\epsilon^{(n)}(U)|}, & \textrm{if } \mathbf{u}^n \in A_\epsilon^{(n)}(U) \\ 0, & \textrm{otherwise.} \end{cases}$$

    Similarly, generate $2^{n(I(V;Y_2) - \epsilon)}$ i.i.d. sequences $\mathbf{v}^n$ such that

    $$p(\mathbf{v}^n) = \begin{cases} \frac{1}{|A_\epsilon^{(n)}(V)|}, & \textrm{if } \mathbf{v}^n \in A_\epsilon^{(n)}(V) \\ 0, & \textrm{otherwise.} \end{cases}$$

    Label these $\mathbf{u}^n(k)$ (resp. $\mathbf{v}^n(l)$), $k \in \left[1, 2^{n(I(U;Y_1) - \epsilon)}\right]$ (resp. $l \in \left[1, 2^{n(I(V;Y_2) - \epsilon)}\right]$).

  • iv)

    If a message pair $(w_1, w_2)$ is to be transmitted, pick one pair $(\mathbf{u}^n(k), \mathbf{v}^n(l)) \in A_\epsilon^{(n)}(U, V) \cap (B_{w_1} \times C_{w_2})$. Then, find an $\mathbf{x}^n(w_1, w_2)$ which is jointly $\epsilon$-typical with the chosen pair $(\mathbf{u}^n(k), \mathbf{v}^n(l))$ and designate it as the codeword corresponding to $(w_1, w_2)$. Send it over the BCC part, $p(y_1, y_2 | x)$.

Decoding at BCC part:

  • i)

    Find the indices $\hat{k}$ (resp. $\hat{l}$) such that $(\mathbf{u}^n(\hat{k}), \mathbf{y}_1^n) \in A_\epsilon^{(n)}(U, Y_1)$ (resp. $(\mathbf{v}^n(\hat{l}), \mathbf{y}_2^n) \in A_\epsilon^{(n)}(V, Y_2)$). If $\hat{k}$ or $\hat{l}$ is not unique or does not exist, declare an error, i.e., $\hat{W}_1 = 0$ and/or $\hat{W}_2 = 0$. Else, decide $\hat{W}_1 \in \mathcal{W}_1$ (resp. $\hat{W}_2 \in \mathcal{W}_2$) at mobile unit one (resp. two), such that $\hat{k} \in B_{\hat{W}_1}$ (resp. $\hat{l} \in C_{\hat{W}_2}$).
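The decoding step above hinges on typicality checks. As a simplified sketch of our own (weak $\epsilon$-typicality of a single sequence rather than the joint typicality used in the proof):

```python
import math

def is_eps_typical(seq, pmf, eps):
    # Weak epsilon-typicality: |-(1/n) log2 p(seq) - H(U)| <= eps.
    n = len(seq)
    logp = sum(math.log2(pmf[u]) for u in seq)
    entropy = -sum(p * math.log2(p) for p in pmf.values() if p > 0)
    return abs(-logp / n - entropy) <= eps

uniform = {0: 0.5, 1: 0.5}
print(is_eps_typical([0, 1, 1, 0], uniform, 0.1))  # True: under a uniform pmf every sequence is typical
```

A heavily atypical sequence, e.g. all ones under a pmf strongly biased toward zero, fails the same check, which is exactly what triggers the "declare an error" branch of the decoder.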

Encoding at MAC part:

  • i)

    Generation of the codebook (nested codebook structure): Fix $p(q_1), p(q_2)$ and let $p(q_1, q_2) = p(q_1) p(q_2)$. Generate the $w_i$-th codebook $\mathcal{C}_{MAC}^{w_i} \in \mathcal{Q}_i^{2^{nR_i^{Data}}} \times \mathcal{Q}_i^n$ such that the $(j, k)$-th element is $q_{w_i,k}(j)$, where the $q_{w_i,k}(j)$ are i.i.d. realizations of $Q_i$ whose distribution is $p(q_i)$, for all $j \in \{1, \ldots, 2^{nR_i^{Data}}\}$, $k \in \{1, \ldots, n\}$ and $i \in \{1, 2\}$.

  • ii)

    Choose a message $M_i \in \mathcal{M}_i^{\hat{W}_i}$ uniformly for the $\hat{W}_i$ decided in the BCC part, i.e., $\Pr(M_i = m_i) = \frac{1}{2^{nR_i^{Data}}}$ for all $m_i \in \mathcal{M}_i^{\hat{W}_i}$ and for $i \in \{1, 2\}$. In order to send the message $m_i$, pick the corresponding codeword $\mathbf{q}_{\hat{W}_i}^n(m_i)$ of $\mathcal{C}_{MAC}^{\hat{W}_i}$ and send it over the imperfection channel $p(\hat{q}_i | q_i)$, resulting in $\hat{\mathbf{q}}_i^n$, for $i \in \{1, 2\}$. The pair $(\hat{\mathbf{q}}_1^n, \hat{\mathbf{q}}_2^n)$ is the input to the MAC part, $p(s | \hat{q}_1, \hat{q}_2)$.

Decoding at MAC part:

  • i)

    Find the pair of indices $(\hat{M}_1, \hat{M}_2) \in \mathcal{M}_1^{w_1} \times \mathcal{M}_2^{w_2}$ such that $(\mathbf{q}_{w_1}^n(\hat{M}_1), \mathbf{q}_{w_2}^n(\hat{M}_2), \mathbf{s}^n) \in A_\epsilon^{(n)}(Q_1, Q_2, S)$, where $A_\epsilon^{(n)}(Q_1, Q_2, S)$ is the $\epsilon$-typical set with respect to the distribution

    $$p(q_1, q_2, s) = \sum_{\hat{q}_1, \hat{q}_2} p(s | \hat{q}_1, \hat{q}_2, q_1, q_2)\, p(\hat{q}_1, \hat{q}_2 | q_1, q_2)\, p(q_1)\, p(q_2) \quad (13)$$
    $$= \sum_{\hat{q}_1, \hat{q}_2} p(s | \hat{q}_1, \hat{q}_2)\, p(\hat{q}_1, \hat{q}_2 | q_1, q_2)\, p(q_1)\, p(q_2) \quad (14)$$
    $$= \sum_{\hat{q}_1, \hat{q}_2} p(s | \hat{q}_1, \hat{q}_2)\, p(\hat{q}_1 | q_1)\, p(\hat{q}_2 | q_2)\, p(q_1)\, p(q_2), \quad (15)$$

    where (13) follows since $p(q_1, q_2) = p(q_1) p(q_2)$ (cf. the codebook generation of the MAC part), (14) follows since the MAC depends only on $(\hat{q}_1, \hat{q}_2)$, and (15) follows since the imperfection channels are independent and depend only on $q_1$ and $q_2$, respectively.

    If such a pair $(\hat{M}_1, \hat{M}_2)$ does not exist or is not unique, declare an error, i.e., $\hat{M}_1 = 0$ and/or $\hat{M}_2 = 0$; otherwise, decide $(\hat{M}_1, \hat{M}_2)$.

Analysis of Probability of Error:
We begin with BCC part. By defining the error event as BCC={(W^1(𝐘1n),W^2(𝐘2n))(W1,W2)}\mathcal{E}^{BCC}\mbox{$\>\stackrel{{\scriptstyle\triangle}}{{=}}\>$}\left\{({\hat{W}}_{1}({\mathbf{Y}}_{1}^{n}),{\hat{W}}_{2}({\mathbf{Y}}_{2}^{n}))\neq(W_{1},W_{2})\right\}, we have the following expression for the average probability of error averaged over all messages, (w1,w2)(w_{1},w_{2}), and codebooks, 𝒞BCC{\mathcal{C}}_{BCC}

Pe,BCC(n)\displaystyle P_{e,BCC}^{(n)} =\displaystyle= Pr(BCC),\displaystyle\Pr\left(\mathcal{E}^{BCC}\right), (16)
=\displaystyle= Pr(BCC|(W1,W2)=(1,1)),\displaystyle\Pr\left(\mathcal{E}^{BCC}|(W_{1},W_{2})=(1,1)\right),

where (16) follows by noting the equality of arithmetic average probability of error and the average probability of error given in (8) and the symmetry of the codebook construction at the BCC part.

Next, we define the following types of error events:

$$\mathcal{E}_1^{BCC} \triangleq \left\{\nexists\, (\mathbf{u}^n(k), \mathbf{v}^n(l)) \in (B_1 \times C_1) \cap A_\epsilon^{(n)}(U, V)\right\}, \quad (17)$$
$$\mathcal{E}_2^{BCC} \triangleq \left\{(\mathbf{u}^n(k), \mathbf{v}^n(l), \mathbf{x}^n(w_1, w_2), \mathbf{y}_1^n, \mathbf{y}_2^n) \notin A_\epsilon^{(n)}(U, V, X, Y_1, Y_2)\right\}, \quad (18)$$
$$\mathcal{E}_3^{BCC} \triangleq \left\{\exists\, \hat{k} \neq k \textrm{ s.t. } (\mathbf{u}^n(\hat{k}), \mathbf{y}_1^n) \in A_\epsilon^{(n)}(U, Y_1)\right\}, \quad (19)$$
$$\mathcal{E}_4^{BCC} \triangleq \left\{\exists\, \hat{l} \neq l \textrm{ s.t. } (\mathbf{v}^n(\hat{l}), \mathbf{y}_2^n) \in A_\epsilon^{(n)}(V, Y_2)\right\}, \quad (20)$$

where (17) corresponds to the failure of the encoding, (18) corresponds to the transmitted codeword and the channel outputs failing to be jointly typical, and (19) (resp. (20)) corresponds to the failure of the decoding at mobile unit one (resp. mobile unit two).

Using typicality arguments, it can be shown that $\Pr\left(\mathcal{E}_i^{BCC}\right) \leq \epsilon/4$ for $i \in \{2, 3, 4\}$, and Lemma 1 of [2] also guarantees that $\Pr\left(\mathcal{E}_1^{BCC}\right) \leq \epsilon/4$. Using these facts and the union bound, we conclude that

$$P_{e,BCC}^{(n)} = \Pr(\mathcal{E}^{BCC}) = \Pr(\mathcal{E}^{BCC} | (W_1, W_2) = (1, 1)) \leq \epsilon, \quad (21)$$

for any $\epsilon > 0$ and sufficiently large $n$, provided that $I(U; Y_1) > R_1^{ID} + \epsilon$, $I(V; Y_2) > R_2^{ID} + \epsilon$, and $I(U; Y_1) + I(V; Y_2) - I(U; V) > R_1^{ID} + R_2^{ID} + 2\epsilon + \delta(\epsilon)$, where $\delta(\epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$.

Further, using standard arguments for extracting a code with negligible maximal probability of error (cf. [1], pp. 203-204) from one with $P_{e,BCC}^{(n)} \leq \epsilon$, we conclude that

$$\lambda_{BCC}^{(n)} \triangleq \max_{w_1, w_2} \lambda_{BCC}^{w_1,w_2} \leq 2\epsilon, \quad (22)$$

for any ϵ>0{\epsilon}>0 and for sufficiently large nn, which concludes the BCC part.

Defining the error event as $\mathcal{E}^{MAC} \triangleq \left\{(\hat{M}_1(\mathbf{S}^n), \hat{M}_2(\mathbf{S}^n)) \neq (M_1, M_2) | (\hat{W}_1, \hat{W}_2) = (w_1, w_2)\right\}$, we have the following expression for the probability of error averaged over all messages $(m_1, m_2)$ and over the codebooks corresponding to the messages, $\mathcal{C}_{MAC}^{w_1}$ and $\mathcal{C}_{MAC}^{w_2}$:

$$P_{e,MAC}^{(n)} = \Pr\left(\mathcal{E}^{MAC}\right) = \Pr\left(\mathcal{E}^{MAC} | (M_1, M_2) = (1, 1)\right), \quad (23)$$

where (23) follows by noting the equality of the arithmetic average probability of error and the average probability of error given in (10), together with the symmetry of the nested codebook construction at the MAC part.

Next, we define the following events:

$$\mathcal{E}_{ij}^{MAC} \triangleq \left\{(\mathbf{q}_{w_1}^n(i), \mathbf{q}_{w_2}^n(j), \mathbf{s}^n) \in A_\epsilon^{(n)}(Q_1, Q_2, S)\right\}. \quad (24)$$

Using the union bound and appropriately bounding each error event via typicality arguments, one can show that

$$P_{e,MAC}^{(n)} = \Pr\left(\mathcal{E}^{MAC}\right) = \Pr\left(\mathcal{E}^{MAC} | (M_1, M_2) = (1, 1)\right) \leq \epsilon, \quad (25)$$

for any $\epsilon > 0$ and sufficiently large $n$, provided that $I(Q_1; S | Q_2) - R_1^{Data} > 3\epsilon$, $I(Q_2; S | Q_1) - R_2^{Data} > 3\epsilon$, and $I(Q_1, Q_2; S) - (R_1^{Data} + R_2^{Data}) > 4\epsilon$.

Further, using standard arguments for finding a code with negligible maximal probability of error (cf. [1] pp. 203-204) from the one with Pe,MAC(n)ϵP_{e,MAC}^{(n)}\leq{\epsilon} we conclude that we have

\lambda_{MAC}^{(n)} \stackrel{\triangle}{=} \max_{m_1, m_2} \lambda_{MAC}^{m_1, m_2} \leq 2\epsilon, \qquad (26)

for any $\epsilon > 0$ and sufficiently large $n$, which concludes the MAC part.

Next, we combine the two parts and conclude the proof as follows.

First, by plugging (11) into (6), we have

\lambda^{(n)} = \max_{\lambda_{BCC}^{w_1, w_2},\, \lambda_{MAC}^{m_1, m_2}} \lambda_{BCC}^{w_1, w_2} + \lambda_{MAC}^{m_1, m_2} - \lambda_{BCC}^{w_1, w_2}\,\lambda_{MAC}^{m_1, m_2}. \qquad (27)

Further, since the cost function in (27) is monotonically increasing in both $\lambda_{BCC}^{w_1, w_2}$ and $\lambda_{MAC}^{m_1, m_2}$, we conclude that (cf. (22) and (26))

\lambda^{(n)} \leq 4\epsilon - 4\epsilon^2, \qquad (28)

for any $0 < \epsilon < 1$ and sufficiently large $n$. Since $\epsilon$ may be arbitrarily small, (28) concludes the proof. ∎

IV Power Constrained Gaussian Case

IV-A Problem Statement

In this section, we generalize the communication problem stated in Section II-B to continuous random variables, under the assumption of Gaussian noise and power constraints on the codebooks. To be more precise, we have the problem depicted in Figure 2, with the power constraints:

\mathrm{E}\left[X^2\right] \leq P, \qquad (29)

\mathrm{E}\left[(Q_{1,\hat{W}_1})^2\right] \leq \alpha_1 P_1, \qquad (30)

\mathrm{E}\left[(Q_{2,\hat{W}_2})^2\right] \leq \alpha_2 P_2, \qquad (31)

such that $\alpha_1, \alpha_2 < 1$ and $P_1 + P_2 \leq P$, where $P_1$ (resp. $P_2$) is the power delivered to mobile unit one (resp. two), and w.l.o.g. we assume that $N_1 < N_2$.

Figure 2: Block diagram representation of the multiuser communication system under the Gaussian noise assumption.

Note that both Definition II.1 (excluding the imperfection channels, which are irrelevant for this case) and Definition II.2 remain valid here, with ${\mathcal{X}} = {\mathcal{Q}}_1 = {\mathcal{Q}}_2 = {\mathcal{S}} = {\mathbb{R}}$.

Remark IV.1

  • (i) Observe that we model the “imperfection channel” of the discrete case as an additional power constraint in the Gaussian case.

  • (ii) The BCC part of the Gaussian case at hand is equivalent to a “degraded BCC”, which enables us to state the capacity region instead of characterizing only an achievable region.

IV-B Capacity Region for Gaussian Case

In this section, we state the capacity region of the communication system given in Section IV-A. Note that throughout this section all logarithms are base $e$; in other words, the unit of information is nats.

Theorem IV.1

The capacity region ${\mathcal{R}}_1 \subset {\mathbb{R}}^4$ of the system shown in Figure 2 is given by

{\mathcal{R}}_1 \stackrel{\triangle}{=} \left\{(R_1^{ID}, R_2^{ID}, R_1^{Data}, R_2^{Data}) \;:\; R_1^{ID}, R_2^{ID}, R_1^{Data}, R_2^{Data} \geq 0,\; R_1^{ID} < \frac{1}{2}\log\left(1 + \frac{\alpha P}{N_1}\right),\right.

R_2^{ID} < \frac{1}{2}\log\left(1 + \frac{(1-\alpha)P}{N_2 + \alpha P}\right),\; R_1^{Data} < \frac{1}{2}\log\left(1 + \frac{\alpha\alpha_1 P}{N_3}\right),\; R_2^{Data} < \frac{1}{2}\log\left(1 + \frac{(1-\alpha)\alpha_2 P}{N_3}\right),

\left. R_1^{Data} + R_2^{Data} < \frac{1}{2}\log\left(1 + \frac{\alpha\alpha_1 P + (1-\alpha)\alpha_2 P}{N_3}\right), \textrm{ s.t. } 0 \leq \alpha \leq 1,\; 0 \leq \alpha_1, \alpha_2 \leq 1\right\}, \qquad (32)

where $\alpha$ may be chosen arbitrarily in the given range, while $\alpha_1$ and $\alpha_2$ are system parameters.
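To make the region concrete, the five rate bounds in (32) can be evaluated numerically for sample parameter values (the values below are arbitrary illustrations, not system specifications); note that the sum-rate bound never exceeds the sum of the two individual data-rate bounds:

```python
import math

def gaussian_region_bounds(P, N1, N2, N3, alpha, alpha1, alpha2):
    """Upper bounds defining the capacity region R_1 in (32), in nats."""
    return {
        "R1_ID":    0.5 * math.log(1 + alpha * P / N1),
        "R2_ID":    0.5 * math.log(1 + (1 - alpha) * P / (N2 + alpha * P)),
        "R1_Data":  0.5 * math.log(1 + alpha * alpha1 * P / N3),
        "R2_Data":  0.5 * math.log(1 + (1 - alpha) * alpha2 * P / N3),
        "sum_Data": 0.5 * math.log(1 + (alpha * alpha1 + (1 - alpha) * alpha2) * P / N3),
    }

b = gaussian_region_bounds(P=10.0, N1=1.0, N2=2.0, N3=1.0,
                           alpha=0.5, alpha1=0.8, alpha2=0.8)
# The sum-rate bound never exceeds the sum of the individual data-rate bounds,
# since (1 + a)(1 + b) >= 1 + a + b for a, b >= 0:
assert b["sum_Data"] <= b["R1_Data"] + b["R2_Data"]
```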

IV-B1 Achievability

In this section, we prove the forward part of Theorem IV.1, in other words the following theorem:

Theorem IV.2

For any rate quadruple $(R_1^{ID}, R_2^{ID}, R_1^{Data}, R_2^{Data}) \in {\mathbb{R}}^4$, there exists a sequence of $\left(2^{nR_1^{ID}}, 2^{nR_2^{ID}}, 2^{nR_1^{Data}}, 2^{nR_2^{Data}}, n\right)$ codes with arbitrarily small probability of error for sufficiently large $n$, provided that

\frac{1}{2}\log\left(1 + \frac{\alpha P}{N_1}\right) > R_1^{ID} + \epsilon, \qquad (33)

\frac{1}{2}\log\left(1 + \frac{(1-\alpha)P}{\alpha P + N_2}\right) > R_2^{ID} + \epsilon, \qquad (34)

\frac{1}{2}\log\left(1 + \frac{\alpha_1\alpha P}{N_3}\right) > R_1^{Data} + 3\epsilon, \qquad (35)

\frac{1}{2}\log\left(1 + \frac{\alpha_2(1-\alpha)P}{N_3}\right) > R_2^{Data} + 3\epsilon, \qquad (36)

\frac{1}{2}\log\left(1 + \frac{\alpha_1\alpha P + \alpha_2(1-\alpha)P}{N_3}\right) > R_1^{Data} + R_2^{Data} + 4\epsilon, \qquad (37)

for any $\epsilon > 0$, $0 \leq \alpha \leq 1$, and $0 \leq \alpha_1, \alpha_2 \leq 1$.

Proof:

In order to prove the theorem, we use superposition coding [1] at the BCC part and standard random coding at the MAC part. W.l.o.g., suppose $\epsilon \in (0, 13/84)$. (Since we want to show that $\lambda^{(n)} \rightarrow 0$ as $n \rightarrow \infty$, this suffices: in the proof we show that, for any sufficiently large $n$ and any $\epsilon \in (0, 13/84)$, $\lambda^{(n)} \leq \epsilon$, which directly implies $\lambda^{(n)} \leq \epsilon'$ for any $\epsilon' \geq 13/84$.)

Encoding at BCC part:

  • i)

Generation of the codebook (Superposition Coding): Generate codebooks ${\mathcal{C}}_{BCC}^1$ (resp. ${\mathcal{C}}_{BCC}^2$) with corresponding rate $R_1^{ID}$ (resp. $R_2^{ID}$), such that both $R_1^{ID}$ and $R_2^{ID}$ satisfy conditions (33) and (34), where

{\mathcal{C}}_{BCC}^1 \stackrel{\triangle}{=} \left[x_{1,i}(w_1)\right], \qquad (38)

such that each $x_{1,i}(w_1)$ is an i.i.d. realization of $X_1 \sim {\mathcal{N}}(0, \alpha P - \epsilon/2)$, and

{\mathcal{C}}_{BCC}^2 \stackrel{\triangle}{=} \left[x_{2,i}(w_2)\right], \qquad (39)

such that each $x_{2,i}(w_2)$ is an i.i.d. realization of $X_2 \sim {\mathcal{N}}(0, (1-\alpha)P - \epsilon/2)$. Reveal both ${\mathcal{C}}_{BCC}^1$ and ${\mathcal{C}}_{BCC}^2$ to each mobile unit.

  • ii)

Choose a message pair $(w_1, w_2) \in {\mathcal{W}}_1 \times {\mathcal{W}}_2$ uniformly over ${\mathcal{W}}_1 \times {\mathcal{W}}_2$, i.e. $\Pr(W_1 = w_1, W_2 = w_2) = 1/2^{n(R_1^{ID} + R_2^{ID})}$, for all $(w_1, w_2) \in {\mathcal{W}}_1 \times {\mathcal{W}}_2$.

  • iii)

In order to send the message $(w_1, w_2)$, take ${\mathbf{x}}_1^n(w_1)$ from ${\mathcal{C}}_{BCC}^1$ and ${\mathbf{x}}_2^n(w_2)$ from ${\mathcal{C}}_{BCC}^2$ and send ${\mathbf{x}}^n(w_1, w_2) \stackrel{\triangle}{=} {\mathbf{x}}_1^n(w_1) + {\mathbf{x}}_2^n(w_2)$ over the BCC to both sides, yielding ${\mathbf{Y}}_1^n \stackrel{\triangle}{=} {\mathbf{x}}^n(w_1, w_2) + {\mathbf{Z}}_1^n$ at mobile unit one and ${\mathbf{Y}}_2^n \stackrel{\triangle}{=} {\mathbf{x}}^n(w_1, w_2) + {\mathbf{Z}}_2^n$ at mobile unit two, where ${\mathbf{Z}}_1^n$ and ${\mathbf{Z}}_2^n$ are arbitrarily correlated with marginal distributions $Z_1 \sim {\mathcal{N}}(0, N_1)$ and $Z_2 \sim {\mathcal{N}}(0, N_2)$. Note that the law of large numbers ensures that ${\mathbf{x}}^n(w_1, w_2)$ satisfies the power constraint (29).
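The superposition step above can be sketched numerically (an illustrative simulation, not part of the proof; the $\epsilon$-backoff in the codeword variances is what makes the power constraint (29) hold with high probability, and here $\epsilon$ is exaggerated for visibility):

```python
import math
import random

random.seed(0)
n, P, alpha, eps = 100_000, 10.0, 0.4, 1.0  # eps exaggerated for illustration

# Inner codeword for w1 (variance alpha*P - eps/2) plus outer codeword for w2
# (variance (1-alpha)*P - eps/2), as in the superposition construction:
x1 = [random.gauss(0.0, math.sqrt(alpha * P - eps / 2)) for _ in range(n)]
x2 = [random.gauss(0.0, math.sqrt((1 - alpha) * P - eps / 2)) for _ in range(n)]
x = [a + b for a, b in zip(x1, x2)]  # transmitted codeword x^n(w1, w2)

# By the law of large numbers the empirical power concentrates near P - eps < P:
empirical_power = sum(v * v for v in x) / n
assert empirical_power <= P  # power constraint (29) met for large n
```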

Decoding at BCC part:

  • i)

Upon receiving ${\mathbf{y}}_2^n$, the second mobile unit performs jointly typical decoding, i.e. it decides on the unique $\hat{W}_2 \in {\mathcal{W}}_2$ such that $\left({\mathbf{y}}_2^n, {\mathbf{x}}_2^n(\hat{W}_2)\right) \in A_\epsilon^{(n)}(X_2, Y_2)$. If such a $\hat{W}_2 \in {\mathcal{W}}_2$ does not exist or is not unique, it declares an error, i.e. $\hat{W}_2 = 0$.

Mobile unit one also performs jointly typical decoding, first with ${\mathbf{y}}_1^n$, in order to decide on the unique $\hat{W}_2 \in {\mathcal{W}}_2$ such that $\left({\mathbf{y}}_1^n, {\mathbf{x}}_2^n(\hat{W}_2)\right) \in A_\epsilon^{(n)}(X_2, Y_1)$. If such a $\hat{W}_2 \in {\mathcal{W}}_2$ does not exist or is not unique, it declares an error, i.e. $\hat{W}_2 = 0$. After deciding on $\hat{W}_2$, mobile unit one computes the corresponding ${\mathbf{y}}^n \stackrel{\triangle}{=} {\mathbf{y}}_1^n - {\mathbf{x}}_2^n(\hat{W}_2)$ and then performs jointly typical decoding, i.e. it decides on the unique $\hat{W}_1 \in {\mathcal{W}}_1$ such that $\left({\mathbf{y}}^n, {\mathbf{x}}_1^n(\hat{W}_1)\right) \in A_\epsilon^{(n)}(X_1, Y)$. If such a $\hat{W}_1 \in {\mathcal{W}}_1$ does not exist or is not unique, it declares an error, i.e. $\hat{W}_1 = 0$.
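The two-step decoding at mobile unit one is a successive-cancellation procedure. A minimal simulation sketch illustrates it, under assumptions of our own (tiny four-message codebooks and naive minimum-distance decoding standing in for joint typicality):

```python
import math
import random

random.seed(1)
n, P, N1, alpha = 2000, 10.0, 1.0, 0.5

def randn(var, n):
    """n i.i.d. samples of a zero-mean Gaussian with the given variance."""
    return [random.gauss(0.0, math.sqrt(var)) for _ in range(n)]

# Tiny illustrative codebooks (4 messages each), i.i.d. Gaussian entries:
C1 = [randn(alpha * P, n) for _ in range(4)]        # codebook carrying w1
C2 = [randn((1 - alpha) * P, n) for _ in range(4)]  # codebook carrying w2

w1, w2 = 2, 3
y1 = [a + b + z for a, b, z in zip(C1[w1], C2[w2], randn(N1, n))]

def nearest(y, codebook):
    """Minimum-distance decoding (stand-in for joint typicality)."""
    return min(range(len(codebook)),
               key=lambda m: sum((yi - ci) ** 2 for yi, ci in zip(y, codebook[m])))

w2_hat = nearest(y1, C2)                         # step 1: decode w2, x1 treated as noise
y = [yi - ci for yi, ci in zip(y1, C2[w2_hat])]  # cancel x2^n(w2_hat)
w1_hat = nearest(y, C1)                          # step 2: decode w1
assert (w1_hat, w2_hat) == (w1, w2)
```

With rates this far below the bounds (33) and (34), the decoding error probability is negligible at blocklength 2000.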

Encoding at MAC part:

  • i)

Generation of the Codebook (Nested Codebook Structure): Fix $f(q_1)$ and $f(q_2)$, and let $f(q_1, q_2) = f(q_1) f(q_2)$. Generate the $w_1$-th (resp. $w_2$-th) codebook as $\mathcal{C}_{MAC}^{w_1} \stackrel{\triangle}{=} \left[q_{w_1,j}(m_1)\right]$ (resp. $\mathcal{C}_{MAC}^{w_2} \stackrel{\triangle}{=} \left[q_{w_2,j}(m_2)\right]$), such that the $q_{w_1,j}(m_1)$ (resp. $q_{w_2,j}(m_2)$) are i.i.d. realizations of $Q_1 \sim {\mathcal{N}}(0, \alpha_1\alpha P - \epsilon)$ (resp. $Q_2 \sim {\mathcal{N}}(0, \alpha_2(1-\alpha)P - \epsilon)$), for all $w_1 \in \{1, \ldots, 2^{nR_1^{ID}}\}$ (resp. $w_2 \in \{1, \ldots, 2^{nR_2^{ID}}\}$), $m_1 \in \{1, \ldots, 2^{nR_1^{Data}}\}$ (resp. $m_2 \in \{1, \ldots, 2^{nR_2^{Data}}\}$), and $j \in \{1, \ldots, n\}$.

  • ii)

Choose a message $M_i \in {\mathcal{M}}_i^{\hat{W}_i}$ uniformly, i.e. $\Pr(M_i = m_i) = 1/2^{nR_i^{Data}}$, for all $m_i \in {\mathcal{M}}_i^{\hat{W}_i}$ and $i \in \{1, 2\}$. In order to send a message $m_i$, take the corresponding codeword ${\mathbf{q}}_{\hat{W}_i}^n$ of ${\mathcal{C}}_{MAC}^{\hat{W}_i}$ and send it over the MAC, for $i \in \{1, 2\}$, resulting in ${\mathbf{S}}^n \stackrel{\triangle}{=} {\mathbf{q}}_{\hat{W}_1}^n + {\mathbf{q}}_{\hat{W}_2}^n + {\mathbf{Z}}_3^n$.

Decoding at MAC part:

  • i)

Find the pair of indices $(\hat{M}_1, \hat{M}_2) \in {\mathcal{M}}_1^{w_1} \times {\mathcal{M}}_2^{w_2}$ such that $({\mathbf{q}}_{w_1}(\hat{M}_1), {\mathbf{q}}_{w_2}(\hat{M}_2), {\mathbf{s}}^n) \in A_\epsilon^{(n)}(Q_1, Q_2, S)$. If such a pair does not exist or is not unique, declare an error, i.e. $\hat{M}_1 = 0$ and/or $\hat{M}_2 = 0$; otherwise decide on $(\hat{M}_1, \hat{M}_2)$.

Analysis of Probability of Error: We begin with the BCC part. First, note that (16) remains valid, as does the error event definition. Next, we define the following types of error events:

\mathcal{E}_0^{BCC} \stackrel{\triangle}{=} \left\{\frac{1}{n}\sum_{j=1}^n x_j^2(1,1) > P\right\}, \qquad (40)

\mathcal{E}_{1,i}^{BCC} \stackrel{\triangle}{=} \left\{({\mathbf{x}}_2^n(i), {\mathbf{y}}_1^n) \in A_\epsilon^{(n)}(X_2, Y_1), \textrm{ s.t. } i \neq 1\right\}, \qquad (41)

\mathcal{E}_{2,j}^{BCC} \stackrel{\triangle}{=} \left\{({\mathbf{x}}_1^n(j), {\mathbf{y}}^n) \in A_\epsilon^{(n)}(X_1, Y), \textrm{ s.t. } j \neq 1\right\}, \qquad (42)

\mathcal{E}_{3,k}^{BCC} \stackrel{\triangle}{=} \left\{({\mathbf{x}}_2^n(k), {\mathbf{y}}_2^n) \in A_\epsilon^{(n)}(X_2, Y_2), \textrm{ s.t. } k \neq 1\right\}, \qquad (43)

where (40) corresponds to violation of the power constraint, (41) to failure of the first decoding step at mobile unit one, (42) to failure of the second decoding step at mobile unit one, and (43) to failure of decoding at mobile unit two.

Using the union bound and appropriately bounding the probability of each error event via typicality arguments (except for the power-constraint event, which is handled by the law of large numbers), one can show that

P_{e,BCC}^{(n)} = \Pr\left(\mathcal{E}^{BCC}\right) = \Pr\left(\mathcal{E}^{BCC} \,\middle|\, (W_1, W_2) = (1,1)\right) \leq 7\epsilon, \qquad (44)

for any $\epsilon > 0$ and sufficiently large $n$, provided that $\frac{1}{2}\log\left(1 + \frac{\alpha P}{N_1}\right) - R_1^{ID} > \epsilon$ (cf. (33)), $\frac{1}{2}\log\left(1 + \frac{(1-\alpha)P}{\alpha P + N_2}\right) - R_2^{ID} > \epsilon$ (cf. (34)), and $\frac{1}{2}\log\left(1 + \frac{(1-\alpha)P}{\alpha P + N_1}\right) - R_2^{ID} > \epsilon$ (which is guaranteed by recalling $N_1 < N_2$ and (34)).
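The redundancy of the third condition can be checked numerically (with illustrative parameter values of our own choosing): since $N_1 < N_2$, mobile unit one's SINR for the coarse message is at least that of mobile unit two, for every power split $\alpha$:

```python
import math

# Why N1 < N2 makes the third condition redundant: the better mobile unit
# (noise N1) sees at least the SINR of unit two, so any R2_ID satisfying (34)
# also satisfies the corresponding bound at mobile unit one.
def r2_bound(P, alpha, N):
    """Rate bound (1/2)log(1 + (1-alpha)P / (alpha*P + N)) in nats."""
    return 0.5 * math.log(1 + (1 - alpha) * P / (alpha * P + N))

P, N1, N2 = 10.0, 1.0, 2.0
for alpha in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert r2_bound(P, alpha, N1) >= r2_bound(P, alpha, N2)
```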

Further, using standard arguments to extract a code with negligible maximal probability of error (cf. [1], pp. 203–204) from one with $P_{e,BCC}^{(n)} \leq 7\epsilon$, we conclude that

\lambda_{BCC}^{(n)} \stackrel{\triangle}{=} \max_{w_1, w_2} \lambda_{BCC}^{w_1, w_2} \leq 14\epsilon, \qquad (45)

for any $\epsilon > 0$ and sufficiently large $n$, provided that (33) and (34) hold, which concludes the BCC part.

Now we continue with the MAC part, noting that (23) remains valid, as does the error event definition. We additionally include the following type of error event, which handles the power constraints:

\mathcal{E}_{0,i}^{MAC} \stackrel{\triangle}{=} \left\{\frac{1}{n}\sum_{j=1}^n q_{w_i,j}^2(1) > \alpha_i P_i\right\}, \qquad (46)

for $i \in \{1, 2\}$, where $P_1 = \alpha P$, $P_2 = (1-\alpha)P$, and $\alpha$ is the same as in the BCC part.

Using the union bound and appropriately bounding the probability of each error event via typicality arguments (except for the power-constraint events, which are handled by the law of large numbers), one can show that

P_{e,MAC}^{(n)} = \Pr\left(\mathcal{E}^{MAC}\right) = \Pr\left(\mathcal{E}^{MAC} \,\middle|\, (M_1, M_2) = (1,1)\right) \leq 6\epsilon, \qquad (47)

for any $\epsilon > 0$ and sufficiently large $n$, provided that $\frac{1}{2}\log\left(1 + \frac{\alpha_1\alpha P}{N_3}\right) > R_1^{Data} + 3\epsilon$, $\frac{1}{2}\log\left(1 + \frac{\alpha_2(1-\alpha)P}{N_3}\right) > R_2^{Data} + 3\epsilon$, and $\frac{1}{2}\log\left(1 + \frac{\alpha_1\alpha P + \alpha_2(1-\alpha)P}{N_3}\right) > R_1^{Data} + R_2^{Data} + 4\epsilon$.

Further, using standard arguments to extract a code with negligible maximal probability of error (cf. [1], pp. 203–204) from one with $P_{e,MAC}^{(n)} \leq 6\epsilon$, we conclude that

\lambda_{MAC}^{(n)} \stackrel{\triangle}{=} \max_{m_1, m_2} \lambda_{MAC}^{m_1, m_2} \leq 12\epsilon, \qquad (48)

for any $\epsilon > 0$ and sufficiently large $n$, provided that (35), (36), and (37) hold, which concludes the MAC part.

Following arguments similar to those in Section III-A and using (45) and (48), we conclude that

\lambda^{(n)} \leq \epsilon(26 - 168\epsilon), \qquad (49)

for any $0 < \epsilon < \frac{13}{84}$, where $\lambda^{(n)}$ is as defined in (6). Since $\epsilon$ may be arbitrarily small, (49) concludes the proof. ∎
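The constant in (49) comes from combining (45) and (48) exactly as in (27); a quick arithmetic check (illustrative only):

```python
# Combining the per-part bounds lambda_BCC <= 14*eps and lambda_MAC <= 12*eps
# via lambda = a + b - a*b (cf. (27)) gives 26*eps - 168*eps**2, i.e. (49).
for e in [1e-4, 1e-3, 1e-2, 0.1, 13 / 84 - 1e-9]:
    a, b = 14 * e, 12 * e
    combined = a + b - a * b
    assert abs(combined - e * (26 - 168 * e)) < 1e-9
    assert combined >= 0  # the bound stays meaningful on 0 < eps < 13/84
```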

IV-B2 Converse

In this section, we prove the converse part of Theorem IV.1; in other words, we have the following theorem:

Theorem IV.3

For any sequence of $\left(2^{nR_1^{ID}}, 2^{nR_2^{ID}}, 2^{nR_1^{Data}}, 2^{nR_2^{Data}}, n\right)$-RFID codes with $P_e^{(n)} < \epsilon$, for any $\epsilon > 0$, we have $\left(R_1^{ID}, R_2^{ID}, R_1^{Data}, R_2^{Data}\right) \in {\mathcal{R}}_1$.

Proof:

The proof relies on ideas from [3] for the BCC part and from [1] for the MAC part.

First of all, we have the following:

P_e^{(n)} = 1 - \Pr\left(\left[(\hat{W}_1, \hat{W}_2) = (W_1, W_2)\right] \wedge \left[(\hat{M}_1, \hat{M}_2) = (M_1, M_2)\right]\right) \qquad (50)

= 1 - \Pr\left((\hat{W}_1, \hat{W}_2) = (W_1, W_2)\right)\Pr\left((\hat{M}_1, \hat{M}_2) = (M_1, M_2) \,\middle|\, (\hat{W}_1, \hat{W}_2) = (W_1, W_2)\right).

Using (50) and noting that $P_e^{(n)} \leq \epsilon$, we have

\left(1 - \Pr\left((\hat{W}_1, \hat{W}_2) \neq (W_1, W_2)\right)\right)\left(1 - \Pr\left((\hat{M}_1, \hat{M}_2) \neq (M_1, M_2) \,\middle|\, (\hat{W}_1, \hat{W}_2) = (W_1, W_2)\right)\right) \geq 1 - \epsilon,

which implies

P_{e,BCC}^{(n)} = \Pr\left((\hat{W}_1, \hat{W}_2) \neq (W_1, W_2)\right) \leq \epsilon, \qquad (51)

and

P_{e,MAC}^{(n)} = \Pr\left((\hat{M}_1, \hat{M}_2) \neq (M_1, M_2) \,\middle|\, (\hat{W}_1, \hat{W}_2) = (W_1, W_2)\right) \leq \epsilon. \qquad (52)

Next, (51) enables us to use the result of [3] for the BCC part; hence

R_1^{ID} \leq \frac{1}{2}\log\left(1 + \frac{\alpha P}{N_1}\right), \qquad (53)

R_2^{ID} \leq \frac{1}{2}\log\left(1 + \frac{(1-\alpha)P}{\alpha P + N_2}\right), \qquad (54)

for any $0 \leq \alpha \leq 1$.

Further, (52) enables us to use the result of [1] for the MAC part; hence

R_1^{Data} \leq \frac{1}{2}\log\left(1 + \frac{\alpha_1\alpha P}{N_3}\right), \qquad (55)

R_2^{Data} \leq \frac{1}{2}\log\left(1 + \frac{\alpha_2(1-\alpha)P}{N_3}\right), \qquad (56)

R_1^{Data} + R_2^{Data} \leq \frac{1}{2}\log\left(1 + \frac{\alpha_1\alpha P + \alpha_2(1-\alpha)P}{N_3}\right). \qquad (57)

Combining (53), (54), (55), (56), and (57), we conclude that for any sequence of $\left(2^{nR_1^{ID}}, 2^{nR_2^{ID}}, 2^{nR_1^{Data}}, 2^{nR_2^{Data}}, n\right)$-RFID codes with $P_e^{(n)} \leq \epsilon$, we have $\left(R_1^{ID}, R_2^{ID}, R_1^{Data}, R_2^{Data}\right) \in {\mathcal{R}}_1$, which concludes the proof. ∎

V Conclusion

In this paper, we studied the RFID capacity problem by modeling the underlying structure as a specific multiuser communication system represented by a cascade of a BCC and a MAC. The BCC and MAC parts model communication from the RFID reader to the mobile units and from the mobile units back to the reader, respectively. To connect the BCC and MAC parts, we used a “nested codebook” structure. We further introduced imperfection channels for the discrete alphabet case, as well as additional power limitations for the continuous alphabet additive Gaussian noise case, in order to accurately model the physical medium of the RFID system. We provided the achievable rate region in the discrete alphabet case and the capacity region for the continuous alphabet additive Gaussian noise case. Hence, overall, we characterized the maximal achievable error-free communication rates for any RFID protocol in the latter case.

References

  • [1] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd Edition, New York: Wiley, 2006.
  • [2] A. El Gamal and E. C. Van der Meulen, “A Proof of Marton's Coding Theorem for the Discrete Memoryless Broadcast Channel,” IEEE Trans. Inf. Theory, vol. IT–27, no. 1, pp. 120–122, January 1981.
  • [3] P. P. Bergmans, “A Simple Converse for Broadcast Channels with Additive White Gaussian Noise,” IEEE Trans. Inf. Theory, vol. IT–20, no. 2, pp. 279–280, March 1974.
  • [4] M. Médard, J. Huang, A. J. Goldsmith, S. P. Meyn, T. P. Coleman, “Capacity of Time-slotted ALOHA Packetized Multiple-Access Systems over the AWGN Channel,” IEEE Trans. Wireless Commun., vol. 3, no. 2, pp. 486–499, March 2004.
  • [5] R. Zamir, S. Shamai, U. Erez, “Nested Linear/Lattice Codes for Structured Multiterminal Binning,” IEEE Trans. Inf. Theory, vol. IT–48, no. 6, pp. 1250–1276, June 2002.
  • [6] R. J. Barron, B. Chen, G. W. Wornell, “The Duality Between Information Embedding and Source Coding With Side Information and Some Applications,” IEEE Trans. Inf. Theory, vol. IT–49, no. 5, pp. 1159–1180, May 2003.