
Robust low-delay Streaming PIR using convolutional codes

Julia Lieb, Diego Napp and Raquel Pinto

Julia Lieb is with the Department of Mathematics, University of Zurich, Switzerland (e-mail: julia.lieb@math.uzh.ch). Diego Napp is with the Department of Mathematics, University of Alicante, Spain (e-mail: diego.napp@ua.es). Raquel Pinto is with the Department of Mathematics, University of Aveiro, Portugal (e-mail: raquel@ua.pt).
Abstract

In this paper we investigate the design of a low-delay robust streaming PIR scheme on coded data that is resilient to unresponsive or slow servers and can privately retrieve streaming data in a sequential fashion subject to a fixed decoding delay. We present a scheme based on convolutional codes and the star product, assuming no collusion between servers. In particular, we propose the use of convolutional codes whose column distances increase as fast as possible, called Maximum Distance Profile (MDP) codes. We show that the proposed scheme can handle many different erasure patterns.

Index Terms:
Private Information Retrieval, Private Streaming, Convolutional Codes, Low Delay, MDP Codes, Erasure Channel.

I Introduction

Video traffic has grown explosively and is expected to keep growing exponentially in the coming years [1]. Service providers for real-time video streaming are typically hosted in a public cloud, with multiple servers in different data centers, e.g., Google Cloud, Amazon CloudFront and Microsoft Azure. These cloud services aim for private and low-latency communication.

The problem of Private Information Retrieval (PIR) has attracted a lot of attention in recent years. It studies how to retrieve a file from a storage system without revealing to the servers which file is desired. It was initially addressed for replicated files [7] and more recently for coded files [6, 8, 13, 16, 17]. In the latter setting, the general model of the information-theoretic PIR problem is as follows. A coded database is stored on $n$ servers holding $m$ files, and the user is assumed to know the content of the servers. Each file is coded and stored independently using the same code, and the user wants to retrieve a particular file from the database with zero information gain for the servers, i.e., the user wants information-theoretic privacy [19]. Recently, the literature on PIR has grown considerably, with extensions to more general models under several additional constraints. Most efforts in private retrieval have focused on efficient schemes that optimize different metrics, such as communication cost or rate. However, in many cases some of the servers may be busy and not respond within a desirable time frame, or network failures may occur. For this reason, new robust schemes were proposed to deal with such scenarios [19, 18], adding redundancy to tolerate missing responses from some servers. PIR schemes on coded data for Byzantine or unresponsive servers were presented in [21, 18]. These schemes are suited for retrieving one single file and therefore use block codes. In [10] a scheme for sequential retrieval was proposed, but again for a given set of files of fixed size and assuming that all the responses of the servers are lost at the same time instant. The case of a non-bursty channel is also considered in that paper, but only using unit memory convolutional codes.
However, to the best of the authors' knowledge, the problem of low-delay private retrieval of a stream of files (of undetermined length) with some slow or unresponsive servers remains unexplored.

In this work we investigate this more general problem and propose a novel robust scheme for low-delay streaming retrieval of files from $n$ servers in the presence of possibly unresponsive servers by using Maximum Distance Profile (MDP) convolutional codes. This class of codes is suitable for low-delay streaming applications as it possesses optimal error-correcting capabilities within a decoding window, see [5, 20]. One of the advantages of convolutional codes over block codes is the sliding-window flexibility that allows selecting different decoding windows according to the erasure pattern. We show how to take advantage of this property to provide robust PIR in this context. We present a scheme that is able to stream files consisting of many stripes in the presence of erasures without assuming any particular structure in the sequence of erasures. The model in [10] treated burst erasure channels using general convolutional codes, whereas the non-bursty case was treated using unit memory convolutional codes. Unit memory codes only store what occurred at the previous time instant and are therefore far from optimal for low-delay applications when the given delay constraint is larger than one. Note also that when only burst erasure channels are assumed, there exist concrete constructions of convolutional codes that are optimal in that context [5, 4, 14]. In this work we extend this thread of research and consider a not necessarily bursty channel using convolutional codes with no restriction on the memory, namely MDP convolutional codes. In contrast to [10], where the responses of the servers are built in a convolutional fashion but the storage code is still a block code, we also use a convolutional code to store the files on the servers.

II Preliminaries

In this section we recall basic material and introduce the definitions needed for this work, including the notions of convolutional code and superregular matrix. Let $\mathbb{F}=\mathbb{F}_q$ be a finite field of size $q$ and $\mathbb{F}[z]$ be the ring of polynomials with coefficients in $\mathbb{F}$.

Definition 1

An $[n,k]$ block code $\mathcal{C}$ is a $k$-dimensional subspace of $\mathbb{F}^n$, i.e., there exists a full row rank matrix $G\in\mathbb{F}^{k\times n}$ such that

$$\mathcal{C}=\{\boldsymbol{v}\in\mathbb{F}^n\ |\ \boldsymbol{v}=\boldsymbol{u}G\ \text{for some}\ \boldsymbol{u}\in\mathbb{F}^k\}.$$

$G$ is called a generator matrix of the code and is unique up to left multiplication with an invertible matrix $U\in Gl_k(\mathbb{F})$. Furthermore, $\boldsymbol{u}\in\mathbb{F}^k$ is called a message vector and the elements $\boldsymbol{v}\in\mathcal{C}$ are called codewords.
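To make Definition 1 concrete, the following sketch encodes a message vector with a small generator matrix over GF(5); the matrix `G` and the field size are our own illustrative choices, not taken from the paper.

```python
# Minimal sketch of block encoding v = u*G over a prime field GF(q).
# The [4,2] generator matrix below is illustrative, not from the paper.
q = 5
G = [[1, 0, 1, 2],
     [0, 1, 3, 4]]  # full row rank, so it generates a [4,2] block code

def encode(u, G, q):
    """Return the codeword v = u*G with arithmetic modulo q."""
    n = len(G[0])
    return [sum(ui * G[i][j] for i, ui in enumerate(u)) % q for j in range(n)]

print(encode([2, 3], G, q))  # one codeword of the code
```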

Convolutional codes process a continuous sequence of data instead of blocks of fixed vectors as block codes do. If we introduce a variable $z$, called the delay operator, to indicate the time instant at which each information vector arrived or each codeword was transmitted, then we can represent the message sequence $(\boldsymbol{v}_0,\boldsymbol{v}_1,\ldots,\boldsymbol{v}_l)$ as a polynomial vector $\boldsymbol{v}(z)=\boldsymbol{v}_0+\boldsymbol{v}_1z+\cdots+\boldsymbol{v}_lz^l$. Formally, convolutional codes are defined as follows.

A rate $k/n$ convolutional code $\mathcal{C}$ [20] is an $\mathbb{F}[z]$-submodule of $\mathbb{F}[z]^n$ of rank $k$ given by

$$\mathcal{C}=\mathrm{im}_{\mathbb{F}[z]}G(z)=\{\boldsymbol{v}(z)\in\mathbb{F}[z]^n\ |\ \boldsymbol{v}(z)=\boldsymbol{u}(z)G(z),\ \text{with }\boldsymbol{u}(z)\in\mathbb{F}[z]^k\},$$

where $G(z)\in\mathbb{F}[z]^{k\times n}$ is a matrix, called a generator matrix, that is basic, i.e., $G(z)$ has a polynomial right inverse.

Note that if $\boldsymbol{v}(z)=\boldsymbol{u}(z)G(z)$, with

$$\boldsymbol{u}(z)=\boldsymbol{u}_0+\boldsymbol{u}_1z+\boldsymbol{u}_2z^2+\cdots\quad\text{and}\quad G(z)=\sum_{j=0}^{\mu}G_jz^j,$$

then

$$\boldsymbol{v}_0+\boldsymbol{v}_1z+\boldsymbol{v}_2z^2+\cdots=\boldsymbol{u}_0G_0+\left(\boldsymbol{u}_1G_0+\boldsymbol{u}_0G_1\right)z+\left(\boldsymbol{u}_2G_0+\boldsymbol{u}_1G_1+\boldsymbol{u}_0G_2\right)z^2+\cdots$$
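The coefficient relation above, $\boldsymbol{v}_t=\sum_{r+s=t}\boldsymbol{u}_sG_r$, is an ordinary discrete convolution of the message sequence with the encoder coefficients. It can be sketched as follows; the $(2,1)$ encoder over GF(2) with memory $\mu=2$ is an illustrative toy choice, not a code from the paper.

```python
# Toy (2,1) convolutional encoder over GF(2) with memory mu = 2.
# The coefficient matrices G_r below are illustrative choices.
q = 2
G = [[[1, 1]],  # G_0 (k x n with k=1, n=2)
     [[0, 1]],  # G_1
     [[1, 1]]]  # G_2

def conv_encode(us, G, q):
    """Compute v_t = sum_{r+s=t} u_s G_r over GF(q) for all t."""
    mu, n = len(G) - 1, len(G[0][0])
    vs = []
    for t in range(len(us) + mu):
        vt = [0] * n
        for r in range(mu + 1):
            s = t - r
            if 0 <= s < len(us):
                for j in range(n):
                    vt[j] = (vt[j] + sum(us[s][i] * G[r][i][j]
                                         for i in range(len(us[s])))) % q
        vs.append(vt)
    return vs

print(conv_encode([[1], [0], [1]], G, q))
```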

The maximum degree of all polynomials in the $j$-th row of $G(z)$ is denoted by $\delta_j$. The degree $\delta$ of $\mathcal{C}$ is defined as the maximum degree of the full-size minors of $G(z)$. We say that $\mathcal{C}$ is an $(n,k,\delta)$ convolutional code [15]. Important for the performance of a code in terms of error-free decoding is the (Hamming) distance between two codewords. In the case of convolutional codes, the most relevant notion of distance for low-delay decoding is the column distance, defined as follows.

The $j$-th column distance [11] is defined as

$$d_j^c\left(\mathcal{C}\right)=\min\left\{\mathrm{wt}\left(\boldsymbol{v}_{[0,j]}(z)\right)\ |\ \boldsymbol{v}(z)\in\mathcal{C}\text{ and }\boldsymbol{v}_0\neq\mathbf{0}\right\},$$

where $\boldsymbol{v}_{[0,j]}(z)=\boldsymbol{v}_0+\boldsymbol{v}_1z+\cdots+\boldsymbol{v}_jz^j$ denotes the $j$-th truncation of the codeword $\boldsymbol{v}(z)\in\mathcal{C}$ and

$$\mathrm{wt}(\boldsymbol{v}_{[0,j]}(z))=\mathrm{wt}(\boldsymbol{v}_0)+\mathrm{wt}(\boldsymbol{v}_1)+\cdots+\mathrm{wt}(\boldsymbol{v}_j),$$

where $\mathrm{wt}(\boldsymbol{v}_i)$ is the Hamming weight of $\boldsymbol{v}_i$, i.e., the number of nonzero components of $\boldsymbol{v}_i$, for $i=0,\ldots,j$. For simplicity, we write $d_j^c$ instead of $d_j^c(\mathcal{C})$.
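For small parameters, the $j$-th column distance can be computed by brute force straight from the definition. The sketch below does this for a toy $(2,1)$ code over GF(2) of our own choosing; for a delay-free encoder ($G_0$ of full rank), requiring $\boldsymbol{u}_0\neq 0$ is equivalent to $\boldsymbol{v}_0\neq 0$.

```python
from itertools import product

def column_distance(G, q, j):
    """Brute-force d_j^c: minimal weight of the j-th truncation over all
    codewords with u_0 != 0 (= v_0 != 0 when G_0 has full rank).
    G is the list [G_0, ..., G_mu] of k x n coefficient matrices."""
    k, n, mu = len(G[0]), len(G[0][0]), len(G) - 1
    best = None
    for u in product(product(range(q), repeat=k), repeat=j + 1):
        if all(x == 0 for x in u[0]):
            continue  # need u_0 != 0
        wt = 0
        for t in range(j + 1):
            vt = [sum(u[s][i] * G[t - s][i][c]
                      for s in range(max(0, t - mu), t + 1)
                      for i in range(k)) % q for c in range(n)]
            wt += sum(x != 0 for x in vt)
        best = wt if best is None else min(best, wt)
    return best

G = [[[1, 1]], [[0, 1]]]  # toy (2,1) code over GF(2), memory 1
print(column_distance(G, 2, 0), column_distance(G, 2, 1))
```

For this toy code both values, $2$ and $3$, meet the upper bound $(n-k)(j+1)+1$ recalled below.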

The $j$-th column distance is upper bounded [9] by

$$d_j^c\leq(n-k)(j+1)+1,$$

and the maximality of any of the column distances implies the maximality of all the previous ones, that is, if $d_j^c=(n-k)(j+1)+1$ for some $j$, then $d_i^c=(n-k)(i+1)+1$ for all $i\leq j$. The value

$$L=\left\lfloor\frac{\delta}{k}\right\rfloor+\left\lfloor\frac{\delta}{n-k}\right\rfloor \qquad (1)$$

is the largest value of $j$ for which this bound can be achieved, and an $(n,k,\delta)$ convolutional code $\mathcal{C}$ with $d_L^c=(n-k)(L+1)+1$ is called a maximum distance profile (MDP) code [9]. Hence, MDP codes have optimal error-correcting capabilities within time intervals and are therefore ideal for low-delay correction. In this work we shall assume that the retrieval must be performed within a given delay constraint $\Delta\leq L$, see [2, 5].

Assume that

$$G(z)=\sum_{j=0}^{\mu}G_jz^j,\quad G_j\in\mathbb{F}^{k\times n},\quad G_\mu\neq 0,$$

and consider the associated sliding matrix

$$G_j^c=\begin{pmatrix}G_0&G_1&\cdots&G_j\\ &G_0&\cdots&G_{j-1}\\ &&\ddots&\vdots\\ &&&G_0\end{pmatrix} \qquad (2)$$

with $G_j=0$ when $j>\mu$, for $j\in\mathbb{N}$.
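The sliding matrix $G_j^c$ in (2) is block upper-triangular Toeplitz, and assembling it from the coefficient matrices is mechanical; the sketch below does this, with illustrative toy coefficients.

```python
def sliding_matrix(G, j):
    """Build G_j^c from G = [G_0, ..., G_mu], taking G_r = 0 for r > mu.
    Block (r, c) of the result is G_{c-r}; output is (j+1)k x (j+1)n."""
    k, n = len(G[0]), len(G[0][0])
    zero = [[0] * n for _ in range(k)]
    blocks = [[G[c - r] if 0 <= c - r < len(G) else zero
               for c in range(j + 1)] for r in range(j + 1)]
    # flatten each block row into plain rows
    return [sum((blocks[r][c][i] for c in range(j + 1)), [])
            for r in range(j + 1) for i in range(k)]

G = [[[1, 1]], [[0, 1]]]  # toy coefficients G_0, G_1 (k=1, n=2)
print(sliding_matrix(G, 1))
```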

Theorem 2 (Theorem 2.4 in [9])

Let $G_j^c$ be the matrices defined in (2). Then the following statements are equivalent:

  1. $d_j^c=(n-k)(j+1)+1$;

  2. every $(j+1)k\times(j+1)k$ full-size minor of $G_j^c$ formed from the columns with indices $1\leq t_1<\cdots<t_{(j+1)k}$, where $t_{ik+1}\leq in$ for $i=1,2,\ldots,j$, is nonzero.

In particular, when $j=L$, $\mathcal{C}$ is an MDP code.

Theorem 3

[20, Theorem 3.1] Let $\mathcal{C}$ be an $(n,k,\delta)$ MDP convolutional code. If $d_j^c=(n-k)(j+1)+1$ and in any sliding window of length $(j+1)n$ at most $(j+1)(n-k)$ erasures occur in a transmitted sequence, then complete recovery is possible.

Inspecting the proof of this theorem, one sees that recovery is even possible within a delay of $j$ windows of size $n$ and that the given condition for complete recovery is sufficient but not necessary.
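The sufficient condition of Theorem 3 is easy to check for a concrete erasure pattern; the helper below (our own sketch, with 0-indexed symbol positions and illustrative parameters) slides a window of length $(j+1)n$ over the sequence and counts erasures.

```python
def recoverable(erasures, n, k, j, total_len):
    """Check the sufficient condition of Theorem 3: every sliding window
    of (j+1)*n consecutive symbol positions contains at most
    (j+1)*(n-k) erasures. Positions are 0-indexed."""
    w, cap, E = (j + 1) * n, (j + 1) * (n - k), set(erasures)
    return all(sum(p in E for p in range(start, start + w)) <= cap
               for start in range(total_len - w + 1))

# e.g. for an MDP code with n=6, k=1 and d_2^c maximal (j=2):
print(recoverable(range(15), 6, 1, 2, 18))  # 15 erasures, cap is 15
print(recoverable(range(16), 6, 1, 2, 18))  # 16 erasures exceed the cap
```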

We will develop a PIR scheme in which the star product of certain block codes plays an important role.

Definition 4

The star product of two vectors $v,w\in\mathbb{F}^n$ is defined as $v\ast w=(v_1w_1,\ldots,v_nw_n)$. The star product of two block codes $\mathcal{C},\mathcal{D}\subset\mathbb{F}^n$ is defined as $\mathcal{C}\ast\mathcal{D}=\langle c\ast d\ |\ c\in\mathcal{C},\ d\in\mathcal{D}\rangle$.
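In code the star product of vectors is just a componentwise multiplication. The snippet below (with illustrative vectors) also hints at why an $[n,1]$ code $\mathcal{D}$ generated by the all-ones vector satisfies $\mathcal{D}\ast\mathcal{C}=\mathcal{C}$ later on: star-multiplying by a multiple of the all-ones vector merely rescales.

```python
def star(v, w):
    """Componentwise (star) product of two vectors of equal length."""
    return [a * b for a, b in zip(v, w)]

d = [1, 1, 1, 1]   # generator of the [4,1] all-ones code D
c = [2, 3, 1, 1]   # some codeword of a code C
print(star(c, d))  # a scalar multiple (here 1*c) of c
```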

Star product PIR was first introduced in [8]. The main idea of this scheme is to design the queries to the different servers in such a way that, if the responses are formed as inner products of the query and the stored information, then the total response is a codeword of a certain star product code plus an error term, where the error contains the information one is interested in. In [10] this scheme was adapted by forming the responses in a convolutional way. In the following section, we present a star product scheme where the responses as well as the storage code are convolutional.

III Streaming PIR scheme

We have $m$ sequences of files $(X_s^i)_{s\in\mathbb{N}}$ with $X_s^i\in\mathbb{F}^k$ for $i=1,\ldots,m$ and $s\in\mathbb{N}$. These are encoded with an $(n,k,\delta)$ MDP convolutional code $\mathcal{C}$ with generator matrix $G(z)=\sum_{r=0}^{\mu}G_rz^r$ to obtain the sequences $(Y_t^i)_{t\in\mathbb{N}}$ with $Y_t^i=\sum_{r+s=t}X_s^iG_r\in\mathbb{F}^n$ for $i=1,\ldots,m$ and $t\in\mathbb{N}$, where we set $G_r=0$ for $r>\mu$. Moreover, we have $n$ servers and, for $j=1,\ldots,n$, we store the $j$-th component $Y_{t,j}^i$ of each vector $Y_t^i$ (for $i=1,\ldots,m$, $t\in\mathbb{N}$) on server number $j$. Furthermore, we assume that $(\mu+2)k\leq n$ and that, for $f=1,\ldots,\mu$, $\begin{pmatrix}G_0\\ \vdots\\ G_f\end{pmatrix}$ is the generator matrix of an $[n,(f+1)k]$ MDS block code denoted by $\mathcal{C}_f$. We will present a construction of an $(n,k,\delta)$ MDP convolutional code with these properties later in this paper. It holds that $Y_t^i\in\mathcal{C}_f$ for all $f\in\{t,\ldots,\mu\}$ and $Y_t^i\in\mathcal{C}_\mu$ for $t\geq\mu$. Thus, we set $\mathcal{C}_f=\mathcal{C}_\mu$ for $f\geq\mu$.

The user wants to stream the sequence $(X_s^i)_{s\in\mathbb{N}}$ for some $i$ without the servers learning $i$, i.e., without the servers knowing which sequence he or she is streaming. For our PIR scheme we assume that there is no collusion between the servers (i.e., the number of colluding servers, usually denoted by $t$ in the literature, is equal to $1$).
Set $d=[1\ \cdots\ 1]\in\mathbb{F}^n$, let $\mathcal{D}$ be the $[n,1]$ block code generated by $d$ and let $D\in\mathbb{F}^{(\mu+1)m\times n}$ be a matrix whose rows are $(\mu+1)m$ random codewords of $\mathcal{D}$ (i.e., multiples of $d$). For a subset $J\subset\{1,\ldots,n\}$, we denote by $E\in\mathbb{F}^n$ the vector with entries $E_j:=\begin{cases}1&\text{if }j\in J\\ 0&\text{otherwise}\end{cases}$ and by $e_j$ the $j$-th standard basis vector of $\mathbb{F}^{(\mu+1)m}$.
For $j=1,\ldots,n$, we send the following query $q_j^i$ to server $j$:

$$q_j^i=D_{\cdot,j}+E_j\sum_{l=0}^{\mu}e_{lm+i}\in\mathbb{F}^{(\mu+1)m\times 1}, \qquad (3)$$

where $D_{\cdot,j}$ denotes the $j$-th column of $D$. We write $q_j^i=\begin{pmatrix}q_{j,1}^i\\ \vdots\\ q_{j,\mu+1}^i\end{pmatrix}$ with $q_{j,k}^i\in\mathbb{F}^{m\times 1}$ for $k=1,\ldots,\mu+1$, and set $Y_{t,j}:=(Y_{t,j}^1,Y_{t,j}^2,\ldots,Y_{t,j}^m)\in\mathbb{F}^m$.
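The query construction (3) can be sketched directly; the snippet below uses 0-indexed servers and file indices, an illustrative field size GF(7) and a fixed seed, and the helper name `make_queries` is our own.

```python
import random

def make_queries(n, m, mu, i, J, q=7, seed=0):
    """Queries q_j^i = D_col_j + E_j * sum_l e_{l*m+i}, as in (3).
    D has (mu+1)*m rows, each a random multiple of the all-ones vector."""
    rng = random.Random(seed)
    rows = (mu + 1) * m
    D = [[rng.randrange(q)] * n for _ in range(rows)]  # rows are multiples of d
    queries = []
    for j in range(n):
        col = [D[r][j] for r in range(rows)]           # j-th column of D
        if j in J:                                     # E_j = 1 exactly on J
            for l in range(mu + 1):
                col[l * m + i] = (col[l * m + i] + 1) % q
        queries.append(col)
    return queries

qs = make_queries(n=6, m=2, mu=2, i=0, J={0, 1})
# a J-column differs from a non-J column exactly in the rows l*m+i
print([r for r in range(6) if qs[0][r] != qs[5][r]])
```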
The response of server $j$ at time $t\in\mathbb{N}$ is

$$r_{t,j}^i=\sum_{k+r-1=t}\langle q_{j,k}^i,Y_{r,j}^{\top}\rangle\in\mathbb{F}, \qquad (4)$$

where $Y_{r,j}=0$ for $r\not\in\mathbb{N}$.
Hence the total response at time $t\in\mathbb{N}$ is given by

$$r_t^i=[r_{t,1}^i,\ldots,r_{t,n}^i]=\underbrace{[D_{1,1}Y_{t,1}^1,\ldots,D_{1,n}Y_{t,n}^1]}_{\in\mathcal{D}\ast\mathcal{C}_t}+\underbrace{[D_{2,1}Y_{t,1}^2,\ldots,D_{2,n}Y_{t,n}^2]}_{\in\mathcal{D}\ast\mathcal{C}_t}+\cdots+\underbrace{[D_{(\mu+1)m,1}Y_{t-\mu,1}^m,\ldots,D_{(\mu+1)m,n}Y_{t-\mu,n}^m]}_{\in\mathcal{D}\ast\mathcal{C}_t}+\mathrm{diag}(E)\sum_{l=0}^{\mu}Y_{t-l}^i, \qquad (5)$$

where $\mathrm{diag}(E)$ denotes the diagonal matrix whose diagonal entries are the entries of the vector $E$.

By Definition 4 and the definition of the code $\mathcal{D}$, the star product code $\mathcal{D}\ast\mathcal{C}_t$ is equal to the MDS code $\mathcal{C}_t$. As $\mathcal{D}\ast\mathcal{C}_t$ is a linear code, any sum of codewords is again a codeword. Hence, the response has the form

$$r_t^i=c_t+\mathrm{diag}(E)\sum_{l=0}^{\mu}Y_{t-l}^i \qquad (6)$$

for some $c_t\in\mathcal{C}_t$.

We assume that some parts of the response at time $t$ may get lost during transmission and not be received. Hence the vector $r_t^i$ may have erased components. We denote by $T_t\subset\{1,\ldots,n\}$ the set of positions of the erased components of the vector $r_t^i$.

Lemma 5

If $|T_t\cup J|<n-k(\min\{t,\mu\}+1)+1$, the user is able to obtain the vector $\mathrm{diag}(E)\sum_{l=0}^{\mu}Y_{t-l}^i$. In particular, this is true if $|J|+n_t<n-k(\min\{t,\mu\}+1)+1$, where $n_t$ is the number of erased components of the vector $r_t^i$.

Proof:

Using equation (6) and the definition of the vector $E$, we apply erasure decoding in the $[n,(\min\{t,\mu\}+1)k]$ MDS code $\mathcal{C}_t$ to the vector $r_t^i$, where the set of erasures is the union of $T_t$ and $J$. The lemma follows from the fact that an $[n,(\min\{t,\mu\}+1)k]$ MDS code can correct any set of erasures whose cardinality is smaller than its minimum distance $n-k(\min\{t,\mu\}+1)+1$. ∎

For each $t\in\mathbb{N}$ for which the condition of the preceding lemma is not fulfilled, we are not able to obtain $\mathrm{diag}(E)\sum_{l=0}^{\mu}Y_{t-l}^i$. Therefore, we define

$$\mathrm{diag}_t(\hat{E})=\begin{cases}\mathrm{diag}(E)&\text{if }|T_t\cup J|<n-k(\min\{t,\mu\}+1)+1\\ 0_n&\text{otherwise,}\end{cases} \qquad (7)$$

where $0_n$ denotes the $n\times n$ zero matrix.

It remains to show how to obtain the desired sequence of files $(X_s^i)_{s\in\mathbb{N}}$ from the sequence $(\mathrm{diag}_t(\hat{E})\sum_{l=0}^{\mu}Y_{t-l}^i)_{t\in\mathbb{N}}$. With the definitions $\hat{r}_t^i:=\sum_{l=0}^{\mu}Y_{t-l}^i$ and

$$\tilde{\mathcal{G}}:=\left[\begin{array}{ccccccccc}G_0&G_0+G_1&\cdots&\sum_{r=0}^{\mu}G_r&\sum_{r=1}^{\mu}G_r&\cdots&G_\mu&&\\ &G_0&G_0+G_1&\cdots&\sum_{r=0}^{\mu}G_r&\sum_{r=1}^{\mu}G_r&\cdots&G_\mu&\\ &&\ddots&\ddots&&\ddots&\ddots&&\ddots\end{array}\right] \qquad (11)$$

one obtains

$$[\hat{r}_1^i,\hat{r}_2^i,\ldots]=[X_1^i,X_2^i,\ldots]\cdot\tilde{\mathcal{G}}. \qquad (12)$$

Denote by $I_k\in\mathbb{F}^{k\times k}$ the identity matrix and set $U:=\left[\begin{array}{cccc}I_k&\cdots&I_k&\\ &I_k&\cdots&I_k\\ &&\ddots&\ddots\end{array}\right]$, where each block row of $U$ contains $\mu+1$ identity matrices. Then, one has

$$\tilde{\mathcal{G}}=U\cdot\underbrace{\left[\begin{array}{cccccc}G_0&G_1&\cdots&G_\mu&&\\ &G_0&G_1&\cdots&G_\mu&\\ &&\ddots&\ddots&&\ddots\end{array}\right]}_{=:\mathcal{G}}. \qquad (16)$$

Therefore, one obtains the following lemma.

Lemma 6

The column distances of the convolutional code $\tilde{\mathcal{C}}$ with generator matrix $\tilde{G}(z)=\sum_{b\geq 0}\tilde{G}_bz^b$, where $\tilde{G}_b=\sum_{r=0}^{\mu}G_{b-r}$ (with $G_j:=0$ for $j<0$), are equal to the column distances of $\mathcal{C}$.

Proof:

First note that the matrix $\tilde{\mathcal{G}}$ defined in (11) is the sliding generator matrix of $\tilde{\mathcal{C}}$. Denote by $\tilde{d}_j^c$ the $j$-th column distance of the code $\tilde{\mathcal{C}}$ and by $U_j$ the matrix consisting of the first $k(j+1)$ rows and the first $k(j+1)$ columns of the matrix $U$. Then, it holds that

$$\tilde{d}_j^c=\min_{X_1^i\neq 0}\left(\mathrm{wt}\left([X_1^i,\ldots,X_{j+1}^i]\cdot\tilde{G}_j^c\right)\right)\stackrel{(16)}{=}\min_{X_1^i\neq 0}\left(\mathrm{wt}\left([X_1^i,\ldots,X_{j+1}^i]U_j\cdot G_j^c\right)\right)=\min_{\hat{X}_1^i\neq 0}\left(\mathrm{wt}\left([\hat{X}_1^i,\ldots,\hat{X}_{j+1}^i]\cdot G_j^c\right)\right)=d_j^c, \qquad (17)$$

where the second-to-last equality uses the substitution $[\hat{X}_1^i,\ldots,\hat{X}_{j+1}^i]=[X_1^i,\ldots,X_{j+1}^i]U_j$, which is invertible since $U_j$ is block unit upper triangular. ∎

Hence, we can use equation (12) to recover $[X_1^i,X_2^i,\ldots]$ from $[\mathrm{diag}_1(\hat{E})\hat{r}_1^i,\mathrm{diag}_2(\hat{E})\hat{r}_2^i,\ldots]$ via erasure decoding with an MDP convolutional code, where the set of positions of the total erasures, denoted by $T$, has the form $T=\bigcup_{t\in\mathbb{N}}S_t$ with

$$S_t=\begin{cases}\underbrace{\{T_t+(t-1)n\}}_{\text{transmission erasures}}\cup\underbrace{\left(\{1+(t-1)n,\ldots,n+(t-1)n\}\setminus\{J+(t-1)n\}\right)}_{\text{erasures caused by the multiplication with }\mathrm{diag}(E)}&\text{if }\mathrm{diag}_t(\hat{E})\neq 0\\ \{1+(t-1)n,\ldots,n+(t-1)n\}&\text{otherwise,}\end{cases} \qquad (18)$$

where for $J=\{j_1,\ldots,j_{|J|}\}$ the set $\{J+(t-1)n\}$ is defined as $\{j_1+(t-1)n,\ldots,j_{|J|}+(t-1)n\}$, and $\{T_t+(t-1)n\}$ is defined analogously. Hence, using also Theorem 3, we get the following theorem.

Theorem 7

Assume that $\Delta\leq L=\lfloor\frac{\delta}{n-k}\rfloor+\lfloor\frac{\delta}{k}\rfloor$. If the set of erasures $T$ given in (18) is such that in every sliding window of size $(\Delta+1)n$ of the sequence $(r_t^i)_{t\in\mathbb{N}}$ there are no more than $(\Delta+1)(n-k)$ erasures, then one can obtain the desired sequence of files $(X_s^i)_{s\in\mathbb{N}}$ from the sequence $(\mathrm{diag}_t(\hat{E})\sum_{l=0}^{\mu}Y_{t-l}^i)_{t\in\mathbb{N}}$ within time delay $\Delta$, i.e., one can privately obtain the sequence of files $(X_s^i)_{s\in\mathbb{N}}$ within time delay $\Delta$.

From this theorem we can deduce which erasure patterns are guaranteed to be corrected by our proposed scheme.

Corollary 8

With the proposed scheme, private reception within time delay $\Delta\leq L$ is possible if, for each $t\in\mathbb{N}$, there are no more than $n-k(\min\{\mu,t\}+1)-|J|$ transmission erasures in positions $\{1+(t-1)n,\ldots,n+(t-1)n\}\setminus\{J+(t-1)n\}$ of the sequence of responses $(r_t^i)_{t\in\mathbb{N}}$, and in every sliding window of length $(\Delta+1)n$ of this sequence there are no more than $(\Delta+1)(n-k)$ transmission erasures in positions $\{1,\ldots,(\Delta+1)n\}\cap\bigcup_{t\in\mathbb{N}}\{J+(t-1)n\}$.

Finally, we have to choose the cardinality of the set $J\subset\{1,\ldots,n\}$; the set $J$ itself is then chosen randomly with this fixed cardinality. A larger cardinality of $J$ leads to more erasures that $\mathcal{C}_t$ has to correct, while a smaller cardinality leads to more erasures that $\mathcal{C}$ has to correct. To balance the two, we want to determine $|J|$ such that the number of erasures one can correct in positions $\{1+(t-1)n,\ldots,n+(t-1)n\}\setminus\{J+(t-1)n\}$ is approximately the same as the number of erasures one can correct in positions $\{1+(t-1)n,\ldots,n+(t-1)n\}\cap\{J+(t-1)n\}$. We denote this number of erasures by $n_t$. This approach leads to the following inequalities:

$$n_t\leq n-k(\min\{\mu,t\}+1)-|J|\quad\text{and} \qquad (19)$$
$$n_t\leq n-k-(n-|J|)=|J|-k. \qquad (20)$$

This implies

$$n_t+k\leq|J|\leq n-k(\min\{\mu,t\}+1)-n_t \qquad (21)$$

and consequently,

$$n_t\leq\frac{1}{2}\left(n-k(\min\{\mu,t\}+2)\right). \qquad (22)$$

Equality in (22) implies $|J|=\frac{1}{2}(n-k\min\{\mu,t\})$. However, we need $|J|$ to be an integer and independent of $t$. As, depending on the erasure pattern, the MDP convolutional code $\mathcal{C}$ might be able to correct more erasures than (20) indicates, we propose to choose $|J|$ rather smaller, which finally leads to

$$|J|=\left\lfloor\frac{1}{2}(n-k\mu)\right\rfloor. \qquad (23)$$

Of course, depending on the erasures that occur during transmission, other choices of $|J|$ could lead to better performance. However, as we do not know the erasure pattern before transmission and have to choose $J$ in advance, we cannot adapt $J$ to the erasure pattern, but have to choose it in such a way that the numbers of channel erasures our codes $\mathcal{C}_t$ and $\mathcal{C}$ are able to tolerate are balanced.

Note that we can correct more erasures in $r_t^i$ if $t$ is small (as the code $\mathcal{C}_t$ has a larger minimum distance if $t$ is small). This means that we can tolerate slightly more erasures at the beginning of the stream than later on.
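Equation (23) is immediate to evaluate; the snippet below reproduces the values of $|J|$ used in Examples 9 and 10.

```python
def choose_J_size(n, k, mu):
    """|J| = floor((n - k*mu)/2), as in equation (23)."""
    return (n - k * mu) // 2

print(choose_J_size(6, 1, 2))   # parameters of Example 9
print(choose_J_size(10, 2, 2))  # parameters of Example 10
```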

In the following, we illustrate the erasure-correcting capability of our scheme with two examples.

Example 9

Let $n=6$, $k=1$ and $\mu=2$. This implies $\delta=2$ and $L=2$, i.e., $\mathcal{C}$ is a $(6,1,2)$ MDP convolutional code that can recover all erasure patterns for which in each sliding window of size $18$ there are no more than $15$ erasures. We assume $\Delta=L$. Moreover, according to equation (23), we have $|J|=2$. We illustrate one window of the response sequence $(r_t^i)_{t\in\mathbb{N}}$ in the following figure, where the squares containing $j$ mark the positions of the set $J$:

[Figure: one window of $18$ positions, i.e., three blocks of length $n=6$; in each block the $|J|=2$ positions belonging to $J$ are marked with $j$.]

According to Corollary 8 we are able to recover $2$ erasures in the first $4$ positions with erasure decoding in $\mathcal{C}_1$. Moreover, $\mathcal{C}_2$ and $\mathcal{C}_3$ are each able to correct $1$ additional erasure. Finally, the convolutional code $\mathcal{C}$ is able to correct $3$ erasures in the positions marked with $j$. To count the total number of erasures as well as the number of erasure patterns that can be corrected (assuming that erasures occur independently of each other), we have to distinguish two cases.

For the first case, we assume that the erasure pattern allows decoding with $\mathcal{C}_1$, $\mathcal{C}_2$ and $\mathcal{C}_3$. Hence, we are able to correct up to $7$ erasures in $18$ positions. Moreover, if we assume that the erasures occur independently of each other, we can correct $\left(\sum_{i=0}^{2}\binom{4}{i}\right)\cdot 5\cdot 5\cdot\left(\sum_{i=0}^{3}\binom{6}{i}\right)=10175$ different erasure patterns.

For the second case, we assume that the erasure pattern is such that there exists $t\in\{1,2,3\}$ for which decoding with $\mathcal{C}_t$ is not possible, i.e., the $t$-th window of size $n=6$ has to be considered completely lost for $\mathcal{C}$. In order for recovery to still be possible, decoding with $\mathcal{C}_s$ for $s\neq t$ has to be possible, and only one additional erasure in the positions of $J$ outside the completely erased window can be tolerated. Thus, for $t=1$ the maximal number of erasures that can be corrected is $9$ and the number of correctable erasure patterns equals $625$. For $t\neq 1$, the maximal number of erasures that can be corrected is $10$ and the number of correctable erasure patterns equals $2750$.

Summing up over all cases, one gets that there are $13550$ erasure patterns that we can correct.

If one chose $|J|=1$, correction would no longer be possible when one complete window of size $n$ is lost. We would still be able to correct $7$ erasures, but all of these erasures would have to be in positions

$$\bigcup_{t=1}^{L+1}\left(\{1+(t-1)n,\ldots,n+(t-1)n\}\setminus\{J+(t-1)n\}\right)$$

whereas no erasures in positions

$$\bigcup_{t=1}^{L+1}\left(\{1+(t-1)n,\ldots,n+(t-1)n\}\cap\{J+(t-1)n\}\right),$$

could be corrected. Counting the number of erasure patterns that we are able to correct under the assumption of independent erasures, we get $6656$.

If one chose $|J|=3$, there would be three cases to distinguish. For the first case, assume that no window of size $n$ is completely lost for recovery with $\mathcal{C}$. Then we could again correct $7$ erasures, but only $1$ of these erasures could have a position in

$$\bigcup_{t=1}^{L+1}\left(\{1+(t-1)n,\ldots,n+(t-1)n\}\setminus\{J+(t-1)n\}\right).$$

The number of erasure patterns that could be corrected is $1864$.

For the second case, assume that correction with $\mathcal{C}_t$ is not possible for exactly one $t\in\{1,2,3\}$. For $t=1$, one could correct up to $9$ erasures and $168$ erasure patterns; for $t\neq 1$, up to $10$ erasures and $2352$ erasure patterns.

For the third case, assume that correction with $\mathcal{C}_t$ is not possible for exactly two values $t\in\{1,2,3\}$, denoted by $t_1$ and $t_2$. If $1\in\{t_1,t_2\}$, one could correct up to $12$ erasures and $56$ erasure patterns; if $1\notin\{t_1,t_2\}$, up to $13$ erasures and $196$ erasure patterns.

Hence the total number of erasure patterns that could be corrected is $4636$. This illustrates that our choice of $J$ is optimal if we assume the erasures to occur independently of each other.

Finally, we want to consider how many erasures we can correct in a larger window, and choose a window of size $24$, illustrated as follows:

[Figure: a window of $24$ positions, i.e., four blocks of length $n=6$; in each block the $|J|=2$ positions belonging to $J$ are marked with $j$.]

According to Corollary 8 we are able to recover $2$ erasures in the first $4$ positions with erasure decoding in $\mathcal{C}_1$ and $3$ additional erasures with $\mathcal{C}_t$ for $t\geq 2$. The convolutional code $\mathcal{C}$ is able to correct up to $5$ erasures in the positions marked with $j$. Under the assumption that decoding with $\mathcal{C}_t$ is possible for $t=1,\ldots,4$, we are able to correct up to $10$ erasures. If decoding is not possible for exactly one $t$, one can correct up to $12$ erasures if $t=1$ and up to $13$ erasures if $t\neq 1$. If decoding is not possible for exactly two of the star product codes, recovery is only possible if this happens for $t=1$ and $t=4$, in which case up to $15$ erasures can be corrected.

If one chose $|J|=1$, we would only be able to correct $7$ erasures, and all of these erasures would have to be in positions in

$$\bigcup_{t=1}^{L+1}\left(\{1+(t-1)n,\ldots,n+(t-1)n\}\setminus\{J+(t-1)n\}\right).$$

If one chose $|J|=3$, one would have to distinguish four cases. Under the assumption that decoding with $\mathcal{C}_t$ is possible for $t=1,\ldots,4$, we are able to correct up to $10$ erasures in total, but only $1$ of these erasures could lie in

$$\bigcup_{t=1}^{L+1}\left(\{1+(t-1)n,\ldots,n+(t-1)n\}\setminus\{J+(t-1)n\}\right).$$

If decoding is not possible for exactly one $t$, one can correct up to $12$ erasures if $t=1$ and up to $13$ erasures if $t\neq 1$. If decoding is not possible for exactly two of the star product codes and $\mathcal{C}_1$ is among them, one could correct up to $15$ erasures; if $\mathcal{C}_1$ is not among them, up to $16$ erasures. If decoding is not possible for exactly three of the star product codes, one could correct up to $18$ erasures (but there are only two erasure patterns for this scenario).

Example 10

Let $n=10$, $k=2$ and $\mu=2$. This implies (if we use for $\mathcal{C}$ the construction presented in the next section, where $G_\mu$ is full rank) $\delta=4$ and $L=2$, i.e., $\mathcal{C}$ is a $(10,2,4)$ MDP convolutional code that can recover all erasure patterns for which in each sliding window of size 30 there are not more than 24 erasures. We assume $\Delta=L$. Moreover, according to equation (23), we have $|J|=3$.
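The parameters of Example 10 can be recomputed from the standard formulas for MDP convolutional codes over the erasure channel: $\delta=k\mu$ when all row degrees equal $\mu$ ($G_\mu$ full rank), $L=\lfloor\delta/k\rfloor+\lfloor\delta/(n-k)\rfloor$, and recovery of any pattern with at most $(L+1)(n-k)$ erasures per sliding window of $(L+1)n$ symbols (cf. [20]). A minimal sketch under these assumptions:

```python
# Sketch: recompute the parameters of Example 10 from the standard MDP
# formulas (assumed from the convolutional-code literature, cf. [20]).
n, k, mu = 10, 2, 2

delta = k * mu                        # degree: all k row degrees equal mu
L = delta // k + delta // (n - k)     # MDP decoding-window parameter
window = (L + 1) * n                  # sliding-window length in symbols
max_erasures = (L + 1) * (n - k)      # erasures recoverable per window

print(delta, L, window, max_erasures)  # 4 2 30 24
```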

According to Theorem 8 we are able to recover 3 erasures in the first 7 positions of the response sequence $(r^{i}_{t})_{t\in\mathbb{N}}$ with erasure decoding in $\mathcal{C}_1$. Moreover, $\mathcal{C}_2$ and $\mathcal{C}_3$ are both able to correct 1 additional erasure. Finally, the convolutional code $\mathcal{C}$ is able to correct 3 erasures in the positions covered by one of the sets $\{J+(t-1)n\}$. In total, we are able to correct 8 erasures in 30 positions in the case that correction with all $\mathcal{C}_t$ is possible, up to 12 erasures in the case that (only) the first window of size $n=10$ is lost completely, and up to 14 erasures in the case that another window of size $n$ is erased completely.

If one chose $|J|=2$, we would be able to correct 8 erasures, but all of these erasures would have to be in positions in

\[
\bigcup_{t=1}^{L+1}\left(\{1+(t-1)n,\ldots,n+(t-1)n\}\setminus\{J+(t-1)n\}\right)
\]

whereas no erasures in

\[
\bigcup_{t=1}^{L+1}\left(\{1+(t-1)n,\ldots,n+(t-1)n\}\cap\{J+(t-1)n\}\right)
\]

could be corrected.

If one chose $|J|=4$, we could again correct 8 erasures in the case that correction with all $\mathcal{C}_t$ is possible, but only 2 of these erasures could have a position in

\[
\bigcup_{t=1}^{L+1}\left(\{1+(t-1)n,\ldots,n+(t-1)n\}\setminus\{J+(t-1)n\}\right).
\]

Moreover, we could correct up to 12 erasures in the case that the first window of size $n=10$ is lost completely and up to 14 erasures in the case that another window of size $n$ is erased completely.

Again, our choice of $J$ is optimal if we assume the erasures to occur independently of each other.

Remark 11

The major advantage of using convolutional codes instead of block codes is that the symbols in different windows of size $n$ depend on each other; hence, erasures can be recovered not only with the help of the received symbols in the same window but also with the help of received symbols of other windows. This is illustrated by the previous examples, where recovery is possible even if all symbols with positions in $J$ are erased, provided that not too many symbols with positions in $\{J+n\}$ and $\{J+2n\}$ are erased. Indeed, there are erasure patterns where all symbols of the first window of size $n$ are erased but recovery with a convolutional code is still possible. This can never happen with block codes, since in that case all windows of size $n$ have to be decoded independently of each other.

IV Construction of suitable streaming codes

The aim of this section is to provide constructions for $(n,k,\delta)$ MDP convolutional codes $\mathcal{C}$ which have the additional property that, for $f=1,\ldots,\mu$, $\mathcal{C}_f$ is an $[n,(f+1)k]$ MDS block code, as proposed at the beginning of the previous section. To this end, we will use the following lemma and proposition.

Lemma 12

[12] Let $\mathcal{C}$ be an $[n,k]$ block code with generator matrix $G$. Then, $\mathcal{C}$ is MDS if, and only if, all $k\times k$ full-size minors of $G$ are nonzero.
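Lemma 12 yields a direct computational MDS test: evaluate every $k\times k$ full-size minor of $G$ and check that none vanishes. The sketch below does this over the prime field $\mathrm{GF}(13)$ for a Vandermonde (Reed-Solomon-type) generator matrix, chosen here purely for illustration; it is not a matrix from the paper.

```python
# Sketch of the MDS criterion of Lemma 12: a code is MDS iff all k x k
# full-size minors of its generator matrix are nonzero. Illustrative
# example: a [6, 3] Vandermonde (Reed-Solomon-type) code over GF(13).
from itertools import combinations

p = 13                                   # prime field GF(p)
n, k = 6, 3
xs = [1, 2, 3, 4, 5, 6]                  # distinct evaluation points
G = [[pow(x, i, p) for x in xs] for i in range(k)]  # k x n Vandermonde

def det_mod_p(M, p):
    """Determinant over GF(p) via Gaussian elimination with pivoting."""
    M = [row[:] for row in M]
    m, det = len(M), 1
    for c in range(m):
        pivot = next((r for r in range(c, m) if M[r][c] % p), None)
        if pivot is None:
            return 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)     # inverse by Fermat's little theorem
        for r in range(c + 1, m):
            f = M[r][c] * inv % p
            M[r] = [(M[r][j] - f * M[c][j]) % p for j in range(m)]
    return det % p

def is_mds(G, p):
    """Check all full-size minors of G (Lemma 12)."""
    k, n = len(G), len(G[0])
    return all(
        det_mod_p([[G[r][c] for c in cols] for r in range(k)], p) != 0
        for cols in combinations(range(n), k)
    )

print(is_mds(G, p))  # True
```

Any $k$ columns of a Vandermonde matrix with distinct evaluation points form a nonsingular square Vandermonde matrix, so every full-size minor is nonzero and the check succeeds.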

Proposition 13

[3, Theorem 3.3] Let $\alpha$ be a primitive element of a finite field $\mathbb{F}=\mathbb{F}_{p^N}$ and let $B=[b_{i,l}]$ be a matrix over $\mathbb{F}$ with the following properties:

  1. if $b_{i,l}\neq 0$, then $b_{i,l}=\alpha^{\beta_{i,l}}$ for a positive integer $\beta_{i,l}$;

  2. if $b_{i,l}=0$, then $b_{i',l}=0$ for any $i'>i$ or $b_{i,l'}=0$ for any $l'<l$;

  3. if $l<l'$, $b_{i,l}\neq 0$ and $b_{i,l'}\neq 0$, then $2\beta_{i,l}\leq\beta_{i,l'}$;

  4. if $i<i'$, $b_{i,l}\neq 0$ and $b_{i',l}\neq 0$, then $2\beta_{i,l}\leq\beta_{i',l}$.

Suppose $N$ is greater than any exponent of $\alpha$ appearing as a nontrivial term of any minor of $B$. Then $B$ has the property that each of its minors which is not trivially zero is nonzero.
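The four conditions of Proposition 13 are purely combinatorial constraints on the exponents $\beta_{i,l}$, so they can be verified mechanically. Below is a sketch of such a checker; the exponent matrix $\beta_{i,l}=2^{i+l}$ used to exercise it is an illustrative choice with no zero entries (so conditions 1 and 2 hold trivially), not a matrix from the paper.

```python
# Sketch: verify the four conditions of Proposition 13 on an exponent
# matrix. beta[i][l] holds the exponent of alpha at position (i, l);
# None encodes a zero entry of B.
def satisfies_conditions(beta):
    """Return True iff conditions 1-4 of Proposition 13 hold."""
    rows, cols = len(beta), len(beta[0])
    for i in range(rows):
        for l in range(cols):
            b = beta[i][l]
            if b is None:
                # condition 2: a zero propagates downward or to the left
                down = all(beta[ii][l] is None for ii in range(i + 1, rows))
                left = all(beta[i][ll] is None for ll in range(l))
                if not (down or left):
                    return False
                continue
            if b < 1:                      # condition 1: positive exponent
                return False
            for ll in range(l + 1, cols):  # condition 3: along the row
                if beta[i][ll] is not None and 2 * b > beta[i][ll]:
                    return False
            for ii in range(i + 1, rows):  # condition 4: down the column
                if beta[ii][l] is not None and 2 * b > beta[ii][l]:
                    return False
    return True

# Illustrative matrix: beta[i][l] = 2**(i+l), so exponents double when
# moving right or down, which is exactly what conditions 3 and 4 demand.
beta = [[2 ** (i + l) for l in range(4)] for i in range(3)]
print(satisfies_conditions(beta))  # True
```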

The following theorem gives the desired construction.

Theorem 14

Let $p$ be prime, $N\in\mathbb{N}$ and let $\alpha$ be a primitive element of $\mathbb{F}_{p^N}$. For $i=0,\ldots,\mu$, set

\[
G_i:=\left[\begin{array}{ccc}\alpha^{2^{in}}&\cdots&\alpha^{2^{(i+1)n-1}}\\ \vdots&&\vdots\\ \alpha^{2^{in+k-1}}&\cdots&\alpha^{2^{(i+1)n+k-2}}\end{array}\right]. \qquad (27)
\]

Then, the convolutional code $\mathcal{C}$ with generator matrix $G(z)=\sum_{i=0}^{\mu}G_iz^i$ is an MDP convolutional code and, moreover, for $0\leq t\leq\mu$, $\begin{pmatrix}G_0\\ \vdots\\ G_t\end{pmatrix}$ is the generator matrix of an MDS block code if $N>\max\{2^{n(L+2)-1},2^{(\mu+1)n+k-1}\}$.

Proof:

Obviously, the full-size minors of $\left[\begin{array}{ccc}G_0&&\\ \vdots&\ddots&\\ G_L&\cdots&G_0\end{array}\right]$ and $\left[\begin{array}{ccc}&&G_0\\ &\iddots&\vdots\\ G_0&\cdots&G_L\end{array}\right]$ are equal. Thus, it follows from Theorem 2 and Proposition 13 that $\mathcal{C}$ is an MDP convolutional code if $N>2^{n(L+2)-1}$ (for the bound on $N$ see also Theorem 3.2 of [3]). Moreover, it follows from Lemma 12 and Proposition 13 that $\begin{pmatrix}G_0\\ \vdots\\ G_t\end{pmatrix}$ for $0\leq t\leq\mu$ are generator matrices of MDS block codes if $N>2^{(\mu+1)n+k-1}>\sum_{j=(\mu+1)n-1}^{(\mu+1)n+k-2}2^{j}$. ∎
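As a small consistency check of construction (27): the entry of $G_i$ in row $r$ and column $c$ (indexed from 0) is $\alpha$ raised to $2^{in+c+r}$, so in the stacked matrix $(G_0;\ldots;G_t)$ every entry is nonzero and the exponent of 2 increases by at least 1 when moving right along a row or down a column; by monotonicity this gives exactly the doubling demanded by conditions 3 and 4 of Proposition 13. The sketch below verifies this for toy parameters chosen only for illustration.

```python
# Sketch: exponent pattern of the stacked matrices (G_0; ...; G_t) from
# construction (27). Entry (block i, row r, column c) is alpha**(2**(i*n+c+r)),
# so we only track the exponents e = i*n + c + r of 2. Toy parameters.
n, k, mu = 4, 2, 2

def stacked_exponents(n, k, t):
    """Exponent matrix of (G_0; ...; G_t): row i*k + r, column c holds
    the exponent i*n + c + r, i.e. the entry is alpha**(2**(i*n + c + r))."""
    return [[i * n + c + r for c in range(n)]
            for i in range(t + 1) for r in range(k)]

def doubling_holds(E):
    """Check that exponents grow by at least 1 to the right and downward,
    i.e. 2 * 2**E[a][b] <= 2**E[a'][b'] for neighbours (and, by
    monotonicity, for all pairs required by Proposition 13)."""
    rows, cols = len(E), len(E[0])
    row_ok = all(E[a][b] + 1 <= E[a][b + 1]
                 for a in range(rows) for b in range(cols - 1))
    col_ok = all(E[a][b] + 1 <= E[a + 1][b]
                 for a in range(rows - 1) for b in range(cols))
    return row_ok and col_ok

print(all(doubling_holds(stacked_exponents(n, k, t)) for t in range(mu + 1)))  # True
```

Along a row the exponent grows by exactly 1 per column; down a column it grows by 1 within a block and by $n-k+1\geq 1$ across block boundaries (since $k\leq n$), so the check succeeds.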

V Conclusion

We have studied the problem of private streaming of a sequence of files, taking resilience against unresponsive servers as the primary metric for judging the efficiency of a PIR scheme. We proposed, for the first time, a general scheme for this problem. The scheme is based on MDP convolutional codes and the star product of codes. It is suited to a context where some servers fail to respond, in contrast to other solutions considered in the literature, where all the servers were assumed to fail at the same time instant. The presented approach can retrieve files in a sequential fashion and is therefore well suited for low-delay streaming applications. Some examples were presented to show how to take advantage of the proposed scheme. We derived a large set of erasure patterns that our codes can recover. Concrete constructions of such codes exist, although large field sizes are required. The construction of optimal codes for PIR over small fields that can deal with both burst and isolated erasures/errors is an interesting open problem that requires further research.

Acknowledgment

The work of the first and third author was supported by the Portuguese Foundation for Science and Technology (FCT-Fundação para a Ciência e a Tecnologia), through CIDMA - Center for Research and Development in Mathematics and Applications, within project UID/MAT/04106/2019. The first author was supported by the German Research Foundation within grant LI 3101/1-1. The second author was partially supported by Spanish grant AICO/2017/128 of the Generalitat Valenciana and the University of Alicante under the project VIGROB-287.

References

  • [1] Cisco Visual Networking Index: Forecast and Methodology, 2016-2021. Tech. Rep., June 2017.
  • [2] N. Adler and Y. Cassuto. Burst-erasure correcting codes with optimal average delay. IEEE Trans. Inform. Theory, 63(5):2848–2865, May 2017.
  • [3] P. Almeida, D. Napp, and R. Pinto. Superregular matrices and applications to convolutional codes. Linear Algebra and its Applications, 499:1–25, 2016.
  • [4] A. Badr, A. Khisti, W. T. Tan, and J. Apostolopoulos. Robust streaming erasure codes based on deterministic channel approximations. In 2013 IEEE International Symposium on Information Theory, pages 1002–1006, 2013.
  • [5] A. Badr, A. Khisti, W. T. Tan, and J. Apostolopoulos. Layered constructions for low-delay streaming codes. IEEE Trans. Inform. Theory, 63(1):111–141, 2017.
  • [6] K. Banawan and S. Ulukus. The capacity of private information retrieval from coded databases. IEEE Transactions on Information Theory, 64(3):1945–1956, 2018.
  • [7] B. Chor, E. Kushilevitz, O. Goldreich, and M. Sudan. Private information retrieval. J. ACM, 45(6):965–981, 1998.
  • [8] R. Freij-Hollanti, O. Gnilke, C. Hollanti, and D. Karpuk. Private information retrieval from coded databases with colluding servers. SIAM Journal on Applied Algebra and Geometry, 1(1):647–664, 2017.
  • [9] H. Gluesing-Luerssen, J. Rosenthal, and R. Smarandache. Strongly MDS convolutional codes. IEEE Trans.  Inform.  Theory, 52(2):584–598, 2006.
  • [10] L. Holzbaur, R. Freij-Hollanti, A. Wachter-Zeh, and C. Hollanti. Private streaming with convolutional codes. In 2018 IEEE Information Theory Workshop (ITW 2018), pages 550–554. Institute of Electrical and Electronics Engineers, 2019.
  • [11] R. Johannesson and K. Sh. Zigangirov. Fundamentals of Convolutional Coding. IEEE Press, New York, 2015.
  • [12] F. J. MacWilliams and N. J. A. Sloane. The Theory of Error-Correcting Codes. North Holland, Amsterdam, 1977.
  • [13] U. Martínez-Peñas. Private information retrieval from locally repairable databases with colluding servers. In 2019 IEEE International Symposium on Information Theory (ISIT), 2019.
  • [14] E. Martinian and C. E. W. Sundberg. Burst erasure correction codes with low decoding delay. IEEE Transactions on Information Theory, 50(10):2494–2502, 2004.
  • [15] R. J. McEliece. The algebraic theory of convolutional codes. In Handbook of Coding Theory, volume 1, pages 1065–1138. Elsevier Science Publishers, 1998.
  • [16] N. B. Shah, K. V. Rashmi, and K. Ramchandran. One extra bit of download ensures perfectly private information retrieval. In 2014 IEEE International Symposium on Information Theory, pages 856–860, 2014.
  • [17] R. Tajeddine and S. El Rouayheb. Private information retrieval from MDS coded data in distributed storage systems. In 2016 IEEE International Symposium on Information Theory (ISIT), pages 1411–1415, 2016.
  • [18] R. Tajeddine, O. W. Gnilke, D. Karpuk, R. Freij-Hollanti, and C. Hollanti. Private information retrieval from coded storage systems with colluding, byzantine, and unresponsive servers. IEEE Transactions on Information Theory, 65(6):3898–3906, 2019.
  • [19] R. Tajeddine and S. E. Rouayheb. Robust private information retrieval on coded data. In 2017 IEEE International Symposium on Information Theory (ISIT), pages 1903–1907, 2017.
  • [20] V. Tomas, J. Rosenthal, and R. Smarandache. Decoding of convolutional codes over the erasure channel. IEEE Trans. Inform. Theory, 58(1):90–108, January 2012.
  • [21] Y. Zhang and G. Ge. Private information retrieval from MDS coded databases with colluding servers under several variant models. 2017.