
(1) Pennsylvania State University, USA. Email: {sofya,asmith}@cse.psu.edu
(2) Tel Aviv University, Israel. Email: danar@eng.tau.ac.il
(3) MIT, Cambridge MA, USA. Email: ronitt@csail.mit.edu

Sublinear Algorithms for Approximating String Compressibility

Sofya Raskhodnikova (1), Dana Ron (2), Ronitt Rubinfeld (3), and Adam Smith (1)

Sofya Raskhodnikova: research done while at the Hebrew University of Jerusalem, Israel, supported by the Lady Davis Fellowship, and while at the Weizmann Institute of Science, Israel. Dana Ron: supported by the Israel Science Foundation (grant number 89/05). Adam Smith: research done while at the Weizmann Institute of Science, Israel, supported by the Louis L. and Anita M. Perlman Postdoctoral Fellowship.
Abstract

We raise the question of approximating the compressibility of a string with respect to a fixed compression scheme, in sublinear time. We study this question in detail for two popular lossless compression schemes: run-length encoding (RLE) and Lempel-Ziv (LZ), and present sublinear algorithms for approximating compressibility with respect to both schemes. We also give several lower bounds that show that our algorithms for both schemes cannot be improved significantly.

Our investigation of LZ yields results whose interest goes beyond the initial questions we set out to study. In particular, we prove combinatorial structural lemmas that relate the compressibility of a string with respect to Lempel-Ziv to the number of distinct short substrings contained in it. In addition, we show that approximating the compressibility with respect to LZ is related to approximating the support size of a distribution.

1 Introduction

Given an extremely long string, it is natural to wonder how compressible it is. This fundamental question is of interest to a wide range of areas of study, including computational complexity theory, machine learning, storage systems, and communications. As massive data sets are now commonplace, the ability to estimate their compressibility with extremely efficient, even sublinear time, algorithms, is gaining in importance. The most general measure of compressibility, Kolmogorov complexity, is not computable (see [14] for a textbook treatment), nor even approximable. Even under restrictions which make it computable (such as a bound on the running time of decompression), it is probably hard to approximate in polynomial time, since an approximation would allow distinguishing random from pseudorandom strings and, hence, inverting one-way functions. However, the question of how compressible a large string is with respect to a specific compression scheme may be tractable, depending on the particular scheme.

We raise the question of approximating the compressibility of a string with respect to a fixed compression scheme, in sublinear time, and give algorithms and nearly matching lower bounds for several versions of the problem. While this question is new, for one compression scheme, answers follow from previous work. Namely, compressibility under Huffman encoding is determined by the entropy of the symbol frequencies. Batu et al.  [3] and Brautbar and Samorodnitsky [5] study the problem of approximating the entropy of a distribution from a small number of samples, and their results immediately imply algorithms and lower bounds for approximating compressibility under Huffman encoding.

In this work we study the compressibility approximation question in detail for two popular lossless compression schemes: run-length encoding (RLE) and Lempel-Ziv (LZ) [18]. In the RLE scheme, each run, or sequence of consecutive occurrences of the same character, is stored as a pair: the character, and the length of the run. Run-length encoding is used to compress black and white images, faxes, and other simple graphic images, such as icons and line drawings, which usually contain many long runs. In the LZ scheme (we study the variant known as LZ77 [18], which achieves the best compressibility; several other variants do not compress some inputs as well, but can be implemented more efficiently), a left-to-right pass of the input string is performed, and at each step the longest sequence of characters that has started in the previous portion of the string is replaced with a pointer to the previous location and the length of the sequence (for a formal definition, see Section 4). The LZ scheme and its variants have been studied extensively in machine learning and information theory, in part because they compress strings generated by an ergodic source to the shortest possible representation (given by the entropy) in the asymptotic limit (cf. [10]). Many popular archivers, such as gzip, use variations on the LZ scheme. In this work we present sublinear algorithms and corresponding lower bounds for approximating compressibility with respect to both schemes, RLE and LZ.

Motivation.

Computing the compressibility of a large string with respect to specific compression schemes may be done in order to decide whether or not to compress the file, to choose the most suitable compression method, or to check whether a small modification to the file (e.g., a rotation of an image) will make it significantly more compressible. (For example, a variant of the RLE scheme, typically used to compress images, runs RLE on the concatenated rows of the image and on the concatenated columns of the image, and stores the shorter of the two compressed files.) Moreover, compression schemes are used as tools for measuring properties of strings such as similarity and entropy. As such, they are applied widely in data mining, natural language processing and genomics (see, for example, Loewenstern et al. [15], Kukushkina et al. [11], Benedetto et al. [4], Li et al. [13] and Cilibrasi and Vitányi [8, 9]). In these applications, one typically needs only the length of the compressed version of a file, not the output itself. For example, in the clustering algorithm of [8], the distance between two objects $x$ and $y$ is given by a normalized version of the length of their compressed concatenation $x\|y$. The algorithm first computes all pairwise distances, and then analyzes the resulting distance matrix. This requires $\Theta(t^{2})$ runs of a compression scheme, such as gzip, to cluster $t$ objects. Even a weak approximation algorithm that can quickly rule out very incompressible strings would reduce the running time of the clustering computations dramatically.

Multiplicative and Additive Approximations.

We consider three approximation notions: additive, multiplicative, and the combination of additive and multiplicative. On an input of length $n$, the quantities we approximate range from 1 to $n$. An additive approximation algorithm is allowed an additive error of $\epsilon n$, where $\epsilon\in(0,1)$ is a parameter. The output of a multiplicative approximation algorithm is within a factor $A>1$ of the correct answer. The combined notion allows both types of error: the algorithm should output an estimate $\widehat{C}$ of the compression cost $C$ such that $\frac{C}{A}-\epsilon n\leq\widehat{C}\leq A\cdot C+\epsilon n$. Our algorithms are randomized, and for all inputs the approximation guarantees hold with probability at least $\frac{2}{3}$.

We are interested in sublinear approximation algorithms, which read few positions of the input strings. For the schemes we study, purely multiplicative approximation algorithms must read almost the entire input. Nevertheless, algorithms with additive error guarantees, or with a combination of multiplicative and additive error, are often sufficient for distinguishing very compressible inputs from inputs that are not well compressible. For both the RLE and LZ schemes, we give algorithms with combined multiplicative and additive error that make few queries to the input. When it comes to additive approximations, however, the two schemes differ sharply: sublinear additive approximations are possible for RLE compressibility, but not for LZ compressibility.

1.1 Results for Run-Length Encoding

For RLE, we present sublinear algorithms for all three approximation notions defined above, providing a trade-off between the quality of approximation and the running time. The algorithms that allow an additive approximation run in time independent of the input size. Specifically, an $\epsilon n$-additive estimate can be obtained in time $\tilde{O}(1/\epsilon^{3})$, and a combined estimate, with a multiplicative error of 3 and an additive error of $\epsilon n$, can be obtained in time $\tilde{O}(1/\epsilon)$. (The notation $\tilde{O}(g(k))$ for a function $g$ of a parameter $k$ means $O(g(k)\cdot{\rm polylog}(g(k)))$, where ${\rm polylog}(g(k))=\log^{c}(g(k))$ for some constant $c$.) As for a strict multiplicative approximation, we give a simple 4-multiplicative approximation algorithm that runs in expected time $\tilde{O}(\frac{n}{C_{\rm rle}(w)})$, where $C_{\rm rle}(w)$ denotes the compression cost of the string $w$. For any $\gamma>0$, the multiplicative error can be improved to $1+\gamma$ at the cost of multiplying the running time by ${\rm poly}(1/\gamma)$. Observe that the algorithm is more efficient when the string is less compressible, and less efficient when the string is more compressible. One of our lower bounds justifies this behavior and, in particular, shows that a constant-factor approximation requires linear time for strings that are very compressible. We also give a lower bound of $\Omega(1/\epsilon^{2})$ for $\epsilon n$-additive approximation.

1.2 Results for Lempel-Ziv

We prove that approximating compressibility with respect to LZ is closely related to the following problem, which we call Colors: given access to a string $\tau$ of length $n$ over alphabet $\Psi$, approximate the number of distinct symbols ("colors") in $\tau$. This is essentially equivalent to estimating the support size of a distribution [17]. Variants of this problem have been considered under various guises in the literature: in databases it is referred to as approximating distinct values (Charikar et al. [7]), in statistics as estimating the number of species in a population (see the over 800 references maintained by Bunge [6]), and in streaming as approximating the frequency moment $F_{0}$ (Alon et al. [1], Bar-Yossef et al. [2]). Most of these works, however, consider models different from ours. For our model, there is an $A$-multiplicative approximation algorithm of [7] that runs in time $O\left(\frac{n}{A^{2}}\right)$, matching the lower bound in [7, 2]. There is also an almost linear lower bound for approximating Colors with additive error [17].

We give a reduction from LZ compressibility to Colors and vice versa. These reductions allow us to employ the known results on Colors to give algorithms and lower bounds for this problem. Our approximation algorithm for LZ compressibility combines a multiplicative and an additive error. The running time of the algorithm is $\tilde{O}\left(\frac{n}{A^{3}\epsilon}\right)$, where $A$ is the multiplicative error and $\epsilon n$ is the additive error. In particular, this implies that for any $\alpha>0$, we can distinguish, in sublinear time $\tilde{O}(n^{1-\alpha})$, strings compressible to $O(n^{1-\alpha})$ symbols from strings only compressible to $\Omega(n)$ symbols. (To see this, set $A=o(n^{\alpha/2})$ and $\epsilon=o(n^{-\alpha/2})$.)

The main tool in the algorithm consists of two combinatorial structural lemmas that relate the compressibility of a string to the number of distinct short substrings contained in it. Roughly, they say that a string is well compressible with respect to LZ if and only if it contains few distinct substrings of length $\ell$ for all small $\ell$ (when considering all $n-\ell+1$ possible overlapping substrings). The simpler of the two lemmas was inspired by a structural lemma for grammars by Lehman and Shelat [12]. The combinatorial lemmas allow us to establish a reduction from LZ compressibility to Colors and to employ a (simple) algorithm for approximating Colors in our algorithm for LZ.

Interestingly, we can show that there is also a reduction in the opposite direction: namely, approximating Colors reduces to approximating LZ compressibility. The lower bound of [17], combined with the reduction from Colors to LZ, implies that our algorithm for LZ cannot be improved significantly. In particular, our lower bound implies that for any $B=n^{o(1)}$, distinguishing strings compressible by LZ to $\tilde{O}(n/B)$ symbols from strings compressible to $\tilde{\Omega}(n)$ symbols requires $n^{1-o(1)}$ queries.

1.3 Further Research

It would be interesting to extend our results for estimating the compressibility under LZ77 to other variants of LZ, such as the dictionary-based LZ78 [19]. Compressibility under LZ78 can be drastically different from compressibility under LZ77: e.g., for $0^{n}$ they differ roughly by a factor of $\sqrt{n}$. Another open question is approximating compressibility for schemes other than RLE and LZ. In particular, it would be interesting to design approximation algorithms for lossy compression schemes such as JPEG, MPEG and MP3. One lossy compression scheme to which our results extend directly is Lossy RLE, where some characters, e.g., the ones that represent similar colors, are treated as the same character.

1.4 Organization

We start with some definitions in Section 2. Section 3 contains our results for RLE. Section 4 deals with the LZ scheme. All missing details (descriptions of algorithms and proofs of claims) can be found in [16].

2 Preliminaries

The input to our algorithms is usually a string $w$ of length $n$ over a finite alphabet $\Sigma$. The quantities we approximate, such as the compression cost of $w$ under a specific algorithm, range from 1 to $n$. We consider estimates to these quantities that have both multiplicative and additive error. We call $\widehat{C}$ a $(\lambda,\epsilon)$-estimate for $C$ if $\frac{C}{\lambda}-\epsilon n\leq\widehat{C}\leq\lambda\cdot C+\epsilon n$, and say that an algorithm $(\lambda,\epsilon)$-estimates $C$ (or is a $(\lambda,\epsilon)$-approximation algorithm for $C$) if for each input it produces a $(\lambda,\epsilon)$-estimate for $C$ with probability at least $\frac{2}{3}$.

When the error is purely additive or multiplicative, we use the following shorthand: an $\epsilon n$-additive estimate stands for a $(1,\epsilon)$-estimate, and a $\lambda$-multiplicative estimate, or $\lambda$-estimate, stands for a $(\lambda,0)$-estimate. An algorithm computing an $\epsilon n$-additive estimate with probability at least $\frac{2}{3}$ is an $\epsilon n$-additive approximation algorithm; if it computes a $\lambda$-multiplicative estimate, then it is a $\lambda$-multiplicative approximation algorithm, or $\lambda$-approximation algorithm.

For some settings of parameters, obtaining a valid estimate is trivial. For a quantity in $[1,n]$, for example, $\frac{n}{2}$ is an $\frac{n}{2}$-additive estimate, $\sqrt{n}$ is a $\sqrt{n}$-estimate, and $\epsilon n$ is a $(\lambda,\epsilon)$-estimate whenever $\lambda\geq\frac{1}{2\epsilon}$.

3 Run-Length Encoding

Every $n$-character string $w$ over alphabet $\Sigma$ can be partitioned into maximal runs of identical characters of the form $\sigma^{\ell}$, where $\sigma$ is a symbol in $\Sigma$ and $\ell$ is the length of the run, and consecutive runs are composed of different symbols. In the run-length encoding of $w$, each such run is replaced by the pair $(\sigma,\ell)$. The number of bits needed to represent such a pair is $\left\lceil\log(\ell+1)\right\rceil+\left\lceil\log|\Sigma|\right\rceil$, plus an overhead which depends on how the separation between the characters and the lengths is implemented. One way to implement it is to use a prefix-free encoding for the lengths. For simplicity we ignore the overhead in the above expression, but our analysis can be adapted to any implementation choice. The cost of the run-length encoding, denoted by $C_{\rm rle}(w)$, is the sum over all runs of $\left\lceil\log(\ell+1)\right\rceil+\left\lceil\log|\Sigma|\right\rceil$.
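For concreteness, a minimal linear-time reference computation of this cost (ignoring the overhead, as above) might look as follows; the function name and the explicit use of base-2 logarithms are our own illustrative choices.

```python
import math

def rle_cost(w: str, alphabet_size: int) -> int:
    """Exact RLE cost of w: sum over maximal runs of
    ceil(log2(run_length + 1)) + ceil(log2(|Sigma|)), overhead ignored."""
    if not w:
        return 0
    sigma_bits = math.ceil(math.log2(alphabet_size))
    cost, run_len = 0, 1
    for prev, cur in zip(w, w[1:]):
        if cur == prev:
            run_len += 1
        else:
            cost += math.ceil(math.log2(run_len + 1)) + sigma_bits
            run_len = 1
    cost += math.ceil(math.log2(run_len + 1)) + sigma_bits  # last run
    return cost

# Example: "aaabbc" has runs aaa, bb, c.
print(rle_cost("aaabbc", alphabet_size=26))
```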

3.1 An $\epsilon n$-Additive Estimate with $\tilde{O}(1/\epsilon^{3})$ Queries

Our first algorithm for approximating the cost of RLE is very simple: it samples a few positions in the input string uniformly at random and bounds the lengths of the runs to which they belong by looking at the positions to the left and to the right of each sample. If the corresponding run is short, its length is established exactly; if it is long, we argue that it does not contribute much to the encoding cost. For each index $t\in[n]$, let $\ell(t)$ be the length of the run to which $w_{t}$ belongs. The cost contribution of index $t$ is defined as

\[
c(t)=\frac{\left\lceil\log(\ell(t)+1)\right\rceil+\left\lceil\log|\Sigma|\right\rceil}{\ell(t)}. \tag{1}
\]

By definition, $\frac{C_{\rm rle}(w)}{n}=\operatorname*{\mathsf{E}}_{t\in[n]}[c(t)]$, where $\operatorname*{\mathsf{E}}_{t\in[n]}$ denotes expectation over a uniformly random choice of $t$. The algorithm, presented below, estimates the encoding cost by the average of the cost contributions of the sampled short runs, multiplied by $n$.

Algorithm I: An $\epsilon n$-additive approximation for $C_{\rm rle}(w)$

1. Select $q=\Theta\left(\frac{1}{\epsilon^{2}}\right)$ indices $t_{1},\dots,t_{q}$ uniformly and independently at random.
2. For each $i\in[q]$:
   (a) Query $t_{i}$ and up to $\ell_{0}=\frac{8\log(4|\Sigma|/\epsilon)}{\epsilon}$ positions in its vicinity to bound $\ell(t_{i})$.
   (b) Set $\hat{c}(t_{i})=c(t_{i})$ if $\ell(t_{i})<\ell_{0}$ and $\hat{c}(t_{i})=0$ otherwise.
3. Output $\widehat{C}_{\rm rle}=n\cdot\operatorname*{\mathsf{E}}_{i\in[q]}[\hat{c}(t_{i})]$.
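A sketch of Algorithm I in Python is given below. The concrete constant in the sample size $q$, the exploration budget on each side of a sample, and the helper names are illustrative assumptions, since the paper fixes these only up to constant factors.

```python
import math
import random

def run_length_at(w: str, t: int, cap: int) -> int:
    """Length of the maximal run containing position t (0-indexed),
    exploring at most `cap` positions on each side."""
    left = t
    while left > 0 and t - left < cap and w[left - 1] == w[t]:
        left -= 1
    right = t
    while right < len(w) - 1 and right - t < cap and w[right + 1] == w[t]:
        right += 1
    return right - left + 1

def estimate_rle_cost(w: str, alphabet_size: int, eps: float) -> float:
    """epsilon*n-additive estimate of C_rle(w): a sketch of Algorithm I."""
    n = len(w)
    sigma_bits = math.ceil(math.log2(alphabet_size))
    ell0 = 8 * math.log2(4 * alphabet_size / eps) / eps   # run-length cutoff (log base assumed)
    q = math.ceil(10 / eps ** 2)                           # Theta(1/eps^2) samples; constant assumed
    total = 0.0
    for _ in range(q):
        t = random.randrange(n)
        ell = run_length_at(w, t, cap=math.ceil(ell0))
        if ell < ell0:   # short run: add c(t); long runs are ignored
            total += (math.ceil(math.log2(ell + 1)) + sigma_bits) / ell
    return n * total / q

random.seed(0)
w = "a" * 500 + "ab" * 250 + "b" * 500
print(estimate_rle_cost(w, alphabet_size=2, eps=0.1))
```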

Correctness.

We first prove that the algorithm is an $\epsilon n$-additive approximation. The error of the algorithm comes from two sources: from ignoring the contribution of long runs and from sampling. The ignored indices $t$, for which $\ell(t)\geq\ell_{0}$, do not contribute much to the cost. Since the cost assigned to the indices monotonically decreases with the length of the run to which they belong, for each such index,

\[
c(t)\leq\frac{\left\lceil\log(\ell_{0}+1)\right\rceil+\left\lceil\log|\Sigma|\right\rceil}{\ell_{0}}\leq\frac{\epsilon}{2}. \tag{2}
\]

Therefore,

\[
\frac{C_{\rm rle}(w)}{n}-\frac{\epsilon}{2}\;\leq\;\frac{1}{n}\cdot\sum_{t:\,\ell(t)<\ell_{0}}c(t)\;\leq\;\frac{C_{\rm rle}(w)}{n}. \tag{3}
\]

Equivalently, $\frac{C_{\rm rle}(w)}{n}-\frac{\epsilon}{2}\leq\operatorname*{\mathsf{E}}_{t\in[n]}[\hat{c}(t)]\leq\frac{C_{\rm rle}(w)}{n}$, where, as in the algorithm, $\hat{c}(t)=c(t)$ if $\ell(t)<\ell_{0}$ and $\hat{c}(t)=0$ otherwise.

By an additive Chernoff bound, with high constant probability, the sampling error in estimating $\operatorname*{\mathsf{E}}_{t\in[n]}[\hat{c}(t)]$ is at most $\epsilon/2$. Therefore, $\widehat{C}_{\rm rle}$ is an $\epsilon n$-additive estimate of $C_{\rm rle}(w)$, as desired.

Query and time complexity (assuming $|\Sigma|$ is constant). Since the number of queries performed for each selected $t_{i}$ is $O(\ell_{0})=O(\log(1/\epsilon)/\epsilon)$, the total number of queries, as well as the running time, is $O(\log(1/\epsilon)/\epsilon^{3})$.

3.2 Summary of Positive Results on RLE

After stating Theorem 1 that summarizes our positive results, we briefly discuss some of the ideas used in the algorithms omitted from this version of the paper.

Theorem 1

Let $w\in\Sigma^{n}$ be a string to which we are given query access.

1. Algorithm I gives an $\epsilon n$-additive approximation to $C_{\rm rle}(w)$ in time $\tilde{O}(1/\epsilon^{3})$.

2. $C_{\rm rle}(w)$ can be $(3,\epsilon)$-estimated in time $\tilde{O}(1/\epsilon)$.

3. $C_{\rm rle}(w)$ can be 4-estimated in expected time $\tilde{O}\left(\frac{n}{C_{\rm rle}(w)}\right)$. A $(1+\gamma)$-estimate of $C_{\rm rle}(w)$ can be obtained in expected time $\tilde{O}\left(\frac{n}{C_{\rm rle}(w)}\cdot{\rm poly}(1/\gamma)\right)$. The algorithm needs no prior knowledge of $C_{\rm rle}(w)$.

Section 3.1 gives a complete proof of Item 1. The algorithm in Item 2 partitions the positions in the string into buckets according to the length of the runs they belong to. It estimates the sizes of different buckets with different precision, depending on the size of the bucket and the length of the runs it contains. The main idea in Item 3 is to search for $C_{\rm rle}(w)$, using the algorithm from Item 2 repeatedly (with different parameters) to establish successively better estimates.

3.3 Lower Bounds for RLE

We give two lower bounds, for multiplicative and additive approximation, respectively, which establish that the running times in Items 1 and 3 of Theorem 1 are essentially tight.

Theorem 2
1. For all $A>1$, any $A$-approximation algorithm for $C_{\rm rle}$ requires $\Omega\left(\frac{n}{A^{2}\log n}\right)$ queries. Furthermore, if the input is restricted to strings with compression cost $C_{\rm rle}(w)\geq C$, then $\Omega\left(\frac{n}{CA^{2}\log n}\right)$ queries are necessary.

2. For all $\epsilon\in\left(0,\frac{1}{2}\right)$, any $\epsilon n$-additive approximation algorithm for $C_{\rm rle}$ requires $\Omega(1/\epsilon^{2})$ queries.

A Multiplicative Lower Bound (Proof of Theorem 2, Item 1):

The claim follows from the next lemma:

Lemma 3

For every $n\geq 2$ and every integer $1\leq k\leq n/2$, there exists a family of strings, denoted $W_{k}$, for which the following holds: (1) $C_{\rm rle}(w)=\Theta\left(k\log\frac{n}{k}\right)$ for every $w\in W_{k}$; (2) distinguishing a uniformly random string in $W_{k}$ from one in $W_{k^{\prime}}$, where $k^{\prime}>k$, requires $\Omega\left(\frac{n}{k^{\prime}}\right)$ queries.

Proof: Let $\Sigma=\{0,1\}$ and assume for simplicity that $n$ is divisible by $k$. Every string in $W_{k}$ consists of $k$ blocks, each of length $\frac{n}{k}$. Every odd block contains only 1s and every even block contains a single 0. The strings in $W_{k}$ differ in the locations of the 0s within the even blocks. Every $w\in W_{k}$ contains $k/2$ isolated 0s and $k/2$ runs of 1s, each of length $\Theta(\frac{n}{k})$. Therefore, $C_{\rm rle}(w)=\Theta\left(k\log\frac{n}{k}\right)$. To distinguish a random string in $W_{k}$ from one in $W_{k^{\prime}}$ with probability $2/3$, one must make $\Omega(\frac{n}{\max(k,k^{\prime})})$ queries since, in both cases, with asymptotically fewer queries the algorithm sees only 1s with high probability. ∎
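As an illustration of this family, a small generator for a uniformly random string in $W_{k}$ might look as follows (the block layout mirrors the construction in the proof; the function name is ours).

```python
import random

def sample_W_k(n: int, k: int) -> str:
    """Uniformly random string in W_k: k blocks of length n/k;
    odd blocks (1st, 3rd, ...) are all 1s, each even block has a single 0
    at a uniformly random position."""
    assert n % k == 0
    block = n // k
    out = []
    for b in range(k):
        if b % 2 == 0:            # odd-numbered block (1-indexed): all 1s
            out.append("1" * block)
        else:                     # even-numbered block: one 0 somewhere
            pos = random.randrange(block)
            out.append("1" * pos + "0" + "1" * (block - pos - 1))
    return "".join(out)

random.seed(1)
print(sample_W_k(24, 4))
```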

Additive Lower Bound (Proof of Theorem 2, Item 2):

For any $p\in[0,1]$ and sufficiently large $n$, let ${\cal D}_{n,p}$ be the following distribution over $n$-bit strings. For simplicity, consider $n$ divisible by 3. The string is determined by $\frac{n}{3}$ independent coin flips, each with bias $p$. Each "heads" extends the string by three runs of length 1, and each "tails", by a run of length 3. Given the sequence of run lengths dictated by the coin flips, output the unique binary string that starts with 0 and has this sequence of run lengths. (If $b_{i}$ is a boolean variable representing the outcome of the $i$th coin, then the output is $0b_{1}01\overline{b_{2}}10b_{3}01\overline{b_{4}}1\dots$)

Let $W$ be a random variable drawn according to ${\cal D}_{n,1/2}$ and $W^{\prime}$, according to ${\cal D}_{n,1/2+\epsilon}$. The following facts are established in the full version [16]: (a) $\Omega(1/\epsilon^{2})$ queries are necessary to reliably distinguish $W$ from $W^{\prime}$; and (b) with high probability, the encoding costs of $W$ and $W^{\prime}$ differ by $\Omega(\epsilon n)$. Together these facts imply the lower bound. ∎
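For intuition, a sampler for ${\cal D}_{n,p}$, following the run-length description above, could be written as follows (a sketch; the helper name is ours).

```python
import random

def sample_D(n: int, p: float) -> str:
    """Draw a string from D_{n,p}: n/3 coins of bias p; heads -> three runs
    of length 1, tails -> one run of length 3; the string starts with 0."""
    assert n % 3 == 0
    run_lengths = []
    for _ in range(n // 3):
        if random.random() < p:           # "heads"
            run_lengths += [1, 1, 1]
        else:                             # "tails"
            run_lengths += [3]
    bit, out = "0", []
    for r in run_lengths:
        out.append(bit * r)
        bit = "1" if bit == "0" else "0"  # consecutive runs alternate symbols
    return "".join(out)

random.seed(2)
print(sample_D(30, 0.5))
```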

4 Lempel-Ziv Compression

In this section we consider a variant of Lempel and Ziv's compression algorithm [18], which we refer to as LZ77. In all that follows we use the shorthand $[n]$ for $\{1,\ldots,n\}$. Let $w\in\Sigma^{n}$ be a string over an alphabet $\Sigma$. Each symbol of the compressed representation of $w$, denoted ${LZ}(w)$, is either a character $\sigma\in\Sigma$ or a pair $(p,\ell)$, where $p\in[n]$ is a pointer (index) to a location in the string $w$ and $\ell$ is the length of the substring of $w$ that this symbol represents. To compress $w$, the algorithm works as follows. Starting from $t=1$, at each step the algorithm finds the longest substring $w_{t}\dots w_{t+\ell-1}$ for which there exists an index $p<t$ such that $w_{p}\ldots w_{p+\ell-1}=w_{t}\ldots w_{t+\ell-1}$. (The substrings $w_{p}\ldots w_{p+\ell-1}$ and $w_{t}\ldots w_{t+\ell-1}$ may overlap.) If there is no such substring (that is, the character $w_{t}$ has not appeared before), then the next symbol in ${LZ}(w)$ is $w_{t}$, and $t$ is incremented by 1. Otherwise, the next symbol is $(p,\ell)$ and $t$ is incremented by $\ell$. We refer to the substring $w_{t}\dots w_{t+\ell-1}$ (or $w_{t}$ when $w_{t}$ is a new character) as a compressed segment.
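To make the parsing rule concrete, here is a naive (quadratic-time) sketch that computes the number of symbols in this parsing; it is meant only to mirror the definition above, not to be an efficient implementation.

```python
def lz77_cost(w: str) -> int:
    """Number of symbols in the LZ77 parsing of w, as defined above.
    Each step emits either a new character or a (pointer, length) pair."""
    n, t, symbols = len(w), 0, 0
    while t < n:
        best = 0
        # Longest substring starting at t that also starts at some p < t
        # (the two occurrences may overlap).
        for p in range(t):
            length = 0
            while t + length < n and w[p + length] == w[t + length]:
                length += 1
            best = max(best, length)
        symbols += 1
        t += best if best > 0 else 1   # a new character advances t by 1
    return symbols

print(lz77_cost("abababab"))   # 'a', 'b', then one pair covering "ababab" -> 3
```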

Let $C_{\text{LZ}}(w)$ denote the number of symbols in the compressed string ${LZ}(w)$. (We do not distinguish between symbols that are characters in $\Sigma$ and symbols that are pairs $(p,\ell)$.) Given query access to a string $w\in\Sigma^{n}$, we are interested in computing an estimate $\widehat{C}_{\text{LZ}}$ of $C_{\text{LZ}}(w)$. As we shall see, this task reduces to estimating the number of distinct substrings in $w$ of different lengths, which in turn reduces to estimating the number of distinct characters ("colors") in a string. The actual length of the binary representation of the compressed string is at most a factor of $2\log n$ larger than $C_{\text{LZ}}(w)$. This is relatively negligible given the quality of the estimates that we can achieve in sublinear time.

We begin by relating LZ compressibility to Colors (§4.1), then use this relation to discuss algorithms (§4.2) and lower bounds (§4.3) for compressibility.

4.1 Structural Lemmas

Our algorithm for approximating the compressibility of an input string with respect to LZ77 uses an approximation algorithm for Colors (defined in the introduction) as a subroutine. The main tool in the reduction from LZ77 to Colors is the relation between $C_{\text{LZ}}(w)$ and the number of distinct substrings in $w$, formalized in the two structural lemmas. In what follows, $d_{\ell}(w)$ denotes the number of distinct substrings of length $\ell$ in $w$. Unlike compressed segments of $w$, which are disjoint, these substrings may overlap.
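In code, $d_{\ell}(w)$ is simply the number of distinct length-$\ell$ windows of $w$; a direct linear-scan computation is shown below for reference.

```python
def d_ell(w: str, ell: int) -> int:
    """Number of distinct substrings of length ell in w (windows may overlap)."""
    return len({w[i:i + ell] for i in range(len(w) - ell + 1)})

print(d_ell("abababab", 2))  # distinct length-2 windows: "ab", "ba" -> 2
```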

Lemma 4 (Structural Lemma 1)

For every $\ell\in[n]$, $C_{\text{LZ}}(w)\geq\frac{d_{\ell}(w)}{\ell}$.

Lemma 5 (Structural Lemma 2)

Let $\ell_{0}\in[n]$. Suppose that for some integer $m$ and for every $\ell\in[\ell_{0}]$, $d_{\ell}(w)\leq m\cdot\ell$. Then $C_{\text{LZ}}(w)\leq 4(m\log\ell_{0}+n/\ell_{0})$.

Proof of Lemma 4. This proof is similar to the proof of a related lemma concerning grammars from [12]. First note that the lemma holds for $\ell=1$, since each character $w_{t}$ in $w$ that has not appeared previously (that is, $w_{t^{\prime}}\neq w_{t}$ for every $t^{\prime}<t$) is copied by the compression algorithm to ${LZ}(w)$.

For the general case, fix $\ell>1$. Recall that $w_{t}\dots w_{t+k-1}$ is a compressed segment of $w$ if it is represented by one symbol $(p,k)$ in ${LZ}(w)$. Any substring of length $\ell$ that occurs entirely within a compressed segment must have occurred previously in the string. Such substrings can be ignored for our purposes: the number of distinct length-$\ell$ substrings is bounded above by the number of length-$\ell$ substrings that start inside one compressed segment and end in another. Each segment (except the last) contributes at most $\ell-1$ such substrings. Therefore, $d_{\ell}(w)\leq(C_{\text{LZ}}(w)-1)(\ell-1)<C_{\text{LZ}}(w)\cdot\ell$ for every $\ell>1$. ∎

Proof of Lemma 5. Let $n_{\ell}(w)$ denote the number of compressed segments of length $\ell$ in $w$, not including the last compressed segment. We use the shorthand $n_{\ell}$ for $n_{\ell}(w)$ and $d_{\ell}$ for $d_{\ell}(w)$. In order to prove the lemma we shall show that for every $1\leq\ell\leq\left\lfloor\ell_{0}/2\right\rfloor$,

\[
\sum_{k=1}^{\ell}n_{k}\leq 2(m+1)\cdot\sum_{k=1}^{\ell}\frac{1}{k}\;. \tag{4}
\]

For all $\ell\geq 1$, since the compressed segments in $w$ are disjoint, $\sum_{k=\ell+1}^{n}n_{k}\leq\frac{n}{\ell+1}$. If we substitute $\ell=\left\lfloor\ell_{0}/2\right\rfloor$ in the last two inequalities and sum them up, we get:

\[
\sum_{k=1}^{n}n_{k}\leq 2(m+1)\cdot\sum_{k=1}^{\left\lfloor\ell_{0}/2\right\rfloor}\frac{1}{k}+\frac{2n}{\ell_{0}}\leq 2(m+1)(\ln\ell_{0}+1)+\frac{2n}{\ell_{0}}. \tag{5}
\]

Since $C_{\text{LZ}}(w)=\sum_{k=1}^{n}n_{k}+1$, the lemma follows.

It remains to prove Equation (4). We do so below by induction on $\ell$, using the following claim.

Claim 6

For every $1\leq\ell\leq\left\lfloor\ell_{0}/2\right\rfloor$, $\displaystyle\sum_{k=1}^{\ell}k\cdot n_{k}\leq 2\ell(m+1)$.

Proof: We show that each position $j\in\{\ell,\dots,n-\ell\}$ that participates in a compressed segment of length at most $\ell$ in $w$ can be mapped to a distinct length-$2\ell$ substring of $w$. Since $\ell\leq\ell_{0}/2$, by the premise of the lemma there are at most $2\ell\cdot m$ distinct length-$2\ell$ substrings. In addition, the first $\ell-1$ and the last $\ell$ positions account for fewer than $2\ell$ additional positions. The claim follows.

We call a substring new if no instance of it started in the previous portion of $w$. Namely, $w_{t}\dots w_{t+\ell-1}$ is new if there is no $p<t$ such that $w_{t}\dots w_{t+\ell-1}=w_{p}\dots w_{p+\ell-1}$. Consider a compressed segment $w_{t}\dots w_{t+k-1}$ of length $k\leq\ell$. The substrings of length greater than $k$ that start at $w_{t}$ must be new, since LZ77 finds the longest substring that appeared before. Furthermore, every substring that contains such a new substring is also new. That is, every substring $w_{t^{\prime}}\dots w_{t^{\prime}+k^{\prime}}$, where $t^{\prime}\leq t$ and $k^{\prime}\geq k+(t-t^{\prime})$, is new.

Map each position $j\in\{\ell,\dots,n-\ell\}$ in the compressed segment $w_{t}\dots w_{t+k-1}$ to the length-$2\ell$ substring that ends at $w_{j+\ell}$. Then each position in $\{\ell,\dots,n-\ell\}$ that appears in a compressed segment of length at most $\ell$ is mapped to a distinct length-$2\ell$ substring, as desired. ∎ (Claim 6)

Establishing Equation (4).

We prove Equation (4) by induction on $\ell$. Claim 6 with $\ell$ set to 1 gives the base case, i.e., $n_{1}\leq 2(m+1)$. For the induction step, assume the induction hypothesis holds for every $j\in[\ell-1]$. To prove it for $\ell$, add the inequality in Claim 6 to the sum of the induction hypothesis inequalities (Equation (4)) for every $j\in[\ell-1]$. The left-hand side of the resulting inequality is

\[
\sum_{k=1}^{\ell}k\cdot n_{k}+\sum_{j=1}^{\ell-1}\sum_{k=1}^{j}n_{k}=\sum_{k=1}^{\ell}k\cdot n_{k}+\sum_{k=1}^{\ell-1}\sum_{j=1}^{\ell-k}n_{k}=\sum_{k=1}^{\ell}k\cdot n_{k}+\sum_{k=1}^{\ell-1}(\ell-k)\cdot n_{k}=\ell\cdot\sum_{k=1}^{\ell}n_{k}\;.
\]

The right-hand side, divided by the factor $2(m+1)$, which is common to all the inequalities, is

\[
\ell+\sum_{j=1}^{\ell-1}\sum_{k=1}^{j}\frac{1}{k}=\ell+\sum_{k=1}^{\ell-1}\sum_{j=1}^{\ell-k}\frac{1}{k}=\ell+\sum_{k=1}^{\ell-1}\frac{\ell-k}{k}=\ell+\ell\cdot\sum_{k=1}^{\ell-1}\frac{1}{k}-(\ell-1)=\ell\cdot\sum_{k=1}^{\ell}\frac{1}{k}\;.
\]

Dividing both sides by $\ell$ gives the inequality in Equation (4). ∎ (Lemma 5)

4.2 An Algorithm for LZ77

This subsection describes an algorithm for approximating the compressibility of an input string with respect to LZ77, which uses an approximation algorithm for Colors as a subroutine. The main tool in the reduction from LZ77 to Colors consists of structural lemmas 4 and 5, summarized in the following corollary.

Corollary 7

For any $\ell_{0}\geq 1$, let $m=m(\ell_{0})=\max_{\ell=1}^{\ell_{0}}\frac{d_{\ell}(w)}{\ell}$. Then

\[
m\;\leq\;C_{\text{LZ}}(w)\;\leq\;4\cdot\left(m\log\ell_{0}+\frac{n}{\ell_{0}}\right)\;.
\]

The corollary allows us to approximate $C_{\text{LZ}}$ from estimates of $d_{\ell}$ for all $\ell\in[\ell_{0}]$. To obtain these estimates, we use the algorithm of [7] for Colors as a subroutine (in the full version [16] we also describe a simpler Colors algorithm with the same provable guarantees). Recall that an algorithm for Colors approximates the number of distinct colors in an input string, where the $i$th character represents the $i$th color. We denote the number of colors in an input string $\tau$ by ${C_{\rm COL}}(\tau)$. To approximate $d_{\ell}$, the number of distinct length-$\ell$ substrings in $w$, using an algorithm for Colors, view each length-$\ell$ substring as a separate color. Each query of the algorithm for Colors can be implemented by $\ell$ queries to $w$.
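This reduction can be phrased as wrapping the string in a "color oracle", sketched below. The wrapper name is ours, and `colors_estimator` stands for whichever Colors subroutine is plugged in (e.g., the algorithm of [7]); it is an assumed interface, not something specified in this paper.

```python
def substring_color_oracle(w: str, ell: int):
    """View w as a Colors instance of length n - ell + 1: the color of
    position i (0-indexed) is the length-ell substring starting there.
    One color query costs ell character queries to w."""
    n_prime = len(w) - ell + 1
    def color(i: int) -> str:
        return w[i:i + ell]       # ell queries to w
    return n_prime, color

# A Colors subroutine would then be invoked as
#   estimate = colors_estimator(n_prime, color, B)
# returning (under our assumed interface) a B-estimate of the number of
# distinct colors, i.e., of d_ell(w).
n_prime, color = substring_color_oracle("abracadabra", ell=3)
print(n_prime, color(0), color(7))   # 9, 'abr', 'abr'
```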

Let $\textsc{Estimate}(\ell,B,\delta)$ be a procedure that, given access to $w$, an index $\ell\in[n]$, an approximation parameter $B=B(n,\ell)>1$ and a confidence parameter $\delta\in[0,1]$, computes a $B$-estimate for $d_{\ell}$ with probability at least $1-\delta$. It can be implemented using an algorithm for Colors, as described above, and employing standard amplification techniques to boost the success probability from $\frac{2}{3}$ to $1-\delta$: running the basic algorithm $\Theta(\log\delta^{-1})$ times and outputting the median. Since the algorithm of [7] requires $O(n/B^{2})$ queries, the query complexity of $\textsc{Estimate}(\ell,B,\delta)$ is $O\left(\frac{n}{B^{2}}\,\ell\log\delta^{-1}\right)$. Using $\textsc{Estimate}(\ell,B,\delta)$ as a subroutine, we get the following approximation algorithm for the cost of LZ77.

Algorithm II: An $(A,\epsilon)$-approximation for $C_{\text{LZ}}(w)$

1. Set $\ell_{0}=\left\lceil\frac{2}{A\epsilon}\right\rceil$ and $B=\frac{A}{2\sqrt{\log(2/(A\epsilon))}}$.
2. For all $\ell$ in $[\ell_{0}]$, let $\hat{d}_{\ell}=\textsc{Estimate}(\ell,B,\frac{1}{3\ell_{0}})$.
3. Combine the estimates to get an approximation of $m$ from Corollary 7: set $\hat{m}=\max_{\ell}\frac{\hat{d}_{\ell}}{\ell}$.
4. Output $\widehat{C}_{\text{LZ}}=\hat{m}\cdot\frac{A}{B}+\epsilon n$.
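The skeleton below mirrors Algorithm II, with the $\textsc{Estimate}$ procedure left as a parameter; the plumbing and the choice of logarithm base are our own illustrative assumptions and only show how the estimates are combined.

```python
import math

def algorithm_II(n: int, A: float, eps: float, estimate_d_ell) -> float:
    """(A, eps)-approximation of C_LZ(w), following Algorithm II.
    `estimate_d_ell(ell, B, delta)` is assumed to return a B-estimate of
    d_ell(w) with probability >= 1 - delta (e.g., via a Colors subroutine)."""
    ell0 = math.ceil(2 / (A * eps))
    B = A / (2 * math.sqrt(math.log2(2 / (A * eps))))   # base-2 log assumed
    # B-estimates of d_ell for ell = 1..ell0, each failing w.p. <= 1/(3*ell0)
    d_hat = {ell: estimate_d_ell(ell, B, 1 / (3 * ell0))
             for ell in range(1, ell0 + 1)}
    m_hat = max(d_hat[ell] / ell for ell in d_hat)      # estimate of m (Corollary 7)
    return m_hat * (A / B) + eps * n
```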

Theorem 8

Algorithm II $(A,\epsilon)$-estimates $C_{\text{LZ}}(w)$. With a proper implementation that reuses queries and an appropriate data structure, its query and time complexity are $\tilde{O}\left(\frac{n}{A^{3}\epsilon}\right)$.

Proof: By the union bound, with probability at least $\frac{2}{3}$, all values $\hat{d}_{\ell}$ computed by the algorithm are $B$-estimates for the corresponding $d_{\ell}$. When this holds, $\hat{m}$ is a $B$-estimate for $m$ from Corollary 7, which implies that

\[
\frac{\hat{m}}{B}\;\leq\;C_{\text{LZ}}(w)\;\leq\;4\cdot\left(\hat{m}B\log\ell_{0}+\frac{n}{\ell_{0}}\right)\;.
\]

Equivalently, $\frac{C_{\text{LZ}}-4(n/\ell_{0})}{4B\log\ell_{0}}\leq\hat{m}\leq B\cdot C_{\text{LZ}}$. Multiplying all three terms by $\frac{A}{B}$ and adding $\epsilon n$ to them, and then substituting the parameter settings for $\ell_{0}$ and $B$ specified in the algorithm, shows that $\widehat{C}_{\text{LZ}}$ is indeed an $(A,\epsilon)$-estimate for $C_{\text{LZ}}$.

As explained before the algorithm statement, each call to $\textsc{Estimate}(\ell,B,\frac{1}{3\ell_{0}})$ costs $O\left(\frac{n}{B^{2}}\,\ell\log\ell_{0}\right)$ queries. Since the subroutine is called for all $\ell\in[\ell_{0}]$, a straightforward implementation of the algorithm would result in $O\left(\frac{n}{B^{2}}\,\ell^{2}_{0}\log\ell_{0}\right)$ queries. Our analysis of the algorithm, however, does not rely on the independence of queries used in different calls to the subroutine, since we employ the union bound to calculate the error probability. It still applies if we first run $\textsc{Estimate}$ to approximate $d_{\ell_{0}}$ and then reuse its queries for the remaining calls to the subroutine, as though each such call requested to query only the length-$\ell$ prefixes of the length-$\ell_{0}$ substrings queried in the first call. With this implementation, the query complexity is $O\left(\frac{n}{B^{2}}\,\ell_{0}\log\ell_{0}\right)=O\left(\frac{n}{A^{3}\epsilon}\log^{2}\frac{1}{A\epsilon}\right)$. To get the same running time, one can maintain, for each $\ell\in[\ell_{0}]$, a counter of the number of distinct length-$\ell$ substrings seen so far, and use a trie to keep the information about the queried substrings. Every time a new node at some depth $\ell$ is added to the trie, the $\ell$th counter is incremented. ∎
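A small sketch of the trie-based bookkeeping described above (counting, for every $\ell\leq\ell_{0}$, the distinct length-$\ell$ prefixes among a set of sampled length-$\ell_{0}$ substrings) is given below; it illustrates only the data structure, not the full query-reuse analysis.

```python
def distinct_prefix_counts(samples, ell0):
    """Given sampled length-ell0 substrings, return counts[ell-1] = number of
    distinct length-ell prefixes seen, for ell = 1..ell0, using a trie.
    A counter is incremented exactly when a new trie node appears at depth ell."""
    root = {}
    counts = [0] * (ell0 + 1)          # counts[0] unused
    for s in samples:
        node = root
        for depth, ch in enumerate(s[:ell0], start=1):
            if ch not in node:
                node[ch] = {}
                counts[depth] += 1     # new distinct prefix of this length
            node = node[ch]
    return counts[1:]

samples = ["abab", "abba", "abab", "baba"]
print(distinct_prefix_counts(samples, ell0=4))   # [2, 2, 3, 3]
```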

4.3 Lower Bounds: Reducing Colors to LZ77

We have demonstrated that estimating the LZ77 compressibility of a string reduces to Colors. As shown in [17], Colors is quite hard, and it is not possible to improve much on the simple approximation algorithm of [7], on which we base the LZ77 approximation algorithm in the previous subsection. A natural question is whether there is a better algorithm for the LZ77 estimation problem. That is, is LZ77 estimation strictly easier than Colors? As we shall see, it is not much easier in general.

Lemma 9 (Reduction from Colors to LZ77)

Suppose there exists an algorithm ${\cal A}_{\text{LZ}}$ that, given access to a string $w$ of length $n$ over an alphabet $\Sigma$, performs $q=q(n,|\Sigma|,\alpha,\beta)$ queries and, with probability at least $5/6$, distinguishes between the case that $C_{\text{LZ}}(w)\leq\alpha n$ and the case that $C_{\text{LZ}}(w)>\beta n$, for some $\alpha<\beta$.

Then there is an algorithm for Colors that takes inputs of length $n^{\prime}=\Theta(\alpha n)$, performs $q$ queries, and, with probability at least $2/3$, distinguishes inputs with at most $\alpha^{\prime}n^{\prime}$ colors from those with at least $\beta^{\prime}n^{\prime}$ colors, where $\alpha^{\prime}=\alpha/2$ and $\beta^{\prime}=\beta\cdot 2\cdot\max\left\{1,\frac{4\log n^{\prime}}{\log|\Sigma|}\right\}$.

Two notes are in order regarding the reduction. The first is that the gap between the parameters $\alpha^{\prime}$ and $\beta^{\prime}$ required by the Colors algorithm obtained in Lemma 9 is larger than the gap between the parameters $\alpha$ and $\beta$ for which the LZ-compressibility algorithm works, by a factor of $4\cdot\max\left\{1,\frac{4\log n^{\prime}}{\log|\Sigma|}\right\}$. In particular, for binary strings $\frac{\beta^{\prime}}{\alpha^{\prime}}=O\left(\log n^{\prime}\cdot\frac{\beta}{\alpha}\right)$, while if the alphabet is large, say, of size at least $n^{\prime}$, then $\frac{\beta^{\prime}}{\alpha^{\prime}}=O\left(\frac{\beta}{\alpha}\right)$. In general, the gap increases by at most $O(\log n^{\prime})$. The second note is that the number of queries, $q$, is a function of the parameters of the LZ-compressibility problem and, in particular, of the length of the input strings, $n$. Hence, when writing $q$ as a function of the parameters of Colors and, in particular, as a function of $n^{\prime}=\Theta(\alpha n)$, the complexity may be somewhat larger. It is an open question whether a reduction without such an increase is possible.

Prior to proving the lemma, we discuss its implications. The authors of [17] give a strong lower bound on the sample complexity of approximation algorithms for Colors. An interesting special case is that a subpolynomial-factor approximation for Colors requires many queries even with a promise that the strings are only slightly compressible: for any $B=n^{o(1)}$, distinguishing inputs with $n/11$ colors from those with $n/B$ colors requires $n^{1-o(1)}$ queries. Lemma 9 extends that bound to estimating LZ compressibility: for any $B=n^{o(1)}$ and any alphabet $\Sigma$, distinguishing strings with LZ compression cost $\tilde{\Omega}(n)$ from strings with cost $\tilde{O}(n/B)$ requires $n^{1-o(1)}$ queries.

The lower bound for Colors in [17] applies to a broad range of parameters, and yields the following general statement when combined with Lemma 9:

Corollary 10 (LZ is Hard to Approximate with Few Samples)

For sufficiently large $n$, all alphabets $\Sigma$ and all $B\leq n^{1/4}/(4\log n^{3/2})$, there exist $\alpha,\beta\in(0,1)$ with $\beta=\Omega\left(\min\left\{1,\frac{\log|\Sigma|}{4\log n}\right\}\right)$ and $\alpha=O\left(\frac{\beta}{B}\right)$, such that every algorithm that distinguishes between the case that $C_{\text{LZ}}(w)\leq\alpha n$ and the case that $C_{\text{LZ}}(w)>\beta n$ for $w\in\Sigma^{n}$ must perform $\Omega\left(\left(\frac{n}{B^{\prime}}\right)^{1-\frac{2}{k}}\right)$ queries, where $B^{\prime}=\Theta\left(B\cdot\max\left\{1,\frac{4\log n}{\log|\Sigma|}\right\}\right)$ and $k=\Theta\left(\sqrt{\frac{\log n}{\log B^{\prime}+\frac{1}{2}\log\log n}}\right)$.

Proof of Lemma 9. Suppose we have an algorithm ${\cal A}_{\text{LZ}}$ for LZ-compressibility as specified in the premise of Lemma 9. Here we show how to transform a Colors instance $\tau$ into an input for ${\cal A}_{\text{LZ}}$, and how to use the output of ${\cal A}_{\text{LZ}}$ to distinguish $\tau$ with at most $\alpha^{\prime}n^{\prime}$ colors from $\tau$ with at least $\beta^{\prime}n^{\prime}$ colors, where $\alpha^{\prime}$ and $\beta^{\prime}$ are as specified in the lemma. We shall assume that $\beta^{\prime}n^{\prime}$ is bounded below by some sufficiently large constant. Recall that in the reduction from LZ77 to Colors, we transformed substrings into colors. Here we perform the reverse operation.

Given a Colors instance $\tau$ of length $n^{\prime}$, we transform it into a string $w$ of length $n=n^{\prime}\cdot k$ over $\Sigma$, where $k=\lceil\frac{1}{\alpha^{\prime}}\rceil$. We then run ${\cal A}_{\text{LZ}}$ on $w$ to obtain information about $\tau$. We begin by replacing each color in $\tau$ with a uniformly selected substring in $\Sigma^{k}$. The string $w$ is the concatenation of the corresponding substrings (which we call blocks). We show that:

1. If $\tau$ has at most $\alpha^{\prime}n^{\prime}$ colors, then $C_{\text{LZ}}(w)\leq 2\alpha^{\prime}n$;

2. If $\tau$ has at least $\beta^{\prime}n^{\prime}$ colors, then ${\rm Pr}_{w}\left[C_{\text{LZ}}(w)\geq\frac{1}{2}\cdot\min\left\{1,\frac{\log|\Sigma|}{4\log n^{\prime}}\right\}\cdot\beta^{\prime}n\right]\geq\frac{7}{8}$.

That is, in the first case we get an input $w$ for ${\cal A}_{\text{LZ}}$ such that $C_{\text{LZ}}(w)\leq\alpha n$ for $\alpha=2\alpha^{\prime}$, and in the second case, with probability at least $7/8$, $C_{\text{LZ}}(w)\geq\beta n$ for $\beta=\frac{1}{2}\cdot\min\left\{1,\frac{\log|\Sigma|}{4\log n^{\prime}}\right\}\cdot\beta^{\prime}$. Recall that the gap between $\alpha^{\prime}$ and $\beta^{\prime}$ is assumed to be sufficiently large so that $\alpha<\beta$. To distinguish the case that ${C_{\rm COL}}(\tau)\leq\alpha^{\prime}n^{\prime}$ from the case that ${C_{\rm COL}}(\tau)>\beta^{\prime}n^{\prime}$, we can run ${\cal A}_{\text{LZ}}$ on $w$ and output its answer. Taking into account the failure probability of ${\cal A}_{\text{LZ}}$ and the failure probability in Item 2 above, the lemma follows.

We prove these two claims momentarily, but first observe that in order to run the algorithm ${\cal A}_{\text{LZ}}$, there is no need to generate the whole string $w$. Rather, upon each query of ${\cal A}_{\text{LZ}}$ to $w$, if the index of the query belongs to a block that has already been generated, the answer to ${\cal A}_{\text{LZ}}$ is determined. Otherwise, we query the element (color) of $\tau$ that corresponds to the block. If this color has not been observed yet, we set the block to a uniformly selected substring in $\Sigma^{k}$. If this color was already observed in $\tau$, we set the block according to the substring that was already selected for that color. In either case, the query to $w$ can now be answered. Thus, each query to $w$ is answered by performing at most one query to $\tau$.
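A sketch of this lazy simulation (answering queries to $w$ while generating blocks on demand) is shown below; the alphabet, the caching scheme, and the wrapper names are illustrative choices of ours.

```python
import random

def lazy_w_oracle(tau, k: int, alphabet: str = "01"):
    """Simulate query access to w (|w| = len(tau) * k) given query access to tau.
    Block i of w is a random string in Sigma^k chosen once per *color* tau[i],
    so equal colors map to identical blocks. Each query to w uses at most
    one query to tau."""
    block_of_color = {}          # color -> its length-k block
    block_of_index = {}          # block index -> its block (caches tau queries)

    def query_w(j: int) -> str:
        i = j // k               # block containing position j
        if i not in block_of_index:
            color = tau[i]       # the single query to tau for this block
            if color not in block_of_color:
                block_of_color[color] = "".join(
                    random.choice(alphabet) for _ in range(k))
            block_of_index[i] = block_of_color[color]
        return block_of_index[i][j % k]

    return query_w

random.seed(4)
tau = [1, 2, 1, 3]               # a Colors instance with 3 colors
query_w = lazy_w_oracle(tau, k=4)
print("".join(query_w(j) for j in range(16)))   # blocks 0 and 2 are identical
```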

It remains to prove the two claims concerning the relation between the number of colors in $\tau$ and $C_{\text{LZ}}(w)$. If $\tau$ has at most $\alpha^{\prime}n^{\prime}$ colors, then $w$ contains at most $\alpha^{\prime}n^{\prime}$ distinct blocks. Since each block is of length $k$, at most $k$ compressed segments start in each new block. By the definition of LZ77, at most one compressed segment starts in each repeated block. Hence,

\[
C_{\text{LZ}}(w)\leq\alpha^{\prime}n^{\prime}\cdot k+(1-\alpha^{\prime})n^{\prime}\leq\alpha^{\prime}n+n^{\prime}\leq 2\alpha^{\prime}n.
\]

If $\tau$ contains $\beta^{\prime}n^{\prime}$ or more colors, then $w$ is generated using at least $\beta^{\prime}n^{\prime}\cdot\log(|\Sigma|^{k})=\beta^{\prime}n\log|\Sigma|$ random bits. Hence, with high probability (e.g., at least $7/8$) over the choice of these random bits, any lossless compression algorithm (and in particular LZ77) must use at least $\beta^{\prime}n\log|\Sigma|-3$ bits to compress $w$. Each symbol of the compressed version of $w$ can be represented by $\max\{\lceil\log|\Sigma|\rceil,2\lceil\log n\rceil\}+1$ bits, since it is either an alphabet symbol or a pointer-length pair. Since $n=n^{\prime}\lceil 1/\alpha^{\prime}\rceil$ and $\alpha^{\prime}>1/n^{\prime}$, each symbol takes at most $\max\{4\log n^{\prime},\log|\Sigma|\}+2$ bits to represent. This means that the number of symbols in the compressed version of $w$ is

\[
C_{\text{LZ}}(w)\geq\frac{\beta^{\prime}n\log|\Sigma|-3}{\max\left\{4\log n^{\prime},\log|\Sigma|\right\}+2}\geq\frac{1}{2}\cdot\beta^{\prime}n\cdot\min\left\{1,\frac{\log|\Sigma|}{4\log n^{\prime}}\right\},
\]

where we have used the fact that $\beta^{\prime}n^{\prime}$, and hence $\beta^{\prime}n$, is at least some sufficiently large constant. ∎

Acknowledgements.

We would like to thank Amir Shpilka, who was involved in a related paper on distribution support testing [17] and whose comments greatly improved drafts of this article. We would also like to thank Eric Lehman for discussing his thesis material with us and Oded Goldreich and Omer Reingold for helpful comments.

References

  • [1] Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. J. Comput. Syst. Sci., 58(1):137–147, 1999.
  • [2] Ziv Bar-Yossef, Ravi Kumar, and D. Sivakumar. Sampling algorithms: lower bounds and applications. In Proceedings of the thirty-third annual ACM symposium on Theory of computing, pages 266–275, New York, NY, USA, 2001. ACM Press.
  • [3] Tugkan Batu, Sanjoy Dasgupta, Ravi Kumar, and Ronitt Rubinfeld. The complexity of approximating the entropy. SIAM Journal on Computing, 35(1):132–150, 2005.
  • [4] Dario Benedetto, Emanuele Caglioti, and Vittorio Loreto. Language trees and zipping. Phys. Rev. Lett., 88(4), 2002. See comment by Khmelev DV, Teahan WJ, Phys Rev Lett. 90(8):089803, 2003 and the reply Phys Rev Lett. 90(8):089804, 2003.
  • [5] Mickey Brautbar and Alex Samorodnitsky. Approximating the entropy of large alphabets. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2007.
  • [6] John Bunge. Bibliography on estimating the number of classes in a population. www.stat.cornell.edu/~bunge/bibliography.htm.
  • [7] Moses Charikar, Surajit Chaudhuri, Rajeev Motwani, and Vivek R. Narasayya. Towards estimation error guarantees for distinct values. In PODS, pages 268–279. ACM, 2000.
  • [8] Rudi Cilibrasi and Paul M. B. Vitányi. Clustering by compression. IEEE Transactions on Information Theory, 51(4):1523–1545, 2005.
  • [9] Rudi Cilibrasi and Paul M. B. Vitányi. Similarity of objects and the meaning of words. In Jin-Yi Cai, S. Barry Cooper, and Angsheng Li, editors, TAMC, volume 3959 of Lecture Notes in Computer Science, pages 21–45. Springer, 2006.
  • [10] T. Cover and J. Thomas. Elements of Information Theory. Wiley & Sons, 1991.
  • [11] O. V. Kukushkina, A. A. Polikarpov, and D. V. Khmelev. Using literal and grammatical statistics for authorship attribution. Prob. Peredachi Inf., 37(2):96–98, 2000. [Probl. Inf. Transm. ( Engl. Transl.) 37, 172–184 (2001)].
  • [12] Eric Lehman and Abhi Shelat. Approximation algorithms for grammar-based compression. In Proceedings of the Thirteenth Annual ACM–SIAM Symposium on Discrete Algorithms, pages 205–212, 2002.
  • [13] Ming Li, Xin Chen, Xin Li, Bin Ma, and Paul M. B. Vitányi. The similarity metric. IEEE Transactions on Information Theory, 50(12):3250–3264, 2004. Prelim. version in SODA 2003.
  • [14] Ming Li and Paul Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, 1997.
  • [15] David Loewenstern, Haym Hirsh, Michiel Noordewier, and Peter Yianilos. DNA sequence classification using compression-based induction. Technical Report 95-04, Rutgers University, DIMACS, 1995.
  • [16] Sofya Raskhodnikova, Dana Ron, Ronitt Rubinfeld, and Adam Smith. Sublinear algorithms for approximating string compressibility. Full version of this paper, in preparation., June 2007.
  • [17] Sofya Raskhodnikova, Dana Ron, Amir Shpilka, and Adam Smith. On the difficulty of approximating the support size of a distribution. Manuscript, 2007.
  • [18] Jacob Ziv and Abraham Lempel. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, 23:337–343, 1977.
  • [19] Jacob Ziv and Abraham Lempel. Compression of individual sequences via variable-rate coding. IEEE Transactions on Information Theory, 24:530–536, 1978.