Sublinear Algorithms for Approximating String Compressibility
Abstract
We raise the question of approximating the compressibility of a string with respect to a fixed compression scheme, in sublinear time. We study this question in detail for two popular lossless compression schemes: run-length encoding (RLE) and Lempel-Ziv (LZ), and present sublinear algorithms for approximating compressibility with respect to both schemes. We also give several lower bounds that show that our algorithms for both schemes cannot be improved significantly.
Our investigation of LZ yields results whose interest goes beyond the initial questions we set out to study. In particular, we prove combinatorial structural lemmas that relate the compressibility of a string with respect to Lempel-Ziv to the number of distinct short substrings contained in it. In addition, we show that approximating the compressibility with respect to LZ is related to approximating the support size of a distribution.
1 Introduction
Given an extremely long string, it is natural to wonder how compressible it is. This fundamental question is of interest to a wide range of areas of study, including computational complexity theory, machine learning, storage systems, and communications. As massive data sets are now commonplace, the ability to estimate their compressibility with extremely efficient, even sublinear time, algorithms, is gaining in importance. The most general measure of compressibility, Kolmogorov complexity, is not computable (see [14] for a textbook treatment), nor even approximable. Even under restrictions which make it computable (such as a bound on the running time of decompression), it is probably hard to approximate in polynomial time, since an approximation would allow distinguishing random from pseudorandom strings and, hence, inverting one-way functions. However, the question of how compressible a large string is with respect to a specific compression scheme may be tractable, depending on the particular scheme.
We raise the question of approximating the compressibility of a string with respect to a fixed compression scheme, in sublinear time, and give algorithms and nearly matching lower bounds for several versions of the problem. While this question is new, for one compression scheme, answers follow from previous work. Namely, compressibility under Huffman encoding is determined by the entropy of the symbol frequencies. Batu et al. [3] and Brautbar and Samorodnitsky [5] study the problem of approximating the entropy of a distribution from a small number of samples, and their results immediately imply algorithms and lower bounds for approximating compressibility under Huffman encoding.
In this work we study the compressibility approximation question in detail for two popular lossless compression schemes: run-length encoding (RLE) and Lempel-Ziv (LZ) [18]. In the RLE scheme, each run, or sequence of consecutive occurrences of the same character, is stored as a pair: the character, and the length of the run. Run-length encoding is used to compress black and white images, faxes, and other simple graphic images, such as icons and line drawings, which usually contain many long runs. In the LZ scheme (we study the variant known as LZ77 [18], which achieves the best compressibility; several other variants do not compress some inputs as well, but can be implemented more efficiently), a left-to-right pass of the input string is performed, and at each step the longest sequence of characters that has started in the previous portion of the string is replaced with a pointer to the previous location and the length of the sequence (for a formal definition, see Section 4). The LZ scheme and its variants have been studied extensively in machine learning and information theory, in part because they compress strings generated by an ergodic source to the shortest possible representation (given by the entropy) in the asymptotic limit (cf. [10]). Many popular archivers, such as gzip, use variations on the LZ scheme. We present sublinear algorithms and corresponding lower bounds for approximating compressibility with respect to both schemes.
Motivation.
Computing the compressibility of a large string with respect to a specific compression scheme may be done in order to decide whether or not to compress the file, to choose which compression method is the most suitable, or to check whether a small modification to the file (e.g., a rotation of an image) will make it significantly more compressible. (For example, a variant of the RLE scheme, typically used to compress images, runs RLE on the concatenated rows of the image and on the concatenated columns of the image, and stores the shorter of the two compressed files.) Moreover, compression schemes are used as tools for measuring properties of strings such as similarity and entropy. As such, they are applied widely in data mining, natural language processing and genomics (see, for example, Loewenstern et al. [15], Kukushkina et al. [11], Benedetto et al. [4], Li et al. [13] and Cilibrasi and Vitányi [8, 9]). In these applications, one typically needs only the length of the compressed version of a file, not the output itself. For example, in the clustering algorithm of [8], the distance between two objects x and y is given by a normalized version of the length of their compressed concatenation xy. The algorithm first computes all pairwise distances, and then analyzes the resulting distance matrix. This requires Θ(n²) runs of a compression scheme, such as gzip, to cluster n objects. Even a weak approximation algorithm that can quickly rule out very incompressible strings would reduce the running time of the clustering computations dramatically.
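To illustrate this use of compressors, here is a minimal sketch of a compression-based distance in the spirit of [8], with zlib standing in for gzip; the normalized form shown is the normalized compression distance of Cilibrasi and Vitányi, and the function names are illustrative.

```python
import zlib

def clen(data: bytes) -> int:
    """Length of the zlib-compressed representation of `data`."""
    return len(zlib.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: close to 0 for similar strings."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Clustering n objects requires all pairwise distances, i.e., Theta(n^2)
# compressor runs, which is why fast compressibility estimates would help.
print(ncd(b"abab" * 100, b"abab" * 90))           # near 0: very similar
print(ncd(b"abab" * 100, bytes(range(256)) * 2))  # closer to 1: dissimilar
```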
Multiplicative and Additive Approximations.
We consider three approximation notions: additive, multiplicative, and the combination of additive and multiplicative. On an input of length n, the quantities we approximate range from 1 to n. An additive approximation algorithm is allowed an additive error of εn, where 0 < ε < 1 is a parameter. The output of a multiplicative approximation algorithm is within a factor λ of the correct answer. The combined notion allows both types of error: the algorithm should output an estimate Ĉ of the compression cost C such that C/λ − εn ≤ Ĉ ≤ λ·C + εn. Our algorithms are randomized, and for all inputs the approximation guarantees hold with probability at least 2/3.
We are interested in sublinear approximation algorithms, which read few positions of the input strings. For the schemes we study, purely multiplicative approximation algorithms must read almost the entire input on some strings. Nevertheless, algorithms with additive error guarantees, or with both multiplicative and additive error, are often sufficient for distinguishing very compressible inputs from inputs that are not well compressible. For both the RLE and LZ schemes, we give algorithms with combined multiplicative and additive error that make few queries to the input. When it comes to purely additive approximations, however, the two schemes differ sharply: sublinear additive approximations are possible for RLE compressibility, but not for LZ compressibility.
1.1 Results for Run-Length Encoding
For RLE, we present sublinear algorithms for all three approximation notions defined above, providing a trade-off between the quality of approximation and the running time. The algorithms that allow an additive approximation run in time independent of the input size. Specifically, an εn-additive estimate can be obtained in time Õ(1/ε³) (the notation Õ(g) means O(g·logᶜ g) for some constant c), and a combined estimate, with a multiplicative error of 3 and an additive error of εn, can be obtained in time Õ(1/ε). As for a strict multiplicative approximation, we give a simple 4-multiplicative approximation algorithm that runs in expected time Õ(n/C_RLE(w)), where C_RLE(w) denotes the compression cost of the string w. For any γ > 0, the multiplicative error can be improved to 1+γ at the cost of multiplying the running time by poly(1/γ). Observe that the algorithm is more efficient when the string is less compressible, and less efficient when the string is more compressible. One of our lower bounds justifies such a behavior and, in particular, shows that a constant-factor approximation requires linear time for strings that are very compressible. We also give a lower bound of Ω(1/ε²) for εn-additive approximation.
1.2 Results for Lempel-Ziv
We prove that approximating compressibility with respect to LZ is closely related to the following problem, which we call Colors: given access to a string x of length n over an arbitrary alphabet, approximate the number of distinct symbols ("colors") in x. This is essentially equivalent to estimating the support size of a distribution [17]. Variants of this problem have been considered under various guises in the literature: in databases it is referred to as approximating the number of distinct values (Charikar et al. [7]), in statistics as estimating the number of species in a population (see the over 800 references maintained by Bunge [6]), and in streaming as approximating the frequency moment F₀ (Alon et al. [1], Bar-Yossef et al. [2]). Most of these works, however, consider models different from ours. For our model, there is a B-multiplicative approximation algorithm of [7] that runs in time O(n/B²), matching the lower bound in [7, 2]. There is also an almost linear lower bound for approximating Colors with additive error εn [17].
We give a reduction from LZ compressibility to Colors and vice versa. These reductions allow us to employ the known results on Colors to give algorithms and lower bounds for this problem. Our approximation algorithm for LZ compressibility combines a multiplicative and an additive error. The running time of the algorithm is Õ(n/(ελ²)), where λ is the multiplicative error and εn is the additive error. In particular, this implies that for any α ∈ (0, 1), we can distinguish, in sublinear time roughly n^((1+α)/2), strings compressible to O(n^α) symbols from strings only compressible to Ω(n) symbols (to see this, set λ = n^((1−α)/2−δ) for a small δ > 0 and εn = n^((1+α)/2)).
The main tool in the algorithm consists of two combinatorial structural lemmas that relate the compressibility of a string with respect to LZ to the number of distinct short substrings contained in it. Roughly, they say that a string is well compressible with respect to LZ if and only if it contains few distinct substrings of length ℓ for all small ℓ (when considering all possible overlapping substrings). The simpler of the two lemmas was inspired by a structural lemma for grammars by Lehman and Shelat [12]. The combinatorial lemmas allow us to establish a reduction from LZ compressibility to Colors and to employ a (simple) algorithm for approximating Colors in our algorithm for LZ.
Interestingly, we can show that there is also a reduction in the opposite direction: namely, approximating Colors reduces to approximating LZ compressibility. The lower bound of [17], combined with the reduction from Colors to LZ, implies that our algorithm for LZ cannot be improved significantly. In particular, our lower bound implies that distinguishing strings compressible by LZ to γn symbols from strings only compressible to γ′n symbols requires n^(1−o(1)) queries, even when the ratio γ′/γ is as large as a subpolynomial factor n^(o(1)).
1.3 Further Research
It would be interesting to extend our results for estimating the compressibility under LZ77 to other variants of LZ, such as dictionary-based LZ78 [19]. Compressibility under LZ78 can be drastically different from compressibility under LZ77: e.g., for the string (01)^(n/2) the two costs differ by a factor of roughly √n, since LZ77 compresses it to O(1) symbols while the LZ78 parsing has Θ(√n) phrases. Another open question is approximating compressibility for schemes other than RLE and LZ. In particular, it would be interesting to design approximation algorithms for lossy compression schemes such as JPEG, MPEG and MP3. One lossy compression scheme to which our results extend directly is Lossy RLE, where some characters, e.g., ones that represent similar colors, are treated as the same character.
1.4 Organization
Section 2 contains the definitions of the approximation notions we use. Section 3 presents our algorithms and lower bounds for RLE, and Section 4 presents our results for LZ77.
2 Preliminaries
The input to our algorithms is usually a string w of length n over a finite alphabet Σ; we use [n] to denote {1, …, n}. The quantities we approximate, such as the compression cost of w under a specific algorithm, range from 1 to n. We consider estimates of these quantities that have both multiplicative and additive error. We call Ĉ a (λ, ε)-estimate for C if C/λ − εn ≤ Ĉ ≤ λ·C + εn, and say an algorithm (λ, ε)-estimates C (or is a (λ, ε)-approximation algorithm for C) if for each input it produces a (λ, ε)-estimate for C with probability at least 2/3.
When the error is purely additive or multiplicative, we use the following shorthand: an εn-additive estimate stands for a (1, ε)-estimate, and a λ-multiplicative estimate, or λ-estimate, stands for a (λ, 0)-estimate. An algorithm computing an εn-additive estimate with probability at least 2/3 is an εn-additive approximation algorithm; if it computes a λ-multiplicative estimate, it is a λ-multiplicative approximation algorithm, or λ-approximation algorithm.
For some settings of the parameters, obtaining a valid estimate is trivial. For a quantity in [1, n], for example, n/2 is an (n/2)-additive estimate, √n is a √n-estimate, and εn is a (λ, ε)-estimate whenever λ ≥ 1/(2ε).
3 Run-Length Encoding
Every n-character string w over alphabet Σ can be partitioned into maximal runs of identical characters of the form σ^ℓ, where σ is a symbol in Σ and ℓ is the length of the run, and consecutive runs are composed of different symbols. In the Run-Length Encoding of w, each such run is replaced by the pair (σ, ℓ). The number of bits needed to represent such a pair is ⌈log(ℓ+1)⌉ + ⌈log|Σ|⌉ (all logarithms are base 2), plus the overhead, which depends on how the separation between the characters and the lengths is implemented. One way to implement it is to use prefix-free encodings for the lengths. For simplicity we ignore the overhead in the above expression, but our analysis can be adapted to any implementation choice. The cost of the run-length encoding, denoted C_RLE(w), is the sum over all runs of ⌈log(ℓ+1)⌉ + ⌈log|Σ|⌉.
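For concreteness, the cost C_RLE(w) defined above can be computed exactly in one linear pass (the function name is illustrative):

```python
import math
from itertools import groupby

# Exact RLE cost: each maximal run of length L costs
# ceil(log2(L+1)) + ceil(log2(|Sigma|)) bits; per the text above,
# the separation overhead is ignored.
def rle_cost(w: str, alphabet_size: int) -> int:
    cost = 0
    for _, run in groupby(w):
        run_len = sum(1 for _ in run)
        cost += math.ceil(math.log2(run_len + 1)) + math.ceil(math.log2(alphabet_size))
    return cost

print(rle_cost("aaaabbbaab", 2))  # four runs: lengths 4, 3, 2, 1
```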
3.1 An εn-Additive Estimate with Õ(1/ε³) Queries
Our first algorithm for approximating the cost of RLE is very simple: it samples a few positions in the input string uniformly at random and bounds the lengths of the runs to which they belong by looking at the positions to the left and to the right of each sample. If the corresponding run is short, its length is established exactly; if it is long, we argue that it does not contribute much to the encoding cost. For each index i ∈ [n], let ℓ(i) be the length of the run to which w_i belongs. The cost contribution of index i is defined as

    c(i) = (⌈log(ℓ(i)+1)⌉ + ⌈log|Σ|⌉) / ℓ(i).    (1)

By definition, C_RLE(w) = Σ_{i=1}^{n} c(i) = n·E_i[c(i)], where E_i denotes expectation over a uniformly random choice of i ∈ [n]. The algorithm, presented below, estimates the encoding cost by the average of the cost contributions of the sampled positions that lie in short runs, multiplied by n.
Algorithm I: An εn-additive Approximation for C_RLE(w) 1. Select m = Θ(1/ε²) indices i_1, …, i_m uniformly and independently at random. 2. For each j ∈ [m]: (a) Query w[i_j] and up to 2k positions in its vicinity to bound ℓ(i_j), where k is the smallest integer with (⌈log(k+1)⌉ + ⌈log|Σ|⌉)/k ≤ ε/2 (so k = Θ((1/ε)·log(1/ε)) for constant |Σ|). (b) Set ĉ(i_j) = c(i_j) if ℓ(i_j) ≤ k and ĉ(i_j) = 0 otherwise. 3. Output Ĉ = (n/m)·Σ_{j=1}^{m} ĉ(i_j).
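A Python sketch of Algorithm I follows. The sample size and the run-length threshold are illustrative choices consistent with the analysis below, not constants from the paper.

```python
import math
import random

def rle_cost_additive_estimate(w: str, alphabet_size: int, eps: float) -> float:
    n = len(w)
    m = math.ceil(4 / eps ** 2)  # number of sampled indices (illustrative constant)
    log_sigma = math.ceil(math.log2(alphabet_size))
    # Threshold k: indices in runs longer than k contribute at most eps/2 each.
    k = 1
    while (math.ceil(math.log2(k + 1)) + log_sigma) / k > eps / 2:
        k += 1
    total = 0.0
    for _ in range(m):
        i = random.randrange(n)
        # Scan at most k positions to each side to bound the run length l(i).
        left = i
        while left > 0 and i - left < k and w[left - 1] == w[i]:
            left -= 1
        right = i
        while right < n - 1 and right - i < k and w[right + 1] == w[i]:
            right += 1
        run_len = right - left + 1       # exact whenever run_len <= k
        if run_len <= k:                 # short run: add the contribution c(i)
            total += (math.ceil(math.log2(run_len + 1)) + log_sigma) / run_len
        # long runs are ignored (contribute 0), as in step 2(b)
    return n * total / m
```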
Correctness.
We first prove that the algorithm is an εn-additive approximation. The error of the algorithm comes from two sources: from ignoring the contribution of long runs and from sampling. The ignored indices i, namely those with ℓ(i) > k, do not contribute much to the cost. Since the cost assigned to the indices decreases with the length of the run to which they belong, for each such index,

    c(i) ≤ (⌈log(k+1)⌉ + ⌈log|Σ|⌉) / k ≤ ε/2.    (2)

Therefore,

    0 ≤ C_RLE(w) − Σ_{i: ℓ(i) ≤ k} c(i) ≤ (ε/2)·n.    (3)

Equivalently, C_RLE(w) − (ε/2)·n ≤ n·E_i[ĉ(i)] ≤ C_RLE(w), where ĉ(i) = c(i) if ℓ(i) ≤ k and ĉ(i) = 0 otherwise.

By an additive Chernoff bound, with high constant probability, the sampling error in estimating n·E_i[ĉ(i)] by Ĉ is at most (ε/2)·n (each ĉ(i) lies in [0, 1 + ⌈log|Σ|⌉], so m = Θ(1/ε²) samples suffice for constant |Σ|). Therefore, Ĉ is an εn-additive estimate of C_RLE(w), as desired.
Query and time complexity. (Assuming |Σ| is constant.) Since the number of queries performed for each selected index is O(k) = O((1/ε)·log(1/ε)), the total number of queries, as well as the running time, is O(m·k) = O((1/ε³)·log(1/ε)).
3.2 Summary of Positive Results on RLE
After stating Theorem 1, which summarizes our positive results, we briefly discuss some of the ideas used in the algorithms whose details are omitted from this version of the paper.
Theorem 1
Let w ∈ Σ^n be a string to which we are given query access.
1. Algorithm I gives an εn-additive approximation to C_RLE(w) in time O((1/ε³)·log(1/ε)).

2. C_RLE(w) can be (3, ε)-estimated in time Õ(1/ε).

3. C_RLE(w) can be 4-estimated in expected time Õ(n/C_RLE(w)). A (1+γ)-estimate of C_RLE(w) can be obtained in expected time Õ(n/C_RLE(w))·poly(1/γ). The algorithm needs no prior knowledge of C_RLE(w).
Section 3.1 gives a complete proof of Item 1. The algorithm in Item 2 partitions the positions in the string into buckets according to the length of the runs they belong to. It estimates the sizes of the different buckets with different precision, depending on the size of the bucket and the length of the runs it contains. The main idea in Item 3 is to perform a search for the value of C_RLE(w), using the algorithm from Item 2 repeatedly (with different parameters) to establish successively better estimates.
3.3 Lower Bounds for RLE
We give two lower bounds, for multiplicative and additive approximation, respectively, which establish that the running times in Items 1 and 3 of Theorem 1 are essentially tight.
Theorem 2
1. For all γ ≥ 1, any γ-multiplicative approximation algorithm for C_RLE must perform Ω(n/(γ²·C_RLE(w))) queries on some inputs. Furthermore, if the input is restricted to strings with compression cost O(log n), then Ω(n) queries are necessary.

2. For all sufficiently small ε > 0, any εn-additive approximation algorithm for C_RLE requires Ω(1/ε²) queries.
A Multiplicative Lower Bound (Proof of Theorem 2, Item 1):
The claim follows from the next lemma:
Lemma 3
For every n and every integer C ≤ n/2, there exists a family of strings, denoted B_C, for which the following holds: (1) C_RLE(w) = Θ(C·log(n/C)) for every w ∈ B_C; (2) distinguishing a uniformly random string in B_C from one in B_{C′}, where C′ > C, requires Ω(n/C′) queries.
Proof: Let k = n/C and assume for simplicity that n is divisible by 2C. Every string in B_C consists of C blocks, each of length k. Every odd block contains only 1s and every even block contains a single 0. The strings in B_C differ in the locations of the 0s within the even blocks. Every w ∈ B_C contains C/2 isolated 0s and Θ(C) runs of 1s, each of length Θ(n/C). Therefore, C_RLE(w) = Θ(C·log(n/C)). To distinguish a random string in B_C from one in B_{C′} with probability 2/3, one must make Ω(n/C′) queries since, in both cases, with asymptotically fewer queries the algorithm sees only 1’s with high probability.
Additive Lower Bound (Proof of Theorem 2, Item 2):
For any sufficiently small ε > 0 and sufficiently large n, let D_p be the following distribution over n-bit strings, parametrized by a bias p ∈ (0, 1). For simplicity, assume that n is divisible by 3. The string is determined by n/3 independent coin flips, each with bias p. Each “heads” extends the string by three runs of length 1, and each “tails”, by a run of length 3. Given the sequence of run lengths dictated by the coin flips, output the unique binary string that starts with 0 and has this sequence of run lengths (the string is uniquely determined, since consecutive runs alternate between 0s and 1s).
Let x be a random string drawn according to D_{1/2} and y, a random string drawn according to D_{1/2+ε}. The following facts are established in the full version [16]: (a) Ω(1/ε²) queries are necessary to reliably distinguish x from y, and (b) with high probability, the encoding costs of x and y differ by Ω(εn). Together these facts imply the lower bound.
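For concreteness, a minimal sketch of a sampler for this distribution; the convention that “heads” occurs with probability p is an assumption of the sketch.

```python
import random

# Sample from D_p as described above: n/3 coins, each adding either three
# runs of length 1 (heads) or one run of length 3 (tails); runs alternate
# between 0s and 1s, starting with 0.
def sample_d(n: int, p: float) -> str:
    run_lengths = []
    for _ in range(n // 3):
        if random.random() < p:       # "heads"
            run_lengths += [1, 1, 1]
        else:                         # "tails"
            run_lengths.append(3)
    out, bit = [], "0"
    for length in run_lengths:
        out.append(bit * length)
        bit = "1" if bit == "0" else "0"
    return "".join(out)

print(sample_d(30, 0.5))
```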
4 Lempel-Ziv Compression
In this section we consider a variant of Lempel and Ziv’s compression algorithm [18], which we refer to as LZ77. In all that follows we use the shorthand [n] for {1, …, n}. Let w = w_1 ⋯ w_n be a string over an alphabet Σ. Each symbol of the compressed representation of w, denoted LZ(w), is either a character in Σ or a pair (p, ℓ), where p is a pointer (index) to a location in the string w and ℓ is the length of the substring of w that this symbol represents. To compress w, the algorithm works as follows. Starting from t = 1, at each step the algorithm finds the longest substring w_t ⋯ w_{t+ℓ−1} for which there exists an index p < t such that w_p ⋯ w_{p+ℓ−1} = w_t ⋯ w_{t+ℓ−1}. (The substrings w_p ⋯ w_{p+ℓ−1} and w_t ⋯ w_{t+ℓ−1} may overlap.) If there is no such substring (that is, the character w_t has not appeared before), then the next symbol in LZ(w) is w_t, and t is incremented by 1. Otherwise, the next symbol is (p, ℓ), and t is incremented by ℓ. We refer to the substring w_t ⋯ w_{t+ℓ−1} (or w_t, when w_t is a new character) as a compressed segment.
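For concreteness, here is a straightforward quadratic-time Python reference implementation of this parse, returning the number of symbols; it is a specification aid, not an efficient compressor.

```python
def lz77_num_symbols(w: str) -> int:
    n, t, symbols = len(w), 0, 0
    while t < n:
        best = 0
        # Longest substring starting at t that also starts at some p < t;
        # the two occurrences may overlap, which the comparison below allows.
        for p in range(t):
            length = 0
            while t + length < n and w[p + length] == w[t + length]:
                length += 1
            best = max(best, length)
        symbols += 1                 # one symbol: a character or a (p, l) pair
        t += max(best, 1)            # new character advances t by 1
    return symbols

print(lz77_num_symbols("abababab"))  # 'a', 'b', then one pair: 3 symbols
```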
Let C_LZ(w) denote the number of symbols in the compressed string LZ(w). (We do not distinguish between symbols that are characters in Σ, and symbols that are pairs (p, ℓ).) Given query access to a string w, we are interested in computing an estimate Ĉ of C_LZ(w). As we shall see, this task reduces to estimating the number of distinct substrings in w of different lengths, which in turn reduces to estimating the number of distinct characters (“colors”) in a string. The actual length of the binary representation of the compressed string is larger than C_LZ(w) by at most a factor of O(log n). This factor is relatively negligible given the quality of the estimates that we can achieve in sublinear time.
We begin by relating LZ compressibility to Colors (§4.1), then use this relation to obtain algorithms (§4.2) and lower bounds (§4.3) for approximating LZ compressibility.
4.1 Structural Lemmas
Our algorithm for approximating the compressibility of an input string with respect to LZ77 uses an approximation algorithm for Colors (defined in the introduction) as a subroutine. The main tool in the reduction from LZ77 to Colors is the relation between C_LZ(w) and the number of distinct substrings in w, formalized in the two structural lemmas below. In what follows, d_ℓ(w) denotes the number of distinct substrings of length ℓ in w. Unlike compressed segments in w, which are disjoint, these substrings may overlap.
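The quantity d_ℓ(w) is straightforward to compute exactly, which is useful for checking the lemmas below on small examples (a minimal sketch):

```python
# d_l(w): the number of distinct (overlapping) substrings of length l in w.
def num_distinct_substrings(w: str, l: int) -> int:
    return len({w[i:i + l] for i in range(len(w) - l + 1)})

w = "abababab"
print([num_distinct_substrings(w, l) for l in range(1, 5)])  # [2, 2, 2, 2]
```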
Lemma 4 (Structural Lemma 1)
For every ℓ ∈ [n], C_LZ(w) ≥ d_ℓ(w)/ℓ.
Lemma 5 (Structural lemma 2)
Let H_k = Σ_{i=1}^{k} 1/i denote the k-th harmonic number; recall that H_k ≤ ln k + 1. Suppose that for some integer k and for every ℓ ∈ [k], d_ℓ(w) ≤ m·ℓ. Then C_LZ(w) ≤ (m+2)·H_k + n/k + 1 = O(m·log k + n/k).
Proof of Lemma 4. This proof is similar to the proof of a related lemma concerning grammars from [12]. First note that the lemma holds for ℓ = 1, since each character in w that has not appeared previously (that is, each w_t such that w_{t′} ≠ w_t for every t′ < t) is copied by the compression algorithm to LZ(w); hence C_LZ(w) ≥ d_1(w).

For the general case, fix ℓ > 1. Recall that a substring of w is a compressed segment if it is represented by one symbol in LZ(w). Any substring of length ℓ that occurs entirely within a compressed segment must have occurred previously in the string. Such substrings can be ignored for our purposes: the number of distinct length-ℓ substrings is bounded above by the number of length-ℓ substrings that start inside one compressed segment and end in another. Each segment (except the last) contributes at most ℓ − 1 such substrings. Therefore, d_ℓ(w) ≤ ℓ·C_LZ(w), that is, C_LZ(w) ≥ d_ℓ(w)/ℓ, for every ℓ.
Proof of Lemma 5. Let n_ℓ(w) denote the number of compressed segments of length ℓ in w, not including the last compressed segment. We use the shorthand d_ℓ for d_ℓ(w) and n_ℓ for n_ℓ(w). In order to prove the lemma we shall show that for every j ∈ [k],

    Σ_{ℓ=1}^{j} n_ℓ ≤ (m+2)·H_j.    (4)

Since the compressed segments in w are disjoint, Σ_ℓ ℓ·n_ℓ ≤ n, and in particular Σ_{ℓ>k} n_ℓ ≤ n/k. If we substitute j = k in Equation (4) and sum the last two inequalities, we get:

    Σ_ℓ n_ℓ ≤ (m+2)·H_k + n/k.    (5)

Since C_LZ(w) ≤ Σ_ℓ n_ℓ + 1 (only the last compressed segment is not counted by the n_ℓ), the lemma follows.
It remains to prove Equation (4). We do so below by induction on j, using the following claim.
Claim 6
For every j ∈ [k], Σ_{ℓ=1}^{j} ℓ·n_ℓ ≤ (m+2)·j.
Proof: We show that each position that participates in a compressed segment of length at most j in w can be mapped to a distinct length-j substring of w. Since j ≤ k, by the premise of the lemma there are at most d_j ≤ m·j distinct length-j substrings. In addition, the first and the last positions of the string contribute fewer than 2j symbols. The claim follows.
We call a substring w_t ⋯ w_{t+ℓ−1} new if no instance of it started in the previous portion of w. Namely, w_t ⋯ w_{t+ℓ−1} is new if there is no p < t such that w_p ⋯ w_{p+ℓ−1} = w_t ⋯ w_{t+ℓ−1}. Consider a compressed segment w_t ⋯ w_{t+ℓ−1} of length ℓ ≤ j. The substrings of length greater than ℓ that start at position t must be new, since LZ77 finds the longest substring that appeared before. Furthermore, every substring that contains such a new substring is also new. That is, every substring w_s ⋯ w_e, where s ≤ t and e ≥ t + ℓ, is new.

Map each position q that participates in a compressed segment of length at most j to the length-j substring that ends at position q. No two positions are mapped to the same substring: if the length-j substrings ending at positions q < q′ were equal, then the substring ending at q′ would have an earlier occurrence and hence would not be new. Then each position in w that appears in a compressed segment of length at most j is mapped to a distinct length-j substring, as desired. (Claim 6)
Establishing Equation (4).
We prove Equation (4) by induction on j. Claim 6 with j set to 1 gives the base case, i.e., n_1 ≤ (m+2)·H_1 = m+2. For the induction step, assume the induction hypothesis for every j′ < j. To prove it for j, add the inequality in Claim 6 (for j) to the sum of the induction hypothesis inequalities (Equation (4)) for every j′ < j. The left hand side of the resulting inequality is

    Σ_{ℓ=1}^{j} ℓ·n_ℓ + Σ_{j′=1}^{j−1} Σ_{ℓ=1}^{j′} n_ℓ = j · Σ_{ℓ=1}^{j} n_ℓ.

The right hand side, divided by the factor (m+2), which is common to all the inequalities, is

    j + Σ_{j′=1}^{j−1} H_{j′} = j·H_j.

Dividing both sides by j gives the inequality in Equation (4). (Lemma 5)
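The summation identity used in the last step, j + Σ_{j′=1}^{j−1} H_{j′} = j·H_j, can be verified mechanically; a quick sanity check (not from the paper):

```python
from fractions import Fraction

# Check the identity j + sum_{j'=1}^{j-1} H_{j'} = j * H_j exactly.
def harmonic(j: int) -> Fraction:
    return sum((Fraction(1, i) for i in range(1, j + 1)), Fraction(0))

for j in range(1, 20):
    assert j + sum((harmonic(jp) for jp in range(1, j)), Fraction(0)) == j * harmonic(j)
print("identity holds for j = 1..19")
```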
4.2 An Algorithm for LZ77
This subsection describes an algorithm for approximating the compressibility of an input string with respect to LZ77, which uses an approximation algorithm for Colors as a subroutine. The main tool in the reduction from LZ77 to Colors is the pair of structural lemmas, Lemmas 4 and 5, summarized in the following corollary.
Corollary 7
For any k ∈ [n], let m(k) = max_{ℓ∈[k]} d_ℓ(w)/ℓ. Then m(k) ≤ C_LZ(w) ≤ (m(k)+2)·H_k + n/k + 1 = O(m(k)·log k + n/k).
The corollary allows us to approximate C_LZ(w) from estimates of d_ℓ(w) for all ℓ ∈ [k]. To obtain these estimates, we use the algorithm of [7] for Colors as a subroutine (in the full version [16] we also describe a simpler Colors algorithm with the same provable guarantees). Recall that an algorithm for Colors approximates the number of distinct colors in an input string, where the i-th character represents the i-th color. To approximate d_ℓ(w), the number of distinct length-ℓ substrings in w, using an algorithm for Colors, view each length-ℓ substring of w as a separate color. Each query of the algorithm for Colors can then be implemented by ℓ queries to w.
Let Estimate(ℓ, β, δ) be a procedure that, given access to w, an index ℓ, an approximation parameter β and a confidence parameter δ, computes a β-estimate of d_ℓ(w) with probability at least 1 − δ. It can be implemented using an algorithm for Colors, as described above, and employing standard amplification techniques to boost the success probability from 2/3 to 1 − δ: running the basic algorithm Θ(log(1/δ)) times and outputting the median. Since the algorithm of [7] requires O(n/β²) queries, the query complexity of Estimate(ℓ, β, δ) is O((n/β²)·ℓ·log(1/δ)). Using Estimate as a subroutine, we get the following approximation algorithm for the cost of LZ77.
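To make the interface concrete, here is a minimal Python sketch of the substring-to-color reduction behind Estimate. The routine approx_colors is a naive stand-in for the Colors estimator of [7]: it illustrates only the interface, and the fact that one color query costs ℓ queries to w; it carries no approximation guarantee.

```python
import random

# Placeholder for a real Colors estimator such as the one in [7]:
# it merely counts distinct colors in a uniform sample.
def approx_colors(get_color, num_items: int, num_samples: int) -> int:
    return len({get_color(random.randrange(num_items)) for _ in range(num_samples)})

def estimate_distinct_substrings(w: str, l: int, num_samples: int) -> int:
    # Each length-l substring is one "color"; one color query = l queries to w.
    num_positions = len(w) - l + 1
    return approx_colors(lambda i: w[i:i + l], num_positions, num_samples)
```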
Algorithm II: A (λ, ε)-approximation for C_LZ(w), with λ = β·H_k 1. Set k = ⌈2/ε⌉ and δ = 1/(3k). 2. For each ℓ ∈ [k], let d̂_ℓ = Estimate(ℓ, β, δ). 3. Combine the estimates to get an approximation of the quantity m(k) from Corollary 7: set m̂ = max_{ℓ∈[k]} d̂_ℓ/ℓ. 4. Output Ĉ = m̂ + n/k.
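A sketch of Algorithm II built on the estimator above, with parameter choices mirroring the box; since the stand-in estimator is naive, this sketch carries no formal guarantee.

```python
import math

# Sketch of Algorithm II on top of estimate_distinct_substrings (defined
# in the previous sketch). The sample budget stands in for the choice of
# beta and delta.
def lz_cost_estimate(w: str, eps: float, samples_per_length: int = 2000) -> float:
    n = len(w)
    k = math.ceil(2 / eps)        # the additive term n/k is then at most (eps/2)*n
    m_hat = max(estimate_distinct_substrings(w, l, samples_per_length) / l
                for l in range(1, min(k, n) + 1))
    return m_hat + n / k          # lies between the bounds of Corollary 7
```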
Theorem 8
Algorithm II (λ, ε)-estimates C_LZ(w), where λ = β·H_k = O(β·log(1/ε)). With a proper implementation that reuses queries and an appropriate data structure, its query and time complexity are O((n/β²)·k·log k) = Õ(n/(β²·ε)).
Proof: By the union bound, with probability at least 1 − k·δ = 2/3, all the values d̂_ℓ computed by the algorithm are β-estimates of the corresponding d_ℓ(w). Suppose this holds. Then m̂ is a β-estimate of the quantity m(k) from Corollary 7, which implies that m(k)/β ≤ m̂ ≤ β·m(k).

Equivalently, by Corollary 7, (C_LZ(w) − n/k − 1)/(β·H_k) − 2/β ≤ m̂ ≤ β·C_LZ(w). Adding n/k to all three terms, and then substituting the parameter settings for k and δ specified in the algorithm, shows that Ĉ = m̂ + n/k is indeed a (λ, ε)-estimate of C_LZ(w).
As explained before the algorithm statement, each call to Estimate(ℓ, β, δ) costs O((n/β²)·ℓ·log(1/δ)) queries. Since the subroutine is called for all ℓ ∈ [k], a straightforward implementation of the algorithm would result in O((n/β²)·k²·log k) queries. Our analysis of the algorithm, however, does not rely on the independence of the queries used in different calls to the subroutine, since we employ the union bound to calculate the error probability. It will still apply if we first run Estimate(k, β, δ) and then reuse its queries for the remaining calls to the subroutine, as though each such call requested to query only the length-ℓ prefixes of the length-k substrings queried in the first call. With this implementation, the query complexity is O((n/β²)·k·log k). To get the same running time, one can maintain, for each ℓ ∈ [k], a counter of the number of distinct length-ℓ substrings seen so far, and use a trie to keep the information about the queried substrings. Every time a new node at some depth ℓ is added to the trie, the ℓ-th counter is incremented.
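A minimal sketch of this trie bookkeeping (function name and input format are illustrative): each queried length-k substring is inserted once, and creating a node at depth ℓ means a new distinct length-ℓ prefix has been seen.

```python
# counts[l-1] = number of distinct length-l substrings (prefixes) seen so far.
def count_prefixes(substrings, k: int):
    root, counts = {}, [0] * (k + 1)
    for s in substrings:
        node = root
        for depth, ch in enumerate(s[:k], start=1):
            if ch not in node:          # new node at this depth:
                node[ch] = {}
                counts[depth] += 1      # one more distinct length-`depth` substring
            node = node[ch]
    return counts[1:]

print(count_prefixes(["abab", "abba", "abab"], 4))  # [1, 1, 2, 2]
```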
4.3 Lower Bounds: Reducing Colors to LZ77
We have demonstrated that estimating the LZ77 compressibility of a string reduces to Colors. As shown in [17], Colors is quite hard, and it is not possible to improve much on the simple approximation algorithm in [7], on which we base the LZ77 approximation algorithm in the previous subsection. A natural question is whether there is a better algorithm for the LZ77 estimation problem. That is, is LZ77 estimation strictly easier than Colors? As we shall see, it is not much easier in general.
Lemma 9 (Reduction from Colors to LZ77)
Suppose there exists an algorithm A that, given access to a string y of length N over an alphabet Σ, performs q = q(N, |Σ|, γ, γ′) queries and, with probability at least 5/6, distinguishes between the case that C_LZ(y) ≤ γ·N and the case that C_LZ(y) ≥ γ′·N, for some γ′ > γ.

Then there is an algorithm for Colors taking inputs of length n = N/b, for a suitable block length b = b(N, |Σ|, γ, γ′), that performs at most q queries and, with probability at least 2/3, distinguishes inputs with at most C colors from those with at least C′ colors, where C = Θ(γ·n) and C′ = Θ(γ′·n·(log N)/log|Σ|).
Two notes are in order regarding the reduction. The first is that the gap C′/C required by the Colors algorithm obtained in Lemma 9 is larger than the gap γ′/γ for which the LZ-compressibility algorithm works, by a factor of Θ((log N)/log|Σ|). In particular, for binary strings the gap grows by a factor of Θ(log N), while if the alphabet is large, say, of size N^Ω(1), it grows only by a constant factor. In general, the gap increases by at most O(log N). The second note is that the number of queries, q, is a function of the parameters of the LZ-compressibility problem and, in particular, of the length N of its input strings. Hence, when writing q as a function of the parameters of Colors and, in particular, of the input length n < N, the complexity may be somewhat larger. It is an open question whether a reduction without such an increase is possible.
Prior to proving the lemma, we discuss its implications. [17] gives a strong lower bound on the sample complexity of approximation algorithms for Colors. An interesting special case is that a subpolynomial-factor approximation for Colors requires an almost linear number of queries, even with a promise that the strings are only slightly compressible: for any factor B = n^(o(1)), distinguishing inputs with at most n/B colors from those with Ω(n) colors requires n^(1−o(1)) queries. Lemma 9 extends that bound to estimating LZ compressibility: distinguishing strings with LZ compression cost at most γN from strings with cost at least γ′N requires N^(1−o(1)) queries, for appropriate γ and γ′ whose ratio is subpolynomial in N.
The lower bound for Colors in [17] applies to a broad range of parameters, and yields the following general statement when combined with Lemma 9:
Corollary 10 (LZ is Hard to Approximate with Few Samples)
For sufficiently large N, all alphabets Σ, and all B = N^(o(1)), there exist γ and γ′ with γ′/γ = B, such that every algorithm that distinguishes between the case that C_LZ(y) ≤ γN and the case that C_LZ(y) ≥ γ′N, for strings y ∈ Σ^N, must perform N^(1−o(1)) queries.
Proof of Lemma 9. Suppose we have an algorithm A for LZ-compressibility as specified in the premise of Lemma 9. Here we show how to transform a Colors instance x into an input y for A, and use the output of A to distinguish instances x with at most C colors from instances x with at least C′ colors, where C and C′ are as specified in the lemma. We shall assume that the ratio C′/C is bounded below by some sufficiently large constant. Recall that in the reduction from LZ77 to Colors, we transformed substrings into colors. Here we perform the reverse operation.

Given a Colors instance x of length n, we transform it into a string y of length N = n·b over Σ, where b is the block length from the statement of the lemma. We then run A on y to obtain information about x. We begin by replacing each color in x with a uniformly selected string in Σ^b. The string y is the concatenation of the n corresponding strings (which we call blocks). We show that:
1. If x has at most C colors, then C_LZ(y) ≤ C·b + n;

2. If x has at least C′ colors, then with probability at least 5/6, C_LZ(y) = Ω(C′·b·(log|Σ|)/(log N)).
That is, in the first case we get an input y such that C_LZ(y) ≤ γN (by the choice of C), and in the second case, with probability at least 5/6, C_LZ(y) ≥ γ′N (by the choice of C′). Recall that the gap between C and C′ is assumed to be sufficiently large for both conditions to hold simultaneously. To distinguish the case that x has at most C colors from the case that it has at least C′ colors, we can run A on y and output its answer. Taking into account the failure probability of A (at most 1/6) and the failure probability in Item 2 above (at most 1/6), the lemma follows.
We prove these two claims momentarily, but first observe that in order to run the algorithm A, there is no need to generate the whole string y. Rather, upon each query of A to y, if the index of the query belongs to a block that has already been generated, the answer to A is determined. Otherwise, we query the element (color) in x that corresponds to the block. If this color was not yet observed, then we set the block to a uniformly selected string in Σ^b. If this color was already observed in x, then we set the block according to the string that was already selected for the color. In either case, the query to y can now be answered. Thus, each query to y is answered by performing at most one query to x.
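A sketch of this lazy simulation in Python; query_x, the oracle for the Colors instance, and all other names are illustrative.

```python
import random

# Answer queries to y without generating it in full. Each color of the
# Colors instance x is lazily assigned a uniformly random block in Sigma^b;
# a query to a position of y triggers at most one query to x.
def make_y_oracle(query_x, b: int, sigma: str):
    color_to_block = {}   # random block chosen for each color seen so far
    block_cache = {}      # generated blocks, indexed by block number

    def query_y(pos: int) -> str:
        j = pos // b                        # block containing position `pos`
        if j not in block_cache:
            color = query_x(j)              # the one query to x for this block
            if color not in color_to_block:
                color_to_block[color] = "".join(random.choice(sigma) for _ in range(b))
            block_cache[j] = color_to_block[color]
        return block_cache[j][pos % b]

    return query_y
```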
It remains to prove the two items concerning the relation between the number of colors in x and C_LZ(y). If x has at most C colors, then y contains at most C distinct blocks. Since each block is of length b, at most b compressed segments start in each new block. By the definition of LZ77, at most one compressed segment starts in each repeated block: any segment starting inside such a block extends at least to the block’s end, since the remainder of the block has appeared before. Hence, C_LZ(y) ≤ C·b + n.
If x contains C′ or more colors, y is generated using at least C′·b·log|Σ| random bits. Hence, with high probability (e.g., at least 5/6) over the choice of these random bits, any lossless compression algorithm (and in particular LZ77) must use at least Ω(C′·b·log|Σ|) bits to compress y. Each symbol of the compressed version of y can be represented by O(log N) bits, since it is either an alphabet symbol or a pointer-length pair, and we may assume |Σ| ≤ N. This means the number of symbols in the compressed version of y is

    C_LZ(y) = Ω(C′·b·(log|Σ|)/(log N)),

where we have used the fact that n, and hence N, is at least some sufficiently large constant.
Acknowledgements.
We would like to thank Amir Shpilka, who was involved in a related paper on distribution support testing [17] and whose comments greatly improved drafts of this article. We would also like to thank Eric Lehman for discussing his thesis material with us and Oded Goldreich and Omer Reingold for helpful comments.
References
- [1] Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. J. Comput. Syst. Sci., 58(1):137–147, 1999.
- [2] Ziv Bar-Yossef, Ravi Kumar, and D. Sivakumar. Sampling algorithms: lower bounds and applications. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pages 266–275, New York, NY, USA, 2001. ACM Press.
- [3] Tugkan Batu, Sanjoy Dasgupta, Ravi Kumar, and Ronitt Rubinfeld. The complexity of approximating the entropy. SIAM Journal on Computing, 35(1):132–150, 2005.
- [4] Dario Benedetto, Emanuele Caglioti, and Vittorio Loreto. Language trees and zipping. Phys. Rev. Lett., 88(4), 2002. See comment by Khmelev DV, Teahan WJ, Phys Rev Lett. 90(8):089803, 2003 and the reply Phys Rev Lett. 90(8):089804, 2003.
- [5] Mickey Brautbar and Alex Samorodnitsky. Approximating the entropy of large alphabets. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2007.
- [6] John Bunge. Bibliography on estimating the number of classes in a population. www.stat.cornell.edu/bunge/bibliography.htm.
- [7] Moses Charikar, Surajit Chaudhuri, Rajeev Motwani, and Vivek R. Narasayya. Towards estimation error guarantees for distinct values. In PODS, pages 268–279. ACM, 2000.
- [8] Rudi Cilibrasi and Paul M. B. Vitányi. Clustering by compression. IEEE Transactions on Information Theory, 51(4):1523–1545, 2005.
- [9] Rudi Cilibrasi and Paul M. B. Vitányi. Similarity of objects and the meaning of words. In Jin-Yi Cai, S. Barry Cooper, and Angsheng Li, editors, TAMC, volume 3959 of Lecture Notes in Computer Science, pages 21–45. Springer, 2006.
- [10] T. Cover and J. Thomas. Elements of Information Theory. Wiley & Sons, 1991.
- [11] O. V. Kukushkina, A. A. Polikarpov, and D. V. Khmelev. Using literal and grammatical statistics for authorship attribution. Prob. Peredachi Inf., 37(2):96–98, 2000. [Probl. Inf. Transm. ( Engl. Transl.) 37, 172–184 (2001)].
- [12] Eric Lehman and Abhi Shelat. Approximation algorithms for grammar-based compression. In Proceedings of the Thirteenth Annual ACM–SIAM Symposium on Discrete Algorithms, pages 205–212, 2002.
- [13] Ming Li, Xin Chen, Xin Li, Bin Ma, and Paul M. B. Vitányi. The similarity metric. IEEE Transactions on Information Theory, 50(12):3250–3264, 2004. Prelim. version in SODA 2003.
- [14] Ming Li and Paul Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, 1997.
- [15] David Loewenstern, Haym Hirsh, Michiel Noordewier, and Peter Yianilos. DNA sequence classification using compression-based induction. Technical Report 95-04, Rutgers University, DIMACS, 1995.
- [16] Sofya Raskhodnikova, Dana Ron, Ronitt Rubinfeld, and Adam Smith. Sublinear algorithms for approximating string compressibility. Full version of this paper, in preparation., June 2007.
- [17] Sofya Raskhodnikova, Dana Ron, Amir Shpilka, and Adam Smith. On the difficulty of approximating the support size of a distribution. Manuscript, 2007.
- [18] Jacob Ziv and Abraham Lempel. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, 23:337–343, 1977.
- [19] Jacob Ziv and Abraham Lempel. Compression of individual sequences via variable-rate coding. IEEE Transactions on Information Theory, 24:530–536, 1978.