
Hideo Bannai (M&D Data Science Center, Tokyo Medical and Dental University (TMDU), Japan; hdbn.dsc@tmd.ac.jp; ORCID 0000-0002-6856-5185). Supported by JSPS KAKENHI Grant Numbers JP20H04141 and JP24K02899.
Mitsuru Funakoshi (NTT Communication Science Laboratories, Japan; mitsuru.funakoshi@ntt.com; ORCID 0000-0002-2547-1509).
Diptarama Hendrian (M&D Data Science Center, Tokyo Medical and Dental University (TMDU), Japan; diptarama.hendrian@tmd.ac.jp; ORCID 0000-0002-8168-7312).
Myuji Matsuda (Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University (TMDU), Japan; ma190093@tmd.ac.jp).
Simon J. Puglisi (Department of Computer Science, University of Helsinki, Helsinki, Finland; puglisi@cs.helsinki.fi; ORCID 0000-0001-7668-7636). Supported by Academy of Finland grants 339070 and 351150.

Copyright Hideo Bannai, Mitsuru Funakoshi, Diptarama Hendrian, Myuji Matsuda, Simon J. Puglisi. ACM classification: Theory of computation, Data compression. Supplement: Software available at https://github.com/dscalgo/lzhb/.

Acknowledgements.
Computing resources were provided by Human Genome Center (the Univ. of Tokyo), through the usage service provided by M&D Data Science Center, Tokyo Medical and Dental University.

Height-bounded Lempel-Ziv encodings

Hideo Bannai    Mitsuru Funakoshi    Diptarama Hendrian    Myuji Matsuda    Simon J. Puglisi
Abstract

We introduce height-bounded LZ encodings (LZHB), a new family of compressed representations that are variants of Lempel-Ziv parsings with a focus on bounding the worst-case access time to arbitrary positions in the text directly via the compressed representation. An LZ-like encoding is a partitioning of the string into phrases of length $1$ which can be encoded literally, or phrases of length at least $2$ which have a previous occurrence in the string and can be encoded by its position and length. An LZ-like encoding induces an implicit referencing forest on the set of positions of the string. An LZHB encoding is an LZ-like encoding where the height of the implicit referencing forest is bounded. An LZHB encoding with height constraint $h$ allows access to an arbitrary position of the underlying text using $O(h)$ predecessor queries.

While efficiently computing the optimal (i.e., smallest) LZHB encoding seems to be difficult [Cicalese & Ugazio 2024, arXiv], we give the first linear time algorithm for strings over a constant size alphabet that computes the greedy LZHB encoding, i.e., the string is processed from beginning to end, and the longest prefix of the remaining string that can satisfy the height constraint is taken as the next phrase. Our algorithms significantly improve, both theoretically and practically, upon the very recently and independently proposed algorithms by Lipták et al. (arXiv, to appear at CPM 2024).

We also analyze the size of height-bounded LZ encodings in the context of repetitiveness measures, and show that there exists a constant $c$ such that the size $\hat{z}_{\mathit{HB}(c\log n)}$ of the optimal LZHB encoding whose height is bounded by $c\log n$ for any string of length $n$ is $O(\hat{g}_{\mathrm{rl}})$, where $\hat{g}_{\mathrm{rl}}$ is the size of the smallest run-length grammar. Furthermore, we show that there exists a family of strings such that $\hat{z}_{\mathit{HB}(c\log n)} = o(\hat{g}_{\mathrm{rl}})$, thus making $\hat{z}_{\mathit{HB}(c\log n)}$ one of the smallest known repetitiveness measures for which $O(\mathrm{polylog}(n))$ time access is possible using linear ($O(\hat{z}_{\mathit{HB}(c\log n)})$) space.

keywords:
Lempel-Ziv parsing, data compression

1 Introduction

Dictionary compressors are a family of algorithms that produce compressed representations of input strings essentially as sequences of copy and paste operations. These representations are widely used in general compression tools such as gzip and LZ4, and have also received much recent attention since they are especially effective for highly repetitive data sets, such as versioned documents and pangenomes [23].

A desirable operation to support on compressed data is that of random access to arbitrary positions in the original data. Access should be supported without decompressing the data in its entirety, and ideally by decompressing little else than the sought positions. Recent years have seen intense research on the access problem in the context of dictionary compression, most notably for grammar compressors (SLPs) and Lempel-Ziv (LZ)-like schemes. The general approach is to impose structure on the output of the compressor and in doing so add small amounts of information to support fast queries. While LZ77 [34] is known to be theoretically and practically one of the smallest compressed representations that can be computed efficiently, a long-standing question is whether $O(\mathrm{polylog}(n))$ time access can be achieved with a data structure using $O(z)$ space [23], where $z$ is the size of the LZ77 parsing.

In an LZ-like compression scheme, the input string $T$ of length $n$ is parsed into $z' \leq n$ phrases (substrings of $T$), where each phrase either is of length $1$, or has length at least $2$ and a previous occurrence. Each phrase can be encoded by a pair $(\ell, s)$, where $\ell \geq 1$ is the length of the phrase, and $s$ is the symbol representing the phrase if $\ell = 1$, or otherwise, a position of a previous occurrence (source) of the phrase. A greedy left-to-right algorithm that takes the longest prefix of the remaining string that can be a phrase leads to the aforementioned LZ77 parsing. An LZ-like encoding of a string induces a referencing forest, where each position of the string is a node, phrases of length 1 are roots of a tree in the forest, and the parent of every other position is induced by the previous occurrence of the phrase it is contained in. The principal hurdle to supporting fast access to an arbitrary symbol $T[i]$ from an LZ-like parsing is to trace the symbol to a root in the referencing forest. For LZ77, the height of the referencing forest can be $\Theta(n)$.

Contributions.

In this paper, we explore LZ-like parsings specifically designed to bound the height of the referencing forest, i.e., LZ-like parsers in which the parsing rules prevent the introduction of phrases that would exceed a specified maximum height of the referencing forest. Our contributions are summarized as follows:

  1. We propose the first linear time (assuming a constant-size alphabet) algorithm LZHB for computing the greedy height-bounded LZ-like encoding, i.e., the string is processed from beginning to end, and each phrase is greedily taken as the longest prefix of the remaining string that can satisfy the height constraint. This problem was first considered by Kreft and Navarro [16], where an algorithm called LZ-COST, with no efficient implementation, is mentioned. Very recently, contemporaneously and independently of our work, this problem was also revisited by Lipták et al. [18], who presented an algorithm called greedy-BATLZ running in $O(n\log^{3}n)$ time. Lipták et al. also proposed greedier-BATLZ, which runs in $O(z'n^{2}\log n) = O(n^{3}\log n)$ time, where $z'$ is the size of the parsing, further adding the requirement that the previous occurrence of the phrase is chosen so as to minimize the maximum height. We show that our algorithms can be modified to support this heuristic in $O(n\log\sigma + \mathit{occ}) = O(n\log\sigma + z'n) = O(n^{2})$ time, where $\mathit{occ}$ is the total number of previous occurrences of all phrases.

  2. We show that our algorithms allow for simple, lightweight implementations based on suffix arrays and segment trees, and that our implementations are an order of magnitude (or two) faster than the recent implementations of related schemes by Lipták et al.

  3. We propose a new LZ-like encoding, which may be of independent theoretical interest: a run-length variant of standard LZ-like encodings that can potentially reduce the referencing height further.

  4. We analyze the size of height-bounded LZ-like encodings in the context of repetitiveness measures [23]. We show that for some constant $c$, there exist data structures of $O(\hat{z}_{\mathit{HB}(c\log n)})$ size that allow access in $O(\mathrm{polylog}(n))$ time, where $\hat{z}_{\mathit{HB}(c\log n)}$ is the size of the smallest LZ-like encoding whose height is bounded by $c\log n$. Furthermore, $\hat{z}_{\mathit{HB}(c\log n)}$ is always $O(\hat{g}_{\mathrm{rl}})$, and can be $o(\hat{g}_{\mathrm{rl}})$ for some family of strings, where $\hat{g}_{\mathrm{rl}}$ denotes the size of the smallest run-length grammar (RLSLP) [27] representing the string. This makes $\hat{z}_{\mathit{HB}(c\log n)}$ one of the smallest known measures that can achieve $O(\mathrm{polylog}(n))$ time access using linear space. The other two are the size $z_{e}$ of the LZ-End parse [16, 17, 14], and the size $\hat{g}_{it(d)}$ of the smallest iterated SLP (ISLP) [26]. Note that while there exist string families such that $\hat{z}_{\mathit{HB}(c\log n)} = o(z_{e})$, we do not yet know whether $z_{e} = o(\hat{z}_{\mathit{HB}(c\log n)})$ is possible.

2 Preliminaries

2.1 Strings

Let $\Sigma$ be a set of symbols called the alphabet, and $\Sigma^{*}$ the set of strings over $\Sigma$. For any non-negative integer $n$, $\Sigma^{n}$ is the set of strings of length $n$. For any string $x \in \Sigma^{*}$, $|x|$ denotes the length of $x$. The empty string (the string of length 0) is denoted by $\varepsilon$. For any integer $i \in [1, |x|]$, $x[i]$ is the $i$th symbol of $x$, and for any integers $i, j \in [1, |x|]$, $x[i..j] = x[i]\cdots x[j]$ if $i \leq j$, and $\varepsilon$ otherwise. We will write $x[i..j)$ to denote $x[i..j-1]$. For a string $w$, $p \in [1, |w|]$ is a period of $w$ if $w[i] = w[i+p]$ for all $i \in [1, |w|-p]$. For any string $x$, $i \in [1, |w|-|x|+1]$ is an occurrence of $x$ in $w$ if $w[i..i+|x|) = x$. For any position $i > 1$, the length $\mathrm{lpf}_{w}(i)$ of the longest previously occurring factor at position $i$ is $\mathrm{lpf}_{w}(i) = \max\{l \mid w[i'..i'+l) = w[i..i+l), i' < i\}$, and let $\mathrm{lpf}_{w}(1) = 0$. For any $i, \ell$ with $1 \leq i \leq i+\ell \leq |w|$, the leftmost occurrence of the length-$\ell$ substring starting at $i$ is $\mathrm{lmocc}_{w}(i, \ell) = \min\{j < i \mid w[j..j+\ell) = w[i..i+\ell)\}$, which can be undefined when $i$ is the leftmost occurrence of $w[i..i+\ell)$. We will omit the subscript when the underlying string considered is clear.

A string that is both a prefix and a suffix of a string is called a border of that string. The border array $B$ of a string $w$ is an array $B[1..|w|]$ of integers, where $B[i]$ is the length of the longest proper border of $w[1..i]$. The border array is computable in $O(|w|)$ time and space [15], in an on-line fashion, i.e., at each step $i = 1, \ldots, |w|$, the border array $B[1..i]$ of $w[1..i]$ is obtained in amortized constant ($O(i)$ total) time. Notice that the minimum period of $w[1..i]$ is $i - B[i]$. Thus, the minimum periods of all prefixes of a (possibly growing) string can be computed in time linear in the length of the string.
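To make this on-line computation concrete, the following small C++ sketch (our own illustration, not code from the paper) computes the border array and reports the minimum period of every prefix; appending a symbol costs amortized constant time, as stated above.

#include <iostream>
#include <string>
#include <vector>

// Border array: B[i] is the length of the longest proper border of w[1..i]
// (the classic KMP failure function [15]); the minimum period of w[1..i]
// is then i - B[i], and it is non-decreasing as symbols are appended.
std::vector<int> border_array(const std::string& w) {
    int n = (int)w.size();
    std::vector<int> B(n + 1, 0);                    // 1-indexed; B[1] = 0
    for (int i = 2; i <= n; ++i) {
        int b = B[i - 1];
        while (b > 0 && w[i - 1] != w[b]) b = B[b];  // w itself is 0-indexed
        B[i] = (w[i - 1] == w[b]) ? b + 1 : 0;
    }
    return B;
}

int main() {
    std::string w = "aababa";
    std::vector<int> B = border_array(w);
    for (int i = 1; i <= (int)w.size(); ++i)
        std::cout << "min period of w[1.." << i << "] = " << (i - B[i]) << '\n';
}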

Lemma 2.1 ([15]).

The minimum period of a semi-dynamic string that allows symbols to be appended is non-decreasing and computable in amortized constant time per symbol.

The following is another useful lemma, rediscovered many times in the literature.

Lemma 2.2 (e.g. Lemma 8 of [28]).

The set of occurrences of a word $w$ inside a word $v$ that is at most twice as long as $w$ forms a single arithmetic progression.

Our parsing algorithms make use of the suffix tree data structure [32]. We assume some familiarity with suffix trees, and give only the basic properties essential to our methods below. We refer the reader to the many textbook treatments of suffix trees for further details [13, 22].

  1. The suffix tree of a string $T$, denoted $\mathcal{T}_{T}$ (or just $\mathcal{T}$ when the context is clear), is a compacted trie containing all the suffixes of $T$. Each leaf of the suffix tree corresponds to a suffix of the string and is labelled with the starting position of that suffix.

  2. The suffix tree can be constructed in an online fashion via Ukkonen's algorithm [30], in $O(|T|\log\sigma)$ total time, i.e., at each step $i = 1, \ldots, |T|$, the suffix tree of $T[1..i]$ is obtained in amortized $O(\log\sigma)$ ($O(i\log\sigma)$ total) time. For linearly-sortable alphabets, it is possible to compute the suffix tree offline in $O(|T|)$ time [11].

  3. Suffix trees can support prefix queries that return a pair $(\ell, s)$, where $\ell$ is the length of the longest prefix of the query string that occurs in the indexed string, and $s$ is the leftmost position of an occurrence of that prefix. Prefix queries can be conducted simply by traversing the suffix tree from the root with the query string, and take $O(\ell\log\sigma)$ time. The query string can be processed left-to-right, where each symbol is processed in $O(\log\sigma)$ time, which is the cost of finding the correct outgoing child edge from the current suffix tree node. Prefix queries can be answered in the same time complexity even if the underlying string $T$ is extended in an online fashion as mentioned in Property 2.

  4. The suffix tree of $T$ can be preprocessed in linear time to answer, given $i, \ell$, the leftmost occurrence in $T$ of the substring $T[i..i+\ell)$, i.e., $\mathrm{lmocc}_{T}(i, \ell)$, in constant time [2].

2.2 LZ encodings and random access

An LZ-like parsing of a string is a decomposition of the string into phrases of length $1$ (literal phrases), or phrases of length at least $2$ which have a previous occurrence in the string. An LZ-like encoding is a representation of the LZ-like parsing, where literal phrases are encoded as the pair $(1, c)$, where $c \in \Sigma$ is the phrase itself, and phrases of length at least $2$ are encoded as the pair $(\ell, s)$, where $\ell \geq 2$ is the length of the phrase and $s$ is a previous occurrence (or the source) of the phrase. Although LZ parsing and encoding are sometimes used as synonyms, we differentiate them in that a parsing only specifies the length of each phrase, while an encoding specifies the previous occurrence of each phrase as well. The size of an LZ-like encoding is the number of phrases. For example, for the string $\mathtt{ababacbabac}$, we can have an encoding $(1,\mathtt{a}), (1,\mathtt{b}), (3,1), (1,\mathtt{c}), (5,2)$ of size $5$. A common variant of LZ-like encodings adds an extra symbol explicitly to the phrase, and each phrase is encoded by a triplet. For simplicity, our description will not consider this extra symbol; the required modifications for the algorithms to include this are straightforward. We note that our experiments in Section 6 will include the extra character, in order to compare with implementations of previous work.
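To fix intuition, here is a minimal C++ decoder for this pair representation (an illustration of ours; the struct layout is not from the paper). Copying symbol by symbol makes self-referencing phrases, whose source overlaps the phrase itself, work without special handling.

#include <iostream>
#include <string>
#include <vector>

struct Phrase {
    int len;   // phrase length; a length-1 phrase is literal
    int src;   // 1-indexed previous occurrence (unused for literals)
    char c;    // the symbol, for literal phrases
};

std::string decode(const std::vector<Phrase>& enc) {
    std::string T;
    for (const Phrase& p : enc) {
        if (p.len == 1) { T.push_back(p.c); continue; }
        int s = p.src - 1;                  // to 0-indexed
        for (int k = 0; k < p.len; ++k)     // symbol-by-symbol copy is safe
            T.push_back(T[s + k]);          // even if the source overlaps
    }
    return T;
}

int main() {
    // the size-5 encoding from the text
    std::vector<Phrase> enc = {
        {1, 0, 'a'}, {1, 0, 'b'}, {3, 1, 0}, {1, 0, 'c'}, {5, 2, 0}};
    std::cout << decode(enc) << '\n';       // prints ababacbabac
}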

Figure 1: An example of an implicit referencing forest induced from the LZ-like encoding $(1,\mathtt{a}), (1,\mathtt{b}), (3,1), (1,\mathtt{c}), (5,2)$ for the string $\mathtt{ababacbabac}$.

An LZ-like encoding of a string induces an implicit referencing forest, where: each position of the string is a node, literal phrases are roots of a tree in the forest, and the parent of every other position is induced by the source of the phrase it is contained in. For example, for the encoding of the string $\mathtt{ababacbabac}$ as above, the referencing forest is shown in Figure 1. Let $\mathcal{E} = (\ell_{1}, s_{1}), \ldots, (\ell_{z'}, s_{z'})$ be an LZ-like encoding of string $T$. Given an arbitrary position $i$, suppose we would like to retrieve the symbol $T[i]$. This can be done by traversing the implicit referencing forest using predecessor queries. We first find the phrase $(\ell_{j}, s_{j})$ with starting position $b_{j}$ (we note that it is possible to encode each phrase as $(b_{j}, s_{j})$, since then $\ell_{j} = b_{j+1} - b_{j}$) that the position $i$ is contained in, i.e., $j$ s.t. $b_{j} \leq i < b_{j} + \ell_{j}$. Then, we can deduce that the parent position of $i$ is $i' = s_{j} + (i - b_{j})$. This is repeated until a literal phrase $(1, c)$ is reached, in which case $T[i] = c$. The number of times a parent must be traversed (i.e., the number of predecessor queries) is bounded by the height of the referencing forest. This notion of height for LZ-like encodings was introduced by Kreft and Navarro [17].

3 Height bounded LZ-like encodings

We first consider modifying the definition of the height of the referencing forest under some conditions, by implicitly rerouting edges. Consider the case of a unary string $T = \mathtt{a}^{n}$ and LZ-like encoding $(1,\mathtt{a}), (n-1, 1)$. The straightforward definition above gives a height of $n-1$ for position $n$. This is a result of the second phrase being self-referencing, i.e., the phrase overlaps with its referenced occurrence, and each position references its preceding position. An important observation is that self-referencing phrases are periodic; in particular, a self-referencing phrase starting at position $b_{i}$ with source $s_{i}$ has period $b_{i} - s_{i}$. Due to this periodicity, any position in the phrase can refer to an appropriate position in $T[s_{i}..b_{i})$.

More formally, let $\mathcal{E} = (\ell_{1}, s_{1}), \ldots, (\ell_{z'}, s_{z'})$ be an LZ-like encoding of $T$. For any position $i$, let $b_{j}$ be the starting position of the phrase $(\ell_{j}, s_{j})$ such that $b_{j} \leq i < b_{j} + \ell_{j}$. Then, $T[i] = s_{j}$ if $\ell_{j} = 1$, and $T[i] = T[s_{j} + (i - b_{j})] = T[s_{j} + ((i - b_{j}) \bmod (b_{j} - s_{j}))]$ otherwise.

We define the height $\mathsf{height}_{\mathcal{E}}(i)$ of position $i$ under an encoding $\mathcal{E}$ as

$$\mathsf{height}_{\mathcal{E}}(i) = \begin{cases} 0 & \text{if } \ell_{j} = 1,\\ \mathsf{height}_{\mathcal{E}}(s_{j} + ((i - b_{j}) \bmod (b_{j} - s_{j}))) + 1 & \text{otherwise.}\end{cases} \quad (1)$$

The subscript will be omitted if the underlying encoding considered is clear. Since $j$ can be computed using a predecessor query on the set of phrase starting positions, $T[i]$ can be computed in $O(\mathsf{height}(i) \cdot Q(z))$ time using a data structure of size $O(z)$, where $Q(z)$ is the time required for predecessor queries on $z$ elements in $[1, n]$ using $O(z)$ space. $Q(z)$ is $O(\log n)$ using a simple binary search, and faster using more sophisticated methods [21].

For example, for the encoding $(1,\mathtt{a}), (1,\mathtt{a}), (1,\mathtt{b}), (3,2), (1,\mathtt{c}), (4,3)$ of the string $\mathtt{aababacbaba}$, the heights are: $0,0,0,1,1,1,0,1,2,2,2$. Notice that the phrase $(3,2)$ that starts at position $4$, representing $\mathtt{aba}$, is self-referencing, and the height of the last position in the phrase (position 6) is $1$, since it is defined to reference position $2 = 2 + (6-4) \bmod (6-4)$. We note that even with this modified definition, the height of the optimal LZ-like encoding (LZ77) can be $\Theta(n)$; see Appendix A.
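The following C++ sketch (ours) mirrors this access procedure: each round performs one predecessor query (here a binary search over the phrase starting positions) and one application of Equation (1), so the number of rounds is exactly $\mathsf{height}(i)$.

#include <algorithm>
#include <iostream>
#include <vector>

struct Phrase { int len; int src; char c; };    // as in the earlier sketch

// b[j] holds the (1-indexed) starting position of phrase j.
char access(const std::vector<Phrase>& enc, const std::vector<int>& b, int i) {
    for (;;) {
        // predecessor query: phrase j with b[j] <= i < b[j] + len
        int j = (int)(std::upper_bound(b.begin(), b.end(), i) - b.begin()) - 1;
        if (enc[j].len == 1) return enc[j].c;   // literal phrase: a root
        int s = enc[j].src;
        i = s + (i - b[j]) % (b[j] - s);        // Eq. (1): reroute into the
    }                                           // first period of the source
}

int main() {
    // encoding of ababacbabac from Section 2.2
    std::vector<Phrase> enc = {
        {1, 0, 'a'}, {1, 0, 'b'}, {3, 1, 0}, {1, 0, 'c'}, {5, 2, 0}};
    std::vector<int> b = {1, 2, 3, 6, 7};
    std::cout << access(enc, b, 9) << '\n';     // prints b, using 2 reroutes
}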

In order to bound the worst-case query time complexity for access operations, we consider LZ-like encodings with bounded height. An $h$-bounded LZ-like encoding is an LZ-like encoding where $\max\{\mathsf{height}(i) \mid i \in [1, n]\} \leq h$.

There are many ways one could enforce such a height restriction. Unfortunately, finding the smallest such encoding was very recently shown to be NP-hard by Cicalese and Ugazio [8]. Therefore, we propose several greedy heuristics to compute $h$-bounded LZ-like encodings. Given an encoding for $T[1..b_{j})$ (which defines $\mathsf{height}(i)$ for any $i \in [1, b_{j})$), the next phrase $(\ell_{j}, s_{j})$ starting at position $b_{j}$ is defined as follows.

LZHB1

$\ell_{j}$ is the largest value such that $T[s_{j}..s_{j}+\ell_{j})$ satisfies the height constraint, where $s_{j} = \mathrm{lmocc}_{T}(b_{j}, \mathrm{lpf}_{T}(b_{j}))$, i.e., the leftmost occurrence of the longest previously occurring factor at position $b_{j}$.

LZHB2

$\ell_{j}$ is the largest value such that for all $1 \leq \ell' \leq \ell_{j}$, the leftmost occurrence of $T[b_{j}..b_{j}+\ell')$ satisfies the height constraint. $s_{j}$ is the leftmost occurrence of $T[b_{j}..b_{j}+\ell_{j})$.

LZHB3

$\ell_{j}$ is the largest value such that for all $1 \leq \ell' \leq \ell_{j}$, there exists some previous occurrence of $T[b_{j}..b_{j}+\ell')$ that satisfies the height constraint. $s_{j}$ is the leftmost occurrence of $T[b_{j}..b_{j}+\ell_{j})$ that satisfies the height constraint.

Note that for all variations, $\ell_{j} = 1$ if there is no occurrence of $T[b_{j}]$ that satisfies the corresponding conditions described above, in which case $s_{j} = T[b_{j}]$.

We note that LZHB1 corresponds to the baseline algorithm BATLZ2 in [18]. LZHB3 essentially corresponds to greedy-BATLZ in [18] as well, but greedy-BATLZ does not require that the occurrence is leftmost. The difference between LZHB2 and LZHB3 lies in the priority of the choice of the leftmost occurrence and when to check the height constraint. LZHB2 greedily extends the prefix, checking each time whether its leftmost occurrence satisfies the height constraint. LZHB3 finds the leftmost occurrence out of the longest prefix that can satisfy the height constraint. Note that when the height is unbounded, the size of all three variants will be equivalent to the regular LZ77 parsing.

We show that LZHB1 and LZHB2 can be computed in $O(n)$ time and space for linearly-sortable alphabets, and LZHB3 can be computed online in $O(n\log\sigma)$ time and $O(n)$ space for general ordered alphabets.

We also propose a new encoding of LZ-like parsings that re-routes edges of the implicit referencing forest in an attempt to further reduce the heights, again using periodicity. In the case of self-referencing phrases, our definition of height in Equation (1) utilized the fact that a self-referencing phrase implies a period $b_{j} - s_{j}$ in the phrase. Using this period, we could re-route the parents of all positions inside the phrase to the first period in the previous occurrence of the phrase, which is outside the phrase. Here, we make two further observations: 1) since the referenced substring is the first period of the phrase, we could extend the phrase for free while the period continues, possibly making the phrase longer, and 2) the referenced substring could have a previous occurrence further to the left, which should tend to have shorter heights. Therefore, if we were to store the period of the phrase explicitly, this could be applied to and benefit all periodic phrases, self-referencing or not.

Specifically then, under this scheme, a phrase with period $1$ can be encoded as the triple $(\ell, c, 1)$, where $c$ is a symbol, and a phrase with period $p \geq 2$ can be encoded as the triple $(\ell, s, p)$, where $\ell \geq p$ is the length of the phrase and $s$ is a previous occurrence of the length-$p$ prefix of the phrase. We will call such an encoding a modified LZ-like encoding. For example, $(2,\mathtt{a},1), (1,\mathtt{b},1), (3,2,2), (1,\mathtt{c},1), (4,3,2)$ would be a modified LZ-like encoding for the string $\mathtt{aababacbaba}$.

The implicit referencing forest and heights of the modified LZ-like encoding can be defined analogously: a position $i$ in the $j$th phrase $(\ell_{j}, s_{j}, p_{j})$ that begins at position $b_{j}$, i.e., $b_{j} \leq i < b_{j} + \ell_{j}$, will reference position $s_{j} + ((i - b_{j}) \bmod p_{j}) \bmod (b_{j} - s_{j})$, where the second $\bmod$ is to deal with the case where the occurrence of the prefix period is self-referencing. For the above example, the heights are: $0,0,0,1,1,1,0,1,2,1,2$.
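Rendering this definition as code (our own one-line illustration), the parent of a position $i$ inside the $j$th modified phrase is:

#include <iostream>

// Parent of position i in a modified phrase (len, s, p) starting at b
// (all positions 1-indexed). The second mod handles the case where the
// occurrence of the length-p prefix period is itself self-referencing.
int parent(int i, int b, int s, int p) { return s + ((i - b) % p) % (b - s); }

int main() {
    // phrase (4, s=3, p=2) starting at b=8 in aababacbaba: positions
    // 8..11 reference 3, 4, 3, 4, giving the heights 1, 2, 1, 2 above
    for (int i = 8; i <= 11; ++i) std::cout << parent(i, 8, 3, 2) << ' ';
}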

A greedy left-to-right algorithm computes the $j$th phrase $(\ell_{j}, s_{j}, p_{j})$ starting at $b_{j}$ as:

LZHB4

$\ell_{j}$ is the largest value such that $T[b_{j}..b_{j}+\ell_{j})$ has period $1$, or, for all $1 \leq \ell' \leq \ell_{j}$, there exists some previous occurrence of $T[b_{j}..b_{j}+p')$ that satisfies the height constraint, where $p'$ is the minimum period of $T[b_{j}..b_{j}+\ell')$. $s_{j}$ is the leftmost occurrence of $T[b_{j}..b_{j}+p_{j})$ that satisfies the height constraint, and $p_{j}$ is the minimum period of $T[b_{j}..b_{j}+\ell_{j})$.

4 Efficient parsing algorithms for height-bounded encodings

4.1 A linear time algorithm for LZHB1

Theorem 4.1.

For any integer $h$, an $h$-bounded encoding based on LZHB1 for a string over a linearly-sortable alphabet can be computed in linear time and space.

Proof 4.2.

The algorithm maintains an array $H[1..n]$ of integers, initially set to $0$. Phrases are produced left to right. The algorithm maintains the invariant that when the encoding has been computed up to position $i$, $H[j]$ gives the height in the referencing forest of each position $j < i$. Using Property 2 in Section 2.1, we build the suffix tree of $T$ in linear time, and preprocess it, again in linear time, for Property 4. We also precompute and store the lengths of the longest previously occurring factors at all positions, i.e., $\mathrm{lpf}_{T}(i)$ for all $1 \leq i \leq n$, in linear time [10].

Suppose we have computed the $h$-bounded encoding for $T[1..b_{j})$ and would like to compute the $j$th phrase $(\ell_{j}, s_{j})$ starting at position $b_{j}$. We can compute $s_{j} = \mathrm{lmocc}_{T}(b_{j}, \mathrm{lpf}_{T}(b_{j}))$, i.e., the leftmost occurrence of the longest previously occurring factor starting at position $b_{j}$, in constant time (Property 4). The encoding of the phrase starting at $b_{j}$ is then $(\ell_{j}, s_{j})$, where $\ell_{j} \leq \mathrm{lpf}_{T}(b_{j})$ is the largest value such that every value in $H[s_{j}..\min(b_{j}, s_{j}+\ell_{j}))$ is less than $h$. This can be computed in $O(\ell_{j})$ time by simply scanning the above values in $H$. Notice that we do not need to check the height beyond position $b_{j}$, since this would imply that the phrase is self-referencing, and the remaining heights will be copies of the heights in the first period of the phrase, which were already checked to satisfy the height constraint. After the pair $(\ell_{j}, s_{j})$ is determined, the values in $H[b_{j}..b_{j}+\ell_{j})$ are determined according to Equation (1), in $O(\ell_{j})$ time.

The total time is thus proportional to the sum of the phrase lengths, which is $O(n)$.
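The sketch below (ours) renders this scan in C++. For brevity it replaces the suffix-tree machinery with naive quadratic-time computation of $\mathrm{lpf}$ and its leftmost occurrence; the part being illustrated is the height bookkeeping in $H$, which is exactly as in the proof.

#include <iostream>
#include <string>
#include <vector>

struct Phrase { int len; int src; };   // src == 0 marks a literal phrase

// length of the longest common prefix of T[i..] and T[j..], 1-indexed i < j;
// overlap (i + l > j) is allowed, matching self-referencing phrases
int match_len(const std::string& T, int i, int j) {
    int n = (int)T.size(), l = 0;
    while (j + l <= n && T[i + l - 1] == T[j + l - 1]) ++l;
    return l;
}

std::vector<Phrase> lzhb1(const std::string& T, int h) {
    int n = (int)T.size(), b = 1;
    std::vector<int> H(n + 1, 0);            // heights, as in Theorem 4.1
    std::vector<Phrase> enc;
    while (b <= n) {
        int lpf = 0, s = 0;                  // longest previous factor at b
        for (int i = 1; i < b; ++i) {        // and its leftmost occurrence
            int m = match_len(T, i, b);      // (naive stand-in for lpf/lmocc)
            if (m > lpf) { lpf = m; s = i; }
        }
        int l = 0;                           // longest height-feasible prefix
        while (l < lpf && s + l < b && H[s + l] < h) ++l;
        if (l < lpf && s + l >= b) l = lpf;  // self-reference: only the first
                                             // period needs checking
        Phrase p = (l >= 2) ? Phrase{l, s} : Phrase{1, 0};
        if (p.src != 0)                      // fill heights by Eq. (1)
            for (int k = 0; k < p.len; ++k)
                H[b + k] = H[p.src + k % (b - p.src)] + 1;
        enc.push_back(p);
        b += p.len;
    }
    return enc;
}

int main() {
    for (Phrase p : lzhb1("ababacbabac", 2))   // h = 2
        std::cout << '(' << p.len << ',' << p.src << ") ";
    // prints (1,0) (1,0) (3,1) (1,0) (5,2)
}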

4.2 A linear time algorithm for LZHB2

The difference between LZHB1 and LZHB2 is that while LZHB1 fixes the source to the leftmost occurrence of the longest previously occurring factor starting at $b_{j}$, LZHB2 considers the leftmost occurrence of each candidate substring. Since shorter lengths allow the source to be further to the left, where heights tend to be smaller, this may allow longer phrases.

We show below that we can still achieve a linear time parsing algorithm.

Theorem 4.3.

For any integer $h$, an $h$-bounded encoding based on LZHB2 for a string over a linearly-sortable alphabet can be computed in linear time and space.

Proof 4.4.

The same steps as in the first paragraph of the proof of Theorem 4.1 are taken (except for the precomputation of $\mathrm{lpf}_{T}$, which is not needed here).

Suppose we have computed the $h$-bounded encoding for $T[1..b_{j})$ and would like to compute the $j$th phrase $(\ell_{j}, s_{j})$ starting at position $b_{j}$. We start with $\ell = 1$, and increase $\ell$ incrementally until there is no previous occurrence of $T[b_{j}..b_{j}+\ell)$, or the leftmost occurrence of $T[b_{j}..b_{j}+\ell)$ violates the height constraint, and use the last valid value of $\ell$ for $\ell_{j}$.

For a given $\ell$, using Property 4, we can obtain the leftmost occurrence of $T[b_{j}..b_{j}+\ell)$, i.e., $s'_{j} = \mathrm{lmocc}_{T}(b_{j}, \ell)$, in constant time. We need to check whether all values in $H[s'_{j}..\min(b_{j}, s'_{j}+\ell))$ are less than $h$. Since $s'_{j}$ may change for different values of $\ell$, we potentially need to check all values in $H[s'_{j}..\min(b_{j}, s'_{j}+\ell))$ each time.

To do this efficiently, we maintain another array $C$, where $C[i]$ stores the smallest position $j \geq i$ in $H$ such that $H[j] = h$, if it exists, or $\infty$ otherwise, i.e., $C[i] = \min(\{\infty\} \cup \{j \geq i \mid H[j] = h\})$. Then, $\max(H[i..j]) + 1 \leq h$ if and only if $C[i] > j$. Thus, checking the height constraint can be done in constant time for a given $\ell$.

After $(\ell_{j}, s_{j})$ is determined, the values in $H[b_{j}..b_{j}+\ell_{j})$ are determined according to Equation (1), in $O(\ell_{j})$ time. The array $C$ can be maintained in linear total time: all values in $C$ are initially set to $\infty$, and whenever we store the value $h$ in $H[j]$ for some $j$, we set $C[i'] = j$ for all $i'$ such that $\max(\{0\} \cup \{i < j \mid H[i] = h\}) < i' \leq j$, i.e., we update exactly those entries of $C$ that are still $\infty$ up to position $j$. Clearly, each element of $C$ is set at most once. Thus, the total time is $O(n)$.
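The following C++ fragment (ours) isolates the $C$-array bookkeeping from this proof; the member last_h plays the role of $\max(\{0\} \cup \{i < j \mid H[i] = h\})$, so each entry of $C$ is written exactly once.

#include <iostream>
#include <vector>

// Constant-time height checks over a left-to-right growing H (Theorem 4.3).
// C[i] = min position j >= i with H[j] == h (INF if none); then
// max(H[i..j]) + 1 <= h  iff  C[i] > j.
struct HeightChecker {
    static constexpr long long INF = 1LL << 60;
    int h, last_h = 0;                    // last_h: rightmost pos with H == h
    std::vector<int> H;
    std::vector<long long> C;
    HeightChecker(int n, int h) : h(h), H(n + 1, 0), C(n + 1, INF) {}

    bool ok(int i, int j) const { return C[i] > j; }   // all of H[i..j] < h?

    void append_height(int pos, int v) {  // positions arrive left to right
        H[pos] = v;
        if (v == h) {                     // back-fill the run of INF entries
            for (int i = last_h + 1; i <= pos; ++i) C[i] = pos;
            last_h = pos;
        }
    }
};

int main() {
    HeightChecker hc(10, /*h=*/2);
    hc.append_height(1, 0); hc.append_height(2, 1); hc.append_height(3, 2);
    std::cout << hc.ok(1, 2) << ' ' << hc.ok(1, 3) << '\n';   // prints 1 0
}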

4.3 An $O(n\log\sigma)$ time algorithm for LZHB3

Theorem 4.5.

For any integer $h$, an $h$-bounded encoding based on LZHB3 for a string can be computed in $O(n\log\sigma)$ time and $O(n)$ space.

Proof 4.6.

Our algorithm is a slight modification of a folklore algorithm that computes LZ77 in $O(n\log\sigma)$ time based on Ukkonen's online suffix tree construction algorithm [30], described, e.g., by Gusfield [13]. We first describe a simple version which does not allow self-references.

At a high level, when computing the $j$th phrase that starts at position $b_{j}$, we use the suffix tree of $T[1..b_{j})$ so that we can find the longest previously occurring prefix of $T[b_{j}..]$. We can find this by simply traversing the suffix tree from the root with $T[b_{j}..]$ as long as possible (Property 3 in Section 2.1). Once we know the length $\ell_{j}$ of the phrase, we simply append the phrase $T[b_{j}..b_{j}+\ell_{j})$ to the suffix tree to obtain the suffix tree for $T[1..b_{j+1})$, and continue. An occurrence (in particular, the leftmost occurrence) can be obtained easily by storing this information on each edge when it is constructed.

For our height-bounded case, the precise height of a position in a phrase can only be determined after $\ell_{j}$ is determined, and if $\ell_{j} \geq 2$, the leftmost occurrence $s_{j}$ of $T[b_{j}..b_{j}+\ell_{j})$ that satisfies the height constraint must be determined. Since a reference adds $1$ to the height, the height constraint is satisfied if and only if we reference positions that have height less than $h$. Therefore, when adding the symbols of the phrase $T[b_{j}..b_{j}+\ell_{j})$ to the suffix tree after determining their heights, we change symbols corresponding to positions with height $h$ to a special symbol $\$$ which does not match any other symbol. In other words, for $T[1..b_{j})$ and its heights $H[1..b_{j})$, we use and maintain a suffix tree for $T'[1..b_{j})$, where $T'[i] = T[i]$ if $H[i] < h$ and $T'[i] = \$$ otherwise. This simple modification allows us to reference exactly the substrings with heights less than $h$, so that we can find the longest prefix of $T[b_{j}..]$ that occurs in $T[1..b_{j})$ and also satisfies the height constraint, by simply traversing the suffix tree of $T'[1..b_{j})$ (Property 3).
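The substitution itself is tiny; a sketch (ours) over a static string and height array is shown below. In the actual algorithm each symbol is appended right after its height is fixed, and '$' stands for the special non-matching symbol, assumed absent from $T$.

#include <iostream>
#include <string>
#include <vector>

// Mask out positions whose height has reached h, so that suffix tree
// traversals can never extend a match through them (H is 1-indexed).
std::string masked(const std::string& T, const std::vector<int>& H, int h) {
    std::string Tp = T;
    for (std::size_t i = 0; i < Tp.size(); ++i)
        if (H[i + 1] >= h) Tp[i] = '$';
    return Tp;
}

int main() {
    // heights of ababacbabac under the encoding of Figure 1
    std::vector<int> H = {0, 0, 0, 1, 1, 1, 0, 1, 2, 2, 2, 1};
    std::cout << masked("ababacbabac", H, 2) << '\n';   // ababacb$$$c
}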

Next, in order to allow self-references, we utilize the combinatorial observation made previously, that self-references lead to periodicity. Let $x = T[b_{j}..b_{j}+|x|)$ be the longest prefix of $T[b_{j}..n]$ such that there is a non-self-referencing occurrence in $T'[1..b_{j})$. We can find a longer self-referencing prefix of $T[b_{j}..n]$, if it exists, as follows. Any such occurrence must be prefixed by $x$, and in order for the phrase to be self-referencing, this $x$ must occur as a substring in $T[b_{j}-|x|..b_{j}+|x|-1)$. Since the heights after the first period of the phrase are essentially defined as copies of the heights of the first period of the phrase (see Equation (1)), any occurrence of $x$ (and any of its extensions) at or after the leftmost position $t_{j}$ in $[b_{j}-|x|, b_{j})$ such that all values in $H[t_{j}..b_{j})$ are less than $h$, would satisfy the height constraint.

If there is no occurrence of $x$ in $T[\max(t_{j}, b_{j}-|x|)..b_{j}+|x|-1)$, then a longer self-referencing phrase satisfying the height constraint cannot exist, so $\ell_{j} = |x|$. If there is only one such occurrence of $x$, we can use this occurrence and extend the phrase by naive symbol comparisons to obtain $\ell_{j}$. If there is more than one such occurrence, it follows that there are at least three occurrences of $x$ in $T[\max(t_{j}, b_{j}-|x|)..b_{j}+|x|)$, a string of length (at most) $2|x|$. From Lemma 2.2, this implies that the occurrences of $x$ form an arithmetic progression whose common difference is the minimum period of $x$. Thus, extending the phrase as above from any occurrence of $x$ in $T[\max(t_{j}, b_{j}-|x|)..b_{j}+|x|-1)$ will lead to the same maximal length due to the periodicity, i.e., $\ell_{j}$ is the maximum value such that $T[b_{j}..b_{j}+\ell_{j})$ has the same minimum period as $x$. We can find all such occurrences (which obviously include the leftmost) in $O(\ell_{j})$ time using any linear time pattern matching algorithm, e.g., KMP [15].

The time to compute phrase $(\ell_{i}, s_{i})$ is $O(\ell_{i}\log\sigma)$, so we spend $O(n\log\sigma)$ time in total.

Pseudo-code for the algorithm is shown in Algorithm 1 in Appendix B.

4.4 An $O(n\log\sigma)$ time algorithm for LZHB4

Theorem 4.7.

For any integer $h$, an $h$-bounded encoding for a string based on LZHB4 can be computed in $O(n\log\sigma)$ time and $O(n)$ space.

Proof 4.8.

The algorithm is similar to LZHB3, in that it computes the suffix tree of $T'[1..b_{j})$ in an online manner, and after determining the new phrase $T[b_{j}..b_{j}+\ell_{j})$ and its heights $H[b_{j}..b_{j}+\ell_{j})$, the symbols of the new phrase are added to the suffix tree, except when a position has height $h$, in which case $\$$ is added.

In order to determine the LZHB4 phrase, we first compute the LZHB3 phrase, i.e., the longest prefix of $T[b_{j}..n]$ that has a previous occurrence satisfying the height constraint. Let its length be $\ell'$. This implies that any prefix period of $T[b_{j}..n]$ that can be referenced (and satisfies the height constraint) is at most $\ell'$. Since prefix periods are non-decreasing (Lemma 2.1), we have that the LZHB4 phrase $T[b_{j}..b_{j}+\ell_{j})$ is the longest prefix of $T[b_{j}..n]$ with period $p_{j} \leq \ell'$. This can be computed in $O(\ell_{j})$ time (again by Lemma 2.1). The leftmost occurrence of $T[b_{j}..b_{j}+p_{j})$ to be referenced can be found simply by traversing the suffix tree if $p_{j}$ is at most the length of the longest non-self-referencing prefix of the LZHB3 phrase, and otherwise will coincide with (the starting position of) the self-referencing occurrence of the LZHB3 phrase. Since $\ell' \leq \ell_{j}$ must hold as well, each LZHB4 phrase is computed in $O(\ell_{j}\log\sigma)$ time, and thus LZHB4 can be computed in $O(n\log\sigma)$ total time and $O(n)$ space.

4.5 Algorithms for the greedier heuristic

Lipták et al. [18] also propose a greedier heuristic for height-bounded LZ-like encodings, in which, for any phrase, the previous occurrence that minimizes the maximum height is chosen. While a naïve algorithm runs in $\Theta(n^{2})$ time, Lipták et al. propose an algorithm for which they show an upper bound of $O(z'n^{2}\log n) = O(n^{3}\log n)$ time. We show here that LZHB3 and LZHB4 can be modified to support the greedier heuristic in total $O(n\log\sigma + \mathit{occ})$ time, where $\mathit{occ} = O(z'n)$ is the total number of previous occurrences of all phrases.

Suppose we are able to obtain all previous occurrences of the phrases. The array $H$ is not static, but only append operations are performed on it, so a range maximum query data structure for $H$ can be maintained in amortized $O(1)$ time per update and query [12, 29]. Using this, the maximum height over a given occurrence can be checked in amortized $O(1)$ time, and the occurrence giving the smallest maximum height can be found in additional $O(n + \mathit{occ})$ time.

All previous occurrences of a phrase can be obtained in additional $O(n + \mathit{occ})$ total time using standard techniques on the suffix tree. Recall that when computing the $j$th phrase, our algorithm traverses the suffix tree to find a longest non-self-referencing occurrence. The leaves below the reached position contain the occurrences of the phrase, and since any non-leaf node of a suffix tree has at least two children, these leaves can be found in time linear in their number, thus in additional $O(\mathit{occ})$ total time. All self-referencing occurrences can be found in $O(\ell_{j})$ time as described in Section 4.3. A minor detail we have skipped is that since we only have the implicit suffix tree being built via Ukkonen's online construction algorithm, the suffix tree does not have leaves corresponding to suffixes that have a previous occurrence. This can be dealt with as follows. Since Ukkonen's algorithm maintains the longest repeating suffix of the current text as the active point in the suffix tree, we can also maintain some previous occurrence $T[u..v]$ of it. Since any previous occurrence of the phrase contained in the longest repeating suffix can be mapped to an occurrence in $T[u..v]$, we can, given all the occurrences of the phrase in $T[u..v]$ for which there is a corresponding leaf in the suffix tree, find all occurrences in the longest repeating suffix in $O(n + \mathit{occ})$ total time.

4.6 Implementation using suffix arrays

The suffix array [19] of a string $T$ of length $n$ is an array $\mathit{SA}[1..n]$ of integers such that $T[\mathit{SA}[i]..n]$ is the $i$th lexicographically smallest suffix of $T$.

Suffix arrays are well known as a lightweight alternative to suffix trees. While it is not difficult to use suffix arrays in place of suffix trees for static strings [1] by mapping nodes of the suffix tree to ranges in the suffix array, it is not straightforward to do this for LZHB3 and LZHB4, since the algorithms work in an online manner: the string $T'$ for which the suffix tree is maintained is determined during the computation, and is not known in advance. Here, we show how to simulate the algorithm on the suffix tree for $T'$ by using the suffix array $\mathit{SA}$ of $T$, in amortized $O(\log n)$ time per suffix tree operation, thus obtaining an $O(n\log n)$ time algorithm for LZHB3 and LZHB4. The running times of the greedier versions can similarly be bounded by $O(n\log n + \mathit{occ}\log n) = O(n\log n + z'n\log n) = O(n^{2}\log n)$.

In the suffix tree of $T$, the effect of replacing a symbol $T[i]$ at position $i$ with $\$$ can be viewed as an operation that truncates the path of each suffix starting at a position $j < i$ for which $T[j..i)$ does not contain $\$$, to length $i - j$, i.e., to $T[j..i)$. We will maintain an array $L$ of integers, where $L[\mathit{rank}[i]]$ holds the valid length of the suffix starting at position $i$, and $\mathit{rank}[i]$ is the lexicographic rank of the suffix $T[i..n]$, i.e., $\mathit{SA}[\mathit{rank}[i]] = i$. All values are initially set to $0$, indicating that the suffix has not yet been inserted into the suffix tree. During the online construction of the suffix tree of $T'$, when the suffix $T[i..n]$ is added to the suffix tree, we set $L[\mathit{rank}[i]] = n - i + 1$. When a $\$$ symbol is appended, we modify the values in $L$ of the relevant suffixes. Since the value of each position in $L$ is modified at most twice (from $0$ to $n - i + 1$, and then to the length up to the next $\$$), the total number of updates on $L$ is at most $2n$.

The traversal on the suffix tree can then be simulated by a standard search for the lexicographic range in the suffix array of suffixes that are prefixed by the considered substring. This range can be computed in $O(\log n)$ time per symbol by a simple binary search. In order to make sure we do not traverse truncated suffixes, we allow the traversal if and only if there exists at least one suffix in the lexicographic range whose valid length is at least the length of the substring being traversed. This can be checked in $O(\log n)$ time by using a segment tree [3, 7] to represent $L$, which allows updates and range maximum queries on $L$ in $O(\log n)$ time. All occurrences, i.e., suffixes in the range for which the value in $L$ is at least the length of the substring, can be enumerated in $O(\log n)$ time per occurrence, by a common technique [20] that calls range maximum queries recursively on sub-ranges excluding the maximum value, until all ranges contain only values less than the desired length.
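A segment-tree rendering of this check-and-enumerate step is sketched below (our code, not the paper's implementation); pruning every subtree whose maximum is below the sought depth yields the same $O(\log n)$-per-occurrence behavior as the recursive range-maximum technique described above.

#include <algorithm>
#include <iostream>
#include <vector>

// Range-maximum segment tree over the valid-length array L (Section 4.6).
struct SegTree {
    int n; std::vector<int> mx;
    SegTree(int n) : n(n), mx(4 * n, 0) {}
    void set(int node, int lo, int hi, int i, int v) {   // point update
        if (lo == hi) { mx[node] = v; return; }
        int mid = (lo + hi) / 2;
        if (i <= mid) set(2 * node, lo, mid, i, v);
        else set(2 * node + 1, mid + 1, hi, i, v);
        mx[node] = std::max(mx[2 * node], mx[2 * node + 1]);
    }
    // report all i in [l, r] with L[i] >= d: subtrees whose maximum is
    // below d are pruned, so the cost is O(log n) per reported suffix
    void report(int node, int lo, int hi, int l, int r, int d,
                std::vector<int>& out) const {
        if (r < lo || hi < l || mx[node] < d) return;
        if (lo == hi) { out.push_back(lo); return; }
        int mid = (lo + hi) / 2;
        report(2 * node, lo, mid, l, r, d, out);
        report(2 * node + 1, mid + 1, hi, l, r, d, out);
    }
};

int main() {
    std::vector<int> L = {3, 0, 5, 2, 7};      // toy valid lengths
    SegTree st(5);
    for (int i = 0; i < 5; ++i) st.set(1, 0, 4, i, L[i]);
    std::vector<int> out;
    st.report(1, 0, 4, 0, 4, 3, out);          // suffixes valid to depth 3
    for (int i : out) std::cout << i << ' ';   // prints 0 2 4
}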

A minor difference from the suffix tree version is that because we are able to insert the whole suffix starting at a given position rather than just a single symbol at the position, the algorithm naturally handles self-referencing occurrences, and they no longer require special care. Also, while retrieving an occurrence is easy, we note that we can no longer retrieve the leftmost occurrence for the greedy versions in the same time bound; the chosen occurrence will be a prefix of the suffix with the longest valid length.

5 Height bounded LZ-like encodings as repetitiveness measures

Here, we consider the strength of height-bounded encodings as repetitiveness measures [23]. Denote by $\hat{z}_{\mathit{HB}(h)}$ the size of the optimal (smallest) LZHB encoding whose height is at most $h$. It easily follows that $\hat{z}_{\mathit{HB}(h)}$ is monotonically non-increasing in $h$, i.e., for any $h \leq h'$, it holds that $\hat{z}_{\mathit{HB}(h)} \geq \hat{z}_{\mathit{HB}(h')}$. We will also denote by $\hat{\tilde{z}}_{\mathit{HB}(h)}$ the size of the optimal modified LZ-like encoding whose height is at most $h$.

A first, and obvious, observation is that if the height is allowed to grow to $n$, then the sizes of the height-bounded encoding and the LZ parsing are equivalent.

Observation. $\hat{z}_{\mathit{HB}(n)} = z$.

It is also interesting to note that height-bounded and run-length encodings can be related:

Observation. $\hat{\tilde{z}}_{\mathit{HB}(0)}$ is equivalent to the run-length encoding.
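Concretely, each run of a symbol becomes a modified phrase $(\ell, c, 1)$ whose positions all reference a literal root, so the forest height is $0$; a minimal sketch (ours):

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Run-length encoding as height-0 modified phrases (len, symbol, period 1).
int main() {
    std::string T = "aaabbbbc";
    std::vector<std::pair<int, char>> runs;
    for (char ch : T)
        if (!runs.empty() && runs.back().second == ch) ++runs.back().first;
        else runs.push_back({1, ch});
    for (const auto& r : runs)                   // (3,a,1) (4,b,1) (1,c,1)
        std::cout << '(' << r.first << ',' << r.second << ",1) ";
}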

The following relations between $\hat{z}_{\mathit{HB}(h)}$ and the smallest size $\hat{g}_{\mathrm{rl}}$ of RLSLPs (SLPs with run-length rules) and the smallest size $\hat{g}_{it(d)}$ of ISLPs [26] can be shown.

Theorem 5.1.

There exists a constant $c$ such that $\hat{z}_{\mathit{HB}(c\log n)} = O(\hat{g}_{\mathrm{rl}})$ holds.

Theorem 5.2.

There exists a family of strings such that for some constant $c$, $\hat{z}_{\mathit{HB}(c\log n)} = o(\hat{g}_{\mathrm{rl}})$ and $\hat{z}_{\mathit{HB}(c\log n)} = o(\hat{g}_{it(d)})$.

When there is no height constraint, LZHB3 is equivalent to LZ77 and thus gives an optimal LZ-like parsing. For modified LZ-like encodings with no height constraints, LZHB4 can be worse than optimal. For example, for the string $\mathtt{abaxabcdababca}$, greedy gives $\mathtt{a|b|a|x|ab|c|d|abab|c|a}$, while $\mathtt{a|b|a|x|ab|c|d|ab|abca}$ is a slightly smaller parsing. However, we can show that the greedy algorithm has an approximation ratio of at most $2$.

Theorem 5.3.

Let $\tilde{z}$ denote the size of the modified LZ-like encoding generated by LZHB4 with no height constraints, and let $\hat{\tilde{z}} = \hat{\tilde{z}}_{\mathit{HB}(n)}$. Then, $\hat{\tilde{z}} \leq \tilde{z} \leq z \leq 2\hat{\tilde{z}}$.

6 Computational Experiments

We have developed prototype implementations of our algorithms described above in C++. Since Lipták et al. [18] have already demonstrated the effectiveness and superiority of greedy-BATLZ and greedier-BATLZ (which correspond to LZHB3 and the greedier LZHB3) compared to the baselines, and shown that the height can be reduced without increasing the size of the parse too much, our experiments here focus on the performance of our algorithms LZHB3 and LZHB4 in comparison with greedy-BATLZ and greedier-BATLZ. For the BAT-LZ variants, we use the code provided by Lipták et al. (available at https://github.com/fmasillo/BAT-LZ) with slight modifications to fit our test environment. Our implementations of LZHB3 and LZHB4 are available at https://github.com/dscalgo/lzhb/.

Experiments were conducted on a system with dual AMD EPYC 9654 2.4GHz processors and 768GB RAM running RedHat Linux 8. We use the Real subset of the repetitive corpus (https://pizzachili.dcc.uchile.cl/repcorpus.html).

Figure 2 shows the running times for various height constraints of the BAT-LZ variants as well as LZHB3, LZHB3SA, LZHB4, and LZHB4SA, and their greedier versions, where the suffix SA denotes the suffix array implementation. Although greedy-BATLZ is fast when there is no height constraint, its performance seems to diminish rapidly when a height constraint is set. Interestingly, greedier-BATLZ seemed to run faster than greedy-BATLZ. We observe that all versions of our LZHB3 and LZHB4 implementations outperform greedy-BATLZ and greedier-BATLZ by a large margin. For the greedier variants, the running times increase as the height constraint decreases. This is because stricter height constraints generally lead to shorter phrase lengths, and therefore increase $z'$ and $\mathit{occ}$. This becomes more apparent in the suffix array versions, perhaps because the $n$ in the $O(\log n)$ factor for finding the suffix array range is actually the size of the range considered, which becomes larger for shorter phrases.

Figure 3 shows the memory usage of the above-mentioned implementations. The memory usage of LZHB3 and LZHB4 (the suffix tree versions) is smallest for small heights, because smaller height constraints imply that many paths in the suffix tree become truncated. On the other hand, their memory usage is the highest for larger heights. Memory usage of the other versions is unaffected by the height constraint. LZHB3SA and LZHB4SA are always superior to greedy-BATLZ, and the same can be said for the greedier versions and their counterparts.

In summary, our suffix array implementations are always faster and use less memory than their BATLZ counterparts, with the exception of greedy-BATLZ with no height constraints.

Figure 4 shows the sizes of the parsings produced by LZHB3, LZHB4, and their greedier versions. (We also ran experiments for the LZ-End parsing, but do not include the results in the figures because they distort the plots; we include statistics for LZ-End in Table 1 in the Appendix.) Although a phrase of the modified LZ-like encoding is slightly larger than that of the traditional LZ-like encoding since it includes the period, we can see that a much smaller encoding with the same height, which can more than compensate for this increase, can be obtained in some cases. This seemed most prominent for cere, para, and Escherichia_Coli, which are all DNA.


Figure 2: Running times of BAT-LZ and LZHB variants. For each algorithm, the largest height is the height of the parsing obtained with no height constraint. Results for smaller height constraints are computed at intervals of 5, 50, or 100, depending on the data. Missing plots indicate that the computation exceeded 3 hours. See also Fig. 5 in Appendix C.



Figure 3: Memory usage of BAT-LZ and LZHB variants, measured by getrusage. For each algorithm, the largest height is the height of the parsing obtained with no height constraint. Results for smaller height constraints are computed at intervals of 5, 50, or 100, depending on the data. Missing plots indicate that the computation exceeded 3 hours. See Fig. 7 in Appendix C.
Figure 4: Number of phrases of LZHB3, LZHB4, and their greedier variants. See Fig. 9 in Appendix C.

7 Discussion

We have introduced new variants of LZ-like encodings that directly focus on supporting random access to the underlying data. We also proposed a modified LZ-like encoding, which naturally connects the run-length encoding ($h = 0$) and LZ77 ($h = n$). We described linear time algorithms for several greedy variants of such encodings, as well as how to adapt them to support the greedier heuristic. We showed that our algorithms are faster both theoretically and practically compared to the contemporaneously proposed algorithms of Lipták et al. [18]. We also showed some relations between height-bounded encodings and existing repetitiveness measures. In particular, we have shown that the optimal LZ-like encoding with a height constraint of $O(\log n)$ is one of the asymptotically smallest repetitiveness measures achieving fast (i.e., $O(\mathrm{polylog}(n))$-time) access.

There are numerous avenues future work could take. An obvious next step is to engineer actual random-access data structures from height-bounded encodings, which our experiments presented here indicate should be competitive with current state-of-the-art methods. Also, other heuristics for reducing the height while retaining the small size should be explored.

From a theoretical perspective, a natural open problem is whether we can remove the $\log\sigma$ factor from the running times of LZHB3 and LZHB4. The biggest open problem we leave is whether better bounds on the size of height-bounded encodings can be established. Cicalese and Ugazio [8] show that $\hat{z}_{\mathit{HB}(h)} = O(z)$ cannot be achieved for constant $h$. We note that since there exist linear sized data structures that can answer predecessor queries for a set of values in the range $[1, n]$ in $O(\log\log n)$ time [33], even better lower bounds on the height for which $\hat{z}_{\mathit{HB}(h)} = O(z)$ can be shown by using the access time lower bound results of Verbin and Yu [31]. However, these do not rule out $O(\mathrm{polylog}(n))$ height. Can we balance the height (achieve $O(\mathrm{polylog}(n))$ height) of an LZ-like encoding or a modified LZ-like encoding while increasing the size by only a constant factor?

References

  • [1] Mohamed Ibrahim Abouelhoda, Stefan Kurtz, and Enno Ohlebusch. Replacing suffix trees with enhanced suffix arrays. J. Discrete Algorithms, 2(1):53–86, 2004. doi:10.1016/S1570-8667(03)00065-0.
  • [2] Djamal Belazzougui, Dmitry Kosolobov, Simon J. Puglisi, and Rajeev Raman. Weighted ancestors in suffix trees revisited. In Pawel Gawrychowski and Tatiana Starikovskaya, editors, 32nd Annual Symposium on Combinatorial Pattern Matching, CPM 2021, July 5-7, 2021, Wrocław, Poland, volume 191 of LIPIcs, pages 8:1–8:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021. URL: https://doi.org/10.4230/LIPIcs.CPM.2021.8, doi:10.4230/LIPICS.CPM.2021.8.
  • [3] Jon Louis Bentley and Derick Wood. An optimal worst case algorithm for reporting intersections of rectangles. IEEE Trans. Computers, 29(7):571–577, 1980. doi:10.1109/TC.1980.1675628.
  • [4] Jean Berstel and Alessandra Savelli. Crochemore factorization of sturmian and other infinite words. In Rastislav Kralovic and Pawel Urzyczyn, editors, Mathematical Foundations of Computer Science 2006, 31st International Symposium, MFCS 2006, Stará Lesná, Slovakia, August 28-September 1, 2006, Proceedings, volume 4162 of Lecture Notes in Computer Science, pages 157–166. Springer, 2006. doi:10.1007/11821069_14.
  • [5] Philip Bille, Travis Gagie, Inge Li Gørtz, and Nicola Prezza. A separation between RLSLPs and LZ77. J. Discrete Algorithms, 50:36–39, 2018. URL: https://doi.org/10.1016/j.jda.2018.09.002, doi:10.1016/J.JDA.2018.09.002.
  • [6] Moses Charikar, Eric Lehman, Ding Liu, Rina Panigrahy, Manoj Prabhakaran, Amit Sahai, and Abhi Shelat. The smallest grammar problem. IEEE Trans. Inf. Theory, 51(7):2554–2576, 2005. doi:10.1109/TIT.2005.850116.
  • [7] Bernard Chazelle. A functional approach to data structures and its use in multidimensional searching. SIAM J. Comput., 17(3):427–462, 1988. doi:10.1137/0217026.
  • [8] Ferdinando Cicalese and Francesca Ugazio. On the complexity and approximability of bounded access Lempel Ziv coding. CoRR, abs/2403.15871, 2024. URL: https://doi.org/10.48550/arXiv.2403.15871, arXiv:2403.15871, doi:10.48550/ARXIV.2403.15871.
  • [9] Sorin Constantinescu and Lucian Ilie. The Lempel–Ziv complexity of fixed points of morphisms. SIAM J. Discret. Math., 21(2):466–481, 2007. doi:10.1137/050646846.
  • [10] Maxime Crochemore, Lucian Ilie, Costas S. Iliopoulos, Marcin Kubica, Wojciech Rytter, and Tomasz Walen. Computing the longest previous factor. Eur. J. Comb., 34(1):15–26, 2013. URL: https://doi.org/10.1016/j.ejc.2012.07.011, doi:10.1016/J.EJC.2012.07.011.
  • [11] Martin Farach. Optimal suffix tree construction with large alphabets. In 38th Annual Symposium on Foundations of Computer Science, FOCS ’97, Miami Beach, Florida, USA, October 19-22, 1997, pages 137–143. IEEE Computer Society, 1997. doi:10.1109/SFCS.1997.646102.
  • [12] Johannes Fischer. Inducing the lcp-array. In Frank Dehne, John Iacono, and Jörg-Rüdiger Sack, editors, Algorithms and Data Structures - 12th International Symposium, WADS 2011, New York, NY, USA, August 15-17, 2011. Proceedings, volume 6844 of Lecture Notes in Computer Science, pages 374–385. Springer, 2011. doi:10.1007/978-3-642-22300-6_32.
  • [13] Dan Gusfield. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press, 1997. doi:10.1017/CBO9780511574931.
  • [14] Dominik Kempa and Barna Saha. An upper bound and linear-space queries on the LZ-end parsing. In Joseph (Seffi) Naor and Niv Buchbinder, editors, Proceedings of the 2022 ACM-SIAM Symposium on Discrete Algorithms, SODA 2022, Virtual Conference / Alexandria, VA, USA, January 9 - 12, 2022, pages 2847–2866. SIAM, 2022. doi:10.1137/1.9781611977073.111.
  • [15] Donald E. Knuth, James H. Morris Jr., and Vaughan R. Pratt. Fast pattern matching in strings. SIAM J. Comput., 6(2):323–350, 1977. doi:10.1137/0206024.
  • [16] Sebastian Kreft and Gonzalo Navarro. LZ77-like compression with fast random access. In James A. Storer and Michael W. Marcellin, editors, 2010 Data Compression Conference (DCC 2010), 24-26 March 2010, Snowbird, UT, USA, pages 239–248. IEEE Computer Society, 2010. doi:10.1109/DCC.2010.29.
  • [17] Sebastian Kreft and Gonzalo Navarro. On compressing and indexing repetitive sequences. Theor. Comput. Sci., 483:115–133, 2013. URL: https://doi.org/10.1016/j.tcs.2012.02.006, doi:10.1016/J.TCS.2012.02.006.
  • [18] Zsuzsanna Lipták, Francesco Masillo, and Gonzalo Navarro. BAT-LZ out of hell. CoRR, abs/2403.09893, 2024. To appear at CPM 2024. arXiv:2403.09893, doi:10.48550/arXiv.2403.09893.
  • [19] Udi Manber and Gene Myers. Suffix arrays: A new method for on-line string searches. In David S. Johnson, editor, Proceedings of the First Annual ACM-SIAM Symposium on Discrete Algorithms, 22-24 January 1990, San Francisco, California, USA, pages 319–327. SIAM, 1990. URL: http://dl.acm.org/citation.cfm?id=320176.320218.
  • [20] S. Muthukrishnan. Efficient algorithms for document retrieval problems. In David Eppstein, editor, Proceedings of the Thirteenth Annual ACM-SIAM Symposium on Discrete Algorithms, January 6-8, 2002, San Francisco, CA, USA, pages 657–666. ACM/SIAM, 2002. URL: http://dl.acm.org/citation.cfm?id=545381.545469.
  • [21] Gonzalo Navarro and Javiel Rojas-Ledesma. Predecessor search. ACM Comput. Surv., 53(5):article 105, 2020.
  • [22] Gonzalo Navarro. Compact Data Structures - A Practical Approach. Cambridge University Press, 2016.
  • [23] Gonzalo Navarro. Indexing highly repetitive string collections, part I: repetitiveness measures. ACM Comput. Surv., 54(2):29:1–29:31, 2022. doi:10.1145/3434399.
  • [24] Gonzalo Navarro, Carlos Ochoa, and Nicola Prezza. On the approximation ratio of ordered parsings. IEEE Trans. Inf. Theory, 67(2):1008–1026, 2021. doi:10.1109/TIT.2020.3042746.
  • [25] Gonzalo Navarro, Francisco Olivares, and Cristian Urbina. Balancing run-length straight-line programs. In Diego Arroyuelo and Barbara Poblete, editors, String Processing and Information Retrieval - 29th International Symposium, SPIRE 2022, Concepción, Chile, November 8-10, 2022, Proceedings, volume 13617 of Lecture Notes in Computer Science, pages 117–131. Springer, 2022. doi:10.1007/978-3-031-20643-6\_9.
  • [26] Gonzalo Navarro and Cristian Urbina. Iterated straight-line programs. In Proc. LATIN 2024, Lecture Notes in Computer Science, 2024.
  • [27] Takaaki Nishimoto, Tomohiro I, Shunsuke Inenaga, Hideo Bannai, and Masayuki Takeda. Fully dynamic data structure for LCE queries in compressed space. In 41st International Symposium on Mathematical Foundations of Computer Science, MFCS 2016, August 22-26, 2016 - Kraków, Poland, volume 58 of LIPIcs, pages 72:1–72:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2016.
  • [28] Wojciech Plandowski and Wojciech Rytter. Application of Lempel-Ziv encodings to the solution of words equations. In Kim Guldstrand Larsen, Sven Skyum, and Glynn Winskel, editors, Automata, Languages and Programming, 25th International Colloquium, ICALP’98, Aalborg, Denmark, July 13-17, 1998, Proceedings, volume 1443 of Lecture Notes in Computer Science, pages 731–742. Springer, 1998.
  • [29] Yohei Ueki, Diptarama, Masatoshi Kurihara, Yoshiaki Matsuoka, Kazuyuki Narisawa, Ryo Yoshinaka, Hideo Bannai, Shunsuke Inenaga, and Ayumi Shinohara. Longest common subsequence in at least k length order-isomorphic substrings. In Bernhard Steffen, Christel Baier, Mark van den Brand, Johann Eder, Mike Hinchey, and Tiziana Margaria, editors, SOFSEM 2017: Theory and Practice of Computer Science - 43rd International Conference on Current Trends in Theory and Practice of Computer Science, Limerick, Ireland, January 16-20, 2017, Proceedings, volume 10139 of Lecture Notes in Computer Science, pages 363–374. Springer, 2017. doi:10.1007/978-3-319-51963-0\_28.
  • [30] Esko Ukkonen. On-line construction of suffix trees. Algorithmica, 14(3):249–260, 1995. doi:10.1007/BF01206331.
  • [31] Elad Verbin and Wei Yu. Data structure lower bounds on random access to grammar-compressed strings. In Johannes Fischer and Peter Sanders, editors, Combinatorial Pattern Matching, 24th Annual Symposium, CPM 2013, Bad Herrenalb, Germany, June 17-19, 2013. Proceedings, volume 7922 of Lecture Notes in Computer Science, pages 247–258. Springer, 2013. doi:10.1007/978-3-642-38905-4\_24.
  • [32] Peter Weiner. Linear pattern matching algorithms. In 14th Annual Symposium on Switching and Automata Theory, Iowa City, Iowa, USA, October 15-17, 1973, pages 1–11. IEEE Computer Society, 1973.
  • [33] Dan E. Willard. Log-logarithmic worst-case range queries are possible in space theta(n). Inf. Process. Lett., 17(2):81–84, 1983. doi:10.1016/0020-0190(83)90075-3.
  • [34] Jacob Ziv and Abraham Lempel. A universal algorithm for sequential data compression. IEEE Trans. Inf. Theory, 23(3):337–343, 1977. doi:10.1109/TIT.1977.1055714.

Appendix A Omitted Proofs

Observation. The height of the optimal LZ-like encoding (LZ77) can be Θ(n)\Theta(n).

Proof A.1.

Consider the height of 𝚋\mathtt{b} in the string:

𝚊𝚋𝚊𝚋𝚌𝚍𝚋𝚌#𝚍𝚋𝚎𝚏𝚋𝚎#𝚏𝚋𝚐𝚑𝚋𝚐##ck𝚋ck+1ck+2𝚋ck+1#ck+2𝚋ck+3ck+4𝚋ck+3#\mathtt{ababcdbc\#dbefbe\#fbghbg\#\dots}\#c_{k}\mathtt{b}c_{k+1}c_{k+2}\mathtt{b}c_{k+1}\#c_{k+2}\mathtt{b}c_{k+3}c_{k+4}\mathtt{b}c_{k+3}\#\dots

It is not difficult to see that each occurrence of 𝚋\mathtt{b} references the closest previously occurring 𝚋\mathtt{b}. Each new occurrence of 𝚋\mathtt{b} therefore extends the reference chain by one, so the height of the last 𝚋\mathtt{b} is proportional to the number of occurrences of 𝚋\mathtt{b}, which is Θ(n)\Theta(n).
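To make the observation concrete, here is a minimal Python sketch (ours; not part of the paper's implementation, and deliberately naive) that computes a greedy LZ-like parse by brute force together with the per-position heights it induces. Even a unary string already exhibits height linear in n under such a parsing.

    def lz77_greedy(s):
        # Greedy LZ-like parse: each phrase is a literal (length 1) or the longest
        # prefix of the remaining suffix with an earlier, possibly overlapping,
        # occurrence; ties are broken toward the leftmost source.
        n = len(s)
        phrases = []  # (start, length, source); source = -1 marks a literal
        i = 0
        while i < n:
            best_len, best_src = 0, -1
            for j in range(i):  # brute-force scan over candidate sources
                l = 0
                while i + l < n and s[j + l] == s[i + l]:  # self-reference allowed
                    l += 1
                if l > best_len:
                    best_len, best_src = l, j
            if best_len >= 2:
                phrases.append((i, best_len, best_src))
                i += best_len
            else:
                phrases.append((i, 1, -1))
                i += 1
        return phrases

    def heights(s, phrases):
        # Literal positions have height 0; a copied position is one above its source.
        h = [0] * len(s)
        for start, length, src in phrases:
            if src >= 0:
                for k in range(length):
                    h[start + k] = h[src + k] + 1  # src + k < start + k, already set
        return h

    s = "a" * 32
    p = lz77_greedy(s)           # [(0, 1, -1), (1, 31, 0)]
    print(max(heights(s, p)))    # 31, i.e., n - 1: height grows linearly with n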

See 5.1

Proof A.2.

It is known that a given RLSLP can be balanced, i.e., transformed into another RLSLP whose parse tree height is O(logn)O(\log n), without changing the asymptotic size of the RLSLP [25].

Theorem A.3 (Theorem 1 in [25]).

Given an RLSLP GG generating a string ww, it is possible to construct an equivalent balanced RLSLP GG^{\prime} of size O(|G|)O(|G|), in linear time, with only rules of the form AaA\rightarrow a, ABCA\rightarrow BC, and ABtA\rightarrow B^{t}, where aa is a terminal, BB and CC are variables, and t>2t>2.

Given an RLSLP with parse tree height hh, a partial parse tree of the RLSLP, as defined in Theorem 5 of [24], can be obtained. Based on this partial parse tree, it is possible to obtain an LZ-like encoding of size at most that of the run-length grammar (see the toy illustration below). The height of the LZ-like encoding corresponds to the height of the parse tree, and thus is hh-bounded. Since, by Theorem A.3, we can obtain a grammar of size O(g^rl)O(\hat{g}_{\mathrm{rl}}) with height O(logn)O(\log n), the size z^𝐻𝐵(clogn)\hat{z}_{\mathit{HB}(c\log n)} of an optimal clognc\log n-bounded LZ-like encoding is O(g^rl)O(\hat{g}_{\mathrm{rl}}), where cc is the constant incurred by the height of the balanced RLSLP obtained by balancing the optimal run-length grammar.
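As a toy illustration of this conversion (ours, not taken from [24] or [25]): the SLP with rules S → CC, C → BB, B → AA, A → a generates a^8 with a parse tree of height 4. In the partial parse tree, only the leftmost occurrence of each variable is expanded; every later occurrence is pruned and becomes a phrase that copies the expansion of the leftmost occurrence. Reading the pruned tree from left to right yields the LZ-like encoding (1, a), (1, a), (2, 1), (4, 1): two literals, a copy of length 2 from position 1, and a copy of length 4 from position 1. Each copied position references the corresponding position below the leftmost occurrence of the same variable, so the referencing height (here the maximum position height is 2) is bounded by the parse tree height.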

See 5.2

Proof A.4.

The proof follows similar arguments to those of Bille et al. [5]. In their proof, they consider a string (originally used by Charikar et al. [6])

s^=t(k1)#1t(k2)#2#p1t(kp)\hat{s}=t(k_{1})\#_{1}t(k_{2})\#_{2}\cdots\#_{p-1}t(k_{p})

where t(k)t(k) denotes the length-kk prefix of the Thue-Morse word, the #i\#_{i} are distinct symbols, and k1,k2,,kpk_{1},k_{2},\ldots,k_{p} is an integer sequence in which k1k_{1} is the largest of the kik_{i} and p=Θ(logk1)p=\Theta(\log k_{1}). They show that the integers kik_{i} can be chosen so that the smallest RLSLP for s^\hat{s} has size Ω(log2k1loglogk1)\Omega(\frac{\log^{2}{k_{1}}}{\log\log k_{1}}). They further observe that the size of an LZ77 parsing of the length-kk prefix of the Thue-Morse word is O(logk)O(\log k) [4, 9]. Since k1k_{1} is the largest of the kik_{i}, the size of the LZ77 parsing of s^\hat{s} is O(logk1+p)=O(logk1)O(\log k_{1}+p)=O(\log k_{1}) for this family of strings.

Now, observe that the height, as defined in Equation (1), of an LZ-like encoding of size zz^{\prime} is O(z)O(z^{\prime}), since any position in a phrase references a position in an earlier phrase. Therefore, for some constant cc, we have z^𝐻𝐵(clogn)=O(logk1)\hat{z}_{\mathit{HB}(c\log n)}=O(\log k_{1}) as well, thus showing that z^𝐻𝐵(clogn)=o(g^rl)\hat{z}_{\mathit{HB}(c\log n)}=o(\hat{g}_{\mathrm{rl}}).

Since Thue-Morse words are cube-free, the extra rules in the iterated SLPs do not improve the asymptotic size compared to RLSLPs, and thus, as shown in Lemma 3 of [26], git(d)=g^rlg_{it(d)}=\hat{g}_{\mathrm{rl}} for this family of strings. Therefore, we have z^𝐻𝐵(clogn)=o(git(d))\hat{z}_{\mathit{HB}(c\log n)}=o(g_{it(d)}).

See 5.3

Proof A.5.

It is obvious that z~^z~\hat{\tilde{z}}\leq\tilde{z}. We first claim that z~z\tilde{z}\leq z. A phrase starting at position ii in the (greedy) LZHB4 parsing is at least as long as lpf(i)\mathrm{lpf}(i), so every LZHB4 phrase extends at least to the end of the LZ77 phrase in which it starts. We can therefore assign to each LZHB4 phrase a distinct LZ77 phrase (the one it starts in), and the number of LZHB4 phrases cannot exceed the number of LZ77 phrases.

Next, we claim that z2z~z\leq 2\tilde{z}^{\prime} for any modified LZ-like encoding with no height constraint, whose size we denote by z~\tilde{z}^{\prime}. This holds because at most two LZ77 phrases can start inside any phrase of a modified LZ-like encoding. Let xkxx^{k}x^{\prime} be such a phrase, starting at position ii, with minimum period |x||x| and with xx^{\prime} a prefix of xx. Since xx has a previous occurrence, the first LZ77 phrase that starts inside this phrase must extend at least to the end of the first xx. Since every suffix of xk1xx^{k-1}x^{\prime} also has a previous occurrence (namely, |x||x| positions earlier), the next LZ77 phrase extends at least to the end of the phrase; see the worked example below. The theorem follows from the above arguments by taking z~=z~^\tilde{z}^{\prime}=\hat{\tilde{z}}.
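For a concrete instance of this counting argument (ours, for illustration only): take x = ab, k = 3, and x' = a, so the modified phrase is x^3 x' = abababa, starting at position i with an occurrence of ab somewhere before i. The first LZ77 phrase starting inside abababa extends at least to the end of the leading ab, because what it copies is a substring of x = ab, which occurs before i. The next LZ77 phrase then begins inside the remainder ababa = x^2 x', each of whose suffixes occurs |x| = 2 positions earlier, so it extends, self-referentially, at least to the end of abababa. Hence at most two LZ77 phrases start inside this phrase.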

Appendix B Pseudocode

Function LZHB3(T, h)
      Let 𝒯 be a suffix tree of ε;
      Initialize H, ℰ;
      i = 1, n = |T|, t = 0;
      while i ≤ n do
            // T[i..i+ℓ) is the longest prefix of T[i..n] occurring in T′[1..i),
            // and its leftmost occurrence is s.
            (ℓ, s) = 𝒯.prefq(T[i..n]);
            if ℓ > 0 then // check for a longer self-referencing occurrence
                  k = leftmost occurrence of T[i..i+ℓ) in T[max(t, i-ℓ)..i+ℓ-1);
                  if k ≠ nil then
                        if T[k+ℓ] = T[i+ℓ] then s = k;
                        while T[s+ℓ] = T[i+ℓ] do ℓ = ℓ+1;
            if ℓ ≤ 1 then
                  Add (1, T[i]) to ℰ;
                  H[i] = 0;
                  if h > 0 then 𝒯.append(T[i]);
                  else 𝒯.append($);
                  i = i+1;
            else
                  S = T[i..i+ℓ);
                  Add (ℓ, s) to ℰ;
                  for j = 0 to ℓ-1 do
                        H[i+j] = H[s + (j mod (i-s))] + 1;
                        if H[i+j] = h then
                              S[j+1] = $;
                              t = i+j;
                  𝒯.append(S);
                  i = i+ℓ;
      return ℰ;

Algorithm 1: O(n log σ) time algorithm for computing LZHB3
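For experimentation, the following Python sketch (ours; the function name lzhb_greedy is not from the paper) mirrors the masking idea of Algorithm 1 in simplified form: it replaces the suffix tree with brute-force search, omits the self-referencing extension step, and thus runs in quadratic time rather than O(n log σ). It assumes the sentinel $ does not occur in T.

    def lzhb_greedy(T, h):
        # Greedy height-bounded LZ parsing (simplified sketch of LZHB3).
        # Positions whose height reaches h are masked with '$' in the searchable
        # copy Tp, so no later phrase can use them as part of a copy source.
        n = len(T)
        Tp = list(T)   # T' of Algorithm 1: masked positions become '$'
        H = [0] * n    # H[p] = height of position p in the referencing forest
        E = []         # encoding: (1, char) for literals, (length, source) for copies
        i = 0
        while i < n:
            best_len, src = 0, -1
            for j in range(i):  # leftmost source wins ties
                l = 0
                # j + l < i: no self-referencing occurrences in this sketch
                while i + l < n and j + l < i and Tp[j + l] == T[i + l]:
                    l += 1
                if l > best_len:
                    best_len, src = l, j
            if best_len >= 2:                     # copy phrase
                E.append((best_len, src))
                for k in range(best_len):
                    H[i + k] = H[src + k] + 1     # src + k < i, so H there is final
                    if H[i + k] == h:
                        Tp[i + k] = '$'           # height budget exhausted: mask
                i += best_len
            else:                                 # literal phrase, height 0
                E.append((1, T[i]))
                H[i] = 0
                if h == 0:
                    Tp[i] = '$'                   # with h = 0 even literals are unusable
                i += 1
        return E, H

    E, H = lzhb_greedy("abracadabraabracadabra", 3)
    print(len(E), max(H))   # number of phrases and the maximum height (at most 3)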

Appendix C More results of computational experiments

Figure 5: Running times of BAT-LZ and LZHB variants. The dotted horizontal lines correspond to 1, 2, and 3 hours, respectively. For each algorithm, the largest height shown is the height of the parsing obtained with no height constraint, which is always computed. The results for smaller height constraints are computed at intervals of 5, 50, or 100, depending on the data. Missing plots indicate that the computation exceeded 3 hours, which occurred only for the BAT-LZ variants, most prominently on cere, para, and einstein.en.txt.
Figure 6: (contd.) Running times of BAT-LZ and LZHB variants. The dotted horizontal lines correspond to 1, 2, and 3 hours, respectively. For each algorithm, the largest height shown is the height of the parsing obtained with no height constraint, which is always computed. The results for smaller height constraints are computed at intervals of 5, 50, or 100, depending on the data. Missing plots indicate that the computation exceeded 3 hours, which occurred only for the BAT-LZ variants, most prominently on cere, para, and einstein.en.txt.
Figure 7: Memory usage of BAT-LZ and LZHB variants. For each algorithm, the largest height shown is the height of the parsing obtained with no height constraint, which is always computed. The results for smaller height constraints are computed at intervals of 5, 50, or 100, depending on the data. Missing plots indicate that the computation exceeded 3 hours, which occurred only for the BAT-LZ variants, most prominently on cere, para, and einstein.en.txt.
Figure 8: (contd.) Memory usage of BAT-LZ and LZHB variants. For each algorithm, the largest height shown is the height of the parsing obtained with no height constraint, which is always computed. The results for smaller height constraints are computed at intervals of 5, 50, or 100, depending on the data. Missing plots indicate that the computation exceeded 3 hours, which occurred only for the BAT-LZ variants, most prominently on cere, para, and einstein.en.txt.
Figure 9: Number of phrases of LZHB3 and LZHB4 and their greedier variants.
Figure 10: (contd.) Number of phrases of LZHB3 and LZHB4 and their greedier variants.
Table 1: The number of factors (phrases) and the maximum height of the LZ-End parsing for each of our data sets.
Dataset Max. Height # Factors
Escherichia_Coli 27 2212539
cere 256 1863246
coreutils 174 1555394
einstein.de.txt 60 39587
einstein.en.txt 117 104087
influenza 79 919565
kernel 45 868362
para 258 2539381
world_leaders 102 207269