

  • Benjamin Bergougnoux, Institute of Informatics, University of Warsaw, Poland; benjamin.bergougnoux@mimuw.edu.pl
  • Vera Chekan, Humboldt-Universität zu Berlin, Germany; vera.chekan@informatik.hu-berlin.de; https://orcid.org/0000-0002-6165-1566. Supported by the DFG Research Training Group 2434 “Facets of Complexity.”
  • Robert Ganian, Algorithms and Complexity Group, TU Wien, Vienna, Austria; rganian@gmail.com; https://orcid.org/0000-0002-7762-8045. Project No. Y1329 of the Austrian Science Fund (FWF), WWTF Project ICT22-029.
  • Mamadou Moustapha Kanté, Université Clermont Auvergne, Clermont Auvergne INP, LIMOS, CNRS, Clermont-Ferrand, France; mamadou.kante@uca.fr; https://orcid.org/0000-0003-1838-7744. Supported by the French National Research Agency (ANR-18-CE40-0025-01 and ANR-20-CE48-0002).
  • Matthias Mnich, Hamburg University of Technology, Institute for Algorithms and Complexity, Hamburg, Germany; matthias.mnich@tuhh.de; https://orcid.org/0000-0002-4721-5354
  • Sang-il Oum, Discrete Mathematics Group, Institute for Basic Science (IBS), Daejeon, Korea, and Department of Mathematical Sciences, KAIST, Daejeon, Korea; sangil@ibs.re.kr; https://orcid.org/0000-0002-6889-7286. Supported by the Institute for Basic Science (IBS-R029-C1).
  • Michał Pilipczuk, Institute of Informatics, University of Warsaw, Poland; michal.pilipczuk@mimuw.edu.pl. This work is a part of project BOBR that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 948057).
  • Erik Jan van Leeuwen, Dept. Information and Computing Sciences, Utrecht University, The Netherlands; e.j.vanleeuwen@uu.nl; https://orcid.org/0000-0001-5240-7257

Copyright: Benjamin Bergougnoux, Vera Chekan, Robert Ganian, Mamadou M. Kanté, Matthias Mnich, Sang-il Oum, Michał Pilipczuk, and Erik Jan van Leeuwen

Acknowledgements.
This work was initiated at the Graph Decompositions: Small Width, Big Challenges workshop held at the Lorentz Center in Leiden, The Netherlands, in 2022.

2012 ACM Subject Classification: Theory of computation → Parameterized complexity and exact algorithms

Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth

Benjamin Bergougnoux    Vera Chekan    Robert Ganian    Mamadou Moustapha Kanté    Matthias Mnich    Sang-il Oum    Michał Pilipczuk    Erik Jan van Leeuwen
Abstract

Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even if we consider concepts as simple as path or tree decompositions, such dynamic programming uses space that is exponential in the decomposition’s width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms which achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone.

Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth. Here, shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is a deciding factor for improving the space complexity. More precisely, we prove that on $n$-vertex graphs equipped with a tree-model (a decomposition notion underlying shrubdepth) of depth $d$ and using $k$ labels,

  • Independent Set can be solved in time $2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)}$ using $\mathcal{O}(dk^{2}\log n)$ space;

  • Max Cut can be solved in time $n^{\mathcal{O}(dk)}$ using $\mathcal{O}(dk\log n)$ space; and

  • Dominating Set can be solved in time $2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)}$ using $n^{\mathcal{O}(1)}$ space via a randomized algorithm.

We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity has to grow with $d$ if one wishes to keep the space complexity polynomial.

keywords:
Parameterized complexity, shrubdepth, space complexity, algebraic methods

1 Introduction

Treewidth and Treedepth.  Dynamic programming on graph decompositions is a fundamental method in the design of parameterized algorithms. Among various decomposition notions, tree decompositions, which underlie the parameter treewidth, are perhaps the most widely used; see e.g. [14, 18] for an introduction. A tree decomposition of a graph $G$ of width $k$ provides a way to “sweep” $G$ while keeping track of at most $k+1$ “interface vertices” at a time. This can be used for dynamic programming: during the sweep, the algorithm maintains a set of representative partial solutions within the part already swept, one for each possible behavior of a partial solution on the interface vertices. Thus, the width of the decomposition is the key factor influencing the number of partial solutions that need to be stored.

In a vast majority of applications, this number of different partial solutions depends (at least) exponentially on the width $k$ of the decomposition, which often leads to time complexity of the form $f(k)\cdot n^{\mathcal{O}(1)}$ for an exponential function $f$. This should not be surprising, as most problems where this technique is used are $\mathsf{NP}$-hard. Unfortunately, the space complexity—which often appears to be the true bottleneck in practice—is also exponential. There is a simple tradeoff trick, first observed by Lokshtanov et al. [38], which can often be used to reduce the space complexity to polynomial at the cost of increasing the time complexity. For instance, Independent Set can be solved in $2^{k}\cdot n^{\mathcal{O}(1)}$ time and using $2^{k}\cdot n^{\mathcal{O}(1)}$ space on an $n$-vertex graph equipped with a width-$k$ tree decomposition via dynamic programming [26]; combining this algorithm with a simple recursive Divide&Conquer scheme yields an algorithm with running time $2^{\mathcal{O}(k^{2})}\cdot n^{\mathcal{O}(1)}$ and space complexity $n^{\mathcal{O}(1)}$.

Allender et al. [2] and then Pilipczuk and Wrochna [45] studied the question whether the loss on the time complexity is necessary if one wants to achieve polynomial space complexity in the context of dynamic programming on tree decompositions. While the formal formulation of their results is somewhat technical and complicated, the take-away message is the following: there are good complexity-theoretical reasons to believe that even in the simpler setting of path decompositions, one cannot achieve algorithms with polynomial space complexity whose running times asymptotically match the running times of their exponential-space counterparts. We refer to the works [2, 45] for further details.

However, starting with the work of Fürer and Yu [27], a long line of advances [33, 40, 41, 45] showed that bounding the depth, rather than the width, of a decomposition leads to the possibility of designing algorithms that are both time- and space-efficient. To this end, we consider the treedepth of a graph $G$, which is the least possible depth of an elimination forest: a forest $F$ on the vertex set of $G$ such that every two vertices adjacent in $G$ are in the ancestor/descendant relation in $F$. An elimination forest of depth $d$ can be regarded as a tree decomposition of depth $d$, and thus treedepth is the bounded-depth analogue of treewidth. As shown in [27, 33, 41, 45], for many classic problems, including 3-Coloring, Independent Set, Dominating Set, and Hamiltonicity, it is possible to design algorithms with running time $2^{\mathcal{O}(d)}\cdot n^{\mathcal{O}(1)}$ and polynomial space complexity, assuming the graph is supplied with an elimination forest of depth $d$. In certain cases, the space complexity can even be as low as $\mathcal{O}(d+\log n)$ or $\mathcal{O}(d\log n)$ [45]. Typically, the main idea is to reformulate the classic bottom-up dynamic programming approach so that it can be replaced by a simple top-down recursion. This reformulation is by no means easy—it often involves a highly non-trivial use of algebraic transforms or other tools of algebraic flavor, such as inclusion-exclusion branching.

Cliquewidth and Shrubdepth. In this work, we are interested in the parameter cliquewidth and its low-depth counterpart: shrubdepth. While treewidth applies only to sparse graphs, cliquewidth is a notion of tree-likeness suited for dense graphs as well. The decompositions underlying cliquewidth are called clique expressions [13]. A clique expression is a term operating over $k$-labelled graphs—graphs where every vertex is assigned one of $k$ labels—and the allowed operations are: (i) apply any renaming function to the labels; (ii) make a complete bipartite graph between two given labels; and (iii) take the disjoint union of two $k$-labelled graphs. Then the cliquewidth of $G$ is the least number of labels using which (some labelling of) $G$ can be constructed. Similarly to treewidth, dynamic programming over clique expressions can be used to solve a wide range of problems, in particular all problems expressible in $\mathsf{MSO}_{1}$ logic, in $\mathsf{FPT}$ time when parameterized by cliquewidth. Furthermore, while several problems involving edge selection or edge counting, such as Hamiltonicity or Max Cut, remain $\mathsf{W}[1]$-hard under the cliquewidth parameterization [23, 24], standard dynamic programming still allows us to solve them in $\mathsf{XP}$ time. In this sense, cliquewidth can be seen as the “least restrictive” general-purpose graph parameter which allows for efficient dynamic programming algorithms where the decompositions can also be computed efficiently [25]. Nevertheless, since the cliquewidth of a graph is at most as large as its linear cliquewidth, which in turn is (up to an additive constant) at most as large as its pathwidth, the lower bounds of Allender et al. [2] and of Pilipczuk and Wrochna [45] carry over to the cliquewidth setting. Hence, reducing the space complexity to polynomial requires a sacrifice in the time complexity.

Shrubdepth, introduced by Ganian et al. [30], is a variant of cliquewidth where we stipulate the decomposition to have bounded depth. This necessitates altering the set of operations used in clique expressions in order to allow taking disjoint unions of multiple graphs as a single operation. In this context, we call the decompositions used for shrubdepth $(d,k)$-tree-models, where $d$ stands for the depth and $k$ for the number of labels used; a formal definition is provided in Section 2. Shrubdepth appears to be a notion of depth that is sound from the model-theoretic perspective, is $\mathsf{FPT}$-time computable [28], and has become an important concept in the logic-based theory of well-structured dense graphs [19, 20, 29, 30, 42, 43].

Since shrubdepth is a bounded-depth analogue of cliquewidth in the same way as treedepth is a bounded-depth analogue of treewidth, it is natural to ask whether for graphs from classes of bounded shrubdepth, or more concretely, for graphs admitting $(d,k)$-tree-models where both $d$ and $k$ are considered parameters, one can design space-efficient $\mathsf{FPT}$ algorithms. Exploring this question is the topic of this work.

Our contribution. We consider three example problems: Independent Set, Max Cut, and Dominating Set. For each of them we show that on graphs supplied with $(d,k)$-tree-models where $d=\mathcal{O}(1)$, one can design space-efficient fixed-parameter algorithms whose running times asymptotically match the running times of their exponential-space counterparts working on general clique expressions. While we focus on the three problems mentioned above for concreteness, we in fact provide a more general algebraic framework, inspired by the work on the treedepth parameterization [27, 33, 40, 41, 45], that can be applied to a wider range of problems. Once the depth $d$ is not considered a constant, the running times of our algorithms increase with $d$. To mitigate this concern, we give a conditional lower bound showing that this is likely to be necessary if one wishes to keep the space complexity polynomial.

Recall that standard dynamic programming solves the Independent Set problem in time $2^{k}\cdot n^{\mathcal{O}(1)}$ and space $2^{k}\cdot n^{\mathcal{O}(1)}$ on a graph constructed by a clique expression of width $k$ [26]. Our first contribution is to show that on graphs with $(d,k)$-tree-models, the space complexity can be reduced to as low as $\mathcal{O}(dk^{2}\cdot\log n)$ at the cost of allowing time complexity $2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)}$. In fact, we tackle the more general problem of computing the independent set polynomial.
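For concreteness, the independent set polynomial of a graph $G$ is $\sum_{p\geqslant 0}i_{p}(G)\,x^{p}$, where $i_{p}(G)$ denotes the number of independent sets of size $p$ in $G$; for instance, for the path on three vertices it equals $1+3x+x^{2}$, with the empty set contributing the constant term.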

Theorem 1.1.

There is an algorithm which takes as input an $n$-vertex graph $G$ along with a $(d,k)$-tree model of $G$, runs in time $2^{\mathcal{O}(kd)}\cdot n^{\mathcal{O}(1)}$ and uses at most $\mathcal{O}(dk^{2}\log n)$ space, and computes the independent set polynomial of $G$.

The idea of the proof of Theorem 1.1 is to reorganize the computation of the standard bottom-up dynamic programming by applying the zeta-transform to the computed tables. This allows a radical simplification of the way a dynamic programming table for a node is computed from the tables of its children, so that the whole dynamic programming can be replaced by top-down recursion. Applying just this yields an algorithm with space polynomial in $n$. We reduce the space to $\mathcal{O}(dk^{2}\log n)$ by computing the result modulo several small primes, and using space-efficient Chinese remaindering. This is inspired by the algorithm for Dominating Set on graphs of small treedepth of Pilipczuk and Wrochna [45].

In fact, the technique used to prove Theorem 1.1 is much more general and can be used to tackle all coloring-like problems of local character. We formalize those under a single umbrella by solving the problem of counting List $H$-homomorphisms (for an arbitrary but fixed pattern graph $H$), for which we provide an algorithm with the same complexity guarantees as those of Theorem 1.1. The concrete problems captured by this framework include, e.g., Odd Cycle Transversal and $q$-Coloring for a fixed constant $q$; details are provided in Section 3.2.

Next, we turn our attention to the Max Cut problem. This problem is $\mathsf{W}[1]$-hard when parameterized by cliquewidth, but it admits a simple $n^{\mathcal{O}(k)}$-time algorithm on $n$-vertex graphs provided with clique expressions of width $k$ [24]. Our second contribution is a space-efficient counterpart of this result for graphs equipped with bounded-depth tree-models.

Theorem 1.2.

There is an algorithm which takes as input an $n$-vertex graph $G$ along with a $(d,k)$-tree model of $G$, runs in time $n^{\mathcal{O}(dk)}$ and uses at most $\mathcal{O}(dk\log n)$ space, and solves the Max Cut problem in $G$.

Upon closer inspection, the standard dynamic programming for Max Cut on clique expressions solves a Subset Sum-like problem whenever it aggregates the dynamic programming tables of the children to compute the table of their parent. We apply the approach of Kane [36] that was used to solve Unary Subset Sum in logarithmic space: we encode the aforementioned Subset Sum-like problem as computing the product of polynomials, and use Chinese remaindering to compute this product in a space-efficient way.
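As a minimal sketch of this encoding (in Python, using hypothetical per-child value sets rather than actual Max Cut tables): the number of ways to pick one value from each child with a prescribed total equals a coefficient of the product of the children's generating polynomials. The actual algorithm never stores these coefficient vectors; it evaluates the product pointwise modulo small primes and extracts the required coefficient via Chinese remaindering (cf. Theorem 3.4 below).

from itertools import product as cartesian

def poly_mul(P, Q):
    """Multiply two polynomials given as {exponent: coefficient} dicts."""
    R = {}
    for e1, c1 in P.items():
        for e2, c2 in Q.items():
            R[e1 + e2] = R.get(e1 + e2, 0) + c1 * c2
    return R

def ways_via_polynomials(vals, total):
    # generating polynomial of child j: sum over v in vals[j] of x^v
    prod = {0: 1}
    for child in vals:
        prod = poly_mul(prod, {v: 1 for v in child})
    return prod.get(total, 0)

def ways_brute_force(vals, total):
    return sum(1 for choice in cartesian(*vals) if sum(choice) == total)

vals = [[0, 2, 3], [1, 4], [0, 1, 2]]   # hypothetical per-child value sets
for t in range(10):
    assert ways_via_polynomials(vals, t) == ways_brute_force(vals, t)
print("polynomial-product encoding agrees with brute force")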

Finally, we consider the Dominating Set problem, for which we prove the following.

Theorem 1.3.

There is a randomized algorithm which takes as input an $n$-vertex graph $G$ along with a $(d,k)$-tree model of $G$, runs in time $2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)}$ and uses at most $\mathcal{O}(dk^{2}\log n+n\log n)$ space, and reports the minimum size of a dominating set in $G$; the reported value is correct with probability at least $1/2$.

Note that the algorithm of Theorem 1.3 is randomized and uses much more space than our previous algorithms: more than $n\log n$. The reason for this is that we use the inclusion-exclusion approach proposed very recently by Hegerfeld and Kratsch [34], which is able to count dominating sets only modulo $2$. Consequently, while the parity of the number of dominating sets of a certain size can be computed in space $\mathcal{O}(dk^{2}\log n)$, to determine the existence of such dominating sets we use the Isolation Lemma and compute the parity of the number of dominating sets for every possible weight. This introduces randomization and necessitates sampling—and storing—a weight function. At this point we do not know how to remove either the randomization or the super-linear space complexity in Theorem 1.3; we believe this is an excellent open problem.
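The following toy sketch (brute force over a 5-cycle, not the algorithm of Theorem 1.3) illustrates the detection strategy just described: sample random vertex weights, compute for each weight the parity of the number of dominating sets of the given size, and report existence if some parity is odd; by the Isolation Lemma this succeeds with probability at least $1/2$ per weight sampling.

import random
from itertools import combinations

def dominates(graph, S):
    covered = set(S)
    for v in S:
        covered |= graph[v]
    return covered == set(graph)

def exists_dominating_set(graph, s, trials=5):
    # Parity of the raw count may be 0 even when solutions exist; random weights
    # isolate a minimum-weight solution with probability >= 1/2, so some
    # (size, weight) class then has odd parity.  The real algorithm computes
    # these parities in small space; here we brute-force them.
    n = len(graph)
    for _ in range(trials):
        w = {v: random.randint(1, 2 * n) for v in graph}   # random weights
        parity = {}                                        # weight -> parity of count
        for S in combinations(graph, s):
            if dominates(graph, S):
                W = sum(w[v] for v in S)
                parity[W] = parity.get(W, 0) ^ 1
        if any(parity.values()):       # some weight class has an odd count
            return True
    return False                       # may err with probability <= 2^-trials

# 5-cycle on vertices 0..4: dominating sets of size 2 exist, e.g. {0, 2}
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(exists_dominating_set(C5, 2))    # True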

Note that in all the algorithms presented above, the running times contain an additional factor $d$ in the exponent compared to the standard (exponential-space) dynamic programming on clique expressions. The following conditional lower bound shows that some additional dependency on the depth is indeed necessary; the relevant precise definitions are provided in Section 4.

Theorem 1.4.

Suppose Longest Common Subsequence cannot be solved in time $M^{f(r)}$ and space $f(r)\cdot M^{\mathcal{O}(1)}$ for any computable function $f$, even if the length $t$ of the sought subsequence is bounded by $\delta(N)$ for any unbounded computable function $\delta$; here $r$ is the number of strings on input, $N$ is the common length of each string, and $M$ is the total bitsize of the instance. Then for every unbounded computable function $\delta$, there is no algorithm that solves the Independent Set problem in graphs supplied with $(d,k)$-tree-models satisfying $d\leqslant\delta(k)$ that would run in time $2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}$ and simultaneously use $n^{\mathcal{O}(1)}$ space.

The possibility of achieving time- and space-efficient algorithms for Longest Common Subsequence was also the basis of the conjectures formulated by Pilipczuk and Wrochna [45] for their lower bounds against time- and space-efficient algorithms on graphs of bounded pathwidth. The supposition made in Theorem 1.4 is a refined version of those conjectures that also takes the length of the sought subsequence into account. The reduction underlying Theorem 1.4 is loosely inspired by the constructions of [45], but requires new ideas due to the different setting of tree-models of low depth.

Finally, given that the above results point to a fundamental role of shrubdepth in terms of space complexity, it is natural to ask whether shrubdepth can also be used to obtain meaningful tractability results with respect to the “usual” notion of fixed-parameter tractability. We conclude our exposition by highlighting two examples of problems which are $\mathsf{NP}$-hard on graphs of bounded cliquewidth (and even of bounded pathwidth) [12, 37], and yet which admit fixed-parameter algorithms when parameterized by the shrubdepth.

Theorem 1.5.

Metric Dimension and Firefighter can be solved in fixed-parameter time on graphs supplied with $(d,k)$-tree-models, where $d$ and $k$ are considered the parameters.

2 Preliminaries

For a positive integer $k$, we write $[k]=\{1,\ldots,k\}$ and $[k]_{0}=[k]\cup\{0\}$. For a function $f\colon A\to B$ and elements $a,b$ (not necessarily from $A\cup B$), the function $f[a\mapsto b]\colon A\cup\{a\}\to B\cup\{b\}$ is given by $f[a\mapsto b](x)=f(x)$ for $x\neq a$ and $f[a\mapsto b](a)=b$. We use standard graph terminology [17].

We use the same computational model as Pilipczuk and Wrochna [45], namely the RAM model where each operation takes time polynomially proportional to the number of bits of the input, and the space is measured in terms of bits. We say that an algorithm $A$ runs in time $t(n)$ and space $s(n)$ if, for every input of size $n$, the number of operations of $A$ is bounded by $t(n)$ and the auxiliary space used by $A$ is bounded by $s(n)$ bits.

Shrubdepth.

We first introduce the decomposition notion for shrubdepth: tree-models.

Definition 2.1.

For $d,k\in\mathbb{N}$, a $(d,k)$-tree-model $(T,\mathcal{M},\mathcal{R},\lambda)$ of a graph $G$ is a rooted tree $T$ of depth $d$ together with a family of symmetric Boolean $k\times k$ matrices $\mathcal{M}=\{M_{a}\}_{a\in V(T)}$, a labeling function $\lambda\colon V(G)\to[k]$, and a family of renaming functions $\mathcal{R}=\{\rho_{ab}\}_{ab\in E(T)}$ with $\rho_{ab}\colon[k]\to[k]$ for all $ab\in E(T)$ such that:

  • The leaves of $T$ are identified with the vertices of $G$. For each node $a$ of $T$, we denote by $V_{a}\subseteq V(G)$ the leaves of $T$ that are descendants of $a$, and with $G_{a}=G[V_{a}]$ we denote the subgraph induced by these vertices.

  • With each node $a$ of $T$ we associate a labeling function $\lambda_{a}\colon V_{a}\to[k]$ defined as follows. If $a$ is a leaf, then $\lambda_{a}(a)=\lambda(a)$. If $a$ is a non-leaf node, then for every child $b$ of $a$ and every vertex $v\in V_{b}$, we have $\lambda_{a}(v)=\rho_{ab}(\lambda_{b}(v))$.

  • For every pair of vertices $(u,v)$ of $G$, let $a$ denote their least common ancestor in $T$. Then we have $uv\in E(G)$ if and only if $M_{a}[\lambda_{a}(u),\lambda_{a}(v)]=1$.

We introduce some notation. If $(T,\mathcal{M},\mathcal{R},\lambda)$ is a $(d,k)$-tree model of a graph $G$, then for every node $a$ of $T$ and every $i\in[k]$, let $V_{a}(i)=\lambda_{a}^{-1}(i)$ be the set of vertices labeled $i$ at $a$. Given a subset $X$ of $V_{a}$ and $i\in[k]$, let $X_{a}(i)=X\cap V_{a}(i)$ be the vertices of $X$ labeled $i$ at $a$.

A $(d,k)$-tree-model can be understood as a term of depth $d$ that constructs a $k$-labelled graph from single-vertex graphs by means of the following operations: renaming of the labels, and joining several labelled graphs while introducing edges between vertices originating from different parts based on their labels. This makes tree-models much closer to the NLC-decompositions which underlie the parameter NLC-width than to clique expressions. NLC-width is a graph parameter introduced by Wanke [46] that can be seen as an alternative, functionally equivalent variant of cliquewidth.
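As a small illustration of Definition 2.1, the following sketch (in Python, with a hypothetical encoding of nodes, matrices $M_{a}$, and renaming functions $\rho_{ab}$) decides adjacency by propagating a leaf's label through the renamings up to the least common ancestor and looking up the matrix stored there; it builds a two-label tree-model of the path on three vertices. Labels are 0-based here for convenience.

class Node:
    def __init__(self, children=None, M=None, label=None, renames=None):
        self.children = children or []   # child Nodes
        self.M = M                       # k x k adjacency matrix (internal nodes)
        self.label = label               # initial label lambda(v) (leaves only)
        self.renames = renames or []     # renames[i]: label map applied to child i
        self.parent = None
        for c in self.children:
            c.parent = self

def path_to_root(v):
    path = [v]
    while path[-1].parent is not None:
        path.append(path[-1].parent)
    return path

def label_at(v, a):
    """lambda_a(v): the label of leaf v as seen at ancestor a."""
    lab, node = v.label, v
    while node is not a:
        parent = node.parent
        rho = parent.renames[parent.children.index(node)]
        lab, node = rho[lab], parent
    return lab

def adjacent(u, v):
    ancestors_u = path_to_root(u)
    a = next(x for x in path_to_root(v) if x in ancestors_u)  # least common ancestor
    return a.M[label_at(u, a)][label_at(v, a)] == 1

# A two-label tree-model of the path u - v - w: the matrices connect label 0 to
# label 1, and all renaming functions are the identity.
u, v, w = Node(label=0), Node(label=1), Node(label=0)
inner = Node(children=[u, v], M=[[0, 1], [1, 0]],
             renames=[{0: 0, 1: 1}, {0: 0, 1: 1}])
root = Node(children=[inner, w], M=[[0, 1], [1, 0]],
            renames=[{0: 0, 1: 1}, {0: 0, 1: 1}])
print(adjacent(u, v), adjacent(v, w), adjacent(u, w))   # True True False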

We say that a class $\mathcal{C}$ of graphs has shrubdepth $d$ if there exists $k\in\mathbb{N}$ such that every graph in $\mathcal{C}$ admits a $(d,k)$-tree-model. Thus, shrubdepth is a parameter of a graph class, rather than of a single graph; though there are functionally equivalent notions, such as SC-depth [30] or rank-depth [16], that are suited for the treatment of single graphs.

We remark that in the original definition proposed by Ganian et al. [30], there is no renaming of the labels: for every vertex $u\in V(G)$, $\lambda_{a}(u)$ is always the same label $\lambda(u)$ for all relevant nodes $a$. This boils down to all the renaming functions $\rho_{ab}$ being equal to the identity function on $[k]$. Clearly, a $(d,k)$-tree-model in the sense of Ganian et al. is also a $(d,k)$-tree-model in our sense, while a $(d,k)$-tree-model in our sense can be easily turned into a $(d,k^{d+1})$-tree-model in the sense of Ganian et al. by setting $\lambda(u)$ to be the $(d+1)$-tuple consisting of the labels $\lambda_{a}(u)$, for $a$ ranging over the ancestors of $u$ in $T$. Thus, using either definition yields the same notion of shrubdepth for graph classes. We choose to use the definition with renaming, as it provides more flexibility in the construction of tree-models that can result in a smaller number of labels and, consequently, better running times. It is also closer to the original definitions of clique expressions or NLC-decompositions.

Within this work we will always assume that a $(d,k)$-tree-model of the considered graph is provided on input. Thus, we abstract away the complexity of computing tree-models, but let us briefly discuss this problem. Gajarský and Kreutzer [28] gave an algorithm that, given a graph $G$ and parameters $d$ and $k$, computes a $(d,k)$-tree-model of $G$ (in the sense of Ganian et al. [30]), if there exists one, in time $f(d,k)\cdot n^{\mathcal{O}(1)}$ for a computable function $f$. The approach of Gajarský and Kreutzer is essentially kernelization: they iteratively “peel off” isomorphic parts of the graph until the problem is reduced to a kernel of size bounded only in terms of $d$ and $k$. This kernel is then treated by any brute-force method. Consequently, a straightforward inspection of the algorithm of [28] shows that it can be implemented so that it uses polynomial space; but not space of the form $(d+k)^{\mathcal{O}(1)}\cdot\log n$, due to the necessity of storing all the intermediate graphs in the kernelization process.

Cover products and transforms.

We now recall the algebraic tools we are going to use. Let $U$ be a finite set and $R$ be a ring. Let $g_{1},\dots,g_{t}\colon 2^{U}\to R$ be set functions, for some integer $t$. For every $i\in[t]$, the zeta-transform $\xi g_{i}\colon 2^{U}\to R$ of $g_{i}$ is defined by

\[ (\xi g_{i})(Y)=\sum_{X\subseteq Y}g_{i}(X), \]

and similarly, the Möbius-transform $\mu g_{i}\colon 2^{U}\to R$ of $g_{i}$ is given by

\[ (\mu g_{i})(Y)=\sum_{X\subseteq Y}(-1)^{|Y\setminus X|}g_{i}(X). \]

The cover product $g_{1}\ast_{c}g_{2}\ast_{c}\dots\ast_{c}g_{t}\colon 2^{U}\to R$ of $g_{1},\dots,g_{t}$ is defined by

\[ (g_{1}\ast_{c}g_{2}\ast_{c}\dots\ast_{c}g_{t})(Y)=\sum_{\substack{X_{1},\dots,X_{t}\subseteq U\colon\\ X_{1}\cup\dots\cup X_{t}=Y}}g_{1}(X_{1})\cdot g_{2}(X_{2})\cdot\dots\cdot g_{t}(X_{t}). \]

We emphasize that, unlike in the well-known concept of subset convolution, here the sets $X_{1},\dots,X_{t}$ are not required to be pairwise disjoint. The following result of Björklund et al. [6] will be relevant for us:

Lemma 2.2 ([6]).

Let $U$ be a finite set, $R$ be a ring, and $g_{1},\dots,g_{t}\colon 2^{U}\to R$ be set functions for a positive integer $t$. Then for every $X\in 2^{U}$, it holds that

\[ (\xi(g_{1}\ast_{c}g_{2}\ast_{c}\dots\ast_{c}g_{t}))(X)=(\xi g_{1})(X)\cdot(\xi g_{2})(X)\cdot\dots\cdot(\xi g_{t})(X). \]

Also, for every $i\in[t]$, we have $\mu(\xi(g_{i}))=g_{i}$.
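As a quick sanity check of Lemma 2.2 (not part of the algorithm), the following Python snippet verifies both identities on a three-element universe with randomly chosen integer-valued set functions; subsets are encoded as bitmasks.

from itertools import product
import random

SUBSETS = range(1 << 3)                       # all subsets of U = {0, 1, 2}

def zeta(g):
    # (zeta g)(Y) = sum over X subset of Y of g(X)
    return {Y: sum(g[X] for X in SUBSETS if X & ~Y == 0) for Y in SUBSETS}

def mobius(g):
    # (mu g)(Y) = sum over X subset of Y of (-1)^{|Y \ X|} * g(X)
    return {Y: sum((-1) ** bin(Y & ~X).count("1") * g[X]
                   for X in SUBSETS if X & ~Y == 0) for Y in SUBSETS}

def cover_product(gs):
    # (g_1 *_c ... *_c g_t)(Y) = sum over X_1 u ... u X_t = Y of prod g_j(X_j)
    h = {Y: 0 for Y in SUBSETS}
    for Xs in product(SUBSETS, repeat=len(gs)):
        val, union = 1, 0
        for g, X in zip(gs, Xs):
            val, union = val * g[X], union | X
        h[union] += val
    return h

random.seed(0)
gs = [{X: random.randint(-3, 3) for X in SUBSETS} for _ in range(3)]

lhs = zeta(cover_product(gs))
rhs = {Y: 1 for Y in SUBSETS}
for g in gs:
    zg = zeta(g)
    rhs = {Y: rhs[Y] * zg[Y] for Y in SUBSETS}

assert lhs == rhs                              # first identity of Lemma 2.2
assert all(mobius(zeta(g)) == g for g in gs)   # mu(zeta(g)) = g
print("Lemma 2.2 verified on a 3-element universe")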

3 Space-Efficient Algorithms on Tree-Models

3.1 Independent Set

In this section, we provide a fixed-parameter algorithm computing the independent set polynomial of a graph in time $2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)}$ and using $\mathsf{poly}(d,k)\log n$ space, when given a $(d,k)$-tree model. In particular, given a $(d,k)$-tree model $(T,\mathcal{M},\mathcal{R},\lambda)$ of an $n$-vertex graph $G$, our algorithm allows us to compute the number of independent sets of size $p$ for each $p\in[n]$. For simplicity of presentation, we start by describing an algorithm that uses $\mathsf{poly}(d,k,n)$ space and then show how a result by Pilipczuk and Wrochna [45] can be applied to decrease the space complexity to $\mathsf{poly}(d,k)\log n$.

In order to simplify forthcoming definitions and statements, let $a$ be an internal node of $T$ with $b_{1},\ldots,b_{t}$ as children. For $S\subseteq[k]$, we denote by $q(a,S,p)$ the number of independent sets $I$ of size $p$ of $G_{a}$ such that $S=\{i\in[k]\,:\,I_{a}(i)\neq\emptyset\}$. Let us define the polynomial

\[ \mathsf{IS}(a,S)=\sum_{p\in\mathbb{N}}q(a,S,p)\cdot x^{p}. \]

For the root $r$ of $T$, the number of independent sets of $G$ of size $p$ is then given by

\[ \sum_{S\subseteq[k]}q(r,S,p), \]

and the independent set polynomial of $G$ is

\[ \sum_{S\subseteq[k]}\mathsf{IS}(r,S). \]

Therefore, the problem boils down to the computation of $\mathsf{IS}(r,S)$ and its coefficients $q(r,S,p)$. A usual way to obtain a polynomial- or logarithmic-space algorithm is a top-down traversal of a rooted tree-like representation of the input—in our case, this will be the tree model. In this top-down traversal, the computation of the coefficients $q(a,S,p)$ of $\mathsf{IS}(a,S)$ makes some requests to the coefficients $q(b_{i},S_{i},p_{i})$ of $\mathsf{IS}(b_{i},S_{i})$ for each $i\in[t]$, for some integer $p_{i}$ and some set $S_{i}$ of labels of $G_{b_{i}}$, so that $\sum_{i\in[t]}p_{i}=p$ and $\bigcup_{i\in[t]}\rho_{ab_{i}}(S_{i})=S$. Since there are exponentially many (in $t$) possible partitions of $p$ into $t$ integers and $t$ can be $\Theta(n)$, we must avoid running over all such integer partitions, and this will be done by the fast computation of a certain subset cover.

We will later show that if some independent set of $G_{a}$ contains vertices of labels $i$ and $j$ with $M_{a}[i,j]=1$, then all these vertices come from the same child of $a$. In particular, the vertices of label $i$ (resp. $j$) cannot come from multiple children of $a$. To implement this observation, after fixing a set $S$ of labels, for each label class in $S$ we “guess” (i.e., branch on) whether it will come from a single child of $a$ or from many. Such a guess is denoted by $\alpha\colon S\to\{1_{=},2_{\geqslant}\}$. So, the assignment $\alpha$ will allow us to control the absence of edges in the sought-after independent set. For a fixed $\alpha$, naively branching over all possibilities of assigning the labels of $S$ to the children of $a$ with respect to $\alpha$ would take time exponential in $t$, which could be as large as $\Theta(n)$. We will use inclusion-exclusion branching to speed up the computation while retaining the space complexity. In some sense, we will first allow less restricted assignments of labels to the children of $a$, and then filter out those that result in non-independent sets using the construction of a certain auxiliary graph. The former will be implemented by using “less restricted” guesses $\beta\colon S\to\{1_{=},1_{\geqslant}\}$ where $1_{\geqslant}$ reflects that vertices of the corresponding label come from at least one child of $a$. Note that if the vertices of some label $i$ come from exactly one child of $a$, then such an independent set satisfies both $\beta(i)=1_{=}$ and $\beta(i)=1_{\geqslant}$. Although it might seem counterintuitive, this type of guess will enable the fast computation of a certain subset cover. After that, we will be able to compute the number of independent sets satisfying guesses of type $\alpha\colon S\to\{1_{=},2_{\geqslant}\}$ by observing that independent sets where some label $i$ occurs in at least two children of $a$ can be obtained by counting those where label $i$ occurs in at least one child and subtracting those where this label occurs in exactly one child.

We now proceed to a formalization of the above. Let $S\subseteq\lambda_{a}(V_{a})$ and $\alpha\colon S\to\{1_{=},2_{\geqslant}\}$ be fixed. Let $s_{1},\ldots,s_{|\alpha^{-1}(2_{\geqslant})|}$ be an arbitrary linear ordering of $\alpha^{-1}(2_{\geqslant})$. To compute the number of independent sets that match our choice of $\alpha$, we proceed by iterating over $c\in\{0,\ldots,|\alpha^{-1}(2_{\geqslant})|\}$, and we count independent sets where the labels in $\{s_{1},\dots,s_{c}\}$ occur in exactly one child, as well as the number of such sets where these labels occur in at least one child. Later, we will obtain the desired number of independent sets via carefully subtracting these two values. In particular, let $\gamma\colon\{s_{1},\ldots,s_{c}\}\to\{1_{=},1_{\geqslant}\}$; we denote by $q(a,S,\alpha,c,\gamma,p)$ the number of independent sets $I$ of size $p$ of $G_{a}$ such that

  • for every label $i\notin S$, we have $I_{a}(i)=\emptyset$;

  • for every label $i\in\{s_{1},\dots,s_{c}\}$ with $\gamma(i)=1_{=}$, there exists a unique child $b_{j}$ of $a$ such that $I_{a}(i)\cap V_{b_{j}}\neq\emptyset$;

  • for every label $i\in\{s_{1},\dots,s_{c}\}$ with $\gamma(i)=1_{\geqslant}$, there exists at least one child $b_{j}$ of $a$ such that $I_{a}(i)\cap V_{b_{j}}\neq\emptyset$;

  • for every label $i\in S\setminus\{s_{1},\dots,s_{c}\}$ with $\alpha(i)=1_{=}$, there exists a unique child $b_{j}$ of $a$ such that $I_{a}(i)\cap V_{b_{j}}\neq\emptyset$;

  • and for every label $i\in S\setminus\{s_{1},\dots,s_{c}\}$ with $\alpha(i)=2_{\geqslant}$, there exist at least two children $b_{j_{1}}$ and $b_{j_{2}}$ of $a$ such that $I_{a}(i)\cap V_{b_{j_{1}}}\neq\emptyset$ and $I_{a}(i)\cap V_{b_{j_{2}}}\neq\emptyset$.

Then, for $c\in[|\alpha^{-1}(2_{\geqslant})|]_{0}$, we define the polynomial $T(a,S,\alpha,c,\gamma)\in\mathbb{Z}[x]$ as

\[ T(a,S,\alpha,c,\gamma)=\sum_{p\in\mathbb{N}_{0}}q(a,S,\alpha,c,\gamma,p)\,x^{p}. \]

We now proceed with some observations that directly follow from the definitions.

Observation.

For every $S\subseteq\lambda_{a}(V_{a})$ and every integer $p$, we have

\[ q(a,S,p)=\sum_{\substack{\alpha\in\{1_{=},2_{\geqslant}\}^{S},\\ \gamma\in\{1_{=},1_{\geqslant}\}^{\emptyset}}}q(a,S,\alpha,0,\gamma,p) \]

and hence,

\[ \mathsf{IS}(a,S)=\sum_{\substack{\alpha\in\{1_{=},2_{\geqslant}\}^{S}\\ \gamma\in\{1_{=},1_{\geqslant}\}^{\emptyset}}}T(a,S,\alpha,0,\gamma) \tag{1} \]

Moreover, for every $\alpha\in\{1_{=},2_{\geqslant}\}^{S}$, every $c\in\{0,\ldots,|\alpha^{-1}(2_{\geqslant})|-1\}$, and every $\gamma\colon\{s_{1},\dots,s_{c}\}\to\{1_{=},1_{\geqslant}\}$, we have

\[ q(a,S,\alpha,c,\gamma,p)=q(a,S,\alpha,c+1,\gamma[s_{c+1}\mapsto 1_{\geqslant}],p)-q(a,S,\alpha,c+1,\gamma[s_{c+1}\mapsto 1_{=}],p), \]

and hence

\[ T(a,S,\alpha,c,\gamma)=T(a,S,\alpha,c+1,\gamma[s_{c+1}\mapsto 1_{\geqslant}])-T(a,S,\alpha,c+1,\gamma[s_{c+1}\mapsto 1_{=}]). \tag{2} \]

It remains then to show how to compute, for every $\alpha\in\{1_{=},2_{\geqslant}\}^{S}$ and every $\gamma\in\{1_{=},1_{\geqslant}\}^{\alpha^{-1}(2_{\geqslant})}$, the polynomial $T(a,S,\alpha,|\alpha^{-1}(2_{\geqslant})|,\gamma)$. It is worth mentioning that if $\beta\colon S\to\{1_{=},1_{\geqslant}\}$ is such that $\beta^{-1}(1_{=})=\alpha^{-1}(1_{=})\cup\gamma^{-1}(1_{=})$ and $\beta^{-1}(1_{\geqslant})=\alpha^{-1}(2_{\geqslant})\setminus\gamma^{-1}(1_{=})$, then $q(a,S,\alpha,|\alpha^{-1}(2_{\geqslant})|,\gamma,p)$ is exactly the number of independent sets $I$ of size $p$ of $G_{a}$ satisfying the following:

  1. For every $i\in[k]\setminus S$, we have $I_{a}(i)=\emptyset$.

  2. For every $i\in\beta^{-1}(1_{=})$, there exists a unique index $j\in[t]$ such that $I_{a}(i)\cap V_{b_{j}}\neq\emptyset$.

  3. For every $i\in\beta^{-1}(1_{\geqslant})$, there exists a (not necessarily unique) index $j\in[t]$ such that $I_{a}(i)\cap V_{b_{j}}\neq\emptyset$.

We will therefore write $q(a,S,\beta,p)$ instead of $q(a,S,\alpha,|\alpha^{-1}(2_{\geqslant})|,\gamma,p)$, and we define the polynomial $\mathsf{TIS}(a,S,\beta)\in\mathbb{Z}[x]$ (where “T” stands for “transformed”) as $\mathsf{TIS}(a,S,\beta)=\sum_{p\in\mathbb{N}}q(a,S,\beta,p)\cdot x^{p}$. Recall that because we are computing $\mathsf{IS}(a,S)$ and $\mathsf{TIS}(a,S,\beta)$ in a top-down manner, some queries for $\mathsf{IS}(b_{i},S_{i})$ will be made during the computation. Before continuing with the computation of $\mathsf{TIS}(a,S,\beta)$, let us first explain how to request the polynomials $\mathsf{IS}(b_{j},S_{j})$ from each child $b_{j}$ of $a$. If $a$ is not the root, let $a^{*}$ be its parent in $T$, and we use $\mathsf{PIS}(a,S)$ (where “P” stands for “parent”) to denote the polynomial

\[ \mathsf{PIS}(a,S)=\sum_{p\in\mathbb{N}_{0}}q^{\rho}(a,S,p)\cdot x^{p}, \qquad\text{where}\qquad q^{\rho}(a,S,p)=\sum_{\substack{D\subseteq\lambda_{a}(V_{a})\colon\\ \rho_{a^{*}a}(D)=S}}q(a,D,p) \]

is the number of independent sets $I$ of $G_{a}$ of size $p$ that contain a vertex with label $i\in[k]$ (i.e., $I_{a^{*}}(i)\neq\emptyset$) if and only if $i\in S$ holds, where the labels are treated with respect to $\lambda_{a^{*}}$. Then it holds that

\[ \mathsf{PIS}(a,S)=\sum_{\substack{D\subseteq\lambda_{a}(V_{a})\colon\\ \rho_{a^{*}a}(D)=S}}\mathsf{IS}(a,D). \tag{3} \]

As our next step, we make some observations that will not only allow us to restrict the $\beta$’s we will need when computing the polynomial $\mathsf{IS}(a,S)$ from the polynomials $\mathsf{TIS}(a,S,\beta)$, but will also motivate the forthcoming definitions. Recall that we have fixed $S\subseteq\lambda_{a}(V_{a})$ and $\beta\colon S\to\{1_{=},1_{\geqslant}\}$, and that in $\mathsf{IS}(a,S)$ and $\mathsf{TIS}(a,S,\beta)$ we are only counting independent sets $I$ such that $I_{a}(i)\neq\emptyset$ if and only if $i\in S$.

Observation.

If there exist $i_{1},i_{2}\in S$ such that $M_{a}[i_{1},i_{2}]=1$, then for any independent set $I$ counted in $\mathsf{IS}(a,S)$, there exists a unique $j\in[t]$ such that $I_{a}(i_{1})\cup I_{a}(i_{2})\subseteq V_{b_{j}}$.

Proof 3.1.

Both $I_{a}(i_{1})$ and $I_{a}(i_{2})$ are non-empty. So if there are at least two distinct $j_{1}$ and $j_{2}$ in $[t]$ such that $I_{1}\coloneqq I_{a}(i_{1})\cap V_{b_{j_{1}}}$ and $I_{2}\coloneqq I_{a}(i_{2})\cap V_{b_{j_{2}}}$ are non-empty, then $M_{a}[i_{1},i_{2}]=1$ implies that there is a complete bipartite graph between $I_{1}$ and $I_{2}$. Hence the graph induced on $I$ would contain an edge, which is a contradiction.

Recall that for every label $i\in\alpha^{-1}(2_{\geqslant})$, each independent set $I$ contributing to the value $q(a,S,\alpha,0,\gamma,p)$ has the property that there are distinct children $b_{j_{1}}$ and $b_{j_{2}}$ such that $I_{a}(i)\cap V_{b_{j_{1}}}$ and $I_{a}(i)\cap V_{b_{j_{2}}}$ are both non-empty. Then, by the observation above, for every $i_{1}\in S$ it holds that if $\alpha(i_{1})=2_{\geqslant}$, then $M_{a}[i_{1},i_{2}]=0$ for all $i_{2}\in S$. So if $\alpha$ does not satisfy this, the request $T(a,S,\alpha,0,\gamma)$ can be directly answered with $0$. Otherwise, since the recursive requests are made via (1) and (2), all requests $\mathsf{TIS}(a,S,\beta)$ that arise have the property that for each $i_{1}\in S$ the following holds: if $\beta(i_{1})=1_{\geqslant}$, then $M_{a}[i_{1},i_{2}]=0$ for all $i_{2}\in S$. We call such $\beta$’s conflict-free and we restrict ourselves to conflict-free $\beta$’s only. In other words, we may assume that if $i_{1},i_{2}\in S$ and $M_{a}[i_{1},i_{2}]=1$, then we have $\beta(i_{1})=\beta(i_{2})=1_{=}$. The same observation implies that for such $i_{1}$ and $i_{2}$, each independent set $I$ counted in $\mathsf{TIS}(a,S,\beta)$ satisfies $I_{a}(i_{1})\cup I_{a}(i_{2})\subseteq V_{b_{j}}$ for some child $b_{j}$ of $a$. Now, to capture this observation, we define an auxiliary graph $F^{a,\beta}$ as follows. The vertex set of $F^{a,\beta}$ is $\beta^{-1}(1_{=})$ and there is an edge between vertices $i_{1}\neq i_{2}$ if and only if $M_{a}[i_{1},i_{2}]=1$. Thus, by the above observation, if we consider a connected component $C$ of $F^{a,\beta}$, then in each independent set $I$ counted in $\mathsf{TIS}(a,S,\beta)$, all the vertices of $I$ with labels from $C$ come from a single child of $a$.

Observation.

Let $C$ be a connected component of $F^{a,\beta}$. For every independent set $I$ counted in $\mathsf{TIS}(a,S,\beta)$, there exists a unique $j\in[t]$ such that $\bigcup_{i\in C}I_{a}(i)\subseteq V_{b_{j}}$.

We proceed with some intuition on how we compute $\mathsf{TIS}(a,S,\beta)$ by requesting some $\mathsf{PIS}(b_{j},S_{j})$. Let $I$ be some independent set counted in $\mathsf{TIS}(a,S,\beta)$. This set contains vertices with labels from the set $S$, and the assignment $\beta$ determines whether there is exactly one or at least one child from which the vertices of a certain label come. Moreover, by the observation above, for two labels $i_{1},i_{2}$ from the same connected component of $F^{a,\beta}$, the vertices with labels $i_{1}$ and $i_{2}$ in $I$ come from the same child of $a$. Hence, to count such independent sets, we have to consider all ways to assign labels from $S$ to subsets of children of $a$ such that the above properties are satisfied—namely, each connected component of $F^{a,\beta}$ is assigned to exactly one child while every label from $\beta^{-1}(1_{\geqslant})$ is assigned to at least one child. Since the number of such assignments can be exponential in $n$, we employ the fast computation of a certain subset cover.

We now formalize this step. Let $\mathsf{cc}(F^{a,\beta})$ denote the set of connected components of $F^{a,\beta}$. The universe $U^{a,\beta}$ (i.e., the set of objects we assign to the children of $a$) is defined as $U^{a,\beta}=\beta^{-1}(1_{\geqslant})\cup\mathsf{cc}(F^{a,\beta})$. For every $j\in[t]$, we define a mapping $f_{j}^{a,\beta}\colon 2^{U^{a,\beta}}\to\mathbb{Z}[x,z]$ (i.e., to polynomials over $x$ and $z$) as follows: $f_{j}^{a,\beta}(X)=\mathsf{PIS}(b_{j},\mathsf{flat}^{a,\beta}(X))\cdot z^{|X\cap\mathsf{cc}(F^{a,\beta})|}$, where $\mathsf{flat}^{a,\beta}\colon 2^{U^{a,\beta}}\to 2^{S}$ intuitively performs a union over all the present labels—formally:

\[ \mathsf{flat}^{a,\beta}(W)=(W\cap\beta^{-1}(1_{\geqslant}))\cup\bigcup_{w\in W\cap\mathsf{cc}(F^{a,\beta})}w. \]

So if we fix the set $X$ of labels coming from the child $b_{j}$, then the coefficient of each power of $x$ in $f_{j}^{a,\beta}(X)$ reflects the number of independent sets of $G_{b_{j}}$ of that size using exactly these labels (with respect to $\lambda_{a}$). The exponent of the formal variable $z$ is intended to store the number of connected components of $F^{a,\beta}$ assigned to $b_{j}$. This will later allow us to exclude from the computation those assignments of labels from $S$ to children of $a$ where the elements of some connected component of $F^{a,\beta}$ are assigned to multiple children of $a$. For every $j\in[t]$, we define a similar function $g^{a,\beta}_{j}\colon 2^{S}\to\mathbb{Z}[x,z]$ as follows:

\[ g^{a,\beta}_{j}(Y)=\begin{cases}f^{a,\beta}_{j}(X)&\text{if }\mathsf{flat}^{a,\beta}(X)=Y\text{ for some }X\in 2^{U^{a,\beta}},\\ 0&\text{otherwise.}\end{cases} \]

Observe that the function $\mathsf{flat}^{a,\beta}$ is injective and hence $g^{a,\beta}_{j}$ is well-defined. The mapping $g^{a,\beta}_{j}$ filters out those assignments where some connected component of $F^{a,\beta}$ is “split”. For simplicity of notation, when $a$ and $\beta$ are clear from the context, we omit the superscript $a,\beta$.

Crucially for our algorithm, we claim that the following holds:

\[ \mathsf{TIS}(a,S,\beta)=\left(\sum_{\substack{X_{1},\dots,X_{t}\subseteq[k]\colon\\ X_{1}\cup\dots\cup X_{t}=S}}g_{1}(X_{1})\,g_{2}(X_{2})\cdots g_{t}(X_{t})\right)\langle z^{|\mathsf{cc}(F)|}\rangle, \]

where for a polynomial $P=\sum_{u_{1},u_{2}\in\mathbb{N}_{0}}q_{u_{1},u_{2}}x^{u_{1}}z^{u_{2}}\in\mathbb{Z}[x,z]$, the polynomial $P\langle z^{|\mathsf{cc}(F)|}\rangle\in\mathbb{Z}[x]$ is defined as $P\langle z^{|\mathsf{cc}(F)|}\rangle=\sum_{u_{1}\in\mathbb{N}_{0}}q_{u_{1},|\mathsf{cc}(F)|}x^{u_{1}}$. In simple words, the $\langle z^{|\mathsf{cc}(F)|}\rangle$ operator first removes all terms where the degree of $z$ is not equal to $|\mathsf{cc}(F)|$ and then “forgets” about $z$. Before we provide a formal proof, let us sketch the idea behind it. On the left side of the equality, we have the polynomial keeping track of the independent sets of $G_{a}$ that “respect” $\beta$. First, for every label $i\in S$, some vertex of this label must occur in at least one child of $a$: this is handled by considering all covers $X_{1}\cup\dots\cup X_{t}=S$ where, for every $j\in[t]$, the set $X_{j}$ represents the labels assigned to the child $b_{j}$. Next, if some $X_{j}$ “splits” a connected component, i.e., takes only a proper non-empty subset of this component, then such an assignment would not yield an independent set by the observations above, and the function $g_{j}$ ensures that the corresponding cover contributes zero to the result. Hence, for every cover $X_{1}\cup\dots\cup X_{t}=S$ with a non-zero contribution to the sum, every connected component of $F$ is completely contained in at least one $X_{j}$. In particular, this implies that for every non-zero term on the right side, the degree of the formal variable $z$ in this term is at least $|\mathsf{cc}(F)|$. On the other hand, if some connected component of $F$ is contained in several sets $X_{j}$, then the $z$-degree of the corresponding monomial is strictly larger than the total number of connected components, and such covers $X_{1},\dots,X_{t}$ are excluded from consideration by applying $\langle z^{|\mathsf{cc}(F)|}\rangle$. A toy check of this counting effect is given below; the intuition is then formalized in Lemma 3.2.
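The following toy check (in Python, on hypothetical components and children rather than the actual polynomials $g_{j}$) isolates the counting argument behind $\langle z^{|\mathsf{cc}(F)|}\rangle$: among all covers in which every component is handed to at least one child, the terms of total $z$-degree exactly $|\mathsf{cc}(F)|$ correspond precisely to the assignments that give every component to a single child.

from itertools import product

components = ["C1", "C2", "C3"]   # hypothetical connected components of F
t = 2                             # number of children

covers_with_exact_degree = 0
for assignment in product(range(1 << t), repeat=len(components)):
    # assignment[i] = bitmask of children receiving component i; must be nonempty
    if any(mask == 0 for mask in assignment):
        continue                                   # not a cover
    z_degree = sum(bin(mask).count("1") for mask in assignment)
    if z_degree == len(components):
        covers_with_exact_degree += 1

# assignments giving each component to exactly one of the t children:
assert covers_with_exact_degree == t ** len(components)
print(covers_with_exact_degree)   # 8 = 2^3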

Lemma 3.2.

Let $(T,\mathcal{M},\mathcal{R},\lambda)$ be a $(d,k)$-tree model of an $n$-vertex graph $G$. Let $a$ be a non-leaf node of $T$ and let $b_{1},\dots,b_{t}$ be the children of $a$. For every $S\subseteq\lambda_{a}(V_{a})$, and every conflict-free $\beta\colon S\to\{1_{=},1_{\geqslant}\}$, it holds that

\[ \mathsf{TIS}(a,S,\beta)=\left(\sum_{\substack{X_{1},\dots,X_{t}\subseteq[k]\colon\\ X_{1}\cup\dots\cup X_{t}=S}}\left(\prod_{j=1}^{t}g^{a,\beta}_{j}(X_{j})\right)\right)\langle z^{|\mathsf{cc}(F^{a,\beta})|}\rangle. \]
Proof 3.3.

First, we bring the right-hand side of the equality into a more suitable form.

\[
\begin{aligned}
\sum_{\substack{X_{1},\dots,X_{t}\subseteq[k]\colon\\ X_{1}\cup\dots\cup X_{t}=S}}\prod_{j=1}^{t}g_{j}(X_{j})
&=\sum_{\substack{X_{1},\dots,X_{t}\subseteq[k]\colon\\ X_{1}\cup\dots\cup X_{t}=S,\\ \forall j\in[t]\,\exists W_{j}\subseteq U\colon\mathsf{flat}(W_{j})=X_{j}}}\prod_{j=1}^{t}\mathsf{PIS}(b_{j},X_{j})\,z^{|W_{j}\cap\mathsf{cc}(F)|}\\
&=\sum_{\substack{X_{1},\dots,X_{t}\subseteq[k]\colon\\ X_{1}\cup\dots\cup X_{t}=S,\\ \forall j\in[t]\,\exists W_{j}\subseteq U\colon\mathsf{flat}(W_{j})=X_{j}}}\left(\prod_{j=1}^{t}\Big(\sum_{p_{j}\in\mathbb{N}_{0}}q^{\rho}(b_{j},X_{j},p_{j})x^{p_{j}}\Big)z^{|W_{j}\cap\mathsf{cc}(F)|}\right)\\
&=\sum_{\substack{X_{1},\dots,X_{t}\subseteq[k]\colon\\ X_{1}\cup\dots\cup X_{t}=S,\\ \forall j\in[t]\,\exists W_{j}\subseteq U\colon\mathsf{flat}(W_{j})=X_{j},\\ p_{1},\dots,p_{t}\in\mathbb{N}_{0}}}\prod_{j=1}^{t}q^{\rho}(b_{j},X_{j},p_{j})\,x^{p_{j}}\,z^{|W_{j}\cap\mathsf{cc}(F)|}\\
&=\sum_{\substack{X_{1},\dots,X_{t}\subseteq[k]\colon\\ X_{1}\cup\dots\cup X_{t}=S,\\ \forall j\in[t]\,\exists W_{j}\subseteq U\colon\mathsf{flat}(W_{j})=X_{j},\\ p_{1},\dots,p_{t}\in\mathbb{N}_{0}}}\left(\prod_{j=1}^{t}q^{\rho}(b_{j},X_{j},p_{j})\right)x^{\sum_{j=1}^{t}p_{j}}\,z^{\sum_{j=1}^{t}|W_{j}\cap\mathsf{cc}(F)|}
\end{aligned}
\]

We recall that $\mathsf{flat}$ is injective, so the sum above is well-defined. So we have to prove that

\[ \mathsf{TIS}(a,S,\beta)=\left(\sum_{\substack{X_{1},\dots,X_{t}\subseteq[k]\colon\\ X_{1}\cup\dots\cup X_{t}=S,\\ \forall j\in[t]\,\exists W_{j}\subseteq U\colon\mathsf{flat}(W_{j})=X_{j},\\ p_{1},\dots,p_{t}\in\mathbb{N}_{0}}}\left(\prod_{j=1}^{t}q^{\rho}(b_{j},X_{j},p_{j})\right)x^{\sum_{j=1}^{t}p_{j}}\,z^{\sum_{j=1}^{t}|W_{j}\cap\mathsf{cc}(F)|}\right)\langle z^{|\mathsf{cc}(F)|}\rangle, \]

i.e.,

\[ \mathsf{TIS}(a,S,\beta)=\sum_{\substack{X_{1},\dots,X_{t}\subseteq[k]\colon\\ X_{1}\cup\dots\cup X_{t}=S,\\ \forall j\in[t]\,\exists W_{j}\colon\mathsf{flat}(W_{j})=X_{j},\\ \sum_{j\in[t]}|W_{j}\cap\mathsf{cc}(F)|=|\mathsf{cc}(F)|,\\ p_{1},\dots,p_{t}\in\mathbb{N}_{0}}}\left(\prod_{j=1}^{t}q^{\rho}(b_{j},X_{j},p_{j})\right)x^{\sum_{j=1}^{t}p_{j}} \tag{4} \]

To prove that these two polynomials are equal, we show that for every power $p\in\mathbb{N}_{0}$ of $x$, the coefficients of $x^{p}$ in both polynomials are equal. So let us fix an arbitrary integer $p$.

For one direction, let $I$ be an independent set counted in the coefficient $q(a,S,\beta,p)$ of the term $x^{p}$ on the left-hand side $\mathsf{TIS}(a,S,\beta)$; in particular, we then have $|I|=p$. For every $j\in[t]$, let $I^{j}=I\cap V_{b_{j}}$, $p_{j}=|I^{j}|$, and $X_{j}=\{i\in[k]\,:\,I_{a}(i)\cap V_{b_{j}}\neq\emptyset\}$. Clearly, we have $p_{1}+\dots+p_{t}=p$ and $X_{1}\cup\dots\cup X_{t}=S$. Now consider some $j\in[t]$. The set $I^{j}$ is an independent set of $G_{b_{j}}$ that contains vertices with labels from exactly $X_{j}$ (with respect to $\lambda_{a}$). So $I^{j}$ is counted in $q^{\rho}(b_{j},X_{j},p_{j})$. Let $A_{j}=X_{j}\cap\beta^{-1}(1_{=})$ and $B_{j}=X_{j}\cap\beta^{-1}(1_{\geqslant})$. Note that $A_{j}\cup B_{j}=X_{j}$, $A_{1}\cup\dots\cup A_{t}=\beta^{-1}(1_{=})$, and $B_{1}\cup\dots\cup B_{t}=\beta^{-1}(1_{\geqslant})$. Then, by the observation on the connected components of $F$, for every connected component $C$ of $F$ and every $j\in[t]$, we either have $C\subseteq X_{j}$ or $C\cap X_{j}=\emptyset$. Therefore, for every $j\in[t]$, we have $X_{j}=B_{j}\cup\bigcup_{C\in\mathsf{cc}(F)\colon C\cap X_{j}\neq\emptyset}C$, and hence $X_{j}=\mathsf{flat}(W_{j})$ where $W_{j}=B_{j}\cup\{C\in\mathsf{cc}(F)\,:\,C\cap X_{j}\neq\emptyset\}$. Finally, by the definition of the objects counted in $\mathsf{TIS}(a,S,\beta)$, since the labels from $\beta^{-1}(1_{=})$ occur in exactly one child of $a$, it holds that $A_{j_{1}}\cap A_{j_{2}}=\emptyset$ for any $j_{1}\neq j_{2}\in[t]$. Together with $A_{1}\cup\dots\cup A_{t}=\beta^{-1}(1_{=})$, this implies that for every connected component $C$ of $F$, there exists exactly one index $j_{C}\in[t]$ with $C\subseteq A_{j_{C}}$, i.e., $C\in W_{j_{C}}$. So we obtain $\sum_{j\in[t]}|W_{j}\cap\mathsf{cc}(F)|=|\mathsf{cc}(F)|$. Altogether, the tuple $(I^{1},\dots,I^{t})$ is counted in the product $\prod_{j=1}^{t}q^{\rho}(b_{j},X_{j},p_{j})$, and the properties shown above imply that this product contributes to the coefficient of the monomial $x^{p}$. Also note that the mapping of $I$ to $(I^{1},\dots,I^{t})$ is injective, so we indeed obtain that the coefficient of $x^{p}$ on the left-hand side of (4) is at most as large as the one on the right-hand side.

Now we show that the other inequality holds as well. Let $X_{1},\dots,X_{t}\subseteq[k]$, $I^{1},\dots,I^{t}\subseteq V(G)$, $W_{1},\dots,W_{t}\subseteq U$, and $p_{1},\dots,p_{t}\in\mathbb{N}_{0}$ be such that the following properties hold:

  • $p_{1}+\dots+p_{t}=p$,

  • $X_{1}\cup\dots\cup X_{t}=S$,

  • for every $j\in[t]$, it holds that $\mathsf{flat}(W_{j})=X_{j}$,

  • $\sum_{j\in[t]}|W_{j}\cap\mathsf{cc}(F)|=|\mathsf{cc}(F)|$,

  • and for every $j\in[t]$, the set $I^{j}$ is an independent set of $G_{b_{j}}$ of size $p_{j}$ such that for every $i\in[k]$, $I^{j}_{a}(i)\neq\emptyset$ holds if and only if $i\in X_{j}$, i.e., $I^{j}$ is counted in $q^{\rho}(b_{j},X_{j},p_{j})$.

Let $I=I^{1}\cup\dots\cup I^{t}$. Since for every $j\in[t]$ we have $I^{j}\subseteq V_{b_{j}}$, the sets $I^{1},\dots,I^{t}$ are pairwise disjoint and we have $|I|=p$. We also have $I\subseteq V_{a}$, and for every $i\in[k]$ we have $I_{a}(i)\neq\emptyset$ if and only if $i\in S$, i.e., $I$ contains vertices with labels from exactly $S$ with respect to $\lambda_{a}$. We claim that $I$ is an independent set of $G_{a}$. Since $I^{1},\dots,I^{t}$ are independent sets of $G_{b_{1}},\dots,G_{b_{t}}$, respectively, and $G_{b_{1}},\dots,G_{b_{t}}$ are induced subgraphs of $G_{a}$, it suffices to show that there are no edges between $I^{j_{1}}$ and $I^{j_{2}}$ for any $j_{1}\neq j_{2}\in[t]$. For this, suppose there is an edge $v_{1}v_{2}$ of $G_{a}$ with $v_{1}\in I^{j_{1}}$ and $v_{2}\in I^{j_{2}}$ for some $j_{1}\neq j_{2}\in[t]$. Also let $i_{1}=\lambda_{a}(v_{1})$ and $i_{2}=\lambda_{a}(v_{2})$. Since $a$ is the least common ancestor of $v_{1}$ and $v_{2}$, it holds that $M_{a}[i_{1},i_{2}]=1$. By the assumption of the lemma, the mapping $\beta$ is conflict-free, so we have $\beta(i_{1})=\beta(i_{2})=1_{=}$. Then the property $M_{a}[i_{1},i_{2}]=1$ implies that $i_{1}$ and $i_{2}$ belong to the same connected component, say $C$, of $F$. Recall that we have $i_{1}\in X_{j_{1}}$, $i_{2}\in X_{j_{2}}$, $\mathsf{flat}(W_{j_{1}})=X_{j_{1}}$, and $\mathsf{flat}(W_{j_{2}})=X_{j_{2}}$. Hence, it holds that $C\in W_{j_{1}}$ and $C\in W_{j_{2}}$. On the other hand, let $C^{\prime}$ be an arbitrary connected component of $F$ and let $i\in C^{\prime}$ be some label. The property $X_{1}\cup\dots\cup X_{t}=S$ implies that there exists an index $j_{C^{\prime}}$ with $i\in X_{j_{C^{\prime}}}$. Due to $\mathsf{flat}(W_{j_{C^{\prime}}})=X_{j_{C^{\prime}}}$, we then have $C^{\prime}\in W_{j_{C^{\prime}}}$. That is, every connected component of $F$ is contained in at least one of the sets $W_{1},\dots,W_{t}$, while $C$ is contained in at least two such sets, so we get $\sum_{j\in[t]}|W_{j}\cap\mathsf{cc}(F)|>|\mathsf{cc}(F)|$, a contradiction.

Hence, the set $I$ is indeed an independent set of $G_{a}$ of size $p$ that contains vertices with labels from exactly $S$ with respect to $\lambda_{a}$. So it is counted in the coefficient $q(a,S,\beta,p)$ of the term $x^{p}$ in $\mathsf{TIS}(a,S,\beta)$. Finally, first note that $(I^{1},\dots,I^{t})$ is uniquely mapped to the tuple $(X_{1},\dots,X_{t},p_{1},\dots,p_{t})$, so it is counted only once on the right-hand side. And second, the mapping of $(I^{1},\dots,I^{t})$ to $I$ is injective (since $V_{b_{1}},\dots,V_{b_{t}}$ are pairwise disjoint). Therefore, the coefficient of the term $x^{p}$ on the right-hand side of (4) is at most as large as the one on the left-hand side. Altogether, we conclude that the two polynomials in (4) are equal, as desired.

The above lemma implies that:

𝖳𝖨𝖲(a,S,β)=\displaystyle\mathsf{TIS}(a,S,\beta)=
X1,,Xt[k]:X1Xt=S(j=1tgj(Xj))z|𝖼𝖼(F)|=\displaystyle\sum\limits_{\begin{subarray}{c}X_{1},\dots,X_{t}\subseteq[k]\colon\\ X_{1}\cup\dots\cup X_{t}=S\end{subarray}}\left(\prod\limits_{j=1}^{t}g_{j}(X_{j})\right)\langle z^{|\mathsf{cc}(F)|}\rangle=
(g1cg2ccgt)(S)z|𝖼𝖼(F)|=Lemma 2.2\displaystyle(g_{1}\ast_{c}g_{2}\ast_{c}\dots\ast_{c}g_{t})(S)\langle z^{|\mathsf{cc}(F)|}\rangle\stackrel{{\scriptstyle\lx@cref{creftype~refnum}{lem:cover-product}}}{{=}}
((μ(ξ(g1cg2ccgt)))(S))z|𝖼𝖼(F)|=\displaystyle\bigl{(}(\mu(\xi(g_{1}\ast_{c}g_{2}\ast_{c}\dotsc\ast_{c}g_{t})))(S)\bigr{)}\langle z^{|\mathsf{cc}(F)|}\rangle=
(YS(1)|SY|(ξ(g1cg2ccgt))(Y))z|𝖼𝖼(F)|=Lemma 2.2\displaystyle\left(\sum\limits_{Y\subseteq S}(-1)^{|S\setminus Y|}(\xi(g_{1}\ast_{c}g_{2}\ast_{c}\dots\ast_{c}g_{t}))(Y)\right)\langle z^{|\mathsf{cc}(F)|}\rangle\stackrel{{\scriptstyle\lx@cref{creftype~refnum}{lem:cover-product}}}{{=}}
(YS(1)|SY|(ξg1)(Y)(ξg2)(Y)(ξgt)(Y))z|𝖼𝖼(F)|=\displaystyle\left(\sum\limits_{Y\subseteq S}(-1)^{|S\setminus Y|}(\xi g_{1})(Y)(\xi g_{2})(Y)\dots(\xi g_{t})(Y)\right)\langle z^{|\mathsf{cc}(F)|}\rangle=
(YS(1)|SY|j=1t(ξgj)(Y))z|𝖼𝖼(F)|=\displaystyle\left(\sum\limits_{Y\subseteq S}(-1)^{|S\setminus Y|}\prod\limits_{j=1}^{t}(\xi g_{j})(Y)\right)\langle z^{|\mathsf{cc}(F)|}\rangle=
(YS(1)|SY|j=1tZYgj(Z))z|𝖼𝖼(F)|\displaystyle\left(\sum\limits_{Y\subseteq S}(-1)^{|S\setminus Y|}\prod\limits_{j=1}^{t}\sum\limits_{Z\subseteq Y}g_{j}(Z)\right)\langle z^{|\mathsf{cc}(F)|}\rangle (5)

We now have the equalities required for our algorithm to solve Independent Set parameterized by shrubdepth. By using these equalities directly, we would obtain an algorithm running in time 2𝒪(kd)n𝒪(1)2^{\mathcal{O}(kd)}\cdot n^{\mathcal{O}(1)} and space 𝒪(dk2n2)\mathcal{O}(dk^{2}n^{2}). However, the latter can be substantially improved by using a result of Pilipczuk and Wrochna [45] based on the Chinese remainder theorem:

Theorem 3.4 ([45]).

Let P(x)=i=0nqixiP(x)=\sum\limits_{i=0}^{n^{\prime}}q_{i}x^{i} be a polynomial in one variable xx of degree at most nn^{\prime} with integer coefficients satisfying 0qi2n0\leqslant q_{i}\leqslant 2^{n^{\prime}} for i=0,,ni=0,\dots,n^{\prime}. Suppose that given a prime number p2n+2p\leqslant 2n^{\prime}+2 and s𝔽ps\in\mathbb{F}_{p}, the value P(s)(modp)P(s)\pmod{p} can be computed in time TT and space SS. Then given k{0,,n}k\in\{0,...,n^{\prime}\}, the value qkq_{k} can be computed in time 𝒪(T𝗉𝗈𝗅𝗒(n))\mathcal{O}(T\cdot\mathsf{poly}(n^{\prime})) and space 𝒪(S+logn)\mathcal{O}(S+\log n^{\prime}).
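
To illustrate the mechanism behind Theorem˜3.4, the following minimal Python sketch recovers a single coefficient of a polynomial with small non-negative coefficients from an evaluation oracle, by Lagrange interpolation modulo several primes followed by the Chinese remainder theorem. It is not the logspace procedure of [45]; all names and the toy polynomial are illustrative, and the sketch assumes that the primes in the interval (n′, 2n′+2] multiply to more than 2^{n′} (true for all sufficiently large n′).

def primes_between(lo, hi):
    """All primes p with lo < p <= hi (trial division suffices for a sketch)."""
    return [p for p in range(lo + 1, hi + 1)
            if p >= 2 and all(p % q for q in range(2, int(p ** 0.5) + 1))]

def coefficient_mod_p(evaluate, k, deg, p):
    """The coefficient of x^k modulo p, via Lagrange interpolation through 0..deg (needs p > deg)."""
    xs = list(range(deg + 1))
    ys = [evaluate(x, p) for x in xs]
    coeffs = [0] * (deg + 1)
    for i, xi in enumerate(xs):
        basis, denom = [1], 1            # numerator polynomial and denominator of the i-th basis
        for j, xj in enumerate(xs):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)
            for d, c in enumerate(basis):    # multiply basis by (x - xj)
                new[d] = (new[d] - xj * c) % p
                new[d + 1] = (new[d + 1] + c) % p
            basis = new
            denom = (denom * (xi - xj)) % p
        scale = ys[i] * pow(denom, -1, p)    # ys[i] / denom modulo the prime p
        for d in range(len(basis)):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % p
    return coeffs[k]

def recover_coefficient(evaluate, k, deg):
    """Exact q_k, assuming 0 <= q_k <= 2**deg and that the primes in (deg, 2*deg+2]
    have product larger than 2**deg."""
    residue, modulus = 0, 1
    for p in primes_between(deg, 2 * deg + 2):
        r = coefficient_mod_p(evaluate, k, deg, p)
        t = ((r - residue) * pow(modulus, -1, p)) % p   # Chinese remainder step
        residue, modulus = residue + modulus * t, modulus * p
    return residue % modulus

# Toy check: P(x) = 3 + 5x^2 + 200x^7 with degree bound 8 and coefficients <= 2^8.
q = [3, 0, 5, 0, 0, 0, 0, 200, 0]
oracle = lambda s, p: sum(c * pow(s, i, p) for i, c in enumerate(q)) % p
assert recover_coefficient(oracle, 7, 8) == 200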

With this, we can finally prove:

See 1.1

Proof 3.5.

The independent set polynomial of the graph G=GrG=G_{r} is exactly S[k]𝖨𝖲(r,S)\sum_{S\subseteq[k]}\mathsf{IS}(r,S) where rr is the root of TT. Let us denote this polynomial PP. To apply Theorem˜3.4, we use n:=nn^{\prime}:=n. Let p2n+2p\leqslant 2n+2 be a prime number. The bound on pp implies that any number from 𝔽p\mathbb{F}_{p} can be encoded using 𝒪(logn)\mathcal{O}(\log n) bits; this will bound the space complexity. There are at most 2n2^{n} independent sets of GG, so every coefficient of PP lies between 0 and 2n2^{n}, and therefore the prerequisites stated in the first sentence of Theorem˜3.4 are satisfied. Let s𝔽ps\in\mathbb{F}_{p}. We will now show that the value P(s)modpP(s)\mod p can be evaluated in time 2𝒪(kd)n𝒪(1)2^{\mathcal{O}(kd)}\cdot n^{\mathcal{O}(1)} and space 𝒪(dk2logn)\mathcal{O}(dk^{2}\log n). The result will then follow by Theorem˜3.4.

Since we are interested in the evaluation of PP at ss modulo pp, instead of querying and storing all coefficients, as a result of the recursion, we return the evaluation of a certain polynomial (e.g., 𝖳𝖨𝖲(a,S,β)\mathsf{TIS}(a,S,\beta)) at ss modulo pp. For this, the formal variable xx is always substituted by ss and then arithmetic operations in 𝔽p\mathbb{F}_{p} are carried out. In the following, when computing a sum (resp. product) of certain values, these values are computed recursively one after another and we store the counter (e.g., current subset S[k]S\subseteq[k]) as well as the current value of the sum (resp. product). Our algorithm relies on the equalities provided above and we now provide more details to achieve the desired time and space complexity. Let us denote 𝖠𝖨𝖲(a,S,α)T0(a,S,α,0,)\mathsf{AIS}(a,S,\alpha)\coloneqq T_{0}(a,S,\alpha,0,\emptyset) for simplicity.

First, if aa is a leaf of TT, then 𝖨𝖲(a,S)\mathsf{IS}(a,S) can be computed directly via

𝖨𝖲(a,S)={1if S=xif S={λa(a)}0otherwise\mathsf{IS}(a,S)=\begin{cases}1&\text{if }S=\emptyset\\ x&\text{if }S=\{\lambda_{a}(a)\}\\ 0&\text{otherwise}\end{cases} (6)

and this is our base case. Otherwise, the queries are answered recursively and five types of queries occur, namely 𝖨𝖲(a,S)\mathsf{IS}(a,S), 𝖠𝖨𝖲(a,S,β)\mathsf{AIS}(a,S,\beta), T(a,S,α,c,γ)T(a,S,\alpha,c,\gamma), 𝖳𝖨𝖲(a,S,β)\mathsf{TIS}(a,S,\beta), and 𝖯𝖨𝖲(a,S)\mathsf{PIS}(a,S). Let aa be an inner node with children b1,,btb_{1},\dots,b_{t}. To answer a query T(a,S,α,c,γ)T(a,S,\alpha,c,\gamma) for c<|α1(2)|c<|\alpha^{-1}(2_{\geqslant})|, we recurse via (1). If c=|α1(2)|c=|\alpha^{-1}(2_{\geqslant})|, then we first construct β:S{1=,1}\beta\colon S\to\{1_{=},1_{\geqslant}\} given by β1(1=)=α1(1=)γ1(1=)\beta^{-1}(1_{=})=\alpha^{-1}(1_{=})\cup\gamma^{-1}(1_{=}) and β1(1)=α1(2)γ1(1=)\beta^{-1}(1_{\geqslant})=\alpha^{-1}(2_{\geqslant})\setminus\gamma^{-1}(1_{=}) and then query 𝖳𝖨𝖲(a,S,β)\mathsf{TIS}(a,S,\beta). Then, to answer a query 𝖳𝖨𝖲(a,S,β)\mathsf{TIS}(a,S,\beta), we recurse via (5). And finally, to answer a query 𝖯𝖨𝖲(a,S)\mathsf{PIS}(a,S), we recurse using (3).

Each of the above recurrences is given by a combination of sums and products of the results of recursive calls and these values are from 𝔽p\mathbb{F}_{p}. To keep the space complexity of the algorithm bounded, for such recursion, the result is computed “from inside to outside” by keeping track of the current sums (resp. products) as well as the next value to be queried. For example, for (5), we iterate through all YSY\subseteq S, store the current value of the outer sum (modulo pp), then for fixed YY, we iterate over j[t]j\in[t] and store jj and the current value of the product (modulo pp), and then for fixed YY and jj, iterate through ZYZ\subseteq Y and store the current value of ZZ and the current inner sum. After the complete iteration over ZZ (resp. jj) we update the current value of the product (resp. outer sum) and move on to the next jj (resp. YY).
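
The following Python sketch illustrates this “from inside to outside” evaluation for the right-hand side of (5): only loop counters and running partial sums/products modulo p are stored. The oracle g(j, Z) is a placeholder for the value g_j(Z) evaluated at x = s modulo p (in the algorithm this would itself be a recursive query), and the bookkeeping for the variable z is omitted.

def subsets(S):
    """Enumerate all subsets of the tuple S, storing only a bitmask counter."""
    for mask in range(1 << len(S)):
        yield frozenset(S[i] for i in range(len(S)) if mask >> i & 1)

def evaluate_tis(S, t, p, g):
    S = tuple(sorted(S))
    outer = 0                                   # running value of the sum over Y
    for Y in subsets(S):
        sign = (-1) ** (len(S) - len(Y))
        prod = 1                                # running value of the product over j
        for j in range(1, t + 1):
            inner = 0                           # running value of the sum over Z
            for Z in subsets(tuple(sorted(Y))):
                inner = (inner + g(j, Z)) % p
            prod = (prod * inner) % p
        outer = (outer + sign * prod) % p
    return outer

For instance, with the constant oracle g = lambda j, Z: 1 and t = 2, the call evaluate_tis({1, 2}, 2, 97, g) returns 16 − 4 − 4 + 1 = 9.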

Now we analyze the time and space complexity of the algorithm. We start with the running time. For this, we analyze how often every query is answered. Namely, for all relevant values of aa, SS, α\alpha, β\beta, cc, and γ\gamma, for each query 𝖨𝖲(a,S)\mathsf{IS}(a,S), 𝖠𝖨𝖲(a,S,α)\mathsf{AIS}(a,S,\alpha), T(a,S,α,c,γ)T(a,S,\alpha,c,\gamma), 𝖳𝖨𝖲(a,S,β)\mathsf{TIS}(a,S,\beta), resp. 𝖯𝖨𝖲(a,S)\mathsf{PIS}(a,S), we use Q(𝖨𝖲(a,S))Q(\mathsf{IS}(a,S)), Q(𝖠𝖨𝖲(a,S,α))Q(\mathsf{AIS}(a,S,\alpha)), Q(T(a,S,α,c,γ))Q(T(a,S,\alpha,c,\gamma)), Q(𝖳𝖨𝖲(a,S,β))Q(\mathsf{TIS}(a,S,\beta)), Q(𝖯𝖨𝖲(a,S))Q(\mathsf{PIS}(a,S)), respectively, to denote the number of times the query is answered by the algorithm and we call this value the multiplicity of the query. Then, for h[d]0h\in[d]_{0}, we define the value Q𝖨𝖲(h)Q_{\mathsf{IS}}(h) to be the maximum multiplicity of a query 𝖨𝖲(a,S)\mathsf{IS}(a,S) over all nodes aa at height hh in TT and all reasonable SS. Similarly, we define the values Q𝖠𝖨𝖲(h)Q_{\mathsf{AIS}}(h), QT(h)Q_{T}(h), Q𝖳𝖨𝖲(h)Q_{\mathsf{TIS}}(h), and Q𝖯𝖨𝖲(h)Q_{\mathsf{PIS}}(h) where we maximize over all nodes aa at height hh and all reasonable values of SS, α\alpha, β\beta, γ\gamma, and cc. We now upper bound these values.

Let bb be a node at height hh for some h[d]0h\in[d]_{0}. If b=rb=r, then a query 𝖯𝖨𝖲(b,S)\mathsf{PIS}(b,S) is not asked at all. Otherwise, let aa be the parent of bb, and let jj be such that b=bjb=b_{j} is the jj-th child of aa. Then 𝖯𝖨𝖲(b,S)\mathsf{PIS}(b,S) can be asked when answering some query of the form 𝖳𝖨𝖲(a,D,β)\mathsf{TIS}(a,D,\beta) to compute some value ξgja,β(Y)\xi g_{j}^{a,\beta}(Y) such that SYDS\subseteq Y\subseteq D. Therefore, per query 𝖳𝖨𝖲(a,D,β)\mathsf{TIS}(a,D,\beta) with fixed DD and β\beta, the value 𝖯𝖨𝖲(b,S)\mathsf{PIS}(b,S) is queried at most 2k2^{k} times, so we obtain Q(𝖯𝖨𝖲(b,S))2kSD[k],β:D{1=,1}Q(𝖳𝖨𝖲(a,D,β))Q(\mathsf{PIS}(b,S))\leqslant 2^{k}\cdot\sum_{{S\subseteq D\subseteq[k],\beta\colon D\to\{1_{=},1_{\geqslant}\}}}Q({\mathsf{TIS}}(a,D,\beta)) and hence,

Q𝖯𝖨𝖲(h){=0if h=023kQ𝖳𝖨𝖲(h1)otherwise.Q_{\mathsf{PIS}}(h)\begin{cases}=0&\text{if }h=0\\ \leqslant 2^{3k}Q_{\mathsf{TIS}}(h-1)&\text{otherwise}\end{cases}.

Next, we consider a query of the form 𝖳𝖨𝖲(b,S,β)\mathsf{TIS}(b,S,\beta). Observe that for every α:S{1=,2}\alpha\colon S\to\{1_{=},2_{\geqslant}\}, when recursing via (1) to answer T(b,S,α,0,)T(b,S,\alpha,0,\emptyset), we branch on the values 1=1_{=} and 11_{\geqslant} for s1,,s|α1(2)|s_{1},\dots,s_{|\alpha^{-1}(2_{\geqslant})|} one after another. Thus, after |α1(2)||\alpha^{-1}(2_{\geqslant})| steps every branch results in its own γ:s1,,s|α1(2)|{1=,1}\gamma\colon s_{1},\dots,s_{|\alpha^{-1}(2_{\geqslant})|}\to\{1_{=},1_{\geqslant}\}, and hence, in its own β:S{1=,1}\beta\colon S\to\{1_{=},1_{\geqslant}\}. Therefore, if we fix α\alpha, then every 𝖳𝖨𝖲(b,S,β)\mathsf{TIS}(b,S,\beta) is queried at most once when answering T(b,S,α,0,)T(b,S,\alpha,0,\emptyset). Hence, we have

Q(𝖳𝖨𝖲(b,S,β))α:S{1=,1}Q(𝖠𝖨𝖲(b,S,α))Q(\mathsf{TIS}(b,S,\beta))\leqslant\sum_{\alpha\colon S\to\{1_{=},1_{\geqslant}\}}Q(\mathsf{AIS}(b,S,\alpha))

and therefore, Q𝖳𝖨𝖲(h)2kQ𝖠𝖨𝖲(h)Q_{\mathsf{TIS}}(h)\leqslant 2^{k}Q_{\mathsf{AIS}}(h). Every query T(b,S,α,c,γ)T(b,S,\alpha,c,\gamma) is also asked at most once while answering a query of T(b,S,α,0,)T(b,S,\alpha,0,\emptyset), i.e., Q(T(b,S,α,c,γ))Q(𝖠𝖨𝖲(b,S,α))Q(T(b,S,\alpha,c,\gamma))\leqslant Q(\mathsf{AIS}(b,S,\alpha)) and QT(h)Q𝖠𝖨𝖲(h)Q_{T}(h)\leqslant Q_{\mathsf{AIS}}(h).

Further, for each fixed α\alpha, a query 𝖠𝖨𝖲(b,S,α)\mathsf{AIS}(b,S,\alpha) is asked exactly once for every query of the form 𝖨𝖲(b,S)\mathsf{IS}(b,S), i.e., Q(𝖠𝖨𝖲(b,S,α))Q(𝖨𝖲(b,S))Q(\mathsf{AIS}(b,S,\alpha))\leqslant Q(\mathsf{IS}(b,S)) and Q𝖠𝖨𝖲(h)Q𝖨𝖲(h)Q_{\mathsf{AIS}}(h)\leqslant Q_{\mathsf{IS}}(h). Finally, a query of the form 𝖨𝖲(b,S)\mathsf{IS}(b,S) is asked at most once for every query of the form 𝖯𝖨𝖲(b,D)\mathsf{PIS}(b,D), so we have Q(𝖨𝖲(b,S))D[k]Q(𝖯𝖨𝖲(b,D))Q(\mathsf{IS}(b,S))\leqslant\sum_{D\subseteq[k]}Q(\mathsf{PIS}(b,D)) and Q𝖨𝖲(h)2kQ𝖯𝖨𝖲(h)Q_{\mathsf{IS}}(h)\leqslant 2^{k}Q_{\mathsf{PIS}}(h).

By induction over hh, we obtain that

Q𝖠𝖨𝖲(h),QT(h),Q𝖳𝖨𝖲(h),Q𝖨𝖲(h),Q𝖯𝖨𝖲(h)25hkQ_{\mathsf{AIS}}(h),Q_{T}(h),Q_{\mathsf{TIS}}(h),Q_{\mathsf{IS}}(h),Q_{\mathsf{PIS}}(h)\leqslant 2^{5hk}

and

Q𝖠𝖨𝖲(h),QT(h),Q𝖳𝖨𝖲(h),Q𝖨𝖲(h),Q𝖯𝖨𝖲(h)25dk2𝒪(kd)Q_{\mathsf{AIS}}(h),Q_{T}(h),Q_{\mathsf{TIS}}(h),Q_{\mathsf{IS}}(h),Q_{\mathsf{PIS}}(h)\leqslant 2^{5dk}\in 2^{\mathcal{O}(kd)}

for every h[d]0h\in[d]_{0}, i.e., any fixed query is asked 2𝒪(kd)2^{\mathcal{O}(kd)} times.

There are 𝒪(nd)\mathcal{O}(nd) nodes in TT and there are at most 2k2^{k} reasonable values of SS; for any SS, there are at most 2k2^{k} choices for α\alpha, β\beta, and γ\gamma; and there are at most kk reasonable values of cc. Hence, there are at most

𝒪(nd)(22k+23kk+22k+2k+2k)2𝒪(k)n𝒪(1)\mathcal{O}(nd)(2^{2k}+2^{3k}k+2^{2k}+2^{k}+2^{k})\in 2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}

different forms of queries and so there are at most 2𝒪(kd)2𝒪(k)n𝒪(1)=2𝒪(kd)n𝒪(1)2^{\mathcal{O}(kd)}2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}=2^{\mathcal{O}(kd)}\cdot n^{\mathcal{O}(1)} recursive calls.

Next, we bound the time spent on each query in addition to the recursive calls. For each query, this additional time is mostly determined by 𝒪(22kn)\mathcal{O}(2^{2k}n) arithmetic operations. For a query of the form 𝖳𝖨𝖲()\mathsf{TIS}(\cdot), arithmetic operations are carried out over polynomials in a formal variable zz where the coefficients are from 𝔽p\mathbb{F}_{p}. It is crucial to observe that since at the end of the computation we apply the z|𝖼𝖼(F)|\langle z^{|\mathsf{cc}(F)|}\rangle operation and the auxiliary graph FF has at most kk connected components, we can safely discard the coefficients of terms zrz^{r} for any r>kr>k. Therefore, it suffices to keep track of at most kk coefficients from 𝔽p\mathbb{F}_{p}. For the remaining queries, the arithmetic operations are carried out over 𝔽p\mathbb{F}_{p}. So in any case, there are at most kk relevant values from 𝔽p\mathbb{F}_{p} to store as a partial sum resp. product and a single arithmetic operation can therefore be carried out in n𝒪(1)n^{\mathcal{O}(1)} time. Further, when answering a query of the form 𝖳𝖨𝖲(a,S,β)\mathsf{TIS}(a,S,\beta) and computing a value of the form gja,β(Xj)g_{j}^{a,\beta}(X_{j}) for this, we can check whether for XjX_{j} there is WjW_{j} with 𝖿𝗅𝖺𝗍a,β(Wj)=Xj\mathsf{flat}^{a,\beta}(W_{j})=X_{j} as follows. First, we compute the connected components of Fa,βF^{a,\beta}: we start with a partition of β1(1=)\beta^{-1}(1_{=}) into singletons and then iterate over all pairs of vertices i1i_{1} and i2i_{2} and if Ma[i1,i2]=1M_{a}[i_{1},i_{2}]=1, then we merge the sets containing i1i_{1} and i2i_{2}. As a result of this process, we obtain the set of connected components of Fa,βF^{a,\beta}. Then for each connected component CC, we check if CXj{,C}C\cap X_{j}\in\{\emptyset,C\} holds. If this does not hold for at least one connected component, then we conclude that gja,β(Xj)=0g_{j}^{a,\beta}(X_{j})=0. Otherwise, the desired set WjW_{j} exists and we have |Wj𝖼𝖼(Fa,β)|=r|W_{j}\cap\mathsf{cc}(F^{a,\beta})|=r where rr is the number of connected components CC with CXj=CC\cap X_{j}=C. This process then runs in time 𝒪(k3)\mathcal{O}(k^{3}) and space 𝒪(k)\mathcal{O}(k). Although this can be accelerated, this step is not a bottleneck, so this time and space complexity suffices for our purposes. Also, when answering a query of the form 𝖠𝖨𝖲(a,S,α)\mathsf{AIS}(a,S,\alpha), we need to check whether there exist labels i1,i2Si_{1},i_{2}\in S with α(i1)=1\alpha(i_{1})=1_{\geqslant} and Ma[i1,i2]=1M_{a}[i_{1},i_{2}]=1: this can be done in time 𝒪(k2)\mathcal{O}(k^{2}) and space 𝒪(logk)\mathcal{O}(\log k) by considering all pairs i1,i2Si_{1},i_{2}\in S and looking up these properties. So for any query, the time spent on this query apart from the recursive calls is bounded by 𝒪(22knk3logn)=2𝒪(k)n𝒪(1)\mathcal{O}(2^{2k}n\cdot k^{3}\cdot\log n)=2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}. Thus, the total running time of the algorithm is bounded by 2𝒪(kd)𝗉𝗈𝗅𝗒(n)2𝒪(k)𝗉𝗈𝗅𝗒(n)=2𝒪(kd)𝗉𝗈𝗅𝗒(n)2^{\mathcal{O}(kd)}\cdot\mathsf{poly}(n)\cdot 2^{\mathcal{O}(k)}\cdot\mathsf{poly}(n)=2^{\mathcal{O}(kd)}\cdot\mathsf{poly}(n), i.e., the number of queries times the complexity of a single query.
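
The component check described above can be sketched in Python as follows; the vertex set labels (standing for β^{-1}(1_=)) and the adjacency predicate (reading off M_a) are assumed inputs, so this is only an illustration of the procedure, not the exact implementation.

def components(labels, adjacent):
    parent = {i: i for i in labels}             # start from singleton sets
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i
    for i in labels:
        for j in labels:
            if i < j and adjacent(i, j):
                parent[find(i)] = find(j)       # merge the two sets
    groups = {}
    for i in labels:
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

def cover_count(labels, adjacent, Xj):
    """r = |W_j ∩ cc(F)| if X_j is a union of components of F, else None
    (in which case g_j(X_j) = 0)."""
    r = 0
    for C in components(labels, adjacent):
        inter = C & set(Xj)
        if inter == C:
            r += 1
        elif inter:
            return None                         # the component C is split by X_j
    return r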

Finally, we bound the space complexity. The space used by a single query consists of the partial sums and/or products modulo pp as well as the counters that store the information about the next recursive call (e.g., the current SS). For any query other than 𝖳𝖨𝖲()\mathsf{TIS}(\cdot), the partial result is in 𝔽p\mathbb{F}_{p}. For a query of the form 𝖳𝖨𝖲()\mathsf{TIS}(\cdot), we are working with a polynomial in the formal variable zz. Above we have argued why the coefficients of terms zrz^{r} for r>kr>k can be discarded. Therefore, it suffices to keep track of at most kk coefficients from 𝔽p\mathbb{F}_{p}. Recall that p2n+2p\leqslant 2n+2, so any value from 𝔽p\mathbb{F}_{p} can be encoded with 𝒪(logn)\mathcal{O}(\log n) bits. When answering a query of the form 𝖳𝖨𝖲(a,S,β)\mathsf{TIS}(a,S,\beta), we also need to consider the connected components of Fa,βF^{a,\beta}: as argued above, this can be accomplished in 𝒪(k)\mathcal{O}(k) space. So the space complexity of a single query can be bounded by 𝒪(klogn)+𝒪(k)+logn=𝒪(klogn)\mathcal{O}(k\log n)+\mathcal{O}(k)+\log n=\mathcal{O}(k\log n). The depth of the recursion is bounded by 𝒪(kd)\mathcal{O}(kd): the depth of TT is dd and for each node, at most k+4k+4 queries are nested at this node (namely, 𝖯𝖨𝖲()\mathsf{PIS}(\cdot), 𝖨𝖲()\mathsf{IS}(\cdot), 𝖠𝖨𝖲=T(,c=0,)\mathsf{AIS}=T(\cdot,c=0,\cdot), \dots, T(,c=|α1(2)|k,)T(\cdot,c=|\alpha^{-1}(2_{\geqslant})|\leqslant k,\cdot), 𝖳𝖨𝖲()\mathsf{TIS}(\cdot)). Finally, during the algorithm we need to keep track of the node we are currently at. Therefore, the space complexity of the algorithm is 𝒪(kd)𝒪(klogn)+𝒪(logn)=𝒪(k2dlogn)\mathcal{O}(kd)\mathcal{O}(k\log n)+\mathcal{O}(\log n)=\mathcal{O}(k^{2}d\log n).

3.2 Counting List-Homomorphisms

We now explain how to apply the techniques from Section˜3.1 to a broader class of problems, namely all problems expressible as instantiations of the #-List-HH-Homomorphism problem for a fixed pattern graph HH (which we will introduce in a moment). In this way, we cover problems such as Odd Cycle Transversal and qq-Coloring, for a fixed qq. Furthermore, the techniques will be useful for solving Dominating Set later.

Let HH be a fixed undirected graph (possibly with loops) and let RV(H)R\subseteq V(H) be a designated set of vertices. An instance of the RR-Weighted #-List-HH-Homomorphism problem consists of a graph GG, a weight function ω:V(G)\omega\colon V(G)\to\mathbb{N}, a list function L:V(G)2V(H)L\colon V(G)\to 2^{V(H)}, a cardinality CC\in\mathbb{N} and a total weight WW\in\mathbb{N}. The goal is to count the number of list HH-homomorphisms of GG such that exactly CC vertices of GG are mapped to RR and their total weight in ω\omega is WW. More formally, we seek the value

|{φ:V(G)V(H)|\displaystyle\bigl{|}\bigl{\{}\varphi\colon V(G)\to V(H)\ \bigm{|}\ vV(G):φ(v)L(v),uvE(G):φ(u)φ(v)E(H),\displaystyle\forall v\in V(G)\colon\varphi(v)\in L(v),\forall uv\in E(G)\colon\varphi(u)\varphi(v)\in E(H),
|φ1(R)|=C, and ω(φ1(R))=W}|.\displaystyle|\varphi^{-1}(R)|=C,\textrm{ and }\omega(\varphi^{-1}(R))=W\bigr{\}}\bigr{|}\enspace.

We say that such φ\varphi has cardinality CC and weight WW. For the “standard” #-List HH-Homomorphism problem we would use R=V(H)R=V(H), C=W=|V(G)|C=W=|V(G)|, and unit weights. We also have the following special cases of the RR-Weighted #-List-HH-Homomorphism problem. In all cases, we consider unit weights.

  • To model Independent Set, the pattern graph HH consists of two vertices 𝐮\mathbf{u} and 𝐯\mathbf{v} and the edge set contains a loop at 𝐯\mathbf{v} and the edge 𝐮𝐯\mathbf{uv}. The set RR consists of 𝐮\mathbf{u} only. Then Independent Set is equivalent to finding the largest CC for which we have a positive number of solutions in the constructed instance of RR-Weighted #-List-HH-Homomorphism.

  • Similarly, to model Odd Cycle Transversal, the pattern graph HH is a triangle on vertex set {𝐮,𝐯,𝐰}\{\mathbf{u},\mathbf{v},\mathbf{w}\} with a loop added on 𝐮\mathbf{u}. Again, we take R={𝐮}R=\{\mathbf{u}\}.

  • To model qq-Coloring, we take HH to be the loopless clique on qq vertices, and R=V(H)R=V(H).

While in all the cases described above we only use unit weights, we need to work with any weight function in our application to Dominating Set.
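
For concreteness, the three pattern graphs above can be written down as adjacency dictionaries (a loop is recorded by a vertex being adjacent to itself); this Python snippet is purely illustrative.

H_IS = {"u": {"v"}, "v": {"u", "v"}}                                 # Independent Set, R = {"u"}
H_OCT = {"u": {"u", "v", "w"}, "v": {"u", "w"}, "w": {"u", "v"}}     # Odd Cycle Transversal, R = {"u"}
def H_coloring(q):                                                   # q-Coloring: loopless clique, R = V(H)
    return {i: {j for j in range(q) if j != i} for i in range(q)}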

Theorem 3.6.

Fix a graph HH (possibly with loops) and RV(H)R\subseteq V(H). There is an algorithm which takes as input an nn-vertex graph GG together with a weight function ω\omega and a (d,k)(d,k)-tree-model, runs in time 2𝒪(dk)n𝒪(1)(W)𝒪(1)2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)}\cdot(W^{*})^{\mathcal{O}(1)} and uses space 𝒪(k2d(logn+logW))\mathcal{O}(k^{2}d(\log n+\log W^{*})), and solves the RR-Weighted #-List-HH-Homomorphism in GG, where WW^{*} denotes the maximum weight in ω\omega.

Using the argumentation above, from Theorem˜3.6 we can derive the following corollaries.

Corollary A.

Fix a graph HH (possibly with loops). Then given an nn-vertex graph GG together with a (d,k)(d,k)-tree-model, #-List-HH-Homomorphism in GG can be solved in time 2𝒪(dk)n𝒪(1)2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)} and space 𝒪(dk2logn)\mathcal{O}(dk^{2}\log n).

Corollary B.

Fix qq\in\mathbb{N}. Then given an nn-vertex graph GG together with a (d,k)(d,k)-tree-model, qq-Coloring and Odd Cycle Transversal in GG can be solved in time 2𝒪(dk)n𝒪(1)2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)} and space 𝒪(dk2logn)\mathcal{O}(dk^{2}\log n).

The remainder of this section is devoted to the proof of Theorem˜3.6. We assume that the reader is familiar with the approach presented in Section˜3.1, as we will build upon it.

Let now HH and RR be fixed and let W=maxvVω(v)W^{*}=\max_{v\in V}\omega(v) be the maximum weight in ω\omega. Now we show how to adapt our techniques from Section˜3.1 to the RR-Weighted #-List-HH-Homomorphism problem. We assume that the graph GG is provided with a (d,k)(d,k)-model (T,,,λ)(T,\mathcal{M},\mathcal{R},\lambda). There are two main changes: first, we adapt the dynamic programming formulas and second, we show how to apply Theorem˜3.4 to polynomials in two variables that will appear in the proof.

We start with dynamic programming. Let aa be a node of TT. For Maximum Independent Set, our guess SS was the set of labels occurring in an independent set of the current subgraph GaG_{a}. Now, instead, we guess a subset SS of 𝐒𝐭𝐚𝐭𝐞𝐬{(𝐡,i):𝐡V(H),i[k]}\mathbf{States}\coloneqq\{(\mathbf{h},i)\,:\,\mathbf{h}\in V(H),i\in[k]\}. For each label i[k]i\in[k], the set SS is intended to reflect to which vertices of HH the set VaiV_{a}^{i} is mapped by a homomorphism. The set 𝐒𝐭𝐚𝐭𝐞𝐬\mathbf{States} has size |V(H)|k|V(H)|\cdot k, i.e., 𝒪(k)\mathcal{O}(k) for fixed HH. So as in Section˜3.1, there are still 2𝒪(k)2^{\mathcal{O}(k)} possibilities for SS and this will be the reason for the running time of 2𝒪(dk)n𝒪(1)2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)} as in that section. As before, we then employ guesses of the form α:S{1=,2}\alpha\colon S\to\{1_{=},2_{\geqslant}\} and β:S{1,1=}\beta\colon S\to\{1_{\geqslant},1_{=}\} to compute the polynomials reflecting the number of HH-homomorphisms of certain cardinality via inclusion-exclusion. Further, we need to forbid that edges of GG are mapped to non-edges of HH. For this, the auxiliary graph Fa,βF^{a,\beta} again has vertex set β1(1=)\beta^{-1}(1_{=}) but now there is an edge between two vertices (𝐡,i)(\mathbf{h},i) and (𝐡,j)(\mathbf{h^{\prime}},j) whenever Ma[i,j]=1M_{a}[i,j]=1 and 𝐡𝐡\mathbf{hh^{\prime}} is not an edge of HH. Then, if a homomorphism maps a vertex v1v_{1} with label ii to 𝐡\mathbf{h} and a vertex v2v_{2} with label jj to 𝐡\mathbf{h^{\prime}}, our approach from Section˜3.1 ensures that v1v_{1} and v2v_{2} come from the same child of aa so that no edge between v1v_{1} and v2v_{2} is created at aa.

In Section˜3.1, all polynomials had only one variable xx whose degree reflected the size of an independent set. Here, in addition to the cardinality, we are interested in the weight of the vertices mapped to RR. So instead of univariate polynomials from [x]\mathbb{Z}[x], we use polynomials in two variables xx and yy where the degree of yy keeps track of the weight. The weights of partial solutions are initialized in the leaves of the tree-model; there we also take care of the lists LL: the polynomial for a guess SS and a leaf vv is given by xyω(v)x\cdot y^{\omega(v)} if S={(𝐡,i)}S=\{(\mathbf{h},i)\} for some 𝐡L(v)R\mathbf{h}\in L(v)\cap R and i=λ(v)i=\lambda(v), it is equal to 11 if S={(𝐡,i)}S=\{(\mathbf{h},i)\} for some 𝐡L(v)R\mathbf{h}\in L(v)\setminus R and i=λ(v)i=\lambda(v), and otherwise it is the zero polynomial.

With these adaptations in hand, by a straightforward implementation of the recursion we can already obtain a 2𝒪(dk)n𝒪(1)2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)}-time algorithm that uses only polynomial space and computes the polynomial Q(x,y)=p[n]0,w[nW]0qp,wxpywQ(x,y)=\sum_{p\in[n]_{0},w\in[nW^{*}]_{0}}q_{p,w}x^{p}y^{w} where qp,wq_{p,w} is the number of list HH-homomorphisms of GG of cardinality pp and weight ww. The answer to the problem is then the value qC,Wq_{C,W}. To obtain logarithmic dependency on the graph size in space complexity, in Section˜3.1 we relied on Theorem˜3.4. However, Theorem˜3.4 concerns univariate polynomials, while QQ has two variables. We now explain how to model QQ as a univariate polynomial P[t]P\in\mathbb{Z}[t] in order to apply the theorem.

Let

P(t)=j1[n]0,j2[nW]0qj1,j2tj1(nW+1)+j2.P(t)=\sum\limits_{j_{1}\in[n]_{0},j_{2}\in[nW^{*}]_{0}}q_{j_{1},j_{2}}t^{j_{1}(nW^{*}+1)+j_{2}}.

First, observe that j1j_{1} and j2j_{2} form the base-(nW+1)(nW^{*}+1) representation of the degree of the corresponding monomial. So the coefficient of tC(nW+1)+Wt^{C(nW^{*}+1)+W} in PP is exactly qC,Wq_{C,W}, i.e., the value we seek. Further, it holds that

P(t)=j1[n]0,j2[nW]0qj1,j2(tnW+1)j1tj2,P(t)=\sum_{j_{1}\in[n]_{0},j_{2}\in[nW^{*}]_{0}}q_{j_{1},j_{2}}(t^{nW^{*}+1})^{j_{1}}t^{j_{2}},

so evaluating PP at some value ss\in\mathbb{Z} modulo a prime number pp is equivalent to computing the value Q(snW+1,s)modpQ(s^{nW^{*}+1},s)\bmod p.
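
A small Python sanity check of this packing trick, with illustrative values of n, W^* and of the coefficients q_{C,W} (not taken from an actual instance):

n, Wmax = 5, 3
B = n * Wmax + 1                                    # the base n*W^* + 1
q = {(2, 7): 4, (1, 2): 9}                          # q_{cardinality, weight}
P = {c * B + w: v for (c, w), v in q.items()}       # pack the two indices into one degree
assert P[2 * B + 7] == 4                            # the degree C*B + W recovers q_{C,W}

s, p = 10, 101                                      # check P(s) = Q(s^B, s) modulo p
P_val = sum(v * pow(s, d, p) for d, v in P.items()) % p
Q_val = sum(v * pow(pow(s, B, p), c, p) * pow(s, w, p) for (c, w), v in q.items()) % p
assert P_val == Q_val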

It remains to choose suitable values to apply Theorem˜3.4. The degree of PP is bounded by 𝒪(n2W)\mathcal{O}(n^{2}W^{*}). The number of HH-homomorphisms of GG, and hence each coefficient of PP as well, is bounded by |V(H)|n|V(H)|^{n}. Since |V(H)||V(H)| is a problem-specific constant, there is a value nn^{\prime} of magnitude 𝒪(n2W)\mathcal{O}(n^{2}W^{*}) satisfying the prerequisites of Theorem˜3.4. Then for a prime number p2n+2p\leqslant 2n^{\prime}+2, any value from 𝔽p\mathbb{F}_{p} is 𝒪(logn+logW)\mathcal{O}(\log n+\log W^{*}) bits long. Now to compute the value Q(snW+1,s)modpQ(s^{nW^{*}+1},s)\bmod p for some s𝔽ps\in\mathbb{F}_{p}, we proceed similarly to Section˜3.1: during the recursion, instead of storing all coefficients of the polynomials, as a partial result we only store the current result of the evaluation at x=snW+1x=s^{nW^{*}+1} and y=sy=s modulo pp.

Let us now summarize the time and space complexity of this evaluation similarly to Section˜3.1. The depth of TT is dd and per node of TT, there are at most 4+|States|4+|\textbf{States}| recursive calls where |States||\textbf{States}| reflects that the transformation from 11_{\geqslant} to 22_{\geqslant} is carried out for every element of a guess SStatesS\subseteq\textbf{States} (recall the tables T(,c=0,),,T(,c=|α1(2)|,)T(\cdot,c=0,\cdot),\dots,T(\cdot,c=|\alpha^{-1}(2_{\geqslant})|,\cdot) in Section˜3.1). Due to |States|=k|V(H)||\textbf{States}|=k\cdot|V(H)|, the recursion depth is then 𝒪(kd)\mathcal{O}(kd). The number of possible guesses SS as well as reasonable α\alpha and β\beta is bounded by 2𝒪(States)=2𝒪(k)2^{\mathcal{O}(\textbf{States})}=2^{\mathcal{O}(k)}. Also, for a node aa and a reasonable β\beta, the auxiliary graph Fa,βF^{a,\beta} has at most |States|=𝒪(k)|\textbf{States}|=\mathcal{O}(k) vertices. Recall that in Section˜3.1, at some point of the computation we work with a polynomial using a variable zz. For this variable, only coefficients at monomials ziz^{i} for i|V(Fa,β)|i\leqslant|V(F^{a,\beta})| are relevant. Hence, for each query we need to keep only 𝒪(k)\mathcal{O}(k) coefficients from 𝔽p\mathbb{F}_{p} and such a coefficient uses 𝒪(logn+logW)\mathcal{O}(\log n+\log W^{*}) bits. The addition and multiplication of two such coefficients can be done in time 𝒪(logn+logW)\mathcal{O}(\log n+\log W^{*}). These properties imply that following the argument from Section˜3.1 we obtain the running time of 2𝒪(dk)n𝒪(1)logW2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)}\cdot\log W^{*} and space complexity of 𝒪(kd)k𝒪(logn+logW)=𝒪(k2d(logn+logW))\mathcal{O}(kd)\cdot k\cdot\mathcal{O}(\log n+\log W^{*})=\mathcal{O}(k^{2}d(\log n+\log W^{*})).

With that, Theorem˜3.4 implies that the coefficients of PP, and in particular the sought value qC,Wq_{C,W}, can be reconstructed in time 2𝒪(dk)n𝒪(1)(W)𝒪(1)2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)}\cdot(W^{*})^{\mathcal{O}(1)} and using 𝒪(k2d(logn+logW))\mathcal{O}(k^{2}d(\log n+\log W^{*})) space. This concludes the proof of Theorem˜3.6.

We remark that the result of Theorem˜3.6 can be combined with the Cut&Count technique of Cygan et al. [15] in order to also incorporate connectivity constraints into List HH-Homomorphism and solve problems like Connected Vertex Cover and Connected Odd Cycle Transversal. In essence, Cut&Count provides a randomized reduction from List HH-Homomorphism with connectivity constraints to #-List HH^{\prime}-Homomorphism for a new pattern graph HH^{\prime} with at most twice as many vertices as HH. Since the reduction only preserves the parity of the number of solutions, in Cut&Count one typically uses the Isolation Lemma [35] to sample a weight function so that with high probability, there is exactly one solution (and thus an odd number of solutions) of minimum possible weight; then counting the number of solutions modulo 22 for all possible weights reveals the existence of a solution. Note here that the algorithm of Theorem˜3.6 is already prepared to count weighted solutions. In our setting, the usage of the Isolation Lemma necessitates allowing randomization and adds an 𝒪(nlogn)\mathcal{O}(n\log n) term to the space complexity for storing the sampled weights. We leave the details to the reader.

3.3 Max-Cut

In the classical Max Cut problem, we are given a graph GG and the task is to output maxXV(G)|E(X,V(G)X)|\max_{X\subseteq V(G)}|E(X,V(G)\setminus X)|. Towards solving the problem, let us fix a graph GG and a (d,k)(d,k)-tree model (T,,,λ)(T,\mathcal{M},\mathcal{R},\lambda) of GG. Recall that for every node aa of TT, i[k]i\in[k] and XVaX\subseteq V_{a}, we denote by Xa(i)X_{a}(i) the set of vertices in XX labeled ii at aa, i.e., Xλa1(i)X\cap\lambda_{a}^{-1}(i). Given a child bb of aa, we let Vab=VbV_{ab}=V_{b} and we denote by Vab(i)V_{ab}(i) the set of vertices in VbV_{b} labeled ii at aa, i.e., VbVa(i)V_{b}\cap V_{a}(i). By Xab(i)X_{ab}(i) we denote the set XVab(i)X\cap V_{ab}(i). Given c{a,ab}c\in\{a,ab\}, we define the cc-signature of XVcX\subseteq V_{c} — denoted by 𝗌𝗂𝗀c(X)\mathsf{sig}_{c}(X) — as the vector (|Xc(1)|,|Xc(2)|,,|Xc(k)|)(|X_{c}(1)|,|X_{c}(2)|,\dots,|X_{c}(k)|). We let 𝒮(c)\mathcal{S}(c) be the set of cc-signatures of all the subsets of VcV_{c}, i.e., 𝒮(c) . . ={𝗌𝗂𝗀c(X):XVc}\mathcal{S}(c)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\{\mathsf{sig}_{c}(X)\,:\,X\subseteq V_{c}\}. Observe that |𝒮(c)|n𝒪(k)|\mathcal{S}(c)|\in n^{\mathcal{O}(k)} holds. Also, for the children b1,,btb_{1},\dots,b_{t} of aa, we define 𝒮(ab1,,abt)\mathcal{S}(ab_{1},\dots,ab_{t}) as the set of all tuples (s1,,st)(s^{1},\dots,s^{t}) with si𝒮(abi)s^{i}\in\mathcal{S}(ab_{i}) for each i[t]i\in[t]. Given s𝒮(c)s\in\mathcal{S}(c), we define fc(s)f_{c}(s) as the maximum of |E(X,VcX)||E(X,V_{c}\setminus{X})| over all the subsets XVcX\subseteq V_{c} with cc-signature ss. To solve Max Cut on GG, it suffices to compute maxs𝒮(r)fr(s)\max_{s\in\mathcal{S}(r)}f_{r}(s) where rr is the root of TT.

Let bb be a child of aa. We start explaining how to compute fab(s)f_{ab}(s) by making at most n𝒪(k)n^{\mathcal{O}(k)} calls to the function fbf_{b}. Given s𝒮(b)s^{\prime}\in\mathcal{S}(b), we define ρab(s)\rho_{ab}(s^{\prime}) as the vector s=(s1,,sk)𝒮(ab)s=(s_{1},\dots,s_{k})\in\mathcal{S}(ab) such that, for each i[k]i\in[k], we have si=jρab1(i)sjs_{i}=\sum_{j\in\rho_{ab}^{-1}(i)}s_{j}^{\prime}. Observe that for every XVbX\subseteq V_{b}, we have 𝗌𝗂𝗀ab(X)=ρab(𝗌𝗂𝗀b(X))\mathsf{sig}_{ab}(X)=\rho_{ab}(\mathsf{sig}_{b}(X)). Consequently, for every s𝒮(ab)s\in\mathcal{S}(ab), fab(s)f_{ab}(s) is the maximum of fb(s)f_{b}(s^{\prime}) over the bb-signatures s𝒮(b)s^{\prime}\in\mathcal{S}(b) such that ρab(s)=s\rho_{ab}(s^{\prime})=s. It follows that we can compute fab(s)f_{ab}(s) with at most n𝒪(k)n^{\mathcal{O}(k)} calls to the function fbf_{b}.

{observation}

Given a node aa of TT with a child bb and s𝒮(ab)s\in\mathcal{S}(ab), we can compute fab(s)f_{ab}(s) in space 𝒪(klog(n))\mathcal{O}(k\log(n)) and time n𝒪(k)n^{\mathcal{O}(k)} with n𝒪(k)n^{\mathcal{O}(k)} oracle access to the function fbf_{b}.
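
A sketch of this computation in Python, where rho encodes the relabeling ρ_ab as a map from child labels to parent labels, signatures_b enumerates 𝒮(b), and f_b is an oracle; labels are indexed from 0 for convenience and all names are illustrative.

def lift_signature(s_prime, rho, k):
    """rho_ab applied to a b-signature: sum the entries of labels merged by rho."""
    s = [0] * k
    for label, count in enumerate(s_prime):
        s[rho[label]] += count
    return tuple(s)

def f_ab(s, signatures_b, f_b, rho, k):
    """Maximize f_b over the preimage of the ab-signature s under rho_ab."""
    best = None
    for s_prime in signatures_b:                # at most n^{O(k)} candidates
        if lift_signature(s_prime, rho, k) == tuple(s):
            value = f_b(s_prime)
            best = value if best is None else max(best, value)
    return best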

In order to simplify forthcoming statements, we fix a node aa of TT with children b1,,btb_{1},\ldots,b_{t}. Now, we explain how to compute fa(s)f_{a}(s) by making at most n𝒪(k)n^{\mathcal{O}(k)} calls to the functions fab1,,fabtf_{ab_{1}},\ldots,f_{ab_{t}}. The first step is to express fa(s)f_{a}(s) in terms of fab1,,fabtf_{ab_{1}},\ldots,f_{ab_{t}}. We first describe |E(X,VaX)||E(X,V_{a}\setminus X)| in terms of |E(XVbi,VbiX)||E(X\cap V_{b_{i}},V_{b_{i}}\setminus X)|. We denote by E(Vb1,,Vbt)E(V_{b_{1}},\dots,V_{b_{t}}) the set of edges of G[Va]G[V_{a}] whose endpoints lie in different VbiV_{b_{i}}’s, i.e., E(G[Va])(E(G[Vb1])E(G[Vbt]))E(G[V_{a}])\setminus(E(G[V_{b_{1}}])\cup\dots\cup E(G[V_{b_{t}}])). Given XVaX\subseteq V_{a}, we denote by Ea(X)E_{a}(X) the intersection of E(X,VaX)E(X,V_{a}\setminus X) and E(Vb1,,Vbt)E(V_{b_{1}},\dots,V_{b_{t}}). In simple words, Ea(X)E_{a}(X) is the set of all cut-edges (i.e., between XX and VaXV_{a}\setminus X) running between distinct children of aa. For i,j[k]i,j\in[k], we denote by Ea(X,i,j)E_{a}(X,i,j) the subset of Ea(X)E_{a}(X) consisting of the edges whose endpoints are labeled ii and jj at aa. We capture the size of Ea(X,i,j)E_{a}(X,i,j) with the following notion. For every c{a,ab1,,abt}c\in\{a,ab_{1},\dots,ab_{t}\}, s𝒮(c)s\in\mathcal{S}(c) and i,j[k]i,j\in[k], we define

#𝗉𝖺𝗂𝗋𝗌c(s,i,j) . . ={si(|Vc(j)|sj)+sj(|Vc(i)|si) if ij,si(|Vc(i)|si) otherwise.\#\mathsf{pairs}_{c}(s,i,j)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\begin{cases}s_{i}\cdot(|V_{c}(j)|-s_{j})+s_{j}\cdot(|V_{c}(i)|-s_{i})\text{ if }i\neq j,\\ s_{i}\cdot(|V_{c}(i)|-s_{i})\text{ otherwise.}\end{cases}

It is not hard to check that, for every subset XVaX\subseteq V_{a} with aa-signature ss, #𝗉𝖺𝗂𝗋𝗌a(s,i,j)\#\mathsf{pairs}_{a}(s,i,j) is the size of the set 𝗉𝖺𝗂𝗋𝗌a(X,i,j)\mathsf{pairs}_{a}(X,i,j) of pairs of distinct vertices in VaV_{a} labeled ii and jj at aa such that exactly one of them is in XX. Observe that when Ma[i,j]=1M_{a}[i,j]=1, then |Ea(X,i,j)||E_{a}(X,i,j)| is the number of pairs in 𝗉𝖺𝗂𝗋𝗌a(X,i,j)\mathsf{pairs}_{a}(X,i,j) whose endpoints belong to different sets among Vb1,,VbtV_{b_{1}},\dots,V_{b_{t}}. Moreover, given a child bb of aa, the number of pairs in 𝗉𝖺𝗂𝗋𝗌a(X,i,j)\mathsf{pairs}_{a}(X,i,j) both of whose endpoints belong to VbV_{b} is exactly #𝗉𝖺𝗂𝗋𝗌ab(𝗌𝗂𝗀ab(X),i,j)\#\mathsf{pairs}_{ab}(\mathsf{sig}_{ab}(X),i,j). Thus, when Ma[i,j]=1M_{a}[i,j]=1, we have

|Ea(X,i,j)|=#𝗉𝖺𝗂𝗋𝗌a(𝗌𝗂𝗀a(X),i,j)l[t]#𝗉𝖺𝗂𝗋𝗌abl(𝗌𝗂𝗀abl(X),i,j).|E_{a}(X,i,j)|=\#\mathsf{pairs}_{a}(\mathsf{sig}_{a}(X),i,j)-\sum_{l\in[t]}\#\mathsf{pairs}_{ab_{l}}(\mathsf{sig}_{ab_{l}}(X),i,j)\enspace. (7)

We capture the size of Ea(X)E_{a}(X) with the following notion. For every c{a,ab1,,abt}c\in\{a,ab_{1},\dots,ab_{t}\}, s𝒮(c)s\in\mathcal{S}(c) and (k×k)(k\times k)-matrix MM, we define

mc(s,M) . . =i,j[k],ijM[i,j]=1#𝗉𝖺𝗂𝗋𝗌c(s,i,j).m_{c}(s,M)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\sum_{\begin{subarray}{c}i,j\in[k],i\leqslant j\\ M[i,j]=1\end{subarray}}\#\mathsf{pairs}_{c}(s,i,j).

Note that |Ea(X)|=i,j[k]:ij,Ma[i,j]=1|Ea(X,i,j)||E_{a}(X)|=\sum_{i,j\in[k]\colon i\leqslant j,M_{a}[i,j]=1}|E_{a}(X,i,j)|. Hence, by Equation 7, we deduce that |Ea(X)|=ma(𝗌𝗂𝗀a(X),Ma)i[t]mabi(𝗌𝗂𝗀abi(X),Ma)|E_{a}(X)|=m_{a}(\mathsf{sig}_{a}(X),M_{a})-\sum_{i\in[t]}m_{ab_{i}}(\mathsf{sig}_{ab_{i}}(X),M_{a}). Since E(X,VaX)E(X,V_{a}\setminus X) is the disjoint union of Ea(X)E_{a}(X) and the sets E(XVb1,Vb1X),,E(XVbt,VbtX)E(X\cap V_{b_{1}},V_{b_{1}}\setminus X),\dots,E(X\cap V_{b_{t}},V_{b_{t}}\setminus X) , we deduce:

{observation}

For every XVaX\subseteq V_{a} we have

|E(X,VaX)|=ma(𝗌𝗂𝗀a(X),Ma)+i=1t(|E(XVbi,VbiX)|mabi(𝗌𝗂𝗀abi(X),Ma)).|E(X,V_{a}\setminus X)|=m_{a}(\mathsf{sig}_{a}(X),M_{a})+\sum_{i=1}^{t}\left(|E(X\cap V_{b_{i}},V_{b_{i}}\setminus X)|-m_{ab_{i}}(\mathsf{sig}_{ab_{i}}(X),M_{a})\right)\enspace.
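
The two counting quantities #pairs_c and m_c translate directly into code; in the following Python sketch, sizes[i] plays the role of |V_c(i)|, M is the k×k symmetric 0/1 matrix of the node, and labels are indexed from 0.

def num_pairs(s, sizes, i, j):
    """#pairs_c(s, i, j): cut pairs between labels i and j induced by the signature s."""
    if i != j:
        return s[i] * (sizes[j] - s[j]) + s[j] * (sizes[i] - s[i])
    return s[i] * (sizes[i] - s[i])

def m_value(s, sizes, M):
    """m_c(s, M): sum of #pairs_c(s, i, j) over unordered label pairs with M[i][j] = 1."""
    k = len(s)
    return sum(num_pairs(s, sizes, i, j)
               for i in range(k) for j in range(i, k) if M[i][j] == 1)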

We are ready to express fa(s)f_{a}(s) in terms of fab1,,fabtf_{ab_{1}},\dots,f_{ab_{t}} and ma,mab1,,mabtm_{a},m_{ab_{1}},\dots,m_{ab_{t}}.

Lemma 3.7.

For every s𝒮(a)s\in\mathcal{S}(a), we have

fa(s)=ma(s,Ma)+max(s1,,st)𝒮(ab1,,abt)s=s1++st(i=1t(fabi(si)mabi(si,Ma))).f_{a}(s)=m_{a}(s,M_{a})+\max_{\begin{subarray}{c}(s^{1},\dots,s^{t})\in\mathcal{S}(ab_{1},\dots,ab_{t})\\ s=s^{1}+\dots+s^{t}\end{subarray}}\left(\sum_{i=1}^{t}\left(f_{ab_{i}}(s^{i})-m_{ab_{i}}(s^{i},M_{a})\right)\right)\enspace.
Proof 3.8.

Let s𝒮(a)s\in\mathcal{S}(a). By Section˜3.3 we know that

fa(s)=maxXVa𝗌𝗂𝗀a(X)=s|E(X,VaX)|=ma(s,Ma)+maxXVa𝗌𝗂𝗀a(X)=s(i=1t|E(Xi,VbiXi)|mabi(𝗌𝗂𝗀abi(Xi),Ma))f_{a}(s)=\max_{\begin{subarray}{c}X\subseteq V_{a}\\ \mathsf{sig}_{a}(X)=s\end{subarray}}|E(X,V_{a}\setminus X)|=m_{a}(s,M_{a})+\max_{\begin{subarray}{c}X\subseteq V_{a}\\ \mathsf{sig}_{a}(X)=s\end{subarray}}\left(\sum_{i=1}^{t}|E(X_{i},V_{b_{i}}\setminus X_{i})|-m_{ab_{i}}(\mathsf{sig}_{ab_{i}}(X_{i}),M_{a})\right)

where XiX_{i} is a shorthand for XVbiX\cap V_{b_{i}}. Observe that for every XVaX\subseteq V_{a}, we have 𝗌𝗂𝗀a(X)=s\mathsf{sig}_{a}(X)=s iff s=i=1t𝗌𝗂𝗀abi(XVbi)s=\sum_{i=1}^{t}\mathsf{sig}_{ab_{i}}(X\cap V_{b_{i}}). Since fabi(si)f_{ab_{i}}(s^{i}) is the maximum |E(Xi,VbiXi)||E(X_{i},V_{b_{i}}\setminus X_{i})| over all XiVbiX_{i}\subseteq V_{b_{i}} with abiab_{i}-signature sis^{i} while mabi(𝗌𝗂𝗀abi(Xi),Ma)m_{ab_{i}}(\mathsf{sig}_{ab_{i}}(X_{i}),M_{a}) only depends on sis^{i} and not on the concrete choice of XiX_{i}, we conclude that fa(s)f_{a}(s) equals ma(s,Ma)m_{a}(s,M_{a}) plus

max(s1,,st)𝒮(ab1,,abt)s=s1++st(i=1tfabi(si)mabi(si,Ma)).\max_{\begin{subarray}{c}(s^{1},\dots,s^{t})\in\mathcal{S}(ab_{1},\dots,ab_{t})\\ s=s^{1}+\dots+s^{t}\end{subarray}}\left(\sum_{i=1}^{t}f_{ab_{i}}(s^{i})-m_{ab_{i}}(s^{i},M_{a})\right).

To compute fa(s)f_{a}(s) we use a twist of Kane’s algorithm [36] for solving the kk-dimensional Unary Subset Sum in Logspace. The twist relies on using a polynomial, slightly different from the original work of Kane [36], defined in the following lemma.

Given a vector s=(s1,,sk)ks=(s_{1},\dots,s_{k})\in\mathbb{Z}^{k} and BB\in\mathbb{Z}, we denote by s|Bs|B the vector (s1,,sk,B)(s_{1},\dots,s_{k},B). We denote by CC the number 2n2+12n^{2}+1 and, given a vector sk+1s^{\prime}\in\mathbb{Z}^{k+1}, we denote by C(s)C(s^{\prime}) the sum i[k+1]Ci1si\sum_{i\in[k+1]}C^{i-1}s^{\prime}_{i}.

Lemma C.

Let s𝒮(a)s\in\mathcal{S}(a) and B[|E(G[Va])|]B\in[|E(G[V_{a}])|]. Let A(s,B)A(s,B) be the number of tuples (s1,,st)𝒮(ab1,,abt)(s^{1},\dots,s^{t})\in\mathcal{S}(ab_{1},\dots,ab_{t}) such that s=s1++sts=s^{1}+\dots+s^{t} and

Bma(s,Ma)=j=1tfabj(sj)mabj(sj,Ma).B-m_{a}(s,M_{a})=\sum_{j=1}^{t}f_{ab_{j}}(s^{j})-m_{ab_{j}}(s^{j},M_{a}).

For every prime number p>Ck+1+1p>C^{k+1}+1, we have A(s,B)Pa,s(B,p)-A(s,B)\equiv P_{a,s}(B,p) (mod pp) where

Pa,s(B,p) . . =x=1p1xC(s|Bma(s,Ma))(j=1t(sj𝒮(abj)xC(sj|fabj(sj)mabj(sj,Ma)))).P_{a,s}(B,p)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\sum_{x=1}^{p-1}x^{C(s|B-m_{a}(s,M_{a}))}\left(\prod_{j=1}^{t}\left(\sum_{s^{j}\in\mathcal{S}(ab_{j})}x^{-C(s^{j}|f_{ab_{j}}(s^{j})-m_{ab_{j}}(s^{j},M_{a}))}\right)\right).
Proof 3.9.

First, note that

xC(s|Bma(s,Ma))(j=1t(sj𝒮(abj)xC(sj|fabj(sj)mabj(sj,Ma))))=s1,,st𝒮(ab1,,abt)xα(s1,,st)x^{C(s|B-m_{a}(s,M_{a}))}\left(\prod_{j=1}^{t}\left(\sum_{s^{j}\in\mathcal{S}(ab_{j})}x^{-C(s^{j}|f_{ab_{j}}(s^{j})-m_{ab_{j}}(s^{j},M_{a}))}\right)\right)=\sum_{s^{1},\dots,s^{t}\in\mathcal{S}(ab_{1},\dots,ab_{t})}x^{\alpha(s^{1},\dots,s^{t})} (8)

where

α(s1,,st)=C(s|Bma(s,Ma))j=1t(C(sj|fabj(sj)mabj(sj,Ma))).\alpha(s^{1},\dots,s^{t})=C(s|B-m_{a}(s,M_{a}))-\sum\limits_{j=1}^{t}\left(C(s^{j}|f_{ab_{j}}(s^{j})-m_{ab_{j}}(s^{j},M_{a}))\right).

As in [36], the idea of this proof is to change the order of summation, show that the terms where α(s1,,st)0\alpha(s^{1},\dots,s^{t})\neq 0 cancel out, and prove that the sum of the terms where α(s1,,st)=0\alpha(s^{1},\dots,s^{t})=0 is A(s,B)-A(s,B). The latter is implied by the following claim.

Claim D.

For every (s1,,st)𝒮(ab1,,abt)(s^{1},\dots,s^{t})\in\mathcal{S}(ab_{1},\dots,ab_{t}), the absolute value of α(s1,,st)\alpha(s^{1},\dots,s^{t}) is at most Ck+1C^{k+1}. Moreover, α(s1,,st)=0\alpha(s^{1},\dots,s^{t})=0 iff s=s1++sts=s^{1}+\dots+s^{t} and Bma(s,Ma)=i=1tfabi(si)mabi(si,Ma)B-m_{a}(s,M_{a})=\sum_{i=1}^{t}f_{ab_{i}}(s^{i})-m_{ab_{i}}(s^{i},M_{a}).

{claimproof}

By definition of C(|)C(\cdot|\cdot), we have

α(s1,,st)=(i=1kCi1(sij=1tsij))+Ck(Bma(s,Ma)j=1t(fabj(sj)mabj(sj,Ma))).\alpha(s^{1},\dots,s^{t})=\left(\sum_{i=1}^{k}C^{i-1}\left(s_{i}-\sum_{j=1}^{t}s_{i}^{j}\right)\right)+C^{k}\left(B-m_{a}(s,M_{a})-\sum_{j=1}^{t}\left(f_{ab_{j}}(s^{j})-m_{ab_{j}}(s^{j},M_{a})\right)\right).

I.e.,

α(s1,,st)=i=1k+1Ci1ei\alpha(s^{1},\dots,s^{t})=\sum_{i=1}^{k+1}C^{i-1}e_{i}

with

ei={sij=1tsijif 1ikBma(s,Ma)j=1t(fabj(sj)mabj(sj,Ma))if i=k+1.e_{i}=\begin{cases}s_{i}-\sum_{j=1}^{t}s_{i}^{j}&\text{if }1\leqslant i\leqslant k\\ B-m_{a}(s,M_{a})-\sum_{j=1}^{t}\left(f_{ab_{j}}(s^{j})-m_{ab_{j}}(s^{j},M_{a})\right)&\text{if }i=k+1\end{cases}.

We claim that the absolute value of each eie_{i} is at most C1C-1. For every i[k]i\in[k], by definition, sis_{i} and j=1tsij\sum_{j=1}^{t}s_{i}^{j} are at least 0 and at most |Va(i)|n|V_{a}(i)|\leqslant n. Hence, for each i[k]i\in[k] the absolute value of eie_{i} is at most n<Cn<C. Both BB and j=1tfabj(sj)\sum_{j=1}^{t}f_{ab_{j}}(s^{j}) are upper bounded by |E(G[Va])|n2|E(G[V_{a}])|\leqslant n^{2}. Moreover, from the definition of the functions ma,mab1,,mabtm_{a},m_{ab_{1}},\dots,m_{ab_{t}}, we deduce that both ma(s,Ma)m_{a}(s,M_{a}) and j=1tmabj(sj,Ma)\sum_{j=1}^{t}m_{ab_{j}}(s^{j},M_{a}) are upper bounded by |Va|2n2|V_{a}|^{2}\leqslant n^{2}. It follows that the absolute value of ek+1e_{k+1} is at most 2n2<C2n^{2}<C. Thus, the absolute value of α(s1,,st)\alpha(s^{1},\dots,s^{t}) is at most i=1k+1Ci1eii=1k+1Ci1(C1)=Ck+11\sum_{i=1}^{k+1}C^{i-1}e_{i}\leqslant\sum_{i=1}^{k+1}C^{i-1}(C-1)=C^{k+1}-1.

It remains to prove that α(s1,,st)=0\alpha(s^{1},\dots,s^{t})=0 iff ej=0e_{j}=0 for every j[k+1]j\in[k+1]. One direction is trivial. For the other direction, observe that if ek+10e_{k+1}\neq 0, then the absolute value of Ckek+1C^{k}e_{k+1} is at least CkC^{k}. But the absolute value of α(s1,,st)Ckek+1=i=1kCi1ei\alpha(s^{1},\dots,s^{t})-C^{k}e_{k+1}=\sum_{i=1}^{k}C^{i-1}e_{i} is at most i=1kCi1(C1)=Ck1\sum_{i=1}^{k}C^{i-1}(C-1)=C^{k}-1. Hence, if ek+10e_{k+1}\neq 0, then α(s1,,st)0\alpha(s^{1},\dots,s^{t})\neq 0. By induction, it follows that α(s1,,st)=0\alpha(s^{1},\dots,s^{t})=0 is equivalent to ei=0e_{i}=0 for every i[k+1]i\in[k+1].

By using Equation˜8 on Pa,s(B,p)P_{a,s}(B,p) and interchanging the sums, we deduce that

Pa,s(B,p)=s1,,st𝒮(ab1,,abt)(x=1p1xα(s1,,st)).P_{a,s}(B,p)=\sum_{s^{1},\dots,s^{t}\in\mathcal{S}(ab_{1},\dots,ab_{t})}\left(\sum_{x=1}^{p-1}x^{\alpha(s^{1},\dots,s^{t})}\right).

It was proven in the proof of Lemma 1 in [36] that

x=1p1x(modp)={1 if 0(modp1)0 otherwise.\sum_{x=1}^{p-1}x^{\ell}\,\pmod{p}=\begin{cases}-1\text{ if }\ell\equiv 0\,\pmod{p-1}\\ 0\text{ otherwise.}\end{cases}

We infer from the above formula that

Pa,s(B,p)(modp)=s1,,st𝒮(ab1,,abt)α(s1,,st)0(modp1)1(modp).P_{a,s}(B,p)\pmod{p}=\sum_{\begin{subarray}{c}s^{1},\dots,s^{t}\in\mathcal{S}(ab_{1},\dots,ab_{t})\\ \alpha(s^{1},\dots,s^{t})\equiv 0\pmod{p-1}\end{subarray}}-1\pmod{p}.

Observe that, for every (s1,,st)𝒮(ab1,,abt)(s^{1},\dots,s^{t})\in\mathcal{S}(ab_{1},\dots,ab_{t}), we have α(s1,,st)0(modp1)\alpha(s^{1},\dots,s^{t})\equiv 0\pmod{p-1} iff α(s1,,st)=0\alpha(s^{1},\dots,s^{t})=0 because Ck+1<p1C^{k+1}<p-1 and the absolute value of α(s1,,st)\alpha(s^{1},\dots,s^{t}) is at most Ck+1C^{k+1} by ˜D. From the equivalence given by ˜D, we deduce that there are A(s,B)A(s,B) tuples (s1,,st)𝒮(ab1,,abt)(s^{1},\dots,s^{t})\in\mathcal{S}(ab_{1},\dots,ab_{t}) such that α(s1,,st)=0\alpha(s^{1},\dots,s^{t})=0, i.e.,

Pa,s(B,p)(modp)=A(s,B)(modp).P_{a,s}(B,p)\pmod{p}=-A(s,B)\pmod{p}.
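
As a quick numerical sanity check of the character-sum identity invoked in this proof, the following Python lines verify that Σ_{x=1}^{p−1} x^ℓ is congruent to −1 modulo p exactly when p−1 divides ℓ (tested here for non-negative exponents and one concrete prime):

p = 13
for l in range(0, 3 * (p - 1) + 2):
    total = sum(pow(x, l, p) for x in range(1, p)) % p
    # -1 modulo p is represented by p - 1
    assert total == (p - 1 if l % (p - 1) == 0 else 0)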

With this, we can prove Theorem 1.2 via Algorithm˜1. As a subroutine, we use the function NextPrime(pp), which computes the smallest prime larger than pp.

Input: An internal node aa of TT and s𝒮(a)s\in\mathcal{S}(a).
Output: fa(s)f_{a}(s)
1 for B=|E(G[Va])|B=|E(G[V_{a}])| downto 0 do
2 c . . =0c\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=0
3 p . . =NextPrime(Ck+1)p\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\textsf{NextPrime}(C^{k+1})
4 while cnklog(n)c\leqslant nk\log(n) do
5    if Pa,s(B,p)0P_{a,s}(B,p)\not\equiv 0 (mod pp) then return BB
6    c . . =c+log(p)c\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=c+\lfloor\log(p)\rfloor
7    p . . =NextPrime(p)p\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\textsf{NextPrime}(p)
8    
Algorithm 1 Algorithm for computing fa(s)f_{a}(s).
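
For illustration, Algorithm˜1 can be transcribed into Python as follows; the quantities num_edges = |E(G[V_a])|, C_power = C^{k+1} and the oracle P_mod(B, p) computing P_{a,s}(B,p) modulo p are assumed to be supplied, and the logarithm is assumed to be taken to base 2.

import math

def next_prime(p):
    """The smallest prime larger than p (trial division suffices for a sketch)."""
    q = p + 1
    while any(q % d == 0 for d in range(2, int(q ** 0.5) + 1)):
        q += 1
    return q

def compute_f(num_edges, C_power, n, k, P_mod):
    for B in range(num_edges, -1, -1):          # candidate cut values, largest first
        c, p = 0, next_prime(C_power)
        while c <= n * k * math.log2(n):
            if P_mod(B, p) % p != 0:
                return B                        # some prime certifies A(s, B) != 0
            c += int(math.log2(p))
            p = next_prime(p)
    return 0                                    # unreachable if Claims F and G hold
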
Lemma E.

Let s𝒮(a)s\in\mathcal{S}(a). Algorithm˜1 computes fa(s)f_{a}(s) in space 𝒪(klog(n))\mathcal{O}(k\log(n)) and time n𝒪(k)n^{\mathcal{O}(k)} with n𝒪(k)n^{\mathcal{O}(k)} oracle access to the functions fab1,,fabtf_{ab_{1}},\dots,f_{ab_{t}}.

Proof 3.10.

The correctness of Algorithm˜1 follows from the following claims. Let BB be an integer between 0 and |E(G[Va])||E(G[V_{a}])|, and let A(s,B)A(s,B) be the integer defined in ˜C.

Claim F.

If the algorithm returns BB, then A(s,B)0A(s,B)\neq 0.

{claimproof}

Suppose there exists a prime number p>Ck+1p>C^{k+1} such that Pa,s(B,p)0P_{a,s}(B,p)\not\equiv 0 (mod pp). As Pa,s(B,p)A(s,B)P_{a,s}(B,p)\equiv-A(s,B) (mod pp) by ˜C, we have A(s,B)0A(s,B)\neq 0 and thus there exists (s1,,st)𝒮(ab1,,abt)(s^{1},\dots,s^{t})\in\mathcal{S}(ab_{1},\dots,ab_{t}) such that s=s1++sts=s^{1}+\dots+s^{t} and Bma(s,Ma)=i=1tfabi(si)mabi(si,Ma).B-m_{a}(s,M_{a})=\sum_{i=1}^{t}f_{ab_{i}}(s^{i})-m_{ab_{i}}(s^{i},M_{a}). From Section˜3.3, we deduce that there exists XVaX\subseteq V_{a} such that 𝗌𝗂𝗀a(X)=s1++st=s\mathsf{sig}_{a}(X)=s^{1}+\dots+s^{t}=s and |E(X,VaX)|=B|E(X,V_{a}\setminus X)|=B.

Claim G.

If Pa,s(B,p)0P_{a,s}(B,p)\equiv 0 (mod pp) for every value taken by the variable pp, then A(s,B)=0A(s,B)=0.

{claimproof}

Let dd be the product of the values taken by pp. Then dd is a product of distinct primes pp such that Pa,s(B,p)0P_{a,s}(B,p)\equiv 0 (mod pp). By ˜C, we have Pa,s(B,p)A(s,B)P_{a,s}(B,p)\equiv-A(s,B) (mod pp) for every prime p>Ck+1p>C^{k+1}. Therefore, A(s,B)A(s,B) is a multiple of dd. Observe that d>2cd>2^{c} and c>nklog(n)c>nk\log(n). Hence, we have d>nnkd>n^{nk}. Since A(s,B)A(s,B) corresponds to the number of tuples (s1,,st)𝒮(ab1,,abt)(s^{1},\dots,s^{t})\in\mathcal{S}(ab_{1},\dots,ab_{t}) that satisfy some properties, we have A(s,B)i=1t|𝒮(abi)|nnkA(s,B)\leqslant\prod_{i=1}^{t}|\mathcal{S}(ab_{i})|\leqslant n^{nk}. As dd divides A(s,B)A(s,B) and d>A(s,B)d>A(s,B), we conclude that A(s,B)=0A(s,B)=0.

From Claims F and G, we infer that Algorithm 1 returns BB where BB is the maximum between 0 and |E(G[Va])||E(G[V_{a}])| such that A(s,B)0A(s,B)\neq 0. By definition of A(s,B)A(s,B) and Lemma 3.7, we conclude that fa(s)=Bf_{a}(s)=B.

Complexity.

We adapt the arguments used in [36] to prove the complexity of our algorithm.

  • First, the variable pp is never more than n𝒪(k)n^{\mathcal{O}(k)}. Indeed, standard facts about prime numbers imply that there are nklog(n)nk\log(n) prime numbers between Ck+1C^{k+1} and (Ck+1+nklog(n))𝒪(1)=n𝒪(k)(C^{k+1}+nk\log(n))^{\mathcal{O}(1)}=n^{\mathcal{O}(k)}. Each of these primes causes cc to increase by at least 1. Thus, each value of pp can be encoded with 𝒪(klog(n))\mathcal{O}(k\log(n)) bits.

  • Secondly, observe that we can compute Pa,s(B,p)(modp)P_{a,s}(B,p)\pmod{p} in space 𝒪(klog(n))\mathcal{O}(k\log(n)). Recall that

    Pa,s(B,p) . . =x=1p1xC(s|Bma(s,Ma))(i=1t(si𝒮(abi)xC(si|fabi(si)mabi(si,Ma)))).P_{a,s}(B,p)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\sum_{x=1}^{p-1}x^{C(s|B-m_{a}(s,M_{a}))}\left(\prod_{i=1}^{t}\left(\sum_{s^{i}\in\mathcal{S}(ab_{i})}x^{-C(s^{i}|f_{ab_{i}}(s^{i})-m_{ab_{i}}(s^{i},M_{a}))}\right)\right).

    To compute Pa,s(B,p)P_{a,s}(B,p), it is sufficient to keep track of the current value of xx, the current running total (modulo pp) and enough information to compute the next term, i.e., xC(s|Bma(s,Ma))x^{C(s|B-m_{a}(s,M_{a}))} or xC(si|fabi(si)mabi(si,Ma))x^{-C(s^{i}|f_{ab_{i}}(s^{i})-m_{ab_{i}}(s^{i},M_{a}))}. For that, we need only the current values of ii (at most logn\log n bits) and sis^{i} (at most klognk\log n bits) and the current running total to compute C(s|Bma(s,Ma))C(s|B-m_{a}(s,M_{a})) (or C(si|fabi(si)mabi(si,Ma))C(s^{i}|f_{ab_{i}}(s^{i})-m_{ab_{i}}(s^{i},M_{a}))) modulo pp.

  • Finally, primality testing of numbers between Ck+1C^{k+1} and n𝒪(k)n^{\mathcal{O}(k)} can be done in space 𝒪(klog(n))\mathcal{O}(k\log(n)) via n𝒪(k)n^{\mathcal{O}(k)} divisions, and thus each call to NextPrime()\textsf{NextPrime}(\cdot) can be computed in n𝒪(k)n^{\mathcal{O}(k)} time and 𝒪(klog(n))\mathcal{O}(k\log(n)) space.

We are now ready to prove that one can solve Max-Cut in time n𝒪(dk)n^{\mathcal{O}(dk)} using 𝒪(dklog(n))\mathcal{O}(dk\log(n)) space.

See 1.2

Proof 3.11.

Let rr be the root of TT; we solve Max-Cut by computing maxs𝒮(r)fr(s)\max_{s\in\mathcal{S}(r)}f_{r}(s). For every internal node aa of TT with children b1,,btb_{1},\dots,b_{t}, we use Algorithm˜1 to compute each call of faf_{a} from calls to fab1,,fabtf_{ab_{1}},\dots,f_{ab_{t}}. For every internal node aa with child bb, we use Section˜3.3 to compute each call of fabf_{ab} from calls to fbf_{b}. Finally, for every leaf \ell of TT, we simply have f(s)=0f_{\ell}(s)=0 for every s𝒮()s\in\mathcal{S}(\ell) because VV_{\ell} is a singleton.

First, we prove the running time. By ˜E, for each node aa with children b1,,btb_{1},\dots,b_{t} and s𝒮(a)s\in\mathcal{S}(a), we compute fa(s)f_{a}(s) by calling at most n𝒪(k)n^{\mathcal{O}(k)} times the functions fab1,,fabtf_{ab_{1}},\dots,f_{ab_{t}}. By Section˜3.3, for each node bb with parent aa and s𝒮(ab)s\in\mathcal{S}(ab), we compute fab(s)f_{ab}(s) by calling at most n𝒪(k)n^{\mathcal{O}(k)} times the function fbf_{b}. Consequently, we call each of these functions at most n𝒪(dk)n^{\mathcal{O}(dk)} times in total. Since TT has 𝒪(n)\mathcal{O}(n) nodes, we conclude that computing maxs𝒮(r)fr(s)\max_{s\in\mathcal{S}(r)}f_{r}(s) this way takes n𝒪(dk)n^{\mathcal{O}(dk)} time.

Finally, observe that the stack storing the calls to these functions contains at most 𝒪(d)\mathcal{O}(d) calls at any time, and each of them requires 𝒪(klog(n))\mathcal{O}(k\log(n)) space. Hence, our algorithm solves Max Cut in space 𝒪(dklog(n))\mathcal{O}(dk\log(n)).

3.4 Dominating Set

In this section we prove Theorem˜1.3, which we recall for convenience.

See 1.3

The remainder of this section is devoted to the proof of Theorem˜1.3. Note that Dominating Set cannot be directly stated in terms of HH-homomorphisms for roughly the following reason. For HH-homomorphisms, the constraints are universal: every neighbor of a vertex with a certain state must have one of allowed states. For Dominating Set, there is an existential constraint: a vertex in state “dominated” must have at least one neighbor in the dominating set. Also, the state of a vertex might change from “undominated” to “dominated” during the algorithm. The techniques we used for HH-homomorphisms cannot capture such properties.

The problem occurs for other parameters as well. One approach that circumvents the issue is informally called inclusion-exclusion branching, and was used by Pilipczuk and Wrochna [45] in the context of Dominating Set on graphs of low treedepth. Their dynamic programming uses the states Taken (i.e., in a dominating set), Allowed (i.e., possibly dominated), and Forbidden (i.e., not dominated). These states reflect that we are interested in vertex partitions into three groups such that there are no edges between Taken vertices and Forbidden vertices; these are constraints that can be modelled using HH-homomorphisms for a three-vertex pattern graph HH. Crucially, for a single vertex vv, if we fix the states of the remaining vertices, the number of partitions in which vv is dominated is given by the number of partitions where vv is possibly dominated minus the number of partitions where it is not dominated, i.e., informally “Dominated = Allowed - Forbidden”. We will come back to this state transformation later to provide more details. We also remark that the transformed formulation of dynamic programming is exactly what one gets by applying the zeta-transform to the standard dynamic programming for Dominating Set.

For technical reasons explained later, our algorithm uses the classic Isolation Lemma:

Theorem 3.12 (Isolation lemma, [39]).

Let 2[n]\mathcal{F}\subseteq 2^{[n]} be a non-empty set family over the universe [n][n]. For each i[n]i\in[n], choose a weight ω(i)[2n]\omega(i)\in[2n] uniformly and independently at random. Then with probability at least 1/21/2 there exists a unique set of minimum weight in \mathcal{F}.

Consequently, we pick a weight function ω\omega that assigns every vertex a weight from 1,,2n1,\dots,2n uniformly and independently at random. Storing ω\omega takes 𝒪(nlogn)\mathcal{O}(n\log n) space. By Theorem˜3.12, with probability at least 1/21/2 among dominating sets with the smallest possible cardinality there will be a unique one of minimum possible weight.

To implement the above idea, we let the graph HH have vertex set {𝐓,𝐀,𝐅}\{\mathbf{T},\mathbf{A},\mathbf{F}\} standing for Taken, Allowed, and Forbidden. This graph HH has a loop at each vertex as well as the edges 𝐓𝐀\mathbf{TA} and 𝐀𝐅\mathbf{AF}. Further, let R{𝐓}R\coloneqq\{\mathbf{T}\}. Following our approach for HH-homomorphisms, for every set S𝐒𝐭𝐚𝐭𝐞𝐬S\subseteq\mathbf{States} with States{(𝐓,1),(𝐅,1),,(𝐓,k),(𝐅,k)}\textbf{States}\coloneqq\{(\mathbf{T},1),(\mathbf{F},1),\dots,(\mathbf{T},k),(\mathbf{F},k)\}, every cardinality c[n]0c\in[n]_{0}, and every weight w[2n2]0w\in[2n^{2}]_{0}, in time 2𝒪(dk)n𝒪(1)2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)} and space 𝒪(dk2logn)\mathcal{O}(dk^{2}\log n) (recall that here for the maximum weight WW^{*} we have W2nW^{*}\leqslant 2n) we can compute the value aS,c,wa_{S,c,w} being the number of ordered partitions (T^,F^,A^)(\widehat{T},\widehat{F},\widehat{A}) of V(G)V(G) satisfying the following properties:

  1. 1.

    there are no edges between T^\widehat{T} and F^\widehat{F};

  2. 2.

    |T^|=c|\widehat{T}|=c and ω(T^)=w\omega(\widehat{T})=w; and

  3. 3.

    for every i[k]i\in[k] and I{T,F}I\in\{T,F\}, we have (𝐈,i)S(\mathbf{I},i)\in S iff I^V(i)\widehat{I}\cap V(i)\neq\emptyset.

Note that we do not care whether vertices of some label ii are mapped to AA or not.
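
In the same illustrative adjacency-dictionary style as before, the pattern graph used here is the following (this is only a restatement of the description above):

H_DS = {"T": {"T", "A"}, "A": {"A", "T", "F"}, "F": {"F", "A"}}   # loops at T, A, F plus edges TA and AF
R_DS = {"T"}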

After that, we aim to obtain the number of dominating sets of cardinality cc and weight ww from values aS,c,wa_{S,c,w}. For this we need to transform the “states” Allowed and Forbidden into Dominated. Above we have explained how this transformation works if we know the state of a single vertex. However, now the set SS only captures for every label ii, which states occur on the vertices of label ii. First, the vertices of this label might be mapped to different vertices of HH. And even if we take the partitions where all vertices of label ii are possibly dominated and subtract the partitions where all these vertices are not dominated, then we obtain the partitions where at least one vertex with label ii is dominated. However, our goal is that all vertices of label ii are dominated. So the Dominated = Allowed - Forbidden equality is not directly applicable here.

Recently, Hegerfeld and Kratsch [34] showed that when working with label sets, this equality is in some sense still true modulo 22. On a high level, they show that if we fix a part T^\widehat{T} of a partition satisfying the above properties, then any undominated vertex might be put to any of the sides A^\widehat{A} and F^\widehat{F}. Thus, if T^\widehat{T} is not a dominating set of GG, then there is an even number of such partitions and they cancel out modulo 22.

Now we follow their ideas to formalize this approach and conclude the construction of the algorithm. For i[k]0i\in[k]_{0} and S{(𝐓,1),(𝐅,1),,(𝐓,i),(𝐅,i)}S\subseteq\{(\mathbf{T},1),(\mathbf{F},1),\dots,(\mathbf{T},i),(\mathbf{F},i)\} we define the value Dic,w(S)D^{c,w}_{i}(S) as the number of ordered partitions (T^,F^,X^)(\widehat{T},\widehat{F},\widehat{X}) of V(G)V(G) with the following properties:

  1. 1.

    there are no edges between T^\widehat{T} and F^\widehat{F};

  2. 2.

    |T^|=c|\widehat{T}|=c and ω(T^)=w\omega(\widehat{T})=w;

  3. 3.

    for every j[i]j\in[i] and I{T,F}I\in\{T,F\}, we have (𝐈,j)S(\mathbf{I},j)\in S iff I^V(j)\widehat{I}\cap V(j)\neq\emptyset; and

  4. 4.

    (V(i+1)V(k))T^(V({i+1})\cup\dots\cup V(k))\setminus\widehat{T} is dominated by T^\widehat{T}.

The following observation is obvious.

Claim H.

For every SStatesS\subseteq\textbf{States}, we have Dkc,w(S)=aS,c,wD^{c,w}_{k}(S)=a_{S,c,w}.

Next, we observe that it suffices to compute values Dic,w(S)D^{c,w}_{i}(S) for i=0i=0 and S=S=\emptyset.

Claim I.

D0c,w()D^{c,w}_{0}(\emptyset) is the number of dominating sets of size cc and total weight ww.

Proof 3.13.

Consider a partition (T^,F^,X^)(\widehat{T},\widehat{F},\widehat{X}) counted in D0c,w()D^{c,w}_{0}(\emptyset). Recall that V(1)V(k)=V(G)V(1)\cup\dots\cup V(k)=V(G). So the fourth property implies that V(G)T^V(G)\setminus\widehat{T} is dominated by T^\widehat{T}, i.e., T^\widehat{T} is a dominating set of GG. The first property then implies that F^\widehat{F} is empty and X^=V(G)T^\widehat{X}=V(G)\setminus\widehat{T}. Finally, by definition of D0c,w()D^{c,w}_{0}(\emptyset), we know that the size of T^\widehat{T} is cc and its weight is ww. On the other hand, every dominating set T^\widehat{T} of cardinality cc and weight ww defines a partition (T^,,V(G)T^)(\widehat{T},\emptyset,V(G)\setminus\widehat{T}) counted in D0c,w()D^{c,w}_{0}(\emptyset).

Finally, we prove that, modulo 22, the value Dic,w(S)D_{i}^{c,w}(S) can be computed from the values Di+1c,w(SB)D_{i+1}^{c,w}(S\cup B) with B{(𝐓,i+1),(𝐅,i+1)}B\subseteq\{(\mathbf{T},i+1),(\mathbf{F},i+1)\}.

Claim J.

For every i[k1]0i\in[k-1]_{0} and every S{(𝐓,1),(𝐅,1),,(𝐓,i),(𝐅,i)}S\subseteq\{(\mathbf{T},1),(\mathbf{F},1),\dots,(\mathbf{T},i),(\mathbf{F},i)\}, it holds that

Dic,w(S)B{(𝐓,i+1),(𝐅,i+1)}Di+1c,w(SB)mod2.D^{c,w}_{i}(S)\equiv\sum_{B\subseteq\{(\mathbf{T},i+1),(\mathbf{F},i+1)\}}D^{c,w}_{i+1}(S\cup B)\mod 2.
Proof 3.14.

We follow the proof idea of Hegerfeld and Kratsch. For B{(𝐓,i+1),(𝐅,i+1)}B\subseteq\{(\mathbf{T},i+1),(\mathbf{F},i+1)\}, let 𝒜i+1(SB)\mathcal{A}_{i+1}(S\cup B) be the set of partitions counted in Di+1c,w(SB)D^{c,w}_{i+1}(S\cup B) (see the definition above). Note that we have 𝒜i+1(SB1)𝒜i+1(SB2)=\mathcal{A}_{i+1}(S\cup B_{1})\cap\mathcal{A}_{i+1}(S\cup B_{2})=\emptyset for any B1B2{(𝐓,i+1),(𝐅,i+1)}B_{1}\neq B_{2}\subseteq\{(\mathbf{T},i+1),(\mathbf{F},i+1)\}. So

B{(𝐓,i+1),(𝐅,i+1)}Di+1c,w(SB)=|B{(𝐓,i+1),(𝐅,i+1)}𝒜i+1(SB)|.\sum\limits_{B\subseteq\{(\mathbf{T},i+1),(\mathbf{F},i+1)\}}D^{c,w}_{i+1}(S\cup B)=\left|\bigcup\limits_{B\subseteq\{(\mathbf{T},i+1),(\mathbf{F},i+1)\}}\mathcal{A}_{i+1}(S\cup B)\right|.

Let \mathcal{L} be the set of partitions counted in Dic,w(S)D^{c,w}_{i}(S) and let =B{(𝐓,i+1),(𝐅,i+1)}𝒜i+1(SB)\mathcal{R}=\cup_{B\subseteq\{(\mathbf{T},i+1),(\mathbf{F},i+1)\}}\mathcal{A}_{i+1}(S\cup B). The goal is to prove ||||mod2|\mathcal{L}|\equiv|\mathcal{R}|\bmod{2}.

By definition of these values we have \mathcal{L}\subseteq\mathcal{R}. We claim that the size of \mathcal{R}\setminus\mathcal{L} is even. To see this, consider some fixed partition (T^,F^,X^)(\widehat{T},\widehat{F},\widehat{X})\in\mathcal{R}\setminus\mathcal{L}; it belongs to \mathcal{R}\setminus\mathcal{L} exactly if the following properties hold:

  1. 1.

    there are no edges between T^\widehat{T} and F^\widehat{F};

  2. 2.

    |T^|=c|\widehat{T}|=c and ω(T^)=w\omega(\widehat{T})=w;

  3. 3.

    for every j[i]j\in[i] and I{T,F}I\in\{T,F\}, we have (I,j)S(I,j)\in S iff I^V(j)\widehat{I}\cap V(j)\neq\emptyset; and

  4. 4.

    the set (V(i+2)V(k))T^(V({i+2})\cup\dots\cup V(k))\setminus\widehat{T} is dominated by T^\widehat{T} while the set (V(i+1)V(k))T^(V({i+1})\cup\dots\cup V(k))\setminus\widehat{T} is not dominated by T^\widehat{T}.

Let U=V(i+1)N[T^]U=V(i+1)\setminus N[\widehat{T}]. The last property implies that UU is non-empty. Also let X=X^V(i+1)X^{\prime}=\widehat{X}\setminus V(i+1) and F=F^V(i+1)F^{\prime}=\widehat{F}\setminus V({i+1}). Observe that N(T^)V(i+1)X^N(\widehat{T})\cap V({i+1})\subseteq\widehat{X} due to the first property. We claim that if we fix the first set T^\widehat{T} of the partition as well as the partition of VV(i+1)V\setminus V({i+1}) (by fixing XX^{\prime} and FF^{\prime}), then the extensions of (T^,F,X)(\widehat{T},F^{\prime},X^{\prime}) to a partition in \mathcal{R}\setminus\mathcal{L} are exactly the partitions of form

(T^,F(UU),X(N(T^)V(i+1))U)\bigl{(}\widehat{T},F^{\prime}\cup(U\setminus U^{\prime}),X^{\prime}\cup(N(\widehat{T})\cap V({i+1}))\cup U^{\prime}\bigr{)} (9)

for UUU^{\prime}\subseteq U. So informally speaking, if we fix T^,X,F\widehat{T},X^{\prime},F^{\prime}, every vertex of UU can be put to either X^\widehat{X} or F^\widehat{F} thus giving rise to an even number 2|U|2^{|U|} of such extensions.

Now we prove this claim following the idea of Hegerfeld and Kratsch. First, consider a partition of form (9) for an arbitrary UUU^{\prime}\subseteq U. Since T^\widehat{T} is fixed and the partition on VV(i+1)V\setminus V({i+1}) is fixed as well, the last three properties defining \mathcal{R}\setminus\mathcal{L} trivially hold. Next, due to FF^F^{\prime}\subseteq\widehat{F}, there are no edges between T^\widehat{T} and FF^{\prime}. And since UUUU\setminus U^{\prime}\subseteq U is not dominated by T^\widehat{T}, there are no edges between T^\widehat{T} and UUU\setminus U^{\prime} as well, so the first property holds too.

For the other direction, if we consider an extension (T^,F~,X~)(\widehat{T},\widetilde{F},\widetilde{X})\in\mathcal{R}\setminus\mathcal{L} of (T^,F,X)(\widehat{T},F^{\prime},X^{\prime}), then by the first property we know that F~V(i+1)\widetilde{F}\cap V(i+1) has no edges to T^\widehat{T} and hence, it is a subset of UU.

So, for any fixed (T^,F,X)(\widehat{T},F^{\prime},X^{\prime}), either there is no extension to a partition from \mathcal{R}\setminus\mathcal{L} at all or there are 2|U|2^{|U|} of them where UU is a non-empty set. So the size of \mathcal{R}\setminus\mathcal{L} is even and this concludes the proof.
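Since the parity argument is the heart of the proof, the congruence of Claim J can also be double-checked mechanically on tiny instances. The following brute-force sketch (entirely separate from the algorithm and its analysis) enumerates all ordered partitions of a small random graph with a random labeling and unit weights; all names in it are ours.

```python
import itertools, random

def check_claim_j(n=4, k=3, trials=3):
    """Verify the congruence of Claim J by exhaustive enumeration."""
    for _ in range(trials):
        V = list(range(n))
        E = {(u, v) for u in V for v in V if u < v and random.random() < 0.5}
        adj = {v: {u for u in V if (min(u, v), max(u, v)) in E} for v in V}
        label = {v: random.randrange(1, k + 1) for v in V}      # V(i) = label^{-1}(i)

        def D(i, S, c):
            """Number of partitions (T, F, X) satisfying properties 1-4 (unit weights)."""
            count = 0
            for assign in itertools.product('TFX', repeat=n):
                T = {v for v in V if assign[v] == 'T'}
                F = {v for v in V if assign[v] == 'F'}
                if any(u in adj[v] for v in T for u in F):      # property 1
                    continue
                if len(T) != c:                                 # property 2 (w = c)
                    continue
                ok = all(((x, j) in S) == bool(part & {v for v in V if label[v] == j})
                         for j in range(1, i + 1)
                         for x, part in (('T', T), ('F', F)))   # property 3
                rest = {v for v in V if label[v] > i and v not in T}
                if ok and all(adj[v] & T for v in rest):        # property 4
                    count += 1
            return count

        def subsets(items):
            return itertools.chain.from_iterable(
                itertools.combinations(items, r) for r in range(len(items) + 1))

        for c in range(n + 1):
            for i in range(k):                                  # i = 0, ..., k-1
                for S in map(set, subsets([(x, j) for j in range(1, i + 1) for x in 'TF'])):
                    rhs = sum(D(i + 1, S | set(B), c)
                              for B in subsets([('T', i + 1), ('F', i + 1)]))
                    assert D(i, S, c) % 2 == rhs % 2
    print("Claim J verified on all sampled instances")

check_claim_j()
```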

The application of Claim J for i=0,,k1i=0,\dots,k-1 implies

D0c,w()\displaystyle D^{c,w}_{0}(\emptyset)\equiv B1{(𝐓,1),(𝐅,1)}D1c,w(B1)\displaystyle\sum_{B_{1}\subseteq\{(\mathbf{T},1),(\mathbf{F},1)\}}D^{c,w}_{1}(B_{1})\equiv
B1{(𝐓,1),(𝐅,1)}B2{(𝐓,2),(𝐅,2)}D2c,w(B1B2)\displaystyle\sum_{B_{1}\subseteq\{(\mathbf{T},1),(\mathbf{F},1)\}}\sum_{B_{2}\subseteq\{(\mathbf{T},2),(\mathbf{F},2)\}}D^{c,w}_{2}(B_{1}\cup B_{2})\equiv
\displaystyle\dots
B1{(𝐓,1),(𝐅,1)}B2{(𝐓,2),(𝐅,2)}Bk{(𝐓,k),(𝐅,k)}Dkc,w(B1B2Bk)\displaystyle\sum_{B_{1}\subseteq\{(\mathbf{T},1),(\mathbf{F},1)\}}\sum_{B_{2}\subseteq\{(\mathbf{T},2),(\mathbf{F},2)\}}\dots\sum_{B_{k}\subseteq\{(\mathbf{T},k),(\mathbf{F},k)\}}D^{c,w}_{k}(B_{1}\cup B_{2}\cup\dots\cup B_{k})\equiv
S{(𝐓,1),(𝐅,1),,(𝐓,k),(𝐅,k)}Dkc,w(S)mod2.\sum\limits_{S\subseteq\{(\mathbf{T},1),(\mathbf{F},1),\dots,(\mathbf{T},k),(\mathbf{F},k)\}}D^{c,w}_{k}(S)\mod{2}.

By Claim H, the parity of the number of dominating sets of size cc and weight ww can be expressed as

D0c,w()S{(𝐓,1),(𝐅,1),,(𝐓,k),(𝐅,k)}aS,c,wmod2.D^{c,w}_{0}(\emptyset)\equiv\sum_{S\subseteq\{(\mathbf{T},1),(\mathbf{F},1),\dots,(\mathbf{T},k),(\mathbf{F},k)\}}a_{S,c,w}\mod{2}.

Recall that every aS,c,wa_{S,c,w} can be computed in time 2𝒪(dk)n𝒪(1)2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)} and space 𝒪(dk2logn)\mathcal{O}(dk^{2}\log n), hence this is also the case for their sum modulo 2. We compute the value D0c,w()D^{c,w}_{0}(\emptyset) modulo 2 for all cardinalities c[n]0c\in[n]_{0} and all weights w[2n2]0w\in[2n^{2}]_{0}, and we output the smallest value cc such that for some ww the value D0c,w()D^{c,w}_{0}(\emptyset) is odd (or we output nn if no such value exists).
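Putting the pieces together, the outer loop of the algorithm can be summarized by the following schematic sketch. The routine parity_of_a, standing for the computation of a_{S,c,w} modulo 2 discussed above, is treated as a black box, and its concrete signature is our own choice.

```python
import random
from itertools import chain, combinations

def powerset(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def minimum_dominating_set_size(vertices, k, parity_of_a):
    """Randomized outer loop: returns the minimum size of a dominating set
    with probability at least 1/2 (black-box oracle parity_of_a assumed)."""
    n = len(vertices)
    # Sample a weight function omega : V(G) -> [2n] for the Isolation Lemma.
    omega = {v: random.randrange(1, 2 * n + 1) for v in vertices}
    states = [(s, i) for i in range(1, k + 1) for s in ('T', 'F')]

    for c in range(n + 1):                  # candidate cardinalities
        for w in range(2 * n * n + 1):      # candidate total weights
            # D_0^{c,w}(emptyset) mod 2 equals the parity of the sum of all a_{S,c,w}.
            parity = 0
            for S in powerset(states):
                parity ^= parity_of_a(frozenset(S), c, w, omega)
            if parity == 1:
                return c                    # smallest c with an odd count
    return n                                # V(G) itself is always dominating
```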

Now we argue the correctness of our algorithm. Let CC denote the size of the smallest dominating set of GG. First, for any c<Cc<C and any w[2n2]0w\in[2n^{2}]_{0}, the value D0c,w()D^{c,w}_{0}(\emptyset) is zero. Second, the Isolation Lemma (Theorem˜3.12) implies that with probability at least 1/21/2, the weight function ω\omega isolates the family of dominating sets of GG of size CC, i.e., there exists a weight WW such that there is exactly one dominating set of size CC and weight WW, and therefore D0C,W()=1D^{C,W}_{0}(\emptyset)=1. In this case, the algorithm outputs CC. So with probability at least 1/21/2 our algorithm outputs the minimum size of a dominating set of GG.

The iteration over all cc and ww increases the space complexity by an additive 𝒪(logn)\mathcal{O}(\log n) and it increases the running time by a factor of 𝒪(n2)\mathcal{O}(n^{2}). Recall that in the beginning, to sample the weight function we have used space 𝒪(nlogn)\mathcal{O}(n\log n). So all in all, the running time of the algorithm is 2𝒪(dk)n𝒪(1)2^{\mathcal{O}(dk)}\cdot n^{\mathcal{O}(1)} and the space complexity is 𝒪(dk2logn+nlogn)\mathcal{O}(dk^{2}\log n+n\log n). This concludes the proof of Theorem˜1.3.

Note that in our algorithm, the only reason for super-logarithmic dependency on nn in the space complexity is the need to sample and store a weight function in order to isolate a minimum-weight dominating set. We conjecture that this can be avoided and ask:

Question 3.15.

Is there an algorithm for Dominating Set of nn-vertex graphs provided with a (d,k)(d,k)-tree-model that runs in time 2𝒪(kd)n𝒪(1)2^{\mathcal{O}(kd)}\cdot n^{\mathcal{O}(1)} and uses (d+k)𝒪(1)logn(d+k)^{\mathcal{O}(1)}\log n space?

4 The Lower Bound

In this section, we prove Theorem˜1.4. This lower bound is based on a reasonable conjecture on the complexity of the problem Longest Common Subsequence (LCS).

An instance of LCS is a tuple (N,t,Σ,s1,,sr)(N,t,\Sigma,s_{1},\dots,s_{r}) where NN and tt are positive integers, Σ\Sigma is an alphabet and s1,,srs_{1},\dots,s_{r} are rr strings over Σ\Sigma of length NN. The goal is to decide whether there exists a string sΣts\in\Sigma^{t} of length tt appearing as a subsequence in each sis_{i}. There is a standard dynamic programming algorithm for LCS that has time and space complexity 𝒪(Nr)\mathcal{O}(N^{r}).

From the point of view of parameterized complexity, LCS is 𝖶[p]\mathsf{W}[p]-hard for every level pp when parameterized by rr [7]. It remains 𝖶[1]\mathsf{W}[1]-hard when the size of the alphabet is constant [44], and it is 𝖶[1]\mathsf{W}[1]-complete when parameterized by r+tr+t [32]. Abboud et al. [1] proved that the existence of an algorithm with running time 𝒪(Nrε)\mathcal{O}(N^{r-\varepsilon}) for any ε>0\varepsilon>0 would contradict the Strong Exponential-Time Hypothesis. As observed by Elberfeld et al. [21], LCS parameterized by rr is complete for the class 𝖷𝖭𝖫𝖯\mathsf{XNLP}: parameterized problems solvable by a nondeterministic Turing machine using f(k)n𝒪(1)f(k)\cdot n^{\mathcal{O}(1)} time and f(k)lognf(k)\log n space, for a computable function ff. See also [7, 8, 9, 10, 11] for further research on 𝖷𝖭𝖫𝖯\mathsf{XNLP} and related complexity classes. The only known progress on the space complexity is due to Barsky et al., who gave an algorithm running in 𝒪(Nr1)\mathcal{O}(N^{r-1}) space [4]. This motivated Pilipczuk and Wrochna to formulate the following conjecture [45].
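For reference, the standard dynamic program with a table of size 𝒪(N^r) mentioned above can be sketched as follows; the memoized recursion over position vectors is one possible implementation choice (for large inputs one would fill the table iteratively instead).

```python
from functools import lru_cache

def lcs_length(strings):
    """Length of a longest common subsequence of the given strings,
    using the classical table indexed by one position per string."""
    r = len(strings)

    @lru_cache(maxsize=None)
    def best(pos):
        if any(pos[i] == len(strings[i]) for i in range(r)):
            return 0
        heads = [strings[i][pos[i]] for i in range(r)]
        take = 0
        if all(ch == heads[0] for ch in heads):            # all current letters agree
            take = 1 + best(tuple(p + 1 for p in pos))
        skip = max(best(tuple(p + (i == j) for i, p in enumerate(pos)))
                   for j in range(r))                      # advance one string
        return max(take, skip)

    return best(tuple([0] * r))

print(lcs_length(["ABCBDAB", "BDCABA"]))   # classical two-string example, prints 4
```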

Conjecture 4.1 ([45]).

There is no algorithm that solves the LCS problem in time Mf(r)M^{f(r)} and using f(r)M𝒪(1)f(r)M^{\mathcal{O}(1)} space for any computable function ff, where MM is the total bitsize of the instance and rr is the number of input strings.

Note that in particular, the existence of an algorithm with time and space complexity as in Conjecture 4.1 implies the existence of such algorithms for all problems in the class 𝖷𝖭𝖫𝖯\mathsf{XNLP}.

Our lower bound is based on the following stronger variant of Conjecture 4.1, in which we additionally assume that the sought common subsequence is short.

Conjecture 4.2.

For any unbounded and computable function δ\delta, Conjecture 4.1 holds even when tδ(N)t\leqslant\delta(N).

Thus, we may rephrase Theorem˜1.4 as follows.

Theorem 4.3.

Unless Conjecture 4.2 fails, for any unbounded and computable function δ\delta, there is no algorithm that solves the Independent Set problem in graphs supplied with (d,k)(d,k)-tree-models satisfying dδ(k)d\leqslant\delta(k) that would run in time 2𝒪(k)n𝒪(1)2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)} and use n𝒪(1)n^{\mathcal{O}(1)} space.

The remainder of this section is devoted to the proof of Theorem˜4.3. Not surprisingly, we provide a reduction from LCS to Independent Set on graphs provided with suitable tree-models.

Let (N,t,Σ,s1,,sr)(N,t,\Sigma,s_{1},\dots,s_{r}) be an instance of LCS. For the sake of clarity, we assume without loss of generality that NN is a power of 22. Indeed, we can always obtain an equivalent instance (2logN,t+t,Σ,s1,,sr)(2^{\lceil\log N\rceil},t+t^{\prime},\Sigma^{\prime},s_{1}^{\prime},\dots,s_{r}^{\prime}) where t=2logNNt^{\prime}=2^{\lceil\log N\rceil}-N, Σ\Sigma^{\prime} is obtained from Σ\Sigma by adding a new letter \spadesuit and each sis_{i}^{\prime} is obtained by appending tt^{\prime} copies of \spadesuit to the end of sis_{i}.
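This padding step is elementary; one possible few-line implementation, with '#' playing the role of the fresh letter ♠, is the following.

```python
def pad_to_power_of_two(strings, t, pad_symbol="#"):
    """Append a fresh symbol to all strings so that their length becomes a
    power of two; the target subsequence length grows by the same amount."""
    N = len(strings[0])
    target = 1 << (N - 1).bit_length()        # smallest power of two >= N
    extra = target - N
    return [s + pad_symbol * extra for s in strings], t + extra
```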

For every I[N]I\in[N], we denote the II-th letter of sps_{p} by sp[I]s_{p}[I]. In the following, we present our reduction from (N,t,Σ,s1,,sr)(N,t,\Sigma,s_{1},\dots,s_{r}) to an equivalent instance of Independent Set consisting of a graph GG with (r+t+N)𝒪(1)(r+t+N)^{\mathcal{O}(1)} vertices and a (d,k)(d,k)-tree-model where d=𝒪(logt)d=\mathcal{O}(\log t) and k=𝒪(rlogN)k=\mathcal{O}(r\log N). This implies Theorem˜4.3 since for every unbounded and computable function δ\delta there exists an unbounded and computable function δ\delta^{\prime} such that if tδ(N)t\leqslant\delta^{\prime}(N), then dδ(k)d\leqslant\delta(k) (we explain this in more detail at the end of this section).

In the intuitive explanations accompanying the construction, we denote by ss^{\star} a potential common subsequence of s1,,srs_{1},\dots,s_{r} of length tt. The main idea is to use matchings to represent the binary encoding of the positions of the letters of ss^{\star} in each string.

For every string sps_{p} and q[t]q\in[t], we define the selection gadget 𝖲pq\mathsf{S}_{p}^{q} which contains, for every i[logN]i\in[\log N], an edge called the ii-edge of 𝖲pq\mathsf{S}_{p}^{q}. One endpoint of this edge is called the 0-endpoint and the other is called the 1-endpoint; i.e., a selection gadget induces a matching on logN\log N edges. This results in the following natural bijection between [N][N] and the maximal independent sets of 𝖲pq\mathsf{S}_{p}^{q}. For every I[N]I\in[N], we denote by 𝖲pq|I\mathsf{S}_{p}^{q}|I the independent set that contains, for each i[logN]i\in[\log N], the xx-endpoint of the ii-edge of 𝖲pq\mathsf{S}_{p}^{q} where xx is the value of the ii-th bit of the binary representation of I1I-1 (we consider the first bit to be the most significant one and the logN\log N-th one the least significant). Then the vertices selected in 𝖲pq\mathsf{S}_{p}^{q} encode the position of the qq-th letter of ss^{\star} in sps_{p}.
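Concretely, the independent set 𝖲_p^q|I is determined by the binary expansion of I-1; the following small helper, with ad-hoc tuple names for the gadget vertices, makes the encoding explicit.

```python
def selection_independent_set(p, q, I, logN):
    """Vertices of S_p^q | I: for the i-th edge of the gadget we pick its
    x-endpoint, where x is the i-th bit of I - 1 (most significant first)."""
    return [("S", p, q, i, ((I - 1) >> (logN - i)) & 1) for i in range(1, logN + 1)]

# With N = 8 (logN = 3), position I = 5 is encoded by the bits 1, 0, 0:
print(selection_independent_set(1, 1, 5, 3))
```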

We need to guarantee that the selected positions in the gadgets 𝖲p1,,𝖲pt\mathsf{S}_{p}^{1},\dots,\mathsf{S}_{p}^{t} are coherent, namely, for every q[t]q\in[t], the position selected in 𝖲pq\mathsf{S}_{p}^{q} is strictly smaller than the one selected in 𝖲pq+1\mathsf{S}_{p}^{q+1}. For this, we construct an inferiority gadget denoted by 𝖨𝗇𝖿(p,q)\mathsf{Inf}(p,q) for every string sps_{p} and every q[t1]q\in[t-1]. The idea behind it is to ensure that the only possibility for an independent set to contain at least 3logN3\log N vertices from 𝖲pq,𝖲pq+1\mathsf{S}_{p}^{q},\mathsf{S}_{p}^{q+1}, and their inferiority gadget, is the following: there exist I<J[N]I<J\in[N] such that the independent set contains 𝖲pq|I𝖲pq+1|J\mathsf{S}_{p}^{q}|I\cup\mathsf{S}_{p}^{q+1}|J. The maximum solution size in the constructed instance of Independent Set—which is the sum of the independence number of each gadget—will guarantee that only such selections are possible.

Figure 1: Example of an inferiority gadget with logN=3\log N=3. The squares represent the sets of vertices Vi01V^{01}_{i}. For legibility, among the edges going out of the inferiority gadget, we only represent those incident to v10,v11v_{1}^{0},v_{1}^{1} and V101V^{01}_{1}. The independent set consisting of the white filled vertices has 3logN3\log N vertices, the position selected for 𝖲pq\mathsf{S}_{p}^{q} is 5 and for 𝖲pq+1\mathsf{S}_{p}^{q+1} it is 6.

Figure˜1 provides an example of the following construction. The vertex set of 𝖨𝗇𝖿(p,q)\mathsf{Inf}(p,q) consists of the following vertices: for each i[logN1]i\in[\log N-1], there are two vertices vi0,p,qv_{i}^{0,p,q} and vi1,p,qv_{i}^{1,p,q}. Moreover, for each i[logN]i\in[\log N], there is a set Vi,p,q01V^{01}_{i,p,q} of logNi+1\log N-i+1 vertices (we drop p,qp,q from the notation when they are clear from the context). We now describe the edges incident to the inferiority gadget:

  • For every i[logN1]i\in[\log N-1], vi0v_{i}^{0} and vi1v_{i}^{1} are adjacent and for each x{0,1}x\in\{0,1\}, vixv_{i}^{x} is adjacent to the (1x)(1-x)-endpoints of the ii-edges from 𝖲pq\mathsf{S}_{p}^{q} and 𝖲pq+1\mathsf{S}_{p}^{q+1}.

  • For every i[logN]i\in[\log N], all the vertices in Vi01V^{01}_{i} are adjacent to (1) the 11-endpoint of the ii-edge from 𝖲pq\mathsf{S}_{p}^{q}, (2) the 0-endpoint from the ii-edge of 𝖲pq+1\mathsf{S}_{p}^{q+1}, (3) all the vertices vj0,vj1v_{j}^{0},v_{j}^{1} for every jij\geqslant i and (4) all the vertices in Vj01V_{j}^{01} for every j>ij>i.

On a high level, an inferiority gadget reflects that for values I<J[N]I<J\in[N], if we go from high-order to low-order bits, then the binary encodings of II and JJ first contain the same bits and then there is an index where II has a zero-bit and JJ has a one-bit. If such a difference first occurs at some position [logN]\ell\in[\log N], then the corresponding independent set first takes 1\ell-1 vertices of the form v0v_{\ell^{\prime}}^{0} or v1v_{\ell^{\prime}}^{1} (for <\ell^{\prime}<\ell) and then takes logN(1)\log N-(\ell-1) vertices from V01V^{01}_{\ell} – this results in logN\log N vertices taken in the inferiority gadget. This discussion yields the following observation.

Observation (inferiority gadgets).

Let p[r]p\in[r] and q[t1]q\in[t-1]. The independence number of 𝖨𝗇𝖿(p,q)\mathsf{Inf}(p,q) is logN\log N and for every I,J[N]I,J\in[N], we have I<JI<J iff there exists a set of logN\log N vertices SS from 𝖨𝗇𝖿(p,q)\mathsf{Inf}(p,q) such that the union of SS, 𝖲pq|I\mathsf{S}_{p}^{q}|I and 𝖲pq+1|J\mathsf{S}_{p}^{q+1}|J induces an independent set.
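To make the counting concrete, the following sketch lists, for a given pair I<J, the logN inferiority-gadget vertices described above; the tuple names for the vertices v_i^x and for the blocks V_i^{01} are again ad hoc, and the sketch only illustrates the count, not the adjacencies.

```python
def inferiority_independent_set(I, J, logN):
    """For I < J, the log N gadget vertices compatible with S_p^q|I and
    S_p^{q+1}|J: one of v_i^0 / v_i^1 for every bit before the first
    difference, then the entire block V^01_l at the first differing bit l."""
    assert 1 <= I < J <= 2 ** logN
    bits_I = [((I - 1) >> (logN - i)) & 1 for i in range(1, logN + 1)]
    bits_J = [((J - 1) >> (logN - i)) & 1 for i in range(1, logN + 1)]
    l = 1 + next(i for i in range(logN) if bits_I[i] != bits_J[i])
    assert bits_I[l - 1] == 0 and bits_J[l - 1] == 1       # holds because I < J
    chosen = [("v", i, bits_I[i - 1]) for i in range(1, l)]        # l - 1 vertices
    chosen += [("V01", l, j) for j in range(1, logN - l + 2)]      # logN - l + 1 vertices
    assert len(chosen) == logN
    return chosen

# Example matching Figure 1: positions 5 < 6 with logN = 3.
print(inferiority_independent_set(5, 6, 3))
```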

Next, we need to ensure that the tt positions chosen in s1,,srs_{1},\dots,s_{r} indeed correspond to a common subsequence, i.e., for every q[t]q\in[t], the qq-th chosen letter must be the same in every s1,,srs_{1},\dots,s_{r}. For p[r1]p\in[r-1], let p\mathcal{M}_{p} denote the set of all ordered pairs (I,J)[N]2(I,J)\in[N]^{2} such that the II-th letter of sps_{p} and the JJ-th of sp+1s_{p+1} are identical. For each p[r1]p\in[r-1] and q[t]q\in[t], we create the matching gadget 𝖬𝖺𝗍𝖼𝗁(p,q)\mathsf{Match}(p,q) as follows:

  • For every pair (I,J)p(I,J)\in\mathcal{M}_{p} and for each p{p,p+1}p^{\star}\in\{p,p+1\}, we create a copy 𝖬pp,q,I,J\mathsf{M}_{p^{\star}}^{p,q,I,J} of 𝖲pq\mathsf{S}_{p^{\star}}^{q} and for every [logN]\ell\in[\log N] and x{0,1}x\in\{0,1\}, we add an edge between the xx-endpoint of the \ell-edge of 𝖲pq\mathsf{S}_{p^{\star}}^{q} and the (1x)(1-x)-endpoint of the \ell-edge of 𝖬pp,q,I,J\mathsf{M}_{p^{\star}}^{p,q,I,J}.

  • For every pair (I,J)p(I,J)\in\mathcal{M}_{p}, we add a new vertex vp,I,Jqv^{q}_{p,I,J} adjacent to (1) all the vertices from 𝖬pp,q,I,J\mathsf{M}_{p}^{p,q,I,J} that are not in 𝖬pp,q,I,J|I\mathsf{M}_{p}^{p,q,I,J}|I and (2) all the vertices from 𝖬p+1p,q,I,J\mathsf{M}_{p+1}^{p,q,I,J} that are not in 𝖬p+1p,q,I,J|J\mathsf{M}_{p+1}^{p,q,I,J}|J.

Finally, we turn {vp,I,Jq:(I,J)p}\{v^{q}_{p,I,J}\,:\,(I,J)\in\mathcal{M}_{p}\} into a clique. Observe that, for each p{p,p+1}p^{\star}\in\{p,p+1\}, an independent set SS contains (|p|+1)logN(|\mathcal{M}_{p}|+1)\log N vertices from 𝖲pq\mathsf{S}_{p^{\star}}^{q} and its copies 𝖬pp,q,I,J\mathsf{M}_{p^{\star}}^{p,q,I,J} if and only if there exists a value I[N]I\in[N] such that SS contains 𝖲pq|I\mathsf{S}_{p^{\star}}^{q}|I and 𝖬pp,q,I,J|I\mathsf{M}_{p^{\star}}^{p,q,I,J}|I for each copy. This leads to the following observation.

Observation (matching gadgets).

Let p[r1]p\in[r-1] and q[t]q\in[t]. The independence number of 𝖬𝖺𝗍𝖼𝗁(p,q)\mathsf{Match}(p,q) is 1+2|p|logN1+2\cdot|\mathcal{M}_{p}|\cdot\log N and for every I,J[N]I,J\in[N], we have (I,J)p(I,J)\in\mathcal{M}_{p} iff there exists an independent set SS of 𝖬𝖺𝗍𝖼𝗁(p,q)\mathsf{Match}(p,q) with 1+2|p|logN1+2|\mathcal{M}_{p}|\cdot\log N vertices such that the union of SS, 𝖲pq|I\mathsf{S}_{p}^{q}|I and 𝖲p+1q|J\mathsf{S}_{p+1}^{q}|J is an independent set.

This concludes the construction of the graph GG. See Figure˜2 below for an overview.

Figure 2: Overview of the graph GG with logN=3\log N=3, r=2r=2 and t=4t=4. There are some edges between two gadgets if and only if there are some edges between their vertices in GG.

We prove the correctness of the reduction in the following lemma.

Lemma 4.4.

There exists an integer 𝗀𝗈𝖺𝗅\mathsf{goal} such that GG admits an independent set of size at least 𝗀𝗈𝖺𝗅\mathsf{goal} iff the strings s1,,srs_{1},\dots,s_{r} admit a common subsequence of length tt.

Proof 4.5.

Let 𝗀𝗈𝖺𝗅=(rt+r(t1))logN+p[r1]t(1+2|p|logN)\mathsf{goal}=(rt+r(t-1))\log N+\sum_{p\in[r-1]}t(1+2\cdot|\mathcal{M}_{p}|\cdot\log N).

(\Rightarrow)

Assume that s1,,srs_{1},\dots,s_{r} admit a common subsequence ss^{\star} of length tt. Then, for every string sps_{p}, there exist Ip1,,Ipt[N]I_{p}^{1},\dots,I_{p}^{t}\in[N] such that Ip1<<IptI_{p}^{1}<\dots<I_{p}^{t} and sp[Ipq]=s[q]s_{p}[I_{p}^{q}]=s^{\star}[q] for every q[t]q\in[t]. We construct an independent set SS as follows. For every selection gadget 𝖲pq\mathsf{S}_{p}^{q}, we add 𝖲pq|Ipq\mathsf{S}_{p}^{q}|I_{p}^{q} to SS. Note that, at this point, SS is an independent set because there is no edge between the selection gadgets 𝖲pq\mathsf{S}_{p}^{q} in GG. For every inferiority gadget 𝖨𝗇𝖿(p,q)\mathsf{Inf}(p,q), since Ipq<Ipq+1I_{p}^{q}<I_{p}^{q+1}, we can use the observation on inferiority gadgets and add a set of logN\log N vertices from 𝖨𝗇𝖿(p,q)\mathsf{Inf}(p,q) to SS. Note that SS remains an independent set because the added vertices are not adjacent to 𝖲pq|Ipq\mathsf{S}_{p}^{q}|I_{p}^{q} and 𝖲pq+1|Ipq+1\mathsf{S}_{p}^{q+1}|I_{p}^{q+1} by that observation, and the only edges going out of 𝖨𝗇𝖿(p,q)\mathsf{Inf}(p,q) are incident to 𝖲pq\mathsf{S}_{p}^{q} and 𝖲pq+1\mathsf{S}_{p}^{q+1}. At this point, we have (rt+r(t1))logN(rt+r(t-1))\log N vertices in SS.

Observe that for every p[r1]p\in[r-1] and q[t]q\in[t], we have sp[Ipq]=s[q]=sp+1[Ip+1q]s_{p}[I_{p}^{q}]=s^{\star}[q]=s_{p+1}[I_{p+1}^{q}]. Thus, we have (Ipq,Ip+1q)p(I_{p}^{q},I_{p+1}^{q})\in\mathcal{M}_{p} and by the observation on matching gadgets, there exists an independent set Sp,qS_{p,q} of 𝖬𝖺𝗍𝖼𝗁(p,q)\mathsf{Match}(p,q) with 1+2|p|logN1+2|\mathcal{M}_{p}|\cdot\log N vertices such that the union of Sp,qS_{p,q}, 𝖲pq|Ipq\mathsf{S}_{p}^{q}|I_{p}^{q} and 𝖲p+1q|Ip+1q\mathsf{S}_{p+1}^{q}|I_{p+1}^{q} is an independent set. We add Sp,qS_{p,q} to SS and note that SS remains an independent set since the only edges going out of 𝖬𝖺𝗍𝖼𝗁(p,q)\mathsf{Match}(p,q) are incident to 𝖲pq\mathsf{S}_{p}^{q} and 𝖲p+1q\mathsf{S}_{p+1}^{q}. As we do this for every p[r1]p\in[r-1] and q[t]q\in[t], the union of the Sp,qS_{p,q}’s contains p[r1]t(1+2|p|logN)\sum_{p\in[r-1]}t(1+2|\mathcal{M}_{p}|\cdot\log N) vertices. We conclude that GG admits an independent set of size 𝗀𝗈𝖺𝗅\mathsf{goal}.

(\Leftarrow)

Assume GG admits an independent set SS of size at least 𝗀𝗈𝖺𝗅\mathsf{goal}. The independence number of each selection gadget 𝖲pq\mathsf{S}_{p}^{q} is logN\log N and, by the observation on inferiority gadgets, this is also the case for each inferiority gadget 𝖨𝗇𝖿(p,q)\mathsf{Inf}(p,q). Hence, SS contains at most (rt+r(t1))logN(rt+r(t-1))\log N vertices from selection and inferiority gadgets. By the observation on matching gadgets, the independence number of each matching gadget 𝖬𝖺𝗍𝖼𝗁(p,q)\mathsf{Match}(p,q) is 1+2|p|logN1+2|\mathcal{M}_{p}|\cdot\log N, thus SS contains at most p[r1]t(1+2|p|logN)\sum_{p\in[r-1]}t(1+2|\mathcal{M}_{p}|\cdot\log N) vertices from the matching gadgets. From the definition of 𝗀𝗈𝖺𝗅\mathsf{goal}, we obtain that SS contains exactly logN\log N vertices from each selection and inferiority gadget, and it contains exactly 1+2|p|logN1+2|\mathcal{M}_{p}|\cdot\log N vertices from each matching gadget. We make the following deductions:

  • For each p[r]p\in[r], there exist Ip1,,Ipt[N]I_{p}^{1},\dots,I_{p}^{t}\in[N] such that SS contains 𝖲pq|Ipq\mathsf{S}_{p}^{q}|I_{p}^{q} for every q[t]q\in[t].

  • For each p[r]p\in[r] and q[t1]q\in[t-1], the independent set SS contains the vertices in 𝖲pq|Ipq\mathsf{S}_{p}^{q}|I_{p}^{q} and 𝖲pq+1|Ipq+1\mathsf{S}_{p}^{q+1}|I_{p}^{q+1} as well as logN\log N vertices from 𝖨𝗇𝖿(p,q)\mathsf{Inf}(p,q). The observation on inferiority gadgets implies that Ipq<Ipq+1I_{p}^{q}<I_{p}^{q+1}. Thus, sp[Ip1]sp[Ipt]s_{p}[I_{p}^{1}]\dots s_{p}[I_{p}^{t}] is a subsequence of sps_{p}.

  • For every p[r1]p\in[r-1] and q[t]q\in[t], the independent set SS contains 𝖲pq|Ipq\mathsf{S}_{p}^{q}|I_{p}^{q} and 𝖲p+1q|Ip+1q\mathsf{S}_{p+1}^{q}|I_{p+1}^{q} as well as 1+2|p|logN1+2|\mathcal{M}_{p}|\cdot\log N vertices from 𝖬𝖺𝗍𝖼𝗁(p,q)\mathsf{Match}(p,q). We deduce from the observation on matching gadgets that (Ipq,Ip+1q)p(I_{p}^{q},I_{p+1}^{q})\in\mathcal{M}_{p} and consequently, sp[Ipq]=sp+1[Ip+1q]s_{p}[I_{p}^{q}]=s_{p+1}[I_{p+1}^{q}]. Hence, for every q[t]q\in[t], we have s1[I1q]==sr[Irq]s_{1}[I_{1}^{q}]=\dots=s_{r}[I_{r}^{q}].

We conclude that s1[I11]s1[I1t]s_{1}[I_{1}^{1}]\dots s_{1}[I_{1}^{t}] is a common subsequence of s1,,srs_{1},\dots,s_{r}.
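For intuition, both the sets ℳ_p and the threshold 𝗀𝗈𝖺𝗅 from the proof above can be read off the input strings directly; a small bookkeeping sketch (helper names are ours):

```python
from math import log2

def matching_pairs(s, s_next):
    """M_p: ordered pairs (I, J) of 1-based positions carrying the same letter."""
    return [(I, J) for I, a in enumerate(s, 1) for J, b in enumerate(s_next, 1) if a == b]

def goal_value(strings, t):
    """The threshold of Lemma 4.4: (rt + r(t-1)) log N + sum_p t (1 + 2 |M_p| log N)."""
    r, N = len(strings), len(strings[0])
    logN = int(log2(N))
    sizes = [len(matching_pairs(strings[p], strings[p + 1])) for p in range(r - 1)]
    return (r * t + r * (t - 1)) * logN + sum(t * (1 + 2 * m * logN) for m in sizes)

print(goal_value(["abab", "baba", "abba"], t=2))
```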

The next step is to construct a tree-model of GG.

Lemma 4.6.

We can compute in polynomial time a (d,k)(d,k)-tree-model of GG where d=2logt+4d=2\log t+4 and k=14rlogN3k=14r\log N-3.

Proof 4.7.

Let L𝖲,L𝖬,Lmin,Lmax,L𝖨𝗇𝖿L_{\mathsf{S}},L_{\mathsf{M}},L_{\min},L_{\max},L_{\mathsf{Inf}} and {0}\{\ell_{0}\} be disjoint subsets of [k][k] such that each set among L𝖲,L𝖬,Lmin,Lmax,L_{\mathsf{S}},L_{\mathsf{M}},L_{\min},L_{\max}, has size 2rlogN2r\log N and L𝖨𝗇𝖿L_{\mathsf{Inf}} has size 6rlogN46r\log N-4. First, we prove that the union of the gadgets associated with a position q[t]q\in[t] admits a simple tree-model. For every q[t]q\in[t], we denote by GqG^{q} the union of the selection gadgets 𝖲pq\mathsf{S}_{p}^{q} with p[r]p\in[r] and the matching gadgets 𝖬𝖺𝗍𝖼𝗁(p,q)\mathsf{Match}(p,q) with p[r1]p\in[r-1].

Claim 1.

For every q[t]q\in[t], we can construct in polynomial time a (3,k)(3,k)-tree-model (Tq,q,q,λq)(T^{q},\mathcal{M}^{q},\mathcal{R}^{q},\lambda^{q}) for GqG^{q}.

Proof of Claim 1.

Let q[t]q\in[t]. We create the root aqa^{q} of TqT^{q} and we attach all the vertices in the selection gadgets 𝖲pq\mathsf{S}_{p}^{q} with p[r]p\in[r] as leaves adjacent to aqa^{q}. Then, for every p[r1]p\in[r-1], we create a node apqa^{q}_{p} adjacent to aqa^{q} and for every (I,J)p(I,J)\in\mathcal{M}_{p}, we create a node ap,I,Jqa^{q}_{p,I,J} adjacent to apqa^{q}_{p}. For each (I,J)p(I,J)\in\mathcal{M}_{p}, we make ap,I,Jqa^{q}_{p,I,J} adjacent to the vertex vp,I,Jqv^{q}_{p,I,J} and all the vertices in 𝖬pp,q,I,J\mathsf{M}^{p,q,I,J}_{p}, 𝖬p+1p,q,I,J\mathsf{M}^{p,q,I,J}_{p+1}. Note that all the vertices in 𝖬𝖺𝗍𝖼𝗁(p,q)\mathsf{Match}(p,q) are the leaves of the subtree rooted at apqa^{q}_{p}, and the leaves of TqT^{q} are exactly the vertices in GqG^{q}. See Figure˜3 for an illustration of TqT^{q}.

We define λq\lambda^{q} as follows:

  • λq\lambda^{q} maps each vertex in 𝖲1q,,𝖲rq\mathsf{S}_{1}^{q},\dots,\mathsf{S}_{r}^{q} to a unique label in L𝖲L_{\mathsf{S}}.

  • For every (p,i,x)[r]×[logN]×{0,1}(p,i,x)\in[r]\times[\log N]\times\{0,1\}, λq\lambda^{q} maps all the xx-endpoints of the ii-edges from the different copies 𝖬pp,q,I,J\mathsf{M}_{p}^{p^{\prime},q,I,J} of 𝖲pq\mathsf{S}_{p}^{q} to a unique label in L𝖬L_{\mathsf{M}}.

  • We have λq(vp,I,Jq)=0\lambda^{q}(v_{p,I,J}^{q})=\ell_{0} for every p[r1]p\in[r-1] and (I,J)p(I,J)\in\mathcal{M}_{p}.

We define q={ρab:abE(Tq)}\mathcal{R}^{q}=\{\rho_{ab}\,:\,ab\in E(T^{q})\} such that ρab\rho_{ab} is the identity function for every abE(Tq)ab\in E(T^{q}). It follows that λaq=λq\lambda^{q}_{a}=\lambda^{q} for every node aa of TqT^{q}. We finish the construction of the tree-model of GqG^{q} by proving that there exists a family of matrices q\mathcal{M}^{q} such that (Tq,q,q,λq)(T^{q},\mathcal{M}^{q},\mathcal{R}^{q},\lambda^{q}) is a (3,k)(3,k)-tree-model of GqG^{q}. For doing so, we simply prove that the property φ(a,)\varphi(a,\ell) holds for every label [k]\ell\in[k] and internal node aa of TqT^{q} with children b1,,bcb_{1},\dots,b_{c}, where φ(a,)\varphi(a,\ell) is true if:

  • For every uVbiu\in V_{b_{i}} and vVbjv\in V_{b_{j}} with λaq(u)=λaq(v)=\lambda^{q}_{a}(u)=\lambda^{q}_{a}(v)=\ell, we have N(u)(Va(VbiVbj))=N(v)(Va(VbiVbj))N(u)\cap(V_{a}\setminus(V_{b_{i}}\cup V_{b_{j}}))=N(v)\cap(V_{a}\setminus(V_{b_{i}}\cup V_{b_{j}})).

Observe that φ(a,)\varphi(a,\ell) trivially holds when there is at most one vertex labeled \ell in VaV_{a}. Consequently, φ(ap,I,Jq,)\varphi(a^{q}_{p,I,J},\ell) is true for every node ap,I,Jqa^{q}_{p,I,J} and [k]\ell\in[k]. Moreover, φ(a,)\varphi(a,\ell) is true for every internal node aa and every L𝖲LminLmaxL𝖨𝗇𝖿\ell\in L_{\mathsf{S}}\cup L_{\min}\cup L_{\max}\cup L_{\mathsf{Inf}}. Recall that λaq=λq\lambda^{q}_{a}=\lambda^{q} for every node aa of TqT^{q}. For every pair (u,v)(u,v) of distinct vertices in V(Gq)V(G^{q}), if λq(u)=λq(v)\lambda^{q}(u)=\lambda^{q}(v) then either:

  • There exists (p,i,x)[r]×[logN]×{0,1}(p,i,x)\in[r]\times[\log N]\times\{0,1\} such that uu and vv are the xx-endpoints of the ii-edges in respectively 𝖬pp,q,I,J\mathsf{M}_{p}^{p^{\star},q,I,J} and 𝖬pp,q,I,J\mathsf{M}_{p}^{p^{\prime},q,I^{\prime},J^{\prime}} for some p,p{p1,p}p^{\star},p^{\prime}\in\{p-1,p\}, (I,J)p(I,J)\in\mathcal{M}_{p^{\star}} and (I,J)p(I^{\prime},J^{\prime})\in\mathcal{M}_{p^{\prime}}. Observe that the parent of uu is ap,I,Jqa^{q}_{p^{\star},I,J} and the neighbors of uu in VapqV_{a^{q}_{p^{\star}}} are all children of ap,I,Jqa^{q}_{p^{\star},I,J}. Indeed, the only neighbors of uu in VapqV_{a^{q}_{p^{\star}}} are the (1x)(1-x)-endpoint of the ii-edge of 𝖬pp,q,I,J\mathsf{M}_{p}^{p^{\star},q,I,J} and potentially vp,I,Jqv_{p^{\star},I,J}^{q}. Symmetrically, vv belongs to Vap,I,JqV_{a^{q}_{p^{\prime},I^{\prime},J^{\prime}}}, its parent is ap,I,Jqa^{q}_{p^{\prime},I^{\prime},J^{\prime}} and the only neighbors of vv in VapqV_{a^{q}_{p^{\prime}}} are children of ap,I,Jqa^{q}_{p^{\prime},I^{\prime},J^{\prime}}. Moreover, observe that uu and vv have both only one neighbor in VaqVapqV_{a^{q}}\setminus V_{a^{q}_{p}} which is the (1x)(1-x)-endpoint of the ii-edge of 𝖲pq\mathsf{S}^{q}_{p}. We deduce that φ(aq,)\varphi(a^{q},\ell) and φ(apq,)\varphi(a^{q}_{p},\ell) and φ(ap,I,Jq,)\varphi(a^{q}_{p,I,J},\ell) are true for every L𝖬\ell\in L_{\mathsf{M}}.

  • We have u=vp,I,Jqu=v^{q}_{p,I,J} and v=vp,I,Jqv=v^{q}_{p^{\prime},I^{\prime},J^{\prime}}. In that case, uu is a child of ap,I,Jqa^{q}_{p,I,J} and vv a child of ap,I,Jqa^{q}_{p^{\prime},I^{\prime},J^{\prime}}. The only neighbors of uu and vv that are not children of ap,I,Jqa^{q}_{p,I,J} nor ap,I,Jqa^{q}_{p^{\prime},I^{\prime},J^{\prime}} are all the other vertices of label 0\ell_{0}. Thus, φ(a,0)\varphi(a,\ell_{0}) holds for every internal node aa of TqT^{q}.

We conclude that φ(a,)\varphi(a,\ell) holds for every internal node aa of TqT^{q} and every [k]\ell\in[k]. Hence, there exists a family of matrices q\mathcal{M}^{q} such that (Tq,q,q,λq)(T^{q},\mathcal{R}^{q},\mathcal{M}^{q},\lambda^{q}) is a (3,k)(3,k)-tree model of GqG^{q} for every q[t]q\in[t].

Figure 3: Illustration of the tree TT and its subtree TqT^{q} for the tree-model constructed in Lemma˜4.6. An edge between a white filled rectangle labeled 𝖷\mathsf{X} and a node aa of the tree means that all the vertices in 𝖷\mathsf{X} are leaves adjacent to aa.

For every q[t1]q\in[t-1], we denote by 𝖨𝗇𝖿(q)\mathsf{Inf}(q) the union of 𝖨𝗇𝖿(1,q),,𝖨𝗇𝖿(r,q)\mathsf{Inf}(1,q),\dots,\mathsf{Inf}(r,q). Moreover, for every interval [x,y][t][x,y]\subseteq[t], we denote by Gx,yG^{x,y}, the union of the graphs GqG^{q} over q[x,y]q\in[x,y] and the inferiority gadgets in 𝖨𝗇𝖿(q)\mathsf{Inf}(q) over q[x,y]q\in[x,y] such that q+1[x,y]q+1\in[x,y]. We prove by induction that for every interval [x,y][t][x,y]\subseteq[t], there exists a (2log(yx+1)+4,k)(2\log(y-x+1)+4,k)-tree-model (T,,,λ)(T,\mathcal{R},\mathcal{M},\lambda) of Gx,yG^{x,y} such that given the root α\alpha of TT, the following properties are satisfied:

  1. (A)

    λα\lambda_{\alpha} maps each vertex from 𝖲1x,,𝖲rx\mathsf{S}^{x}_{1},\dots,\mathsf{S}^{x}_{r} to a unique label in LminL_{\min}.

  2. (B)

    λα\lambda_{\alpha} maps each vertex from 𝖲1y,,𝖲ry\mathsf{S}^{y}_{1},\dots,\mathsf{S}^{y}_{r} to a unique label in LmaxL_{\max}.

When x=yx=y, we only require that exactly one property among (A) and (B) is satisfied (we can choose which one as it is symmetric). The induction is on yxy-x. The base case is when x=yx=y, in which case Gx,y=GxG^{x,y}=G^{x} and we simply modify the (3,k)(3,k)-tree-model (Tx,λx,x,x)(T^{x},\lambda^{x},\mathcal{R}^{x},\mathcal{M}^{x}) for GxG^{x} as follows. We add to TxT^{x} a new root α\alpha adjacent to the former root axa^{x}. We add to x\mathcal{R}^{x} the function ραax\rho_{\alpha a^{x}} that bijectively maps L𝖲L_{\mathsf{S}} to LminL_{\min} (or LmaxL_{\max} if we want to satisfy (B) rather than (A)) and every label not in L𝖲L_{\mathsf{S}} to 0\ell_{0}. Finally, we add MαM_{\alpha} the zero k×kk\times k-matrix to x\mathcal{M}^{x}. After these modifications, it is easy to see that (Tx,λx,x,x)(T^{x},\lambda^{x},\mathcal{R}^{x},\mathcal{M}^{x}) is a (4,k)(4,k)-tree model of Gx,yG^{x,y} that satisfies (A) or (B).

Now assume that x<yx<y and that Gx,yG^{x^{\prime},y^{\prime}} admits the desired tree-model for every [x,y][x^{\prime},y^{\prime}] strictly included in [x,y][x,y]. Let q=x+(yx)/2q=x+\lfloor(y-x)/2\rfloor. By induction hypothesis, there exist:

  • A (2log(qx)+4,k)(2\log(q-x)+4,k)-tree model (T<q,<q,<q,λ<q)(T^{<q},\mathcal{R}^{<q},\mathcal{M}^{<q},\lambda^{<q}) for Gx,q1G^{x,q-1} with the desired properties (if x=q1x=q-1, we require (A) to be satisfied).

  • A (2log(yq)+4,k)(2\log(y-q)+4,k)-tree-model (T>q,>q,>q,λ>q)(T^{>q},\mathcal{R}^{>q},\mathcal{M}^{>q},\lambda^{>q}) for Gq+1,yG^{q+1,y} with the desired properties (if y=q+1y=q+1, we require (B) to be satisfied).

For the sake of legibility, we assume that xx is different from qq, which implies that Gx,q1G^{x,q-1} is not the empty graph (note that Gq+1,yG^{q+1,y} is not empty as x<yx<y and q=x+(yx)/2q=x+\lfloor(y-x)/2\rfloor). We lose some generality with this assumption, but we can easily deal with the case x=qx=q with some simple modifications to the following construction (i.e., removing some nodes and changing some renaming functions).

In the following, we construct a (4+2log(yx+1),k)(4+2\log(y-x+1),k)-tree-model (T,,,λ)(T,\mathcal{R},\mathcal{M},\lambda) of Gx,yG^{x,y} from the above tree-models of Gx,q1G^{x,q-1} and Gq+1,yG^{q+1,y}, as well as the (3,k)(3,k)-tree-model (Tq,λq,q,q)(T^{q},\lambda^{q},\mathcal{R}^{q},\mathcal{M}^{q}) of GqG^{q} given by Claim 1. To obtain TT, we create the root α\alpha of TT and we make it adjacent to aqa^{q}, the root of TqT^{q}, and to two new vertices: α<q\alpha^{<q} and α>q\alpha^{>q}. We make α<q\alpha^{<q} adjacent to the root of T<qT^{<q} and to all the vertices in 𝖨𝗇𝖿(q1)\mathsf{Inf}(q-1). Symmetrically, we make α>q\alpha^{>q} adjacent to the root of T>qT^{>q} and to all the vertices in 𝖨𝗇𝖿(q)\mathsf{Inf}(q). See Figure˜3 for an illustration of TT.

We define λ\lambda as follows:

λ(v)={λ<q(v) if vV(Gx,q1),λ>q(v) if vV(Gq+1,y),λq(v) if vV(Gq),λ(v) otherwise (when v belongs to 𝖨𝗇𝖿(q1) or 𝖨𝗇𝖿(q))\lambda(v)=\begin{cases}\lambda^{<q}(v)&\text{ if }v\in V(G^{x,q-1}),\\ \lambda^{>q}(v)&\text{ if }v\in V(G^{q+1,y}),\\ \lambda^{q}(v)&\text{ if }v\in V(G^{q}),\\ \lambda^{\prime}(v)&\text{ otherwise (when $v$ belongs to $\mathsf{Inf}(q-1)$ or $\mathsf{Inf}(q))$}\end{cases}

where λ\lambda^{\prime} maps the vertices in Gx,yG^{x,y} from 𝖨𝗇𝖿(q1)\mathsf{Inf}(q-1) and 𝖨𝗇𝖿(q)\mathsf{Inf}(q) to L𝖨𝗇𝖿L_{\mathsf{Inf}} such that for each label \ell of L𝖨𝗇𝖿L_{\mathsf{Inf}}, there exists q{q1,q}q^{\prime}\in\{q-1,q\} and p[r]p\in[r] such that \ell is associated with either: (1) vix,p,qv^{x,p,q^{\prime}}_{i} for some i[logN1]i\in[\log N-1] and x{0,1}x\in\{0,1\} or (2) all the vertices in Vi01,p,qV^{01,p,q^{\prime}}_{i} for some i[logN]i\in[\log N]. Since |L𝖨𝗇𝖿|=6rlog(N)4|L_{\mathsf{Inf}}|=6r\log(N)-4, we have enough labels for doing so. The family of renaming function \mathcal{R} is obtained from the union of <q>qq\mathcal{R}^{<q}\cup\mathcal{R}^{>q}\cup\mathcal{R}^{q} by adding for every edge ee in TT that is not in Tq,T<qT^{q},T^{<q} or T>qT^{>q} a function ρe\rho_{e} defined as follows:

  • ρe\rho_{e} is the identity function when ee is an edge incident to a leaf from 𝖨𝗇𝖿(q1)\mathsf{Inf}(q-1) or 𝖨𝗇𝖿(q)\mathsf{Inf}(q).

  • ρe\rho_{e} maps every label in LminLmaxL_{\min}\cup L_{\max} to itself and every other label to 0\ell_{0}, when ee is the edge between αq\alpha^{\circledast q} and the root of TqT^{\circledast q} for {<,>}\circledast\in\{<,>\}.

  • ρe\rho_{e} maps every label in L𝖲L_{\mathsf{S}} to itself and every other label to 0\ell_{0} when e=αaqe=\alpha a^{q}.

  • ρe\rho_{e} maps every label in LminL𝖨𝗇𝖿L_{\min}\cup L_{\mathsf{Inf}} to itself and every other label to 0\ell_{0}, when e=αα<qe=\alpha\alpha^{<q}.

  • ρe\rho_{e} maps every label in LmaxL𝖨𝗇𝖿L_{\max}\cup L_{\mathsf{Inf}} to itself and every other label to 0\ell_{0}, when e=αα>qe=\alpha\alpha^{>q}.

Observe that λα\lambda_{\alpha} satisfies Properties (A) and (B). As λα<q\lambda_{\alpha^{<q}} satisfies (A), this function maps every vertex from 𝖲1x,,𝖲rx\mathsf{S}^{x}_{1},\dots,\mathsf{S}^{x}_{r} to a unique label in LminL_{\min}. The above renaming functions guarantee that the only vertices mapped to a label in LminL_{\min} by λα\lambda_{\alpha} are from Vα<qV_{\alpha^{<q}}. We deduce that Property (A) holds and symmetrically, Property (B) holds too.

Now we prove that a family of matrices \mathcal{M} exists such that (T,,,λ)(T,\mathcal{R},\mathcal{M},\lambda) is a tree model of Gx,yG^{x,y}. As before, we prove that φ(a,)\varphi(a,\ell) holds for every internal node aa of TT and every label [k]\ell\in[k]. Since our construction is based on tree-models for Gq,Gx,q1G^{q},G^{x,q-1} and Gq+1,yG^{q+1,y}, we only need to prove that φ(a,)\varphi(a,\ell) holds for every a{α<q,α>q,α}a\in\{\alpha^{<q},\alpha^{>q},\alpha\} and [k]\ell\in[k].

We first deal with α<q\alpha^{<q}. Let us describe the labeling function λα<q\lambda_{\alpha^{<q}}. Remember that (T<q,<q,<q,λ<q)(T^{<q},\mathcal{R}^{<q},\mathcal{M}^{<q},\lambda^{<q}) satisfies Properties (A) and (B), or just (A) when x=q1x=q-1. Moreover, the renaming function associated with the edge between α<q\alpha^{<q} and T<qT^{<q} preserves the labels in LminLmaxL_{\min}\cup L_{\max} and maps the other labels to 0\ell_{0}. Thus, λα<q\lambda_{\alpha^{<q}} assigns every vertex in 𝖲1x,𝖲1q1,,𝖲rx,𝖲rq1\mathsf{S}^{x}_{1},\mathsf{S}^{q-1}_{1},\dots,\mathsf{S}^{x}_{r},\mathsf{S}^{q-1}_{r} to a unique label in LminLmaxL_{\min}\cup L_{\max}. We deduce that for every pair (u,v)(u,v) of distinct vertices in Vα<qV_{\alpha^{<q}}, if λα<q\lambda_{\alpha^{<q}} assigns uu and vv to the same label [k]\ell\in[k], then either:

  • u,vVi01,p,q1u,v\in V^{01,p,q-1}_{i} for some p[r]p\in[r] and i[logN]i\in[\log N]. In this case, uu and vv are false twins by construction of 𝖨𝗇𝖿(p,q1)\mathsf{Inf}(p,q-1)—i.e. N(u)=N(v)N(u)=N(v)—and we deduce that φ(α<q,)\varphi(\alpha^{<q},\ell) holds.

  • =0\ell=\ell_{0} and u,vu,v are in V(Gx,q1)V(G^{x,q-1}) and not from 𝖲1x,𝖲1q1,,𝖲rx,𝖲rq1\mathsf{S}^{x}_{1},\mathsf{S}^{q-1}_{1},\dots,\mathsf{S}^{x}_{r},\mathsf{S}^{q-1}_{r}. Then, all the neighbors of uu and vv are in Gx,q1G^{x,q-1} and thus N(u)V(Gx,q1)=N(v)V(Gx,q1)N(u)\setminus V(G^{x,q-1})=N(v)\setminus V(G^{x,q-1}). We deduce that φ(α<q,)\varphi(\alpha^{<q},\ell) holds in this case too.

We conclude that φ(α<q,)\varphi(\alpha^{<q},\ell) holds for every [k]\ell\in[k] and with symmetric arguments, we can prove that φ(α>q,)\varphi(\alpha^{>q},\ell) holds also for every [k]\ell\in[k].

For α\alpha, notice that for every a{aq,α<q,α>q}a\in\{a^{q},\alpha^{<q},\alpha^{>q}\}, the vertices in VaV_{a} labeled 0\ell_{0} by λα\lambda_{\alpha} have neighbors only in VaV_{a}, hence φ(a,0)\varphi(a,\ell_{0}) holds. Furthermore, every label \ell in LminLmaxL𝖲L_{\min}\cup L_{\max}\cup L_{\mathsf{S}} is mapped by λα\lambda_{\alpha} to a unique vertex in VαV_{\alpha}, so φ(α,)\varphi(\alpha,\ell) holds. Finally, each label in L𝖨𝗇𝖿L_{\mathsf{Inf}} is mapped by λα\lambda_{\alpha} to a unique vertex or to all the vertices in Vi01,p,qV^{01,p,q^{\prime}}_{i} for some p[r],q{q1,q}p\in[r],q^{\prime}\in\{q-1,q\} and i[logN]i\in[\log N]. Since the vertices in Vi01,p,qV^{01,p,q^{\prime}}_{i} are false twins, we deduce that φ(α,)\varphi(\alpha,\ell) holds for every L𝖨𝗇𝖿\ell\in L_{\mathsf{Inf}}. We conclude that φ(α,)\varphi(\alpha,\ell) holds for every [k]\ell\in[k] and thus there exists a family \mathcal{M} of matrices such that (T,,,λ)(T,\mathcal{M},\mathcal{R},\lambda) is a tree-model of Gx,yG^{x,y}.

It remains to prove that the depth of TT is at most d=2log(yx+1)+4d=2\log(y-x+1)+4. By definition of qq, both qxq-x and yqy-q are smaller than (yx+1)/2(y-x+1)/2. Thus, log(qx)\log(q-x) and log(yq)\log(y-q) are smaller than log(yx+1)1\log(y-x+1)-1. Now observe that the depth of TT is the maximum between (i) the depth of TqT^{q} plus 1 which is 4, (ii) the depth of T<qT^{<q} plus 2, and (iii) the depth of T>qT^{>q} plus 2. The depth of T<qT^{<q} is at most 2log(qx)+42\log(q-x)+4. Since log(qx)log(yx+1)1\log(q-x)\leqslant\log(y-x+1)-1, the depth of T<qT^{<q} plus 2 is at most 2log(yx+1)+42\log(y-x+1)+4. Symmetrically, the depth of T>qT^{>q} plus 2 is also upper bounded by 2log(yx+1)+42\log(y-x+1)+4. It follows that the depth of TT is at most 2log(yx+1)+42\log(y-x+1)+4.

We conclude that for every interval [x,y][x,y], Gx,yG^{x,y} admits a (2log(yx+1)+4,k)(2\log(y-x+1)+4,k)-tree-model. In particular, it implies that G1,t=GG^{1,t}=G admits a (d,k)(d,k)-tree-model. It is easy to see from our proof that this (d,k)(d,k)-tree-model is computable in polynomial time.
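As a quick sanity check of the depth analysis, the recursion underlying the induction (depth 4 for a single position, and two additional levels for each half of the interval) can be evaluated numerically; a short script under these assumptions:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def model_depth(m):
    """Depth of the tree-model built for an interval of m consecutive positions."""
    if m <= 1:
        return 4                                    # base case of the induction
    left = (m - 1) // 2                             # positions x, ..., q - 1
    right = m - 1 - left                            # positions q + 1, ..., y
    return max(4, (model_depth(left) + 2) if left else 0, model_depth(right) + 2)

for m in range(1, 1 << 16):
    assert model_depth(m) <= 2 * math.log2(m) + 4
print("depth bound 2 log(y - x + 1) + 4 holds for all tested interval lengths")
```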

We are now ready to prove Theorem˜4.3.

Proof 4.8 (Proof of Theorem˜4.3).

Let δ\delta be an unbounded and computable function. Assume towards a contradiction that there exists an algorithm 𝒜\mathcal{A} that solves the Independent Set problem in graphs supplied with (d,k)(d,k)-tree-models satisfying dδ(k)d\leqslant\delta(k) that runs in time 2𝒪(k)n𝒪(1)2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)} and uses n𝒪(1)n^{\mathcal{O}(1)} space.

Since δ\delta is unbounded and computable and log\log is monotone, there exists an unbounded and computable function δ\delta^{\prime} such that for all sufficiently large N,rN,r\in\mathbb{N}, we have

2log(δ(N))+4δ(14rlogN3).2\log(\delta^{\prime}(N))+4\leqslant\delta(14r\log N-3).

Let (N,t,Σ,s1,,sr)(N,t,\Sigma,s_{1},\dots,s_{r}) be an instance of LCS such that tδ(N)t\leqslant\delta^{\prime}(N). Our reduction provides us with a graph GG and an integer 𝗀𝗈𝖺𝗅\mathsf{goal} such that the following holds:

  • GG has 𝒪(rtN2logN)\mathcal{O}(rtN^{2}\log N) vertices and thus it can be constructed in M𝒪(1)M^{\mathcal{O}(1)} time where MM is the total bitsize of (N,t,Σ,s1,,sr)(N,t,\Sigma,s_{1},\dots,s_{r}). Indeed, the selection gadgets consist of 2rtlogN2rt\log N vertices in total, each inferiority gadget has 𝒪(log2N)\mathcal{O}(\log^{2}N) vertices, and each matching gadget 𝖬𝖺𝗍𝖼𝗁(p,q)\mathsf{Match}(p,q) has 𝒪(|p|logN)=𝒪(N2logN)\mathcal{O}(|\mathcal{M}_{p}|\cdot\log N)=\mathcal{O}(N^{2}\log N) vertices.

  • By Lemma˜4.4, GG admits an independent set of size at least 𝗀𝗈𝖺𝗅\mathsf{goal} iff s1,,srs_{1},\dots,s_{r} admit a common subsequence of length tt.

  • By Lemma˜4.6, we can construct in polynomial time a (d,k)(d,k)-tree-model of GG with d=2logt+4d=2\log t+4 and k=14rlogN3k=14r\log N-3.

Observe that we have

d=2logt+42log(δ(N))+4δ(14rlogN3)=δ(k).d=2\log t+4\leqslant 2\log(\delta^{\prime}(N))+4\leqslant\delta(14r\log N-3)=\delta(k).

Consequently, we can run 𝒜\mathcal{A} to check whether GG admits an independent set of size at least 𝗀𝗈𝖺𝗅\mathsf{goal} in time 2𝒪(k)n𝒪(1)2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)} and space n𝒪(1)n^{\mathcal{O}(1)}. Since k=14rlogN3k=14r\log N-3 and n=𝒪(rtN2logN)M𝒪(1)n=\mathcal{O}(rtN^{2}\log N)\leqslant M^{\mathcal{O}(1)}, it follows that we can solve (N,t,Σ,s1,,sr)(N,t,\Sigma,s_{1},\dots,s_{r}) in time N𝒪(r)M𝒪(1)M𝒪(r)N^{\mathcal{O}(r)}\cdot M^{\mathcal{O}(1)}\leqslant M^{\mathcal{O}(r)} and space M𝒪(1)M^{\mathcal{O}(1)}. As this can be done for every instance (N,t,Σ,s1,,sr)(N,t,\Sigma,s_{1},\dots,s_{r}) where tδ(N)t\leqslant\delta^{\prime}(N), it contradicts Conjecture 4.2.

5 Fixed-Parameter Algorithms for Metric Dimension and Firefighting

The Firefighter problem on a graph GG is the following. At time 0, a vertex rV(G)r\in V(G) catches fire. Then at each time step i1i\geqslant 1, first a firefighter is permanently placed on a vertex that is not currently on fire. This vertex is now permanently protected. Then the fire spreads to all unprotected neighbors of all vertices currently on fire. This process ends in the time step when the fire no longer spreads to new vertices. All vertices that do not catch fire during this process (including the protected vertices) are called saved; the rest are called burned. The goal is to maximize the number of saved vertices.
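To make the process concrete, the following small simulation sketch takes a graph, the ignition vertex, and a prescribed sequence of vertices to protect (one per time step) and returns the number of saved vertices; skipping a firefighter whose prescribed vertex is already burning is our own simplification.

```python
def simulate_firefighter(adj, root, protect_sequence):
    """Simulate the firefighting process on a graph given as an adjacency dict."""
    burning, protected = {root}, set()
    step = 0
    while True:
        # Step i: first place a firefighter on a vertex that is not on fire.
        if step < len(protect_sequence) and protect_sequence[step] not in burning:
            protected.add(protect_sequence[step])
        step += 1
        # Then the fire spreads to every unprotected neighbour of a burning vertex.
        frontier = {u for v in burning for u in adj[v]} - burning - protected
        if not frontier:                       # the fire no longer spreads
            return len(adj) - len(burning)     # saved = untouched + protected
        burning |= frontier

# Toy example: a path 0-1-2-3-4, fire starts at 0, protect vertex 2 in step 1.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(simulate_firefighter(path, 0, [2]))      # vertices 2, 3, 4 are saved: prints 3
```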

Bazgan et al. [5] showed that the Firefighter problem is fixed-parameter tractable when parameterized by the treewidth of the input graph and the number kk of vertices that may be protected during the process. In this result, one first writes an 𝖬𝖲𝖮2\mathsf{MSO}_{2} formula φ(X)\varphi(X) that expresses that a set of vertices XX can be saved assuming that kk vertices can be protected, and then applies the optimization variant of Courcelle’s Theorem, due to Arnborg et al. [3], to find the largest vertex subset AA for which φ(A)\varphi(A) is satisfied. By inspection of the formula, we can see that it does not quantify over edge sets, hence φ(X)\varphi(X) is in fact an 𝖬𝖲𝖮1\mathsf{MSO}_{1} formula. Then, by replacing the usage of the algorithm of Arnborg et al. with the algorithm of Courcelle et al. [13], we conclude that the Firefighter problem is fixed-parameter tractable when parameterized by the cliquewidth of the input graph and the number of vertices that may be protected.

We now recall that in graphs with a (d,k)(d,k)-tree model, any induced path has length at most 𝒪(2kd+1)\mathcal{O}(2^{k^{d+1}}) (this follows from [30, Theorem 3.7]; the bound accounts for our slightly different definition of a tree model). This implies that the firefighting game has at most 𝒪(2kd+1)\mathcal{O}(2^{k^{d+1}}) time steps and that at most the same number of vertices can be protected. Hence, recalling that a graph with a (d,k)(d,k)-tree model has bounded cliquewidth [30, Proposition 3.4], we immediately obtain the following result.

Theorem 5.1.

The Firefighter problem is fixed-parameter tractable when parameterized by dd and kk on graphs provided with a (d,k)(d,k)-tree-model.

We observe that this is in contrast to the complexity of the Firefighter problem on graphs of bounded treewidth. The Firefighter problem is in fact already NP-hard on trees of maximum degree 33 (which are graphs of treewidth 11) [22] and trees of pathwidth 33 [12].

A similar situation arises for the Metric Dimension problem. In Metric Dimension, given a graph GG, we are asked to find a smallest set ZV(G)Z\subseteq V(G) such that for any pair u,vV(G)u,v\in V(G), there is a vertex zZz\in Z such that the distance between uu and zz and the distance between vv and zz are distinct. Gima et al. [31] observed that Metric Dimension is fixed-parameter tractable when parameterized by the cliquewidth and the diameter of the input. Since in graphs with a (d,k)(d,k)-tree model, any induced path has length at most 𝒪(2kd+1)\mathcal{O}(2^{k^{d+1}}) (the bound accounts for our slightly different definition of a tree model), and any such graph has bounded cliquewidth [30], we immediately obtain the following.
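For comparison with the definition, a naive exponential-time routine that finds a smallest resolving set by brute force (intended purely as an illustration of the problem, not of the algorithm behind the following theorem) could look as follows.

```python
from itertools import combinations

def bfs_distances(adj, source):
    dist, queue = {source: 0}, [source]
    for v in queue:
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def is_resolving(adj, Z):
    """Z resolves G if the vectors of distances to Z are pairwise distinct."""
    dist = {z: bfs_distances(adj, z) for z in Z}
    vectors = {v: tuple(dist[z].get(v) for z in Z) for v in adj}
    return len(set(vectors.values())) == len(adj)

def metric_dimension(adj):
    vertices = list(adj)
    for size in range(len(vertices) + 1):
        for Z in combinations(vertices, size):
            if is_resolving(adj, Z):
                return size

# A path on four vertices has metric dimension 1 (either endpoint resolves it).
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metric_dimension(path))   # prints 1
```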

Theorem 5.2.

The Metric Dimension problem is fixed-parameter tractable when parameterized by dd and kk on graphs provided with a (d,k)(d,k)-tree-model.

This is again in contrast to the complexity of the Metric Dimension problem on graphs of bounded treewidth. The Metric Dimension problem is in fact already 𝖭𝖯\mathsf{NP}-hard on graphs of pathwidth 2424 [37].

References

  • [1] Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. Tight hardness results for LCS and other sequence similarity measures. In Proc. FOCS 2015, pages 59–78, 2015.
  • [2] Eric Allender, Shiteng Chen, Tiancheng Lou, Periklis A. Papakonstantinou, and Bangsheng Tang. Width-parametrized SAT: Time–space tradeoffs. Theory Comput., 10(12):297–339, 2014.
  • [3] Stefan Arnborg, Jens Lagergren, and Detlef Seese. Easy problems for tree-decomposable graphs. J. Algorithms, 12(2):308–340, 1991.
  • [4] Marina Barsky, Ulrike Stege, Alex Thomo, and Chris Upton. Shortest path approaches for the longest common subsequence of a set of strings. In Proc. BIBE 2007, pages 327–333, 2007.
  • [5] Cristina Bazgan, Morgan Chopin, Marek Cygan, Michael R. Fellows, Fedor V. Fomin, and Erik Jan van Leeuwen. Parameterized complexity of firefighting. J. Comput. System Sci., 80(7):1285–1297, 2014.
  • [6] Andreas Björklund, Thore Husfeldt, Petteri Kaski, and Mikko Koivisto. Fourier meets Möbius: fast subset convolution. In Proc. STOC 2007, pages 67–74, 2007.
  • [7] Hans L. Bodlaender, Rodney G. Downey, Michael R. Fellows, and Harold T. Wareham. The parameterized complexity of sequence alignment and consensus. Theoret. Comput. Sci., 147(1&2):31–54, 1995.
  • [8] Hans L. Bodlaender, Carla Groenland, Hugo Jacob, Lars Jaffke, and Paloma T. Lima. XNLP-completeness for parameterized problems on graphs with a linear structure. In Proc. IPEC 2022, volume 249 of LIPIcs, pages 8:1–8:18, 2022.
  • [9] Hans L. Bodlaender, Carla Groenland, Hugo Jacob, Marcin Pilipczuk, and Michał Pilipczuk. On the complexity of problems on tree-structured graphs. In Proc. IPEC 2022, volume 249 of LIPIcs, pages 6:1–6:17, 2022.
  • [10] Hans L. Bodlaender, Carla Groenland, Jesper Nederlof, and Céline M. F. Swennenhuis. Parameterized problems complete for nondeterministic FPT time and logarithmic space. In Proc. FOCS 2021, pages 193–204, 2021.
  • [11] Hans L. Bodlaender, Carla Groenland, and Michał Pilipczuk. Parameterized complexity of binary CSP: Vertex cover, treedepth, and related parameters. CoRR, abs/2208.12543, 2022.
  • [12] Janka Chlebíková and Morgan Chopin. The firefighter problem: further steps in understanding its complexity. Theoret. Comput. Sci., 676:42–51, 2017.
  • [13] Bruno Courcelle, Johann A. Makowsky, and Udi Rotics. Linear time solvable optimization problems on graphs of bounded clique-width. Theory Comput. Syst., 33(2):125–150, 2000.
  • [14] Marek Cygan, Fedor V Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized Algorithms. Springer, 2015.
  • [15] Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michał Pilipczuk, Johan M. M. van Rooij, and Jakub Onufry Wojtaszczyk. Solving connectivity problems parameterized by treewidth in single exponential time. ACM Trans. Algorithms, 18(2):17:1–17:31, 2022.
  • [16] Matt DeVos, O-joung Kwon, and Sang-il Oum. Branch-depth: Generalizing tree-depth of graphs. European J. Combin., 90:Article 103186, 2020.
  • [17] Reinhard Diestel. Graph Theory, volume 173 of Graduate texts in mathematics. Springer, 4th edition, 2012.
  • [18] Rodney G. Downey and Michael R. Fellows. Fundamentals of Parameterized Complexity. Texts in Computer Science. Springer, 2013.
  • [19] Jan Dreier. Lacon- and shrub-decompositions: A new characterization of first-order transductions of bounded expansion classes. In Proc. LICS 2021, pages 1–13, 2021.
  • [20] Jan Dreier, Jakub Gajarský, Sandra Kiefer, Michał Pilipczuk, and Szymon Toruńczyk. Treelike decompositions for transductions of sparse graphs. In Proc. LICS 2022, pages 31:1–31:14, 2022.
  • [21] Michael Elberfeld, Christoph Stockhusen, and Till Tantau. On the space and circuit complexity of parameterized problems: Classes and completeness. Algorithmica, 71(3):661–701, 2015.
  • [22] Stephen Finbow, Andrew D. King, Gary MacGillivray, and Romeo Rizzi. The firefighter problem for graphs of maximum degree three. Discret. Math., 307(16):2094–2105, 2007.
  • [23] Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, and Saket Saurabh. Intractability of clique-width parameterizations. SIAM J. Comput., 39(5):1941–1956, 2010.
  • [24] Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, and Saket Saurabh. Almost optimal lower bounds for problems parameterized by clique-width. SIAM J. Comput., 43(5):1541–1563, 2014.
  • [25] Fedor V. Fomin and Tuukka Korhonen. Fast FPT-approximation of branchwidth. In Proc. STOC 2022, pages 886–899, 2022.
  • [26] Martin Fürer. Multi-clique-width. In Proc. ITCS 2017, volume 67 of Leibniz Int. Proc. Inform., pages 14:1–14:13, 2017.
  • [27] Martin Fürer and Huiwen Yu. Space saving by dynamic algebraization based on tree-depth. Theory Comput. Syst., 61(2):283–304, 2017.
  • [28] Jakub Gajarský and Stephan Kreutzer. Computing shrub-depth decompositions. In Proc. STACS 2020, volume 154 of Leibniz Int. Proc. Inform., pages 56:1–56:17, 2020.
  • [29] Jakub Gajarský, Stephan Kreutzer, Jaroslav Nešetřil, Patrice Ossona de Mendez, Michał Pilipczuk, Sebastian Siebertz, and Szymon Toruńczyk. First-order interpretations of bounded expansion classes. ACM Trans. Comput. Log., 21(4):Art. 29, 41, 2020.
  • [30] Robert Ganian, Petr Hliněný, Jaroslav Nešetřil, Jan Obdržálek, and Patrice Ossona de Mendez. Shrub-depth: Capturing height of dense graphs. Log. Methods Comput. Sci., 15(1):7:1–7:25, 2019.
  • [31] Tatsuya Gima, Tesshu Hanaka, Masashi Kiyomi, Yasuaki Kobayashi, and Yota Otachi. Exploring the gap between treedepth and vertex cover through vertex integrity. Theoret. Comput. Sci., 918:60–76, 2022.
  • [32] Sylvain Guillemot. Parameterized complexity and approximability of the longest compatible sequence problem. Discret. Optim., 8(1):50–60, 2011.
  • [33] Falko Hegerfeld and Stefan Kratsch. Solving connectivity problems parameterized by treedepth in single-exponential time and polynomial space. In Proc. STACS 2020, volume 154 of Leibniz Int. Proc. Inform., 2020.
  • [34] Falko Hegerfeld and Stefan Kratsch. Tight algorithms for connectivity problems parameterized by clique-width. Technical report, 2023. https://arxiv.org/abs/2302.03627.
  • [35] Russell Impagliazzo, Ramamohan Paturi, and Francis Zane. Which problems have strongly exponential complexity? J. Comput. Syst. Sci., 63(4):512–530, 2001.
  • [36] Daniel M. Kane. Unary subset-sum is in logspace. Technical report, 2010. https://arxiv.org/abs/1012.1336.
  • [37] Shaohua Li and Marcin Pilipczuk. Hardness of metric dimension in graphs of constant treewidth. Algorithmica, 84(11):3110–3155, 2022.
  • [38] Daniel Lokshtanov, Matthias Mnich, and Saket Saurabh. Planar kk-path in subexponential time and polynomial space. In Proc. WG 2011, volume 6986 of Lecture Notes Comput. Sci., pages 262–270, 2011.
  • [39] Ketan Mulmuley, Umesh V. Vazirani, and Vijay V. Vazirani. Matching is as easy as matrix inversion. Combinatorica, 7(1):105–113, 1987.
  • [40] Wojciech Nadara, Michał Pilipczuk, and Marcin Smulewicz. Computing treedepth in polynomial space and linear FPT time. In Proc. ESA 2022, volume 244 of Leibniz Int. Proc. Inform., pages 79:1–79:14, 2022.
  • [41] Jesper Nederlof, Michał Pilipczuk, Céline M. F. Swennenhuis, and Karol Węgrzycki. Hamiltonian cycle parameterized by treedepth in single exponential time and polynomial space. In Proc. WG 2020, volume 12301 of Lecture Notes Comput. Sci., pages 27–39, 2020.
  • [42] Pierre Ohlmann, Michał Pilipczuk, Wojciech Przybyszewski, and Szymon Toruńczyk. Canonical decompositions in monadically stable and bounded shrubdepth graph classes. Technical report, 2023. https://arxiv.org/abs/2303.01473.
  • [43] Patrice Ossona de Mendez, Michał Pilipczuk, and Sebastian Siebertz. Transducing paths in graph classes with unbounded shrubdepth. European J. Combin., page 103660, 2022.
  • [44] Krzysztof Pietrzak. On the parameterized complexity of the fixed alphabet shortest common supersequence and longest common subsequence problems. J. Comput. Syst. Sci., 67(4):757–771, 2003.
  • [45] Michał Pilipczuk and Marcin Wrochna. On space efficiency of algorithms working on structural decompositions of graphs. ACM Trans. Comput. Theory, 9(4):18:1–18:36, 2018.
  • [46] Egon Wanke. kk-NLC graphs and polynomial algorithms. Discret. Appl. Math., 54(2-3):251–266, 1994.