
The Predecessor-Existence Problem for k-Reversible Processes

Leonardo I. L. Oliveira
Valmir C. Barbosa

Programa de Engenharia de Sistemas e Computação, COPPE
Universidade Federal do Rio de Janeiro
Caixa Postal 68511, 21941-972 Rio de Janeiro - RJ, Brazil

Fábio Protti

Instituto de Computação
Universidade Federal Fluminense
Rua Passo da Pátria, 156, 24210-240 Niterói - RJ, Brazil
Corresponding author (fabio@ic.uff.br).
Abstract

For k ≥ 1, we consider the graph dynamical system known as a k-reversible process. In such a process, each vertex in the graph has one of two possible states at each discrete time. Each vertex changes its state between the present time and the next if and only if it currently has at least k neighbors in a state different than its own. Given a k-reversible process and a configuration of states assigned to the vertices, the Predecessor Existence problem consists of determining whether this configuration can be generated by the process from another configuration within exactly one time step. We can also extend the problem by asking for the number of configurations from which a given configuration is reachable within one time step. Predecessor Existence can be solved in polynomial time for k = 1, but for k > 1 we show that it is NP-complete. When the graph in question is a tree we show how to solve it in O(n) time and how to count the number of predecessor configurations in O(n^2) time. We also solve Predecessor Existence efficiently for the specific case of 2-reversible processes when the maximum degree of a vertex in the graph is no greater than 3. For this case we present an algorithm that runs in O(n) time.


Keywords: k-reversible processes, Garden-of-Eden configurations, Predecessor-existence problem, Graph dynamical systems.

1 Introduction

Let G be a simple, undirected, finite graph with n vertices and m edges. The set of vertices of G is denoted by V(G) = {v_1, v_2, …, v_n} and its set of edges is denoted by E(G). A k-reversible process on G is an iterative process in which, at each discrete time t, each vertex in G has one of two possible states. A state of a vertex is represented by an integer belonging to the set Q = {−1, +1}, and each vertex has its state changed from one time to the next if and only if it currently has at least k neighbors in a state different than its own, where k is a positive integer.

Let Y_t(v_i) be the state of vertex v_i at time t. A configuration of states at time t for the vertices in V(G) is denoted by Y_t = (Y_t(v_1), Y_t(v_2), …, Y_t(v_n)). The one-step dynamics of a k-reversible process on graph G can be described by a function F_G^k : Q^n → Q^n such that Y_t = F_G^k(Y_{t−1}), through a local state update rule for each vertex v_i given by

    Y_t(v_i) =  Y_{t−1}(v_i),   if v_i has fewer than k neighbors in state −Y_{t−1}(v_i) at time t−1;
                −Y_{t−1}(v_i),  otherwise.
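The update rule above can be simulated directly. The sketch below is a minimal illustration in Python, assuming the graph is given as an adjacency-list dict (a representation of our choosing, not prescribed by the paper); it applies one synchronous step to every vertex:

```python
def step(adj, y, k):
    """One synchronous step of a k-reversible process.

    adj: dict mapping each vertex to a list of its neighbors.
    y:   dict mapping each vertex to its state, +1 or -1.
    Returns the configuration at the next time step.
    """
    nxt = {}
    for v, state in y.items():
        # Number of neighbors currently in the state opposite to v's.
        opposed = sum(1 for u in adj[v] if y[u] != state)
        # Flip if and only if at least k neighbors disagree.
        nxt[v] = -state if opposed >= k else state
    return nxt
```

For example, on the path v_1–v_2–v_3 with k = 1 and states (+1, −1, +1), every vertex has at least one disagreeing neighbor, so all three vertices flip in one step.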

The motivation to study k-reversible processes is related to the analysis of opinion dissemination in social networks. For example, suppose that a network is modeled by a graph, each vertex representing a person and each edge between two vertices indicating that the corresponding persons are friends. Suppose further that state −1 represents disagreement on some issue and that state +1 means agreement on the same issue. A k-reversible process is an approach to model opinion dissemination when people are strongly influenced by the opinions of their friends and the society they are part of. Notice that in this model we are assuming that all people act in the same manner. A more complex approach could assume, for example, distinct thresholds for each person or thresholds based on one's number of friends.

Note that k-reversible processes are examples of graph dynamical systems; more precisely, of synchronous dynamical systems, which extend the notion of a cellular automaton to arbitrary graph topologies. The study of graph dynamical systems and cellular automata is multidisciplinary and related to several areas, like optics [9], neural networks [12], statistical mechanics [1], as well as opinion [5] and disease dissemination [14]. Also in distributed computing there are several studies regarding models of graph dynamical systems. An example is the model of majority processes, in which each vertex changes its state if and only if at least half of its neighbors have a different state [13]. One application of this model is the maintenance of data consistency [16].

Most of the studies regarding k-reversible processes are related to the Minimum Conversion Set problem in such processes. This problem consists of determining the cardinality of the minimum set of vertices that, if in state +1, lead all vertices in the graph also to state +1 after a finite number of time steps. It has been proved that this problem is NP-hard for k > 1 [6]. There are also several interesting results about this problem in the work by Dreyer [7], who also presents some important results regarding the periodic behavior of k-reversible processes, as well as upper bounds on the transient length that precedes periodicity. Most of the results presented by Dreyer are based on reductions from the so-called threshold processes, which are broadly studied by Goles and Olivos [11, 10]. Another approach to studying the transient and periodic behavior of k-reversible processes is the use of a specific energy function that leads to a much more intuitive proof of the maximum period length and also to better bounds on the transient length [15].

A problem that arises in synchronous dynamical systems on graphs is the so-called Predecessor Existence problem, defined as follows. Given one such dynamical system and a configuration of states, the question is whether this configuration can be generated from another configuration in a single time step using the system's update rule. In the affirmative case, such a configuration is called a predecessor of the one that was given initially. This problem was studied by Sutner [17] within the context of cellular automata, where configurations lacking a predecessor configuration are known as garden-of-Eden configurations, and was proved to be NP-complete for finite cellular automata. NP-completeness results for related dynamical systems as well as polynomial-time algorithms for some graph classes can also be found in the literature [3]. An extension of the Predecessor Existence problem is to count the number of predecessor configurations. This is also a hard problem and has been proved to be #P-complete [18].

For k-reversible processes, we address these two problems in this paper. We are interested in determining whether a configuration Y_{t−1} exists for which Y_t = F_G^k(Y_{t−1}). Because only time steps t−1 and t matter, for simplicity we denote Y_{t−1} and Y_t by Y′ and Y, respectively. We henceforth denote this special case of Predecessor Existence by Pre(k). We also consider the associated counting problem, #Pre(k), which asks for the number of predecessor configurations. Our results include an NP-completeness proof for the general case of k-reversible processes and polynomial-time algorithms for some particular cases.

The remainder of the paper is organized as follows. In Section 2 we show that Pre(1) is polynomial-time solvable. In Section 3 we provide an NP-completeness proof of Pre(k) for k > 1. In Section 4 we describe two efficient algorithms for trees, one for solving Pre(k) and the other for solving #Pre(k). In Section 5 we show an efficient algorithm to solve Pre(2) for graphs with maximum degree no greater than 3. Section 6 contains our conclusions.

2 Polynomial-time solvability of Pre(1)

For k = 1, if Y′ exists then any pair of neighbors u and v for which Y(u) = Y(v) also has Y′(u) = Y′(v). Based on this observation, we start by partitioning G into connected subgraphs that are maximal with respect to the property that each of them contains only vertices whose states in Y are the same. Clearly, all vertices in the same subgraph must have equal states also in a predecessor of Y. Let us call each such maximal connected subgraph an MCS.

Let H be an MCS in which a vertex v exists whose neighbors in G all have the same state as its own. In other words, all of v's neighbors are also in H. For this vertex, clearly there is no possibility other than Y′(v) = Y(v). Because there is only one choice of state for v in a predecessor configuration, we call both the vertex and its containing MCS locked. We refer to all other vertices and MCSs as being unlocked (so there may exist unlocked vertices in a locked MCS). We also say that any two MCSs are neighbors whenever they contain vertices that are themselves neighbors.

Theorem 1.

Pre(1) is solved affirmatively if and only if the following two conditions hold:

  • No two locked MCSs are neighbors;

  • Every vertex in an unlocked MCS has at least one neighbor in another unlocked MCS.

In this case, Y′ is obtained from Y by changing the state of all vertices in unlocked MCSs.

Proof.

The case of a single (necessarily locked) MCS is trivial. If, on the other hand, more than one MCS exists in G, then clearly the two conditions suffice for Y′ to exist and be as stated: in one time step from Y′, the state of every vertex in a locked MCS remains unchanged and that of every vertex in an unlocked MCS changes, thus yielding Y.

It remains for necessity to be shown. We do this by noting that, should the first condition fail and at least two locked MCSs be neighbors, any prospective Y′ would have to differ from Y in all vertices of each of these MCSs, but the presence of locked vertices in them would make it impossible for Y to be obtained in one time step. Should the second condition be the one to fail and at least one vertex in an unlocked MCS have neighbors outside its MCS only in locked MCSs, any prospective Y′ would have to differ from Y in all vertices of such an unlocked MCS. Once again it would be impossible to obtain Y in one time step due to the locked vertices. It follows that both conditions are necessary for Y′ to exist. ∎

By Theorem 1, we can easily solve Pre(1) in O(n + m) time.
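A sketch of one such O(n + m) procedure follows (Python, assuming a dict-based adjacency list; names are ours). It partitions the vertices into MCSs by traversing only edges whose endpoints agree in Y, marks the locked MCSs, checks the two conditions of Theorem 1, and returns a predecessor configuration or None:

```python
from collections import deque

def pre1(adj, y):
    """Solve Pre(1) per Theorem 1; return a predecessor of y, or None."""
    # Partition into MCSs: connected components of the subgraph formed
    # by the edges whose endpoints have the same state in y.
    comp = {}
    for s in adj:
        if s in comp:
            continue
        comp[s] = s
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if y[u] == y[v] and u not in comp:
                    comp[u] = s
                    queue.append(u)
    # An MCS is locked if some vertex in it has all its neighbors
    # in the same state as its own (i.e., all inside the MCS).
    locked = {c: False for c in comp.values()}
    for v in adj:
        if all(y[u] == y[v] for u in adj[v]):
            locked[comp[v]] = True
    for v in adj:
        # Condition 1: no two locked MCSs may be neighbors.
        for u in adj[v]:
            if y[u] != y[v] and locked[comp[v]] and locked[comp[u]]:
                return None
        # Condition 2: each vertex of an unlocked MCS needs a neighbor
        # in another unlocked MCS.
        if not locked[comp[v]] and not any(
                y[u] != y[v] and not locked[comp[u]] for u in adj[v]):
            return None
    # Predecessor: flip exactly the vertices in unlocked MCSs.
    return {v: -y[v] if not locked[comp[v]] else y[v] for v in adj}
```

Each edge is inspected a constant number of times, matching the O(n + m) bound.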

3 NP-completeness of Pre(k) for k > 1

We first note that the NP-completeness proof of Predecessor Existence for finite cellular automata [17] cannot be directly extended to k-reversible processes in graphs for k > 1. Sutner's proof only shows that there exist finite cellular automata for which Predecessor Existence is NP-complete. In other words, it depends on the vertex update rule being used in the cellular automaton. A different approach is then needed.

We present a reduction from a satisfiability problem known as 3Sat Exactly-Two. This problem is the variation of the 3Sat problem in which each clause must be satisfied by exactly two positive literals. We start by proving that 3Sat Exactly-Two is NP-complete.

Lemma 2.

3Sat Exactly-Two is NP-complete.

Proof.

The problem is trivially in NP. We proceed with the reduction from another variation of the 3Sat problem, known as 3Sat Exactly-One [8], which is NP-complete and asks whether there exists an assignment of variables satisfying each clause by exactly one positive literal.

The reduction is simple and consists of inverting all literals in all clauses of an instance S of 3Sat Exactly-One, resulting in an instance S′ of 3Sat Exactly-Two. It is easy to check that a solution for S directly gives a solution for S′, and conversely, a solution for S′ directly gives a solution for S. ∎
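In the usual integer encoding of CNF (our assumption for illustration: a clause is a list of nonzero integers, with −x denoting ¬x), the reduction is a one-liner:

```python
def to_exactly_two(instance):
    """Map a 3Sat Exactly-One instance to a 3Sat Exactly-Two instance
    by inverting every literal. Since each clause has three literals,
    an assignment satisfies exactly one literal of an input clause iff
    it satisfies exactly two literals of the inverted clause."""
    return [[-lit for lit in clause] for clause in instance]
```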

Theorem 3.

Pre(k) is NP-complete for k > 1.

Proof.

Given two configurations Y and Y′, verifying whether Y′ is a predecessor configuration of Y is straightforward and can be done by simulating one step of the k-reversible process starting with configuration Y′. Simulating one step of the process takes O(n + m) time and the final comparison between the resulting configuration and Y takes O(n) time; thus, Pre(k) is in NP. The remainder of the proof is a reduction from 3Sat Exactly-Two, which by Lemma 2 is NP-complete.

Let S be an arbitrary instance of 3Sat Exactly-Two with M clauses c_1, c_2, …, c_M and N variables x_1, x_2, …, x_N. We construct an instance (G, Y) of Pre(k) from S as follows.

Vertex set V(G) is the union of:

  • {x_i, ¬x_i}, for each variable x_i in S;

  • {z_i, z′_i}, for each variable x_i in S;

  • {u_{i,1}, …, u_{i,2k−3}}, for each variable x_i in S;

  • {p_{i,1}, …, p_{i,2k−3}}, for each variable x_i in S;

  • {w_{i,1}, …, w_{i,k−2}}, for each variable x_i in S, provided k > 2;

  • {w′_{i,1}, …, w′_{i,k−2}}, for each variable x_i in S, provided k > 2;

  • {c_i, c′_i}, for each clause c_i in S;

  • {b_{i,1}, …, b_{i,k−2}}, for each clause c_i in S, provided k > 2;

  • {b′_{i,1}, …, b′_{i,k−1}}, for each clause c_i in S.

Vertices x_i and ¬x_i are called literal vertices, and vertices c_i and c′_i are called clause vertices. If x is a neighbor of u and x is a literal vertex, then we say that x is a literal neighbor of u. Similarly, if x is a neighbor of u and x is a clause vertex, then x is a clause neighbor of u.

Edge set E(G) is the union of:

  • {(x_i, z_i), (x_i, z′_i), (¬x_i, z_i), (¬x_i, z′_i)}, for each variable x_i in S;

  • {(x_i, u_{i,1}), …, (x_i, u_{i,2k−3})}, for each variable x_i in S;

  • {(¬x_i, p_{i,1}), …, (¬x_i, p_{i,2k−3})}, for each variable x_i in S;

  • {(z_i, w_{i,1}), …, (z_i, w_{i,k−2})}, for each variable x_i in S, provided k > 2;

  • {(z′_i, w′_{i,1}), …, (z′_i, w′_{i,k−2})}, for each variable x_i in S, provided k > 2;

  • {(c′_j, b′_{j,1}), …, (c′_j, b′_{j,k−1})}, for each clause c_j in S;

  • {(c_j, b_{j,1}), …, (c_j, b_{j,k−2})}, for each clause c_j in S, provided k > 2;

  • {(c_j, x_i), (c′_j, x_i)}, for each literal x_i occurring in clause c_j;

  • {(c_j, ¬x_i), (c′_j, ¬x_i)}, for each literal ¬x_i occurring in clause c_j.

We finish the construction by defining the target configuration Y:

  • Y(x_i) = Y(¬x_i) = +1, for 1 ≤ i ≤ N;

  • Y(z_i) = +1, Y(z′_i) = −1, for 1 ≤ i ≤ N;

  • Y(u_{i,j}) = Y(p_{i,j}) = +1, for 1 ≤ i ≤ N and 1 ≤ j ≤ k−1;

  • Y(u_{i,j}) = Y(p_{i,j}) = −1, for 1 ≤ i ≤ N and k ≤ j ≤ 2k−3, provided k > 2;

  • Y(w_{i,j}) = −1, Y(w′_{i,j}) = +1, for 1 ≤ i ≤ N and 1 ≤ j ≤ k−2, provided k > 2;

  • Y(c_i) = +1, Y(c′_i) = −1, for 1 ≤ i ≤ M;

  • Y(b_{i,j}) = −1, for 1 ≤ i ≤ M and 1 ≤ j ≤ k−2, provided k > 2;

  • Y(b′_{i,j}) = −1, for 1 ≤ i ≤ M and 1 ≤ j ≤ k−1.

Figure 1 illustrates the case of k = 3, M = 1, and N = 3, the single clause being c_1 = x_1 ∨ ¬x_2 ∨ ¬x_3.

Figure 1: Graph G in the instance of Pre(3) having M = 1 and N = 3 for which c_1 = x_1 ∨ ¬x_2 ∨ ¬x_3. Shaded circles indicate state +1 in configuration Y; empty circles indicate state −1.

For each variable in S, at most 6k−6 vertices are created, and for each clause at most 2k−1 vertices, resulting in at most n = N(6k−6) + M(2k−1) vertices. Likewise, the total number of edges is at most m = N(6k−6) + M(2k+3). Since k is a constant, we have a polynomial-time reduction.

We proceed by showing that if S is satisfiable then Y has at least one predecessor configuration. In fact, given any satisfying truth assignment for S, we can construct a predecessor configuration Y′ of Y in the following manner:

  • Y′(x_i) = +1, if variable x_i is true in the given assignment;

  • Y′(x_i) = −1, if variable x_i is false in the given assignment;

  • Y′(¬x_i) = −Y′(x_i);

  • Y′(z_i) = +1, Y′(z′_i) = −1, for 1 ≤ i ≤ N;

  • Y′(u_{i,j}) = Y′(p_{i,j}) = +1, for 1 ≤ i ≤ N and 1 ≤ j ≤ k−1;

  • Y′(u_{i,j}) = Y′(p_{i,j}) = −1, for 1 ≤ i ≤ N and k ≤ j ≤ 2k−3;

  • Y′(w_{i,j}) = −1, Y′(w′_{i,j}) = +1, for 1 ≤ i ≤ N and 1 ≤ j ≤ k−2, provided k > 2;

  • Y′(c_i) = Y′(c′_i) = +1, for 1 ≤ i ≤ M;

  • Y′(b_{i,j}) = −1, for 1 ≤ i ≤ M and 1 ≤ j ≤ k−2, provided k > 2;

  • Y′(b′_{i,j}) = −1, for 1 ≤ i ≤ M and 1 ≤ j ≤ k−1.

Figure 2 shows the predecessor configuration Y′ for the Y given in Figure 1. As an example, we have let all of x_1, ¬x_2, and x_3 be true in the satisfying assignment.

By construction, and considering configuration Y′ as described above, each vertex x_i or ¬x_i has at least k neighbors in state +1 in Y′ and exactly k−1 neighbors in state −1. This condition is sufficient to guarantee that every literal vertex reaches state +1 in one time step, regardless of its state in configuration Y′.

Each of the vertices u_{i,j}, p_{i,j}, w_{i,j}, w′_{i,j}, b_{i,j}, and b′_{i,j} has only one neighbor, and will obviously keep its state in the next configuration. Each of the vertices z_i and z′_i has at most k−1 neighbors in the opposite state, since vertices x_i and ¬x_i have mutually opposite states. So z_i and z′_i remain unchanged as well.

In order for configuration Y to be obtained after one time step, in configuration Y′ no vertex c_i can have more than one literal neighbor in state −1, since that would lead to state −1 for c_i in the next configuration. Since S is satisfiable, there are exactly two positive literals for each clause in S, and by construction of Y′, each vertex c_i has exactly one literal neighbor in state −1. Similarly, every vertex c′_i needs at least one literal neighbor in state −1, and, as the construction guarantees, c′_i has exactly one literal neighbor in state −1.

Figure 2: Predecessor configuration Y′ for the G and Y of Figure 1 when all of x_1, ¬x_2, and x_3 are true. Shaded circles indicate state +1 in Y′; empty circles indicate state −1.

Hence, given the configuration Y′ as constructed, we see that the next configuration is precisely configuration Y. We then conclude that whenever S is satisfiable, Y has at least one predecessor configuration.

Conversely, we now show that if Y has at least one predecessor configuration then S is satisfiable.

In any predecessor configuration of Y, the vertices u_{i,j}, p_{i,j}, w_{i,j}, w′_{i,j}, b_{i,j}, and b′_{i,j} must all be in the same states as in configuration Y, since they all have only one neighbor each.

Vertices z_i and z′_i must also have the same states as in configuration Y in any predecessor configuration. For suppose that, in a predecessor configuration, z_i is in state −1; then necessarily x_i and ¬x_i would have to be in state +1, and consequently vertex z′_i would reach state +1 in the next configuration, which differs from its state in configuration Y. An analogous argument holds for vertex z′_i. This condition also forces x_i to have a state different than that of ¬x_i in any predecessor configuration of Y, since if both had the same state then in the next configuration z_i and z′_i would also have the same state.

Each vertex c_i must have state +1 in a predecessor configuration of Y; otherwise every literal neighbor of c_i would need to have state −1 in the predecessor configuration, and consequently vertex c_i would not change to state +1, which is its state in Y. Also, it must be the case that at least two of the literal neighbors of c_i have state +1, otherwise c_i would have state −1 in the next configuration, thus not matching configuration Y.

Each vertex c′_i has the same literal neighbors as vertex c_i. As we know that some of these neighbors have state +1 in a predecessor configuration, c′_i must have state +1; otherwise all of these neighbors would need to have state −1 in the predecessor configuration, contradicting the fact that some of them have state +1. Along with the restriction imposed by c_i, we have that exactly two of the three literal vertices associated with clause c_i must have state +1 in a predecessor configuration.

Hence, given any predecessor configuration of Y, we can associate with any literal the value true if its vertex has state +1, and the value false otherwise. The construction guarantees that this assignment satisfies every clause with exactly two positive literals and that no opposite literals receive the same value.

We conclude that if Y has a predecessor configuration then S is satisfiable. ∎

Corollary 4.

Pre(k) on bipartite graphs is NP-complete for k > 1.

Proof.

The graph constructed in the proof of Theorem 3 is bipartite, with parts ⋃_{i,j} {x_i, ¬x_i, b_{i,j}, b′_{i,j}, w_{i,j}, w′_{i,j}} and ⋃_{i,j} {c_i, c′_i, u_{i,j}, p_{i,j}, z_i, z′_i}. ∎

4 Polynomial-time algorithms for trees

In this section, we consider a tree T rooted at an arbitrary vertex called root. For each vertex v in T, parent_v denotes the parent of v, and children_v denotes the set of children of v. A subtree of T will be denoted by T_u, where u is the root of the subtree.

It will be helpful to adopt a notation for a configuration of a subtree of T. Denote by Y_{v,t} = (Y_t(w_1), Y_t(w_2), …, Y_t(w_{|T_v|})) the configuration of states of the vertices in subtree T_v at time t, where each w_i is one of the |T_v| vertices of T_v. Notice that Y_{v,t} is a subsequence of Y_t and its purpose is to refer to states of vertices in subtree T_v only; thus, if v = root, then Y_{v,t} = Y_t.

The one-step dynamics of a k-reversible process in subtree T_v can be described using the function F_{G,v}^k : Q^{|T_v|} → Q^{|T_v|} such that

    Y_{v,t} = F_{G,v}^k(Y_{v,t−1}) = (Y_t(w_1), Y_t(w_2), …, Y_t(w_{|T_v|})).

As in Section 1, we omit, for simplicity, the subscript referring to time, using the notation Y′ to refer to the configuration at time t−1 and Y to refer to the configuration at time t. Hence, Y_v = Y_{v,t} and Y′_v = Y_{v,t−1}.

So long as we take into account the influence of parent_v on the dynamics of T_v, it is easy to see that the following holds: if a configuration Y′ exists for which Y = F_G^k(Y′), then Y_v = F_{G,v}^k(Y′_v) as well. That is, the subsequence of Y′ that corresponds to T_v is a predecessor configuration of the subsequence of Y that corresponds to T_v.

Section 4.1 presents an algorithm that solves Pre(k) in polynomial time for any k when the graph is a tree. Section 4.2 presents an algorithm that solves the associated counting problem #Pre(k), also in polynomial time, for any k when the graph is a tree.

4.1 Polynomial-time algorithm for Pre(k)

We start by defining the function vstate(target, current, p, k), which determines whether a vertex in state current can reach state target in one time step of a k-reversible process, assuming that it has p neighbors in the state different than current. This is a simple Boolean function that only checks whether a given state transition is possible. It can be calculated in O(1) time, since

    vstate(target, current, p, k) =  false,  if current = target and p ≥ k, or current ≠ target and p < k;
                                     true,   otherwise.
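Written out in Python (name and argument order follow the definition above), the whole function reduces to a single comparison:

```python
def vstate(target, current, p, k):
    """True iff a vertex in state `current`, having p neighbors in the
    opposite state, is in state `target` after one step of a
    k-reversible process."""
    flips = p >= k  # the update rule flips the vertex iff p >= k
    # The transition succeeds exactly when "needs to change" matches "flips".
    return (current != target) == flips
```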

The algorithm tries to build a predecessor configuration Y′ of Y by determining possible configurations for its subtrees and testing states on vertices. Suppose we traverse the tree in a top-down fashion, attaching a state to each vertex we visit. Let v be the vertex of T that the algorithm is visiting, and suppose that the algorithm has attached state c to parent_v. We define the function fstate(v, c), which returns the state that v must have in a configuration Y′_v such that F_{G,v}^k(Y′_v) = Y_v, or returns ∞ if there is no such configuration Y′_v when Y′(parent_v) = c. If both states are possible for v, the function simply returns Y(parent_v), or +1 when v is the root.

We assume that function fstate will be called with parameters v and 0 when v is the root of the tree. Hence, fstate(root, 0) is simply the state that root must have in a predecessor configuration of Y. If fstate(root, 0) is different than ∞, configuration Y has a predecessor configuration; otherwise, it does not.

Given the function fstate(v, c), one can easily test whether the state c attached during the algorithm is possible or not in a predecessor configuration of Y. When visiting vertex parent_v, the algorithm calls the function fstate(w, c) for each child w of parent_v, and then, by using function vstate, it decides whether a transition is possible from state c to state Y(parent_v).

Notice that when both states can be returned by fstate(v, c), we define the function to return state Y(parent_v). This choice is always correct in this case, since which of the two states is actually assigned to v does not matter in itself; what the choice does is maximize the number of neighbors that can help parent_v reach state Y(parent_v) in the next time step. If this maximum number of neighbors in state Y(parent_v) is not enough to make parent_v reach state Y(parent_v), then certainly no smaller number would be.

Now, the problem is to calculate fstate(root, 0) correctly and efficiently. Assume that for each child f of v the values of fstate(f, +1) and fstate(f, −1) are correctly calculated. In other words, we know the states of every child of v when v has state +1 or state −1 in a predecessor configuration. Let st be a candidate state for v, and let l be the number of children of v whose state, as calculated by fstate, differs from st. Once the state of parent_v is set, we increment l if parent_v also has a state different than st, so that l represents the number of neighbors of v in a state different than v's own in the configuration we are trying to construct. Then we can check vstate(Y(v), st, l, k) to verify whether st is a valid state for v in a predecessor configuration Y′ such that F_{G,v}^k(Y′_v) = Y_v.

An important observation to have in mind is that if at least one child f of v has no possible state (which means that fstate(f, st) = ∞), then the state st is not valid for v.

Given that we already know all the valid states for v when Y′(parent_v) = c, the function fstate can be calculated as follows:

    fstate(v, c) =  Y(parent_v),  if both states are valid for v;
                    +1,           if only state +1 is valid for v;
                    −1,           if only state −1 is valid for v;
                    ∞,            otherwise.

Since we already know which states are valid, fstate(v, c) is easily determined in O(1) time. To check whether a given state is valid, we simply need to count the number of neighbors whose state, as given by function fstate, is opposite to the one being checked, and then call function vstate to verify whether the state is valid. This can be done in O(d(v)) time.

It is possible to calculate fstate for all vertices in T using a recursive algorithm similar to depth-first search. When visiting vertex v with parent_v in state c, the algorithm recursively calculates fstate(f, +1) and fstate(f, −1) for each child f of v, and once the algorithm returns from the recursion, all values needed to calculate fstate(v, c) are available. Hence, we simply need to calculate fstate(root, 0) recursively and check whether the returned value is ∞ or not.

1 begin
2   if \mathit{fstate}[v,c+1]\neq\mathit{NIL} then
3     return \mathit{fstate}[v,c+1];
4   \mathit{count}\leftarrow 0;
5   \mathit{state}\leftarrow Y(\mathit{parent}_{v});
6   while \mathit{count}\neq 2 do
7     \mathit{count}\leftarrow\mathit{count}+1;
8     l\leftarrow 0;
9     \mathit{ret}\leftarrow\mathit{true};
10    if c=-\mathit{state} then
11      l\leftarrow l+1;
12    foreach f\in\mathit{children}_{v} do
13      if \mathit{calcfstate}(f,\mathit{state})=-\mathit{state} then
14        l\leftarrow l+1;
15      if \mathit{calcfstate}(f,\mathit{state})=\infty then
16        \mathit{ret}\leftarrow\mathit{false};
17    if \mathit{vstate}(Y(v),\mathit{state},l,k)=\mathit{true} and \mathit{ret}\neq\mathit{false} then
18      \mathit{fstate}[v,c+1]\leftarrow\mathit{state};
19      return \mathit{state};
20    \mathit{state}\leftarrow-\mathit{state};
21  \mathit{fstate}[v,c+1]\leftarrow\infty;
22  return \infty;

Algorithm 1 \mathit{calcfstate}(v,c)

This is what Algorithm 1 does. The algorithm maintains a table \mathit{fstate} that contains all the function values. This table is initialized with the null value for all vertices and states.

For each vertex v, the algorithm first tries to assign state Y(\mathit{parent}_{v}); if this is a valid state, the value is stored in the table and returned. Otherwise, the algorithm tries the state opposite to Y(\mathit{parent}_{v}) and assigns it if it is valid. If neither state is valid, the algorithm stores \infty in the table and returns. Notice that when the algorithm accesses a position in the table it increments the value of c by 1, so it does not access a negative position when c=-1. Table handling is essential for the algorithm to run in polynomial time: without the verification in line 2, it would be a simple backtracking algorithm with time complexity O(2^{n}). To see this, consider the case in which T is the path P_{n} on n vertices and k=2, and let H(n) be the worst-case time complexity of the algorithm in this case, without the table. Choosing the root to be one of the vertices of degree 1, H(n) is easily verified to be

H(n)=2H(n-1)+O(1), (8)

considering that the algorithm tries both states for each vertex, which happens when the given input configuration has no predecessor configuration. Additionally, the structure of P_{n} allows us to express the number of steps as a function of the number of steps needed to solve the problem on the subpath P_{n-1}. Solving the above recurrence from H(1)=O(1), we obtain H(n)=O(2^{n}). Using the table amounts to employing memoization [4] to avoid the exponential running time.
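The effect of the table can be checked mechanically; a small sketch comparing the call counts of both versions on P_{n} (the bound of four visits per vertex is the one established in the analysis that follows):

```python
def naive_calls(n):
    """Calls made by the backtracking version on the path P_n when both
    states must be tried at every vertex: C(n) = 2*C(n-1) + 1, C(1) = 1."""
    return 1 if n == 1 else 2 * naive_calls(n - 1) + 1

def memoized_visits_bound(n):
    """With the table, calcfstate does real work at most once per
    (vertex, parent-state) pair, so each vertex is visited at most 4 times."""
    return 4 * n
```

Solving the recurrence gives naive_calls(n) = 2^n - 1, against a linear bound with memoization.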

Suppose that the last function call is \mathit{calcfstate}(v,c) and that this is the first time the function is called with these parameters. In the worst case, the algorithm calls \mathit{calcfstate}(f,+1) and \mathit{calcfstate}(f,-1) for each child f of v and calculates \mathit{fstate}[v,c+1]. Each child f of v is thus visited twice, and any later call to \mathit{calcfstate}(v,c) results in no further visits to v's children, since the algorithm returns \mathit{fstate}[v,c+1] in line 3. When the algorithm calls \mathit{calcfstate}(v,-c), again in the worst case it calls \mathit{calcfstate}(f,+1) and \mathit{calcfstate}(f,-1) for each child f of v, raising to four the number of visits to each child of v. After this call to \mathit{calcfstate}(v,-c), any further visit to vertex v produces no additional visits to any child of v, since both \mathit{fstate}(v,c) and \mathit{fstate}(v,-c) are stored in the table and the algorithm returns in line 3. We conclude that each vertex is visited at most four times and, likewise, that each edge is traversed at most four times, twice for each state being tested. Thus, the time complexity of Algorithm 1 is O(n+m); using m=n-1 yields a time complexity of O(n).

Algorithm 2 implements the recursive idea to recover a predecessor configuration Y^{\prime} once we know that one exists. It traverses the tree accessing values in table \mathit{fstate} according to the vertex being visited and to the state assigned to its parent. Once a state is assigned to a vertex, the algorithm consults the table to determine the only possible recursive call to make. Algorithm 2 traverses the tree exactly once, assigning states; thus, its time complexity is O(n).

1 begin
2   y[v]\leftarrow\mathit{fstate}[v,c+1];
3   foreach f\in\mathit{children}_{v} do
4     \mathit{buildstate}(f,y[v]);

Algorithm 2 \mathit{buildstate}(v,c)

4.2 Polynomial-time algorithm for #Pre(k)

Consider the class of trees constructed in the following manner:

  • A vertex v_{1} is connected to p>1 vertices v_{2},v_{3},\ldots,v_{p+1};

  • Each vertex v_{i}, with 2\leq i\leq p+1, is connected to two other vertices, denoted by d_{i-1} and e_{i-1}.

Each tree in this class has 3p+1 vertices. Assume a 2-reversible process and a configuration Y in which all states are +1. In this case, any configuration in which vertex v_{1} has state -1 and at least two of the vertices v_{2},v_{3},\ldots,v_{p+1} have state +1 is a predecessor configuration of Y. Notice that all vertices d_{i} and e_{i} must have state +1 in a predecessor configuration of Y, since they have degree 1. A lower bound on the number of predecessor configurations of Y for this class of trees is given by

\sum_{i=2}^{p}{\binom{p}{i}}=2^{p}-p-1. (9)
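The identity in Equation (9) is the usual "all subsets minus the small ones" count, and can be checked directly (a short verification sketch; the function name is ours):

```python
from math import comb

def predecessors_lower_bound(p):
    """Ways to pick at least two of the p vertices v_2, ..., v_{p+1}
    to carry state +1, as in Equation (9)."""
    return sum(comb(p, i) for i in range(2, p + 1))

# 2^p counts all subsets; subtracting the empty set (1) and the
# singletons (p) leaves 2^p - p - 1.
```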

Figure 3 illustrates the construction of a tree in this class for p=3 and the respective configuration Y. We also illustrate some of the possible predecessor configurations for a 2-reversible process.

Given this lower bound on the number of predecessor configurations, modifying the previous algorithm to store all possible predecessor configurations and then reconstruct them is an exponential-time task. However, the associated counting problem can be solved in O(n^{2}) time for every k.

Figure 3: Configuration Y with a possibly exponential number of predecessor configurations. Shaded circles indicate state +1; empty circles indicate state -1. (a) Configuration Y; (b) predecessor configuration Y^{\prime}; (c) predecessor configuration Y^{\prime\prime}.

In order to do this, we use a function similar to \mathit{fstate}. However, a more robust function is needed, one that contains not only the state that a vertex must have in a predecessor configuration, but also the number of predecessor configurations of the corresponding subtree for the cases in which the vertex has state +1 and state -1. We define \mathit{cfstate}(v,c) as the ordered pair whose first element is the number of predecessor configurations of subtree T_{v} when \mathit{parent}_{v} has state c and v has state +1, the second element being the number of predecessor configurations of T_{v} when \mathit{parent}_{v} has state c and v has state -1.

Similarly to function \mathit{fstate}, when v is the root of the tree, function \mathit{cfstate} is called with parameters v and 0. Thus, the total number of predecessor configurations of subtree T_{v} when \mathit{parent}_{v} has state c is the sum of the two elements of the ordered pair \mathit{cfstate}(v,c); in case v is the root, it is the sum of the two elements of \mathit{cfstate}(v,0).

The natural way to calculate \mathit{cfstate}(v,c) is quite simple. Without loss of generality, suppose that we are calculating the first element of the pair \mathit{cfstate}(v,c) and that, in this case, at least l children of v must have a certain state \mathit{st} in a predecessor configuration. For simplicity, the first element of the pair \mathit{cfstate}(v,c) will be denoted by \mathit{cfstate}(v,c)_{+1}, whereas the second element will be denoted by \mathit{cfstate}(v,c)_{-1}. Then

\mathit{cfstate}(v,c)_{+1}=\sum\limits_{X\in C_{v}^{l}}{\mathit{calc}(X,\mathit{st})}, (10)

where C_{v}^{l} is the set of all subsets of children of v with at least l elements and

\mathit{calc}(X,\mathit{st})=\left\{\begin{array}{rl}1,&\mbox{if }X=\emptyset;\\ \prod\limits_{f\in X}{\mathit{cfstate}(f,+1)_{\mathit{st}}}\prod\limits_{f\in\mathit{children}_{v}\setminus X}{\mathit{cfstate}(f,+1)_{-\mathit{st}}},&\mbox{otherwise.}\end{array}\right. (11)

In other words, we simply test all possibilities of state assignment to the children of v such that at least l of them have state \mathit{st}. For each such possibility we calculate the total number of predecessor configurations, multiplying the numbers of predecessor configurations of the respective subtrees. The value \mathit{cfstate}(v,c)_{+1} is the sum over all these possibilities.

Notice that it is possible to calculate \mathit{cfstate}(v,c)_{-1} likewise, again assuming that in each predecessor configuration at least l children of v have state \mathit{st}. We just need to redefine \mathit{calc}(X,\mathit{st}) to be

\mathit{calc}(X,\mathit{st})=\left\{\begin{array}{rl}1,&\mbox{if }X=\emptyset;\\ \prod\limits_{f\in X}{\mathit{cfstate}(f,-1)_{\mathit{st}}}\prod\limits_{f\in\mathit{children}_{v}\setminus X}{\mathit{cfstate}(f,-1)_{-\mathit{st}}},&\mbox{otherwise.}\end{array}\right. (12)

A point to note in this approach is that the total number of subsets to iterate through is O(2^{d(v)-1}). For example, for l=1,

|C_{v}^{1}|=\sum_{i=1}^{d(v)-1}{\binom{d(v)-1}{i}}=2^{d(v)-1}-1. (13)

We can, however, use dynamic programming [4] to calculate \mathit{cfstate}(v,c) in polynomial time without needing to iterate through all possible configurations.

Assume that the children of v are ordered as \mathit{child}_{v,0},\ldots,\mathit{child}_{v,d(v)-2} when v is an internal vertex of the tree; if v is the root, the order is \mathit{child}_{v,0},\ldots,\mathit{child}_{v,d(v)-1}. For simplicity, denote the number of children of v by d^{\prime}(v).

Define the function g_{v}^{\mathit{rt}}(i,j) as the total number of predecessor configurations for subtree T_{v} in which v has state \mathit{rt} and, moreover, exactly j of the vertices \mathit{child}_{v,0},\mathit{child}_{v,1},\ldots,\mathit{child}_{v,i-1} have state +1. Similarly, define h_{v}^{\mathit{rt}}(i,j) as the total number of predecessor configurations for subtree T_{v} in which v has state \mathit{rt} and, moreover, exactly j of the vertices \mathit{child}_{v,0},\mathit{child}_{v,1},\ldots,\mathit{child}_{v,i-1} have state -1. Then we can calculate \mathit{cfstate}(v,c)_{\mathit{rt}} in the following way.

If at least l children of v are required to have state +1 in a predecessor configuration of T_{v}, then

\mathit{cfstate}(v,c)_{\mathit{rt}}=\sum_{i=l}^{d^{\prime}(v)}g_{v}^{\mathit{rt}}(d^{\prime}(v),i), (14)

where g_{v}^{\mathit{rt}}(i,j) is defined recursively, as in

g_{v}^{\mathit{rt}}(i,j)=\left\{\begin{array}{rl}0,&\mbox{if }i=0\mbox{ and }j>0;\\ 1,&\mbox{if }i=0\mbox{ and }j=0;\\ g_{v}^{\mathit{rt}}(i-1,j)a_{i}+g_{v}^{\mathit{rt}}(i-1,j-1)b_{i},&\mbox{if }i>0\mbox{ and }j>0;\\ g_{v}^{\mathit{rt}}(i-1,j)a_{i},&\mbox{if }i>0\mbox{ and }j=0,\end{array}\right. (15)

with a_{i}=\mathit{cfstate}(\mathit{child}_{v,i-1},\mathit{rt})_{-1} and b_{i}=\mathit{cfstate}(\mathit{child}_{v,i-1},\mathit{rt})_{+1}.
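Recurrence (15) is a standard "exactly j successes" dynamic program; a Python sketch, with the per-child counts a_{i}, b_{i} passed as 0-indexed lists (the function names are ours):

```python
def g_table(a, b):
    """g[i][j] = number of ways to assign states to the first i children so
    that exactly j of them take state +1, weighted by the per-subtree counts:
    b[i] configurations when child i has state +1, a[i] when it has -1."""
    d = len(a)
    g = [[0] * (d + 1) for _ in range(d + 1)]
    g[0][0] = 1                       # base cases of Equation (15)
    for i in range(1, d + 1):
        for j in range(0, i + 1):
            g[i][j] = g[i - 1][j] * a[i - 1]          # child i-1 gets -1
            if j > 0:
                g[i][j] += g[i - 1][j - 1] * b[i - 1]  # child i-1 gets +1
    return g

def cfstate_component(a, b, l):
    """Equation (14): at least l children in state +1."""
    g, d = g_table(a, b), len(a)
    return sum(g[d][j] for j in range(l, d + 1))
```

With all a_{i}=b_{i}=1, the table reduces to binomial coefficients, which gives a quick sanity check.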

If, instead, at least l children of v are required to have state -1 in a predecessor configuration of T_{v}, then

\mathit{cfstate}(v,c)_{\mathit{rt}}=\sum_{i=l}^{d^{\prime}(v)}h_{v}^{\mathit{rt}}(d^{\prime}(v),i), (16)

where h_{v}^{\mathit{rt}}(i,j) is such that

h_{v}^{\mathit{rt}}(i,j)=\left\{\begin{array}{rl}0,&\mbox{if }i=0\mbox{ and }j>0;\\ 1,&\mbox{if }i=0\mbox{ and }j=0;\\ h_{v}^{\mathit{rt}}(i-1,j)b_{i}+h_{v}^{\mathit{rt}}(i-1,j-1)a_{i},&\mbox{if }i>0\mbox{ and }j>0;\\ h_{v}^{\mathit{rt}}(i-1,j)b_{i},&\mbox{if }i>0\mbox{ and }j=0.\end{array}\right. (17)

As given above, the calculation of g_{v}^{\mathit{rt}}(i,j) involves four cases that depend on both state possibilities for vertex \mathit{child}_{v,i-1}. They are the following (the cases for h_{v}^{\mathit{rt}}(i,j) are analogous):

  • i=0 and j>0: In this case there is no vertex to be considered, and yet j>0 vertices should be in state +1. Hence, no predecessor configuration exists.

  • i=0 and j=0: This is the only case in which a predecessor configuration exists with i=0, since no vertex needs to be in state +1. Hence, there exists exactly one predecessor configuration.

  • i>0 and j>0: In this case we add the number of predecessor configurations for the first i subtrees, considering that vertex \mathit{child}_{v,i-1} has state +1, to the total number of predecessor configurations when this vertex has state -1. Notice that if we assume that \mathit{child}_{v,i-1} has state +1, then exactly j-1 of the vertices \mathit{child}_{v,0},\ldots,\mathit{child}_{v,i-2} must have state +1 as well. Otherwise, if we assume \mathit{child}_{v,i-1} to have state -1, then j of the first i-1 children of v must have state +1.

  • i>0 and j=0: In this case none of the vertices \mathit{child}_{v,0},\ldots,\mathit{child}_{v,i-1} has state +1. We calculate the total number of predecessor configurations of the subtrees rooted at \mathit{child}_{v,0},\ldots,\mathit{child}_{v,i-2} with all of them having state -1, which is given by g_{v}^{\mathit{rt}}(i-1,j), and multiply it by the total number of predecessor configurations of subtree T_{\mathit{child}_{v,i-1}} with vertex \mathit{child}_{v,i-1} having state -1, which is given by \mathit{cfstate}(\mathit{child}_{v,i-1},\mathit{rt})_{-1}.

1 begin
2   if \mathit{cfstate}[v,c+1]\neq\mathit{NIL} then
3     return \mathit{cfstate}[v,c+1];
4   d^{\prime}(v)\leftarrow d(v)-1;
5   if v=\mathit{root} then
6     d^{\prime}(v)\leftarrow d(v);
7   \mathit{count}\leftarrow 0; \mathit{state}\leftarrow+1;
8   while \mathit{count}\neq 2 do
9     i\leftarrow 1; \mathit{count}\leftarrow\mathit{count}+1;
10    foreach f\in\mathit{children}_{v} do
11      \mathit{par}[i]\leftarrow\mathit{countfstate}(f,\mathit{state}); i\leftarrow i+1;
12    l\leftarrow\mathit{threshold}_{v}(\mathit{state},c,k);
13    if v=\mathit{root} then
14      l\leftarrow\mathit{threshold}_{v}(\mathit{state},\infty,k);
15    \mathit{tab}[0,0]\leftarrow 1;
16    for j\leftarrow 1 to d^{\prime}(v) do
17      \mathit{tab}[0,j]\leftarrow 0;
18    for i\leftarrow 1 to d^{\prime}(v) do
19      for j\leftarrow 0 to d^{\prime}(v) do
20        if Y(v)=+1 then
21          \mathit{tab}[i,j]\leftarrow\mathit{tab}[i-1,j]\,\mathit{par}[i]_{-};
22          if j>0 then
23            \mathit{tab}[i,j]\leftarrow\mathit{tab}[i,j]+\mathit{tab}[i-1,j-1]\,\mathit{par}[i]_{+};
24        else
25          \mathit{tab}[i,j]\leftarrow\mathit{tab}[i-1,j]\,\mathit{par}[i]_{+};
26          if j>0 then
27            \mathit{tab}[i,j]\leftarrow\mathit{tab}[i,j]+\mathit{tab}[i-1,j-1]\,\mathit{par}[i]_{-};
28    \mathit{cfstate}[v,c+1]_{\mathit{state}}\leftarrow\sum_{i=l}^{d^{\prime}(v)}\mathit{tab}[d^{\prime}(v),i];
29    \mathit{state}\leftarrow-\mathit{state};
30  return \mathit{cfstate}[v,c+1];

Algorithm 3 \mathit{countfstate}(v,c)

Applying the recursion directly results in an algorithm whose number of operations grows very fast. Thus, once again we resort to dynamic programming to calculate g_{v}^{\mathit{rt}} and h_{v}^{\mathit{rt}}.

Algorithm 3 implements the recursive scheme given above. Similarly to Algorithm 1, we keep the \mathit{cfstate} values in a table to avoid exponential running time. The algorithm does not use the functions g_{v}^{\mathit{rt}} and h_{v}^{\mathit{rt}} explicitly; instead, it fills a table \mathit{tab}, checking the value of Y(v) to decide which multiplication to perform in the inner loop: we use g_{v}^{\mathit{rt}} if Y(v)=+1 and h_{v}^{\mathit{rt}} if Y(v)=-1. Suppose that Y(v)=+1. If in a predecessor configuration of Y vertex v has state -1, then, depending on the state of \mathit{parent}_{v}, vertex v needs k or k-1 children with state +1 in that configuration. If v has state +1 in the predecessor configuration, then, depending on the state of \mathit{parent}_{v}, vertex v needs d(v)-k+1 or d(v)-k children with state +1. The analysis for the case of Y(v)=-1 is analogous.

Besides choosing which function to use, we also need to calculate the value of l. Given a vertex v whose parent \mathit{parent}_{v} has state c in the predecessor configuration, we define \mathit{threshold}_{v}(\mathit{current},c,k) as the least number of children of v that must have state Y(v) in the predecessor configuration so that v has state Y(v) in the next step, assuming that v has state \mathit{current} in the predecessor configuration. Thus,

\mathit{threshold}_{v}(\mathit{current},c,k)=\left\{\begin{array}{rl}\max\{d(v)-k+1,0\},&\mbox{if }Y(v)=\mathit{current}\mbox{ and }c\neq Y(v);\\ \max\{d(v)-k,0\},&\mbox{if }Y(v)=\mathit{current}\mbox{ and }c=Y(v);\\ k,&\mbox{if }Y(v)\neq\mathit{current}\mbox{ and }c\neq Y(v);\\ k-1,&\mbox{if }Y(v)\neq\mathit{current}\mbox{ and }c=Y(v).\end{array}\right. (18)

The time spent on each vertex v in the double loop of Algorithm 3 is O(d^{\prime}(v)^{2}). Summing over all vertices, we get an O(n^{2}) time complexity.
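The threshold of Equation (18) can be transcribed directly (a sketch; note the use of max rather than min, since a least number of children cannot be negative, and c = None here encodes the root's absent parent, playing the role of c = \infty in the text):

```python
def threshold(Yv, current, c, d, k):
    """Least number of children that must be in state Y(v) = Yv in a
    predecessor configuration so that v (current state `current`, parent
    state `c`, degree d) reaches Yv in one step of a k-reversible process."""
    parent_helps = (c == Yv)           # parent already counts toward Yv
    if Yv == current:                  # v must NOT flip: < k opposite neighbors
        return max(d - k + 1 - (1 if parent_helps else 0), 0)
    # v must flip: at least k neighbors opposite to `current`, i.e. in Yv
    return k - (1 if parent_helps else 0)
```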

5 Polynomial-time algorithm for Pre(2) on graphs with maximum degree no greater than 3

In this section, we show that Pre(2) is in P when \Delta(G)\leq 3, where \Delta(G) is the maximum degree in G. Hence, this result covers the case of cubic graphs.

We show how to reduce the problem to 2Sat, solvable in O(N+M) time; as before, N is the number of variables and M is the number of clauses [2]. That is, given a configuration Y, we want to create a 2Sat instance S such that S is satisfiable if and only if Y has a predecessor configuration.

We start creating S by adding literals x_{v} and \neg{x_{v}} for each vertex v in the graph. We construct the clauses of S in such a way that whenever Y has a predecessor configuration Y^{\prime}, S is satisfied by letting each x_{v} with Y^{\prime}(v)=+1 be true and each x_{v} with Y^{\prime}(v)=-1 be false. Conversely, from any satisfying truth assignment for S we construct a predecessor configuration of Y by assigning state +1 to v whenever x_{v} is true and state -1 whenever x_{v} is false. Because \Delta(G)\leq 3, we can construct the set of clauses in the following way.

For each vertex v such that Y(v)=+1:

  • If d(v)=1: In the predecessor configuration Y^{\prime}, v must have the same state as in configuration Y, since the process is 2-reversible. Thus, we add the clause:

    ∘ x_{v}.

    Clearly, this clause is satisfied whenever Y has a predecessor configuration.

  • If d(v)=2, with neighbors u and w: If in the predecessor configuration Y^{\prime} vertex v has state +1, then in Y^{\prime} at least one of its neighbors must also have state +1. If in Y^{\prime} vertex v has state -1, then both its neighbors must have state +1. We can encode these conditions by adding the following clauses:

    ∘ \neg{x_{v}}\rightarrow x_{u}\equiv x_{v}\vee x_{u};
    ∘ \neg{x_{v}}\rightarrow x_{w}\equiv x_{v}\vee x_{w};
    ∘ x_{v}\rightarrow x_{u}\vee x_{w}\equiv\neg{x_{v}}\vee x_{u}\vee x_{w}.

    Analyzing these three clauses reveals that when x_{v} is true, x_{u} or x_{w} must also be true in order to satisfy all three clauses. In case x_{v} is false, we force x_{u} and x_{w} to be true. We simplify the clauses as follows:

    ∘ x_{v}\vee x_{u};
    ∘ x_{v}\vee x_{w};
    ∘ x_{u}\vee x_{w}.

  • If d(v)=3, with neighbors u, w, and z: If in the predecessor configuration Y^{\prime} vertex v has state +1, then at least two of its neighbors must have state +1 in Y^{\prime}. If in Y^{\prime} vertex v has state -1, then again at least two of its neighbors must have state +1 in Y^{\prime}. Therefore, we add the following clauses:

    ∘ \neg{x_{v}}\rightarrow x_{u}\vee x_{w}\equiv x_{v}\vee x_{u}\vee x_{w};
    ∘ \neg{x_{v}}\rightarrow x_{u}\vee x_{z}\equiv x_{v}\vee x_{u}\vee x_{z};
    ∘ \neg{x_{v}}\rightarrow x_{w}\vee x_{z}\equiv x_{v}\vee x_{w}\vee x_{z};
    ∘ x_{v}\rightarrow x_{u}\vee x_{w}\equiv\neg{x_{v}}\vee x_{u}\vee x_{w};
    ∘ x_{v}\rightarrow x_{u}\vee x_{z}\equiv\neg{x_{v}}\vee x_{u}\vee x_{z};
    ∘ x_{v}\rightarrow x_{w}\vee x_{z}\equiv\neg{x_{v}}\vee x_{w}\vee x_{z}.

    As the value assigned to x_{v} is immaterial in this subset of clauses, we can easily simplify them:

    ∘ x_{u}\vee x_{w};
    ∘ x_{u}\vee x_{z};
    ∘ x_{w}\vee x_{z}.

For each vertex v such that Y(v)=-1, the cases are analogous:

  • If d(v)=1, create the clause:

    ∘ \neg{x_{v}}.

  • If d(v)=2, with neighbors u and w, create the clauses:

    ∘ \neg{x_{v}}\vee\neg{x_{u}};
    ∘ \neg{x_{v}}\vee\neg{x_{w}};
    ∘ \neg{x_{u}}\vee\neg{x_{w}}.

  • If d(v)=3, with neighbors u, w, and z, create the clauses:

    ∘ \neg{x_{u}}\vee\neg{x_{w}};
    ∘ \neg{x_{u}}\vee\neg{x_{z}};
    ∘ \neg{x_{w}}\vee\neg{x_{z}}.
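The whole construction can be sketched compactly: for Y(v)=+1 the clauses force enough neighbors into state +1, and for Y(v)=-1 they are the mirrored negations. A Python sketch, assuming the graph is given as an adjacency list with \Delta(G)\leq 3; the literal encoding (vertex, polarity) is ours:

```python
def build_2sat_clauses(adj, Y):
    """Return the clause set S for Pre(2) on a graph with max degree <= 3.
    A literal is a pair (vertex, polarity): True encodes x_v, False ~x_v."""
    clauses = []
    for v, nbrs in adj.items():
        pos = (Y[v] == +1)             # mirror all polarities when Y(v) = -1
        if len(nbrs) == 1:             # degree 1: v keeps its state
            clauses.append([(v, pos)])
        elif len(nbrs) == 2:           # degree 2: the simplified triple
            u, w = nbrs
            clauses += [[(v, pos), (u, pos)],
                        [(v, pos), (w, pos)],
                        [(u, pos), (w, pos)]]
        else:                          # degree 3: one clause per neighbor pair
            u, w, z = nbrs
            clauses += [[(u, pos), (w, pos)],
                        [(u, pos), (z, pos)],
                        [(w, pos), (z, pos)]]
    return clauses

# A single edge 0-1 with Y = (+1, +1) yields the unit clauses x_0 and x_1.
example = build_2sat_clauses({0: [1], 1: [0]}, {0: +1, 1: +1})
```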

We have constructed the set of clauses of S in such a way that S is satisfiable whenever Y has at least one predecessor configuration. It now remains for us to argue that the configuration Y^{\prime} obtained from a satisfying truth assignment, as explained above, is indeed a predecessor of Y.

Suppose, to the contrary, that such a Y^{\prime} is not a predecessor of Y; in other words, at least one vertex v exists that does not reach state Y(v) within one time step. Analyzing the case of Y(v)=+1, we have the following possibilities:

  • Y^{\prime}(v)=+1: In order for v to change its state, v must have at least two neighbors in state -1. Hence, v is necessarily a vertex of degree at least 2.

    If d(v)=2, then both neighbors of v have state -1, and clause x_{u}\vee x_{w} is not satisfied, which is a contradiction.

    If d(v)=3, then in a similar way there is an unsatisfied clause.

  • Y^{\prime}(v)=-1: In order for v to remain in state -1 in the next time step, at most one of its neighbors can have state +1. Notice that we must have d(v)\neq 1, since Y(v)=+1 and in that case the satisfied clause x_{v} would force Y^{\prime}(v)=+1.

    If d(v)=2, with neighbors u and w, then at least one of these two vertices must have state -1; hence v does not change its state to +1, but this implies that one of the clauses x_{v}\vee x_{u} or x_{v}\vee x_{w} is not satisfied.

    If d(v)=3, then, since we have two-literal clauses involving all three neighbors of v, at least one of them is not satisfied.

The case of Y(v)=-1 is analogous. We conclude that, if S is satisfiable, then the configuration Y^{\prime} is a predecessor configuration of Y.

To summarize, given a graph with n vertices, we create n variables and, in the worst case, 3n clauses, each with at most two literals. Thus, S is indeed a 2Sat instance and we can solve Pre(2) in O(n) time when \Delta(G)\leq 3.
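For testing the reduction on small instances, a brute-force oracle is handy (a sketch of ours, not part of the paper's algorithm): enumerate all 2^{n} candidate configurations and keep those that map to Y under one synchronous step of the 2-reversible process.

```python
from itertools import product

def step(adj, conf, k=2):
    """One synchronous step of a k-reversible process on adjacency list adj."""
    return {v: -s if sum(1 for u in adj[v] if conf[u] != s) >= k else s
            for v, s in conf.items()}

# Triangle 0-1-2 with pendant vertex 3 attached to 2; Y is all +1.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
Y = {v: +1 for v in adj}
preds = [dict(zip(adj, states))
         for states in product([-1, +1], repeat=len(adj))
         if step(adj, dict(zip(adj, states))) == Y]
```

Since the all-(+1) configuration has no opposite neighbors anywhere, it is a fixed point and therefore appears among its own predecessors.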

6 Conclusions

In this paper we have dealt with Pre(k) and #Pre(k), our denominations for the Predecessor Existence problem and its counting variation for k-reversible processes. We have shown that Pre(1) is solvable in polynomial time, that Pre(k) is NP-complete for k>1 even for bipartite graphs, and that it can be solved in polynomial time for trees. For trees we have also shown that #Pre(k) is polynomial-time solvable. We have also demonstrated the polynomial-time solvability of Pre(2) if \Delta(G)\leq 3.

We identify two problems worth investigating:

  • Identify other cases in which Pre(k) can be solved in polynomial time.

  • Study the complexity properties of #Pre(2) for \Delta(G)\leq 3.

Acknowledgments

The authors acknowledge partial support from CNPq, CAPES, and FAPERJ BBP grants.

References

  • [1] J. Adler. Bootstrap percolation. Physica A, 171:453–470, 1991.
  • [2] B. Aspvall, M. F. Plass, and R. E. Tarjan. A linear-time algorithm for testing the truth of certain quantified Boolean formulas. Inform. Process. Lett., 8(3):121–123, 1979.
  • [3] C. L. Barrett, H. B. Hunt III, M. V. Marathe, S. S. Ravi, D. J. Rosenkrantz, R. E. Stearns, and P. T. Tosic. Gardens of Eden and fixed points in sequential dynamical systems. Discrete Math. Theor. Comput. Sci. Proc., AA:95–110, 2001.
  • [4] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. The MIT Press, Cambridge, MA, third edition, 2009.
  • [5] M. H. DeGroot. Reaching a consensus. J. Am. Stat. Assoc., 69:167–182, 1974.
  • [6] M. C. Dourado, L. D. Penso, D. Rautenbach, and J. L. Szwarcfiter. Reversible iterative graph processes. Theor. Comput. Sci., 460:16–25, 2012.
  • [7] P. A. Dreyer Junior. Application and Variations of Domination in Graphs. Ph.D. dissertation, The State University of New Jersey, New Brunswick, NJ, 2000.
  • [8] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, New York, NY, 1979.
  • [9] D. C. Ghiglia, G. A. Mastin, and L. A. Romero. Cellular automata method for phase unwrapping. J. Opt. Soc. Am. A, 4:267–280, 1987.
  • [10] E. Goles and J. Olivos. The convergence of symmetric threshold automata. Inform. Control, 51:98–104, 1981.
  • [11] E. Goles and J. Olivos. Periodic behavior of binary threshold functions and applications. Discrete Appl. Math., 3:93–105, 1985.
  • [12] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA, 79:2554–2558, 1982.
  • [13] F. Luccio, L. Pagli, and H. Sanossian. Irreversible dynamos in butterflies. In Proceedings of the Sixth International Colloquium on Structural Information and Communication Complexity, pages 204–218, 1999.
  • [14] A. R. Mikler, S. Venkatachalam, and K. Abbas. Modeling infectious diseases using global stochastic cellular automata. J. Biol. Syst., 13:421–439, 2005.
  • [15] L. I. L. Oliveira. k-Reversible Processes in Graphs. M.Sc. thesis, Federal University of Rio de Janeiro, Brazil, 2012. In Portuguese.
  • [16] D. Peleg. Size bounds for dynamic monopolies. Discrete Appl. Math., 86:263–273, 1998.
  • [17] K. Sutner. On the computational complexity of finite cellular automata. J. Comput. Syst. Sci., 50:87–97, 1995.
  • [18] P. Tošić. Modeling and Analysis of the Collective Dynamics of Large-Scale Multi-Agent Systems. Ph.D. dissertation, University of Illinois, Urbana-Champaign, IL, 2006.