Probabilistic Parameterized Polynomial Time

†Research for this paper has been funded through NWO EW TOP grant 612.001.601.
Abstract
We examine a parameterized complexity class for randomized computation where only the error bound and not the full runtime is allowed to depend more than polynomially on the parameter, based on a proposal by Kwisthout in [15, 16]. We prove that this class, for which we propose the shorthand name PPPT, has a robust definition and is in fact equal to the intersection of the classes paraBPP and PP. This result is accompanied by a Cook-style proof of completeness for the corresponding promise class (under a suitable notion of reduction) for parameterized approximation versions of the inference problem in Bayesian networks, which is known to be PP-complete. With these definitions and results in place, we proceed by showing how it follows from this that derandomization is equivalent to efficient deterministic approximation methods for the inference problem. Furthermore, we observe as a straightforward application of a result due to Drucker in [8] that these problems cannot have polynomial size randomized kernels unless the polynomial hierarchy collapses to the third level. We conclude by indicating potential avenues for further exploration and application of this framework.
Keywords: Parameterized complexity theory · Randomized computation · Bayesian networks.

1 Preliminaries
The simple yet powerful idea which lies at the heart of the theory of parameterized complexity is that the hardness of computational problems may be better studied by analyzing the effects of particular aspects of their instances, treating these as a distinguished problem parameter and allowing the time (or other measures such as space) required to find a solution to depend on this parameter by an unbounded factor. This leads to an account of fixed-parameter tractability (FPT) and a hardness theory based on classes belonging to the W-hierarchy, which together mirror the parts played by P and NP in classical complexity theory.
In the two decades since the appearance of [6], the book by Downey and Fellows which largely formed the foundations of the field, research in parameterized complexity theory has gone far beyond this initial outlook and revealed a rich structure and many interesting questions to pursue, much of which is covered in the updated [7]. Yet with a few notable exceptions, little attention has been paid to probabilistic computation in the parameterized setting. The most encompassing effort thus far has been made by Montoya and Müller in [18], where they show amongst other things that the natural analogue BPFPT relates to other complexity classes in much the same way as does BPP in the classical setting. (We also mention [3], which studies PFPT, the parameterized counterpart to PP.)
Our aim with the present paper is to improve on this situation by demonstrating that studying parameterized probabilistic computation amounts to more than simply reconstructing results from the classical setting (which can already be a non-trivial task, as evidenced by the work done in [18]), and that results obtained in this way can have broader theoretical and practical significance. In particular we study a complexity class intended to capture probabilistic parameterized polynomial time computability, which we shall refer to as PPPT for this reason. (In [15, 16] this class was proposed under the name FERT, for fixed-error randomized tractability, intended to be reminiscent of FPT. We believe that the name PPPT is more appropriate as it calls to mind the class PP as well as ppt-reductions.)
This class PPPT is informally defined by considering probabilistic algorithms for parameterized problems, except allowing not the runtime but instead only the error bound to depend on the parameter by more than a polynomial factor. As such, PPPT can be thought of as containing those problems in PP which are nevertheless close to being in BPP and hence randomized tractable in a certain sense. This perspective, which we will explore more rigorously later on, formed much of the motivation of the class’s original proposal in [15].
In what follows, we assume the reader to be familiar with the basics of classical and parameterized complexity theory. However, we repeat the definitions of the complexity classes used here, mostly to facilitate the comparison with the class PPPT which we formally define in the next section. First recall the probabilistic complexity classes BPP and PP:
Definition 1
BPP is the class of decision problems computable in time $|x|^c$ for some constant $c$ by a probabilistic Turing machine which gives the correct answer with probability more than $\frac{1}{2} + \epsilon$ for some constant $\epsilon > 0$.
Definition 2
PP is the class of decision problems computable in time $|x|^c$ for some constant $c$ by a probabilistic Turing machine which gives the correct answer with probability more than $\frac{1}{2}$.
We mostly follow the original definition presented in [10] in that we consider a probabilistic Turing machine to be a Turing machine with access to random bits which it may query at every step of its execution, and whose transition function may depend on the values read off in this way. However, we include the generalization that a probabilistic Turing machine $M_r$ may query not one but $r$ random bits at each step, where as usual we drop the subscript $r$ when it can be inferred from the context. Before continuing we note that BPP ⊆ PP.
Definition 3
FPT is the class of parameterized decision problems computable in time $f(k) \cdot |x|^c$, where $f$ is a computable function in the parameter $k$ and $c$ is a constant.
Definition 4
paraBPP is the class of parameterized decision problems computable in time $f(k) \cdot |x|^c$ by a probabilistic Turing machine which gives the correct answer with probability more than $\frac{1}{2} + \epsilon$, where $f$ is a computable function in $k$ and $c$ and $\epsilon$ are constants.
The method of converting classical complexity classes C to parameterized classes paraC illustrated above originates from [9], and indeed paraP = FPT. One should keep in mind though that this construction does not yield the usual parameterized randomized classes such as BPFPT, as these are furthermore characterized by using at most $f(k) \cdot \log |x|$ random bits (cf. [4]). In fact, as observed in [18], paraBPP = FPT if and only if P = BPP. As we shall see in the next section, a similar statement remains true when we replace paraBPP by the class PPPT.
2 Error Parameterization
We now provide a formal definition of the class PPPT:
Definition 5
We say that a parameterized decision problem $k$-$Q$ is in PPPT if there exist a computable function $f$, a constant $c$ and a probabilistic Turing machine which on input $(x, k)$ halts in time $|x|^c$ with probability at least $\frac{1}{2} + \frac{1}{f(k)}$ of giving the correct answer.
Based on the convention that the parameter value $k$ is given as a unary string $1^k$ along with the rest of the input $x$, from this definition it is immediate that PPPT is a subclass of PP. Moreover, it can be shown that this definition is robust in two ways, which makes it easy to see that PPPT is also a subclass of paraBPP.
Proposition 1
The class PPPT in Definition 5 remains the same if (i) the error bound on a correct decision is $\frac{1}{2} + \frac{1}{f(k) + |x|^c}$ or $\frac{1}{2} + \frac{1}{f(k) \cdot |x|^c}$ instead; (ii) the probability of a false positive is not bounded away from $\frac{1}{2}$.
Proof
For (i), note that $f(k) + |x|^c \leq f(k) \cdot |x|^c \leq (f(k) + |x|^c)^2$ whenever $f(k), |x|^c \geq 2$, which shows that such error bounds may be used interchangeably. For independent identically distributed Bernoulli variables $X_1, \ldots, X_t$ with $\Pr(X_i = 1) \geq \frac{1}{2} + \epsilon$, Hoeffding’s inequality states that the probability of the average over $t$ trials being no greater than $\frac{1}{2}$ is at most $e^{-2\epsilon^2 t}$. Thus for $\epsilon = \frac{1}{f(k) + |x|^c}$ we may take the majority vote over $t = 4|x|^{2c}$ runs to obtain a probability of at least $\frac{1}{2} + \frac{1}{2f(k)}$ of giving the right answer. To see this, observe that the claim is trivially true whenever $f(k) \geq |x|^c$, as a single run then already has an advantage of at least $\frac{1}{2f(k)}$, while if $f(k) < |x|^c$ then we should have $e^{-2\epsilon^2 t} \leq e^{-2} \leq \frac{1}{2} - \frac{1}{2f(k)}$, which is always the case since $f(k) \geq 2$ by definition (the bound $\frac{1}{2} + \frac{1}{f(k)}$ being a probability). As this is only a polynomial number of repetitions, any error bound of either of these two alternative forms can be amplified to conform to the one required by Definition 5 without exceeding the time restrictions.
For (ii), we provide essentially the same argument as can be used to show that the class PP remains the same under this less strict requirement. Suppose that $M$ is a probabilistic Turing machine which decides the problem in time $|x|^c$ for some constant $c$, with a suitable computable function $f$ given such that Yes-instances are accepted with probability at least $\frac{1}{2} + \frac{1}{f(k)}$ and No-instances with probability at most $\frac{1}{2}$. Then we may consider a probabilistic Turing machine $M'$ which on input $(x, k)$ operates the same up to where $M$ would halt, at which point the outcome of $M$ is processed as follows. In case of a rejection, the outcome is simply preserved, while an acceptance is additionally rejected with probability $\frac{1}{2f(k)}$ (which is possible within the original polynomial time bound). Now a Yes-instance of the problem is accepted by $M'$ with probability at least $(\frac{1}{2} + \frac{1}{f(k)})(1 - \frac{1}{2f(k)}) \geq \frac{1}{2} + \frac{1}{2f(k)}$, while No-instances are accepted with probability at most $\frac{1}{2}(1 - \frac{1}{2f(k)}) = \frac{1}{2} - \frac{1}{4f(k)}$. Thus $M'$ halts in time polynomial in $|x|$ and bounds the probability of a false positive away from $\frac{1}{2}$ by $\frac{1}{4f(k)}$ as required, which concludes the proof.
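To make the amplification in part (i) concrete, consider the following minimal Python sketch; here decide_once stands in for a single run of a hypothetical machine with advantage eps, playing the role of $\frac{1}{f(k) + |x|^c}$ above.

```python
import math

def amplified_decide(decide_once, x, k, eps):
    """Majority vote over t runs of a decider assumed to be correct with
    probability at least 1/2 + eps (sketch of Proposition 1(i)).

    With 2 * eps**2 * t >= 2, Hoeffding's inequality bounds the majority's
    error by e**-2, so the vote is correct with probability at least
    1 - e**-2 > 1/2 + 1/(2*f(k)) whenever f(k) >= 2.
    """
    t = math.ceil(1 / eps ** 2)  # polynomially many runs when eps > 1/(2|x|^c)
    if t % 2 == 0:
        t += 1                   # an odd number of runs avoids ties
    votes = sum(1 for _ in range(t) if decide_once(x, k))
    return 2 * votes > t
```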
One can extend the probabilistic amplification used in the proof of Proposition 1(i) to instead remove the $f(k)$ term from the error bound altogether, at the cost of introducing this factor into the runtime. This is generally not allowed for PPPT, but permissible for paraBPP, hence PPPT is a subclass of paraBPP. In fact, this inclusion is easily shown to be strict.
Proposition 2
PPPT ⊊ paraBPP, i.e. PPPT is a strict subclass of paraBPP.
Proof
Let $Q$ be any decidable problem not in PP, and parameterize it by the input size, so that we obtain the problem $|x|$-$Q$. It is immediate that $|x|$-$Q$ cannot be in PPPT since $Q$ is not in PP. On the other hand, $|x|$-$Q$ is in FPT ⊆ paraBPP, as the parameterization permits an arbitrary runtime for the decision procedure for $Q$, hence the inclusion is strict.
Note that what the proof above actually shows is that there are problems in FPT which are not in PPPT. This tells us that the inclusion in the statement remains strict even if P = BPP, in which case we have that paraBPP = FPT: see Proposition 5.1 of [18]. Similar arguments can be used to show that P = PP implies FPT = paraPP. Most importantly however, we can extend these results by providing an exact characterization of PPPT, the proof of which relies on essentially the same strategy used to show that every problem in FPT is kernelizable.
Theorem 2.1
PPPT = paraBPP ∩ PP. In particular, if a problem $k$-$Q$ is in paraBPP and also in PP as an unparameterized problem, then $k$-$Q$ is in PPPT.
Proof
Suppose $k$-$Q$ is as in the statement of the theorem. Then there is a probabilistic Turing machine $M_1$ which on input $(x, k)$ halts in time $f(k) \cdot |x|^{c_1}$ and makes the correct decision with probability at least $\frac{1}{2} + \epsilon$. Furthermore, there is a probabilistic Turing machine $M_2$ which on input $x$ halts in time $|x|^{c_2}$ and makes the correct decision with probability more than $\frac{1}{2}$. Then on any given $(x, k)$, we can first run $M_1$ for $|x|^{c_1 + 1}$ steps, adopting its decision if it halts within that time. If it does, then we have given the correct answer with probability at least $\frac{1}{2} + \epsilon$. If it does not, then we may conclude that $f(k) > |x|$, in which case we proceed by running $M_2$, which will halt in time $|x|^{c_2}$ with a probability of at least $\frac{1}{2} + 2^{-|x|^{c_2}} \geq \frac{1}{2} + 2^{-f(k)^{c_2}}$ of giving the correct answer, as its acceptance probability is a multiple of $2^{-|x|^{c_2}}$. Thus in time polynomial in $|x|$ we can decide with probability at least $\frac{1}{2} + \frac{1}{g(k)}$, for a suitable computable function $g$, whether $(x, k)$ is in $k$-$Q$, which means $k$-$Q$ is in PPPT by Proposition 1. As we already observed that PPPT is a subclass of both paraBPP and PP, this yields the conclusion that PPPT = paraBPP ∩ PP as stated.
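The proof strategy amounts to a short combined procedure; in the following sketch run_m1 and run_m2 are hypothetical interfaces to a step-bounded simulation of $M_1$ and to $M_2$ respectively.

```python
def intersection_decide(x, k, run_m1, run_m2, c1):
    """Decision procedure from the proof of Theorem 2.1 (sketch).

    run_m1(x, k, budget) simulates M1 for at most `budget` steps and returns
    its decision, or None if M1 has not yet halted; run_m2(x) runs the PP
    machine M2 to completion.
    """
    budget = len(x) ** (c1 + 1)
    answer = run_m1(x, k, budget)
    if answer is not None:
        # f(k) <= |x|: M1 halted within the budget, with constant advantage.
        return answer
    # Otherwise f(k) > |x|, so M2's advantage of at least 2^(-|x|^c2)
    # is at least 2^(-f(k)^c2), a function of the parameter alone.
    return run_m2(x)
```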
For context it may be valuable to note that the idea of a complexity class being the intersection of a parameterized and a classical one has been considered before, in [2] and briefly in [18]. However, we believe it is important to note first of all that the class PPPT arose from a reasonably natural definition intended to capture a slightly weaker kind of parameterization, instead of its definition being explicitly constructed to ensure a correspondence to the intersection of paraBPP and PP. Furthermore, and perhaps most importantly, we can actually exhibit natural problems which are complete for (the promise version of) the class PPPT, something which has not yet been done for BPP. In the remainder of this section we shall describe these problems, which originate from the domain of approximate Bayesian inference. We subsequently prove completeness for these problems in Section 3, which sets us up to explore the main implications of our results in Section 4.
Below we provide a definition of a Bayesian network, so that we may introduce the problem of inference within such networks; for the reader interested in a more detailed treatment, we refer to a standard textbook such as [14].
Definition 6
A Bayesian network is a pair $\mathcal{B} = (G, \Pr)$, where $G = (V, E)$ is a directed acyclic graph whose nodes represent statistical variables, and $\Pr$ is a set of families of probability distributions containing, for each node $X \in V$ and each possible configuration of the variables represented by its parents, a distribution over the possible outcomes of the variable represented by $X$.
As reflected by the notation, one typically blurs the distinction between the node $X$ and the statistical variable which it represents, so that one may use $\Omega(X)$ to refer directly to the set of possible outcomes of this variable.
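Since several of the algorithms below rely on forward sampling, we include a minimal self-contained Python sketch of a Bayesian network under these conventions; the interface is hypothetical and purely illustrative.

```python
import random

class BayesianNetwork:
    """Nodes are kept in topological order; cpt[v] maps a tuple of parent
    values to a distribution {outcome: probability} over Omega(v)."""

    def __init__(self, nodes, parents, cpt):
        self.nodes = nodes      # e.g. ["A", "B"], topologically ordered
        self.parents = parents  # e.g. {"A": [], "B": ["A"]}
        self.cpt = cpt          # e.g. {"A": {(): {0: 0.5, 1: 0.5}}, ...}

    def forward_sample(self):
        """Draw a joint outcome by sampling every node given its parents."""
        sample = {}
        for v in self.nodes:
            dist = self.cpt[v][tuple(sample[u] for u in self.parents[v])]
            outcomes = list(dist)
            sample[v] = random.choices(outcomes, [dist[o] for o in outcomes])[0]
        return sample
```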
One of the main computational problems associated with Bayesian networks is that of inference, which is to determine what the likelihood is of some given combination of outcomes, possibly conditioned on certain specified outcomes for another set of variables. Below we describe the corresponding decision problem.
Bayesian Inference
Input: A Bayesian network $\mathcal{B} = (G, \Pr)$, two sets of variables $H, E \subseteq V$, joint value assignments $h$ and $e$, a rational threshold value $q$.
Question: Is $\Pr(H = h \mid E = e) > q$?
Based on whether $E = \emptyset$, we shall refer to this problem as Inference or Conditional Inference respectively, both of which are PP-complete (see also [19]). Despite their equivalence from a classical perspective, in terms of parameterized approximability the latter is more difficult: [16] has an overview of the main results known thus far. We consider the following specific parameterizations:
$\epsilon$-Inference
Input: A Bayesian network $\mathcal{B} = (G, \Pr)$, a set of variables $H \subseteq V$, a joint value assignment $h$, and rational values $q$ and $\epsilon > 0$.
Parameter: $k = \lceil \log (1/\epsilon) \rceil$.
Promise: $\Pr(H = h)$ does not lie between $q$ and $q + \epsilon$.
Question: Is $\Pr(H = h) > q$?
$\epsilon$-Conditional Inference
Input: A Bayesian network $\mathcal{B} = (G, \Pr)$, two sets of variables $H, E \subseteq V$, joint value assignments $h$, $e$, rational values $q$, $\epsilon > 0$.
Parameters: $\lceil \log (1/\epsilon) \rceil$, $\lceil \log (1/\Pr(E = e)) \rceil$, and $\lceil \log (1/\Pr(H = h)) \rceil$.
Promise: $\Pr(H = h \mid E = e)$ does not lie between $q$ and $(1 + \epsilon) \cdot q$.
Question: Is $\Pr(H = h \mid E = e) > q$?
In what follows we use $\epsilon$-Inference as a shorthand for Inference with the promise that $\Pr(H = h)$ does not lie between $q$ and $q + \epsilon$, and similarly for the relative approximation; for more on this approach, see [17]. For completeness’ sake, we remind the reader of the definition of a promise problem, based on [11].
Definition 7
A promise problem $Q$ consists of disjoint sets $Q_{\mathrm{Yes}}$ and $Q_{\mathrm{No}}$ of Yes- and No-instances respectively, where $Q_{\mathrm{Yes}} \cup Q_{\mathrm{No}}$ may be a strict subset of the set of all inputs; $Q_{\mathrm{Yes}} \cup Q_{\mathrm{No}}$ is called the promise of the problem $Q$.
We can separate promise problems into complexity classes just as we do for decision problems by converting the familiar definitions in the following way.
Definition 8
Let $\mathcal{C}$ be any complexity class of decision problems. The class p$\mathcal{C}$ of promise problems is defined by applying $\mathcal{C}$’s criteria of membership to promise problems instead, e.g. pP is the class of promise problems for which there exists a polynomial-time algorithm which answers correctly on the promise.
We now show for both of the approximate inference problems given above that they are in the promise version of PPPT, the former by an explicit argument, the latter by an appeal to Theorem 2.1.
Proposition 3
$\epsilon$-Inference is in pPPPT.
Proof
By forward sampling the network (i.e. generating outcomes according to the distributions of each variable, following some topological ordering of the graph) and accepting with probability $1 - \frac{q}{2}$ if the sample agrees with $h$, and with probability $\frac{1 - q}{2}$ if it does not, we arrive at a probability of acceptance of $\frac{1}{2} + \frac{\Pr(H = h) - q}{2}$. Under the promise that $\Pr(H = h)$ does not lie between $q$ and $q + \epsilon$, the probability of giving the correct answer is now at least $\frac{1}{2} + \frac{\epsilon}{2}$ on Yes-instances, while the probability of a false positive is at most $\frac{1}{2}$, hence the problem is in pPPPT by Proposition 1.
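In code, one run of this decision procedure looks as follows (a sketch reusing the hypothetical BayesianNetwork interface given after Definition 6; $h$ is a dictionary of variable–value pairs).

```python
import random

def inference_decide_once(network, h, q):
    """Single run of the sampler from Proposition 3 (sketch).

    Accepting with probability 1 - q/2 on agreement and (1 - q)/2 otherwise
    gives Pr(accept) = 1/2 + (Pr(H = h) - q)/2, so the advantage on
    Yes-instances is at least epsilon/2 under the promise.
    """
    sample = network.forward_sample()
    agrees = all(sample[v] == value for v, value in h.items())
    return random.random() < (1 - q / 2 if agrees else (1 - q) / 2)
```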
Proposition 4
$\epsilon$-Conditional Inference is in pPPPT.
Proof
In [12] it is shown that rejection sampling (which is forward sampling, dismissing the outcome if it does not agree with $e$) provides an algorithm which places $\epsilon$-Conditional Inference in paraBPP. (Note that the parameter $\Pr(h)$ is only necessary here because we ask for a relative approximation: the same holds when considering Inference instead.) Because Conditional Inference is itself in PP, by Theorem 2.1 we conclude that the parameterized problem is in pPPPT.
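A sketch of the rejection sampler on the same hypothetical BayesianNetwork interface; the comments indicate where the dependence on $\Pr(E = e)$ enters.

```python
def rejection_sample_estimate(network, h, e, n_samples):
    """Estimate Pr(H = h | E = e) by rejection sampling, cf. Henrion [12].

    Samples disagreeing with the evidence e are dismissed, so on average
    only n_samples * Pr(E = e) samples are kept; a reliable estimate thus
    requires a budget growing with 1/Pr(E = e).
    """
    kept = agreeing = 0
    for _ in range(n_samples):
        sample = network.forward_sample()
        if any(sample[v] != value for v, value in e.items()):
            continue  # rejected: sample disagrees with the evidence
        kept += 1
        if all(sample[v] == value for v, value in h.items()):
            agreeing += 1
    return agreeing / kept if kept else None
```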
3 Reductions and Completeness
At this point we wish to show that the two parameterized problems considered in the previous section are actually complete for the class pPPPT. In order to do this, we first determine which notion of reduction is the most suitable with respect to the class PPPT, after which we identify the machine acceptance problem for pPPPT and demonstrate its completeness for the class under these reductions. We then construct an explicit reduction from this problem to -Inference, and in turn reduce the latter to -Conditional Inference, thereby establishing completeness for both of these problems.
First of all, it is evident that while some form of parameterized reduction is required, the usual fpt-reductions are unsuitable because they allow the runtime to contain a factor superpolynomial in the parameter value, and so PPPT is not closed under these. Furthermore, the reductions cannot be probabilistic either: while this is possible for BPP, since there the error can be reduced to a constant using probabilistic amplification, mitigating the parameterized error bound is generally impossible without parameterized runtime (unless PPPT ⊆ BPP). Thus in this context it makes sense to consider the notion of a ppt-reduction (as [7] observes, the acronym ‘ppt’ can be read equally well as either “polynomial parameter transformation” or “parameterized polynomial transformation”), which was formally introduced in [1]:
Definition 9
A ppt-reduction from $k$-$Q$ to $k'$-$Q'$ is a polynomial-time computable function $\phi$ mapping an instance $(x, k)$ to an instance $(x', k')$ such that there exists a polynomial $p$ with the property that $k' \leq p(k)$, and furthermore $(x, k) \in k$-$Q$ if and only if $(x', k') \in k'$-$Q'$.
However, we can remove the constraint that $k'$ is bounded by a polynomial in $k$, and instead simply demand that its value depends only on $k$. (I hereby express my gratitude to the anonymous reviewer who raised this point.) The resulting class of reductions is a slightly broader one, for which we introduce the name pppt-reduction; it is easy to see that PPPT is closed under pppt-reductions as well. Thus we would like to exhibit a parameterized problem which is complete for PPPT under pppt-reductions; yet here we run into the same issue as with the class BPP, namely that it may be impossible to effectively decide whether a given probabilistic Turing machine has a suitably lower-bounded probability of being correct on all possible inputs. Hence we have to add this explicit requirement in the form of a promise, which means we arrive at the machine acceptance problem stated below, and instead study completeness for the promise class pPPPT.
Error PTM Acceptance
Input: Two strings $x$ and $1^t$ and a probabilistic Turing machine $M$.
Parameter: A positive integer $k$.
Promise: For any valid input to $M$ and after any number of steps, the probability of acceptance does not lie between $\frac{1}{2}$ and $\frac{1}{2} + 2^{-k}$.
Question: After $t$ steps, does $M$ accept $x$ more often than it rejects?
Proposition 5
Error PTM Acceptance is complete for pPPPT.
Proof
The problem Error PTM Acceptance is straightforwardly seen to lie in pPPPT, as it can be decided by running $M$ for $t$ steps on input $x$ and accepting only if it halts in an accepting state. Based on the promise this has a probability of at least $\frac{1}{2} + 2^{-k}$ of making the correct decision on Yes-instances and at least $\frac{1}{2}$ on No-instances, while taking time polynomial in the input size, which by Proposition 1 satisfies the requirements for being in PPPT.
In turn, presenting a reduction from a problem $k$-$Q \in$ PPPT to Error PTM Acceptance may be done in the following way. Suppose that $M$ witnesses that $k$-$Q \in$ PPPT, i.e. on input $(x, k)$ the machine will run in time at most $|x|^c$ and accept or reject based on whether $(x, k) \in k$-$Q$ with probability at least $\frac{1}{2} + \frac{1}{f(k)}$ of being correct. Then we can construct in polynomial time a machine $M'$ (with $k$ hardwired) which on input $x$ simulates the machine $M$ on $(x, k)$ for exactly $t = |x|^c$ steps, deferring the decision until then: this will itself take time at most $d \cdot t^2$ for some constant $d$.
Using the above, we find that $(x, k) \in k$-$Q$ precisely when after $t$ steps $M'$ accepts $x$ more often than it rejects, hence $(x, k) \in k$-$Q$ if and only if $(x, 1^t, M'; \lceil \log f(k) \rceil) \in$ Error PTM Acceptance. Since the remaining data (i.e. other than the machine $M'$) can also be given in polynomial time, this describes a pppt-reduction as required to show that Error PTM Acceptance is complete for pPPPT under pppt-reductions.
We can now show the problem $\epsilon$-Inference to be complete for pPPPT as well, by providing what is essentially a Cook-style construction of a Bayesian network from the probabilistic Turing machine $M$, the number of steps $t$ and the input $x$ which together make up an instance of Error PTM Acceptance. In contrast to the previous result, the reduction resulting from this construction is a ppt-reduction rather than merely a pppt-reduction.
Theorem 3.1
$\epsilon$-Inference is complete for pPPPT.
Proof
Given an instance $(x, 1^t, M; k)$ of Error PTM Acceptance we describe a ppt-reduction to $\epsilon$-Inference as follows. First, we construct the underlying graph of the Bayesian network $\mathcal{B}$ by stacking $t + 1$ layers of nodes and connecting these using an intermediate gadget. Any such layer $j$ consists of nodes $T^j_1, \ldots, T^j_t$ representing the potentially reachable cells of the machine tape, a pair of nodes $H_j$ and $S_j$ which track the current tape head position and machine state respectively, and a series of nodes $R^j_1, \ldots, R^j_r$ which act as the random bits which the machine uses to determine its next step. This means that $\Omega(T^j_i)$ consists of the tape alphabet (including blanks), $\Omega(H_j) = \{1, \ldots, t\}$, $\Omega(S_j)$ is the set of machine states, and finally $\Omega(R^j_i) = \{0, 1\}$.
Such a layer of nodes is connected to its successor through a gadget consisting again of nodes $C^j_1, \ldots, C^j_t$, with the parents of $C^j_i$ being $C^j_{i-1}$ and $T^j_i$ (where $H_j$ takes the place of $C^j_0$). These nodes can be interpreted as storing the position of the tape head at step $j$ and reading off the tape until the correct cell is encountered, after which its symbol is copied and carried over all the way to $C^j_t$. To achieve this, we require $\Omega(C^j_i)$ to be the disjoint union of $\Omega(T^j_i)$ and $\Omega(H_j)$. Now the layer combined with its gadget is connected to the next one by setting $\pi(T^{j+1}_i) = \{T^j_i, H_j, S_j, C^j_t\} \cup R^j$, $\pi(H_{j+1}) = \{H_j, S_j, C^j_t\} \cup R^j$ and $\pi(S_{j+1}) = \{S_j, C^j_t\} \cup R^j$, where $R^j$ denotes the set of nodes $R^j_1, \ldots, R^j_r$.
We now have to assign probability distributions to each of these nodes such that they fulfill their intended purposes. First of all, the nodes $R^j_i$ are all uniformly distributed so that they may be correctly regarded as random bits. As for the first layer, the remaining nodes are fixed to the first $t$ cells of the tape input, the tape head starting location and the initial state of the machine. All other nodes in the network have similar distributions which are deterministic given the values of their parents. In particular, $C^j_i = T^j_i$ if $C^j_{i-1} = i$ and $C^j_i = C^j_{i-1}$ otherwise (here $H_j$ should be read for $C^j_0$), while $T^{j+1}_i = T^j_i$ unless $H_j = i$, in which case $S_j$, $C^j_t$ and $R^j$ together determine the symbol overwriting the previous one according to the transition function of $M$; in general the values of $H_{j+1}$ and $S_{j+1}$ follow from those of their parents based on this transition function as well.
The reduction can now be straightforwardly expressed as follows: an instance $(x, 1^t, M; k)$ is mapped to an instance $(\mathcal{B}, \{S_t\}, s_{\mathrm{acc}}, \frac{1}{2}, 2^{-k})$ of $\epsilon$-Inference, where $\mathcal{B}$ is constructed as above and $s_{\mathrm{acc}}$ is the accepting state of $M$. Then as required we have that after $t$ steps $M$ accepts $x$ more often than it rejects if and only if $\Pr(S_t = s_{\mathrm{acc}}) > \frac{1}{2}$. Since $\mathcal{B}$ is of size polynomial in $|x|$, $|M|$ and $t$ (in particular because the conditional probability distribution at every node is of polynomial size) and the parameter remains unchanged, this indeed describes a ppt-reduction, which completes the proof.
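To illustrate the deterministic distributions, the following Python fragment fixes one plausible encoding of the two non-trivial local rules (head positions as integers, tape symbols as strings, and a hypothetical transition function delta); this is a sketch of the construction rather than part of the formal proof.

```python
def gadget_value(c_prev, t_i, i):
    """Value of the gadget node C_i^j given C_{i-1}^j (or H_j when i = 1)
    and T_i^j: carry the head position along until cell i matches it, then
    carry the symbol read there; the disjoint union Omega(C_i^j) is
    modelled here by the two Python types."""
    if isinstance(c_prev, int):          # still searching for the head
        return t_i if c_prev == i else c_prev
    return c_prev                        # symbol found earlier: carry it on

def next_tape_value(t_i, head, state, read, bits, i, delta):
    """Value of T_i^{j+1}: copy T_i^j unless the head is at cell i, in
    which case the transition function delta of M, applied to the current
    state, the symbol read (carried in by C_t^j) and the random bits,
    determines the overwriting symbol."""
    if head != i:
        return t_i
    symbol, _state, _move = delta(state, read, bits)
    return symbol
```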
In turn, we can reduce $\epsilon$-Inference to $\epsilon$-Conditional Inference, thereby extending the hardness and hence completeness to the latter.
Corollary 1
$\epsilon$-Conditional Inference is pPPPT-complete.
Proof
Given an instance $(\mathcal{B}, H, h, q, \epsilon)$ of $\epsilon$-Inference, we can adjust $\mathcal{B}$ by building an inverse binary tree of deterministic nodes below the nodes in $H$, with terminal node $X_h$ being $1$ when $H = h$ and $0$ otherwise. We then furthermore add an initial, uniformly distributed binary node $Y_0$ and another binary node $Y_1$ with parents $Y_0$ and $X_h$, distributed as follows: $\Pr(Y_1 = 1 \mid Y_0 = 1, X_h) = X_h$ and $\Pr(Y_1 = 1 \mid Y_0 = 0, X_h) = q$.
Now $\Pr(Y_1 = 1) = \frac{1}{2}(\Pr(H = h) + q)$, and moreover $\Pr(Y_0 = 1 \mid Y_1 = 1) = \frac{\Pr(H = h)}{\Pr(H = h) + q}$, hence we find that $\Pr(Y_0 = 1 \mid Y_1 = 1) > \frac{1}{2}$ if and only if $\Pr(H = h) > q$. This therefore describes a pppt-reduction from $\epsilon$-Inference to $\epsilon$-Conditional Inference, albeit not a ppt-reduction as an artefact of the particular choice of parameter value corresponding to $\Pr(h)$. The result then follows from Theorem 3.1.
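For concreteness, the conditional probability used here follows from Bayes’ rule applied to the distribution of $Y_1$ given above:
\[
\Pr(Y_0 = 1 \mid Y_1 = 1)
  = \frac{\Pr(Y_1 = 1 \mid Y_0 = 1)\,\Pr(Y_0 = 1)}{\sum_{b \in \{0,1\}} \Pr(Y_1 = 1 \mid Y_0 = b)\,\Pr(Y_0 = b)}
  = \frac{\frac{1}{2}\Pr(H = h)}{\frac{1}{2}\Pr(H = h) + \frac{1}{2}q}
  = \frac{\Pr(H = h)}{\Pr(H = h) + q},
\]
which exceeds $\frac{1}{2}$ precisely when $\Pr(H = h) > q$.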
To conclude this section, we discuss a question which may have occurred to the reader, namely whether one could simplify this approach by avoiding the inference problems altogether and working instead with the following variant of MajSat, which is the satisfiability problem complete for PP.
Gap-MajSat
Input: A propositional formula $\phi$.
Parameter: A positive integer $k$.
Promise: The ratio of satisfying truth assignments of $\phi$ does not lie between $\frac{1}{2}$ and $\frac{1}{2} + 2^{-k}$.
Question: Is $\phi$ satisfied by more than half of its possible truth assignments?
The issue here is that the canonical reduction from Error PTM Acceptance to Gap-MajSat requires a number of variables proportional to both $t$ and the size of the machine $M$, hence the original margin of $2^{-k}$ will shrink by a factor exponential in the input size, as the accounting below makes precise. The resulting parameter for the Gap-MajSat instance will thus depend on the input size, which means this reduction is not even an fpt-reduction. This points to a phenomenon also observed in the W-hierarchy, where W[SAT] (which is defined in terms of a parameterized satisfiability problem) is believed to be a proper subclass of W[P] (which is defined in terms of a parameterized circuit satisfiability or machine acceptance problem). That the reduction does work for the inference problems suggests that Bayesian networks do have the direct expressive power of Turing machines which propositional formulas lack.
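One way to make the dilution explicit is via a standard Cook–Levin accounting (assuming, for illustration, that each of the $2^m$ random strings extends to exactly one satisfying assignment of the $v$ auxiliary variables precisely when the corresponding computation accepts): a machine accepting with probability $p$ yields a formula $\phi$ over $m + v$ variables with
\[
\frac{\#\mathrm{sat}(\phi)}{2^{m+v}} = \frac{p \cdot 2^m}{2^{m+v}} = p \cdot 2^{-v},
\]
so an original gap of $2^{-k}$ in $p$ shrinks to a gap of $2^{-(k+v)}$ in the satisfying ratio; since $v$ is polynomial in $t$ and $|M|$, the corresponding Gap-MajSat parameter grows to roughly $k + v$.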
4 Application of Results
Ultimately, one of the main open questions in the area of probabilistic computation is whether BPP = P. In contrast to the more famous open question whether P = NP, the generally accepted view is that BPP is likely to equal P, based on works such as [13]. However, due to the lack of natural problems which are known to be BPP-complete, it has not been possible to focus efforts on proving a particular problem to lie in P in order to demonstrate the collapse of BPP. We believe that our work makes an important contribution in that it indirectly provides a problem which can play this part, namely $\epsilon$-Inference. This relies in part on the following proposition adapted from [18], which we hinted at earlier.
Proposition 6
P = BPP if and only if PPPT ⊆ FPT.
Proof
Suppose PPPT ⊆ FPT, and let $Q$ be an arbitrary problem in BPP. Then certainly $Q \in$ PP, and also $k$-$Q \in$ paraBPP for any constant parameterization, hence $k$-$Q \in$ PPPT by Theorem 2.1. By assumption it follows that $k$-$Q \in$ FPT, hence there is a deterministic algorithm for it which runs in time $f(k) \cdot |x|^c$. But now the factor $f(k)$ is a constant term, which means $Q$ is actually in P by this algorithm.
Conversely, suppose P = BPP, and let $k$-$Q$ be an arbitrary problem in PPPT with corresponding error bound function $f$. Given an instance $(x, k)$ of $k$-$Q$ we can determine whether $|x| \leq f(k)$: for the instances where this is true, the problem is in FPT, as exhaustive derandomization then takes time depending on $k$ alone, while it is in BPP for those where it is false, as the error bound $\frac{1}{2} + \frac{1}{f(k)}$ then exceeds $\frac{1}{2} + \frac{1}{|x|}$ and can be amplified to a constant with polynomially many repetitions. By assumption the latter problem is moreover in P, which means the entire problem is in FPT.
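The converse direction amounts to the following hybrid procedure (a sketch; f, brute_force and bpp_decide are hypothetical stand-ins for the error bound function, exhaustive derandomization, and the deterministic algorithm assumed to exist under P = BPP).

```python
def hybrid_decide(x, k, f, brute_force, bpp_decide):
    """Hybrid algorithm from the proof of Proposition 6 (sketch)."""
    if len(x) <= f(k):
        # Small instances: simulate the PPPT machine on all random strings
        # and take the majority; this takes 2^(|x|^c) <= 2^(f(k)^c) steps,
        # a computable function of the parameter alone.
        return brute_force(x, k)
    # Large instances: the advantage 1/f(k) > 1/|x| is polynomially large,
    # so amplification yields a BPP algorithm, replaced here by its assumed
    # deterministic polynomial-time counterpart.
    return bpp_decide(x, k)
```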
Combined with Theorem 3.1, we arrive at the following result:
Theorem 4.1
P = BPP if and only if there exists an efficient deterministic absolute approximation algorithm for Inference, i.e. a deterministic approximation which runs in time $f(\epsilon^{-1}) \cdot |x|^c$ for some constant $c$ and computable function $f$.
Proof
Suppose P = BPP; then PPPT ⊆ FPT by Proposition 6, and the same argument applies to the corresponding promise classes, so $\epsilon$-Inference can be decided deterministically on its promise in time $f(\epsilon^{-1}) \cdot |x|^c$. Binary search on the threshold $q$ then yields an absolute approximation algorithm of the required form. Conversely, such an approximation algorithm decides $\epsilon$-Inference on its promise within the stated time bound, placing this pPPPT-complete problem (Theorem 3.1) in pFPT. As pFPT is closed under pppt-reductions, it follows that pPPPT ⊆ pFPT, hence PPPT ⊆ FPT and thus P = BPP by Proposition 6.
At the same time, we can provide some indication as to the hardness of $\epsilon$-Inference by means of the framework of kernelization lower bounds, which is where the notion of a ppt-reduction originated. Here we consider the following reformulation, inspired by [5], of a theorem by Drucker found in [8].
Theorem 4.2
If $Q$ is an NP-hard or coNP-hard problem and $k$-$Q'$ is a parameterized problem such that there exists a polynomial-time algorithm which maps any tuple $(x_1, \ldots, x_m)$ of $s$-sized instances of $Q$ to an instance $(y, k)$ of $k$-$Q'$ such that
1. if all $x_i$ are No-instances of $Q$, then $(y, k)$ is a No-instance of $k$-$Q'$;
2. if exactly one $x_i$ is a Yes-instance of $Q$, then $(y, k)$ is a Yes-instance of $k$-$Q'$;
3. the parameter $k$ of $(y, k)$ is bounded by $s^c$ for some constant $c$;
then $k$-$Q'$ has no randomized (two-sided constant error) polynomial-sized kernels unless NP ⊆ coNP/poly, collapsing the polynomial hierarchy to the third level.
We can use this theorem to prove that $\epsilon$-Inference has no randomized polynomial-sized kernels unless the polynomial hierarchy collapses.
Proposition 7
$\epsilon$-Inference has no randomized polynomial-sized kernel unless NP ⊆ coNP/poly.
Proof
Consider the NP-hard problem SAT, and let $\phi_1, \ldots, \phi_m$ be propositional formulas in $n$ variables each. We can rename the variables so that every formula uses the same variables $x_1, \ldots, x_n$ if necessary, introduce a new variable $z$, and take the disjunction $\psi = z \vee \phi_1 \vee \cdots \vee \phi_m$. Then $\psi$ is a formula with $n + 1$ variables with a majority of its truth assignments being satisfying if and only if at least one of the $\phi_i$ is satisfiable, hence $(\psi, n + 1) \in$ Gap-MajSat if and only if $\phi_i \in$ SAT for some $i$. Thus by Theorem 4.2 Gap-MajSat does not have randomized polynomial-sized kernels unless NP ⊆ coNP/poly. Furthermore, by [1] this property is closed under ppt-reductions, and the usual reduction from Gap-MajSat to $\epsilon$-Inference (which amounts to constructing a Boolean circuit out of the given formula) is in fact a ppt-reduction, hence neither does $\epsilon$-Inference have randomized polynomial-sized kernels under the assumption that NP ⊄ coNP/poly.
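For illustration, the composition itself is a one-line operation on formulas given as strings over shared variables (the renaming step is assumed to have been carried out already):

```python
def or_compose(formulas):
    """Map formulas phi_1, ..., phi_m over shared variables x_1, ..., x_n
    to psi = z or phi_1 or ... or phi_m, with z a fresh variable; psi has
    a majority of satisfying assignments over its n + 1 variables if and
    only if some phi_i is satisfiable (all unsatisfiable gives exactly
    half, via z)."""
    return "z | " + " | ".join("(" + phi + ")" for phi in formulas)
```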
While perhaps unsurprising, this result serves in particular as a reminder that hard problems in PPPT such as $\epsilon$-Inference are not solvable by polynomial kernelization followed by a probabilistic (PP) algorithm.
5 Closing Remarks
In this paper we have explored the proposal made in [15, 16] of an alternative parameterized randomized complexity class, which we have called PPPT and which we have shown to be identical to the intersection of PP and paraBPP. In the preceding sections we showed that the problem $\epsilon$-Inference is a natural fit for this class, as it is not only a member of the class in a straightforward way (Proposition 3), it is moreover complete for the corresponding promise class (Theorem 3.1). Because of the close relation between classical and parameterized probabilistic computation (Proposition 6), the class PPPT turns out to have unexpected broader relevance, as finding an efficient deterministic absolute approximation algorithm for Inference is necessary and sufficient for the derandomization of BPP to P (Theorem 4.1).
In other words, we are in the fortunate circumstance where efforts to address a long-standing open question originating in complexity theory can actually coincide with the search for a novel algorithm capable of solving a practical problem, and most importantly the existence of such an algorithm actually follows from a conjecture supported by other considerations. With this paper we wish to call attention to this opportunity for researchers with theoretical and practical motivations alike to engage with a challenge which is broadly relevant to multiple research communities at once. It is our hope that such focused efforts on the $\epsilon$-Inference problem may lead to a valuable breakthrough in both the fields of structural complexity theory and of probabilistic graphical models.
Acknowledgements
The author thanks Johan Kwisthout and Hans Bodlaender for sharing insightful remarks in his discussions with them, and also Ralph Bottesch for providing useful comments on an early draft of this paper.
References
- [1] Bodlaender, H.L., Thomassé, S., Yeo, A.: Kernel bounds for disjoint cycles and disjoint paths. Theoretical Computer Science 412, 4570–4578 (2011)
- [2] Cai, L., Chen, J., Downey, R.G., Fellows, M.R.: On the structure of parameterized problems in NP. In: Enjalbert, P., Mayr, E.W., Wagner, K.W. (eds.) Proceedings of STACS 94. pp. 507–520 (1994)
- [3] Chauhan, A., Rao, B.V.R.: Parameterized analogues of probabilistic computation. In: Ganguly, S., Krishnamurti, R. (eds.) Algorithms and Discrete Applied Mathematics. pp. 181–192 (2015)
- [4] Chen, Y., Flum, J., Grohe, M.: Machine-based methods in parameterized complexity theory. Theoretical Computer Science 339, 167–199 (2005)
- [5] Dell, H.: AND-compression of NP-complete problems: Streamlined proof and minor observations. Algorithmica 75, 403–423 (2016)
- [6] Downey, R.G., Fellows, M.R.: Parameterized Complexity. Springer (1999)
- [7] Downey, R.G., Fellows, M.R.: Fundamentals of parameterized complexity. Springer (2013)
- [8] Drucker, A.: New limits to classical and quantum instance compression. Tech. Rep. TR12-112, Electronic Colloquium on Computational Complexity (ECCC) (2014), http://eccc.hpi-web.de/report/2012/112/
- [9] Flum, J., Grohe, M.: Describing parameterized complexity classes. Information and Computation 187, 291–319 (2003)
- [10] Gill, J.: Computational complexity of probabilistic Turing machines. SIAM Journal on Computing 6(4), 675–695 (1977)
- [11] Goldreich, O.: On promise problems: A survey. In: Goldreich, O., Rosenberg, A.L., Selman, A.L. (eds.) Theoretical Computer Science: Essays in Memory of Shimon Even, pp. 254–290. Springer (2006)
- [12] Henrion, M.: Propagating uncertainty in Bayesian networks by probabilistic logic sampling. In: Lemmer, J.F., Kanal, L.N. (eds.) Uncertainty in Artificial Intelligence, Machine Intelligence and Pattern Recognition, vol. 5, pp. 149–163 (1988)
- [13] Impagliazzo, R., Wigderson, A.: P = BPP if E requires exponential circuits: Derandomizing the XOR lemma. In: Proceedings of STOC ’97. pp. 220–229 (1997)
- [14] Koller, D., Friedman, N.: Probabilistic graphical models: principles and techniques. MIT Press (2009)
- [15] Kwisthout, J.: Tree-width and the computational complexity of MAP approximations in Bayesian networks. Journal of Artificial Intelligence Research 53, 699–720 (2015)
- [16] Kwisthout, J.: Approximate inference in Bayesian networks: parameterized complexity results. International Journal of Approximate Reasoning 93, 119–131 (2018)
- [17] Marx, D.: Parameterized complexity and approximation algorithms. The Computer Journal 51, 60–78 (2008)
- [18] Montoya, J.A., Müller, M.: Parameterized random complexity. Theory of Computing Systems 52, 221–270 (2013)
- [19] Park, J.D., Darwiche, A.: Complexity results and approximation strategies for MAP explanations. Journal of Artificial Intelligence Research 21, 101–133 (2004)