
(1) LRE, EPITA, Le Kremlin-Bicêtre, France; (2) University of Liverpool, Liverpool, UK; (3) University of Gothenburg and Chalmers University of Technology, Gothenburg, Sweden; (4) Sapienza University of Rome, Rome, Italy; (5) Rice University, Houston, Texas, USA

Engineering an 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} Synthesis Tool

Alexandre Duret-Lutz (1), Shufang Zhu (2), Nir Piterman (3), Giuseppe De Giacomo (4), Moshe Y. Vardi (5)
Abstract

The problem of 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} reactive synthesis is to build a transducer, whose output is based on a history of inputs, such that, for every infinite sequence of inputs, the conjoint evolution of the inputs and outputs has a prefix that satisfies a given 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} specification.

We describe the implementation of an $\mathsf{LTL_f}$ synthesizer that outperforms existing tools on our benchmark suite. It is based on a new, direct translation from $\mathsf{LTL_f}$ to a DFA represented as an array of Multi-Terminal Binary Decision Diagrams (MTBDDs) sharing their nodes. This MTBDD-based representation can be interpreted directly as a reachability game that is solved on-the-fly during its construction.

1 Introduction

Reactive synthesis is concerned with synthesizing programs (a.k.a. strategies) for reactive computations (e.g., processes, protocols, controllers, robots) in active environments [47, 30, 26], typically, from temporal logic specifications. In AI, Reactive Synthesis, which is related to (strong) planning for temporally extended goals in fully observable nondeterministic domains [16, 3, 4, 14, 6, 34, 21, 15], has been studied with a focus on logics on finite traces such as 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} [33, 7, 22, 23]. In fact, 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} synthesis [23] is one of the two main success stories of reactive synthesis so far (the other being the GR(1) fragment of LTL [46]), and has brought about impressive advances in scalability [56, 8, 18, 20].

Reactive synthesis for $\mathsf{LTL_f}$ involves the following steps [23]: (1) distinguishing uncontrollable input ($\mathcal{I}$) and controllable output ($\mathcal{O}$) variables in an $\mathsf{LTL_f}$ specification $\varphi$ of the desired system behavior; (2) constructing a DFA accepting the behaviors satisfying $\varphi$; (3) interpreting this DFA as a two-player reachability game, and finding a winning strategy for the controller. Step (2) has two main bottlenecks: the DFA is worst-case doubly exponential, and its propositional alphabet $\Sigma=2^{\mathcal{I}\cup\mathcal{O}}$ is exponential. The first blow-up only happens in the worst case, while the second, which we call alphabet explosion, always happens.

Mona [39] addresses the alphabet-explosion problem, which happens also in MSO, by representing a DFA with Multi-Terminal Binary Decision Diagrams (MTBDDs) [36]. MTBDDs are a variant of BDDs [12] with arbitrary terminal values. If terminal values encode destination states, an MTBDD can compactly represent all outgoing transitions of a single DFA state. A DFA is represented, through its transition function, as an array of MTBDDs sharing their nodes.

The first $\mathsf{LTL_f}$ synthesizer, Syft [56], converted $\mathsf{LTL_f}$ into first-order logic in order to build an MTBDD-encoded DFA with Mona. Syft then converted this DFA into a BDD representation to solve the reachability game using a symbolic fixpoint computation. Syft demonstrated that DFA construction is the main bottleneck in $\mathsf{LTL_f}$ synthesis, motivating several follow-up efforts.

One approach to effective DFA construction uses compositional techniques, decomposing the input 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} formula into smaller subformulas whose DFAs can be minimized before being recombined. Lisa [8] decomposes top-level conjunctions, while Lydia [19] and LydiaSyft [29] decompose every operator.

Compositional methods construct the full DFA before synthesis can proceed, limiting their scalability. On-the-fly approaches [53] construct the DFA incrementally, while simultaneously solving the game, allowing strategies to be found before the complete DFA is built. The DFA construction may use various techniques. Cynthia [20] uses Sentential Decision Diagrams (SDDs) [17] to generate all outgoing transitions of a state at once. Alternatively, Nike [28] and MoGuSer [54] use a SAT-based method to construct one successor at a time. The game is solved by forward exploration with suitable backpropagation.

Contributions and Outline In Section 3, we propose a direct and efficient translation from $\mathsf{LTL_f}$ to MTBDD-encoded DFAs (henceforth called MTDFAs). In Section 4, we show that, given an appropriate ordering of BDD variables, $\mathsf{LTL_f}$ realizability can be solved by interpreting the MTBDD nodes of the MTDFA as the vertices of a reachability game, known to be solvable in linear time by backpropagation of the vertices that are winning for the output player. We give a linear-time implementation that solves the game on-the-fly while it is constructed. To create more opportunities to abort the on-the-fly construction early, we additionally backpropagate vertices that are known to be winning for the input player. We implemented these techniques in two tools (ltlf2dfa and ltlfsynt) that compare favorably with existing tools on benchmarks from the $\mathsf{LTL_f}$-Synthesis Competition. To meet space limits, Section 5 only reports on the $\mathsf{LTL_f}$ realizability benchmark, and we refer readers to our artifact for the other results [24].

2 Preliminaries

2.1 Words over Assignments

A word $\sigma$ of length $n$ over an alphabet $\Sigma$ is a function $\sigma:\{0,1,\ldots,n-1\}\to\Sigma$. We use $\Sigma^{n}$ (resp. $\Sigma^{\star}$ and $\Sigma^{+}$) to denote the set of words of length $n$ (resp. of any length $n\geq 0$ and $n>0$). We use $|\sigma|$ to denote the length of a word $\sigma$. For $\sigma\in\Sigma^{n}$ and $0\leq i<n$, $\sigma(..i)$ denotes the prefix of $\sigma$ of length $i+1$.

Let 𝒫\mathcal{P} be a finite set of Boolean variables (a.k.a. atomic propositions). We use 𝔹𝒫\mathbb{B}^{\mathcal{P}} to denote the set of all assignments, i.e., functions 𝒫𝔹\mathcal{P}\to\mathbb{B} mapping variables to values in 𝔹={,}\mathbb{B}=\{\bot,\top\}.

Given two disjoint sets of variables 𝒫1\mathcal{P}_{1} and 𝒫2\mathcal{P}_{2}, and two assignments w1𝔹𝒫1w_{1}\in\mathbb{B}^{\mathcal{P}_{1}} and w2𝔹𝒫2w_{2}\in\mathbb{B}^{\mathcal{P}_{2}}, we use w1w2:(𝒫1𝒫2)𝔹w_{1}\sqcup w_{2}:(\mathcal{P}_{1}\cup\mathcal{P}_{2})\to\mathbb{B} to denote their combination.

In a system modeled using discrete Boolean signals that evolve synchronously, we assign a variable to each signal, and use a word σ(𝔹𝒫)+\sigma\in(\mathbb{B}^{\mathcal{P}})^{+} over assignments of 𝒫\mathcal{P} to represent the conjoint evolution of all signals over time.

We extend \sqcup to such words. For two words σ1(𝔹𝒫1)n\sigma_{1}\in(\mathbb{B}^{\mathcal{P}_{1}})^{n}, σ2(𝔹𝒫2)n\sigma_{2}\in(\mathbb{B}^{\mathcal{P}_{2}})^{n} of length nn over assignments that use disjoint sets of variables, we use σ1σ2(𝔹𝒫1𝒫2)n\sigma_{1}\sqcup\sigma_{2}\in(\mathbb{B}^{\mathcal{P}_{1}\cup\mathcal{P}_{2}})^{n} to denote a word such that (σ1σ2)(i)=σ1(i)σ2(i)(\sigma_{1}\sqcup\sigma_{2})(i)=\sigma_{1}(i)\sqcup\sigma_{2}(i) for 0i<n0\leq i<n.

2.2 Linear Temporal Logic over Finite, Nonempty Words.

We use classical 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} semantics over nonempty finite words [22].

Definition 1 (𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} formulas)

An 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} formula φ\varphi is built from a set 𝒫\mathcal{P} of variables, using the following grammar where p𝒫p\in\mathcal{P}, and {,,,,}\odot\in\{\land,\lor,\rightarrow,\leftrightarrow,...\} is any Boolean operator: φ::=𝑡𝑡𝑓𝑓p¬φφφ𝖷φ𝖷!φφ𝖴φφ𝖱φ𝖦φ𝖥φ\varphi::=\mathit{tt}\mid\mathit{ff}\mid p\mid\lnot\varphi\mid\varphi\odot\varphi\mid\mathsf{X}\varphi\mid\mathsf{X^{!}}\varphi\mid\varphi\mathbin{\mathsf{U}}\varphi\mid\varphi\mathbin{\mathsf{R}}\varphi\mid\mathsf{G}\varphi\mid\mathsf{F}\varphi.

Symbols 𝑡𝑡\mathit{tt} and 𝑓𝑓\mathit{ff} represent the true and false 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} formulas. Temporal operators are 𝖷\mathsf{X} (weak next), 𝖷!\mathsf{X^{!}} (strong next), 𝖴\mathbin{\mathsf{U}} (until), 𝖱\mathbin{\mathsf{R}} (release), 𝖦\mathsf{G} (globally), and 𝖥\mathsf{F} (finally). 𝖫𝖳𝖫𝖿(𝒫){\mathsf{LTL_{f}}}(\mathcal{P}) denotes the set of formulas produced by the above grammar. We use 𝗌𝖿(φ)\mathsf{sf}(\varphi) to denote the set of subformulas for φ\varphi. A maximal temporal subformula of φ\varphi is a subformula whose primary operator is temporal and that is not strictly contained within any other temporal subformula of φ\varphi.

The satisfaction of a formula φ𝖫𝖳𝖫𝖿(𝒫)\varphi\in{\mathsf{LTL_{f}}}(\mathcal{P}) by word σ(𝔹𝒫)+\sigma\in(\mathbb{B}^{\mathcal{P}})^{+} of length n>0n>0 at position 0i<n0\leq i<n, denoted σ,iφ\sigma,i\models\varphi, is defined as follows.

\begin{align*}
\sigma,i\models\mathit{tt}&\iff i<n & \sigma,i\models\mathsf{X}\varphi&\iff(i+1=n)\lor(\sigma,i+1\models\varphi)\\
\sigma,i\models\mathit{ff}&\iff i=n & \sigma,i\models\mathsf{X^{!}}\varphi&\iff(i+1<n)\land(\sigma,i+1\models\varphi)\\
\sigma,i\models p&\iff p\in\sigma(i) & \sigma,i\models\mathsf{F}\varphi&\iff\exists j\in[i,n),\,\sigma,j\models\varphi\\
\sigma,i\models\lnot\varphi&\iff\lnot(\sigma,i\models\varphi) & \sigma,i\models\mathsf{G}\varphi&\iff\forall j\in[i,n),\,\sigma,j\models\varphi\\
\sigma,i\models\varphi_{1}\odot\varphi_{2}&\iff(\sigma,i\models\varphi_{1})\odot(\sigma,i\models\varphi_{2})\\
\sigma,i\models\varphi_{1}\mathbin{\mathsf{U}}\varphi_{2}&\iff\exists j\in[i,n),\,(\sigma,j\models\varphi_{2})\land(\forall k\in[i,j),\,\sigma,k\models\varphi_{1})\\
\sigma,i\models\varphi_{1}\mathbin{\mathsf{R}}\varphi_{2}&\iff\forall j\in[i,n),\,(\sigma,j\models\varphi_{2})\lor(\exists k\in[i,j),\,\sigma,k\models\varphi_{1})
\end{align*}

The set of words that satisfy φ𝖫𝖳𝖫𝖿(𝒫)\varphi\in{\mathsf{LTL_{f}}}(\mathcal{P}) is (φ)={σ(𝔹𝒫)+σ,0φ}\mathscr{L}(\varphi)=\{\sigma\in(\mathbb{B}^{\mathcal{P}})^{+}\mid\sigma,0\models\varphi\}.
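To make the semantics concrete, the sketch below evaluates $\sigma,i\models\varphi$ by direct recursion over the definition above. It is written in Python with an ad-hoc tuple encoding of formulas and dict-encoded assignments; this encoding is ours and only meant to illustrate the definition, it is not part of the paper's implementation.

# Illustrative sketch (not from the paper): recursive evaluation of the
# LTLf semantics.  Formulas are nested tuples such as ('ap', 'p'),
# ('not', f), ('and', f, g), ('or', f, g), ('X', f), ('X!', f),
# ('U', f, g), ('R', f, g), ('F', f), ('G', f), ('tt',), ('ff',).
# A word is a list of dicts mapping variable names to Booleans.
def sat(sigma, i, phi):
    n = len(sigma)
    op = phi[0]
    if op == 'tt':  return i < n
    if op == 'ff':  return i == n          # never holds at a position of the word
    if op == 'ap':  return sigma[i][phi[1]]
    if op == 'not': return not sat(sigma, i, phi[1])
    if op == 'and': return sat(sigma, i, phi[1]) and sat(sigma, i, phi[2])
    if op == 'or':  return sat(sigma, i, phi[1]) or sat(sigma, i, phi[2])
    if op == 'X':   return i + 1 == n or sat(sigma, i + 1, phi[1])    # weak next
    if op == 'X!':  return i + 1 < n and sat(sigma, i + 1, phi[1])    # strong next
    if op == 'F':   return any(sat(sigma, j, phi[1]) for j in range(i, n))
    if op == 'G':   return all(sat(sigma, j, phi[1]) for j in range(i, n))
    if op == 'U':   return any(sat(sigma, j, phi[2]) and
                               all(sat(sigma, k, phi[1]) for k in range(i, j))
                               for j in range(i, n))
    if op == 'R':   return all(sat(sigma, j, phi[2]) or
                               any(sat(sigma, k, phi[1]) for k in range(i, j))
                               for j in range(i, n))
    raise ValueError(op)

# A nonempty word sigma belongs to L(phi) iff sat(sigma, 0, phi).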

Example 1

Consider the following 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} formulas over 𝒫={i0,i1,i2,o1,o2}\mathcal{P}=\{i_{0},i_{1},i_{2},o_{1},o_{2}\}: Ψ1=𝖦((i0(o1i1))((¬i0)(o1i2)))\Psi_{1}=\mathsf{G}((i_{0}\rightarrow(o_{1}\leftrightarrow i_{1}))\land((\lnot i_{0})\rightarrow(o_{1}\leftrightarrow i_{2}))), and Ψ2=(𝖦𝖥o2)(𝖥i0)\Psi_{2}=(\mathsf{G}\mathsf{F}o_{2})\leftrightarrow(\mathsf{F}i_{0}). If we interpret i0,i1,i2i_{0},i_{1},i_{2} as input signals, and o1,o2o_{1},o_{2} as output signals, formula Ψ1\Psi_{1} specifies a 1-bit multiplexer: the value of the signal o1o_{1} should be equal to the value of either i1i_{1} or i2i_{2} depending on the setting of i0i_{0}. Formula Ψ2\Psi_{2} specifies that the last value of o2o_{2} should be \top if and only if i0i_{0} was \top at some instant.

Definition 2 (Propositional Equivalence [27])

For φ𝖫𝖳𝖫𝖿(𝒫)\varphi\in{\mathsf{LTL_{f}}}(\mathcal{P}), let φP\varphi_{P} be the Boolean formula obtained from φ\varphi by replacing every maximal temporal subformula ψ\psi by a Boolean variable xψx_{\psi}. Two formulas α,β𝖫𝖳𝖫𝖿(𝒫)\alpha,\beta\in{\mathsf{LTL_{f}}}(\mathcal{P}) are propositionally equivalent, denoted αβ\alpha\equiv\beta, if αP\alpha_{P} and βP\beta_{P} are equivalent Boolean formulas.

Example 2

Formulas α=(𝖦b)((𝖥a)(𝖦b))\alpha=(\mathsf{G}b)\lor((\mathsf{F}a)\land(\mathsf{G}b)) and β=𝖦b\beta=\mathsf{G}b are propositionally equivalent. Indeed, αP=x𝖦b(x𝖥ax𝖦b)=x𝖦b=βP\alpha_{P}=x_{\mathsf{G}b}\lor(x_{\mathsf{F}a}\land x_{\mathsf{G}b})=x_{\mathsf{G}b}=\beta_{P}.

Note that αβ\alpha\equiv\beta implies (α)=(β)\mathscr{L}(\alpha)=\mathscr{L}(\beta), but the converse is not true in general. Since \equiv is an equivalence relation, we use [α]𝖫𝖳𝖫𝖿(𝒫)[\alpha]_{\equiv}\in{\mathsf{LTL_{f}}}(\mathcal{P}) to denote some unique representative of the equivalence class of α\alpha with respect to \equiv.

2.3 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} Realizability

Our goal is to build a tool that decides whether an 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} formula is realizable.

Definition 3 ([23, 37])

Given two disjoint sets of variables $\mathcal{I}$ (inputs) and $\mathcal{O}$ (outputs), a controller is a function $\rho:(\mathbb{B}^{\mathcal{I}})^{*}\to\mathbb{B}^{\mathcal{O}}$ that produces an assignment of the output variables given a history of assignments of the input variables.

Given a word of $n$ input assignments $\sigma\in(\mathbb{B}^{\mathcal{I}})^{n}$, the controller can be used to generate a word of $n$ output assignments $\sigma_{\rho}\in(\mathbb{B}^{\mathcal{O}})^{n}$. The definition of $\sigma_{\rho}$ may use two semantics, depending on whether we want the controller to have access to the current input assignment when deciding the output assignment:

Mealy semantics:

σρ(i)=ρ(σ(..i))\sigma_{\rho}(i)=\rho(\sigma(..i)) for all 0i<n0\leq i<n.

Moore semantics:

σρ(i)=ρ(σ(..i1))\sigma_{\rho}(i)=\rho(\sigma(..i-1)) for all 0i<n0\leq i<n.

A formula φ𝖫𝖳𝖫𝖿(𝒪)\varphi\in{\mathsf{LTL_{f}}}(\mathcal{I}\cup\mathcal{O}) is said to be Mealy-realizable or Moore-realizable if there exists a controller ρ\rho such that for any word σ(𝔹)ω\sigma\in(\mathbb{B}^{\mathcal{I}})^{\omega} there exists a position kk such that (σσρ)(..k)(φ)(\sigma\sqcup\sigma_{\rho})(..k)\in\mathscr{L}(\varphi) using the desired semantics.
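As a small illustration of Definition 3, the sketch below applies a controller to a word of input assignments under both semantics; the encoding of histories as tuples of dicts and the example controller are ours, not the paper's.

# Illustrative sketch: apply a controller rho to a word of input
# assignments.  rho maps a tuple of input assignments (the available
# history) to one output assignment (a dict over O).
def apply_controller(rho, sigma_in, semantics='mealy'):
    out = []
    for i in range(len(sigma_in)):
        if semantics == 'mealy':           # sigma_rho(i) = rho(sigma(..i))
            history = tuple(sigma_in[:i + 1])
        else:                              # Moore: sigma_rho(i) = rho(sigma(..i-1))
            history = tuple(sigma_in[:i])
        out.append(rho(history))
    return out

# A Mealy controller for Psi1 of Example 1 may look at the current input:
def mux_controller(history):
    cur = history[-1]
    return {'o1': cur['i1'] if cur['i0'] else cur['i2'], 'o2': True}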

Example 3

Formula Ψ1\Psi_{1} (from Example 1) is Mealy-realizable but not Moore-realizable. Formula Ψ2\Psi_{2} is both Mealy and Moore-realizable.

2.4 Multi-Terminal BDDs

Let 𝒮\mathcal{S} be a finite set. Given a finite set of variables 𝒫={p0,p1,,pn1}\mathcal{P}=\{p_{0},p_{1},\ldots,p_{n-1}\} (that are implicitly ordered by their index) we use f:𝔹𝒫𝒮f:\mathbb{B}^{\mathcal{P}}\to\mathcal{S} to denote a function that maps an assignment of all those variables to an element of 𝒮\mathcal{S}. Given a variable p𝒫p\in\mathcal{P} and a Boolean b𝔹b\in\mathbb{B}, the function fp=b:𝔹𝒫{p}𝒮f_{p=b}:\mathbb{B}^{\mathcal{P}\setminus\{p\}}\to\mathcal{S} represents a generalized co-factor obtained by replacing pp by bb in ff. When 𝒮=𝔹\mathcal{S}=\mathbb{B}, a function f:𝔹𝒫𝔹f:\mathbb{B}^{\mathcal{P}}\to\mathbb{B} can be encoded into a Binary Decision Diagram (BDD) [11]. Multi-Terminal Binary Decision Diagrams (MTBDDs) [44, 45, 32, 39], also called Algebraic Decision Diagrams (ADDs) [5, 51], generalize BDDs by allowing arbitrary values on the leaves of the graph.

A Multi-Terminal BDD encodes any function $f:\mathbb{B}^{\mathcal{P}}\to\mathcal{S}$ as a rooted, directed acyclic graph. We use the term nodes to refer to the vertices of this graph. All nodes in an MTBDD are represented by triples of the form $(p,\ell,h)$. In an internal node, $p\in\mathcal{P}$ and $\ell,h$ point to successor MTBDD nodes called the $\mathsf{low}$ and $\mathsf{high}$ links. The intent is that if $(p,\ell,h)$ is the root of the MTBDD representing the function $f$, then $\ell$ and $h$ are the roots of the MTBDDs representing the functions $f_{p=\bot}$ and $f_{p=\top}$, respectively. Leaves of the graph, called terminals, hold values in $\mathcal{S}$. For consistency with internal nodes, we represent terminals with triples of the form $(\infty,s,\infty)$ where $s\in\mathcal{S}$. When comparing the first elements of different triples, we assume that $\infty$ is greater than all variables. We use $\mathsf{MTBDD}(\mathcal{P},\mathcal{S})$ to denote the set of MTBDD nodes that can appear in the representation of an arbitrary function $\mathbb{B}^{\mathcal{P}}\to\mathcal{S}$.

Following the classical implementations of BDD packages [11, 1], we assume that MTBDDs are ordered (variables of 𝒫\mathcal{P} are ordered and visited in increasing order by all branches of the MTBDD) and reduced (isomorphic subgraphs are merged by representing each triplet only once, and internal nodes with identical 𝗅𝗈𝗐\mathsf{low} and 𝗁𝗂𝗀𝗁\mathsf{high} links are skipped over). Doing so ensures that each function f:𝔹𝒫𝒮f:\mathbb{B}^{\mathcal{P}}\to\mathcal{S} has a unique MTBDD representation for a given order of variables.
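The following minimal sketch shows one way to maintain reduced, ordered MTBDDs with a unique (hash-consing) table, using the $(p,\ell,h)$ triples of the text with variables replaced by their level. It is illustrative only (the tool extends BuDDy instead), and all class and method names are ours.

# Minimal sketch of a reduced, ordered MTBDD store with hash-consing.
# Internal nodes are triples (level, low, high); terminals are (INF, value, INF).
INF = float('inf')

class Store:
    def __init__(self, var_order):
        self.order = list(var_order)                    # p0 < p1 < ... < p(n-1)
        self.level = {p: i for i, p in enumerate(self.order)}
        self.unique = {}                                # triple -> triple

    def terminal(self, value):
        t = (INF, value, INF)
        return self.unique.setdefault(t, t)

    def node(self, p, low, high):
        if low == high:                                 # reduction: skip useless tests
            return low
        key = (self.level[p], low, high)
        return self.unique.setdefault(key, key)         # hash-consing

    def evaluate(self, m, w):
        # Follow assignment w (a dict variable -> Boolean) down to a terminal: m(w).
        while m[0] != INF:
            m = m[2] if w[self.order[m[0]]] else m[1]
        return m[1]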

Given $m\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S})$ and an assignment $w\in\mathbb{B}^{\mathcal{P}}$, we write $m(w)$ for the element of $\mathcal{S}$ stored on the terminal of $m$ that is reached by following the assignment $w$ in the structure of $m$ (cf. App. 0.A.1). We use $|m|$ to denote the number of MTBDD nodes reachable from $m$.

Let $m_{1}\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}_{1})$ and $m_{2}\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}_{2})$ be two MTBDD nodes representing functions $f_{i}:\mathbb{B}^{\mathcal{P}}\to\mathcal{S}_{i}$, and let $\odot:\mathcal{S}_{1}\times\mathcal{S}_{2}\to\mathcal{S}_{3}$ be a binary operation. One can easily construct $m_{3}\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}_{3})$ representing the function $f_{3}(p_{0},\ldots,p_{n-1})=f_{1}(p_{0},\ldots,p_{n-1})\odot f_{2}(p_{0},\ldots,p_{n-1})$ by generalizing the apply2 function typically found in BDD libraries [32] (cf. App. 0.A.2). We use $m_{1}\odot m_{2}$ to denote the MTBDD that results from this construction.
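A possible generalized apply2 over the Store sketched above could look as follows; it recurses on the topmost variable of either argument and combines terminal values with the supplied operation, memoizing pairs of nodes. This is a sketch under our toy encoding, not the library's code.

# Sketch of a generalized apply2: combine two MTBDDs with op on terminals.
def apply2(store, m1, m2, op):
    cache = {}
    def rec(a, b):
        if (a, b) in cache:
            return cache[(a, b)]
        if a[0] == INF and b[0] == INF:                 # two terminals
            res = store.terminal(op(a[1], b[1]))
        else:
            lvl = min(a[0], b[0])                       # topmost variable level
            a_lo, a_hi = (a[1], a[2]) if a[0] == lvl else (a, a)
            b_lo, b_hi = (b[1], b[2]) if b[0] == lvl else (b, b)
            res = store.node(store.order[lvl], rec(a_lo, b_lo), rec(a_hi, b_hi))
        cache[(a, b)] = res
        return res
    return rec(m1, m2)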

For $m\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S})$ we use $\textnormal{leaves}(m)\subseteq\mathcal{S}$ to denote the set of elements of $\mathcal{S}$ that label terminals reachable from $m$. This set can be computed in $\Theta(|m|)$ (cf. App. 0.A.3).
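For completeness, here is a sketch of leaves(m) that visits each reachable node once (hence the $\Theta(|m|)$ bound), again over the triple encoding used in the sketches above.

# Sketch of leaves(m): collect terminal values reachable from m.
def leaves(m):
    seen, found, stack = set(), set(), [m]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        if n[0] == INF:
            found.add(n[1])                # terminal value
        else:
            stack.append(n[1]); stack.append(n[2])
    return found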

2.5 MTBDD-Based Deterministic Finite Automata

We now define an MTBDD-based representation of a DFA with a propositional alphabet, inspired by Mona’s DFA representation [36, 39].

Definition 4 (MTDFA)

An MTDFA is a tuple $\mathcal{A}=\langle\mathcal{Q},\mathcal{P},\iota,\Delta\rangle$, where $\mathcal{Q}$ is a finite set of states, $\mathcal{P}$ is a finite (and ordered) set of variables, $\iota\in\mathcal{Q}$ is the initial state, and $\Delta:\mathcal{Q}\to\mathsf{MTBDD}(\mathcal{P},\mathcal{Q}\times\mathbb{B})$ represents the outgoing transitions of each state. For a word $\sigma\in(\mathbb{B}^{\mathcal{P}})^{\star}$ of length $n$, let $(q_{i},b_{i})_{0\leq i\leq n}$ be the sequence of pairs defined recursively as follows: $(q_{0},b_{0})=(\iota,\bot)$, and for $0<i\leq|\sigma|$, $(q_{i},b_{i})=\Delta(q_{i-1})(\sigma(i-1))$ is the pair reached by evaluating assignment $\sigma(i-1)$ on $\Delta(q_{i-1})$. The word $\sigma$ is accepted by $\mathcal{A}$ iff $b_{n}=\top$. The language of $\mathcal{A}$, denoted $\mathscr{L}(\mathcal{A})$, is the set of words accepted by $\mathcal{A}$.

Figure 1: An MTDFA where $\mathcal{P}=\{i_0,i_1,i_2,o_1,o_2\}$ and $\mathcal{Q}\subseteq\mathsf{LTL_f}(\mathcal{P})$. Following classical BDD conventions, an internal node $(p,\ell,h)$ is drawn as a node labeled by $p$ with links to $\ell$ and $h$. A terminal $(\infty,(\alpha,b),\infty)$ is drawn as a box labeled by $\alpha$, rendered differently depending on whether $b=\bot$ or $b=\top$ (accepting). The MTBDD $m=\Delta(\alpha)$ representing the successors of state $\alpha$ is indicated by an arrow from $\alpha$ to $m$. Subformula $\Psi_1$ abbreviates $\mathsf{G}((i_0\rightarrow(o_1\leftrightarrow i_1))\land((\lnot i_0)\rightarrow(o_1\leftrightarrow i_2)))$.
Example 4

Figure 1 shows an MTDFA where $\mathcal{Q}\subseteq\mathsf{LTL_f}(\{i_0,i_1,i_2,o_1,o_2\})$. The states of $\mathcal{Q}$ are the dashed rectangles on the left. For each such state $q\in\mathcal{Q}$, the dashed arrow points to the MTBDD node representing $\Delta(q)$. The MTBDD nodes are shared between all states. If, starting from the initial state $\iota$ at the top left, we read the assignment $w=(i_0{\to}\top, i_1{\to}\top, i_2{\to}\top, o_1{\to}\top, o_2{\to}\top)$, we follow only the $\mathsf{high}$ links (plain arrows) and reach the accepting terminal $\Psi_1\land(\mathsf{G}\mathsf{F}o_2)$. If we read this assignment a second time, starting this time from state $\Psi_1\land(\mathsf{G}\mathsf{F}o_2)$ on the left, we reach the same accepting terminal. Therefore, all non-empty words of the form $ww\ldots w$ are accepted by this automaton.

An MTDFA can be regarded as a semi-symbolic representation of a DFA over a propositional alphabet (cf. App. 0.C). From a state $q$, after reading the assignment $w$, the automaton jumps to the state $q^{\prime}$ obtained by computing $(q^{\prime},b)=\Delta(q)(w)$. The value of $b$ indicates whether that assignment is allowed to be the last one of the word being read. By definition, an MTDFA cannot accept the empty word.
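Using the MTBDD helpers sketched earlier, running an MTDFA on a word is then a simple loop. In the sketch below, delta is assumed to be a dict mapping each state to the root of its MTBDD and word a list of assignments; this encoding is ours, not the paper's.

# Sketch of a run of an MTDFA (Def. 4), reusing the Store sketched above.
def mtdfa_accepts(store, delta, iota, word):
    q, b = iota, False                  # (q0, b0) = (iota, bottom)
    for w in word:
        q, b = store.evaluate(delta[q], w)
    return b                            # the empty word is always rejected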

MTDFAs are compact representations of DFAs, because the MTBDDs representing the successors of the different states share their common nodes. Boolean operations can be implemented over MTDFAs with the expected semantics, i.e., $\mathscr{L}(\mathcal{A}_{1}\odot\mathcal{A}_{2})=\{\sigma\in(\mathbb{B}^{\mathcal{P}})^{+}\mid(\sigma\in\mathscr{L}(\mathcal{A}_{1}))\odot(\sigma\in\mathscr{L}(\mathcal{A}_{2}))\}$ (cf. App. 0.B).

3 Translating 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} to MTBDD and MTDFA

This section shows how to directly transform a formula $\varphi\in\mathsf{LTL_f}(\mathcal{P})$ into an MTDFA $\mathcal{A}_{\varphi}=\langle\mathcal{Q},\mathcal{P},\varphi,\Delta\rangle$ such that $\mathscr{L}(\varphi)=\mathscr{L}(\mathcal{A}_{\varphi})$. The translation is reminiscent of other translations of $\mathsf{LTL_f}$ to DFA [22, 20], but it leverages the fact that MTBDDs can provide a normal form for $\mathsf{LTL_f}$ formulas.

The construction maps states to 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} formulas, i.e., 𝒬𝖫𝖳𝖫𝖿(𝒫)\mathcal{Q}\subseteq{\mathsf{LTL_{f}}}(\mathcal{P}). Terminals appearing in the MTBDDs of 𝒜φ\mathcal{A}_{\varphi} will be labeled by pairs (α,b)𝖫𝖳𝖫𝖿(𝒫)×𝔹(\alpha,b)\in{\mathsf{LTL_{f}}}(\mathcal{P})\times\mathbb{B}, so we use 𝗍𝖾𝗋𝗆(α,b)=(,(α,b),)\mathsf{term}(\alpha,b)=(\infty,(\alpha,b),\infty) to shorten the notation from Section 2.4.

The conversion from φ\varphi to 𝒜φ\mathcal{A}_{\varphi} is based on the function 𝗍𝗋:𝖫𝖳𝖫𝖿(𝒫)𝖬𝖳𝖡𝖣𝖣(𝒫,𝖫𝖳𝖫𝖿(𝒫)×𝔹)\mathsf{tr}:{\mathsf{LTL_{f}}}(\mathcal{P})\to\mathsf{MTBDD}(\mathcal{P},{\mathsf{LTL_{f}}}(\mathcal{P})\times\mathbb{B}) defined inductively as follows:

\begin{align*}
\mathsf{tr}(\mathit{ff})&=\mathsf{term}(\mathit{ff},\bot) & \mathsf{tr}(\mathsf{X}\alpha)&=\mathsf{term}(\alpha,\top)\\
\mathsf{tr}(\mathit{tt})&=\mathsf{term}(\mathit{tt},\top) & \mathsf{tr}(\mathsf{X^{!}}\alpha)&=\mathsf{term}(\alpha,\bot)\\
\mathsf{tr}(p)&=(p,\mathsf{term}(\mathit{ff},\bot),\mathsf{term}(\mathit{tt},\top))\text{ for }p\in\mathcal{P} & \mathsf{tr}(\lnot\alpha)&=\lnot\mathsf{tr}(\alpha)\\
\mathsf{tr}(\alpha\odot\beta)&=\mathsf{tr}(\alpha)\odot\mathsf{tr}(\beta)\text{ for any }\odot\in\{\land,\lor,\rightarrow,\leftrightarrow,\oplus\}\\
\mathsf{tr}(\alpha\mathbin{\mathsf{U}}\beta)&=\mathsf{tr}(\beta)\lor(\mathsf{tr}(\alpha)\land\mathsf{term}(\alpha\mathbin{\mathsf{U}}\beta,\bot)) & \mathsf{tr}(\mathsf{F}\alpha)&=\mathsf{tr}(\alpha)\lor\mathsf{term}(\mathsf{F}\alpha,\bot)\\
\mathsf{tr}(\alpha\mathbin{\mathsf{R}}\beta)&=\mathsf{tr}(\beta)\land(\mathsf{tr}(\alpha)\lor\mathsf{term}(\alpha\mathbin{\mathsf{R}}\beta,\top)) & \mathsf{tr}(\mathsf{G}\alpha)&=\mathsf{tr}(\alpha)\land\mathsf{term}(\mathsf{G}\alpha,\top)
\end{align*}

Boolean operators that appear to the right of the equal sign are applied on MTBDDs as discussed in Section 2.4. Terminals in 𝖫𝖳𝖫𝖿(𝒫)×𝔹{\mathsf{LTL_{f}}}(\mathcal{P})\times\mathbb{B} are combined with: (α1,b1)(α2,b2)=([α1α2],b1b2)(\alpha_{1},b_{1})\odot(\alpha_{2},b_{2})=([\alpha_{1}\odot\alpha_{2}]_{\equiv},b_{1}\odot b_{2}) and ¬(α,b)=([¬α],¬b)\lnot(\alpha,b)=([\lnot\alpha]_{\equiv},\lnot b).
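The translation rules above can be prototyped directly on top of the apply2 sketch, as shown below. This is an illustration of the rules, not the code of ltlf2dfa: the canon argument stands for the representative computation $[\cdot]_{\equiv}$ of Definition 2 (a possible implementation is sketched in the Optimizations paragraph below), and negation as well as the other Boolean connectives are omitted for brevity.

# Illustrative prototype of tr() over the tuple-encoded formulas used in
# the semantics sketch and the MTBDD helpers sketched earlier.
def lift(bool_op, canon):
    # combine terminals: (a1,b1) op (a2,b2) = ([a1 op a2]_equiv, b1 op b2)
    def op(t1, t2):
        (a1, b1), (a2, b2) = t1, t2
        b = (b1 and b2) if bool_op == 'and' else (b1 or b2)
        return (canon((bool_op, a1, a2)), b)
    return op

def tr(store, phi, canon):
    t = lambda alpha, b: store.terminal((alpha, b))
    AND = lambda x, y: apply2(store, x, y, lift('and', canon))
    OR = lambda x, y: apply2(store, x, y, lift('or', canon))
    op = phi[0]
    if op == 'ff':  return t(('ff',), False)
    if op == 'tt':  return t(('tt',), True)
    if op == 'ap':  return store.node(phi[1], t(('ff',), False), t(('tt',), True))
    if op == 'and': return AND(tr(store, phi[1], canon), tr(store, phi[2], canon))
    if op == 'or':  return OR(tr(store, phi[1], canon), tr(store, phi[2], canon))
    if op == 'X':   return t(phi[1], True)
    if op == 'X!':  return t(phi[1], False)
    if op == 'F':   return OR(tr(store, phi[1], canon), t(phi, False))
    if op == 'G':   return AND(tr(store, phi[1], canon), t(phi, True))
    if op == 'U':   return OR(tr(store, phi[2], canon),
                              AND(tr(store, phi[1], canon), t(phi, False)))
    if op == 'R':   return AND(tr(store, phi[2], canon),
                               OR(tr(store, phi[1], canon), t(phi, True)))
    raise ValueError(op)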

Theorem 3.1

For $\varphi\in\mathsf{LTL_f}(\mathcal{P})$, let $\mathcal{A}_{\varphi}=\langle\mathcal{Q},\mathcal{P},\iota,\Delta\rangle$ be the MTDFA obtained by setting $\iota=[\varphi]_{\equiv}$, $\Delta=\mathsf{tr}$, and letting $\mathcal{Q}$ be the smallest subset of $\mathsf{LTL_f}(\mathcal{P})$ such that $\iota\in\mathcal{Q}$ and such that for any $q\in\mathcal{Q}$ and any $(\alpha,b)\in\textnormal{leaves}(\Delta(q))$, we have $\alpha\in\mathcal{Q}$. With this construction, $|\mathcal{Q}|$ is finite and $\mathscr{L}(\varphi)=\mathscr{L}(\mathcal{A}_{\varphi})$.

Proof

(sketch) By definition of $\mathsf{tr}$, $\mathcal{Q}$ contains only Boolean combinations of subformulas of $\varphi$. Propositional equivalence implies that the number of such combinations is finite: $|\mathcal{Q}|\leq 2^{2^{|\mathsf{sf}(\varphi)|}}$. The language equivalence follows from the definition of $\mathsf{LTL_f}$ and from some classical $\mathsf{LTL_f}$ equivalences. For instance, the rule for $\mathsf{tr}(\alpha\mathbin{\mathsf{U}}\beta)$ is based on the equivalence $\mathscr{L}(\alpha\mathbin{\mathsf{U}}\beta)=\mathscr{L}(\beta\lor(\alpha\land\mathsf{X^{!}}(\alpha\mathbin{\mathsf{U}}\beta)))$.

Example 5

Figure 1 is the MTDFA for formula $\Psi_1\land\Psi_2$, presented in Example 1. Many more examples can be found in the associated artifact [24] (see also App. 0.D).

The definition of $\mathsf{tr}(\cdot)$ as an MTBDD representation of the set of successors of a state can be thought of as a symbolic representation of Antimirov's linear forms [2] for DFAs with propositional alphabets. Antimirov presented linear forms as an efficient way to construct all (partial) derivatives at once, without having to iterate over the alphabet. For $\mathsf{LTL_f}$, formula progressions [20] are the equivalent of Brzozowski derivatives [13]. Here, $\mathsf{tr}(\cdot)$ computes all formula progressions at once, without having to iterate over an exponential number of assignments.

Finally, note that while this construction works with any order for 𝒫\mathcal{P}, different orders might produce a different number of MTBDD nodes.

Optimizations

The previous definitions can be improved in several ways.

Our MTBDD implementation additionally supports the Boolean terminals of standard BDDs alongside the terminals used so far. In other words, we work in $\mathsf{MTBDD}(\mathcal{P},(\mathsf{LTL_f}(\mathcal{P})\times\mathbb{B})\cup\mathbb{B})$ and encode $\mathsf{term}(\mathit{ff},\bot)$ and $\mathsf{term}(\mathit{tt},\top)$ directly as $\bot$ and $\top$, respectively. With those changes, apply2 may be modified to shortcut the recursion depending on the values of $m_{1}$, $m_{2}$, and $\odot$. For instance, if $\odot=\land$ and $m_{1}=\top$, then $m_{2}$ can be returned immediately (cf. App. 0.A.4). Such shortcuts may be implemented for $\mathsf{MTBDD}(\mathcal{P},\mathcal{S}\cup\mathbb{B})$ regardless of the nature of $\mathcal{S}$, so our implementation of MTBDD operations is independent of $\mathsf{LTL_f}$.

When combining terminals during the computation of 𝗍𝗋\mathsf{tr}, one has to compute the representative formula [α1α2][\alpha_{1}\odot\alpha_{2}]_{\equiv}. This can be done by converting α1P{\alpha_{1}}_{P} and α2P{\alpha_{2}}_{P} into BDDs, keeping track of such conversions in a hash table. Two propositionally equivalent formulas will have the same BDD representation. While we are looking for a representative formula, we can also use the opportunity to simplify the formula at hand. We use the following very simple rewritings, for patterns that occur naturally in the output of 𝗍𝗋\mathsf{tr}:
(α𝖴β)βα𝖴β(\alpha\mathbin{\mathsf{U}}\beta)\lor\beta\leadsto\alpha\mathbin{\mathsf{U}}\beta, (α𝖱β)βα𝖱β(\alpha\mathbin{\mathsf{R}}\beta)\land\beta\leadsto\alpha\mathbin{\mathsf{R}}\beta, (𝖥β)β𝖥β(\mathsf{F}\beta)\lor\beta\leadsto\mathsf{F}\beta, (𝖦β)β𝖦β(\mathsf{G}\beta)\land\beta\leadsto\mathsf{G}\beta.
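The tool computes representatives by hashing BDDs of the propositional abstractions; the sketch below achieves a similar effect with a truth table over the abstracted subformulas (workable only for small formulas), restricted to its support so that, e.g., the two formulas of Example 2 obtain the same representative. It can serve as the canon argument of the tr sketch above; it does not include the rewritings listed here.

# Sketch of a representative computation for [.]_equiv (Def. 2).
from itertools import product

TEMPORAL = {'X', 'X!', 'U', 'R', 'F', 'G'}
_representatives = {}

def _skeleton(phi, table):
    op = phi[0]
    if op in TEMPORAL or op == 'ap':                  # maximal temporal subformula
        return table.setdefault(phi, len(table))      # or atomic proposition
    if op in ('tt', 'ff'):
        return op
    return (op,) + tuple(_skeleton(a, table) for a in phi[1:])

def _truth(sk, val):
    if sk == 'tt': return True
    if sk == 'ff': return False
    if isinstance(sk, int): return val[sk]
    if sk[0] == 'not': return not _truth(sk[1], val)
    if sk[0] == 'and': return all(_truth(a, val) for a in sk[1:])
    if sk[0] == 'or':  return any(_truth(a, val) for a in sk[1:])
    raise ValueError(sk[0])

def canon(phi):
    table = {}
    sk = _skeleton(phi, table)
    subs = sorted(table, key=table.get)
    rows = {v: _truth(sk, v) for v in product([False, True], repeat=len(subs))}
    support = [i for i in range(len(subs))
               if any(rows[v] != rows[v[:i] + (not v[i],) + v[i+1:]] for v in rows)]
    key = (tuple(subs[i] for i in support),
           frozenset(tuple(v[i] for i in support) for v in rows if rows[v]))
    return _representatives.setdefault(key, phi)      # first formula seen wins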

Once 𝒜φ\mathcal{A}_{\varphi} has been built, two states q,q𝒬q,q^{\prime}\in\mathcal{Q} such that Δ(q)=Δ(q)\Delta(q)=\Delta(q^{\prime}) can be merged by replacing all occurrences of qq^{\prime} by qq in the leaves of Δ\Delta.

Example 6

The automaton from Figure 1 (cf. App. 0.C and 0.D) has two pairs of states that can be merged. However, if the rule $(\mathsf{G}\beta)\land\beta\leadsto\mathsf{G}\beta$ is applied during the construction, the occurrence of $(\mathsf{G}\mathsf{F}o_2)\land(\mathsf{F}o_2)$ is already replaced by $\mathsf{G}\mathsf{F}o_2$, producing the simplified automaton without requiring any merging.

4 Deciding 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} Realizability

𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} realizability (Def. 3) is solved by reducing the problem to a two-player reachability game where one player decides the input assignments and the other player decides the output assignments [23]. Section 4.1 presents reachability games and how to interpret the MTDFA as a reachability game, and Section 4.2 shows how we can solve the game on-the-fly while constructing it.

4.1 Reachability Games & Backpropagation

Definition 5 (Reachability Game)

A reachability game is $\mathcal{G}=\langle\mathcal{V}=\mathcal{V}_{\textsc{o}}\uplus\mathcal{V}_{\textsc{i}},\mathcal{E},\mathcal{F}_{\textsc{o}}\rangle$, where $\mathcal{V}$ is a finite set of vertices partitioned into vertices of player output (abbreviated o) and player input (abbreviated i), $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is a finite set of edges, and $\mathcal{F}_{\textsc{o}}\subseteq\mathcal{V}$ is the set of target vertices. Let $\mathcal{E}(v)=\{(v,v^{\prime})\mid(v,v^{\prime})\in\mathcal{E}\}$ denote the edges leaving $v$. This graph is also referred to as the game arena.

A strategy for player o is a cycle-free subgraph W,σ𝒱,\langle W,\sigma\rangle\subseteq\langle\mathcal{V},\mathcal{E}\rangle such that (a) for every vWv\in W we have vov\in\mathcal{F}_{\textsc{o}} or (v)σ\mathcal{E}(v)\cap\sigma\neq\emptyset and (b) if vW𝒱iv\in W\cap\mathcal{V}_{\textsc{i}} then (v)σ\mathcal{E}(v)\subseteq\sigma. A vertex vv is winning for o if vWv\in W for some strategy W,σ\langle W,\sigma\rangle.

Such a reachability game can be solved by backpropagation identifying the maximal set WW in a strategy. Namely, start from W=oW=\mathcal{F}_{\textsc{o}}. Then WW is iteratively augmented with every vertex in 𝒱o\mathcal{V}_{\textsc{o}} that has some edge to WW, and every (non dead-end) vertex in 𝒱i\mathcal{V}_{\textsc{i}} whose edges all lead to WW. At the end of this backpropagation, which can be performed in linear time [35, Theorem 3.1.2], every vertex in WW is winning for o, and every vertex outside WW is losing for o. Notice that every dead-end that is not in o\mathcal{F}_{\textsc{o}} cannot be winning. It follows that we can identify some (but not necessarily all) vertices that are losing by setting LL as the set of all dead-ends and adding to LL every 𝒱i\mathcal{V}_{\textsc{i}} vertex that has some edge to LL and every 𝒱o\mathcal{V}_{\textsc{o}} vertex whose edges all lead to LL.
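For reference, the sketch below implements this backpropagation on an explicitly given arena (counters of undetermined successors, predecessor lists); the on-the-fly variant actually used by the tool is Algorithm 1 below.

# Sketch of the backpropagation described above: owner[v] in {'o', 'i'},
# edges[v] lists the successors of v, targets is F_o.
# Returns the set W of vertices winning for player o.
from collections import deque

def winning_region(vertices, owner, edges, targets):
    preds = {v: [] for v in vertices}
    for v in vertices:
        for s in edges[v]:
            preds[s].append(v)
    count = {v: len(edges[v]) for v in vertices}      # successors not yet in W
    win = set(targets)
    queue = deque(win)
    while queue:
        s = queue.popleft()
        for v in preds[s]:
            if v in win:
                continue
            count[v] -= 1
            if owner[v] == 'o' or count[v] == 0:      # some edge (o) / all edges (i)
                win.add(v)
                queue.append(v)
    return win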

Figure 2: Interpretation of the MTDFA of Figure 1 as a game with ={i0,i1,i2}\mathcal{I}=\{i_{0},i_{1},i_{2}\}, 𝒪={o1,o2}\mathcal{O}=\{o_{1},o_{2}\}. Each MTBDD node of the MTDFA is viewed as a vertex of the game, with terminal of the form (α,)(\alpha,\bot) looping back to Δ(α)\Delta(\alpha). Player o decides where to go from diamond and rectangular vertices and wants to reach the green vertices corresponding to accepting terminals. Player i decides where to go from round vertices and wants to reach 𝑓𝑓\mathit{ff} or avoid green vertices.

Let 𝒜φ=𝒬,𝒪,ι,Δ\mathcal{A}_{\varphi}=\langle\mathcal{Q},\mathcal{I}\uplus\mathcal{O},\iota,\Delta\rangle be a translation of φ𝖫𝖳𝖫𝖿(𝒪)\varphi\in{\mathsf{LTL_{f}}}(\mathcal{I}\uplus\mathcal{O}) (per Th. 3.1) such that variables of \mathcal{I} appear before 𝒪\mathcal{O} in the MTBDD encoding of Δ\Delta.

Definition 6 (Realizability Game)

We define the reachability game $\mathcal{G}_{\varphi}=\langle\mathcal{V}=\mathcal{V}_{\textsc{i}}\uplus\mathcal{V}_{\textsc{o}},\mathcal{E},\mathcal{F}_{\textsc{o}}\rangle$ in which $\mathcal{V}\subseteq\mathsf{MTBDD}(\mathcal{I}\uplus\mathcal{O},\mathcal{Q}\times\mathbb{B})$ corresponds to the set of nodes that appear in the MTBDD encoding of $\Delta$. $\mathcal{V}_{\textsc{o}}$ contains all nodes $(p,\ell,h)$ such that $p\in\mathcal{O}$ or $p=\infty$ (terminals), and $\mathcal{V}_{\textsc{i}}$ contains those with $p\in\mathcal{I}$. The edges $\mathcal{E}$ follow the structure of $\Delta$, i.e., if $\mathcal{A}_{\varphi}$ has a node $r=(p,\ell,h)$, then $\{(r,\ell),(r,h)\}\subseteq\mathcal{E}$. Additionally, for any terminal $t=(\infty,(\alpha,\bot),\infty)$ such that $\alpha\neq\mathit{ff}$, $\mathcal{E}$ contains the edge $(t,\Delta(\alpha))$. Finally, $\mathcal{F}_{\textsc{o}}$ is the set of accepting terminals, i.e., nodes of the form $(\infty,(\alpha,\top),\infty)$.

Theorem 4.1

Vertex Δ(ι)\Delta(\iota) is winning for o in 𝒢φ\mathcal{G}_{\varphi} iff φ\varphi is Mealy-realizable.

Moore realizability can be checked similarly by changing the order of \mathcal{I} and 𝒪\mathcal{O} in the MTBDD encoding of Δ\Delta.

Example 7

Figure 2 shows how to interpret the MTDFA of Figure 1 as a game, by turning each MTBDD node into a game vertex. The player owning each vertex is chosen according to the variable that labels it. Vertices corresponding to accepting terminals become winning targets for the output player, so the game stops once they are reached. Solving this game determines that every internal vertex is winning for o, so the corresponding formula is Mealy-realizable.

The difference with DFA games [23, 20, 28, 54] is that instead of having player i select all input signals at once, and then player o select all output signals at once, our game proceeds by selecting one signal at a time. Sharing nodes that represent identical partial assignments contributes to the scalability of our approach.

4.2 Solving Realizability On-the-fly

We now show how to construct and solve 𝒢φ\mathcal{G}_{\varphi} on-the-fly, for better efficiency. The construction is easier to study in two parts: (1) the on-the-fly solving of reachability games, based on backpropagation, and (2) the incremental construction of 𝒢φ\mathcal{G}_{\varphi}, done with a forward exploration of a subset of the MTDFA for φ\varphi.

Algorithm 1 presents the first part: a set of functions for constructing a game arena incrementally, while performing the linear-time backpropagation algorithm on-the-fly (cf. App. 0.D). At all points during this construction, the winning status of a vertex ($\mathit{winner}[x]$) is one of o (player o can force the play to reach $\mathcal{F}_{\textsc{o}}$, i.e., the vertex belongs to $W$), i (player i can force the play to avoid $\mathcal{F}_{\textsc{o}}$, i.e., the vertex belongs to $L$), or u (undetermined yet), and the algorithm backpropagates both o and i. At the end of the construction, all vertices with status u are considered winning for i. As in the standard algorithm for solving reachability games [35, Th. 3.1.2], each vertex uses a counter ($\mathit{count}$) to track the number of its undetermined successors. When a vertex $x$ is marked as winning for player $w$ by calling set_winner(x,w), each undetermined predecessor $p$ has its counter decreased, and $p$ can be marked as winning for $w$ if either vertex $p$ is owned by $w$ (player $w$ can choose to go to $x$) or the counter dropped to 0 (meaning that all choices at $p$ were winning for $w$).

To solve the game while it is constructed, we freeze vertices. A vertex should be frozen after all its successors have been introduced with new_edge. The counter dropping to 0 is only checked on frozen vertices, since it is only meaningful once all successors of a vertex are known.

var: owner[];    // map each vertex to one of {o, i}
var: pred[];     // map vertices to sets of predecessor vertices
var: count[];    // map vertices to the number of undetermined successors
var: winner[];   // map vertices to one of {o, i, u}
var: frozen[];   // map vertices to their frozen status (a Boolean)

Function new_vertex(x ∈ 𝒱, own ∈ {o, i})         // new vertex owned by own
    owner[x] ← own; pred[x] ← ∅; count[x] ← 0; frozen[x] ← ⊥;
    winner[x] ← u;                                // undetermined winner

Function new_edge(src ∈ 𝒱, dst ∈ 𝒱)
    assert(frozen[src] = ⊥);
    if winner[dst] = u then
        count[src] ← count[src] + 1; pred[dst] ← pred[dst] ∪ {src};
    else if winner[dst] = owner[src] then set_winner(src, owner[src]);
    // ignore the edge otherwise, it will never be used

Function freeze_vertex(x ∈ 𝒱)                     // promise not to add more successors
    frozen[x] ← ⊤;                                // below, we assume ¬i = o and ¬o = i
    if winner[x] = u ∧ count[x] = 0 then set_winner(x, ¬owner[x]);

Function set_winner(x ∈ 𝒱, w ∈ {o, i})            // with linear backpropagation
    assert(winner[x] = u); winner[x] ← w;
    foreach p ∈ pred[x] such that winner[p] = u do
        count[p] ← count[p] − 1;
        if owner[p] = w ∨ (count[p] = 0 ∧ frozen[p]) then set_winner(p, w);

Algorithm 1: API for solving a reachability game on-the-fly. Construct the game arena with new_vertex and new_edge. Once all successors of a vertex have been connected, call freeze_vertex. Call set_winner at any point to designate vertices winning for one player.
Function realizability(φ ∈ LTLf(ℐ ⊎ 𝒪))
    configure the MTBDD library to put variables in ℐ before those in 𝒪;
    init ← term(φ, ⊥); new_vertex(init, i);
    𝒱 ← {init};          // MTBDD nodes already turned into game vertices
    𝒬 ← ∅;               // LTLf formulas processed by the main loop

    Function declare_vertex(r ∈ MTBDD(ℐ ⊎ 𝒪, LTLf(ℐ ⊎ 𝒪) × 𝔹))
        (p, ℓ, h) ← r; if p = ∞ ∨ p ∈ ℐ then own ← i else own ← o;
        new_vertex(r, own); 𝒱 ← 𝒱 ∪ {r}; to_encode ← to_encode ∪ {r};

    todo ← {φ};
    while todo ≠ ∅ ∧ winner[init] = u do          // main loop
        α ← todo.pop_any(); 𝒬 ← 𝒬 ∪ {α};
        [optional: add one-step (un)realizability check here, see Sec. 5];
        a ← term(α, ⊥); m ← tr(α);
        if m ∈ 𝒱 then                             // m has already been encoded
            new_edge(a, m); freeze_vertex(a); continue with the next α;
        to_encode ← ∅; leaves ← ∅;
        declare_vertex(m); new_edge(a, m); freeze_vertex(a);
        if winner[a] ≠ u then continue with the next α;
        while to_encode ≠ ∅ do
            r ← to_encode.pop_any();
            (p, ℓ, h) ← r;
            if p = ∞ then                         // this is a terminal labeled by ℓ
                (β, b) ← ℓ;
                if b then set_winner(r, o);
                else if β = ff then set_winner(r, i);
                else if β ∉ 𝒬 then leaves ← leaves ∪ {β};
            else
                if ℓ ∉ 𝒱 then declare_vertex(ℓ);
                if h ∉ 𝒱 then declare_vertex(h);
                new_edge(r, ℓ); new_edge(r, h); freeze_vertex(r);
            if winner[a] ≠ u then abandon the inner loop and continue with the next α;
        todo ← todo ∪ leaves;
    return winner[init] = o;

Algorithm 2: On-the-fly realizability check with Mealy semantics (for Moore semantics, swap the order of ℐ and 𝒪 on the first line).

Algorithm 2 is the second part. It shows how to build 𝒢φ\mathcal{G}_{\varphi} incrementally. It translates the states α\alpha of the corresponding MTDFA one at a time, and uses the functions of Algorithm 1 to turn each node of 𝗍𝗋(α)\mathsf{tr}(\alpha) into a vertex of the game. Since the functions of Algorithm 1 update the winning status of the states as soon as possible, Algorithm 2 can use that to cut parts of the exploration.

Instead of using $\Delta(\varphi)=\mathsf{tr}(\varphi)$ as the initial vertex of the game, as in Theorem 4.1, we consider $\mathit{init}=\mathsf{term}(\varphi,\bot)$ as the initial vertex: this makes no theoretical difference, since $\mathsf{term}(\varphi,\bot)$ has $\mathsf{tr}(\varphi)$ as its unique successor. The $\mathit{todo}$ set drives the exploration of all the $\mathsf{LTL_f}$ formulas $\alpha$ that would label the states of the MTDFA for $\varphi$ (as needed to implement Theorem 3.1). The actual order in which formulas are removed from $\mathit{todo}$ is free. (We found out that handling $\mathit{todo}$ as a queue to implement a BFS exploration worked marginally better than using it as a stack to do a DFS-like exploration, so we use a BFS in practice.)

Each $\alpha$ is translated into an MTBDD $\mathsf{tr}(\alpha)$ representing its possible successors. The constructed game should have one vertex per MTBDD node in $\mathsf{tr}(\alpha)$. Those vertices are created in the inner while loop. Function declare_vertex assigns the correct owner to each new node according to its decision variable (as in Def. 6) and adds those nodes to the $\mathit{to\_encode}$ set processed by this inner loop. Terminal nodes are either marked as winning for one of the players or stored in $\mathit{leaves}$.

Since connecting game vertices may backpropagate their winning status, the encoding loop can terminate early whenever the vertex associated with $\mathsf{term}(\alpha,\bot)$ becomes determined. If that vertex is not determined, the $\mathit{leaves}$ of $\alpha$ are added to $\mathit{todo}$ for further exploration.

The entire construction can also stop as soon as the initial vertex is determined. However, if the algorithm terminates with $\mathit{winner}[\mathit{init}]=\textsc{u}$, it still means that o cannot reach its targets. Therefore, as tested by the final return statement, formula $\varphi$ is realizable iff $\mathit{winner}[\mathit{init}]=\textsc{o}$ in the end.

Theorem 4.2

Algorithm 2 returns 𝑡𝑡\mathit{tt} iff φ\varphi is Mealy-realizable.

5 Implementation and Evaluation

Our algorithms have been implemented in Spot [25], after extending its fork of BuDDy [42] to support MTBDDs (cf. App. 0.F). The release of Spot 2.14 distributes two new command-line tools, ltlf2dfa and ltlfsynt, implementing the translation from $\mathsf{LTL_f}$ to MTDFA and the resolution of $\mathsf{LTL_f}$ synthesis, respectively. We describe and evaluate ltlfsynt in the following.

Preprocessing

Before executing Algorithm 2, we use a few preprocessing techniques to simplify the problem. We remove variables that always have the same polarity in the specification (a simplification also used by Strix [50]), and we decompose the specification into output-disjoint sub-specifications that can be solved independently [31]. A specification such as $\Psi_1\land\Psi_2$, from Example 1, is therefore not solved directly as demonstrated here, but split into two output-disjoint specifications $\Psi_1$ and $\Psi_2$ that are solved separately (a sketch of such a decomposition follows). Finally, we also simplify $\mathsf{LTL_f}$ formulas using very simple rewriting rules (cf. App. 0.G) such as $\mathsf{X}(\alpha)\land\mathsf{X}(\beta)\leadsto\mathsf{X}(\alpha\land\beta)$ that reduce the number of MTBDD operations required during translation.
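A possible way to perform the output-disjoint decomposition is to group top-level conjuncts with a union-find over the output variables they mention, as sketched below; this follows the idea of [31] but is not the exact procedure implemented in ltlfsynt. The helper outputs_of(f) is assumed to return the set of output variables occurring in conjunct f.

# Sketch: group conjuncts so that two conjuncts sharing an output variable
# end up in the same sub-specification.
def split_output_disjoint(conjuncts, outputs_of):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]             # path halving
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    for f in conjuncts:
        outs = list(outputs_of(f))
        for o in outs[1:]:
            union(outs[0], o)
    groups = {}
    for f in conjuncts:
        outs = outputs_of(f)
        key = find(next(iter(outs))) if outs else None  # output-free conjuncts together
        groups.setdefault(key, []).append(f)
    return list(groups.values())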

One-step (un)realizability checks

An additional optimization consists in performing one-step realizability and one-step unrealizability checks in Algorithm 2 (cf. App. 0.H). The principle is to transform the formula $\alpha$ into two smaller Boolean formulas $\alpha_r$ and $\alpha_u$, such that if $\alpha_r$ is realizable then $\alpha$ is realizable, and if $\alpha_u$ is unrealizable then $\alpha$ is unrealizable [53, Theorems 2–3]. Those Boolean formulas can be translated into BDDs, on which realizability can be checked by quantification. On success, this avoids the translation of the larger formula $\alpha$. The simple formula $\Psi_1\land\Psi_2$ of our running example is actually one-step realizable.
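The final quantification step of such a check can be sketched as follows, assuming $\alpha_r$ is already available as a Boolean predicate over $\mathcal{I}\cup\mathcal{O}$ (its construction from $\alpha$, following [53], is not shown). Under Mealy semantics, $\alpha$ is declared one-step realizable if every input assignment admits an output assignment satisfying $\alpha_r$; the tool performs this quantification on BDDs rather than by enumeration.

# Sketch of the one-step realizability quantification (Mealy semantics).
from itertools import product

def one_step_realizable(alpha_r, inputs, outputs):
    for ivals in product([False, True], repeat=len(inputs)):
        env = dict(zip(inputs, ivals))
        if not any(alpha_r({**env, **dict(zip(outputs, ovals))})
                   for ovals in product([False, True], repeat=len(outputs))):
            return False
    return True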

Synthesis

After deciding realizability, ltlfsynt is able to extract a strategy from the solved game in the form of a Mealy machine, and to encode it into an And-Inverter Graph (AIG) [10]: the expected output of the $\mathsf{LTL_f}$ synthesis tracks of the Synthesis Competition. The conversion from Mealy machines to AIG reuses prior work [48, 49] developed for Spot's LTL (not $\mathsf{LTL_f}$) synthesis tool. We do not detail nor evaluate these extra steps here due to lack of space.

Evaluation

We evaluated the task of deciding $\mathsf{LTL_f}$ realizability over specifications from the Synthesis Competition [38]. We took all tlsf-fin specifications from SyntComp's repository, excluded some duplicate specifications as well as some specifications that were too large to be solved by any tool, and converted the specifications from TLSF v1.2 [37] to $\mathsf{LTL_f}$ using syfco [37].

We used BenchExec 3.22 [9] to track time and memory usage of each tool. Tasks were run on a Core i7-3770 with Turbo Boost disabled, and frequency scaled down to 1.6GHz to prevent CPU throttling. The computer has 4 physical cores and 16GB of memory. BenchExec was configured to run up to 3 tasks in parallel with a memory limit of 4GB per task, and a time limit of 15 minutes.

Figure 3: Cactus plots comparing time and memory usage of different configurations.

Figure 3 compares five configurations of ltlfsynt against seven other tools (more in App. 0.I). We verified that all tools were in agreement. Lydia 0.1.3 [19], SyftMax (or Syft 2.0) [55], and LydiaSyft 0.1.0-alpha [29] all use Mona to construct a DFA by composition; they then solve the resulting game symbolically after encoding it using BDDs. Lisa [8] uses a hybrid compositional construction, mixing explicit compositions (using Spot) with symbolic compositions (using BuDDy), and solves the game symbolically in the end. Cynthia 0.1.0 [20], Nike 0.1.0 [28], and MoGuSer [54] all use an on-the-fly construction of a DFA game that they solve via forward exploration with backpropagation, but they do not implement backpropagation in linear time, as we do. Yet, the costly part of synthesis is game generation, not solving. Cynthia uses SDDs [17] to compute successors and represent states, while Nike and MoGuSer use SAT-based techniques to compute successors and BDDs to represent states. Nike, Lisa, and LydiaSyft were the top-3 contenders of the $\mathsf{LTL_f}$ track of SyntComp in 2023 and 2024.

Configuration ltlfsynt:bfs-nopre corresponds to Algorithm 2 where $\mathit{todo}$ is a queue: it already solves more cases than all other tested tools. Suffix -nopre indicates that the preprocessing of the specification is disabled (this makes the comparison fairer, since other tools have no such preprocessing). The version with preprocessing enabled is simply called ltlfsynt:bfs. Variants with “-os” add the one-step (un)realizability checks that LydiaSyft, Cynthia, and Nike also perform.

We also include a configuration ltlfsynt:full-N that corresponds to first translating the specification into an MTDFA using Theorem 3.1, and then solving the game by linear propagation. The difference between ltlfsynt:full and ltlfsynt:bfs shows the gain obtained with the on-the-fly translation: although that gain looks small in the cactus plot, it is important for some specifications (Tab. 23 in App. 0.I).

Data Availability Statement

Implementation, supporting scripts, detailed analysis of this benchmark, and additional examples are archived on Zenodo [24].

6 Conclusion

We have presented the implementation of ltlfsynt, and evaluated it to be faster at deciding 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} realizability than seven existing tools, including the winners of SyntComp’24. The implementation uses a direct and efficient translation from 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} to DFA represented by MTBDDs, which can then be solved as a game played directly on the structure of the MTBDDs. The two constructions (translation and game solving) are performed together on-the-fly, to allow early termination.

Although ltlfsynt also includes a preliminary implementation of $\mathsf{LTL_f}$ synthesis of And-Inverter Graphs, we leave it as future work to document it and ensure its correctness.

Finally, the need for solving a reachability game while it is discovered also occurs in other equivalent contexts such as HornSAT, where linear algorithms that do not use “counters” and “predecessors” (unlike ours) have been developed [43]. Using such algorithms might improve our solution by saving memory.

References

  • [1] Andersen, H.R.: An introduction to binary decision diagrams. Lecture notes for Efficient Algorithms and Programs, Fall 1999 (1999), https://web.archive.org/web/20090530154634/http://www.itu.dk:80/people/hra/bdd-eap.pdf
  • [2] Antimirov, V.: Partial derivatives of regular expressions and finite automaton constructions. Theoretical Computer Science 155(2), 291–319 (Mar 1996). https://doi.org/10.1016/0304-3975(95)00182-4
  • [3] Bacchus, F., Kabanza, F.: Planning for temporally extended goals. Annals of Mathematics and Artificial Intelligence 22, 5–27 (1998). https://doi.org/10.1023/A:1018985923441
  • [4] Bacchus, F., Kabanza, F.: Using temporal logics to express search control knowledge for planning. Artificial Intelligence 116(1–2), 123–191 (2000). https://doi.org/10.1016/S0004-3702(99)00071-5
  • [5] Bahar, R.I., Frohm, E.A., Gaona, C.M., Hachtel, G.D., Macii, E., Pardo, A., Somenzi, F.: Algebraic decision diagrams and their applications. In: Proceedings of 1993 International Conference on Computer Aided Design (ICCAD’93). pp. 188–191. IEEE Computer Society Press (Nov 1993). https://doi.org/10.1109/ICCAD.1993.580054
  • [6] Baier, J.A., Fritz, C., McIlraith, S.A.: Exploiting procedural domain control knowledge in state-of-the-art planners. In: Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS’07). pp. 26–33. AAAI (2007), https://aaai.org/papers/icaps-07-004
  • [7] Baier, J.A., McIlraith, S.A.: Planning with first-order temporally extended goals using heuristic search. In: Proceedings of the 21st national conference on Artificial intelligence (AAAI’06). pp. 788–795. AAAI Press (2006). https://doi.org/10.5555/1597538.1597664
  • [8] Bansal, S., Li, Y., Tabajara, L.M., Vardi, M.Y.: Hybrid compositional reasoning for reactive synthesis from finite-horizon specifications. In: Proceedings of the 34th national conference on Artificial intelligence (AAAI’20). pp. 9766–9774. AAAI Press (2020). https://doi.org/10.1609/AAAI.V34I06.6528
  • [9] Beyer, D., Löwe, S., Wendler, P.: Reliable benchmarking: requirements and solutions. International Journal on Software Tools for Technology Transfer 21, 1–29 (Feb 2019). https://doi.org/10.1007/s10009-017-0469-y
  • [10] Biere, A., Heljanko, K., Wieringa, S.: AIGER 1.9 and beyond. Tech. Rep. 11/2, Institute for Formal Models and Verification, Johannes Kepler University, Altenbergerstr. 69, 4040 Linz, Austria (2011), https://fmv.jku.at/aiger/
  • [11] Bryant, R.E.: Graph-based algorithms for boolean function manipulation. IEEE Transactions on Computers 35(8), 677–691 (Aug 1986). https://doi.org/10.1109/TC.1986.1676819
  • [12] Bryant, R.E.: Symbolic boolean manipulation with ordered binary-decision diagrams. ACM Comput. Surv. 24(3), 293–318 (Sep 1992). https://doi.org/10.1145/136035.136043
  • [13] Brzozowski, J.A.: Derivatives of regular expressions. Journal of the ACM 11(4), 481–494 (Oct 1964). https://doi.org/10.1145/321239.321249
  • [14] Calvanese, D., De Giacomo, G., Vardi, M.Y.: Reasoning about actions and planning in LTL action theories. In: Proceedings of the Eights International Conference on Principles of Knowledge Representation and Reasoning (KR’02). pp. 593–602. Morgan Kaufmann (2002). https://doi.org/10.5555/3087093.3087142
  • [15] Camacho, A., Bienvenu, M., McIlraith, S.A.: Towards a unified view of AI planning and reactive synthesis. In: Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS’19). pp. 58–67. AAAI Press (2019). https://doi.org/10.1609/icaps.v29i1.3460
  • [16] Cimatti, A., Pistore, M., Roveri, M., Traverso, P.: Weak, strong, and strong cyclic planning via symbolic model checking. Artificial Intelligence 147(1–2), 35–84 (2003). https://doi.org/10.1016/S0004-3702(02)00374-0
  • [17] Darwiche, A.: SDD: A new canonical representation of propositional knowledge bases. In: Proceedings of the 22nd International Joint Conference on Artificial Intelligence. pp. 819–826. AAAI Press (2011). https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-143
  • [18] De Giacomo, G., Favorito, M.: Compositional approach to translate LTLf/LDLf into deterministic finite automata. In: Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS’21). pp. 122–130 (2021). https://doi.org/10.1609/icaps.v31i1.15954
  • [19] De Giacomo, G., Favorito, M.: Compositional approach to translate LTLf/LDLf into deterministic finite automata. In: Biundo, S., Do, M., Goldman, R., Katz, M., Yang, Q., Zhuo, H.H. (eds.) Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS’21). pp. 122–130. AAAI Press (Aug 2021). https://doi.org/10.1609/icaps.v31i1.15954
  • [20] De Giacomo, G., Favorito, M., Li, J., Vardi, M.Y., Xiao, S., Zhu, S.: LTLf synthesis as AND-OR graph search: Knowledge compilation at work. In: Raedt, L.D. (ed.) Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI’22). pp. 2591–2598. International Joint Conferences on Artificial Intelligence Organization (Jul 2022). https://doi.org/10.24963/ijcai.2022/359
  • [21] De Giacomo, G., Rubin, S.: Automata-theoretic foundations of fond planning for LTLf/LDLf goals. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI’18). pp. 4729–4735 (2018). https://doi.org/10.24963/ijcai.2018/657
  • [22] De Giacomo, G., Vardi, M.Y.: Linear temporal logic and linear dynamic logic on finite traces. In: Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI’13). pp. 854–860. IJCAI’13, AAAI Press (Aug 2013). https://doi.org/10.5555/2540128.2540252
  • [23] De Giacomo, G., Vardi, M.Y.: Synthesis for LTL and LDL on finite traces. In: Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI’15). pp. 1558–1564. AAAI Press (2015). https://doi.org/10.5555/2832415.2832466
  • [24] Duret-Lutz, A.: Supporting material for ”Engineering an LTLf Synthetizer Tool” (2025). https://doi.org/10.5281/zenodo.15752968
  • [25] Duret-Lutz, A., Renault, E., Colange, M., Renkin, F., Aisse, A.G., Schlehuber-Caissier, P., Medioni, T., Martin, A., Dubois, J., Gillard, C., Lauko, H.: From Spot 2.0 to Spot 2.10: What’s new? In: Proceedings of the 34th International Conference on Computer Aided Verification (CAV’22). Lecture Notes in Computer Science, vol. 13372, pp. 174–187. Springer (Aug 2022). https://doi.org/10.1007/978-3-031-13188-2_9
  • [26] Ehlers, R., Lafortune, S., Tripakis, S., Vardi, M.Y.: Supervisory control and reactive synthesis: a comparative introduction. Discrete Event Dynamic Systems 27(2), 209–260 (2017). https://doi.org/10.1007/s10626-015-0223-0
  • [27] Esparza, J., Křetínský, J., Sickert, S.: One theorem to rule them all: A unified translation of LTL into ω\omega-automata. In: Dawar, A., Grädel, E. (eds.) Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS’18). pp. 384–393. ACM (2018). https://doi.org/10.1145/3209108.3209161
  • [28] Favorito, M.: Forward LTLf synthesis: DPLL at work. In: Benedictis, R.D., Castiglioni, M., Ferraioli, D., Malvone, V., Maratea, M., Scala, E., Serafini, L., Serina, I., Tosello, E., Umbrico, A., Vallati, M. (eds.) Proceedings of the 30th Workshop on Experimental evaluation of algorithms for solving problems with combinatorial explosion (RCRA’23). CEUR Workshop Proceedings, vol. 3585 (2023), https://ceur-ws.org/Vol-3585/paper7_RCRA4.pdf
  • [29] Favorito, M., Zhu, S.: LydiaSyft: A compositional symbolic synthesis framework for LTLf specifications. In: Proceedings of the 31st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS’25). Lecture Notes in Computer Science, vol. 15696, pp. 295–302. Springer (May 2025). https://doi.org/10.1007/978-3-031-90643-5_15
  • [30] Finkbeiner, B.: Synthesis of reactive systems. In: Esparza, J., Grumberg, O., Sickert, S. (eds.) Dependable Software Systems Engineering, NATO Science for Peace and Security Series — D: Information and Communication Security, vol. 45, pp. 72–98. IOS Press (2016). https://doi.org/10.3233/978-1-61499-627-9-72
  • [31] Finkbeiner, B., Geier, G., Passing, N.: Specification decomposition for reactive synthesis. In: Proceedings for the 13th NASA Formal Methods Symposium (NFM’21). Lecture Notes in Computer Science, vol. 12673, pp. 113–130. Springer (2021). https://doi.org/10.1007/978-3-030-76384-8_8
  • [32] Fujita, M., McGeer, P.C., Yang, J.C.: Multi-terminal binary decision diagrams: An efficient data structure for matrix representation. Formal Methods in System Design 10(2/3), 149–169 (1997). https://doi.org/10.1023/A:1008647823331
  • [33] Gabbay, D., Pnueli, A., Shelah, S., Stavi, J.: On the temporal analysis of fairness. In: Proceedings of the 7th ACM SIGPLAN-SIGACT symposium on Principles of programming languages (POPL’80). pp. 163–173. Association for Computing Machinery (1980). https://doi.org/10.1145/567446.567462
  • [34] Gerevini, A., Haslum, P., Long, D., Saetti, A., Dimopoulos, Y.: Deterministic planning in the fifth international planning competition: PDDL3 and experimental evaluation of the planners. Artificial Intelligence 173(5–6), 619–668 (2009). https://doi.org/10.1016/j.artint.2008.10.012
  • [35] Grädel, E.: Finite model theory and descriptive complexity. In: Finite Model Theory and Its Applications, chap. 3, pp. 125–230. Texts in Theoretical Computer Science. An EATCS Series, Springer Berlin Heidelberg, Berlin, Heidelberg (2007). https://doi.org/10.1007/3-540-68804-8_3
  • [36] Henriksen, J.G., Jensen, J., Jørgensen, M., Klarlund, N., Paige, R., Rauhe, T., Sandholm, A.: Mona: Monadic second-order logic in practice. In: Brinksma, E., Cleaveland, W.R., Larsen, K.G., Margaria, T., Steffen, B. (eds.) First International Workshop on Tools and Algorithms for the Construction and Analysis of Systems (TACAS’95). pp. 89–110. Springer Berlin Heidelberg (1995). https://doi.org/10.1007/3-540-60630-0_5
  • [37] Jacobs, S., Perez, G.A., Schlehuber-Caissier, P.: The temporal logic synthesis format TLSF v1.2. arXiv (2023). https://doi.org/10.48550/arXiv.2303.03839
  • [38] Jacobs, S., Perez, G.A., Abraham, R., Bruyère, V., Cadilhac, M., Colange, M., Delfosse, C., van Dijk, T., Duret-Lutz, A., Faymonville, P., Finkbeiner, B., Khalimov, A., Klein, F., Luttenberger, M., Meyer, K.J., Michaud, T., Pommellet, A., Renkin, F., Schlehuber-Caissier, P., Sakr, M., Sickert, S., Staquet, G., Tamines, C., Tentrup, L., Walker, A.: The reactive synthesis competition (SYNTCOMP): 2018–2021. arXiv (Jun 2022). https://doi.org/10.48550/ARXIV.2206.00251
  • [39] Klarlund, N., Møller, A.: MONA version 1.4, user manual. Tech. rep., BRICS (Jul 2001), https://www.brics.dk/mona/mona14.pdf
  • [40] Klarlund, N., Rauhe, T.: BDD algorithms and cache misses. Tech. Rep. BR-96-26, BRICS (Jul 1996), https://www.brics.dk/mona/papers/bdd-alg-cache-miss/article.pdf
  • [41] Kluyver, T., Ragan-Kelley, B., Pérez, F., Granger, B., Bussonnier, M., Frederic, J., Kelley, K., Hamrick, J., Grout, J., Corlay, S., Ivanov, P., Avila, D., Abdalla, S., Willing, C., Jupyter development team: Jupyter notebooks — a publishing format for reproducible computational workflows. In: Loizides, F., Schmidt, B. (eds.) Proceedings of the 20th International Conference on Electronic Publishing: Positioning and Power in Academic Publishing: Players, Agents and Agendas (ELPUB’16). pp. 87–90. IOS Press (2016). https://doi.org/10.3233/978-1-61499-649-1-87
  • [42] Lind-Nielsen, J.: BuDDy: A binary decision diagram package. User’s manual. (1999), https://web.archive.org/web/20040402015529/http://www.itu.dk/research/buddy/
  • [43] Liu, X., Smolka, S.A.: Simple linear-time algorithms for minimal fixed points. In: Larsen, K.G., Skyum, S., Winskel, G. (eds.) Proceedings of the 25th International Colloquium on Automata, Languages and Programming (ICALP’98). pp. 53–66. Springer Berlin Heidelberg (1998). https://doi.org/10.1007/BFb0055035
  • [44] Long, D.: BDD library. Source archive, https://www.cs.cmu.edu/~modelcheck/bdd.html
  • [45] Minato, S.i.: Representation of Multi-Valued Functions, pp. 39–47. Springer US, Boston, MA (1996). https://doi.org/10.1007/978-1-4613-1303-8_4
  • [46] Piterman, N., Pnueli, A., Sa’ar, Y.: Synthesis of reactive(1) designs. In: Proceedings of the 7th international conference on Verification, Model Checking, and Abstract Interpretation (VMCAI’06). Lecture Notes in Computer Science, vol. 3855, pp. 364–380. Springer (2006). https://doi.org/10.1007/11609773_24
  • [47] Pnueli, A., Rosner, R.: On the synthesis of a reactive module. In: Proceedings of the 16th ACM SIGPLAN-SIGACT symposium on Principles of Programming Languages (POPL’89). Association for Computing Machinery (1989). https://doi.org/10.1145/75277.75293
  • [48] Renkin, F., Schlehuber-Caissier, P., Duret-Lutz, A., Pommellet, A.: Effective reductions of Mealy machines. In: Proceedings of the 42nd International Conference on Formal Techniques for Distributed Objects, Components, and Systems (FORTE’22). Lecture Notes in Computer Science, vol. 13273, pp. 170–187. Springer (Jun 2022). https://doi.org/10.1007/978-3-031-08679-3_8
  • [49] Renkin, F., Schlehuber-Caissier, P., Duret-Lutz, A., Pommellet, A.: Dissecting ltlsynt. Formal Methods in System Design (2023). https://doi.org/10.1007/s10703-022-00407-6
  • [50] Sickert, S., Meyer, P.: Modernizing Strix (2021), https://www7.in.tum.de/~sickert/publications/MeyerS21.pdf
  • [51] Somenzi, F.: CUDD: CU Decision Diagram package release 3.0.0 (Dec 2015), https://web.archive.org/web/20171208230728/http://vlsi.colorado.edu/~fabio/CUDD/cudd.pdf
  • [52] Tabajara, L.M., Vardi, M.Y.: Partitioning techniques in LTLf synthesis. In: Kraus, S. (ed.) Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI’19). pp. 5599–5606. ijcai.org (Aug 2019). https://doi.org/10.24963/IJCAI.2019/777
  • [53] Xiao, S., Li, J., Zhu, S., Shi, Y., Pu, G., Vardi, M.: On-the-fly synthesis for LTL over finite traces. In: Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI’21, Technical Track 7). pp. 6530–6537 (May 2021). https://doi.org/10.1609/aaai.v35i7.16809
  • [54] Xiao, S., Li, Y., Huang, X., Xu, Y., Li, J., Pu, G., Strichman, O., Vardi, M.Y.: Model-guided synthesis for LTL over finite traces. In: Proceedings of the 25th International Conference on Verification, Model Checking, and Abstract Interpretation. Lecture Notes in Computer Science, vol. 14499, pp. 186–207. Springer (2024). https://doi.org/10.1007/978-3-031-50524-9_9
  • [55] Zhu, S., De Giacomo, G.: Synthesis of maximally permissive strategies for LTLf specifications. In: Raedt, L.D. (ed.) Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI’22). pp. 2783–2789. ijcai.org (Jul 2022). https://doi.org/10.24963/IJCAI.2022/386
  • [56] Zhu, S., Tabajara, L.M., Li, J., Pu, G., Vardi, M.Y.: Symbolic LTLf synthesis. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI’17). pp. 1362–1369 (2017). https://doi.org/10.24963/ijcai.2017/189

These appendices and the margin notes that point to them were part of the submission for interested reviewers, but they have not been peer-reviewed, and are not part of the CIAA’25 proceedings.

Appendix 0.A MTBDD operations

This section details some of the MTBDD operations described in Section 2.4. These functions should be straightforward to any reader familiar with BDD implementations; we show them for the sake of completeness.

0.A.1 Evaluating an MTBDD using an assignment

For m𝖬𝖳𝖡𝖣𝖣(𝒫,𝒮)m\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}), and w𝔹𝒫w\in\mathbb{B}^{\mathcal{P}}, Algorithm 3 shows how to compute m(w)m(w) by descending the structure of mm according to ww.

Function eval(m,wm,\,w)
 input : m𝖬𝖳𝖡𝖣𝖣(𝒫,𝒮)m\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}), w𝔹𝒫w\in\mathbb{B}^{\mathcal{P}}
 output : m(w)𝒮m(w)\in\mathcal{S}
 
 (p,,h)m(p,\ell,h)\leftarrow m;
 while pp\neq\infty do
    if w(p)w(p) then  (p,,h)h(p,\ell,h)\leftarrow h else  (p,,h)(p,\ell,h)\leftarrow\ell ;
    
 return \ell;
 
Algorithm 3 Evaluating an MTBDD using an assignment.
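
As a concrete illustration (a toy sketch only, not Spot's implementation), the following Python function mirrors Algorithm 3 on a representation where an internal node is a triple (p, low, high) and a terminal is a triple (INF, value, None):

    INF = float("inf")   # plays the role of the "infinite" variable number of terminals

    def eval_mtbdd(m, w):
        # m is a toy MTBDD node: (p, low, high) for internal nodes,
        # (INF, value, None) for terminals; w maps variable numbers to Booleans.
        p, low, high = m
        while p != INF:
            m = high if w[p] else low      # follow the branch selected by w(p)
            p, low, high = m
        return low                         # a terminal stores its value in the "low" slot

    # Example: an MTBDD over variable 0 mapping assignments with w(0)=True to 7, others to 3.
    t3, t7 = (INF, 3, None), (INF, 7, None)
    assert eval_mtbdd((0, t3, t7), {0: True}) == 7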

0.A.2 Binary and Unary Operations on MTBDDs

Algorithm 4 shows how the implementation of apply2 follows the classical recursive definition typically found in BDD packages [11, 32, 1]. The function makebdd is in charge of ensuring that the MTBDD remains reduced: for any triplet of the form (p,r,r)(p,r,r), where the 𝗅𝗈𝗐\mathsf{low} and 𝗁𝗂𝗀𝗁\mathsf{high} links are equal, makebdd returns rr to skip over the node. For other triplets, makebdd looks up and possibly updates a global hash table to ensure that each triplet is represented only once. The hash table HH is used for memoization; assuming lossless caching (i.e., no entry is dropped on a hash collision), this ensures that the number of recursive calls performed is at most |m1||m2||m_{1}|\cdot|m_{2}|. Our implementation, as discussed in Appendix 0.F, uses a lossy cache, so the complexity may be higher.

Function apply2(m1,m2,,Hm_{1},\,m_{2},\,\odot,\,H)
 input : m1𝖬𝖳𝖡𝖣𝖣(𝒫,𝒮1)m_{1}\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}_{1}), m2𝖬𝖳𝖡𝖣𝖣(𝒫,𝒮2)m_{2}\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}_{2}), :𝒮1×𝒮2𝒮3\odot:\mathcal{S}_{1}\times\mathcal{S}_{2}\to\mathcal{S}_{3}, H:𝗁𝖺𝗌𝗁𝗆𝖺𝗉H:\mathsf{hashmap}
 output : m1m2𝖬𝖳𝖡𝖣𝖣(𝒫,𝒮3)m_{1}\odot m_{2}\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}_{3})
 
 if (m1,m2,)H(m_{1},m_{2},\odot)\in H then
    return H[(m1,m2,)]H[(m_{1},m_{2},\odot)] ;
    
 (p1,1,h1)m1(p_{1},\ell_{1},h_{1})\leftarrow m_{1};
 (p2,2,h2)m2(p_{2},\ell_{2},h_{2})\leftarrow m_{2};
 if p1<p2p_{1}<p_{2} then
    rr\leftarrow{} makebdd(p1p_{1}, apply2(1,m2,,H\ell_{1},\,m_{2},\,\odot,\,H), apply2(h1,m2,,Hh_{1},\,m_{2},\,\odot,\,H));
    
 else if p2<p1p_{2}<p_{1} then
    rr\leftarrow{} makebdd(p2p_{2}, apply2(m1,2,,Hm_{1},\,\ell_{2},\,\odot,\,H), apply2(m1,h2,,Hm_{1},\,h_{2},\,\odot,\,H));
    
 else if p1<p_{1}<\infty then// p1=p2p_{1}=p_{2}
    rr\leftarrow{} makebdd(p1p_{1}, apply2(1,2,,H\ell_{1},\,\ell_{2},\,\odot,\,H), apply2(h1,h2,,Hh_{1},\,h_{2},\,\odot,\,H));
    
 else // p1=p2=p_{1}=p_{2}=\infty, we have terminals holding values 1\ell_{1} and 2\ell_{2}
    rr\leftarrow{} makebdd(,12,\infty,\ell_{1}\odot\ell_{2},\infty);
    
 H[(m1,m2,)]rH[(m_{1},m_{2},\odot)]\leftarrow r;
 return rr;
 
Algorithm 4 Composing two MTBDDs by applying a binary operator to their terminals.
Function leaves(mm)
 input : m𝖬𝖳𝖡𝖣𝖣(𝒫,𝒮)m\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S})
 output : the subset of 𝒮\mathcal{S} that appears on leaves of mm
 𝑠𝑒𝑒𝑛{m}\mathit{seen}\leftarrow\{m\};
 𝑡𝑜𝑑𝑜{m}\mathit{todo}\leftarrow\{m\};
 𝑟𝑒𝑠\mathit{res}\leftarrow\emptyset;
 while 𝑡𝑜𝑑𝑜\mathit{todo}\neq\emptyset do
    m𝑡𝑜𝑑𝑜.𝗉𝗈𝗉_𝖺𝗇𝗒()m\leftarrow\mathit{todo}.\mathsf{pop\_any}();
    (p,,h)m(p,\ell,h)\leftarrow m;
    if p=p=\infty then // We reached a leaf labeled by \ell
       𝑟𝑒𝑠𝑟𝑒𝑠{}\mathit{res}\leftarrow\mathit{res}\cup\{\ell\};
       
    else
        𝑡𝑜𝑑𝑜𝑡𝑜𝑑𝑜({,h}𝑠𝑒𝑒𝑛)\mathit{todo}\leftarrow\mathit{todo}\cup(\{\ell,h\}\setminus\mathit{seen});
        𝑠𝑒𝑒𝑛𝑠𝑒𝑒𝑛{,h}\mathit{seen}\leftarrow\mathit{seen}\cup\{\ell,h\};
       
    
 return 𝑟𝑒𝑠\mathit{res};
 
Algorithm 5 Gathering the leaves of an MTBDD with a simple linear traversal.

An apply1 function can be written along the same lines for unary operators.
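
For concreteness, here is a Python sketch mimicking apply2 (Algorithm 4) on the toy triple-based representation used above. It is an illustration only: a lossless Python dictionary stands in for the lossy operation cache discussed in Appendix 0.F, and _unique plays the role of the global unicity table.

    INF = float("inf")
    _unique = {}                              # global unicity table for hash-consing

    def makebdd(p, low, high):
        # Enforce the "reduced" property: skip nodes whose two children are equal.
        if low == high:
            return low
        return _unique.setdefault((p, low, high), (p, low, high))

    def terminal(value):
        return _unique.setdefault((INF, value, None), (INF, value, None))

    def apply2(m1, m2, op, H):
        # Combine two MTBDDs by applying op to their terminal values; H memoizes results.
        key = (m1, m2, op)
        if key in H:
            return H[key]
        p1, l1, h1 = m1
        p2, l2, h2 = m2
        if p1 < p2:
            r = makebdd(p1, apply2(l1, m2, op, H), apply2(h1, m2, op, H))
        elif p2 < p1:
            r = makebdd(p2, apply2(m1, l2, op, H), apply2(m1, h2, op, H))
        elif p1 != INF:                       # p1 == p2: both nodes test the same variable
            r = makebdd(p1, apply2(l1, l2, op, H), apply2(h1, h2, op, H))
        else:                                 # both nodes are terminals
            r = terminal(op(l1, l2))
        H[key] = r
        return r

For instance, apply2(m1, m2, lambda a, b: min(a, b), {}) computes the pointwise minimum of two integer-valued MTBDDs.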

0.A.3 Leaves of an MTBDD

Function leaves(mm), shown in Algorithm 5, is a straightforward way to collect the leaves that appear in an MTBDD mm.
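
A toy Python transcription of this traversal, on the same triple-based representation as above (for illustration only), could look as follows:

    INF = float("inf")

    def leaves(m):
        # Worklist traversal of an MTBDD, collecting the values of its terminals.
        seen, todo, res = {m}, [m], set()
        while todo:
            p, low, high = todo.pop()
            if p == INF:                      # reached a terminal labeled by "low"
                res.add(low)
            else:
                for child in (low, high):
                    if child not in seen:
                        seen.add(child)
                        todo.append(child)
        return res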

0.A.4 Boolean Operations with Shortcuts

Algorithm 6 shows how to implement Boolean operations on MTBDDs with terminals in 𝒮𝔹\mathcal{S}\cup\mathbb{B}, shortcutting the recursion when one of the operands is a terminal labeled by a value in 𝔹\mathbb{B}.

Function apply2sc(m1,m2,,Hm_{1},\,m_{2},\,\odot,\,H)
 input : m1𝖬𝖳𝖡𝖣𝖣(𝒫,𝒮1𝔹)m_{1}\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}_{1}\cup\mathbb{B}), m2𝖬𝖳𝖡𝖣𝖣(𝒫,𝒮2𝔹)m_{2}\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}_{2}\cup\mathbb{B}), :𝒮1×𝒮2𝒮3\odot:\mathcal{S}_{1}\times\mathcal{S}_{2}\to\mathcal{S}_{3}, H:𝗁𝖺𝗌𝗁𝗆𝖺𝗉H:\mathsf{hashmap}
 output : m1m2𝖬𝖳𝖡𝖣𝖣(𝒫,𝒮3)m_{1}\odot m_{2}\in\mathsf{MTBDD}(\mathcal{P},\mathcal{S}_{3})
 
 if (m1,m2,)H(m_{1},m_{2},\odot)\in H then
    return H[(m1,m2,)]H[(m_{1},m_{2},\odot)] ;
    
 (p1,1,h1)m1(p_{1},\ell_{1},h_{1})\leftarrow m_{1};
 (p2,2,h2)m2(p_{2},\ell_{2},h_{2})\leftarrow m_{2};
 if (p1=1𝔹)(p2=2𝔹)(p_{1}=\infty\land\ell_{1}\in\mathbb{B})\lor(p_{2}=\infty\land\ell_{2}\in\mathbb{B}) then
    switch \odot do
       case \land do
          if {1,2}\bot\in\{\ell_{1},\ell_{2}\} then return (,,)(\infty,\bot,\infty);
          if 1=\ell_{1}=\top then return m2m_{2};
          if 2=\ell_{2}=\top then return m1m_{1};
          
       case \lor do
          if {1,2}\top\in\{\ell_{1},\ell_{2}\} then return (,,)(\infty,\top,\infty);
          if 1=\ell_{1}=\bot then return m2m_{2};
          if 2=\ell_{2}=\bot then return m1m_{1};
          
       case \ldots do \ldots;
         
    
 
 if p1<p2p_{1}<p_{2} then
    rr\leftarrow{} makebdd(p1p_{1}, apply2sc(1,m2,,H\ell_{1},\,m_{2},\,\odot,\,H), apply2sc(h1,m2,,Hh_{1},\,m_{2},\,\odot,\,H));
    
 else if p2<p1p_{2}<p_{1} then
    rr\leftarrow{} makebdd(p2p_{2}, apply2sc(m1,2,,Hm_{1},\,\ell_{2},\,\odot,\,H), apply2sc(m1,h2,,Hm_{1},\,h_{2},\,\odot,\,H));
    
 else if p1<p_{1}<\infty then// p1=p2p_{1}=p_{2}
    rr\leftarrow{} makebdd(p1p_{1}, apply2sc(1,2,,H\ell_{1},\,\ell_{2},\,\odot,\,H), apply2sc(h1,h2,,Hh_{1},\,h_{2},\,\odot,\,H));
    
 else // p1=p2=p_{1}=p_{2}=\infty, we have terminals holding values 1\ell_{1} and 2\ell_{2}
    rr\leftarrow{} makebdd(,12,\infty,\ell_{1}\odot\ell_{2},\infty);
    
 H[(m1,m2,)]rH[(m_{1},m_{2},\odot)]\leftarrow r;
 return rr;
 
Algorithm 6 Variant of apply2 that implements shortcuts when one of the arguments is a Boolean leaf.

Appendix 0.B Boolean Operations on MTDFAs

Although it is not necessary for the approach we presented, our implementation supports all Boolean operations over MTDFAs.

Since Δ(q)𝖬𝖳𝖡𝖣𝖣(𝒫,𝒬×𝔹)\Delta(q)\in\mathsf{MTBDD}(\mathcal{P},\mathcal{Q}\times\mathbb{B}) has terminals labeled by pairs of the form (q,b)𝒬×𝔹(q,b)\in\mathcal{Q}\times\mathbb{B}, let us extend any Boolean operator :𝔹×𝔹𝔹\odot:\mathbb{B}\times\mathbb{B}\to\mathbb{B} so that it can work on such pairs. More formally, for (q1,b1)𝒬1×𝔹(q_{1},b_{1})\in\mathcal{Q}_{1}\times\mathbb{B} and (q2,b2)𝒬2×𝔹(q_{2},b_{2})\in\mathcal{Q}_{2}\times\mathbb{B} we define (q1,b1)(q2,b2)(q_{1},b_{1})\odot(q_{2},b_{2}) to be equal to ((q1,q2),(b1b2))((Q1×Q2)×𝔹)((q_{1},q_{2}),(b_{1}\odot b_{2}))\in((Q_{1}\times Q_{2})\times\mathbb{B}). Using Algorithm 4 to apply \odot to elements of 𝖬𝖳𝖡𝖣𝖣(𝒫,𝒬×𝔹)\mathsf{MTBDD}(\mathcal{P},\mathcal{Q}\times\mathbb{B}) gives us a very simple way to combine MTDFAs, as shown by the following definition.

Definition 7 (Composition of two MTDFAs)

Let 𝒜1=𝒬1,𝒫,ι1,Δ1\mathcal{A}_{1}=\langle\mathcal{Q}_{1},\mathcal{P},\iota_{1},\Delta_{1}\rangle and 𝒜2=𝒬2,𝒫,ι2,Δ2\mathcal{A}_{2}=\langle\mathcal{Q}_{2},\mathcal{P},\iota_{2},\Delta_{2}\rangle be two MTDFAs over the same variables 𝒫\mathcal{P}, and let {,,,,}\odot\in\{\land,\lor,\rightarrow,\leftrightarrow,...\} be any Boolean binary operator.

Then, let 𝒜1𝒜2\mathcal{A}_{1}\odot\mathcal{A}_{2} denote the composition of 𝒜1\mathcal{A}_{1} and 𝒜2\mathcal{A}_{2}, defined as the MTDFA 𝒬1×𝒬2,𝒫,(ι1,ι2),Δ\langle\mathcal{Q}_{1}\times\mathcal{Q}_{2},\mathcal{P},(\iota_{1},\iota_{2}),\Delta\rangle where for any (q1,q2)𝒬1×𝒬2(q_{1},q_{2})\in\mathcal{Q}_{1}\times\mathcal{Q}_{2} we have Δ((q1,q2))=Δ1(q1)Δ2(q2)\Delta((q_{1},q_{2}))=\Delta_{1}(q_{1})\odot\Delta_{2}(q_{2}).

Property 1

With the notations from Definition 7, (𝒜1𝒜2)={σ(𝔹𝒫)+(σ(𝒜1))(σ(𝒜2))}\mathscr{L}(\mathcal{A}_{1}\odot\mathcal{A}_{2})=\{\sigma\in(\mathbb{B}^{\mathcal{P}})^{+}\mid(\sigma\in\mathscr{L}(\mathcal{A}_{1}))\odot(\sigma\in\mathscr{L}(\mathcal{A}_{2}))\}. In particular (𝒜1𝒜2)=(𝒜1)(𝒜2)\mathscr{L}(\mathcal{A}_{1}\land\mathcal{A}_{2})=\mathscr{L}(\mathcal{A}_{1})\cap\mathscr{L}(\mathcal{A}_{2}) and (𝒜1𝒜2)=(𝒜1)(𝒜2)\mathscr{L}(\mathcal{A}_{1}\lor\mathcal{A}_{2})=\mathscr{L}(\mathcal{A}_{1})\cup\mathscr{L}(\mathcal{A}_{2}). If \oplus designates the exclusive or operator, testing the equivalence of two automata (𝒜1)=(𝒜2)\mathscr{L}(\mathcal{A}_{1})=\mathscr{L}(\mathcal{A}_{2}) amounts to testing whether (𝒜1𝒜2)=\mathscr{L}(\mathcal{A}_{1}\oplus\mathcal{A}_{2})=\emptyset.

The complementation of an MTDFA (with respect to (𝔹𝒫)+(\mathbb{B}^{\mathcal{P}})^{+} not (𝔹𝒫)(\mathbb{B}^{\mathcal{P}})^{\star}) can be defined using the unary Boolean negation similarly.
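
As a rough illustration of Definition 7 (relying on the toy apply2 and leaves helpers sketched in Appendix 0.A, not on Spot's actual data structures), an MTDFA can be represented as a pair (initial state, dictionary mapping each state to its MTBDD), with terminals labeled by pairs (successor, accepting bit):

    def compose_mtdfa(A1, A2, bool_op):
        # A1 and A2 are pairs (init, delta); delta maps each state to an MTBDD whose
        # terminals are pairs (successor_state, accepting_bit).  Unlike Definition 7,
        # only the product states reachable from the initial pair are constructed.
        (init1, delta1), (init2, delta2) = A1, A2

        def pair_op(t1, t2):                  # extend bool_op to terminal pairs
            (q1, b1), (q2, b2) = t1, t2
            return ((q1, q2), bool_op(b1, b2))

        delta, H = {}, {}
        todo = [(init1, init2)]
        while todo:
            q1, q2 = todo.pop()
            if (q1, q2) in delta:
                continue
            m = apply2(delta1[q1], delta2[q2], pair_op, H)
            delta[(q1, q2)] = m
            for succ, _ in leaves(m):         # enqueue the reachable product states
                todo.append(succ)
        return ((init1, init2), delta)

Passing bool_op = lambda a, b: a and b then yields an MTDFA for the intersection of the two languages, as stated by Property 1.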

Such compositional operations are at the heart of the compositional 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} translations used by Lisa [8], Lydia [19], and LydiaSyft [29]. This is efficient because it allows minimizing intermediate automata before combining them. Our translator tool ltlf2dfa uses such a compositional approach by default. For 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} synthesis, our tool ltlfsynt also has the option to build the automaton by composition, but this is not enabled by default: the on-the-fly construction presented in Algorithm 2 is more efficient. We refer the reader to the artifact [24] for benchmark comparisons involving our own implementation of the compositional approach.

Appendix 0.C Simplified MTDFA

Figure 4: The MTDFA from Figure 1, simplified by merging all states that have an identical MTBDD successor and adjusting the terminals.
Figure 5: Transition-based DFA interpretation of the MTDFA of Figure 4. A word σ(𝔹𝒫)+\sigma\in(\mathbb{B}^{\mathcal{P}})^{+} is accepted if there is a run of the automaton such that each assignment σ(i)\sigma(i) is compatible with the Boolean formula labeling the corresponding transition, and if the last transition visited is accepting (double line). The sink state corresponding to 𝑓𝑓\mathit{ff} has been trimmed for clarity.

Figure 4 shows a simplified version of the MTDFA from Figure 1 that can be obtained by either of the two optimizations discussed in Section 3:

  • merge states with identical MTBDD representations, or

  • apply the (𝖦β)β𝖦β(\mathsf{G}\beta)\land\beta\leadsto\mathsf{G}\beta simplification during construction.

The second optimization is faster, as it does not require computing 𝗍𝗋(q)\mathsf{tr}(q) on some state qq only to later find that the result is identical to some previous 𝗍𝗋(q)\mathsf{tr}(q^{\prime}).

This simplified automaton may also help understand the “transition-based” nature of those MTDFAs. Here we have pairs of terminals with identical formula labels but different acceptance: words are allowed to finish on one, but not on the other. If they continue, they continue from the state specified by the formula. Figure 5 shows an equivalent “transition-based DFA” using notations that should be more familiar to readers used to finite automata.

Appendix 0.D Try it Online!

The Spot Sandbox website offers online access to the development version of Spot (which includes the work described here) via Jupyter notebooks [41] or shell terminals.

In order to try the ltlf2dfa and ltlfsynt command-line tools, simply connect to Spot Sandbox, hit the “New” button, and start a “Terminal”.

The example directory contains two Jupyter notebooks directly related to this submission:

  • backprop.ipynb illustrates Algorithm 1. There, players o and i are called True and False respectively.

  • ltlf2dfa.ipynb illustrates the translation of Section 3 together with the optimizations discussed there, the MTDFA operations mentioned in Appendix 0.B, and some other game-solving techniques not discussed here.

An HTML version of these two notebooks can also be found in directory more-examples/ of the associated artifact [24].

Appendix 0.E Backpropagation of Losing Vertices

The example of Figure 2 does not make it very clear how marking 𝑓𝑓\mathit{ff} as a losing vertex (i.e., winning for i) may improve the on-the-fly game solving: it does not help in that example.

Figure 6: If this game is created on-the-fly, it is useful to mark the 𝑓𝑓\mathit{ff} terminal as losing. When Δ(ι)\Delta(\iota) is encoded into a game, the fact that 𝑓𝑓\mathit{ff} is winning for player i will cause the two round vertices above it to be immediately marked as winning as well. Now, since the initial vertex is known to be winning for i, hence losing for o, the exploration may stop without having to encode Δ(α)\Delta(\alpha).

Figure 6 shows a scenario where marking states as losing and propagating this information is useful to avoid unnecessary exploration of a large part of the automaton. Algorithm 2, described in Section 4, translates one state of the MTDFA at a time, starting from ι\iota, and encodes that state into a game by calling new_vertex, new_edge, etc. In the example of Figure 6, after the MTBDD for Δ(ι)\Delta(\iota) has been encoded (the top five nodes of Figure 6), the initial node is already marked as winning for i (because i can select appropriate values of i0i_{0} and i1i_{1} to reach 𝑓𝑓\mathit{ff}); therefore, the algorithm can stop immediately. Had we decided to backpropagate only the states winning for player o, the algorithm would have had to continue encoding Δ(α)\Delta(\alpha) into the game, and probably many other states reachable from there. At the end of the backpropagation, the initial node would still be undetermined, and only then would we conclude that o cannot win.

Such an interruption of the on-the-fly exploration does not only occur for the initial state, but at every step of the search: if, during the encoding of Δ(α)\Delta(\alpha), we find that the winning status of the root node of Δ(α)\Delta(\alpha) is already determined (line 2 of Algorithm 2), then it is unnecessary to explore the rejecting leaves of Δ(α)\Delta(\alpha).

Appendix 0.F Implementation Details: MTBDDs in BuDDy

BuDDy [42] is a BDD library created by Jørn Lind-Nielsen for his Ph.D. project. Maintenance was passed to someone else in 2004. The Spot developer has contributed a few changes and fixes to the “original” project, but it soon became apparent that some of the changes motivated by Spot’s needs could not be merged upstream (e.g., because they would break other projects for the sake of efficiency). Nowadays, Spot is distributed with its own fork of BuDDy that includes several extra functions, a more compact representation of BDD nodes (16 bytes per node instead of 20), and an iterative (“derecursived”) implementation of the most common BDD operations. Moving away from BuDDy to another BDD library would be very challenging. Therefore, for this work, we modified BuDDy to add support for MTBDDs with int-valued terminals (our MTBDD implementation knows nothing about 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}}).

Our implementation differs from Mona’s MTBDDs or CUDD’s ADDs in several ways. First, BuDDy is designed around a global unicity table, which stores reference-counted BDDs. There is no notion of “BDD manager”, as in Mona or CUDD, that allows building independent BDDs. We introduced support for MTBDDs directly into this table, by reserving the highest possible variable number to indicate a terminal (storing the terminal’s value in the 𝗅𝗈𝗐\mathsf{low} link, as suggested by our notation in this paper), and by adding an extra check in the garbage collector so that it correctly deals with those nodes. This change allows mixing MTBDD terminals with the regular BDD terminals (false and true). Existing BDD functions will work as they always have when a BDD does not use the new terminals; if multi-terminals are used, a new set of functions must be used.

In CUDD’s ADD implementation, the set of operations that can be passed to the equivalent of the apply2 function (see Algorithm 4) is restricted to a fixed set of algebraic operations with well-defined semantics. In Mona and in our implementation, the user may pass an arbitrary function in order to interpret the terminals (which can only store an integer) and combine them. For instance, to implement the presented algorithm, where terminals are labeled by pairs (α,b)𝖫𝖳𝖫𝖿(𝒫)×𝔹(\alpha,b)\in{\mathsf{LTL_{f}}}(\mathcal{P})\times\mathbb{B}, we store bb in the lowest bit of the terminal’s value, and use the other bits as an index into an array that stores α\alpha. If we create a new formula while combining two terminals, we add the new formula to that array, and build the value of the newly formed terminal from the corresponding index in that array.
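
A hypothetical Python rendition of this encoding (the names formulas, formula_index, encode_terminal, and decode_terminal are ours, for illustration; they do not exist in BuDDy or Spot) could be:

    formulas = []        # formulas[i] is the LTLf formula associated with index i
    formula_index = {}   # reverse map, so that identical formulas share an index

    def encode_terminal(formula, accepting):
        # Pack (formula, accepting bit) into a single integer terminal value:
        # the accepting bit goes in the lowest bit, the formula index in the other bits.
        if formula not in formula_index:
            formula_index[formula] = len(formulas)
            formulas.append(formula)
        return (formula_index[formula] << 1) | int(accepting)

    def decode_terminal(value):
        return formulas[value >> 1], bool(value & 1)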

One issue with implementing MTBDD operations is how to implement the operation cache (the HH argument of Algorithm 4) when the function to apply on the leaves is supplied by the user. Since the supplied function may depend on global variables, it is important that this operation cache can be reset by the user.

We implement those user-controlled operation caches using lossy hash tables similar to those used internally by BuDDy for classical BDD operations. In Algorithm 4, the line H[(m1,m2,)]rH[(m_{1},m_{2},\odot)]\leftarrow r that saves the result of the last operation may actually erase the result of a previous operation that was hashed to the same index. Therefore, the efficiency of our MTBDD algorithms depends on how many collisions they generate, which in turn depends on the size allocated for this hash table: ideally, HH should have a size of the same order as the number of BDD nodes used in the MTBDD resulting from the operation. We use two empirical heuristics to estimate a size for HH. For unary operations on MTDFAs, we set |H|=|𝒫||𝒬|/2|H|=|\mathcal{P}|\cdot|\mathcal{Q}|/2, and for binary operations on MTDFAs (e.g., Def. 7), we set |H|=|𝒫1𝒫2||𝒬1||𝒬2|/4|H|=|\mathcal{P}_{1}\cup\mathcal{P}_{2}|\cdot|\mathcal{Q}_{1}|\cdot|\mathcal{Q}_{2}|/4. For operations performed during the translation of 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} formulas to MTDFAs (Th. 3.1), we use a hash table whose size is 20%20\% of the total number of nodes allocated by BuDDy, shared by all MTBDD operations performed during the translation.

Mona handles those caches differently: it also estimates an initial size for those caches (with different formulas [40]), but by default it will handle any collision by chaining, growing an overflow table to store collisions as needed. This difference probably contributes to the additional “out-of-memory” errors that Mona-based tools tend to show in our benchmarks.

Appendix 0.G Simple Rewriting Rules

We use a specification decomposition technique based on [31]. We try to rewrite the input specification φ\varphi into a conjunction φ=iφi\varphi=\bigwedge_{i}\varphi_{i}, where the φi\varphi_{i} use pairwise disjoint sets of output variables. Formula Ψ=Ψ1Ψ2\Psi=\Psi_{1}\land\Psi_{2} from Example 1 is already in this form. However, in general, the specification may be more complex, like 𝖦(ξ0)iξi\mathsf{G}(\xi_{0})\rightarrow\bigwedge_{i}\xi_{i}. In such a case, we rewrite the formula as i(𝖦(ξ0)ξi)\bigwedge_{i}(\mathsf{G}(\xi_{0})\rightarrow\xi_{i}) before partitioning the terms of this conjunction into groups that use overlapping sets of output variables. Such a rewriting, necessary for an effective decomposition, may introduce a lot of redundancy in the formula (in this example, 𝖦(ξ0)\mathsf{G}(\xi_{0}) is duplicated several times).

For this reason, we apply simple language-preserving rewritings on 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}}{} formulas before attempting to translate them into an MTDFA. These rewritings undo some of the changes that had to be done earlier to look for possible decompositions. They are also performed when decomposition is disabled. More generally, the goal is to reduce the number of temporal operators, in order to reduce the number of MTBDD operations that need to be performed.

(αβ)(αγ)\displaystyle(\alpha\rightarrow\beta)\land(\alpha\rightarrow\gamma) α(βγ)\displaystyle\leadsto\alpha\rightarrow(\beta\land\gamma) (1)
(αβ)(γδ)\displaystyle(\alpha\rightarrow\beta)\lor(\gamma\rightarrow\delta) (¬α)β(¬γ)δ\displaystyle\leadsto(\lnot\alpha)\lor\beta\lor(\lnot\gamma)\lor\delta (2)
i𝖦(αi)j𝖦𝖥(βj)\displaystyle\bigwedge_{i}\mathsf{G}(\alpha_{i})\land\bigwedge_{j}\mathsf{G}\mathsf{F}(\beta_{j}) 𝖦(iαi𝖥(jβj))\displaystyle\leadsto\mathsf{G}(\bigwedge_{i}\alpha_{i}\land\mathsf{F}(\bigwedge_{j}\beta_{j})) (3)
i𝖥(αi)j𝖥𝖦(βj)\displaystyle\bigvee_{i}\mathsf{F}(\alpha_{i})\lor\bigvee_{j}\mathsf{F}\mathsf{G}(\beta_{j}) 𝖥(iαi𝖦(jβj))\displaystyle\leadsto\mathsf{F}(\bigvee_{i}\alpha_{i}\lor\mathsf{G}(\bigvee_{j}\beta_{j})) (4)
𝖷α𝖷β\displaystyle\mathsf{X}\alpha\land\mathsf{X}\beta 𝖷(αβ)\displaystyle\leadsto\mathsf{X}(\alpha\land\beta) (5)
𝖷α𝖷β\displaystyle\mathsf{X}\alpha\lor\mathsf{X}\beta 𝖷(αβ)\displaystyle\leadsto\mathsf{X}(\alpha\lor\beta) (6)
𝖷!α𝖷!β\displaystyle\mathsf{X^{!}}\alpha\land\mathsf{X^{!}}\beta 𝖷!(αβ)\displaystyle\leadsto\mathsf{X^{!}}(\alpha\land\beta) (7)
𝖷!α𝖷!β\displaystyle\mathsf{X^{!}}\alpha\lor\mathsf{X^{!}}\beta 𝖷!(αβ)\displaystyle\leadsto\mathsf{X^{!}}(\alpha\lor\beta) (8)
𝖦𝖥(α)\displaystyle\mathsf{G}\mathsf{F}(\alpha) 𝖦𝖥(αr)\displaystyle\leadsto\mathsf{G}\mathsf{F}(\alpha_{r}) (9)
𝖥𝖦(α)\displaystyle\mathsf{F}\mathsf{G}(\alpha) 𝖦𝖥(αr)\displaystyle\leadsto\mathsf{G}\mathsf{F}(\alpha_{r}) (10)

Equation (2) is the only rewriting that does not reduce the number of operators. However, our implementation automatically removes duplicate operands of nn-ary operators such as \land or \lor, and such duplicates are more likely to appear after this rewriting.

In 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}}, formulas 𝖦𝖥(α)\mathsf{G}\mathsf{F}(\alpha) and 𝖥𝖦(α)\mathsf{F}\mathsf{G}(\alpha) are equivalent, and specify that α\alpha should hold on the last position of the word. Therefore, in (9)–(10), any temporal operators in α\alpha can be removed using the same rules as in Theorem 0.H.1 in Appendix 0.H.
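
For example, applying rule (10) and then the rules of Theorem 0.H.1 to the operand gives \mathsf{F}\mathsf{G}(a\land\mathsf{X}b)\leadsto\mathsf{G}\mathsf{F}(a\land\mathit{tt})=\mathsf{G}\mathsf{F}(a), since a weak next is vacuously satisfied on the last position of a word.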

Appendix 0.H One-step (Un)Realizability Checks

To test whether an 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} formula φ\varphi is realizable or unrealizable in one step, we can rewrite the formula into a Boolean formula φr\varphi_{r} or φu\varphi_{u} using one of the following theorems, which follow from the 𝖫𝖳𝖫𝖿\mathsf{LTL_{f}} semantics.

Testing whether a Boolean formula is (un)realizable can then be achieved by representing that formula as a BDD, and removing the input/output variables by universal/existential quantification, in the order required by the chosen semantics (Moore or Mealy).
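
As an illustration of the quantification order, the following naive Python sketch enumerates assignments instead of using BDDs, and assumes Mealy-style semantics in which the outputs may depend on the current inputs; one-step realizability of the Boolean formula φr defined in Theorem 0.H.1 below then amounts to checking that every input assignment admits an output assignment satisfying it:

    from itertools import product

    def one_step_realizable(phi_r, inputs, outputs):
        # phi_r: a function from an assignment (dict: variable -> bool) to a bool.
        # One-step realizable iff for every input assignment there is an output
        # assignment that makes phi_r true (outputs may react to the current inputs).
        for ivals in product([False, True], repeat=len(inputs)):
            env = dict(zip(inputs, ivals))
            found = False
            for ovals in product([False, True], repeat=len(outputs)):
                if phi_r({**env, **dict(zip(outputs, ovals))}):
                    found = True
                    break
            if not found:
                return False
        return True

    # Example: phi_r = i0 -> o0 is one-step realizable (set o0 to True when i0 holds).
    assert one_step_realizable(lambda w: (not w["i0"]) or w["o0"], ["i0"], ["o0"])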

Theorem 0.H.1 (One-step realizability [53, Th. 2])

For φ𝖫𝖳𝖫𝖿(𝒫)\varphi\in{\mathsf{LTL_{f}}}(\mathcal{P}), define φr\varphi_{r} inductively using the following rules:

𝑓𝑓r\displaystyle\mathit{ff}_{r} =𝑓𝑓\displaystyle=\mathit{ff} (𝖷!α)r\displaystyle(\mathsf{X^{!}}\alpha)_{r} =𝑓𝑓\displaystyle=\mathit{ff} (𝖦α)r\displaystyle(\mathsf{G}\alpha)_{r} =αr\displaystyle=\alpha_{r} (α𝖱β)r\displaystyle(\alpha\mathbin{\mathsf{R}}\beta)_{r} =βr\displaystyle=\beta_{r}
𝑡𝑡r\displaystyle\mathit{tt}_{r} =𝑡𝑡\displaystyle=\mathit{tt} (𝖷α)r\displaystyle(\mathsf{X}\alpha)_{r} =𝑡𝑡\displaystyle=\mathit{tt} (𝖥α)r\displaystyle(\mathsf{F}\alpha)_{r} =αr\displaystyle=\alpha_{r} (α𝖴β)r\displaystyle(\alpha\mathbin{\mathsf{U}}\beta)_{r} =βr\displaystyle=\beta_{r}
pr\displaystyle p_{r} =pfor p𝒫\displaystyle=p\mathrlap{\quad\text{for~}p\in\mathcal{P}} (¬α)r\displaystyle(\lnot\alpha)_{r} =¬(αr)\displaystyle=\lnot(\alpha_{r}) (αβ)r\displaystyle(\alpha\odot\beta)_{r} =αrβr\displaystyle=\alpha_{r}\odot\beta_{r}

Where {,,,,}\odot\in\{\land,\lor,\rightarrow,\leftrightarrow,\oplus\}.

If the Boolean formula φr\varphi_{r} is realizable, then φ\varphi is realizable too.

Theorem 0.H.2 (One-step unrealizability [53, Th. 3])

Consider a formula φ𝖫𝖳𝖫𝖿(𝒫)\varphi\in{\mathsf{LTL_{f}}}(\mathcal{P}). To simplify the definition, we assume φ\varphi to be in negative normal form (i.e., negations have been pushed down the syntactic tree, and may only occur in front of variables, and operators \rightarrow, \leftrightarrow, \oplus have been rewritten away). We define φu\varphi_{u} inductively as follows:

𝑓𝑓u\displaystyle\mathit{ff}_{u} =𝑓𝑓\displaystyle=\mathit{ff} (𝖷!α)u\displaystyle(\mathsf{X^{!}}\alpha)_{u} =𝑡𝑡\displaystyle=\mathit{tt} (𝖦α)u\displaystyle(\mathsf{G}\alpha)_{u} =αu\displaystyle=\alpha_{u} (α𝖱β)u\displaystyle(\alpha\mathbin{\mathsf{R}}\beta)_{u} =αuβu\displaystyle=\alpha_{u}\land\beta_{u} (αβ)u\displaystyle(\alpha\land\beta)_{u} =αuβu\displaystyle=\alpha_{u}\land\beta_{u}
𝑡𝑡u\displaystyle\mathit{tt}_{u} =𝑡𝑡\displaystyle=\mathit{tt} (𝖷α)u\displaystyle(\mathsf{X}\alpha)_{u} =𝑡𝑡\displaystyle=\mathit{tt} (𝖥α)u\displaystyle(\mathsf{F}\alpha)_{u} =αu\displaystyle=\alpha_{u} (α𝖴β)u\displaystyle(\alpha\mathbin{\mathsf{U}}\beta)_{u} =αuβu\displaystyle=\alpha_{u}\lor\beta_{u} (αβ)u\displaystyle(\alpha\lor\beta)_{u} =αuβu\displaystyle=\alpha_{u}\lor\beta_{u}

For any variable p𝒫p\in\mathcal{P}, we have pu=pp_{u}=p and (¬p)u=¬p(\lnot p)_{u}=\lnot p.

If the Boolean formula φu\varphi_{u} is not realizable, then φ\varphi is not realizable.

Appendix 0.I More Benchmark Results

The SyntComp benchmarks contain specifications that can be partitioned into three groups:

game

Those specifications describe two-player games. They have three subfamilies [52]: single counter, double counters, and nim.

pattern

Those specifications are scalable patterns built either by nesting 𝖴\mathbin{\mathsf{U}} operators, or by taking conjunctions of terms such as 𝖦(vi)\mathsf{G}(v_{i}) or 𝖥(vj)\mathsf{F}(v_{j}) [53].

random

Those specifications are random conjunctions of 𝖫𝖳𝖫𝖿{\mathsf{LTL_{f}}} specifications [56].

Of these three sets, the games are the most challenging to solve. The patterns use each variable only once, so they can all be reduced to 𝑡𝑡\mathit{tt} or 𝑓𝑓\mathit{ff} by the preprocessing technique discussed in Section 5, or by the one-step (un)realizability checks. Since random specifications are built as a conjunction of subspecifications that often have nonintersecting variable sets, they can very often be decomposed into output-disjoint specifications that can be solved separately [31].

Table 1 shows how the different tools fare on each of these benchmark groups.

Table 1: Count of the different tool outputs, grouped by benchmark type. Statuses false and true indicate how many times a tool successfully decided unrealizability (false) or realizability (true). The other statuses are error conditions: TIMEOUT (over 15 minutes), OOMEM (over 4GB), ABORT (aborted), SEGV (segmentation violation). The latter two errors are likely caused by incorrect handling of out-of-memory conditions.


Figure 7 compares the best configuration of ltlfsynt against Nike: there are no cases where Nike is faster. If we disable preprocessing in ltlfsynt, the comparison is more balanced, as shown in Figure 8. Note that we have kept the one-step (un)realizability checks enabled in this comparison, because Nike uses them too; this is the reason why the pattern benchmarks are solved instantaneously by both tools.

Figure 7: Scatter plots comparing time and memory usage of Nike against ltlfsynt’s best configuration. Dots on the lines marked as T and M on the side represent timeouts or out-of-memory cases.
Figure 8: Scatter plots comparing time and memory usage of Nike against ltlfsynt’s on-the-fly construction but without preprocessing.

Tables 2, 3, and 4 look at the runtime of the tools on game benchmarks. Values highlighted in yellow are within 5% of the minimum value of each line.

Table 2 shows a family of specifications where the preprocessings are useless, and where using the one-step (un)realizability checks slows things down.

Table 3 shows a family of specifications where the one-step (un)realizability checks are what allows ltlfsynt to solve many more instances than the other tools (even tools like Nike or LydiaSyft, which also implement one-step (un)realizability). The suspicious behavior of Lydia/LydiaSyft/SyftMax cycling between timeouts, segmentation faults, and out-of-memory errors has been double-checked: this is really how they terminated.

Finally, Table 4 shows very impressive results for ltlfsynt on the challenging Nim family of benchmarks: the largest instance that third-party tools are able to solve is nim_04_01, but ltlfsynt solves it instantaneously in all configurations, and can handle much larger instances.

Table 2: Runtime of the different configurations on the Single Counter benchmark.


Table 3: Runtime of the different configurations on the Double Counters benchmark.


Table 4: Runtime of the different configurations on the Nim benchmark.


A more detailed analysis of the benchmark results can be found in directory ltlfsynt-analysis/ of the associated artifact [24].