
Faculty of Engineering, Environment and Computing, Coventry University, Coventry, CV1 5FB, UK
{Dorian.Florescu, Matthew.England}@coventry.ac.uk

Algorithmically generating new algebraic features of polynomial systems for machine learning

Dorian Florescu    Matthew England
Abstract

There are a variety of choices to be made in both computer algebra systems (CASs) and satisfiability modulo theory (SMT) solvers which can impact performance without affecting mathematical correctness. Such choices are candidates for machine learning (ML) approaches; however, there are difficulties in applying standard ML techniques, such as the efficient identification of ML features from input data, which is typically a polynomial system. Our focus is selecting the variable ordering for cylindrical algebraic decomposition (CAD), an important algorithm implemented in several CASs, and now also SMT-solvers. We created a framework to describe all the previously identified ML features for the problem and then enumerated all options in this framework to automatically generate many more features. We validate the usefulness of these with an experiment which shows that an ML choice for CAD variable ordering is superior to those made by human created heuristics, and further improved with these additional features. We expect that this technique of feature generation could be useful for other choices related to CAD, or even choices for other algorithms that take polynomial systems as input.

Keywords:
machine learning; feature generation; non-linear real arithmetic; symbolic computation; cylindrical algebraic decomposition

1 Introduction

Machine Learning (ML), that is, statistical techniques that give computer systems the ability to learn rules from data, has found great success in a diverse range of fields over recent years. ML is most attractive when the underlying functional relationship to be modelled is complex or not well understood. Hence ML has yet to make a large impact in the fields which form SC^2, Symbolic Computation and Satisfiability Checking [1], since these prize mathematical correctness and seek to understand underlying functional relationships. However, as most developers would acknowledge, our software usually comes with a range of choices which, while having no effect on the correctness of the end result, could have a great effect on the resources required to find it. These choices range from the low level (in what order to perform a search that may terminate early) to the high level (which of a set of competing exact algorithms to use for this problem instance). In making such choices we may be faced with decisions where relationships are not fully understood, but are not the key object of study.

In practice such choices may be made by man-made heuristics based on some experimentation (e.g. [18]) or magic constants where crossing a single threshold changes system behaviour [11]. It is likely that many of these decisions could be improved by allowing learning algorithms to analyse the data. The broad topic of this paper is ML for algorithm choices where the input is a set of polynomials, which encompasses a variety of tools in computer algebra systems and the SMT theory of [Quantifier Free] Non-Linear Real Arithmetic, [QF]NRA.

There has been little research on the use of ML in computer algebra: only [28], [27], [24] on the topic of CAD variable ordering choice; [26], [27] on the question of whether to precondition CAD with Groebner Bases; and [31] on deciding the order of sub-formulae solving for a QE procedure. Within SMT there has been significant work on the Boolean logic side, e.g. the portfolio SAT solver SATzilla [45] and MapleSAT [33], which views solver branching as an optimisation problem. However, there is little work on the use of ML to choose or optimise theory solvers. We note that other fields of mathematical software are ahead in the use of ML, most notably the automated reasoning community (see e.g. [42], [32], [7], or the brief survey in [19]).

1.1 Difficulties with ML for problems in NRA

There are difficulties in applying standard ML techniques to problems in NRA. One is the lack of sufficiently large datasets, which is addressed only partially by the SMT-LIB. The experiment in [26] found the [QF]NRA sections of the SMT-LIB too uniform, and had to resort to randomly generated examples (although the state of benchmarking in computer algebra is far worse [22]). There have been improvements since then, with the benchmarks increasing both in number and diversity of underlying application. For example, there are now problems arising from biology [4], [23] and economics [37], [38].

Another difficulty is the identification of suitable features from the input with which to train the ML models. There are some obvious candidates concerning the size and degrees of polynomials, and the distribution of variables. However, this provides a starting set (i.e. before any feature selection takes place) that is small in comparison to other machine learning applications. The main focus of this paper is to introduce a method to automatically (and cheaply) generate further features for ML from polynomial systems.

1.2 Contribution and plan

Our main contributions are the new feature generation approach described in Section 3 and the validation of its use in the experiments described in Sections 4-5. The experiments are for the choice of variable ordering for cylindrical algebraic decomposition, a topic whose background we first present in Section 2, but we emphasise that the techniques may be applicable more broadly.

2 Background on variable ordering for CAD

2.1 Cylindrical algebraic decomposition

A Cylindrical Algebraic Decomposition (CAD) is a decomposition of ordered \mathbb{R}^{n} space into cells arranged cylindrically: the projections of any pair of cells with respect to the variable ordering are either equal or disjoint. The projections form an induced CAD of the lower dimensional space. The cells are (semi)-algebraic, meaning each can be described with a finite sequence of polynomial constraints.

A CAD is produced to be truth-invariant for a logical formula (so the formula is either true or false on each cell). Such a decomposition can then be used to perform Quantifier Elimination (QE) over the reals, i.e. given a quantified Tarski formula find an equivalent quantifier free formula over the reals. For example, QE would transform \exists x,\,ax^{2}+bx+c=0\land a\neq 0 to the equivalent unquantified statement b^{2}-4ac\geq 0. A CAD over the (x,a,b,c)-space could be used to ascertain this, so long as the variable ordering ensured that there was an induced CAD of (a,b,c)-space. We test one sample point per cell and construct a quantifier free formula from the relevant semi-algebraic cell descriptions.

CAD was introduced by Collins in 1975 [15] and works relative to a set of polynomials. Collins’ CAD produces a decomposition so that each polynomial has constant sign on each cell (thus truth-invariant for any formula built with those polynomials). The algorithm first projects the polynomials into smaller and smaller dimensions; and then uses these to lift - to incrementally build decompositions of larger and larger spaces according to the polynomials at that level. For further details on CAD see for example the collection [12].

QE has numerous applications throughout science and engineering [41]. Our work also speeds up independent applications of CAD, such as reasoning with multi-valued functions [17] or motion planning [44].

2.2 Variable ordering

The definition of cylindricity and both stages of the algorithm are relative to an ordering of the variables. For example, given polynomials in variables ordered as x_{n}\succ x_{n-1}\succ\dots\succ x_{2}\succ x_{1} we first project away x_{n} and so on until we are left with polynomials univariate in x_{1}. We then start lifting by decomposing the x_{1}-axis, and then the (x_{1},x_{2})-plane and so on. The cylindricity condition refers to projections of cells in \mathbb{R}^{n} onto a space (x_{1},\dots,x_{m}) where m<n. There have been numerous advances to CAD since its inception: new projection schemes [34], [36]; partial construction [16], [43]; symbolic-numeric lifting [40], [29]; adapting to the Boolean structure [5], [20]; and adaptations for SMT [30], [9]. However, in all cases, the need for a fixed variable ordering remains.

Depending on the application, the variable ordering may be determined, constrained, or free. QE requires that quantified variables are eliminated first and that variables are eliminated in the order in which they are quantified. However, variables in blocks of the same quantifier (and the free variables) can be swapped, so there is partial freedom. Of course, in the SMT context there is only a single existentially quantified block and so there is a free choice of ordering. So the discriminant in the example above could have been found with any CAD which eliminates x first. A CAD for the quadratic polynomial under ordering a\prec b\prec c has only 27 cells, but needs 115 for the reverse ordering.

Since we can switch the order of quantified variables in a statement when the quantifier is the same, we also have some choice on the ordering of quantified variables. For example, a QE problem of the form \exists x\exists y\forall a\,\phi(x,y,a) could be solved by a CAD under either ordering x\succ y\succ a or ordering y\succ x\succ a.

The choice of variable ordering can have a great effect on the time and memory use of CAD, and the number of cells in the output. Further, Brown and Davenport presented a class of problems in which one variable ordering gave output of double exponential complexity in the number of variables and another output of a constant size [10].

2.3 Prior work on choosing the variable ordering

Heuristics have been developed to choose a variable ordering, with Dolzmann et al. [18] giving the best known study. After analysing a variety of metrics they proposed a heuristic, sotd, which constructs the full set of projection polynomials for each permitted ordering and selects the ordering whose corresponding set has the lowest sum of total degrees for each of the monomials in each of the polynomials. The second author demonstrated examples for which that heuristic could be misled in [6]; and then later showed that tailoring to an implementation could improve performance [21]. These heuristics all involved potentially costly projection operations on the input polynomials.

In [28] the second author of the present paper collaborated to use a support vector machine to choose which of three human made heuristics to believe when picking the variable ordering, based only on simple features of the input polynomials. The experiments identified substantial subclasses on which each of the three heuristics made the best decision, and demonstrated that the machine learned choice did significantly better than any one heuristic overall. This work was picked up again in [24] by the present authors, where ML was used to predict directly the variable ordering for CAD leading to the shortest computing time, with experiments conducted for four different ML models.

Both [28] and [24] used a set of 11 human identified features. These did lead to good performance of the models, with ML outperforming the prior human created heuristics, but a starting set of 11 features is relatively small for ML and so we hypothesise that identifying more would improve the results.

3 Generating new features algorithmically

3.1 Existing features for sets of polynomials

An early heuristic for the choice of CAD variable ordering is that of Brown [8], which chooses a variable ordering according to the following criteria, starting with the first and breaking ties with successive ones.

  1. Eliminate a variable first if it appears with the lowest overall degree in the input.

  2. For each variable calculate the maximum total degree for the set of terms in the input in which it occurs. Eliminate first the variable for which this is lowest.

  3. Eliminate a variable first if there is a smaller number of terms in the input which contain the variable.

Despite being computationally cheaper than the sotd heuristic (the latter performs projections before measuring degrees), Brown's heuristic actually performed slightly better in the experiments of [28], although the key message from those experiments is that there were substantial subsets of problems for which each heuristic made a better choice than the others.
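To illustrate how cheaply these measures can be computed from the input, the following is a minimal sketch of Brown's three criteria in Python using sympy. The function names and data layout are illustrative only; this is not the authors' implementation.

```python
# Illustrative sketch of Brown's criteria (not the authors' implementation).
from sympy import symbols, Poly

def brown_key(polys, var, variables):
    """Ranking tuple for `var`: the variable with the smallest tuple
    (lowest overall degree, ties broken by criteria (2) then (3))
    is eliminated first."""
    ps = [Poly(p, *variables) for p in polys]
    i = variables.index(var)
    # (1) overall degree of var in the input
    overall_deg = max(p.degree(var) for p in ps)
    # (2) maximum total degree of the terms containing var
    max_tdeg = max([0] + [sum(mon) for p in ps for mon in p.monoms() if mon[i] > 0])
    # (3) number of terms containing var
    n_terms = sum(1 for p in ps for mon in p.monoms() if mon[i] > 0)
    return (overall_deg, max_tdeg, n_terms)

x1, x2, x3 = symbols('x1 x2 x3')
variables = [x1, x2, x3]
polys = [x1**2*x2 - x3, x1*x2**4*x3**2 + x1*x3]
elimination_order = sorted(variables, key=lambda v: brown_key(polys, v, variables))
print(elimination_order)   # the first variable printed is eliminated first
```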

The Brown heuristic inspired almost all the features used by the authors of [28], [24] to perform ML for CAD variable ordering, with the full set of 11 features listed in Table 1 (column 3 will be explained later).

Table 1: Features used by ML in [28] to choose the ordering of 3 variables for CAD.
 #  Description                                    f_{v}
 1  Number of polynomials                          P
 2  Maximum total degree of polynomials            \max_{m,p}\left(\sum_{v}d_{v}^{m,p}\right)
 3  Maximum degree of x_{1} among all polynomials  \max_{m,p}d_{1}^{m,p}
 4  Maximum degree of x_{2} among all polynomials  \max_{m,p}d_{2}^{m,p}
 5  Maximum degree of x_{3} among all polynomials  \max_{m,p}d_{3}^{m,p}
 6  Proportion of x_{1} occurring in polynomials   \text{av}_{p}\left(\text{sgn}\left(\sum_{m}d_{1}^{m,p}\right)\right)
 7  Proportion of x_{2} occurring in polynomials   \text{av}_{p}\left(\text{sgn}\left(\sum_{m}d_{2}^{m,p}\right)\right)
 8  Proportion of x_{3} occurring in polynomials   \text{av}_{p}\left(\text{sgn}\left(\sum_{m}d_{3}^{m,p}\right)\right)
 9  Proportion of x_{1} occurring in monomials     \text{av}_{m,p}\left(\text{sgn}\left(d_{1}^{m,p}\right)\right)
10  Proportion of x_{2} occurring in monomials     \text{av}_{m,p}\left(\text{sgn}\left(d_{2}^{m,p}\right)\right)
11  Proportion of x_{3} occurring in monomials     \text{av}_{m,p}\left(\text{sgn}\left(d_{3}^{m,p}\right)\right)

3.2 A new framework for generating polynomial features

Our new feature generation procedure is based on the observation that all the measurements taken by the Brown heuristic, and all those features used in [28], [24], can be formalised mathematically using a small number of functions. For simplicity, the following discussion is restricted to polynomials in 3 variables, as these were used in the experiments that follow, but everything generalises in an obvious way to n variables.

Let a problem instance \bm{Pr} be defined by a set of P polynomials

\bm{Pr}=\{\mathcal{P}_{p}\,|\,p=1,\dots,P\}.   (1)

This is the case for producing a sign-invariant CAD. Of course, any problem instance consisting of a logical formula whose atoms are polynomial sign conditions can also have such a set extracted.

In the following we define the notation for polynomial variables and coefficients that will be used throughout the manuscript. Each polynomial with index p, for p=1,\dots,P, contains a different number of monomials, which will be labelled with index m, where m=1,\dots,M_{p} and M_{p} denotes the number of monomials in polynomial p. We note that these are just labels and are not setting an ordering themselves. The degrees corresponding to each of the variables x_{1},x_{2},x_{3} are a function of m and p. These need to be explicitly labelled in order to allow a rigorous definition of our proposed procedure of feature generation.

We next write each polynomial as

\mathcal{P}_{p}=\sum_{m=1}^{M_{p}}c^{m,p}\cdot x_{1}^{d_{1}^{m,p}}x_{2}^{d_{2}^{m,p}}x_{3}^{d_{3}^{m,p}},\quad p=1,\dots,P.   (2)

Here, x_{v} represents the polynomial variables (v=1,2,3). Thus for each monomial in each polynomial there is a tuple (m,p) of positive integers that labels it. Then in turn we denote by d_{v}^{m,p} the degree of variable x_{v} in that monomial, and by c^{m,p} the constant coefficient, i.e., tuple superscripts give a label for a monomial in a problem. The indices are simply a labelling and not an ordering of the variables x_{1},x_{2},x_{3}.

Therefore, any one of our problem instances \bm{Pr} is uniquely represented by a set of sets

\mathbb{S}_{\bm{Pr}}=\big\{\big\{\left[c^{m,p},(d_{1}^{m,p},d_{2}^{m,p},d_{3}^{m,p})\right]\,|\,m=1,\dots,M_{p}\big\}\,|\,p=1,\dots,P\big\}.   (3)
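To make the notation concrete, the following sketch (with illustrative names, assuming sympy is available) extracts the representation (3) from a list of sympy polynomials in three variables.

```python
# Sketch: extract the representation (3), i.e. for each polynomial the set of
# [coefficient, (d1, d2, d3)] pairs, from sympy expressions in three variables.
from sympy import symbols, Poly

x1, x2, x3 = symbols('x1 x2 x3')

def representation(polys, gens=(x1, x2, x3)):
    S = []
    for p in polys:
        P = Poly(p, *gens)
        # Poly.terms() yields (exponent tuple, coefficient) for every monomial
        S.append([[coeff, degs] for degs, coeff in P.terms()])
    return S

Pr = [x1**2*x2 - x3, x1*x2**4*x3**2 + x1*x3]
print(representation(Pr))
# [[[1, (2, 1, 0)], [-1, (0, 0, 1)]], [[1, (1, 4, 2)], [1, (1, 0, 1)]]]
```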

Observe now that each of Brown’s measures can be formalised as a vector of features for choosing a variable as follows.

  1. Overall degree in the input of a variable: \max_{m,p}d_{v}^{m,p}.

  2. Maximum total degree of those terms in the input in which a variable occurs:
     \max_{m,p}\text{sgn}(d_{v}^{m,p})\cdot(d_{1}^{m,p}+d_{2}^{m,p}+d_{3}^{m,p}).

  3. Number of terms in the input which contain the variable: \sum_{m,p}\text{sgn}(d_{v}^{m,p}).

In the latter two we use the sign function to discriminate between monomials which contain a variable (sign of degree is positive) and those which do not (sign of degree is zero). Of course the sign of the degree is never negative.

Define now also the averaging functions

\text{av}_{m}\triangleq\frac{1}{M_{p}}\sum_{m},\qquad\text{av}_{p}\triangleq\frac{1}{P}\sum_{p},\qquad\text{av}_{m,p}\triangleq\frac{1}{P}\sum_{p}\frac{1}{M_{p}}\sum_{m}.

Then the features in Table 1 can be formalised similarly to Brown’s metrics, as shown in the third column of Table 1.

We can place all of these scalars into a single framework:

f\left(\bm{Pr}\right)=(g_{4}\circ g_{3}\circ g_{2}\circ g_{1}\circ h^{m,p})\left(\bm{Pr}\right),   (4)

where

h^{m,p}\left(\bm{Pr}\right)\in\big\{d_{v}^{m,p},\,\text{sgn}\left(d_{v}^{m,p}\right)\cdot\left(\textstyle\sum_{v^{\prime}}d_{v^{\prime}}^{m,p}\right)\,|\,v=1,2,3\big\}

and g_{1},g_{2},g_{3}, and g_{4} are all taken from the set

\big\{\textstyle\max_{p},\max_{m},\max_{m,p},\max_{0},\sum_{p},\sum_{m},\sum_{m,p},\sum_{0},\text{av}_{p},\text{av}_{m},\text{av}_{m,p},\text{av}_{0},\text{sgn},\text{sgn}_{0}\big\}.

In the above set \max_{0}, \sum_{0}, \text{av}_{0} and \text{sgn}_{0} are all equal to the identity function.

For example, let \bm{Pr}=\{x_{1}^{2}x_{2}-x_{3},\,x_{1}x_{2}^{4}x_{3}^{2}+x_{1}x_{3}\}. If m=1, p=2, then h^{1,2}\left(\bm{Pr}\right)\in\big\{d_{v}^{1,2},\,\text{sgn}\left(d_{v}^{1,2}\right)\cdot\left(\textstyle\sum_{v^{\prime}}d_{v^{\prime}}^{1,2}\right)\,|\,v=1,2,3\big\}=\big\{1,\,1\cdot 7,\,4,\,4\cdot 7,\,2,\,2\cdot 7\big\}.
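The framework can be sketched in a few lines of Python. The data layout and helper names below are our own illustrative choices, not a prescribed implementation: a problem is stored as a list over p of lists over m of degree tuples, a base measurement h^{m,p} is evaluated on every monomial, and reductions are applied first over m and then over p.

```python
# Illustrative sketch of the framework (4).
import numpy as np

def h_degree(degs, v):                      # d_v^{m,p}
    return degs[v]

def h_signed_total_degree(degs, v):         # sgn(d_v^{m,p}) * (d_1 + d_2 + d_3)
    return (degs[v] > 0) * sum(degs)

def feature(problem, h, v, g_m, g_p):
    """Apply g_m over the monomials of each polynomial, then g_p over polynomials."""
    return g_p([g_m([h(degs, v) for degs in poly]) for degs_unused, poly in [(None, poly) for poly in problem]]) if False else \
           g_p([g_m([h(degs, v) for degs in poly]) for poly in problem])

Pr = [[(2, 1, 0), (0, 0, 1)],               # x1^2*x2 - x3
      [(1, 4, 2), (1, 0, 1)]]               # x1*x2^4*x3^2 + x1*x3

# Brown's measure (2) for x1: max total degree of the terms containing x1
print(feature(Pr, h_signed_total_degree, 0, max, max))                # 7
# Feature 9 of Table 1: proportion of monomials containing x1
print(feature(Pr, lambda d, v: int(d[v] > 0), 0, np.mean, np.mean))   # 0.75
```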

3.3 Generating additional features

We will thus consider deriving all of the other features which fall into this framework, but to do so we must first impose a number of rules.

  1. The functions g_{1},g_{2},g_{3},g_{4} must all belong to distinct categories of function, i.e. one each of \max, \sum, av, and sgn.

  2. Exactly one of the functions g_{1},g_{2},g_{3},g_{4} is computed over p and exactly one is computed over m (it may be the same one).

  3. The computation over p is always performed by a function g_{i} with an index i greater than or equal to that of the function computing over m.

Table 2: Possible distributions of indices to the function classes in the feature framework.
 max   av    sum   sgn
 p,m   0     0     0
 p     m     0     0
 p     0     m     0
 0     p,m   0     0
 0     p     m     0
 0     0     p,m   0
 p,m   0     0     1
 p     m     0     1
 p     0     0     1
 0     p,m   0     1
 0     p     m     1
 0     0     p,m   1

The expression of f\left(\bm{Pr}\right) can be interpreted as follows. The values h^{m,p}\left(\bm{Pr}\right) are functions of the variables m and p. Each of the functions g_{1},g_{2},g_{3},g_{4} either leaves the function unchanged, or turns it into a function of fewer variables (first into a function of p only, and then into a scalar value, representing the ML feature).

The rules above are justified as follows. Rule 1 reduces the redundancy in the feature set. Rules 2 and 3 guarantee that the feature f_{v}\left(\bm{Pr}\right) is well defined and is a scalar number. In particular, Rule 3 is necessary because the computation over the terms in a polynomial is dependent on their number, which is not the same for all polynomials.

The final set \{f^{(1)}(\bm{Pr}),\dots,f^{(N_{f})}(\bm{Pr})\} has size N_{f}=1728 for a problem with 3 variables. This number is attained as follows: we have 12 possible distributions of indices to the functions g_{1},\dots,g_{4} as shown in Table 2; then 4! possible orderings of those functions; and 6 possible choices for h. Thus 4!\cdot 6\cdot 12=1728.
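As a trivial sanity check, the counting can be reproduced directly:

```python
# Counting the candidate features: 12 index distributions (Table 2),
# 4! orderings of the function classes max, av, sum, sgn, and 6 choices of h.
from math import factorial

n_distributions = 12              # rows of Table 2
n_orderings = factorial(4)        # orderings of the four function classes
n_h = 6                           # d_v or sgn(d_v)*(total degree), for v = 1, 2, 3
print(n_distributions * n_orderings * n_h)   # 1728
```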

However, many of these features will be identical (e.g. due to a different placement of the identity function). We do not identify these manually now: the task is trivial for a given dataset, but substantial to do in generality.

4 Machine learning experiment with the new features

We now describe an ML experiment to choose the variable ordering for cylindrical algebraic decomposition. The methodology here is similar to that in our recent paper [24] except for the addition of the extra features from Section 3. A more detailed discussion of the methodology can be found in [24].

4.1 Problem set

We use the nlsat dataset (freely available from http://cs.nyu.edu/~dejan/nonlinear/) produced to evaluate the work in [30]; thus the problems are all fully existentially quantified. Although there are CAD algorithms that reduce what is being computed based on the quantifiers in the input (most notably via Partial CAD [16]), the conclusions drawn are likely to be applicable outside of the SAT context.

We use the 6117 problems with 3 variables from this database, so each has a choice of six different variable orderings. We extracted only the polynomials involved, and randomly divided them into two datasets for training (4612) and testing (1505). Only the former was used to tune the parameters of the ML models.

4.2 Software

We used the CAD routine CylindricalAlgebraicDecompose which is part of the RegularChains Library for Maple. This algorithm builds decompositions first of n-dimensional complex space before refining to a CAD of \mathbb{R}^{n} [14], [13], [3]. We ran the code in Maple 2018 but used an updated version of the RegularChains Library (http://www.regularchains.org). Training and evaluation of the ML models was done using the scikit-learn package [39] v0.20.2 for Python 2.7. The features for ML were extracted using code written with the sympy package v1.3 for Python 2.7, as was Brown’s heuristic. The sotd heuristic was implemented in Maple as part of the ProjectionCAD package [25].

4.3 Timings

CAD construction was timed in a Maple script that was called separately from Python for each CAD (to avoid Maple’s caching of results). The target variable ordering for ML was defined as the one that minimises the computing time for a given problem. All CAD function calls included a time limit. For the training dataset an initial time limit of 4 seconds was used, doubled incrementally if all orderings timed out, until CAD completed for at least one ordering (a target variable ordering could be assigned for all problems using time limits no bigger than 64 seconds). The problems in the testing dataset were processed with a larger time limit of 128 seconds for all orderings (time outs set as 128s).
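The labelling procedure for a training problem can be sketched as below, where cad_time(problem, ordering, limit) is a hypothetical helper (not part of any library used here) wrapping the Maple call under a time limit, returning the runtime or None on timeout.

```python
# Sketch of assigning the target (fastest) ordering to a training problem.
from itertools import permutations

def target_ordering(problem, cad_time, initial_limit=4.0, max_limit=64.0):
    orderings = list(permutations(('x1', 'x2', 'x3')))   # the six orderings
    limit = initial_limit
    while limit <= max_limit:
        times = {o: cad_time(problem, o, limit) for o in orderings}
        finished = {o: t for o, t in times.items() if t is not None}
        if finished:                       # at least one ordering completed
            return min(finished, key=finished.get)
        limit *= 2                         # all orderings timed out: double the limit
    return None                            # unresolved within the maximum limit
```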

4.4 Feature simplification

When computed on a set of problems \{\bm{Pr}_{1},\dots,\bm{Pr}_{N}\}, some of the features f^{(i)} turn out to be constant, i.e. f^{(i)}(\bm{Pr}_{1})=f^{(i)}(\bm{Pr}_{2})=\dots=f^{(i)}(\bm{Pr}_{N}). Such features will have no benefit for ML and are removed. Further, other features may be duplicated, i.e. f^{(i)}(\bm{Pr}_{n})=f^{(j)}(\bm{Pr}_{n}) for all n=1,\dots,N. This repetition may represent a mathematical equality, or just be the case for the given dataset. Either way, such features are merged into a single feature for the experiment. After this step, we are left with 78 features: so while a large majority were redundant, we still have seven times as many as were available in [28], [24].
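A sketch of this simplification step, assuming the features have been collected into a numpy matrix F with one row per problem and one column per feature (names illustrative):

```python
# Sketch: drop constant features, then merge columns that are identical on
# this dataset (a duplicate may be a mathematical identity or a coincidence).
import numpy as np

def simplify_features(F):
    F = F[:, np.ptp(F, axis=0) > 0]                  # remove constant columns
    _, first = np.unique(F, axis=1, return_index=True)
    return F[:, np.sort(first)]                      # keep one copy of each duplicate
```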

4.5 Feature selection

Feature selection was performed with the training dataset to see if any features were redundant for the ML. We chose the Analysis of Variance (ANOVA) F-value to determine the importance of each feature for the classification task. Other choices we considered were unsuitable for our problem, e.g. the mutual information based selection requires very large amounts of data.

The training dataset consists of N=6117 problems with 3 variables, and each problem is assigned a target ordering, or class c=1,\dots,C, where C=6. Let \bm{Pr}_{c,n} denote problem number n from the training dataset that is assigned class number c, c=1,\dots,C and n=1,\dots,N_{c}, where N_{c} denotes the number of problems that are assigned class c. Thus \sum_{c=1}^{C}N_{c}=N.

The F-value for feature number i is computed as follows [35].

F_{i}=\frac{\frac{1}{C-1}\sum_{c=1}^{C}N_{c}\left(\bar{f}_{c}^{(i)}-\bar{f}^{(i)}\right)^{2}}{\frac{1}{N-C}\sum_{c=1}^{C}\sum_{n=1}^{N_{c}}\left(f^{(i)}(\bm{Pr}_{c,n})-\bar{f}_{c}^{(i)}\right)^{2}},   (5)

where \bar{f}_{c}^{(i)} is the sample mean in class c, and \bar{f}^{(i)} the overall mean of the data:

\bar{f}_{c}^{(i)}=\frac{1}{N_{c}}\sum_{n=1}^{N_{c}}f^{(i)}\left(\bm{Pr}_{c,n}\right),\qquad\bar{f}^{(i)}=\frac{1}{N}\sum_{c=1}^{C}\sum_{n=1}^{N_{c}}f^{(i)}(\bm{Pr}_{c,n}).

The numerator in (5) represents the between-class variability or explained variance and the denominator the within-class variability or unexplained variance.
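For concreteness, the F-value of (5) can be computed as below on toy data (the feature matrix and labels are illustrative); scikit-learn's f_classif implements the same one-way ANOVA statistic and agrees with the direct computation.

```python
# Sketch: ANOVA F-value of (5) for each feature, and the equivalent
# scikit-learn call.
import numpy as np
from sklearn.feature_selection import f_classif

def f_value(f, y):
    """f: one feature over all problems; y: class labels (target orderings)."""
    classes = np.unique(y)
    between = sum(np.sum(y == c) * (f[y == c].mean() - f.mean()) ** 2
                  for c in classes) / (len(classes) - 1)
    within = sum(np.sum((f[y == c] - f[y == c].mean()) ** 2)
                 for c in classes) / (len(f) - len(classes))
    return between / within

np.random.seed(0)
F = np.random.randn(100, 3)              # toy feature matrix: 100 problems, 3 features
y = np.random.randint(1, 7, size=100)    # six classes, one per variable ordering
print([f_value(F[:, i], y) for i in range(3)])
print(f_classif(F, y)[0])                # agrees with the direct computation
```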

Of the 78 features, the three with the highest F-values were the following:

f_{65}\left(\bm{Pr}\right) =\max_{0}\text{av}_{m,p}\textstyle\sum_{0}\text{sgn}\left(d_{2}^{m,p}\right)=\frac{1}{P}\sum_{p}\frac{1}{M_{p}}\sum_{m}\text{sgn}\left(d_{2}^{m,p}\right)
f_{46}\left(\bm{Pr}\right) =\max_{0}\textstyle\sum_{p}\text{av}_{m}\,\text{sgn}\left(d_{2}^{m,p}\right)\cdot\left(\sum_{v^{\prime}}d_{v^{\prime}}^{m,p}\right)=\sum_{p}\frac{1}{M_{p}}\sum_{m}\text{sgn}\left(d_{2}^{m,p}\right)\cdot\left(\sum_{v^{\prime}}d_{v^{\prime}}^{m,p}\right)
f_{76}\left(\bm{Pr}\right) =\text{av}_{0}\textstyle\sum_{p}\max_{m}\text{sgn}\left(d_{2}^{m,p}\right)\cdot\left(\sum_{v^{\prime}}d_{v^{\prime}}^{m,p}\right)=\sum_{p}\max_{m}\text{sgn}\left(d_{2}^{m,p}\right)\cdot\left(\sum_{v^{\prime}}d_{v^{\prime}}^{m,p}\right)

The new features may be translated back into natural language. For example, feature 65 is the proportion of monomials containing variable x_{2}, averaged across all polynomials; feature 46 is the sum of the degrees of the variables in all monomials containing variable x_{2}, averaged across all monomials and summed across all polynomials; and feature 76 is the maximum sum of the degrees of the variables in all monomials containing variable x_{2}, summed across all polynomials.

Feature selection did not suggest removing any of the features (they all contributed meaningful information), so we proceed with our experiment using all 78.

4.6 ML models

Four of the most commonly used deterministic ML models were tuned on the training data (for details on the methods see e.g. the textbook [2]).

  • The K-Nearest Neighbours (KNN) classifier [2, §2.5].

  • The Multi-Layer Perceptron (MLP) classifier [2, §2.5].

  • The Decision Tree (DT) classifier [2, §14.4].

  • The Support Vector Machine (SVM) classifier with RBF kernel [2, §6.3].

Each model was trained using grid search 5-fold cross-validation: the training set was randomly divided into 5 parts and, for each combination of hyperparameters, the model was trained on every possible choice of 4 parts and validated on the remaining part, with hyperparameters selected to optimise the average F-score across the folds. Grid searches were performed over an initially large range for each hyperparameter, then gradually narrowed to home in on optimal values. This lasted from a few seconds for simpler models like KNN to a few minutes for more complex models like MLP. The optimal hyperparameters selected during cross-validation are in Table 3.
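A sketch of the tuning set-up for one of the models (KNN) is given below; the toy data and the grid shown are illustrative stand-ins rather than the exact features and ranges searched.

```python
# Sketch of hyperparameter tuning with 5-fold cross-validation, optimising
# the macro-averaged F-score.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

np.random.seed(0)
X_train = np.random.randn(200, 78)        # toy stand-in for the 78 features
y_train = np.random.randint(1, 7, 200)    # toy target orderings (6 classes)

param_grid = {
    'n_neighbors': list(range(1, 20)),
    'weights': ['uniform', 'distance'],
    'algorithm': ['ball_tree', 'kd_tree', 'brute'],
}
search = GridSearchCV(KNeighborsClassifier(), param_grid,
                      scoring='f1_macro', cv=5)
search.fit(X_train, y_train)
print(search.best_params_)
```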

Table 3: The ML hyperparameters used following optimisation on the training dataset.
Model            Hyperparameter                     Value
Decision Tree    Criterion                          Gini impurity
                 Maximum tree depth                 17
K-Nearest        Train instances weighting          Inversely proportional to distance
Neighbours       Algorithm                          Ball Tree
Support Vector   Regularization parameter C         316
Machine          Kernel                             Radial basis function
                 \gamma                             0.08
                 Tolerance for stopping criterion   0.0316
Multi-Layer      Hidden layer size                  18
Perceptron       Activation function                Hyperbolic tangent
                 Algorithm                          Quasi-Newton based optimiser
                 Regularization parameter \alpha    5\cdot 10^{-5}

4.7 Comparing with human made heuristics

The ML approaches were compared in terms of prediction accuracy and resulting CAD computing time against the two best known human constructed heuristics [8], [18], as discussed earlier. Unlike the ML, these can end up predicting several variable orderings (i.e. when they cannot discriminate). In practice if this were to happen the heuristic would select one randomly (or perhaps lexicographically), but that final pick is not meaningful. To accommodate this, for each problem, the prediction accuracy of such a heuristic is judged to be the percentage of its predicted variable orderings that are also target orderings. The average of this percentage over all problems in the testing dataset represents the prediction accuracy. Similarly, the computing time for such methods was assessed as the average computing time over all predicted orderings, and it is this that is summed over all problems in the testing dataset.
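A short sketch of this tie-aware scoring, with illustrative names: predicted is the set of orderings returned by a heuristic for one problem, targets the set of fastest orderings, and times a mapping from each ordering to its CAD time.

```python
# Sketch: scoring a heuristic that may return several tied orderings.
import numpy as np

def heuristic_accuracy(predicted, targets):
    """Percentage of the predicted orderings that are target orderings."""
    return 100.0 * len(set(predicted) & set(targets)) / len(predicted)

def heuristic_time(predicted, times):
    """Average CAD computing time over the predicted orderings."""
    return np.mean([times[o] for o in predicted])
```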

5 Experimental Results

The results are presented in Table 4. We compare the four ML models on the percentage of problems where they selected the optimum ordering, and on the total computation time (in seconds) for solving all the problems with their chosen orderings. The first two rows reproduce the results of [24], which used only the 11 features from Table 1, while the latter two rows are the results from the new experiment in the present paper with 78 features. We also compare with the two human constructed heuristics and the outcome of a random choice between the 6 orderings (which do not change with the number of features). We might expect a random choice to be correct one sixth of the time, but the figure is higher because for some problems there were multiple variable orderings with equally fast timings.

We also consider the distribution of the computation times: the differences between the computation time of each method and the minimum computation time, given as a percentage of the minimum time, are depicted in Figure 1.

Table 4: The comparative performance of DT, KNN, MLP, SVM, and the Brown and sotd heuristics on the testing dataset for the present experiment and the one in [24].
                               DT      KNN     MLP     SVM     Brown   sotd    rand
From [24]         Accuracy     62.6%   63.3%   61.6%   58.8%   51%     49.5%   22.7%
(11 Features)     Time (s)     9,994   10,105  9,822   10,725  10,951  11,938  30,235
New Experiment    Accuracy     65.2%   66.3%   67%     65%
(78 Features)     Time (s)     9,603   9,178   9,399   9,487
Figure 1: The histograms of the percentage increase in computation time relative to the minimum computation time for each method, calculated for a bin size of 1%.

5.1 Range of possible outcomes

The minimum total computing time, achieved if we select an optimal ordering for every problem, is 8,623s. Choosing at random would take 30,235s, almost 4 times as much. The maximum time, if we selected the worst ordering for every problem, is 64,534s. The K-Nearest Neighbours model achieved the shortest time of our models and heuristics, with 9,178s, only 6% more than the minimal possible.

5.2 Human-made heuristics

Since they are not affected by the new feature framework of the present paper, the findings on the human made heuristics are the same as in [24]. Of the two human-made heuristics, Brown performed the best, which is perhaps surprising since the sotd heuristic has access to additional information (not just the input polynomials but also their projections). Obtaining an ordering for a problem instance with sotd hence takes longer than for Brown or any ML model: generating an ordering with sotd for all problems in the testing dataset took over 30 minutes. Using Brown we can solve all problems in 10,951s, 27% more than the minimum. While sotd is only 0.7% less accurate than Brown in identifying the best ordering, it is much slower at 11,938s, or 38% more than the minimum. So, while Brown is not much better at identifying the best ordering, it is much better at discarding the worst!

5.3 ML choices

The results show that all ML approaches outperform the human constructed heuristics in terms of both accuracy and timings. Moreover, the results show that the new algorithm for generating features leads to a clear improvement in ML performance compared to using only the small number of human generated features in [24]. For all four models, accuracy increased and computation time decreased. The best achieved time was 14% above the minimum using the original 11 features but is now only 6% above with the new features.

The computing time for all the methods lies between the best (8,623s) and the worst (64,534s). Therefore, if we scale this time to [0,100] so that the shortest time corresponds to 0 and the slowest to 100, then the best human-made heuristic (Brown) lies at 4.16, and the best ML method (KNN) lies at 0.99. So using ML allows us to be 4 times closer to the minimum possible computing time.

Figure 1 shows that the human-made heuristics often result in computing times significantly more than 1% above the corresponding minimum time for each problem. The ML methods, on the other hand, all place over 1000 problems (\sim 75\% of the testing dataset) within 1% of the minimum time.

6 Final Thoughts

In this experiment the MLP and KNN models offered the best performance, and a clear advance on the prior state of the art. But we acknowledge that there is much more to do and emphasise that these are only the initial findings of the project; we need to see whether the findings are replicated. Planned extensions include: expanding the dataset to problems with more variables and quantifier structure; trying different feature selection techniques; and seeing whether classifiers trained for the Maple CAD may be applied to other implementations.

Our main result is that a great many more features can be obtained trivially from the input (i.e. without any projection operations) than previously thought, and that these are relevant and lead to better ML choices. Some of these are easy to express in natural language, such as the number of polynomials containing a certain variable, but others do not have an obvious interpretation. This is important because something that is hard to describe in natural language is unlikely to be suggested by a human as a feature, which illustrates the benefit of our framework. This contribution to feature extraction for algebraic problems should be more widely applicable than the CAD variable ordering decision.

Acknowledgements

This work is supported by EPSRC Project EP/R019622/1: Embedding Machine Learning within Quantifier Elimination Procedures.

References

  • [1] Ábrahám, E., et al.: \mathsf{SC}^{2}: Satisfiability checking meets symbolic computation. In: Proc. CICM ’16, LNCS 9791, pp. 28–43. Springer (2016)
  • [2] Bishop, C.: Pattern Recognition and Machine Learning. Springer (2006)
  • [3] Bradford, R., Chen, C., Davenport, J., England, M., Moreno Maza, M., Wilson, D.: Truth table invariant cylindrical algebraic decomposition by regular chains. In: Proc. CASC ’14, LNCS 8660, pp. 44–58. Springer (2014)
  • [4] Bradford, R., et al.: A case study on the parametric occurrence of multiple steady states. In: Proc. ISSAC ’17, pp. 45–52. ACM (2017)
  • [5] Bradford, R., Davenport, J., England, M., McCallum, S., Wilson, D.: Truth table invariant cylindrical algebraic decomposition. J. Symb. Comp. 76, 1–35 (2016)
  • [6] Bradford, R., Davenport, J., England, M., Wilson, D.: Optimising problem formulations for cylindrical algebraic decomposition. In: Intelligent Computer Mathematics, LNCS 7961, pp. 19–34. Springer Berlin Heidelberg (2013)
  • [7] Bridge, J., Holden, S., Paulson, L.: Machine learning for first-order theorem proving. Journal of Automated Reasoning 53, 141–172 (2014)
  • [8] Brown, C.: Tutorial: Cylindrical algebraic decomposition, at ISSAC ’04. http://www.usna.edu/Users/cs/wcbrown/research/ISSAC04/handout.pdf (2004)
  • [9] Brown, C.: Constructing a single open cell in a cylindrical algebraic decomposition. In: Proc. ISSAC ’13, pp. 133–140. ACM (2013)
  • [10] Brown, C., Davenport, J.: The complexity of quantifier elimination and cylindrical algebraic decomposition. In: Proc. ISSAC ’07, pp. 54–60. ACM (2007)
  • [11] Carette, J.: Understanding expression simplification. In: Proc. ISSAC ’04, pp. 72–79. ACM (2004)
  • [12] Caviness, B., Johnson, J.: Quantifier Elimination and Cylindrical Algebraic Decomposition. Texts & Monographs in Symbolic Computation, Springer-Verlag (1998)
  • [13] Chen, C., Moreno Maza, M.: An incremental algorithm for computing cylindrical algebraic decompositions. In: Computer Mathematics, pp. 199–221. Springer Berlin Heidelberg (2014)
  • [14] Chen, C., Moreno Maza, M., Xia, B., Yang, L.: Computing cylindrical algebraic decomposition via triangular decomposition. In: Proc. ISSAC ’09, 95–102. ACM (2009)
  • [15] Collins, G.: Quantifier elimination for real closed fields by cylindrical algebraic decomposition. In: Proc. 2nd GI Conference on Automata Theory and Formal Languages. pp. 134–183. Springer-Verlag (reprinted in the collection [12]) (1975)
  • [16] Collins, G., Hong, H.: Partial cylindrical algebraic decomposition for quantifier elimination. Journal of Symbolic Computation 12, 299–328 (1991)
  • [17] Davenport, J., Bradford, R., England, M., Wilson, D.: Program verification in the presence of complex numbers, functions with branch cuts etc. In: Proc. SYNASC ’12, pp. 83–88. IEEE (2012)
  • [18] Dolzmann, A., Seidl, A., Sturm, T.: Efficient projection orders for CAD. In: Proc. ISSAC ’04, pp. 111–118. ACM (2004)
  • [19] England, M.: Machine learning for mathematical software. In: Mathematical Software – Proc. ICMS ’18. LNCS 10931, pp. 165–174. Springer (2018)
  • [20] England, M., Bradford, R., Davenport, J.: Improving the use of equational constraints in cylindrical algebraic decomposition. In: Proc. ISSAC ’15, pp. 165–172. ACM (2015)
  • [21] England, M., Bradford, R., Davenport, J., Wilson, D.: Choosing a variable ordering for truth-table invariant CAD by incremental triangular decomposition. In: Proc. ICMS ’14, LNCS 8592, pp. 450–457. Springer (2014)
  • [22] England, M., Davenport, J.: Experience with heuristics, benchmarks & standards for cylindrical algebraic decomposition. In: Proc. \mathsf{SC}^{2} ’16. CEUR-WS 1804 (2016)
  • [23] England, M., Errami, H., Grigoriev, D., Radulescu, O., Sturm, T., Weber, A.: Symbolic versus numerical computation and visualization of parameter regions for multistationarity of biological networks. In: Computer Algebra in Scientific Computing (CASC ’17), LNCS 10490, pp. 93–108. Springer (2017)
  • [24] England, M., Florescu, D.: Comparing machine learning models to choose the variable ordering for cylindrical algebraic decomposition. To appear in Proc. CICM ’19 (Springer LNCS) (2019). Preprint: https://arxiv.org/abs/1904.11061
  • [25] England, M., Wilson, D., Bradford, R., Davenport, J.: Using the Regular Chains Library to build cylindrical algebraic decompositions by projecting and lifting. In: Mathematical Software – ICMS ’14. LNCS 8592, pp. 458–465. Springer (2014)
  • [26] Huang, Z., England, M., Davenport, J., Paulson, L.: Using machine learning to decide when to precondition cylindrical algebraic decomposition with Groebner bases. In: Proc. SYNASC ’16. pp. 45–52. IEEE (2016)
  • [27] Huang, Z., England, M., Wilson, D., Bridge, J., Davenport, J., Paulson, L.: Using machine learning to improve cylindrical algebraic decomposition. Mathematics in Computer Science, Volume to be assigned, 28 pages (2019)
  • [28] Huang, Z., England, M., Wilson, D., Davenport, J., Paulson, L., Bridge, J.: Applying machine learning to the problem of choosing a heuristic to select the variable ordering for cylindrical algebraic decomposition. In: Intelligent Computer Mathematics, LNAI 8543, pp. 92–107. Springer International (2014)
  • [29] Iwane, H., Yanami, H., Anai, H., Yokoyama, K.: An effective implementation of a symbolic-numeric cylindrical algebraic decomposition for quantifier elimination. In: Proc. SNC ’09, pp. 55–64 (2009)
  • [30] Jovanovic, D., de Moura, L.: Solving non-linear arithmetic. In: Gramlich, B., Miller, D., Sattler, U. (eds.) Automated Reasoning - Proc. IJCAR ’12, LNCS 7364, pp. 339–354. Springer (2012)
  • [31] Kobayashi, M., Iwane, H., Matsuzaki, T., Anai, H.: Efficient subformula orders for real quantifier elimination of non-prenex formulas. In: Proc. MACIS ’15, LNCS 9582, pp. 236–251. Springer International Publishing (2016)
  • [32] Kühlwein, D., Blanchette, J., Kaliszyk, C., Urban, J.: MaSh: Machine learning for sledgehammer. In: Interactive Theorem Proving, LNCS 7998, pp. 35–50. Springer Berlin Heidelberg (2013)
  • [33] Liang, J., Hari Govind, V., Poupart, P., Czarnecki, K., Ganesh, V.: An empirical study of branching heuristics through the lens of global learning rate. In: Proc. SAT ’17, LNCS 10491, pp. 119–135. Springer (2017)
  • [34] McCallum, S.: An improved projection operation for cylindrical algebraic decomposition. In: [12], pp. 242–268. (1998)
  • [35] Markowski, C.A., Markowski, E.P.: Conditions for the effectiveness of a preliminary test of variance. The American Statistician 44(4), 322–326 (1990)
  • [36] McCallum, S., Parusiński, A., Paunescu, L.: Validity proof of Lazard’s method for CAD construction. Journal of Symbolic Computation 92, 52–69 (2019)
  • [37] Mulligan, C., Bradford, R., Davenport, J., England, M., Tonks, Z.: Non-linear real arithmetic benchmarks derived from automated reasoning in economics. In: Proc. \mathsf{SC}^{2} ’18, pp. 48–60. CEUR-WS 2189 (2018)
  • [38] Mulligan, C., Davenport, J., England, M.: TheoryGuru: A Mathematica package to apply quantifier elimination technology to economics. In: Mathematical Software – Proc. ICMS ’18. LNCS 10931, pp. 369–378. Springer (2018)
  • [39] Pedregosa, F., et al.: Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, 2825–2830 (2011)
  • [40] Strzeboński, A.: Cylindrical algebraic decomposition using validated numerics. Journal of Symbolic Computation 41(9), 1021–1038 (2006)
  • [41] Sturm, T.: New domains for applied quantifier elimination. In: Computer Algebra in Scientific Computing, LNCS 4194, pp. 295–301. Springer (2006)
  • [42] Urban, J.: MaLARea: A metasystem for automated reasoning in large theories. In: Proc. ESARLT ’07, CEUR-WS 257, p. 14. CEUR-WS (2007)
  • [43] Wilson, D., Bradford, R., Davenport, J., England, M.: Cylindrical algebraic sub-decompositions. Mathematics in Computer Science 8, 263–288 (2014)
  • [44] Wilson, D., Davenport, J., England, M., Bradford, R.: A “piano movers” problem reformulated. In: Proc. SYNASC ’13, pp. 53–60. IEEE (2013)
  • [45] Xu, L., Hutter, F., Hoos, H., Leyton-Brown, K.: SATzilla: Portfolio-based algorithm selection for SAT. J. Artificial Intelligence Research 32, 565–606 (2008)