
Simplification Methods for Sum-of-Squares Programs

Peter Seiler, Qian Zheng, and Gary J. Balas

P. Seiler is with the Aerospace Engineering and Mechanics Department, University of Minnesota, seiler@aem.umn.edu. Q. Zheng is with the Aerospace Engineering and Mechanics Department, University of Minnesota, qzheng@aem.umn.edu. G.J. Balas is with the Aerospace Engineering and Mechanics Department, University of Minnesota, balas@umn.edu.
Abstract

A sum-of-squares is a polynomial that can be expressed as a sum of squares of other polynomials. Determining if a sum-of-squares decomposition exists for a given polynomial is equivalent to a linear matrix inequality feasibility problem. The computation required to solve the feasibility problem depends on the number of monomials used in the decomposition. The Newton polytope is a method to prune unnecessary monomials from the decomposition. This method requires the construction of a convex hull and this can be time consuming for polynomials with many terms. This paper presents a new algorithm for removing monomials based on a simple property of positive semidefinite matrices. It returns a set of monomials that is never larger than the set returned by the Newton polytope method and, for some polynomials, is a strictly smaller set. Moreover, the algorithm takes significantly less computation than the convex hull construction. This algorithm is then extended to a more general simplification method for sum-of-squares programming.

I Introduction

A polynomial is a sum-of-squares (SOS) if it can be expressed as a sum of squares of other polynomials. There are close connections between SOS polynomials and positive semidefinite matrices [3, 2, 4, 13, 11, 7, 12]. For a given polynomial the search for an SOS decomposition is equivalent to a linear matrix inequality feasibility problem. It is also possible to formulate optimization problems with polynomial sum-of-squares constraints [11, 12]. There is freely available software that can be used to solve these SOS feasibility and optimization problems [14, 8, 1, 6]. Many nonlinear analysis problems, e.g. Lyapunov stability analysis, can be formulated within this optimization framework [11, 12, 19, 20].

Computational growth is a significant issue for these optimization problems. For example, consider the search for an SOS decomposition: given a polynomial $p$ and a vector of monomials $z$, does there exist a matrix $Q\succeq 0$ such that $p=z^TQz$? The computation required to solve the corresponding linear matrix inequality feasibility problem grows with the number of monomials in the vector $z$. The Newton polytope [15, 18] is a method to prune unnecessary monomials from the vector $z$. This method is implemented in SOSTOOLS [14]. One drawback is that it requires the construction of a convex hull, and this construction can itself be time consuming for polynomials with many terms.

This paper presents an alternative monomial reduction method called the zero diagonal algorithm. This algorithm is based on a simple property of positive semidefinite matrices: if the $(i,i)$ diagonal entry of a positive semidefinite matrix is zero then the entire $i^{th}$ row and column must be zero. The zero diagonal algorithm simply searches for diagonal entries of $Q$ that are constrained to be zero and then prunes the corresponding monomials. This algorithm can be implemented with very little computational cost using the Matlab find command. It is shown that the final list of monomials returned by the zero diagonal algorithm is never larger than the pruned list obtained from the Newton polytope method. For some problems the zero diagonal algorithm returns a strictly smaller set of monomials. Results contained in this paper are similar to and preceded by those found in the prior work [9, 21].

The basic idea in the zero diagonal algorithm is then extended to a more general simplification method for sum-of-squares programs. The more general method also removes free variables that are implicitly constrained to be equal to zero. This can improve the numerical conditioning and reduce the computation time required to solve the SOS program. Both the zero diagonal elimination algorithm and the simplification procedure for SOS programs are implemented in SOSOPT [1].

II SOS Polynomials

$\mathbb{N}$ denotes the set of nonnegative integers $\{0,1,\ldots\}$, and $\mathbb{N}^n$ is the set of $n$-dimensional vectors with entries in $\mathbb{N}$. For $\alpha\in\mathbb{N}^n$, a monomial in variables $\{x_1,\ldots,x_n\}$ is given by $x^\alpha := x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_n^{\alpha_n}$. $\alpha$ is the degree vector associated with the monomial $x^\alpha$. The degree of a monomial is defined as $\deg x^\alpha := \sum_{i=1}^n \alpha_i$. A polynomial is a finite linear combination of monomials:

$$p := \sum_{\alpha\in\mathcal{A}} c_\alpha x^\alpha = \sum_{\alpha\in\mathcal{A}} c_\alpha x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_n^{\alpha_n} \qquad (1)$$

where $c_\alpha\in\mathbb{R}$, $c_\alpha\neq 0$, and $\mathcal{A}$ is a finite collection of vectors in $\mathbb{N}^n$. $\mathbb{R}[x]$ denotes the set of all polynomials in variables $\{x_1,\ldots,x_n\}$ with real coefficients. Using the definition of $\deg$ for a monomial, the degree of $p$ is defined as $\deg p := \max_{\alpha\in\mathcal{A}} \deg x^\alpha$.

A polynomial $p$ is a sum-of-squares (SOS) if there exist polynomials $\{f_i\}_{i=1}^m$ such that $p=\sum_{i=1}^m f_i^2$. The set of SOS polynomials is a subset of $\mathbb{R}[x]$ and is denoted by $\Sigma[x]$. If $p$ is a sum-of-squares then $p(x)\geq 0$ for all $x\in\mathbb{R}^n$. However, non-negative polynomials are not necessarily SOS [16].

Define $z$ as the column vector of all monomials in variables $\{x_1,\ldots,x_n\}$ of degree $\leq d$. (Any ordering of the monomials can be used to form $z$. In Equation 2, $x^\alpha$ precedes $x^\beta$ if $\deg x^\alpha < \deg x^\beta$, or if $\deg x^\alpha = \deg x^\beta$ and the first nonzero entry of $\alpha-\beta$ is $>0$.)

$$z := \begin{bmatrix}1,\ x_1,\ x_2,\ \ldots,\ x_n,\ x_1^2,\ x_1x_2,\ \ldots,\ x_n^2,\ \ldots,\ x_n^d\end{bmatrix}^T \qquad (2)$$

There are $\binom{k+n-1}{k}$ monomials in $n$ variables of degree $k$. Thus $z$ is a column vector of length $l_z := \sum_{k=0}^d \binom{k+n-1}{k} = \binom{n+d}{d}$. If $f$ is a polynomial in $n$ variables with degree $\leq d$ then by definition $f$ is a finite linear combination of monomials of degree $\leq d$. Consequently, there exists $a\in\mathbb{R}^{l_z}$ such that $f=a^Tz$.
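To make these counts concrete, the complete monomial vector can be enumerated directly. The sketch below is a Python illustration (an assumption on our part; the toolboxes discussed in this paper are Matlab-based) that lists the degree vectors of Equation 2 and checks the length formula $l_z=\binom{n+d}{d}$.

```python
from itertools import product
from math import comb

def monomial_degree_vectors(n, d):
    """All degree vectors alpha in N^n with sum(alpha) <= d, sorted by the
    graded ordering of Equation 2 (total degree first, then higher x1 powers)."""
    vecs = [a for a in product(range(d + 1), repeat=n) if sum(a) <= d]
    return sorted(vecs, key=lambda a: (sum(a), [-t for t in a]))

# n = 2 variables, d = 2: reproduces z = [1, x1, x2, x1^2, x1*x2, x2^2].
M = monomial_degree_vectors(2, 2)
assert M == [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
assert len(M) == comb(2 + 2, 2)  # l_z = binom(n+d, d) = 6
```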

Two useful facts from [15] are:

1. If $p$ is a sum-of-squares then $p$ must have even degree.

2. If $p$ is degree $2d$ ($d\in\mathbb{N}$) and $p=\sum_{i=1}^m f_i^2$ then $\deg f_i\leq d$ for all $i$.

The following theorem, introduced as the “Gram Matrix” method by [4, 13], connects SOS polynomials and positive semidefinite matrices.

Theorem 1

Let $p\in\mathbb{R}[x]$ be a polynomial of degree $2d$ and $z$ be the $l_z\times 1$ vector of monomials defined in Equation 2. Then $p$ is a SOS if and only if there exists a symmetric matrix $Q\in\mathbb{R}^{l_z\times l_z}$ such that $Q\succeq 0$ and $p=z^TQz$.

Proof:

($\Rightarrow$) If $p$ is a SOS, then there exist polynomials $\{f_i\}_{i=1}^m$ such that $p=\sum_{i=1}^m f_i^2$. By fact 2 above, $\deg f_i\leq d$ for all $i$. Thus, for each $f_i$ there exists a vector $a_i\in\mathbb{R}^{l_z}$ such that $f_i=a_i^Tz$. Define the matrix $A\in\mathbb{R}^{l_z\times m}$ whose $i^{th}$ column is $a_i$ and define $Q:=AA^T\succeq 0$. Then $p=z^TQz$.

($\Leftarrow$) Assume there exists $Q=Q^T\in\mathbb{R}^{l_z\times l_z}$ such that $Q\succeq 0$ and $p=z^TQz$. Define $m:=\operatorname{rank}(Q)$. There exists a matrix $A\in\mathbb{R}^{l_z\times m}$ such that $Q=AA^T$. Let $a_i$ denote the $i^{th}$ column of $A$ and define the polynomials $f_i:=z^Ta_i$. By definition of $f_i$, $p=z^T(AA^T)z=\sum_{i=1}^m f_i^2$. ∎
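The ($\Leftarrow$) direction is constructive: any factorization $Q=AA^T$ yields the SOS terms $f_i=a_i^Tz$. A minimal numerical sketch (Python/NumPy; an illustration on our part, not part of the paper's toolchain) recovers such a factor from an eigendecomposition of $Q$.

```python
import numpy as np

def sos_factors(Q, tol=1e-9):
    """Given symmetric Q >= 0, return A with Q = A A^T and rank(Q) columns.
    Each column a_i of A defines one SOS term f_i = a_i^T z."""
    w, V = np.linalg.eigh(Q)           # Q = V diag(w) V^T, w ascending
    if w.min() < -tol:
        raise ValueError("Q is not positive semidefinite")
    keep = w > tol                     # drop the (numerically) zero eigenvalues
    return V[:, keep] * np.sqrt(w[keep])

# Toy Gram matrix for z = [1, x]^T: p = z^T Q z = 1 + 2x + 2x^2.
Q = np.array([[1.0, 1.0],
              [1.0, 2.0]])
A = sos_factors(Q)
assert np.allclose(A @ A.T, Q)         # so p = sum_i (a_i^T z)^2
```

A Cholesky factorization would serve equally well when $Q$ is strictly positive definite; the eigendecomposition also handles the rank-deficient case.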

Determining if an SOS decomposition exists for a given polynomial $p$ is equivalent to a feasibility problem:

$$\text{Find } Q\succeq 0 \text{ such that } p=z^TQz \qquad (3)$$

$Q$ is constrained to be positive semidefinite, and equating coefficients of $p$ and $z^TQz$ imposes linear equality constraints on the entries of $Q$. Thus this is a linear matrix inequality (LMI) feasibility problem. There is software available to solve for SOS decompositions [14, 8, 1]. These toolboxes convert the SOS feasibility problem to an LMI problem. The LMI problem is then solved with a freely available LMI solver, e.g. Sedumi [17], and an SOS decomposition is constructed if a feasible solution is found. These software packages also solve SOS synthesis problems where some of the coefficients of the polynomial are treated as free variables to be computed as part of the optimization. These more general SOS optimization problems are discussed further in Section V. Many analysis problems for polynomial dynamical systems can be posed within this SOS synthesis framework [11, 12, 19, 20].

III Newton Polytope

As discussed in the previous section, the search for an SOS decomposition is equivalent to an LMI feasibility problem. One issue is that the computational complexity of this LMI feasibility problem grows with the dimension of the Gram matrix. For a polynomial of degree $2d$ in $n$ variables there are, in general, $l_z=\binom{n+d}{d}$ monomials in $z$ and the Gram matrix $Q$ is $l_z\times l_z$. $l_z$ grows rapidly with both the number of variables and the degree of the polynomial. However, any particular polynomial $p$ may have an SOS decomposition with fewer monomials. The Newton polytope [15, 18] is an algorithm to reduce the dimension $l_z$ by pruning unnecessary monomials from $z$.

First, some terminology is provided regarding polytopes [10, 5]. For any set $\mathcal{A}\subseteq\mathbb{R}^n$, $\text{convhull}(\mathcal{A})$ denotes the convex hull of $\mathcal{A}$. Let $C\subseteq\mathbb{R}^n$ be a convex set. A point $\alpha\in C$ is an extreme point if it does not belong to the relative interior of any segment $[\alpha_1,\alpha_2]\subseteq C$. In other words, if there exist $\alpha_1,\alpha_2\in C$ and $0<\lambda<1$ such that $\alpha=\lambda\alpha_1+(1-\lambda)\alpha_2$ then $\alpha_1=\alpha_2=\alpha$. A convex polytope (or simply polytope) is the convex hull of a non-empty, finite set $\{\alpha_1,\ldots,\alpha_p\}\subseteq\mathbb{R}^n$. The extreme points of a polytope are called vertices. Let $C$ be a polytope and let $\mathcal{V}$ be the (finite) set of vertices of $C$. Then $C=\text{convhull}(\mathcal{V})$ and $\mathcal{V}$ is a minimal vertex representation of $C$. The polytope $C$ may be equivalently described as an intersection of a finite collection of halfspaces, i.e. there exist a matrix $H\in\mathbb{R}^{N\times n}$ and a vector $g\in\mathbb{R}^N$ such that $C=\{\alpha\in\mathbb{R}^n : H\alpha\leq g\}$. This is a facet or half-space representation of $C$.

The Newton polytope (or cage) of a polynomial $p=\sum_{\alpha\in\mathcal{A}}c_\alpha x^\alpha$ is defined as $C(p):=\text{convhull}(\mathcal{A})\subseteq\mathbb{R}^n$ [15]. The reduced Newton polytope is $\frac{1}{2}C(p):=\{\frac{1}{2}\alpha : \alpha\in C(p)\}$. The following theorem from [15] is a key result for monomial reduction. (A polynomial $p$ is a form if all monomials have the same degree. The results in [15] are stated and proved for forms. A given polynomial can be converted to a form by adding a single dummy variable of appropriate degree to each monomial; the results in [15] then apply to polynomials by this homogenization procedure.)

Theorem 2

If $p=\sum_{i=1}^m f_i^2$ then the vertices of $C(p)$ are vectors whose entries are even numbers and $C(f_i)\subseteq\frac{1}{2}C(p)$.

This theorem implies that any monomial $x^\alpha$ appearing in the vector $z$ of an SOS decomposition $z^TQz$ must satisfy $\alpha\in\frac{1}{2}C(p)\cap\mathbb{N}^n$. This forms the basis for the Newton polytope method for pruning monomials. Let $p$ be a given polynomial of degree $2d$ in $n$ variables with monomial degree vectors specified by the finite set $\mathcal{A}$. First, create the $l_z\times 1$ vector $z$ consisting of all monomials of degree $\leq d$ in $n$ variables. There are $l_z=\binom{n+d}{d}$ monomials in this complete list. Second, compute a half-space representation $\{\alpha\in\mathbb{R}^n : H\alpha\leq g\}$ for the reduced Newton polytope $\frac{1}{2}C(p)$. Third, prune out any monomials in $z$ that are not elements of $\frac{1}{2}C(p)$. This algorithm is implemented in SOSTOOLS [14]. The third step amounts to checking each monomial in $z$ to see if the corresponding degree vector satisfies the half-space constraints $H\alpha\leq g$. This step is computationally very fast. The second step requires computing a half-space representation for the convex hull of $\frac{1}{2}\mathcal{A}$. This can be done in Matlab, e.g. with convhulln. However, this step can be time-consuming when the polynomial has many terms ($\mathcal{A}$ has many elements). The next section provides an alternative to the Newton polytope algorithm that avoids constructing the half-space representation of the reduced Newton polytope.
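For the two-variable examples that follow, the membership test $\alpha\in\frac{1}{2}C(p)$ can be carried out without a general convex-hull library. The sketch below (Python; an illustrative assumption, since SOSTOOLS performs this step in Matlab) tests whether a degree vector lies in a 2-D polytope given its vertices in counter-clockwise order.

```python
def in_convex_polygon(pt, verts):
    """True if pt lies in the convex polygon with vertices verts listed in
    counter-clockwise order (boundary points count as inside)."""
    px, py = pt
    n = len(verts)
    for k in range(n):
        x1, y1 = verts[k]
        x2, y2 = verts[(k + 1) % n]
        # Cross product of edge (v1 -> v2) with (v1 -> pt);
        # a negative value means pt is strictly right of the edge, i.e. outside.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

# Reduced Newton polytope of Equation 4: triangle (0,0), (2,0), (0,1).
half_C = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
# Degree vectors of z = [1, x1, x2, x1^2, x1*x2, x2^2]:
degvecs = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
kept = [a for a in degvecs if in_convex_polygon(a, half_C)]
assert kept == [(0, 0), (1, 0), (0, 1), (2, 0)]  # x1*x2 and x2^2 are pruned
```

For problems in more than two variables a half-space representation computed by a tool such as convhulln would replace this cross-product test.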

Example: Consider the following polynomial

$$p = 3x_1^4 - 2x_1^2x_2 + 7x_1^2 - 4x_1x_2 + 4x_2^2 + 1 \qquad (4)$$

$p$ is a degree four polynomial in two variables. The list of all monomials in two variables with degree $\leq 2$ is:

$$z = \begin{bmatrix}1 & x_1 & x_2 & x_1^2 & x_1x_2 & x_2^2\end{bmatrix}^T \qquad (5)$$

The length of $z$ is $l_z=6$. An SOS decomposition of a degree four polynomial would, in general, include all six of these monomials. The Newton polytope can be used to prune some unnecessary monomials from this list.

The set of monomial degree vectors for $p$ is $\mathcal{A}:=\{\left[\begin{smallmatrix}4\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}2\\1\end{smallmatrix}\right], \left[\begin{smallmatrix}2\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}1\\1\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\2\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\0\end{smallmatrix}\right]\}$. These vectors are shown as circles in Figure 1. The Newton polytope $C(p)$ is the large triangle with vertices $\{\left[\begin{smallmatrix}4\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\2\end{smallmatrix}\right]\}$. Figure 2 shows the degree vectors for the six monomials in $z$ (circles) and the reduced Newton polytope (large triangle). The reduced Newton polytope $\frac{1}{2}C(p)$ is the triangle with vertices $\{\left[\begin{smallmatrix}2\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]\}$. By Theorem 2, $x_1x_2$ and $x_2^2$ cannot appear in any SOS decomposition of $p$ because $\left[\begin{smallmatrix}1\\1\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\2\end{smallmatrix}\right]\notin\frac{1}{2}C(p)$. These monomials can be pruned from $z$ and the search for an SOS decomposition can be performed using only the four monomials in the reduced Newton polytope:

$$z = \begin{bmatrix}1 & x_1 & x_2 & x_1^2\end{bmatrix}^T \qquad (6)$$

The length of the reduced vector $z$ is $l_z=4$. The SOS feasibility problem (Equation 3) with this reduced vector $z$ is feasible. The following matrix is one feasible solution:

$$Q = \left[\begin{smallmatrix}1&0&0&0\\0&7&-2&0\\0&-2&4&-1\\0&0&-1&3\end{smallmatrix}\right] \qquad (7)$$

$p$ is SOS since $p=z^TQz$ and $Q\succeq 0$.
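This claimed solution is easy to verify numerically. The sketch below (Python/NumPy; an illustration on our part) checks that $Q$ is positive semidefinite and that $z^TQz$ matches $p$ at random sample points.

```python
import numpy as np

# Gram matrix from Equation 7 and monomial vector z from Equation 6.
Q = np.array([[1,  0,  0,  0],
              [0,  7, -2,  0],
              [0, -2,  4, -1],
              [0,  0, -1,  3]], dtype=float)

def p(x1, x2):
    # Polynomial from Equation 4.
    return 3*x1**4 - 2*x1**2*x2 + 7*x1**2 - 4*x1*x2 + 4*x2**2 + 1

def zTQz(x1, x2):
    z = np.array([1.0, x1, x2, x1**2])
    return z @ Q @ z

assert np.linalg.eigvalsh(Q).min() > 0           # Q is positive semidefinite
rng = np.random.default_rng(0)
for x1, x2 in rng.uniform(-2.0, 2.0, size=(10, 2)):
    assert abs(p(x1, x2) - zTQz(x1, x2)) < 1e-8  # p = z^T Q z
```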


Figure 1: Newton polytope (large triangle) and monomial degree vectors (circles)


Figure 2: Reduced Newton polytope (large triangle) and degree vectors for all monomials of degree = 0, 1, 2 (circles)

IV Zero Diagonal Algorithm

The zero diagonal algorithm searches for diagonal entries of the Gram matrix that are constrained to be zero and then prunes the associated monomials from $z$. The remainder of the section describes this algorithm in more detail.

As mentioned in Section II, equating the coefficients of $p$ and $z^TQz$ leads to linear equality constraints on the entries of $Q$. The structure of these equations plays an important role in the proposed algorithm. Let $z$ be the $l_z\times 1$ vector of all monomials in $n$ variables of degree $\leq d$ (Equation 2). Define the corresponding set of degree vectors as $M:=\{\alpha_1,\ldots,\alpha_{l_z}\}\subseteq\mathbb{N}^n$. $z^TQz$ is a polynomial in $x$ with coefficients that are linear functions of the entries of $Q$:

$$z^TQz = \sum_{i=1}^{l_z}\sum_{j=1}^{l_z} Q_{i,j}\, x^{\alpha_i+\alpha_j} \qquad (8)$$

The entries of $z$ are not independent: it is possible that $z_iz_j=z_kz_l$ for some $i,j,k,l\in\{1,\ldots,l_z\}$. The unique degree vectors in Equation 8 are given by the set

$$M+M := \{\alpha\in\mathbb{N}^n : \exists\, \alpha_i,\alpha_j\in M \text{ s.t. } \alpha=\alpha_i+\alpha_j\} \qquad (9)$$

The polynomial $z^TQz$ can be rewritten as:

$$z^TQz = \sum_{\alpha\in M+M}\Bigl(\sum_{(i,j)\in S_\alpha} Q_{i,j}\Bigr) x^\alpha \qquad (10)$$

where $S_\alpha := \{(i,j) : \alpha_i+\alpha_j=\alpha\}$. Equating the coefficients of $p$ and $z^TQz$ yields the following linear equality constraints on the entries of $Q$:

$$\sum_{(i,j)\in S_\alpha} Q_{i,j} = \begin{cases} c_\alpha & \alpha\in\mathcal{A}\\ 0 & \alpha\notin\mathcal{A}\end{cases} \qquad (13)$$

There exist $A\in\mathbb{R}^{l\times l_z^2}$ and $b\in\mathbb{R}^l$ such that these equality constraints are given by $Aq=b$, where $q:=\operatorname{vec}(Q)$ is the vector obtained by vertically stacking the columns of $Q$. The dimension $l$ is equal to the number of elements of $M+M$. (In addition to the equality constraints due to $p=z^TQz$ there are also equality constraints due to the symmetry condition $Q=Q^T$. Some solvers, e.g. Sedumi [17], internally handle these symmetry constraints.)
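The pair $(A,b)$ can be assembled directly from the index sets $S_\alpha$: one row per element of $M+M$, with a 1 in column $j\,l_z+i$ for each $(i,j)\in S_\alpha$. The sketch below (Python/NumPy; an illustrative prototype of this bookkeeping, not the SOSOPT code) builds these data from a monomial list and a coefficient table.

```python
import numpy as np

def build_Ab(M, coef):
    """Assemble A (l x lz^2) and b (l) so that A q = b encodes p = z^T Q z,
    where q = vec(Q) stacks the columns of Q and l = |M + M|.
    M    : list of degree-vector tuples (alpha_1, ..., alpha_lz)
    coef : dict mapping each alpha in the support of p to c_alpha"""
    lz = len(M)
    S = {}   # alpha in M+M  ->  list of pairs (i, j) with alpha_i + alpha_j = alpha
    for i, ai in enumerate(M):
        for j, aj in enumerate(M):
            s = tuple(x + y for x, y in zip(ai, aj))
            S.setdefault(s, []).append((i, j))
    A = np.zeros((len(S), lz * lz))
    b = np.zeros(len(S))
    for row, (alpha, pairs) in enumerate(sorted(S.items())):
        for i, j in pairs:
            A[row, j * lz + i] = 1.0   # q[j*lz + i] = Q[i, j] (column stacking)
        b[row] = coef.get(alpha, 0.0)  # c_alpha if alpha is in A, else 0
    return A, b

# Equation 4 with the six monomials of Equation 5:
M = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
coef = {(4, 0): 3, (2, 1): -2, (2, 0): 7, (1, 1): -4, (0, 2): 4, (0, 0): 1}
A, b = build_Ab(M, coef)
assert A.shape == (15, 36)   # l = 15 equations, lz^2 = 36 entries of Q
```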

The zero diagonal algorithm is based on two lemmas.

Lemma 1

If $S_{2\alpha_i}=\{(i,i)\}$ then

$$Q_{i,i} = \begin{cases} c_{2\alpha_i} & 2\alpha_i\in\mathcal{A}\\ 0 & 2\alpha_i\notin\mathcal{A}\end{cases} \qquad (16)$$
Lemma 2

If $p=z^TQz$, $Q\succeq 0$, and $Q_{i,i}=0$ then $p=\tilde{z}^T\tilde{Q}\tilde{z}$ where $\tilde{z}$ is the $(l_z-1)\times 1$ vector obtained by deleting the $i^{th}$ element of $z$ and $\tilde{Q}\succeq 0$ is the $(l_z-1)\times(l_z-1)$ matrix obtained by deleting the $i^{th}$ row and column from $Q$.

Lemma 1 follows from Equation 13. $S_{2\alpha_i}=\{(i,i)\}$ means that $x^{\alpha_i}\cdot x^{\alpha_i}$ is the unique decomposition of $x^{2\alpha_i}$ as a product of monomials in $z$. In this case, $p=z^TQz$ places a direct constraint on $Q_{i,i}$ that must hold for all possible Gram matrices.

Lemma 2 follows from a simple property of positive semidefinite matrices: if $Q\succeq 0$ and $Q_{i,i}=0$ then $Q_{i,j}=Q_{j,i}=0$ for $j=1,\ldots,l_z$. Thus if $Q_{i,i}=0$ then an SOS decomposition of $p$, if one exists, does not depend on the monomial $z_i$, and $z_i$ can be removed from $z$.

The zero diagonal algorithm is given in Table I. The sets $M_k$ denote the pruned list of monomial degree vectors at the $k^{th}$ iterate. The main step in the iteration is the search for equations that directly constrain a diagonal entry $Q_{i,i}$ to be zero (Step 6). This step can be performed very quickly since it can be implemented using the find command in Matlab. Based on Lemma 2, if $Q_{i,i}=0$ then the monomial $z_i$ and the $i^{th}$ row and column of $Q$ can be removed. This is equivalent to zeroing out the corresponding columns of $A$ (Step 7). This implementation has the advantage that $A$ and $b$ do not need to be recomputed for each updated set $M_k$. Zeroing out columns of $A$ in Step 7 also means that new equations of the form $Q_{i,i}=0$ may be uncovered during the next iteration. The iteration continues until no new zero diagonal entries of $Q$ are discovered. The next theorem proves that if $p$ is a SOS then the decomposition must be expressible using only the monomials associated with the final set $M_{k_f}$. Moreover, $M_{k_f}\subseteq\frac{1}{2}C(p)\cap\mathbb{N}^n$, i.e. the list of monomials returned by the zero diagonal algorithm is never larger than the list obtained from the Newton polytope method. In fact, there are polynomials for which the zero diagonal algorithm returns a strictly smaller list of monomials than the Newton polytope; the second example below provides an instance of this fact.

1. Given: A polynomial $p=\sum_{\alpha\in\mathcal{A}}c_\alpha x^\alpha$.
2. Initialization: Set $k=0$ and $M_0:=\{\alpha_i\}_{i=1}^{l_z}\subseteq\mathbb{N}^n$.
3. Form $Aq=b$: Construct the equality constraint data, $A\in\mathbb{R}^{l\times l_z^2}$ and $b\in\mathbb{R}^l$, obtained by equating coefficients of $p=z^TQz$.
4. Iteration:
5. Set $\mathcal{Z}=\emptyset$, $k:=k+1$, and $M_k:=M_{k-1}$.
6. Search $Aq=b$: If there is an equation of the form $Q_{i,i}=0$ then set $M_k:=M_k\backslash\{\alpha_i\}$ and $\mathcal{Z}=\mathcal{Z}\cup\mathcal{I}$, where $\mathcal{I}$ are the entries of $q$ corresponding to the $i^{th}$ row and column of $Q$.
7. For each $j\in\mathcal{Z}$ set the $j^{th}$ column of $A$ equal to zero.
8. Terminate if $\mathcal{Z}=\emptyset$; otherwise return to step 5.
9. Return: $M_k$, $A$, $b$.

TABLE I: Monomial Reduction using the Zero Diagonal Algorithm
Theorem 3

The zero diagonal algorithm terminates in a finite number of steps, $k_f$, and $M_{k_f}\subseteq\frac{1}{2}C(p)\cap\mathbb{N}^n$. Moreover, if $p=\sum_{i=1}^m f_i^2$ then $C(f_i)\cap\mathbb{N}^n\subseteq M_{k_f}$.

Proof:

$M_0$ has $l_z$ elements. The algorithm terminates unless at least one point is removed from $M_k$ at each iteration. Thus the algorithm must terminate after $k_f\leq l_z+1$ steps.

To show $M_{k_f}\subseteq\frac{1}{2}C(p)\cap\mathbb{N}^n$, consider a vertex $\alpha_i$ of $\text{convhull}(M_{k_f})$. If there exist $u,v\in\text{convhull}(M_{k_f})$ such that $2\alpha_i=u+v$ then $u=v=\alpha_i$. This follows from $\alpha_i=\frac{1}{2}(u+v)$ and the definition of a vertex. As a consequence, $S_{2\alpha_i}=\{(i,i)\}$. By Lemma 1,

$$Q_{i,i} = \begin{cases} c_{2\alpha_i} & 2\alpha_i\in\mathcal{A}\\ 0 & 2\alpha_i\notin\mathcal{A}\end{cases} \qquad (19)$$

$Q_{i,i}\neq 0$ since $\alpha_i$ was not removed at Step 6 during the final iteration, and thus $2\alpha_i\in\mathcal{A}\subseteq C(p)$. This implies that $\alpha_i\in\frac{1}{2}C(p)$, i.e. $\frac{1}{2}C(p)$ contains all vertices of $\text{convhull}(M_{k_f})$. Hence $M_{k_f}\subseteq\text{convhull}(M_{k_f})\subseteq\frac{1}{2}C(p)$.

Finally, it is shown that $C(f_i)\cap\mathbb{N}^n\subseteq M_{k_f}$. $C(f_i)\subseteq\frac{1}{2}C(p)$ by Theorem 2 and $\frac{1}{2}C(p)\subseteq\text{convhull}(M_0)$ by the choice of $M_0$. Thus $C(f_i)\cap\mathbb{N}^n\subseteq M_0$. Let $z$ be the vector of monomials associated with $M_0$. If $p=\sum_{i=1}^m f_i^2$ then there exists $Q\succeq 0$ such that $p=z^TQz$. If the iteration removes no degree vectors then $M_{k_f}=M_0$ and the proof is complete. Otherwise, let $\alpha_i$ be the first removed degree vector. Based on Step 6, $p=z^TQz$ constrains $Q_{i,i}=0$. By Lemma 2 the monomial $z_i$ cannot appear in any of the $f_j$. Hence $C(f_i)\cap\mathbb{N}^n\subseteq M_0\backslash\{\alpha_i\}$. Induction can be used to show that $C(f_i)\cap\mathbb{N}^n\subseteq M_k$ holds after each step $k$, including the final step $k_f$. ∎

This algorithm is currently implemented in SOSOPT [1]. The results in Theorem 3 still hold if $M_0\subseteq\mathbb{N}^n$ is chosen to be any set satisfying $\frac{1}{2}C(p)\cap\mathbb{N}^n\subseteq M_0$. Simple heuristics can be used to obtain an initial set of monomials $M_0$ with fewer than $l_z$ elements. $M_0$ can then be used to initialize the zero diagonal algorithm. The next step is to construct the matrix $A$ and vector $b$ obtained by equating the coefficients of $p$ and $z^TQz$. This step is required to formulate the LMI feasibility problem, so it is not an additional computational cost associated with the zero diagonal algorithm. $M_{k_f}$ contains the final reduced set of monomial degree vectors. If at least one degree vector was pruned then the returned matrix $A$ and vector $b$ may contain entire columns or rows of zeros. These rows and columns can be deleted prior to passing the data to a semidefinite programming solver. The next two examples demonstrate the basic ideas of the algorithm.

Example: Consider again the polynomial in Equation 4. The full list of all monomials in two variables with degree $\leq 2$ consists of six monomials (Equation 5). Equating the coefficients of $p$ and $z^TQz$ yields the following linear equality constraints on the entries of $Q$:

$$\begin{aligned}
Q_{2,1}+Q_{1,2} &= 0, &\qquad Q_{4,1}+Q_{1,4}+Q_{2,2} &= 7\\
Q_{4,2}+Q_{2,4} &= 0, &\qquad Q_{6,4}+Q_{4,6}+Q_{5,5} &= 0\\
Q_{3,1}+Q_{1,3} &= 0, &\qquad Q_{6,1}+Q_{1,6}+Q_{3,3} &= 4\\
Q_{5,4}+Q_{4,5} &= 0, &\qquad Q_{5,2}+Q_{2,5}+Q_{4,3}+Q_{3,4} &= -2\\
Q_{6,3}+Q_{3,6} &= 0, &\qquad Q_{6,2}+Q_{2,6}+Q_{5,3}+Q_{3,5} &= 0\\
Q_{6,5}+Q_{5,6} &= 0, &\qquad Q_{5,1}+Q_{1,5}+Q_{3,2}+Q_{2,3} &= -4\\
Q_{1,1} &= 1, &\qquad Q_{4,4} &= 3\\
Q_{6,6} &= 0 & &
\end{aligned}$$

A matrix $A$ and vector $b$ can be constructed to represent these equations in the form $Aq=b$. Note that $Q_{6,6}=0$, which implies that $Q_{i,6}=Q_{6,i}=0$ for $i=1,\ldots,6$ in any SOS decomposition of $p$. Thus the monomial $z_6=x_2^2$ cannot appear in any SOS decomposition and it can be removed from the list. After eliminating $x_2^2$ and removing the $6^{th}$ row and column of $Q$, the equality constraints reduce to:

$$\begin{aligned}
Q_{2,1}+Q_{1,2} &= 0, &\qquad Q_{4,1}+Q_{1,4}+Q_{2,2} &= 7\\
Q_{4,2}+Q_{2,4} &= 0, &\qquad Q_{5,5} &= 0\\
Q_{3,1}+Q_{1,3} &= 0, &\qquad Q_{3,3} &= 4\\
Q_{5,4}+Q_{4,5} &= 0, &\qquad Q_{5,2}+Q_{2,5}+Q_{4,3}+Q_{3,4} &= -2\\
Q_{5,3}+Q_{3,5} &= 0, &\qquad Q_{5,1}+Q_{1,5}+Q_{3,2}+Q_{2,3} &= -4\\
Q_{1,1} &= 1, &\qquad Q_{4,4} &= 3
\end{aligned}$$

Removing the $6^{th}$ row and column of $Q$ is equivalent to zeroing out the appropriate columns of the matrix $A$. This uncovers the new constraint $Q_{5,5}=0$, which implies that the monomial $z_5=x_1x_2$ can be pruned from the list. After eliminating $x_1x_2$ and removing the $5^{th}$ row and column of $Q$, the procedure can be repeated once more. No new diagonal entries of $Q$ are constrained to be zero and hence no additional monomials can be pruned from $z$. The final list consists of four monomials:

$$z = \begin{bmatrix}1 & x_1 & x_2 & x_1^2\end{bmatrix}^T \qquad (20)$$

The Newton polytope method returned the same list.

Example: Consider the polynomial $p=x_1^2+x_2^2+x_1^4x_2^4$. The Newton polytope is $C(p)=\text{convhull}(\{\left[\begin{smallmatrix}2\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\2\end{smallmatrix}\right], \left[\begin{smallmatrix}4\\4\end{smallmatrix}\right]\})$. The reduced Newton polytope is $\frac{1}{2}C(p)=\text{convhull}(\{\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\1\end{smallmatrix}\right], \left[\begin{smallmatrix}2\\2\end{smallmatrix}\right]\})$. The monomial vector corresponding to $\frac{1}{2}C(p)\cap\mathbb{N}^n$ is:

$$z := \begin{bmatrix}x_1,\ x_2,\ x_1x_2,\ x_1^2x_2^2\end{bmatrix}^T \qquad (21)$$

There are $l_z=15$ monomials in two variables with degree $\leq 4$. For simplicity, assume the zero diagonal algorithm is initialized with $M_0:=\frac{1}{2}C(p)\cap\mathbb{N}^n$. Equating coefficients of $p$ and $z^TQz$ yields the constraint $Q_{3,3}=0$ in the first iteration of the zero diagonal algorithm. The monomial $z_3=x_1x_2$ is pruned and no additional monomials are removed at the next iteration. The zero diagonal algorithm returns $M_2=\{\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\1\end{smallmatrix}\right], \left[\begin{smallmatrix}2\\2\end{smallmatrix}\right]\}$. $M_2$ is a proper subset of $\frac{1}{2}C(p)\cap\mathbb{N}^n$. The same set of monomials is returned by the zero diagonal algorithm after 13 steps if $M_0$ is initialized with the $l_z=15$ degree vectors corresponding to all monomials in two variables with degree $\leq 4$. This example demonstrates that the zero diagonal algorithm can return a strictly smaller set of monomials than the Newton polytope method.
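Both examples can be reproduced by a direct fixpoint iteration on the degree vectors: a diagonal entry $Q_{i,i}$ is forced to zero exactly when $(i,i)$ is the only remaining pair decomposing $2\alpha_i$ and $2\alpha_i\notin\mathcal{A}$. The sketch below is a Python prototype of Table I under that reformulation (an assumption on our part; SOSOPT's Matlab implementation operates on the columns of $A$ instead).

```python
from itertools import product

def zero_diagonal(M0, support):
    """Zero diagonal algorithm on degree vectors (cf. Table I).
    M0      : iterable of degree-vector tuples alpha_i
    support : set A of degree vectors of p = sum_alpha c_alpha x^alpha
    Returns the pruned set M_kf."""
    M = set(M0)
    while True:
        removed = set()
        for a in M:
            two_a = tuple(2 * t for t in a)
            if two_a in support:
                continue                  # Q_{i,i} = c_{2a} is not forced to zero
            # S_{2a} restricted to the surviving monomials:
            pairs = [(u, v) for u in M for v in M
                     if tuple(x + y for x, y in zip(u, v)) == two_a]
            if pairs == [(a, a)]:
                removed.add(a)            # Q_{i,i} = 0: prune the monomial x^a
        if not removed:
            return M                      # fixpoint reached: no new zero diagonals
        M -= removed

def all_degvecs(n, d):
    return [a for a in product(range(d + 1), repeat=n) if sum(a) <= d]

# Second example: p = x1^2 + x2^2 + x1^4*x2^4, all 15 monomials of degree <= 4.
M = zero_diagonal(all_degvecs(2, 4), {(2, 0), (0, 2), (4, 4)})
assert M == {(1, 0), (0, 1), (2, 2)}      # strictly smaller than (1/2)C(p) cap N^2
```

Starting instead from $M_0=\frac{1}{2}C(p)\cap\mathbb{N}^2=\{(1,0),(0,1),(1,1),(2,2)\}$ gives the same answer, matching the discussion above.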

V Simplification Method for SOS Programs

This section describes a simplification method for SOS programs based on the zero diagonal algorithm. A sum-of-squares program is an optimization problem with a linear cost and affine SOS constraints on the decision variables [14]:

$$\min_{u\in\mathbb{R}^r} c^Tu \quad \text{subject to } a_k(x,u)\in\Sigma[x],\ \ k=1,\ldots,N \qquad (22)$$

uru\in\mathbb{R}^{r} is the vector of decision variables. The polynomials {ak}k=1N\{a_{k}\}_{k=1}^{N} are given problem data and are affine in uu:

ak(x,u):=ak,0(x)+ak,1(x)u1++ak,r(x)ur\displaystyle a_{k}(x,u):=a_{k,0}(x)+a_{k,1}(x)u_{1}+\dots+a_{k,r}(x)u_{r} (23)

Theorem 1 is used to convert an SOS program into a semidefinite program (SDP). The constraint ak(x,u)Σ[x]a_{k}(x,u)\in{\Sigma[x]} can be equivalently written as:

ak,0(x)+ak,1(x)u1++ak,r(x)ur=zkTQkzk\displaystyle a_{k,0}(x)+a_{k,1}(x)u_{1}+\dots+a_{k,r}(x)u_{r}=z_{k}^{T}Q_{k}z_{k} (24)
Qk0\displaystyle Q_{k}\succeq 0 (25)

If maxu[degak(x,u)]=2d\max_{u}[\deg a_{k}(x,u)]=2d then, in general, zkz_{k} must contain all monomials in nn variables of degree d\leq d. QkQ_{k} is a new matrix of decision variables that is introduced when converting an SOS constraint to an LMI constraint. Equating the coefficients of zkTQkzkz_{k}^{T}Q_{k}z_{k} and ak(x,u)a_{k}(x,u) imposes linear equality constraints on the decision variables uu and QkQ_{k}. There exists a matrix Al×mA\in\mathbb{R}^{l\times m} and vector blb\in\mathbb{R}^{l} such that the linear equations for all SOS constraints are given by Ay=bAy=b where
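To make the coefficient-matching step concrete, the following Python sketch builds AA and bb for a single SOS constraint with no free variables uu (so y=vec(Q)y=vec(Q)), reusing the example polynomial from the previous section; all identifiers are illustrative:

```python
import numpy as np

# Coefficient matching for one SOS constraint with no free variables u,
# so y = vec(Q). Illustrative data for p = x1^2 + x2^2 + x1^4 x2^4 with
# z = [x1, x2, x1*x2, x1^2*x2^2].

def expo_sum(e1, e2):
    return tuple(a + b for a, b in zip(e1, e2))

alphas = [(1, 0), (0, 1), (1, 1), (2, 2)]       # degree vectors of z
coef = {(2, 0): 1.0, (0, 2): 1.0, (4, 4): 1.0}  # nonzero coefficients of p
m = len(alphas)

# one linear equation per exponent reachable as alpha_i + alpha_j
gammas = sorted({expo_sum(alphas[i], alphas[j])
                 for i in range(m) for j in range(m)})

A = np.zeros((len(gammas), m * m))
b = np.array([coef.get(g, 0.0) for g in gammas])
for row, g in enumerate(gammas):
    for i in range(m):
        for j in range(m):
            if expo_sum(alphas[i], alphas[j]) == g:
                A[row, j * m + i] = 1.0  # vec(Q) stacks columns: Q_ij -> j*m+i

# a row with a single nonzero entry and b_i = 0 forces that entry of Q to zero
forced = [gammas[r] for r in range(len(gammas))
          if np.count_nonzero(A[r]) == 1 and b[r] == 0]
print(forced)  # [(2, 2)]: the x1^2 x2^2 equation forces Q_33 = 0
```

The single-nonzero test at the end is exactly the type of equation the zero diagonal algorithm and the simplification procedure below search for.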

y:=[uT,vec(Q1)T,,vec(QN)T]T\displaystyle y:=[u^{T},\ vec(Q_{1})^{T},\ \ldots,\ vec(Q_{N})^{T}]^{T} (26)

vec(Qk)vec(Q_{k}) denotes the vector obtained by vertically stacking the columns of QkQ_{k}. The dimension mm is equal to r+k=1Nmk2r+\sum_{k=1}^{N}m_{k}^{2} where QkQ_{k} is mk×mkm_{k}\times m_{k} (k=1,,Nk=1,\ldots,N). After introducing a Gram matrix for each constraint the SOS program can be expressed as:

minur,{Qk}k=1NcTu\displaystyle\min_{u\in\mathbb{R}^{r},\{Q_{k}\}_{k=1}^{N}}c^{T}u (27)
subject to: Ay=b\displaystyle\mbox{subject to: }Ay=b
Qk0,k=1,N\displaystyle Q_{k}\succeq 0,\ \ k=1,\ldots N

Equation 27 is an SDP expressed in SeDuMi [17] primal form. uu is a vector of free decision variables and {Qk}k=1N\{Q_{k}\}_{k=1}^{N} contain decision variables that are constrained to lie in the positive semi-definite cone. SeDuMi internally handles the symmetry constraints implied by Qk=QkTQ_{k}=Q_{k}^{T}.
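The stacking of yy and the cone description can be sketched as follows; the block sizes are invented for illustration, and the dictionary K merely mimics the K.f/K.s cone structure used by SeDuMi's primal form:

```python
import numpy as np

# Assembling y = [u; vec(Q1); ...; vec(QN)] and a cone description in the
# style of SeDuMi's primal form. Sizes are made-up examples; K["f"] counts
# free variables and K["s"] lists the PSD block dimensions.
r = 2                                   # number of free decision variables u
block_sizes = [3, 4]                    # Q1 is 3x3, Q2 is 4x4
u = np.zeros(r)
Qs = [np.zeros((mk, mk)) for mk in block_sizes]

# vec(.) stacks columns, i.e. column-major (Fortran) order
y = np.concatenate([u] + [Q.flatten(order="F") for Q in Qs])

K = {"f": r, "s": block_sizes}          # cone structure passed to the solver
assert y.size == K["f"] + sum(mk ** 2 for mk in K["s"])  # m = r + sum m_k^2
print(y.size)  # 2 + 9 + 16 = 27
```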

The SOS simplification procedure is a generalization of the zero diagonal algorithm. It prunes the list of monomials used in each SOS constraint. It also attempts to remove free decision variables that are implicitly constrained to be zero. Specifically, the constraints in some SOS programs imply both ui0u_{i}\geq 0 and ui0u_{i}\leq 0, i.e. there is an implicit constraint that ui=0u_{i}=0 for some ii. Appendix A.1 of [19] provides some simple examples of how these implicit constraints can arise in nonlinear analysis problems. For these simple examples it is possible to discover the implicit constraints by examination. For larger, more complicated analysis problems it can be difficult to detect that implicit constraints exist. The SOS simplification procedure described below automatically uncovers some classes of implicit constraints ui=0u_{i}=0 and removes these decision variables from the optimization. This is important because implicit constraints can cause numerical issues for SDP solvers. A significant reduction in computation time and improvement in numerical accuracy has been observed when implicitly constrained variables are removed prior to calling SeDuMi.

The general SOS simplification procedure is shown in Table II. To ease the notation the algorithm is only shown for the case of one SOS constraint (N=1N=1). The extension to SOS programs with multiple constraints (N>1N>1) is straightforward. The algorithm is initialized with a finite set of vectors M0nM_{0}\subseteq\mathbb{N}^{n}. The Newton polytope of a(x,u)a(x,u) depends on the choice of uu, so M0M_{0} must be chosen so that it contains all possible reduced Newton polytopes. One choice is to initialize M0M_{0} with the degree vectors of all monomials in nn variables of degree d\leq d, where 2d:=maxu[dega(x,u)]2d:=\max_{u}[\deg a(x,u)]. The constraint data AA and bb must be computed when formulating the SDP in any case, so forming them is not additional computation associated with the simplification procedure. The last pre-processing step is the initialization of the sign vector ss. The entry sis_{i} is +1+1, 1-1, or 0 if it can be determined from the constraints that yiy_{i} is 0\geq 0, 0\leq 0 or =0=0, respectively, and si=s_{i}= NaN if no sign information can be determined for yiy_{i}. If yiy_{i} corresponds to a diagonal entry of QQ then sis_{i} can be initialized to +1+1.

The main iteration step is the search for equations that directly constrain any decision variable to be zero (Step 7a). This is similar to the zero diagonal algorithm. The iteration also attempts to determine sign information about the decision variables. Steps 7b-7d update the sign vector based on equality constraints involving a single decision variable. For example, a decision variable must be zero if the decision variable has been previously determined to be 0\leq 0 and the current equality constraint implies that it must be 0\geq 0 (Step 7c). These decision variables can be removed from the optimization. Step 8 processes equality constraints involving two decision variables. The logic for this case is omitted due to space constraints. The processing of equality constraints can be performed very quickly using the find command in Matlab. Steps 9 and 10 prune monomials and zero out appropriate columns of AA. The iteration continues until no additional information can be determined about the sign of the decision variables.

1. Given: Polynomials {aj}j=1r\{a_{j}\}_{j=1}^{r} in variables xx. Define
a(x,u):=a0(x)+a1(x)u1++ar(x)ura(x,u):=a_{0}(x)+a_{1}(x)u_{1}+\dots+a_{r}(x)u_{r}
2. Initialization: Set k=0k=0 and choose a finite set M0:={αi}i=1mM_{0}:=\{\alpha_{i}\}_{i=1}^{m}
n\subseteq\mathbb{N}^{n} such that [ur12C(a(x,u))]nM0\left[\cup_{u\in\mathbb{R}^{r}}\frac{1}{2}C(a(x,u))\right]\cap\mathbb{N}^{n}\subseteq M_{0}.
3. Form Ay=bAy=b: Construct the equality constraint data, Al×(r+m2)A\in\mathbb{R}^{l\times(r+m^{2})}
and blb\in\mathbb{R}^{l} obtained by equating coefficients of a(x,u)=zTQza(x,u)=z^{T}Qz
where z:=[xα1,,xαm]Tz:=\begin{bmatrix}x^{\alpha_{1}},\ldots,x^{\alpha_{m}}\end{bmatrix}^{T} and y:=[uT,vec(Q)T]Ty:=[u^{T},\ vec(Q)^{T}]^{T}.
4. Sign Data: Initialize the (r+m2)×1(r+m^{2})\times 1 vector ss to be si=+1s_{i}=+1 if yiy_{i}
corresponds to a diagonal entry of QQ. Otherwise set si=s_{i}= NaN.
5. Iteration:
6. Set 𝒵={\cal Z}=\emptyset, 𝒮={\cal S}=\emptyset, k:=k+1k:=k+1, and Mk:=Mk1M_{k}:=M_{k-1}
7. Process equality constraints of the form ai,jyj=bia_{i,j}y_{j}=b_{i}
where ai,j0a_{i,j}\neq 0
    7a. If bi=0b_{i}=0 then set sj=0s_{j}=0 and 𝒵=𝒵j{\cal Z}={\cal Z}\cup j
    7b. Else if sj=s_{j}=NaN then set sj=s_{j}= sign(ai,jbi)(a_{i,j}b_{i}) and 𝒮=𝒮j{\cal S}={\cal S}\cup j
    7c. Else if sj=1s_{j}=-1 and sign(ai,jbi)=+1(a_{i,j}b_{i})=+1 then set sj=0s_{j}=0
and 𝒮=𝒮j{\cal S}={\cal S}\cup j
    7d. Else if sj=+1s_{j}=+1 and sign(ai,jbi)=1(a_{i,j}b_{i})=-1 then set sj=0s_{j}=0
and 𝒮=𝒮j{\cal S}={\cal S}\cup j
8. Process equality constraints of the form
ai,j1yj1+ai,j2yj2=bia_{i,j_{1}}y_{j_{1}}+a_{i,j_{2}}y_{j_{2}}=b_{i}.
9. If for any j𝒵j\in{\cal Z}, yjy_{j} corresponds to a diagonal entry Qi,iQ_{i,i}
then set Mk:=Mk\{αi}M_{k}:=M_{k}\backslash\{\alpha_{i}\} and 𝒵=𝒵{\cal Z}={\cal Z}\cup{\cal I} where {\cal I} are the
entries of yy corresponding to the ithi^{th} row and column of QQ.
10. For each j𝒵j\in{\cal Z} set the jthj^{th} column of AA equal to zero.
11. Terminate if 𝒵={\cal Z}=\emptyset and 𝒮={\cal S}=\emptyset otherwise return to step 6.
12.     Return: MkM_{k}, AA, bb, ss
TABLE II: Simplification Method for SOS Programs with One Constraint
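As a rough Python sketch of the single-variable pass in Table II (steps 7a-7d and 10; the two-variable rows of step 8 and the monomial pruning of step 9 are omitted), with invented data:

```python
def simplify_signs(A, b):
    """Sketch of steps 7a-7d and 10 of Table II for rows of A*y = b that
    involve a single decision variable. Steps 8 (two-variable rows) and 9
    (monomial pruning) are omitted. None plays the role of NaN."""
    A = [row[:] for row in A]
    s = [None] * len(A[0])
    changed = True
    while changed:
        changed = False
        for i, row in enumerate(A):
            cols = [j for j, a in enumerate(row) if a != 0]
            if len(cols) != 1:
                continue                          # handled by step 8, omitted
            j = cols[0]
            if b[i] == 0:
                new = 0                           # 7a: a_ij * y_j = 0
            else:
                new = 1 if row[j] * b[i] > 0 else -1   # sign(a_ij * b_i)
            if s[j] is None:
                s[j], changed = new, True         # 7b: first sign information
            elif s[j] != 0 and new != s[j]:
                s[j], changed = 0, True           # 7c / 7d: conflict => y_j = 0
            if s[j] == 0:
                for r in A:                       # step 10: zero out column j
                    r[j] = 0
    return s


# two rows pin y_0 to conflicting signs, one row forces y_1 = 0 directly,
# and the last row implies y_2 >= 0
A = [[2, 0, 0], [-1, 0, 0], [0, 1, 0], [0, 0, 5]]
b = [3, 3, 0, 2]
print(simplify_signs(A, b))  # [0, 0, 1]
```

Zeroing a column can create new single-variable rows, which is why the sweep repeats until no sign information changes, mirroring the termination test in step 11.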

VI Conclusions

The Newton polytope is a method to prune unnecessary monomials from an SOS decomposition. The method requires the construction of a convex hull and this can be time consuming for polynomials with many terms. This paper presented a zero diagonal algorithm for pruning monomials. The algorithm is based on a simple property of positive semidefinite matrices. The algorithm is fast since it only requires searching a set of linear equality constraints for those having certain properties. Moreover, the set of monomials returned by the algorithm is a subset of the set returned by the Newton polytope method. The zero diagonal algorithm was extended to a more general simplification method for sum-of-squares programming.

VII Acknowledgments

This research was partially supported under the NASA Langley NRA contract NNH077ZEA001N entitled “Analytical Validation Tools for Safety Critical Systems” and the NASA Langley NNX08AC65A contract entitled “Fault Diagnosis, Prognosis and Reliable Flight Envelope Assessment.” The technical contract monitors are Dr. Christine Belcastro and Dr. Suresh Joshi, respectively.

References

  • [1] G.J. Balas, A. Packard, P. Seiler, and U. Topcu. Robustness analysis of nonlinear systems. http://www.aem.umn.edu/~AerospaceControl/, 2009.
  • [2] G. Chesi, A. Garulli, A. Tesi, and A. Vicino. Homogeneous Polynomial Forms for Robustness Analysis of Uncertain Systems. Springer, 2009.
  • [3] G. Chesi, A. Tesi, A. Vicino, and R. Genesio. On convexification of some minimum distance problems. In European Control Conference, 1999.
  • [4] M.D. Choi, T.Y. Lam, and B. Reznick. Sums of squares of real polynomials. Proc. of Symposia in Pure Math., 58(2):103–126, 1995.
  • [5] B. Grünbaum. Convex Polytopes. Springer Verlag, 2003.
  • [6] D. Henrion, J. B. Lasserre, and J. Loefberg. Gloptipoly 3: moments, optimization and semidefinite programming. Optimization Methods and Software, 24(4-5):761–779, 2009.
  • [7] J.B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817, 2001.
  • [8] J. Lofberg. Yalmip : A toolbox for modeling and optimization in MATLAB. In Proc. of the CACSD Conference, Taipei, Taiwan, 2004.
  • [9] J. Lofberg. Pre- and post-processing sum-of-squares problems in practice. IEEE Transactions on Automatic Control, 54(5):1007–1011, 2009.
  • [10] A. Brønsted. An Introduction to Convex Polytopes. Springer-Verlag, 1983.
  • [11] P. Parrilo. Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization. PhD thesis, California Institute of Technology, 2000.
  • [12] P. Parrilo. Semidefinite programming relaxations for semialgebraic problems. Mathematical Programming Ser. B, 96(2):293–320, 2003.
  • [13] V. Powers and T. Wörmann. An algorithm for sums of squares of real polynomials. J. of Pure and Applied Algebra, 127:99–104, 1998.
  • [14] S. Prajna, A. Papachristodoulou, P. Seiler, and P. A. Parrilo. SOSTOOLS: Sum of squares optimization toolbox for MATLAB, 2004.
  • [16] B. Reznick. Some concrete aspects of Hilbert's 17th problem. Contemporary Mathematics, 253:251–272, 2000.
  • [16] B. Reznick. Some concrete aspects of Hilberts 17th problem. Contemporary Mathematics, 253(251-272), 2000.
  • [17] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, pages 625–653, 1999.
  • [18] B. Sturmfels. Polynomial equations and convex polytopes. American Mathematical Monthly, 105(10):907–922, 1998.
  • [19] W. Tan. Nonlinear Control Analysis and Synthesis using Sum-of-Squares Programming. PhD thesis, Univ. of Calif., Berkeley, 2006.
  • [20] U. Topcu. Quantitative Local Analysis of Nonlinear Systems. PhD thesis, Univ. of California, Berkeley, 2008.
  • [21] H. Waki and M. Muramatsu. A facial reduction algorithm for finding sparse SOS representations. Technical Report CS-09-02, Dept. of Comp. Science, The University of Electro-Communications, 2009.