
Higher-order Reverse Automatic Differentiation with emphasis on the third-order.

R. Gower (School of Mathematics and Maxwell Institute for Mathematical Sciences, The University of Edinburgh, e-mail: gowerrobert@gmail.com) $\cdot$ A. Gower (School of Mathematics, Statistics and Applied Mathematics, National University of Ireland Galway)
Abstract

It is commonly assumed that calculating third-order information is too expensive for most applications. But we show that the directional derivative of the Hessian ($D^{3}f(x)\cdot d$) can be calculated at a cost proportional to that of a state-of-the-art method for calculating the Hessian matrix. We do this by first presenting a simple procedure for designing high-order reverse methods and applying it to deduce several methods, including a reverse method that calculates $D^{3}f(x)\cdot d$. We have implemented this method taking into account symmetry and sparsity, and successfully calculated this derivative for functions with a million variables. These results indicate that the use of third-order information in a general nonlinear solver, such as Halley-Chebyshev methods, could be a practical alternative to Newton's method.

1 Introduction

Derivatives permeate mathematics and engineering right from the first steps of calculus, which together with the Taylor series expansion is a central tool in designing models and methods of modern mathematics. Despite this, successful methods for automatically calculating derivatives of $n$-dimensional functions are a relatively recent development. Perhaps the most notable among recent methods is the advent of Automatic Differentiation (AD), which has the remarkable achievement of the "cheap gradient principle", wherein the cost of evaluating the gradient is proportional to that of the underlying function [11]. This AD success is not limited to the gradient: there also exist a number of efficient AD algorithms for calculating Jacobian [6, 10] and Hessian matrices [8, 7] that can accommodate large-dimensional sparse instances. The same success has not been observed in calculating higher-order derivatives.

The assumed cost in calculating high-order derivatives drives the design of methods, typically favouring the use of lower-order methods. In the optimization community it is generally assumed that calculating any third-order information is too costly, so the design of methods revolves around using first and second order information. We will show that third-order information can be used at a cost proportional to the cost of calculating the Hessian. This has an immediate application in third-order nonlinear optimization methods such as the Chebyshev-Halley Family [14]. Furthermore, the need for higher order differentiation finds applications in calculating quadratures [16, 4], bifurcations and periodic orbits [13]. In the fields of numerical integration and solution of PDE’s, a lot of attention has been given to refining and adapting meshes to then use first and second-order approximations over these meshes. An alternative paradigm would be to use fixed coarse meshes and higher approximations. With the capacity to efficiently calculate high-order derivatives, this approach could become competitive and lift the fundamental deterrent in higher-order methods.

Current methods for calculating derivatives of order three or higher in the AD community typically propagate univariate Taylor series [12] or repeatedly apply the tangent and adjoint operations [18]. In these methods, each element of the desired derivative is calculated separately. If AD has taught us anything, it is that we should not treat elements of derivatives separately, for their computation can be highly interlaced. The cheap gradient principle illustrates this well, for calculating the elements of the gradient separately yields a time complexity of $n$ times that of simultaneously calculating all entries. This same principle should be carried over to higher-order methods, that is, be wary of overlapping calculations in individual elements. Another alternative for calculating high-order derivatives is the use of forward differentiation [19]. The drawback of forward propagation is that it calculates the derivatives of all intermediate functions, in relation to the independent variables, even when these do not contribute to the desired end result. For these reasons, we look at calculating high-order derivatives as a whole and focus on reverse AD methods.

An efficient alternative to AD is for end users to hand code their derivatives. Though with the advent of ever more complicated models, this task is becoming increasingly error prone, it is difficult to write efficient code, and, let's face it, it is boring. This approach also rules out methods that use high-order derivatives, for no one can expect the end user to code the total and directional derivatives of high-order tensors.

The article flows as follows, first we develop algorithms that calculate derivatives in a more general setting, wherein our function is described as a sequence of compositions of maps, Section 2. We then use Griewank and Walther’s [11] state-transformations in Section 3, to translate a composition of maps into an AD setting and an efficient implementation. Numerical tests are presented in Section 4, followed by our conclusions in Section 5.

2 Derivatives of Sequences of Maps

In preparation for the AD setting, we first develop algorithms for calculating derivatives of functions that can be broken into a composition of operators

F(x)=\Psi^{\ell}\circ\Psi^{\ell-1}\circ\cdots\circ\Psi^{1}(x), \qquad (1)

for $\Psi^{i}$'s of varying dimension: $\Psi^{1}\in C^{2}(\mathbb{R}^{n},\mathbb{R}^{m_{1}})$ and $\Psi^{i}\in C^{2}(\mathbb{R}^{m_{i-1}},\mathbb{R}^{m_{i}})$, with each $m_{i}\in\mathbb{N}$ and $i=2,\ldots,\ell$, so that $F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m_{\ell}}$. From this we define a functional $f(x)=y^{T}F(x)$, where $y\in\mathbb{R}^{m_{\ell}}$, and develop methods for calculating the gradient $\nabla f(x)=y^{T}DF(x)$, the Hessian $D^{2}f(x)=y^{T}D^{2}F(x)$ and the tensor $D^{3}f(x)=y^{T}D^{3}F(x)$.

For a given $d\in\mathbb{R}^{n}$, we also develop methods for the directional derivatives $DF(x)\cdot d$ and $D^{2}F(x)\cdot d$, the Hessian-vector product $D^{2}f(x)\cdot d=y^{T}D^{2}F(x)\cdot d$ and the tensor-vector product $D^{3}f(x)\cdot d=y^{T}D^{3}F(x)\cdot d$. Notation will be gradually introduced and clarified as required, including the definition of the preceding directional derivatives.

2.1 First-Order Derivatives

Taking the derivative of $F$ in equation (1) and recursively applying the chain rule, we get

y^{T}DF=y^{T}D\Psi^{\ell}D\Psi^{\ell-1}\cdots D\Psi^{1}. \qquad (2)

Note that $y^{T}DF$ is the transpose of the gradient $\nabla(y^{T}F)$. For simplicity's sake, the argument of each function is omitted in (2), but it should be noted that $D\Psi^{i}$ is evaluated at the argument $(\Psi^{i-1}\circ\cdots\circ\Psi^{1})(x)$, for each $i$ from $1$ to $\ell$. If each of these arguments has been recorded, the gradient of $y^{T}F(x)$ can be calculated with what is called a reverse sweep, Algorithm 1. Reverse, for it traverses the maps from the last, $\Psi^{\ell}$, to the first, $\Psi^{1}$, the opposite direction to that in which (1) is evaluated. The intermediate stages of the gradient calculation are accumulated in the vector $\overline{\mathrm{v}}$, its dimension changing from one iteration to the next. This will be a recurring fact in the matrices and vectors used to store the intermediate phases of the archetype algorithms presented in this article.

initialization: $\overline{\mathrm{v}}=y$
for $i=\ell,\ldots,1$ do
   $\overline{\mathrm{v}}^{T}\leftarrow\overline{\mathrm{v}}^{T}D\Psi^{i}$
end for
Output: $y^{T}DF(x)=\overline{\mathrm{v}}^{T}$
Algorithm 1 Archetype Reverse Gradient.
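To make the reverse sweep concrete, here is a minimal Python/numpy sketch of Algorithm 1 on a hypothetical two-map composition $F(x)=\Psi^{2}(\Psi^{1}(x))$; the maps Psi1, Psi2 and the weight $y$ are illustrative choices, not part of the paper's implementation.

import numpy as np

def Psi1(x):   return np.array([x[0] * x[1], np.sin(x[1]), x[0] + x[1]])   # Psi^1: R^2 -> R^3
def DPsi1(x):  return np.array([[x[1], x[0]], [0.0, np.cos(x[1])], [1.0, 1.0]])
def Psi2(u):   return np.array([u[0] * u[2] + u[1]])                       # Psi^2: R^3 -> R^1
def DPsi2(u):  return np.array([[u[2], 1.0, u[0]]])

x, y = np.array([1.0, 2.0]), np.array([1.0])

# Forward pass: record the argument at which each D Psi^i must be evaluated.
args = [x, Psi1(x)]

# Reverse sweep of Algorithm 1: vbar^T <- vbar^T D Psi^i, for i = 2, 1.
vbar = y.copy()
for Dpsi in [DPsi2(args[1]), DPsi1(args[0])]:
    vbar = vbar @ Dpsi

# Here y^T F(x) = x0^2 x1 + x0 x1^2 + sin(x1), so vbar should equal
# [2 x0 x1 + x1^2, x0^2 + 2 x0 x1 + cos(x1)].
print(vbar)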

For a given direction $d\in\mathbb{R}^{n}$, we define the directional derivative of $F(x)$ as

\frac{d}{dt}F(x+td)=D_{i}F(x+td)\,d_{i}:=DF(x+td)\cdot d, \qquad (3)

where we have omitted the summation symbol for $i$ and instead use Einstein notation, in which a repeated index implies summation over that index. We use this notation throughout the article unless otherwise stated. Again using the chain rule and (1), we find

DF(x)\cdot d=D\Psi^{\ell}D\Psi^{\ell-1}\cdots D\Psi^{1}\cdot d.

This can be efficiently calculated using a forward sweep of the computational graph, detailed below.

initialization: $\dot{\mathrm{v}}^{0}=d$
for $i=1,\ldots,\ell$ do
   $\dot{\mathrm{v}}^{i}\leftarrow D\Psi^{i}\,\dot{\mathrm{v}}^{i-1}$
end for
Output: $DF(x)\cdot d=\dot{\mathrm{v}}^{\ell}$
Algorithm 2 Archetype 1st Order Directional Derivative.
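Under the same illustrative assumptions as the previous sketch (hypothetical maps chosen only for illustration), the following runs the forward sweep of Algorithm 2 and checks the result against a forward difference of $F$.

import numpy as np

def Psi1(x):   return np.array([x[0] * x[1], np.sin(x[1]), x[0] + x[1]])
def DPsi1(x):  return np.array([[x[1], x[0]], [0.0, np.cos(x[1])], [1.0, 1.0]])
def Psi2(u):   return np.array([u[0] * u[2] + u[1]])
def DPsi2(u):  return np.array([[u[2], 1.0, u[0]]])

x, d = np.array([1.0, 2.0]), np.array([1.0, -1.0])
args = [x, Psi1(x)]                        # arguments recorded on the way forward

# Forward sweep of Algorithm 2: vdot <- D Psi^i vdot, for i = 1, 2.
vdot = d.copy()
for arg, Dpsi in zip(args, [DPsi1, DPsi2]):
    vdot = Dpsi(arg) @ vdot

# Sanity check against a forward difference of F(x) = Psi2(Psi1(x)).
t = 1e-6
print(vdot, (Psi2(Psi1(x + t * d)) - Psi2(Psi1(x))) / t)   # should agree to roughly 1e-5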

2.2 Second-Order Derivatives

Here we develop a reverse algorithm for calculating the Hessian $D^{2}(y^{T}F(x))$. First we determine the Hessian for $F$ as a composition of two maps, then we use induction to design a method for when $F$ is a composition of $\ell$ maps.

For $F(x)=\Psi^{2}\circ\Psi^{1}(x)$ and $\ell=2$, we find the Hessian by differentiating in the $j$-th and $k$-th coordinates,

D_{jk}(y_{i}F_{i})=(y_{i}D_{rs}\Psi_{i}^{2})D_{j}\Psi_{r}^{1}D_{k}\Psi_{s}^{1}+(y_{i}D_{r}\Psi^{2}_{i})D_{jk}\Psi^{1}_{r}, \qquad (4)

where the arguments have been omitted, so that the $(j,k)$ component of the Hessian is $[D^{2}(y^{T}F)]_{jk}=D_{jk}(y^{T}F)$. The higher the order of the derivative, the messier and more unclear component notation becomes. A way around this issue is to use the tensor notation

y^{T}D^{2}F\cdot(v,w):=y_{i}D_{jk}F_{i}v_{j}w_{k},

and

(y^{T}D^{2}F\cdot w)\cdot v:=y^{T}D^{2}F\cdot(v,w), \qquad (5)

for any vectors $v,w\in\mathbb{R}^{n}$, and in general,

[y^{T}D^{2}F\cdot(\triangle,\square)]_{t_{2}\cdots t_{q}s_{2}\cdots s_{p}}:=y_{i}D_{t_{1}s_{1}}F_{i}\,\triangle_{t_{1}t_{2}\cdots t_{q}}\,\square_{s_{1}s_{2}\cdots s_{p}}, \qquad (6)

and

(y^{T}D^{2}F\cdot\square)\cdot\triangle:=y^{T}D^{2}F\cdot(\triangle,\square) \qquad (7)

for any compatible $\triangle$ and $\square$. Using matrix notation for a composition of maps can be aesthetically unpleasant. Using this tensor notation, the Hessian of $y^{T}F$, see equation (4), becomes

y^{T}D^{2}F=y^{T}D^{2}\Psi^{2}\cdot(D\Psi^{1},D\Psi^{1})+y^{T}D\Psi^{2}\cdot D^{2}\Psi^{1}. \qquad (8)

We recursively use the identity (8) to design an algorithm that calculates the Hessian of a function $y^{T}F(x)$ composed of $\ell$ maps, as defined in equation (1).

initialization: $\overline{\mathrm{v}}=y$, $W=0$
for $i=\ell,\ldots,1$ do
   $W\leftarrow W\cdot(D\Psi^{i},D\Psi^{i})$
   $W\leftarrow W+\overline{\mathrm{v}}^{T}D^{2}\Psi^{i}$
   $\overline{\mathrm{v}}^{T}\leftarrow\overline{\mathrm{v}}^{T}D\Psi^{i}$
end for
Output: $y^{T}D^{2}F\leftarrow W$, $\ y^{T}DF\leftarrow\overline{\mathrm{v}}^{T}$
Algorithm 3 Archetype Reverse Hessian.

Proof of Algorithm: We use induction on the number of compositions $\ell$. For $\ell=1$ the output is $W=y^{T}D^{2}\Psi^{1}$. Now we suppose the algorithm is correct for $m-1$ map compositions, and use this assumption to show that for $\ell=m$ the output is $W=y^{T}D^{2}F$. Let

y^{T}X=y^{T}\Psi^{m}\circ\cdots\circ\Psi^{2},

so that $y^{T}F=y^{T}X\circ\Psi^{1}$. Then at the end of the iteration $i=2$, by the chain rule, $\overline{\mathrm{v}}^{T}=y^{T}DX$ and, by induction, $W=y^{T}D^{2}X$. This way, at termination, that is, after the iteration $i=1$, we get

W=y^{T}D^{2}X\cdot(D\Psi^{1},D\Psi^{1})+y^{T}DX\cdot D^{2}\Psi^{1}
 =y^{T}D^{2}(X\circ\Psi^{1}) \qquad [\mbox{Equation (8)}]
 =y^{T}D^{2}F. \qquad\blacksquare

Now we take a small detour to show how to calculate Hessian-vector products in a similar manner. We do this because it is an important component of graph-coloring based algorithms for calculating the Hessian [7], and because its complexity is, surprisingly, the same as that of evaluating the underlying functional $y^{T}F$ [3]. Thus, analogously, we calculate the directional derivative of the gradient $y^{T}DF(x)$, for $\ell=2$,
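As a concrete check of Algorithm 3, the sketch below runs the reverse Hessian recursion on the same hypothetical two-map composition used after Algorithm 1; the einsum calls spell out the contractions $W\cdot(D\Psi^{i},D\Psi^{i})$ and $\overline{\mathrm{v}}^{T}D^{2}\Psi^{i}$, and the maps are illustrative assumptions only.

import numpy as np

def Psi1(x):   return np.array([x[0] * x[1], np.sin(x[1]), x[0] + x[1]])
def DPsi1(x):  return np.array([[x[1], x[0]], [0.0, np.cos(x[1])], [1.0, 1.0]])
def D2Psi1(x):                       # shape (3, 2, 2): D2Psi1[s, j, k] = d^2 Psi1_s / dx_j dx_k
    T = np.zeros((3, 2, 2)); T[0] = [[0, 1], [1, 0]]; T[1, 1, 1] = -np.sin(x[1]); return T
def DPsi2(u):  return np.array([[u[2], 1.0, u[0]]])
def D2Psi2(u):                       # shape (1, 3, 3)
    T = np.zeros((1, 3, 3)); T[0, 0, 2] = T[0, 2, 0] = 1.0; return T

x, y = np.array([1.0, 2.0]), np.array([1.0])
args = [x, Psi1(x)]                  # recorded on the forward sweep

vbar, W = y.copy(), np.zeros((1, 1))
for arg, (Dp, D2p) in zip(reversed(args), [(DPsi2, D2Psi2), (DPsi1, D2Psi1)]):
    J, H = Dp(arg), D2p(arg)
    W = np.einsum('st,sj,tk->jk', W, J, J)     # W <- W · (DPsi, DPsi)
    W = W + np.einsum('s,sjk->jk', vbar, H)    # W <- W + vbar^T D^2 Psi
    vbar = vbar @ J                            # vbar <- vbar^T DPsi

# For this example f(x) = x0^2 x1 + x0 x1^2 + sin(x1), so
# D^2 f = [[2 x1, 2 x0 + 2 x1], [2 x0 + 2 x1, 2 x0 - sin(x1)]].
print(W)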

y^{T}D_{jk}F\,d_{k}=y^{T}D_{rs}\Psi^{2}D_{j}\Psi^{1}_{r}D_{k}\Psi^{1}_{s}d_{k}+y^{T}D_{r}\Psi^{2}D_{jk}\Psi^{1}_{r}d_{k},

or simply,

y^{T}D^{2}F\cdot d=y^{T}D^{2}\Psi^{2}\cdot(D\Psi^{1},D\Psi^{1}\cdot d)+y^{T}D\Psi^{2}\cdot D^{2}\Psi^{1}\cdot d, \qquad (10)

and use this recursively to calculate the directional derivative of $y^{T}DF(x)$ in Algorithm 4. This algorithm was first described in [3].

initialization: $\dot{\mathrm{v}}^{0}=d$, $\overline{\mathrm{v}}=y\in\mathbb{R}^{m_{\ell}}$, $w=0\in\mathbb{R}^{m_{\ell}}$
for $i=1,\ldots,\ell$ do
   $\dot{\mathrm{v}}^{i}\leftarrow D\Psi^{i}\cdot\dot{\mathrm{v}}^{i-1}$
end for
for $i=\ell,\ldots,1$ do
   $w\leftarrow w\cdot D\Psi^{i}$
   $w\leftarrow w+\overline{\mathrm{v}}^{T}D^{2}\Psi^{i}\cdot\dot{\mathrm{v}}^{i-1}$
   $\overline{\mathrm{v}}^{T}\leftarrow\overline{\mathrm{v}}^{T}D\Psi^{i}$
end for
Output: $y^{T}D^{2}F(x)\cdot d\leftarrow w$, $\ y^{T}DF\leftarrow\overline{\mathrm{v}}^{T}$
Algorithm 4 Archetype Gradient Directional Derivative

Proof of Algorithm: Let $y^{T}F$ be a composition of $\ell$ maps as in (1) and

X^{m}=y^{T}\Psi^{\ell}\circ\cdots\circ\Psi^{m},

so that $X^{m-1}=X^{m}\circ\Psi^{m-1}$. The first for loop simply accumulates the directional derivative $DF\cdot d$. For the second for loop, we use the induction hypothesis that at the end of the $i=m$ iteration $w=D^{2}X^{m}\cdot\dot{\mathrm{v}}^{m-1}$. On the first iteration, $i=\ell$, the output is $w=y^{T}D^{2}\Psi^{\ell}\cdot\dot{\mathrm{v}}^{\ell-1}=D^{2}X^{\ell}\cdot\dot{\mathrm{v}}^{\ell-1}$. Now suppose our hypothesis is true for $i=m+1$, so that at the end of the $i=m+1$ iteration, by the induction hypothesis,

w=D^{2}X^{m+1}\cdot\dot{\mathrm{v}}^{m}=D^{2}X^{m+1}\cdot D\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1},

and, by calculus,

\overline{\mathrm{v}}^{T}=y^{T}D\Psi^{\ell}\cdots D\Psi^{m+1}=DX^{m+1}.

Then for the next step, the $i=m$ iteration,

w\leftarrow w\cdot D\Psi^{m}+\overline{\mathrm{v}}^{T}D^{2}\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1}
 =(D^{2}X^{m+1}\cdot D\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1})\cdot D\Psi^{m}+DX^{m+1}\cdot D^{2}\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1}
 =D^{2}X^{m+1}\cdot(D\Psi^{m},D\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1})+DX^{m+1}\cdot D^{2}\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1}
 =D^{2}(X^{m+1}\circ\Psi^{m})\cdot\dot{\mathrm{v}}^{m-1} \qquad [\mbox{Equation (10)}]
 =D^{2}X^{m}\cdot\dot{\mathrm{v}}^{m-1}.

Thus by induction we have proved that at the end of the $i=1$ iteration,

w=D^{2}X^{1}\cdot\dot{\mathrm{v}}^{0}=D^{2}X^{1}\cdot d=y^{T}D^{2}F(x)\cdot d. \qquad\blacksquare
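The following sketch instantiates Algorithm 4 on the same hypothetical two-map example used above; it is a toy illustration of the archetype recursion, not the component-wise implementation of Section 3.

import numpy as np

def Psi1(x):   return np.array([x[0] * x[1], np.sin(x[1]), x[0] + x[1]])
def DPsi1(x):  return np.array([[x[1], x[0]], [0.0, np.cos(x[1])], [1.0, 1.0]])
def D2Psi1(x):
    T = np.zeros((3, 2, 2)); T[0] = [[0, 1], [1, 0]]; T[1, 1, 1] = -np.sin(x[1]); return T
def DPsi2(u):  return np.array([[u[2], 1.0, u[0]]])
def D2Psi2(u):
    T = np.zeros((1, 3, 3)); T[0, 0, 2] = T[0, 2, 0] = 1.0; return T

x, d, y = np.array([1.0, 2.0]), np.array([1.0, -1.0]), np.array([1.0])
args = [x, Psi1(x)]

# Forward sweep: vdots[i] holds \dot{v}^i.
vdots = [d]
for arg, Dp in zip(args, [DPsi1, DPsi2]):
    vdots.append(Dp(arg) @ vdots[-1])

# Reverse sweep.
vbar, w = y.copy(), np.zeros(1)
for i, (Dp, D2p) in zip([2, 1], [(DPsi2, D2Psi2), (DPsi1, D2Psi1)]):
    J, H = Dp(args[i - 1]), D2p(args[i - 1])
    w = w @ J                                               # w <- w · DPsi^i
    w = w + np.einsum('s,sjk,k->j', vbar, H, vdots[i - 1])  # w <- w + vbar^T D^2 Psi^i · vdot^{i-1}
    vbar = vbar @ J

print(w)   # equals D^2 f(x) @ d for f(x) = x0^2 x1 + x0 x1^2 + sin(x1)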

2.3 Third-Order Methods

Now we move on to the directional derivative of $y^{T}D^{2}F(x)$, that is, the derivative of $y^{T}D^{2}F(x+td)$ in $t$, where $d\in\mathbb{R}^{n}$, to get

\frac{d}{dt}y^{T}D^{2}F(x+td)=y_{i}\frac{d}{dt}D_{jk}F_{i}(x+td)=y_{i}D_{jkm}F_{i}(x+td)\,d_{m}:=y^{T}D^{3}F(x+td)\cdot d. \qquad (11)

Here our tensor notation really facilitates working with third-order derivatives. Using matrix notation would lead to confusing equations and possibly deter intuition. The notation conventions from before carry over naturally to third-order derivatives, with

(y^{T}D^{3}F\cdot(\triangle,\square,\lozenge))_{t_{2}\ldots t_{q}s_{2}\ldots s_{p}l_{2}\ldots l_{r}}:=y_{i}D_{t_{1}s_{1}l_{1}}F_{i}\,\triangle_{t_{1}\ldots t_{q}}\,\square_{s_{1}\ldots s_{p}}\,\lozenge_{l_{1}\ldots l_{r}}, \qquad (12)

and

y^{T}D^{3}F\cdot(\triangle,\square,\lozenge)=(y^{T}D^{3}F\cdot\lozenge)\cdot(\triangle,\square)=((y^{T}D^{3}F\cdot\lozenge)\cdot\square)\cdot\triangle, \qquad (13)

for any compatible $\triangle$, $\square$ and $\lozenge$. We begin by calculating the directional derivative of a composition of two maps $F=\Psi^{2}\circ\Psi^{1}$,

\frac{d}{dt}\left(y^{T}D^{2}F(x+dt)\right)
 =D\left(y^{T}D^{2}\Psi^{2}\cdot(D\Psi^{1},D\Psi^{1})\right)\cdot d+D\left((y^{T}D\Psi^{2})\cdot D^{2}\Psi^{1}\right)\cdot d
 =(y^{T}D^{3}\Psi^{2}\cdot D\Psi^{1}\cdot d)\cdot(D\Psi^{1},D\Psi^{1})+(y^{T}D^{2}\Psi^{2})\cdot(D\Psi^{1},D^{2}\Psi^{1}\cdot d)
 \quad+(y^{T}D^{2}\Psi^{2})\cdot(D^{2}\Psi^{1}\cdot d,D\Psi^{1})+(y^{T}D\Psi^{2})\cdot D^{3}\Psi^{1}\cdot d
 \quad+(y^{T}D^{2}\Psi^{2}\cdot D\Psi^{1}\cdot d)\cdot D^{2}\Psi^{1},

in conclusion, after some rearrangement,

y^{T}\frac{d}{dt}D^{2}F(x+dt)=y^{T}D^{3}\Psi^{2}\cdot(D\Psi^{1},D\Psi^{1},D\Psi^{1}\cdot d)+y^{T}D\Psi^{2}\cdot D^{3}\Psi^{1}\cdot d
 \quad+y^{T}D^{2}\Psi^{2}\cdot\left((D\Psi^{1},D^{2}\Psi^{1}\cdot d)+(D^{2}\Psi^{1}\cdot d,D\Psi^{1})+(D^{2}\Psi^{1},D\Psi^{1}\cdot d)\right). \qquad (14)

As usual, we have omitted all arguments to the maps. Applying the above recursively gives us the Reverse Hessian Directional Derivative Algorithm 5, or RevHedir for short. To prove the correctness of RevHedir, we use induction based on $X^{m}=y^{T}\Psi^{\ell}\circ\cdots\circ\Psi^{m}$, working from $m=\ell$ backwards towards $m=1$ to calculate $y^{T}D^{3}F(x)\cdot d$.

initialization: $\dot{\mathrm{v}}^{0}=d$, $\overline{\mathrm{v}}=y$, $W=Td=0\in\mathbb{R}^{m_{\ell}\times m_{\ell}}$
for $i=1,\ldots,\ell$ do
   $\dot{\mathrm{v}}^{i}\leftarrow D\Psi^{i}\cdot\dot{\mathrm{v}}^{i-1}$
end for
for $i=\ell,\ldots,1$ do
   $Td\leftarrow Td\cdot(D\Psi^{i},D\Psi^{i})$
   $Td\leftarrow Td+W\cdot\big((D\Psi^{i},D^{2}\Psi^{i}\cdot\dot{\mathrm{v}}^{i-1})+(D^{2}\Psi^{i}\cdot\dot{\mathrm{v}}^{i-1},D\Psi^{i})\big)$
   $Td\leftarrow Td+W\cdot(D^{2}\Psi^{i},D\Psi^{i}\cdot\dot{\mathrm{v}}^{i-1})$
   $Td\leftarrow Td+\overline{\mathrm{v}}^{T}D^{3}\Psi^{i}\cdot\dot{\mathrm{v}}^{i-1}$
   $W\leftarrow W\cdot(D\Psi^{i},D\Psi^{i})+\overline{\mathrm{v}}^{T}D^{2}\Psi^{i}$
   $\overline{\mathrm{v}}^{T}\leftarrow\overline{\mathrm{v}}^{T}D\Psi^{i}$
end for
Output: $y^{T}D^{3}F(x)\cdot d\leftarrow Td$, $\ y^{T}D^{2}F\leftarrow W$, $\ y^{T}DF\leftarrow\overline{\mathrm{v}}^{T}$
Algorithm 5 Archetype Reverse Hessian Directional Derivative (RevHedir )

Proof of Algorithm: Our induction hypothesis is that at the end of the $i=m$ iteration $Td=y^{T}D^{3}X^{m}\cdot\dot{\mathrm{v}}^{m-1}$. After the first iteration, $i=\ell$, paying attention to the initialization of the variables, we have that $Td=\overline{\mathrm{v}}^{T}D^{3}\Psi^{\ell}\cdot\dot{\mathrm{v}}^{\ell-1}=y^{T}D^{3}X^{\ell}\cdot\dot{\mathrm{v}}^{\ell-1}$. Now suppose the hypothesis is true for iterations up to $m+1$, so that at the beginning of the $i=m$ iteration $Td=y^{T}D^{3}X^{m+1}\cdot\dot{\mathrm{v}}^{m}$. To prove the hypothesis we need the following results: at the end of the $i=m$ iteration

\overline{\mathrm{v}}^{T}=y^{T}DX^{m}\quad\textrm{and}\quad W=y^{T}D^{2}X^{m}, \qquad (15)

both of which are demonstrated in the proof of Algorithm 3. Now we are equipped to examine $Td$ at the end of the $i=m$ iteration,

Td\leftarrow Td\cdot(D\Psi^{m},D\Psi^{m})+W\cdot\big((D\Psi^{m},D^{2}\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1})+(D^{2}\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1},D\Psi^{m})\big)
 \quad+W\cdot(D^{2}\Psi^{m},D\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1})+\overline{\mathrm{v}}^{T}D^{3}\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1},

using the induction hypothesis followed by property (13), we get $Td\cdot(D\Psi^{m},D\Psi^{m})=y^{T}D^{3}X^{m+1}\cdot\dot{\mathrm{v}}^{m}\cdot(D\Psi^{m},D\Psi^{m})=y^{T}D^{3}X^{m+1}\cdot\left(D\Psi^{m},D\Psi^{m},\dot{\mathrm{v}}^{m}\right)$, and from the algorithm $\dot{\mathrm{v}}^{m}=D\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1}$. Then using equations (15), applied at the end of the $i=m+1$ iteration (these being the values of $W$ and $\overline{\mathrm{v}}^{T}$ in force when the $Td$ updates of iteration $m$ are carried out, since $W$ and $\overline{\mathrm{v}}$ are only overwritten afterwards), to substitute $W$ and $\overline{\mathrm{v}}^{T}$, we arrive at

Td=y^{T}D^{3}X^{m+1}\cdot\left(D\Psi^{m},D\Psi^{m},D\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1}\right)
 \quad+y^{T}D^{2}X^{m+1}\cdot\big((D\Psi^{m},D^{2}\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1})+(D^{2}\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1},D\Psi^{m})\big)
 \quad+y^{T}D^{2}X^{m+1}\cdot(D^{2}\Psi^{m},D\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1})+\left(y^{T}DX^{m+1}\right)D^{3}\Psi^{m}\cdot\dot{\mathrm{v}}^{m-1}
 =y^{T}D^{3}X^{m}\cdot\dot{\mathrm{v}}^{m-1} \qquad[\mbox{using equation (14)}].

Finally, after iteration $i=1$, we have

Td=y^{T}D^{3}X^{1}\cdot\dot{\mathrm{v}}^{0}=y^{T}D^{3}F\cdot d. \qquad\blacksquare
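To fix ideas before moving on, here is a minimal numpy sketch of the archetype RevHedir recursion (Algorithm 5) on the same hypothetical two-map composition used in the earlier sketches; the maps and the direction $d$ are again illustrative assumptions, and the einsum calls spell out each tensor contraction.

import numpy as np

def Psi1(x):   return np.array([x[0] * x[1], np.sin(x[1]), x[0] + x[1]])
def DPsi1(x):  return np.array([[x[1], x[0]], [0.0, np.cos(x[1])], [1.0, 1.0]])
def D2Psi1(x):
    T = np.zeros((3, 2, 2)); T[0] = [[0, 1], [1, 0]]; T[1, 1, 1] = -np.sin(x[1]); return T
def D3Psi1(x):
    T = np.zeros((3, 2, 2, 2)); T[1, 1, 1, 1] = -np.cos(x[1]); return T
def DPsi2(u):  return np.array([[u[2], 1.0, u[0]]])
def D2Psi2(u):
    T = np.zeros((1, 3, 3)); T[0, 0, 2] = T[0, 2, 0] = 1.0; return T
def D3Psi2(u): return np.zeros((1, 3, 3, 3))

x, d, y = np.array([1.0, 2.0]), np.array([1.0, -1.0]), np.array([1.0])
args = [x, Psi1(x)]
derivs = [(DPsi1, D2Psi1, D3Psi1), (DPsi2, D2Psi2, D3Psi2)]

vdots = [d]                                    # forward sweep: vdots[i] = \dot{v}^i
for arg, (Dp, _, _) in zip(args, derivs):
    vdots.append(Dp(arg) @ vdots[-1])

vbar, W, Td = y.copy(), np.zeros((1, 1)), np.zeros((1, 1))
for i in [2, 1]:                               # reverse sweep
    Dp, D2p, D3p = derivs[i - 1]
    J, H, T3, vd = Dp(args[i - 1]), D2p(args[i - 1]), D3p(args[i - 1]), vdots[i - 1]
    Td = np.einsum('st,sj,tk->jk', Td, J, J)                 # Td <- Td·(DPsi, DPsi)
    Td += np.einsum('st,sj,tkp,p->jk', W, J, H, vd)          # + W·(DPsi, D2Psi·vdot)
    Td += np.einsum('st,sjp,p,tk->jk', W, H, vd, J)          # + W·(D2Psi·vdot, DPsi)
    Td += np.einsum('st,sjk,tp,p->jk', W, H, J, vd)          # + W·(D2Psi, DPsi·vdot)
    Td += np.einsum('s,sjkp,p->jk', vbar, T3, vd)            # + vbar^T D^3 Psi·vdot
    W = np.einsum('st,sj,tk->jk', W, J, J) + np.einsum('s,sjk->jk', vbar, H)
    vbar = vbar @ J

# For f(x) = x0^2 x1 + x0 x1^2 + sin(x1) and d = (1, -1) this gives
# D^3 f(x)·d = [[-2, 0], [0, 2 + cos(x1)]] and W = D^2 f(x).
print(Td); print(W)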

As is to be expected, in the computation of the tensor-vector product only 2-dimensional tensor arithmetic, i.e., matrix arithmetic, is used, and it is not necessary to form a 3-dimensional tensor. On the other hand, calculating the entire tensor $y^{T}D^{3}F$ does involve 3-dimensional arithmetic. The final archetype algorithm we present is a reverse method for calculating the entire third-order tensor $y^{T}D^{3}F(x)$. We want an expression for the derivative such that

y^{T}\frac{d}{dt}D^{2}F(x+td)=y^{T}D^{3}F(x+td)\cdot d \qquad (16)

for any dd. From equation (14), we see that dd is contracted with the last coordinate in every term except one. To account for this term, we need a switching tensor SS such that

y^{T}D^{2}\Psi^{2}\cdot(D^{2}\Psi^{1}\cdot d,D\Psi^{1})=y^{T}D^{2}\Psi^{2}\cdot(D^{2}\Psi^{1},D\Psi^{1})\cdot S\cdot d,

in other words, we define $S$ by

S\cdot(v,w,z)=(v,z,w)\quad\text{or}\quad S_{abcijk}v_{i}w_{j}z_{k}=v_{a}z_{b}w_{c}, \qquad (17)

for any vectors $v,w$ and $z$. This implies that the components of $S$ are $S_{abcijk}=\delta_{ai}\delta_{cj}\delta_{bk}$, where $\delta_{nm}=1$ if $n=m$ and $0$ otherwise. Then for $F=\Psi^{2}\circ\Psi^{1}$ we use equation (14) to reach

y^{T}D^{3}F\cdot d=\Big(y^{T}D^{3}\Psi^{2}\cdot(D\Psi^{1},D\Psi^{1},D\Psi^{1})+y^{T}D\Psi^{2}\cdot D^{3}\Psi^{1}
 \quad+y^{T}D^{2}\Psi^{2}\cdot\left((D\Psi^{1},D^{2}\Psi^{1})+(D^{2}\Psi^{1},D\Psi^{1})\cdot S+(D^{2}\Psi^{1},D\Psi^{1})\right)\Big)\cdot d. \qquad (18)

The above is true for all vectors $d$, thus we can remove $d$ from both sides to arrive at our desired expression for $y^{T}D^{3}F$. With this notation we have, as expected, $(y^{T}D^{3}F)_{ijk}=y^{T}D_{ijk}F$. We can now use this result to build a recurrence for $D^{3}X^{m}$, defined by $X^{m}=y^{T}\Psi^{\ell}\circ\cdots\circ\Psi^{m}$, working from $m=\ell$ backwards towards $m=1$ to calculate $y^{T}D^{3}F(x)$ in Algorithm 6.

initialization: $\overline{\mathrm{v}}=y$, $W=0\in\mathbb{R}^{m_{\ell}\times m_{\ell}}$, $T=0\in\mathbb{R}^{m_{\ell}\times m_{\ell}\times m_{\ell}}$
for $i=\ell,\ldots,1$ do
   $T\leftarrow T\cdot(D\Psi^{i},D\Psi^{i},D\Psi^{i})$
   $T\leftarrow T+W\cdot\big((D\Psi^{i},D^{2}\Psi^{i})+(D^{2}\Psi^{i},D\Psi^{i})\big)$
   $T\leftarrow T+W\cdot(D^{2}\Psi^{i},D\Psi^{i})\cdot S+\overline{\mathrm{v}}^{T}D^{3}\Psi^{i}$
   $W\leftarrow W\cdot(D\Psi^{i},D\Psi^{i})+\overline{\mathrm{v}}^{T}D^{2}\Psi^{i}$
   $\overline{\mathrm{v}}^{T}\leftarrow\overline{\mathrm{v}}^{T}D\Psi^{i}$
end for
Output: $y^{T}D^{3}F(x)\leftarrow T$, $\ y^{T}D^{2}F\leftarrow W$, $\ y^{T}DF\leftarrow\overline{\mathrm{v}}^{T}$
Algorithm 6 Archetype Reverse Third Order Derivative

Proof of Algorithm: the demonstration of this algorithm can be carried out in an analogous fashion to the proof of Algorithm 5.
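For completeness, the sketch below runs Algorithm 6 on the same hypothetical two-map example, this time carrying a full 3-dimensional array $T$; swapping the last two axes plays the role of the switching tensor $S$. The maps are illustrative assumptions only.

import numpy as np

def Psi1(x):   return np.array([x[0] * x[1], np.sin(x[1]), x[0] + x[1]])
def DPsi1(x):  return np.array([[x[1], x[0]], [0.0, np.cos(x[1])], [1.0, 1.0]])
def D2Psi1(x):
    T = np.zeros((3, 2, 2)); T[0] = [[0, 1], [1, 0]]; T[1, 1, 1] = -np.sin(x[1]); return T
def D3Psi1(x):
    T = np.zeros((3, 2, 2, 2)); T[1, 1, 1, 1] = -np.cos(x[1]); return T
def DPsi2(u):  return np.array([[u[2], 1.0, u[0]]])
def D2Psi2(u):
    T = np.zeros((1, 3, 3)); T[0, 0, 2] = T[0, 2, 0] = 1.0; return T
def D3Psi2(u): return np.zeros((1, 3, 3, 3))

x, y = np.array([1.0, 2.0]), np.array([1.0])
args = [x, Psi1(x)]
derivs = [(DPsi1, D2Psi1, D3Psi1), (DPsi2, D2Psi2, D3Psi2)]

vbar, W, T = y.copy(), np.zeros((1, 1)), np.zeros((1, 1, 1))
for i in [2, 1]:
    Dp, D2p, D3p = derivs[i - 1]
    J, H, T3 = Dp(args[i - 1]), D2p(args[i - 1]), D3p(args[i - 1])
    T = np.einsum('abc,aj,bk,cl->jkl', T, J, J, J)        # T <- T·(DPsi, DPsi, DPsi)
    T += np.einsum('ab,aj,bkl->jkl', W, J, H)             # + W·(DPsi, D2Psi)
    A = np.einsum('ab,ajk,bl->jkl', W, H, J)              # W·(D2Psi, DPsi)
    T += A + A.transpose(0, 2, 1)                         # + W·(D2Psi, DPsi) + W·(D2Psi, DPsi)·S
    T += np.einsum('s,sjkl->jkl', vbar, T3)               # + vbar^T D^3 Psi
    W = np.einsum('ab,aj,bk->jk', W, J, J) + np.einsum('s,sjk->jk', vbar, H)
    vbar = vbar @ J

# For f(x) = x0^2 x1 + x0 x1^2 + sin(x1), the nonzero entries of D^3 f are the
# permutations of (0,0,1) and (0,1,1), all equal to 2, and D_{111}f = -cos(x1).
print(T)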

This notation, together with a closed expression for high-order derivatives of a composition of two maps, see [5], can be used to design algorithms of even higher-orders. Though this would require the presentation of a rather cumbersome notation. What we can extract from this generic formula in [5], is that the number of terms that need to be calculated grows combinatorially in the order of the derivative, thus posing a lasting computational challenge.

3 Implementing through State Transformations

When coding a function, the user would not commonly write a composition of maps of the form used in the previous section, see equation (1). Instead, users implement functions in a number of different ways. Automatic Differentiation (AD) packages standardize these hand-written functions, through compiler tools and operator overloading, into an evaluation that fits the format of Algorithm 7. As an example, consider the function $f(x,y,z)=xy\sin(z)$ and its evaluation for a given $(x,y,z)$ through the following list of commands

v_{-2}=x
v_{-1}=y
v_{0}=z
v_{1}=v_{-2}v_{-1}
v_{2}=\sin(v_{0})
v_{3}=v_{2}v_{1}.

By naming the functions $\phi_{1}(v_{-2},v_{-1}):=v_{-2}v_{-1}$, $\phi_{2}(v_{0}):=\sin(v_{0})$ and $\phi_{3}(v_{2},v_{1}):=v_{2}v_{1}$, this evaluation fits the format of Algorithm 7.

In general, each $\phi_{i}$ is an elemental function such as addition, multiplication, $\sin(\cdot)$, $\exp(\cdot)$, etc., which together with their derivatives are already coded in the AD package. The algorithm first copies the independent variables $x_{i}$ into internal intermediate variables $v_{i-n}$, for $i=1,\ldots,n$. Following convention, we use negative indices for elements that relate to independent variables. For consistency, we will shift all indices of vectors and matrices by $-n$ from here on, e.g., the components of $x\in\mathbb{R}^{n}$ are $x_{i-n}$ for $i=1,\ldots,n$.

The next step in Algorithm 7 calculates the value $v_{1}$, which depends only on the intermediate variables $v_{i-n}$, for $i=1,\ldots,n$. In turn, the value $v_{2}$ may now depend on $v_{i-n}$, for $i=1,\ldots,n+1$, then $v_{3}$ may depend on $v_{i-n}$, for $i=1,\ldots,n+2$, and so on for all $\ell$ intermediate variables. Each $v_{i}$ is calculated using only one elemental function $\phi_{i}$. There is a dependency amongst the intermediate variables, for $\phi_{i}$ is evaluated at previously calculated intermediate variables. We say that $j$ is a predecessor of $i$ if $v_{j}$ is a necessary argument of $\phi_{i}$. Let $P(i)$ be the set of predecessors of $i$ and $v_{P(i)}$ the vector of predecessors, thus $\phi_{i}(v_{P(i)})=v_{i}$ and necessarily $j<i$ for any $j\in P(i)$. Analogously, $S(i)$ is the set of successors of $i$.

Input: $v_{i-n}=x_{i}$, for $i=1,\ldots,n$
for $i=1,\ldots,\ell$ do
   $v_{i}\leftarrow\phi_{i}(v_{P(i)})$
end for
Output: $f(x)\leftarrow v_{\ell}$
Algorithm 7 Function evaluation
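The following Python sketch mimics Algorithm 7 on this example; the tape layout (predecessor lists plus elemental callables) is an illustrative assumption and not the representation used by an AD package such as ADOL-C.

import math

def evaluate(tape, x):
    n = len(x)
    v = dict(zip(range(1 - n, 1), x))       # v_{1-n}, ..., v_0 hold the independents
    for i, (P, phi) in enumerate(tape, start=1):
        v[i] = phi(*[v[j] for j in P])      # v_i <- phi_i(v_{P(i)})
    return v                                # v[len(tape)] is f(x)

# The trace of f(x, y, z) = x*y*sin(z) from the text:
tape = [([-2, -1], lambda a, b: a * b),     # phi_1 = v_{-2} v_{-1}
        ([0],      math.sin),               # phi_2 = sin(v_0)
        ([2, 1],   lambda a, b: a * b)]     # phi_3 = v_2 v_1

v = evaluate(tape, [1.0, 2.0, 3.0])
print(v[3], 1.0 * 2.0 * math.sin(3.0))      # both print f(1, 2, 3)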

We can bridge this algorithmic description of a function with that of compositions of maps (1) using Griewank and Walther's [11] state transformations

\Phi^{i}\colon\mathbb{R}^{n+\ell}\to\mathbb{R}^{n+\ell},\qquad v\mapsto(v_{1-n},\ldots,v_{i-1},\phi_{i}(v_{P(i)}),v_{i+1},\ldots,v_{\ell})^{T}, \qquad (19)

for $i=1,\ldots,\ell$. In components,

\Phi^{i}_{r}(v)=v_{r}(1-\delta_{ri})+\delta_{ri}\phi_{i}(v_{P(i)}), \qquad (20)

where here, and in the remainder of this article, we abandon Einstein’s notation of repeated indexes, because having the limits of summation is useful for implementing. With this, the function f(x)f(x) defined by Algorithm 7 can be written as

f(x)=e_{\ell+n}^{T}\,\Phi^{\ell}\circ\Phi^{\ell-1}\circ\cdots\circ\Phi^{1}(P^{T}x), \qquad (21)

where $e_{\ell+n}$ is the $(\ell+n)$th canonical vector and $P$ is the immersion matrix $[I\;\;0]$, with $I\in\mathbb{R}^{n\times n}$ and $0\in\mathbb{R}^{n\times\ell}$. The Jacobian of the $i$th state transformation $\Phi^{i}$, in coordinates, is simply

D_{j}\Phi^{i}_{r}(v)=\delta_{rj}(1-\delta_{ri})+\delta_{ri}\frac{\partial\phi_{i}}{\partial v_{j}}(v_{P(i)}). \qquad (22)

With the state-transforms and the structure of their derivatives, we look again at a few of the archetype algorithms in Section 2 and build a corresponding implementable version. Our final goal is to implement the RevHedir algorithm 5, for which we need the implementation of the reverse gradient and Hessian algorithms.

3.1 First-Order

To design an algorithm to calculate the gradient of $f(x)$, given in equation (21), we turn to the Archetype Reverse Gradient Algorithm 1 and identify the $\Phi^{i}$'s in place of the $\Psi^{i}$'s (specifically, $P^{T}$ plays the role of $\Psi^{1}$ and $\Phi^{i}$ that of $\Psi^{i+1}$). Using (22) we find that $\overline{\mathrm{v}}^{T}\leftarrow\overline{\mathrm{v}}^{T}D\Phi^{i}$ becomes

\bar{v}_{j}\leftarrow\bar{v}_{j}(1-\delta_{ij})+\bar{v}_{i}\frac{\partial\phi_{i}}{\partial v_{j}}(v_{P(i)})\quad\forall j\in\{1-n,\ldots,\ell\}, \qquad (23)

where $\bar{v}_{i}$ is the $i$-th component of $\overline{\mathrm{v}}$, also known as the $i$-th adjoint in the AD literature. Note that if $j\neq i$ in the above, then this step will only alter $\bar{v}_{j}$ if $j\in P(i)$. Otherwise, if $j=i$, then this update is equivalent to setting $\bar{v}_{i}=0$. We can disregard this update, as $\bar{v}_{i}$ will not be used in subsequent iterations. This is because $i\not\in P(m)$ for $m\leq i$. With these considerations, we arrive at Algorithm 8, the component-wise version of Algorithm 1. Note how we have used the abbreviated operation $a\mathrel{+}=b$ to mean $a\leftarrow a+b$. Furthermore, the last step $\overline{\mathrm{v}}^{T}\leftarrow\overline{\mathrm{v}}^{T}P^{T}$ selects the adjoints corresponding to independent variables.

An abuse of notation that we will employ throughout, is that whenever we refer to v¯i\bar{v}_{i} in the body of the text, we are referring to the value of v¯i\bar{v}_{i} after iteration ii of the Reverse Gradient algorithm has finished.

initialization: $\overline{\mathrm{v}}=e_{\ell+n}\in\mathbb{R}^{\ell+n}$
for $i=\ell,\ldots,1$ do
   for $j\in P(i)$ do  $\bar{v}_{j}\mathrel{+}=\bar{v}_{i}\,\partial\phi_{i}(v_{P(i)})/\partial v_{j}$
end for
Output: $\nabla f\leftarrow\overline{\mathrm{v}}^{T}P^{T}=(\bar{v}_{1-n},\ldots,\bar{v}_{0})^{T}$
Algorithm 8 Reverse Gradient.
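A minimal sketch of Algorithm 8 on the trace of $f(x,y,z)=xy\sin(z)$ given earlier; the dictionary holding the partials $\partial\phi_{i}/\partial v_{j}$ is an illustrative stand-in for what an AD tape would record.

import math

xv, yv, zv = 1.0, 2.0, 3.0
v = {-2: xv, -1: yv, 0: zv}
v[1] = v[-2] * v[-1]
v[2] = math.sin(v[0])
v[3] = v[2] * v[1]

# partials[i][j] = d phi_i / d v_j evaluated at v_{P(i)}
partials = {1: {-2: v[-1], -1: v[-2]},
            2: {0: math.cos(v[0])},
            3: {2: v[1], 1: v[2]}}

l, n = 3, 3
vbar = {j: 0.0 for j in range(1 - n, l + 1)}
vbar[l] = 1.0                                   # initialization: vbar = e_{l+n}
for i in range(l, 0, -1):                       # reverse sweep
    for j, dphi in partials[i].items():         # for j in P(i)
        vbar[j] += vbar[i] * dphi               # vbar_j += vbar_i * dphi_i/dv_j

grad = [vbar[j] for j in range(1 - n, 1)]
print(grad)   # [y sin z, x sin z, x y cos z], the gradient of x*y*sin(z)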

Similarly, by using (22) again, each iteration $i$ of the Archetype 1st Order Directional Derivative Algorithm 2 can be reduced to the coordinate form

\dot{v}_{r}\leftarrow(1-\delta_{ri})\dot{v}_{r}+\delta_{ri}\sum_{j\in P(i)}\dot{v}_{j}\frac{\partial\phi_{i}}{\partial v_{j}}(v_{P(i)}),

where $\dot{v}_{j}$ is the $j$-th component of $\dot{\mathrm{v}}$. If $r\neq i$ in the above, then $\dot{v}_{r}$ remains unchanged, while if $r=i$ then we have

\dot{v}_{i}\leftarrow\sum_{j\in P(i)}\dot{v}_{j}\frac{\partial\phi_{i}}{\partial v_{j}}(v_{P(i)}). \qquad (24)

We implement this update by sweeping through the successors of each intermediate variable and incrementing a single term to the sum on the right-hand side of (24), see Algorithm 9. It is crucial to observe that the ii-th component of v˙\dot{\mathrm{v}} will remain unaltered after the ii-th iteration.

Again, when we refer to v˙i\dot{v}_{i} in the body of the text from this point on, we are referring to the value of v˙i\dot{v}_{i} after iteration ii has finished in Algorithm 9.

initialization: $\dot{\mathrm{v}}=P^{T}d\in\mathbb{R}^{\ell+n}$
for $j=1,\ldots,\ell$ do
   for $i\in S(j)$ do  $\dot{v}_{i}\mathrel{+}=\dot{v}_{j}\,\partial\phi_{i}(v_{P(i)})/\partial v_{j}$
end for
Output: $DF\cdot d=(\dot{v}_{1-n},\ldots,\dot{v}_{0})^{T}$
Algorithm 9 1st Order Directional Derivative.

3.2 Second-Order

Just by substituting the $\Phi^{i}$'s for the $\Psi^{i}$'s in the Archetype Reverse Hessian, Algorithm 3, we can quickly reach a very efficient component-wise algorithm for calculating the Hessian of $f(x)$, given in equation (21). This component-wise algorithm is also known as edge_pushing, and has already been detailed in Gower and Mello [8]. Here we use a different notation which leads to a more concise presentation. Furthermore, the results below form part of the calculations needed for the third-order methods.

There are two steps of Algorithm 3 we must investigate, for we already know how to update v¯\overline{\mathrm{v}} from the above section. For these two steps, we need to substitute

D_{jk}\Phi^{i}_{r}(v)=\frac{\partial^{2}\Phi^{i}_{r}}{\partial v_{j}\partial v_{k}}(v)=\delta_{ri}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{k}}(v_{P(i)}), \qquad (25)

and $D\Phi^{i}$, equation (22), into $W\leftarrow W\cdot(D\Phi^{i},D\Phi^{i})+\overline{\mathrm{v}}^{T}D^{2}\Phi^{i}$, resulting in

W_{jk}\leftarrow\sum_{s,t=1-n}^{\ell}\frac{\partial\Phi^{i}_{s}}{\partial v_{j}}W_{st}\frac{\partial\Phi^{i}_{t}}{\partial v_{k}}+\sum_{s=1-n}^{\ell}\bar{v}_{s}\frac{\partial^{2}\Phi^{i}_{s}}{\partial v_{j}\partial v_{k}}
 =(1-\delta_{ji})W_{jk}(1-\delta_{ki})+\frac{\partial\phi_{i}}{\partial v_{j}}W_{ii}\frac{\partial\phi_{i}}{\partial v_{k}}
 \quad+\frac{\partial\phi_{i}}{\partial v_{j}}W_{ik}(1-\delta_{ki})+(1-\delta_{ji})W_{ji}\frac{\partial\phi_{i}}{\partial v_{k}} \qquad (26)
 \quad+\bar{v}_{i}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{k}}. \qquad (27)

Before translating these updates into an algorithm, we need a crucial result: at the beginning of iteration $i-1$, the element $W_{jk}$ is zero if $j\geq i$, for all $k$. We show this by induction on the iterations of Algorithm 3. Note that $W$ is initially set to zero, so the first step of (26) and (27) reduces to

W_{jk}\leftarrow\bar{v}_{\ell}\frac{\partial^{2}\phi_{\ell}}{\partial v_{j}\partial v_{k}},

which is zero for $j=\ell$ because $\ell\notin P(\ell)$. Now we assume the induction hypothesis holds at the beginning of iteration $i$, so that $W_{jk}=0$ for $j\geq i+1$. Letting $j\geq i+1$ and executing iteration $i$, we get from the updates (26) and (27)

W_{jk}\leftarrow W_{jk}+W_{ji}\frac{\partial\phi_{i}}{\partial v_{k}},

because $j\notin P(i)$. Together with our hypothesis that $W_{jk}=0$ and $W_{ji}=0$, we see that $W_{jk}$ remains zero. While if $j=i$, then (26) and (27) set $W_{jk}\leftarrow 0$, because $i\not\in P(i)$. Hence at the beginning of iteration $i-1$ we have that $W_{jk}=0$ for $j\geq i$, and this completes the induction.

Furthermore, $W$ is symmetric at the beginning of iteration $i$ because it is initialized to $W=0$ and each iteration preserves symmetry. Consequently, the only nonzero components $W_{jk}$ appear when both $j,k\leq i$. We make use of this symmetry to avoid unnecessary calculations on symmetric counterparts. Let $W_{\{jk\}}$ denote both $W_{jk}$ and $W_{kj}$. To accommodate this symmetric representation, we perform (27) once for each pair $\{j,k\}$, as opposed to once for every coordinate pair, see the Creating step in Algorithm 10.

The calculations in (26) are done by sweeping through the nonzero elements of $W$ and then updating their contribution to the overall calculation.

Thus if $W_{\{ii\}}\neq 0$, looking at (26), this triggers the following increment:

W_{\{jk\}}\mathrel{+}=\frac{\partial\phi_{i}}{\partial v_{j}}W_{\{ii\}}\frac{\partial\phi_{i}}{\partial v_{k}}.

Similarly to the Creating step, the above should only be carried out once for every pair $\{j,k\}$. Meanwhile each nonzero off-diagonal pair $W_{ik}$ and $W_{ki}$, for $k<i$, according to (26), has the effect of

W_{jk}\mathrel{+}=\frac{\partial\phi_{i}}{\partial v_{j}}W_{ik}, \qquad (28)
W_{kj}\mathrel{+}=\frac{\partial\phi_{i}}{\partial v_{j}}W_{ki}. \qquad (29)

It is redundant to update both symmetric elements, so we replace both with just

W_{\{jk\}}\mathrel{+}=\frac{\partial\phi_{i}}{\partial v_{j}}W_{\{ik\}}.

Though we must take care when $j=k$, for according to (28) and (29), the two symmetric updates "double up" on the diagonal:

W_{\{jj\}}\mathrel{+}=2\frac{\partial\phi_{i}}{\partial v_{j}}W_{\{ij\}}. \qquad (30)

The operation (26) has been implemented with these considerations in the Pushing step of Algorithm 10. The names of the steps, Creating and Pushing, allude to a graph interpretation [8].

Input: Function evaluation (Algorithm 7), $x\in\mathbb{R}^{n}$.
initialization: $\bar{v}=e_{\ell+n}\in\mathbb{R}^{\ell+n}$, $W=0\in\mathbb{R}^{(\ell+n)\times(\ell+n)}$
for $i=\ell,\ldots,1$ do
   Pushing
   foreach $k\leq i$ such that $W_{\{ki\}}\neq 0$ do
      if $k<i$ then
         foreach $j\in P(i)$ do
            if $j=k$ then
               $W_{\{jj\}}\mathrel{+}=2D_{j}\phi_{i}W_{\{ji\}}$
            else
               $W_{\{jk\}}\mathrel{+}=D_{j}\phi_{i}W_{\{ki\}}$
            end if
         end foreach
      else ($k=i$)
         foreach unordered pair $\{j,p\}\subset P(i)$ do
            $W_{\{jp\}}\mathrel{+}=D_{p}\phi_{i}D_{j}\phi_{i}W_{\{ii\}}$
         end foreach
      end if
   end foreach
   Creating
   foreach unordered pair $\{j,p\}\subset P(i)$ do
      $W_{\{jp\}}\mathrel{+}=\bar{v}_{i}D_{pj}\phi_{i}$
   end foreach
   Adjoint
   foreach $j\in P(i)$ do
      $\bar{v}_{j}\mathrel{+}=\bar{v}_{i}D_{j}\phi_{i}$
   end foreach
end for
Output: $D^{2}f=\left(W_{jk}\right)_{1-n\leq j,k\leq 0}$
Algorithm 10 component-wise form of edge_pushing.
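The sparse symmetric bookkeeping is what makes Algorithm 10 efficient, but the underlying updates (26)-(27) are easy to check with dense matrices. The sketch below does exactly that on the trace of $f(x,y,z)=xy\sin(z)$: it forms each $D\Phi^{i}$ and $\bar{v}_{i}D^{2}\Phi^{i}$ explicitly, so it should be read as a dense illustration of the state-transformation formulas rather than as the component-wise algorithm itself; the tape layout is an illustrative assumption.

import numpy as np

n, l = 3, 3
xv, yv, zv = 1.0, 2.0, 3.0
v = np.zeros(n + l)
v[0:3] = xv, yv, zv                   # positions 0, 1, 2 hold v_{-2}, v_{-1}, v_0
v[3] = v[0] * v[1]                    # phi_1 = v_{-2} v_{-1}
v[4] = np.sin(v[2])                   # phi_2 = sin(v_0)
v[5] = v[4] * v[3]                    # phi_3 = v_2 v_1

# First and second partial derivatives of each phi_i with respect to the v_j.
grads = {3: {0: v[1], 1: v[0]},
         4: {2: np.cos(v[2])},
         5: {3: v[4], 4: v[3]}}
hesss = {3: {(0, 1): 1.0},
         4: {(2, 2): -np.sin(v[2])},
         5: {(3, 4): 1.0}}

vbar = np.zeros(n + l); vbar[n + l - 1] = 1.0
W = np.zeros((n + l, n + l))

for i in range(n + l - 1, n - 1, -1):          # reverse sweep over phi_3, phi_2, phi_1
    J = np.eye(n + l); J[i, i] = 0.0           # D Phi^i, equation (22)
    for j, dj in grads[i].items():
        J[i, j] = dj
    H = np.zeros((n + l, n + l))               # vbar_i D^2 Phi^i, equation (25)
    for (j, k), djk in hesss[i].items():
        H[j, k] += vbar[i] * djk
        if j != k:
            H[k, j] += vbar[i] * djk
    W = J.T @ W @ J + H                        # updates (26) and (27)
    vbar = J.T @ vbar                          # adjoint update (23)

# Hessian of f: [[0, sin z, y cos z], [sin z, 0, x cos z], [y cos z, x cos z, -x y sin z]]
print(W[:n, :n])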

3.3 Third-Order

The final algorithm that we translate to implementation is the Hessian directional derivative, the RevHedir Algorithm 5. This implementation has an immediate application in the Halley-Chebyshev class of third-order optimization methods, for at each step of these algorithms, such a directional derivative is required. For this reason we pay special attention to its implementation.

Identifying each Ψi\Psi^{i} with Φi\Phi^{i}, we address each of the five operations on the matrix TdTd in Algorithm 5 separately, pointing out how each one preserves the symmetry of TdTd and how to perform the component-wise calculations.

First, given that $Td$ is symmetric, the 2D pushing update

Td\leftarrow Td\cdot\left(D\Phi^{i},D\Phi^{i}\right), \qquad (31)

is exactly as detailed in (26) and the surrounding comments. While 3D creating,

Td\leftarrow Td+\overline{\mathrm{v}}^{T}D^{3}\Phi^{i}\cdot\dot{\mathrm{v}}^{i-1},

can be written in coordinate form as

Td_{jk}\leftarrow Td_{jk}+\sum_{r,p=1-n}^{\ell}\bar{v}_{r}D_{jkp}\Phi^{i}_{r}\,\dot{\mathrm{v}}_{p}^{i-1}
 =Td_{jk}+\sum_{p\in P(i)}\bar{v}_{i}\frac{\partial^{3}\phi_{i}}{\partial v_{j}\partial v_{k}\partial v_{p}}\dot{v}_{p}, \qquad (32)

where $\dot{v}_{p}$ is the value given to the $p$-th component of $\dot{\mathrm{v}}$ after iteration $p$ of Algorithm 9. Note that $\dot{\mathrm{v}}_{p}^{i-1}=\dot{v}_{p}$ for $p\in P(i)$, because $p\leq i-1$, so by iteration $i-1$ of Algorithm 9 the calculation of $\dot{v}_{p}$ has already been finalized. Another trick we employ is that, since the above calculation is performed on iteration $i$, we know that $\bar{v}_{i}$ has already been calculated. These substitutions involving the $\bar{v}_{i}$'s and $\dot{v}_{i}$'s will be carried out in the rest of the text with little or no comment. The update (32) also preserves the symmetry of $Td$.

To examine the update,

Td\leftarrow Td+W\cdot\left(D\Phi^{i},D^{2}\Phi^{i}\cdot\dot{\mathrm{v}}^{i-1}\right), \qquad (33)

we use (22) and (25) to obtain the coordinate form

Td_{jk}\leftarrow Td_{jk}+\sum_{r,s=1-n}^{\ell}\sum_{p\in P(i)}W_{rs}\left(\delta_{rj}(1-\delta_{ri})+\delta_{ri}\frac{\partial\phi_{i}}{\partial v_{j}}\right)\delta_{si}\frac{\partial^{2}\phi_{i}}{\partial v_{k}\partial v_{p}}\dot{v}_{p}
 =Td_{jk}+W_{ji}(1-\delta_{ji})\sum_{p\in P(i)}\frac{\partial^{2}\phi_{i}}{\partial v_{k}\partial v_{p}}\dot{v}_{p}+W_{ii}\frac{\partial\phi_{i}}{\partial v_{j}}\sum_{p\in P(i)}\frac{\partial^{2}\phi_{i}}{\partial v_{k}\partial v_{p}}\dot{v}_{p}. \qquad (34)

Upon inspection, the update

Td\leftarrow Td+W\cdot\left(D^{2}\Phi^{i}\cdot\dot{\mathrm{v}}^{i-1},D\Phi^{i}\right)

is the transpose of (34), due to the symmetry of $W$. So it can be written in coordinate form as

Td_{jk}\leftarrow Td_{jk}+W_{ik}(1-\delta_{ki})\sum_{p\in P(i)}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{p}}\dot{v}_{p}+W_{ii}\frac{\partial\phi_{i}}{\partial v_{k}}\sum_{p\in P(i)}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{p}}\dot{v}_{p}. \qquad (35)

Thus update (35) together with (34) is equivalent to summing a symmetric matrix to TdTd, so the symmetry of TdTd is still preserved.

Last we translate

Td\leftarrow Td+W\cdot\left(D^{2}\Phi^{i},D\Phi^{i}\cdot\dot{\mathrm{v}}^{i-1}\right), \qquad (36)

to its coordinate form

Td_{jk}\leftarrow Td_{jk}+\sum_{r,s=1-n}^{\ell}\sum_{p\in P(i)}W_{rs}\,\delta_{ri}D_{jk}\Phi^{i}_{r}\left(\delta_{sp}(1-\delta_{si})+\delta_{si}\frac{\partial\phi_{i}}{\partial v_{p}}\right)\dot{v}_{p}
 =Td_{jk}+\sum_{p\in P(i)}W_{ip}(1-\delta_{pi})\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{k}}\dot{v}_{p}+W_{ii}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{k}}\sum_{p\in P(i)}D_{p}\phi_{i}\,\dot{v}_{p}. \qquad (37)

No change is effected by interchanging the indices $j$ and $k$ on the right-hand side of (37), so once again $Td$ remains symmetric. For convenience of computing, we group the updates (34), (35) and (37) into a set of updates called 2D Connecting, the name indicating that these updates "connect" objects that contain second-order derivative information.

More than just being symmetric, through closer inspection of these operations we see that the sparsity structure of $Td$ is contained in that of $W$. This remains true even after execution, at which point $Td=D^{3}f(x)\cdot d$ and $W=D^{2}f(x)$, where, for each $j,k\in\{1-n,\ldots,0\}$, we have

D_{jk}f(x)=0\implies\sum_{p=1-n}^{0}D_{jkp}f(x)\,d_{p}=0.

This fact should be exploited when implementing the method, in that the data structure of $Td$ should imitate that of $W$.

3.3.1 Implementing Third-Order Directional Derivative

The matrices TdTd and WW are symmetric, and based on the assumption that they will be sparse, we will represent them using a symmetric sparse data structure. Thus we now identify each pair (Wjk,Wkj)(W_{jk},W_{kj}) and (Tdjk,Tdkj)(Td_{jk},Td_{kj}) with the element W{jk}W_{\{jk\}} and Td{jk}Td_{\{jk\}}, respectively. Much like was done with edge_pushing, Algorithm 10, we will organize the computations by sweeping through all nonzero elements of Td{ik}Td_{\{ik\}} and W{ik}W_{\{ik\}} and then updating their contribution to the overall calculation.

We must take care when updating our symmetric representation of TdTd, both for the 2D pushing update (31) and for the redundant symmetric counterparts (34) and (35) which “double-up” on the diagonal, much like in the Pushing operations of Algorithm 10.

Each of the operations (34), (35) and (37) depends on a diagonal element $W_{\{ii\}}$ and an off-diagonal element $W_{\{ik\}}$ of $W$, for $k\neq i$. Grouping together all terms that involve $W_{\{ii\}}$, we get the resulting update

Td_{\{jk\}}\mathrel{+}=W_{\{ii\}}\sum_{p\in P(i)}\dot{v}_{p}\left(\frac{\partial\phi_{i}}{\partial v_{j}}\frac{\partial^{2}\phi_{i}}{\partial v_{k}\partial v_{p}}+\frac{\partial\phi_{i}}{\partial v_{k}}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{p}}+\frac{\partial\phi_{i}}{\partial v_{p}}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{k}}\right). \qquad (38)

By appropriately renaming the indices in (34), (35) and (37), each nonzero off-diagonal element $W_{\{ik\}}$ gives the updates (39), (40) and (41), respectively:

Td_{jk}\mathrel{+}=\sum_{p\in P(i)}\dot{v}_{p}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{p}}W_{ik},\quad\forall j\in P(i), \qquad (39)
Td_{kj}\mathrel{+}=\sum_{p\in P(i)}\dot{v}_{p}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{p}}W_{ki},\quad\forall j\in P(i), \qquad (40)
Td_{jp}\mathrel{+}=\dot{v}_{k}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{p}}W_{ik},\quad\forall j,p\in P(i). \qquad (41)

Note that (39) and (40) are symmetric updates, and when $j=k$ these two operations "double up", resulting in the update

Td_{jj}\mathrel{+}=2\sum_{p\in P(i)}\dot{v}_{p}\frac{\partial^{2}\phi_{i}}{\partial v_{j}\partial v_{p}}W_{ij}.

Passing to our symmetric notation, we dispense with (40) and account for this doubling effect in Algorithm 11. Finally we can eliminate redundant symmetric calculations performed in (41) by only performing this operation for each pair {j,p}\{j,p\}. All these considerations relating to 2D connecting have been factored into our implementation of the RevHedir Algorithm 11.

Performing 3D Creating (32) using this symmetric representation is simply a matter of not repeating the obvious symmetric counterpart, but instead performing these operations on $Td_{\{jk\}}$ once for each appropriate pair $\{j,k\}$, see 3D Creating in Algorithm 11.

Input: Function evaluation (Algorithm 7), $x\in\mathbb{R}^{n}$.
Initialization: $\bar{v}_{1-n}=\cdots=\bar{v}_{\ell-1}=0$, $\bar{v}_{\ell}=1$, $W_{jk}=0$, $Td_{\{jk\}}=0$, for $j<k\in\{1-n,\ldots,\ell\}$
Calculate the first-order directional derivative $\dot{v}$ using Algorithm 9
for $i=\ell,\ldots,1$ do
   2D Pushing of $Td$, see Pushing in Algorithm 10
   2D Connecting
   foreach $p\in P(i)$, $\{j,k\}\subset P(i)$ do
      $Td_{\{jk\}}\mathrel{+}=W_{\{ii\}}\dot{v}_{p}\left(D_{j}\phi_{i}D_{kp}\phi_{i}+D_{k}\phi_{i}D_{jp}\phi_{i}+D_{p}\phi_{i}D_{jk}\phi_{i}\right)$
   end foreach
   foreach $k<i$ such that $W_{\{ik\}}\neq 0$ do
      foreach $(j,p)\in P(i)^{2}$ do
         if $j=k$ then
            $Td_{\{kk\}}\mathrel{+}=2W_{\{ik\}}\dot{v}_{p}D_{jp}\phi_{i}$
         end if
         if $j\neq k$ then
            $Td_{\{jk\}}\mathrel{+}=W_{\{ik\}}\dot{v}_{p}D_{jp}\phi_{i}$
         end if
         if $j\geq p$ then
            $Td_{\{jp\}}\mathrel{+}=W_{\{ik\}}\dot{v}_{k}D_{jp}\phi_{i}$
         end if
      end foreach
   end foreach
   3D Creating
   foreach $p\in P(i)$, $\{j,k\}\subset P(i)$ do
      $Td_{\{jk\}}\mathrel{+}=\bar{v}_{i}D_{jkp}\phi_{i}\dot{v}_{p}$
   end foreach
   Pushing and Creating applied to $W$, see Algorithm 10
   Adjoint iteration applied to $\bar{v}$, see Algorithm 8
end for
Output: $(D^{3}f(x)\cdot d)_{jk}=Td_{\{jk\}}$, $D^{2}f(x)_{jk}=W_{\{jk\}}$, for each $j\leq k\in\{1-n,\ldots,0\}$.
                
Algorithm 11 component-wise form of RevHedir.

4 Numerical experiment

We have implemented the RevHedir Algorithm 11 as an additional driver of ADOL-C, a well-established automatic differentiation library coded in C and C++ [9]. We used version ADOL-C-2.4.0, the most recent available as of May 28th, 2013. The tests were carried out on a personal laptop with a 1.70GHz dual-core Intel Core i5-3317U processor, 4GB of RAM and the Ubuntu 13.0 operating system.

For those interested in replicating our implementation, we used a sparse undirected weighted graph data structure to represent the matrices WW and TdTd. The data structure is an array of weighted neighbourhood sets, one for each node, where each neighbourhood set is a dynamic array that resizes when needed. Each neighbourhood set is maintained in order and the method used to insert or increment the weight of an edge is built around a binary search.
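The following is a small Python sketch of the data structure just described: an array of sorted, weighted neighbourhood lists with binary-search insertion. It mirrors the description above, not the actual C++ code used in the experiments.

from bisect import bisect_left

class SymmetricSparse:
    def __init__(self, num_nodes):
        self.nbrs = [[] for _ in range(num_nodes)]      # sorted neighbour indices
        self.wts = [[] for _ in range(num_nodes)]       # matching edge weights

    def _bump(self, a, b, w):
        pos = bisect_left(self.nbrs[a], b)
        if pos < len(self.nbrs[a]) and self.nbrs[a][pos] == b:
            self.wts[a][pos] += w                       # edge exists: increment
        else:
            self.nbrs[a].insert(pos, b)                 # new edge: insert in order
            self.wts[a].insert(pos, w)

    def add(self, j, k, w):                             # W_{jk} = W_{kj} += w
        self._bump(j, k, w)
        if j != k:
            self._bump(k, j, w)

    def get(self, j, k):
        pos = bisect_left(self.nbrs[j], k)
        return self.wts[j][pos] if pos < len(self.nbrs[j]) and self.nbrs[j][pos] == k else 0.0

W = SymmetricSparse(4)
W.add(0, 2, 1.5); W.add(2, 2, -3.0); W.add(0, 2, 0.5)
print(W.get(2, 0), W.get(2, 2), W.get(1, 3))            # 2.0 -3.0 0.0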

We have hand-picked sixteen problems from the CUTE collection [2], augmlagn from [15], and toiqmerg (Toint Quadratic Merging problem) and chainros_trigexp (Chained Rosenbrock function with trigonometric and exponential constraints) from [17] for the experiments. We have also created the function

\mbox{heavey\_band}(x,band)=\sum_{i=1}^{n-band}\sin\left(\sum_{j=1}^{band}x_{i+j}\right).

For our experiments, we tested heavey_band($x$, $20$). The problems were selected based on the sparsity pattern of $D^{3}f(x)\cdot d$, dimension scalability and sparsity. Our goal was to cover a variety of patterns, to easily change the dimension of the function, and to work with sparse matrices.
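For reference, here is a direct Python transcription of heavey_band, assuming 0-based indexing of $x$; it is included only to make the summation limits concrete.

import math

def heavey_band(x, band):
    return sum(math.sin(sum(x[i + j] for j in range(1, band + 1)))
               for i in range(len(x) - band))

print(heavey_band([0.1 * k for k in range(10)], 3))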

In Table 1, the "Pattern" column indicates the type of sparsity pattern: bandwidth of value $x$ (B $x$), where the bandwidth of a matrix $M=(m_{ij})$ is the maximum value of $2|i-j|+1$ such that $m_{ij}\neq 0$; arrow; frame; number of diagonals (D $x$); or irregular pattern. The "nnz/n" column gives the number of nonzeros in $D^{3}f(x)\cdot d$ divided by the dimension $n$, which serves as a measure of density. For each problem, we applied the RevHedir and edge_pushing Algorithms 11 and 10 to the objective function $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$, with $x_{i}=i$ and $d_{i}=1$, for $i=1,\ldots,n$, and give the runtime of each method for dimension $n=10^{6}$ in Table 1. Note that all of these matrices are very sparse, partly due to the "thinning out" caused by the high-order differentiation. This probably contributed to the relatively low runtimes, for in these tests the runtimes have a $0.75$ correlation with the density measure "nnz/n". This leads us to believe that the actual pattern is not a decisive factor in runtime.

name Pattern nnz/n edge_pushing RevHedir
cosine B 3 3.0000 2.89 5.25
bc4 B 3 3.0000 3.93 7.87
cragglevy B 3 2.9981 5.41 10.6
chainwood B 3 1.4999 4.04 7.22
morebv B 3 3.0000 4.57 9.44
scon1dls B 3 0.7002 3.99 8.12
bdexp B 5 0.0004 2.21 3.86
pspdoc B 5 4.9999 3.05 5.97
augmlagn 5×5 diagonal blocks 4.9998 4.15 9.28
brybnd B 11 12.9996 14.19 38.79
chainros_trigexp B 3 + D 6 4.4999 6.51 12.87
toiqmerg B 7 6.9998 4.33 8.89
arwhead arrow 3.0000 3.63 6.78
nondquar arrow + B 3 4.9999 2.9 5.61
sinquad frame + diagonal 4.9999 5.12 10.01
bdqrtic arrow + B 7 8.9998 8.98 19.62
noncvxu2 irregular 6.9998 4.95 9.55
ncvxqp3 irregular 6.9997 2.9 6.48
heavey_band B 39 38.9995 20.74 61.27
Table 1: Description of the problem set together with the execution times in seconds of edge_pushing and RevHedir for $n=10^{6}$.

We did not benchmark our results against an alternative algorithm, for we could not find a known AD package that is capable of efficiently calculating such directional derivatives for such high dimensions. For small dimensions, we used the tensor_eval driver of ADOL-C to calculate the entire tensor using univariate forward Taylor series propagation [12], and then contracted the resulting tensor with the vector $d$. This was useful to check that our implementation was correct, though it struggled with dimensions over $n=100$ and is thus not an appropriate comparison.

A remarkable feature of these tests is that the time spent by RevHedir to calculate $D^{3}f(x)\cdot d$ was, on average, 108% of that of calculating $D^{2}f(x)$. Thus, if the user is prepared to pay the price of calculating the Hessian, they can also gain some third-order information for approximately the same cost. The code for these tests can be downloaded from the Edinburgh Research Group in Optimization website: http://www.maths.ed.ac.uk/ERGO/.

5 Conclusion

Our contribution boils down to a framework for designing high-order reverse methods, and an efficient implementation of the directional derivative of the Hessian called RevHedir. The framework paves the way to obtaining a reverse method for all orders once and for all. Such an achievement could cause a paradigm shift in numerical method design, wherein, instead of increasing the number of steps or refining the mesh, increasing the order of the local approximations becomes conceivable. We have also shed light on existing AD methods, providing concise proofs of the edge_pushing [8] and the reverse gradient directional derivative [1] algorithms.

The novel Algorithms 5 and 6, for calculating the contraction of the third-order derivative with a vector and the entire third-order derivative, respectively, fulfil what we set out to achieve: they accumulate the desired derivative "as a whole", thus taking advantage of overlapping calculations amongst individual components. This is in contrast with what is currently used, e.g., univariate Taylor expansions [12] and repeated tangent/adjoint operations [18]. These algorithms can also make use of symmetry, as illustrated in our implementation of the RevHedir Algorithm 11, wherein all operations are carried out only on a lower triangular matrix.

We implemented and tested the RevHedir with two noteworthy results. The first is its capacity to solve sparse problems of large dimension of up to a million variables. The second is how the time spent by RevHedir to calculate the directional derivative D3f(x)dD^{3}f(x)\cdot d was very similar to that spent by edge_pushing to calculate the Hessian. We believe this is true in general and plan on confirming this in future work through complexity analysis. Should this be confirmed, it would have an immediate consequence in the context of nonlinear optimization, in that the third-order Halley-Chebyshev methods could be used to solve large dimensional problems with an iteration cost proportional to that of Newton step. In more detail, at each step the Halley-Chebyshev methods require the Hessian matrix and its directional derivative. The descent direction is then calculated by solving the Newton system, and an additional system with the same sparsity pattern as the Newton system. If it is confirmed that solving these systems costs the same, in terms of complexity, then the cost of a Halley-Chebyshev iteration will be proportional to that of a Newton step. Though this comparison only holds if one uses these automatic differentiation procedures to calculate the derivatives in both methods.

The CUTE functions used to test both edge_pushing and RevHedir are rather limited, and further tests on real-world problems should be carried out. Also, complexity bounds need to be developed for both algorithms.

A current limitation of reverse AD procedures, such as the ones we have presented, is their issue with memory usage. All floating point values of the intermediate variables must be recorded on a forward sweep and kept for use in the reverse sweep. This can be a very substantial amount of memory, and can be prohibitive for large-scale functions [21]. As an example, when we used dimensions of n=107n=10^{7}, most of our above test cases exhausted the available memory on the personal laptop used. A possible solution to this, is to allow a trade off between run-time and memory usage by reversing only parts of the procedure at a time. This method is called checkpointing [21, 20].

References

  • [1] J. Abate, C. Bischof, L. Roh, and A. Carle, Algorithms and design for a second-order automatic differentiation module, in Proceedings of the 1997 International Symposium on Symbolic and Agebraic Computation, New York, 1997, ACM, pp. 149–155.
  • [2] I. Bongartz, A. R. Conn, N. Gould, and P. L. Toint, CUTE: constrained and unconstrained testing environment, ACM Transactions on Mathematical Software, 21 (1995), pp. 123–160.
  • [3] B. Christianson, Automatic Hessians by reverse accumulation, IMA Journal of Numerical Analysis, 12 (1992), pp. 135–150.
  • [4] G. F. Corliss, A. Griewank, and P. Henneberger, High-order stiff ODE solvers via automatic differentiation and rational prediction, in Lecture Notes in Computer Science, Springer, 1997, pp. 114–125.
  • [5] L. E. Fraenkel, Formulae for high derivatives of composite functions, Mathematical Proceedings of the Cambridge Philosophical Society, 83 (2008), p. 159.
  • [6] A. H. Gebremedhin, F. Manne, and A. Pothen, What color is your Jacobian? Graph coloring for computing derivatives, SIAM Review, 47 (2005), pp. 629–705.
  • [7] A. H. Gebremedhin, A. Tarafdar, A. Pothen, and A. Walther, Efficient computation of sparse Hessians using coloring and automatic differentiation, INFORMS Journal on Computing, 21 (2009), pp. 209–223.
  • [8] R. M. Gower and M. P. Mello, A new framework for the computation of Hessians, Optimization Methods and Software, 27 (2012), pp. 251–273.
  • [9] A. Griewank, D. Juedes, and J. Utke, ADOL-C, a package for the automatic differentiation of algorithms written in C/C++, ACM Transactions on Mathematical Software, 22 (1996), pp. 131–167.
  • [10] A. Griewank and U. Naumann, Accumulating Jacobians as chained sparse matrix products, Mathematical Programming, 95 (2003), pp. 555–571.
  • [11] A. Griewank and A. Walther, Evaluating derivatives, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second edi ed., 2008.
  • [12] A. Griewank, A. Walther, and J. Utke, Evaluating higher derivative tensors by forward propagation of univariate Taylor series, Mathematics of Computation, 69 (2000), pp. 1117–1130.
  • [13] J. Guckenheimer and B. Meloon, Computing periodic orbits and their bifurcations with automatic differentiation, SIAM Journal on Scientific Computing, 22 (2000), pp. 951–985.
  • [14] J. Gutiérrez and M. Hernández, A family of Chebyshev-Halley type methods in Banach spaces, Bulletin of the Australian Mathematical Society, 55 (1997), p. 113.
  • [15] W. Hock and K. Schittkowski, Test examples for nonlinear programming codes, Journal of Optimization Theory and Applications, 30 (1980), pp. 127–129.
  • [16] V. Kariwala, Automatic Differentiation-Based Quadrature Method of Moments for Solving Population Balance Equations, AIChE Journal, 58 (2012), pp. 842–854.
  • [17] L. Luksan and J. Vlcek, Test problems for unconstrained optimization, Tech. Report 897, Academy of Sciences of the Czech Republic, 2003.
  • [18] U. Naumann, The Art of Differentiating Computer Programs: An Introduction to Algorithmic Differentiation, no. 24 in Software, Environments, and Tools, SIAM, Philadelphia, PA, 2012.
  • [19] R. D. Neidinger, An Efficient Method for the Numerical Evaluation of Partial Derivatives of Arbitrary Order, ACM Transactions on Mathematical Software, 18 (1992), pp. 159–173.
  • [20] J. Sternberg and A. Griewank, Reduction of Storage Requirement by Checkpointing for Time-Dependent Optimal Control Problems in ODEs, in Automatic Differentiation: Applications, Theory, and Implementations, H. M. Bücker, G. Corliss, P. Hovland, U. Naumann, and B. Norris, eds., Springer, 2006, pp. 99–110.
  • [21] A. Walther and A. Griewank, Advantages of Binomial Checkpointing for Memory-reduced Adjoint Calculations, Numerical Mathematics and Advanced Applications, (2004), pp. 834–843.