Generating Families of Practical Fast Matrix Multiplication Algorithms
FLAME Working Note #82

Jianyu Huang* Leslie Rice* Devin A. Matthews Robert A. van de Geijn*
*Department of Computer Science
Institute for Computational Engineering and Sciences
The University of Texas at Austin, Austin, TX 78712
{jianyu@cs., leslierice, dmatthews, rvdg@cs.}utexas.edu
November 3, 2016
Abstract

Matrix multiplication (GEMM) is a core operation to numerous scientific applications. Traditional implementations of Strassen-like fast matrix multiplication (FMM) algorithms often do not perform well except for very large matrix sizes, due to the increased cost of memory movement, which is particularly noticeable for non-square matrices. Such implementations also require considerable workspace and modifications to the standard BLAS interface. We propose a code generator framework to automatically implement a large family of FMM algorithms suitable for multiplications of arbitrary matrix sizes and shapes. By representing FMM with a triple of matrices $\llbracket U,V,W\rrbracket$ that capture the linear combinations of submatrices that are formed, we can use the Kronecker product to define a multi-level representation of Strassen-like algorithms. Incorporating the matrix additions that must be performed for Strassen-like algorithms into the inherent packing and micro-kernel operations inside GEMM avoids extra workspace and reduces the cost of memory movement. Adopting the same loop structures as high-performance GEMM implementations allows parallelization of all FMM algorithms with simple but efficient data parallelism without the overhead of task parallelism. We present a simple performance model for general FMM algorithms and compare actual performance of 20+ FMM algorithms to modeled predictions. Our implementations demonstrate a performance benefit over conventional GEMM on single core and multi-core systems. This study shows that Strassen-like fast matrix multiplication can be incorporated into libraries for practical use.

1 Introduction

Three recent advances have revived interest in the practical implementation of Strassen’s algorithm (Strassen) and similar Fast Matrix Multiplication (FMM) algorithms. The first [1] is a systematic way in which new FMM algorithms can be identified, building upon conventional calls to the BLAS matrix-matrix multiplication gemm routine. That work incorporated a code generator, due to the number of algorithms that are identified and the complexity of exploiting subexpressions encountered in the linear combinations of submatrices. Parallelism was achieved through a combination of task parallelism and parallelism within the BLAS. The second [2] was the insight that the BLAS-like Library Instantiation Software (BLIS) framework exposes basic building blocks that allow the linear combinations of submatrices in Strassen to be incorporated into the packing and/or computational micro-kernels already existing in the BLIS gemm implementation. Parallelism in that work mirrored the highly effective data parallelism that is part of BLIS. Finally, the present work also extends insights on how to express multiple levels of Strassen in terms of Kronecker products [3] to multi-level FMM algorithms, facilitating a code generator for all methods from [1] (including Strassen), in terms of the building blocks created for [2], but allowing different FMM algorithms to be used for each level. Importantly and unique to this work, the code generator also yields performance models that are accurate enough to guide the choice of an FMM implementation as a function of problem size and shape, facilitating the creation of poly-algorithms [4]. Performance results from single core and multi-core shared memory systems support the theoretical insights.

We focus on the special case of gemm given by $C := C + AB$. Extending the ideas to the more general case of gemm is straightforward.

2 Background

We briefly summarize how the BLIS [5] framework implements gemm before reviewing recent results [2] on how Strassen can exploit insights that underlie this framework.

2.1 High-performance implementation of standard gemm

Figure 1: Figure from [2] (used with permission from the authors). Left: Illustration of the BLIS implementation of the GotoBLAS gemm algorithm. All computation is cast in terms of a micro-kernel that is highly optimized. Right: modification that implements the representative computation $M=(X+Y)(V+W)$; $C \mathrel{+}= M$; $D \mathrel{+}= M$ of each row of computations in (2). $X$, $Y$ are submatrices of $A$; $V$, $W$ are submatrices of $B$; $C$, $D$ are submatrices of the original matrix $C$; $M$ is the intermediate matrix product. Note that the packing buffers $\widetilde{A}_{i}$ and $\widetilde{B}_{p}$ stay in cache.

Key to high-performance implementations of gemm is the partitioning of operands in order to (near-)optimally reuse data in the various levels of memory. Figure 1(left) illustrates how BLIS implements the GotoBLAS [6] approach. Block sizes $\{m_{C},n_{C},k_{C}\}$ are chosen so that submatrices fit in the various caches, while $\{m_{R},n_{R}\}$ relate to submatrices in registers that contribute to $C$. These parameters can be analytically determined [7]. To improve data locality, row panels $B_{p}$ that fit in the L3 cache are “packed” into contiguous memory, yielding $\widetilde{B}_{p}$. For similar reasons, blocks $A_{i}$ that fit in the L2 cache are packed into buffer $\widetilde{A}_{i}$.

2.2 High-performance implementations of Strassen

If one partitions the three operands into quadrants,

X=\left(\begin{array}{c|c}X_{0}&X_{1}\\ \hline X_{2}&X_{3}\end{array}\right)\quad\mbox{for } X\in\{A,B,C\} \qquad (1)

then it can be shown that the operations

\begin{array}{lll}
M_{0}=(A_{0}+A_{3})(B_{0}+B_{3}); & C_{0}\mathrel{+}=M_{0}; & C_{3}\mathrel{+}=M_{0};\\
M_{1}=(A_{2}+A_{3})B_{0}; & C_{2}\mathrel{+}=M_{1}; & C_{3}\mathrel{-}=M_{1};\\
M_{2}=A_{0}(B_{1}-B_{3}); & C_{1}\mathrel{+}=M_{2}; & C_{3}\mathrel{+}=M_{2};\\
M_{3}=A_{3}(B_{2}-B_{0}); & C_{0}\mathrel{+}=M_{3}; & C_{2}\mathrel{+}=M_{3};\\
M_{4}=(A_{0}+A_{1})B_{3}; & C_{1}\mathrel{+}=M_{4}; & C_{0}\mathrel{-}=M_{4};\\
M_{5}=(A_{2}-A_{0})(B_{0}+B_{1}); & C_{3}\mathrel{+}=M_{5}; & \\
M_{6}=(A_{1}-A_{3})(B_{2}+B_{3}); & C_{0}\mathrel{+}=M_{6}; &
\end{array} \qquad (2)

compute $C := AB + C$, but with seven (sub)matrix multiplications instead of eight, reducing the cost of the multiplications to $7/8$ of that of the classical algorithm (ignoring a lower-order number of extra additions). If all matrices are square and of size $n\times n$, classical Strassen applies this recursively, reducing the cost of gemm to $O(n^{2.807})$.
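For completeness, the exponent follows from the standard recurrence for recursive application of (2) to square matrices; this is a textbook derivation, not specific to our framework:

\[
T(n) = 7\,T(n/2) + \Theta(n^{2}) \;\Longrightarrow\; T(n) = \Theta\!\left(n^{\log_{2}7}\right) \approx \Theta\!\left(n^{2.807}\right).
\]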

Only a few levels of the recursion are exploited in practice, because the cost of the extra additions and extra memory movements quickly offsets the reduction in floating point operations. Also, Strassen is known to become less numerically stable, particularly when more than two levels of recursion are employed [8, 9, 10].

In [2], captured in Figure 1(right), it was noted that the additions of the submatrices of $A$ and $B$ can be incorporated into the packing into buffers $\widetilde{A}_{i}$ and $\widetilde{B}_{p}$, avoiding extra memory movements. In addition, once a submatrix that contributes to $C$ is computed in the registers, it can be directly added to the appropriate parts of multiple submatrices of $C$, thus avoiding the need for temporary matrices $M_{r}$, again avoiding extra memory movements. As demonstrated in [2], this makes the method practical for smaller matrices and matrices of special shape (especially rank-k updates, where $k$ is relatively small).
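The following sketch conveys the idea of fusing the submatrix additions into the packing of $\widetilde{A}_{i}$. It is a simplified illustration under stated assumptions (column-major input, a plain row-panel packed layout, exactly two summands, no scaling); the routine name and layout details are ours, not the actual BLIS packing kernels, which also handle alignment, arbitrary lists of summands, and general edge cases.

/* Sketch: packing that fuses the addition X + Y (two m_c x k_c submatrices of A,
 * stored column-major with leading dimension lda) into the packed buffer
 * A_tilde used by the micro-kernel.  The simplified micro-panel layout
 * (mr rows per panel, column by column within a panel) is illustrative. */
static void pack_add_A(int mc, int kc, int mr,
                       const double *X, const double *Y, int lda,
                       double *A_tilde)
{
    for (int i = 0; i < mc; i += mr)          /* one micro-panel of mr rows   */
        for (int p = 0; p < kc; p++)          /* ... traversed by column      */
            for (int ii = 0; ii < mr; ii++) { /* ... mr contiguous elements   */
                int row = i + ii;
                *A_tilde++ = (row < mc)
                    ? X[row + p * lda] + Y[row + p * lda]  /* fused addition */
                    : 0.0;                                 /* zero-pad edge  */
            }
}

Because the sum is formed while the data is being streamed into a packed buffer that stays in cache, the extra additions add floating point work but essentially no DRAM traffic beyond what packing for a standard gemm already incurs.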

3 Fast Matrix Multiplication Algorithms

We now present the basic idea that underlies families of FMM algorithms and show how to generalize the one-level formulation to multi-level FMM using Kronecker products and recursive block storage indexing.

3.1 One-level fast matrix multiplication algorithms

In [1], the theory of tensor contractions is used to find a large number of FMM algorithms. In this subsection, we use the output (the resulting algorithms) of their approach.

Generalizing the partitioning for Strassen, consider $C := C + AB$, where $C$, $A$, and $B$ are $m\times n$, $m\times k$, and $k\times n$ matrices, respectively. [1] defines a $\langle\widetilde{m},\widetilde{k},\widetilde{n}\rangle$ algorithm by partitioning

C=\left(\begin{array}{c|c|c}C_{0}&\cdots&C_{\widetilde{n}-1}\\ \hline \vdots&&\vdots\\ \hline C_{(\widetilde{m}-1)\widetilde{n}}&\cdots&C_{\widetilde{m}\widetilde{n}-1}\end{array}\right),\quad A=\left(\begin{array}{c|c|c}A_{0}&\cdots&A_{\widetilde{k}-1}\\ \hline \vdots&&\vdots\\ \hline A_{(\widetilde{m}-1)\widetilde{k}}&\cdots&A_{\widetilde{m}\widetilde{k}-1}\end{array}\right),\quad\mbox{and } B=\left(\begin{array}{c|c|c}B_{0}&\cdots&B_{\widetilde{n}-1}\\ \hline \vdots&&\vdots\\ \hline B_{(\widetilde{k}-1)\widetilde{n}}&\cdots&B_{\widetilde{k}\widetilde{n}-1}\end{array}\right).

Note that $A_{i}$, $B_{j}$, and $C_{p}$ are the submatrices of $A$, $B$, and $C$, with a single index in row-major order. Then, $C := C + AB$ is computed by,

for $r = 0, \ldots, R-1$,

\begin{array}{l}M_{r}:=\left(\sum\limits_{i=0}^{\widetilde{m}\widetilde{k}-1}u_{ir}A_{i}\right)\times\left(\sum\limits_{j=0}^{\widetilde{k}\widetilde{n}-1}v_{jr}B_{j}\right);\\ C_{p}\mathrel{+}=w_{pr}M_{r}\quad(p=0,\ldots,\widetilde{m}\widetilde{n}-1)\end{array} \qquad (3)

where ($\times$) is a matrix multiplication that can be done recursively, and $u_{ir}$, $v_{jr}$, and $w_{pr}$ are entries of an $(\widetilde{m}\widetilde{k})\times R$ matrix $U$, a $(\widetilde{k}\widetilde{n})\times R$ matrix $V$, and an $(\widetilde{m}\widetilde{n})\times R$ matrix $W$, respectively. Therefore, the classical matrix multiplication, which needs $\widetilde{m}\widetilde{k}\widetilde{n}$ submatrix multiplications, can be completed with $R$ submatrix multiplications. The set of coefficients that determines the $\langle\widetilde{m},\widetilde{k},\widetilde{n}\rangle$ algorithm is denoted as $\llbracket U,V,W\rrbracket$.

For example, assuming that $m$, $n$, and $k$ are all even, one-level Strassen has $\langle 2,2,2\rangle$ partition dimensions and, given the partitioning in (1) and the computations in (2),

\llbracket\left(\begin{array}{rrrrrrr}1&0&1&0&1&-1&0\\ 0&0&0&0&1&0&1\\ 0&1&0&0&0&1&0\\ 1&1&0&1&0&0&-1\end{array}\right),\left(\begin{array}{rrrrrrr}1&1&0&-1&0&1&0\\ 0&0&1&0&0&1&0\\ 0&0&0&1&0&0&1\\ 1&0&-1&0&1&0&1\end{array}\right),\left(\begin{array}{rrrrrrr}1&0&0&1&-1&0&1\\ 0&0&1&0&1&0&0\\ 0&1&0&1&0&0&0\\ 1&-1&1&0&0&1&0\end{array}\right)\rrbracket \qquad (4)

specifies $\llbracket U,V,W\rrbracket$ for one-level Strassen.
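For concreteness, the computation in (3) can be written as the following straightforward reference loop (in the spirit of the Naive FMM variant discussed later). All names are illustrative: each submatrix is assumed to be stored contiguously, and the generated implementations do not form the temporaries TA, TB, and M explicitly but instead fold these linear combinations into packing and the micro-kernel.

/* Reference evaluation of (3) for one level, given the coefficients
 * U ((m~ k~) x R), V ((k~ n~) x R), W ((m~ n~) x R) stored row-major.
 * A_sub[i] points to the i-th contiguous ms x ks block of A (column-major),
 * and similarly for B_sub (ks x ns) and C_sub (ms x ns). */
static void scale_accum(double alpha, const double *x, double *y, int n)
{
    for (int i = 0; i < n; i++) y[i] += alpha * x[i];   /* y += alpha * x */
}

void fmm_one_level(int mt, int kt, int nt, int R,
                   const double *U, const double *V, const double *W,
                   double *const *A_sub, double *const *B_sub, double *const *C_sub,
                   int ms, int ks, int ns,              /* submatrix sizes   */
                   double *TA, double *TB, double *M)   /* workspace buffers */
{
    for (int r = 0; r < R; r++) {
        for (int i = 0; i < ms * ks; i++) TA[i] = 0.0;
        for (int i = 0; i < mt * kt; i++)          /* TA := sum_i u_{ir} A_i */
            if (U[i * R + r] != 0.0) scale_accum(U[i * R + r], A_sub[i], TA, ms * ks);

        for (int j = 0; j < ks * ns; j++) TB[j] = 0.0;
        for (int j = 0; j < kt * nt; j++)          /* TB := sum_j v_{jr} B_j */
            if (V[j * R + r] != 0.0) scale_accum(V[j * R + r], B_sub[j], TB, ks * ns);

        for (int i = 0; i < ms * ns; i++) M[i] = 0.0;
        for (int jj = 0; jj < ns; jj++)            /* M_r := TA * TB         */
            for (int pp = 0; pp < ks; pp++)
                for (int ii = 0; ii < ms; ii++)
                    M[ii + jj * ms] += TA[ii + pp * ms] * TB[pp + jj * ks];

        for (int p = 0; p < mt * nt; p++)          /* C_p += w_{pr} M_r      */
            if (W[p * R + r] != 0.0) scale_accum(W[p * R + r], M, C_sub[p], ms * ns);
    }
}

Calling it with $\widetilde{m}=\widetilde{k}=\widetilde{n}=2$, $R=7$, and the coefficients in (4) (stored row-major) reproduces exactly the computations listed in (2).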

⟨m̃,k̃,ñ⟩ | Ref. | m̃k̃ñ | R  | Theoretical speedup (%) | Practical #1 (%): Ours / [1] | Practical #2 (%): Ours / [1]
⟨2,2,2⟩  | [11] |  8  |  7 | 14.3 |  11.9 /  -3.0 | 13.1 / 13.1
⟨2,3,2⟩  | [1]  | 12  | 11 |  9.1 |   5.5 / -13.1 |  7.7 /  7.7
⟨2,3,4⟩  | [1]  | 24  | 20 | 20.0 |  11.9 /  -8.0 | 16.3 / 17.0
⟨2,4,3⟩  | [10] | 24  | 20 | 20.0 |   4.8 / -15.3 | 14.9 / 16.6
⟨2,5,2⟩  | [10] | 20  | 18 | 11.1 |   1.5 / -23.1 |  8.6 /  8.3
⟨3,2,2⟩  | [10] | 12  | 11 |  9.1 |   7.1 /  -6.6 |  7.2 /  7.5
⟨3,2,3⟩  | [10] | 18  | 15 | 20.0 |  14.1 /  -0.7 | 17.2 / 16.8
⟨3,2,4⟩  | [10] | 24  | 20 | 20.0 |  11.9 /  -1.8 | 16.1 / 17.0
⟨3,3,2⟩  | [10] | 18  | 15 | 20.0 |  11.4 /  -8.1 | 17.3 / 16.5
⟨3,3,3⟩  | [12] | 27  | 23 | 17.4 |   8.6 /  -9.3 | 14.4 / 14.7
⟨3,3,6⟩  | [12] | 54  | 40 | 35.0 | -34.0 / -41.6 | 24.2 / 20.1
⟨3,4,2⟩  | [1]  | 24  | 20 | 20.0 |   4.9 / -15.7 | 16.0 / 16.8
⟨3,4,3⟩  | [12] | 36  | 29 | 24.1 |   8.4 / -12.6 | 18.1 / 20.1
⟨3,5,3⟩  | [12] | 45  | 36 | 25.0 |   5.2 / -20.6 | 19.1 / 18.9
⟨3,6,3⟩  | [12] | 54  | 40 | 35.0 | -21.6 / -64.5 | 19.5 / 17.8
⟨4,2,2⟩  | [10] | 16  | 14 | 14.3 |   9.4 /  -4.7 | 11.9 / 12.2
⟨4,2,3⟩  | [1]  | 24  | 20 | 20.0 |  12.1 /  -2.3 | 15.9 / 17.3
⟨4,2,4⟩  | [10] | 32  | 26 | 23.1 |  10.4 /  -2.7 | 18.4 / 19.1
⟨4,3,2⟩  | [10] | 24  | 20 | 20.0 |  11.3 /  -7.8 | 16.8 / 15.7
⟨4,3,3⟩  | [10] | 36  | 29 | 24.1 |   8.1 /  -8.4 | 19.8 / 20.0
⟨4,4,2⟩  | [10] | 32  | 26 | 23.1 |  -4.2 / -18.4 | 17.1 / 18.5
⟨5,2,2⟩  | [10] | 20  | 18 | 11.1 |   7.0 /  -6.7 |  8.2 /  8.5
⟨6,3,3⟩  | [12] | 54  | 40 | 35.0 | -33.4 / -42.2 | 24.0 / 20.2
Figure 2: Theoretical and practical speedup for various FMM algorithms. $\widetilde{m}\widetilde{k}\widetilde{n}$ is the number of multiplications for the classical matrix multiplication algorithm; $R$ is the number of multiplications for the fast matrix multiplication algorithm. Theoretical speedup is the speedup per recursive step. Practical #1 speedup is the speedup of one-level FMM over gemm when $m=n=14400$, $k=480$ (rank-k updates). Practical #2 speedup is the speedup of one-level FMM over gemm when $m=n=14400$, $k=12000$ (approximately square). We report the practical speedup of the best implementation of our generated code (generated gemm) and of the implementations in [1] (linked with Intel MKL) on a single core. More details about the experimental setup are given in Section 5.

Figure 2 summarizes a number of such algorithms that can be found in the literature and that we eventually test in Section 5. We only consider $2\leq\widetilde{m},\widetilde{k},\widetilde{n}\leq 6$ and do not include arbitrary precision approximate (APA) algorithms [13], due to their questionable numerical stability.

3.2 Kronecker Product

If $X$ and $Y$ are $m\times n$ and $p\times q$ matrices with $(i,j)$ entries denoted by $x_{i,j}$ and $y_{i,j}$, respectively, then the Kronecker product [14] $X\otimes Y$ is the $mp\times nq$ matrix given by

X\otimes Y=\left(\begin{array}{ccc}x_{0,0}Y&\cdots&x_{0,n-1}Y\\ \vdots&\ddots&\vdots\\ x_{m-1,0}Y&\cdots&x_{m-1,n-1}Y\end{array}\right).

Thus, with the zero-based indexing used above, entry $(X\otimes Y)_{pr+v,\,qs+w}=x_{r,s}\,y_{v,w}$.
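A Kronecker product of small, dense coefficient matrices is all the code generator needs in order to combine levels (Sections 3.4 and 3.5). The following sketch, with an illustrative name and row-major storage, implements the definition above directly:

/* Kronecker product Z = X (x) Y of dense row-major coefficient matrices:
 * X is m x n, Y is p x q, Z is (m*p) x (n*q), with
 * Z[p*r + v][q*s + w] = X[r][s] * Y[v][w] (zero-based indices). */
void kron(const double *X, int m, int n,
          const double *Y, int p, int q,
          double *Z)
{
    for (int r = 0; r < m; r++)
        for (int s = 0; s < n; s++)
            for (int v = 0; v < p; v++)
                for (int w = 0; w < q; w++)
                    Z[(p * r + v) * (n * q) + (q * s + w)] =
                        X[r * n + s] * Y[v * q + w];
}

For example, applying it to the Strassen coefficient matrix $U$ in (4) with itself yields the $16\times 49$ matrix $U\otimes U$ used for two-level Strassen in Section 3.4.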

3.3 Recursive Block Indexing (Morton-like Ordering)

\left(\begin{array}{cc|cc||cc|cc}
0&1&4&5&16&17&20&21\\
2&3&6&7&18&19&22&23\\ \hline
8&9&12&13&24&25&28&29\\
10&11&14&15&26&27&30&31\\ \hline\hline
32&33&36&37&48&49&52&53\\
34&35&38&39&50&51&54&55\\ \hline
40&41&44&45&56&57&60&61\\
42&43&46&47&58&59&62&63
\end{array}\right)

Figure 3: Illustration of recursive block storage indexing (Morton-like ordering) [15] on an $m\times k$ matrix $A$, where the partition dimensions are $\widetilde{m}=\widetilde{k}=2$, for three levels of recursion.

An example of recursive block storage indexing (Morton-like ordering) [15] is given in Figure 3. In this example, $A$ undergoes three levels of recursive splitting, and each submatrix of $A$ is indexed in row-major order. By indexing $A$, $B$, and $C$ in this manner, data locality is maintained when operations are performed on their respective submatrices.
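For the $\widetilde{m}=\widetilde{k}=2$ case of Figure 3, the flat block index can be obtained by interleaving the bits of the block-row and block-column coordinates, coarsest level first; the sketch below (an illustration with a hypothetical name, not code our generator emits) makes this explicit. For general $\langle\widetilde{m}_{l},\widetilde{k}_{l}\rangle$ partitions the same digit-interleaving idea applies with mixed radices.

/* Morton-like flat index of the block at (block_row, block_col) for L levels
 * of 2x2 splitting.  At each level the index gains two more "digits":
 * the quadrant row and quadrant column, with finer levels forming the
 * least significant digits (row-major within each 2x2 split). */
int morton_index_2x2(int block_row, int block_col, int L)
{
    int index = 0;
    for (int l = L - 1; l >= 0; l--) {          /* l = L-1 is the coarsest   */
        int bit_r = (block_row >> l) & 1;       /* quadrant row at this level */
        int bit_c = (block_col >> l) & 1;       /* quadrant column            */
        index = (index << 2) | (bit_r << 1) | bit_c;
    }
    return index;
}

For example, with $L=3$ the block at block-row 0 and block-column 2 maps to index 4, matching Figure 3.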

3.4 Representing two-level FMM with the Kronecker Product

In [3], it is shown that multi-level $\langle 2,2,2\rangle$ Strassen can be represented as a Kronecker product. In this paper, we extend this insight to multi-level FMM, where each level can use a different choice of $\langle\widetilde{m},\widetilde{k},\widetilde{n}\rangle$.

Assume each submatrix of $A$, $B$, and $C$ is partitioned with another level of a $\langle\widetilde{m}',\widetilde{k}',\widetilde{n}'\rangle$ FMM algorithm with coefficients $\llbracket U',V',W'\rrbracket$, and that $A_{i}$, $B_{j}$, $C_{p}$ are the submatrices of $A$, $B$, and $C$, each identified by a single index in two-level recursive block storage indexing. Then it can be verified that $C := C + AB$ is computed by,

for $r = 0, \ldots, R\cdot R'-1$,

\begin{array}{l}M_{r}:=\left(\sum\limits_{i=0}^{\widetilde{m}\widetilde{k}\cdot\widetilde{m}'\widetilde{k}'-1}(U\otimes U')_{i,r}A_{i}\right)\times\left(\sum\limits_{j=0}^{\widetilde{k}\widetilde{n}\cdot\widetilde{k}'\widetilde{n}'-1}(V\otimes V')_{j,r}B_{j}\right);\\ C_{p}\mathrel{+}=(W\otimes W')_{p,r}M_{r}\quad(p=0,\ldots,\widetilde{m}\widetilde{n}\cdot\widetilde{m}'\widetilde{n}'-1)\end{array}

where $\otimes$ denotes the Kronecker product. Note that $U\otimes U'$, $V\otimes V'$, and $W\otimes W'$ are $(\widetilde{m}\widetilde{k}\cdot\widetilde{m}'\widetilde{k}')\times(R\cdot R')$, $(\widetilde{k}\widetilde{n}\cdot\widetilde{k}'\widetilde{n}')\times(R\cdot R')$, and $(\widetilde{m}\widetilde{n}\cdot\widetilde{m}'\widetilde{n}')\times(R\cdot R')$ matrices, respectively.

The set of coefficients of a two-level $\langle\widetilde{m},\widetilde{k},\widetilde{n}\rangle$ and $\langle\widetilde{m}',\widetilde{k}',\widetilde{n}'\rangle$ FMM algorithm can be denoted as $\llbracket U\otimes U',V\otimes V',W\otimes W'\rrbracket$.

For example, two-level Strassen is represented by the coefficients $\llbracket U\otimes U,V\otimes V,W\otimes W\rrbracket$, where $\llbracket U,V,W\rrbracket$ are the one-level Strassen coefficients given in (4).

3.5 Additional levels of FMM

Comparing one-level and two-level FMM, the same skeleton pattern emerges. The formula defining $L$-level FMM is given by,

for $r = 0, \ldots, \prod\nolimits_{l=0}^{L-1}R_{l}-1$,

\begin{array}{l}M_{r}:=\left(\sum\limits_{i=0}^{\prod_{l=0}^{L-1}\widetilde{m}_{l}\widetilde{k}_{l}-1}\Big(\bigotimes\limits_{l=0}^{L-1}U_{l}\Big)_{i,r}A_{i}\right)\times\left(\sum\limits_{j=0}^{\prod_{l=0}^{L-1}\widetilde{k}_{l}\widetilde{n}_{l}-1}\Big(\bigotimes\limits_{l=0}^{L-1}V_{l}\Big)_{j,r}B_{j}\right);\\ C_{p}\mathrel{+}=\Big(\bigotimes\limits_{l=0}^{L-1}W_{l}\Big)_{p,r}M_{r}\quad(p=0,\ldots,\prod\nolimits_{l=0}^{L-1}\widetilde{m}_{l}\widetilde{n}_{l}-1)\end{array} \qquad (5)

The set of coefficients of an $L$-level $\langle\widetilde{m}_{l},\widetilde{k}_{l},\widetilde{n}_{l}\rangle$ ($l=0,1,\ldots,L-1$) FMM algorithm can be denoted as $\llbracket\bigotimes\nolimits_{l=0}^{L-1}U_{l},\bigotimes\nolimits_{l=0}^{L-1}V_{l},\bigotimes\nolimits_{l=0}^{L-1}W_{l}\rrbracket$.
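In a code generator, the $L$-level coefficients can be obtained simply by folding the Kronecker product over the levels. Below is a minimal sketch that reuses the kron() routine sketched in Section 3.2; the memory management is deliberately simple (one fresh allocation per level), and this is an illustration rather than the code our generator emits.

#include <stdlib.h>
#include <string.h>

/* Kronecker product sketch from Section 3.2. */
void kron(const double *X, int m, int n, const double *Y, int p, int q, double *Z);

/* Fold per-level coefficient matrices into the L-level coefficients, e.g.
 * U = U_0 (x) U_1 (x) ... (x) U_{L-1}.  rows[l] x cols[l] is the shape of
 * the l-th factor (e.g. m~_l k~_l x R_l for the U factors). */
double *kron_chain(double *const *X, const int *rows, const int *cols, int L)
{
    int m = rows[0], n = cols[0];
    double *acc = malloc((size_t)m * n * sizeof(double));
    memcpy(acc, X[0], (size_t)m * n * sizeof(double));
    for (int l = 1; l < L; l++) {
        double *next = malloc((size_t)m * rows[l] * (size_t)n * cols[l] * sizeof(double));
        kron(acc, m, n, X[l], rows[l], cols[l], next);
        free(acc);
        acc = next;
        m *= rows[l];
        n *= cols[l];
    }
    return acc;   /* (prod_l rows[l]) x (prod_l cols[l]) matrix; caller frees */
}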

4 Implementation and Analysis

The last section shows that families of one-level FMM algorithms can be specified by $\langle\widetilde{m},\widetilde{k},\widetilde{n}\rangle$ and $\llbracket U,V,W\rrbracket$. It also shows how the Kronecker product can be used to generate multi-level FMM algorithms that are iterative rather than recursive. In this section, we discuss a code generator that takes $\langle\widetilde{m},\widetilde{k},\widetilde{n}\rangle$ and $\llbracket U,V,W\rrbracket$ as input and generates as output implementations that build upon the primitives that combine taking linear combinations of matrices with the packing routines and/or micro-kernels that underlie BLIS. The code generator also provides a cost model for each implementation that can then be used to choose the best FMM algorithm for a matrix of given size and shape. This code generator can thus generate code for arbitrary levels of FMM, possibly using a different FMM algorithm at each level. In this way, we have generated and compared more than 200 FMM implementations.

4.1 Code generation

Our code generator generates various implementations of FMM, based on the coefficient representation $\llbracket U,V,W\rrbracket$, the number of levels of recursion, and the packing routine/micro-kernel incorporation specifications.

There are two stages for our code generator: generating the skeleton framework, and generating the typical operations given in (3).

Generating the skeleton framework

During this stage, the code generator

  • Computes the Kronecker product of the coefficient matrices $\llbracket U_{l},V_{l},W_{l}\rrbracket$ of each level $l$ to get the new coefficients $\llbracket\bigotimes\nolimits_{l=0}^{L-1}U_{l},\bigotimes\nolimits_{l=0}^{L-1}V_{l},\bigotimes\nolimits_{l=0}^{L-1}W_{l}\rrbracket$.

  • Generates the matrix partitioning code using conceptual recursive block storage indexing with $\langle\widetilde{m}_{l},\widetilde{k}_{l},\widetilde{n}_{l}\rangle$ partition dimensions for each level.

  • For the general cases where one or more dimensions are not multiples of the corresponding $\prod\nolimits_{l=0}^{L-1}\widetilde{m}_{l}$, $\prod\nolimits_{l=0}^{L-1}\widetilde{k}_{l}$, $\prod\nolimits_{l=0}^{L-1}\widetilde{n}_{l}$, generates dynamic peeling [16] code to handle the remaining “fringes” after invoking FMM, which requires no additional memory.

Generating the typical operations

To generate the code for the typical operations in (3), the code generator

  • Generates packing routines (written in C) that sum a list of submatrices of $A$ as part of the packing, yielding $\widetilde{A}_{i}$, and that similarly sum a list of submatrices of $B$ as part of the packing, yielding $\widetilde{B}_{p}$, extending what is illustrated in Figure 1.

  • Assembles a specialized micro-kernel comprised of a hand-coded gemm kernel and automatically generated updates to multiple submatrices of $C$ (a sketch of such a wrapper is given below).
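The following sketch illustrates the shape of such a specialized micro-kernel for the ABC variant: the hand-coded kernel computes one $m_{R}\times n_{R}$ block of $M_{r}$ into a small local buffer, which is then scattered, scaled by the nonzero $w_{pr}$, to every destination submatrix of $C$. The simplified kernel signature, row-major micro-tile, and names are ours; the real BLIS micro-kernel interface carries additional arguments (alpha, beta, auxiliary data, strides).

/* Hypothetical simplified micro-kernel signature: c is an mr x nr tile with
 * row stride rs_c and column stride cs_c. */
typedef void (*ukernel_t)(int kc, const double *a, const double *b,
                          double *c, int rs_c, int cs_c);

/* Compute one register block of M_r and add w[d] * M_r to each of the
 * ndest destination submatrices of C (assumes mr * nr <= 64). */
static void fmm_ukernel_wrapper(ukernel_t ukernel, int mr, int nr, int kc,
                                const double *a, const double *b,
                                int ndest, double *const *C_dest,
                                const double *w, int rs_c, int cs_c)
{
    double M[8 * 8];                                /* local micro-tile buffer */
    ukernel(kc, a, b, M, nr, 1);                    /* M := a * b (row-major)  */

    for (int d = 0; d < ndest; d++)                 /* C_p += w_{pr} * M       */
        for (int i = 0; i < mr; i++)
            for (int j = 0; j < nr; j++)
                C_dest[d][i * rs_c + j * cs_c] += w[d] * M[i * nr + j];
}

Because the block of $M_{r}$ never leaves this small buffer, no full temporary matrix $M_{r}$ is ever written to main memory.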

Further variations

In [2], a number of variations on the theme illustrated in Figure 1 (right) are discussed:

  • Naive FMM: A classical implementation with temporary buffers for storing the sums of submatrices of $A$ and of $B$, and the intermediate matrix product $M_{r}$.

  • AB FMM: The packing routines incorporate the summation of submatrices of $A$ and $B$ into the packing of buffers $\widetilde{A}_{i}$ and $\widetilde{B}_{p}$, but explicit temporary buffers for the matrices $M_{r}$ are used.

  • ABC FMM: AB FMM, but with a specialized micro-kernel that incorporates the addition of $M_{r}$ to multiple submatrices of $C$.

Incorporating the generation of these variations into the code generator yields over 200 FMM implementations.

4.2 Performance model

$\tau_{a}$ : Time (in seconds) of one arithmetic (floating point) operation; reciprocal of the theoretical peak GFLOPS.
$\tau_{b}$ : (Bandwidth) Amortized time (in seconds) per 8 bytes of contiguous data movement from DRAM to cache.
$T$ : Total execution time (in seconds).
$T_{a}$ : Time for arithmetic operations (in seconds).
$T_{m}$ : Time for memory operations (in seconds).
$T_{a}^{\times}$ : $T_{a}$ for submatrix multiplications.
$T_{a}^{A_{+}}$, $T_{a}^{B_{+}}$, $T_{a}^{C_{+}}$ : $T_{a}$ for extra submatrix additions.
$T_{m}^{A_{\times}}$, $T_{m}^{B_{\times}}$ : $T_{m}$ for reading submatrices in the packing routines (Fig. 1).
$T_{m}^{\widetilde{A}_{\times}}$, $T_{m}^{\widetilde{B}_{\times}}$ : $T_{m}$ for writing submatrices in the packing routines (Fig. 1).
$T_{m}^{C_{\times}}$ : $T_{m}$ for reading and writing submatrices in the micro-kernel (Fig. 1).
$T_{m}^{A_{+}}$, $T_{m}^{B_{+}}$, $T_{m}^{C_{+}}$ : $T_{m}$ for reading or writing submatrices related to the temporary buffers that are part of Naive FMM and AB FMM.
$nnz(X)$ : number of non-zero entries in the matrix or vector $X$.
Figure 4: Notation table for performance model.
① Effective GFLOPS $= 2\cdot m\cdot n\cdot k/T\cdot 10^{-9}$
② $T=T_{a}+T_{m}$
③ $T_{a}=N_{a}^{\times}\cdot T_{a}^{\times}+N_{a}^{A_{+}}\cdot T_{a}^{A_{+}}+N_{a}^{B_{+}}\cdot T_{a}^{B_{+}}+N_{a}^{C_{+}}\cdot T_{a}^{C_{+}}$
④ $T_{m}=N_{m}^{A_{\times}}\cdot T_{m}^{A_{\times}}+N_{m}^{B_{\times}}\cdot T_{m}^{B_{\times}}+N_{m}^{C_{\times}}\cdot T_{m}^{C_{\times}}+N_{m}^{A_{+}}\cdot T_{m}^{A_{+}}+N_{m}^{B_{+}}\cdot T_{m}^{B_{+}}+N_{m}^{C_{+}}\cdot T_{m}^{C_{+}}$
term | type | $\tau$ | gemm | $L$-level
$T_{a}^{\times}$ | -- | $\tau_{a}$ | $2mnk$ | $2(m/\widetilde{M_{L}})(n/\widetilde{N_{L}})(k/\widetilde{K_{L}})$
$T_{a}^{A_{+}}$ | -- | $\tau_{a}$ | -- | $2(m/\widetilde{M_{L}})(k/\widetilde{K_{L}})$
$T_{a}^{B_{+}}$ | -- | $\tau_{a}$ | -- | $2(k/\widetilde{K_{L}})(n/\widetilde{N_{L}})$
$T_{a}^{C_{+}}$ | -- | $\tau_{a}$ | -- | $2(m/\widetilde{M_{L}})(n/\widetilde{N_{L}})$
$T_{m}^{A_{\times}}$ | r | $\tau_{b}$ | $mk\lceil n/n_{c}\rceil$ | $(m/\widetilde{M_{L}})(k/\widetilde{K_{L}})\lceil(n/\widetilde{N_{L}})/n_{c}\rceil$
$T_{m}^{\widetilde{A}_{\times}}$ | w | $\tau_{b}$ | $mk\lceil n/n_{c}\rceil$ | $(m/\widetilde{M_{L}})(k/\widetilde{K_{L}})\lceil(n/\widetilde{N_{L}})/n_{c}\rceil$
$T_{m}^{B_{\times}}$ | r | $\tau_{b}$ | $nk$ | $(n/\widetilde{N_{L}})(k/\widetilde{K_{L}})$
$T_{m}^{\widetilde{B}_{\times}}$ | w | $\tau_{b}$ | $nk$ | $(n/\widetilde{N_{L}})(k/\widetilde{K_{L}})$
$T_{m}^{C_{\times}}$ | r/w | $\tau_{b}$ | $2\lambda mn\lceil k/k_{c}\rceil$ | $2\lambda(m/\widetilde{M_{L}})(n/\widetilde{N_{L}})\lceil(k/\widetilde{K_{L}})/k_{c}\rceil$
$T_{m}^{A_{+}}$ | r/w | $\tau_{b}$ | $mk$ | $(m/\widetilde{M_{L}})(k/\widetilde{K_{L}})$
$T_{m}^{B_{+}}$ | r/w | $\tau_{b}$ | $nk$ | $(n/\widetilde{N_{L}})(k/\widetilde{K_{L}})$
$T_{m}^{C_{+}}$ | r/w | $\tau_{b}$ | $mn$ | $(m/\widetilde{M_{L}})(n/\widetilde{N_{L}})$
coefficient | gemm | $L$-level ABC | $L$-level AB | $L$-level Naive
$N_{a}^{\times}$ | 1 | $R_{L}$ | $R_{L}$ | $R_{L}$
$N_{a}^{A_{+}}$ | -- | $nnz(\bigotimes U)-R_{L}$ | $nnz(\bigotimes U)-R_{L}$ | $nnz(\bigotimes U)-R_{L}$
$N_{a}^{B_{+}}$ | -- | $nnz(\bigotimes V)-R_{L}$ | $nnz(\bigotimes V)-R_{L}$ | $nnz(\bigotimes V)-R_{L}$
$N_{a}^{C_{+}}$ | -- | $nnz(\bigotimes W)$ | $nnz(\bigotimes W)$ | $nnz(\bigotimes W)$
$N_{m}^{A_{\times}}$ | 1 | $nnz(\bigotimes U)$ | $nnz(\bigotimes U)$ | $R_{L}$
$N_{m}^{\widetilde{A}_{\times}}$ | -- | -- | -- | --
$N_{m}^{B_{\times}}$ | 1 | $nnz(\bigotimes V)$ | $nnz(\bigotimes V)$ | $R_{L}$
$N_{m}^{\widetilde{B}_{\times}}$ | -- | -- | -- | --
$N_{m}^{C_{\times}}$ | 1 | $nnz(\bigotimes W)$ | $R_{L}$ | $R_{L}$
$N_{m}^{A_{+}}$ | -- | -- | -- | $nnz(\bigotimes U)+R_{L}$
$N_{m}^{B_{+}}$ | -- | -- | -- | $nnz(\bigotimes V)+R_{L}$
$N_{m}^{C_{+}}$ | -- | -- | $3nnz(\bigotimes W)$ | $3nnz(\bigotimes W)$
Figure 5: The top table shows the equations for computing the execution time $T$ and Effective GFLOPS in our performance model. The middle table shows the various components of arithmetic and memory operations for BLAS gemm and the various implementations of FMM; the time for gemm and $L$-level FMM can be computed by multiplying the parameter in the $\tau$ column with the arithmetic/memory operation count in the corresponding entry. The bottom table shows the coefficient $N_{a}^{X}$/$N_{m}^{X}$ mapping for computing $T_{a}$/$T_{m}$ in the performance model. Here $\widetilde{M_{L}}=\prod\nolimits_{l=0}^{L-1}\widetilde{m}_{l}$, $\widetilde{K_{L}}=\prod\nolimits_{l=0}^{L-1}\widetilde{k}_{l}$, $\widetilde{N_{L}}=\prod\nolimits_{l=0}^{L-1}\widetilde{n}_{l}$, $\bigotimes U=\bigotimes\nolimits_{l=0}^{L-1}U_{l}$, $\bigotimes V=\bigotimes\nolimits_{l=0}^{L-1}V_{l}$, $\bigotimes W=\bigotimes\nolimits_{l=0}^{L-1}W_{l}$, and $R_{L}=\prod\nolimits_{l=0}^{L-1}R_{l}$.


Figure 6: Performance of the generated one-level ABC, AB, and Naive FMM implementations on a single core when $m=n=14400$ and $k$ varies. Top row: actual performance; bottom row: modeled performance. Left column: one-level, ABC; middle column: one-level, AB; right column: one-level, Naive.


Figure 7: Performance of the generated two-level ABC FMM implementations on a single core when $m=k=n$; $m=n=14400$, $k$ varies; and $k=1024$, $m=n$ vary. Top row: actual performance; bottom row: modeled performance. Left column: $m=k=n$; middle column: $m=n=14400$, $k$ varies; right column: $k=1024$, $m=n$ vary.

In [2], a performance model was given to estimate the execution time $T$ for the one-level/two-level ABC, AB, and Naive variations of $\langle 2,2,2\rangle$ Strassen. In this subsection, we generalize that performance model to predict the execution time $T$ for the various FMM implementations generated by our code generator. Theoretical estimation helps us better understand the computation and memory footprint of different FMM implementations, and allows us to avoid an exhaustive empirical search for the best implementation for different problem sizes and shapes. Most importantly, our code generator can embed our performance model to guide the selection of an FMM implementation as a function of problem size and shape, given the $\langle\widetilde{m}_{l},\widetilde{k}_{l},\widetilde{n}_{l}\rangle$ and $\llbracket U_{l},V_{l},W_{l}\rrbracket$ specifications for each level $l$. These performance models are themselves automatically generated.

Assumption

We make architecture assumptions similar to those in [2]. Basically, we assume that the architecture has two layers of modern memory hierarchy: fast caches and relatively slow main memory (DRAM). For read operations, the latency for accessing the caches can be ignored, while the latency for accessing main memory is counted; for write operations, we assume a lazy write-back policy, such that the time for writing into the fast caches can be hidden. Based on these assumptions, the memory operations for gemm and the various implementations of FMM are decomposed into three parts:

  • memory packing, as shown in Figure 1;

  • reading/writing the submatrices of $C$, as shown in Figure 1;

  • reading/writing the temporary buffers that are part of Naive FMM and AB FMM.

Notation

Notation is summarized in Figure 4.

The total execution time, $T$, is dominated by the arithmetic time $T_{a}$ and the memory time $T_{m}$ (② in Figure 5).

Arithmetic operations

$T_{a}$ is decomposed into submatrix multiplications ($T_{a}^{\times}$) and submatrix additions ($T_{a}^{A_{+}}$, $T_{a}^{B_{+}}$, $T_{a}^{C_{+}}$) (③ in Figure 5). $T_{a}^{X_{+}}$ has a coefficient 2 because, under the hood, the matrix additions are cast into FMA (fused multiply-add) operations. The corresponding coefficients $N_{a}^{X}$ are tabulated in Figure 5. For instance, $N_{a}^{A_{+}}=nnz(\bigotimes U)-R_{L}$ for $L$-level FMM, because computing $\sum(\bigotimes U)_{i,r}A_{i}$ in (5) involves $\sum\nolimits_{r=0}^{R_{L}-1}(nnz((\bigotimes U)_{:,r})-1)=nnz(\bigotimes U)-R_{L}$ submatrix additions. Note that $X_{:,r}$ denotes the $r$th column of $X$.
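As a concrete check of these counts, consider one-level Strassen with the coefficients in (4), for which $nnz(U)=nnz(W)=12$ and $R=7$:

\[
N_{a}^{A_{+}} = nnz(U) - R = 12 - 7 = 5, \qquad N_{a}^{C_{+}} = nnz(W) = 12,
\]

matching the five additions of submatrices of $A$ and the twelve updates of submatrices of $C$ that appear explicitly in (2).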

Memory operations

$T_{m}$ is a function of the submatrix sizes $\{m/\widetilde{M_{L}}, k/\widetilde{K_{L}}, n/\widetilde{N_{L}}\}$ and the block sizes $\{m_{C}, k_{C}, n_{C}\}$ in Figure 1(right), because a memory operation can repeat multiple times depending on which loop it resides in. $T_{m}$ is broken down into several components, as shown in ④ in Figure 5. Each memory operation term is characterized in Figure 5 by its read/write type and the amount of memory in units of 64-bit double precision elements. Note that $T_{m}^{\widetilde{A}_{\times}}$ and $T_{m}^{\widetilde{B}_{\times}}$ are omitted in ④ because of the assumed lazy write-back policy with fast caches. Due to software prefetching effects, $T_{m}^{C_{\times}}=2\lambda(m/\widetilde{M_{L}})(n/\widetilde{N_{L}})\lceil(k/\widetilde{K_{L}})/k_{c}\rceil\tau_{b}$ has an additional parameter $\lambda\in[0.5,1]$, which denotes the prefetching efficiency; $\lambda$ is adapted to match gemm performance. Note that this is a ceiling function proportional to $k$, because the rank-k updates for accumulating submatrices of $C$ recur $\lceil(k/\widetilde{K_{L}})/k_{c}\rceil$ times in the 4th loop in Figure 1. The corresponding coefficients $N_{m}^{X}$ are tabulated in Figure 5. For example, for Naive FMM and AB FMM, computing $C_{p}\mathrel{+}=(\bigotimes W)_{p,r}M_{r}$ $(p=0,\ldots)$ in (5) involves 2 reads and 1 write related to the temporary buffer in slow memory. Therefore, $N_{m}^{C_{+}}=3nnz(\bigotimes W)$.

4.3 Discussion

We can estimate the run-time performance of the various FMM implementations generated by our code generator, based on the analysis shown in Figure 5. We use Effective GFLOPS (defined in ① in Figure 5) as the metric to compare the performance of these various FMM implementations, similar to [1, 17, 18]. The architecture-dependent parameters for the model are given in Section 5. We demonstrate the performance of two representative groups of experiments in Figures 6 and 7.

  • Contrary to what was observed in [2], Naive FMM may perform better than ABC FMM and AB FMM for relatively large problem sizes. For example, in Figure 6, $\langle 3,6,3\rangle$ (with the maximum theoretical speedup among all FMM algorithms we test, Figure 2) has better Naive FMM performance than ABC FMM and AB FMM performance. This is because the total number of packing operations in $\langle 3,6,3\rangle$ is very large ($N_{m}^{A_{\times}}=nnz(\bigotimes U)$, $N_{m}^{B_{\times}}=nnz(\bigotimes V)$), which magnifies the packing overhead of AB FMM/ABC FMM.

  • Contrary to what was observed in [1], for rank-k updates (middle and right columns of Figure 7), $\langle 2,2,2\rangle$ still performs best with ABC FMM implementations ([1] observes that some other shapes, e.g. $\langle 4,2,4\rangle$, tend to have higher performance). This is because their implementations are similar to Naive FMM, with the overhead of forming the $M_{r}$ matrices explicitly.

  • Figure 6 shows that for small problem sizes (small $k$), ABC FMM performs best; when $k$ is large, AB FMM and Naive FMM perform better. This can be quantitatively explained by comparing the coefficients $N_{m}^{X}$ in the bottom table of Figure 5.

  • The graphs for $m=n=14400$ with varying $k$, ABC, one core (left column of Figure 6; middle column of Figure 7) show that, for $k$ equal to the appropriate multiple of $k_{C}$ ($k=\prod\nolimits_{l=0}^{L-1}\widetilde{k}_{l}\times k_{C}$), ABC FMM achieves its best performance.

4.4 Applying the performance model in the code generator


Figure 8: Selecting FMM implementation with performance model.

In actual performance, even the best implementation has some unexpected drops, due to the “fringes” caused by problem sizes that are not divisible by the partition dimensions $\widetilde{m}$, $\widetilde{k}$, $\widetilde{n}$. This is not captured by our performance model. Therefore, given the specific problem size and shape, we choose the best two implementations predicted by our performance model as the top two candidates, and then measure their performance in practice to pick the best one.
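A sketch of this selection step is given below. The impl_t table, the generated model_time hooks, and the measure timing callback are illustrative placeholders for the models and timing harness produced by the generator; only the top-two-by-model-then-measure logic is the point.

/* One generated FMM implementation together with its generated model. */
typedef struct {
    const char *name;
    double (*model_time)(int m, int n, int k);   /* predicted T in seconds */
} impl_t;

/* Keep the two implementations with the smallest modeled time, then break
 * the tie by measuring both on the actual problem size via 'measure'. */
const impl_t *select_impl(const impl_t *impls, int n_impls, int m, int n, int k,
                          double (*measure)(const impl_t *, int, int, int))
{
    int best = 0, second = -1;
    for (int i = 1; i < n_impls; i++) {
        double t = impls[i].model_time(m, n, k);
        if (t < impls[best].model_time(m, n, k)) { second = best; best = i; }
        else if (second < 0 || t < impls[second].model_time(m, n, k)) { second = i; }
    }
    if (second < 0) return &impls[best];
    return measure(&impls[second], m, n, k) < measure(&impls[best], m, n, k)
               ? &impls[second] : &impls[best];
}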

In Figure 8 we show the performance results on a single core when selecting the generated FMM implementation with the guidance of the performance model, for $m=k=n$; $m=n=14400$, $k$ varies; and $k=1024$, $m=n$ vary.

Overall, this experiment shows that the performance model is accurate enough, in terms of the relative performance of the various FMM implementations, to guide the choice of an FMM implementation given the problem size and shape as input. This reduces the potential overhead of an exhaustive empirical search.

5 Performance Experiments

We present performance evaluations for various generated FMM implementations.

5.1 Implementation and architecture information

The FMM implementations generated by our code generator are written in C, utilizing SSE2 and AVX assembly, and compiled with the Intel C compiler version 15.0.3 with optimization flags -O3 -mavx.

We compare against our generated dgemm (based on the packing routines and micro-kernel borrowed from BLIS, marked as BLIS in the performance figures) as well as Intel MKL’s dgemm [19] (marked as MKL in the performance figures).

We measure performance on a dual-socket (10 cores/socket) Intel Xeon E5-2680 v2 (Ivy Bridge) processor with 12.8 GB/core of memory (peak bandwidth: 59.7 GB/s with four channels) and a three-level cache: 32 KB L1 data cache, 256 KB L2 cache, and 25.6 MB L3 cache. The stable CPU clock rate is 3.54 GHz when a single core is utilized (28.32 GFLOPS peak, marked in the graphs) and 3.10 GHz when ten cores are in use (24.8 GFLOPS/core peak). To set thread affinity and to ensure that the computation and the memory allocation all reside on the same socket, we use KMP_AFFINITY=compact.

The blocking parameters, $n_{R}=4$, $m_{R}=8$, $k_{C}=256$, $n_{C}=4096$, and $m_{C}=96$, are consistent with the parameters used for the standard BLIS dgemm implementation for this architecture. This makes the size of the packing buffer $\widetilde{A}_{i}$ 192 KB and that of $\widetilde{B}_{p}$ 8192 KB, which then fit the L2 cache and L3 cache, respectively.
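These buffer sizes follow directly from the blocking parameters, at 8 bytes per double-precision element:

\[
m_{C}\times k_{C}\times 8\,\mathrm{B} = 96\times 256\times 8\,\mathrm{B} = 192\,\mathrm{KB},\qquad
k_{C}\times n_{C}\times 8\,\mathrm{B} = 256\times 4096\times 8\,\mathrm{B} = 8192\,\mathrm{KB}.
\]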

Parallelization is implemented mirroring that described in [20], using OpenMP directives that parallelize the third loop around the micro-kernel in Figure 1.

5.2 Benefit of hybrid partitions

First, we demonstrate the benefit of using different FMM algorithms for each level.


Figure 9: Benefit of hybrid partitions over other partitions.

We report the performance of different combinations of one-level/two-level $\langle 2,2,2\rangle$, $\langle 2,3,2\rangle$, and $\langle 3,3,3\rangle$ in Figure 9, when $k$ is fixed to 1200 and $m=n$ vary. As suggested and illustrated in Section 4.3, ABC FMM performs best for rank-k updates, which is why we only show the ABC FMM performance.

Overall, the hybrid partitions $\langle 2,2,2\rangle + \langle 2,3,2\rangle$ and $\langle 2,2,2\rangle + \langle 3,3,3\rangle$ achieve the best performance. This is because 1200 is close to $2\times 3\times k_{C}$, meaning that hybrid partitions of 2 and 3 in the $k$ dimension are more favorable. This is consistent with what the performance model predicts. The performance benefits are smaller for 10 cores due to bandwidth limitations, although the performance of the hybrid partitions still beats that of the two-level homogeneous partitions.

This experiment shows the benefit of hybrid partitions, facilitated by the Kronecker product representation.


Figure 10: Performance of the best implementation of our generated FMM code and of the reference implementations [1] on one socket (10 cores). Top row: our implementations; bottom row: reference implementations from [1] (linked with Intel MKL). Left column: $m=k=n$; middle column: $m=n=14400$, $k$ varies; right column: $k=1024$, $m=n$ vary.

5.3 Sequential and parallel performance

Results when using a single core are presented in Figures 2, 6, and 7. Our generated ABC FMM implementations outperform AB FMM/Naive FMM and the reference implementations from [1] for rank-k updates (when $k$ is small). For very large square matrices, our generated AB FMM and Naive FMM achieve performance competitive with the reference implementations [1], which are linked with Intel MKL. These experiments support the validity of our model.

Figure 10 reports performance results for ten cores within the same socket. Memory bandwidth contention impacts the performance of the various FMM implementations when many cores are used. Nonetheless, we still observe a speedup of FMM over gemm. For smaller matrices and special shapes such as rank-k updates, our generated implementations achieve better performance than the reference implementations [1].

6 Conclusion

We have discussed a code generator framework that can automatically implement families of Strassen-like fast matrix multiplication (FMM) algorithms. This code generator expresses the composition of multi-level FMM algorithms as Kronecker products. It incorporates the matrix summations that must be performed for FMM into the inherent packing and micro-kernel operations inside gemm, avoiding extra workspace requirements and reducing the overhead of memory movement. Importantly, it generates an accurate performance model to guide the selection of an FMM implementation as a function of problem size and shape, facilitating the creation of poly-algorithms that select the best algorithm for a given problem size. Compared with state-of-the-art results, we observe a significant performance improvement for smaller matrices and special matrix multiplication shapes such as rank-k updates, without the need for an exhaustive empirical search.

There are a number of avenues for future work:

  • Task parallelism and various other parallel schemes have been proposed in the recent literature [21, 1]. We need to investigate how our techniques compare to these and how they can be combined with our advances. It may be possible to utilize our performance model to help with task scheduling.

  • Finding new FMM algorithms by searching for the coefficient matrices $\llbracket U,V,W\rrbracket$ is an NP-hard problem [22]. It may be possible to prune branches during the search process by using the performance model as the cost function.

  • In [2], it is shown that the Intel Xeon Phi coprocessor (KNC) can benefit from the ABC variation of $\langle 2,2,2\rangle$ Strassen. It may be possible to obtain a performance benefit by porting our code generator to generate variations of FMM implementations for many-core architectures such as the second-generation Intel Xeon Phi (KNL).

Additional information

Additional information regarding BLIS and related projects can be found at

http://shpc.ices.utexas.edu

Acknowledgments

This work was sponsored in part by the National Science Foundation under grant number ACI-1550493, by Intel Corporation through an Intel Parallel Computing Center grant, and by a gift from Qualcomm. Access to the Maverick supercomputers administered by TACC is gratefully acknowledged. Jianyu Huang was supported by a summer fellowship from the Graduate School of UT Austin. DAM is an Arnold O. Beckman Postdoctoral Fellow. We thank the rest of the SHPC team (http://shpc.ices.utexas.edu) for their support.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

References

  • [1] A. R. Benson and G. Ballard, “A framework for practical parallel fast matrix multiplication,” in Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, ser. PPoPP 2015. New York, NY, USA: ACM, 2015, pp. 42–53.
  • [2] J. Huang, T. M. Smith, G. M. Henry, and R. A. van de Geijn, “Strassen’s algorithm reloaded,” in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, ser. SC ’16. New York, NY, USA: ACM, 2016.
  • [3] C.-H. Huang, J. R. Johnson, and R. W. Johnson, “A tensor product formulation of Strassen’s matrix multiplication algorithm,” Applied Mathematics Letters, vol. 3, no. 3, pp. 67–71, 1990.
  • [4] J. Li, A. Skjellum, and R. D. Falgout, “A poly-algorithm for parallel dense matrix multiplication on two-dimensional process grid topologies,” Concurrency: Practice and Experience, vol. 9, no. 5, pp. 345–389, May 1997.
  • [5] F. G. Van Zee and R. A. van de Geijn, “BLIS: A framework for rapidly instantiating BLAS functionality,” ACM Trans. Math. Soft., vol. 41, no. 3, pp. 14:1–14:33, June 2015. [Online]. Available: http://doi.acm.org/10.1145/2764454
  • [6] K. Goto and R. A. van de Geijn, “Anatomy of a high-performance matrix multiplication,” ACM Trans. Math. Soft., vol. 34, no. 3, p. 12, May 2008, article 12, 25 pages.
  • [7] Tze Meng Low, F. D. Igual, T. M. Smith, and E. S. Quintana-Ortí, “Analytical modeling is enough for high-performance BLIS,” ACM Transactions on Mathematical Software, in review.
  • [8] N. J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed. Philadelphia, PA, USA: SIAM, 2002.
  • [9] J. Demmel, I. Dumitriu, O. Holtz, and R. Kleinberg, “Fast matrix multiplication is stable,” Numerische Mathematik, vol. 106, no. 2, pp. 199–224, 2007.
  • [10] G. Ballard, A. R. Benson, A. Druinsky, B. Lipshitz, and O. Schwartz, “Improving the numerical stability of fast matrix multiplication,” SIAM Journal on Matrix Analysis and Applications, vol. 37, no. 4, pp. 1382–1418, 2016.
  • [11] V. Strassen, “Gaussian elimination is not optimal,” Numer. Math., vol. 13, pp. 354–356, 1969.
  • [12] A. V. Smirnov, “The bilinear complexity and practical algorithms for matrix multiplication,” Computational Mathematics and Mathematical Physics, vol. 53, no. 12, pp. 1781–1795, 2013. [Online]. Available: http://dx.doi.org/10.1134/S0965542513120129
  • [13] D. Bini, M. Capovani, F. Romani, and G. Lotti, “$O(n^{2.7799})$ complexity for $n\times n$ approximate matrix multiplication,” Information Processing Letters, vol. 8, no. 5, pp. 234–235, 1979.
  • [14] A. Graham, Kronecker Products and Matrix Calculus: With Applications. New York, NY, USA: John Wiley & Sons, 1982.
  • [15] E. Elmroth, F. Gustavson, I. Jonsson, and B. Kågström, “Recursive blocked algorithms and hybrid data structures for dense matrix library software,” SIAM review, vol. 46, no. 1, pp. 3–45, 2004.
  • [16] M. Thottethodi, S. Chatterjee, and A. R. Lebeck, “Tuning Strassen’s matrix multiplication for memory efficiency,” in Proceedings of the 1998 ACM/IEEE Conference on Supercomputing, ser. SC ’98. Washington, DC, USA: IEEE Computer Society, 1998, pp. 1–14. [Online]. Available: http://dl.acm.org/citation.cfm?id=509058.509094
  • [17] B. Grayson and R. van de Geijn, “A high performance parallel Strassen implementation,” Parallel Processing Letters, vol. 6, no. 1, pp. 3–12, 1996.
  • [18] B. Lipshitz, G. Ballard, J. Demmel, and O. Schwartz, “Communication-avoiding parallel Strassen: Implementation and performance,” in Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, ser. SC ’12. Los Alamitos, CA, USA: IEEE Computer Society Press, 2012, pp. 101:1–101:11.
  • [19] “Intel Math Kernel Library,” https://software.intel.com/en-us/intel-mkl.
  • [20] T. M. Smith, R. A. van de Geijn, M. Smelyanskiy, J. R. Hammond, and F. G. Van Zee, “Anatomy of high-performance many-threaded matrix multiplication,” in 28th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2014), 2014.
  • [21] P. D’Alberto, M. Bodrato, and A. Nicolau, “Exploiting parallelism in matrix-computation kernels for symmetric multiprocessor systems: Matrix-multiplication and matrix-addition algorithm optimizations by software pipelining and threads allocation,” ACM Trans. Math. Softw., vol. 38, no. 1, pp. 2:1–2:30, December 2011. [Online]. Available: http://doi.acm.org/10.1145/2049662.2049664
  • [22] D. E. Knuth, The Art of Computer Programming, Volume 2 (3rd Ed.): Seminumerical Algorithms. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 1997.