On the Exactness of SDP Relaxation for Quadratic Assignment Problem
Abstract
The quadratic assignment problem (QAP) is a fundamental problem in combinatorial optimization and finds numerous applications in operations research, computer vision, and pattern recognition. However, finding the global minimizer of the QAP is a well-known NP-hard problem. In this work, we study the semidefinite relaxation (SDR) of the QAP and investigate when the SDR recovers the global minimizer. In particular, we assume the two input matrices satisfy a simple signal-plus-noise model, and show that when the noise is sufficiently small compared with the signal, the SDR is exact, i.e., it recovers the global minimizer of the QAP. It is worth noting that this sufficient condition is purely algebraic and does not depend on any statistical assumption on the input data. We apply our bound to several statistical models such as the correlated Gaussian Wigner model. Despite the theoretical sub-optimality under those models, empirical studies show the remarkable performance of the SDR. Our work could be a first step towards a deeper understanding of the exactness of the SDR for the QAP.
1 Introduction
Given two symmetric matrices A and B, how do we find a simultaneous row and column permutation of B such that the two resulting matrices are well aligned? This problem, known as the quadratic assignment problem (QAP), is one of the most challenging problems in optimization [5, 6, 32, 34, 38]. Moreover, it has found numerous applications including graph matching [3, 13], de-anonymization and privacy [43], protein network alignment [47], and the traveling salesman problem [16, 29].
One of the most common approaches to find the optimal permutation is to minimize the least squares objective:
\min_{\Pi \in \mathcal{P}_n} \| A - \Pi B \Pi^\top \|_F^2, \qquad (1.1)
where 𝒫_n is the set of all n×n permutation matrices. In general, it is a well-known NP-hard problem to find its global minimizer [32]. Numerous works have been devoted to either approximating or exactly solving the QAP [5, 6, 27, 34, 38]. Among various algorithms, convex relaxation is a popular approach to solving the QAP [18, 24, 39, 44, 51, 29]. In [27], Gilmore proposed a famous relaxation of the quadratic assignment via linear programming, and [27, 34] derived a lower bound for the QAP. Another straightforward convex relaxation is to relax the permutation matrix to the set of doubly stochastic matrices 𝒟_n, which leads to a quadratic program:
\min_{X \in \mathcal{D}_n} \| A X - X B \|_F^2. \qquad (1.2)
One can also consider spectral relaxations of the QAP [46, 5, 7]; moreover, an estimate of the optimal value of the QAP can also be characterized by the spectra of the input data matrices.
In this work, we will focus on the semidefinite relaxation (SDR) of the QAP. One well-known SDR was proposed in [51]. After that, many variants of the SDR have been proposed [44, 15, 16] to improve the formulation in [51], and [10, 42] have studied efficient algorithms to solve the SDR. Our work is more on the theoretical side of the SDR for the QAP. In particular, we are interested in the following question:
When does the SDR recover the global minimizer to (1.1)?
Without further assumptions, there is almost no hope to find the exact solution to the QAP for general input data due to its NP-hardness. We therefore consider a signal-plus-noise model for the quadratic assignment. More precisely, let A be an n×n symmetric matrix (e.g., an adjacency matrix), and let B be a perturbed matrix of A:
B = \Pi^\top (A + \Delta A)\, \Pi, \qquad (1.3)
where Π is an unknown permutation matrix and ΔA is the noise. Our goal is to recover Π from the two matrices A and B in an efficient way.
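To make the setup concrete, the following is a minimal sketch (an illustrative assumption, not the paper's experimental setup) of model (1.3) together with a brute-force solver for (1.1); the exhaustive search over all n! permutations is only feasible for tiny n, which is exactly the NP-hardness obstruction discussed above.

```python
# Sketch of the signal-plus-noise model (1.3) and brute-force QAP (1.1).
# The Gaussian instance and the symmetrized noise are illustrative choices.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 6, 0.05

G = rng.standard_normal((n, n))
A = (G + G.T) / 2                          # symmetric signal matrix
H = rng.standard_normal((n, n))
dA = sigma * (H + H.T) / 2                 # small symmetric noise
Pi_true = np.eye(n)[rng.permutation(n)]    # hidden permutation
B = Pi_true.T @ (A + dA) @ Pi_true         # model (1.3)

# Brute force over all n! permutations: minimize ||A - Pi B Pi^T||_F^2.
best_obj, best_Pi = np.inf, None
for perm in itertools.permutations(range(n)):
    Pi = np.eye(n)[list(perm)]
    obj = np.linalg.norm(A - Pi @ B @ Pi.T, "fro") ** 2
    if obj < best_obj:
        best_obj, best_Pi = obj, Pi

print("planted permutation recovered:", np.allclose(best_Pi, Pi_true))
```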
In the noise-free case, i.e., ΔA = 0, which corresponds to graph isomorphism, the ground-truth permutation can be exactly recovered by solving (1.2) or using spectral methods [23, 33] under some regularity conditions on the spectra of A. Several other convex programs have been studied [3] to recover the ground-truth permutation in the noiseless setting or in the presence of very weak noise. Recently, the quadratic assignment has been extensively studied under various statistical models (average-case analysis), especially in the context of graph matching or graph alignment, such as the correlated Erdős-Rényi graph model [41, 50, 20, 30] and the correlated Wigner model [14, 25, 19]. A series of works fully exploit the statistical properties of the random weight matrices, and use spectral methods [21, 19, 26] or extract vertex features [17, 40] to efficiently align two correlated random matrices. The core questions are: under what noise levels can one design an efficient algorithm to find the permutation [19, 20, 21], and whether the algorithm can achieve the information-theoretic threshold [14, 25].
On the other hand, the study of optimization approaches for solving these random instances is quite limited compared with spectral methods or feature-extraction-based approaches. However, optimization-based approaches often enjoy more robustness [12]. Therefore, we are interested in studying the performance of optimization methods, especially the SDR, in solving random instances of the QAP. In [39], the authors studied (1.2) for the correlated Erdős-Rényi model and proved that (1.2) will never produce the exact permutation matrix even if the noise level is extremely small. The works [19, 20] proposed a spectral method to estimate the true permutation under both the correlated Gaussian Wigner and Erdős-Rényi models. The spectral method can be viewed as a convex relaxation of (1.1). Although the global minimizer is not a permutation, it produces the true permutation after a rounding procedure.
The SDR (semidefinite relaxation) has proven itself to be powerful in tackling many challenging nonconvex problems in signal processing and data science [48]. The exactness or tightness of the SDR has been studied in various problems, including k-means and data clustering [8, 31, 35, 37], community detection [1, 2], synchronization [9, 52, 36], phase retrieval [11], matrix completion [45], and blind deconvolution [4]. These works show that the SDR can recover the ground truth as long as the SNR (signal-to-noise ratio) is sufficiently large, e.g., the sample size is large enough or the noise in the data is smaller than a certain threshold. Inspired by these observations, we will study when exactness holds for the SDR of the QAP, i.e., when the SDR produces a permutation matrix that is also the global minimizer to (1.1).
In this work, we focus on two variants of the SDR for the QAP, and study their exactness under the signal-plus-noise model. We provide a sufficient condition, based on the spectral gap of A and the noise strength, that guarantees the exactness of the SDR. It is worth noting that this sufficient condition is deterministic and can be applied to several statistical models. Despite the theoretical sub-optimality in the statistical examples, the SDR shows powerful numerical performance, and this could be a first step towards understanding the exactness of the SDR for the QAP.
1.1 Notation
Before proceeding, we go over some notation that will be used. We denote by boldface x and X a vector and a matrix respectively, and x^⊤ and X^⊤ are their transposes. I_n and J_n stand for the identity matrix and the all-ones matrix of size n×n. For a vector x, diag(x) is a diagonal matrix whose diagonal entries are given by x. For two matrices X and Y of the same size, X ∘ Y is the Hadamard product, ⟨X, Y⟩ is their inner product, and X ⊗ Y is their Kronecker product. For any matrix Z, the Frobenius norm, the operator norm, and the maximum absolute value among all entries are denoted by ‖Z‖_F, ‖Z‖, and ‖Z‖_∞ respectively. Here 𝒫_n is the set of permutation matrices, 𝒟_n is the set of all doubly stochastic matrices, e_i is the i-th one-hot vector, and δ_ij = 1 if i = j and δ_ij = 0 if i ≠ j. We let vec(X) be the vectorization of X obtained by stacking the columns of X, and for a given vector x, we denote by mat(x) the matricization of x.
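The vectorization conventions can be pinned down numerically; the snippet below is a small sanity check of the notation above (NumPy's order="F" implements the column-stacking vec, and mat is its inverse), not anything from the paper itself.

```python
# vec stacks columns; mat inverts it; X * Y is the Hadamard product.
import numpy as np

def vec(X):
    return X.flatten(order="F")            # column-stacking vectorization

def mat(x, n):
    return x.reshape((n, n), order="F")    # matricization, inverse of vec

X = np.arange(9.0).reshape(3, 3)
assert np.allclose(mat(vec(X), 3), X)

Y = np.ones((3, 3))
hadamard = X * Y                           # entrywise (Hadamard) product
inner = np.trace(X.T @ Y)                  # <X, Y> = tr(X^T Y)
kron = np.kron(X, Y)                       # Kronecker product
```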
1.2 Organization
2 Preliminaries and main results
Without loss of generality, we assume the hidden ground-truth permutation in (1.3) is Π = I_n. Note that we can rewrite the least squares objective by vectorizing A and B:
where A and B are symmetric, and x = vec(Π) is the vectorization of Π; the map Π ↦ vec(Π) is bijective on 𝒫_n,
x = \operatorname{vec}(\Pi) = \sum_{i=1}^n e_i \otimes \Pi e_i. \qquad (2.1)
Therefore, by letting
C = -B \otimes A, \qquad (2.2)
the quadratic assignment problem is equivalent to
\min_{\Pi \in \mathcal{P}_n}\ \langle C,\, xx^\top \rangle, \qquad x = \operatorname{vec}(\Pi). \qquad (2.3)
Another equivalent form of (1.1) follows from ‖A − ΠBΠ^⊤‖_F = ‖AΠ − ΠB‖_F and vec(AΠ − ΠB) = (I_n ⊗ A − B ⊗ I_n) vec(Π).
By letting
C = (I_n \otimes A - B \otimes I_n)^2, \qquad (2.4)
the least squares objective takes the same form as (2.3). Throughout the discussion, we will mainly focus on (2.3) with C in (2.2), and all the theoretical analysis also applies to (2.4) with minor modifications.
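The Kronecker identities underlying (2.1)-(2.4) can be verified numerically. The sketch below checks vec(ΠBΠ^⊤) = (Π ⊗ Π)vec(B) and vec(AΠ − ΠB) = (I_n ⊗ A − B ⊗ I_n)vec(Π); the precise sign and scaling of C in (2.2) and (2.4) follow the paper and are not fixed here.

```python
# Numerical check of the standard Kronecker/vectorization identities.
import numpy as np

rng = np.random.default_rng(1)
n = 5
G = rng.standard_normal((n, n)); A = (G + G.T) / 2
H = rng.standard_normal((n, n)); B = (H + H.T) / 2
Pi = np.eye(n)[rng.permutation(n)]
x = Pi.flatten(order="F")                  # x = vec(Pi)

# vec(Pi B Pi^T) = (Pi kron Pi) vec(B)
lhs = (Pi @ B @ Pi.T).flatten(order="F")
rhs = np.kron(Pi, Pi) @ B.flatten(order="F")
assert np.allclose(lhs, rhs)

# ||A Pi - Pi B||_F^2 = ||(I kron A - B kron I) vec(Pi)||^2 (B symmetric)
I = np.eye(n)
L = np.kron(I, A) - np.kron(B, I)
assert np.allclose(np.linalg.norm(A @ Pi - Pi @ B, "fro") ** 2,
                   np.linalg.norm(L @ x) ** 2)
```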
2.1 Convex relaxation
By letting X = xx^⊤, we note that (2.3) is a linear function in X. Therefore, the idea of the convex relaxation of (2.3) is to find a proper convex set that includes all rank-1 matrices X = xx^⊤ where x = vec(Π) and Π ∈ 𝒫_n. By (2.1), X is highly structured:
X = \big[(\Pi e_i)(\Pi e_j)^\top\big]_{1 \le i, j \le n},
which consists of n² blocks of size n×n, with each block exactly rank-1 and containing only one non-zero entry.
Now we try to find a proper convex set which contains {xx^⊤ : x = vec(Π), Π ∈ 𝒫_n}.
It is obvious that X ⪰ 0 and X ≥ 0 entrywise, which will be incorporated into the constraints.
Convex relaxation I.
Note that each block X_{ij} = (Π e_i)(Π e_j)^⊤ is exactly rank-1 and contains only one nonzero entry. Moreover, for any permutation Π, it holds that
and also
Therefore, combining these constraints leads to the following convex relaxation:
\min_{X}\ \langle C, X \rangle \quad \text{s.t.}\ X \succeq 0,\ X \geq 0,\ \text{and the block constraints above}. \qquad (2.5)
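For experimentation, here is a minimal cvxpy sketch in the spirit of (2.5). The paper's simulations use CVX in Matlab [28]; cvxpy is a Python analogue. Only a representative subset of the constraints above is imposed, and the cost matrix C = −(B ⊗ A) is a standard convention assumed here (the paper's C is defined in (2.2)).

```python
# Minimal SDR sketch: lift x = vec(Pi) to X >= x x^T and relax.
import cvxpy as cp
import numpy as np

def qap_sdr_sketch(A, B):
    n = A.shape[0]
    N = n * n
    C = -np.kron(B, A)                     # assumed cost convention
    Z = cp.Variable((N + 1, N + 1), symmetric=True)   # lifted [1 x^T; x X]
    x, X = Z[1:, 0], Z[1:, 1:]
    P = cp.reshape(x, (n, n), order="F")   # P = mat(x)
    cons = [Z >> 0,                        # enforces X >= x x^T
            Z[0, 0] == 1,
            X >= 0,                        # entrywise nonnegativity
            cp.diag(X) == x,               # x is 0/1 for permutations
            cp.sum(P, axis=0) == 1,        # column sums of mat(x)
            cp.sum(P, axis=1) == 1]        # row sums of mat(x)
    prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), cons)
    prob.solve()
    return np.reshape(x.value, (n, n), order="F"), X.value
```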
Convex relaxation II.
Note that for any Π ∈ 𝒫_n, it holds that Π ∈ 𝒟_n. Using the fact that
we have a few new constraints:
where 𝒟_n is the set of doubly stochastic matrices. Combining the constraints above with positive semidefiniteness and nonnegativity, we get an SDR similar to [51]:
\min_{X}\ \langle C, X \rangle \quad \text{s.t.}\ X \succeq 0,\ X \geq 0,\ \text{and the additional constraints above}. \qquad (2.6)
For the last constraint in (2.6), the explicit form of mat(diag(X)) is given by
which reshapes the diagonal elements of X into an n×n matrix. The relaxation (2.6) is tighter than (2.5), as it imposes a few more constraints on X.
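The block structure yields further valid constraints that can be appended to the sketch above: writing X in n×n blocks X_{ij} = (Πe_i)(Πe_j)^⊤, each block has trace δ_ij and entries summing to one. These two facts hold for any permutation; whether they reproduce the exact constraint lists of (2.5) and (2.6) follows the paper.

```python
# Append (2.5)/(2.6)-style block constraints to the sketch above.
import cvxpy as cp

def add_block_constraints(cons, X, n):
    for i in range(n):
        for j in range(n):
            blk = X[i * n:(i + 1) * n, j * n:(j + 1) * n]
            cons.append(cp.trace(blk) == (1.0 if i == j else 0.0))  # tr = delta_ij
            cons.append(cp.sum(blk) == 1)                           # <J_n, blk> = 1
    return cons
```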
2.2 Main theorems
With the two SDRs (2.5) and (2.6) introduced, we are ready to present our main theorems. Note that if A has distinct eigenvalues and no eigenvector is orthogonal to 1_n, then finding the optimal permutation is possible via simple convex relaxations [3, 33]. We are interested in whether the exactness still holds in the presence of noise, and below is our main theorem.
Theorem 2.1.
Let λ_i and u_i be the i-th eigenvalue and eigenvector of A. Then the planted solution is the unique global minimizer to the SDR (2.5) if
where and are the -th columns of and respectively.
A few remarks regarding Theorem 2.1 are in order. In the noiseless case ΔA = 0, the exactness always holds. This is aligned with the results for graph isomorphism in [3, 33]. The interesting point is that even if noise exists, as long as the noise strength is sufficiently small compared with the minimum spectral gap and the alignment between the eigenvectors and 1_n, the SDR is still exact. In other words, our theorem provides a deterministic condition that guarantees the exactness of the SDR (2.5) in the presence of noise. A result of similar flavor was derived in [3, Lemma 2], which provides an error estimate between the solution of a quadratic programming relaxation and the true permutation in terms of the spectral gap and the noise strength. However, exactness was not obtained in [3].
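The condition in Theorem 2.1 combines the minimum spectral gap of A, the alignment between the eigenvectors and 1_n, and the noise strength ‖ΔA‖. The sketch below only computes these ingredients for a given instance; how they must be combined is given by the theorem itself, not by this code.

```python
# Ingredients of the deterministic sufficient condition in Theorem 2.1.
import numpy as np

def certificate_ingredients(A, dA):
    n = A.shape[0]
    lam, U = np.linalg.eigh(A)             # eigenvalues/eigenvectors of A
    min_gap = np.diff(np.sort(lam)).min()  # minimum spectral gap of A
    align = np.abs(U.T @ np.ones(n))       # |<u_i, 1_n>| for each i
    noise = np.linalg.norm(dA, 2)          # operator norm of the noise
    return min_gap, align.min(), noise
```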
For the SDR in (2.6), we can derive a similar deterministic sufficient condition for its exactness under the same assumptions.
Theorem 2.2.
Let λ_i and u_i be the i-th eigenvalue and eigenvector of A. Then the planted solution is the unique global minimizer to the SDR (2.6) if
where and are the -th columns of and respectively.
The proofs of both theorems reduce to constructing a dual certificate (finding the dual variables) that ensures the global optimality of the planted solution. This routine is well-established, but the actual construction is highly problem dependent. For the SDR of the QAP, the construction is not simple, as the SDRs have complicated constraints. One may expect a sharper theoretical bound for the SDR (2.6) than for (2.5), as (2.6) has more constraints. However, due to the complexity of constructing a proper dual certificate, we employ a similar construction of the dual variables, and this leads to the same deterministic condition.
As both Theorems 2.1 and 2.2 are general, we present three special examples to see how our theorems work, and also compare them with numerical experiments and the best known theoretical bounds. The experiments under these models imply that our theoretical bound is sub-optimal, although the SDR performs remarkably well in numerics. As briefly mentioned before, this sub-optimality results from the construction of the dual certificate, which is quite challenging for the SDR of the QAP with general input data.
2.3 Examples and numerics
Example: diagonal matrix plus Gaussian noise.
Suppose A is a diagonal matrix, i.e., A = diag(λ_1, …, λ_n),
where the eigenvalues are in descending order and the i-th eigenvector is e_i. Suppose ΔA = σ diag(w), where w is a standard Gaussian random vector, so that B is always a diagonal matrix. Then applying Theorems 2.1 and 2.2 implies that the SDR is exact if
where ‖w‖_∞ ≲ √(2 log n) holds with high probability. Then the noise level σ should satisfy
On the other hand, given two diagonal matrices A and B, the global minimizer to (1.1) is still I_n if and only if the ordering of the eigenvalues remains unchanged, which holds if
For a specific choice of the eigenvalues, we then have
This indicates that the SDR is sub-optimal by a factor of
We also examine the performance of (2.5) in numerical simulations. We choose a small n due to the high computational complexity of the SDR for larger n. For each noise level σ, we run 20 experiments and compute the correlation between the SDR solution and the planted solution X* = xx^⊤:
\mathrm{corr}(\widehat{X}, X^*) = \frac{\langle \widehat{X}, X^* \rangle}{\|\widehat{X}\|_F \|X^*\|_F}, \qquad (2.7)
where X̂ is the solution to (2.5). This correlation is between 0 and 1: the higher the correlation, the better the recovery. In particular, if the correlation equals 1, then X̂ = X*. We count an instance as exact if the correlation is sufficiently close to 1. All simulations are done using CVX [28], and the numerical results are presented below in Figure 1.
Figure 1 indicates that the SDR is exact for sufficiently small σ, and its performance nearly matches the optimal bound. The two types of SDRs perform similarly under this random setting.
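One trial of this experiment can be sketched as follows, reusing qap_sdr_sketch from above; the problem size, noise level, and the exactness threshold 0.999 are illustrative assumptions rather than the paper's exact settings.

```python
# One trial of the diagonal-plus-Gaussian experiment with correlation (2.7).
import numpy as np

def correlation(X_hat, X_star):
    # <X_hat, X_star> / (||X_hat||_F ||X_star||_F), in [0, 1] here
    return np.sum(X_hat * X_star) / (np.linalg.norm(X_hat) * np.linalg.norm(X_star))

rng = np.random.default_rng(2)
n, sigma = 8, 0.1
A = np.diag(np.arange(n, 0, -1.0))                   # descending eigenvalues
B = A + sigma * np.diag(rng.standard_normal(n))      # diagonal Gaussian noise

P, X_hat = qap_sdr_sketch(A, B)                      # sketch defined earlier
x_star = np.eye(n).flatten(order="F")                # planted permutation I_n
X_star = np.outer(x_star, x_star)
print("exact:", correlation(X_hat, X_star) > 0.999)  # illustrative threshold
```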
Example: diagonal-plus-Wigner model.
Consider A diagonal as before, and let the noise be a symmetric Gaussian random matrix scaled by σ. Then applying Theorem 2.1 implies that
is needed to ensure the exactness, where ‖W‖ ≲ 2√n holds with high probability [49]. In particular, for a specific choice of the eigenvalues, the exactness of the SDR holds if
The numerical experiment is given in Figure 2, which shows that exactness holds for noise levels well beyond what our bound requires in theory. This implies that there is a gap between the actual performance of the SDR and our results.
Example: correlated Wigner model.
The correlated Wigner model assumes A and B are two Gaussian random matrices correlated through an additional Gaussian random matrix independent of A. To apply Theorems 2.1 and 2.2, it suffices to obtain a lower bound on the spectral gap of A and an upper bound on the noise strength.
For the spectral gap, [22, Corollary 1] implies that for n sufficiently large, it holds that
where δ denotes the smallest eigenvalue gap. Therefore, it holds with high probability that
As a result, our theorem implies that the exactness holds if
where the relevant norm bounds hold with high probability. As shown in Figure 3, a considerably larger noise level suffices for exactness numerically, which is far better than the theoretical bound. Under the correlated Wigner model, (2.6) performs better than (2.5) in certain regimes.
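For completeness, one common parameterization of the correlated Gaussian Wigner model is sketched below; the normalization is an assumption and the paper's scaling may differ. The snippet also records the smallest eigenvalue gap of A, the quantity controlled by [22].

```python
# A common correlated Gaussian Wigner parameterization (assumed scaling).
import numpy as np

def correlated_wigner(n, sigma, rng):
    G = rng.standard_normal((n, n))
    A = (G + G.T) / np.sqrt(2 * n)          # GOE-type normalization
    H = rng.standard_normal((n, n))
    W = (H + H.T) / np.sqrt(2 * n)          # independent noise matrix
    B = np.sqrt(1 - sigma ** 2) * A + sigma * W
    return A, B

rng = np.random.default_rng(3)
A, B = correlated_wigner(200, 0.1, rng)
lam = np.sort(np.linalg.eigvalsh(A))
print("smallest eigenvalue gap of A:", np.diff(lam).min())
```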
Let us conclude this section by commenting on the numerics and future research problems. In all three examples, the SDRs perform much better in numerics than our theoretical bounds predict. In fact, for the correlated Gaussian Wigner model, the SDRs are able to recover the true permutation even if the noise level is of constant order, which is comparable with the state-of-the-art works on the same model such as [19]. Therefore, there is still much room for improvement. In particular, it would be very interesting to show theoretically that the SDR achieves exactness on random instances in a manner consistent with the numerical observations. This calls for new techniques for constructing dual certificates that exploit the statistical properties of the noise. More importantly, it is unclear what the best way is to characterize the SNR (signal-to-noise ratio) for analyzing the SDR of the QAP. For example, in the correlated Gaussian Wigner model, using the spectral gap in the analysis does not seem to match the numerical performance, as the spectral gap of a Gaussian random matrix can be very small for large n. All of these are worthwhile directions for future work.
3 Proof of Theorem 2.1
The proof of Theorem 2.1 essentially follows the well-established dual-certificate technique in convex relaxation. Without loss of generality, we assume the planted permutation is I_n, and thus
Our goal is to establish sufficient conditions ensuring that the global minimizer to (2.5) is the planted solution, by constructing a dual certificate.
3.1 Dual program and optimality condition of (2.5)
The dual program of (2.5) is
(3.1)
Suppose the planted solution is the global minimizer; then the corresponding complementary slackness condition is:
(3.2) |
Using these identities to eliminate some dual variables, the KKT conditions become:
1. Stationarity: (3.3)
2. Dual feasibility: (3.4)
3. Complementary slackness: (3.5)
where C is the data matrix. Due to the nonnegativity of X, the complementary slackness condition is equivalent to an entrywise one, where “∘” is the Hadamard product of two matrices.
Now we make the KKT conditions more explicit. Instead of a direct computation, we will use properties of the Kronecker product to simplify the expression. Note that
We define
(3.6) |
as we will use it quite often, and it holds that
Therefore, (3.3), (3.4) and (3.5) imply
(3.7)
where the dual variable is as in (3.4). Therefore, the KKT conditions take a simplified form:
(3.8)
In the following section, we will construct an explicit dual certificate, i.e., dual variables such that (3.8) holds. Moreover, we can also certify the planted solution as the unique global minimizer under mild conditions.
Theorem 3.1.
Suppose there exist dual variables such that the conditions above hold; then the planted solution is a global minimizer. In addition, under a strict version of the conditions, it is the unique global minimizer.
Proof.
Let be any feasible solution that is not equal to . Then
where and holds due to the primal feasibility of and . Now we have
where is nonzero. In particular, if , then the inequality above is strict, implying the uniqueness of the global minimizer. ∎
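Since the construction below is intricate, it may help to first see the generic dual-certificate routine on a toy SDP; the following is a minimal illustration, not the paper's construction. For min ⟨C, X⟩ subject to tr(X) = 1 and X ⪰ 0, the bottom eigenvector is certified optimal by the dual variable y = λ_min(C) and the slack S = C − yI.

```python
# Generic dual-certificate check on a toy SDP (illustration only).
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))
C = (M + M.T) / 2
lam, V = np.linalg.eigh(C)

X = np.outer(V[:, 0], V[:, 0])      # candidate primal solution, tr(X) = 1
y = lam[0]                          # dual variable for the trace constraint
S = C - y * np.eye(6)               # dual slack matrix

assert np.linalg.eigvalsh(S).min() >= -1e-10   # dual feasibility: S >= 0
assert abs(np.sum(S * X)) <= 1e-10             # complementary slackness
# Weak duality: <C, X'> = <S, X'> + y tr(X') >= y = <C, X> for all feasible X'.
```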
3.2 Construction of a dual certificate
From the first equation in (3.8) and , it holds
where the terms are as defined above. Note that the diagonal elements of the corresponding matrix equal 0. Therefore, the second equation in (3.8) determines the associated dual variable.
With the discussion above, we first try to determine part of the certificate by solving the linear equation, and we will find that the construction of a dual certificate finally reduces to searching for proper choices of the remaining dual variables.
Proposition 3.2 (Construction of and ).
Suppose
(3.9) |
for some , , and . Let
(3.10) |
then and hold automatically. Moreover, if
(3.11) |
for some and As a result, it holds
(3.12) |
with the construction of in (3.10).
Proof.
We proceed to verify the required identity:
follows from (3.9). This implies that the diagonal entries of are zero, i.e., the supports of and are disjoint. Now using in (3.10) leads to
where and
Then it holds that
where ∎
Finally, we summarize our findings: by choosing the certificate in the form of (3.10), it suffices to find the remaining dual variables such that
(3.13)
where the first two constraints ensure the two feasibility conditions respectively, and the third one corresponds to the positive semidefiniteness requirement.
Now we proceed to prove Theorem 2.1. The proof relies on the following proposition, which assumes ΔA = 0. In the noiseless case, i.e., with ΔA = 0:
where λ_i and u_i are the i-th eigenvalue and eigenvector of A. The smallest eigenvalue of this matrix is 0, with corresponding eigenvectors
Theorem 3.3 (Noise-free version of Theorem 2.1).
Suppose ⟨u_i, 1_n⟩ ≠ 0 for all i and the eigenvalues of A are distinct. Moreover, letting
(3.14) |
then the second smallest eigenvalue of
is bounded below by
for sufficiently large , and
Proof.
Next, we will show the claimed bound under the assumptions of this theorem. Before proceeding, we introduce some notation. Let U consist of the eigenvectors of A with corresponding eigenvalues, and let
be the eigenvectors of w.r.t. eigenvalue . Then it holds that
(3.15) |
Let , and
where , , and are all projection matrices satisfying and
Our goal is to show that for some sufficiently large , it holds
for some . Lemma 4.2 implies that it suffices to prove that
Note that
and thus it remains to show that for some . Since
it remains to show that and equivalently
which follows from
Since , we first compute
and then
where
Using (3.15), we have
where . Therefore, we only need to control the second smallest eigenvalue of the first term.
Since is rank-1 and , we have is a projection matrix. The second smallest eigenvalue of is lower bounded by Therefore, we have
(3.16) |
where the remaining terms are bounded as above. Finally, we have
Lemma 4.2 implies that
for a sufficiently large n, where the error term can be made arbitrarily small for a sufficiently large n. ∎
Proof of Theorem 2.1.
Theorem 3.1 indicates that if the second smallest eigenvalue of the dual slack matrix is nonnegative, then the exactness holds, i.e., the planted solution is a global minimizer to (2.5).
In the presence of noise, the corresponding matrix equals
Let
(3.17)
for some and . Now we verify (3.9) in Proposition 3.2:
where the terms are as defined above. Therefore, we can choose the certificate in the form of (3.10). To ensure the KKT condition (3.13), it suffices to have (3.11) hold, so that both requirements are met and the matrix in (3.12) is positive semidefinite. For the first requirement, we have
and thus is guaranteed if
provided that .
For the second requirement, we note
where the terms are defined as above. Then the desired positive semidefiniteness follows, completing the proof. ∎
4 Proof of Theorem 2.2
4.1 Dual program and optimality condition of (2.6)
We start with deriving the dual form of (2.6). For each constraint, we assign a dual variable:
where diag(·) takes the diagonal entries of a matrix and forms them into a column vector.
For the constraint ,
where the operator is as defined above. Also,
where
Now the Lagrangian function is
We define
(4.1) |
Then the Lagrangian equals
The resulting dual program becomes
(4.2)
As a result, the KKT conditions are
1. Stationarity:
2. Dual feasibility:
3. Complementary slackness:
Suppose the planted solution is a global minimizer; then we can simplify the KKT optimality conditions by determining some of the dual variables. Using the complementary slackness condition gives rise to
which determines some of the dual variables. Now we look into the KKT conditions again: for the dual feasibility, it holds that
For the stationarity, we have
This linear system has a unique solution for :
and then
Therefore, the KKT conditions become
(4.3)
where the matrix is defined in (4.1). Next, we show that (4.3) implies the (unique) global optimality of the planted solution.
Theorem 4.1.
Suppose that there exist dual variables such that (4.3) holds; then the planted solution is a global minimizer. Moreover, under a strict version of the condition, it is the unique global minimizer.
Proof of Theorem 4.1.
Proof of Theorem 2.2.
Consider the dual certificate in (3.17); then it holds that
where is chosen in the form of (3.10). As a result, satisfies
where the terms are defined as before. Note that (3.18) gives
Note that the relevant matrix is defined in (4.1). Also, we have
In particular, if we choose the two dual vectors as constant vectors, then
and thus
For any such choice, the required condition holds, and therefore the planted solution is the unique global minimizer to (2.6) by Theorem 4.1. ∎
Appendix
Lemma 4.2.
Let and be two matrices of the same size and and be their orthogonal projection matrices respectively that satisfy , i.e., . Suppose
for some . Then for any , we have
for sufficiently large .
Proof of Lemma 4.2.
We decompose into
Let the quantities below be defined, where “†” stands for the Moore-Penrose pseudo-inverse, and
(4.5) |
Note that
where . Using and gives
which implies is invertible on as 1 is an eigenvalue of with multiplicity equal to the rank of Therefore, is equivalent to . Now
which follows from and (4.5). Then
where for sufficiently large .
Next, we will show that the matrix is positive definite when restricted to the subspace. Suppose the assumption holds; then for a sufficiently large n, it holds that
as the second term can be made arbitrarily small for a sufficiently large n. Then
where for sufficiently large .
Note that is invertible on and also the range of belongs to Therefore, we have
for any where goes to 0 at the rate of and thus is close to for sufficiently large . ∎
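Lemma 4.2 is ultimately a perturbation statement about projections and eigenvalues. As a loosely related sanity check (not the lemma itself), the snippet below verifies Weyl's inequality, the basic mechanism behind such bounds: each eigenvalue moves by at most the operator norm of a symmetric perturbation.

```python
# Weyl's inequality: |lambda_k(Phi + Delta) - lambda_k(Phi)| <= ||Delta||.
import numpy as np

rng = np.random.default_rng(5)
n = 50
M = rng.standard_normal((n, n)); Phi = (M + M.T) / 2
E = rng.standard_normal((n, n)); Delta = 1e-2 * (E + E.T) / 2

shift = np.abs(np.linalg.eigvalsh(Phi + Delta) - np.linalg.eigvalsh(Phi))
assert shift.max() <= np.linalg.norm(Delta, 2) + 1e-12
```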
References
- [1] E. Abbe. Community detection and stochastic block models: recent developments. The Journal of Machine Learning Research, 18(1):6446–6531, 2017.
- [2] E. Abbe, A. S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. IEEE Transactions on Information Theory, 62(1):471–487, 2015.
- [3] Y. Aflalo, A. Bronstein, and R. Kimmel. On convex relaxation of graph isomorphism. Proceedings of the National Academy of Sciences, 112(10):2942–2947, 2015.
- [4] A. Ahmed, B. Recht, and J. Romberg. Blind deconvolution using convex programming. IEEE Transactions on Information Theory, 60(3):1711–1732, 2013.
- [5] K. M. Anstreicher. Eigenvalue bounds versus semidefinite relaxations for the quadratic assignment problem. SIAM Journal on Optimization, 11(1):254–265, 2000.
- [6] K. M. Anstreicher. Recent advances in the solution of quadratic assignment problems. Mathematical Programming, 97:27–42, 2003.
- [7] K. M. Anstreicher and N. W. Brixius. A new bound for the quadratic assignment problem based on convex quadratic programming. Mathematical Programming, Series A, 89:341–357, 2001.
- [8] P. Awasthi, A. S. Bandeira, M. Charikar, R. Krishnaswamy, S. Villar, and R. Ward. Relax, no need to round: Integrality of clustering formulations. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, pages 191–200. ACM, 2015.
- [9] A. S. Bandeira. Random Laplacian matrices and convex relaxations. Foundations of Computational Mathematics, 18(2):345–379, Apr 2018.
- [10] J. F. Bravo Ferreira, Y. Khoo, and A. Singer. Semidefinite programming approach for the quadratic assignment problem with a sparse graph. Computational Optimization and Applications, 69:677–712, 2018.
- [11] E. J. Candès, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241–1274, 2013.
- [12] Y. Cheng and R. Ge. Non-convex matrix completion against a semi-random adversary. In Conference On Learning Theory, pages 1362–1394. PMLR, 2018.
- [13] D. Conte, P. Foggia, C. Sansone, and M. Vento. Thirty years of graph matching in pattern recognition. International Journal of Pattern Recognition and Artificial Intelligence, 18(03):265–298, 2004.
- [14] D. Cullina and N. Kiyavash. Improved achievability and converse bounds for Erdős-Rényi graph matching. ACM SIGMETRICS Performance Evaluation Review, 44(1):63–72, 2016.
- [15] E. De Klerk and R. Sotirov. Exploiting group symmetry in semidefinite programming relaxations of the quadratic assignment problem. Mathematical Programming, 122:225–246, 2010.
- [16] E. de Klerk and R. Sotirov. Improved semidefinite programming bounds for quadratic assignment problems with suitable symmetry. Mathematical Programming, 133:75–91, 2012.
- [17] J. Ding, Z. Ma, Y. Wu, and J. Xu. Efficient random graph matching via degree profiles. Probability Theory and Related Fields, 179:29–115, 2021.
- [18] Y. Ding and H. Wolkowicz. A low-dimensional semidefinite relaxation for the quadratic assignment problem. Mathematics of Operations Research, 34(4):1008–1022, 2009.
- [19] Z. Fan, C. Mao, Y. Wu, and J. Xu. Spectral graph matching and regularized quadratic relaxations I: Algorithm and Gaussian analysis. Foundations of Computational Mathematics, 23(5):1511–1565, 2023.
- [20] Z. Fan, C. Mao, Y. Wu, and J. Xu. Spectral graph matching and regularized quadratic relaxations II: Erdős-Rényi graphs and universality. Foundations of Computational Mathematics, 23(5):1567–1617, 2023.
- [21] S. Feizi, G. Quon, M. Recamonde-Mendoza, M. Medard, M. Kellis, and A. Jadbabaie. Spectral alignment of graphs. IEEE Transactions on Network Science and Engineering, 7(3):1182–1197, 2019.
- [22] R. Feng, G. Tian, and D. Wei. Small gaps of GOE. Geometric and Functional Analysis, 29(6):1794–1827, 2019.
- [23] M. Fiori and G. Sapiro. On spectral properties for graph matching and graph isomorphism problems. Information and Inference: A Journal of the IMA, 4:63–76, 2015.
- [24] F. Fogel, R. Jenatton, F. Bach, and A. d’Aspremont. Convex relaxations for permutation problems. Advances in Neural Information Processing Systems, 26, 2013.
- [25] L. Ganassali. Sharp threshold for alignment of graph databases with Gaussian weights. In Mathematical and Scientific Machine Learning, pages 314–335. PMLR, 2022.
- [26] L. Ganassali, M. Lelarge, and L. Massoulié. Spectral alignment of correlated Gaussian matrices. Advances in Applied Probability, 54(1):279–310, 2022.
- [27] P. C. Gilmore. Optimal and suboptimal algorithms for the quadratic assignment problem. Journal of the Society for Industrial and Applied Mathematics, 10(2):305–313, 1962.
- [28] M. Grant, S. Boyd, and Y. Ye. CVX: Matlab software for disciplined convex programming, 2008.
- [29] S. C. Gutekunst and D. P. Williamson. Semidefinite programming relaxations of the traveling salesman problem and their integrality gaps. Mathematics of Operations Research, 47(1):1–28, 2022.
- [30] G. Hall and L. Massoulié. Partial recovery in the graph alignment problem. Operations Research, 71(1):259–272, 2023.
- [31] T. Iguchi, D. G. Mixon, J. Peterson, and S. Villar. Probably certifiably correct k-means clustering. Mathematical Programming, 165:605–642, 2017.
- [32] R. M. Karp. Reducibility among combinatorial problems. Complexity of Computer Computations, pages 85–103, 1972.
- [33] S. Klus and T. Sahai. A spectral assignment approach for the graph isomorphism problem. Information and Inference: A Journal of the IMA, 7:689–706, 2018.
- [34] E. L. Lawler. The quadratic assignment problem. Management Science, 9(4):586–599, 1963.
- [35] X. Li, Y. Li, S. Ling, T. Strohmer, and K. Wei. When do birds of a feather flock together? k-means, proximity, and conic programming. Mathematical Programming, 179:295–341, 2020.
- [36] S. Ling. Improved performance guarantees for orthogonal group synchronization via generalized power method. SIAM Journal on Optimization, 32(2):1018–1048, 2022.
- [37] S. Ling and T. Strohmer. Certifying global optimality of graph cuts via semidefinite relaxation: A performance guarantee for spectral clustering. Foundations of Computational Mathematics, 20(3):367–421, 2020.
- [38] E. M. Loiola, N. M. M. De Abreu, P. O. Boaventura-Netto, P. Hahn, and T. Querido. A survey for the quadratic assignment problem. European Journal of Operational Research, 176(2):657–690, 2007.
- [39] V. Lyzinski, D. E. Fishkind, M. Fiori, J. T. Vogelstein, C. E. Priebe, and G. Sapiro. Graph matching: Relax at your own risk. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(01):60–73, 2016.
- [40] C. Mao, M. Rudelson, and K. Tikhomirov. Random graph matching with improved noise robustness. In 34th Annual Conference on Learning Theory, volume 134, pages 1–34, 2021.
- [41] C. Mao, M. Rudelson, and K. Tikhomirov. Exact matching of random graphs with constant correlation. Probability Theory and Related Fields, 186(1):327–389, 2023.
- [42] D. E. Oliveira, H. Wolkowicz, and Y. Xu. ADMM for the SDP relaxation of the QAP. Mathematical Programming Computation, 10(4):631–658, 2018.
- [43] E. Onaran, S. Garg, and E. Erkip. Optimal de-anonymization in random graphs with community structure. In 2016 50th Asilomar Conference on Signals, Systems and Computers, pages 709–713. IEEE, 2016.
- [44] J. Povh and F. Rendl. Copositive and semidefinite relaxations of the quadratic assignment problem. Discrete Optimization, 6(3):231–241, 2009.
- [45] B. Recht. A simpler approach to matrix completion. Journal of Machine Learning Research, 12(12), 2011.
- [46] F. Rendl and H. Wolkowicz. Applications of parametric programming and eigenvalue maximization to the quadratic assignment problem. Mathematical Programming, 53(1):63–78, 1992.
- [47] R. Singh, J. Xu, and B. Berger. Global alignment of multiple protein interaction networks with application to functional orthology detection. Proceedings of the National Academy of Sciences, 105(35):12763–12768, 2008.
- [48] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, 1996.
- [49] R. Vershynin. High-dimensional Probability: An Introduction with Applications in Data Science, volume 47. Cambridge University Press, 2018.
- [50] Y. Wu, J. Xu, and S. H. Yu. Settling the sharp reconstruction thresholds of random graph matching. IEEE Transactions on Information Theory, 68(8):5391–5417, 2022.
- [51] Q. Zhao, S. E. Karisch, F. Rendl, and H. Wolkowicz. Semidefinite programming relaxations for the quadratic assignment problem. Journal of Combinatorial Optimization, 2:71–109, 1998.
- [52] Y. Zhong and N. Boumal. Near-optimal bounds for phase synchronization. SIAM Journal on Optimization, 28(2):989–1016, 2018.