
Simulated bifurcation for higher-order cost functions

Taro Kanao taro.kanao@toshiba.co.jp    Hayato Goto Frontier Research Laboratory, Corporate Research & Development Center, Toshiba Corporation, 1, Komukai-Toshiba-cho, Saiwai-ku, Kawasaki 212-8582, Japan
Abstract

High-performance Ising machines for solving combinatorial optimization problems have been developed with digital processors implementing heuristic algorithms such as simulated bifurcation (SB). Although Ising machines have been designed for second-order cost functions, there are practical problems expressed naturally by higher-order cost functions. In this work, we extend SB to such higher-order cost functions. By solving a problem having third-order cost functions, we show that the higher-order SB can outperform not only the second-order SB with additional spin variables, but also simulated annealing applied directly to the third-order cost functions. This result suggests that the higher-order SB can be practically useful.

Special purpose hardware solvers for the Ising problem Barahona1982 have been applied to combinatorial optimization problems Lucas2014 . Such hardware solvers are referred to as Ising machines Mohseni2022a . Ising machines with analogue computers have attracted attention, such as quantum annealers Kadowaki1998 with superconducting circuits Johnson2011 and coherent Ising machines with pulse lasers Marandi2014 . Also, Ising machines with digital processors have been developed based on, e.g., simulated annealing (SA) Kirkpatrick1983 ; Yamaoka2016 ; Aramon2019 , simulated bifurcation (SB) Goto2019 , and other algorithms Leleu2021a .

SB is a heuristic algorithm based on numerical simulation of a classical counterpart of a quantum bifurcation machine, which is a quantum annealer using nonlinear oscillators Goto2016 ; Puri2017 ; Goto2019a ; Kanao2021 . SB allows us to simultaneously update variables, and thus can easily be accelerated by parallel processing with, e.g., graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) Goto2019 ; Tatsumura2019 ; Zou2020 ; Tatsumura2021 . Two improved variants of SB have been reported: ballistic and discrete SBs (bSB and dSB) Goto2021 . In addition, bSB and dSB with thermal fluctuations have been studied Kanao2022a . SB has been benchmarked against other Ising machines Oshiyama2022 ; Kowalsky2022 . Besides, SB has been applied to financial portfolio optimizations Steinhauer2020a ; Cohen2020 , financial trading machines Tatsumura2020 , traveling salesman problems Zhang2021c , clustering with a hybrid method Matsumoto2022 , and wireless communications Zhang2022c .

In the Ising problem, the aim is to minimize cost functions written as second-order polynomials of spin variables, for which Ising machines have been designed. However, some combinatorial optimization problems are naturally expressed by cost functions with polynomials of order higher than two (higher-order cost functions). To apply Ising machines to such problems, higher-order cost functions have been transformed into second-order ones Kolmogorov2004 ; Ishikawa2011 ; Xia2018 ; Dattani2019 ; Fujisaki2022a , though such transformations usually require additional spin variables and thus increase computational costs.

In this work, we extend SB to apply it directly to a problem with higher-order cost functions. Since SB is based on simulation, this approach can easily be implemented. To benchmark the higher-order SB, we use a three-regular three-XORSAT (3R3X) problem Hen2019 ; Bernaschi2021 ; Bellitti2021 ; Kowalsky2022 , because this problem yields third-order cost functions and its exact solution can be known in advance. We compare the third-order SB with the second-order SB applied to a transformed version of the problem. As a result, we find that third-order bSB and dSB (3bSB and 3dSB) outperform the second-order ones (2bSB and 2dSB). Surprisingly, 3bSB performs better than 3dSB, whereas 2dSB does better than 2bSB, as in the previous study Goto2021 . We further find that 3bSB can be superior to SA applied directly to the third-order cost functions. These results suggest that the higher-order SB will offer another useful approach to special-purpose hardware solvers for combinatorial optimization problems.

A third-order cost function with $N$ spin variables $s_i = \pm 1$ can generally be written as

$E(\bm{s}) = -\sum_i h_i s_i - \sum_{i,j} J_{ij} s_i s_j - \sum_{i,j,k} K_{ijk} s_i s_j s_k$,  (1)

where $\bm{s}$ is the vector consisting of the $s_i$, and $h_i$, $J_{ij}$, and $K_{ijk}$ are coefficients. Since $s_i^2 = 1$, we assume without loss of generality that $J_{ii} = 0$ and that $K_{ijk} = 0$ if two of $i$, $j$, and $k$ are equal.

We here extend the second-order bSB and dSB to third order. The third-order bSB and dSB are based on the following equations of motion for positions $x_i$ and momenta $y_i$ corresponding to $s_i$:

$\dot{y}_i = -\left[a_0 - a(t)\right] x_i + c f_i$,  (2)
$\dot{x}_i = a_0 y_i$,  (3)

where the dots denote time derivatives, $a_0$ is a constant parameter, $a(t)$ is a bifurcation parameter increased from 0 to $a_0$ with time $t$, and $c$ is a normalization factor determined from the forces $f_i$. The forces $f_i$ are given by

$f_i = -\dfrac{\partial E(\bm{x})}{\partial x_i} = G_i(\bm{x})$ (bSB),  (4)
$f_i = -\left.\dfrac{\partial E(\bm{x})}{\partial x_i}\right|_{\bm{x}=\bm{s}} = G_i(\bm{s})$ (dSB).  (5)

For 3bSB, the $f_i$ are the gradients $G_i(\bm{x})$ of $E(\bm{x})$, while for 3dSB, $\bm{x}$ in the gradients is replaced by $\bm{s}$, the vector of the signs of $x_i$, $s_i = \mathrm{sgn}(x_i)$. Both 3bSB and 3dSB assume perfectly inelastic walls at $x_i = \pm 1$ Goto2021 ; that is, if $|x_i| > 1$, $x_i$ and $y_i$ are set to $\mathrm{sgn}(x_i)$ and 0, respectively. Equations (2) and (3) are solved with the symplectic Euler method Leimkuhler2004 , which discretizes time with an interval $\Delta t$ and computes the time evolution of $x_i$ and $y_i$ step by step. The signs of $x_i$ at the final time give a solution. This solution corresponds at least to a local minimum of $E(\bm{s})$, as for 2bSB and 2dSB Goto2021 , because Eq. (2) becomes $\dot{y}_i = c f_i$ at the final time. Note that when all $K_{ijk}$ are zero in Eq. (1), Eqs. (2)-(5) reproduce 2bSB and 2dSB of Ref. Goto2021, . We thus focus on the third term in Eq. (1) in the following.
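The integration scheme described above can be sketched compactly. The following Python sketch is illustrative only: the function name, parameter defaults, and the fixed normalization factor `c` are our assumptions (in the paper, $c$ is determined from the forces via Eq. (13)), and `grad(x)` stands for the forces $f_i$ of Eq. (4).

```python
import numpy as np

def bsb_run(grad, n, steps=1000, dt=0.5, c=0.5, a0=1.0, seed=0):
    """Minimal bSB sketch: symplectic Euler for Eqs. (2)-(3) with
    perfectly inelastic walls at x = +/-1. `grad(x)` returns the
    forces f_i = -dE/dx_i of Eq. (4)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, n)
    y = rng.uniform(-1, 1, n)
    for step in range(steps):
        a = a0 * step / steps                      # a(t): 0 -> a0, linear
        y += (-(a0 - a) * x + c * grad(x)) * dt    # Eq. (2)
        x += a0 * y * dt                           # Eq. (3)
        wall = np.abs(x) > 1                       # inelastic walls
        x[wall] = np.sign(x[wall])
        y[wall] = 0.0
    return np.sign(x)                              # spins s_i = sgn(x_i)
```

Replacing `grad(x)` with `grad(np.sign(x))` turns this into the corresponding dSB sketch of Eq. (5).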

To reduce the computational cost of calculating $f_i$, we exploit the sparse connectivity of the 3R3X problem; that is, the number $M$ of nonzero $K_{ijk}$ is much smaller than the total number of $K_{ijk}$. We expect that sparse connectivities often appear in applications and that the following method will therefore be useful. Denoting the nonzero $K_{ijk}$ by $K_m$ $(m = 1, 2, \ldots, M)$, $E(\bm{s})$ is expressed as

$E(\bm{s}) = -\sum_{m=1}^{M} K_m s_{v_{m1}} s_{v_{m2}} s_{v_{m3}}$,  (6)

where $v_{mn}$ with $n = 1, 2, 3$ are the indices of the spin variables in the $m$th term. To further reduce the time required for evaluating the third-order SB, we calculate the gradients by

$G_i(\bm{x}) = \dfrac{1}{x_i + \epsilon} \sum_{m=1}^{M} a_{im} K_m x_{v_{m1}} x_{v_{m2}} x_{v_{m3}}$ (bSB),  (7)
$G_i(\bm{s}) = s_i \sum_{m=1}^{M} a_{im} K_m s_{v_{m1}} s_{v_{m2}} s_{v_{m3}}$ (dSB),  (8)
$a_{im} = \delta_{i, v_{m1}} + \delta_{i, v_{m2}} + \delta_{i, v_{m3}}$,  (9)

where $\delta_{i,j}$ is 1 if $i = j$ and 0 otherwise. Equation (8) exactly gives the derivative in Eq. (5), because $a_{im}$ picks up the terms including $s_i$ from Eq. (6), and these $s_i$ are canceled by the multiplication by $s_i$ in Eq. (8) with $s_i^2 = 1$. This method is valid because Eq. (6) is linear in each $s_i$. Similarly, Eq. (7) yields the same value as Eq. (4) if $x_i \neq 0$ and $\epsilon = 0$. To stabilize the calculations, we set $\epsilon = 10^{-14}$. We empirically found that Eqs. (7) and (8) reduce the evaluation time compared with direct differentiation, presumably because these equations gather the three terms derived from $x_{v_{m1}} x_{v_{m2}} x_{v_{m3}}$ together and thus reduce memory accesses for $x_{v_{mn}}$ and $s_{v_{mn}}$.
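Equations (7)-(9) amount to a gather of the $M$ clause products followed by a scatter back to the three participating spins. A sketch under our own naming conventions (`v` is an $M \times 3$ index array holding $v_{mn}$, and `K` holds the $K_m$):

```python
import numpy as np

def grads_dsb(s, v, K):
    """Eq. (8): G_i = s_i * sum_m a_im K_m s_{vm1} s_{vm2} s_{vm3}.
    The scatter-add over the three index columns realizes a_im."""
    prod = K * s[v[:, 0]] * s[v[:, 1]] * s[v[:, 2]]   # M clause products
    g = np.zeros_like(s, dtype=float)
    for n in range(3):
        np.add.at(g, v[:, n], prod)                   # unbuffered scatter
    return s * g

def grads_bsb(x, v, K, eps=1e-14):
    """Eq. (7): same gather/scatter, but divided by x_i + eps."""
    prod = K * x[v[:, 0]] * x[v[:, 1]] * x[v[:, 2]]
    g = np.zeros_like(x, dtype=float)
    for n in range(3):
        np.add.at(g, v[:, n], prod)
    return g / (x + eps)
```

`np.add.at` is used instead of fancy-indexed `+=` because repeated indices must accumulate.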

The 3R3X problem is mapped to the minimization of Eq. (6) with the following $v_{mn}$ and $K_m$ Hen2019 . Here, $M$ equals $N$. For each $n = 1, 2, 3$, the vector of indices $v_{mn}$ over $m$ is a permutation of $1, 2, \ldots, N$. For each $m$, the three indices $v_{m1}$, $v_{m2}$, and $v_{m3}$ differ from one another, and for different $m$, the combinations of $v_{m1}$, $v_{m2}$, and $v_{m3}$ are distinct. $K_m$ is given by $(-1)^{b_m}$ with $b_m \in \{0, 1\}$. Thus $E(\bm{s}) \geq -N$ holds.

For a 3R3X instance, a solution $\bm{s}_0$ minimizing $E(\bm{s})$ can be obtained with linear algebra Hen2019 ; that is, $\bm{s}_0$ is related to a solution $\xi_i \in \{0, 1\}$ of the linear equations

i=1Namiξi\displaystyle\sum_{i=1}^{N}a_{mi}\xi_{i} =\displaystyle= bm,mod2,\displaystyle b_{m},\quad\mathrm{mod}\quad 2, (10)

by $s_{i0} = (-1)^{\xi_i}$. However, depending on $a_{mi}$ and $b_m$, Eq. (10) may have no solution. To ensure that Eq. (10) has at least one solution, we instead fix $a_{mi}$ and $\xi_i$ and determine $b_m$ from Eq. (10). An exact solution $\bm{s}_0$ is then indicated by $E(\bm{s}_0) = -N$.
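A planted instance can thus be generated by drawing the index array $v_{mn}$ and the bits $\xi_i$ first and computing $b_m$ from Eq. (10). The sketch below uses a simple rejection scheme for the index array and is not necessarily the paper's exact procedure; the function name and structure are our assumptions.

```python
import numpy as np

def planted_3r3x(n, seed=0):
    """Sketch of a planted 3R3X instance with M = N clauses. Each
    column of v is a random permutation of 0..n-1, resampled until
    every clause has three distinct indices and no index combination
    repeats. The planted bits xi fix b_m via Eq. (10), so
    K_m = (-1)^{b_m} and E(s0) = -n is guaranteed."""
    rng = np.random.default_rng(seed)
    while True:
        v = np.stack([rng.permutation(n) for _ in range(3)], axis=1)
        combos = {tuple(sorted(r)) for r in v}
        if all(len(set(r)) == 3 for r in v) and len(combos) == n:
            break
    xi = rng.integers(0, 2, n)           # planted bits xi_i
    b = xi[v].sum(axis=1) % 2            # Eq. (10) read backwards
    K = (-1.0) ** b                      # K_m = (-1)^{b_m}
    s0 = (-1.0) ** xi                    # planted spins s_{i0}
    return v, K, s0
```

By construction $K_m s_{v_{m1}} s_{v_{m2}} s_{v_{m3}} = (-1)^{2 b_m} = 1$ for every clause, so the planted $\bm{s}_0$ attains $E(\bm{s}_0) = -N$.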

A third-order cost function of the 3R3X problem can be decomposed into a second-order one by mapping each term $-(-1)^b s_1 s_2 s_3$ to

$h(s_1 + s_2 + s_3) + \tilde{h}\tilde{s} + J(s_1 s_2 + s_2 s_3 + s_3 s_1) + \tilde{J}(s_1 + s_2 + s_3)\tilde{s}$,  (11)

where $\tilde{s} = \pm 1$ is an ancillary spin variable, and $J$, $\tilde{J}$, $h$, and $\tilde{h}$ are coefficients chosen so that the $s_1$, $s_2$, and $s_3$ minimizing Eq. (11) coincide with those minimizing $-(-1)^b s_1 s_2 s_3$ Kolmogorov2004 ; Leib2016 ; Hen2019 . We use $J = 1/4$, $\tilde{J} = (-1)^b/2$, $h = -(-1)^b/4$, and $\tilde{h} = -1/2$, which normalize the minimum of Eq. (11) to $-1$. The ancillary spin variables increase the total number of spin variables to $2N$.
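These coefficients can be verified by brute force over the 16 configurations of $(s_1, s_2, s_3, \tilde{s})$: after minimizing over $\tilde{s}$, Eq. (11) attains its minimum $-1$ exactly at the spin triples minimizing $-(-1)^b s_1 s_2 s_3$. A sketch (the helper name is ours):

```python
from itertools import product

def gadget(s1, s2, s3, st, b):
    """Eq. (11) with J = 1/4, J~ = (-1)^b/2, h = -(-1)^b/4,
    h~ = -1/2; st is the ancillary spin s~."""
    sgn = (-1) ** b
    return (-sgn * (s1 + s2 + s3) / 4 - st / 2
            + (s1 * s2 + s2 * s3 + s3 * s1) / 4
            + sgn * (s1 + s2 + s3) * st / 2)

# Brute-force check: minimizing over s~, the gadget reaches -1
# exactly when (-1)^b s1 s2 s3 = +1, i.e. at the cubic term's
# minimizers, and stays above -1 elsewhere.
for b in (0, 1):
    for s1, s2, s3 in product((-1, 1), repeat=3):
        m = min(gadget(s1, s2, s3, st, b) for st in (-1, 1))
        assert (m == -1.0) == ((-1) ** b * s1 * s2 * s3 == 1)
```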

We also apply SA directly to the third-order cost functions (3SA). In an update of SA Kirkpatrick1983 , one calculates the change $\Delta E_i$ in the cost function caused by flipping $s_i$. The flip is accepted if $\beta \Delta E_i < -\ln R$, where $\beta$ is the inverse temperature, which is increased during annealing, and $R$ is a random number in the interval $(0, 1)$. In each step of SA, we update the spin variables $s_i$ sequentially Isakov2015 .
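One sequential sweep of 3SA on the cost of Eq. (6) can be sketched as follows. This is illustrative only: the names are ours, and the per-spin clause lists would normally be precomputed once rather than rebuilt per sweep.

```python
import numpy as np

def sa_sweep(s, v, K, beta, rng):
    """One 3SA sweep: sequential single-spin flips on Eq. (6),
    accepted when beta * dE < -ln R. v is an (M, 3) index array,
    K the clause coefficients."""
    n = len(s)
    clauses = [[] for _ in range(n)]          # clauses touching spin i
    for m, (i, j, k) in enumerate(v):
        for idx in (i, j, k):
            clauses[idx].append(m)
    for i in range(n):
        # flipping s_i negates each touching term -K_m s s s,
        # so its contribution to dE is -2 * (current term value)
        dE = 0.0
        for m in clauses[i]:
            term = -K[m] * s[v[m, 0]] * s[v[m, 1]] * s[v[m, 2]]
            dE += -2.0 * term
        if beta * dE < -np.log(rng.random()):  # Metropolis acceptance
            s[i] = -s[i]
    return s
```

At large $\beta$ only non-increasing moves are accepted, so a sweep can never raise the energy.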

In numerical experiments, we measure the computational cost of finding exact solutions as follows. We randomly generate 100 3R3X instances for each $N$. For these instances, we run the above-mentioned SB and SA many times with different initial conditions or random numbers and obtain the probability $P$ of finding an exact solution. Then, using the number of steps in a run, $N_\mathrm{s}$, we estimate the number of steps necessary to obtain an exact solution with a probability of 99% (step-to-solution) by Leleu2021a ; Kanao2022a

$S = N_\mathrm{s} \dfrac{\log 0.01}{\log(1 - P)}$.  (12)

A smaller $S$ indicates that an algorithm finds an exact solution more efficiently (or faster, if the computation time per step is assumed to be the same).
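Equation (12) is a direct transcription; for example, $P = 0.9$ means two runs are needed on average to reach 99% confidence, doubling the step count.

```python
import math

def step_to_solution(n_steps, p):
    """Eq. (12): expected steps to reach an exact solution with
    99% probability, given per-run success probability p."""
    return n_steps * math.log(0.01) / math.log(1.0 - p)
```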

We set the parameters as follows. For SB, $a_0 = 1$, and $a(t)$ is increased linearly. To normalize the term $c f_i$ in Eq. (2), we set $c$ by Sakai2020

$c = c_1 \left( \dfrac{1}{\nu} \sum_{i=1}^{\nu} f_i^2 \right)^{-1/2}$,  (13)

where $c_1$ is a constant parameter and $\nu$ is the number of spin variables ($\nu = 2N$ and $N$ for the second- and third-order SBs, respectively). We optimize $N_\mathrm{s}$ for each $N$, and $(\Delta t, c_1)$ for certain large values of $N$, to minimize the median of $S$ over the 100 instances Kowalsky2022 . Initial $x_i$ and $y_i$ are drawn uniformly at random from the interval $(-1, 1)$. To suppress the statistical error in $P$, the number of runs $N_\mathrm{r}$ is chosen large enough that $N_\mathrm{r} P \gg 1$ Goto2021 , though this condition is not necessarily met for the largest $N$ in this work owing to limited computational resources. For SA, $\beta$ is increased linearly from zero to a final value $\beta_1$, and $\beta_1$ and $N_\mathrm{s}$ are optimized as in SB.
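Equation (13) itself is a one-line normalization by the root-mean-square force; the value of `c1` below is illustrative, not the paper's optimized one.

```python
import numpy as np

def norm_factor(f, c1=0.5):
    """Eq. (13): c = c1 / sqrt(mean(f_i^2)), evaluated from the
    current forces f."""
    return c1 / np.sqrt(np.mean(f ** 2))
```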

Figure 1: Distributions of the third-order cost function $E(\bm{s})$ for a 3R3X instance with 100 spin variables. (a) 3bSB, 3dSB, and 3SA. (b) 2bSB and 2dSB. The lines are guides to the eye. The numbers of steps $N_\mathrm{s}$ are shown in parentheses. Here, $N_\mathrm{s}$ and the following parameters are chosen to minimize the step-to-solution $S$: $(\Delta t, c_1) = (1.1, 0.7)$ for 3bSB, $(0.7, 1.1)$ for 3dSB, $(0.8, 0.9)$ for 2bSB, and $(0.7, 1.6)$ for 2dSB; $\beta_1 = 2$ for 3SA.

We first present typical distributions of $E(\bm{s})$ for a 3R3X instance for which 3bSB gives $S$ near the median over the 100 instances. Figure 1 shows the distributions as the rates of runs resulting in each value of $E(\bm{s})$. A higher rate means a higher probability of finding solutions with a given accuracy, though here the parameters are optimized to minimize $S$ as mentioned above, and the rates could be improved by increasing $N_\mathrm{s}$. Interestingly, Fig. 1(a) shows that the rate for small $E(\bm{s})$ is higher for 3bSB than for 3dSB despite the smaller $N_\mathrm{s}$ for 3bSB. On the other hand, Fig. 1(b) shows that the rate is higher for 2dSB than for 2bSB, in accordance with the much larger $N_\mathrm{s}$ for 2dSB. As shown in Fig. 1(a), the rate for 3SA lies between those of 3bSB and 3dSB. These rates and $N_\mathrm{s}$ suggest that 3bSB performs best.

Figure 2: Medians of the step-to-solution $S$ for 100 3R3X instances as functions of the number of spin variables $N$. The lines and error bars show fits by Eq. (14) and statistical errors Goto2021 , respectively. The parameters $\Delta t$, $c_1$, and $\beta_1$ are the same as in Fig. 1.

We next evaluate the results for the 100 3R3X instances. Figure 2 shows the medians of $S$ as functions of $N$. All the algorithms exhibit exponential growth of $S$ with $N$. The values of $S$ are smaller for third-order bSB and dSB than for the second-order ones. Between the second-order variants, 2dSB gives smaller $S$ than 2bSB, consistent with the reported result that dSB reaches exact solutions of the Ising problem more often Goto2021 . Between the third-order variants, in contrast, 3bSB shows smaller $S$ than 3dSB. The higher performance of 3bSB might be because bSB can rapidly converge to a local minimum of a cost function Goto2021 and a local search algorithm is effective for the 3R3X problem Bernaschi2021 ; Bellitti2021 ; Kowalsky2022 . A comparison between 2bSB and 3bSB suggests that the local search by bSB is especially effective when applied directly to third-order cost functions with fewer spin variables. 3SA results in $S$ between those of 3bSB and 3dSB, indicating that 3bSB is superior to 3SA.

Scaling can be quantified by exponents $\alpha$ and $\beta$ such that

$S \simeq 10^{\alpha N + \beta}$,  (14)

for large $N$ Kowalsky2022 . A smaller $\alpha$ allows an algorithm to be applied to larger instances. We fit the medians of $S$ by Eq. (14) with the least-squares method, shown as the lines in Fig. 2. Only the data for intermediate $N$ are used for the fits, because errors due to finite-size effects are large for small $N$ and statistical errors are large for large $N$ Kowalsky2022 . Table 1 shows $\alpha$ and $\beta$. The value of $\alpha$ for 3bSB is the smallest; $\alpha$ for 3dSB is slightly larger, $\alpha$ for 2dSB is significantly larger, and $\alpha$ for 2bSB is by far the largest. The value of $\alpha$ for 3SA is larger than those for 3bSB and 3dSB. These values demonstrate that 3bSB scales best among these algorithms.
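The fit of Eq. (14) is an ordinary least-squares line in $\log_{10} S$ versus $N$; a sketch with our own names:

```python
import numpy as np

def fit_scaling(n_vals, s_medians):
    """Least-squares fit of Eq. (14): log10(S) ~ alpha * N + beta.
    Returns (alpha, beta)."""
    alpha, beta = np.polyfit(n_vals, np.log10(s_medians), 1)
    return alpha, beta
```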

Table 1: Scaling exponents. The numbers in parentheses are standard deviations from the least-squares fits.

           3bSB        3dSB        3SA         2bSB       2dSB
$\alpha$   0.0355(2)   0.0375(6)   0.0384(6)   0.078(1)   0.0433(2)
$\beta$    3.36(4)     4.23(8)     3.69(8)     3.19(9)    4.72(2)

Finally, we compare $\alpha$ with the results of Ref. Kowalsky2022, , where various Ising machines are benchmarked using the 3R3X problem expressed by second-order cost functions. Among the Ising machines, Digital Annealer showed the smallest exponent, corresponding to $\alpha = 0.0370(8)$ Footnote2022 . Compared with this value, $\alpha$ for 3bSB appears to be smaller. The exponent for the Simulated Bifurcation Machine (provided by Toshiba Digital Solutions Corporation TDSL ) corresponds to $\alpha = 0.0434(12)$, which coincides with that for 2dSB in this work within the errors. This agreement supports the validity of our evaluation. Ref. Kowalsky2022, also includes a quasi-greedy algorithm called SATonGPU Bernaschi2021 ; Bellitti2021 ; Kowalsky2022 , which is specialized for the 3R3X problem and handles third-order cost functions directly. SATonGPU achieved an exponent corresponding to $\alpha = 0.0341(2)$ Footnote2021 , which is close to an estimated optimal exponent Bernaschi2021 . This value is smaller than $\alpha$ for 3bSB in our work, but the difference is fairly small, implying that 3bSB can achieve nearly optimal scaling even though it is not specialized for the 3R3X problem.

In summary, we have demonstrated that third-order SB can perform better than second-order SB for the 3R3X problem. Although 2dSB outperforms 2bSB, 3bSB instead outperforms 3dSB. Furthermore, we have shown that 3bSB can be superior to 3SA. These results indicate that higher-order SB can be useful for solving combinatorial optimization problems that are naturally expressed by higher-order cost functions. Since we have formulated the higher-order SB by matrix-vector operations, we expect that the higher-order SB may be accelerated by parallel processing with, e.g., GPUs and FPGAs.

We thank Y. Sakai, T. Kashimata, K. Tatsumura, and R. Hidaka for helpful discussions.

References

  • (1) F. Barahona, J. Phys. A: Math. Gen. 15, 3241 (1982).
  • (2) A. Lucas, Front. Phys. 2, 5 (2014).
  • (3) N. Mohseni, P. L. McMahon, and T. Byrnes, Nat. Rev. Phys. 4, 363 (2022).
  • (4) T. Kadowaki and H. Nishimori, Phys. Rev. E 58, 5355 (1998).
  • (5) M. W. Johnson et al., Nature 473, 194 (2011).
  • (6) A. Marandi, Z. Wang, K. Takata, R. L. Byer, and Y. Yamamoto, Nat. Photonics 8, 937 (2014).
  • (7) S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Science 220, 671 (1983).
  • (8) M. Yamaoka, C. Yoshimura, M. Hayashi, T. Okuyama, H. Aoki, and H. Mizuno, IEEE J. Solid-State Circuits 51, 303 (2016).
  • (9) M. Aramon, G. Rosenberg, E. Valiante, T. Miyazawa, H. Tamura, and H. G. Katzgraber, Front. Phys. 7, 48 (2019).
  • (10) H. Goto, K. Tatsumura, and A. R. Dixon, Sci. Adv. 5, eaav2372 (2019).
  • (11) T. Leleu, F. Khoyratee, T. Levi, R. Hamerly, T. Kohno, and K. Aihara, Commun. Phys. 4, 266 (2021).
  • (12) H. Goto, Sci. Rep. 6, 21686 (2016).
  • (13) S. Puri, C. K. Andersen, A. L. Grimsmo, and A. Blais, Nat. Commun. 8, 15785 (2017).
  • (14) H. Goto, J. Phys. Soc. Jpn. 88, 061015 (2019).
  • (15) T. Kanao and H. Goto, npj Quantum Inf. 7, 18 (2021).
  • (16) K. Tatsumura, A. R. Dixon, and H. Goto, IEEE Int. Conf. Field Programmable Logic and Applications (FPL), 2019, p. 59.
  • (17) Y. Zou and M. Lin, ACM/SIGDA Int. Symp. Field-Programmable Gate Arrays (FPGA ’20), 2020, p. 65.
  • (18) K. Tatsumura, M. Yamasaki, and H. Goto, Nat. Electron. 4, 208 (2021).
  • (19) H. Goto, K. Endo, M. Suzuki, Y. Sakai, T. Kanao, Y. Hamakawa, R. Hidaka, M. Yamasaki, and K. Tatsumura, Sci. Adv. 7, eabe7953 (2021).
  • (20) T. Kanao and H. Goto, Commun. Phys. 5, 153 (2022).
  • (21) H. Oshiyama and M. Ohzeki, Sci. Rep. 12, 2146 (2022).
  • (22) M. Kowalsky, T. Albash, I. Hen, and D. A. Lidar, Quantum Sci. Technol. 7, 025008 (2022).
  • (23) K. Steinhauer, T. Fukadai, and S. Yoshida, arXiv:2009.08412 (2020).
  • (24) J. Cohen and C. Alexander, arXiv:2011.01308 (2020).
  • (25) K. Tatsumura, R. Hidaka, M. Yamasaki, Y. Sakai, and H. Goto, IEEE Int. Symp. Circuits and Systems (ISCAS), 2020, p. 1.
  • (26) T. Zhang, Q. Tao, and J. Han, IEEE Int. SoC Design Conf. (ISOCC), 2021, p. 288.
  • (27) N. Matsumoto, Y. Hamakawa, K. Tatsumura, and K. Kudo, Sci. Rep. 12, 2669 (2022).
  • (28) W. Zhang and Y.-L. Zheng, arXiv:2210.14660 (2022).
  • (29) V. Kolmogorov and R. Zabih, IEEE Trans. Pattern Analysis and Machine Intelligence 26, 147 (2004).
  • (30) H. Ishikawa, IEEE Trans. Pattern Analysis and Machine Intelligence 33, 1234 (2011).
  • (31) R. Xia, T. Bian, and S. Kais, J. Phys. Chem. B 122, 3384 (2018).
  • (32) N. Dattani, arXiv:1901.04405 (2019).
  • (33) J. Fujisaki, H. Oshima, S. Sato, and K. Fujii, Phys. Rev. Res. 4, 043086 (2022).
  • (34) I. Hen, Phys. Rev. Appl. 12, 011003 (2019).
  • (35) M. Bernaschi, M. Bisson, M. Fatica, E. Marinari, V. Martin-Mayor, G. Parisi, and F. Ricci-Tersenghi, Europhys. Lett. 133, 60005 (2021).
  • (36) M. Bellitti, F. Ricci-Tersenghi, and A. Scardicchio, Phys. Rev. Res. 3, 043015 (2021).
  • (37) B. Leimkuhler and S. Reich, Simulating Hamiltonian Dynamics (Cambridge University Press, Cambridge, 2004).
  • (38) M. Leib, P. Zoller, and W. Lechner, Quantum Sci. Technol. 1, 015008 (2016).
  • (39) S. V. Isakov, I. N. Zintchenko, T. F. Rønnow, and M. Troyer, Comput. Phys. Commun. 192, 265 (2015).
  • (40) Y. Sakai, H. Goto, K. Tatsumura, K. Endo, and M. Suzuki, International patent application PCT/JP2020/006841 (20 February 2020).
  • (41) In Ref. Kowalsky2022, , scaling is given with respect to ν=2N\nu=2N.
  • (42) See https://aws.amazon.com/marketplace/pp/prodview-f3hbaz4q3y32y for Simulated Bifurcation Machine by Toshiba Digital Solutions Corporation; accessed 21 November 2022.
  • (43) In Ref. Bernaschi2021, , scaling is expressed with eaN\mathrm{e}^{aN}.