
A graphical user interface software for lattice QCD based on Python acceleration technology

Lin Gao silvester_gao@qq.com American Association for the Advancement of Science, Washington, DC 20005, USA
(August 6, 2025)
Abstract

A graphical user interface (GUI) software is provided for lattice QCD simulations, aimed at streamlining the simulation workflow. The current version of the software employs the Metropolis algorithm with the Wilson gauge action. It is implemented in Python, utilizing Just-In-Time (JIT) compilation to enhance computational speed while preserving Python’s simplicity and extensibility. Additionally, the program supports parallel computations to evaluate physical quantities at different values of the inverse coupling $\beta$, allowing users to specify the number of CPU cores. The software also enables the use of various initial conditions, as well as the specification of the save directory, file names, and background settings. Through this software, users can observe the configurations and the behavior of the plaquette under different $\beta$ values.

I Introduction

Currently, we recognize four fundamental interactions in nature: the strong, weak, electromagnetic, and gravitational interactions. Among these, the gravitational and electromagnetic interactions are long-range. The gravitational interaction plays a significant role in celestial dynamics, and the electromagnetic interaction manifests in many forms directly visible at the macroscopic scale. In contrast, the strong interaction is a short-range interaction, with effects typically observed at scales much smaller than those of electromagnetic interactions, usually at the femtometer level ($10^{-15}\,\mathrm{m}$). This scale makes the observation of strong interactions particularly challenging.

Moreover, strong interactions exhibit effects that have no direct analogue in the macroscopic world; for example, quarks carry fractional electric charges, unlike macroscopic objects, which can only possess integral charges. The mathematical framework describing strong interactions is also more complex than that for electromagnetic interactions: electromagnetic interactions are described by an abelian gauge group, while strong interactions require a non-abelian gauge group.

These factors contribute to the relatively less comprehensive understanding of strong interactions compared to electromagnetic interactions, leaving many avenues for further research. Quantum Chromodynamics (QCD) is the theory dedicated to the study of strong interactions, quarks, gluons, and related phenomena. Perturbation theory is ineffective in the low-energy regime of QCD, necessitating the development of non-perturbative methods to address QCD problems. Lattice QCD represents one such non-perturbative approach that allows the study of QCD from first principles. In lattice QCD, we utilize formulations in Euclidean spacetime rather than Minkowski spacetime. Additionally, it is essential to discretize physical quantities defined in continuous spacetime, and the discrete gauge action used is the Wilson gauge action[1, 2]

S_G[U] = \frac{\beta}{3} \sum_{n\in\Lambda} \sum_{\mu<\nu} \mathrm{Re}\,\mathrm{Tr}\left[1 - U_{\mu\nu}(n)\right], (1)

where $\beta$ is the inverse coupling and $U_{\mu\nu}(n)$ is the plaquette

U_{\mu\nu}(n) = U_\mu(n)\, U_\nu(n+\hat{\mu})\, U_{-\mu}(n+\hat{\mu}+\hat{\nu})\, U_{-\nu}(n+\hat{\nu}) (2)
= U_\mu(n)\, U_\nu(n+\hat{\mu})\, U_\mu(n+\hat{\nu})^\dagger\, U_\nu(n)^\dagger.

$U_\mu(n)$ is the link variable. A gauge-invariant quantity constructed from the plaquette is

u_0 = \left\langle \frac{1}{3}\,\mathrm{Tr}\, U_{pl} \right\rangle^{1/4}. (3)

In this article, the configuration of the Wilson gauge field and $u_0$ will be calculated.
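As a concrete illustration of Eqs. (2) and (3), the following minimal sketch computes the average plaquette and $u_0$ from a link-variable array. It assumes the links are stored as a complex NumPy array of shape $(N_t, N_x, N_y, N_z, 4, 3, 3)$, matching the configuration layout described in Sec. V.2; the function name average_plaquette is chosen here purely for illustration.

import numpy as np

def average_plaquette(U):
    """Average of (1/3) Re Tr U_{mu nu}(n) over all sites and mu < nu planes.

    U: complex array of shape (Nt, Nx, Ny, Nz, 4, 3, 3) holding the link variables.
    """
    dims = U.shape[:4]
    total, count = 0.0, 0
    for n in np.ndindex(*dims):
        for mu in range(4):
            for nu in range(mu + 1, 4):
                n_mu = tuple((n[d] + (d == mu)) % dims[d] for d in range(4))  # n + mu-hat (periodic)
                n_nu = tuple((n[d] + (d == nu)) % dims[d] for d in range(4))  # n + nu-hat (periodic)
                P = (U[n][mu] @ U[n_mu][nu]
                     @ U[n_nu][mu].conj().T @ U[n][nu].conj().T)              # Eq. (2)
                total += np.trace(P).real / 3.0
                count += 1
    return total / count

# u0 from Eq. (3): fourth root of the average plaquette
# u0 = average_plaquette(U) ** 0.25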

The Metropolis algorithm is used to handle the simulation of lattice QCD. In pure SU(3) lattice gauge theory, the conditional transition probability of a Markov process is[2]

P\left(U_n = U' \mid U_{n-1} = U\right) = T\left(U' \mid U\right), (4)

where the configuration changes from $U$ to $U'$. In many Monte Carlo algorithms, the detailed balance condition is used,

T(U' \mid U)\, P(U) = T(U \mid U')\, P(U'), (5)

where $P(U)$ satisfies $P(U) \propto \exp(-S[U])$. Thus

\frac{T(U' \mid U)}{T(U \mid U')} = \frac{P(U')}{P(U)} = e^{-\Delta S} \quad \text{with} \quad \Delta S = S[U'] - S[U]. (6)

The conditional transition probability can be further written as the product of an a priori selection probability $T_0(U' \mid U)$ and an acceptance probability $T_A(U' \mid U)$. Therefore, when the a priori selection probability is symmetric,

T_0(U \mid U') = T_0(U' \mid U), (7)

we can obtain

\frac{T_A(U' \mid U)}{T_A(U \mid U')} = e^{-\Delta S}. (8)

In the Metropolis algorithm, the acceptance probability $T_A(U' \mid U)$ can be simplified to[3]

T_A(U' \mid U) = \min\left(1, \exp(-\Delta S)\right), (9)

where $\Delta S = S[U'] - S[U]$.
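A minimal sketch of the corresponding accept/reject step for a single link is shown below. It assumes that the sum of staples A attached to the link and a candidate link U_new have already been computed by the surrounding update code; for the Wilson gauge action of Eq. (1), only the plaquettes containing the updated link contribute to $\Delta S$, giving $\Delta S = -(\beta/3)\,\mathrm{Re}\,\mathrm{Tr}[(U' - U)A]$.

import numpy as np

def metropolis_accept(U_old, U_new, A, beta):
    """Accept or reject a proposed link update with probability min(1, exp(-Delta S)).

    A is the sum of staples attached to the link; only the plaquettes containing
    this link enter the local change of the Wilson action, Eq. (1).
    """
    delta_S = -beta / 3.0 * np.trace((U_new - U_old) @ A).real
    # Accept with probability min(1, exp(-Delta S)): always accept if delta_S <= 0
    return np.random.rand() < np.exp(-delta_S)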

II Improvement of Computational Speed

The enhancement of computational speed arises from both hardware and software improvements.

From a hardware perspective, a combination of CPUs and GPUs can be employed to accelerate computations. Intel’s Gordon Moore famously proposed Moore’s Law[4, 5], which states that the number of transistors (or MOSFETs) on a computer chip doubles approximately every two years (or 18 months in some versions), leading to a corresponding doubling of microprocessor performance roughly every 18 months. This law dominated chip development for an extended period[6]; however, Intel’s production of 14-nanometer chips in 2014 was followed by a delay in the introduction of its 10-nanometer process until 2019. While some companies have claimed successful research into 7-nanometer or even smaller sizes, many current technology nodes have become equivalent dimensions rather than actual channel lengths of MOSFETs. From a physical standpoint, as MOSFET sizes continue to shrink, issues such as increased tunneling currents and decreased effective carrier mobility in the channel become more prevalent. Additionally, the radius of a silicon atom is approximately 111 picometers. These aspects indicate that the size of an individual MOSFET is nearing its physical limit. Hardware structure also plays a significant role; for instance, when CPUs and GPUs are manufactured using the same semiconductor technology, GPUs generally outperform CPUs in parallel matrix computations due to their architectural advantages.

On the software side, programs can be written more concisely, but this may come at the cost of performance, so it is essential to find ways to enhance the calculation speed of software. One study demonstrated that researchers were able to multiply two $4096 \times 4096$ matrices by parallelizing the code to run across all 18 processing cores, optimizing the memory hierarchy of the processor, and utilizing Intel’s Advanced Vector Extensions (AVX) instructions. The optimized code completed the computation in just 0.41 seconds, whereas Python 2 required 7 hours and Python 3 required 9 hours[6]. This underscores the importance of software-hardware synergy in improving computational speed.

Python allows for concise and highly extensible programming. However, Python code typically exhibits slower execution speeds, necessitating acceleration techniques to enhance computational efficiency. Just-In-Time (JIT) Compilation is a method that improves program execution efficiency by dynamically compiling bytecode or other intermediate code into machine code during program execution[7]. This allows the program to run directly on machine code, thereby reducing the overhead associated with interpretation.

In this study, Numba’s Just-In-Time (JIT) compilation is utilized to accelerate Python computations [8, 7]. For detailed results on the speed improvement, see the Results and Discussion section.
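The basic usage pattern assumed throughout this work is sketched below: decorating a function with Numba’s @njit causes it to be compiled to machine code on its first call, so subsequent calls avoid the interpreter overhead. The function and the matrix size here are purely illustrative.

import time
import numpy as np
from numba import njit

@njit
def frobenius_norm_sq(A):
    # Plain Python loops, compiled to machine code by Numba on the first call
    total = 0.0
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            total += A[i, j] * A[i, j]
    return total

A = np.random.rand(1000, 1000)
t0 = time.time(); frobenius_norm_sq(A); t1 = time.time()   # first call includes compilation
t2 = time.time(); frobenius_norm_sq(A); t3 = time.time()   # compiled machine code only
print(f"first call: {t1 - t0:.3f} s, second call: {t3 - t2:.3f} s")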

III Main Features of the Software

III.1 Graphical User Interface (GUI)

The GUI of the program is developed using Tkinter, providing an intuitive interface for parameter setup and simulation execution. The layout is modernized with custom fonts and colors, enhancing the user experience. Users can set the lattice size, the $\beta$ values, the number of iterations, the number of CPU cores, and the initialization scheme.
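The exact widget layout of the program is not reproduced here; the following is only a minimal sketch of the Tkinter pattern described above, with illustrative field names and a placeholder callback in place of the actual simulation launcher.

import tkinter as tk

def run_simulation():
    # Placeholder callback: read the fields and launch the simulation here
    lattice = lattice_entry.get()   # e.g. "8,4" or "8,4,4,4"
    betas = beta_entry.get()        # e.g. "5.0,5.2,5.4,5.6"
    print("Lattice:", lattice, "beta:", betas)

root = tk.Tk()
root.title("Lattice QCD Simulation")

tk.Label(root, text="Lattice size (Nt,Nx[,Ny,Nz])", font=("Arial", 11)).grid(row=0, column=0)
lattice_entry = tk.Entry(root); lattice_entry.grid(row=0, column=1)

tk.Label(root, text="beta values", font=("Arial", 11)).grid(row=1, column=0)
beta_entry = tk.Entry(root); beta_entry.grid(row=1, column=1)

tk.Button(root, text="Run Simulation", command=run_simulation).grid(row=2, column=0, columnspan=2)
root.mainloop()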

III.2 Custom Background Images

The GUI allows for custom background images to enhance visual appeal. Users can personalize the background by replacing the background.jpg file used in the GUI.

III.3 Initialization Schemes and Boundary Conditions

The program supports two initialization schemes for the lattice (hot start or cold start), enabling users to explore various initial conditions. Additionally, periodic boundary conditions are employed.
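In practice, periodic boundary conditions amount to wrapping neighbour indices with the modulo operation; a minimal sketch of such a helper (the function name is illustrative) is:

def neighbor(n, mu, dims):
    """Site n shifted one step in direction mu on a periodic lattice.

    n: tuple (t, x, y, z); dims: lattice extents (Nt, Nx, Ny, Nz).
    """
    return tuple((n[d] + (1 if d == mu else 0)) % dims[d] for d in range(4))

# Example: on an 8x4x4x4 lattice, the x-neighbour of (0, 3, 0, 0) wraps around:
# neighbor((0, 3, 0, 0), mu=1, dims=(8, 4, 4, 4)) -> (0, 0, 0, 0)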

III.4 Parallel Processing

Users can process simulations for multiple $\beta$ values in parallel, specifying the number of CPU cores to optimize the utilization of multi-core systems. The program leverages Python’s multiprocessing module to efficiently handle simulations across different $\beta$ values, executing each simulation in an independent process that updates its results separately.

The current version of the program does not implement Numba parallelization for the matrix multiplications within a single configuration. This decision is based on the observation that such parallelization involves frequent thread activation and deactivation, which is time-consuming and less efficient than running parallel simulations across different $\beta$ values. However, if computations are limited to a single $\beta$, or if an exceptionally large number of CPU cores is available, internal parallel computations may still be considered.
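A minimal sketch of this scheme is given below, assuming a worker function run_simulation(beta) that performs the full Metropolis update chain for one $\beta$ value; the function body and names are illustrative placeholders.

from multiprocessing import Pool

def run_simulation(beta):
    # Placeholder for the full Metropolis simulation at a single beta value;
    # in the real program this would return the configuration and the u0 history.
    return beta

if __name__ == "__main__":
    betas = [5.0, 5.2, 5.4, 5.6]   # one independent process per beta value
    n_cores = 4                    # user-specified number of CPU cores
    with Pool(processes=n_cores) as pool:
        results = pool.map(run_simulation, betas)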

III.5 Visualization and Data Saving

Upon completion of the simulation, the program saves the final lattice configuration and the $u_0$ data in .npy format, automatically storing them in the specified directory. The .npy format is a file format used by NumPy[8]. Graphs illustrating the variation of $u_0$ with the iteration count are generated for each $\beta$, assisting users in understanding the temporal evolution of the system. The images are generated and saved with Matplotlib[9].
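The saved outputs can be reloaded and re-plotted outside the GUI; the following is a minimal sketch, assuming file names that follow the “Test” prefix convention of Sec. IV (the exact file names written by the program are not reproduced here and are illustrative).

import numpy as np
import matplotlib.pyplot as plt

u0 = np.load("Test_u0_beta5.6.npy")            # illustrative file name: u0 history for one beta
config = np.load("Test_config_beta5.6.npy")    # illustrative file name: shape (Nt, Nx, Ny, Nz, 4, 3, 3)

plt.plot(np.arange(1, len(u0) + 1), u0)
plt.xlabel("iteration")
plt.ylabel(r"$u_0$")
plt.savefig("u0_beta5.6.png")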

IV User Instructions

Refer to Fig. 1 for an example of input parameters. In the GUI, set the required parameters (lattice size, $\beta$, iteration counts, CPU core numbers, and initialization scheme).

1. Lattice Size: The lattice size supports two input methods. When $N_x = N_y = N_z$, only $N_t$ and $N_x$ need to be specified, and the program will automatically interpret these as $N_t$, $N_x$, $N_y$, and $N_z$. In this example, the input “8,4” indicates $N_t = 8$ and $N_x = N_y = N_z = 4$. If $N_x$, $N_y$, and $N_z$ are not equal, all four parameters must be provided, separated by commas (see the parsing sketch after this list).

2. $\beta$: Multiple values of $\beta$ can be input simultaneously to facilitate parallel computations. Different values should be separated by commas.

3. Iteration Counts: One complete update of all lattice points is called one sweep. Furthermore, since computing the sum of staples is costly, it is computationally economical to repeat the updating step 10 times for each visited link variable[2]. In this context, one iteration corresponds to 10 sweeps.

4. Initialization Scheme: The default option is a cold start; however, users can also select a hot start.

5. File Saving Options: Users can specify the save directory and file name. In this example, if the file name is designated as “Test”, all file names of generated configurations and plaquette-related files will begin with “Test”.

After completing these parameter settings, press the “Run Simulation” button to initiate the simulation.
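The two input methods for the lattice size described in item 1 above can be handled by a short parsing routine; a minimal sketch (the function name is illustrative) is:

def parse_lattice_size(text):
    """Parse "Nt,Nx" or "Nt,Nx,Ny,Nz" into an (Nt, Nx, Ny, Nz) tuple."""
    values = [int(v) for v in text.split(",")]
    if len(values) == 2:        # "8,4" -> Nt=8, Nx=Ny=Nz=4
        nt, nx = values
        return nt, nx, nx, nx
    if len(values) == 4:        # all four extents given explicitly
        return tuple(values)
    raise ValueError("expected 2 or 4 comma-separated integers")

# parse_lattice_size("8,4")     -> (8, 4, 4, 4)
# parse_lattice_size("8,4,6,2") -> (8, 4, 6, 2)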

Figure 1: An example of parameters.

V Results and Discussion

This paper presents a lattice QCD simulation program based on the Metropolis algorithm, utilizing a GUI to facilitate intuitive user input, thereby visualizing and simplifying the simulation process. This implementation demonstrates the program’s advantages in user-friendliness. In this section, the discussion first focuses on the computational speed of JIT-optimized Python 3; further tests are then performed on the data generated by this GUI software.

V.1 Computational Speed

Both C++ and JIT-optimized Python 3 programs were developed for multiplying two $SIZE \times SIZE$ matrices, and the computation time was measured. The relevant codes are provided in Appendix A and Appendix B. As shown in Fig. 2, the computation time for matrix multiplication increases as the matrix size ($SIZE$) grows from 200 to 2000 for both methods.

Figure 2: The time cost for multiplying two $SIZE \times SIZE$ matrices is evaluated, with $SIZE$ incrementally increasing from 200 to 2000 in intervals of 100. (a) The C++ program; (b) the JIT-optimized Python 3 program.

The time complexity of the standard matrix multiplication is $O(SIZE^3)$. Therefore, for two $SIZE \times SIZE$ matrices, the time required for multiplication is approximately proportional to $SIZE^3$, so doubling the matrix size should increase the computation time by a factor of about eight. In this test, using C++ as an example, the following time costs were observed for smaller matrix sizes: $SIZE = 200$ took 0.033 s, $SIZE = 400$ took 0.259 s, and $SIZE = 800$ took 2.232 s, consistent with the expected eightfold growth (0.259/0.033 ≈ 7.8 and 2.232/0.259 ≈ 8.6). However, for larger matrices, such as $SIZE = 1600$, the time cost escalated to 37.565 s, giving 37.565/2.232 ≈ 16.8, far above the expected factor of eight.

Several factors contribute to this phenomenon. Matrix multiplication involves extensive data reading and writing; when the matrix size exceeds the CPU’s cache capacity, memory accesses become more frequent, increasing the computation time. Furthermore, matrix multiplication requires significant data transfer, which for large matrices can create a memory bandwidth bottleneck. Additionally, matrix rows and columns are stored linearly in memory, and accessing a matrix in column order, as the inner loop of the triple-loop algorithm does for the second factor, results in poor cache utilization and degrades performance. Moreover, the standard triple-loop algorithm is inefficient for large-scale matrices; asymptotically faster algorithms exist, but they are more complex to implement. Consequently, due to the aforementioned factors, computation time often increases significantly in practical applications.

The results indicate that the JIT-optimized Python 3 program took 0.117 s to multiply two $2000 \times 2000$ double-precision matrices, while the C++ implementation took 82.268 s. It is evident that, thanks to the optimized matrix-multiplication routines provided by NumPy and Numba (np.dot ultimately dispatches to an optimized BLAS implementation), the JIT-optimized Python 3 implementation is far faster than the naive C++ triple-loop algorithm. Moreover, the Python program is significantly more concise, which is advantageous when writing larger programs.

V.2 Configuration Testing

The software was tested with the Lattice Size $(N_t, N_x, N_y, N_z)$ set to (8, 4, 4, 4), $\beta = 5.0, 5.2, 5.4, 5.6$, the number of iterations set to 2000, and the number of CPU cores set to 4. The key results and discussions are presented below.

The generated configurations consist of SU(3) matrices, stored with dimensions $N_t \times N_x \times N_y \times N_z \times 4 \times 3 \times 3$, where the factor 4 counts the four spacetime directions of the link variables and the trailing $3 \times 3$ is the size of the SU(3) matrices. This shape counts complex-valued entries; if the entries are counted as real numbers, the overall size becomes $N_t \times N_x \times N_y \times N_z \times 4 \times 3 \times 3 \times 2$.

A thorough examination of all the $3 \times 3$ complex matrices in the configuration revealed that they are indeed SU(3) matrices.
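Such a check can be reproduced directly from a saved configuration by testing the unitarity and the determinant of every link matrix; a minimal sketch (the tolerance and file name are illustrative) is:

import numpy as np

def is_su3(config, tol=1e-10):
    """Check that every 3x3 link matrix is unitary with determinant 1."""
    links = config.reshape(-1, 3, 3)          # flatten all sites and directions
    identity = np.eye(3)
    for U in links:
        if not np.allclose(U @ U.conj().T, identity, atol=tol):
            return False
        if not np.isclose(np.linalg.det(U), 1.0, atol=tol):
            return False
    return True

# config = np.load("Test_config_beta5.6.npy")   # illustrative file name
# print(is_su3(config))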

V.3 Plaquette Testing

As illustrated in Fig. 3, after 1000 iterations the curves for the same $\beta$ starting from different initial conditions converge. This convergence is observed for all tested $\beta$ values, indicating that the system reaches equilibrium after 1000 iterations.

Figure 3: The evolution of $u_0$ with the iteration count under different conditions. The figure shows how $u_0$ varies as the iterations progress and how the varying conditions affect the system’s behavior; such observations are important for understanding the dynamics of the system and for validating the simulation results.

The software allows for intuitive input of parameters such as the lattice size and the number of iterations, executes the Metropolis update process in parallel, and brings the system to equilibrium, demonstrating the software’s effectiveness in lattice QCD simulations. The results show that, as $\beta$ and the iteration count are adjusted, the physical quantity $u_0$ exhibits reasonable variations that align with physical expectations.

VI Conclusion

This paper presents a graphical user interface (GUI) software for lattice QCD based on Python acceleration techniques, achieving a complete workflow from input parameters to result output. This approach offers a new perspective for studying numerical simulations of lattice QCD, facilitating wider user adoption and understanding through an intuitive GUI design. The simulation software provides an easy-to-use platform that combines parallel computing and interactive visualization, assisting users in performing lattice QCD calculations. The main conclusions are as follows:

User Friendliness and Experience. The GUI facilitates parameter input and result saving, allowing users to conduct physical simulation experiments without writing code. This interface design is particularly appealing to students, users outside computational physics, and researchers in experimental physics who may not be adept at programming. Additionally, the software incorporates customizable background images, enabling users to modify the background according to their preferences. The background images enhance the visual feedback, making the simulation experience more vivid and intuitive, thereby highlighting the importance of visual elements in improving interface friendliness. Furthermore, the software design emphasizes flexibility, permitting users to freely adjust simulation parameters according to specific research needs. Users can input the lattice size, set different ranges of $\beta$, and choose initial conditions, accommodating a variety of experimental contexts.

Python Acceleration Techniques. Python offers simplicity and high scalability in programming. However, traditional Python programs are significantly slower than C/C++ for numerical calculations. This study utilizes Just-In-Time (JIT) compilation techniques to accelerate Python computations, preserving Python’s simplicity while enhancing its computational speed. In the example in this article, the JIT-optimized Python program for matrix multiplication even outperforms the traditional C++ triple-loop algorithm.

Parallel Computing. The software’s parallel computing capability significantly improves simulation efficiency. By fully leveraging the computational power of multi-core processors, users can obtain simulation results under different $\beta$ values in a shorter time. This efficiency not only conserves computational resources but also enables researchers to conduct larger-scale experiments, exploring a broader parameter space and thereby advancing physical research.

Diverse Data Output and Research Applicability. The program outputs include the final lattice configurations and the $u_0$ data as a function of the iteration count, saved in .npy format. Additionally, graphs depicting the variation of $u_0$ with iteration are generated for each $\beta$. These outputs provide researchers with diverse options for post-processing and data analysis. The output images illustrate the evolution of $u_0$ under different $\beta$ values and initialization schemes, visually reflecting the impact of the model parameters on the system state. The .npy data files can be loaded into other computational environments for further analysis and data mining, thereby providing researchers with convenient data management and analysis options.

Future Work. Overall, the functionality of this program effectively implements stable simulations using the Metropolis algorithm and demonstrates excellence in numerical simulation and user interface friendliness. The analysis and discussion of the results indicate that this method lays a solid foundation for further model expansion and performance optimization. Due to Python’s high scalability, this software is easily extendable to incorporate more functionalities in future versions, including additional physical quantities and alternative actions. Currently, the software only supports simple periodic boundary conditions, and future versions may consider other boundary conditions. Furthermore, machine learning techniques are increasingly influencing lattice QCD research, and integration of machine learning content will be considered in subsequent versions[10]. Lastly, further optimization of the parallelization aspect will include the potential addition of GPU acceleration.

VII Software Acquisition

You may need the following.

  • Python 3.x

  • Some packages: numpy, numba, matplotlib

Acknowledgments

This research utilized ChatGPT to polish the paper. I extend my gratitude to the contributors of GPT for their valuable contributions[11, 12].

Appendix A Python program for matrix multiplication

Listing 1: Python code
import numpy as np
import time
from numba import njit, set_num_threads

set_num_threads(1)

# Matrix multiplication function
@njit
def matrix_multiply(A, B):
    result = np.dot(A, B)
    return result

for SIZE in range(100, 2100, 100):
    # The start time
    start_time = time.time()
    # Generate two random matrices
    matrix1 = np.random.rand(SIZE, SIZE)
    matrix2 = np.random.rand(SIZE, SIZE)
    # Matrix multiplication
    result = matrix_multiply(matrix1, matrix2)
    # The end time and the time taken
    end_time = time.time()
    print(f"{SIZE} {end_time - start_time:.3f}")

Appendix B C++ program for matrix multiplication

Listing 2: C++ code
#include <iostream>
#include <cstdlib>
#include <ctime>

// Matrix multiplication function
void multiplyMatrices(double** matrix1, double** matrix2, double** result, int SIZE) {
    for (int i = 0; i < SIZE; ++i) {
        for (int j = 0; j < SIZE; ++j) {
            result[i][j] = 0;
            for (int k = 0; k < SIZE; ++k) {
                result[i][j] += matrix1[i][k] * matrix2[k][j];
            }
        }
    }
}

int main() {
    clock_t start_time, end_time;
    for (int SIZE = 100; SIZE < 2100; SIZE += 100) {
        // The start time
        start_time = clock();
        // Create random matrices matrix1 and matrix2
        double **matrix1 = new double *[SIZE];
        double **matrix2 = new double *[SIZE];
        double **result = new double *[SIZE];
        for (int i = 0; i < SIZE; ++i) {
            matrix1[i] = new double[SIZE];
            matrix2[i] = new double[SIZE];
            result[i] = new double[SIZE];
        }
        std::srand(static_cast<unsigned int>(std::time(nullptr)));
        // Fill matrices matrix1 and matrix2 with random numbers
        for (int i = 0; i < SIZE; ++i) {
            for (int j = 0; j < SIZE; ++j) {
                matrix1[i][j] = static_cast<double>(std::rand()) / RAND_MAX;
                matrix2[i][j] = static_cast<double>(std::rand()) / RAND_MAX;
            }
        }
        // Multiply the matrices
        multiplyMatrices(matrix1, matrix2, result, SIZE);
        // Free the allocated memory
        for (int i = 0; i < SIZE; ++i) {
            delete[] matrix1[i];
            delete[] matrix2[i];
            delete[] result[i];
        }
        delete[] matrix1;
        delete[] matrix2;
        delete[] result;
        // The end time and the time taken
        end_time = clock();
        std::cout << SIZE << " " << (double)(end_time - start_time) / CLOCKS_PER_SEC << std::endl;
    }
    return 0;
}

References

  • Wilson [1974] K. G. Wilson, Phys. Rev. D 10, 2445 (1974).
  • Gattringer and Lang [2010] C. Gattringer and C. B. Lang, Quantum chromodynamics on the lattice, Vol. 788 (Springer, Berlin, 2010).
  • Metropolis et al. [1953] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, The Journal of Chemical Physics 21, 1087 (1953).
  • Moore [1965] G. E. Moore, Electronics 38, 114 (1965).
  • Moore [2006] G. E. Moore, IEEE Solid-State Circuits Society Newsletter 11, 36 (2006).
  • Leiserson et al. [2020] C. E. Leiserson, N. C. Thompson, J. S. Emer, B. C. Kuszmaul, B. W. Lampson, D. Sanchez, and T. B. Schardl, Science 368, eaam9744 (2020), https://www.science.org/doi/pdf/10.1126/science.aam9744 .
  • Lam et al. [2015] S. K. Lam, A. Pitrou, and S. Seibert, in Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC (2015) pp. 1–6.
  • Harris et al. [2020] C. R. Harris, K. J. Millman, S. J. Van Der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, et al., Nature 585, 357 (2020).
  • Hunter [2007] J. D. Hunter, Computing in Science & Engineering 9, 90 (2007).
  • Gao et al. [2024] L. Gao, H. Ying, and J. Zhang, Physical Review D 109, 074509 (2024).
  • OpenAI [2023] OpenAI, arXiv preprint arXiv:2303.08774  (2023), arXiv:2303.08774 [cs.CL] .
  • Lightman et al. [2023] H. Lightman, V. Kosaraju, Y. Burda, H. Edwards, B. Baker, T. Lee, J. Leike, J. Schulman, I. Sutskever, and K. Cobbe, arXiv preprint arXiv:2305.20050  (2023), arXiv:2305.20050 [cs.LG] .