
A High-Accuracy Alignment Approach for Solar Images of Different Wavelengths

Yun Wang, Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216, China; University of Chinese Academy of Sciences, Beijing 101408, China
Kaifan Ji (jkf@ynao.ac.cn), Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216, China
Zhenyu Jin, Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216, China; Yunnan Key Laboratory of Solar Physics and Space Science, 650216, China
Hui Liu, Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216, China
Abstract

Image alignment plays a crucial role in solar physics research, primarily involving translation, rotation, and scaling. Images of the chromosphere and transition region at different wavelengths are structurally complex and only partially similar, which makes their alignment challenging. Therefore, a novel alignment approach based on dense optical flow (OF) and the RANSAC algorithm is proposed in this paper. It uses the OF vectors of similar regions between images as feature points for matching, and then calculates the scaling, rotation, and translation between the images. The study selects three wavelengths for two groups of alignment experiments: the 304 Å channel of the Atmospheric Imaging Assembly (AIA), the 1216 Å channel of the Solar Disk Imager (SDI), and the 465 Å channel of the Solar Upper Transition Region Imager (SUTRI). Two methods are used to evaluate alignment accuracy: Monte Carlo simulation and Uncertainty Analysis Based on the Jacobian Matrix (UABJM). The evaluation results indicate that the approach achieves sub-pixel accuracy in the alignment of AIA 304 Å and SDI 1216 Å, and higher accuracy in the alignment of AIA 304 Å and SUTRI 465 Å, which have greater similarity.

Astronomical methods (1043) — Astronomy data analysis (1858) — Solar transition region (1532)

1 Introduction

The Sun emits electromagnetic radiation across various wavelengths, including infrared, visible light, ultraviolet, extreme ultraviolet, and X-rays. Observations at these different wavelengths provide insight into the physical processes occurring in the solar atmosphere at varying heights and temperatures, and multi-wavelength observation is therefore an important method for empirical solar physics research, offering comprehensive, three-dimensional information on solar activity. Many solar activity phenomena exhibit different observational characteristics across different wavelengths, with varying brightness and spatial forms. Image alignment at a single wavelength facilitates observation and study of the Sun's evolution at that specific wavelength. Conversely, image alignment across multiple wavelengths facilitates a comprehensive analysis of these activity phenomena, thereby helping to uncover the patterns of solar activity.

A telescope pointing and tracking system can be used for single-wavelength image stabilization (Staiger, 2013). This makes it possible to acquire information such as heliocentric coordinates and to align single-wavelength images. However, even if the telescope is designed to have sub-arcsecond pointing and tracking accuracy, problems such as bending of the optical support structure and thermal jitter can lead to inaccuracies during observation. Shimizu et al. (2007) utilized two Ultra Fine Sun Sensors (UFSS-A and UFSS-B) to detect satellite jitter. Jitter can be measured not only by hardware but also by various algorithms: Orange et al. (2014), for example, performed pointing jitter measurements on the Helioseismic and Magnetic Imager (HMI) and AIA on the Solar Dynamics Observatory (SDO) using a cross-correlation algorithm. In contrast to single-wavelength alignment, multi-wavelength image alignment registers solar images taken at different wavelengths. These images usually originate from different wavelengths of the same observing instrument or from different instruments. Small offsets and pixel size differences may also exist between images taken at different wavelengths by the same instrument (Guglielmino et al., 2010); Shimizu et al. (2007) evaluated the internal offsets and size differences of the broadband filter imager of the Solar Optical Telescope (SOT). The alignment of images across different wavelengths poses a significant challenge because of the varied criteria associated with different wavelengths and instruments. The challenge is further compounded by scale, rotation, and translation differences, which can arise from instrument differences, wavelength differences, and disparities in image processing techniques. Therefore, various algorithms need to be developed to realize image alignment between different wavelengths.

Typically, image alignment involves the estimation of translation, rotation, and scaling. With the increasing demand for fine information in solar physics research, the accuracy requirements of image alignment have increased. At present, two classical classes of methods are employed for solar image alignment. One is the region-based statistical method, which maximizes the correlation between images through the statistical information of image regions to achieve alignment. Cross-correlation (CC) and phase correlation (PC) are two common statistical alignment methods. Kuehner et al. (2010) achieved multi-wavelength alignment on HINODE/SOT with the CC algorithm, and Berkebile-Stoiser et al. (2009) also used the CC algorithm to align images from the Dutch Open Telescope (DOT) and the Transition Region and Coronal Explorer (TRACE). However, while CC performs well for sub-pixel translation, it has difficulty with scale and rotation transformations. Conversely, the PC algorithm has been developed to recover rotation and scale between images using the Fourier transform, the polar coordinate transform, and the logarithmic transform (Reddy & Chatterji, 1996). Druckmüller (2009) achieved the alignment of coronal images during a total solar eclipse by measuring the translations, rotations, and scale factors between images with the PC algorithm. The iterative phase correlation (IPC) algorithm is an extension of the PC algorithm that uses the differential evolution algorithm to optimize the parameters and improve the effectiveness and accuracy of the method (Hrazdíra et al., 2020). The other class is the feature-based matching method, which utilizes salient regions, lines, or points in the image as distinct reference features. Lowe (1999, 2004) designed and developed the scale invariant feature transform (SIFT) method by combining the steps of feature point detection, vector generation, and matching search. In the solar photosphere, sunspot features are prominent, which has prompted the application of the SIFT method in astronomy (Yue et al., 2015). Yang et al. (2018) employed this method to align and localize local solar magnetic field images from the Huairou Solar Observatory (HSO) with full-disk solar magnetic field images from SDO/HMI. Later, Ji et al. (2019) employed SIFT to align HMI, GONG, and AIA 304 Å data with TiO and Hα images acquired by the New Vacuum Solar Telescope (NVST). In addition, SIFT has been utilized to search for solar active regions (Jiang et al., 2022).

Optical flow (OF) represents a significant research direction within the domain of computer vision, with applications including target recognition and tracking. Recently, OF methods have also been applied to the alignment of solar images. Cai et al. (2022) utilized the OF algorithm to align Hα data from the NVST and evaluated its accuracy with raster images obtained from the Fast Imaging Solar Spectrometer (FISS) operated at the GST; the accuracy of the method is higher than that of the CC algorithm. Yang et al. (2022) used OF and SIFT algorithms to align data from the GST. However, both of these works used OF only for the translational direction. The OF method can also be used for high-resolution solar image reconstruction (Liu et al., 2022).

The distinct observational characteristics exhibited by different wavelengths are attributable to the varied solar atmospheric layers observed. To align solar images of different wavelengths, similar structures must exist between these wavelengths. However, when the similarity between images is low, particularly when only part of the structure is shared, achieving an accurate alignment can be a formidable challenge. Correlation algorithms rely on the similarity between images: if only partially similar structures exist between wavelengths, the accuracy of the correlation algorithm decreases. For feature-point matching algorithms, alignment accuracy usually increases with the number of feature points; in practice, however, the number of feature points in solar images is often small and considerable manual intervention is required, making it difficult to improve the alignment accuracy. When aligning images from the SDI 1216 Å and AIA 304 Å channels, the two datasets exhibit significant differences in spatial resolution, leading to inherently low global similarity, and their eruptive regions and edge boundaries show limited structural correspondence (Figure 1). We use the Bhattacharyya (1943) coefficient to quantify the similarity between the images. A comparison of the intensity histograms of the solar images yields a Bhattacharyya coefficient of 0.58 for SDI 1216 Å and AIA 304 Å and 0.79 for SUTRI 465 Å and AIA 304 Å. Therefore, the alignment performance of these two classes of methods is not ideal.
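For reference, a minimal sketch of this histogram-based similarity measure is given below (NumPy only); the bin count and the shared intensity range are our own choices and are not specified above.

```python
import numpy as np

def bhattacharyya_coefficient(img_a, img_b, bins=256):
    """Bhattacharyya coefficient between the intensity histograms of two images.

    Returns a value in [0, 1]; 1 means the normalized histograms are identical.
    """
    # Use a common intensity range so both histograms share the same bin edges.
    lo = min(img_a.min(), img_b.min())
    hi = max(img_a.max(), img_b.max())
    h_a, _ = np.histogram(img_a, bins=bins, range=(lo, hi))
    h_b, _ = np.histogram(img_b, bins=bins, range=(lo, hi))
    # Normalize the histograms to probability distributions.
    p = h_a / h_a.sum()
    q = h_b / h_b.sum()
    return float(np.sum(np.sqrt(p * q)))
```

Applied to a pair of co-temporal full-disk images, a measure of this form yields coefficients such as those quoted above, although the exact value depends on binning and preprocessing.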

In this paper, a novel solar image alignment approach based on dense OF and the RANSAC algorithm is proposed. It realizes region-based matching by exploiting the partially similar structures between images, even for solar images in which feature points are difficult to find, and then estimates the translation, rotation, and scale between them. The paper is organized as follows: Section 2 describes the solar image data used for alignment; Section 3 details the alignment method using the example of SDI 1216 Å and AIA 304 Å alignment; Section 4 evaluates the alignment accuracy of the method and shows the results; and conclusions and discussions are given in Section 5.

Figure 1: Comparison between AIA 304 Å and SDI 1216 Å. A1 is AIA 304 Å and B1 is SDI 1216 Å. A2 and B2 compare their eruptive regions, while A3 and B3 compare their edge regions.

2 Data

Three data groups are selected for analysis and for validating the proposed approach: AIA 304 Å, SDI 1216 Å, and SUTRI 465 Å. The three selected data groups come from neighboring regions of the solar atmosphere in adjacent time periods and have partially similar structural features. Among them, AIA 304 Å is primarily employed for the observation of the chromosphere and the transition region, SDI 1216 Å is utilized for the observation of the chromosphere-to-corona region of the Sun, and SUTRI 465 Å focuses on the upper transition region of the Sun. In this image alignment experiment, two groups of data are selected for aligning different wavelengths: SUTRI 465 Å and AIA 304 Å data on November 14, 2022, and SDI 1216 Å and AIA 304 Å data on January 31, 2024.

The Atmospheric Imaging Assembly (AIA) is one of the instruments on board the Solar Dynamics Observatory (SDO), which was launched on February 11, 2010 (Pesnell et al., 2012). The AIA observes ten different wavelengths, including seven extreme ultraviolet, two ultraviolet, and one visible wavelength. In in-orbit observations, AIA 304 Å has a spatial resolution of 1.5′′ and a temporal resolution of 12 s, generating images of 4096×4096 pixels. The image scale is 0.6′′ pixel⁻¹ (Lemen et al., 2012).

The Lyman-alpha (Lyα) Solar Telescope (LST) is one of the payloads on board the Advanced Space-based Solar Observatory (ASO-S), which was successfully launched on October 8, 2022 (Gan et al., 2023). The Solar Disk Imager (SDI) is an instrument onboard the LST operating at the Lyα line (1216 Å) (Chen et al., 2019). In in-orbit observations, SDI has a spatial resolution of approximately 9.5′′ (Chen et al., 2024) and a temporal resolution of 10 s, generating images of 4608×4608 pixels. The image scale is 0.5′′ pixel⁻¹ (Li et al., 2019).

The Solar Upper Transition Region Imager (SUTRI) is carried on board the SATech-01 satellite of the Chinese Academy of Sciences, which was successfully launched on July 22, 2022, with an orbital period of 96 min (Zhang et al., 2024). SUTRI operates at the Ne VII 465 Å spectral line, which is mainly formed in the transition region of the solar atmosphere at about 0.5 MK. In in-orbit observations, SUTRI has a spatial resolution of 8′′ and a temporal resolution of 30 s, generating images of 2048×2048 pixels. The image scale is 1.229′′ pixel⁻¹ (Bai et al., 2023).

One image from each of the above three data groups is selected as a reference for the simulation of the alignment experiment. The generation of three test groups is achieved through the presetting of four parameters: scale, rotation angle, x-direction displacement, and y-direction displacement. Each test group contains 1000 randomly generated images that have undergone similarity transformation. The alignment evaluation of the single-wavelength simulated data is performed in Section 4 through three test groups.

3 Alignment Method

Given the differences in the solar atmospheric regions observed by AIA 304 Å and SDI 1216 Å, we aim to align the two images by focusing on the regions that are similar in both. These two wavelengths contain similar structures within the quiet region of the Sun, and the dense OF method facilitates the extraction of these similar structures. Because of the inherent limitations of the OF algorithm, however, its capacity to handle large motions is constrained. Consequently, a coarse alignment of the image must be performed before the OF vectors are calculated. The dense OF algorithm also produces mismatched OF vectors. These erroneous vectors appear in active solar regions, such as flares, as well as at the solar limb, which are dissimilar regions. Therefore, these dissimilar regions must be eliminated with the help of the RANSAC algorithm, retaining only the similar regions between images. The OF vectors within these regions are then used as feature points to fit the similarity transformation model, from which the translation, rotation, and scale parameters between the images are derived.

In summary, the alignment method proposed in this paper consists of three main steps: First, the process of coarse alignment is executed by employing the FITS header file data of the two solar images; Second, the OF field of the two images is calculated following the coarse alignment; Lastly, the OF vectors within the OF field are used as feature points to fit the similarity transformation model using RANSAC. This process enables the derivation of the similarity transformation matrix and the realization of the fine alignment of the images. Figure 2 shows the specific registration process of AIA 304 Å and SDI 1216 Å.

Figure 2: Alignment flowchart for AIA 304 Å and SDI 1216 Å.

3.1 Coarse Alignment

Initially, a preliminary alignment of the images must be conducted to ascertain that the geometric transformations between the two solar images are small. This is necessary to facilitate the effective implementation of the OF algorithm. Typically, the FITS header file of an image contains essential information such as the rotation angle, the pixel scale, and the sun center position. Leveraging this information, we are able to perform a coarse alignment of solar images in different wavelengths using similarity transformation. However, given the differences in observation equipment and image processing methods, this alignment may still lead to image errors in practice. Thus, a subsequent fine alignment is required.
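As an illustration of this step, the sketch below performs a header-driven coarse alignment. It assumes the common solar FITS keywords CDELT1 (plate scale), CRPIX1/CRPIX2 (Sun-center pixel), and CROTA2 (roll angle); actual keyword names, units, and sign conventions vary between instruments, so this is a sketch rather than the exact implementation used for the data in this paper.

```python
import numpy as np
import cv2
from astropy.io import fits

def coarse_align(src_path, ref_path):
    """Coarsely align the image in src_path to the pixel grid of ref_path
    using only header metadata (plate scale, roll angle, Sun-center pixel)."""
    src_hdu = fits.open(src_path)[0]
    ref_hdu = fits.open(ref_path)[0]
    src = src_hdu.data.astype(np.float32)
    src_hdr, ref_hdr = src_hdu.header, ref_hdu.header

    # Zoom factor that maps the source plate scale onto the reference scale.
    scale = src_hdr["CDELT1"] / ref_hdr["CDELT1"]
    # Relative roll angle between the two images (degrees).
    rot = src_hdr.get("CROTA2", 0.0) - ref_hdr.get("CROTA2", 0.0)
    # Sun-center pixels (FITS CRPIX is 1-based, OpenCV coordinates are 0-based).
    src_center = (src_hdr["CRPIX1"] - 1.0, src_hdr["CRPIX2"] - 1.0)
    ref_center = (ref_hdr["CRPIX1"] - 1.0, ref_hdr["CRPIX2"] - 1.0)

    # Rotate and scale about the source Sun center ...
    M = cv2.getRotationMatrix2D(src_center, rot, scale)
    # ... then translate the Sun center onto the reference Sun center.
    M[0, 2] += ref_center[0] - src_center[0]
    M[1, 2] += ref_center[1] - src_center[1]

    h, w = ref_hdu.data.shape
    return cv2.warpAffine(src, M, (w, h), flags=cv2.INTER_LINEAR)
```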

3.2 Calculate the OF field

Optical flow, as an important research area in the field of computer vision, is the instantaneous velocity that describes the motion of pixels of a spatially moving object on the observation imaging plane (Berthold & Brian, 1981). The OF method calculates the OF field on an image. The field consists of a large number of OF vectors. The OF method is employed in the domains of object recognition and tracking. It utilizes the variation of pixels in an image sequence in the time domain and the correlation between adjacent frames to find the correspondence that exists between the previous frame and the current frame. This method enables the calculation of motion information between adjacent frames, facilitating the analysis of object motion.

The OF method has three basic assumptions: (1) constant brightness: the brightness of the same target does not change when it moves between frames; (2) time consistency: changes in time will not cause drastic changes in the target position, and the displacement between neighboring frames should be relatively small; (3) spatial consistency: object motion in an image is typically smooth, with neighboring pixel points exhibiting similar velocities and orientations. These assumptions can be expressed as follows:

I(x,y,t) = I(x+\Delta x,\, y+\Delta y,\, t+\Delta t), \qquad (1)

where $I$ denotes the pixel intensity at $(x,y)$ in the first frame; after a time interval $\Delta t$, the pixel moves by a displacement $(\Delta x, \Delta y)$ in the next frame. Because the motion is small, expanding the right-hand side of the equation in a first-order Taylor series and neglecting the higher-order terms gives:

\frac{\partial I}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t} + \frac{\partial I}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}t} + \frac{\partial I}{\partial t} = 0, \qquad (2)

where $\frac{\partial I}{\partial x}$ and $\frac{\partial I}{\partial y}$ are the spatial gradients of the image and $\frac{\partial I}{\partial t}$ is the gradient in the temporal direction; $\frac{\mathrm{d}x}{\mathrm{d}t}$ and $\frac{\mathrm{d}y}{\mathrm{d}t}$ are the components of the OF vector and the unknown quantities to be solved.

At present, there are many algorithms and theories for calculating the OF field. The Gunnar Farnebäck algorithm (Farnebäck, 2003) is a dense OF method that calculates the motion information of each pixel individually. It generates a Gaussian pyramid of images at different resolutions for multi-resolution image search. While this algorithm is more time-consuming than other OF methods (such as sparse optical flow), it achieves high accuracy for images with complex structures. For this reason, it is selected here to calculate the motion information for each pixel in the solar image. Although the OF vector itself describes the instantaneous velocity of pixels between images, it can be approximated as the pixel displacement under certain circumstances. The image data in this paper are processed with the Gunnar Farnebäck algorithm, and the resulting OF field is the relative displacement field of the image pixels.

Image masking and region sampling must be implemented before the OF field is passed to RANSAC. In essence, the OF field only needs to be calculated over the solar region rather than the entire image. A mask is therefore employed so that only the solar disk is considered. Given that the Sun is already centered in the image after the coarse alignment, the mask can be used to remove the solar limb and the regions beyond it. This reduces both the computational burden and the influence of the unreliable limb regions. After the dense OF has been calculated for the solar region, a sampling operation is performed by dividing the region into blocks and selecting the median OF vector within each block. This ensures the homogeneity of the solar-surface OF field in the subsequent fitting and avoids overfitting to specific regions, which would otherwise bias the overall fitting results. By doing the above, we obtain the OF field used for the RANSAC operation, as shown in the middle image of Figure 3.
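A minimal sketch of this step is shown below, using the OpenCV implementation of the Farnebäck algorithm; the circular disk mask, the block size of the region sampling, and the Farnebäck parameters are illustrative assumptions rather than the exact values used in this paper.

```python
import numpy as np
import cv2

def sampled_flow(ref, mov, disk_radius_px, block=64, win=105):
    """Dense Farnebäck optical flow restricted to the solar disk,
    median-sampled on a coarse grid of blocks.

    disk_radius_px should be slightly smaller than the solar radius so the
    unreliable limb region is excluded.  Returns the sampled source points
    and the corresponding median flow vectors.
    """
    h, w = ref.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - w / 2.0) ** 2 + (yy - h / 2.0) ** 2 < disk_radius_px ** 2

    # Farnebäck works on 8-bit images, so rescale the intensities first.
    to8 = lambda a: cv2.normalize(a, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    flow = cv2.calcOpticalFlowFarneback(
        to8(ref), to8(mov), None,
        pyr_scale=0.5, levels=5, winsize=win,
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

    pts, vecs = [], []
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            m = mask[y0:y0 + block, x0:x0 + block]
            if m.sum() < 0.9 * block * block:       # keep blocks fully on the disk
                continue
            blk = flow[y0:y0 + block, x0:x0 + block]
            vecs.append(np.median(blk[m], axis=0))  # median flow vector of the block
            pts.append((x0 + block / 2.0, y0 + block / 2.0))
    return np.asarray(pts, dtype=np.float32), np.asarray(vecs, dtype=np.float32)
```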

3.3 Fine Alignment

The Random Sample Consensus (RANSAC) algorithm (Fischler & Bolles, 1981) estimates the parameters of a mathematical model from a group of observed data containing outliers. This estimation is achieved through an iterative approach. Compared to the least squares method, it incorporates the concept of rejecting outlier data. Consequently, it facilitates the expeditious and precise identification of data samples that contain erroneous data.

The OF field obtained is not reliable in the solar active regions or in the solar limb region because of the limitations of the assumptions of the OF algorithm. Accordingly, the OF field calculated by the Gunnar Farnebäck algorithm contains a number of outlier points. These outliers can be quickly and accurately screened out using the RANSAC algorithm, and the model is fitted with the similarity transformation. The similarity transformation matrix possesses four degrees of freedom, namely the scaling factor, rotation, x-direction displacement, and y-direction displacement. The specific model is as follows:

\begin{bmatrix} x^{\prime} \\ y^{\prime} \\ 1 \end{bmatrix} = \begin{bmatrix} s\cos\beta & -s\sin\beta & \mathrm{d}x \\ s\sin\beta & s\cos\beta & \mathrm{d}y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad (3)

where $(x,y)$ is the pixel position in the first image, $(x^{\prime},y^{\prime})$ is the corresponding pixel position in the next image, $s$ is the scaling factor, $\beta$ is the rotation angle, and $\mathrm{d}x$ and $\mathrm{d}y$ are the displacements in the x and y directions.

The OF field determined by the Gunnar Farnebäck algorithm comprises both pixel coordinates and displacements, which provide the corresponding pixel positions for the similarity transformation matrix. The RANSAC algorithm can then be utilized to filter out outliers and preserve the OF field of the structures that are similar between images. The right part of Figure 3 shows the OF field after the RANSAC algorithm has filtered out the outliers. The figure shows that the filtering removes OF vectors in the parts of the solar limb where there are significant differences; moreover, the OF vectors are sparse in active regions such as flares and abundant in quiet regions. This improves the accuracy of the alignment in the subsequent fit. The distribution of the OF vectors indicates the presence of rotation between the images. Utilizing the OF vectors as feature points, we fit the similarity transformation model and thus solve for the four parameters. The four resolved parameters are then used to apply the similarity transformation to the image to be aligned, bringing it into fine alignment with the reference image.
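A minimal sketch of this fitting step is given below. It uses OpenCV's estimateAffinePartial2D, which estimates the 4-DOF similarity model of Eq. (3) with RANSAC; the reprojection threshold is an assumed value, and the parameter extraction follows directly from the matrix form of Eq. (3).

```python
import numpy as np
import cv2

def fit_similarity(pts, vecs, thresh=0.5):
    """Fit the 4-parameter similarity model of Eq. (3) with RANSAC.

    pts  : (N, 2) sampled pixel positions in one image
    vecs : (N, 2) optical-flow vectors at those positions
    """
    src = np.asarray(pts, dtype=np.float32)
    dst = src + np.asarray(vecs, dtype=np.float32)  # matched positions in the other image

    # 4-DOF (uniform scale, rotation, translation) estimate with outlier rejection.
    M, inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=thresh)

    s = float(np.hypot(M[0, 0], M[1, 0]))                    # scaling factor
    beta = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))   # rotation angle (deg)
    dx, dy = float(M[0, 2]), float(M[1, 2])                  # translations (pixels)
    return (s, beta, dx, dy), M, inliers.ravel().astype(bool)
```

The returned 2×3 matrix can then be applied with cv2.warpAffine to bring the coarse-aligned image into fine alignment with the reference.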

Figure 3: Results of the OF field between images are shown. On the left is the AIA 304 Å image used as a reference image; in the center is the SDI 1216 Å image and the regionally sampled OF field; and on the right is the SDI 1216 Å image and the effective OF field after RANSAC.

4 Alignment accuracy evaluation and Results

We select two approaches, Monte Carlo simulation and UABJM, to evaluate the alignment accuracy of our method. The three datasets in this experiment are obtained from different instruments at different wavelengths and inherently lack precise ground-truth alignment relationships, so it is challenging to assess cross-wavelength alignment accuracy directly. We therefore conduct preliminary evaluations using Monte Carlo simulation on single-wavelength data. Subsequently, we introduce the CC algorithm to compare alignment accuracy in the translational dimensions. To demonstrate the method's noise robustness, we perform additional validation by adding Gaussian noise to the solar images. For cross-wavelength alignment verification, the UABJM method is employed for accuracy assessment. The validity of UABJM for single-wavelength evaluation is first confirmed through comparison with the Monte Carlo simulation results; following this confirmation, we extend the UABJM methodology to evaluate the alignment accuracy between different wavelengths.

4.1 Monte Carlo simulation

Monte Carlo simulation estimates target statistics or expected values by generating a large number of random samples and analyzing their computational outcomes. In our experiments, we establish accurate alignment relationships between simulated test datasets and original images by pre-defining four critical parameters (scale factor, rotation, x-translation, and y-translation). This framework enables systematic Monte Carlo simulations for three distinct single-wavelength datasets. The implementation procedure consists of four key phases. First, to eliminate interference from solar rotation and active-region flares, we select one high-quality observational image as the reference template. Second, we randomly generate 1000 parameter sets (comprising floating-point values for the four transformation parameters) from uniform probability distributions; these randomized parameters are applied to perform similarity transformations on the original image, thereby creating comprehensive test datasets. Third, we employ our method to calculate the measured parameters, and systematic comparison between these measured values and the true parameters yields the alignment residuals. Finally, we quantify the alignment accuracy by calculating the root mean square error (RMSE) across all 1000 residual sets, with detailed results presented in Table 1.
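The procedure can be summarized by the sketch below; the parameter ranges, the interpolation, and the convention that the recovered rotation is reported about the image center are our own assumptions for illustration.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

def run_monte_carlo(ref, align_fn, n_trials=1000):
    """Monte Carlo accuracy test: apply random similarity transforms to a
    reference image, recover them with align_fn, and return the per-parameter
    RMSE of the residuals.  align_fn(ref, test) is assumed to return
    (scale, rotation_deg, dx, dy) using the same center-of-image convention.
    """
    h, w = ref.shape
    residuals = []
    for _ in range(n_trials):
        truth = np.array([
            rng.uniform(0.98, 1.02),   # scale
            rng.uniform(-1.0, 1.0),    # rotation (deg)
            rng.uniform(-10.0, 10.0),  # dx (pixels)
            rng.uniform(-10.0, 10.0),  # dy (pixels)
        ])
        # Build the similarity transform (rotation/scale about the image center).
        M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), truth[1], truth[0])
        M[0, 2] += truth[2]
        M[1, 2] += truth[3]
        test = cv2.warpAffine(ref, M, (w, h), flags=cv2.INTER_LINEAR)

        measured = np.array(align_fn(ref, test))
        residuals.append(measured - truth)

    return np.sqrt(np.mean(np.square(residuals), axis=0))  # RMSE per parameter
```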

In our approach, the Gunnar Farnebäck algorithm that calculates the OF field requires a pixel window for detecting pixel motion information. The size of this window affects the simulation accuracy, while the simulation range of the four parameters affects the required window size. To ensure equivalent conditions for the evaluation of the simulated data, we provide the same angular window (about 63′′) for all image data. Our approach demonstrates high accuracy on the single-wavelength simulated data. Table 1 shows that all wavelengths have scale errors < 5×10⁻⁶, rotation errors < 1′′, and displacement errors < 0.01 pixels in both directions.

At the same time, we conduct a comparative analysis with the CC algorithm targeting translational accuracy. By configuring predefined displacement parameters along both the x- and y-axes, we perform 100 independent measurement trials for our approach and for the CC algorithm. As demonstrated in Figure 4, the comparison of alignment accuracy reveals the superior performance of our proposed method over the CC algorithm: the RMSE of the CC algorithm for x-axis and y-axis alignment accuracy is 0.0597 and 0.0582 pixels, respectively, while that of our approach is 0.0037 and 0.0026 pixels, respectively. Furthermore, we conduct a systematic noise robustness evaluation of our method. Gaussian noise (zero-mean, with a standard deviation equal to three times the background noise standard deviation) is added to the original solar image, followed by 100 simulated measurements. Figure 5 presents a comparative visualization of the measurement outcomes for AIA 304 Å images under noise-free and noise-added conditions. Quantitative analysis reveals that the proposed method maintains measurement stability, with only a gradual degradation of accuracy even under strong noise contamination. As detailed in Table 1, SUTRI 465 Å exhibits larger error amplification; this discrepancy stems from the fact that the SUTRI data have higher background noise and lower image quality than the other two groups.
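The noise injection itself is straightforward; a minimal sketch is given below, where the solar frame and the quiet background patch are placeholders standing in for real data.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((4096, 4096)).astype(np.float32)   # placeholder solar frame
background = image[:64, :64]                          # placeholder quiet background patch

# Zero-mean Gaussian noise with sigma equal to three times the background noise level.
sigma = 3.0 * float(np.std(background))
noisy = image + rng.normal(0.0, sigma, size=image.shape)
```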

Table 1: RMSE of residuals in the Monte Carlo simulation.
RMSE            Scale (10⁻⁶)   Rotation (arcsec)   x-direction (pixel)   y-direction (pixel)
AIA 304 Å       1.9017         0.6046              0.0093                0.0063
  Add noise     2.4336         0.7052              0.0082                0.0102
SDI 1216 Å      2.3404         0.4007              0.0088                0.0092
  Add noise     2.0870         0.7580              0.0107                0.0095
SUTRI 465 Å     4.5216         0.5263              0.0055                0.0056
  Add noise     8.9630         1.6681              0.0110                0.0096
Figure 4: The alignment accuracy of 100 images using two different methods. The residual errors of x- and y-directions are plotted in two panels. Simulation range: x-direction -10 : 10 pixels; y-direction -10 : 10 pixels. The AIA 304 Å image is selected here for comparison testing.
Figure 5: Comparison of the accuracy of four parameters with and without noise. The AIA 304 Å image is selected here for noise testing.

4.2 Uncertainty Analysis Based on the Jacobian Matrix (UABJM)

The Monte Carlo simulation in Section 4.1 can only evaluate the alignment accuracy within a single wavelength and cannot evaluate the accuracy between different wavelengths. Consequently, an alternative error evaluation method is necessary to ascertain the alignment accuracy across different wavelengths. Since the similarity transformation matrix is fitted using RANSAC, we can evaluate the accuracy of this alignment approach with the Jacobian matrix.

The specific steps of the UABJM are as follows. First, the Jacobian matrix of the similarity transformation model is calculated. Second, for the model fitted by RANSAC, we calculate the variance of its residuals. The expressions are as follows:

S(\theta) = \sum_{i=1}^{n}\left[Y_{i} - f(X_{i},\theta)\right]^{2}, \qquad (4)
\sigma^{2} = \frac{S(\theta)}{n-p}, \qquad (5)

where $X_{i}$ and $Y_{i}$ denote the corresponding coordinates of the two images obtained from the OF, $f$ denotes the fitting model, $\theta$ denotes the four parameters, $n$ is the sample size, $p$ is the number of parameters (here $p=4$), $S$ is the residual sum of squares, and $\sigma^{2}$ is the variance of the residuals. We can then estimate the covariance matrix with the following expression:

Cov(\theta) = (J^{T}J)^{-1}\sigma^{2}, \qquad (6)

where $J$ is the Jacobian matrix and $Cov$ is the covariance matrix; the square roots of its diagonal elements are the standard error estimates of the four parameters. Finally, the standard errors of the four parameters, estimated from the covariance matrix, serve as the alignment accuracy.
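A minimal numerical sketch of this error propagation is given below; it builds the analytic Jacobian of the similarity model in Eq. (3) and applies Eqs. (4)-(6) to the RANSAC inliers. Treating each matched point as two residuals (one in x and one in y) is our own convention in this sketch.

```python
import numpy as np

def uabjm_std_errors(src, dst, s, beta, dx, dy):
    """Standard errors of (s, beta, dx, dy) following Eqs. (4)-(6).

    src, dst : (N, 2) matched pixel coordinates (RANSAC inliers)
    beta     : rotation angle in radians
    """
    x, y = src[:, 0], src[:, 1]
    cb, sb = np.cos(beta), np.sin(beta)

    # Model prediction and residual sum of squares, Eq. (4).
    pred = np.column_stack([s * (cb * x - sb * y) + dx,
                            s * (sb * x + cb * y) + dy])
    resid = (dst - pred).ravel()
    n, p = resid.size, 4
    sigma2 = resid @ resid / (n - p)                 # Eq. (5)

    # Jacobian of the model with respect to (s, beta, dx, dy):
    # one row per x' residual and one row per y' residual.
    Jx = np.column_stack([cb * x - sb * y, -s * (sb * x + cb * y),
                          np.ones_like(x), np.zeros_like(x)])
    Jy = np.column_stack([sb * x + cb * y,  s * (cb * x - sb * y),
                          np.zeros_like(x), np.ones_like(x)])
    J = np.vstack([Jx, Jy])

    cov = np.linalg.inv(J.T @ J) * sigma2            # Eq. (6)
    return np.sqrt(np.diag(cov))                     # standard errors
```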

Before evaluating the accuracy of the method between different wavelengths, the UABJM is applied to the simulated data in Section 4.1. The RMSE of these 1000 groups of standard errors is then used as the alignment accuracy (shown in Table 2). Compared to Table 1, the accuracies given by the two methods are close. However, the UABJM is not a statistical method: it produces a standard error for each individual measurement. We therefore need to understand the relationship between the magnitude of the standard error and the true residual in each measurement. In each measurement, the true residual is denoted by $s$, and the standard error given by the UABJM is denoted by $\sigma$. Figure 6 shows the $s/\sigma$ values of the 1000 measurements as a histogram. As can be seen in Figure 6, the true residuals in the 1000 simulations essentially fall within 3 times the standard error, which is statistically reasonable and valid. Therefore, it is reasonable and effective to use the standard error to evaluate the accuracy of the alignment method at a single wavelength.

Table 2: RMSE of standard errors in the UABJM.
RMSE            Scale (10⁻⁶)   Rotation (arcsec)   x-direction (pixel)   y-direction (pixel)
AIA 304 Å       3.9509         0.8152              0.0087                0.0087
  Add noise     4.1854         0.8635              0.0093                0.0092
SDI 1216 Å      3.5788         0.7384              0.0090                0.0088
  Add noise     3.5771         0.7380              0.0091                0.0088
SUTRI 465 Å     4.1617         0.8585              0.0046                0.0044
  Add noise     10.0704        2.0777              0.0111                0.0110
Figure 6: Histogram statistics of the error ratios ($s/\sigma$) of the four parameters in AIA 304 Å. In each measurement, the true residual is denoted by $s$, and the standard error given by the UABJM is denoted by $\sigma$. Values inside the red line indicate that the true residual is within 1 standard error, and values inside the yellow line indicate that the true residual is within 3 standard errors.

4.3 Results

Given the reasonable validity of the standard errors derived from the UABJM in single-wavelength simulations, it is feasible to apply this method to estimate the alignment accuracy across different wavelengths. In this experiment, it is applied to alignments of SUTRI 465 Å with AIA 304 Å and SDI 1216 Å with AIA 304 Å. Table 3 shows the standard error estimates for the four parameters for these two alignments.

As illustrated in Table 3, the standard error estimates of SDI 1216 Å versus AIA 304 Å are larger than those of SUTRI 465 Å versus AIA 304 Å in all aspects. This is partly because the similarity of the former pair is lower, and partly because their difference in spatial resolution is larger. Converting the standard errors from arcseconds to pixels based on the image size, the scaling error of SDI 1216 Å versus AIA 304 Å is 0.5 pixels, the maximum rotation error is 0.77 pixels, and the translation error in both directions is < 0.33 pixels. The alignment accuracy of SUTRI 465 Å and AIA 304 Å is higher: the scaling error is 0.08 pixels, the maximum rotation error is 0.12 pixels, and the translation error in both directions is < 0.05 pixels.
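As a consistency check, assuming the maximum rotational displacement occurs at the image corner farthest from the origin (a distance of one image diagonal $D$), a rotation standard error $\beta$ in arcseconds converts to a pixel error of roughly

\Delta_{\mathrm{rot}} \approx \frac{\beta}{206265^{\prime\prime}}\,D, \qquad \frac{27.68^{\prime\prime}}{206265^{\prime\prime}} \times 4096\sqrt{2} \approx 0.78\ \mathrm{pixels}, \qquad \frac{8.37^{\prime\prime}}{206265^{\prime\prime}} \times 2048\sqrt{2} \approx 0.12\ \mathrm{pixels},

in agreement (to within rounding) with the values quoted above.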

Among the three groups of data in the experiments in this paper, AIA has a long operational history, stable image quality, and high spatial resolution. We therefore use AIA 304 Å as the alignment reference and align the SUTRI 465 Å image and the SDI 1216 Å image to AIA 304 Å, respectively. To illustrate the alignment results, each pair of images is synthesized into a pseudo-color composite image. Figure 7 shows the alignment results for SDI 1216 Å and AIA 304 Å. The solar structures overlap well in the aligned image, and the alignment result is precise.

Table 3: Standard errors of alignment for different wavelengths calculated by UABJM in a single measurement.
Standard Error           Scale (10⁻⁵)   Rotation (arcsec)   x-direction (pixel)   y-direction (pixel)   Image size (pixels)
AIA 304 Å-SDI 1216 Å     13.52          27.68               0.28                  0.32                  4096×4096
AIA 304 Å-SUTRI 465 Å    4.05           8.37                0.05                  0.03                  2048×2048
Figure 7: SDI 1216 Å and AIA 304 Å alignment results are shown. The left figure shows the coarse alignment results, and the right figure shows the fine alignment. The red channel is AIA 304 Å; the green and blue channels are SDI 1216 Å. The green box is the area with the obvious alignment effect.

5 Conclusion and Discussion

The alignment of solar images at different wavelengths has historically been a pivotal aspect of solar research. In this paper, a solar image alignment approach based on dense OF and the RANSAC algorithm is proposed. It finds feature points by extracting the OF field on the similar structures between images and then fits the similarity transformation model to realize the alignment. Alignment tests are performed on two groups of images, SUTRI 465 Å with AIA 304 Å and SDI 1216 Å with AIA 304 Å, and the accuracy of the alignment is evaluated with the UABJM. The efficacy of the approach is demonstrated by its high accuracy in the single-wavelength simulation alignment experiments, with scale errors < 5×10⁻⁶, rotation errors < 1′′, and translation errors < 0.01 pixels. This also indicates that the approach will exhibit high accuracy in actual single-wavelength alignment work, and it is expected to be applied in the future to measure the jitter of observation instruments. In actual alignment work between different wavelengths, the accuracy varies with the similarity between the images: the more similar structures there are, the higher the alignment accuracy. We simply use the Bhattacharyya coefficient to quantify the similarity between the images; SUTRI 465 Å and AIA 304 Å exhibit higher similarity. The alignment of the SDI 1216 Å and AIA 304 Å images can be performed at the sub-pixel level, and in the alignment of SUTRI 465 Å with AIA 304 Å, which exhibits a higher degree of similarity, the pixel error is reduced even further. The rotation pixel errors for these alignments appear comparatively large; in fact, the rotation pixel error on the solar image is not as large as calculated, because the calculated value is the maximum rotation pixel error of the image, located at the upper right corner (with the lower left corner as the origin), and the Sun, which is at the center of the image, does not extend to that location.

The alignment approach proposed in this paper is not without limitations in practical application. Given the OF algorithm's dependence on small motion, the approach is constrained in its ability to handle large ranges (a significant image position discrepancy) and large motions (such as flares). In the specific code implementation, there exists a pixel window for detecting motion. The dimensions of the window must therefore be carefully calibrated to ensure that the OF vectors do not exceed its boundaries and fail to identify similar regions; on the other hand, an overly large window can reduce the accuracy of the results. Determining the appropriate window is thus itself a problem, and the topic is not explored in this paper. At the same time, the approach also depends on the degree of similarity between images. When the similarity between two solar images decreases, the accuracy of the approach is reduced. For example, the similarity between the SDI 1216 Å and AIA 304 Å images in this paper is lower than that between the SUTRI 465 Å and AIA 304 Å images, and the errors of the former are larger than those of the latter. Furthermore, when the similarity between two solar images is minimal, the alignment approach is rendered ineffective. Therefore, the existence of similar structures between images and small image motion are important prerequisites for the method in this paper.

Our alignment approach is implemented in Python, and the AIA 304 Å and SDI 1216 Å alignment code is publicly available on GitHub: https://github.com/yushiweiliang/Alignment-Method.git.


We acknowledge the use of data from the Atmospheric Imaging Assembly (AIA) of the Solar Dynamics Observatory (SDO), the Solar Disk Imager (SDI) of the Lyman-alpha (Lyα\alpha) Solar Telescope (LST), and the Solar Upper Transition Region Imager (SUTRI). We also appreciate all the help from the colleagues in the laboratory team. This work is supported by the National Natural Science Foundation of China under grant 12373115, the Yunnan Key Laboratory of Solar Physics and Space Science under the number 202205AG070009, and the Yunnan Revitalization Talent Support Program under the numbers 202305AS350029 and 202305AT350005.

References

  • Bai et al. (2023) Bai, X., Tian, H., Deng, Y., et al. 2023, Research in Astronomy and Astrophysics, 23, 065014, doi: 10.1088/1674-4527/accc74
  • Berkebile-Stoiser et al. (2009) Berkebile-Stoiser, S., Gömöry, P., Veronig, A. M., Rybák, J., & Sütterlin, P. 2009, A&A, 505, 811, doi: 10.1051/0004-6361/200912100
  • Berthold & Brian (1981) Berthold, K. P. H., & Brian, G. S. 1981, Artificial Intelligence, 17, 185, doi: 10.1016/0004-3702(81)90024-2
  • Bhattacharyya (1943) Bhattacharyya, A. 1943, Bulletin of the Calcutta Mathematical Society, 35, 99. https://api.semanticscholar.org/CorpusID:235941388
  • Cai et al. (2022) Cai, Y.-F., Yang, X., Xiang, Y.-Y., et al. 2022, Research in Astronomy and Astrophysics, 22, 065010, doi: 10.1088/1674-4527/ac69b9
  • Chen et al. (2019) Chen, B., Li, H., Song, K.-F., et al. 2019, Research in Astronomy and Astrophysics, 19, 159, doi: 10.1088/1674-4527/19/11/159
  • Chen et al. (2024) Chen, B., Feng, L., Zhang, G., et al. 2024, Solar Physics, 299, 118, doi: 10.1007/s11207-024-02354-3
  • Druckmüller (2009) Druckmüller, M. 2009, ApJ, 706, 1605, doi: 10.1088/0004-637X/706/2/1605
  • Farnebäck (2003) Farnebäck, G. 2003, in Proceedings of the 13th Scandinavian Conference on Image Analysis, LNCS 2749, Gothenburg, Sweden, 363–370
  • Fischler & Bolles (1981) Fischler, M. A., & Bolles, R. C. 1981, Commun. ACM, 24, 381–395, doi: 10.1145/358669.358692
  • Gan et al. (2023) Gan, W., Zhu, C., Deng, Y., et al. 2023, Solar Physics, 298, 68, doi: 10.1007/s11207-023-02166-x
  • Guglielmino et al. (2010) Guglielmino, S. L., Bellot Rubio, L. R., Zuccarello, F., et al. 2010, ApJ, 724, 1083, doi: 10.1088/0004-637X/724/2/1083
  • Hrazdíra et al. (2020) Hrazdíra, Z., Druckmüller, M., & Habbal, S. 2020, ApJS, 247, 8, doi: 10.3847/1538-4365/ab63d7
  • Ji et al. (2019) Ji, K., Liu, H., Jin, Z., Shang, Z., & Qiang, Z. 2019, Chinese Science Bulletin, 64, 1738, doi: 10.1360/N972019-00092
  • Jiang et al. (2022) Jiang, B., Liu, L., Zheng, S., et al. 2022, Chinese Astronomy and Astrophysics, 46, 264, doi: 10.1016/j.chinastron.2022.09.007
  • Kuehner et al. (2010) Kuehner, O., Utz, D., Hanslmeier, A., et al. 2010, Central European Astrophysical Bulletin, 34, 31
  • Lemen et al. (2012) Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Solar Physics, 275, 17, doi: 10.1007/s11207-011-9776-8
  • Li et al. (2019) Li, H., Chen, B., Feng, L., et al. 2019, Research in Astronomy and Astrophysics, 19, 158, doi: 10.1088/1674-4527/19/11/158
  • Liu et al. (2022) Liu, H., Jin, Z., Xiang, Y., & Ji, K. 2022, Research in Astronomy and Astrophysics, 22, 095005, doi: 10.1088/1674-4527/ac7cba
  • Lowe (1999) Lowe, D. 1999, in Proceedings of the Seventh IEEE International Conference on Computer Vision, Vol. 2, 1150–1157 vol.2, doi: 10.1109/ICCV.1999.790410
  • Lowe (2004) Lowe, D. G. 2004, International Journal of Computer Vision, 60, 91, doi: 10.1023/B:VISI.0000029664.99615.94
  • Orange et al. (2014) Orange, N. B., Oluseyi, H. M., Chesny, D. L., et al. 2014, Solar Physics, 289, 1901, doi: 10.1007/s11207-013-0441-2
  • Pesnell et al. (2012) Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, Solar Physics, 275, 3, doi: 10.1007/s11207-011-9841-3
  • Reddy & Chatterji (1996) Reddy, B., & Chatterji, B. 1996, IEEE Transactions on Image Processing, 5, 1266, doi: 10.1109/83.506761
  • Shimizu et al. (2007) Shimizu, T., Katsukawa, Y., Matsuzaki, K., et al. 2007, Publications of the Astronomical Society of Japan, 59, S845, doi: 10.1093/pasj/59.sp3.S845
  • Staiger (2013) Staiger, J. 2013, in Journal of Physics Conference Series, Vol. 440, Journal of Physics Conference Series (IOP), 012004, doi: 10.1088/1742-6596/440/1/012004
  • Yang et al. (2018) Yang, P., Zeng, S., Liu, S., et al. 2018, Astronomical Techniques and Instruments, 15, 59. http://www.ati.ac.cn/en/article/id/2254
  • Yang et al. (2022) Yang, X., Cao, W., & Yurchyshyn, V. 2022, ApJS, 262, 55, doi: 10.3847/1538-4365/ac91c9
  • Yue et al. (2015) Yue, X., Shang, Z., Qiang, Z., et al. 2015, Computer Science, 42, 57. https://api.semanticscholar.org/CorpusID:216028568
  • Zhang et al. (2024) Zhang, X., Chen, W., Zhu, X., et al. 2024, Science China Technological Sciences, 67, 240, doi: 10.1007/s11431-023-2510-x