
Mixed geometry information regularization for multiplicative image denoising

Shengkun Yang1, Zhichang Guo1, Jia Li1,∗, Fanghui Song1, Wenjuan Yao1
1 School of Mathematics, Harbin Institute of Technology, Harbin 150001, China
∗ Author to whom any correspondence should be addressed. E-mail: jli@hit.edu.cn
Abstract

This paper focuses on solving the multiplicative gamma denoising problem via a variational model. Variation-based regularization models have been extensively employed in a variety of inverse problem tasks in image processing. However, incorporating sufficient geometric priors and designing efficient algorithms remain difficult problems in the model design process. To overcome these issues, in this paper we propose a mixed geometry information model that incorporates an area term and a curvature term as prior knowledge. In addition to effectively removing multiplicative noise, our model preserves edges and prevents staircasing effects. Meanwhile, to address the challenges stemming from the nonlinearity and non-convexity inherent in higher-order regularization, we adopt the efficient additive operator splitting (AOS) algorithm and the scalar auxiliary variable (SAV) algorithm. The unconditional stability possessed by these algorithms enables us to use large time steps, and the SAV method shows higher computational accuracy in our model. We employ the second-order SAV algorithm to further speed up the computation while maintaining accuracy. We demonstrate the effectiveness and efficiency of the model and algorithms through extensive numerical experiments, in which the proposed model exhibits better texture-preserving properties without generating any false information.


Keywords: multiplicative denoising, mixed geometry information, additive operator splitting, scalar auxiliary variable

1 Introduction

Multiplicative noise, commonly referred to as speckle noise, is prevalent in synthetic aperture radar (SAR) images [1, 2], laser images [3], ultrasound images [4], and tomographic images [5]. It corrupts almost all the original information, so removing multiplicative noise is an urgent task in real imaging systems. For a mathematical description of such noise, we consider a clean image $u:\Omega\rightarrow\mathbb{R}$, where $\Omega$ is a bounded open set in $\mathbb{R}^{2}$ with a Lipschitz boundary. Let $\eta$ represent the multiplicative noise; then the degradation process that produces the noisy image $f$ can be expressed mathematically as

$f=u\eta.$ (1)

In this study, we focus on the case where $\eta$ follows the gamma distribution [6], which is commonly encountered in SAR images. Specifically, $\eta$ has mean $1$ and variance $1/L$. Its probability density function is

$p(\eta)=\frac{L^{L}\eta^{L-1}}{\Gamma(L)}\mathrm{e}^{-L\eta}\mathbf{1}_{\{\eta\geq 0\}},$ (2)

where $\mathbf{1}_{\{\eta\geq 0\}}$ denotes the characteristic function of the set $\{\eta\geq 0\}$. Unlike additive noise, multiplicative noise exhibits coherence and non-Gaussian properties [7], which makes traditional additive denoising models fail and poses additional challenges.
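For intuition, multiplicative gamma noise with mean $1$ and variance $1/L$ can be simulated directly from (1) and (2). The following minimal Python sketch (using NumPy; the function name and test values are ours, for illustration only) degrades a clean image:

```python
import numpy as np

def add_gamma_speckle(u, L, seed=0):
    """Simulate f = u * eta, where eta ~ Gamma(shape=L, scale=1/L).

    With this parameterization eta has mean L*(1/L) = 1 and
    variance L*(1/L)^2 = 1/L, matching the density in (2).
    """
    rng = np.random.default_rng(seed)
    eta = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)
    return u * eta

# Example: a flat gray patch corrupted at noise level L = 4.
u = np.full((64, 64), 128.0)
f = add_gamma_speckle(u, L=4)
print(f.mean(), f.var())  # mean stays near 128; variance near 128^2 / 4
```

Note how the fluctuation of $f$ scales with the signal $u$ itself, which is the coherence property that motivates the gray level indicator in Sect. 2.2.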

In the literature, various multiplicative denoising methods have been proposed, including variational methods [8, 9, 10, 11], partial differential equation (PDE) methods [12, 13], non-local filtering methods [14, 15, 16, 17], and so on. Among these, variation-based models are renowned for their adeptness in incorporating prior information and for their rigorous theoretical guarantees, and they have attracted significant attention [18, 8, 9, 19, 20]. A classic model in this category is the Rudin, Lions, and Osher (RLO) model [18]. Notably, RLO utilized total variation (TV) as a regularizer, allowing for the existence of discontinuous solutions. It can effectively remove internal noise while preserving boundaries. However, the model solely considers the statistical information of mean and variance, which constrains its denoising capability. To address this limitation, Aubert and Aujol [8] proposed the AA model, which combined the TV regularizer with a fidelity term $\log u+f/u$ derived from maximum a posteriori estimation.

While the classic TV regularizer and its variants can effectively remove noise, they often exhibit defects such as staircasing effects, corner loss, and contrast reduction [21, 22, 23, 24]. In response to these limitations, numerous curvature-based models have been proposed. Curvature serves as a crucial tool for characterizing curves and surfaces in image processing. Models incorporating such priors can enforce continuity constraints on the solution space, allowing for the preservation of jumps and the mitigation of the aforementioned deficiencies. Meanwhile, as models containing a type of geometric information, curvature-based models also perform well in protecting image geometric structures such as corners [25, 26, 27, 28, 29].

Besides employing curvature-based models, denoising can also be achieved by minimizing the area of the image surface [30, 31, 32, 33]. Sapiro et al. [30] were the first to minimize the surface area to penalize the roughness of the solution, which avoids smoothing the edges during denoising. Yezzi [31] proposed a modified momentum model from a non-variational perspective, which can flatten out oscillations while protecting edges.

In conclusion, compared with TV-norm-based models, geometry-based models can effectively remove multiplicative noise while protecting image features. However, both types of models suffer from limitations. For example, a model based only on surface area has the advantage of preserving edges and contours but also brings problems such as contrast reduction and staircasing effects. A model based solely on curvature can avoid these problems, but it is unable to protect object edges and makes the boundaries too smooth [34].

It is therefore natural to consider a model that mixes geometric information to compensate for these shortcomings. Here we use a multiplicative structure similar to Euler's elastica to balance their effects. We take the minimal surface area term as the main denoising term, which has excellent edge protection ability and low computational cost but is accompanied by unnatural artifacts such as staircasing effects. We combine the curvature term with it to alleviate this phenomenon. In addition, we incorporate a gray level indicator to perform adaptive multiplicative denoising.

To sum up, the model we propose, by considering curvature and surface area, not only possesses strong denoising capabilities but also preserves the geometric information of images, such as texture, edges, and corners. However, the integration of curvature and surface area results in a high-order nonlinear functional, which poses challenges in numerical solving. Related models frequently employ fast algorithms such as the augmented Lagrangian method (ALM) and the primal-dual method to expedite computations. Nevertheless, such approaches introduce the issue of manually selecting numerous multipliers, resulting in an additional parameter-tuning burden [35, 36, 37, 38, 39]. Addressing these challenges necessitates an efficient and accurate numerical algorithm for minimizing the energy functional. To maintain equivalence during the iterations, we first employ the additive operator splitting (AOS) algorithm to solve the gradient flow. The AOS algorithm is a classic algorithm in image processing and is unconditionally stable. However, when applied to our model, it sometimes leads to computational inaccuracies. It is worth mentioning that in recent years, Shen et al. [40] proposed the scalar auxiliary variable (SAV) method to speed up the solution of gradient flow problems. It is also unconditionally stable and iteratively minimizes the original energy functional without requiring modification or relaxation of the initial problem. Therefore, the first-order and second-order SAV algorithms are used to solve our model.

The main contributions of our work mainly lie in the following three aspects:

•We propose an adaptive multiplicative denoising model incorporating mixed geometry information. We mix the area term and the curvature term to protect edges and mitigate staircasing effects, respectively. Furthermore, tailored to the coherence characteristics of multiplicative noise, we introduce a gray level indicator to perform stronger denoising on areas with high signal intensity.

•To overcome the numerical challenges inherent in solving nonlinear models, we utilize the AOS and SAV algorithms to accelerate the conventional gradient flow method. These algorithms exhibit unconditional stability and unconditional energy stability, respectively. Compared with the AOS algorithm, the SAV algorithm better maintains accuracy in our model.

•Extensive numerical experiments demonstrate that the proposed model effectively removes noise while preserving the geometric features of the original image, including corners and edges.

The rest of this paper is organized as follows. We propose a model based on mixed geometry information regularization in Sect. 2. In Sect. 3 we derive the gradient flow corresponding to the mixed geometry information model. In Sect. 4, unconditionally stable algorithms based on AOS and SAV are proposed. Sect. 5 is devoted to numerical experiments. In Sect. 6, we summarize the whole paper.

2 Adaptive Minimization Model Based on Mixed Geometry Information

2.1 The Comparison between Single Geometric Information and Mixed Geometric Information

Among traditional variational denoising methods, regularization models based on geometric information exhibit superior performance. Models with TV regularization or minimal surface regularization have the advantage of edge protection, but they bring undesirable experimental effects such as loss of image contrast [23]. At the same time, curvature-based models can effectively maintain contrast, but their edge protection ability is reduced and the computational overhead increases several times [27, 41]. Models based on a single type of geometric information thus enjoy some advantages but also have disadvantages. A natural idea is to design a mixed geometric model that combines their advantages and compensates for each other's shortcomings.

Suppose the image is degraded by multiplicative gamma noise with pollution level $L=4$. As displayed in Fig. 1, during the multiplicative denoising process, the minimal surface model yields lower image contrast than the curvature-based model. In terms of visual effects, the curvature-based model protects image details better but over-smooths the edges. To further improve their denoising performance, we use a mixed geometry information model. As shown in Fig. 1, the mixed geometry information model has the best edge structure protection capability and the best overall results.

Figure 1: Denoising results of single geometric information and mixed geometric information. (a) minimal surface (PSNR/SSIM: 27.2070/0.8728); (b) curvature (PSNR/SSIM: 27.2534/0.8476); (c) mixed geometric information (PSNR/SSIM: 28.4377/0.8938)

2.2 Gray Level Indicator

Unlike additive noise, the extent of corruption caused by multiplicative noise is linked to the signal strength of the clean image. Based on this observation, we design a gray level indicator for adaptive denoising. Regions with higher original signal strength suffer a greater degree of pollution, necessitating stronger denoising there. However, multiplicative noise severely destroys almost all information in the clean image, making it challenging to estimate the signal strength directly from the corrupted image. Therefore, to achieve our objective, we use a modified gray level indicator. For the image $u(x,y)$, the gray level indicator $\alpha$ can be expressed as

$\alpha(x)=\left(\frac{G_{\sigma}*u}{M}\right)^{p},$ (3)

where $G_{\sigma}$ is the two-dimensional Gaussian convolution kernel with standard deviation $\sigma$ and $M=\sup_{x\in\Omega}\left(G_{\sigma}*u\left(x\right)\right)$. The indicator $\alpha$ uses a Gaussian convolution kernel to pre-smooth the noisy image and approximate the clean image. This operation makes the indicator insensitive to noise on scales smaller than $\sigma$. Normalization allows the indicator to be regarded as a weight function: the higher the signal strength, the more attention it warrants, and the more actively the surface should be minimized there. As depicted in Fig. 2, the indicator effectively characterizes the pollution degree of the noisy image and accurately captures the spatial variation of the image.
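As a minimal sketch of (3) (assuming NumPy and SciPy; the default values of sigma and p below are illustrative choices of ours, not values fixed by the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gray_level_indicator(f, sigma=2.0, p=1.0):
    """Gray level indicator alpha = (G_sigma * f / M)^p from (3).

    Pre-smoothing with a Gaussian kernel approximates the clean image;
    dividing by the supremum M normalizes alpha into [0, 1] so it can
    act as a spatial weight for the area term.
    """
    smoothed = gaussian_filter(f, sigma=sigma)
    M = smoothed.max()
    return (smoothed / M) ** p
```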

Figure 2: Noise pollution level and gray level indicator. (a) residual; (b) indicator

2.3 Proposed Model

Assume that the gray image $u(x,y)$ is defined on the domain $\Omega$ and is regarded as a regular surface in three-dimensional space. Encouraged by the success of mean curvature in handling additive noise, we incorporate the mean curvature of the surface into our proposed model. We regard the image surface as the zero level set of the level set function $\phi(x,y,z)=u(x,y)-z$ [42]. Therefore, its unit outer normal is

$N=\frac{\nabla\phi}{|\nabla\phi|}=(n_{1},n_{2},n_{3}).$ (4)

Further, the mean curvature of a surface is defined as the divergence of the unit normal

$\kappa=\nabla\cdot N=\nabla\cdot\frac{\nabla\phi}{|\nabla\phi|}=\frac{\partial n_{1}}{\partial x}+\frac{\partial n_{2}}{\partial y}+\frac{\partial n_{3}}{\partial z}.$ (5)

Clearly, the smoothness of the image can be effectively represented by the curvature. Through this level set transformation, the value of $\phi$ can be used to compute the mean curvature rapidly and accurately using methods such as the finite difference method.

Since the mean curvature regularization of the surface is used in the model, we regularize the surface area rather than directly limiting the length of the level set curves. This also has the advantage of avoiding the singularity caused by division by zero. Specifically, constraining all contours and regularizing the elastic energy of the level set function, we obtain the following regularization term

$\int_{\mathbb{R}}\int_{\phi=0}(a+b\kappa^{2})\,\mathrm{d}s\,\mathrm{d}z=\int_{\mathbb{R}}\int_{\phi=0}(a+b\kappa^{2})\sqrt{1+|\nabla u|^{2}}\,\mathrm{d}t\,\mathrm{d}z=\int_{\Omega}(a+b\kappa^{2})\sqrt{1+|\nabla u|^{2}}\,\mathrm{d}x.$ (6)

Further, by combining the gray level indicator with a fidelity term designed for the multiplicative noise process, we propose a model $E(u)$ that mixes area and curvature geometric information

$\int_{\Omega}\left(\alpha(u)+b\left(\nabla\cdot\left(\frac{\nabla u}{\sqrt{1+|\nabla u|^{2}}}\right)\right)^{2}\right)\sqrt{1+|\nabla u|^{2}}\,\mathrm{d}x+\eta\int_{\Omega}(u-f\log u)\,\mathrm{d}x,$ (7)

where $\alpha(u)$ is the gray level indicator, and $b$ and $\eta$ are positive constants used to balance the regularization terms and the fidelity term. To better address the characteristics of multiplicative denoising and achieve a regionally adaptive effect, we employ the gray level indicator $\alpha$ from [12]. To further illustrate the rationality of the model, we next introduce degenerate cases of the model, each of which reduces to a well-known classic denoising model.

2.4 Degenerate Cases of the Model

In order to further illustrate the advantages of mixing the area term with the curvature term, we conducted the following experiments. Despite its high complexity, the free parameters and adaptive part of the model allow it to degenerate into many simpler forms: (1) $\alpha(u)\neq 0$ and constant, $b\neq 0$: a regularization resembling Euler's elastica; (2) $\alpha(u)\neq 0$ and constant, $b=0$: minimal surface regularization; (3) $\alpha(u)$ not constant, $b=0$: adaptive minimal surface regularization; (4) $\alpha(u)$ not constant, $b\neq 0$: the full multiplicative model based on mixed geometry information of the surface. We next analyze the visual denoising effects of these variants.

Figure 3: Model degradation analysis. (a) clean; (b) minimal surface; (c) elastica; (d) $\alpha=0.5$, $b\neq 0$; (e) $\alpha=\alpha(u)$, $b=0$; (f) $\alpha=\alpha(u)$, $b\neq 0$

In the experiment, the noise parameter $L=4$ was selected. The traditional minimal surface model exhibits a conspicuous fragmentation phenomenon, which produces false boundaries in the result and is not conducive to subsequent analysis. Upon employing the gray level indicator, Fig. 3d illustrates a significant reduction in the staircasing effects of the minimal surface model, and Fig. 3e shows a smoother result. However, a "fish scale" phenomenon emerges, accompanied by noticeable jagged edges at the shape's periphery. In contrast to the previous variants, our proposed model effectively restores image details and successfully avoids the staircasing effects. The amalgamation of area and curvature geometric information yields mutual benefits, surpassing the denoising capability of each individual geometric feature in multiplicative denoising.

3 Fourth-Order Euler-Lagrange Equation of the Mixed Geometry Information Model

In view of the gradient flow approach, we discretize the Euler-Lagrange equation to solve the functional minimization problem. This continuous approach can accurately depict the geometric characteristics of the image without causing grid bias or measurement artifacts. Additionally, unlike optimization-based methods, it does not introduce errors between a relaxed functional and the original functional.

We first derive the gradient flow corresponding to our model. To simplify the derivation, we treat $\alpha(u)$ as a weighting function in the subsequent analysis, thereby yielding the Euler-Lagrange equation corresponding to the mixed geometry information model

$E^{\prime}\left(u\right)=-\nabla\cdot\left(\alpha\left(1+|\nabla u|^{2}\right)^{-\frac{1}{2}}\nabla u\right)+2b\nabla\cdot\left(\frac{1}{\sqrt{1+|\nabla u|^{2}}}\left[\left(\mathbf{I}-\mathbf{P}\right)\nabla\left(\kappa\sqrt{1+|\nabla u|^{2}}\right)\right]\right)-b\nabla\cdot\left(\kappa^{2}\left(1+|\nabla u|^{2}\right)^{-\frac{1}{2}}\nabla u\right)+\eta\left(1-\frac{f}{u}\right)=0,$ (8)

where $\kappa=\nabla\cdot\frac{\nabla\phi}{|\nabla\phi|}$, $\mathbf{I}(x)=x$, and $\mathbf{P}(x)=\frac{x\cdot\nabla u}{1+|\nabla u|^{2}}\nabla u$. The corresponding gradient descent flow is

$\frac{\partial u}{\partial t}=-E^{\prime}\left(u\right)$ (9)

with the Neumann boundary condition on $\partial\Omega$

$\frac{\partial u}{\partial\mathbf{n}}=0,$ (10)

where $\mathbf{n}$ is the outward unit normal on the boundary.

From formula (8) we can see that $\frac{\alpha(u)+b\kappa^{2}}{\sqrt{1+|\nabla u|^{2}}}$ is the main diffusion coefficient. When $|\nabla u|$ is large, the diffusion coefficient is relatively small, which indicates that diffusion should be slowed down at the edges. When $|\nabla u|$ is small, the forward diffusion coefficient is relatively large, so rapid diffusion denoising is performed in the interior of the image. Similar to [23], when $|\nabla u|\ll 1$, our model has the degenerate form

$\frac{\partial u}{\partial t}\approx\nabla\cdot\left[\left(\alpha(u)+b\left(\Delta u\right)^{2}\right)\nabla u\right]-2b\Delta^{2}u-\eta\left(1-\frac{f}{u}\right),$ (11)

which manifests as anisotropic diffusion together with a slight biharmonic diffusion. Since $\alpha\gg b$, anisotropic diffusion plays the dominant role in the interior region.

In addition, for non-smooth boundaries, the denominator $\sqrt{1+|\nabla u|^{2}}$ reduces the forward heat diffusion efficiency, yet the phenomenon of non-smooth boundaries persists. In our model (8), the combination of the curvature term and the surface area term exactly cancels the factor $\sqrt{1+|\nabla u|^{2}}$, which is crucial for the backward diffusion of the model. In this case, for the rough edges of the image, $\nabla\kappa$ dominates the diffusion process, making the boundaries sharper and the interior smoother.

4 Numerical Implementation

In this section, we first introduce the numerical discretization process. To overcome the time step limitations of the finite difference method, we employ the unconditionally stable AOS and SAV algorithms.

4.1 Operator Discretization

We take time as the evolution parameter for advancing the fourth-order equation, facilitating the explicit update of $u$. We assume the image size is $M\times N$. Let $\tau$ represent the time step and define the grid as follows

$x_{i}=i\Delta x,\quad i=1,\cdots,M,$
$y_{j}=j\Delta y,\quad j=1,\cdots,N,$
$t_{n}=n\tau,\quad n=1,\cdots.$ (12)

Generally $\Delta x=1$ and $\Delta y=1$. Denoting $u_{i,j}$ as the value of $u$ at $(x_{i},y_{j})$, various approximations of the derivatives in the $x$ and $y$ directions are listed below

$\Delta_{\pm}^{x}u_{i,j}=\pm\left(u_{i\pm 1,j}-u_{i,j}\right),$
$\Delta_{\pm}^{y}u_{i,j}=\pm\left(u_{i,j\pm 1}-u_{i,j}\right),$
$\Delta_{c}^{x}u_{i,j}=\frac{u_{i+1,j}-u_{i-1,j}}{2},$
$\Delta_{c}^{y}u_{i,j}=\frac{u_{i,j+1}-u_{i,j-1}}{2}.$ (13)

We provide the discrete scheme only for the second term in (8); a similar discretization applies to the remaining terms. The second term is reformulated as

$V:=\left(V_{1},V_{2}\right)=\frac{1}{\sqrt{1+|\nabla u|^{2}}}\left(\left(\mathbf{I}-\mathbf{P}\right)\nabla\left(\kappa\sqrt{1+|\nabla u|^{2}}\right)\right),$ (14)

where the forms of $V_{1}$ and $V_{2}$ are

$V_{1}=\frac{1}{\sqrt{1+|\nabla u|^{2}}}\left(\kappa\sqrt{1+|\nabla u|^{2}}\right)_{x}-\frac{\nabla\left(\kappa\sqrt{1+|\nabla u|^{2}}\right)\cdot\nabla u}{\left(\sqrt{1+|\nabla u|^{2}}\right)^{3}}u_{x},$
$V_{2}=\frac{1}{\sqrt{1+|\nabla u|^{2}}}\left(\kappa\sqrt{1+|\nabla u|^{2}}\right)_{y}-\frac{\nabla\left(\kappa\sqrt{1+|\nabla u|^{2}}\right)\cdot\nabla u}{\left(\sqrt{1+|\nabla u|^{2}}\right)^{3}}u_{y}.$ (15)

Furthermore, we list an approximation of $\nabla\cdot V$

$\nabla\cdot V=\left.\frac{\partial V_{1}}{\partial x}\right|_{i,j}+\left.\frac{\partial V_{2}}{\partial y}\right|_{i,j}=\left.V_{1}\right|_{i+\frac{1}{2},j}-\left.V_{1}\right|_{i-\frac{1}{2},j}+\left.V_{2}\right|_{i,j+\frac{1}{2}}-\left.V_{2}\right|_{i,j-\frac{1}{2}}.$ (16)

Therefore, we need to calculate $\left.V_{1}\right|_{i+\frac{1}{2},j}$, $\left.V_{1}\right|_{i-\frac{1}{2},j}$, $\left.V_{2}\right|_{i,j+\frac{1}{2}}$, and $\left.V_{2}\right|_{i,j-\frac{1}{2}}$. Taking $\left.V_{1}\right|_{i+\frac{1}{2},j}$ as an example, the discretization of the others is omitted for simplicity. Let $\Psi:=\kappa\sqrt{1+|\nabla u|^{2}}$; presented below is the approximation scheme for $V_{1}$ at $\left(i+\frac{1}{2},j\right)$:

$u_{x}=\Delta_{+}^{x}u_{i,j},$
$u_{y}=\operatorname{minmod}\left(\Delta_{c}^{y}u_{i,j},\Delta_{c}^{y}u_{i+1,j}\right),$
$\sqrt{1+|\nabla u|^{2}}=\sqrt{1+\left(\Delta_{+}^{x}u_{i,j}\right)^{2}+\left(\operatorname{minmod}\left(\Delta_{c}^{y}u_{i+1,j},\Delta_{c}^{y}u_{i,j}\right)\right)^{2}},$
$\Psi_{x}=\Delta_{+}^{x}\left(\kappa\sqrt{1+|\nabla u|^{2}}\right)_{i,j},$
$\Psi_{y}=\operatorname{minmod}\left(\Delta_{c}^{y}\left(\kappa\sqrt{1+|\nabla u|^{2}}\right)_{i,j},\Delta_{c}^{y}\left(\kappa\sqrt{1+|\nabla u|^{2}}\right)_{i+1,j}\right),$ (17)

where the minmod function is used to increase the anisotropy of the scheme

$\operatorname{minmod}(a,b)=\frac{\operatorname{sgn}(a)+\operatorname{sgn}(b)}{2}\min(|a|,|b|).$ (18)

The approximations of $|\nabla u|$ and $\kappa$ at $\left(x_{i},y_{j}\right)$ can be expressed as

$|\nabla u|=\sqrt{\left(\Delta_{c}^{x}u_{i,j}\right)^{2}+\left(\Delta_{c}^{y}u_{i,j}\right)^{2}},$
$\kappa=\Delta_{-}^{x}\left(\frac{\Delta_{+}^{x}u_{i,j}}{\sqrt{1+\left(\Delta_{+}^{x}u_{i,j}\right)^{2}+\left(\operatorname{minmod}\left(\Delta_{c}^{y}u_{i+1,j},\Delta_{c}^{y}u_{i,j}\right)\right)^{2}}}\right)+\Delta_{-}^{y}\left(\frac{\Delta_{+}^{y}u_{i,j}}{\sqrt{1+\left(\Delta_{+}^{y}u_{i,j}\right)^{2}+\left(\operatorname{minmod}\left(\Delta_{c}^{x}u_{i,j+1},\Delta_{c}^{x}u_{i,j}\right)\right)^{2}}}\right).$ (19)
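The following Python sketch (NumPy assumed; all function names are ours, and the replicate-boundary shift mirrors the Neumann conditions of Sect. 3) illustrates the difference operators (13), the minmod limiter (18), and the mean curvature approximation (19):

```python
import numpy as np

def shift(u, d, axis):
    """Shift u by d in {+1,-1} along axis, replicating the boundary."""
    s = np.roll(u, -d, axis=axis)
    if axis == 0:
        if d == 1: s[-1, :] = u[-1, :]
        else:      s[0, :]  = u[0, :]
    else:
        if d == 1: s[:, -1] = u[:, -1]
        else:      s[:, 0]  = u[:, 0]
    return s

def dxp(u): return shift(u, 1, 0) - u                          # Delta_+^x
def dxm(u): return u - shift(u, -1, 0)                         # Delta_-^x
def dyp(u): return shift(u, 1, 1) - u                          # Delta_+^y
def dym(u): return u - shift(u, -1, 1)                         # Delta_-^y
def dxc(u): return 0.5 * (shift(u, 1, 0) - shift(u, -1, 0))    # Delta_c^x
def dyc(u): return 0.5 * (shift(u, 1, 1) - shift(u, -1, 1))    # Delta_c^y

def minmod(a, b):
    """Slope limiter (18); zero when a and b have opposite signs."""
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def curvature(u):
    """Mean curvature kappa of the image surface at grid points, cf. (19)."""
    wx = np.sqrt(1.0 + dxp(u)**2 + minmod(dyc(shift(u, 1, 0)), dyc(u))**2)
    wy = np.sqrt(1.0 + dyp(u)**2 + minmod(dxc(shift(u, 1, 1)), dxc(u))**2)
    return dxm(dxp(u) / wx) + dym(dyp(u) / wy)
```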

4.2 Numerical Discretization for Additive Operator Splitting Algorithm

Let $W=\left(W_{1},W_{2}\right):=\frac{\alpha(u)+b\kappa^{2}}{\sqrt{1+|\nabla u|^{2}}}\nabla u$ and let $u_{i,j}^{n}$ represent the $n$th update of $u_{i,j}$; then the evolution process can be expressed as

$\frac{u_{i,j}^{n+1}-u_{i,j}^{n}}{\tau}=2b\left(\left.V_{1}\right|_{i+\frac{1}{2},j}-\left.V_{1}\right|_{i-\frac{1}{2},j}+\left.V_{2}\right|_{i,j+\frac{1}{2}}-\left.V_{2}\right|_{i,j-\frac{1}{2}}\right)+\left.W_{1}\right|_{i+\frac{1}{2},j}-\left.W_{1}\right|_{i-\frac{1}{2},j}+\left.W_{2}\right|_{i,j+\frac{1}{2}}-\left.W_{2}\right|_{i,j-\frac{1}{2}}-\eta\left(1-\frac{f_{i,j}}{u_{i,j}^{n}}\right),$ (20)

where $i=1,\cdots,M$, $j=1,\cdots,N$. The discretization of the Neumann boundary conditions is given by

$u_{0,j}^{n}=u_{1,j}^{n},\quad u_{M+1,j}^{n}=u_{M,j}^{n},\quad u_{i,0}^{n}=u_{i,1}^{n},\quad u_{i,N+1}^{n}=u_{i,N}^{n}.$ (21)

The explicit scheme must evolve with a small time step size due to the constraints imposed by the CFL condition. This restriction diminishes computational efficiency and makes iteration in this scheme impractical.

While the theoretical foundation of model (7) effectively preserves texture features, the incorporation of high-order nonlinear terms introduces substantial challenges in devising efficient numerical algorithms. To address the numerical instability associated with the explicit finite difference method while avoiding bias among the diffusion directions, we employ the additive operator splitting (AOS) algorithm proposed in [43] for the numerical solution of our model.

To begin, we rewrite the evolution equation (9) as

$\frac{\partial u}{\partial t}=\nabla\cdot\left(g(u)\nabla u\right)+F(u),$ (22)

where

$g(u):=\frac{\alpha(u)+b\kappa^{2}}{\sqrt{1+|\nabla u|^{2}}},$
$F(u)=-2b\nabla\cdot\left(\frac{1}{\sqrt{1+|\nabla u|^{2}}}\left(\left(\mathbf{I}-\mathbf{P}\right)\nabla\left(\kappa\sqrt{1+|\nabla u|^{2}}\right)\right)\right)-\eta\left(1-\frac{f}{u}\right).$ (23)

The AOS algorithm decomposes the original diffusion into diffusions along the horizontal and vertical directions, obtains $u_{1}^{n+1}$ and $u_{2}^{n+1}$ from $u^{n}$, and finally uses the average of $u_{1}^{n+1}$ and $u_{2}^{n+1}$ as the updated $u^{n+1}$. From the form of (23), we can easily assemble the diffusion matrices $D_{x}^{n}$ and $D_{y}^{n}$ in the $x$ and $y$ directions for an image of size $M\times N$; taking the $x$ direction as an example:

$D_{x,i}^{n}=\begin{bmatrix}a_{1}^{i}&b_{1}^{i}&&&\\ c_{2}^{i}&a_{2}^{i}&b_{2}^{i}&&\\ &\ddots&\ddots&\ddots&\\ &&c_{N-1}^{i}&a_{N-1}^{i}&b_{N-1}^{i}\\ &&&c_{N}^{i}&a_{N}^{i}\end{bmatrix},$ (24)

where the boundary conditions are correspondingly $u_{i,0}^{n}=u_{i,1}^{n}$, $u_{i,N+1}^{n}=u_{i,N}^{n}$; each element of the matrix is then

$a_{k}^{i}=\begin{cases}-g_{i,\frac{3}{2}}^{n},&k=1,\\ -\left(g_{i,k+\frac{1}{2}}^{n}+g_{i,k-\frac{1}{2}}^{n}\right),&k=2,\cdots,N-1,\\ -g_{i,N-\frac{1}{2}}^{n},&k=N,\end{cases}\qquad b_{k}^{i}=g_{i,k+\frac{1}{2}}^{n},\quad k=1,\cdots,N-1,\qquad c_{k}^{i}=g_{i,k-\frac{1}{2}}^{n},\quad k=2,\cdots,N.$ (25)

The values $g_{i,k+\frac{1}{2}}$ and $g_{i,k-\frac{1}{2}}$ are approximated through forward and backward differences. We then adjust the original AOS algorithm by adding an additional $F^{n}$ term. Upon acquiring the diffusion matrices, one-dimensional diffusion is applied independently to the rows and columns of $u^{n}$ for $K$ iterations:

$u^{n+1}=\frac{1}{2}\sum_{l}\left[I-2\tau D_{l}^{n}\left(u^{n}\right)\right]^{-1}\left(u^{n}+F^{n}\right),$ (26)

where $l\in\{x,y\}$, and $u^{n+1}$ can be solved efficiently from

$\left(I-2\tau D_{x}^{n}\right)u_{1}^{n+1}=u^{n}+F^{n},$
$\left(I-2\tau D_{y}^{n}\right)u_{2}^{n+1}=u^{n}+F^{n}.$ (27)

Applying the Thomas algorithm to compute the two variables alternately, the denoising result for the iterative update is obtained by averaging $u_{1}^{n+1}$ and $u_{2}^{n+1}$:

$u^{n+1}=\frac{1}{2}\left(u_{1}^{n+1}+u_{2}^{n+1}\right).$ (28)

After the tridiagonal matrices $D_{x}^{n}$ and $D_{y}^{n}$ are obtained from (24), the image denoising problem is transformed into solving linear systems. The AOS algorithm is absolutely stable, consistently yielding good experimental results even under large time step sizes. However, in order to maintain the accuracy of the algorithm, a smaller time step size is usually set. Consequently, in experiments a delicate balance must be struck between solution accuracy and computational efficiency: the choice of an appropriate time step size involves a tradeoff. Algorithm 1 outlines the AOS algorithm for solving (22).

Algorithm 1 The AOS for the mixed geometry information model
1: Input: noisy image $f$, iterations $K$, time step size $\tau$.
2: Output: denoising result $u^{K}$ of step $K$.
3: Initialization: $u^{0}\longleftarrow f$
4: for $n=0,\cdots,K$ do
5:     for $j=1,\cdots,N$ do
6:         Compute $a_{k}^{j}$, $b_{k}^{j}$, $c_{k}^{j}$ and assemble the diffusion matrix $D_{y}^{n}$
7:         Solve $\left(I-2\tau D_{y}^{n}\right)u_{2,j}^{n+1}=u_{j}^{n}+F_{j}^{n}$ using the Thomas algorithm
8:     end for
9:     for $i=1,\cdots,M$ do
10:         Compute $a_{k}^{i}$, $b_{k}^{i}$, $c_{k}^{i}$ and assemble the diffusion matrix $D_{x}^{n}$
11:         Solve $\left(I-2\tau D_{x,i}^{n}\right)\left(u_{1,i}^{n+1}\right)^{T}=\left(u_{i}^{n}\right)^{T}+\left(F_{i}^{n}\right)^{T}$ using the Thomas algorithm
12:     end for
13:     $u^{n+1}=\frac{1}{2}\left(u_{1}^{n+1}+u_{2}^{n+1}\right)$
14: end for
15: Calculate the relevant quality evaluation metrics and record the computation time.
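A minimal sketch of one semi-implicit AOS sweep (27)-(28) follows, assuming NumPy/SciPy. Here scipy.linalg.solve_banded plays the role of the Thomas algorithm, and the arrays g and F stand for precomputed discretizations of (23); function and variable names are ours:

```python
import numpy as np
from scipy.linalg import solve_banded

def aos_step(u, g, F, tau):
    """One AOS update, cf. (26)-(28): along each axis solve the
    tridiagonal system (I - 2*tau*D_l) u_l = u + F, then average."""
    rhs_full = u + F
    results = []
    for axis in (0, 1):
        rhs = np.moveaxis(rhs_full, axis, 0)
        gax = np.moveaxis(g, axis, 0)
        out = np.empty_like(rhs)
        n = rhs.shape[0]
        for col in range(rhs.shape[1]):
            gc = gax[:, col]
            gh = 0.5 * (gc[:-1] + gc[1:])  # diffusivity g at half points k+1/2
            # diagonal 1 - 2*tau*a_k, with the boundary rows of (25)
            diag = 1.0 + 2.0 * tau * (np.r_[gh, 0.0] + np.r_[0.0, gh])
            ab = np.zeros((3, n))
            ab[0, 1:] = -2.0 * tau * gh    # superdiagonal: -2*tau*b_k
            ab[1, :] = diag                # main diagonal
            ab[2, :-1] = -2.0 * tau * gh   # subdiagonal:   -2*tau*c_k
            out[:, col] = solve_banded((1, 1), ab, rhs[:, col])
        results.append(np.moveaxis(out, 0, axis))
    return 0.5 * (results[0] + results[1])
```

Each one-dimensional solve costs O(N) operations, which is what makes the semi-implicit scheme cheap despite being unconditionally stable.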
Theorem 1.

For any time step $\tau>0$, the scheme (27) in the AOS algorithm is absolutely stable.

Proof.

From the form of $g$ in (23), we know $g_{i,j}\geq 0$ for $i=1,\cdots,M$, $j=1,\cdots,N$. Then the main diagonal elements of the matrix $D^{n}_{x,i}$ in (24) are negative and the off-diagonal elements are positive, and $D^{n}_{x,i}$ is clearly symmetric. In addition, $D^{n}_{x,i}$ is not only weakly diagonally dominant but also irreducible. Applying the Cottle theorem [44], $D^{n}_{x,i}$ has negative eigenvalues $\lambda<0$.

Let $Q^{n}$ represent the matrix $(I-2\tau D_{x}^{n})^{-1}$; the eigenvalues of $Q^{n}$ lie in $(0,1)$. On the other hand, the sum of the eigenvalues equals the sum of the diagonal elements, so the spectral radius of $D^{n}_{x,i}$ is strictly greater than $0$ because its main diagonal elements are strictly less than $0$. Denote by $L\in\left(0,1\right)$ the spectral radius of $Q^{n}$. For any $n,p$,

$\left\|u_{1}^{n+p}-u_{1}^{n}\right\|=\left\|\left(Q^{n+p-1}\cdots Q^{n}-I\right)\left(Q^{n-1}\cdots Q^{0}\right)\left(u^{0}+F^{0}\right)\right\|\leq L^{n}\left\|u^{0}+F^{0}\right\|.$ (29)

This inequality indicates that $u_{1}^{n}$ is a convergent sequence. Similarly, the convergence of $u_{2}^{n}$ can be obtained. Thus the semi-implicit AOS Algorithm 1 corresponding to (7) is unconditionally stable for any $\tau>0$.

Remark.

While there is an additional term involving $\nabla u$ in the $\mathbf{P}$ operator in (23), this term cannot be incorporated into the operator $g$. On the one hand, the $\mathbf{P}$ operator fails to ensure the positive definiteness of the coefficient of $\nabla u$, so the prerequisite conditions on $g$ cannot be satisfied; hence unconditional stability cannot be guaranteed, resulting in oscillatory behavior during the iterative process. On the other hand, overly complex diffusion coefficients may to some extent conflict with each other, introducing ambiguity into the diffusion process. This underscores that the AOS framework fundamentally operates as a second-order solution framework. Consequently, the inclusion of high-order derivatives in the diffusion coefficients for the $x$ and $y$ directions should be approached cautiously; an excessive presence of high-order terms may lead to algorithmic failure.

4.3 Numerical Discretization of the Scalar Auxiliary Variable Scheme

While the AOS algorithm has successfully solved numerous nonlinear models, it is accompanied by the drawbacks of reduced accuracy and 'streaking artifacts' [45, 46]. It is worth noting that, to solve energy functional minimization problems, Shen et al. [40, 47] transformed the gradient flow of the original problem by introducing a scalar auxiliary variable and proposed the scalar auxiliary variable (SAV) method. As a burgeoning numerical algorithm in the field of image processing, SAV provides a general and efficient solution framework with a simple numerical scheme, accurate computation, and unconditional stability [48, 41, 49].

We redefine the energy functional (7), which can be rewritten as

$\varepsilon\left[u\right]=\frac{\gamma}{2}\left(u,Lu\right)+\varepsilon_{1}\left[u\right],$ (30)

where $L$ is a non-negative symmetric linear operator, typically independent of $u$, and $\varepsilon_{1}\left[u\right]$ contains the nonlinear terms related to $u$. As all terms in problem (7) are nonlinear operators, we take $\varepsilon_{1}\left[u\right]$ equal to the total energy functional minus the linear inner-product term. This choice ensures that $\varepsilon\left[u\right]$ still equals the total energy functional, allowing the two to be harmonized into a unified form. Furthermore, the $L^{2}$ gradient flow corresponding to the minimization of the functional (30) is

$\frac{\partial u}{\partial t}=-\frac{\delta\varepsilon}{\delta u}.$ (31)

By the chain rule, the derivative of the energy with respect to time satisfies

$\frac{\mathrm{d}\varepsilon[u]}{\mathrm{d}t}=\frac{\delta\varepsilon}{\delta u}\cdot\frac{\partial u}{\partial t}=\left(\frac{\delta\varepsilon}{\delta u},-\frac{\delta\varepsilon}{\delta u}\right)=-\left\|\frac{\delta\varepsilon}{\delta u}\right\|_{2}^{2}\leq 0.$ (32)

Therefore, the gradient descent method maintains the decrease of the energy during the evolution process. The objective functional possesses a well-defined lower bound, so there exists a positive constant $C>0$ such that $\varepsilon_{1}\left[u\right]=E\left(u\right)-\frac{\gamma}{2}\left(u,Lu\right)+C>0$. To facilitate the analysis, we introduce the scalar auxiliary variable $r=\sqrt{\varepsilon_{1}}$ and present the equivalent form of the gradient flow within the SAV framework

$\frac{\partial u}{\partial t}=-\mu,$ (33a)
$\mu=\gamma Lu+\frac{r}{\sqrt{\varepsilon_{1}[u]}}\varepsilon_{1}^{\prime}\left(u\right),$ (33b)
$r_{t}=\frac{1}{2\sqrt{\varepsilon_{1}[u]}}\int_{\Omega}\varepsilon_{1}^{\prime}\left(u\right)u_{t}\,\mathrm{d}x.$ (33c)

Next, we introduce the first-order scheme of SAV

$\frac{u^{n+1}-u^{n}}{\tau}=-\mu^{n+1},$ (34a)
$\mu^{n+1}=\gamma Lu^{n+1}+\frac{r^{n+1}}{\sqrt{\varepsilon_{1}}}\varepsilon_{1}^{\prime}\left(u^{n}\right),$ (34b)
$r^{n+1}-r^{n}=\frac{1}{2\sqrt{\varepsilon_{1}}}\int_{\Omega}\varepsilon_{1}^{\prime}\left(u^{n}\right)\left(u^{n+1}-u^{n}\right)\mathrm{d}x.$ (34c)

Although the update of $u$ in (34) may appear intricate, the computational cost is dominated by solving a linear system with constant coefficients, so the associated cost is comparatively modest. Moreover, we can efficiently solve (34) through the following process.

Substituting (34b) and (34c) into (34a) leads to

$\left(I+\tau\gamma L\right)u^{n+1}+\frac{\tau}{2}b^{n}\left(b^{n},u^{n+1}\right)=c^{n},$ (35)

where

$I\left(x\right)=x,\qquad b^{n}=\frac{\varepsilon_{1}^{\prime}\left(u^{n}\right)}{\sqrt{\varepsilon_{1}}},\qquad c^{n}=u^{n}-\tau r^{n}b^{n}+\frac{\tau}{2}b^{n}\left(b^{n},u^{n}\right).$ (36)

Multiplying (35) by $b^{n}$ and integrating, we obtain

$\left(I+\tau\gamma L\right)\left(b^{n},u^{n+1}\right)+\frac{\tau}{2}\left\|b^{n}\right\|_{2}^{2}\left(b^{n},u^{n+1}\right)=\left(b^{n},c^{n}\right).$ (37)

Then there holds

$\left(b^{n},u^{n+1}\right)=\frac{\left(b^{n},\left(I+\tau\gamma L\right)^{-1}c^{n}\right)}{1+\frac{\tau}{2}\left(b^{n},\left(I+\tau\gamma L\right)^{-1}b^{n}\right)}.$ (38)

Finally, we substitute (38) back into (35) to get $u^{n+1}$, which gives

$u^{n+1}=\left(I+\tau\gamma L\right)^{-1}\left(c^{n}-\frac{\tau}{2}b^{n}\left(b^{n},u^{n+1}\right)\right).$ (39)

In summary, the first-order scheme of the SAV algorithm is summarized in Algorithm 2.

Algorithm 2 The SAV with an adaptive time stepping strategy for the mixed geometry information model
1: Input: noisy image $f$, time step size $\tau$.
2: Output: final denoised result $u^{F}$.
3: Initialization: $u^{0}\longleftarrow f$
4: for $n=0,\cdots$ do
5:     Compute $b^{n}$ and $c^{n}$ through formula (36) or (43)
6:     Compute $\left(b^{n},u^{n+1}\right)$ through formula (38) or (45)
7:     Compute $u^{n+1}$ through formula (39) or (46)
8:     Compute $\tau_{n+1}$ through formula (47)
9:     Stop when the stopping criterion is reached and output the denoising result $u^{F}$
10: end for
11: Calculate the relevant quality evaluation metrics and record the computation time.
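The following Python sketch illustrates the first-order SAV update (35)-(39) under simplifying assumptions of ours: the operator L is taken, purely for illustration, as the discrete negative Laplacian with Neumann boundaries (diagonalized by the DCT), and eps1/eps1_prime are user-supplied evaluations of $\varepsilon_{1}[u]$ (shifted by $C>0$) and its variational derivative; none of these choices is fixed by the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def sav_step(u, r, eps1, eps1_prime, tau, gamma):
    """One first-order SAV update, cf. (35)-(39)."""
    M, N = u.shape
    # eigenvalues of L = -Laplacian under DCT-II (Neumann boundaries)
    lam_x = 4.0 * np.sin(np.pi * np.arange(M) / (2 * M)) ** 2
    lam_y = 4.0 * np.sin(np.pi * np.arange(N) / (2 * N)) ** 2
    denom = 1.0 + tau * gamma * (lam_x[:, None] + lam_y[None, :])

    def solve(v):  # apply (I + tau*gamma*L)^{-1} via the cosine transform
        return idctn(dctn(v, norm='ortho') / denom, norm='ortho')

    b = eps1_prime(u) / np.sqrt(eps1(u))                   # b^n in (36)
    c = u - tau * r * b + 0.5 * tau * b * (b * u).sum()    # c^n in (36)
    bu = (b * solve(c)).sum() / (1.0 + 0.5 * tau * (b * solve(b)).sum())  # (38)
    u_new = solve(c - 0.5 * tau * b * bu)                  # (39)
    r_new = r + 0.5 * (b * (u_new - u)).sum()              # (34c)
    return u_new, r_new
```

The only operator inversion needed is the constant-coefficient solve, which is why each SAV step is cheap regardless of how nonlinear $\varepsilon_{1}$ is.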

SAV schemes of second or higher order can also be constructed easily and typically exhibit energy stability. Therefore the selection of the time step size becomes less critical, as it need not adhere to the CFL condition. In what follows, we introduce a second-order scheme within the SAV framework, employing the Crank-Nicolson scheme.

$\frac{u^{n+1}-u^{n}}{\tau}=-\mu^{n+\frac{1}{2}},$ (40a)
$\mu^{n+\frac{1}{2}}=\gamma L\left[\frac{1}{2}\left(u^{n}+u^{n+1}\right)\right]+\frac{r^{n+1}+r^{n}}{2\sqrt{\varepsilon_{1}\left[\widetilde{u}^{n+\frac{1}{2}}\right]}}\varepsilon_{1}^{\prime}\left(\widetilde{u}^{n+\frac{1}{2}}\right),$ (40b)
$r^{n+1}-r^{n}=\int_{\Omega}\frac{\varepsilon_{1}^{\prime}\left(\widetilde{u}^{n+\frac{1}{2}}\right)}{2\sqrt{\varepsilon_{1}\left[\widetilde{u}^{n+\frac{1}{2}}\right]}}\left(u^{n+1}-u^{n}\right)\mathrm{d}x,$ (40c)

where the value of $\widetilde{u}^{n+\frac{1}{2}}$ can be approximated by an explicit extrapolation, for example

$\widetilde{u}^{n+\frac{1}{2}}=\frac{1}{2}\left(3u^{n}-u^{n-1}\right).$ (41)

The iterative update of $u^{n}$ can then be derived by a process similar to that of the first-order scheme. First, substituting (40b) and (40c) into (40a), we get

$\left(I+\frac{\tau}{2}\gamma L\right)u^{n+1}+\frac{\tau}{4}b^{n}\left(b^{n},u^{n+1}\right)=c^{n},$ (42)

where

$b^{n}=\frac{\varepsilon_{1}^{\prime}\left(\widetilde{u}^{n+\frac{1}{2}}\right)}{2\sqrt{\varepsilon_{1}\left[\widetilde{u}^{n+\frac{1}{2}}\right]}},\qquad c^{n}=u^{n}-\frac{\tau}{2}\gamma Lu^{n}-\tau b^{n}r^{n}+\frac{\tau}{4}b^{n}\left(b^{n},u^{n}\right).$ (43)

Similarly, multiplying (42) by $b^{n}$ and integrating, there holds

$\left(I+\frac{\tau}{2}\gamma L\right)\left(b^{n},u^{n+1}\right)+\frac{\tau}{4}\left\|b^{n}\right\|_{2}^{2}\left(b^{n},u^{n+1}\right)=\left(b^{n},c^{n}\right).$ (44)

This yields

$\left(b^{n},u^{n+1}\right)=\frac{\left(b^{n},\left(I+\frac{\tau}{2}\gamma L\right)^{-1}c^{n}\right)}{1+\frac{\tau}{4}\left(b^{n},\left(I+\frac{\tau}{2}\gamma L\right)^{-1}b^{n}\right)}.$ (45)

Finally, combining (42) and (45) leads to the iterative update of $u^{n+1}$

$u^{n+1}=\left(I+\frac{\tau}{2}\gamma L\right)^{-1}\left(c^{n}-\frac{\tau}{4}b^{n}\left(b^{n},u^{n+1}\right)\right).$ (46)
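Under the same assumptions as the first-order sketch above, the Crank-Nicolson variant (40)-(46) differs only in the extrapolated evaluation point (41), the halved diffusion factor, and the quartered rank-one correction. A condensed sketch (apply_L and solve_half are assumed helpers applying $L$ and $(I+\frac{\tau}{2}\gamma L)^{-1}$, for instance via the DCT as before):

```python
import numpy as np

def sav2_step(u, u_prev, r, eps1, eps1_prime, tau, gamma, apply_L, solve_half):
    """One second-order (Crank-Nicolson) SAV update, cf. (40)-(46).

    u_prev is u^{n-1}, used for the extrapolation (41)."""
    u_mid = 1.5 * u - 0.5 * u_prev                           # (41)
    b = eps1_prime(u_mid) / (2.0 * np.sqrt(eps1(u_mid)))     # b^n in (43)
    c = (u - 0.5 * tau * gamma * apply_L(u)                  # c^n in (43)
         - tau * b * r + 0.25 * tau * b * (b * u).sum())
    bu = (b * solve_half(c)).sum() / (1.0 + 0.25 * tau * (b * solve_half(b)).sum())  # (45)
    u_new = solve_half(c - 0.25 * tau * b * bu)              # (46)
    r_new = r + (b * (u_new - u)).sum()                      # (40c)
    return u_new, r_new
```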

During the practical evolution process, the energy functional may decay rapidly within a short period and then change only slightly afterwards. Therefore, it is advantageous to use a small step size when the energy changes rapidly and a relatively larger step size during periods of energy stability. To this end, we implement an adaptive time step strategy following [47],

$\tau^{n+1}=\max\left\{\tau_{\min},\min\left\{\rho\left(\frac{tol}{e}\right)^{1/2}\tau^{n},\tau_{\max}\right\}\right\},$ (47)

where $\rho$ is a default safety coefficient, $tol$ is a constant representing the reference tolerance, $e$ is the relative error in the iteration, and $\tau_{\min}$ and $\tau_{\max}$ are the minimum and maximum time step sizes.
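As a small illustrative helper (the default parameter values below are placeholders of ours, not the paper's settings), formula (47) translates directly into:

```python
def adaptive_tau(tau, e, tol=1e-3, rho=0.9, tau_min=0.1, tau_max=1.2):
    """Adaptive time step of (47): shrink tau when the relative error e
    is large, grow it when e is small, clamped to [tau_min, tau_max]."""
    return max(tau_min, min(rho * (tol / max(e, 1e-12)) ** 0.5 * tau, tau_max))
```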

We now give the unconditional energy stability properties of the numerical schemes.

Theorem 2.

The energy functional $E(u)$ in the scheme (33) is monotonically decreasing:

$\frac{\mathrm{d}E(u)}{\mathrm{d}t}\leq 0.$
Proof.

Firstly, we multiply both sides of (33a) and (33b) by $\frac{\partial u}{\partial t}$ and integrate over the image domain $\Omega$,

$\left(\frac{\partial u}{\partial t},\frac{\partial u}{\partial t}\right)=-\left(\mu,\frac{\partial u}{\partial t}\right),$ (48)
$\left(\mu,\frac{\partial u}{\partial t}\right)=\gamma\left(Lu,\frac{\partial u}{\partial t}\right)+\frac{r}{\sqrt{\varepsilon_{1}[u]}}\int_{\Omega}\varepsilon_{1}^{\prime}\left(u\right)\frac{\partial u}{\partial t}\,\mathrm{d}x.$ (49)

After multiplying both sides of (33c) by $2r$ and combining the result with (48) and (49), we obtain

$2rr_{t}+\gamma\left(Lu,\frac{\partial u}{\partial t}\right)+\left(\frac{\partial u}{\partial t},\frac{\partial u}{\partial t}\right)=0.$ (50)

By the positive definiteness of the inner product, we have

$\frac{\partial}{\partial t}\left(\frac{\gamma}{2}\left(Lu,u\right)+r^{2}-C\right)\leq 0.$ (51)

It then follows that $\frac{\mathrm{d}E(u)}{\mathrm{d}t}\leq 0$.

Multiply the three equations of (34) by $\mu^{n+1}$, $\frac{u^{n+1}-u^{n}}{\tau}$, and $\frac{2r^{n+1}}{\tau}$, respectively. Then integrate (34a) and (34b) and sum them up. We readily derive the following theorem on energy dissipation; the proof follows the one given in [47].

Theorem 3.

The scheme (34) is first-order accurate and unconditionally energy stable in the sense that

$\frac{1}{\tau}\left(\varepsilon(u^{n+1})-\varepsilon(u^{n})\right)+\frac{1}{2}\left(\frac{\gamma}{\tau}\left(u^{n+1}-u^{n},L\left(u^{n+1}-u^{n}\right)\right)+\left(r^{n+1}-r^{n}\right)^{2}\right)=-\left(\mu^{n+1},\mu^{n+1}\right)\leq 0.$

Similarly, multiply the three equations of scheme (40) by $\mu^{n+\frac{1}{2}}$, $\frac{u^{n+1}-u^{n}}{\tau}$, and $\frac{r^{n+1}+r^{n}}{\tau}$, respectively, integrate the first two equations, and finally sum up all three. Using the linearity and symmetry of $L$, we obtain the following theorem.

Theorem 4.

The scheme (40) is second-order accurate and unconditionally energy stable in the sense that

$\frac{1}{\tau}\left(\varepsilon(u^{n+1})-\varepsilon(u^{n})\right)=-\left(\mu^{n+\frac{1}{2}},\mu^{n+\frac{1}{2}}\right)\leq 0.$

5 Numerical Experiments

In this section, we evaluate the performance of the proposed model and algorithms through numerical experiments on the multiplicative denoising problem. Specifically, we use the Peak Signal-to-Noise Ratio (PSNR) and the Structural SIMilarity (SSIM) [50] as quantitative metrics to measure the quality of the recovered images. PSNR and SSIM are defined as

$\mathrm{PSNR}=10\log_{10}\left(\frac{255^{2}}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}[\hat{u}(i,j)-u(i,j)]^{2}}\right),$
$\mathrm{SSIM}=\frac{\left(2\mu_{u}\mu_{\hat{u}}+c_{1}\right)\left(2\sigma_{u\hat{u}}+c_{2}\right)}{\left(\mu_{u}^{2}+\mu_{\hat{u}}^{2}+c_{1}\right)\left(\sigma_{u}^{2}+\sigma_{\hat{u}}^{2}+c_{2}\right)},$ (52)

where $u$ is the original image of size $M\times N$, $\hat{u}$ is the restored image, $\mu_{u}$ and $\mu_{\hat{u}}$ are the means of $u$ and $\hat{u}$, $\sigma_{u}^{2}$ and $\sigma_{\hat{u}}^{2}$ are the variances of $u$ and $\hat{u}$, $\sigma_{u\hat{u}}$ is the covariance of $u$ and $\hat{u}$, and $c_{1}>0$ and $c_{2}>0$ are constants.
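A minimal evaluation helper matching (52) might look as follows; PSNR is computed directly from the definition, and scikit-image (an assumption of this sketch) supplies a standard SSIM implementation:

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(u, u_hat, peak=255.0):
    """PSNR from (52), for images in [0, 255]."""
    mse = np.mean((u_hat - u) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def evaluate(u, u_hat):
    """Return the (PSNR, SSIM) pair used throughout Sect. 5."""
    return psnr(u, u_hat), structural_similarity(u, u_hat, data_range=255.0)
```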

5.1 Parameter Discussion for the AOS Algorithm

As mentioned in Section 4.2, the AOS algorithm is absolutely stable, so the time step can be selected arbitrarily as far as stability is concerned. However, although an excessively large time step can greatly improve computational efficiency, it also leads to overly fast diffusion and misses the best denoising result. Therefore, it is important to find a compromise step size that balances efficiency and accuracy. To obtain a suitable time step size, we tested the time step size $\tau$ at four different scales while keeping the other parameters fixed.

Figure 4: The denoising effect of the image with noise level $L=10$ under different time step sizes, with $b=0.01$ and $\eta=0.01$. The optimal number of iterations, PSNR, and SSIM values are: (a) $\tau=1$: 76, 25.96, 0.7881; (b) $\tau=2$: 42, 26.02, 0.7878; (c) $\tau=5$: 20, 26.02, 0.7879; (d) $\tau=10$: 11, 25.76, 0.7749

As observed, Fig. 4c and Fig. 4d are blurred, losing some details. Compared with $\tau=1$, $\tau=2$ halves the computation time while hardly losing any recovered information. Therefore, we set the time step $\tau=2$ when the noise level is $L=10$.

5.2 Parameter Discussion for the SAV Algorithm

In this section, we take the first-order SAV algorithm as an example; the second-order SAV algorithm produces similar experimental results. We test an important parameter of the algorithm, the constant $C$. As mentioned above, the energy functional itself has a lower bound, and a constant must be added to ensure its non-negativity. The optimization of the modified functional is equivalent to that of the original functional, but in numerical computation the results may differ as $C$ changes.

Figure 5: The denoising effect of the image with noise level $L=10$ under different constants $C$, with $b=0.001$, $\eta=0.15$, $\tau_{\min}=0.8$, $\tau_{\max}=1.2$. The CPU time, PSNR, and SSIM values are: (a) $C=10^{7}$: 15.9242, 26.48, 0.7948; (b) $C=10^{8}$: 18.1214, 26.39, 0.7935; (c) $C=10^{9}$: 12.1957, 26.46, 0.7733; (d) $C=10^{10}$: 11.9296, 26.31, 0.7894
Figure 6: The denoising effect of the image with noise level $L=10$ under different constants $C$, with $b=0.01$, $\eta=0.3$, $\tau_{\min}=0.1$, $\tau_{\max}=0.5$. The CPU time, PSNR, and SSIM values are: (a) $C=10^{12}$: 19.0688, 27.14, 0.8008; (b) $C=10^{13}$: 15.8645, 27.10, 0.7995; (c) $C=10^{14}$: 17.0544, 27.13, 0.7999; (d) $C=10^{15}$: 18.1358, 27.10, 0.7970

As observed, PSNR does not correlate strongly with the value of $C$, so we choose $C=10^{7}$. One issue needs to be pointed out: the denoising effect in Fig. 5d is extremely unstable. This is due to the rapid decay of the energy to a small magnitude during the evolution. Under the relative-error stopping criterion, the relative error easily falls below the agreed threshold when the constant is too large, so the loop exits early, producing unsatisfactory results. In addition, unlike the arbitrary choice of $C$ in the literature [41], our energy functional has a lower bound but may take negative values. Therefore, when $C$ is too small, it introduces a singularity into the equation, and the choice of $C$ depends on the image itself. For example, $C=10^{7}$ succeeds for Cameraman but fails for Aerial. Therefore, considering both the non-negativity of the auxiliary variable and the convergence of the model, we change the stopping criterion for Aerial to the absolute error, which effectively resolves this contradiction.

5.3 Denoising Performance

In this subsection, we compare the proposed model with several efficient multiplicative denoising models, including the AA model [8], the DD model [12], the MuLoG+BM3D model [51], the NTV model [20], the AAFD model [13], and the EE model [29], which is solved by the alternating direction method of multipliers.

To illustrate the superiority of the model and algorithms, we corrupted the images with gamma noise at levels $L$ of $1$, $4$, and $10$. Generally, all models can suppress noise well. However, the NTV model cannot effectively handle the high-noise situation $L=1$, because the expectation and variance of the reciprocal of gamma noise are infinite when $L=1$. To evaluate the models quantitatively, we report the PSNR and SSIM values of the recovery results of the various methods in Table 1, where the best results are marked in bold. We usually choose a not too large weight $b$ for the high-order term to alleviate the staircasing effect and assist the prior of the area term. On the one hand, in most cases the AOS algorithm obtains experimental results quickly, but it also incurs large errors, as can be seen in the results for $L=1$ and $L=4$ on "Halo" in Table 1. Therefore, we recommend using the SAV algorithm to obtain higher-quality experimental results.

Figure 7: Denoised results of Aerial. (a) noisy, $L=1$; (b) AA; (c) DD; (d) MuLoG+BM3D; (e) EE; (f) AOS; (g) SAV1; (h) SAV2
Figure 8: Denoised results of Brain. (a) noisy, $L=1$; (b) AA; (c) DD; (d) MuLoG+BM3D; (e) EE; (f) AOS; (g) SAV1; (h) SAV2
Figure 9: Denoising results of Halo at noise level $L=1$. (a) noisy; (b) AA; (c) DD; (d) MuLoG+BM3D; (e) EE; (f) AOS; (g) SAV1; (h) SAV2
Figure 10: Denoising results of Aerial at noise level $L=4$. (a) AA; (b) DD; (c) MuLoG+BM3D; (d) NTV; (e) EE; (f) AOS; (g) SAV1; (h) SAV2
Figure 11: Denoising results of Brain at noise level $L=4$. (a) AA; (b) DD; (c) MuLoG+BM3D; (d) NTV; (e) EE; (f) AOS; (g) SAV1; (h) SAV2
Figure 12: Denoising results of Halo at noise level $L=4$. (a) AA; (b) DD; (c) MuLoG+BM3D; (d) NTV; (e) EE; (f) AOS; (g) SAV1; (h) SAV2
Figure 13: Denoising results of Aerial at noise level $L=10$. (a) AA; (b) DD; (c) MuLoG+BM3D; (d) NTV; (e) EE; (f) AOS; (g) SAV1; (h) SAV2
Figure 14: Denoising results of Brain at noise level $L=10$. (a) AA; (b) DD; (c) MuLoG+BM3D; (d) NTV; (e) EE; (f) AOS; (g) SAV1; (h) SAV2
Figure 15: Denoising results of Halo at noise level $L=10$. (a) AA; (b) DD; (c) MuLoG+BM3D; (d) NTV; (e) EE; (f) AOS; (g) SAV1; (h) SAV2
Figure 16: Denoising results of Dartboard at noise level $L=10$. (a) AA; (b) DD; (c) MuLoG+BM3D; (d) NTV; (e) EE; (f) AOS; (g) SAV1; (h) SAV2

When $L=1$, an obvious feature is that quite a few white outlier points appear randomly in the image. An advantage of diffusion-equation-based methods is that they smooth the image at any noise level; in contrast, the random outliers that appear in MuLoG's denoising results produce bad visual effects. These phenomena are depicted in Fig. 9. In addition, DD, EE, and our model, all based on adaptive area denoising, can restore the various areas of the image to varying degrees and protect the real texture of the image well. Meanwhile, the MuLoG method can effectively preserve the contrast and the fine textures of the image. From Figs. 8, 11, and 14, we can clearly see that the AA model produces relatively large flat regions, a defect prone to occur in low-order models. Although NTV removes noise well, due to the limitations of the model itself it struggles with strong noise, and dirty spots remain in Fig. 11 that cannot be removed. It is worth pointing out that, due to its use of block-matching, the MuLoG method often produces artificial boundaries, as can easily be seen at the center of Dartboard in Fig. 16. Moreover, for "Halo" this method fails badly: false boundaries occupy almost the entire image, which is unacceptable. Compared with the DD and EE models, our proposed model protects boundaries very well and obtains denoising results with richer details; this can be seen further in the surface plot of "Halo" in Fig. 17. From Fig. 16 we can see that our model performs best among all models in maintaining the smoothness and connectivity of image lines, which benefits from the curvature prior's regularization of image curves.

[Figure 17: Surface plot of denoised Halo at noise level $L=10$. Panels: (a) Clean, (b) Noisy, (c) AA, (d) DD, (e) MuLoG, (f) NTV, (g) EE, (h) AOS, (i) SAV1, (j) SAV2]

As can be seen from Table 1, although MuLoG produces more artificial boundaries, it still achieves the best PSNR and SSIM on some images. Compared with the other methods, our model effectively avoids staircasing effects on natural images, SAR images, ultrasound images, and synthetic images, while protecting detailed textures and smoothing noise. In particular, for “Halo”, a synthetic image with large intensity slopes, our proposed model achieves remarkable results: it almost recovers the original image, far exceeding the other models. Our model is therefore very well suited to denoising images dominated by smooth gradients.
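For reference, PSNR and SSIM values of the kind reported in Table 1 can be computed with scikit-image, as in the minimal sketch below; the file names are hypothetical placeholders, and the images are assumed to be 8-bit grayscale stored as floating-point arrays in [0, 255].

# Minimal sketch of the PSNR/SSIM evaluation behind Table 1 (assumed setup:
# 8-bit grayscale images loaded as float arrays with data range [0, 255]).
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = io.imread("aerial_clean.png").astype(np.float64)        # hypothetical file
denoised = io.imread("aerial_denoised.png").astype(np.float64)  # hypothetical file

psnr = peak_signal_noise_ratio(clean, denoised, data_range=255)
ssim = structural_similarity(clean, denoised, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.2f}")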

Table 1: PSNR and SSIM between various multiplicative denoising methods on test images corrupted by Gamma noise (best values shown in bold, marked \bm{})

Image     Model            PSNR                               SSIM
                           L=1        L=4        L=10         L=1        L=4        L=10
Aerial    AA               21.66      23.76      19.86        0.45       0.65       0.57
          DD               22.39      24.94      25.97        0.48       0.68       \bm{0.80}
          MuLoG+BM3D       22.18      24.86      \bm{27.30}   0.40       0.65       \bm{0.80}
          NTV              -          23.43      24.07        -          0.66       0.64
          EE               \bm{22.62} 23.45      26.79        \bm{0.51}  0.59       0.78
          AOS              22.52      24.85      26.92        0.50       0.68       0.79
          SAV1             22.54      \bm{24.92} 27.11        0.50       \bm{0.69}  \bm{0.80}
          SAV2             22.53      24.88      27.02        0.49       0.68       \bm{0.80}
Brain     AA               16.83      19.75      20.32        0.68       0.81       0.57
          DD               18.93      22.01      24.39        0.74       0.85       0.90
          MuLoG+BM3D       \bm{19.51} 22.47      24.50        \bm{0.78}  \bm{0.87}  \bm{0.91}
          NTV              -          21.22      23.65        -          0.83       0.87
          EE               18.93      21.04      23.24        0.75       0.84       0.89
          AOS              19.23      22.12      24.10        0.75       0.84       0.88
          SAV1             19.36      \bm{22.55} 24.73        0.74       0.86       0.90
          SAV2             19.39      22.50      \bm{24.75}   0.75       0.86       0.90
Halo      AA               27.86      31.50      33.31        0.94       0.98       0.99
          DD               33.00      36.66      39.05        \bm{0.99}  \bm{0.99}  \bm{1.00}
          MuLoG+BM3D       26.20      30.79      33.31        0.78       0.87       0.88
          NTV              -          27.22      27.88        -          0.98       0.97
          EE               29.89      34.93      36.74        0.92       0.94       0.94
          AOS              32.13      37.18      \bm{40.66}   \bm{0.99}  \bm{0.99}  \bm{1.00}
          SAV1             \bm{33.97} \bm{37.75} 40.60        \bm{0.99}  \bm{0.99}  0.99
          SAV2             \bm{33.97} 37.58      40.63        0.96       \bm{0.99}  0.99
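For context, degraded test inputs at levels $L=1,4,10$ can be generated as in the following sketch, which draws gamma noise with mean 1 and variance $1/L$ (shape $L$, scale $1/L$) and applies it multiplicatively; the constant test image and the seed are stand-ins.

# Sketch: corrupt a clean image u with multiplicative gamma noise f = u * eta,
# where eta ~ Gamma(shape=L, scale=1/L), so E[eta] = 1 and Var[eta] = 1/L.
import numpy as np

rng = np.random.default_rng(0)

def add_gamma_noise(u, L):
    eta = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)
    return u * eta

u = np.full((256, 256), 128.0)       # stand-in for a clean test image
for L in (1, 4, 10):
    f = add_gamma_noise(u, L)
    print(L, f.mean(), f.var())      # mean stays near 128; variance ~ 128**2 / L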

5.4 Comparison between First-order and Second-order SAV

To better verify the advantages of the second-order SAV method, the PSNR and energy decay curves of the first- and second-order schemes on the image “Hole” are shown in Fig. 18. All parameters are kept equal except the time step size. We can clearly see that the second-order scheme reaches the best denoising result after about 80 iterations, whereas the first-order scheme requires about 200 iterations to achieve the same effect. The second-order SAV algorithm thus attains the best result faster, and Fig. 18b shows that its energy also decays more rapidly. This means that the second-order scheme admits a larger step size at the same error accuracy, which is consistent with the theoretical error analysis of the numerical scheme. To further rule out acceleration caused purely by the step size, we run both schemes with the same time step. As shown in Fig. 19, the second-order algorithm still performs well under the same step size and retains a speed advantage over the first-order algorithm. In addition, owing to the properties of the model itself, the PSNR convergence of the two algorithms is almost identical, but after zooming in, the second-order curve stays slightly higher by a small margin. Hence, under the same step size, the second-order algorithm improves the denoising effect to a certain extent compared with the first-order algorithm.
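To make the structure of the schemes concrete, the sketch below runs the first-order SAV iteration on a toy Allen-Cahn gradient flow over a periodic 1D grid. It is only a schematic illustration of the scalar update for $r$ and the constant-coefficient linear solves per step, with a stand-in energy and stand-in parameters, not the paper’s mixed geometry model.

# Schematic first-order SAV iteration for the toy gradient flow
# u_t = eps2 * u_xx - F'(u) with F(u) = (u^2 - 1)^2 / 4 (Allen-Cahn),
# split as E(u) = (eps2/2)||u_x||^2 + E1(u), auxiliary r = sqrt(E1(u) + C).
import numpy as np

N, h = 256, 1.0 / 256                      # periodic grid on [0, 1)
x = np.arange(N) * h
k = 2 * np.pi * np.fft.fftfreq(N, d=h)     # Fourier wavenumbers
eps2, tau, C = 1e-3, 0.1, 1e8              # stand-in diffusion, step, constant

u = np.cos(2 * np.pi * x) + 0.1 * np.sin(6 * np.pi * x)   # toy initial data
E1 = lambda v: h * np.sum((v**2 - 1) ** 2) / 4            # nonlinear energy
dE1 = lambda v: v**3 - v                                  # its variation
r = np.sqrt(E1(u) + C)                                    # auxiliary scalar

def solve(rhs):                            # (I - tau*eps2*Dxx)^{-1} via FFT
    return np.real(np.fft.ifft(np.fft.fft(rhs) / (1.0 + tau * eps2 * k**2)))

for _ in range(100):
    b = dE1(u) / np.sqrt(E1(u) + C)
    ubar, gamma = solve(u), solve(b)       # two solves, same constant operator
    # closed-form scalar update for r^{n+1}, then linear update for u^{n+1}
    r = (r + 0.5 * h * b @ (ubar - u)) / (1.0 + 0.5 * tau * h * (b @ gamma))
    u = ubar - tau * r * gamma

The second-order variant replaces this backward-Euler-type step with a BDF2/extrapolated version of the same two solves, which is why both schemes cost essentially the same per iteration while the latter converges in fewer steps.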

[Figure 18: Denoising results of Hole at noise level $L=10$. Panels: (a) PSNR, (b) Energy. We fix $b=0.0001$, $\mathrm{Constant}=10^{8}$, $T=200$; $\tau_{min}=1.8$, $\tau_{max}=2$ in the second-order scheme and $\tau_{min}=0.8$, $\tau_{max}=1$ in the first-order scheme]
[Figure 19: Denoising results of Hole at noise level $L=10$. Panels: (a) PSNR, (b) Energy. We fix $b=0.0001$, $\mathrm{Constant}=10^{8}$, $T=200$, $\tau_{min}=1.8$, $\tau_{max}=2$ for both schemes]

6 Conclusions

In this paper, we propose a mixed geometry information model for the multiplicative denoising task. Built on the surface area and the curvature, the mixed geometry information term matches the geometric characteristics of images and protects their edges and texture. To efficiently address the difficulties posed by this highly nonlinear model, we propose AOS and SAV algorithms that achieve unconditional stability; the SAV algorithm is recommended for our model owing to its higher accuracy. Numerical experiments are conducted on SAR images, ultrasound images, and synthetic images, and the results demonstrate the superiority and excellent performance of our model. Compared with other high-order methods, our model maintains image contrast well and protects detailed texture and edge information. In addition, quantitative analysis shows that our proposed model compares favorably with other state-of-the-art multiplicative denoising models.

Data availability statement

No new data were created or analysed in this study.

Acknowledgements

The work is partially supported by the National Natural Science Foundation of China (Nos. U21B2075, 12171123, 12101158 and 12301536), the Fundamental Research Funds for the Central Universities (Nos. 2022FRFK060020 and 2022FRFK060029), and the Natural Science Foundation of Heilongjiang Province (No. ZD2022A001).

References


  • [1] J Patrick Fitch. Synthetic aperture radar. Springer Science & Business Media, 2012.
  • [2] Chris Oliver and Shaun Quegan. Understanding synthetic aperture radar images. SciTech Publishing, 2004.
  • [3] Joseph W Goodman. Statistical properties of laser speckle patterns. In Laser speckle and related phenomena, pages 9–75. Springer, 1975.
  • [4] Robert F Wagner, Stephen W Smith, John M Sandrik, and Hector Lopez. Statistics of speckle in ultrasound B-scans. IEEE Trans. Son. Ultrason., 30(3):156–163, 1983.
  • [5] John M Ollinger and Jeffrey A Fessler. Positron-emission tomography. IEEE Signal Proc. Mag., 14(1):43–55, 1997.
  • [6] Richard Bamler. Principles of synthetic aperture radar. Surv. Geophys., 21(2-3):147–157, 2000.
  • [7] Xiangchu Feng and Xiaolong Zhu. Models for multiplicative noise removal. In Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging, pages 1–34. Springer, 2021.
  • [8] Gilles Aubert and Jean-François Aujol. A variational approach to removing multiplicative noise. SIAM J. Appl. Math., 68(4):925–946, 2008.
  • [9] Jianing Shi and Stanley Osher. A nonlinear inverse scale space method for a convex multiplicative noise model. SIAM J. Imaging Sci., 1(3):294–321, 2008.
  • [10] Zhengmeng Jin and Xiaoping Yang. Analysis of a new variational model for multiplicative noise removal. J. Math. Anal. Appl., 362(2):415–426, 2010.
  • [11] Asmat Ullah, Wen Chen, Mushtaq Ahmad Khan, and HongGuang Sun. A new variational approach for multiplicative noise and blur removal. PLOS ONE, 12(1):1–26, 2017.
  • [12] Zhenyu Zhou, Zhichang Guo, Gang Dong, Jiebao Sun, Dazhi Zhang, and Boying Wu. A doubly degenerate diffusion model based on the gray level indicator for multiplicative noise removal. IEEE Trans. Image Process., 24(1):249–260, 2015.
  • [13] Wenjuan Yao, Zhichang Guo, Jiebao Sun, Boying Wu, and Huijun Gao. Multiplicative noise removal for texture images based on adaptive anisotropic fractional diffusion equations. SIAM J. Imaging Sci., 12(2):839–873, 2019.
  • [14] Satyakam Baraha, Ajit Kumar Sahoo, and Sowjanya Modalavalasa. A systematic review on recent developments in nonlocal and variational methods for SAR image despeckling. Signal Process., 196:108521, 2022.
  • [15] T. Teuber and A. Lang. A new similarity measure for nonlocal filtering in the presence of multiplicative noise. Comput. Statist. Data Anal., 56(12):3821–3842, 2012.
  • [16] Charles-Alban Deledalle, Loïc Denis, Florence Tupin, Andreas Reigber, and Marc Jäger. NL-SAR: a unified nonlocal framework for resolution-preserving (Pol)(In)SAR denoising. IEEE Trans. Geosci. Remote Sens., 53(4):2021–2038, 2014.
  • [17] Pedro AA Penna and Nelson DA Mascarenhas. Sar speckle nonlocal filtering with statistical modeling of haar wavelet coefficients and stochastic distances. IEEE Trans. Geosci. Remote Sens., 57(9):7194–7208, 2019.
  • [18] Leonid Rudin, Pierre-Louis Lions, and Stanley Osher. Multiplicative denoising and deblurring: theory and algorithms. In Stanley Osher and Nikos Paragios, editors, Geometric Level Set Methods in Imaging, Vision, and Graphics. Springer, 2003.
  • [19] Yu-Mei Huang, Michael K. Ng, and You-Wei Wen. A new total variation method for multiplicative noise removal. SIAM J. Imaging Sci., 2(1):20–40, 2009.
  • [20] Xi-Le Zhao, Fan Wang, and Michael K. Ng. A new convex optimization model for multiplicative noise and blur removal. SIAM J. Imaging Sci., 7(1):456–475, 2014.
  • [21] Tony Chan, Selim Esedoglu, Frederick Park, A Yip, et al. Recent developments in total variation image restoration. Math. Models Comput. Vis., 17(2):17–31, 2005.
  • [22] David Strong and Tony Chan. Edge-preserving and scale-dependent properties of total variation regularization. Inverse Problems, 19(6):S165–S187, 2003.
  • [23] Wei Zhu and Tony Chan. Image denoising using mean curvature of image surface. SIAM J. Imaging Sci., 5(1):1–32, 2012.
  • [24] K. Papafitsoros and C. B. Schönlieb. A combined first and second order variational approach for image reconstruction. J. Math. Imaging Vision, 48(2):308–338, 2014.
  • [25] Mushtaq Ahmad Khan, Wen Chen, and Asmat Ullah. Higher order variational multiplicative noise removal model. In 2015 International Conference on Computer and Computational Sciences (ICCCS), pages 116–118. IEEE, 2015.
  • [26] Li Gun, Li Cuihua, Zhu Yingpan, and Huang Feijiang. An improved speckle-reduction algorithm for sar images based on anisotropic diffusion. Multimedia Tools Appl., 76(17):17615–17632, 2017.
  • [27] Fuquan Ren and Roberta Rui Zhou. Optimization model for multiplicative noise and blur removal based on gaussian curvature regularization. J. Opt. Soc. Am. A, 35(5):798–812, 2018.
  • [28] Xinli Xu, Teng Yu, Xinmei Xu, Guojia Hou, Ryan Wen Liu, and Huizhu Pan. Variational total curvature model for multiplicative noise removal. IET Comput. Vis., 12(4):542–552, 2018.
  • [29] Yu Zhang, Songsong Li, Zhichang Guo, Boying Wu, and Shan Du. Image multiplicative denoising using adaptive Euler’s elastica as the regularization. J. Sci. Comput., 90(2):1–34, 2022.
  • [30] Guillermo Sapiro and Allen Tannenbaum. On invariant curve evolution and image analysis. Indiana Univ. Math. J., 42(3):985–1009, 1993.
  • [31] Anthony Yezzi. Modified curvature motion for image smoothing and enhancement. IEEE Trans. Image Process., 7(3):345–352, 1998.
  • [32] Ron Kimmel, Ravi Malladi, and Nir Sochen. Images as embedded maps and minimal surfaces: movies, color, texture, and volumetric medical images. Int. J. Comput. Vis., 39(2):111–129, 2000.
  • [33] Peihua Qiu and Partha Sarathi Mukherjee. Edge structure preserving 3d image denoising by local surface approximation. IEEE Trans. Pattern Anal. Mach. Intell., 34(8):1457–1468, 2011.
  • [34] Marius Lysaker, Arvid Lundervold, and Xue-Cheng Tai. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Trans. Image Process., 12(12):1579–1590, 2003.
  • [35] Xue-Cheng Tai, Jooyoung Hahn, and Ginmo Jason Chung. A fast algorithm for Euler’s elastica model using augmented Lagrangian method. SIAM J. Imaging Sci., 4(1):313–344, 2011.
  • [36] Xue-Cheng Tai. Fast numerical schemes related to curvature minimization: a brief and elementary review. Actes des rencontres du CIRM, 3(1):17–30, 2013.
  • [37] Yuping Duan, Yu Wang, and Jooyoung Hahn. A fast augmented Lagrangian method for Euler’s elastica models. Numer. Math. Theory Methods Appl., 6(1):47–71, 2013.
  • [38] Wei Zhu, Xue-Cheng Tai, and Tony Chan. Augmented Lagrangian method for a mean curvature based image denoising model. Inverse Probl. Imaging, 7(4):1409–1432, 2013.
  • [39] Luca Calatroni, Alessandro Lanza, Monica Pragliola, and Fiorella Sgallari. Adaptive parameter selection for weighted-tv image reconstruction problems. J. Phys. Conf. Ser., 1476(1):012003, 2020.
  • [40] Jie Shen, Jie Xu, and Jiang Yang. The scalar auxiliary variable (SAV) approach for gradient flows. J. Comput. Phys., 353:407–416, 2018.
  • [41] Chenxin Wang, Zhenwei Zhang, Zhichang Guo, Tieyong Zeng, and Yuping Duan. Efficient SAV algorithms for curvature minimization problems. IEEE Trans. Circuits Syst. Video Technol., 33(4):1624–1642, 2022.
  • [42] Martin Burger. A level set method for inverse problems. Inverse Problems, 17(5):1327–1355, 2001.
  • [43] Joachim Weickert, BM Ter Haar Romeny, and Max A Viergever. Efficient and reliable schemes for nonlinear diffusion filtering. IEEE Trans. Image Process., 7(3):398–410, 1998.
  • [44] Richard W. Cottle, Jong-Shi Pang, and Richard E. Stone. The linear complementarity problem, volume 60 of Classics in Applied Mathematics. SIAM, 2009.
  • [45] Dongbo Min, Sunghwan Choi, Jiangbo Lu, Bumsub Ham, Kwanghoon Sohn, and Minh N Do. Fast global image smoothing based on weighted least squares. IEEE Trans. Image Process., 23(12):5638–5653, 2014.
  • [46] Pablo F Alcantarilla, Jesús Nuevo, and Adrien Bartoli. Fast explicit diffusion for accelerated features in nonlinear scale spaces. In Proceedings of the British Machine Vision Conference (BMVC), 2013.
  • [47] Jie Shen, Jie Xu, and Jiang Yang. A new class of efficient and robust energy stable schemes for gradient flows. SIAM Rev., 61(3):474–506, 2019.
  • [48] Wenjuan Yao, Jie Shen, Zhichang Guo, Jiebao Sun, and Boying Wu. A total fractional-order variation model for image super-resolution and its SAV algorithm. J. Sci. Comput., 82(3):1–18, 2020.
  • [49] Xiangyu Bai, Jiebao Sun, Jie Shen, Wenjuan Yao, and Zhichang Guo. A Ginzburg-Landau-H1H^{-1} model and its SAV algorithm for image inpainting. J. Sci. Comput., 96(2):1–27, 2023.
  • [50] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process., 13(4):600–612, 2004.
  • [51] Charles-Alban Deledalle, Loïc Denis, Sonia Tabti, and Florence Tupin. MuLoG, or how to apply Gaussian denoisers to multi-channel SAR speckle reduction? IEEE Trans. Image Process., 26(9):4389–4403, 2017.