On the convergence and applications of inertial-like proximal point methods for null point problems
Abstract
Motivated and inspired by the discretization of a nonsmooth system describing a nonlinear oscillator with damping, we propose what we call inertial-like proximal point algorithms for finding a null point of the sum of two maximal monotone operators, a problem with many applications, such as convex optimization, variational inequality problems, and compressed sensing. The common feature of the presented algorithms is a new inertial-like proximal step that does not require computing the norm of the difference between two adjacent iterates in advance, and that avoids inertial parameters subject to the traditional, hard-to-verify conditions. Numerical experiments are presented to illustrate the performance of the algorithms.
2000 Mathematics Subject Classification: 65K10; 65K05; 47H10; 47L25.
Keywords: null point problems; inertial-like proximal methods; maximal monotone operators.
1 Introduction
Let $A : H \to 2^H$ be a set-valued operator in a real Hilbert space $H$.
(1) The graph of $A$ is given by $\operatorname{Graph}(A) = \{(x, u) \in H \times H : u \in Ax\}$.
(2) The operator $A$ is said to be monotone if $\langle x - y,\ u - v \rangle \ge 0$ for all $(x, u), (y, v) \in \operatorname{Graph}(A)$.
(3) The operator $A$ is said to be maximal monotone if $\operatorname{Graph}(A)$ is not properly contained in the graph of any other monotone operator. Let $A, B$ be two maximal monotone operators in $H$. We are concerned with the well-studied null point problem formulated as follows:
\[ \text{find } x^* \in H \text{ such that } 0 \in (A + B)x^*, \tag{1.1} \]
with solution set denoted by $\Omega$. An interesting special case of (1.1) is the minimization of the sum of two proper, lower semi-continuous and convex functions $f, g$, that is,
\[ \min_{x \in H}\, f(x) + g(x), \tag{1.2} \]
which is equivalent to (1.1) with $A$ and $B$ taken to be the subdifferentials of $f$ and $g$, respectively. We recall the resolvent operator, also called the backward operator, which plays a significant role in the approximation theory for zero points of maximal monotone operators. Following the work of Aoyama et al. [3], it is defined by
\[ J_\lambda^B := (I + \lambda B)^{-1}, \tag{1.3} \]
where $\lambda > 0$. Moreover, we record the following key facts that will be needed in the sequel.
Fact 1: The resolvent $J_\lambda^B$ is not only always single-valued but also firmly nonexpansive:
\[ \|J_\lambda^B x - J_\lambda^B y\|^2 \le \langle x - y,\ J_\lambda^B x - J_\lambda^B y \rangle \quad \text{for all } x, y \in H. \tag{1.4} \]
Fact 2: Using the resolvent operator, we can rewrite the inclusion problem (1.1) as a fixed point problem. It is known that, when $A$ is single-valued and $\lambda > 0$,
\[ x^* \in (A + B)^{-1}(0) \iff x^* = J_\lambda^B\big(x^* - \lambda A x^*\big). \]
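This equivalence follows from a one-line computation, recorded here for completeness:
\[ 0 \in A x^* + B x^* \iff x^* - \lambda A x^* \in (I + \lambda B)\,x^* \iff x^* = J_\lambda^B\big(x^* - \lambda A x^*\big). \]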
The Douglas–Rachford splitting method (or forward-backward splitting method) was first introduced in [10] as an operator splitting technique for solving partial differential equations arising in heat conduction, and was soon extended to finding zeros of the sum of two maximal monotone operators by Lions and Mercier [13]. The forward-backward splitting method is formulated as
\[ x_{n+1} = J_\lambda^B\big(x_n - \lambda A x_n\big), \tag{1.5} \]
where $\lambda > 0$ and $I - \lambda A$ is called the forward operator. Based on the method (1.5), many researchers improved and modified algorithms for the inclusion problem (1.1) and obtained nice results; see, e.g., Boikanyo [7], Dadashi and Postolache [8], Kazmi and Rizvi [12], Moudafi [20, 19], Sitthithakerngkiet et al. [27]. On the other hand, one classical way of looking at the null point problem is to consider the forward discretization of $A$ in the evolution system:
\[ 0 \in \dot{x}(t) + (A + B)x(t), \tag{1.6} \]
and then the evolution system is discretized as
\[ 0 \in \frac{x_{n+1} - x_n}{\lambda_n} + A x_n + B x_{n+1}, \]
which inspired Alvarez and Attouch [2] to introduce the inertial method for the following nonsmooth case of a nonlinear oscillator with damping:
\[ \ddot{x}(t) + \gamma \dot{x}(t) + \partial\Phi(x(t)) \ni 0, \tag{1.7} \]
where $\gamma > 0$ and $\Phi$ is a proper, lower semi-continuous and convex function. By discretizing, Alvarez and Attouch [2] obtained the implicit iterative scheme
\[ \frac{x_{n+1} - 2x_n + x_{n-1}}{h^2} + \gamma\,\frac{x_{n+1} - x_n}{h} + \partial\Phi(x_{n+1}) \ni 0, \tag{1.8} \]
where $h > 0$ is the step size, which yielded the Inertial-Prox algorithm
\[ x_{n+1} = J_{\lambda_n}^{\partial\Phi}\big(x_n + \alpha_n(x_n - x_{n-1})\big); \tag{1.9} \]
the extrapolation term $\alpha_n(x_n - x_{n-1})$ is called the inertial term. Note that when $\alpha_n = 0$, the recursion (1.9) corresponds to the standard proximal iteration
\[ x_{n+1} = J_{\lambda_n}^{\partial\Phi}(x_n), \]
which has been well studied by Martinet [15], Moreau [17] and other researchers; the weak convergence of $\{x_n\}$ to a solution has been well known since the classical work of Rockafellar [25]. To ensure the convergence of the Inertial-Prox sequence, Alvarez and Attouch [2] imposed the following key assumption: there exists $\alpha \in [0, 1)$ such that $0 \le \alpha_n \le \alpha$ for all $n$ and
\[ \sum_{n=1}^{\infty} \alpha_n \|x_n - x_{n-1}\|^2 < \infty. \tag{1.10} \]
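To make the practical difficulty concrete, the following minimal Python sketch (illustrative assumptions throughout, not the paper's algorithm) implements the classical Inertial-Prox step (1.9) and shows the run-time bookkeeping that condition (1.10) forces: $\alpha_n$ can only be chosen after $\|x_n - x_{n-1}\|$ is known. The quadratic $\Phi(x) = \frac{1}{2}\|x - b\|^2$ is a hypothetical choice with a closed-form resolvent.

```python
# Minimal sketch (illustrative, NOT the paper's algorithm) of the classical
# Inertial-Prox step (1.9), showing the bookkeeping forced by condition (1.10):
# alpha_n can only be chosen after ||x_n - x_{n-1}|| is known. Here
# Phi(x) = 0.5*||x - b||^2, so J_lam = (I + lam*grad Phi)^{-1} has the closed
# form (x + lam*b)/(1 + lam).
import numpy as np

b = np.array([1.0, -2.0, 3.0])                # unique zero of grad Phi is b
lam = 1.0                                     # proximal step size

def resolvent(x):
    return (x + lam * b) / (1.0 + lam)

x_prev, x = np.zeros(3), np.ones(3)
inertial_sum = 0.0                            # running sum in condition (1.10)
for n in range(100):
    diff2 = float(np.dot(x - x_prev, x - x_prev))
    # pick alpha_n AFTER seeing ||x_n - x_{n-1}||^2, so that the series
    # sum_n alpha_n*||x_n - x_{n-1}||^2 is dominated by sum_n 1/(n+1)^2
    alpha = min(0.9, 1.0 / ((n + 1) ** 2 * (diff2 + 1e-12)))
    inertial_sum += alpha * diff2
    y = x + alpha * (x - x_prev)              # inertial extrapolation
    x_prev, x = x, resolvent(y)               # proximal (backward) step

print("x =", x, " sum alpha_n*||dx||^2 =", inertial_sum)
```

The point of the sketch is only that the admissible $\alpha_n$ depends on iterate information that is not available before the step is taken, which is exactly the inconvenience the inertial-like technique below removes.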
The inertial method has been shown to have nice convergence properties in the field of continuous optimization and was soon studied intensively in split inverse problems by many authors, since it can in some situations be utilized to accelerate the convergence of the iterates. For some recent works applied to various fields, see Alvarez [1], Attouch et al. [6, 5, 4], Ochs et al. [23, 22]. Although Alvarez and Attouch [2] pointed out that one can choose an appropriate rule to make the assumption (1.10) applicable, the quantity $\|x_n - x_{n-1}\|$ involving the iterates has to be computed in advance. Namely, to make condition (1.10) hold, one has to constrain the inertial coefficient $\alpha_n$ and estimate the value of $\|x_n - x_{n-1}\|$ before choosing $\alpha_n$. In recent works, Gibali et al. [11] improved the inertial control condition by constraining the inertial coefficient accordingly.

For more works on inertial methods, one can refer to Dang et al. [9], Moudafi et al. [18], Suantai et al. [26], Tang [28], and the references therein. Theoretically, the condition (1.10) on the parameter $\alpha_n$ is rather strict for the convergence of the inertial algorithm. Practically, estimating the value of $\|x_n - x_{n-1}\|$ before choosing the inertial parameter may require a large amount of computation. These two drawbacks may make the Inertial-Prox method inconvenient in practice. So it is natural to ask the following question:

Question 1.1 Can we remove the condition (1.10) from the inertial method? Namely, can we construct a new inertial algorithm for solving (1.1) without any constraint on the inertial parameter and without computing the norm of the difference between $x_n$ and $x_{n-1}$ before choosing the inertial parameter?

The purpose of this paper is to give an affirmative answer to this question. We study the convergence of a new inertial-like technique for solving the null point problem (1.1) without the assumption (1.10) and without the prior computation of $\|x_n - x_{n-1}\|$ before choosing the inertial parameter. The outline of the paper is as follows. In Section 2, we collect definitions and results needed for our analysis. In Section 3, our novel approach to the null point problem is introduced and analyzed, and the convergence theorems of the presented algorithms are obtained. Convex optimization and variational inequality problems are studied as applications of the null point problem in Section 4. Finally, in Section 5, some numerical experiments using the inertial-like method are carried out to support our approach.
2 Preliminaries
Let $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ be the inner product and the induced norm in a Hilbert space $H$, respectively. For a sequence $\{x_n\}$ in $H$, denote by $x_n \to x$ and $x_n \rightharpoonup x$ the strong and weak convergence of $\{x_n\}$ to $x$, respectively. Moreover, the symbol $\omega_w(x_n)$ represents the weak limit set of $\{x_n\}$, that is, $\omega_w(x_n) = \{x \in H : x_{n_k} \rightharpoonup x \text{ for some subsequence } \{x_{n_k}\} \text{ of } \{x_n\}\}$.
The identity below is useful:
\[ \|\alpha x + (1 - \alpha)y\|^2 = \alpha\|x\|^2 + (1 - \alpha)\|y\|^2 - \alpha(1 - \alpha)\|x - y\|^2 \tag{2.1} \]
for all $x, y \in H$ and $\alpha \in [0, 1]$.
Definition 2.1.
Let $H$ be a real Hilbert space and $A : H \to H$ some given operator.
(1) The operator $A$ is said to be Lipschitz continuous with constant $L > 0$ on $H$ if
\[ \|Ax - Ay\| \le L\|x - y\| \quad \text{for all } x, y \in H. \]
(2) The operator $A$ is said to be $\beta$-cocoercive if there exists $\beta > 0$ such that
\[ \langle Ax - Ay,\ x - y \rangle \ge \beta\|Ax - Ay\|^2 \quad \text{for all } x, y \in H. \]
Remark 2.2.
(1) If $A$ is $\beta$-cocoercive, then it is Lipschitz continuous with constant $\frac{1}{\beta}$. For instance, by the Baillon–Haddad theorem, the gradient of a convex differentiable function with $L$-Lipschitz gradient is $\frac{1}{L}$-cocoercive.
(2) From Fact 1, we conclude that $J_\lambda^B$ is a nonexpansive operator whenever $B$ is a maximal monotone mapping.
Definition 2.3.
Let $C$ be a nonempty closed convex subset of $H$. We use $P_C$ to denote the metric projection from $H$ onto $C$; namely,
\[ P_C x = \operatorname*{arg\,min}_{y \in C} \|x - y\|, \quad x \in H. \]
The following significant characterization of the projection should be recalled: given $x \in H$ and $z \in C$,
\[ z = P_C x \iff \langle x - z,\ y - z \rangle \le 0 \quad \text{for all } y \in C. \tag{2.2} \]
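The characterization (2.2) can be verified numerically; a quick Python check with the hypothetical choice of $C$ as the closed unit ball (whose projection has a closed form) reads:

```python
# A quick numerical check of the characterization (2.2), with the hypothetical
# choice C = closed unit ball, whose projection is P_C(x) = x / max(1, ||x||).
import numpy as np

rng = np.random.default_rng(2)

def project_ball(x):                          # P_C for the closed unit ball
    return x / max(1.0, np.linalg.norm(x))

x = 3.0 * rng.standard_normal(3)              # a point outside (or inside) C
p = project_ball(x)
for _ in range(5):                            # sample points y in C
    y = project_ball(rng.standard_normal(3))
    assert np.dot(x - p, y - p) <= 1e-12      # (2.2): <x - P_C x, y - P_C x> <= 0
print("characterization (2.2) verified on random samples")
```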
Lemma 2.4.
(Xu [29]) Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
\[ a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n \delta_n + \epsilon_n, \quad n \ge 0, \]
where $\{\gamma_n\}$ is a sequence in $(0, 1)$, $\{\delta_n\}$ is a real sequence and $\{\epsilon_n\}$ is a nonnegative sequence such that
(1) $\sum_{n=1}^{\infty} \gamma_n = \infty$;
(2) $\limsup_{n \to \infty} \delta_n \le 0$ or $\sum_{n=1}^{\infty} |\gamma_n \delta_n| < \infty$;
(3) $\sum_{n=1}^{\infty} \epsilon_n < \infty$.
Then $\lim_{n \to \infty} a_n = 0$.
Lemma 2.5.
(see, e.g., Opial [21]) Let $H$ be a real Hilbert space and $\{x_n\}$ a bounded sequence in $H$. Assume there exists a nonempty subset $S \subset H$ satisfying the properties:
(i) $\lim_{n \to \infty} \|x_n - x\|$ exists for every $x \in S$;
(ii) $\omega_w(x_n) \subset S$.
Then there exists $\bar{x} \in S$ such that $\{x_n\}$ converges weakly to $\bar{x}$.
Lemma 2.6.
(Maingé [16]) Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Gamma_{n_j}\}$ of $\{\Gamma_n\}$ such that $\Gamma_{n_j} < \Gamma_{n_j + 1}$ for all $j \ge 0$. Consider the sequence of integers $\{\tau(n)\}_{n \ge n_0}$ defined by
\[ \tau(n) = \max\{k \le n : \Gamma_k < \Gamma_{k+1}\}. \]
Then $\{\tau(n)\}_{n \ge n_0}$ is a nondecreasing sequence verifying $\lim_{n \to \infty} \tau(n) = \infty$ and, for all $n \ge n_0$,
\[ \Gamma_{\tau(n)} \le \Gamma_{\tau(n)+1} \quad \text{and} \quad \Gamma_n \le \Gamma_{\tau(n)+1}. \]
3 Main Results
3.1 Motivation of Inertial-Like Proximal Technique
Inspired and motivated by the discretization (1.8), we consider the following iterative sequence,
where $x_0, x_1$ are two arbitrary initial points and $\{\theta_n\}$ is a sequence of nonnegative real numbers. This recursion can be rewritten as
(3.1)
The discretization sequence in (3.1) always exists, because a sequence satisfying (1.8) exists for any choice of the parameter sequence, according to Alvarez and Attouch [2]. In addition, it can be deduced that the formula (3.1) covers Alvarez and Attouch's Inertial-Prox algorithm as a special case. More relevantly, the inertial coefficient $\theta_n$ may be taken equal to 1 in our new inertial proximal point algorithms. We thus obtain what we call the inertial-like proximal point algorithm.
3.2 Some Conventions and Inertial-like Proximal Algorithms
C1: Throughout the rest of this paper, we always assume that $H$ is a real Hilbert space. We rephrase the null point problem as follows:
\[ \text{find } x^* \in H \text{ such that } 0 \in (A + B)x^*, \tag{3.2} \]
where $A, B$ are two maximal monotone set-valued operators with $A$ single-valued and $\beta$-cocoercive.
C2: Denote by $\Omega$ the solution set of the null point problem; namely,
\[ \Omega = (A + B)^{-1}(0), \]
and we always assume $\Omega \neq \emptyset$.
Now, combining the Fact 2 and the inertial-like technique (3.1), we introduce the following algorithms.
Algorithm 3.1
Initialization: Choose a positive step-size sequence $\{\lambda_n\}$. Select arbitrary initial points $x_0, x_1 \in H$.
Iterative Step: After the $n$-th iterate is constructed, compute
(3.3)
and define the $(n+1)$-th iterate by
(3.4)
Remark 3.1
It is not hard to see that if two successive iterates coincide for some $n$, then
that common point is a solution of the inclusion problem (3.2), and the iteration process terminates after finitely many iterations. If $\theta_n \equiv 0$, Algorithm 3.1 reduces to the general forward-backward algorithm of Moudafi [19].
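As an illustration, the following Python sketch implements an inertial-like forward-backward step in the spirit of Algorithm 3.1. The anchor-point form $y_n = (1 - \theta_n)x_n + \theta_n x_{n-1}$ (so that $\theta_n \in [0, 1]$, with $\theta_n = 1$ allowed, and $\theta_n = 0$ recovering the plain forward-backward method) and the concrete operators are assumptions for illustration only, not necessarily the exact updates (3.3)–(3.4).

```python
# Hypothetical sketch, NOT the paper's exact scheme: the true updates are the
# displayed formulas (3.3)-(3.4). Here the inertial-like anchor is ASSUMED to
# be the convex combination y_n = (1 - theta_n)*x_n + theta_n*x_{n-1}.
# Test problem: 0 in (A + B)x with A(x) = M x - q (cocoercive, M positive
# definite) and B = normal cone of the nonnegative orthant, whose resolvent
# J_lam^B is the componentwise projection max(., 0).
import numpy as np

rng = np.random.default_rng(0)
M = np.diag([1.0, 2.0, 4.0])                 # A is beta-cocoercive with beta = 1/4
q = np.array([1.0, -1.0, 2.0])
lam = 0.4                                    # step size in (0, 2*beta) = (0, 0.5)

def forward(x):                              # forward step: (I - lam*A) x
    return x - lam * (M @ x - q)

def backward(x):                             # backward step: resolvent of N_{R^3_+}
    return np.maximum(x, 0.0)

x_prev, x = rng.standard_normal(3), rng.standard_normal(3)
for n in range(200):
    theta = 0.5                              # inertial-like factor, any value in [0, 1]
    y = (1.0 - theta) * x + theta * x_prev   # assumed inertial-like anchor step
    x_prev, x = x, backward(forward(y))      # forward-backward step

print("approximate null point:", x)          # expected: (1, 0, 0.5)
```

Note that no quantity of the type $\|x_n - x_{n-1}\|$ is computed before choosing $\theta_n$; this is the practical point of the inertial-like construction.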
Algorithm 3.2
Initialization: Choose a sequence $\{\beta_n\}$ satisfying one of the three cases (I), (II) or (III). Choose $\{\alpha_n\}$ and $\{\lambda_n\}$ such that the stated parameter conditions hold.
Select arbitrary initial points $x_0, x_1 \in H$.
Iterative Step: After the $n$-th iterate is constructed, compute
and define the $(n+1)$-th iterate by
(3.5)
Then set $n := n + 1$ and return to the Iterative Step.
Remark 3.2. In the subsequent convergence analysis, we will always assume that the two algorithms generate an infinite sequence, namely, that the algorithms do not terminate after finitely many iterations. In addition, in the simulation experiments, a stopping criterion will be given to end the iteration in practice.
3.3 Convergence Analysis of Algorithms
Theorem 3.1.
Suppose that the assumptions C1–C2 are satisfied and that $\lambda_n \in [\epsilon, 2\beta - \epsilon]$ for some given $\epsilon > 0$ small enough. Then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges weakly to a point of $\Omega$.
Proof.
Without loss of generality, take $p \in \Omega$; then, by Fact 2, $p = J_{\lambda_n}^B(p - \lambda_n A p)$. It follows from (2.1) and (3.3) that
(3.6)
Since $J_{\lambda_n}^B$ is firmly nonexpansive, it follows from (3.4) and Fact 1 that
(3.7)
It follows from the fact that $A$ is cocoercive that
so we have from (3.7) that
On the other hand, we have
and, furthermore,
Hence we obtain from (3.6) and (3.7) that
(3.8)
Since the step sizes satisfy $\lambda_n \in [\epsilon, 2\beta - \epsilon]$, it follows from (3.8) that
which means that the sequence is bounded, and so, in turn, is $\{x_n\}$. Now we claim that the limit of the sequence $\{\|x_n - p\|\}$ exists. For this purpose, two cases are discussed as follows. Case 1. There exists an integer $n_0$ such that $\|x_{n+1} - p\| \le \|x_n - p\|$ for all $n \ge n_0$. Then the limit of the sequence exists, denoted by $\ell$, and so
Now it remains to show that
Since the sequence $\{x_n\}$ is bounded, let $\bar{x} \in \omega_w(x_n)$ and let $\{x_{n_k}\}$ be a subsequence of $\{x_n\}$ converging weakly to $\bar{x}$. It remains to verify that $\bar{x} \in \Omega$. Noticing (3.8) again, we have
which means that
and
At the same time, it follows from (3.4) that
(3.9)
Since $A$ is $\beta$-cocoercive, it is $\frac{1}{\beta}$-Lipschitz continuous. Passing to the limit along the subsequence $\{x_{n_k}\}$ in (3.9) and taking into account that $A + B$ is maximal monotone, so that its graph is weakly–strongly closed, it follows that
which means that $\bar{x} \in \Omega$. In view of the fact that the choice of $\bar{x}$ in $\omega_w(x_n)$ was arbitrary, namely $\omega_w(x_n) \subset \Omega$, we conclude from Lemma 2.5 that $\{x_n\}$ converges weakly to a point of $\Omega$. Case 2. If the sequence does not decrease at infinity, then there exists a subsequence along which it strictly increases. Furthermore, by Lemma 2.6, there exists a nondecreasing integer sequence $\{\tau(n)\}$ (for some $n_0$ large enough) such that $\tau(n) \to \infty$ as $n \to \infty$ and
for each $n \ge n_0$. Note the boundedness of the sequence, which implies that the corresponding limit exists, and hence we conclude that
From (3.8) with $n$ replaced by $\tau(n)$, we have
By a similar argument to Case 1, we obtain
and
These, together with (3.4), imply that
Since the sequence $\{x_{\tau(n)}\}$ is bounded, let $\bar{x} \in \omega_w(x_{\tau(n)})$ and let a subsequence converge weakly to $\bar{x}$. Passing to the limit along this subsequence in the above inequality and taking into account that $A + B$ is maximal monotone, so that its graph is weakly–strongly closed, it follows that
which means that $\bar{x} \in \Omega$. In view of the fact that the choice of $\bar{x}$ was arbitrary, we conclude from Lemma 2.5 that $\{x_n\}$ converges weakly to a point of $\Omega$. This completes the proof. ∎
Next we prove the strong convergence of Algorithm 3.2.
Theorem 3.2.
Suppose that the assumptions C1–C2 are satisfied and that $\lambda_n \in [\epsilon, 2\beta - \epsilon]$ for some given $\epsilon > 0$ small enough. Then the sequence $\{x_n\}$ generated by Algorithm 3.2 converges in norm to $P_\Omega(0)$ (i.e., the minimum-norm element of the solution set $\Omega$).
Proof.
To present the result clearly, three situations are discussed separately: (I), (II) and (III), corresponding to the three cases in the Initialization of Algorithm 3.2. (I) First we consider the strong convergence under the first assumption. As in the previous proof of weak convergence, we begin by showing the boundedness of the sequence $\{x_n\}$. To see this, denote $p = P_\Omega(0)$ and use the projection in a similar way to the proofs of (3.7) and (3.8) of Theorem 3.1 to get
(3.10)
hence one can see the desired estimate. It follows from (3.3) and (3.5) that
which implies that the sequence $\{x_n\}$ is bounded, and so are the related sequences. Applying the identity (2.1), we deduce that
(3.11)
Substituting (3.10) into (3.11) and after some manipulations, we obtain
Combining with (3.6), we have
(3.12)
We explain the strong convergence in two cases. Case 1. There exists an integer $n_0$ such that $\|x_{n+1} - p\| \le \|x_n - p\|$ for all $n \ge n_0$. Then the limit of $\{\|x_n - p\|\}$ exists, denoted by $\ell$, and so
In addition, we have
and therefore from (3.12) we obtain
and then, from the stated condition, we get
Noticing the assumptions on the parameters, we have
which implies that the corresponding differences tend to zero as $n \to \infty$, and then
This proves the asymptotic regularity of $\{x_n\}$. By repeating the relevant part of the proof of Theorem 3.1, we get $\omega_w(x_n) \subset \Omega$. We are now in a position to prove that the sequence converges strongly to $p = P_\Omega(0)$. To this end, we rewrite (3.5) in a suitable form and make use of the inequality $\|x + y\|^2 \le \|x\|^2 + 2\langle y,\ x + y \rangle$, which holds for all $x, y \in H$, to get
It follows from (2.1) that
and then
Noticing (3.10) and the parameter assumptions, we obtain
Substituting (3.6) into the above inequality, we have
(3.13)
and then, noticing the uniform bound for all $n$, we have
(3.14)
where the coefficient sequences are defined accordingly. Since $p = P_\Omega(0)$ and $\omega_w(x_n) \subset \Omega$, the characterization (2.2) implies that $\langle -p,\ \bar{x} - p \rangle \le 0$ for all $\bar{x} \in \omega_w(x_n)$, and we deduce that
(3.15)
Since the parameters vanish appropriately and the sequence is bounded, combining with (3.15) implies that
In addition, we have
These enable us to apply Lemma 2.4 to (3.14) to obtain that $\|x_n - p\| \to 0$. Namely, $x_n \to p$ in norm, and the proof of Case 1 is complete.
Case 2. If the sequence does not decrease at infinity, then there exists a subsequence along which it strictly increases. Furthermore, by Lemma 2.6, there exists a nondecreasing integer sequence $\{\tau(n)\}$ (for some $n_0$ large enough) such that $\tau(n) \to \infty$ as $n \to \infty$ and
for each $n \ge n_0$. Note the boundedness of the sequence, which implies that the corresponding limit exists, and hence we conclude that
As a matter of fact, observe that (3.12) holds for each $n$, so from (3.12) with $n$ replaced by $\tau(n)$ and using the relation $\Gamma_{\tau(n)} \le \Gamma_{\tau(n)+1}$, we have
Noticing the assumptions on the parameters and taking the limit as $n \to \infty$ yields
(3.16)
(3.17)
(3.18)
Note that we still have $\Gamma_n \le \Gamma_{\tau(n)+1}$, and the relations (3.16)–(3.18) are sufficient to guarantee the required limits. Next we prove the strong convergence. Observe that (3.13) holds for each $n$. So, replacing $n$ with $\tau(n)$ in (3.13) and using the above relation again for $\tau(n)$, we obtain
and then we obtain
which means that there exists a constant $M_0 > 0$ such that, for all $n$,
(3.19)
Now, since the parameters behave as assumed, we have
by virtue of the stated facts. Consequently, the relation (3.19) ensures that the relevant quantity vanishes, which further implies from Lemma 2.6 that
Namely, $x_n \to p$ in norm, and the proof of Case 2 is complete.
(II) Now we show the strong convergence in the second case; the quantities involved simplify accordingly. Repeating the steps (3.10)–(3.12), we obtain the following similar inequality:
(3.20)
Two possible cases are considered. Case 1. There exists an integer $n_0$ such that $\|x_{n+1} - p\| \le \|x_n - p\|$ for all $n \ge n_0$. Then the limit of $\{\|x_n - p\|\}$ exists, denoted by $\ell$, and so
In addition, we have
and therefore from (3.20) we obtain
Noticing the assumptions on the parameters, we have
which implies the required limit as $n \to \infty$. Next we show the asymptotic regularity of $\{x_n\}$. Indeed, it follows from the relation between the norm and the inner product that
where $M_1$ is a constant bounding the relevant terms for all $n$, which means that
and then
Therefore we obtain the stated limit as $n \to \infty$.
Consequently, the asymptotic regularity of $\{x_n\}$ follows. By repeating the relevant part of the proof of Case 1 in (I), we get $\omega_w(x_n) \subset \Omega$ and
(3.21)
where the coefficient sequences are defined accordingly. The rest of the proof is the same as in Case 1 of (I), so we get $x_n \to p$ in norm. Case 2. If the sequence does not decrease at infinity, then there exists a subsequence along which it strictly increases. Furthermore, by Lemma 2.6, there exists a nondecreasing integer sequence $\{\tau(n)\}$ (for some $n_0$ large enough) such that $\tau(n) \to \infty$ as $n \to \infty$ and
for each $n \ge n_0$. Note that we still have $\Gamma_n \le \Gamma_{\tau(n)+1}$. By repeating the relevant part of the proof of Case 2 in (I), for $\tau(n)$, we have
Noticing the assumptions on the parameters and taking the limit as $n \to \infty$ yields
and these relations are sufficient to guarantee the required limits. Observe that (3.21) holds for each $n$. So, replacing $n$ with $\tau(n)$ in (3.21) and using the relation again for $\tau(n)$, we obtain
The rest of the proof is the same as in Case 2 of (I), so we get $x_n \to p$ in norm. (III) Finally, we consider the third situation; the quantities involved again simplify accordingly. Repeating the steps (3.10)–(3.12), we obtain the following similar inequality,
which means that $\{x_n\}$ is bounded. Similar to the above situations, two possible cases are considered. Case 1. There exists an integer $n_0$ such that $\|x_{n+1} - p\| \le \|x_n - p\|$ for all $n \ge n_0$. Then the limit of $\{\|x_n - p\|\}$ exists, denoted by $\ell$, and so
In addition, we have
By repeating the relevant part of the proof of Case 1 in (I), we have
which implies the required limit as $n \to \infty$. The rest of the proof is the same as in Case 1 of (I), so we get $x_n \to p$ in norm. Case 2. If the sequence does not decrease at infinity, then, by Lemma 2.6, there exists a nondecreasing integer sequence $\{\tau(n)\}$ (for some $n_0$ large enough) such that $\tau(n) \to \infty$ as $n \to \infty$ and
for each $n \ge n_0$.
Note that we still have $\Gamma_n \le \Gamma_{\tau(n)+1}$. The rest of the proof is the same as in Case 2 of (I), so we get $x_n \to p$ in norm.
This completes the proof. ∎
Remark 3.3.
It is easy to see that the convergence of our algorithms still holds even without a condition of the type (1.10). This assumption is not necessary at all in our setting. To some extent, our inertial-like algorithms have two merits: (1) Compared with general inertial proximal algorithms, we do not need to calculate the value of $\|x_n - x_{n-1}\|$ before choosing the parameters in numerical simulations, which makes the algorithms convenient and user-friendly. (2) Compared with general inertial algorithms, the inertial factors can be chosen in $[0, 1]$ with $\theta_n = 1$ possible, which is new and natural in some ways. In particular, under milder assumptions, our proofs are simpler and different from the others.
4 Applications
4.1 Convex Optimization
Let $C$ be a nonempty closed and convex subset of $H$ and let $f, g$ be two proper, convex and lower semi-continuous functions. Moreover, assume that $g$ is differentiable with a Lipschitz continuous gradient. With these data, consider the following convex minimization problem:
\[ \min_x\, f(x) + g(x), \tag{4.1} \]
with solution set denoted by $\Gamma$. Recall the subdifferential of $f$ at $x$, denoted by $\partial f(x)$:
\[ \partial f(x) = \{u \in H : f(y) \ge f(x) + \langle u,\ y - x \rangle \ \text{for all } y \in H\}. \]
So, by taking $B = \partial f$ and $A = \nabla g$ (the gradient of $g$) in (3.2), where $\partial f$ and $\nabla g$ are maximal monotone operators and $\nabla g$ is $\beta$-cocoercive (by the Baillon–Haddad theorem), we can apply Theorem 3.1 and Theorem 3.2 and obtain the following results:
Theorem 4.1.
Let $f$ and $g$ be two proper, convex and lower semi-continuous functions such that $\partial f$ and $\nabla g$ are maximal monotone operators with $\nabla g$ cocoercive. Assume that $\Gamma \neq \emptyset$. Let $\{x_n\}$ and $\{y_n\}$ be any two sequences generated by the following scheme (see Algorithm 3.1):
If $\lambda_n \in [\epsilon, 2\beta - \epsilon]$ for some given $\epsilon > 0$ small enough, then the sequence $\{x_n\}$ converges weakly to a point of $\Gamma$.
Theorem 4.2.
Let $f$ and $g$ be two proper, convex and lower semi-continuous functions such that $\partial f$ and $\nabla g$ are maximal monotone operators with $\nabla g$ cocoercive. Assume that $\Gamma \neq \emptyset$. Let $\{x_n\}$ and $\{y_n\}$ be any two sequences generated by the following scheme:
where the parameters satisfy the selection criteria of Algorithm 3.2. If $\lambda_n \in [\epsilon, 2\beta - \epsilon]$ for some given $\epsilon > 0$ small enough, then the sequence $\{x_n\}$ converges strongly to $P_\Gamma(0)$.
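As a concrete instance of the backward step in these schemes, when $f = \mu\|\cdot\|_1$ the resolvent $J_\lambda^{\partial f}$ has the explicit soft-thresholding form. The Python sketch below (with a hypothetical smooth part $g(x) = \frac{1}{2}\|x - c\|^2$, whose gradient is $1$-cocoercive) runs a plain forward-backward loop; it illustrates the resolvent computation only, not the full Algorithm 3.1 or 3.2.

```python
# A minimal illustration of the backward step for f = mu*||.||_1: the resolvent
# J_lam^{partial f} is the soft-thresholding operator. The smooth part
# g(x) = 0.5*||x - c||^2 is a hypothetical choice; grad g is 1-Lipschitz and
# hence 1-cocoercive. This is a plain forward-backward loop, not the full
# inertial-like Algorithm 3.1 or 3.2.
import numpy as np

mu, lam = 0.1, 1.0
c = np.array([0.05, -0.5, 2.0])

def soft_threshold(v, t):                    # resolvent of t*partial(mu*||.||_1)
    return np.sign(v) * np.maximum(np.abs(v) - t * mu, 0.0)

x = np.zeros(3)
for _ in range(100):
    x = soft_threshold(x - lam * (x - c), lam)   # grad g(x) = x - c

print(x)   # entries of c with |c_i| <= mu are driven exactly to zero
```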
4.2 Variational Inequality Problem
Let $C$ be a nonempty closed and convex subset of $H$ and $A$ a maximal monotone and cocoercive operator.
Consider the classical variational inequality (VI) problem of finding a point $x^* \in C$ such that
\[ \langle A x^*,\ x - x^* \rangle \ge 0 \quad \text{for all } x \in C. \tag{4.2} \]
Denote by $\operatorname{VI}(C, A)$ the solution set of problem (4.2).
By taking $B = N_C$ (the normal cone of the set $C$), problem (4.2) is equivalent to finding the zeros of $A + N_C$ (see, e.g., Peypouquet [24]). We apply our algorithms to the VI problem and obtain the following theorems.
Theorem 4.3.
Let $C$ be a nonempty closed and convex subset of $H$ and $A$ a maximal monotone and cocoercive operator. Choose $\lambda_n \in [\epsilon, 2\beta - \epsilon]$ for some given $\epsilon > 0$ small enough and assume that $\operatorname{VI}(C, A) \neq \emptyset$. Construct the sequences $\{x_n\}$ and $\{y_n\}$ as follows:
Under the stated parameter conditions, the sequence $\{x_n\}$ converges weakly to a point of $\operatorname{VI}(C, A)$.
Theorem 4.4.
Let $C$ be a nonempty closed and convex subset of $H$ and $A$ a maximal monotone and cocoercive operator. Choose $\lambda_n \in [\epsilon, 2\beta - \epsilon]$ for some given $\epsilon > 0$ small enough and assume that $\operatorname{VI}(C, A) \neq \emptyset$. Construct the sequences $\{x_n\}$ and $\{y_n\}$ as follows,
where the parameters satisfy the selection criteria of Algorithm 3.2. Then the sequence $\{x_n\}$ converges strongly to $P_{\operatorname{VI}(C, A)}(0)$.
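Both theorems use the backward operator $J_\lambda^{N_C}$. Since the resolvent of the normal cone is exactly the metric projection, the backward step reduces to $P_C$; indeed, for $\lambda > 0$,
\[ y = J_\lambda^{N_C}(x) \iff x - y \in \lambda N_C(y) \iff \langle x - y,\ z - y \rangle \le 0 \ \text{ for all } z \in C \iff y = P_C(x), \]
where the last equivalence is the characterization (2.2). Hence the forward-backward update for (4.2) takes the projected-gradient form $x_{n+1} = P_C\big(y_n - \lambda_n A y_n\big)$, with $y_n$ denoting the algorithm's anchor point.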
5 Numerical Examples
In this section, we first present two numerical examples, in finite- and infinite-dimensional Hilbert spaces, to illustrate the applicability, efficiency and stability of Algorithm 3.1 and Algorithm 3.2, and then consider a real-world sparse signal recovery problem in finite-dimensional spaces. In addition, comparison results with other algorithms are also described. All codes were written in Matlab R2016b and performed on an LG dual-core personal computer.

Example 5.1. In this example, we take $\mathbb{R}^3$ with the Euclidean norm. Let $A$ and $B$ be two mappings chosen so that both are maximal monotone and $A$ is $\beta$-cocoercive. Indeed, for arbitrary points $x, y$ we have
while
which means that $A$ is $\beta$-cocoercive. It is not hard to determine the exact null point. In the simulation process, we choose two arbitrary initial points. In order to investigate the change and tendency of the iterates more clearly, we record the error at each step and use a small tolerance on it as the stopping criterion. The experimental results are shown in Figs. 1–2, where the z-axis represents the logarithm of the third coordinate value of each point.
[Figures 1–2: experimental results for Example 5.1; the z-axis shows the logarithm of the third coordinate of each iterate.]
Indeed, if $\theta_n \equiv 0$, Algorithm 3.1 coincides with the method of Moudafi [19]. In addition, from Figs. 1–2 we can see that, for the stated choice of $\theta_n$, the two families of points alternate and finally come arbitrarily close to the exact null point.
Example 5.2. In this example, we show the behavior of our algorithms in an infinite-dimensional space, and we also compare the results with Dadashi and Postolache [8]. In the simulation, we define two mappings $A$ and $B$; it can then be shown that $A$ is cocoercive. In the numerical experiment, the parameters of Algorithm 3.2 are chosen as stated for all $n$. In addition, a small tolerance is used as the stopping criterion and three different choices of initial functions are tested, denoted Case 1, Case 2 and Case 3.
Figs. 3–5 present the numerical results for $\theta_n$ neither 0 nor 1, and Fig. 6 shows the numerical results for the two extreme choices of $\theta_n$. Fig. 7 shows the comparison with Dadashi and Postolache [8] for two choices of initial points, and Table 1 shows the comparison with Dadashi and Postolache [8] for a fixed initial point.
[Figures 3–7: numerical results for Example 5.2 and comparison with Dadashi and Postolache [8].]
Table 1: Comparison with the algorithm of Dadashi and Postolache [8].

Error | Iter. Alg. 3.1 | Iter. Alg. 3.2 | Iter. Alg. in [8] | CPU (s) Alg. 3.1 | CPU (s) Alg. 3.2 | CPU (s) Alg. in [8]
---|---|---|---|---|---|---
 | 41 | 21 | 26 | 6.2813 | 3.4688 | 3.8438
 | 49 | 33 | 52 | 10.0445 | 7.6542 | 7.8461
 | 57 | 45 | 110 | 10.2969 | 8.6768 | 24.3212
It is clear that our algorithms, especially Algorithm 3.2, are faster, more efficient and more stable.

Example 5.3. (Compressed Sensing) In this example, to show the effectiveness and applicability of our algorithms in the real world, we consider the recovery of a sparse and noisy signal from a limited number of samples, a problem from the field of compressed sensing. The sampling matrix $M \in \mathbb{R}^{m \times n}$ ($m < n$) is simulated from the standard Gaussian distribution, and the observation is $b = Mx + \epsilon$, where $\epsilon$ is additive noise. When $\epsilon = 0$, there is no noise in the observed data. For further explanations the reader may consult Nguyen and Shin [14]. Let $x \in \mathbb{R}^n$ be the $k$-sparse signal, where $k \ll n$. Our task is to recover the signal $x$ from the data $b$. To this end, we transform it into the LASSO problem:
\[ \min_{x \in \mathbb{R}^n} \frac{1}{2}\|Mx - b\|_2^2 + \mu\|x\|_1, \]
where $\mu$ is a given positive constant. If we define
\[ g(x) = \frac{1}{2}\|Mx - b\|_2^2, \qquad f(x) = \mu\|x\|_1, \]
then one can see that the LASSO problem coincides with the problem of finding $x^*$ such that
\[ 0 \in \nabla g(x^*) + \partial f(x^*), \]
which is associated with problem (3.2) with $A = \nabla g$ and $B = \partial f$, where $A$ is cocoercive. In addition to showing the behavior of our algorithms, we compare with the results of Sitthithakerngkiet et al. [27] and Kazmi and Rizvi [12] (both without an inertial step) and of Tang [28] (with a general inertial method). For the experimental setting, $M$ is generated randomly, and the true signal contains $k$ spikes with amplitudes distributed randomly over the whole domain. For simplicity, we define the nonexpansive mappings required by Algorithm 3.1 of Sitthithakerngkiet et al. [27], together with the strongly positive bounded linear operator, the associated constant and a fixed point, as stated. Moreover, we take the same step sizes in all compared algorithms, and a small tolerance is used as the stopping criterion. All the numerical results are presented in Table 2 and Fig. 8.
Table 2: Comparison results for two choices of $(m, n, k)$.

Method | Iter. | Sec. | Iter. | Sec.
---|---|---|---|---
Algorithm 3.1 | 43 | 0.0992 | 83 | 0.4431
Algorithm 3.2 | 63 | 0.1635 | 79 | 0.6368
Sitthithakerngkiet et al. [27] | 91 | 0.12 | 122 | 0.6942
Kazmi and Rizvi [12] | 54 | 0.0771 | 39 | 0.1981
Alg. 3.2 of Tang [28] | 54 | 2.2632 | 67 | 3.2541
[Figure 8: numerical results for the compressed sensing experiment of Example 5.3.]
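For reference, a baseline (non-inertial) forward-backward iteration for this LASSO problem can be sketched in Python as follows; the dimensions, sparsity level and regularization weight below are illustrative assumptions, not the paper's exact settings.

```python
# A baseline (non-inertial) forward-backward sketch for the LASSO problem
# min_x 0.5*||Mx - b||^2 + mu*||x||_1 underlying this experiment. The forward
# step is the gradient of the smooth part; the backward step is the resolvent
# of mu*partial||.||_1, i.e. soft-thresholding. Dimensions, sparsity and mu
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 60, 256, 10                        # samples, signal length, spikes
M = rng.standard_normal((m, n))              # Gaussian sampling matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
b = M @ x_true                               # noiseless data (epsilon = 0)

mu = 0.05
L = np.linalg.norm(M, 2) ** 2                # Lipschitz constant of the gradient
lam = 1.0 / L                                # step size in (0, 2/L)

x = np.zeros(n)
for _ in range(500):
    z = x - lam * (M.T @ (M @ x - b))        # forward (gradient) step
    x = np.sign(z) * np.maximum(np.abs(z) - lam * mu, 0.0)  # backward step

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```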
6 Conclusion
We have provided two new inertial-like proximal iterative algorithms (Algorithms 3.1 and 3.2) for the null point problem (1.1). We have proved that, under some mild conditions, Algorithm 3.1 converges weakly and Algorithm 3.2 converges strongly to a solution of the null point problem. Thanks to the novel structure of the inertial-like technique, our Algorithms 3.1 and 3.2 have the following merits: (i) Theoretically, they do not need to verify the traditional and hard-to-check condition (1.10); namely, our convergence theorems still hold without this condition.
(ii) Practically, they do not involve computing the norm of the difference between $x_n$ and $x_{n-1}$ before choosing the inertial parameter (hence a lower computational cost), as opposed to almost all previous inertial algorithms in the existing literature; that is, the constraints on the inertial parameter are looser and more natural, so the algorithms are attractive and friendly for users.
(iii) Unlike general inertial algorithms, the inertial factors in our inertial-like proximal algorithms are chosen in $[0, 1]$ with $\theta_n = 1$ possible, which is new and natural in some ways. In particular, under milder assumptions, our proofs are simpler and different from the others.
We have included several numerical examples which show the efficiency and reliability of Algorithm 3.1 and Algorithm 3.2.
We have also compared Algorithm 3.1 and Algorithm 3.2 with four other algorithms, from Sitthithakerngkiet et al. [27], Kazmi and Rizvi [12], Tang [28], and Dadashi and Postolache [8], confirming some advantages of our novel inertial algorithms.
Acknowledgements.
This article was funded by the Science Foundation of China (12071316),
Natural Science Foundation of Chongqing
(CSTC2019JCYJ-msxmX0661), Science and Technology Research Project
of Chongqing Municipal Education Commission (KJQN 201900804) and
the Research Project of Chongqing Technology
and Business University (KFJJ1952007).
References
- [1] Alvarez F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert spaces. SIAM J. Optim. 14, 773–782 (2004).
- [2] Alvarez F., Attouch H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001).
- [3] Aoyama K., Kohsaka F., Takahashi W.: Three generalizations of firmly nonexpansive mappings: their relations and continuity properties. J. Nonlinear Convex Anal. 10(1), 131–147 (2009).
- [4] Attouch H., Chbani Z.: Fast inertial dynamics and FISTA algorithms in convex optimization, perturbation aspects (2016). arXiv:1507.01367.
- [5] Attouch H., Chbani Z., Peypouquet J., Redont P.: Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity. Math. Program. 168, 123–175 (2018).
- [6] Attouch H., Peypouquet J., Redont P.: A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J. Optim. 24(1), 232–256 (2014).
- [7] Boikanyo O.A.: The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016, 2371857 (2016). https://doi.org/10.1155/2016/2371857.
- [8] Dadashi V., Postolache M.: Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 9, 89–99 (2020).
- [9] Dang Y., Sun J., Xu H.K.: Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 13, 1383–1394 (2017).
- [10] Douglas J., Rachford H.H.: On the numerical solution of heat conduction problems in two or three space variables. Trans. Amer. Math. Soc. 82, 421–439 (1956).
- [11] Gibali A., Mai D.T., Nguyen T.V.: A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2018, 1–25 (2018).
- [12] Kazmi K.R., Rizvi S.H.: An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 8, 1113–1124 (2014).
- [13] Lions P.L., Mercier B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979).
- [14] Nguyen T.L.N., Shin Y.: Deterministic sensing matrices in compressive sensing: a survey. Sci. World J. 2013, Article ID 192795, 6 pages (2013).
- [15] Martinet B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Informat. Recherche Opérationnelle 3, 154–158 (1970).
- [16] Maingé P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008).
- [17] Moreau J.J.: Proximité et dualité dans un espace hilbertien. Bulletin de la Société Mathématique de France 93, 273–299 (1965).
- [18] Moudafi A., Thakur B.S.: Solving proximal split feasibility problems without prior knowledge of matrix norms. Optim. Lett. 8(7) (2013). DOI 10.1007/s11590-013-0708-4.
- [19] Moudafi A.: On the convergence of the forward-backward algorithm for null-point problems. J. Nonlinear Var. Anal. 2, 263–268 (2018).
- [20] Moudafi A., Théra M.: Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 94(2), 425–448 (1997).
- [21] Opial Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591–597 (1967).
- [22] Ochs P., Brox T., Pock T.: iPiasco: inertial proximal algorithm for strongly convex optimization. J. Math. Imaging Vis. 53, 171–181 (2015).
- [23] Ochs P., Chen Y., Brox T., Pock T.: iPiano: inertial proximal algorithm for non-convex optimization. SIAM J. Imaging Sci. 7, 1388–1419 (2014).
- [24] Peypouquet J.: Convex Optimization in Normed Spaces: Theory, Methods and Examples. Springer, Berlin (2015).
- [25] Rockafellar R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976).
- [26] Suantai S., Pholasa N., Cholamjiak P.: The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 13(5), 1–21 (2017).
- [27] Sitthithakerngkiet K., Deepho J., Martinez-Moreno J., Kumam P.: Convergence analysis of a general iterative algorithm for finding a common solution of split variational inclusion and optimization problems. Numer. Algor. (2018). DOI 10.1007/s11075-017-0462-2.
- [28] Tang Y.: New inertial algorithm for solving split common null point problem in Banach spaces. J. Inequal. Appl. 2019, 17 (2019). https://doi.org/10.1186/s13660-019-1971-4.
- [29] Xu H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(2), 240–256 (2002).