ABSTRACT
In this work, error estimates are presented for the case in which the regularized solution is obtained by minimizing doubly-generalized Tikhonov-Phillips functionals. The first result relies mainly on an assumption given by a source condition. It is then proved that this assumption can be replaced by a variational inequality, yielding an analogous error estimate. Relationships are also established between the optimality condition associated with the problem, the source condition and the variational inequality. Finally, since it is known that in certain cases the use of two or more penalizing terms is useful, the error estimates are generalized to the case in which the regularized solution is a minimizer of a doubly-generalized Tikhonov-Phillips functional with multiple penalizers.
Keywords:
inverse problems; Tikhonov-Phillips; error estimate; source condition; variational inequality
1 INTRODUCTION
In a quite general framework, an inverse problem can be defined as the need of determining x in an equation of the form

𝒯x = y, (1.1)

where 𝒯 : 𝒳 → 𝒴 is a bounded linear operator between two infinite-dimensional Banach spaces (in the classic theory, 𝒳 and 𝒴 are Hilbert spaces) and y is the data, supposed to be known, perhaps with a certain degree of error. Frequently, inverse problems are ill-posed in the sense of Hadamard (i.e., the solution does not exist, is not unique, or does not depend continuously on the data), and thus arises the need to apply a regularization method. Associated with this method there is a parameter, called the "regularization parameter", whose choice is essential to achieve an adequate approximation of the solution of the inverse problem. There are several parameter choice rules for determining its value. The so-called "a-priori" rules depend only on the noise level of the problem, while the "a-posteriori" rules depend on both the noise level and the data. Finally, there is a third type, the "heuristic" rules, which depend on the noise level only through the data. Among them, we can mention the generalized cross-validation method introduced in 1979 by G. H. Golub, M. T. Heath and G. Wahba [8], the L-curve criterion proposed in 1992 by P. C. Hansen [9], and the rule proposed by K. Ito, B. Jin and J. Zou in 2011 [13], whose construction is based on the stochastic approach to solving an inverse problem. This type of rule is very useful in those problems where the exact noise level is unknown and only the data is available.
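To make the heuristic idea concrete, the following sketch (not taken from the paper) applies classical Tikhonov regularization to a small ill-conditioned system and selects η by the quasi-optimality rule, a heuristic that uses only the data: it picks the η on a geometric grid where consecutive regularized solutions change the least. The operator, grid and noise level are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: classical Tikhonov regularization
# x_eta = argmin ||T x - y||^2 + eta ||x||^2  =>  x_eta = (T^T T + eta I)^{-1} T^T y,
# with eta chosen by the heuristic quasi-optimality rule (depends on the data only).

def tikhonov(T, y, eta):
    n = T.shape[1]
    return np.linalg.solve(T.T @ T + eta * np.eye(n), T.T @ y)

def quasi_optimality(T, y, etas):
    """Pick the eta minimizing ||x_{eta_{k+1}} - x_{eta_k}|| over a geometric grid."""
    xs = [tikhonov(T, y, eta) for eta in etas]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return etas[int(np.argmin(diffs))]

rng = np.random.default_rng(0)
n = 20
T = np.diag(1.0 / (np.arange(1, n + 1) ** 2))   # mildly ill-conditioned operator
x_exact = rng.standard_normal(n)
y = T @ x_exact + 1e-4 * rng.standard_normal(n)  # noisy data y^delta

etas = np.geomspace(1e-10, 1e-1, 40)
eta_star = quasi_optimality(T, y, etas)
x_reg = tikhonov(T, y, eta_star)
```

Note that, consistently with Bakushinskii's result cited below, such a rule cannot yield a convergent method; it can nonetheless behave well at a fixed small noise level.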
Although there is a wide variety of regularization methods, probably the best known and most widely used is the Tikhonov-Phillips regularization method, originally proposed by D. L. Phillips and A. N. Tikhonov in 1962 and 1963, respectively [16], [19], [20]. Under the classic version of this method (with fidelity term and penalizer both quadratic) and its generalized version (with quadratic fidelity term and generalized penalizer), it is possible to prove the convergence of the regularized solutions to a minimum penalizing solution when the regularization parameter is chosen through an a-priori or a-posteriori rule [3], [5]. It is possible to generalize these convergence results to the doubly-generalized Tikhonov-Phillips regularization method (that is, with generalized fidelity and penalizing terms) for these types of rules [6], [11]. A result due to A. B. Bakushinskii [1] shows that a regularization method cannot be convergent when the associated parameter choice rule depends only on the data of the ill-posed inverse problem. But this does not mean that the method cannot perform well for small noise levels. For this reason, given that convergence results cannot be obtained, various authors have presented error estimates, that is, bounds for the error between the solution of the problem and the regularized solution when the regularization parameter is chosen using a heuristic rule for a fixed noise level [2], [11], [18].
We propose here to study the doubly-generalized Tikhonov-Phillips regularization method, which consists in approximating the solution of problem (1.1) by a minimizer of the functional, called the "TPGG-p functional", given by

(1.2)

where ψ is a convex functional and η > 0 is the regularization parameter. In particular, if φ corresponds to the norm in 𝒴, the resulting functional is called the "TPG-p functional" and is given by

(1.3)
where p ≥ 1 and ψ is a convex functional. This work presents error estimates that we have proved for the case in which the regularized solution is obtained by minimizing the TPGG-p functionals given by (1.2). The first estimates are based mainly on an assumption given by a source condition which, as is known, imposes certain smoothness conditions on the minimum penalizing solution. These estimates are particularized to the case in which φ corresponds to the norm in 𝒴 and p = 2, showing that they are equivalent to those proposed by K. Ito and B. Jin in [11].
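As a hedged illustration of the family of functionals just described, the sketch below evaluates one plausible concrete instance of a TPG-p functional (the paper's exact normalization may differ) and, for p = 2 with a quadratic penalizer, computes the minimizer in closed form and checks that random perturbations cannot improve on it.

```python
import numpy as np

# Hedged sketch (illustrative instance, not the paper's exact definition):
# J_eta(x) = (1/p)||T x - y||^p + eta * psi(x), with psi(x) = 0.5*||x||^2.

def tpg_functional(T, y, x, eta, p=2.0):
    psi = 0.5 * np.dot(x, x)  # convex penalizer
    return (1.0 / p) * np.linalg.norm(T @ x - y) ** p + eta * psi

# For p = 2 and this psi, the minimizer has the closed form
# x_eta = (T^T T + eta I)^{-1} T^T y  (the gradient vanishes there).
def minimizer_p2(T, y, eta):
    n = T.shape[1]
    return np.linalg.solve(T.T @ T + eta * np.eye(n), T.T @ y)

rng = np.random.default_rng(1)
T = rng.standard_normal((8, 5))
y = rng.standard_normal(8)
eta = 0.3
x_star = minimizer_p2(T, y, eta)
J_star = tpg_functional(T, y, x_star, eta)

# The closed-form minimizer should not be beaten by nearby random points.
J_perturbed = min(
    tpg_functional(T, y, x_star + 0.1 * rng.standard_normal(5), eta)
    for _ in range(100)
)
```

Since the p = 2 functional with this penalizer is strictly convex, the closed-form point is the unique global minimizer, which the perturbation check reflects.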
As is usual, we use the Bregman distance to estimate the error. This distance is a natural way to measure the deviation of elements in a Banach space with respect to a convex functional, and it contains a term that we will need to bound. For this purpose, the so-called "variational inequalities" will be used to bound this term of interest and obtain error estimates. We will see here that it is possible to replace the source condition assumption used in the first error estimates by a variational inequality, obtaining results similar to those estimates. Finally, relationships are established between the optimality condition associated with the problem, the source condition and the variational inequality.
It is well known that an adequate choice of the penalizing term, based on "a-priori" knowledge of certain information about the exact solution, will result in regularized solutions which appropriately reflect those characteristics. In recent years, there has been a growing interest in the multi-parameter Tikhonov regularization method, which uses multiple constraints as a means of improving the quality of inversion. Multi-parameter regularization adds multiple different penalties which exhibit multi-scale features, while single-parameter regularization uses a unique penalty which may result in a regularized solution that does not preserve certain features of the original solution. The use of multi-parameter regularization for solving ill-posed problems naturally matches the multi-resolution analysis framework, which has become a standard method to analyze the frequency information of images at different resolutions [12], [14], [15], [21].
For this reason, we present generalizations of the error estimate results obtained for the functionals given in (1.2) to the case in which the regularized solution is a minimizer of a doubly-generalized Tikhonov-Phillips functional with multiple penalizers, given by

(1.4)

where p ≥ 1 and, for each i = 1,..., n, ψ i is a convex functional and η i > 0.
2 PRELIMINARIES
As previously mentioned, the Bregman distance will be used to estimate the a-posteriori error; to present its definition, it is necessary to introduce the concepts of subgradient and subdifferential of a convex functional [18], which are given below.
Definition 2.1. Let 𝒳 be a Banach space, 𝒳 ∗ the dual space of 𝒳 and ψ : 𝒳 → ℝ ∪ {+∞} a convex functional. Then, x ∗ ∈ 𝒳 ∗ is a subgradient of ψ at x if ψ(z) − ψ(x) ≥ [x ∗, z − x] for all z ∈ 𝒳, where [x ∗, x] denotes the functional x ∗ evaluated at x. The set ∂ψ(x) of all subgradients of ψ at x is called the subdifferential of ψ at x.
It is important to mention here that the subgradient is a generalization of the classical concept of derivative to the case of convex functionals [22].
Definition 2.2. Let 𝒳 be a Banach space, ψ : 𝒳 → ℝ ∪ {+∞} a convex functional, x ∈ 𝒳 and ξ ∈ ∂ψ(x). The Bregman distance at x with respect to ξ and ψ is defined as

D_ξ(z, x) = ψ(z) − ψ(x) − [ξ, z − x], for z ∈ 𝒳.
It is immediate to see that D_ξ(x, x) = 0, and the convexity of the functional ψ implies that D_ξ(z, x) ≥ 0 for all z ∈ 𝒳. Also, it is easy to prove that if 𝒳 is a Hilbert space, the Bregman distance at an element x coincides with ∥x − z∥2 for all z ∈ 𝒳 when the functional ψ is chosen as the square of the norm in 𝒳. In this way, the Bregman distance extends the concept of norm.
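The squared-norm case just mentioned can be checked numerically. The sketch below implements the Bregman distance of Definition 2.2 in ℝⁿ (a Hilbert space) for a differentiable convex functional, whose only subgradient is the gradient, and verifies that for ψ equal to the squared norm it reduces to ∥z − x∥².

```python
import numpy as np

# Bregman distance in R^n: D_xi(z, x) = psi(z) - psi(x) - <xi, z - x>,
# where xi is a subgradient of psi at x (here, the gradient).

def bregman(psi, grad_psi, z, x):
    xi = grad_psi(x)  # for differentiable convex psi, the unique subgradient
    return psi(z) - psi(x) - np.dot(xi, z - x)

psi = lambda v: np.dot(v, v)        # psi = squared norm
grad_psi = lambda v: 2.0 * v        # its gradient

rng = np.random.default_rng(2)
x, z = rng.standard_normal(6), rng.standard_normal(6)
d = bregman(psi, grad_psi, z, x)    # should equal ||z - x||^2
```

The identity follows by expanding ∥z∥² − ∥x∥² − ⟨2x, z − x⟩ = ∥z − x∥², and convexity guarantees the distance is nonnegative.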
Now, we introduce the concept of duality mapping, which will be relevant for its close relationship with the subdifferential of a power of a norm [17], [18]. To define this mapping, it is necessary to introduce the concept of a "gauge function": a continuous and strictly increasing function f : [0, ∞) → [0, ∞) satisfying f (0) = 0 and f (t) → ∞ as t → ∞. Furthermore, it is necessary to define the set denoted by 2^𝒳 ∗, consisting of all subsets of the dual space of 𝒳.
Definition 2.3. Let 𝒳 be a Banach space. The duality mapping of 𝒳 with respect to the gauge function f is the (set-valued) mapping J : 𝒳 → 2^𝒳 ∗ defined by

J(x) = {x ∗ ∈ 𝒳 ∗ : [x ∗, x] = ∥x ∗∥ ∥x∥ and ∥x ∗∥ = f (∥x∥)}.
The following proposition presents the duality mapping with respect to the gauge function f (t) = t^(p−1) with p > 1, for the case of Hilbert spaces.
Proposition 2.1. Let 𝒳 be a Hilbert space and f (t) = t^(p−1) with p > 1. The duality mapping J of 𝒳 with respect to f is given by J(x) = ∥x∥^(p−2) x.
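The two defining properties of the duality mapping in Definition 2.3 can be verified numerically for the Hilbert-space formula of Proposition 2.1. The sketch below checks, in ℝ³, that J(x) = ∥x∥^(p−2) x satisfies both [J(x), x] = ∥J(x)∥ ∥x∥ and ∥J(x)∥ = f (∥x∥) = ∥x∥^(p−1).

```python
import numpy as np

# Duality mapping of a Hilbert space w.r.t. the gauge f(t) = t^(p-1):
# J(x) = ||x||^(p-2) * x  (single-valued in this setting).

def duality_map(x, p):
    return np.linalg.norm(x) ** (p - 2) * x

p = 3.0
x = np.array([1.0, -2.0, 2.0])  # ||x|| = 3
jx = duality_map(x, p)

# Defining properties to check:
#   <jx, x> = ||jx|| * ||x||   (equality case of Cauchy-Schwarz)
#   ||jx||  = ||x||^(p-1)      (the gauge condition)
```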
The next result, called the Asplund Theorem, relates the subdifferential of the primitive of a gauge function (which is convex) to the duality mapping with respect to that function.

Theorem 2.1. Let 𝒳 be a Banach space and f a gauge function. If F(t) = ∫₀ᵗ f (s) ds, then the duality mapping of 𝒳 with respect to f is given by J = ∂(F ∘ ∥·∥).
The following corollary is useful to obtain the subdifferential of the convex functional (1/p)∥·∥^p with p > 1, which is associated with the fidelity term considered in (1.3).

Corollary 2.1. If 𝒳 is a Banach space and f (t) = t^(p−1) with p > 1, then the duality mapping J of 𝒳 with respect to f is given by J = ∂((1/p)∥·∥^p).
It is well known that a minimizer of a differentiable convex function satisfies the optimality condition, that is, the derivative of that function vanishes at that minimizer. Similarly, a minimizer z of a convex functional 𝒥 defined on a Banach space satisfies the optimality condition 0 ∈ ∂𝒥(z), as presented in the following result. Furthermore, the optimality condition implies that z is a minimizer.

Theorem 2.2. Let 𝒳 be a Banach space, 𝒥 : 𝒳 → ℝ ∪ {+∞} a convex functional and z ∈ 𝒳. Then, z is a minimizer of 𝒥 if and only if 0 ∈ ∂𝒥(z).
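The subdifferential optimality condition of Theorem 2.2 is most interesting for non-smooth functionals. As a one-dimensional hedged illustration (the example, not taken from the paper, uses an ℓ¹-type penalizer), consider J(x) = 0.5(x − a)² + η|x|: its minimizer is the soft-threshold of a, and optimality means 0 lies in the subdifferential, which at x = 0 is the whole interval a − η[−1, 1].

```python
import numpy as np

# 1-D illustration of Theorem 2.2 for a non-smooth convex functional:
# J(x) = 0.5*(x - a)^2 + eta*|x|; its minimizer is the soft-threshold of a.

def soft_threshold(a, eta):
    return np.sign(a) * max(abs(a) - eta, 0.0)

def zero_in_subdifferential(z, a, eta, tol=1e-12):
    g = z - a                                     # derivative of the smooth part at z
    if z != 0.0:
        return abs(g + eta * np.sign(z)) <= tol   # subgradient of |x| is unique here
    return abs(g) <= eta + tol                    # at 0, the subdifferential of |x| is [-1, 1]

a, eta = 0.7, 1.0
z = soft_threshold(a, eta)  # here |a| <= eta, so the minimizer is z = 0
```

Checking 0 ∈ ∂J(z) at both a thresholded-to-zero point and a nonzero minimizer confirms the "if and only if" character of the condition.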
3 MAIN RESULTS
3.1 One penalizer
The following result presents the first error estimate that we have obtained for the case in which the regularized solution is a minimizer of a TPGG-p functional. As will be seen, it is possible to particularize this estimate to the case of TPG-p functionals. More precisely, it will be seen that the estimate obtained for the case of TPG-2 functionals is equivalent to that proposed by K. Ito and B. Jin in [11].
Theorem 3.3. Let 𝒳 and 𝒴 be Banach spaces, consider the exact solution of (1.1) and let y δ ∈ 𝒴 be the noisy data. Suppose that there exists w ∈ 𝒴 ∗ such that 𝒯 # w is a subgradient of ψ at the exact solution, and that the fidelity functional φ satisfies the following hypotheses:
-
, for all y1, y2, y3 ∈ 𝒴;
-
there exists C > 0 such that.
Then, for each minimizer of the functional given in (1.2) for data y δ , it follows that
-
if p > 1 and q is the conjugate of p then
-
if p = 1 and ηC∥w∥ ≤ 1, then
Proof. Since is a minimizer of the functional for data y δ , it follows that
and, in consequence, .
Then,
where the last two equalities follow from the source condition and the definition of the dual adjoint operator, respectively. Now, since and , it follows that
Finally,
-
If p > 1, it immediately follows from Young’s inequality that
which implies that
where the last inequality follows from (3.2). Thus,
-
If p = 1, by (3.2) and since, by hypothesis, ηC∥w∥ ≤ 1, it finally turns out that
□
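For reference, the Young inequality invoked in the case p > 1 of the proof above is the standard one for conjugate exponents:

```latex
ab \le \frac{a^{p}}{p} + \frac{b^{q}}{q},
\qquad a, b \ge 0, \quad p > 1, \quad \frac{1}{p} + \frac{1}{q} = 1 .
```

It is this inequality that splits the mixed term into a power of the residual plus a power of ∥w∥, producing the two contributions that appear in the estimate.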
Given that every norm satisfies the triangle inequality, and taking C = 1, it follows immediately that hypotheses 1 and 2 of Theorem 3.3 are verified when φ corresponds to the norm in 𝒴. In this way, if the fidelity term is the p-th power of the norm, with p ≥ 1, error estimates for the case of TPG-p functionals are obtained from Theorem 3.3. In particular, for p = 2 (corresponding to the classic fidelity term) it turns out that
which coincides with that of K. Ito and B. Jin in [11]. It can then be concluded that Theorem 3.3 constitutes a generalization of the estimation result presented by these authors.
On the other hand, it is appropriate to mention that inequality (3.1), which constitutes a first error estimate, was obtained by M. Benning and M. Burger in 2011 under slightly different hypotheses on φ and ψ, using the symmetric Bregman distance [2].
Finally, it is important to observe that Theorem 3.3 would allow obtaining orders of convergence for the case in which the regularization parameter is chosen with an a-priori or a-posteriori rule. However, if one is interested in error estimates for the case of heuristic rules, one can obtain such an estimate from inequality (3.2) as follows
From the proof of Theorem 3.3, it is easy to see that the error estimate given by inequality (3.3) is less than or equal to the estimate obtained in that theorem.
Next, it is proved that, in the context of Hilbert spaces and under certain hypotheses, the optimality condition for the functional (guaranteed by the existence of minimizers) implies the source condition present in the statement of Theorem 3.3. It should be noted that, in this context, the dual adjoint operator 𝒯 # coincides with the adjoint operator 𝒯 ∗ of 𝒯.
Proposition 3.2. Let 𝒳 and 𝒴 be Hilbert spaces and consider the exact solution of (1.1). Suppose that {x η } is a sequence of minimizers of the functional such that x η converges to the exact solution when η → 0 + and that there exists w ∈ 𝒴 such that (y − 𝒯 x η )/η → w. Then, the exact solution equals 𝒯 ∗ w, where 𝒯 ∗ is the adjoint operator of 𝒯.
Proof. Since x η is a minimizer of the functional, from Theorem 2.2 it turns out that 0 belongs to the subdifferential of the functional at x η . Then, by the properties of the subgradient under sums, scalar multiplication, translation and composition [22], it follows that
Thus, the zero functional can be decomposed as 0 = f η + g η , where f η and g η are subgradients of the fidelity and the scaled penalizing terms at x η , respectively. Since 𝒳 and 𝒴 are Hilbert spaces, from Proposition 2.1 and Corollary 2.1 it immediately follows that f η = 𝒯 ∗ (𝒯 x η − y) and g η = η x η . Since g η = − f η , it turns out that

x η = 𝒯 ∗ ((y − 𝒯 x η )/η)

and then, by hypothesis, x η → 𝒯 ∗ w. By uniqueness of the limit it is concluded that the exact solution equals 𝒯 ∗ w.
□
Finally, the optimality condition associated with the TPGG-p functional with p > 1 and a convex, weakly lower semicontinuous penalizer implies the source condition present in the statement of Theorem 3.3, as proved in the following proposition.
Proposition 3.3. Let 𝒳 and 𝒴 be Banach spaces, ψ a convex and weakly lower semicontinuous functional, and consider the exact solution of (1.1). Suppose that {x η } is a sequence of minimizers of the functional with p > 1 such that x η converges to the exact solution when η → 0 + and that the following hypothesis is satisfied:
If there exists such that , then , where is the dual adjoint operator of 𝒯.
Proof. Since x η is a minimizer of the functional, from Theorem 2.2 it immediately follows that 0 belongs to the subdifferential of the functional at x η . Then, by the properties of the subgradient under sums, scalar multiplication and composition [22], it follows that
and then, the functional 0 can be decomposed as , with
Now, by hypothesis, it is known that and thus . Since g η = − f η , it follows that . Analogously to the proof of Proposition 3.2, as a consequence of the weak lower semicontinuity of ψ and since and as η → 0+, it is proved that . □
It should be mentioned here that, under certain additional hypotheses, assumption (3.4) about the subdifferential of the p-th power of the norm with p > 1 is verified. For example, if the fidelity corresponds to the norm in a Hilbert space, from Proposition 2.1 it follows that the subdifferential has a single element. On the other hand, if the functional is Gateaux differentiable, then its subdifferential contains only the Gateaux derivative [18].
Because the Bregman distance with respect to the convex functional ψ involves a term of the form [ξ, z − x], where ξ ∈ ∂ψ(x) and x, z ∈ 𝒳, several authors propose the use of inequalities containing this term, called variational inequalities, and it has been proved that they are a powerful tool for obtaining convergence rates [7], [10], [17]. The first convergence-rate results for minimizers of Tikhonov-Phillips functionals are based on smoothness assumptions on the solution with respect to an operator (generally non-linear) defined on a Hilbert space [4] or a Banach space [3]. These assumptions are expressed in terms of a source condition (generally associated with an equation). However, numerical observations showed that, even when the smoothness assumptions were not verified, the convergence rate was not necessarily significantly affected. In 2007, B. Hofmann et al. [10] took this observation into account and weakened these assumptions by replacing the source condition with a variational inequality.
We propose here to use inequalities of this type to prove a result analogous to Theorem 3.3 and thus obtain error estimates under this new approach. First, we will show the relationship between the source condition present in Theorem 3.3 and a variational inequality.
Proposition 3.4. Let 𝒳 and 𝒴 be Banach spaces and consider the exact solution of (1.1), such that there exists w ∈ 𝒴 ∗ with ξ = 𝒯 # w. Suppose that the following hypothesis is satisfied:
Then, there exists β ≥ 0 such that the following variational inequality holds:
Proof. For each x ∈ 𝒳, by the hypotheses and the definition of the dual adjoint operator, it follows that

whence inequality (3.6) is verified by taking β = C∥w∥. □
Proposition 3.5 presents a result that is, in some sense, a "reciprocal" of the previous statement. For this, the following result about the dual adjoint operator 𝒯 # of 𝒯 will be used ([17], Lemma 8.21).
Lemma 3.1. Let 𝒳 and 𝒴 be normed spaces and x ∗ ∈ 𝒳 ∗ . Then, x ∗ belongs to the range of 𝒯 # if and only if there exists c ≥ 0 such that |[x ∗, x]| ≤ c∥𝒯 x∥ for all x ∈ 𝒳.
Proposition 3.5. Let 𝒳 and 𝒴 be Banach spaces and consider the exact solution of (1.1). Suppose that there exist β ≥ 0 and ξ such that the variational inequality (3.6) holds, and that the following hypothesis is satisfied:
Then, there exists w ∈ 𝒴 ∗ such that ξ = 𝒯 # w.
Proof. By the hypotheses, it follows that

for all x ∈ 𝒳. Then, by Lemma 3.1, ξ belongs to the range of 𝒯 #, i.e., there exists w ∈ 𝒴 ∗ such that ξ = 𝒯 # w. □
In the particular case in which φ corresponds to the norm in 𝒴, the equivalence between the source condition and the variational inequality (3.6) is obtained immediately from Propositions 3.4 and 3.5. Indeed, hypotheses (3.5) and (3.7) are verified by taking c = C = 1.
The following theorem presents a result analogous to Theorem 3.3 with a modification in its hypotheses, since error estimates are established using a variational inequality instead of a source condition. By Proposition 3.4, it is immediate that this new hypothesis is weaker than those used in Theorem 3.3.
Theorem 3.4. Let 𝒳 and 𝒴 be Banach spaces, consider the exact solution of (1.1) and let y δ ∈ 𝒴 be the noisy data. Suppose that the fidelity functional φ satisfies the following hypotheses:
-
, for all y 1, y 2, y 3 ∈ 𝒴;
-
there exist β ∈ [0, ∞) andsuch that, for all x ∈ 𝒳.
Then, for each minimizer of the functional given in (1.2) for data y δ , it follows that
-
if p > 1 and q is the conjugate of p, then
-
if p = 1 and η β ≤ 1, then
Proof. Analogously to the proof of Theorem 3.3, since is a minimizer of the functional for data y δ and , it immediately follows that
Then,
Finally,
and the proof follows analogously to that of Theorem 3.3 with β instead of C∥w∥ in (3.2). □
From the proof of Proposition 3.4 it follows that β = C∥w∥, so that the error estimates obtained in Theorems 3.3 and 3.4 coincide.
It is appropriate to mention here that if we consider the fidelity term given by the norm, hypothesis 1 of Theorem 3.4 is satisfied as a consequence of the triangle inequality. In this way, from Theorem 3.4 it is possible to obtain a result that establishes an estimate of the error between the exact solution of (1.1) and a regularized solution obtained by minimizing TPG-p functionals (in this case, β = ∥w∥, since C = 1).
3.2 Multiple penalizers
As previously mentioned, the simultaneous use of two or more penalizers of different nature allows, in some way, the capturing of different characteristics of the exact solution. This is very useful, for instance, in image restoration problems in which it is known "a-priori" that the original image is "blocky", i.e. it possesses both regions of high regularity and regions with sharp discontinuities. Thus, generalizations of Theorems 3.3 and 3.4 are presented for the case in which the regularized solution is obtained by minimizing the functional given in (1.4).
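As a hedged numerical illustration of the multi-penalizer idea (an illustrative instance, not the paper's general setting), the sketch below takes p = 2 with two quadratic penalizers, an identity-based one and one built on a discrete first-difference operator, computes the closed-form minimizer of the two-penalty functional, and checks that random perturbations cannot improve on it.

```python
import numpy as np

# Illustrative two-penalizer instance of a multi-penalty functional:
# J(x) = 0.5*||T x - y||^2 + eta1*0.5*||x||^2 + eta2*0.5*||L x||^2,
# with L a discrete first-difference operator (favors piecewise-smooth x).

def two_penalty_functional(T, y, L, x, eta1, eta2):
    return (0.5 * np.linalg.norm(T @ x - y) ** 2
            + eta1 * 0.5 * np.dot(x, x)
            + eta2 * 0.5 * np.linalg.norm(L @ x) ** 2)

def two_penalty_minimizer(T, y, L, eta1, eta2):
    n = T.shape[1]
    A = T.T @ T + eta1 * np.eye(n) + eta2 * (L.T @ L)  # SPD normal-equations matrix
    return np.linalg.solve(A, T.T @ y)

rng = np.random.default_rng(3)
n = 6
T = rng.standard_normal((10, n))
y = rng.standard_normal(10)
L = (np.eye(n) - np.eye(n, k=1))[:-1]   # (n-1) x n first-difference operator
eta1, eta2 = 0.1, 0.5
x_star = two_penalty_minimizer(T, y, L, eta1, eta2)
J_star = two_penalty_functional(T, y, L, x_star, eta1, eta2)

J_near = min(
    two_penalty_functional(T, y, L, x_star + 0.05 * rng.standard_normal(n), eta1, eta2)
    for _ in range(50)
)
```

Each η i weights how strongly its penalizer shapes the solution, which is precisely the multi-scale flexibility discussed above.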
First, the following theorem presents an error estimate that constitutes a generalization of Theorem 3.3.
Theorem 3.5. Let 𝒳 and 𝒴 be Banach spaces, ψ i convex functionals for all i = 1,..., n, and consider the exact solution of (1.1) and noisy data y δ ∈ 𝒴. Suppose that, for each i = 1,..., n, there exists w i ∈ 𝒴 ∗ satisfying the corresponding source condition, and that the fidelity functional φ satisfies the following hypotheses:
-
, for all y 1, y 2, y 3 ∈𝒴;
-
there exists C > 0 such that .
Then, if , for each minimizer of the functional given in (1.4) for data y δ , it follows that
-
if p > 1 and q is the conjugate of p then
-
if p = 1 andthen
Proof. Since is a minimizer of the functional for data y δ , and and , it follows that
and, in consequence,
Now, by hypothesis, there exists such that . Then, since is a convex functional, from inequality (3.8) it follows that
Then,
-
If p > 1, it immediately follows from Young’s inequality that
which with (3.9) imply that
Thus,
-
If p = 1, by (3.9) it turns out that
and since , finally, it follows that
□
Note that if η i = 0 for all i ≥ 2, that is, if there is only one penalty term, the error estimate of Theorem 3.5 agrees with that of Theorem 3.3.
As in the single-penalty case, a result is presented below that generalizes Theorem 3.4 to the case in which two or more penalty terms are used, and where the hypothesis related to a source condition present in Theorem 3.5 is replaced by one that considers a variational inequality.
Theorem 3.6. Let 𝒳 and 𝒴 be Banach spaces, ψ i convex functionals for all i = 1,..., n, and consider the exact solution of (1.1) and noisy data y δ ∈ 𝒴. Suppose that the fidelity functional φ satisfies the following hypotheses:
-
, for all y1, y2, y3 ∈ 𝒴;
-
there exist β ∈ [0, ∞) andsuch that, for all x ∈ 𝒳.
Then, if , for each minimizer of the functional given in (1.4) for data y δ , it follows that
-
if p > 1 and q is the conjugate of p then
-
if p = 1 and then
Proof. Analogously to the proof of Theorem 3.5, since is a minimizer of the functional for data y δ and , it follows that
Then,
and thus,
and the proof follows analogously to that of Theorem 3.5 with β instead of the corresponding constant in (3.9). □
It is appropriate to mention that, in the case of a single penalizing term, that is, if η i = 0 for all i ≥ 2, the error estimate obtained in Theorem 3.6 is the same as that of Theorem 3.4.
4 CONCLUSIONS
In this work, error estimates were presented for the case in which the regularized solution is obtained by minimizing doubly-generalized Tikhonov-Phillips functionals. In particular, for the case of generalized Tikhonov-Phillips functionals, we have seen that the error estimates obtained coincide with those presented by K. Ito and B. Jin in [11]. The first result was based mainly on an assumption given by a source condition. We have seen that it is possible to replace this assumption by a variational inequality, obtaining analogous error estimates. Relationships were also established between the optimality condition associated with the problem, the source condition and the variational inequality. Finally, because it is known that in certain cases the use of two or more penalizing terms is useful, generalizations of the error estimates were presented for the case in which the regularized solution is obtained by minimizing doubly-generalized Tikhonov-Phillips functionals with multiple penalizers.
Acknowledgments
This work was supported in part by Universidad Nacional del Litoral, through project CAI+D 2020 PI Tipo II - 50620190100069LI.
REFERENCES
1. A.B. Bakushinskii. Remarks on choosing a regularization parameter using the quasi-optimality and ratio criterion. USSR Comp. Math. Math. Phys., 24 (1984), 181-182.
2. M. Benning & M. Burger. Error Estimates for General Fidelities. Electron. T. Numer. Ana., 38 (2011), 44-68.
3. M. Burger & S. Osher. Convergence rates of convex variational regularization. Inverse Problems, 20(5) (2004), 1411-1421.
4. H. Engl, K. Kunisch & A. Neubauer. Convergence rates for Tikhonov regularization of non-linear ill-posed problems. Inverse Problems, 5(4) (1989), 523-540.
5. H.W. Engl, M. Hanke & A. Neubauer. "Regularization of inverse problems", volume 375 of Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht (1996).
6. J. Flemming. Theory and examples of variational regularization with non-metric fitting functionals. Journal of Inverse and Ill-posed Problems, 18(6) (2010), 677-699.
7. J. Flemming & B. Hofmann. A new Approach to Source Conditions in Regularization with General Residual Term. Numerical Functional Analysis and Optimization, 31 (2010), 254-284.
8. G.H. Golub, M.T. Heath & G. Wahba. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, 21 (1979), 215-223.
9. P. Hansen. Analysis of discrete ill-posed problems by means of the L-curve. SIAM Review, 34(4) (1992), 561-580.
10. B. Hofmann, B. Kaltenbacher, C. Pöschl & O. Scherzer. A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators. Inverse Problems, 23(3) (2007), 987.
11. K. Ito & B. Jin. "Inverse problems: Tikhonov theory and algorithms", volume 22 of Series on Applied Mathematics. World Scientific (2015).
12. K. Ito, B. Jin & T. Takeuchi. Multi-parameter Tikhonov regularization. Methods and Applications of Analysis, 18 (2011), 31-46.
13. K. Ito, B. Jin & J. Zou. A new choice rule for regularization parameters in Tikhonov regularization. Applicable Analysis, 90 (2011), 1521-1544.
14. Y. Lu, L. Shen & Y. Xu. Multi-Parameter Regularization Methods for High-Resolution Image Reconstruction With Displacement Errors. IEEE Transactions on Circuits and Systems I: Regular Papers, 54(8) (2007), 1788-1799.
15. G.L. Mazzieri, R.D. Spies & K.G. Temperini. Directional convergence of spectral regularization method associated to families of closed operators. Computational and Applied Mathematics, 32 (2013), 119-134.
16. D.L. Phillips. A technique for the numerical solution of certain integral equations of the first kind. J. Assoc. Comput. Mach., 9 (1962), 84-97.
17. O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier & F. Lenzen. "Variational Methods in Imaging", volume 167 of Applied Mathematical Sciences. Springer, New York (2009).
18. T. Schuster, B. Kaltenbacher, B. Hofmann & K. Kazimierski. "Regularization Methods in Banach Spaces". de Gruyter, Berlin, New York (2012).
19. A.N. Tikhonov. Regularization of incorrectly posed problems. Soviet Math. Dokl., 4 (1963), 1624-1627.
20. A.N. Tikhonov. Solution of incorrectly formulated problems and the regularization method. Soviet Math. Dokl., 4 (1963), 1035-1038.
21. Z. Wang. Multi-parameter Tikhonov regularization and model function approach to the damped Morozov principle for choosing regularization parameters. Journal of Computational and Applied Mathematics, 236 (2012), 1815-1832.
22. E. Zeidler. "Nonlinear Functional Analysis and Its Applications, III: Variational methods and optimization". Springer-Verlag, New York (1985). URL https://books.google.com.ar/books?id=sCNXnAEACAAJ
Publication Dates
Published in this collection: 27 Mar 2023. Date of issue: Jan-Mar 2023.
History
Received: 15 Sept 2021. Accepted: 10 July 2022.