
Error Estimates for Doubly-Generalized Tikhonov-Phillips Regularization

ABSTRACT

In this work, error estimates are presented for the case in which the regularized solution is obtained by minimizing doubly-generalized Tikhonov-Phillips functionals. The first result relies mainly on an assumption given by a source condition. It is then proved that this assumption can be replaced by a variational inequality, obtaining an analogous error estimate. Finally, relationships are established between the optimality condition associated with the problem, the source condition and the variational inequality. On the other hand, it is known that, in certain cases, the use of two or more penalizing terms is useful. For this reason, generalizations of the error estimates are presented for cases in which the regularized solution is a minimizer of a doubly-generalized Tikhonov-Phillips functional with multiple penalizers.

Keywords:
inverse problems; Tikhonov-Phillips; error estimate; source condition; variational inequality

1 INTRODUCTION

In a quite general framework, an inverse problem can be defined as the problem of determining x in an equation of the form

$$Tx = y, \tag{1.1}$$

where $T : \mathcal{X}\to\mathcal{Y}$ is a bounded linear operator between two infinite-dimensional Banach spaces (in the classic theory, 𝒳 and 𝒴 are Hilbert spaces) and y is the data, supposed to be known, perhaps with a certain degree of error. Frequently, inverse problems are ill-posed in the sense of Hadamard (i.e., a solution may not exist, may not be unique, or may not depend continuously on the data) and thus the need arises to apply a regularization method. Associated with such a method there is a parameter called the "regularization parameter", whose choice is essential to achieve an adequate approximation of the solution of the inverse problem. There are several parameter choice rules that allow determining its value. The so-called "a-priori" rules are those that depend only on the noise level of the problem, while the "a-posteriori" rules are those that depend on both the noise level and the data. Finally, there is another type of rules, called "heuristic", which depend on the noise level only through the data. Among them, we can mention the "generalized cross-validation method" introduced in 1979 by G. H. Golub, M. T. Heath and G. Wahba [8], the "L-curve criterion" proposed in 1992 by P. C. Hansen [9] and the rule proposed by K. Ito, B. Jin and J. Zou in 2011 [13], whose construction is based on the stochastic approach to solving an inverse problem. It is important to mention that this type of rule is very useful in those problems where the exact noise level is unknown and only the data is available.

Although there is a wide variety of regularization methods, probably the best known and most widely used is the Tikhonov-Phillips regularization method, originally proposed by D. L. Phillips and A. N. Tikhonov in 1962 and 1963, respectively [16], [19], [20]. Under the classic version of this method (with fidelity term and penalizer both quadratic) and its generalized version (with quadratic fidelity term and generalized penalizer), it is possible to prove the convergence of the regularized solutions to a minimum penalizing solution when the regularization parameter is chosen through an a-priori or a-posteriori rule [3], [5]. It is possible to generalize these convergence results to the doubly-generalized Tikhonov-Phillips regularization method (that is, with generalized fidelity and penalizing terms) for these types of rules [6], [11]. A result due to A. B. Bakushinskii [1] shows that a regularization method cannot be convergent when the associated parameter choice rule depends only on the data of the ill-posed inverse problem. But this does not mean that the method cannot perform well for small noise levels. For this reason, given that convergence results cannot be obtained, various authors have presented error estimates, that is, bounds for the error between the solution of the problem and the regularized solution when the regularization parameter is chosen using a heuristic rule for a fixed noise level [2], [11], [18].
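For orientation, the classic Hilbert-space case recalled above admits a closed form. The following minimal numerical sketch (our own illustration, not taken from the paper; matrices, sizes and names are illustrative) computes the classic Tikhonov-Phillips minimizer of $\frac12\|Tx-y\|^2+\frac{\eta}{2}\|x\|^2$ for a matrix operator and checks that, for exact data and a vanishing regularization parameter, it approaches the minimum-norm least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 10
T = rng.standard_normal((m, n))     # stands in for the bounded linear operator
y = T @ rng.standard_normal(n)      # exact, attainable data

def tikhonov(T, y, eta):
    """Minimizer of (1/2)||Tx - y||^2 + (eta/2)||x||^2."""
    return np.linalg.solve(T.T @ T + eta * np.eye(T.shape[1]), T.T @ y)

# For exact data and eta -> 0+, the regularized solutions approach the
# minimum-norm least-squares solution T^+ y.
x_small = tikhonov(T, y, 1e-10)
assert np.allclose(x_small, np.linalg.pinv(T) @ y, atol=1e-5)
```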

We propose here to study the doubly-generalized Tikhonov-Phillips regularization method, which consists of approximating the solution of problem (1.1) by a minimizer of the functional, called the "TPGG-p functional", given by

$$J^p_{\tilde\phi,\psi,\eta}(x) \doteq \frac{1}{p}\,\tilde\phi(Tx,y)^p + \eta\,\psi(x), \tag{1.2}$$

where $p\ge1$, $\tilde\phi : \mathcal{Y}\times\mathcal{Y}\to[0,+\infty)$, $\psi : \mathcal{X}\to[0,+\infty)$ is a convex functional and $\eta>0$ is the regularization parameter. In particular, if $\tilde\phi(Tx,y)=\|Tx-y\|$, the resulting functional is called the "TPG-p functional" and is given by

$$J^p_{\psi,\eta}(x) \doteq \frac{1}{p}\,\|Tx-y\|^p + \eta\,\psi(x), \tag{1.3}$$

where $p\ge1$ and $\psi : \mathcal{X}\to[0,+\infty)$ is a convex functional. This work presents error estimates that we have proved for the case in which the regularized solution is obtained by minimizing the TPGG-p functionals given by (1.2). The first estimates rely mainly on an assumption given by a source condition which, as is well known, imposes certain smoothness conditions on the minimum penalizing solution. These estimates are then particularized to the case in which $\tilde\phi(Tx,y)=\|Tx-y\|$ and p = 2, showing that they are equivalent to the estimate proposed by K. Ito and B. Jin in [11].

As usual, we use the Bregman distance to estimate the error. This distance is a natural way to measure the deviation between elements of a Banach space with respect to a convex functional, and it contains a term that we will need to bound. For this purpose, the so-called "variational inequalities" will be used to bound this term of interest and obtain error estimates. We will see that it is possible to replace the source-condition assumption used in the first error estimates by a variational inequality, obtaining results similar to those estimates. Finally, relationships are established between the optimality condition associated with the problem, the source condition and the variational inequality.

It is well known that an adequate choice of the penalizing term, based on "a-priori" knowledge of certain information about the exact solution, will result in regularized solutions which appropriately reflect those characteristics. In recent years, there has been a growing interest in the multi-parameter Tikhonov regularization method, which uses multiple constraints as a means of improving the quality of the inversion. Multi-parameter regularization adds several different penalties exhibiting multi-scale features, while single-parameter regularization uses a unique penalty, which may result in a regularized solution that does not preserve certain features of the original solution. The use of multi-parameter regularization for solving ill-posed problems naturally matches the multi-resolution analysis framework, which has become a standard method to analyze the frequency information of images at different resolutions [12], [14], [15], [21].
For this reason, we present generalizations of the error-estimate results obtained for the functionals given in (1.2) to the case in which the regularized solution is a minimizer of the doubly-generalized Tikhonov-Phillips functional with multiple penalizers given by

$$J^p_{\tilde\phi,\boldsymbol\psi,\boldsymbol\eta}(x) \doteq \frac{1}{p}\,\tilde\phi(Tx,y)^p + \boldsymbol\eta\cdot\boldsymbol\psi(x), \tag{1.4}$$

where $p\ge1$, $\tilde\phi : \mathcal{Y}\times\mathcal{Y}\to[0,+\infty)$, $\boldsymbol\psi\doteq(\psi_1,\dots,\psi_n) : \mathcal{X}\to\mathbb{R}^n$ with $\psi_i : \mathcal{X}\to[0,+\infty)$ convex functionals for all i = 1,..., n, and $\boldsymbol\eta=(\eta_1,\dots,\eta_n)\in\mathbb{R}^n$ such that $\eta_i>0$ for all i = 1,..., n.

2 PRELIMINARIES

As previously mentioned, the Bregman distance will be used to estimate the a-posteriori error and, to present its definition, it is necessary to introduce the concepts of subgradient and subdifferential of a convex functional [18], which are presented below.

Definition 2.1. Let 𝒳 be a Banach space, $\mathcal{X}^*$ the dual space of 𝒳 and $\psi : \mathcal{X}\to\mathbb{R}\cup\{\infty\}$ a convex functional. Then, $x^*\in\mathcal{X}^*$ is a subgradient of ψ at x if $\psi(z) \ge \psi(x) + [x^*, z-x]$ for all $z\in\mathcal{X}$, where $[x^*, x]$ denotes the functional $x^*$ evaluated at x, i.e. $[x^*,x]\doteq x^*(x)$. The set $\partial\psi(x)$ of all subgradients of ψ at x is called the subdifferential of ψ at x.

It is important to mention here that the subgradient is a generalization of the classical concept of derivative to the case of convex functionals [22].

Definition 2.2. Let 𝒳 be a Banach space, $\psi : \mathcal{X}\to\mathbb{R}\cup\{\infty\}$ a convex functional, $x\in\mathcal{X}$ and $\xi\in\partial\psi(x)$. The Bregman distance at x with respect to ξ and ψ is defined as

$$d_\xi^\psi(z,x) \doteq \psi(z) - \psi(x) - [\xi, z-x], \qquad z\in\mathcal{X}.$$

It is immediate to see that $d_\xi^\psi(x,x)=0$, and the convexity of the functional ψ implies that $d_\xi^\psi(z,x)\ge0$ for all $z\in\mathcal{X}$. Also, it is easy to prove that if 𝒳 is a Hilbert space and the functional ψ is chosen as the square of the norm in 𝒳, then the Bregman distance at an element x satisfies $d_\xi^\psi(z,x)=\|z-x\|^2$ for all $z\in\mathcal{X}$. In this way, the Bregman distance extends the concept of norm.
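The collapse of the Bregman distance to the squared norm in the Hilbert case can be checked numerically. The small sketch below (our own illustration; the function name is ours) uses $\psi(x)=\|x\|^2$, whose unique subgradient at x is 2x:

```python
import numpy as np

def bregman_sq_norm(z, x):
    """Bregman distance for psi(x) = ||x||^2, whose subgradient at x is 2x."""
    xi = 2 * x
    return z @ z - x @ x - xi @ (z - x)

rng = np.random.default_rng(1)
x, z = rng.standard_normal(5), rng.standard_normal(5)
assert bregman_sq_norm(x, x) == 0.0
assert np.isclose(bregman_sq_norm(z, x), np.sum((z - x) ** 2))
```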

Now we introduce the concept of duality mapping, which will be relevant for its close relationship with the subdifferential of a power of a norm [17], [18]. To define this mapping, it is necessary to introduce the concept of a "gauge function": a continuous and strictly increasing function $f : [0,+\infty)\to[0,+\infty)$ such that $f(0)=0$ and $\lim_{t\to\infty}f(t)=\infty$. Furthermore, it is necessary to define the set denoted by $2^{\mathcal{X}^*}$ consisting of all subsets of the dual space of 𝒳, that is, $2^{\mathcal{X}^*}\doteq\{E : E\subseteq\mathcal{X}^*\}$.

Definition 2.3. Let 𝒳 be a Banach space. The duality mapping of 𝒳 with respect to the gauge function f is the (set-valued) mapping $J : \mathcal{X}\to2^{\mathcal{X}^*}$ defined by

$$J(x) = \left\{x^*\in\mathcal{X}^* : [x^*,x] = \|x^*\|\,\|x\|,\ \|x^*\| = f(\|x\|)\right\}.$$

The following proposition presents the duality mapping with respect to the gauge function $f(t)=t^{p-1}$ with p > 1, for the case of Hilbert spaces.

Proposition 2.1. Let 𝒳 be a Hilbert space and $f(t)=t^{p-1}$ with p > 1. The duality mapping J of 𝒳 with respect to f is given by $J(x)=\|x\|^{p-2}\,x$.
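Proposition 2.1 can be sanity-checked against Definition 2.3 in finite dimensions. The following sketch (our own, with illustrative values) verifies that the candidate $x^*=\|x\|^{p-2}x$ satisfies the two defining identities of the duality mapping:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4)
p = 3.0
x_star = np.linalg.norm(x) ** (p - 2) * x   # candidate from Proposition 2.1

nx, nxs = np.linalg.norm(x), np.linalg.norm(x_star)
assert np.isclose(x_star @ x, nxs * nx)     # [x*, x] = ||x*|| ||x||
assert np.isclose(nxs, nx ** (p - 1))       # ||x*|| = f(||x||) with f(t) = t^(p-1)
```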

The next result, called Asplund Theorem, relates the subdifferential of the primitive of a gauge function (which is convex) and the duality mapping with respect to that function.

Theorem 2.1. Let 𝒳 be a Banach space and f a gauge function. If $F(t)\doteq\int_0^t f(s)\,ds$, then the duality mapping of 𝒳 with respect to f is given by

$$J(x) = \partial\big(F(\|\cdot\|)\big)(x), \qquad x\in\mathcal{X}.$$

The following corollary is useful to obtain the subdifferential of the convex functional $\frac{1}{p}\|x\|^p$ with p > 1, which is associated with the fidelity term considered in (1.3).

Corollary 2.1. If 𝒳 is a Banach space and $f(t)=t^{p-1}$ with p > 1, then the duality mapping J of 𝒳 with respect to f is given by $J(x)=\partial\left(\frac{1}{p}\|\cdot\|^p\right)(x)$.

It is well known that a minimizer of a differentiable convex function satisfies the optimality condition, that is, the derivative of that function vanishes at the minimizer. Similarly, a minimizer z of a convex functional 𝒥 defined on a Banach space satisfies the optimality condition $0\in\partial\mathcal{J}(z)$, as presented in the following result. Conversely, the optimality condition implies that z is a minimizer.

Theorem 2.2. Let 𝒳 be a Banach space, $\mathcal{J} : \mathcal{X}\to\mathbb{R}\cup\{\infty\}$ a convex functional and $z\in D(\mathcal{J})$ with $\mathcal{J}(z)<\infty$. Then, $\mathcal{J}(z)=\min_{x\in\mathcal{X}}\mathcal{J}(x)$ if and only if $0\in\partial\mathcal{J}(z)$.

3 MAIN RESULTS

3.1 One penalizer

The following result presents the first error estimate that we have obtained for the case in which the regularized solution is a minimizer of a TPGG-p functional. As will be seen, it is possible to particularize this estimate to the case of TPG-p functionals. More precisely, it will be seen that the estimate obtained for the case of TPG-2 functionals is equivalent to that proposed by K. Ito and B. Jin in [11].

Theorem 3.3. Let 𝒳 and 𝒴 be Banach spaces, $T\in\mathcal{L}(\mathcal{X},\mathcal{Y})$, $\psi : \mathcal{X}\to[0,+\infty)$, $\tilde\phi : \mathcal{Y}\times\mathcal{Y}\to[0,+\infty)$, $\tilde x$ the exact solution of (1.1) and $y^\delta\in\mathcal{Y}$ such that $\tilde\phi(y,y^\delta)\le\delta$ and $\tilde\phi(y^\delta,y)\le\delta$. Suppose that there exists $w\in\mathcal{Y}^*$ such that $\xi\doteq T^\#w\in\partial\psi(\tilde x)$ and that the functional $\tilde\phi$ satisfies the following hypotheses:

  1. $\tilde\phi(y_1,y_3)\le\tilde\phi(y_1,y_2)+\tilde\phi(y_2,y_3)$, for all $y_1, y_2, y_3\in\mathcal{Y}$;

  2. there exists C > 0 such that $\|y_1-y_2\|\le C\,\tilde\phi(y_2,y_1)$, for all $y_1, y_2\in\mathcal{R}(T)$.

Then, for each minimizer $x_\eta^\delta$ of the functional $J^p_{\tilde\phi,\psi,\eta}$ given in (1.2) for data $y^\delta$ it follows that

  • if p > 1 and q is the conjugate exponent of p, then

$$d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\frac{\delta^p}{p} + \eta\,\delta\,\|w\|\,C + \frac{1}{q}\,(\eta\,\|w\|\,C)^q}{\eta};$$

  • if p = 1 and $\eta\,\|w\|\,C\le1$, then

$$d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\delta\,(1+\eta\,\|w\|\,C)}{\eta}.$$

Proof. Since $x_\eta^\delta$ is a minimizer of the functional $J^p_{\tilde\phi,\psi,\eta}$ for data $y^\delta$, it follows that

$$\frac{1}{p}\,\tilde\phi(Tx_\eta^\delta,y^\delta)^p + \eta\,\psi(x_\eta^\delta) \le \frac{1}{p}\,\tilde\phi(T\tilde x,y^\delta)^p + \eta\,\psi(\tilde x),$$

and, in consequence, $\frac{1}{p}\,\tilde\phi(Tx_\eta^\delta,y^\delta)^p \le \frac{1}{p}\,\tilde\phi(T\tilde x,y^\delta)^p + \eta\,\big(\psi(\tilde x)-\psi(x_\eta^\delta)\big)$.

Then,

$$\begin{aligned}
\frac{1}{p}\,\tilde\phi(Tx_\eta^\delta,y^\delta)^p + \eta\, d_\xi^\psi(x_\eta^\delta,\tilde x)
&\le \frac{1}{p}\,\tilde\phi(T\tilde x,y^\delta)^p + \eta\,\big(\psi(\tilde x)-\psi(x_\eta^\delta) + d_\xi^\psi(x_\eta^\delta,\tilde x)\big)\\
&= \frac{1}{p}\,\tilde\phi(T\tilde x,y^\delta)^p + \eta\,[T^\#w,\tilde x - x_\eta^\delta]\\
&= \frac{1}{p}\,\tilde\phi(T\tilde x,y^\delta)^p + \eta\,[w,T\tilde x - Tx_\eta^\delta],
\end{aligned} \tag{3.1}$$

where the last two equalities follow from the source condition and the definition of the dual adjoint operator, respectively. Now, since $\tilde\phi(y,y^\delta)\le\delta$ and $\tilde\phi(y^\delta,y)\le\delta$, it follows that

$$\begin{aligned}
\frac{1}{p}\,\tilde\phi(Tx_\eta^\delta,y^\delta)^p + \eta\, d_\xi^\psi(x_\eta^\delta,\tilde x)
&\le \frac{\delta^p}{p} + \eta\,[w,T\tilde x - Tx_\eta^\delta] && \text{by (3.1) and } T\tilde x = y\\
&\le \frac{\delta^p}{p} + \eta\,\|w\|\,\|T\tilde x - Tx_\eta^\delta\|\\
&\le \frac{\delta^p}{p} + \eta\,\|w\|\,C\,\tilde\phi(Tx_\eta^\delta,y^\delta) + \eta\,\|w\|\,C\,\tilde\phi(y^\delta,y) && \text{by hypotheses 1, 2 and } T\tilde x = y\\
&\le \frac{\delta^p}{p} + \eta\,\|w\|\,C\,\tilde\phi(Tx_\eta^\delta,y^\delta) + \eta\,\|w\|\,C\,\delta.
\end{aligned}$$

Finally,

$$\frac{1}{p}\,\tilde\phi(Tx_\eta^\delta,y^\delta)^p - \eta\,\|w\|\,C\,\tilde\phi(Tx_\eta^\delta,y^\delta) + \eta\, d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\delta^p}{p} + \eta\,\|w\|\,C\,\delta. \tag{3.2}$$

  • If p > 1, it immediately follows from Young's inequality that

$$\eta\,\|w\|\,C\,\tilde\phi(Tx_\eta^\delta,y^\delta) \le \frac{1}{p}\,\tilde\phi(Tx_\eta^\delta,y^\delta)^p + \frac{1}{q}\,(\eta\,\|w\|\,C)^q,$$

which implies that

$$\eta\, d_\xi^\psi(x_\eta^\delta,\tilde x) - \frac{1}{q}\,(\eta\,\|w\|\,C)^q \le \frac{1}{p}\,\tilde\phi(Tx_\eta^\delta,y^\delta)^p - \eta\,\|w\|\,C\,\tilde\phi(Tx_\eta^\delta,y^\delta) + \eta\, d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\delta^p}{p} + \eta\,\|w\|\,C\,\delta,$$

where the last inequality follows from (3.2). Thus,

$$d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\frac{\delta^p}{p} + \eta\,\delta\,\|w\|\,C + \frac{1}{q}\,(\eta\,\|w\|\,C)^q}{\eta}.$$

  • If p = 1, by (3.2) it follows that $(1-\eta\,\|w\|\,C)\,\tilde\phi(Tx_\eta^\delta,y^\delta) + \eta\, d_\xi^\psi(x_\eta^\delta,\tilde x) \le \delta + \eta\,\|w\|\,C\,\delta$, and since by hypothesis $1-\eta\,\|w\|\,C\ge0$, it finally turns out that

$$d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\delta\,(1+\eta\,\|w\|\,C)}{\eta}. \qquad\Box$$

Given that every norm satisfies the triangle inequality and taking C = 1, it follows immediately that hypotheses 1 and 2 of Theorem 3.3 are satisfied when $\tilde\phi$ corresponds to the norm in 𝒴. In this way, if the fidelity term is $\frac{1}{p}\|Tx-y\|^p$, with p ≥ 1, Theorem 3.3 yields error estimates for the case of TPG-p functionals. In particular, for p = 2 (corresponding to the classic fidelity term) it turns out that

$$d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\frac{\delta^2}{2} + \eta\,\delta\,\|w\| + \frac{1}{2}\,(\eta\,\|w\|)^2}{\eta} = \frac{1}{2}\left(\frac{\delta}{\sqrt{\eta}} + \sqrt{\eta}\,\|w\|\right)^2,$$

which coincides with the estimate of K. Ito and B. Jin in [11]. It can then be concluded that Theorem 3.3 is a generalization of the estimation result presented by these authors.

On the other hand, it is appropriate to mention that inequality (3.1), which constitutes a first error estimate, was obtained by M. Benning and M. Burger in 2011 under slightly different hypotheses on $\tilde\phi$ and ψ, using the symmetric Bregman distance [2].

Finally, it is important to observe that Theorem 3.3 allows obtaining orders of convergence for the case in which the regularization parameter is chosen with an a-priori or a-posteriori rule. However, if one is interested in error estimates for the case of heuristic rules, one can obtain such an estimate from inequality (3.2) as follows:

$$d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\delta^p}{p\,\eta} + \|w\|\,C\,\delta - \frac{\tilde\phi(Tx_\eta^\delta,y^\delta)^p}{p\,\eta} + \|w\|\,C\,\tilde\phi(Tx_\eta^\delta,y^\delta). \tag{3.3}$$

From the proof of Theorem 3.3, it is easy to see that the error estimate given by inequality (3.3) is less than or equal to the estimate obtained in that theorem.
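For the quadratic case p = 2 with $\tilde\phi$ the norm (so C = 1) and $\psi(x)=\frac12\|x\|^2$, both the estimate of Theorem 3.3 and the computable bound (3.3) can be tested numerically. The following sketch (our own construction; matrices and parameters are illustrative) builds an exact solution satisfying the source condition $\tilde x = T^*w$ and checks that the Bregman distance is below the (3.3) bound, which in turn is below the Theorem 3.3 bound:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 12
T = rng.standard_normal((m, n))
w = rng.standard_normal(m)
x_tilde = T.T @ w                    # source condition: xi = T*w lies in d(psi)(x~) = {x~}
y = T @ x_tilde                      # exact data

delta = 1e-2
noise = rng.standard_normal(m)
y_delta = y + delta * noise / np.linalg.norm(noise)   # ||y - y_delta|| = delta

eta = 0.1
x_eta = np.linalg.solve(T.T @ T + eta * np.eye(n), T.T @ y_delta)  # TPG-2 minimizer

d = 0.5 * np.linalg.norm(x_eta - x_tilde) ** 2        # Bregman distance for psi = ||.||^2/2
nw = np.linalg.norm(w)
res = np.linalg.norm(T @ x_eta - y_delta)             # residual phi~(T x, y^delta)

bound_thm = 0.5 * (delta / np.sqrt(eta) + np.sqrt(eta) * nw) ** 2             # Theorem 3.3, p = 2
bound_33 = delta**2 / (2 * eta) + nw * delta - res**2 / (2 * eta) + nw * res  # inequality (3.3)

assert d <= bound_33 <= bound_thm
```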

Next, it is proved that, in the context of Hilbert spaces and under certain hypotheses, the optimality condition for the functional $J^2_{\psi,\eta}$ with $\psi(x)\doteq\frac12\|x\|^2$ (which holds at any minimizer) implies the source condition present in the statement of Theorem 3.3. It should be noted that, in this context, the dual adjoint operator $T^\#$ coincides with the adjoint operator $T^* : \mathcal{Y}\to\mathcal{X}$ of $T$.

Proposition 3.2. Let 𝒳 and 𝒴 be Hilbert spaces, $\psi(x)\doteq\frac12\|x\|^2$ and $\tilde x$ the exact solution of (1.1). Suppose that $\{x_\eta\}$ is a family of minimizers of the functional $J^2_{\psi,\eta}$ such that $x_\eta\rightharpoonup\tilde x$ as $\eta\to0^+$ and that there exists $w\in\mathcal{Y}$ such that $-\frac{Tx_\eta-y}{\eta}\rightharpoonup w$. Then, $T^*w\in\partial\psi(\tilde x)$, where $T^* : \mathcal{Y}\to\mathcal{X}$ is the adjoint operator of $T$.

Proof. Since $x_\eta$ is a minimizer of the functional $J^2_{\psi,\eta}$, from Theorem 2.2 it turns out that

$$0 \in \partial\left(\frac12\,\|T\cdot - y\|^2 + \eta\,\frac12\,\|\cdot\|^2\right)(x_\eta).$$

Then, by the sum, scalar-multiple, translation and composition rules for the subgradient [22], it follows that

$$\begin{aligned}
\partial\left(\frac12\,\|T\cdot - y\|^2 + \eta\,\frac12\,\|\cdot\|^2\right)(x_\eta)
&= \partial\left(\frac12\,\|T\cdot - y\|^2\right)(x_\eta) + \eta\,\partial\left(\frac12\,\|\cdot\|^2\right)(x_\eta)\\
&= T^*\,\partial\left(\frac12\,\|\cdot - y\|^2\right)(Tx_\eta) + \eta\,\partial\left(\frac12\,\|\cdot\|^2\right)(x_\eta)\\
&= T^*\,\partial\left(\frac12\,\|\cdot\|^2\right)(Tx_\eta - y) + \eta\,\partial\left(\frac12\,\|\cdot\|^2\right)(x_\eta).
\end{aligned}$$

Thus, $0 \in T^*\,\partial\left(\frac12\|\cdot\|^2\right)(Tx_\eta-y) + \eta\,\partial\left(\frac12\|\cdot\|^2\right)(x_\eta)$ and the zero functional can be decomposed as $0 = f_\eta + g_\eta$, where $f_\eta \in T^*\,\partial\left(\frac12\|\cdot\|^2\right)(Tx_\eta-y)$ and $g_\eta \in \eta\,\partial\left(\frac12\|\cdot\|^2\right)(x_\eta)$. Since 𝒳 and 𝒴 are Hilbert spaces, from Proposition 2.1 and Corollary 2.1 it immediately follows that $f_\eta = T^*(Tx_\eta-y)$ and $g_\eta = \eta\,x_\eta$. Since $g_\eta = -f_\eta$, it turns out that

$$x_\eta = -\frac{T^*(Tx_\eta - y)}{\eta},$$

and then, by hypothesis, $x_\eta \rightharpoonup T^*w$. By uniqueness of the weak limit it is concluded that $\tilde x = T^*w$. From Proposition 2.1 and Corollary 2.1 it follows that $\partial\psi(\tilde x) = \partial\left(\frac12\|\cdot\|^2\right)(\tilde x) = \{\tilde x\}$, and thus $T^*w \in \partial\psi(\tilde x)$. □
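The identity $x_\eta=-T^*(Tx_\eta-y)/\eta$ derived in the proof is easy to verify numerically in finite dimensions. This is our own sketch, assuming a matrix operator and illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((15, 6))
y = rng.standard_normal(15)
eta = 0.5

# Minimizer of (1/2)||Tx - y||^2 + eta * (1/2)||x||^2 ...
x_eta = np.linalg.solve(T.T @ T + eta * np.eye(6), T.T @ y)

# ... satisfies T*(T x_eta - y) + eta * x_eta = 0, i.e. x_eta = -T*(T x_eta - y)/eta.
assert np.allclose(x_eta, -T.T @ (T @ x_eta - y) / eta)
```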

Finally, the optimality condition associated with the TPGG-p functional with p > 1 and a convex, weakly lower semicontinuous penalizer implies the source condition present in the statement of Theorem 3.3, as proved in the following proposition.

Proposition 3.3. Let 𝒳 and 𝒴 be Banach spaces, $\psi : \mathcal{X}\to[0,+\infty)$ a convex and weakly lower semicontinuous functional and $\tilde x$ the exact solution of (1.1). Suppose that $\{x_\eta\}$ is a family of minimizers of the functional $J^p_{\tilde\phi,\psi,\eta}$ with p > 1 such that $x_\eta\rightharpoonup\tilde x$ as $\eta\to0^+$ and that the following hypothesis is satisfied:

$$\text{for each } \eta>0, \text{ there exists } w_\eta\in\mathcal{Y}^* \text{ such that } -\frac{1}{\eta}\,\partial\left(\frac{\tilde\phi(\,\cdot\,,y)^p}{p}\right)(Tx_\eta) = \{w_\eta\}. \tag{3.4}$$

If there exists $w\in\mathcal{Y}^*$ such that $w_\eta\rightharpoonup w$, then $T^\#w\in\partial\psi(\tilde x)$, where $T^\# : \mathcal{Y}^*\to\mathcal{X}^*$ is the dual adjoint operator of $T$.

Proof. Since $x_\eta$ is a minimizer of the functional $J^p_{\tilde\phi,\psi,\eta}$, from Theorem 2.2 it immediately follows that $0\in\partial\left(\frac1p\,\tilde\phi(T\cdot,y)^p + \eta\,\psi(\cdot)\right)(x_\eta)$. Then, by the sum, scalar-multiple and composition rules for the subgradient [22], it follows that

$$0 \in T^\#\,\partial\left(\frac1p\,\tilde\phi(\,\cdot\,,y)^p\right)(Tx_\eta) + \eta\,\partial\psi(x_\eta),$$

and then the zero functional can be decomposed as $0 = f_\eta + g_\eta$, with

$$f_\eta \in T^\#\,\partial\left(\frac1p\,\tilde\phi(\,\cdot\,,y)^p\right)(Tx_\eta) \quad\text{and}\quad g_\eta \in \eta\,\partial\psi(x_\eta).$$

Now, by hypothesis, it is known that $-\frac1\eta\,\partial\left(\frac{\tilde\phi(\,\cdot\,,y)^p}{p}\right)(Tx_\eta)=\{w_\eta\}$ and thus $-\frac{f_\eta}{\eta}=T^\#w_\eta$. Since $g_\eta=-f_\eta$, it follows that $T^\#w_\eta=\frac{g_\eta}{\eta}\in\partial\psi(x_\eta)$. Analogously to the proof of Proposition 3.2, as a consequence of the weak lower semicontinuity of ψ and since $w_\eta\rightharpoonup w$ and $x_\eta\rightharpoonup\tilde x$ as $\eta\to0^+$, it is proved that $T^\#w\in\partial\psi(\tilde x)$. □

It should be mentioned here that, under certain additional hypotheses, assumption (3.4) about the subdifferential of $\frac1p\,\tilde\phi(\,\cdot\,,y)^p$ with p > 1 is verified. For example, if $\tilde\phi$ corresponds to the norm in a Hilbert space, it follows from Proposition 2.1 that the subdifferential $\partial\left(\frac1p\,\tilde\phi(\,\cdot\,,y)^p\right)(Tx_\eta)$ has a single element. On the other hand, if $\tilde\phi$ is Gateaux differentiable, then $\partial\tilde\phi(x)=\{\tilde\phi'(x)\}$ [18].

Because the Bregman distance at $\tilde x\in\mathcal{X}$ with respect to the convex functional ψ involves the term $[\xi, x-\tilde x]$, where $\xi\in\partial\psi(\tilde x)$ and $x\in\mathcal{X}$, several authors have proposed the use of inequalities containing this term, called variational inequalities, and it has been proved that they are a powerful tool for obtaining convergence rates [7], [10], [17]. The first convergence-rate results for minimizers of Tikhonov-Phillips functionals are based on smoothness assumptions on the solution with respect to an operator (generally non-linear) defined on a Hilbert space [4] or a Banach space [3]. These assumptions are expressed in terms of a source condition (generally associated with an equation). However, numerical observations showed that, even when the smoothness assumptions were not verified, the convergence rate was not necessarily significantly affected. In 2007, B. Hofmann et al. [10] took this observation into account and weakened these assumptions by replacing the source condition with a variational inequality. We propose here to use inequalities of this type to prove a result analogous to that of Theorem 3.3 and thus obtain error estimates under this new approach. First, we show the relationship between the source condition present in Theorem 3.3 and a variational inequality.

Proposition 3.4. Let 𝒳 and 𝒴 be Banach spaces, $T\in\mathcal{L}(\mathcal{X},\mathcal{Y})$, $\psi : \mathcal{X}\to[0,+\infty)$, $\tilde\phi : \mathcal{Y}\times\mathcal{Y}\to[0,+\infty)$ and $\tilde x$ the exact solution of (1.1) such that there exists $w\in\mathcal{Y}^*$ with $\xi\doteq T^\#w\in\partial\psi(\tilde x)$. Suppose that the following hypothesis is satisfied:

$$\text{there exists } C>0 \text{ such that } \|y_1-y_2\|\le C\,\tilde\phi(y_2,y_1), \quad \forall\, y_1, y_2\in\mathcal{R}(T). \tag{3.5}$$

Then, there exists β ≥ 0 such that the following variational inequality holds:

$$[\xi, \tilde x - x] \le \beta\,\tilde\phi(Tx, T\tilde x), \quad \forall\, x\in\mathcal{X}. \tag{3.6}$$

Proof. For each $x\in\mathcal{X}$, by the hypotheses and the definition of the dual adjoint operator, it follows that

$$[\xi, \tilde x - x] = [T^\#w, \tilde x - x] = [w, T\tilde x - Tx] \le \|w\|\,\|T\tilde x - Tx\| \le \|w\|\,C\,\tilde\phi(Tx, T\tilde x),$$

whence inequality (3.6) is verified taking $\beta\doteq C\,\|w\|$. □
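In the setting $\tilde\phi(y_1,y_2)=\|y_1-y_2\|$ (so C = 1) and $\psi(x)=\frac12\|x\|^2$ on Hilbert spaces, Proposition 3.4 with $\xi=T^*w=\tilde x$ and $\beta=\|w\|$ can be illustrated numerically. The following is our own sketch with illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((12, 7))
w = rng.standard_normal(12)
xi = x_tilde = T.T @ w          # source condition with psi = ||.||^2/2: xi = T*w = x~
beta = np.linalg.norm(w)        # beta = C ||w|| with C = 1

# Variational inequality (3.6): [xi, x~ - x] <= beta ||T x~ - T x|| for every x.
for _ in range(100):
    x = rng.standard_normal(7)
    assert xi @ (x_tilde - x) <= beta * np.linalg.norm(T @ (x_tilde - x)) + 1e-12
```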

Proposition 3.5 presents a result that is, in a certain sense, a "reciprocal" of the previous statement. For this, the following result about the dual adjoint operator $T^\#$ of $T$ will be used ([17], Lemma 8.21).

Lemma 3.1. Let 𝒳 and 𝒴 be normed spaces, $T\in\mathcal{L}(\mathcal{X},\mathcal{Y})$ and $x^*\in\mathcal{X}^*$. Then, $x^*\in\mathrm{Ran}(T^\#)$ if and only if there exists $\tilde C>0$ such that $[x^*,x]\le\tilde C\,\|Tx\|$ for all $x\in\mathcal{X}$.

Proposition 3.5. Let 𝒳 and 𝒴 be Banach spaces, $T\in\mathcal{L}(\mathcal{X},\mathcal{Y})$, $\psi : \mathcal{X}\to[0,+\infty)$, $\tilde\phi : \mathcal{Y}\times\mathcal{Y}\to[0,+\infty)$ and $\tilde x$ the exact solution of (1.1). Suppose that there exist β ≥ 0 and $\xi\in\partial\psi(\tilde x)$ such that $[\xi,\tilde x - x]\le\beta\,\tilde\phi(Tx,T\tilde x)$ for all $x\in\mathcal{X}$, and that the following hypothesis is satisfied:

$$\text{there exists } c>0 \text{ such that } c\,\tilde\phi(y_1,y_2)\le\|y_1-y_2\|, \quad \forall\, y_1,y_2\in\mathcal{R}(T). \tag{3.7}$$

Then, there exists $w\in\mathcal{Y}^*$ such that $\xi=T^\#w$.

Proof. By the hypotheses, it follows that

$$[\xi, x] = [\xi, \tilde x - (\tilde x - x)] \le \beta\,\tilde\phi(T(\tilde x - x), T\tilde x) \le \frac{\beta}{c}\,\|Tx\|,$$

for all $x\in\mathcal{X}$. Then, by Lemma 3.1 we have that $\xi\in\mathrm{Ran}(T^\#)$, i.e., there exists $w\in\mathcal{Y}^*$ such that $\xi=T^\#w$. □

In the particular case in which $\tilde\phi(Tx,y)\doteq\|Tx-y\|$, the equivalence between the source condition and the variational inequality (3.6) is obtained immediately from Propositions 3.4 and 3.5. Indeed, hypotheses (3.5) and (3.7) are verified taking c = C = 1.

The following theorem presents a result analogous to Theorem 3.3 with a modification in its hypotheses, since the error estimates are established using a variational inequality instead of a source condition. By Proposition 3.4, it is immediate that this new hypothesis is weaker than the one used in Theorem 3.3.

Theorem 3.4. Let 𝒳 and 𝒴 be Banach spaces, $T\in\mathcal{L}(\mathcal{X},\mathcal{Y})$, $\psi : \mathcal{X}\to[0,+\infty)$, $\tilde\phi : \mathcal{Y}\times\mathcal{Y}\to[0,+\infty)$, $\tilde x$ the exact solution of (1.1) and $y^\delta\in\mathcal{Y}$ such that $\tilde\phi(y,y^\delta)\le\delta$ and $\tilde\phi(y^\delta,y)\le\delta$. Suppose that the functional $\tilde\phi$ satisfies the following hypotheses:

  1. $\tilde\phi(y_1,y_3)\le\tilde\phi(y_1,y_2)+\tilde\phi(y_2,y_3)$, for all $y_1, y_2, y_3\in\mathcal{Y}$;

  2. there exist $\beta\in[0,\infty)$ and $\xi\in\partial\psi(\tilde x)$ such that $[\xi,\tilde x-x]\le\beta\,\tilde\phi(Tx,T\tilde x)$, for all $x\in\mathcal{X}$.

Then, for each minimizer $x_\eta^\delta$ of the functional $J^p_{\tilde\phi,\psi,\eta}$ given in (1.2) for data $y^\delta$ it follows that

  • if p > 1 and q is the conjugate exponent of p, then

$$d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\frac{\delta^p}{p} + \eta\,\delta\,\beta + \frac{1}{q}\,(\eta\,\beta)^q}{\eta};$$

  • if p = 1 and η β ≤ 1, then

$$d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\delta\,(1+\eta\,\beta)}{\eta}.$$

Proof. Analogously to the proof of Theorem 3.3, since $x_\eta^\delta$ is a minimizer of the functional $J^p_{\tilde\phi,\psi,\eta}$ for data $y^\delta$ and $\tilde\phi(y,y^\delta)\le\delta$, it immediately follows that

$$\frac1p\,\tilde\phi(Tx_\eta^\delta,y^\delta)^p \le \frac{\delta^p}{p} + \eta\,\big(\psi(\tilde x)-\psi(x_\eta^\delta)\big).$$

Then,

$$\begin{aligned}
\frac1p\,\tilde\phi(Tx_\eta^\delta,y^\delta)^p + \eta\,d_\xi^\psi(x_\eta^\delta,\tilde x)
&\le \frac{\delta^p}{p} + \eta\,\big(\psi(\tilde x)-\psi(x_\eta^\delta)+d_\xi^\psi(x_\eta^\delta,\tilde x)\big)\\
&= \frac{\delta^p}{p} + \eta\,[\xi,\tilde x - x_\eta^\delta]\\
&\le \frac{\delta^p}{p} + \eta\,\beta\,\tilde\phi(Tx_\eta^\delta,y^\delta) + \eta\,\beta\,\tilde\phi(y^\delta,y) && \text{by hypotheses 1, 2 and } T\tilde x = y\\
&\le \frac{\delta^p}{p} + \eta\,\beta\,\tilde\phi(Tx_\eta^\delta,y^\delta) + \eta\,\beta\,\delta. && \text{since } \tilde\phi(y^\delta,y)\le\delta
\end{aligned}$$

Finally,

$$\frac1p\,\tilde\phi(Tx_\eta^\delta,y^\delta)^p - \eta\,\beta\,\tilde\phi(Tx_\eta^\delta,y^\delta) + \eta\,d_\xi^\psi(x_\eta^\delta,\tilde x) \le \frac{\delta^p}{p} + \eta\,\beta\,\delta,$$

and the proof follows analogously to that of Theorem 3.3 with β instead of $C\,\|w\|$ in (3.2). □

From the proof of Proposition 3.4 it follows that one may take $\beta = C\,\|w\|$, in which case the error estimates obtained in Theorems 3.3 and 3.4 coincide.

It is appropriate to mention here that if we consider the fidelity term given by $\frac1p\,\|Tx-y\|^p$, that is $\tilde\phi(Tx,y)=\|Tx-y\|$, hypothesis 1 of Theorem 3.4 is satisfied as a consequence of the triangle inequality of the norm. In this way, from Theorem 3.4 it is possible to obtain a result that establishes an estimate of the error between the exact solution of (1.1) and a regularized solution obtained by minimizing TPG-p functionals (in this case, β = ‖w‖, since C = 1).

3.2 Multiple penalizers

As previously mentioned, the simultaneous use of two or more penalizers of different nature allows capturing different characteristics of the exact solution. This is very useful, for instance, in image restoration problems in which it is known "a-priori" that the original image is "blocky", i.e., it possesses both regions of high regularity and regions with sharp discontinuities. Thus, generalizations of Theorems 3.3 and 3.4 are presented for the case in which the regularized solution is obtained by minimizing the functional given in (1.4).
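A simple quadratic instance of (1.4) with two penalizers still admits a closed-form minimizer, which may help fix ideas. The following sketch is our own (not from the paper); L is an illustrative first-difference matrix combining a norm penalty with a smoothness penalty:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 25, 10
T = rng.standard_normal((m, n))
y = rng.standard_normal(m)
L = np.diff(np.eye(n), axis=0)       # first-difference matrix, (n-1) x n

# Minimizer of (1/2)||Tx - y||^2 + eta1 * (1/2)||x||^2 + eta2 * (1/2)||Lx||^2:
# x = (T^T T + eta1 I + eta2 L^T L)^{-1} T^T y.
eta1, eta2 = 0.1, 1.0
x = np.linalg.solve(T.T @ T + eta1 * np.eye(n) + eta2 * L.T @ L, T.T @ y)

# Optimality: the gradient of the (differentiable) functional vanishes at x.
grad = T.T @ (T @ x - y) + eta1 * x + eta2 * L.T @ (L @ x)
assert np.allclose(grad, 0, atol=1e-8)
```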

First, the following theorem presents an error estimate that constitutes a generalization of Theorem 3.3.

Theorem 3.5. Let 𝒳 and 𝒴 be Banach spaces, $T\in\mathcal{L}(\mathcal{X},\mathcal{Y})$, $\tilde\phi : \mathcal{Y}\times\mathcal{Y}\to[0,+\infty)$, $\psi_i : \mathcal{X}\to[0,+\infty)$ convex functionals for all i = 1,..., n, $\psi_{\boldsymbol\eta} : \mathcal{X}\to\mathbb{R}$ given by $\psi_{\boldsymbol\eta}(x)\doteq\boldsymbol\eta\cdot\boldsymbol\psi(x)$, where $\boldsymbol\psi(x)\doteq(\psi_1(x),\dots,\psi_n(x))$ and $\boldsymbol\eta=(\eta_1,\dots,\eta_n)$, $\tilde x$ the exact solution of (1.1) and $y^\delta\in\mathcal{Y}$ such that $\tilde\phi(y,y^\delta)\le\delta$ and $\tilde\phi(y^\delta,y)\le\delta$. Suppose that, for each $\boldsymbol\eta\in\mathbb{R}^n$, there exists $w_{\boldsymbol\eta}\in\mathcal{Y}^*$ such that $\xi_{\boldsymbol\eta}\doteq T^\#w_{\boldsymbol\eta}\in\partial\psi_{\boldsymbol\eta}(\tilde x)$ and that the functional $\tilde\phi$ satisfies the following hypotheses:

  1. $\tilde\phi(y_1,y_3)\le\tilde\phi(y_1,y_2)+\tilde\phi(y_2,y_3)$, for all $y_1, y_2, y_3\in\mathcal{Y}$;

  2. there exists C > 0 such that $\|y_1-y_2\|\le C\,\tilde\phi(y_2,y_1)$, for all $y_1, y_2\in\mathcal{R}(T)$.

Then, if $\boldsymbol\eta^*\doteq\frac{\boldsymbol\eta}{\|\boldsymbol\eta\|_1}$, for each minimizer $x_{\boldsymbol\eta}^\delta$ of the functional $J^p_{\tilde\phi,\boldsymbol\psi,\boldsymbol\eta}$ given in (1.4) for data $y^\delta$, it follows that

  • if p > 1 and q is the conjugate exponent of p, then

$$d_{\xi_{\boldsymbol\eta^*}}^{\psi_{\boldsymbol\eta^*}}(x_{\boldsymbol\eta}^\delta,\tilde x) \le \frac{\frac1p\,\delta^p + \|\boldsymbol\eta\|_1\,\|w_{\boldsymbol\eta^*}\|\,C\,\delta + \frac1q\left(\|\boldsymbol\eta\|_1\,\|w_{\boldsymbol\eta^*}\|\,C\right)^q}{\|\boldsymbol\eta\|_1};$$

  • if p = 1 and $\|\boldsymbol\eta\|_1\,\|w_{\boldsymbol\eta^*}\|\,C\le1$, then

$$d_{\xi_{\boldsymbol\eta^*}}^{\psi_{\boldsymbol\eta^*}}(x_{\boldsymbol\eta}^\delta,\tilde x) \le \frac{\delta\left(1+\|\boldsymbol\eta\|_1\,\|w_{\boldsymbol\eta^*}\|\,C\right)}{\|\boldsymbol\eta\|_1}.$$

Proof. Since $x_\eta^\delta$ is a minimizer of the functional $J_{\tilde{\phi},\psi,\eta}^p$ for data $y^\delta$, and $\tilde{\phi}(y,y^\delta)\le\delta$ and $T\tilde{x}=y$, it follows that

$$\frac{1}{p}\,\tilde{\phi}(Tx_\eta^\delta,y^\delta)^p+\eta\cdot\psi(x_\eta^\delta)\le\frac{1}{p}\,\tilde{\phi}(T\tilde{x},y^\delta)^p+\eta\cdot\psi(\tilde{x})\le\frac{1}{p}\,\delta^p+\eta\cdot\psi(\tilde{x}),$$

and, in consequence,

$$\frac{1}{p}\,\frac{\tilde{\phi}(Tx_\eta^\delta,y^\delta)^p}{\|\eta\|_1}+\eta_*\cdot\psi(x_\eta^\delta)\le\frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\eta_*\cdot\psi(\tilde{x}),$$
$$\frac{1}{p}\,\frac{\tilde{\phi}(Tx_\eta^\delta,y^\delta)^p}{\|\eta\|_1}+\psi_{\eta_*}(x_\eta^\delta)\le\frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\psi_{\eta_*}(\tilde{x}),$$
$$\frac{1}{p}\,\frac{\tilde{\phi}(Tx_\eta^\delta,y^\delta)^p}{\|\eta\|_1}\le\frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\psi_{\eta_*}(\tilde{x})-\psi_{\eta_*}(x_\eta^\delta).\qquad(3.8)$$

Now, by hypothesis, there exists $w_{\eta_*}\in\mathcal{Y}^*$ such that $\xi_{\eta_*}\doteq T^\#w_{\eta_*}\in\partial\psi_{\eta_*}(\tilde{x})$. Then, since $\psi_{\eta_*}$ is a convex functional, from inequality (3.8) it follows that

$$\begin{aligned}
\frac{1}{p}\,\frac{\tilde{\phi}(Tx_\eta^\delta,y^\delta)^p}{\|\eta\|_1}+d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x}) &\le \frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\psi_{\eta_*}(\tilde{x})-\psi_{\eta_*}(x_\eta^\delta)+d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x})\\
&=\frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\left\langle T^\#w_{\eta_*},\tilde{x}-x_\eta^\delta\right\rangle\\
&\le\frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\|w_{\eta_*}\|C\,\tilde{\phi}\!\left(Tx_\eta^\delta,y^\delta\right)+\|w_{\eta_*}\|C\,\tilde{\phi}\!\left(y^\delta,T\tilde{x}\right) &&\text{(by hypotheses 1 and 2)}\\
&\le\frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\|w_{\eta_*}\|C\,\tilde{\phi}\!\left(Tx_\eta^\delta,y^\delta\right)+\|w_{\eta_*}\|C\,\delta &&\text{(since }\tilde{\phi}(y^\delta,y)\le\delta\text{ and }T\tilde{x}=y).
\end{aligned}$$

Then,

$$\frac{1}{p}\,\tilde{\phi}(Tx_\eta^\delta,y^\delta)^p-\|\eta\|_1\|w_{\eta_*}\|C\,\tilde{\phi}\!\left(Tx_\eta^\delta,y^\delta\right)+\|\eta\|_1\,d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x})\le\frac{1}{p}\,\delta^p+\|\eta\|_1\|w_{\eta_*}\|C\,\delta.\qquad(3.9)$$

  • If p > 1, it immediately follows from Young’s inequality that

$$\|\eta\|_1\|w_{\eta_*}\|C\,\tilde{\phi}\!\left(Tx_\eta^\delta,y^\delta\right)\le\frac{1}{p}\,\tilde{\phi}(Tx_\eta^\delta,y^\delta)^p+\frac{1}{q}\left(\|\eta\|_1\|w_{\eta_*}\|C\right)^q,$$

which, together with (3.9), implies that

$$-\frac{1}{q}\left(\|\eta\|_1\|w_{\eta_*}\|C\right)^q+\|\eta\|_1\,d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x})\le\frac{1}{p}\,\delta^p+\|\eta\|_1\|w_{\eta_*}\|C\,\delta.$$

Thus,

$$d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x})\le\frac{\frac{1}{p}\delta^p+\|\eta\|_1\|w_{\eta_*}\|C\,\delta+\frac{1}{q}\left(\|\eta\|_1\|w_{\eta_*}\|C\right)^q}{\|\eta\|_1}.$$

  • If p = 1, by (3.9) it turns out that

$$\left(1-\|\eta\|_1\|w_{\eta_*}\|C\right)\tilde{\phi}(Tx_\eta^\delta,y^\delta)+\|\eta\|_1\,d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x})\le\delta+\|\eta\|_1\|w_{\eta_*}\|C\,\delta,$$

and since $1-\|\eta\|_1\|w_{\eta_*}\|C\ge 0$, it finally follows that

$$d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x})\le\frac{\delta\left(1+\|\eta\|_1\|w_{\eta_*}\|C\right)}{\|\eta\|_1}.\qquad\square$$

Note that if $\eta_i=0$ for all $i\ge 2$, that is, if there is only one penalty term, the error estimate of Theorem 3.5 agrees with that of Theorem 3.3.
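As an illustrative numerical sketch of Theorem 3.5 (not part of the original text; $T$, $L$ and all parameter values are hypothetical choices), one can take $n=2$ quadratic penalizers and the norm fidelity, for which $p=q=2$ and $C=1$, enforce the source condition by solving $T^\#w_{\eta_*}=\nabla\psi_{\eta_*}(\tilde{x})$, and verify the estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
# A well-conditioned square T so that the source condition can be enforced.
T = np.eye(n) + 0.1 * rng.standard_normal((n, n))

# Two quadratic penalizers: psi1(x) = 0.5*||x||^2, psi2(x) = 0.5*||L x||^2,
# with L a (hypothetical) first-difference-type operator.
L = np.eye(n) - np.eye(n, k=1)
eta = np.array([0.04, 0.02])
eta1 = eta.sum()                 # ||eta||_1
eta_star = eta / eta1

x_tilde = rng.standard_normal(n)
y = T @ x_tilde

# Source condition: T^* w = grad psi_{eta*}(x_tilde)
#                        = eta*_1 x_tilde + eta*_2 L^T L x_tilde.
g = eta_star[0] * x_tilde + eta_star[1] * (L.T @ L @ x_tilde)
w = np.linalg.solve(T.T, g)

# Noisy data with ||y - y_delta|| = delta.
delta = 0.05
noise = rng.standard_normal(n)
y_delta = y + delta * noise / np.linalg.norm(noise)

# Minimizer of 0.5*||Tx - y_delta||^2 + eta_1*0.5*||x||^2 + eta_2*0.5*||Lx||^2.
A = T.T @ T + eta[0] * np.eye(n) + eta[1] * (L.T @ L)
x_reg = np.linalg.solve(A, T.T @ y_delta)

# Bregman distance of the quadratic psi_{eta*}: a weighted squared norm.
e = x_reg - x_tilde
bregman = eta_star[0] * 0.5 * (e @ e) + eta_star[1] * 0.5 * ((L @ e) @ (L @ e))

# Estimate of Theorem 3.5 with p = q = 2 and C = 1.
b = eta1 * np.linalg.norm(w)
bound = (0.5 * delta**2 + b * delta + 0.5 * b**2) / eta1
assert bregman <= bound
```

The closed-form minimizer is available here only because both penalizers are quadratic; the theorem itself requires no such structure.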

As was done in the case of a single penalty term, a result is presented below that generalizes Theorem 3.4 to the case in which two or more penalty terms are used, and in which the source-condition hypothesis of Theorem 3.5 is replaced by one involving a variational inequality.

Theorem 3.6. Let $\mathcal{X}$ and $\mathcal{Y}$ be Banach spaces, $T\in\mathcal{L}(\mathcal{X},\mathcal{Y})$, $\tilde{\phi}:\mathcal{Y}\times\mathcal{Y}\to[0,+\infty)$, $\psi_i:\mathcal{X}\to[0,+\infty)$ convex functionals for all $i=1,\dots,n$, $\psi_\eta:\mathcal{X}\to\mathbb{R}$ given by $\psi_\eta(x)\doteq\eta\cdot\psi(x)$, where $\psi(x)\doteq(\psi_1(x),\dots,\psi_n(x))$, $\eta=(\eta_1,\dots,\eta_n)$, $\tilde{x}$ the exact solution of (1.1) and $y^\delta\in\mathcal{Y}$ such that $\tilde{\phi}(y,y^\delta)\le\delta$ and $\tilde{\phi}(y^\delta,y)\le\delta$. Suppose that the functional $\tilde{\phi}$ satisfies the following hypotheses:

  1. $\tilde{\phi}(y_1,y_3)\le\tilde{\phi}(y_1,y_2)+\tilde{\phi}(y_2,y_3)$, for all $y_1,y_2,y_3\in\mathcal{Y}$;

  2. there exist $\beta\in[0,\infty)$ and $\xi_{\eta_*}\in\partial\psi_{\eta_*}(\tilde{x})$ such that $\langle\xi_{\eta_*},\tilde{x}-x\rangle\le\beta\,\tilde{\phi}(Tx,T\tilde{x})$, for all $x\in\mathcal{X}$.

Then, if $\eta_*\doteq\frac{\eta}{\|\eta\|_1}$, for each minimizer $x_\eta^\delta$ of the functional $J_{\tilde{\phi},\psi,\eta}^p$ given in (1.4) for data $y^\delta$, it follows that

  • if $p>1$ and $q$ is the conjugate of $p$, then

$$d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x})\le\frac{\frac{1}{p}\delta^p+\|\eta\|_1\beta\delta+\frac{1}{q}\left(\|\eta\|_1\beta\right)^q}{\|\eta\|_1};$$

  • if $p=1$ and $\|\eta\|_1\beta\le 1$, then

$$d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x})\le\frac{\delta\left(1+\|\eta\|_1\beta\right)}{\|\eta\|_1}.$$

Proof. Analogously to the proof of Theorem 3.5, since $x_\eta^\delta$ is a minimizer of the functional $J_{\tilde{\phi},\psi,\eta}^p$ for data $y^\delta$ and $\tilde{\phi}(y,y^\delta)\le\delta$, it follows that

$$\frac{1}{p}\,\frac{\tilde{\phi}(Tx_\eta^\delta,y^\delta)^p}{\|\eta\|_1}\le\frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\psi_{\eta_*}(\tilde{x})-\psi_{\eta_*}(x_\eta^\delta).$$

Then,

$$\begin{aligned}
\frac{1}{p}\,\frac{\tilde{\phi}(Tx_\eta^\delta,y^\delta)^p}{\|\eta\|_1}+d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x}) &\le \frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\psi_{\eta_*}(\tilde{x})-\psi_{\eta_*}(x_\eta^\delta)+d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x})\\
&=\frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\left\langle\xi_{\eta_*},\tilde{x}-x_\eta^\delta\right\rangle\\
&\le\frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\beta\,\tilde{\phi}\!\left(Tx_\eta^\delta,y^\delta\right)+\beta\,\tilde{\phi}\!\left(y^\delta,T\tilde{x}\right) &&\text{(by hypotheses 1 and 2)}\\
&\le\frac{1}{p}\,\frac{\delta^p}{\|\eta\|_1}+\beta\,\tilde{\phi}\!\left(Tx_\eta^\delta,y^\delta\right)+\beta\delta &&\text{(since }\tilde{\phi}(y^\delta,y)\le\delta\text{ and }T\tilde{x}=y),
\end{aligned}$$

and thus,

$$\frac{1}{p}\,\tilde{\phi}(Tx_\eta^\delta,y^\delta)^p-\|\eta\|_1\beta\,\tilde{\phi}\!\left(Tx_\eta^\delta,y^\delta\right)+\|\eta\|_1\,d_{\xi_{\eta_*}}^{\psi_{\eta_*}}(x_\eta^\delta,\tilde{x})\le\frac{1}{p}\,\delta^p+\|\eta\|_1\beta\delta,$$

and the proof follows analogously to that of Theorem 3.5, with $\beta$ in place of $\|w_{\eta_*}\|C$ in (3.9). □

It is appropriate to mention here that, for the case in which there is only one penalizing term, that is, if $\eta_i=0$ for all $i\ge 2$, the error estimate obtained in Theorem 3.6 is the same as that of Theorem 3.4.
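For the norm fidelity, hypothesis 2 of Theorem 3.6 follows from a source condition $\xi=T^\#w$ with $\beta=\|w\|$, since $\langle T^\#w,\tilde{x}-x\rangle=\langle w,T\tilde{x}-Tx\rangle\le\|w\|\,\|Tx-T\tilde{x}\|$ by Cauchy-Schwarz. A small sketch (illustrative only; all data are random choices) checks this implication numerically:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 15, 8
T = rng.standard_normal((m, n))
w = rng.standard_normal(m)
x_tilde = rng.standard_normal(n)

# If xi = T^* w, the variational inequality of hypothesis 2 holds with
# beta = ||w|| for the norm fidelity phi(y1, y2) = ||y1 - y2||:
#   <xi, x_tilde - x> = <w, T(x_tilde - x)> <= ||w|| * ||T x - T x_tilde||.
xi = T.T @ w
beta = np.linalg.norm(w)
for _ in range(1000):
    x = rng.standard_normal(n)
    lhs = xi @ (x_tilde - x)
    rhs = beta * np.linalg.norm(T @ x - T @ x_tilde)
    assert lhs <= rhs + 1e-12  # tolerance for floating-point rounding
```

This illustrates one of the relationships mentioned in the introduction: the variational inequality is a consequence of the source condition in this setting.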

4 CONCLUSIONS

In this work, error estimates were presented for the case in which the regularized solution is obtained by minimizing doubly-generalized Tikhonov-Phillips functionals. In particular, for the case of generalized Tikhonov-Phillips functionals, we have seen that the error estimates obtained coincide with those presented by K. Ito and B. Jin in [11]. The first result obtained was based mainly on an assumption given by a source condition. We have seen that it is possible to replace this assumption by a variational inequality, obtaining analogous error estimates. Finally, relationships were established between the optimality condition associated with the problem, the source condition and the variational inequality. On the other hand, since it is known that, in certain cases, the use of two or more penalizing terms is useful, generalizations of the error estimates were presented for cases in which the regularized solution is obtained by minimizing doubly-generalized Tikhonov-Phillips functionals with multiple penalizers.

Acknowledgments

This work was supported in part by Universidad Nacional del Litoral, through project CAI+D 2020 PI Tipo II - 50620190100069LI.

REFERENCES

  • 1
    A.B. Bakushinskii. Remarks on choosing a regularization parameter using the quasi-optimality and ratio criterion. USSR Comp. Math. Math. Phys., 24 (1984), 181-182.
  • 2
    M. Benning & M. Burger. Error Estimates for General Fidelities. Electron. T. Numer. Ana., 38 (2011), 44-68.
  • 3
    M. Burger & S. Osher. Convergence rates of convex variational regularization. Inverse Problems, 20(5) (2004), 1411-1421.
  • 4
    H. Engl, K. Kunisch & A. Neubauer. Convergence rates for Tikhonov regularization of non-linear ill-posed problems. Inverse Problems, 5(4) (1989), 523-540.
  • 5
    H.W. Engl, M. Hanke & A. Neubauer. “Regularization of inverse problems”, volume 375 of Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht (1996).
  • 6
    J. Flemming. Theory and examples of variational regularization with non-metric fitting functionals. Journal of Inverse and Ill-posed Problems, 18(6) (2010), 677-699.
  • 7
    J. Flemming & B. Hofmann. A new Approach to Source Conditions in Regularization with General Residual Term. Numerical Functional Analysis and Optimization, 31 (2010), 254-284.
  • 8
    G.H. Golub, M.T. Heath & G. Wahba. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, 21 (1979), 215-223.
  • 9
    P. Hansen. Analysis of discrete ill-posed problems by means of the L-curve. SIAM Review, 34(4) (1992), 561-580.
  • 10
    B. Hofmann, B. Kaltenbacher, C. Pöschl & O. Scherzer. A convergence rates result for Tikhonov regularization in Banach spaces with non-smooth operators. Inverse Problems, 23(3) (2007), 987.
  • 11
    K. Ito & B. Jin. “Inverse problems: Tikhonov theory and algorithms”, volume 22 of Series on applied mathematics. World Scientific (2015).
  • 12
    K. Ito, B. Jin & T. Takeuchi. Multi-parameter Tikhonov regularization. Methods and Applications of Analysis, 18 (2011), 31-46.
  • 13
    K. Ito, B. Jin & J. Zou. A new choice rule for regularization parameters in Tikhonov regularization. Applicable Analysis, 90 (2011), 1521-1544.
  • 14
    Y. Lu, L. Shen & Y. Xu. Multi-Parameter Regularization Methods for High-Resolution Image Reconstruction With Displacement Errors. IEEE Transactions on Circuits and Systems I: Regular Papers, 54(8) (2007), 1788-1799.
  • 15
    G.L. Mazzieri, R.D. Spies & K.G. Temperini. Directional convergence of spectral regularization method associated to families of closed operators. Computational and Applied Mathematics, 32 (2013), 119-134.
  • 16
    D.L. Phillips. A technique for the numerical solution of certain integral equations of the first kind. J. Assoc. Comput. Mach., 9 (1962), 84-97.
  • 17
    O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier & F. Lenzen. “Variational Methods in Imaging”, volume 167 of Applied Mathematical Sciences. Springer, New York (2009).
  • 18
    T. Schuster, B. Kaltenbacher, B. Hofmann & K. Kazimierski. “Regularization Methods in Banach Spaces”. de Gruyter, Berlin, New York (2012).
  • 19
    A.N. Tikhonov. Regularization of incorrectly posed problems. Soviet Math. Dokl., 4 (1963), 1624-1627.
  • 20
    A.N. Tikhonov. Solution of incorrectly formulated problems and the regularization method. Soviet Math. Dokl., 4 (1963), 1035-1038.
  • 21
    Z. Wang. Multi-parameter Tikhonov regularization and model function approach to the damped Morozov principle for choosing regularization parameters. Journal of Computational and Applied Mathematics, 236 (2012), 1815-1832.
  • 22
    E. Zeidler. “Nonlinear Functional Analysis and Its Applications, III: Variational methods and optimization”. Springer-Verlag, New York (1985). URL https://books.google.com.ar/books?id=sCNXnAEACAAJ

Publication Dates

  • Publication in this collection
    27 Mar 2023
  • Date of issue
    Jan-Mar 2023

History

  • Received
    15 Sept 2021
  • Accepted
    10 July 2022
Sociedade Brasileira de Matemática Aplicada e Computacional - SBMAC Rua Maestro João Seppe, nº. 900, 16º. andar - Sala 163, Cep: 13561-120 - SP / São Carlos - Brasil, +55 (16) 3412-9752 - São Carlos - SP - Brazil
E-mail: sbmac@sbmac.org.br