ABSTRACT
This article presents and analyzes a preconditioned version of the Delayed Weighted Gradient Method (DWGM). In addition to convergence, several desirable properties are established, including the A-orthogonality of the current transformed gradient to all previous gradient vectors, as well as finite termination. Numerical experiments are also reported, illustrating the benefits of preconditioning.
Keywords:
gradient methods; convex quadratic optimization; Krylov subspace methods; preconditioning