# Image restoration and reconstruction beyond least squares

## Abstract

Systems of linear equations with ill-conditioned coefficient matrices arise frequently in signal and image processing. The most commonly used method for solving such systems is Regularized Least Squares, in which the unknown parameters are computed by minimizing a cost function that consists of quadratic data-fitting and regularization terms. We consider techniques other than Regularized Least Squares for solving such systems. Our focus is on image processing problems.
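
The regularized least squares formulation can be sketched in a few lines. This is a generic illustration with a made-up ill-conditioned matrix and a hypothetical regularization parameter `lam`, not the thesis's own setup.

```python
import numpy as np

# Hypothetical small ill-conditioned system: a Hilbert-like matrix.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

# Tikhonov-regularized least squares:
#   min_x ||A x - b||_2^2 + lam * ||x||_2^2
# solved through the shifted normal equations
#   (A^T A + lam * I) x = A^T b
lam = 1e-8                      # illustrative value, not from the thesis
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print(np.linalg.norm(A @ x_reg - b))
```

The regularization term keeps the shifted normal equations solvable even though `A` itself is nearly singular; without it, the solve amplifies rounding errors severely.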

One implicit assumption behind the least squares solution is that the coefficient matrix is known exactly. When errors exist in both the right-hand side and the coefficient matrix, the Total Least Squares method gives better results than the ordinary Least Squares method. We present algorithms for the Regularized Total Least Squares (RTLS) problem.
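
As background, the classical (unregularized) TLS solution can be read off from the SVD of the augmented matrix [A | b]. The sketch below uses synthetic data and is not the RTLS algorithm of the thesis.

```python
import numpy as np

# Synthetic overdetermined system with noise in BOTH A and b.
rng = np.random.default_rng(0)
m, n = 50, 3
A_exact = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
b_exact = A_exact @ x_true

A = A_exact + 1e-3 * rng.standard_normal((m, n))
b = b_exact + 1e-3 * rng.standard_normal(m)

# TLS solution: take the right singular vector of [A | b] belonging to
# the smallest singular value and scale its last component to -1.
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]                      # singular vector for the smallest sigma
x_tls = -v[:n] / v[n]

print(x_tls)
```

Unlike ordinary least squares, this accounts for the perturbation of `A` itself, which is exactly the situation the RTLS problem addresses (with an added regularization term for ill-conditioned `A`).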

In many image and signal processing problems, the coefficient matrices have useful structure. For example, in image restoration and high-resolution image reconstruction, the resulting blurring matrices have a Block-Toeplitz-Toeplitz-Block (BTTB) like structure, and in color image restoration the blurring matrix consists of BTTB blocks. Traditional Total Least Squares methods, however, do not preserve the structure of the coefficient matrix, so it is more appropriate to apply Structured Total Least Squares (STLS). This thesis presents Regularized Structured Total Least Squares (RSTLS) algorithms for high-resolution image reconstruction and color image restoration. The major cost at each iteration of our RSTLS algorithms lies in solving large, sparse, structured linear least squares systems. We propose to solve these systems with the preconditioned CGLS or LSQR method, and we show that Discrete Cosine Transform (DCT) and Fast Fourier Transform (FFT) based preconditioners are very effective for these systems.
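
The idea behind FFT-based preconditioning can be illustrated on a one-dimensional symmetric Toeplitz system, a stand-in for the BTTB case. The kernel, sizes, and tolerances below are invented for illustration, and plain preconditioned CG replaces the CGLS/LSQR solvers used in the thesis.

```python
import numpy as np
from scipy.linalg import toeplitz

# A symmetric Toeplitz system T x = b with a made-up blurring kernel.
n = 64
col = np.exp(-0.5 * np.arange(n))
T = toeplitz(col)
x_true = np.ones(n)
b = T @ x_true

# Strang's circulant preconditioner C copies the central diagonals of T;
# its first column wraps the kernel around.
c = col.copy()
c[n // 2 + 1:] = col[1:n - n // 2][::-1]
c_eig = np.fft.fft(c)                    # eigenvalues of C

def apply_Cinv(r):
    # Solve C z = r in O(n log n) via the FFT diagonalization of circulants.
    return np.real(np.fft.ifft(np.fft.fft(r) / c_eig))

# Preconditioned conjugate gradients on T x = b.
x = np.zeros(n)
r = b - T @ x
z = apply_Cinv(r)
p = z.copy()
for _ in range(50):
    Tp = T @ p
    alpha = (r @ z) / (p @ Tp)
    x += alpha * p
    r_new = r - alpha * Tp
    if np.linalg.norm(r_new) < 1e-10:
        break
    z_new = apply_Cinv(r_new)
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new

print(np.linalg.norm(T @ x - b))
```

Because circulant matrices are diagonalized by the FFT, each preconditioner solve costs only O(n log n), which is what makes this family of preconditioners attractive for the large structured systems above.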

Other assumptions behind the regularized least squares solution include Gaussian prior distributions for the unknown parameters and for the additive noise. When these assumptions are violated, we consider the Least Mixed Norm (LMN) or the Least Absolute Deviation (LAD) solution. For the LAD solution, both the data-fitting and the regularization terms are measured in the ℓ1 norm. For the LMN solution, the regularization term is in the ℓ1 norm but the data-fitting term is in the ℓ2 norm. Both solutions are formulated as convex programming problems and solved by interior point methods. At each iteration of the interior point method, a structured linear system must be solved; the preconditioned conjugate gradient method with factorized sparse inverse preconditioners is employed to solve these structured inner systems.

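
The LAD idea can be illustrated by recasting an ℓ1 data-fitting problem as a linear program. The sketch below uses synthetic data with a few gross outliers and SciPy's general-purpose LP solver in place of the thesis's specialized interior point method.

```python
import numpy as np
from scipy.optimize import linprog

# With slack variables t >= |A x - b| componentwise,
#   min ||A x - b||_1  becomes  min sum(t)
#   s.t.  A x - t <= b,   -A x - t <= -b.
rng = np.random.default_rng(1)
m, n = 40, 2
A = rng.standard_normal((m, n))
x_true = np.array([3.0, -1.0])
b = A @ x_true
b[::10] += 5.0                 # a few gross outliers; LAD is robust to these

I = np.eye(m)
c = np.concatenate([np.zeros(n), np.ones(m)])          # objective: sum(t)
A_ub = np.block([[A, -I], [-A, -I]])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m)
x_lad = res.x[:n]
print(x_lad)
```

Unlike the least squares fit, the ℓ1 objective is barely affected by the outlying entries of `b`, which is the motivation for the LAD and LMN formulations above.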
Bibliographical Information:

School: Pennsylvania State University

School Location: USA - Pennsylvania

Source Type: Master's Thesis
