Image deblurring via self-similarity and via sparsity

Dr. Yifei Lou
School of Electrical and Computer Engineering
Georgia Institute of Technology


ABSTRACT


In this talk, I will present two deblurring methods: one exploits the spatial interactions in images, i.e., self-similarity; the other explicitly takes into account the sparse characteristics of natural images and does not entail solving a numerically ill-conditioned backward diffusion.

In particular, self-similarity is encoded by a weight function, which induces two types of regularization in a nonlocal fashion. Furthermore, we obtain superior results by using preprocessed data as input to the weighted functionals.
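As a rough sketch of this idea (the notation below is illustrative and not taken from the talk), a patch-based weight of the nonlocal-means type,
\[
w(x,y) = \exp\!\left(-\frac{\int G_a(t)\,\bigl|u(x+t)-u(y+t)\bigr|^2\,dt}{h^2}\right),
\]
where \(u\) is the image, \(G_a\) a Gaussian patch window, and \(h\) a filtering parameter, can induce nonlocal regularizers such as the nonlocal \(H^1\) functional \(J(u)=\iint w(x,y)\,|u(x)-u(y)|^2\,dy\,dx\), or a nonlocal total-variation counterpart with the square replaced by an absolute value.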

The second part of the talk is based on the observation that the sparse coefficients that encode a given image with respect to an over-complete basis are the same ones that encode a blurred version of the image with respect to a modified basis. Following an ``analysis-by-synthesis'' approach, an explicit generative model is used to compute a sparse representation of the blurred image, and these coefficients are then used to combine elements of the original basis to yield a restored image.
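A minimal sketch of this observation, with all symbols assumed for illustration: if a sharp image \(u\) admits a sparse code \(\alpha\) over an over-complete dictionary \(D\), i.e. \(u = D\alpha\), and the blurred observation is \(f = k * u + n\) for a blur kernel \(k\) and noise \(n\), then \(f \approx (k * D)\,\alpha\), where \(k * D\) denotes the dictionary with each atom blurred by \(k\). One may therefore estimate the coefficients from the blurred image using the modified dictionary and synthesize the restoration with the original one:
\[
\hat{\alpha} \in \arg\min_{\alpha} \tfrac{1}{2}\,\bigl\| f - (k * D)\,\alpha \bigr\|_2^2 + \lambda \|\alpha\|_1,
\qquad
\hat{u} = D\hat{\alpha}.
\]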