Structured Kernel Estimation for Photon-Limited Deconvolution

Yash Sanghvi, Zhiyuan Mao, Stanley H. Chan


Arxiv | Video (5 min.) | Code | Poster
Figure: (Left) Proposed low-dimensional representation for blur kernel estimation. (Right) Proposed iterative method using a differentiable non-blind solver F(·) and the low-dimensional kernel representation T(·).
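To give a rough idea of what a representation like T(·) can look like, here is a minimal PyTorch sketch that maps a handful of trajectory key points to a dense blur kernel by linearly interpolating between the key points and splatting the samples onto the kernel grid with bilinear weights, so the kernel stays differentiable with respect to the key points. The function name, the linear interpolation, and the splatting details are illustrative assumptions, not the implementation released with the paper.

```python
import torch

def key_points_to_kernel(key_points, kernel_size=64, samples_per_segment=20):
    """Sketch of a low-dimensional kernel representation T(.).

    key_points: (N, 2) tensor of trajectory control points in kernel
    coordinates (x, y), assumed to lie inside [0, kernel_size - 2].
    """
    # Densify the trajectory: linearly interpolate between consecutive key points.
    t = torch.linspace(0.0, 1.0, samples_per_segment, device=key_points.device)
    segments = [
        p0[None, :] * (1 - t)[:, None] + p1[None, :] * t[:, None]
        for p0, p1 in zip(key_points[:-1], key_points[1:])
    ]
    traj = torch.cat(segments, dim=0)                      # (M, 2) dense samples

    # Differentiable bilinear splatting of the samples onto the kernel grid;
    # gradients flow back to the key points through the bilinear weights.
    kernel = torch.zeros(kernel_size, kernel_size, device=key_points.device)
    x, y = traj[:, 0], traj[:, 1]
    x0, y0 = x.floor().long(), y.floor().long()
    wx, wy = x - x0.float(), y - y0.float()
    for dx, dy, w in [(0, 0, (1 - wx) * (1 - wy)), (1, 0, wx * (1 - wy)),
                      (0, 1, (1 - wx) * wy),       (1, 1, wx * wy)]:
        kernel = kernel.index_put((y0 + dy, x0 + dx), w, accumulate=True)

    return kernel / kernel.sum()                           # entries sum to 1
```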

Abstract

Images taken in low-light conditions in the presence of camera shake suffer from both motion blur and photon shot noise. State-of-the-art restoration networks perform well on well-illuminated scenes, but their performance drops significantly when shot noise is strong.
In this paper, we propose a new blur estimation technique customized for photon-limited conditions. The proposed method uses gradient-based backpropagation to estimate the blur kernel. By modeling the blur kernel with a low-dimensional representation based on the key points of the motion trajectory, we significantly reduce the search space and improve the regularity of the kernel estimation problem. When plugged into the iterative framework, our novel low-dimensional representation provides improved kernel estimates and hence significantly better deconvolution performance compared to end-to-end trained networks.
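As a hedged sketch of how the iterative framework could be wired up: a differentiable non-blind solver F(·) (here a placeholder callable `nonblind_solver`) restores the image from the current kernel guess, the restored image is re-blurred and compared against the measurement, and the loss is backpropagated all the way to the trajectory key points. The optimizer, the plain MSE re-blur loss (standing in for the photon-limited data term), and the function names are illustrative assumptions, not the paper's exact pipeline.

```python
import torch
import torch.nn.functional as Fnn

def estimate_kernel(blurred, key_points_init, nonblind_solver,
                    n_iters=300, lr=1e-2):
    """Sketch of gradient-based kernel estimation in the low-dimensional space.

    blurred:          (1, 1, H, W) photon-limited blurred measurement
    key_points_init:  (N, 2) initial trajectory key points
    nonblind_solver:  placeholder for the differentiable non-blind solver F(.)
    """
    key_points = key_points_init.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([key_points], lr=lr)

    for _ in range(n_iters):
        optimizer.zero_grad()
        kernel = key_points_to_kernel(key_points)        # T(.) from the sketch above
        latent = nonblind_solver(blurred, kernel)        # F(.): latent image estimate
        # Re-blur the latent estimate and compare with the measurement
        # (plain MSE here; the paper's photon-limited likelihood would differ).
        reblurred = Fnn.conv2d(latent, kernel[None, None], padding="same")
        loss = Fnn.mse_loss(reblurred, blurred)
        loss.backward()                                  # gradients reach the key points
        optimizer.step()

    return key_points_to_kernel(key_points).detach()
```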

Qualitative comparison: Blurred and Noisy | MPR-Net | Ours | Ground-Truth

Slides

Citation

@inproceedings{sanghvi2023structured,
  title={Structured Kernel Estimation for Photon-Limited Deconvolution},
  author={Sanghvi, Yash and Mao, Zhiyuan and Chan, Stanley H},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={9863--9872},
  year={2023}
}

If you want to talk further about this paper, feel free to drop me an email at sanghviyash95@gmail.com.