Deep neural networks achieve state-of-the-art results on numerous image processing tasks, but this typically requires training a problem-specific network for each task. Toward multi-task learning, the recently proposed One Network to Solve Them All (OneNet) method first pretrains an adversarial denoising autoencoder and then uses it as the proximal operator in Alternating Direction Method of Multipliers (ADMM) solvers for multiple imaging problems. In this work, we highlight training and ADMM convergence issues of OneNet and resolve them by proposing an end-to-end learned architecture that trains the two steps jointly using Unrolled Optimization with backpropagation. In our experiments, our solution achieves results superior or on par with the original OneNet and wavelet sparsity on four imaging problems (pixelwise inpainting-denoising, blockwise inpainting, scattered inpainting, and super-resolution) on the MS-Celeb-1M and ImageNet data sets, even with a much smaller ADMM iteration count.
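The abstract's core mechanism, a learned denoiser standing in for the proximal operator inside ADMM, can be sketched as follows. This is a minimal plug-and-play ADMM loop for a linear inverse problem `y = A x`, assuming a least-squares data term; the `toy_denoiser` below is a hypothetical stand-in for the trained adversarial denoising autoencoder, and all names are illustrative rather than from the paper's code.

```python
import numpy as np

def plug_and_play_admm(y, A, denoiser, rho=1.0, num_iters=50):
    """ADMM for min_x 0.5*||A x - y||^2 + g(x), where the proximal
    step for the regularizer g is replaced by a denoiser (the role
    OneNet's pretrained autoencoder plays)."""
    n = A.shape[1]
    z = np.zeros(n)   # auxiliary (denoised) variable
    u = np.zeros(n)   # scaled dual variable
    # Precompute the data-term solve: (A^T A + rho I) x = A^T y + rho (z - u)
    M = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(num_iters):
        x = np.linalg.solve(M, Aty + rho * (z - u))  # data-fidelity step
        z = denoiser(x + u)                          # prox step -> denoiser
        u = u + x - z                                # dual update
    return z

def toy_denoiser(v):
    # Stand-in prior: mild shrinkage toward the signal mean. A real
    # system would apply the learned denoising network here.
    return 0.9 * v + 0.1 * v.mean()
```

For pixelwise inpainting, `A` is simply the rows of the identity matrix corresponding to observed pixels; the unrolled variant in the paper would differentiate through a fixed, small number of these iterations and train the denoiser end to end rather than pretraining it separately.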
Publication status: Published - 2020
Event: 30th British Machine Vision Conference, BMVC 2019 - Cardiff, United Kingdom
Duration: Sep 9 2019 → Sep 12 2019
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition