A Nonlinear, Regularized, and Data-independent Modulation for Continuously Interactive Image Processing Network

Abstract

Most studies on Convolutional Neural Network (CNN) based image processing have proposed networks that can be optimized for a single level. Here, the term “level” refers to the specific objective defined for each task, such as the degree of noise in denoising tasks. Hence, these networks underperform on other levels and must be retrained to deliver optimal performance. Using multiple models to cover multiple levels incurs very high computational costs. To solve these problems, recent approaches train networks on two different levels and propose their own modulation methods to enable arbitrary intermediate levels. However, many of them 1) have difficulty adapting from one level to the other, 2) suffer from unintended artifacts at intermediate levels, or 3) require large memory and computational cost. In this paper, we propose a novel framework using a Filter Transition Network (FTN), a nonlinear module that easily adapts to new levels, is regularized to prevent undesirable side effects, and is extremely lightweight because it is a data-independent module. Additionally, for stable learning of the FTN, we propose a new method to initialize nonlinear CNNs with identity mappings. Extensive results on various image processing tasks indicate that the performance of the FTN is stable with respect to adaptation and modulation, and is comparable to that of other, heavier frameworks.
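To make the modulation idea concrete, below is a minimal NumPy sketch of the general scheme the abstract describes: filters trained for one level are passed through a small nonlinear transform that is initialized as an identity mapping, and intermediate levels are obtained by interpolating between the original and transformed filters. This is our own simplification for illustration, not the paper's exact FTN architecture; the names `ftn`, `modulate`, `A`, `B`, and `alpha` are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3, 3, 3))  # main-network filters: (out_ch, in_ch, kH, kW)

# A tiny nonlinear transform acting on the filters themselves (data-independent).
# B starts at zero, so ftn(W) == W at initialization: an identity mapping,
# which keeps early training stable.
A = rng.standard_normal((8, 8)) * 0.1
B = np.zeros((8, 8))

def ftn(W, A, B):
    """Nonlinear residual transform of the filter bank; identity at init."""
    flat = W.reshape(W.shape[0], -1)              # flatten each filter
    hidden = np.maximum(A @ flat, 0.0)            # nonlinearity (ReLU)
    return (flat + B @ hidden).reshape(W.shape)   # residual branch, zero at init

def modulate(W, alpha):
    """Interpolate between level-A filters (alpha=0) and adapted filters (alpha=1)."""
    return (1.0 - alpha) * W + alpha * ftn(W, A, B)

# At initialization the transform is exactly the identity, so every alpha
# reproduces the original filters.
assert np.allclose(modulate(W, 0.7), W)
```

In this sketch, training would update `A` and `B` so that `alpha = 1` matches the second level, after which sweeping `alpha` in `[0, 1]` yields a continuum of intermediate levels without storing a second full model.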

Publication
International Journal of Computer Vision