Image inpainting


Reference: Christine Guillemot and Olivier Le Meur, "Image inpainting: Overview and recent advances," IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 127–144, 2014.

Image inpainting is an ill-posed inverse problem with no well-defined unique solution, so image priors must be introduced to solve it. All inpainting methods rely on the assumption that pixels in the known and unknown parts of the image share the same statistical properties or geometrical structures. This assumption translates into different local or global priors, with the goal of producing an inpainted image that is as physically plausible and as visually pleasing as possible.
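
To fix notation for the sketches that follow, here is a minimal NumPy setup (the array names `observed` and `mask` are illustrative, not taken from the paper): a boolean mask marks the unknown region, and any completion that agrees with the known pixels is admissible, which is exactly why a prior is needed.

```python
import numpy as np

# Illustrative setup (names are mine, not from the paper):
# a grayscale image in [0, 1] and a boolean mask marking the hole.
rng = np.random.default_rng(0)
image = rng.random((64, 64))            # stand-in for a real image
mask = np.zeros((64, 64), dtype=bool)   # True = unknown pixel (the hole), False = known
mask[20:40, 25:45] = True

observed = image.copy()
observed[mask] = 0.0                    # the values inside the hole are lost

# Inpainting = estimating the pixels under `mask` so that the completed image
# agrees with `observed` outside the hole and with some image prior inside it;
# without a prior, infinitely many completions are equally consistent with S.
```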

The first category of methods, known as diffusion-based inpainting, introduces smoothness priors via parametric models or partial differential equations (PDEs) to propagate (or diffuse) local structures from the exterior to the interior of the hole (throughout, U denotes the unknown part of the image, i.e., the hole to be filled in, and S the source or known part). Many variants exist, using different models (linear, nonlinear, isotropic, or anisotropic) to favor propagation in particular directions or to take into account the curvature of the structures present in a local neighborhood. These methods are naturally well suited to completing straight lines and curves and to inpainting small regions, and they generally avoid the perceptually annoying artifact of disconnected edges. They are not well suited, however, to recovering the texture of large areas, which they tend to blur.
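
As a rough illustration of the diffusion idea (a minimal sketch, not a reproduction of any particular method in the survey), the code below iterates a discrete isotropic heat equation: the known pixels of S stay fixed while each pixel of U is repeatedly replaced by the average of its four neighbours, so information diffuses from S into U. Function and variable names are my own.

```python
import numpy as np

def diffusion_inpaint(observed, mask, n_iter=500):
    """Minimal isotropic (linear) diffusion inpainting sketch.

    observed : 2-D float array; values inside the hole are ignored.
    mask     : 2-D bool array, True where the pixel is unknown (the hole U).
    Known pixels (the source S) are kept fixed; unknown pixels are
    repeatedly replaced by the mean of their 4 neighbours, i.e. a
    discrete heat equation diffusing information from S into U.
    """
    x = observed.astype(float).copy()
    x[mask] = x[~mask].mean()            # crude initialisation of the hole
    for _ in range(n_iter):
        p = np.pad(x, 1, mode="edge")    # replicate image borders
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        x[mask] = avg[mask]              # update only the unknown pixels
    return x

# e.g. completed = diffusion_inpaint(observed, mask)
```

At convergence this linear, isotropic scheme solves the Laplace equation inside U with boundary values taken from S, which is exactly why it interpolates smoothly but blurs texture; anisotropic or curvature-driven variants change the direction and strength of the propagation.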

The second category of methods builds on the seminal texture-synthesis work of Efros and Leung and exploits statistical and self-similarity priors on images. The statistics of image textures are assumed to be stationary (in the case of random textures) or homogeneous (in the case of regular patterns). The texture to be synthesized is learned from similar regions in a texture sample or from the known part of the image. Learning is done by sampling, and by copying or stitching together patches (called exemplars) taken from the known part of the image; the corresponding methods are known as exemplar-based techniques.
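
The following is a deliberately simplified exemplar-based sketch, assuming a greedy raster-order fill and an exhaustive sum-of-squared-differences (SSD) search over fully known patches; practical exemplar-based methods add a fill-order priority and fast (approximate) nearest-neighbour search. All names are illustrative.

```python
import numpy as np

def exemplar_inpaint(observed, mask, patch=7):
    """Tiny exemplar-based inpainting sketch: greedy raster-order fill with an
    exhaustive SSD search. Assumes the source region S contains at least one
    fully known patch."""
    x = observed.astype(float).copy()
    unknown = mask.copy()
    h = patch // 2
    H, W = x.shape

    # Exemplar dictionary: centres of all fully known patches in the source S.
    cand = [(i, j) for i in range(h, H - h) for j in range(h, W - h)
            if not unknown[i - h:i + h + 1, j - h:j + h + 1].any()]

    # Overlapping patch centres covering the whole image.
    rows = list(range(h, H - h - 1, h)) + [H - h - 1]
    cols = list(range(h, W - h - 1, h)) + [W - h - 1]

    for i in rows:
        for j in cols:
            win = (slice(i - h, i + h + 1), slice(j - h, j + h + 1))
            fill = unknown[win]
            if not fill.any():
                continue
            tgt = x[win]                 # view into x: writing fills the image
            known = ~fill
            # Best exemplar = smallest SSD over the target pixels already known.
            ssd = [((x[ci - h:ci + h + 1, cj - h:cj + h + 1][known] - tgt[known]) ** 2).sum()
                   for ci, cj in cand]
            ci, cj = cand[int(np.argmin(ssd))]
            tgt[fill] = x[ci - h:ci + h + 1, cj - h:cj + h + 1][fill]
            unknown[win] = False
    return x

# e.g. completed = exemplar_inpaint(observed, mask, patch=7)
```

In practice the fill order matters a lot: priority-driven orders that fill patches containing strong structures first give noticeably better results than the plain raster order used in this sketch.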

With the advent of sparse representations and compressed sensing, sparse priors have also been considered for solving the inpainting problem. In this case the image (or the patch) is assumed to be sparse in a given basis [e.g., the discrete cosine transform (DCT) or wavelets], and the known and unknown parts of the image are assumed to share the same sparse representation. Exemplar-based and sparse-based methods are better suited than diffusion-based techniques to filling large textured areas. Hybrid solutions have therefore naturally emerged, combining methods dedicated to the structural (geometric) component of the image with methods dedicated to its textural component.
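
As a sketch of the sparsity prior (assuming a 2-D DCT dictionary and a small hand-rolled orthogonal matching pursuit, not any specific algorithm reviewed in the survey), the code below estimates a sparse code of a patch from its known pixels only and then synthesises the missing pixels from the complete atoms, so that known and unknown pixels share the same sparse representation.

```python
import numpy as np

def dct_dictionary(n):
    """Orthonormal n x n 1-D DCT-II basis (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    D = np.cos(np.pi * (t + 0.5) * k / n)
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def sparse_inpaint_patch(patch, known, n_atoms=8):
    """Fill the unknown pixels of one square patch, assuming it is sparse in a
    2-D DCT basis, via orthogonal matching pursuit (OMP) on the known pixels.

    patch : (n, n) float array; values at unknown positions are ignored.
    known : (n, n) bool array, True where the pixel is observed.
    """
    n = patch.shape[0]
    D1 = dct_dictionary(n)
    D = np.kron(D1, D1).T                    # columns = vectorised 2-D DCT atoms
    obs = known.reshape(-1)
    y = patch.reshape(-1)[obs]               # observed pixel values
    A = D[obs, :]                            # atoms restricted to the known pixels
    norms = np.linalg.norm(A, axis=0) + 1e-12

    sel, r = [], y.copy()
    for _ in range(n_atoms):                 # greedy OMP iterations
        j = int(np.argmax(np.abs(A.T @ r) / norms))
        if j not in sel:
            sel.append(j)
        c, *_ = np.linalg.lstsq(A[:, sel], y, rcond=None)
        r = y - A[:, sel] @ c

    # Rebuild the full patch from the sparse code and the complete atoms,
    # keeping the observed pixels untouched.
    full = (D[:, sel] @ c).reshape(n, n)
    out = patch.astype(float).copy()
    out[~known] = full[~known]
    return out

# e.g. filled = sparse_inpaint_patch(observed[16:24, 22:30], ~mask[16:24, 22:30])
```

A full sparse inpainting method would apply this patch-wise over the hole, or learn the dictionary from the source region S, but the key idea is already visible: the sparse code estimated from the known pixels also determines the missing ones.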

This article surveys the theoretical foundations and the main categories of methods, and illustrates the main applications.

