Combining inconsistent textures using convolutional neural networks

2016 
It is difficult to generate a gradually changing texture by combining two textures whose texture and structure differ greatly. This is a common problem in constructing panoramas and in pasting one texture onto another; the inconsistency between the sources is what we address in this paper. We present a new method for synthesizing a transition region between two source textures such that inconsistent texture and structural properties all change gradually from one source to the other. We first extract convolutional neural network (CNN) features of the two source textures and an initial target texture at the convolution and pooling layers. The first distortion function is the squared difference between the feature maps of the first source texture and the target texture. From the feature maps we also compute the Gram matrix, the inner product between feature maps within each layer; the second distortion function is the squared difference between the Gram matrices of the second source texture and the target texture. Our objective function is the weighted sum of these two distortion functions. We minimize it with a large-scale bound-constrained optimization method (L-BFGS-B) to obtain the final result. The model provides a new CNN-based tool for generating a transition region between two source textures, and our method is robust across the various types of images tested.
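The two distortion functions described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `feat_a`, `feat_b`, and `feat_target` stand in for CNN feature maps of the first source, second source, and target textures at one layer (flattened to channels × spatial positions), and the weights `alpha` and `beta` are assumed hyperparameters.

```python
import numpy as np

def gram_matrix(features):
    # features: array of shape (channels, height * width) holding the
    # feature maps of one CNN layer; the Gram matrix is the inner
    # product between every pair of feature maps in that layer.
    return features @ features.T

def combined_loss(feat_a, feat_b, feat_target, alpha, beta):
    # First distortion: squared difference between the feature maps
    # of the first source texture and the target texture.
    d1 = np.sum((feat_target - feat_a) ** 2)
    # Second distortion: squared difference between the Gram matrices
    # of the second source texture and the target texture.
    d2 = np.sum((gram_matrix(feat_target) - gram_matrix(feat_b)) ** 2)
    # Objective: weighted sum of the two distortion functions.
    return alpha * d1 + beta * d2
```

In practice this objective would be minimized over the target image's pixels with a large-scale bound-constrained optimizer such as L-BFGS-B (e.g. `scipy.optimize.minimize` with `method="L-BFGS-B"`), accumulating the loss over several convolution and pooling layers rather than the single layer shown here.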