Video inpainting is a challenging task in computer vision. Current methods are usually flow-based, exploiting optical flow to improve temporal coherence and obtain better inpainting results. However, these networks are usually complex and of limited practical value. In this paper, we propose a fast video inpainting network based on an encoder-decoder architecture, which reduces running time by enhancing the network's feature-extraction ability. We validate our approach on a video inpainting dataset built from the DAVIS and Septuplet datasets. Experimental results show that our method compares favorably against mainstream algorithms and has good inpainting capability.
|
Keywords: deep learning; video inpainting; image inpainting
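As a toy illustration of the encoder-decoder idea (not the paper's actual network), the sketch below fills masked pixels of a single frame by pooling the frame to a coarse latent grid and upsampling it back; the function names, the pooling factor, and the mask convention (1 = hole) are all assumptions for this example.

```python
import numpy as np

def encode(frame, factor=4):
    # Toy "encoder": average-pool the frame to a coarse latent grid.
    h, w = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(latent, factor=4):
    # Toy "decoder": nearest-neighbour upsample back to full resolution.
    return np.repeat(np.repeat(latent, factor, axis=0), factor, axis=1)

def inpaint_frame(frame, mask, factor=4):
    # Fill masked pixels (mask == 1) with the encode-decode reconstruction,
    # keeping known pixels untouched. A real network would learn the
    # encoder and decoder; this is only a structural sketch.
    filled = decode(encode(frame, factor), factor)
    return np.where(mask == 1, filled, frame)
```

For example, on an 8x8 frame of ones with a 2x2 hole zeroed out, the hole is filled with the local block average while all known pixels are left unchanged.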