
 
 
Research On Enhancing Video Temporal Consistency
Meng Junfeng, Wang Jing*
State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876
*Corresponding author
Funding: none
Opened online: 23 February 2022
Accepted by: none
Citation: Meng Junfeng, Wang Jing. Research On Enhancing Video Temporal Consistency [OL]. [23 February 2022]. http://en.paper.edu.cn/en_releasepaper/content/4756324
 
 
In the secondary production of short videos, applying an image algorithm directly to a video frame by frame produces pixel flickering, a symptom of poor temporal consistency. To address this problem, this paper treats video temporal consistency as a learning task and proposes a network architecture based on optical flow constraints. It is a post-processing technique: the per-frame processed video and the original video are fed into the network, which outputs a temporally stable video. Because the post-processing algorithm is independent of the specific image algorithm, it generalizes to a range of pixel-flickering scenarios, such as video stylization, dehazing, super-resolution, and colorization. The network consists of a temporal stabilization module called TCNet, a loss calculation module, and an optical flow constraint module. The temporal stabilization module introduces ConvGRU, which improves the network's ability to extract temporal information and enhances its memory capacity. For training, this paper also proposes a new hybrid loss, a weighted sum of a temporal loss and a spatial loss. The algorithm computes optical flow only during training and requires none at inference, which keeps it fast enough for real-time deployment. Extensive experiments show that TCNet greatly improves temporal consistency while preserving perceptual similarity, achieving the best performance among existing algorithms.
Keywords: Artificial Intelligence; Deep Learning; Video Stylization; Temporal Consistency; Optical Flow; Temporally Stable Networks
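To make the two central ideas of the abstract concrete — a ConvGRU cell for temporal feature extraction and a hybrid loss that combines a flow-warped temporal term with a spatial term, with optical flow used only at training time — here is a minimal PyTorch sketch. It is not the authors' implementation: the cell layout, the weight `alpha`, the occlusion mask, and the plain L1 spatial term (standing in for whatever perceptual-similarity loss the paper uses) are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class ConvGRUCell(torch.nn.Module):
    """One possible form of the ConvGRU the paper introduces for
    temporal feature extraction (layout is an assumption)."""
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        pad = kernel // 2
        self.conv_zr = torch.nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, kernel, padding=pad)
        self.conv_h = torch.nn.Conv2d(in_ch + hid_ch, hid_ch, kernel, padding=pad)

    def forward(self, x, h):
        # Update gate z and reset gate r from the current input and hidden state.
        z, r = torch.sigmoid(self.conv_zr(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_cand = torch.tanh(self.conv_h(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_cand  # blended new hidden state

def backward_warp(frame, flow):
    """Backward-warp `frame` (N,C,H,W) with `flow` (N,2,H,W), where flow
    gives per-pixel displacements from frame t to frame t-1."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=frame.device),
                            torch.arange(w, device=frame.device), indexing="ij")
    base = torch.stack((xs, ys)).float().unsqueeze(0)   # (1,2,H,W), (x,y) order
    coords = base + flow                                # displaced pixel coordinates
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0       # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=3)                 # (N,H,W,2) for grid_sample
    return F.grid_sample(frame, grid, align_corners=True)

def hybrid_loss(out_t, out_prev, proc_t, flow, occ_mask, alpha=10.0):
    """Weighted sum of temporal and spatial terms; used only during
    training, so no optical flow is needed at inference time."""
    warped = backward_warp(out_prev, flow)
    temporal = (occ_mask * (out_t - warped).abs()).mean()  # short-term warping error
    spatial = (out_t - proc_t).abs().mean()                # stay close to processed frame
    return alpha * temporal + spatial
```

In this sketch the temporal term penalizes changes between the current output and the flow-warped previous output wherever the flow is valid (`occ_mask` masks occlusions), while the spatial term anchors the output to the per-frame processed video; the weighting between the two is the design knob the hybrid loss exposes.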