TY - GEN
T1 - Long-Term Video Generation with Evolving Residual Video Frames
AU - Kim, Nayoung
AU - Kang, Je Won
N1 - Funding Information:
This work was supported by the Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (2017-0-00072, Development of Audio/Video Coding and Light Field Media Fundamental Technologies for Ultra Realistic Tera-media).
Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/29
Y1 - 2018/8/29
AB - In this paper, we propose a novel long-term video generation algorithm motivated by recent developments in unsupervised deep learning. The proposed technique learns two ingredients of an internal video representation, i.e., video textures and motions, to reproduce realistic pixels in future video frames. To this end, it uses two encoders built on convolutional neural networks (CNNs) to extract spatiotemporal features from the original video frame and from a residual video frame, respectively. The residual frame facilitates learning with fewer parameters, since a video exhibits high spatiotemporal correlations, and it efficiently captures the evolving pixel differences in the future frame. In the decoder, the future frame is generated by transforming the combination of the two feature vectors back to the original video frame size. Experimental results demonstrate that the proposed technique provides more robust and accurate long-term video generation than conventional techniques.
KW - Convolutional neural network
KW - Future video prediction
KW - Unsupervised learning
KW - Video generation
UR - http://www.scopus.com/inward/record.url?scp=85062908861&partnerID=8YFLogxK
U2 - 10.1109/ICIP.2018.8451079
DO - 10.1109/ICIP.2018.8451079
M3 - Conference contribution
AN - SCOPUS:85062908861
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 3578
EP - 3582
BT - 2018 IEEE International Conference on Image Processing, ICIP 2018 - Proceedings
PB - IEEE Computer Society
T2 - 25th IEEE International Conference on Image Processing, ICIP 2018
Y2 - 7 October 2018 through 10 October 2018
ER -
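
For readers who want a concrete picture of the architecture described in the abstract, below is a minimal sketch of the two-encoder idea, assuming PyTorch. All layer sizes, channel counts, and names (ResidualFuturePredictor, conv_encoder, etc.) are hypothetical illustrations chosen for this sketch; the paper does not specify them here.

# A minimal sketch (assumption: PyTorch; all layer sizes, channel counts,
# and names are hypothetical, not the paper's actual architecture) of the
# idea in the abstract: one CNN encodes the current frame x_t, a second
# encodes the residual r_t = x_t - x_{t-1}, and a decoder maps the combined
# features back to a full-size prediction of the next frame.
import torch
import torch.nn as nn

def conv_encoder(in_ch):
    # Two stride-2 conv stages: H x W -> H/4 x W/4 feature map.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    )

class ResidualFuturePredictor(nn.Module):
    def __init__(self, in_ch=3):
        super().__init__()
        self.frame_enc = conv_encoder(in_ch)     # texture features from x_t
        self.residual_enc = conv_encoder(in_ch)  # motion features from r_t
        self.decoder = nn.Sequential(            # upsample fused features to frame size
            nn.ConvTranspose2d(128, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_ch, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, prev_frame, frame):
        residual = frame - prev_frame            # evolving pixel differences
        z = torch.cat([self.frame_enc(frame),
                       self.residual_enc(residual)], dim=1)
        return self.decoder(z)                   # predicted next frame

# Long-term generation: feed each prediction back in as the new current frame.
model = ResidualFuturePredictor()
x_prev, x_t = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
for _ in range(5):
    x_next = model(x_prev, x_t)
    x_prev, x_t = x_t, x_next

The recursive loop at the end mirrors how long-term prediction is typically evaluated: each generated frame becomes the input for the next step, so errors in texture and motion compound unless the representation is robust.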