Abstract
In this paper, we propose a novel long-term video generation algorithm, motivated by recent developments in unsupervised deep learning. The proposed technique learns two ingredients of the internal video representation, i.e., video texture and motion, to reproduce realistic pixels in future video frames. To this end, the technique uses two encoders comprising convolutional neural networks (CNNs) to extract spatiotemporal features from the original video frame and a residual video frame, respectively. The use of the residual frame facilitates learning with fewer parameters, since videos exhibit high spatiotemporal correlations. Moreover, the residual frames are used efficiently to evolve pixel differences in the future frame. In the decoder, the future frame is generated by transforming the combination of the two feature vectors back to the original frame size. Experimental results demonstrate that the proposed technique provides more robust and accurate long-term video generation than conventional techniques.
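To make the two-encoder design concrete, below is a minimal PyTorch sketch of the idea: one CNN encodes the current frame (texture), a second CNN encodes the residual frame (motion), and a decoder transforms the combined features back to the original frame size. The layer configuration, channel counts, and combining features by concatenation are illustrative assumptions; the abstract does not specify the actual architecture.

```python
# Minimal sketch of the two-encoder / one-decoder idea from the abstract.
# All layer sizes and the concatenation-based feature fusion are assumptions
# for illustration, not the paper's exact architecture.
import torch
import torch.nn as nn


def conv_encoder(in_channels: int, feat_channels: int) -> nn.Sequential:
    """Small CNN that maps a frame to a downsampled feature map."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=4, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(32, feat_channels, kernel_size=4, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )


class TwoStreamFramePredictor(nn.Module):
    """Encodes the current frame (texture) and the residual frame (motion),
    combines the two feature maps, and decodes the next frame."""

    def __init__(self, in_channels: int = 3, feat_channels: int = 64):
        super().__init__()
        self.frame_encoder = conv_encoder(in_channels, feat_channels)
        self.residual_encoder = conv_encoder(in_channels, feat_channels)
        # Decoder upsamples the combined features back to the frame size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * feat_channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, in_channels, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, prev_frame: torch.Tensor, curr_frame: torch.Tensor) -> torch.Tensor:
        residual = curr_frame - prev_frame  # pixel differences carry the motion cue
        texture = self.frame_encoder(curr_frame)
        motion = self.residual_encoder(residual)
        return self.decoder(torch.cat([texture, motion], dim=1))


# Long-term generation: feed each predicted frame back in as the new input.
model = TwoStreamFramePredictor()
prev, curr = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
with torch.no_grad():
    for _ in range(10):  # roll out 10 future frames
        nxt = model(prev, curr)
        prev, curr = curr, nxt
```

In this sketch, encoding the residual rather than a second full frame is what lets the motion stream work with fewer parameters: most of a frame is redundant given the previous one, so the residual is sparse and easier to model.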
Original language | English |
---|---|
Title of host publication | 2018 IEEE International Conference on Image Processing, ICIP 2018 - Proceedings |
Publisher | IEEE Computer Society |
Pages | 3578-3582 |
Number of pages | 5 |
ISBN (Electronic) | 9781479970612 |
DOIs | |
State | Published - 29 Aug 2018 |
Event | 25th IEEE International Conference on Image Processing, ICIP 2018, Athens, Greece. Duration: 7 Oct 2018 → 10 Oct 2018 |
Publication series
Name | Proceedings - International Conference on Image Processing, ICIP |
---|---|
ISSN (Print) | 1522-4880 |
Conference
Conference | 25th IEEE International Conference on Image Processing, ICIP 2018 |
---|---|
Country/Territory | Greece |
City | Athens |
Period | 7/10/18 → 10/10/18 |
Bibliographical note
Publisher Copyright: © 2018 IEEE.
Keywords
- Convolutional neural network
- Future video prediction
- Unsupervised learning
- Video generation