Convolution Neural Network based Video Coding Technique using Reference Video Synthesis

Jung Kyung Lee, Nayoung Kim, Seunghyun Cho, Je Won Kang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

8 Scopus citations

Abstract

In this paper, we propose a novel video coding technique that uses a virtual reference (VR) video frame, synthesized by a convolutional neural network (CNN), for inter-coding. Specifically, the encoder generates a VR frame with a video interpolation CNN (VI-CNN) from two reconstructed pictures, i.e., one from the forward reference frames and the other from the backward reference frames. The VR frame is included in the reference picture lists to exploit further temporal correlation in motion estimation and compensation. Experimental results demonstrate that the proposed technique achieves about 1.4% BD-rate reduction over the HEVC reference test model (HM 16.9) as an anchor in the Random Access (RA) coding scenario.
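To make the idea concrete, the sketch below shows, in a toy form, how a virtual reference frame could be synthesized from a forward and a backward reconstructed picture and handed to the encoder's reference list. This is an illustrative stand-in, not the authors' VI-CNN: the blending weights, the single 3x3 "layer", and the function names (`conv2d`, `synthesize_virtual_reference`) are all hypothetical, and a real VI-CNN would use many learned convolutional layers.

```python
import numpy as np

def conv2d(img, kernel):
    """Single-channel 2-D convolution with edge padding,
    so the output keeps the input's spatial size."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def synthesize_virtual_reference(forward_ref, backward_ref):
    """Toy stand-in for a VI-CNN: blend the two reconstructed
    pictures, then apply one smoothing convolution 'layer'.
    The equal weights and averaging kernel are assumptions,
    not learned parameters from the paper."""
    blended = 0.5 * forward_ref + 0.5 * backward_ref
    kernel = np.full((3, 3), 1.0 / 9.0)  # hypothetical fixed weights
    return conv2d(blended, kernel)

# Two tiny 8x8 "reconstructed pictures" with a moving bright block,
# mimicking forward and backward reference frames around the VR frame.
fwd = np.zeros((8, 8)); fwd[2:6, 2:6] = 100.0
bwd = np.zeros((8, 8)); bwd[3:7, 3:7] = 100.0
vr = synthesize_virtual_reference(fwd, bwd)

# The VR frame would then be appended to the reference picture lists
# used for motion estimation and compensation.
reference_list = [fwd, bwd, vr]
```

In the actual scheme, the decoder runs the same synthesis from the same reconstructed pictures, so no extra frame data needs to be transmitted, only the usual motion information referencing the VR frame.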

Original language: English
Title of host publication: 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2018 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 505-508
Number of pages: 4
ISBN (Electronic): 9789881476852
DOIs
State: Published - 4 Mar 2019
Event: 10th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2018 - Honolulu, United States
Duration: 12 Nov 2018 - 15 Nov 2018

Publication series

Name: 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2018 - Proceedings

Conference

Conference: 10th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2018
Country/Territory: United States
City: Honolulu
Period: 12/11/18 - 15/11/18
