Abstract
Recent work on machine learning has greatly advanced the accuracy of depth estimation from a single image. However, the resulting depth images are still visually unsatisfactory, often exhibiting poor boundary localization and spurious regions. In this paper, we formulate depth estimation from a single image as a deep adversarial learning framework. A two-stage convolutional network is designed as a generator to sequentially predict the global and local structures of the depth image. At the heart of our approach is a training criterion based on an adversarial discriminator, which attempts to distinguish between real and generated depth images as accurately as possible. Our model enables more realistic and structure-preserving depth prediction from a single image compared to state-of-the-art approaches. An experimental comparison demonstrates the effectiveness of our approach on a large RGB-D dataset.
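The abstract does not spell out the exact losses, but the described training criterion follows the usual GAN pattern: the generator is penalized both for reconstruction error against the ground-truth depth map and for failing to fool the discriminator, while the discriminator learns to tell real depth maps from generated ones. The sketch below illustrates that objective under assumed choices (an L1 reconstruction term, a binary cross-entropy adversarial term, and a weighting factor `lam`); none of these specifics come from the paper itself.

```python
import math

def l1_loss(pred, target):
    """Mean absolute error between predicted and ground-truth depth values."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def bce(prob, label):
    """Binary cross-entropy for a single real/fake probability."""
    eps = 1e-7  # clamp to avoid log(0)
    prob = min(max(prob, eps), 1.0 - eps)
    return -(label * math.log(prob) + (1.0 - label) * math.log(1.0 - prob))

def generator_loss(pred_depth, gt_depth, d_prob_on_fake, lam=0.01):
    """Reconstruction term plus an adversarial term rewarding fooled D.

    `lam` is a hypothetical trade-off weight, not a value from the paper.
    """
    return l1_loss(pred_depth, gt_depth) + lam * bce(d_prob_on_fake, 1.0)

def discriminator_loss(d_prob_on_real, d_prob_on_fake):
    """D tries to label real depth maps as 1 and generated ones as 0."""
    return bce(d_prob_on_real, 1.0) + bce(d_prob_on_fake, 0.0)
```

In a full training loop the two losses are minimized alternately: a discriminator step on a batch of real and generated depth maps, then a generator step; the two-stage generator described in the abstract would supply `pred_depth` after its global and local refinement passes.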
| Original language | English |
|---|---|
| Title of host publication | 2017 IEEE International Conference on Image Processing, ICIP 2017 - Proceedings |
| Publisher | IEEE Computer Society |
| Pages | 1717-1721 |
| Number of pages | 5 |
| ISBN (Electronic) | 9781509021758 |
| DOIs | |
| State | Published - 2 Jul 2017 |
| Event | 24th IEEE International Conference on Image Processing, ICIP 2017 - Beijing, China Duration: 17 Sep 2017 → 20 Sep 2017 |
Publication series
| Name | Proceedings - International Conference on Image Processing, ICIP |
|---|---|
| Volume | 2017-September |
| ISSN (Print) | 1522-4880 |
Conference
| Conference | 24th IEEE International Conference on Image Processing, ICIP 2017 |
|---|---|
| Country/Territory | China |
| City | Beijing |
| Period | 17/09/17 → 20/09/17 |
Bibliographical note
Publisher Copyright: © 2017 IEEE.
Keywords
- Deep neural network
- Depth from a single image
- Generative adversarial learning
- RGB-D database