TY - GEN
T1 - Gaze correction for 3D tele-immersive communication system
AU - Eng, Wei Yong
AU - Min, Dongbo
AU - Nguyen, Viet Anh
AU - Lu, Jiangbo
AU - Do, Minh N.
PY - 2013
Y1 - 2013
N2 - The lack of eye contact between participants in tele-conferencing makes nonverbal communication unnatural and ineffective. Much research has focused on correcting the user's gaze to enable natural communication. Most prior solutions require expensive and bulky hardware, or rely on complicated algorithms that hinder efficiency and deployment. In this paper, we propose an effective and efficient gaze correction solution for a 3D tele-conferencing system with a single color/depth camera set-up. A raw depth map is first refined using the corresponding color image. Then, both the color and depth data of the participant are accurately segmented. A novel view is synthesized at the location of the display screen, which coincides with the user's gaze. Stereoscopic views, i.e. virtual left and right images, can also be generated for 3D immersive conferencing and displayed on a 3D monitor with 3D virtual background scenes. Finally, to handle the large hole regions that often occur in a view synthesized from a single color camera, we propose a simple yet robust hole filling technique that works in real-time. This novel inpainting method can effectively reconstruct missing parts of the synthesized image under various challenging situations. Our proposed system, including data acquisition, post-processing, and rendering, works in real-time on a single-core CPU without requiring dedicated hardware.
AB - The lack of eye contact between participants in tele-conferencing makes nonverbal communication unnatural and ineffective. Much research has focused on correcting the user's gaze to enable natural communication. Most prior solutions require expensive and bulky hardware, or rely on complicated algorithms that hinder efficiency and deployment. In this paper, we propose an effective and efficient gaze correction solution for a 3D tele-conferencing system with a single color/depth camera set-up. A raw depth map is first refined using the corresponding color image. Then, both the color and depth data of the participant are accurately segmented. A novel view is synthesized at the location of the display screen, which coincides with the user's gaze. Stereoscopic views, i.e. virtual left and right images, can also be generated for 3D immersive conferencing and displayed on a 3D monitor with 3D virtual background scenes. Finally, to handle the large hole regions that often occur in a view synthesized from a single color camera, we propose a simple yet robust hole filling technique that works in real-time. This novel inpainting method can effectively reconstruct missing parts of the synthesized image under various challenging situations. Our proposed system, including data acquisition, post-processing, and rendering, works in real-time on a single-core CPU without requiring dedicated hardware.
KW - depth camera
KW - depth image based rendering (DIBR)
KW - foreground segmentation
KW - Gaze correction
KW - tele-conferencing
UR - http://www.scopus.com/inward/record.url?scp=84888178879&partnerID=8YFLogxK
U2 - 10.1109/IVMSPW.2013.6611942
DO - 10.1109/IVMSPW.2013.6611942
M3 - Conference contribution
AN - SCOPUS:84888178879
SN - 9781467358583
T3 - 2013 IEEE 11th IVMSP Workshop: 3D Image/Video Technologies and Applications, IVMSP 2013 - Proceedings
BT - 2013 IEEE 11th IVMSP Workshop
T2 - 2013 IEEE 11th Workshop on 3D Image/Video Technologies and Applications, IVMSP 2013
Y2 - 10 June 2013 through 12 June 2013
ER -