TY - JOUR
T1 - Pyramidal Semantic Correspondence Networks
AU - Jeon, Sangryul
AU - Kim, Seungryong
AU - Min, Dongbo
AU - Sohn, Kwanghoon
N1 - Funding Information:
This work was supported in part by the R&D program for Advanced Integrated-intelligence for Identification (AIID) through the National Research Foundation under Grant NRF-2018M3E3A1057289 and by the Yonsei University Research Fund under Grant 2021-22-0001. The work of Seungryong Kim was supported in part by MSIT (Ministry of Science and ICT) under the ICT Creative Consilience program (Grant IITP-2021-2020-0-01819).
Publisher Copyright:
© 1979-2012 IEEE.
PY - 2022/12/1
Y1 - 2022/12/1
N2 - This paper presents a deep architecture, called pyramidal semantic correspondence networks (PSCNet), that estimates locally-varying affine transformation fields across semantically similar images. To deal with the large appearance and shape variations that commonly exist among different instances within the same object category, we leverage a pyramidal model in which the affine transformation fields are progressively estimated in a coarse-to-fine manner so that a smoothness constraint is naturally imposed. Unlike previous methods that directly estimate global or local deformations, our method first estimates the transformation over the entire image and then progressively increases its degrees of freedom by dividing coarse cells into finer ones. To this end, we propose two spatial pyramid models that divide an image either into quad-tree rectangles or into multiple semantic elements of an object. Additionally, to overcome the limitation of insufficient training data, a novel weakly-supervised training scheme is introduced that generates progressively evolving supervision through the spatial pyramid models by leveraging correspondence consistency across image pairs. Extensive experimental results on various benchmarks, including TSS, Proposal Flow-WILLOW, Proposal Flow-PASCAL, Caltech-101, and SPair-71k, demonstrate that the proposed method outperforms the latest methods for dense semantic correspondence.
AB - This paper presents a deep architecture, called pyramidal semantic correspondence networks (PSCNet), that estimates locally-varying affine transformation fields across semantically similar images. To deal with the large appearance and shape variations that commonly exist among different instances within the same object category, we leverage a pyramidal model in which the affine transformation fields are progressively estimated in a coarse-to-fine manner so that a smoothness constraint is naturally imposed. Unlike previous methods that directly estimate global or local deformations, our method first estimates the transformation over the entire image and then progressively increases its degrees of freedom by dividing coarse cells into finer ones. To this end, we propose two spatial pyramid models that divide an image either into quad-tree rectangles or into multiple semantic elements of an object. Additionally, to overcome the limitation of insufficient training data, a novel weakly-supervised training scheme is introduced that generates progressively evolving supervision through the spatial pyramid models by leveraging correspondence consistency across image pairs. Extensive experimental results on various benchmarks, including TSS, Proposal Flow-WILLOW, Proposal Flow-PASCAL, Caltech-101, and SPair-71k, demonstrate that the proposed method outperforms the latest methods for dense semantic correspondence.
KW - Dense semantic correspondence
KW - coarse-to-fine inference
KW - spatial pyramid model
UR - http://www.scopus.com/inward/record.url?scp=85118551202&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2021.3123679
DO - 10.1109/TPAMI.2021.3123679
M3 - Article
C2 - 34714738
AN - SCOPUS:85118551202
SN - 0162-8828
VL - 44
SP - 9102
EP - 9118
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 12
ER -