TY - GEN
T1 - Future Object Localization in Autonomous Driving Using Ego-Centric Images and Motions
AU - Jo, Seoyoung
AU - Lee, Jung Kyung
AU - Kang, Je Won
N1 - Funding Information:
This work is partly supported by Hyundai Motor Company. This work is partly supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00920, Development of Ultra High Resolution Unstructured Plenoptic Video Storage/Compression/Streaming Technology for Medium to Large Space) and by the MSIT under the ITRC (Information Technology Research Center) support program (IITP-2022-2020-0-01460) supervised by the IITP, and partly supported by the NRF grant funded by MSIT (No. NRF-2022R1A2C4002052).
Publisher Copyright:
© 2022 Asia-Pacific Signal and Information Processing Association (APSIPA).
PY - 2022
Y1 - 2022
N2 - In autonomous driving, future object localization (FOL) is actively used for trajectory prediction and collision avoidance systems. However, accurately determining the future locations of nearby pedestrians and vehicles during driving is a challenging task. In this paper, we propose a stochastic FOL method using ego-centric images and motions (FOLe), which are generic information obtained from an autonomous agent. The proposed network consists of two staged sub-networks: a future candidate network (FCN) and a future decision network (FDN) for localization. The FCN generates several hypotheses about where an object will probably appear in an image according to its attributes. Our network directly produces the future hypotheses using only the ego-centric images and motions, and it is trained in an end-to-end manner. The FDN predicts a multi-modal distribution based on the previous results of the FCN and determines the final location by maximizing the probability distribution. Experimental results demonstrate that the proposed model provides superior performance to state-of-the-art methods on the nuScenes dataset.
AB - In autonomous driving, future object localization (FOL) is actively used for trajectory prediction and collision avoidance systems. However, accurately determining the future locations of nearby pedestrians and vehicles during driving is a challenging task. In this paper, we propose a stochastic FOL method using ego-centric images and motions (FOLe), which are generic information obtained from an autonomous agent. The proposed network consists of two staged sub-networks: a future candidate network (FCN) and a future decision network (FDN) for localization. The FCN generates several hypotheses about where an object will probably appear in an image according to its attributes. Our network directly produces the future hypotheses using only the ego-centric images and motions, and it is trained in an end-to-end manner. The FDN predicts a multi-modal distribution based on the previous results of the FCN and determines the final location by maximizing the probability distribution. Experimental results demonstrate that the proposed model provides superior performance to state-of-the-art methods on the nuScenes dataset.
UR - http://www.scopus.com/inward/record.url?scp=85146290563&partnerID=8YFLogxK
U2 - 10.23919/APSIPAASC55919.2022.9980234
DO - 10.23919/APSIPAASC55919.2022.9980234
M3 - Conference contribution
AN - SCOPUS:85146290563
T3 - Proceedings of 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2022
SP - 1035
EP - 1039
BT - Proceedings of 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2022
Y2 - 7 November 2022 through 10 November 2022
ER -