TY - GEN
T1 - Are you hearing or listening? The effect of task performance in verbal behavior with smart speaker
AU - Park, Chaewon
AU - Choi, Jongsuk
AU - Sung, Jee Eun
AU - Lim, Yoonseob
N1 - Funding Information:
This work was supported by the Technology Innovation Program (10077553, Development of Social Robot Intelligence for Social Human-Robot Interaction of Service Robots) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea) and by the National Research Council of Science and Technology (NST) grant funded by the Korea government (MSIP) (No. CRC-15-04-KIST). We also thank Prof. Sojung Oh, who allowed us to use the CPLC pragmatics evaluation kit, and Yujung Chae for providing the TTS program.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Humans have the ability to adjust their utterances depending on the state of the interlocutor. In this study, we explore the verbal behavior of humans interacting with two smart speakers that have different levels of task competence. We analyzed (1) the linguistic behaviors that appeared in users' utterances, (2) the length of the uttered speech, and (3) the pragmatic skills required to understand the user's intent. As a result, there was no significant difference in linguistic behaviors or length of speech while users interacted with speakers of different task competence. In addition, various pragmatic elements were utilized equally, and notably, implied intentions were frequently observed in users' short utterances even under simple interaction scenarios.
AB - Humans have the ability to adjust their utterances depending on the state of the interlocutor. In this study, we explore the verbal behavior of humans interacting with two smart speakers that have different levels of task competence. We analyzed (1) the linguistic behaviors that appeared in users' utterances, (2) the length of the uttered speech, and (3) the pragmatic skills required to understand the user's intent. As a result, there was no significant difference in linguistic behaviors or length of speech while users interacted with speakers of different task competence. In addition, various pragmatic elements were utilized equally, and notably, implied intentions were frequently observed in users' short utterances even under simple interaction scenarios.
UR - http://www.scopus.com/inward/record.url?scp=85081164959&partnerID=8YFLogxK
U2 - 10.1109/IROS40897.2019.8967930
DO - 10.1109/IROS40897.2019.8967930
M3 - Conference contribution
AN - SCOPUS:85081164959
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 319
EP - 324
BT - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
Y2 - 3 November 2019 through 8 November 2019
ER -