Evaluating voice applications by user-aware design guidelines using an automatic voice crawler

Xu Han, Tom Yeh

Research output: Contribution to journal › Conference article › peer-review


Abstract

Adaptive voice applications supported by conversational agents (CAs), such as Alexa Skills and Google Home Actions, are increasingly popular. However, much work remains in the area of voice interaction evaluation, especially in terms of user-awareness. In our study, we developed a voice skill crawler to collect responses from the 100 most popular Alexa skills in each of 10 categories. We then evaluated these responses to assess their compliance with three user-aware design guidelines published by Amazon. Our findings show that more than 50% of voice applications fail to follow at least some of these guidelines, and that guideline compliance varies across skill categories. As voice interaction continues to spread in consumer settings, our crawler offers an efficient and scalable way to evaluate CA-based voice applications.
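The abstract does not detail the crawler internals or the exact three Amazon guidelines, but the evaluation step it describes — scoring collected skill responses against design rules and aggregating per-rule compliance rates — can be sketched roughly as below. The specific heuristic checks (greeting present, prompt ends with a question, opening kept brief) are illustrative assumptions, not the paper's actual criteria.

```python
# Minimal sketch of guideline-compliance scoring over crawled responses.
# The three checks below are hypothetical stand-ins for the Amazon
# user-aware design guidelines evaluated in the paper.

def check_response(text: str) -> dict:
    """Apply illustrative heuristic checks to one skill opening response."""
    words = text.split()
    return {
        # Heuristic: the opening should greet the user.
        "greets_user": any(
            w.lower().strip(",.!?") in {"welcome", "hi", "hello"} for w in words
        ),
        # Heuristic: the prompt should end by asking the user something.
        "ends_with_question": text.rstrip().endswith("?"),
        # Heuristic: keep the opening brief (50 words is an arbitrary cutoff).
        "is_brief": len(words) < 50,
    }

def compliance_rates(responses: list) -> dict:
    """Fraction of responses passing each check, per check."""
    results = [check_response(r) for r in responses]
    n = len(results)
    return {key: sum(r[key] for r in results) / n for key in results[0]}
```

A crawler would feed this function the first response of each skill; the per-check rates then support the kind of cross-category comparison the abstract reports.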

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2327
State: Published - 2019
Event: 2019 Joint ACM IUI Workshops, ACMIUI-WS 2019 - Los Angeles, United States
Duration: 20 Mar 2019 → …

Bibliographical note

Publisher Copyright:
© 2019 Copyright for the individual papers by the papers’ authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.

Keywords

  • Conversational agents
  • User-awareness evaluation
  • Voice applications
