Abstract
Adaptive voice applications supported by conversational agents (CAs) are increasingly popular (e.g., Alexa Skills and Google Home Actions). However, much work remains in evaluating voice interaction, especially with respect to user awareness. In our study, we developed a voice skill crawler to collect responses from the 100 most popular Alexa skills within 10 categories. We then evaluated these responses to assess their compliance with three user-aware design guidelines published by Amazon. Our findings show that more than 50% of voice applications do not follow some of these guidelines and that guideline compliance varies across skill categories. As voice interaction continues to grow in consumer settings, our crawler can evaluate CA-based voice applications efficiently and at scale.
Original language | English
---|---
Journal | CEUR Workshop Proceedings
Volume | 2327
State | Published - 2019
Event | 2019 Joint ACM IUI Workshops, ACMIUI-WS 2019 - Los Angeles, United States. Duration: 20 Mar 2019 → …
Bibliographical note
Publisher Copyright: © 2019 Copyright for the individual papers by the papers' authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.
Keywords
- Conversational agents
- User-awareness evaluation
- Voice applications