Abstract
Adaptive voice applications supported by conversational agents (CAs) are increasingly popular (e.g., Alexa Skills and Google Home Actions). However, much work remains in the area of voice interaction design and evaluation. In our study, we deployed a voice crawler to collect responses from the 100 most popular Alexa skills within 10 different categories. We then evaluated these responses to assess their compliance with 8 selected design guidelines published by Amazon. Our findings show that guidelines requiring support for basic commands are the most closely followed, while those related to personalized interaction are followed less often. Guideline compliance also varies across skill categories. Based on our findings and real skill examples, we offer suggestions for new guidelines to complement the existing ones and propose agendas for future HCI research to improve the user experience of voice applications.
Original language | English
---|---
Journal | CEUR Workshop Proceedings
Volume | 2848
State | Published - 2020
Event | 2020 Joint Workshops on Human-AI Co-Creation with Generative Models and User-Aware Conversational Agents, HAI-GEN+user2agent 2020 - Cagliari, Italy; Duration: 17 Mar 2020 → …
Bibliographical note
Publisher Copyright: © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
Keywords
- Conversational agents
- User experience evaluation
- Voice user interface design