How does your Alexa behave? Evaluating voice applications by design guidelines using an automatic voice crawler

Xu Han, Tom Yeh

Research output: Contribution to journal › Conference article › peer-review

1 Scopus citation

Abstract

Adaptive voice applications supported by conversational agents (CAs) are increasingly popular (e.g., Alexa Skills and Google Home Actions). However, much work remains in voice interaction design and evaluation. In our study, we deployed a voice crawler to collect responses from the 100 most popular Alexa skills within 10 different categories. We then evaluated these responses to assess their compliance with 8 design guidelines selected from those published by Amazon. Our findings show that guidelines requiring support for basic commands are followed most often, while those related to personalized interaction are followed comparatively rarely. Compliance also varies across skill categories. Based on our findings and real skill examples, we offer suggestions for new guidelines to complement the existing ones and propose agendas for future HCI research to improve the user experience of voice applications.
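
To make the methodology concrete, below is a minimal, hypothetical Python sketch of the kind of rule-based check such a crawler could run over collected transcripts. It is not the authors' actual tool: the command list, function name, and response format are assumptions for illustration. It tests one guideline from the "basic commands" family, namely whether a skill gives a non-empty reply to "help", "stop", and "cancel".

    # Hypothetical sketch of one guideline check (not the paper's actual code):
    # given the replies a skill gave to utterances the crawler sent, test
    # whether each basic command received a non-empty response.
    BASIC_COMMANDS = ("help", "stop", "cancel")

    def supports_basic_commands(responses):
        """responses: dict mapping a crawler utterance to the skill's reply."""
        return all(responses.get(cmd, "").strip() for cmd in BASIC_COMMANDS)

    # Example with a transcript collected for one skill:
    transcript = {
        "help": "You can ask for a fact, or say stop to exit.",
        "stop": "Goodbye!",
        "cancel": "Okay, cancelled.",
    }
    print(supports_basic_commands(transcript))  # True

Checks for the other selected guidelines (e.g., personalized interaction) would follow the same pattern: send a probe utterance, record the reply, and apply a per-guideline predicate to it.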

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2848
State: Published - 2020
Event: 2020 Joint Workshops on Human-AI Co-Creation with Generative Models and User-Aware Conversational Agents, HAI-GEN+user2agent 2020 - Cagliari, Italy
Duration: 17 Mar 2020 → …

Bibliographical note

Publisher Copyright:
© 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Keywords

  • Conversational agents
  • User experience evaluation
  • Voice user interface design
