Refactoring the UX of a popular voice application

2020 
Commercial voice services like Google Assistant and Amazon Alexa are extremely popular. While Natural Language Processing (NLP) and Artificial Intelligence (AI) techniques applied to the aural channel deliver high-quality voice recognition, the voice channel still lacks a sound methodology for designing user experiences. For instance, the Amazon Alexa team suggests gathering the information model of Alexa skills by talking with test users from behind a curtain, pretending to be the machine. In our opinion, this kind of bottom-up strategy is not effective because it overfits the UX to very specific cases; a top-down approach, by contrast, can also provide the right answer in unseen and unpredictable situations. Our work proposes a novel model-driven approach that allows authors to design the overall vocal UX from scratch, as well as to rethink existing visual UXs before porting them to the aural channel. Our approach, which is inherently top-down, is based on Aural IDM, a UX design method conceived for screen-reader modelling in the early 2000s. In this paper we refactor the Spotify Alexa skill to demonstrate the validity of Aural IDM for designing vocal UXs. The current experience of Spotify on Alexa is quite rudimentary and does not reflect the richness of the desktop app. A prototype is currently under development, and a comparison between the AS-IS and TO-BE voice skill will be the subject of future work.
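To make the model-driven idea more concrete, below is a minimal sketch of how an IDM-style information model for a Spotify-like skill could be expressed before being mapped onto Alexa intents. All topic names, dialogue acts, and navigation relations in this sketch are illustrative assumptions and are not taken from the paper; they only show the kind of top-down structure (topics and the relations connecting them) that Aural IDM works with.

```python
# Hypothetical sketch: an IDM-style dialogue model for a music skill.
# Topic names, acts, and relations are illustrative assumptions,
# not the model actually proposed in the paper.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Topic:
    """A kind of content the user can talk about (an IDM 'kind of topic')."""
    name: str
    introductory_act: str          # what the voice UI says when the topic is opened
    dialogue_acts: tuple = ()      # smaller units of content within the topic


@dataclass
class DialogueModel:
    """Topics plus the navigation relations that connect them."""
    topics: dict = field(default_factory=dict)
    relations: dict = field(default_factory=dict)   # topic name -> reachable topic names

    def add_topic(self, topic: Topic, reachable_from: tuple = ()):
        self.topics[topic.name] = topic
        for source in reachable_from:
            self.relations.setdefault(source, set()).add(topic.name)

    def next_topics(self, current: str):
        """Where the dialogue can legally move from the current topic."""
        return sorted(self.relations.get(current, set()))


# Illustrative content for a Spotify-like skill (assumed, not from the paper).
model = DialogueModel()
model.add_topic(Topic("Artist", "Here is the artist.", ("biography", "top tracks")))
model.add_topic(Topic("Album", "Here is the album.", ("track list", "release year")),
                reachable_from=("Artist",))
model.add_topic(Topic("Playlist", "Here is the playlist.", ("tracks", "mood")),
                reachable_from=("Artist", "Album"))

if __name__ == "__main__":
    # From 'Artist' the dialogue can move to 'Album' or 'Playlist'.
    print(model.next_topics("Artist"))
```

Such a model is designed top-down from the content domain; only afterwards would each topic and relation be mapped to concrete skill intents and prompts.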