Designing UX for Voice User Interfaces
In the early 2000s, voice-controlled devices were a fantasy of the future; now, 18 years later, that future has arrived. Conversation-based technology is the new norm, and major players in the technology industry such as Microsoft, Google, and Apple have all released highly successful products that feature voice-enabled controls. Today, nearly one in five adults in the U.S. has access to a voice-enabled speaker, a figure that reflects the undeniable rise in popularity of these products over the past two years. Yet the question remains: how will this shift from visual guides to audible ones affect the quality of the user experience?
What can we expect?
The introduction of devices that rely primarily on voice control has presented User Experience designers with an interesting challenge. Our interactions with technology are evolving, and designers must adapt to these changes. Like any major technological innovation, voice interaction has the power to revolutionize user experience as we know it, and that leaves plenty of unanswered questions and problems for designers to address.
What does VUI mean for UX designers?
With the popularization of voice-controlled devices, new limitations are coming into play, such as the lack of any visual aid to help users perform tasks and understand where they are within an interface. UX designers will have to rely heavily on spoken words, without visual cues, to guide users through their experience. Ensuring that users clearly recognize and interpret what their devices are communicating, in the way the designer intended, will become the biggest hurdle to overcome. Many staples of web design will also be lost on voice-controlled platforms: clickable links and confirmation buttons no longer carry the same weight once visual elements are eliminated from the equation, as the sketch below illustrates.
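To make this concrete, consider how a destructive action that a web page would guard with a confirmation button might instead be guarded by a spoken confirmation turn. The following is a minimal, framework-agnostic sketch in Python; the function names, accepted phrases, and dialog structure are illustrative assumptions, not any particular voice platform's API.

```python
# Hypothetical sketch: replacing a visual confirmation button with a
# spoken confirmation turn. All names are illustrative, not a real API.

def handle_delete_request(item: str, pending: dict) -> str:
    """First turn: with no 'Delete' button to render, the device must
    state the consequence out loud and ask for explicit consent."""
    pending["action"] = ("delete", item)
    return f"You're about to delete {item}. This can't be undone. Should I go ahead?"

def handle_confirmation(utterance: str, pending: dict) -> str:
    """Second turn: interpret the spoken reply where a click would have been."""
    reply = utterance.strip().lower()
    if reply in {"yes", "yeah", "go ahead", "do it"}:
        _, item = pending.pop("action")
        return f"Okay, {item} has been deleted."
    if reply in {"no", "cancel", "stop"}:
        pending.pop("action")
        return "Okay, I won't delete anything."
    # Anything else is ambiguous -- with no button to fall back on,
    # the prompt itself must teach the user what to say.
    return "Sorry, I need a yes or no. Should I delete it?"

# Example dialog:
pending_state: dict = {}
print(handle_delete_request("the shopping list", pending_state))
print(handle_confirmation("yeah", pending_state))
```

Note how the work a button's label and placement would normally do is shifted entirely into the wording of the prompts.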
For UX designers, a growing concern will be working with an invisible navigator that relies solely on users' spoken commands to create a smooth and accurate experience. Without a button click for users to communicate what they want, it becomes the designer's job to anticipate where users will want to go and what they will want to do.
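One practical consequence is that every destination a visual menu would expose must instead be anticipated as a spoken intent, along with the many phrasings users might choose for it. A hypothetical sketch follows; the intent names and trigger phrases are invented for illustration, and simple keyword matching stands in for the natural-language understanding a real system would use.

```python
# Hypothetical sketch: with no visible menu, the designer enumerates
# intents and the varied phrasings that should reach each one.

INTENT_PHRASES = {
    "check_weather": ["weather", "forecast", "is it going to rain"],
    "play_music":    ["play", "music", "put on a song"],
    "set_timer":     ["timer", "remind me", "countdown"],
}

def route_utterance(utterance: str) -> str:
    """Pick the intent whose trigger phrases appear in the utterance."""
    text = utterance.lower()
    for intent, phrases in INTENT_PHRASES.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unrecognized"

print(route_utterance("Is it going to rain tomorrow?"))  # check_weather
print(route_utterance("Put on a song I like"))           # play_music
print(route_utterance("Open the settings page"))         # unrecognized
```

The last line is the designer's real problem: any phrasing not anticipated in advance lands in "unrecognized", with no visible fallback for the user to click.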
Unlike a web platform, an invisible interface leaves more room for communication errors between the AI and the user. Designing for a device that does not truly understand what the user is saying can be frustrating, as one must account for every possible response, or lack thereof, a user may give. Creating appropriate strategies for handling these errors when they arise will remain an obstacle, and one that will only become more apparent as these devices continue to evolve in the coming years.
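One common pattern for handling these breakdowns is the escalating error prompt: each failed turn offers progressively more guidance before the interaction exits gracefully rather than looping forever. Below is a minimal sketch of that pattern; the prompt wording and the three-attempt threshold are illustrative choices, not a prescription.

```python
# Hypothetical sketch of escalating error prompts: each misunderstanding
# earns the user more guidance, then a graceful exit.

ERROR_PROMPTS = [
    "Sorry, I didn't catch that. What would you like to do?",
    "You can say things like 'check the weather' or 'set a timer'. What would you like?",
    "I'm still having trouble. Let's try again later.",
]

def on_misunderstanding(error_count: int) -> tuple[str, bool]:
    """Return the next prompt and whether the session should end."""
    prompt = ERROR_PROMPTS[min(error_count, len(ERROR_PROMPTS) - 1)]
    should_exit = error_count >= len(ERROR_PROMPTS) - 1
    return prompt, should_exit

# Simulate three failed turns in a row:
for errors in range(3):
    prompt, done = on_misunderstanding(errors)
    print(f"attempt {errors + 1}: {prompt}" + ("  [end session]" if done else ""))
```

The design choice worth noting is that the second prompt teaches by example, since a voice interface has no visible affordances to fall back on when the first attempt fails.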