With Amazon reporting that the Echo Dot was its best-selling item last holiday season and Apple’s HomePod on the way, there’s no question that voice technology is the next big platform. With so many people talking to their devices the way they speak to each other, what does this development in voice-enabled technology mean for chatbots and human-machine communication? As the technology rapidly develops, let’s take a look at what voice platforms are, how they’re used and where they’re headed.
Understanding Voice-Enabled Technology
Voice platforms enable users to accomplish tasks with machines using a voice interface. They can answer simple questions, control smart devices or help users make a purchase. Many users got their first taste of voice interfaces through Siri on iOS; meanwhile, Amazon brought always-listening, voice-enabled technology into the home through its line of Amazon Echo devices.
Given its growing popularity and ease of use, voice is the next big platform. As of last year, 35.6 million people were using a smart speaker each month. Amazon’s Alexa assistant has more than 20,000 skills, and while that doesn’t necessarily indicate quality, it shows how eager developers are to dive into the tech. But the future of voice technology isn’t just in smart speakers; according to Google, 20% of all search queries are made through a voice interface.
How Voice-Enabled Technology Is Being Used Right Now
Users love voice interfaces because they’re hands-free and easy to use. They also match the way humans communicate with one another, providing a more natural experience. Let’s say you want to check the weather. If you’re not near a computer, you’d have to pull out your phone and then open your weather app. With a voice interface, you might press a single button on your phone and verbally ask for the weather. This provides a faster result, but still requires reaching for a phone. Those near a smart speaker, however, don’t need to do anything besides invoke a wake word (the phrase that summons a device to listen, like “Ok Google”) followed by a query. This provides an immediate, hands-free result.
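The wake-word flow described above can be sketched in a few lines. This is purely illustrative (real devices detect the wake word in dedicated hardware and stream audio to a speech-recognition service), but it shows the basic idea: audio is ignored until it begins with the wake word, and everything after it is treated as the query.

```python
# Illustrative sketch only: real smart speakers use on-device wake-word
# detection and streaming speech recognition, not string matching.

WAKE_WORD = "ok google"

def extract_query(transcribed_audio):
    """Return the query following the wake word, or None if the
    utterance wasn't addressed to the device."""
    text = transcribed_audio.lower().strip()
    if not text.startswith(WAKE_WORD):
        return None  # not talking to the device -- stay idle
    # Strip the wake word plus any leading comma/spaces before the query.
    return text[len(WAKE_WORD):].strip(" ,")

print(extract_query("Ok Google, what's the weather?"))  # -> "what's the weather?"
print(extract_query("what's the weather?"))             # -> None
```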
Use cases for voice interfaces depend on the voice-enabled technology in question. For example, smart speakers often serve as central hubs for other smart home/Internet of Things devices. They do this by applying a shared AI and voice platform to the connected devices. A user might, for instance, use Google Home to both control their lights and manage their home’s thermostat through a single interface. With third-party applications like IFTTT, users can explore even more functionality between devices connected to the voice platform.
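Conceptually, the hub’s job is to take an utterance the platform has already parsed into a structured intent and route it to the right connected device. The sketch below assumes a simplified intent shape (`device`, `action`, `slots`) invented for illustration; it is not any vendor’s actual API.

```python
# Hypothetical sketch of a smart-home hub dispatching parsed voice
# intents to connected devices. The intent format is invented for
# illustration and is not a real vendor API.

class Thermostat:
    def set_temperature(self, degrees):
        return f"Setting the thermostat to {degrees} degrees."

class Light:
    def turn_on(self):
        return "Turning on the lights."

def handle_intent(intent, devices):
    """Route a structured intent to the matching device handler."""
    device = devices.get(intent["device"])
    if device is None:
        return f"Sorry, I couldn't find a device called {intent['device']}."
    action = getattr(device, intent["action"], None)
    if action is None:
        return f"{intent['device']} can't do that."
    return action(**intent.get("slots", {}))

devices = {"thermostat": Thermostat(), "lights": Light()}

# "Set the thermostat to 70 degrees" -> parsed upstream into an intent
print(handle_intent(
    {"device": "thermostat", "action": "set_temperature", "slots": {"degrees": 70}},
    devices,
))  # -> "Setting the thermostat to 70 degrees."
```

The single shared interface is what makes the hub valuable: the user speaks one phrase, and the platform, not the user, figures out which device and which capability to invoke.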
Different voice platforms may also serve certain use cases better than others. Amazon Alexa, for example, can pull from users’ past purchases and buying habits to help them easily make purchases—a use case that other voice assistants aren’t quite ready for yet. Mobile assistants like Siri, Cortana and Google Assistant are ideal for productivity, pulling from users’ search interests, calendar data and other data connected to their mobile accounts. Finally, Google Home’s connection to the Chromecast family of devices helps it excel at media consumption for users already invested in the Google ecosystem.
In short, voice-enabled technology is set to dramatically change the way people interact with mobile devices and home appliances. In the future, we might see microphones on more devices than just phones, speakers and computers.
The Future of Voice Technology
If voice is the next big platform, it will need to undergo some improvements. Using voice to perform a task is easy in theory, but without some form of direction or prompting (buttons, screens, etc.), users may have difficulty knowing how to invoke a skill or query. This leads to confusion for both the user and the voice assistant. While developers work to make voice platforms smarter, they’ll need to educate users on the vocabulary required when talking to a voice interface. Amazon and Google both actively provide users direction in their apps with suggested queries.
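The discoverability problem above comes down to this: a skill only understands utterances close to the ones it was designed around, so anything else should fall back to a prompt that teaches the user what they can say. The sketch below uses simple exact matching purely for illustration; real platforms match intents with machine-learned language models.

```python
# Illustrative sketch of skill invocation and graceful fallback.
# Exact-string matching is a stand-in for the ML-based intent
# matching real voice platforms use.

SAMPLE_UTTERANCES = {
    "what's the weather": "WeatherIntent",
    "will it rain today": "WeatherIntent",
    "set a timer": "TimerIntent",
}

def match_intent(utterance):
    """Map an utterance to an intent name, or None if unrecognized."""
    return SAMPLE_UTTERANCES.get(utterance.lower().strip())

def respond(utterance):
    intent = match_intent(utterance)
    if intent is None:
        # Educate instead of failing silently: suggest valid queries,
        # much like the suggested queries in the Alexa and Google apps.
        examples = ", ".join(f'"{u}"' for u in list(SAMPLE_UTTERANCES)[:2])
        return f"Sorry, I didn't catch that. Try asking: {examples}."
    return f"Handling {intent}..."

print(respond("Will it rain today"))  # recognized -> handled
print(respond("do my taxes"))         # unrecognized -> teaching prompt
```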
As far as use cases go, the future of voice assistants will most notably revolutionize search. In the coming years, we’ll see search move from keyword-based queries to contextual, natural-language questions. This calls for deeper, richer content designed to answer specific questions. It may also lead to a more conversational tone to match the interface.
Voice interfaces may also dethrone the screens and keyboards we depend on today. Apple is already driving the future of voice assistants on mobile through its peripheral devices, the Apple Watch and AirPods. The AirPods initially allowed users to accomplish tasks via Siri without having to pull their phones out of their pockets. The latest Apple Watch, meanwhile, empowers users to accomplish tasks via voice without having their phone on them at all. The popularity of these devices suggests users might turn away from phones and screens altogether in the future.