Voice interfaces beginning to find their way into business
Imagine attending a business meeting with an Amazon Echo (or any voice-driven device) sitting on the conference table. A question comes up about the month’s sales numbers in the Southeast region. Instead of opening a laptop, launching a program like Excel and hunting for the figures, you simply ask the device and get the answer instantly.
That kind of scenario is increasingly becoming a reality, although it is still far from commonplace in business.
With the increasing popularity of devices like the Amazon Echo, people are beginning to get used to the idea of interacting with computers using their voices. Anytime a phenomenon like this enters the consumer realm, it is only a matter of time before we see it in business.
Chuck Ganapathi, CEO at Tact, an AI-driven sales tool that uses voice, typing and touch, says that with our devices changing, voice makes a lot of sense. “There is no mouse on your phone. You don’t want to use a keyboard on your phone. With a smart watch, there is no keyboard. With Alexa, there is no screen. You have to think of more natural ways to interact with the device.”
As Werner Vogels, Amazon’s chief technology officer, pointed out during his AWS re:Invent keynote at the end of last month, up until now we have been limited by the technology in how we interact with computers. We type keywords into Google using a keyboard because that was the only way the technology allowed us to enter information.
“Interfaces to digital systems of the future will no longer be machine driven. They will be human centric. We can build human natural interfaces to digital systems and with that a whole environment will become active,” he said.
Amazon will of course be happy to help in this regard, introducing Alexa for Business as a cloud service at re:Invent, but other cloud companies are also exposing voice services for developers, making it ever easier to build voice into an interface.
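To give a sense of what building on those services looks like, here is a minimal sketch of a voice skill handler using the Alexa Skills Kit SDK for Python. The intent name, the “region” slot and the hard-coded sales figure are illustrative assumptions, not part of any product mentioned in this article; a real integration would query a BI backend rather than return a canned answer.

```python
# Minimal sketch of a voice skill handler (Alexa Skills Kit SDK for Python).
# The intent name "GetRegionalSalesIntent", the "region" slot and the sales
# figure below are hypothetical, used only to illustrate the pattern.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


class GetRegionalSalesHandler(AbstractRequestHandler):
    """Answers questions like 'what were sales in the Southeast this month?'"""

    def can_handle(self, handler_input):
        # Route only requests that match our (hypothetical) sales intent.
        return is_intent_name("GetRegionalSalesIntent")(handler_input)

    def handle(self, handler_input):
        # A real skill would look the number up in a BI system; this stubs it.
        region = handler_input.request_envelope.request.intent.slots["region"].value
        answer = f"Sales in the {region} region were 1.2 million dollars this month."
        return handler_input.response_builder.speak(answer).response


sb = SkillBuilder()
sb.add_request_handler(GetRegionalSalesHandler())

# Entry point when the skill is hosted on AWS Lambda.
lambda_handler = sb.lambda_handler()
```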
While Amazon took aim at business directly for the first time with this move, some companies had been experimenting with Echo integration much earlier. Sisense, a BI and analytics tool company, introduced Echo integration as early as July 2016.
But not everyone wants to cede voice to the big cloud vendors, no matter how attractive they might make it for developers. We saw this when Cisco introduced the Cisco Voice Assistant for Spark in November, using voice technology it acquired with the MindMeld purchase the previous May to provide voice commands for common meeting tasks.
Roxy, a startup that got $2.2 million in seed money in November, decided to build its own voice-driven software and hardware, taking aim, for starters, at the hospitality industry. It has broader ambitions beyond that, but one early lesson it has learned is that not all companies want to give their data to Amazon, Google, Apple or Microsoft. They want to maintain control of their own customer interactions, and a solution like Roxy gives them that.
In yet another example, Synqq introduced a notes app at the beginning of the year that uses voice and natural language processing to add notes and calendar entries without having to type.
As we move into 2018, we should start seeing even more examples of this type of integration, both with the help of the big cloud companies and from companies trying to build something independent of those vendors. The keyboard won’t be consigned to the dustbin just yet, but in scenarios where it makes sense, voice could begin to replace the need to type and provide a more natural way of interacting with computers and software.
Featured Image: Mark Cacovic/Getty Images
Source: TechCrunch