Google has given its clearest demonstration to date of how its forthcoming Glass headwear will help users interact with web-based content, showing delegates at the South by Southwest (SXSW) conference how the system integrates with third-party apps.
Timothy Jordan, a senior developer advocate at Google, showed delegates how Glass could provide users with the information that they wanted, when they wanted it – but without getting in their way.
Glass, he claimed, would create a new type of computing that is more about people than computers. He showed how web searches could be conducted via voice commands, and how a Gmail app could present users with incoming emails.
Other applications included a New York Times app that displays headlines, bylines and even full articles, while the Evernote and Path apps let users share content online.
These services are enabled by Glass' Mirror API, which pulls down short bursts of context-relevant data that are then displayed in the user's peripheral vision.
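In practice, third-party services pushed these bursts of content to Glass as "timeline cards" via the Mirror API's REST endpoint. The sketch below is a minimal, hypothetical illustration of how such a request could be assembled; the endpoint and `text` field follow Google's published Mirror API, but `build_card_request` and the placeholder access token are this article's own illustrative names, not part of any official client library.

```python
import json

# Base URL of the Mirror API's timeline collection, per Google's
# published documentation for the (since-retired) Glass Mirror API.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_card_request(text, access_token):
    """Assemble the URL, headers, and JSON body for a timeline.insert call.

    A real app would send this with an HTTP client after completing
    OAuth 2.0; here we only construct the request pieces.
    """
    headers = {
        "Authorization": f"Bearer {access_token}",  # placeholder OAuth token
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": text})  # simplest card: plain text
    return MIRROR_TIMELINE_URL, headers, body

# Example: a news service pushing a headline card to the wearer's timeline.
url, headers, body = build_card_request("SXSW: Google demos Glass apps", "ACCESS_TOKEN")
print(url)
print(body)
```

Sending the assembled request would make the card appear in the wearer's timeline, which is how the New York Times and Evernote demos surfaced their content without the user asking for it.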
A voice recognition system displays what Glass thinks the wearer has said, enabling them to correct any small misunderstandings, Jordan said.
But the system also relies on a trackpad, as can be seen in its Skitch/Evernote demonstration – so while Google may believe wearable computers are the way forward, it might still have one or two details to iron out when it comes to gesture recognition.