It’s been an interesting few weeks for computing interfaces. Microsoft unveiled Windows 8, which takes a major departure from the traditional ‘Startbar and desktop’ model, and attendees at Google I/O were given a thrilling demo of Project Glass, in which live video from the headsets of a group of skydivers was streamed to the audience via a Google+ hangout.
For those that haven’t seen it, Google’s Project Glass is basically a headset similar to a pair of glasses, packed with a camera, sensors and radios to provide an overlay of information. The concept video that Google released in April gives a pretty good idea of where the company is trying to go with this.
‘Wearable’ computing and similar concepts in computing interfaces have been around for quite some time, envisioned in various forms, from the science fiction of authors like Charles Stross and Neil Gaiman, through to movies like Iron Man, Johnny Mnemonic and Minority Report – to name but a few.
We’ve already seen a lot of changes in the interfaces of today’s technology, and much more research is currently underway. Virtual reality, touch screens, 3D, augmented reality, voice recognition, wireless communications, multi-touch, flexible displays, holographic technology and gesture-based computing all try to take us beyond the traditional keyboard/mouse/screen setup.
But we keep running into the same problems: computers don’t extend into the real world, and the concept of three-dimensional space is simply foreign to them, which creates difficulties for both input and output.
Google is hoping to release its Project Glass eyewear to consumers before the end of 2014, but developers are being offered the chance to buy an ‘explorer’ edition, with the fairly hefty price of $1,500, to start work on related software. But, assuming this comes to pass and gives us a whole new way of overlaying and consuming information, it still only solves one half of the interface problem – we still need a new way to input commands.
The Project Glass concept video points to voice as a way forward, but voice recognition is very unreliable (thanks largely to the vast differences in everybody’s speech). Not to mention that talking aloud to ‘yourself’ in a public place leads others to question your manners and your mental stability. A keyboard and mouse or touchscreen is clearly a substandard option in a mobile environment.
There are several other options being researched that may help solve these issues. Sub-vocal recognition and neural impulse interfaces are distinct possibilities, but are quite some way from significant use and adoption. Another option that’s more feasible in the not-too-distant future is a combination of voice and contextual gesture recognition, whereby the camera in the headset picks up on hand gestures and responds based on the current situation. For instance, if a new email has just arrived, perhaps opening your hand would signal that you want the email opened, a closed fist would ignore the email, and another gesture would have the email read aloud through your headset.
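To make the idea concrete, the contextual part of such a system could be as simple as a lookup from (current situation, recognised gesture) to an action, so the same gesture means different things in different contexts. This is purely a hypothetical sketch — the context names, gestures and actions below are invented for illustration, not taken from any real Project Glass API.

```python
# Hypothetical sketch of contextual gesture dispatch: the same hand
# gesture maps to different actions depending on what the headset is
# currently showing. All names here are illustrative assumptions.

# (context, gesture) -> action the headset should take
GESTURE_ACTIONS = {
    ("new_email", "open_hand"): "open_email",
    ("new_email", "closed_fist"): "dismiss_email",
    ("new_email", "swipe_up"): "read_email_aloud",
    ("navigation", "open_hand"): "show_route_details",
    ("navigation", "closed_fist"): "mute_directions",
}

def dispatch(context: str, gesture: str) -> str:
    """Return the action for a recognised gesture in the current
    context, or a no-op when the combination has no meaning."""
    return GESTURE_ACTIONS.get((context, gesture), "ignore")

print(dispatch("new_email", "open_hand"))   # -> open_email
print(dispatch("new_email", "closed_fist")) # -> dismiss_email
print(dispatch("navigation", "open_hand"))  # -> show_route_details
print(dispatch("new_email", "wave"))        # -> ignore (unrecognised)
```

The appeal of this design is that a small vocabulary of easily distinguished gestures can cover many tasks, because the context supplied by the headset does most of the disambiguation work.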
Project Glass (and Windows 8, for that matter) could be a huge success or a complete failure. At this stage it’s almost impossible to tell whether the technology can live up to the hype and whether the public will accept or condemn the wearing of such a device (after all, Bluetooth headsets are still shunned by many).
The keyboard and mouse aren’t in danger of being relegated to the annals of history just yet – but it’s clear that the increasing mobility of technology and communication is taking us into a new age of interfacing with the devices that permeate our lives, and I, for one, can’t wait.
(image courtesy of Google’s Project Glass page)