Turns Out Google Glass May Be Only Kind of Distracting While Driving

Hands-free driving doesn't equate with distraction-free driving. But whether Google Glass is the least of a bunch of evils hasn't been answered just yet.
Ariel Zambelich/WIRED

Distracted driving killed more than 3,330 people in the US in 2012 and is so serious an issue it has its own government website. Meanwhile, the auto industry is in a mad rush to fulfill our desire to be constantly connected by ever more capable infotainment systems, a market worth more than $30 billion globally. Automakers say these devices aren’t distracting because they offer voice recognition. The logic is easy and tempting to accept: Yes, of course you can keep your eyes on the road, your hands on the wheel and your brain on your phone!

A study from MIT's AgeLab challenges that logic by comparing how well humans drive while entering a destination address using Google Glass, voice recognition on a Samsung Galaxy S4 smartphone, and the same phone's touchscreen. Turns out, hands-free driving doesn't mean distraction-free driving. But whether Google Glass, which is largely hands-free and uses visuals projected into your line of sight, is the best of a bad lot hasn't been answered just yet.

The subjects were 25 students from MIT and nearby schools, all of whom got $40 for their time and had been driving for at least three years. The researchers don’t usually use students as subjects, but wanted people who would be more familiar with a novel technology like Google Glass. They used a simulator (a stationary 2001 VW Beetle in front of an 8-foot by 6-foot screen) because it’s safer than putting people in real cars, especially when the point is testing suboptimal driving conditions. They selected the S4 because it’s a popular phone that uses the same Android architecture as Glass. They focused on destination entry as the task because it’s a common demand while driving.

To evaluate how the devices impacted driving, the researchers used a system called Detection Response Task (DRT): When an LED light within a subject’s field of vision turns on, he responds using a small switch attached to his finger. This approximates the experience of seeing a brake light on a car ahead, recording how quickly the subject registers what’s happening. Delays longer than a second count as a “miss.” The researchers also recorded how long the subjects took to enter their destination and how much they deviated from their speed and lane position in the process.
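For readers who want to see the arithmetic behind those metrics, here's a minimal sketch (not the study's actual analysis code) of how DRT reaction times might be tabulated under the rules described above: each response slower than one second counts as a miss, and the remaining hits are averaged. The trial data is hypothetical and purely illustrative.

```python
MISS_THRESHOLD_S = 1.0  # delays longer than a second count as a "miss"

def summarize_drt(reaction_times_s):
    """Return (mean reaction time of hits, miss rate) for a list of
    per-trial DRT reaction times in seconds."""
    hits = [t for t in reaction_times_s if t <= MISS_THRESHOLD_S]
    misses = len(reaction_times_s) - len(hits)
    miss_rate = misses / len(reaction_times_s)
    mean_hit_rt = sum(hits) / len(hits) if hits else None
    return mean_hit_rt, miss_rate

# Hypothetical trials: two responses exceed the one-second threshold
times = [0.41, 0.52, 1.30, 0.47, 0.95, 1.10]
mean_rt, miss_rate = summarize_drt(times)
```

A lower mean reaction time and a lower miss rate together indicate that more of the driver's attention stayed available for the road.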

There are no shocking results here, and much of what the researchers found reinforces what we already know. Using a touchscreen is more distracting than using a voice recognition system. The voice systems on Google Glass and the S4 produced similar results, but drivers did best of all when they just concentrated on, you know, driving.

There are two big takeaways. First, using any kind of technology negatively impacts your attention to driving. Second, voice inputs are a more effective answer than touch interfaces when it comes to attention allocation. Voice inputs are “associated with lower subjective workload ratings, apparently better lateral control (lower standard deviation of lateral lane position), shorter task durations, faster DRT reaction times, and lower DRT miss rates,” the authors write. But they don’t rush to any conclusions. The “cognitive demands” of voice recognition are “potentially less than the cognitive interactions involved in visually and manually typing,” says Bruce Mehler, one of the report's authors.

When it comes to comparing Google Glass and the S4 voice input, the differences are harder to parse. The study found that although Glass was quicker for entering a destination, it came with twice the miss percentage (how often the subjects took more than a second to respond to the LED light). “Whether one or the other equates to an overall safer driving experience is an open question,” the authors write.

But as the MIT study shows, voice recognition of either ilk is hardly a cure-all, since it requires cognitive energy that would otherwise go to interpreting and reacting to visual stimuli. A June 2013 report by the AAA Foundation for Traffic Safety, conducted by University of Utah researchers, found hands-free tasks like listening to and writing emails can be riskier than holding and talking on a cellphone, since the cognitive task load is heavier.

What we need are more studies that produce concrete data. We have plenty of information on the risks involved with visual distractions, but little on the effects of cognitive demands that don’t take our eyes off the road. It’s hard to believe we’ll be able to fully divorce ourselves from our gadgets while driving, if it’s possible at all. But if we can point to data that tells us which options are best, in which situations, it’ll be a step toward reducing accidents on the road.