Wednesday, September 4, 2013

What problem are we trying to solve with Google Glass?

“I’m paid to think deeply,” said Thad Starner, a founder and director of the Contextual Computing Group at Georgia Tech, who is credited with coining the term “augmented reality.” He does most of that deep thinking at a computer screen, often writing code, and he works hard at controlling his attention.

You might think he would be annoyed by the notifications that pop into his view on Google Glass. In fact, he appreciates them. As a pioneer in wearable computing, he welcomes the way the device lets him interact with the world without constantly checking his phone.

Glass offers what he calls micro-interactions. He likens them to glancing at the dashboard of the car you are driving: you can look down for brief moments without careening off the road.

There are a lot of reasons why he calls this “revolutionary,” but he also astutely notes that we can’t know how Glass is going to be used just yet. “Our perceptions of what you are going to be using it for are probably wrong—until you get to something in your everyday life, actually get to a stage you can experience it and you understand the problem you are trying to solve.”

Does that seem backwards? In a sense, perhaps. But thousands of testers are determining what in their lives needs solving and seeing whether Glass can do it. To that end, Gaze Further will continue to explore wearable computing at work, particularly how communication professionals can employ Glass for the benefit of people interacting on the job.

Starner thinks one answer to the question of what problem we are trying to solve with Google Glass lies in reducing the time between the intention to do, see, or think something and the action itself. Glass can deliver split-second notifications. “The time between my first thought of wanting information and having it in my eyeballs,” he said, “is a few seconds.”
