I remember what a challenge it was to learn to type. I took a summer-school course at my high school because I couldn’t imagine a whole semester of typing exercises. Now, of course, I type constantly, and I’m pretty fast at it, if not perfect.
I even look forward to conducting telephone interviews when distance prevents face-to-face research, because I can type a transcript as we talk, capturing quotable responses efficiently on my desktop keyboard. I learned that skill decades ago as a replacement for audio recording, because I seldom had time to listen to the whole conversation again.
I still have a chiclet-keyboard BlackBerry, though on it I’m all thumbs, in the negative, slow sense. I much prefer my iPhone and one- or two-finger mobile typing. Yet for speed, it’s back to my full-sized keyboard.
That kind of input won’t work with wearable computing. Some innovators say gestures in the air are enough. Google Glass takes motion instructions through its trackpad at the temple: tap once to open or play; swipe down to close; swipe forward quickly to have Glass identify the song you are listening to at the moment.
Samsung, which makes smartwatches among other things, has reportedly applied for a patent on an intricate approach in which your thumb taps different sections of your fingers to represent different characters. And I thought learning QWERTY over the summer in an un-air-conditioned classroom was hard.
Nonetheless, I’m sure I’ll take time to learn that, or the Twiddler. In the category of devices called chorded keyboards, the Twiddler is a one-handed alternative that has been around for years and is getting renewed interest. Instead of tapping parts of your fingers to represent different characters, as in Samsung’s vision, using a Twiddler is more like playing a guitar: different finger combinations press a chord that represents a character or command.
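To make the chording idea concrete, here is a minimal sketch in Python of how a chorded keyboard might translate button combinations into text. The chord assignments are invented for illustration; they are not the Twiddler’s actual layout.

```python
# Toy sketch of a chorded keyboard: each chord is a set of buttons pressed
# together, and one chord yields one character or command.
# These mappings are made up for illustration, not the real Twiddler layout.
CHORD_MAP = {
    frozenset({"index"}): "e",
    frozenset({"middle"}): "t",
    frozenset({"index", "middle"}): "a",
    frozenset({"index", "ring"}): "o",
    frozenset({"index", "middle", "ring"}): " ",   # space
    frozenset({"thumb", "index"}): "BACKSPACE",    # a command, not a letter
}

def decode(chords):
    """Translate a sequence of chords into text, ignoring unknown chords."""
    out = []
    for chord in chords:
        key = CHORD_MAP.get(frozenset(chord))
        if key == "BACKSPACE":
            if out:
                out.pop()
        elif key is not None:
            out.append(key)
    return "".join(out)

# One chord per character: "tea " takes four chords, not four hunts across a keyboard.
print(decode([{"middle"}, {"index"}, {"index", "middle"}, {"index", "middle", "ring"}]))
```

The appeal, at least in theory, is economy of motion: every character is one simultaneous press, which is why chorded keyboards keep resurfacing for one-handed and wearable use.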
We seem to want an input method
other than our voices. Imagine how distracting it would be for everyone around
you to be talking into wearable whatevers. People seem grateful that on most
airline flights, passengers can’t use cell phones, because no one wants to sit
next to that chatty passenger, or even worse, a talker on each side. Imagine a
workplace full of people talking into their shirtsleeves or…wait a minute. That
describes a busy office with people talking into headsets.
What we need is not so much a path for quietly inputting our thoughts as computing power that truly understands, in common language, what we’re saying and what to do with that information when we do talk out loud. Right now, for example, Google Glass’s spoken menu must be repeated word for word, like “make a call to…” or “send a message to….” I might be inclined to say “send a text message to…” instead of “send a message to….” It doesn’t work.
And as for me and my typing,
I will miss the opportunity to go back and edit what I’ve written before
tapping “send.”