Saturday, July 27, 2013

Where do I go next?


There are days I’m practically running from one conference room to another. I can envision this conversation: 

“OK, Glass. Remind me, where is my next meeting?” 

“The main conference room, 3rd floor.”

“Who will be there?” 

“Jason Oliver, Heather Garcia, Al Singh, Mary Brownwood, and Lewis Kim.” 

“What department is Jason in?” 

“He is the network operations manager.” 

Sure, I could juggle my iPhone as I hurry along, look up my calendar entry to find the location, click through to see who else is on the meeting invitation, and then search the people directory to find out who Jason is. Siri isn't quite up to this task yet, though she knows your location already and learns about you over time to better answer questions.

For Glass, the pieces are now falling into place for this kind of conversational search, especially the ability to build a search on the previous question.


Without conversational search, which is what lets the questions thread together, I would need to restate elements in each subsequent question:
“Where is my 3 o’clock meeting?”
“Who is attending my 3 o’clock meeting?”
“Where does Jason Oliver work?”

My simple scenario illustrates how your device can understand what you are talking about, not just what you are saying, and connect the facts. It might work for me if I used Google Calendar. I don’t. My day is logged in my employer’s enterprise system, out of reach of the “knowledge graph” Google uses to connect data across questions and follow-up questions.
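To make that idea a little more concrete, here is a minimal sketch in Python of how an assistant might carry context from one question to the next. The meeting data, the directory, and the Conversation class are all made up for illustration; this shows only the threading idea, not how Glass or Google actually implements conversational search.

# A hypothetical sketch (not Google's actual implementation) of how an
# assistant could carry context between questions, so a follow-up like
# "What department is Jason in?" resolves "Jason" from the previous answer.

# Made-up calendar and directory data matching the scenario above.
MEETING = {
    "location": "The main conference room, 3rd floor.",
    "attendees": ["Jason Oliver", "Heather Garcia", "Al Singh",
                  "Mary Brownwood", "Lewis Kim"],
}
DIRECTORY = {"Jason Oliver": "He is the network operations manager."}

class Conversation:
    def __init__(self):
        self.recent_people = []   # context remembered from earlier answers

    def ask(self, question):
        q = question.lower()
        if "where" in q and "meeting" in q:
            return MEETING["location"]
        if "who" in q:
            self.recent_people = MEETING["attendees"]   # keep for follow-ups
            return ", ".join(self.recent_people)
        # Resolve a bare first name ("Jason") against people mentioned earlier.
        for person in self.recent_people:
            if person.split()[0].lower() in q:
                return DIRECTORY.get(person, "I don't know.")
        return "I don't know."

glass = Conversation()
print(glass.ask("Where is my next meeting?"))
print(glass.ask("Who will be there?"))
print(glass.ask("What department is Jason in?"))   # "Jason" resolved from context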

There are still some quirks, so while it is fun to test it out now, at some point you’ll find you have to revert to the old-fashioned way of searching: keyboard input and reading the screen.

Still, I can already use voice search and hear or see the results returned. If you go to Google using Chrome on your desktop or have the Google search app on your phone, just click the microphone icon and ask a question. Try something simple: “Is it going to rain?” I bet you’ll smile at the spoken and visual answer, especially considering you asked a conversational question rather than a precise one.

I face another obstacle—my name. I have yet to ask anything that includes my name and have Google return a correct result. As you are asking Google a question, you see the letters and words onscreen as it hears you. My question seems simple: Who is Sheri Rosen? But Google hears: cherry rose, sherry resins, Jerry Rosen, Sharon Reisman, shee RI rose, Sherri Rose in, and if I’m lucky, Sherry Rosen. But there are a lot of Sherry Rosens out there, and not one of them is me. It’s only worse when I try to spell a word or name, because Google tries to prefill and guess what it thinks I really meant and always gets it wrong. I must have tried this self-search a couple of dozen times, on Glass, smartphone and desktop, even expanding the question with a descriptive hint, like “in Texas” or “employee communication,” and never once gotten a correct return. I suppose I need someone to start a Wikipedia entry on me.  

Nonetheless, in the spirit of gazing further, I’m hopeful we will soon get past obstacles like spoken name recognition and walled gardens. Actively participating in the discussion now will help shape the future as we envision it. In my mind, it’s a short leap from voice search to when-I-ask-for-it information and education for a company’s employees.



