Just over a week ago I got an offer I couldn’t refuse. A friend and colleague, Bill Evans, had been accepted into the Explorer Program and was offering me the chance to try out Google Glass for the weekend. Of course I jumped at the chance and, lucky for me, a few days turned into a week. This is my story.
I had worn Glass before, but never at length, so I knew the basics of the monitor and the touchpad. Navigation was difficult at first and, because this particular device was not mine, much of the functionality was off limits because of linked accounts. Still, there was plenty to do and plenty to learn.
The first thing I noticed was how it was nothing like the Google Glass promotional video that came out before the official release. The monitor was actually pretty small and located up and to the right. Normal vision was not impaired in any way, and there was certainly no augmented-reality overlay on the world redirecting me when the subway was delayed. I had to intentionally shift my line of sight up and to the right and focus specifically on the monitor to see it. The first Saturday evening I wore it with friends over, and they found it quite amusing, akin to the people who seemed to be talking to themselves when Bluetooth earbuds first came out, only I was looking up at nothing.
Configuration also took a bit for me to learn. Since I’m not part of the Explorer Program I didn’t get the personal tutorial and professional assistance setting up the device. Instead I downloaded the app for my iPhone (my first failure; I should have switched to an S4 for the week), watched a few videos, and went to work getting it linked to my phone and onto the network. It was actually pretty easy to do the basics, but I’m pretty sure Google locks it down to prevent resale, which I didn’t realize until I was a few hours in with little success linking my G+ account. No matter, I could still have fun defining words, googling the square root of 764, taking photos, and capturing videos. I can also tell you all the places of interest within a reasonable proximity of my house and the amount of traffic between work and home at any given time.
Photo and video capture seemed to be my most-used features. I specifically enjoyed having the camera at the ready as fast as I could say, “OK, Glass. Record a video.” And it did. At first I was dismayed by the ten-second limitation, but after a few recordings I quickly realized I could simply tap to extend as long as I wanted. I’d prefer a voice command, but tapping definitely got me where I wanted to go (note to self: write a better video app; think Camera+ for iPhone).
What I found specifically interesting about video was how I found myself watching the moment virtually in my Glass display as it recorded, rather than living it. I think I’d break this habit if I wore it more and it became more natural, but the distraction from viewing the actual moment with my own eyes was noticeable, which is why I opted to take it off when I was driving or doing anything else that required my full attention. I did, however, catch a great video sparring with my son, who caught me looking and knocked me down (video below).
Pictures were also fun, but it took a while to get the alignment right. “OK, Glass. Take a picture,” did just that, with little to no time to line up the shot. Once I figured out roughly where it was going to shoot relative to my line of sight, though, things got a lot better and I didn’t have so many retakes. The instant-share feature was also nice, especially because that seemed to me to be what Glass was all about: sharing experiences.
I believe Google was very smart when they named it the Explorer Program. Not just because we were selected to explore Glass, but because we are explorers in life, and Glass is about sharing those experiences with others. From streaming video to the heads-up display to instantaneous social sharing, Glass is architected to capture and share experiences.
The last thing I learned, after the term “Glasshole,” was not about Google Glass itself, but how people reacted to it. What I quickly came to realize was that reactions fell pretty consistently along two axes, into four quadrants. Along the x-axis were those who considered themselves technical and those who did not; along the y-axis, those who reacted poorly and those who reacted positively.
The bottom-left quadrant contains those who are non-technical and had either no reaction or an initially negative one to the rather obvious computer on my face as I walked into the room. If they were a metaphor, it would be for the unsuspecting patient walking into a room to find a doctor wearing one. For the most part their reaction was based in fear, not knowledge. They didn’t know what it was or what it was capable of, so rather than be curious they opted to shut it down. No worries, though; I only had Glass for a short while, and I wasn’t about to waste time pleasing the technophobes by removing it. A quick explanation and multiple assurances that I was neither recording nor broadcasting seemed to suffice until we parted ways.
The next group, along the top left, is still non-technical in nature, but much more curious. These people were fun and interesting and asked a lot of great questions. So many, in fact, that they were quite inspiring as to what could be accomplished with Glass. The sheer number of “could it do this” and “could it do that” questions caused me to begin jotting down ideas and eventually led me to install the Google Glass SDK to find out just how far I could take it. This group gave me hope for the future of the technology because they reacted with curiosity, not fear. As John Nosta can be loosely quoted from a recent interview on Glass, “For now people react with ‘that better not be on,’ but someday soon they will react with ‘that thing better be on.’” Really it’s all about the value proposition. If we can come up with the ideas and applications of Google Glass that improve healthcare and outcomes, patients won’t run from it; they’ll seek it out.
To the bottom right we had a group I will dub “the disappointments,” primarily because I had hoped to share in the experience and use it as a trajectory for future strategic thinking and ideation. Instead, these folks, who consider themselves technical in nature, had no reaction or a negative one to Glass. Most of these quadrant members seemed to take pride in panning either the look or the functionality of Glass. A gentle reminder that Google Glass is the first of its kind, in beta, and setting the stage for our future whether they liked it or not was enough to sway some to the top right. But there were still those who steadfastly disliked Glass and couldn’t see past its beta exterior to the bright future of what’s possible. Again, my time was short, so I did not waste much of it here.
The last segment, in the upper right, was where I had the most fun. These kindred spirits reveled in access to such a device as if it were a movie star and a quick snap from their smartphones would show all their friends how cool they were. I dubbed them “the visionaries.” Our goal was to spend as much time as we could talking about the realities and the possibilities. I learned a ton from the research they had done in preparation for their one chance, and this was it. We prognosticated, strategized, and experimented. And, in the end, we came up with some great ideas that will hold me long after I have to give it up.
By the end of my week I had grown quite attached and hated to give it up. At work and in my social circles I had quickly gained notoriety for wearing Google Glass, and as many who noticed it also noticed its absence. While it wasn’t exactly what I had expected, it was much more in many ways. Despite its conspicuous, clunky appearance, it is an amazing device considering Bluetooth, wifi, a touchpad, a computer, and a monitor are all squeezed in. For that week I was perpetually linked to not only the features Google Glass currently has to offer, but the full power of Google and the rest of the internet, right there waiting for me. Beyond that, the ability to link Glass to other devices, like quantified-self sensors on the Internet of Things, has the potential to revolutionize the way we interact with nearly everything.
On the simple side, it’s quite empowering to instantly know what song is playing or the definition of a word; beyond that, consider how this is going to redefine things like healthcare delivery, retail, and shared social experiences in the very near future.