Overwriting Monuments with AR

I do think augmented reality and voice-first computing (Siri, Alexa, Google Assistant, etc.) will get us out from behind computer screens and keyboards and into the “real” world. What “real” means is subjective, and that will only intensify in the coming decades as computing comes full circle to being something we naturally do with our voices and thoughts, without the need for a keyboard and mouse.

Now we just need a good pair of AR glasses from Apple or Google or some startup we haven’t heard of yet that’s working hard in a garage to change the world…

Last year, Movers and Shakers assembled a team of coders, artists and designers who use augmented reality technology to do their work. Their goal was to circumvent the city’s decision by replacing the statue and similar monuments with digital ones of other historical figures — namely, people of color and women. “I think we have an opportunity to harness the storytelling capabilities of this technology,” said Glenn Cantave, founder and lead organizer, when explaining the group’s motivations. “Who’s going to own our narrative?”

— Read on theoutline.com/post/5123/movers-and-shakers-digital-sculptures-new-york-city

Augmented Reality Bowie via The New York Times

I’m a big David Bowie fan so, of course, this is amazing to me… but even if you’re not into superb music you can still appreciate the technology and work that makes this sort of experience possible.

No, we don’t have flying cars and jetpacks but this feels a lot like the future…

You can access the new Bowie feature via The New York Times app, projecting life-size versions of the rock star’s iconic costumes into your own space. As with other AR experiences, you can explore the outfits as if they were really there, walking around to see the back, for example, or getting up close to see details you might miss in a photo. The pieces were scanned at the Brooklyn Museum just before the “David Bowie is” exhibition opened.

Source: The New York Times brings Bowie exhibit to your phone with AR

Googling Inside Your Church

Fascinating piece on Google Maps history and possible directions…

So Google likely knows what’s inside all of the buildings it has extracted. And as Google gets closer and closer to capturing every building in the world, it’s likely that Google will start highlighting / lighting up buildings related to queries and search results.

This will be really cool when Google’s/Waymo’s self-driving cars have AR displays. One can imagine pointing out the window at a building and being able to see what’s inside.

via Google Maps’s Moat

I’m excited about Magic Leap’s Lightwear

We’ve been working on this tech since the 1830s, and we’re almost to the point of mass adoption and real use cases…

Magic Leap today revealed a mixed reality headset that it believes reinvents the way people will interact with computers and reality. Unlike the opaque diver’s masks of virtual reality, which replace the real world with a virtual one, Magic Leap’s device, called Lightwear, resembles goggles, which you can see through as if wearing a special pair of glasses. The goggles are tethered to a powerful pocket-sized computer, called the Lightpack, and can inject life-like moving and reactive people, robots, spaceships, anything, into a person’s view of the real world.

via Lightwear: Introducing Magic Leap’s Mixed Reality Goggles – Rolling Stone

Voice Isn’t the Next Big Platform

This piece is originally from Dec 19, 2016, but it’s interesting to revisit as we enter the home stretch of 2017 (and what a year it has been):

In 2017, we will start to see that change. After years of false starts, voice interface will finally creep into the mainstream as more people purchase voice-enabled speakers and other gadgets, and as the tech that powers voice starts to improve. By the following year, Gartner predicts that 30 percent of our interactions with technology will happen through conversations with smart machines.

via Voice Is the Next Big Platform, and Alexa Will Own It | WIRED

I have no doubt that we’ll all be using voice-driven computing on an ever-increasing basis in the coming years. In our home, we have an Amazon Echo, four Echo Dots, and Hue Smart Bulbs in most rooms’ light fixtures (oh, and we have the Amazon Dash Wand in case we want to carry Alexa around with us…). I haven’t physically turned on a light in any of our rooms in months. That’s weird. It happened with the stealth of a technology that slowly but surely creeps into your life and rewires your brain, the same way the first iPhone changed how I interact with the people I love. We even renamed all of our Alexa devices to “Computer” so that I can finally pretend I’m living on the Starship Enterprise. Once I have a holodeck, I’m never leaving the house.
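For the curious, the Hue bulbs mentioned above are controllable over a local REST API on the Hue bridge: you PUT a small JSON body to the light’s `state` endpoint. Here’s a minimal sketch of building such a request — the bridge IP, API key, and light id below are made-up placeholders, and `light_state_request` is just an illustrative helper, not anything from the Hue SDK:

```python
import json

# Hypothetical values — a real setup gets the bridge IP from discovery
# and the API key by registering after pressing the bridge's link button.
BRIDGE_IP = "192.168.1.2"
API_KEY = "your-api-key"

def light_state_request(light_id: int, on: bool, brightness: int = 254):
    """Build the URL and JSON body for the Hue bridge's local REST API.

    The bridge expects a PUT to /api/<key>/lights/<id>/state with a body
    like {"on": true, "bri": 1..254}.
    """
    url = f"http://{BRIDGE_IP}/api/{API_KEY}/lights/{light_id}/state"
    body = json.dumps({"on": on, "bri": brightness})
    return url, body

url, body = light_state_request(1, on=True)
print(url)   # http://192.168.1.2/api/your-api-key/lights/1/state
print(body)  # {"on": true, "bri": 254}
# Actually sending it would be e.g.:
#   requests.put(url, data=body)
```

A voice assistant skill is essentially doing this same kind of call on your behalf when you say “Computer, turn on the lights.”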

And perhaps that’s the real trick to seeing this stealth revolution happen in front of our eyes and via our vocal cords… it’s not voice-driven computing alone that is going to be the platform of the near future. In other words, voice won’t be the next big platform. A combination of voice AND augmented reality AND artificial intelligence will power how we communicate with ourselves, our homes, our environments, and the people we love (and perhaps don’t love). In twenty years, will my young son be typing on a keyboard the way I’m doing to compose this post? In ten years, will my 10-year-old daughter be typing on a keyboard to do her job or express herself?

I highly doubt both. Those computing processes will be driven by a relationship to a device representing an intelligence. Given that, as a species, we adapted to relational interaction via physical cues and vocal exchanges over the last 70 million years, I can’t imagine that a few decades of “typing” radically altered the way we prefer to communicate and exchange information. It’s the reason I’m not an advocate of teaching kids how to type (and I’m a ~90 wpm touch typist).

Voice combined with AI and AR (or whatever we end up calling it… “mixed reality” perhaps?) is the next big platform because these three will fuse into something the same way the web (as an experience) fused with personal computing to fuel the last big platform revolution.

I’m not sure Amazon will be the ultimate winner in the “next platform” wars it is waging with Google (Google Assistant), Apple (Siri), Facebook (Messenger), and any number of messaging apps and startups we haven’t heard of yet. However, our future platforms of choice will be very “human” in the same way we lovingly interact with the slab of metal and glass we all carry around and do the majority of our computing on these days. It’s hard to imagine a world where computers shrink to the size of fibers in our clothing and become transparent characters we interact with to perform whatever we’ll be performing. But the future does not involve a keyboard, a mouse, and a screen of light-emitting diodes for most people (I hear you, gamers), and we’ll all see reality in even more differing ways than is currently possible as augmented reality becomes mainstream the same way true mobile devices did after the iPhone.

Maybe I just watched too much Star Trek: The Next Generation.