Does Your Nonprofit or Church Need an Alexa Skill?

Anyone can create a trivia game and share it on the Alexa store.

Source: Amazon opens up Alexa store for anyone to create and publish custom skills – The Verge

This is a nifty new capability: anyone can now script out an Alexa Skill. I’ve been telling clients that they should really consider “voice first” capabilities when thinking about their messaging and marketing. It’s not a bad idea to really think through the possibilities for churches and nonprofits as more and more people adopt the Alexa / Google Assistant / Siri paradigm of computing (and it will inevitably be the main form of home computing).
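For a sense of how small a basic skill can be: a custom Alexa skill is ultimately a function that receives a JSON request and returns a JSON response in the Alexa Skills Kit format. Here’s a minimal sketch in Python assuming a Lambda-style entry point; the church greeting and the “ServiceTimesIntent” intent name are hypothetical examples, not part of any real skill:

```python
def handle_request(event):
    """Minimal Alexa-style handler: greet on launch, answer one intent.

    The response shape follows the Alexa Skills Kit JSON format
    (version / response / outputSpeech). The skill content and the
    "ServiceTimesIntent" name are made-up examples for a church skill.
    """
    request_type = event["request"]["type"]
    if request_type == "LaunchRequest":
        text = "Welcome to First Church. Ask me about service times."
    elif (request_type == "IntentRequest"
          and event["request"]["intent"]["name"] == "ServiceTimesIntent"):
        text = "We gather Sundays at nine and eleven a.m."
    else:
        text = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

# Example launch request, trimmed to the fields the handler actually reads
launch = {"request": {"type": "LaunchRequest"}}
print(handle_request(launch)["response"]["outputSpeech"]["text"])
```

In practice you’d wire this up through the Alexa developer console (or the new skill blueprints the article describes), but the core of it really is this small.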

Overwriting Monuments with AR

I do think augmented reality and voice-first computing (Siri, Alexa, Google Assistant, etc.) will get us out from behind computer screens and keyboards and into the “real” world. What “real” means is subjective, and that will only intensify in the coming decades as computing comes full circle to being something that we naturally do with our voices and thoughts and without the need for a keyboard and mouse.

Now we just need a good pair of AR glasses from Apple or Google or some startup we haven’t heard of yet that’s working hard in a garage to change the world…

Last year, Movers and Shakers assembled a team of coders, artists and designers who use augmented reality technology to do their work. Their goal was to circumvent the city’s decision by replacing the statue and similar monuments with digital ones of other historical figures — namely, people of color and women. “I think we have an opportunity to harness the storytelling capabilities of this technology,” said Glenn Cantave, founder and lead organizer, when explaining the group’s motivations. “Who’s going to own our narrative?”

— Read on theoutline.com/post/5123/movers-and-shakers-digital-sculptures-new-york-city

Siri and Incarnation

The theological lens through which we might view these questions is incarnation. In an age of increased engagement with disembodied digital assistants, what might it mean for the church to counterweight this with insisting on and facilitating in-person fellowship? In an era of disembodied conversation, my prayer is that the church might be a contrast society to model a more excellent way of fully-embodied community and in-person presence.

Source: Thanks for your help, Siri. But what about that human connection? – Baptist News Global

I fundamentally disagree with John Chandler here regarding the notion that smart assistants (such as Siri, Alexa, Google Assistant, Cortana, Bixby, M, or… well, there are many more) lead to more antisocial behavior or to the danger of people no longer interacting with one another.

Chandler also invokes the (in)famous Nicholas Carr article Is Google Making Us Stupid? from 2008. One of my favorite rebuttals to that article comes from a review of Carr’s subsequent book on the topic, The Shallows: How the Internet is Changing the Way We Think, Read, and Remember:

Perhaps what he needs are better strategies of self-control. Has he considered disconnecting his modem and Fedexing it to himself overnight, as some digital addicts say they have done? After all, as Steven Pinker noted a few months ago in the New York Times, ‘distraction is not a new phenomenon.’ Pinker scorns the notion that digital technologies pose a hazard to our intelligence or wellbeing. Aren’t the sciences doing well in the digital age, he asks? Aren’t philosophy, history and cultural criticism flourishing too? There is a reason the new media have caught on, Pinker observes: ‘Knowledge is increasing exponentially; human brainpower and waking hours are not.’ Without the internet, how can we possibly keep up with humanity’s ballooning intellectual output?

Socrates harbored the same fears about books (antisocial behavior and intellectual laziness) that we have since projected onto television, music, and now the internet; in Plato’s Phaedrus he warns that writing will weaken memory and give readers only the appearance of wisdom.

Smart assistants such as Siri or Alexa do pose a whole new world of possibilities for developers and companies and groups to interact with the connected world. In just a few short years, many of us (our household included) now use these assistants to do everything from schedule events on our cloud-based calendars to turn the lights off before bed. I also stream music, play audio books, ask questions, and crack riddles with Alexa, Siri, and Google Assistant on a daily basis.

While we fear the inevitability of a bleak future as depicted in the 2013 movie Her, in which human beings are completely subsumed into relationships and realities driven by their own personal digital assistants and rarely interact with one another, I don’t think that’s the reality we’ll see. There’s a simple reason for that… antisocial behavior is a part of our own internal psychologies and neural wiring. Projecting these fears onto tools such as Siri is misplaced. I’d argue that positioning the church as anti-tool in order to encourage incarnational relationships is misplaced as well.

This isn’t the same as arguing that “guns don’t kill people; people kill people,” although that’s an easy leap to make. No, what I’m arguing here has to do with coming to terms with the ongoing revelation we are making and receiving about the very nature of human thought and how our brains and nervous systems work in tandem with our concept of consciousness. Understanding that newspapers, books, radio, TV, the internet, and now Siri don’t make us any more or less lazy or antisocial is an important step toward understanding that the core issue of incarnation relies on relationality between humans and the universe.

I do agree with Chandler that the church should be counter-cultural in the sense that it provides a way to explore the concept of incarnation. But positing that experience as anti-tool or anti-specific-technology seems to undercut the very notion of the incarnation as a theological and ongoing event in history (call that the kerygma or the Christ event or God Consciousness, etc.).

Yes, technology can be addictive or exacerbate existing issues. But let’s address the fundamental issues of culture and personal psychology that the church is called to address, with a healthy and holy notion of inter-personal and inner-personal relationships.

Voice Isn’t the Next Big Platform

This piece originally ran on Dec 19, 2016, but it’s interesting to revisit as we enter the home stretch of 2017 (and what a year it has been):

In 2017, we will start to see that change. After years of false starts, voice interface will finally creep into the mainstream as more people purchase voice-enabled speakers and other gadgets, and as the tech that powers voice starts to improve. By the following year, Gartner predicts that 30 percent of our interactions with technology will happen through conversations with smart machines.

via Voice Is the Next Big Platform, and Alexa Will Own It | WIRED

I have no doubt that we’ll all be using voice-driven computing on an ever-increasing basis in the coming years. In our home, we have an Amazon Echo, four Echo Dots, and Hue smart bulbs in the light fixtures of most rooms (oh, and we have the Amazon Dash Wand in case we want to carry Alexa around with us…). I haven’t physically turned on a light in any of our rooms in months. That’s weird. It happened with the stealth of a technology that slowly but surely creeps into your life and rewires your brain, the same way the first iPhone changed how I interact with the people I love. We even renamed all of our Alexa devices as “Computer” so that I can finally pretend I’m living on the Starship Enterprise. Once I have a holodeck, I’m never leaving the house.
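For the curious, “Alexa, turn off the lights” ultimately bottoms out in an API call; the Philips Hue bridge, for instance, exposes a local REST endpoint that accepts a PUT to /api/&lt;username&gt;/lights/&lt;id&gt;/state with a small JSON body. A minimal sketch that just builds that request (the bridge IP and username below are placeholder values, and actually sending the request is left to your HTTP client of choice):

```python
import json

def hue_light_request(bridge_ip, username, light_id, on):
    """Build the (method, url, body) for toggling one Hue bulb.

    The Hue bridge's local API takes a PUT to
    /api/<username>/lights/<id>/state with a JSON body like {"on": false}.
    """
    url = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    body = json.dumps({"on": on})
    return "PUT", url, body

# Placeholder bridge address and username for illustration only
method, url, body = hue_light_request("192.168.1.10", "devuser", 3, False)
print(method, url, body)
```

The voice assistant’s job is simply to translate “turn off the bedroom light” into a call like this, which is why the whole thing feels so seamless once it’s set up.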

And perhaps that’s the real trick to seeing this stealth revolution happen in front of our eyes and via our vocal cords… it’s not just voice-driven computing that is going to be the platform of the near future. In other words, voice won’t be the next big platform. It will be a combination of voice AND augmented reality AND artificial intelligence that powers how we communicate with ourselves, our homes, our environments, and the people we love (and perhaps don’t love). In twenty years, will my young son be typing on a keyboard in the same way I’m doing to compose this post? In ten years, will my 10-year-old daughter be typing on a keyboard to do her job or express herself?

I highly doubt both. Those computing processes will be driven by a relationship to a device representing an intelligence. Given that, as a species, we adapted to relational interaction built on physical cues and vocal exchanges over the last 70 million years, I can’t imagine that a few decades of “typing” has radically altered the way we prefer to communicate and exchange information. It’s the reason I’m not an advocate of teaching kids how to type (and I’m a ~90 wpm touch typist).

Voice combined with AI and AR (or whatever we end up calling it… “mixed reality” perhaps?) is the next big platform because these three will fuse into something the same way the web (as an experience) fused with personal computing to fuel the last big platform revolution.

I’m not sure Amazon will be the ultimate winner in the “next platform” wars that it is waging with Google (Google Assistant), Apple (Siri), Facebook (Messenger), and any number of messaging apps and startups that we haven’t heard of yet. However, our future platforms of choice will be very “human” in the same way we lovingly interact with the slabs of metal and glass that we all carry around and do the majority of our computing on these days. It’s hard to imagine a world where computers shrink to the size of fibers in our clothing and become transparent characters that we interact with to perform whatever we’ll be performing. But the future does not involve a keyboard, a mouse, and a screen of light-emitting diodes for most people (I hear you, gamers), and we’ll all see reality in even more differing ways than is currently possible as augmented reality becomes mainstream in the same way that true mobile devices did after the iPhone.

Maybe I just watched too much Star Trek: The Next Generation.

Our AI Assisted (Near) Future


Courtbot was built with the city of Atlanta in partnership with the Atlanta Committee for Progress to simplify the process of resolving a traffic citation. After receiving a citation, people are often unsure of what to do next. Should they appear in court, when should they appear, how much will the fine cost, or how can they contest the citation? The default is often to show up at the courthouse and wait in line for hours. Courtbot allows the public to find out more information and pay their citations.

Source: CourtBot · Code for America

Merianna and I were just talking about the implications of artificial intelligence and interactions with personal assistants such as my beloved Amy.

The conversation came about after we decided to “quickly” stop by a Verizon store and upgrade her phone (she went with the iPhone SE, btw… tiny but impressive). We ended up waiting 45 minutes in a relatively sparse store before being helped with a process that took all of five minutes. With a seven-month-old baby, that’s not a fun way to spend a lunch break.

The AI Assistant Talk

We were in a part of town that we don’t usually visit, so I opened up the Ozlo app on my phone and decided to see what it recommended for lunch. Ozlo is a “friendly AI sidekick” that, for now, recommends meals based on user preferences in a messaging format. It’s in a closed beta, but if you’re up for experimenting, it hasn’t steered me wrong over the last few weeks of travel and in-town meal spots. It suggested a place that neither one of us had ever heard of, and I was frankly skeptical. But with the wait and a grumpy baby, we decided to try it out. Ozlo didn’t disappoint. The place was tremendous; we both loved it and promised to return often. Thanks, Ozlo.

Over lunch, we discussed Ozlo and Amy, and how personal AI assistants were going to rapidly replace the tortured experience of having to do something like visit a cell provider store for a device upgrade (of course, we could have just gone to a Best Buy or ordered straight from Apple as I do for my own devices, but most people visit their cell provider’s storefront). I said that I couldn’t wait to message Amy and tell her to find the best price on the 64 GB Space Gray iPhone SE, order it, have it delivered the next day, and hook it up to my Verizon account. Or message Amy and ask her to take care of my traffic ticket with the bank account she has access to. These are menial tasks that can be partly accomplished with “human”-powered services like TaskRabbit, Fancy Hands, or the new Scale API. However, I’d like for my assistant to be virtual in nature because I’m an only child and I’m not very good at trusting other people to get things done in the way I want them done (working on that one!). Plus, it “feels” weird for me to hire out something that I “don’t really have time to do” even if they are willing and more than ready to accept my money in order to do it.

Ideally, I can see these personal AI assistants interfacing with the human services like Fancy Hands when something requires an actual phone call or physical world interaction that AI simply can’t (yet) perform such as picking up dry cleaning.

I don’t see this type of workflow being something just for elites or geeks, either. Slowly but surely, with innovations like Siri or Google Now or just voice-assisted computing, a large swath of the population (in the U.S.) is becoming familiar with the training wheels of AI-driven personal assistants. It’s not hard to imagine that very soon my Amy will be interacting with Merianna’s Amy to help us figure out a good place and time to meet for lunch (Google Calendar is already quasi doing this, though without the personal assistant portion). Once Amy or Alexa or Siri or Cortana or whatever personality Google Home’s device has is able to tap into services like Amy or Scale, we’re going to see some very interesting innovations in “how we get things done.” If you have a mobile device (which most adults and a growing number of young people do), you will have an AI assistant that helps you get very real things done in ways that you wouldn’t think possible now.
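Under the hood, “my Amy talking to Merianna’s Amy” about lunch mostly boils down to intersecting two calendars’ free windows. Here’s a toy sketch of that negotiation in Python; the dates and busy blocks are hypothetical, and this is an illustration of the idea, not x.ai’s actual protocol:

```python
from datetime import datetime, timedelta

def free_slots(busy, day_start, day_end):
    """Return free (start, end) windows given a list of busy intervals."""
    free, cursor = [], day_start
    for start, end in sorted(busy):
        if start > cursor:
            free.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        free.append((cursor, day_end))
    return free

def first_common_slot(busy_a, busy_b, day_start, day_end, minutes=60):
    """First window of at least `minutes` that is free on both calendars."""
    need = timedelta(minutes=minutes)
    slots_b = free_slots(busy_b, day_start, day_end)
    for a_start, a_end in free_slots(busy_a, day_start, day_end):
        for b_start, b_end in slots_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if end - start >= need:
                return start, start + need
    return None

# Hypothetical Monday: one assistant knows my meetings, the other knows hers
day_start = datetime(2017, 1, 9, 9, 0)
day_end = datetime(2017, 1, 9, 17, 0)
busy_a = [(datetime(2017, 1, 9, 9, 0), datetime(2017, 1, 9, 11, 30)),
          (datetime(2017, 1, 9, 13, 0), datetime(2017, 1, 9, 14, 0))]
busy_b = [(datetime(2017, 1, 9, 10, 0), datetime(2017, 1, 9, 12, 30))]
print(first_common_slot(busy_a, busy_b, day_start, day_end))
```

The hard part for real assistants isn’t this arithmetic; it’s the trust and API plumbing needed for two services to share each other’s availability at all.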

“Nah, this is just buzzword futurisms. I’ll never do that or have that kind of technology in my life. I don’t want it.” People said the same thing about buying groceries or couches or coffee on their phones in 2005. We said the same thing about having a mobile phone in 1995. We said the same thing about having a computer in our homes in 1985. We said the same thing about ever using a computer to do anything productive in 1975. We said the same thing about using a pocket calculator in 1965.

In the very near future of compatible APIs and interconnected services, I’ll be able to message this to my AI assistant (saving me hours):

“Amy, my client needs a new website. Get that set up for me as a new WordPress install on the agency’s Media Temple account and set up four email accounts with the following names. Also, go ahead and link the site to Google Analytics and Webmaster Tools, and install Yoast to make sure the SEO is okay. I’ll send over some tags and content, but pull the pictures you need from their existing account. They like having lots of white space on the site as well.”

That won’t put me out of a job, but it will make what I do even more specialized.

Whole sectors of jobs and service-related positions will disappear while new jobs that we can’t think of yet are created. If we look at the grand scheme of history, we’re just at the very beginning of the “computing revolution” or “internet revolution,” and the keyboard / mouse / screen paradigm of interacting with the web and computers themselves is certainly going to change (soon, I hope).