Book Review: John Longhurst’s Can Robots Love God and Be Saved?

As someone with a background on the cutting-edge side of marketing, technology, and education, and someone often referred to as a futurist who is nonetheless fascinated by ethical and theological questions, I found John Longhurst’s “Can Robots Love God and Be Saved?” (CMU Press, 2024) to be a fascinating exploration of the convergence of cutting-edge technology, ethical considerations, and theological inquiry. The book speaks directly to my passions and professional experience, offering a unique perspective on the future of faith in a rapidly evolving world where concepts such as artificial intelligence (and AGI) must be considered through both technological and theological lenses. 

A seasoned religion reporter in Canada, John Longhurst tackles topics that bridge faith and modern societal challenges. The book is structured into sections addressing different aspects of faith in contemporary life, including mental health, societal obligations, and the intriguing possibilities of artificial intelligence within religious contexts. These sections are built from Longhurst’s interviews with a wide cast of characters.

Longhurst discusses the ongoing challenges many face with mental illness and the role faith communities play in providing support. This aligns with my work in consulting and education, emphasizing the need for understanding and empathy in addressing mental health issues, whether in the classroom or the broader community. He also delves into Christians’ duty to pay taxes and support societal welfare, raising essential questions about the practical application of faith from a variety of perspectives. I found this particularly relevant when contemplating the intersection of personal beliefs and civic responsibility, echoing ethical marketing practices and corporate social responsibility principles.

Exploring the deep bonds between humans and their pets, Longhurst touches on the theological implications of animals in heaven. This can be a fascinating topic in environmental science discussions, highlighting the interconnectedness of all life forms and reflecting on how technology (like AI in pets) might change our relationships with animals. The book also delves into ethical concerns about government surveillance from a religious standpoint, providing an excellent case study for understanding the balance between security and privacy rights—a crucial consideration in both marketing and technology sectors where data privacy is paramount.

One of the most thought-provoking sections of the book delves into AI’s potential role in religious practices. Longhurst’s exploration of whether robots can participate in spiritual activities and even achieve salvation is a direct intersection of my interests in technology and ethics. It raises profound questions about the future of faith, challenging traditional theological boundaries and offering a glimpse into future innovations in religious practice.

Longhurst also examines how religious communities can address the loneliness epidemic, which I found particularly engaging. The sense of belonging and support provided by faith groups is mirrored in the need for community in education and the workplace. Technology, particularly social media and AI, can play a role in mitigating loneliness, but it also highlights the need for genuine human connection. That question is also one of my motivators when setting up a marketing strategy: How does this product, service, or technology help establish more genuine human connectivity?

Additionally, the book ponders the existence of extraterrestrial life and its implications for religious beliefs. This speculative yet fascinating topic can engage students in critical thinking about humanity’s place in the universe, much like futuristic marketing strategies encourage us to envision new possibilities and innovations. This is a hot topic, with other books such as American Cosmos making many “must read” lists this year, and with general interest in extraterrestrial life, Unidentified Aerial Phenomena (UAP), and Non-Human Intelligence (NHI) very much part of cultural conversations these days.

Longhurst’s exploration of AI and its potential spiritual implications is particularly compelling from a marketing and technology perspective. As someone who thrives on being at the cutting edge, this book fuels my imagination about the future intersections of technology and spirituality. The ethical questions raised about AI’s role in religious practices are reminiscent of the debates we have in marketing about the ethical use of AI and data analytics.

The work is a thought-provoking collection that challenges readers to consider the evolving role of faith amidst technological advancements. Longhurst’s ability to tackle complex and often controversial topics with nuance and empathy makes this book a valuable resource for educators, faith leaders, technologists, and marketers alike. It provides a rich tapestry of discussions that can be seamlessly integrated into lessons on environmental science, ethics, technology, and even literature in a succinct and “quick-read” fashion.

“Can Robots Love God and Be Saved?” is a compelling exploration of how faith intersects with some of the most pressing issues of our time. It is a fascinating read for anyone interested in understanding the future of spirituality in a world increasingly shaped by technology, grounded in first-hand reporting rather than a purely academic or “one-sided” perspective. For those of us on the cutting edge, whether in marketing, technology, or education, this book offers a profound and thought-provoking look at the possibilities and challenges ahead.

Good read!

AI Video Generators

OpenAI’s Sora is impressive, but the number of text-to-video AI generators being released (especially from China) points to a very real moment that we all need to pause and reflect upon. The coming year (I would’ve said the coming 2-3 years back in March) is going to be fascinating, haunting, and challenging all at once…

Introducing Gen-3 Alpha: A New Frontier for Video Generation:

Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models.

More from Runway’s X account here.

Accelerationism: What Are We Doing to Ourselves?

Here’s your word for today as Apple’s WWDC looks to include an announcement of a major partnership with OpenAI (the folks behind ChatGPT) to make Siri much closer to an artificial intelligence (or “Apple Intelligence” as the marketing goes) assistant.

Accelerationism.

It’s a term that’s been used in the tech world for years, but the mindset (mind virus?) has reached new levels in the post-ChatGPT-4 era we now live in, as we await what feels like the imminent release of something even more powerful in the coming months or years.

Here’s an article from 2017 about the term accelerationism and accelerationists: 

Accelerationism: how a fringe philosophy predicted the future we live in – The Guardian: 

Accelerationists argue that technology, particularly computer technology, and capitalism, particularly the most aggressive, global variety, should be massively sped up and intensified – either because this is the best way forward for humanity, or because there is no alternative. Accelerationists favour automation. They favour the further merging of the digital and the human. They often favour the deregulation of business, and drastically scaled-back government. They believe that people should stop deluding themselves that economic and technological progress can be controlled. They often believe that social and political upheaval has a value in itself.

With my mind heavy on what the Apple / OpenAI partnership might look like as WWDC starts in just a few minutes (this feels like it could be a historically important moment), Ted Gioia made this thought-provoking post on the realization that we are unknowingly doing to ourselves what Dr. Calhoun did to his poor mice in the famous Universe 25 experiment of the 1960s.

It’s worth your time to read this and ponder our own current situation.

Is Silicon Valley Building Universe 25? – by Ted Gioia:

Even today, Dr. Calhoun’s bold experiment—known as Universe 25—demands our attention. In fact, we need to study Universe 25 far more carefully today, because zealous tech accelerationists—that’s now a word, by the way—aim to create something comparable for human beings. What would you do if AI took care of all your needs?

After being in the classroom for the last three years of “post-Covid” education, I’ve seen how many young people are absolutely struggling with mental health, and how little schools of any sort, from public to private like the ones where I taught, are doing to help them. Given that, it’s shocking that we’ll send stocks soaring on big tech news today that will increase our swipes and screen time and lead us further down the primrose path toward a future of disconnection, violence, and mental health disaster.

New iPhones Get 5 Year Support

Now add in right-to-repair principles and more ethical mineral procurement (for batteries and the like, compared to the current terrible conditions and practices), and I’ll be happy!

Apple will update iPhones for at least 5 years in rare public commitment | Ars Technica:

Apple has taken a rare step and publicly committed to a software support timeline for one of its products, as pointed out by MacRumors. A public regulatory filing for the iPhone 15 Pro (PDF) confirms that Apple will support the device with new software updates for at least five years from its “first supply date” of September 22, 2023, which would guarantee support until at least 2028.

Anxious Generation Study

Ted’s entire newsletter is a worthy read here, but one part stood out: new research indicating that growing up in a phone-based culture is doing real harm and damage to the current generation of young people globally. It makes me think back to the tobacco industry pretending that cigarettes don’t hurt people, or the petroleum companies hiding the neurological effects of leaded gasoline, and so on…

Crisis in the Culture: An Update – by Ted Gioia:

Haidt declared victory on social media: “There are now multiple studies showing that a heavily phone-based childhood changes the way the adolescent brain wires up, in many ways including cognitive control and reward valuation.”

We still need more research. But we can already see that we’re dealing with actual physiological decline, not just pundits’ opinions.

At this point, the debate isn’t over whether this is happening. Instead we now need to gauge the extent of the damage, and find ways of protecting people, especially kids.

AI and Bicycle of the Mind

I don’t have the same optimism that Thompson does here, but it’s a good read and worth the thought time!

The Great Flattening – Stratechery by Ben Thompson:

What is increasingly clear, though, is that Jobs’ prediction that future changes would be even more profound raise questions about the “bicycle for the mind” analogy itself: specifically, will AI be a bicycle that we control, or an unstoppable train to destinations unknown? To put it in the same terms as the ad, will human will and initiative be flattened, or expanded?

Why I am Using a Light Phone

I have lots more to say about this, but I wanted to share this vital part of a recent article about “dumbphones” in The New Yorker. I’ve been attempting to be much more deliberate about using technology and devices, especially in front of my children and students.

The Light Phone (and Camp Snap camera) have been a significant part of that effort. I’ve been in love with the Light Phone since converting from an iPhone earlier this year.

The Dumbphone Boom Is Real | The New Yorker:

Like Dumbwireless, Light Phone has recently been experiencing a surge in demand. From 2022 to 2023, its revenue doubled, and it is on track to double again in 2024, the founders told me. Hollier pointed to Jonathan Haidt’s new book, “The Anxious Generation,” about the adverse effects of smartphones on adolescents. Light Phone is receiving increased inquiries and bulk-order requests from churches, schools, and after-school programs. In September, 2022, the company began a partnership with a private school in Williamstown, Massachusetts, to provide Light Phones to the institution’s staff members and students; smartphones are now prohibited on campus. According to the school, the experiment has had a salutary effect both on student classroom productivity and on campus social life. Tang told me, “We’re talking to twenty to twenty-five schools now.”

The Museum of Me

The Museum of You – Herbert Lui:

I see a lot of discussion on how people miss blogs, and RSS, and internet culture before what we call Web 2.0 (social media, platforms, ecommerce, etc.) came along and wiped it away. 

The best way to pay homage is to bring it back—to set up our own blogs that we control, to preserve our own libraries of content in multiple places so they don’t disappear with social media, to actively document our lives the way we miss and the way we would want to be remembered. We can choose a responsibility, every day, to collect the best of what came before us, to embody it, and to preserve it by sharing its charms with other people.

Much agreed, and this is one of the reasons I’ve kept my own blog and podcast here since 2006. I thought back then, “What if these awesome new tools like MySpace (or early Twitter) somehow go away or fall into the hands of the wrong leaders?” 

I read previous posts and thoughts here occasionally and marvel at how naive, bold, brave, or afraid I was at various points in my life. Now looking back on this Museum of Me, I can glimpse previous iterations of my own self and perceptions and not just remember but learn. 

Blogs like this, however silly they may seem in the face of social media apps, are powerful places!
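For the technically inclined, the “preserve our own libraries of content in multiple places” idea is straightforward to act on. Here’s a minimal, hypothetical sketch (the feed content and file layout are just illustrations, not any particular tool) that parses a blog’s RSS feed and writes each post to a plain-text file you control:

```python
# Sketch: archive RSS items as local text files you own.
# In practice you'd fetch your blog's real feed URL; here we
# use an inline sample feed so the example is self-contained.
import tempfile
import xml.etree.ElementTree as ET
from pathlib import Path

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>My Blog</title>
  <item><title>First Post</title>
    <pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
    <description>Hello, open web.</description></item>
  <item><title>Second Post</title>
    <pubDate>Tue, 02 Jan 2024 00:00:00 GMT</pubDate>
    <description>Still here.</description></item>
</channel></rss>"""

def archive_feed(feed_xml: str, out_dir: Path) -> list[Path]:
    """Write each RSS <item> to its own text file; return the paths."""
    root = ET.fromstring(feed_xml)
    saved = []
    for i, item in enumerate(root.iter("item")):
        title = item.findtext("title", default=f"untitled-{i}")
        date = item.findtext("pubDate", default="")
        body = item.findtext("description", default="")
        slug = title.lower().replace(" ", "-")
        path = out_dir / f"{i:03d}-{slug}.txt"
        path.write_text(f"{title}\n{date}\n\n{body}\n")
        saved.append(path)
    return saved

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        files = archive_feed(SAMPLE_FEED, Path(d))
        print(f"Archived {len(files)} posts")
```

Run it on a cron job against your real feed and you have a second copy of your writing that no platform can take away.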

OpenAI’s Lens on the Near Future

Newton has the best take I’ve read (and I’ve read a lot) on the ongoing OpenAI / Sam Altman situation… worth your time:

OpenAI’s alignment problem – by Casey Newton – Platformer:

At the same time, though, it’s worth asking whether we would still be so down on OpenAI’s board had Altman been focused solely on the company and its mission. There’s a world where an Altman, content to do one job and do it well, could have managed his board’s concerns while still building OpenAI into the juggernaut that until Friday it seemed destined to be.

That outcome seems preferable to the world we now find ourselves in, where AI safety folks have been made to look like laughingstocks, tech giants are building superintelligence with a profit motive, and social media flattens and polarizes the debate into warring fandoms. OpenAI’s board got almost everything wrong, but they were right to worry about the terms on which we build the future, and I suspect it will now be a long time before anyone else in this industry attempts anything other than the path of least resistance.