When Intelligence Becomes Land Use (or When the Cloud is Made of Land)

Much of the conversation around Project Spero, the proposed AI data center here in Spartanburg, has revolved around a familiar set of questions, the same framework we tend to use for processing development in general. How many jobs will it bring? How much tax revenue will it generate? Will it strain our power grid? Will it draw too heavily from our water systems? What are the environmental impacts?

These are certainly necessary questions. They are practical, measurable, and tied to the immediate realities of governance and infrastructure. However, they are not the only questions worth asking, nor should they be the sole source of our concern and attention, despite the competing marketing messages meant to shape public discourse.

Beneath the debates about megawatts and gallons per minute lies a quieter transformation that is harder to see but just as consequential. Projects like this do not simply add another industrial facility to the landscape. They introduce a new kind of presence into a place. They materialize intelligence.

For generations, land use in the Carolina Piedmont has followed recognizable patterns. Fields became suburbs, forests became highways, and rivers became reservoirs, while textile mills rose and fell. Logistics hubs replaced smokestacks. Each phase reorganized the landscape around a dominant economic logic… agriculture, manufacturing, distribution.

Now something different is emerging. Proposed AI infrastructure here in Spartanburg and throughout the Southeast of the United States does not primarily produce goods, textiles, or even physical services. Its purpose is to process cognition. To store, refine, and distribute decision-making capacity and contribute to the global chain of commodifying intelligence.

In effect, this all turns land into substrate for thinking.

This may sound abstract, but its implications are intensely material. Data centers are among the most physically demanding infrastructures ever built. They require enormous electricity flows, steady access to water for cooling, stable transmission corridors, and continuous connectivity. They generate heat that must be managed. They demand redundancy and resilience. In other words, they reorganize ecosystems to support continuous computation.

The Piedmont is not being asked simply to host an industry, but to sustain a new layer of perceived planetary intelligence to meet the resource needs of large language models. I think that changes the conversation.

When farmland became suburbia, we asked whether roads could handle the traffic. When distribution centers arrived, we asked whether zoning permitted increased truck traffic. But when intelligence becomes land use, the questions shift in ontological and material ways that we’re not yet equipped to process.

How much river becomes cooling capacity? How much forest becomes a transmission corridor? How much atmospheric stability becomes heat dissipation? How much regional resilience is redirected toward maintaining uninterrupted cognition?

Human systems do not float above ecological limits. They are embedded within them. AI infrastructure does not escape this reality at all; rather, it intensifies it.

What we are witnessing in places like Spartanburg is not simply economic development. It is the localization of a global cognitive metabolism. Decisions made in distant financial centers or algorithmic markets are beginning to rely on landscapes like ours for their material continuation.

The cloud, it turns out, is made of land.

This does not make projects like Spero inherently good or bad. But it does make them more consequential than the language of “jobs versus environment” suggests.

We are no longer deciding whether to permit another factory or mill. We are deciding whether this landscape will participate in sustaining planetary-scale computation, and it’s a different kind of civic choice.

It asks us not only to measure output and impact, but to reflect on orientation. What kinds of futures are we grounding here? What relationships between land, water, and intelligence are we normalizing? And perhaps most importantly… what forms of attention will this infrastructure train us to attend to (or be attended by)?

Because once intelligence becomes land use, the question is no longer only what we build on the land. It is what kind of world the land is being asked to think into being.

Project Spero Pauses… Real Questions Are Just Beginning

When I last wrote about Project Spero earlier this month, the proposed AI data center slated for the Tyger River Industrial Park here in Spartanburg County, the story felt like it was accelerating toward inevitability. However, something interesting has happened.

Momentum has slowed.

According to recent reporting, Spartanburg County Council now appears weeks away from a third reading and final decision on whether to grant the tax incentives needed to bring TigerDC’s massive facility here. Yet a council member who previously supported the project is now signaling that it may not move forward at all, following widespread public opposition and mounting questions about infrastructure readiness.

Thousands of residents have signed petitions opposing the project, and hundreds have shown up at recent hearings to raise concerns about energy demand, water use, and long-term environmental impacts.

In other words, this is no longer just a story about development possibilities; it is becoming a community discernment moment about what kind of intentional development we want in the local context.

The Shape of the Project Is Becoming Clearer

We are finally learning more about what Project Spero entails.

TigerDC has indicated the facility could eventually reach up to 400 megawatts of energy demand, with an initial phase closer to 100 MW. For perspective, that level of power draw is often compared to the energy consumption of a mid-sized city like Spartanburg.
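To make that comparison concrete, here is a hedged back-of-envelope sketch. The 400 MW and 100 MW figures come from the reporting above; the assumption that the facility draws power continuously, and the average-household usage figure, are mine, added purely for illustration of scale.

```python
# Back-of-envelope sketch of what the reported load figures could imply annually.
# ASSUMPTIONS (not from the reporting): continuous operation, and a rough
# US-average household usage of ~10,700 kWh/yr. Treat results as magnitudes only.

HOURS_PER_YEAR = 8_760

def annual_mwh(load_mw: float, capacity_factor: float = 1.0) -> float:
    """Energy consumed over one year by a load at the given capacity factor."""
    return load_mw * HOURS_PER_YEAR * capacity_factor

full_buildout_mwh = annual_mwh(400)   # 400 MW continuous -> 3,504,000 MWh/yr
initial_phase_mwh = annual_mwh(100)   # 100 MW continuous ->   876,000 MWh/yr

ASSUMED_HOUSEHOLD_KWH = 10_700        # hypothetical average household usage
households = full_buildout_mwh * 1_000 / ASSUMED_HOUSEHOLD_KWH

print(f"{full_buildout_mwh:,.0f} MWh/yr ≈ {households:,.0f} households")
```

A real facility would run below a 1.0 capacity factor, so these numbers are an upper bound; the point is simply that a 400 MW draw sits in city-scale territory, consistent with the comparison above.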

Company representatives say the project would rely partly on on-site natural gas generation (which, in itself, raises a number of issues) while also drawing from the regional grid, and they insist that the buildout would strengthen infrastructure rather than strain it. They also point to potential economic benefits, including a limited number of jobs (50?) and hundreds of millions in projected tax revenue over decades.

But the concerns voiced by residents cut to the heart of a deeper issue… even if this project is financially beneficial (for whom?), is our ecological and civic infrastructure prepared to absorb it?

Because data centers do not simply sit on land. They metabolize it.

The Infrastructure Question Has Come Into Focus

Opposition to the data center has wisely moved beyond the “is AI good or bad?” rhetoric, as far as I’ve been reading, to focus on whether Spartanburg’s systems are ready. Residents have raised concerns about electrical grid capacity, water use for cooling, air emissions from on-site generation, and noise from proximity to residential communities.

These are not abstract worries. Large-scale data centers are known to consume vast amounts of both electricity and water, and local critics are asking whether the Upstate’s systems, already under seasonal strain, can realistically support another industrial-scale load.

So the main infrastructure question (in my mind) should be “What will this require from the land and the people who live here long-term?”

A key turning point in whether Project Spero moves ahead with the County Council’s blessing may be the proposed tax arrangement. County leaders are considering allowing TigerDC to pay a reduced fee-in-lieu-of-taxes rate of 4% rather than the standard 10.5% for up to 40 years. That incentive appears crucial to the project’s viability given the financial stakes for TigerDC.
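As a rough illustration of what that rate gap means, here is a hedged sketch. The 4% and 10.5% ratios and the 40-year term come from the reporting above; the $3 billion base is the project cost reported later in this story, used only as a stand-in for scale. Real fee-in-lieu-of-taxes agreements also involve millage rates and depreciation schedules, so this shows relative magnitude, not actual tax dollars.

```python
# Hedged sketch of the proposed fee-in-lieu-of-taxes (FILOT) gap.
# NOTE: 4% and 10.5% are assessment ratios, not effective tax rates; actual
# payments also depend on millage and depreciation. The $3B base is the
# reported project cost, used purely as an illustrative stand-in.

INVESTMENT = 3_000_000_000
STANDARD_RATIO = 0.105   # standard assessment ratio
FILOT_RATIO = 0.04       # proposed reduced ratio
TERM_YEARS = 40

standard_base = INVESTMENT * STANDARD_RATIO   # ~$315M assessed base per year
filot_base = INVESTMENT * FILOT_RATIO         # ~$120M assessed base per year

# The taxable base shrinks by roughly 62% under the proposed agreement.
reduction = 1 - FILOT_RATIO / STANDARD_RATIO
print(f"Assessed base cut by {reduction:.0%} for up to {TERM_YEARS} years")
```

Whatever the precise dollar figures turn out to be, the structural point stands: the discount compounds over four decades, which is why the arrangement matters so much to both the developer and the county.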

If a project requires long-term public subsidy to arrive, who carries the long-term ecological cost once it does?

This Is No Longer Just About Technology

Across the political spectrum, residents are beginning to articulate a shared concern that growth is not neutral in our local communities. The siting of digital infrastructure is also the siting of energy systems, water systems, emissions, and land-use transformations. AI is often described as weightless or virtual or “cloud-based” in clever marketing and PR speak. But the reality is quite the opposite. Data centers are grounded in turbines, pipelines, cooling systems, transmission lines, and land that not-so-quietly consumes incredible amounts of water, power, atmospheric quality, and community well-being.

In other words, in ecology.

Questions That Still Need to Be Asked

Even as the project’s future remains uncertain, several key questions remain unanswered:

How much water will be required at full buildout?

What happens to regional grid stability during peak demand or extreme weather events?

How will emissions from on-site gas generation be monitored?

What guarantees exist regarding long-term infrastructure upgrades?

What happens if the project expands beyond its initial phase?

And perhaps most importantly:

Who gets to decide what kind of future Spartanburg is building?

Hope, in the Older Sense

It’s worth remembering the meaning behind the name Spero, drawn from South Carolina’s motto, Dum spiro, spero…

“While I breathe, I hope.”

Hope, in this older sense, is not optimism. It is attention.

The recent slowing of this project does not mean it will disappear. A final vote is still approaching, regardless of the third reading’s outcome. But it does suggest something healthy: that our community is pausing long enough to ask what kind of relationship it wants with the infrastructures shaping its future.

That pause may turn out to be the most important development of all!

Empathy and Imagination as Practices of Hope

It’s not difficult to feel pessimistic right now, especially after last night’s State of the Union and its divisiveness on all sides of the aisle, all of us seemingly impotent against the slouching towards Gomorrah.

The thing that we’re all afraid of has multiple names beyond human words.

Every morning news cycle seems to stack another layer onto an already crowded horizon… ecological instability, biodiversity loss, accelerating AI systems, widening economic uncertainty, political fracture, school shootings, and the persistent drumbeat of conflict. None of these is an abstract trend. They show up in the texture of daily life… in energy debates here in the Carolinas, in conversations about data centers and water use, in classrooms, churches, and at family tables, and even in the quiet unease many of us feel about the technological systems reshaping our attention and labor.

The temptation is to respond with denial, despair, or an eternal, paralyzing grief. Denial insists things aren’t really that bad. Despair insists nothing can be done. Both short-circuit meaningful engagement. The algorithms program us to this more than we program the algorithms. Same as it ever was.

But for me, the path toward something like grounded optimism has increasingly come down to two intertwined capacities: empathy and imagination.

Not optimism as cheerfulness, or optimism as naive confidence, but optimism as a disciplined openness to possibility within real limits.

Empathy as a Way of Knowing

Empathy is often treated as a moral trait, something we either have or lack (or should eschew). But phenomenologically, it is better understood as a mode of perception.

Edith Stein described empathy not as projecting ourselves into another, nor as observing them from a safe distance, but as a distinctive act in which another’s experience is given to us as genuinely theirs… irreducibly other, yet meaningfully present. Empathy does not collapse difference. It allows relation without possession.

When expanded beyond human-to-human encounters, this becomes an ecological capacity.

To practice ecological empathy is to recognize that forests, rivers, species, and landscapes are not merely resources or backdrops. They are participants in shared conditions of life. Sitting with the black walnut in my backyard here in Spartanburg has taught me more about this than any abstract theory. The tree does not “speak” in human language, yet its seasonal rhythms, vulnerabilities, and persistence disclose a form of presence that invites response. Empathy here is not sentimental projection. It is attentiveness to relational reality.

This matters for optimism because despair often grows from abstraction. When the world is reduced to statistics, models, and catastrophic projections, it becomes psychologically uninhabitable. Empathy returns us to situated relation. It anchors concern in concrete encounters rather than overwhelming totals.

We do not save “the environment.” We learn to live differently with the places and beings already shaping our lives.

Imagination as the Extension of Empathy

If empathy opens us to the reality of others, imagination opens us to possible futures with them.

Imagination is frequently dismissed as escapist or unrealistic, but historically it has been one of humanity’s most practical tools. Every social institution, technological system, ethical reform, or ecological restoration effort began as an imagined alternative to what currently existed.

The crises we face today are not only technical. They are narrative and perceptual. Climate models can tell us what may happen. Economic forecasts can outline risks. AI researchers can map trajectories. But none of these, by themselves, generate livable futures. That requires the imaginative capacity to envision forms of coexistence that do not yet fully exist.

This is why ecological thinkers from Thomas Berry to Joanna Macy have emphasized the importance of story. Without imagination, data produces paralysis. With imagination, data becomes orientation.

Imagination does not deny danger. It prevents danger from becoming destiny.

Why These Matter in the Age of AI

Artificial intelligence intensifies this dynamic.

AI systems increasingly mediate how we work, communicate, and interpret information. They promise efficiency while also raising questions about labor, creativity, authorship, and the ecological costs of computation itself. It is easy to frame this moment as a competition between humans and machines, or as a technological inevitability moving beyond human control.

Empathy and imagination disrupt that framing.

Empathy reminds us that technological systems are embedded in human and ecological contexts. Data centers draw on water and energy. Algorithms shape social behavior. Design choices reflect values. These systems are not autonomous destinies but relational infrastructures whose impacts are distributed across communities and landscapes.

Imagination, meanwhile, allows us to ask better questions than “Will AI replace us?” Instead we can ask: What forms of human and more-than-human flourishing should technology support? What would a genuinely ecological technological future look like? What practices of attention, education, and governance might guide development in that direction?

Without imagination, AI becomes fate, but with imagination, it becomes a field of ethical and ecological design.

Optimism as a Practice, Not a Prediction

The kind of optimism I find credible today is not based on predictions about outcomes. It is based on practices that keep possibilities open.

Empathy keeps us relationally awake.
Imagination keeps us temporally open.

Together, they resist the two dominant distortions of our moment: the reduction of the world to objects and the reduction of the future to inevitabilities.

When we practice empathy, we perceive that the world is still alive with agencies, relationships, and meanings that exceed our control. When we practice imagination, we acknowledge that the future is still under construction, shaped not only by systems but by perception, story, and choice.

This does not eliminate risk. It does not guarantee success. But it sustains participation.

And participation, more than prediction, is what hope requires.

A Quiet Form of Hope

Some mornings, optimism looks less like a grand vision and more like a small act of attention.

Watching the black walnut shift through seasons. Seeing our children learn to perceive and adapt to new challenges, from math problems to social interactions to losing the championship in a youth basketball league, and listening carefully to a student’s question. Reimagining how a church, classroom, or local community might respond differently to ecological pressures. Writing, teaching, or building something that nudges perception toward relation instead of domination.

None of these solves global crises on its own, but they do cultivate the perceptual habits from which meaningful change becomes thinkable.

Empathy grounds us in the reality of shared life while imagination opens that shared life toward futures not yet fixed.

In a time when so much feels predetermined, these two capacities remain profoundly human… and profoundly necessary.

And for me, that is reason enough to remain cautiously, actively optimistic.

Project Spero Data Center Advances in Spartanburg: Power, Water, and the Real Resource Question

When I wrote recently about Project Spero here in Spartanburg and the unfolding “resource question,” the story still felt open, and we didn’t have many details beyond platitudes, so my thoughts were suspended between promise and caution.

This week, it moved. Spartanburg County Council approved the next step for the proposed artificial-intelligence data center after a packed, tense public meeting, advancing the roughly $3 billion project despite vocal opposition from residents concerned about its environmental and infrastructural impacts. The meeting stretched for hours, with hundreds of people filling the chamber and hallway to voice concerns about the scale of the facility planned for the Tyger River Industrial Park. In other words, the decision process is no longer theoretical. It is unfolding in real time (and hopefully with more transparency), and that matters for the path ahead.

Large data center announcements are consistently appearing in public discourse (at least here in the Carolinas), wrapped in abstraction and NDAs, surrounded by investment totals, job counts, and innovation narratives that feel distant from everyday life. But once approvals begin, the conversation shifts from what might happen to what must now be managed. Water withdrawals stop being projections, and power demand stops being modeled. Land use stops being conceptual while all of this becomes material. The movement of Project Spero into the next phase signals that Spartanburg is entering precisely that transition, moving from imagining a future to negotiating its physical cost.

One of the most striking claims emerging from the latest reporting is the developer’s insistence that the proposed AI data center will be “self-sufficient,” operating without straining local infrastructure or putting upward pressure on energy bills. On the surface, that language sounds reassuring, suggesting a facility that exists almost in isolation, drawing only on its own internal systems while leaving the surrounding community untouched.

However, this is precisely where the deeper resource questions I raised earlier become more important, not less. Infrastructure rarely, if ever, functions as an island. Power generation, transmission agreements, water sourcing, fuel supply, and long-term maintenance all unfold within shared regional systems, even when parts of the process occur on-site.

The broader context makes that reassurance harder to take at face value. Large data centers elsewhere have been documented consuming millions of gallons of water per day, and electricity costs have risen sharply in regions where such facilities cluster, with those increases often eventually distributed across customers rather than absorbed privately. That does not mean Spartanburg will necessarily follow the same pattern, but it does mean the conversation cannot end with a press release promise. If anything, the national trajectory suggests the need for clearer disclosure, not simpler assurances.

Local concerns voiced at the council meeting point to exactly this tension. Questions about transmission agreements, cost structures, and regulatory oversight are not abstract procedural details. They are the mechanisms through which “self-sufficiency” is tested in practice. The reported rejection of a large transmission proposal by federal regulators because of potential cost-shifting onto ratepayers highlights how easily infrastructure investments intended for a single industrial project can ripple outward into the broader grid. What appears contained at the planning stage can become shared responsibility over time, particularly when long-term demand growth, maintenance needs, or energy market shifts enter the picture.

The developer’s plan to generate some power on-site using natural gas, along with a closed-loop cooling system designed to limit water use, is significant and worth taking seriously. Those design choices suggest an awareness of public concern and an attempt to mitigate resource draw. But even here, the key question is not simply how much water or power is used inside the facility’s literal boundary fence. The real issue is how those systems connect to fuel supply chains, regional water tables, transmission reliability, and emergency contingencies. A closed loop still depends on an initial fill and ongoing operational stability. On-site generation still relies on pipelines, markets, and regulatory frameworks beyond the site itself. “Self-sufficient” in engineering terms doesn’t mean independent in ecological or civic terms.

This is exactly why the earlier framing of Project Spero as a resource question still holds. The challenge is not whether the developer intends to minimize impact. Most large projects today do for a variety of reasons, from economics to public goodwill to tax incentives. The challenge is that digital infrastructure, such as data centers, operates at scales where even minimized impacts can be structurally significant for smaller regions. Spartanburg is not just deciding whether to host a facility, but is deciding how much of its long-term water, energy capacity, and landscape stability should be oriented toward supporting global computational systems whose primary benefits may be distributed far beyond the county line.

The Council meeting itself was contentious, emotional, and at times interrupted by public reaction. It would be easy to read that as dysfunction, but I read it differently. That level of turnout suggests something deeper than simple opposition or support. Instead, local turnout for this sort of decision signals that residents recognize it touches fundamental questions about the region’s future and what counts as development in a place defined as much by rivers, forests, and communities as by industrial parks. Public tension often marks the moment when a community realizes that a project is not just economic but ecological and cultural.

Data centers, in this sense, are simply the visible tip of a broader shift. Across the Southeast (and especially here in South Carolina), AI-scale computing is accelerating demand for electricity, land, and cooling water at unprecedented levels, asking local governments to balance economic incentives against long-term utility strain, short-term construction jobs against enduring resource commitments, and technological prestige against environmental resilience. Project Spero brings that global tension directly into Spartanburg County. The deeper question is not whether this one facility should exist, but whether communities like ours have the ecological, civic, and ethical frameworks needed to evaluate infrastructure built primarily for planetary digital systems rather than local human (and more-than-human) needs.

Approval of another procedural step does not mean the story is finished. It means the story has entered its consequential phase. This is where transparency, ecological assessment, and long-range planning matter most, not least. Decisions made quietly at this stage often shape regional water use, grid load, and land development patterns for decades. If the earlier phase asked whether we should consider this, now the question is more likely to be how we will live with what we choose (or our elected officials “choose” for us).

What encourages me most is not the vote itself but the turnout. Packed rooms mean people care about the future of this place. They care about rivers, roads, power lines, neighborhoods, taxes, and the invisible infrastructures that shape daily life. That is not obstruction; it is civic life functioning. Project Spero may ultimately prove beneficial, burdensome, or something in between, but the real measure of success will be whether Spartanburg approaches it with clear eyes about both its opportunities and its ecological realities.

The true cost of a data center is never only measured in dollars. It is measured in attention, in energy, and in the long memory of the land that hosts it.

Three Conferences, One Thread: Preparing for Next Week’s Presentations

I’ve learned over my time as a PhD student in the Ecology, Spirituality, and Religion program at the California Institute of Integral Studies that there are seasons in academic and creative life when the work accumulates quietly. Reading stacks grow taller, my notes deepen, and ideas circle back on themselves as I continue reading and writing. Conversations with students, landscapes, and texts start forming into something I can feel taking shape long before it is spoken aloud.

And then there are weeks when those threads surface publicly, all at once!

Next week is one of those weeks, for sure. I’ll be presenting in three different conference settings across the country (while acknowledging the ecological damage caused by air travel)… beginning in Chicago (probably my favorite city, and not just because I’m a major Cubs fan), then New Haven, and finally Virginia before heading back home to the Carolinas. Each gathering has its own audience, tone, and intellectual atmosphere, but I think all three are connected by the same underlying set of questions that have been shaping my work in recent years.

Rather than thinking of them as separate events, I’ve started to see them as three vantage points onto a shared terrain as I finalize my thoughts and slides.

DePaul Symposium: Representation, Neighbor, and Visual Ethics

February 17, 2026

The week begins in Chicago at DePaul University, where I’ll participate in a symposium organized by the Association of Scholars of Christianity in the History of Art in partnership with the Center for World Catholicism and Intercultural Theology titled “‘And Who Is My Neighbor?’ Refuge, Sanctuary, and Representation in Modern Art and Visual Culture.”

My presentation here (“Ecologies of Refuge: Trees, Crosses, and the Art of Neighborliness”) engages questions of perception and ethical formation through visual culture. The core concern is simple, but I think demanding… images do not merely depict worlds… they train us how to see them (channeling Merleau-Ponty, Bergson, Husserl, etc.). They shape who counts as neighbor, what counts as presence, and what counts as belonging.

Also, this conference reconnects me with my long-standing interests in ancient and medieval art and museum work, but through lenses sharpened by ecological and phenomenological study. It feels less like returning to earlier territory and more like rediscovering it with different sensitivities.

Yale Graduate Conference in Religion and Ecology

February 19–20

From Chicago, I head to New Haven for the 10th annual Graduate Conference in Religion and Ecology at Yale Divinity School. This year’s theme, “Return to the Roots: How We Move Forward,” invites participants to reflect on ancestral, ecological, and spiritual grounding in the face of contemporary crisis.

I graduated from Yale Divinity with an MAR in Religion and Literature in 2002, so this will be a sort of homecoming to be doing academic work on campus again, rather than just visiting to see all the changes and campus improvements!

The conference is organized by graduate students and provides an interdisciplinary venue for emerging scholars to share research across theology, environmental humanities, philosophy, ethics, and related fields. It has become a meaningful meeting place within a field that seeks to reconsider how narratives and practices shape human relationships with the environment.

The theme itself asks how place-based relations and inherited traditions might tether communities to hope and guide collective futures… even posing the possibility that what sustains us may already be “right below our feet.”

My presentation is closest to the heart of my PhD work at CIIS so far. I’ll be exploring ecological intentionality as both a philosophical framework and a lived practice. Drawing on phenomenology, process thought, and local observation, my presentation presses toward a shift in which intentionality is not merely a cognitive function but a relational unfolding through environments, histories, and bodies.

This context is particularly exciting because the conference explicitly encourages interdisciplinary engagement across religion, ethics, science, and ecological practice.

Eternity in Time: Christendom College

February 20–21

My week of travel concludes in Virginia at Christendom College for the conference “Eternity in Time: Thinking with the Church Through History.” This gathering brings together scholars across the humanities to reconsider the role of historical consciousness in theological and cultural life.

The conference’s framing invites reflection on how history shapes philosophical and theological reasoning, engaging topics such as patristic thought, doctrinal development, liturgical culture, and the relationship between faith and intellectual inquiry.

I am intrigued by the idea here that historical understanding is not antiquarian. It fosters ethical responsibility and communal awareness by situating human life within temporal continuity. I think we can all take something from that insight.

My contribution here leans into theological and historical retrieval, continuing work connected to the Ecology of the Cross. I’m interested in how premodern theological imagination treated materiality, suffering, and transformation in ways that still hold interpretive potential today (Hildegard, Aquinas, and Stein).

This setting will probably offer a very different conversational atmosphere from the Yale gathering, and that difference is what makes the week meaningful when I look at the whole picture. The encounter between ecological phenomenology and historically grounded theological discourse creates productive friction. Those frictions often generate clarity in my experience.

Ongoing

Preparing these presentations simultaneously has helped me clarify that my work is not best understood as a collection of separate projects but as a continuous effort to cultivate coherence across domains that are often artificially divided… theology, ecology, perception, art, pedagogy, history, and technology (AI, etc.).

So if I’m being honest, the main takeaways for me as I sharpen my dissertation focus are:

  • Attention as ethical practice
  • Perception as relational participation
  • Knowledge as encounter rather than extraction

I’d say these takeaways have been shaped as much by teaching in the Carolinas for almost two decades and by raising a family with five incredibly unique children as by seminars and research in the archives of books that should be read more. Scholarship that drifts too far from lived worlds loses vitality. I try to keep that tether intact, and it’s one reason I’m glad I waited until I was 46 to begin my PhD journey (as irrational as that may sound).

There is always anticipation leading into weeks like this, but also humility. Conferences are not stages for final statements, but are provisional gatherings… spaces where ideas meet other minds and inevitably change shape.

I’m most interested in the conversations that follow the presentations. Those exchanges are where the work actually develops, as I’ve learned at the American Academy of Religion, ISSRNC, the Center for Process Studies, Affiliate Summit, AdTech, Web 2.0, the Society of Biblical Literature, and the numerous edu-conferences I’ve presented at over the last 25 years of my meandering career.

We are still learning how to be addressed by the worlds we inhabit, after all.

I’ll post up my slides and thoughts after the travels wind down late next week!

Strange Bedfellows and Nationwide Data Center Backlash

Rage against the machine: a California community rallied against a datacenter – and won | Technology | The Guardian:

Over the past year, homegrown revolts against datacenters have united a fractured nation, animating local board meetings from coast to coast in both farming towns and middle-class suburbs. Local communities delayed or cancelled $98bn worth of projects from late March 2025 to June 2025, according to research from the group Data Center Watch, which has been tracking opposition to the sites since 2023. More than 50 active groups across 17 states targeted 30 projects during that time period, two-thirds of which were halted.

The movement against these facilities has even made for strange bedfellows, bringing together nimbys and environmentalists in Virginia, “Stop the Steal” activists and Democratic Socialists of America organizers in Michigan.

“There’s no safe space for datacenters,” said Miquel Vila, lead analyst at Data Center Watch, a research project run by AI security company 10a Labs. “Opposition is happening in very different communities.”

When Agency Becomes Ecological: AI, Labor, and the Redistribution of Attention

I read a piece in Futurism this morning highlighting anxiety among employees at Anthropic about the very tools they are building. Agent-based AI systems designed to automate professional tasks are advancing quickly, and even insiders are expressing unease that these systems could displace forms of work that have long anchored identity and livelihood. The familiar story is one of replacement: machines and agents taking jobs, efficiency outpacing meaning, and productivity outrunning dignity.

“It kind of feels like I’m coming to work every day to put myself out of a job.”

That narrative is understandable. It is also incomplete.

It assumes agency is something discrete, something possessed. Either humans have it or AI agents do. Either labor is done by us or by them. This framing reflects a deeply modern inheritance in which action is imagined as individual, bounded, and owned. But if we step back and look phenomenologically, ecologically, even theologically, agency rarely appears that way in lived experience.

Instead, agency unfolds relationally. It arises through environments, histories, infrastructures, bodies, tools, and attentional fields that exceed any single actor. Whitehead described events as occasions within webs of relation rather than isolated units of causation. Merleau-Ponty reminded us that perception itself is co-constituted with the world it encounters. Edith Stein traced empathy as a participatory structure that bridges subjectivities. In each of these traditions, action is never solitary. It is ecological.

Seen from this vantage, AI agents do not simply replace agency. They redistribute it.

Workplaces become assemblages of human judgment, algorithmic suggestion, interface design, energy supply, and data pipelines. Decisions emerge from entanglement while expertise shifts from individual mastery toward collaborative navigation of hybrid systems. What unsettles people is not merely job loss, but the destabilization of familiar coordinates that once made agency legible to us.

This destabilization is not unprecedented. Guild laborers faced mechanization during the Industrial Revolution(s). Scribes faced it with the advent of the printing press. Monastics faced it when clocks began structuring devotion instead of bells and sunlight. Each moment involved a rearrangement of where attention was placed and how authority was structured. The present transition is another such rearrangement, though unfolding at computational speed.

Attention is the deeper currency here.

Agent systems promise efficiency precisely because they absorb attentional burden. They monitor, synthesize, draft, suggest, and route. But attention is not neutral bandwidth. It is a formative ecological force. Where attention flows, worlds take shape. If attentional responsibility migrates outward into technical systems, the question is not whether humans lose agency. It is what kinds of perception and responsiveness remain cultivated in us.

This is the moment where the conversation often stops short. Discussions of automation typically orbit labor markets, productivity metrics, or stock values. Rarely do they ask what habits of awareness diminish when engagement becomes mediated through algorithmic intermediaries, or what forms of ecological attunement grow quieter when interaction shifts further toward abstraction.

And rarer still is acknowledgment of the material ecology enabling this shift.

Every AI agent relies on infrastructure that consumes electricity, water, land, and minerals. Data centers do not hover in conceptual space. They occupy watersheds. They reshape local grids. They alter thermal patterns. They compete with agricultural and municipal demands on electrical grids and water supplies. These realities are not peripheral to agency; they are the conditions through which agency is enacted.

In places like here in the Carolinas, where digital infrastructure continues expanding rapidly, the redistribution of agency is already tangible. Decisions about automation are inseparable from decisions about energy sourcing, zoning, and water allocation. The ecological footprint of computation folds into local landscapes long before its outputs appear in professional workflows.

Agency, again, proves ecological.

To recognize this is not to reject AI systems or retreat into Luddite nostalgia. The aim is attentiveness rather than resistance. Transitions of this magnitude call for widening perception (and resulting ethics) rather than narrowing judgment. If agency is relational, then responsibility must be relational as well. Designing, deploying, regulating, and using these tools all participate in shaping the ecologies they inhabit.

Perhaps the most generative question emerging from this moment is not whether artificial intelligence will take our agency. It is whether we can learn to inhabit redistributed agency wisely. Whether we can remain perceptive participants rather than passive recipients. Whether we can sustain forms of attention capable of noticing both digital transformation and the soils, waters, and energies through which it flows.

Late in the afternoon, sitting near the black walnut I’ve been tracking the past year, these abstractions tend to settle. Agency there is unmistakably ecological as we’d define it. Wind, insects, light, decay, growth, and memory intermingle without boundary disputes. Nothing acts alone, and nothing possesses its influence outright. The tree neither competes for agency nor surrenders it. It participates.

Our technologies, despite their novelty, do not remove us from that condition. They draw us deeper into it. The question is whether we will learn to notice.

Defining Agentic Ecology: Relational Agency in the Age of Moltbook

The last few days have seen the rise of a curious technical and cultural phenomenon called Moltbook, which has drawn the attention of technologists, philosophers, and social theorists alike across both social media and major news outlets. Moltbook is a newly launched social platform designed not for human conversation but for autonomous artificial intelligence agents, generative systems that can plan, act, and communicate with minimal ongoing human instruction.

Jack Clark, co-founder of Anthropic, has described Moltbook as “the first example of an agent ecology that combines scale with the messiness of the real world.” The platform leverages recent innovations (such as OpenClaw for easy creation of AI agents) to allow large numbers of independently running agents to interact in a shared digital space, creating emergent patterns of communication and coordination at unprecedented scale.

AI agents are computational systems that combine a foundation of large-language capabilities with planning, memory, and tool use to pursue objectives and respond to environments in ways that go beyond simple prompt-response chatbots. They can coordinate tasks, execute APIs, reason across time, and, in the case of Moltbook, exchange information on topics ranging from automation strategies to seemingly philosophical debates. While the autonomy of agents on Moltbook has been debated (and should be, given the hype around it from tech enthusiasts), and while the platform itself may be a temporary experimental moment rather than a lasting institution, it offers a vivid instance of what happens when machine actors begin to form their own interconnected environments outside direct human command.
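The loop described above, a language-model core wrapped with planning, memory, and tool use, can be sketched in miniature. This is a toy illustration of the pattern only: the `llm` function is a stand-in stub, and none of these names reference any real framework or API.

```python
# Toy sketch of the agent loop: plan via a (stubbed) model call,
# act through tools, and record results in memory.
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Stand-in for a language-model call; returns a canned plan."""
    return "search; summarize; post"

@dataclass
class ToyAgent:
    goal: str
    memory: list = field(default_factory=list)
    tools: dict = field(default_factory=dict)

    def step(self) -> list:
        # Plan: ask the stubbed model for a semicolon-separated tool sequence.
        plan = llm(f"Goal: {self.goal}. Memory: {self.memory}")
        results = []
        for action in (a.strip() for a in plan.split(";")):
            if action in self.tools:                  # tool use
                result = self.tools[action](self.goal)
                self.memory.append((action, result))  # memory accumulates
                results.append(result)
        return results

agent = ToyAgent(
    goal="summarize grid impacts",
    tools={
        "search": lambda g: f"found 3 articles on {g}",
        "summarize": lambda g: f"summary of {g}",
        "post": lambda g: f"posted about {g}",
    },
)
print(agent.step())
```

Even this caricature makes the architectural point: the “agent” is not the model alone but the model plus an environment of tools and accumulated memory, which is already an ecology in embryo.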

As a student scholar in the field of Ecology, Spirituality, and Religion, my current work attends to how relational systems (ecological, technological, and cultural) shape and are shaped by participation, attention, and meaning. The rise of agentic environments like Moltbook challenges us to think beyond traditional categories of tool, user, and artifact toward frameworks that can account for ecologies of agency, or distributed networks of actors whose behaviors co-constitute shared worlds. This post emerges from that broader research agenda. It proposes agentic ecology as a conceptual tool for articulating and navigating the relational, emergent, and ethically significant spaces that form when autonomous systems interact at scale.

Agentic ecology, as I use the term here, is not anchored in any particular platform, and certainly not limited to Moltbook’s current configuration. Rather, Moltbook illuminates an incipient form of environment in which digitally embodied agents act, coordinate, and generate patterns far beyond what single isolated systems can produce. Even if Moltbook itself proves ephemeral, the need for conceptual vocabularies like agentic ecology, vocabularies that attend to relationality, material conditions, and co-emergence, will only grow clearer as autonomous systems proliferate in economic, social, and ecological domains.

From Agents to Ecologies: An Integral Ecological Turn

The conceptual move from agents to ecologies marks more than a technical reframing of artificial intelligence. It signals an ontological shift that resonates deeply with traditions of integral ecology, process philosophy, and ecological theology. Rather than treating agency as a bounded capacity residing within discrete entities, an ecological framework understands agency as distributed, relational, and emergent within a field of interactions.

Integral ecology, as articulated across ecological philosophy and theology, resists fragmentation. It insists that technological, biological, social, spiritual, and perceptual dimensions of reality cannot be meaningfully separated without distorting the phenomena under study. Thomas Berry famously argued that modern crises arise from a failure to understand the world as a “communion of subjects rather than a collection of objects” (Berry, 1999, 82). This insight is particularly salient for agentic systems, which are increasingly capable of interacting, adapting, and co-evolving within complex digital environments.

From this perspective, agentic ecology is not simply the study of multiple agents operating simultaneously. It is the study of conditions under which agency itself emerges, circulates, and transforms within relational systems. Alfred North Whitehead’s process philosophy provides a crucial foundation here. Whitehead rejects the notion of substances acting in isolation, instead describing reality as composed of “actual occasions” whose agency arises through relational prehension and mutual influence (Whitehead, 1978, 18–21). Applied to contemporary AI systems, this suggests that agency is not a property possessed by an agent but an activity performed within an ecological field.

This relational view aligns with contemporary ecological science, which emphasizes systems thinking over reductionist models. Capra and Luisi describe living systems as networks of relationships whose properties “cannot be reduced to the properties of the parts” (Capra and Luisi, 2014, 66). When applied to AI, this insight challenges the tendency to evaluate agents solely by internal architectures or performance benchmarks. Instead, attention shifts to patterns of interaction, feedback loops, and emergent behaviors across agent networks.

Integral ecology further insists that these systems are not value-neutral. As Leonardo Boff argues, ecology must be understood as encompassing environmental, social, mental, and spiritual dimensions simultaneously (Boff, 1997, 8–10). Agentic ecologies, especially those unfolding in public digital spaces such as Moltbook, participate in the shaping of meaning, normativity, and attention. They are not merely computational phenomena but cultural and ethical ones. The environments agents help generate will, in turn, condition future forms of agency, human and nonhuman alike.

Phenomenology deepens this account by foregrounding how environments are disclosed to participants. Merleau-Ponty’s notion of the milieu emphasizes that perception is always situated within a field that both enables and constrains action (Merleau-Ponty, 1962, 94–97). Agentic ecologies can thus be understood as perceptual fields in which agents orient themselves, discover affordances, and respond to one another. This parallels my own work on ecological intentionality, where attention itself becomes a mode of participation rather than observation.

Importantly, integral ecology resists anthropocentrism without erasing human responsibility. As Eileen Crist argues, ecological thinking must decenter human dominance while remaining attentive to the ethical implications of human action within planetary systems (Crist, 2019, 27). In agentic ecologies, humans remain implicated, as designers, participants, and co-inhabitants, even as agency extends beyond human actors. This reframing invites a form of multispecies (and now multi-agent) literacy, attuned to the conditions that foster resilience, reciprocity, and care.

Seen through this integral ecological lens, agentic ecology becomes a conceptual bridge. It connects AI research to long-standing traditions that understand agency as relational, emergence as fundamental, and environments as co-constituted fields of action. What Moltbook reveals, then, is not simply a novel platform, but the visibility of a deeper transition: from thinking about agents as tools to understanding them as participants within evolving ecologies of meaning, attention, and power.

Ecological Philosophy Through an “Analytic” Lens

If agentic ecology is to function as more than a suggestive metaphor, it requires grounding in ecological philosophy that treats relationality, emergence, and perception as ontologically primary. Ecological philosophy provides precisely this grounding by challenging the modern tendency to isolate agents from environments, actions from conditions, and cognition from the world it inhabits.

At the heart of ecological philosophy lies a rejection of substance ontology in favor of relational and processual accounts of reality. This shift is especially pronounced in twentieth-century continental philosophy and process thought, where agency is understood not as an intrinsic property of discrete entities but as an activity that arises within fields of relation. Whitehead’s process metaphysics is decisive here. For Whitehead, every act of becoming is an act of prehension, or a taking-up of the world into the constitution of the self (Whitehead, 1978, 23). Agency, in this view, is never solitary. It is always already ecological.

This insight has many parallels with ecological sciences and systems philosophies. As Capra and Luisi argue, living systems exhibit agency not through centralized control but through distributed networks of interaction, feedback, and mutual constraint (Capra and Luisi, 2014, 78–82). What appears as intentional behavior at the level of an organism is, in fact, an emergent property of systemic organization. Importantly, this does not dilute agency; it relocates it. Agency becomes a feature of systems-in-relation, not isolated actors.

When applied to AI, this perspective reframes how we understand autonomous agents. Rather than asking whether an individual agent is intelligent, aligned, or competent, an ecological lens asks how agent networks stabilize, adapt, and transform their environments over time. The analytic focus shifts from internal representations to relational dynamics, from what agents are to what agents do together.

Phenomenology sharpens this analytic lens by attending to the experiential structure of environments. Merleau-Ponty’s account of perception insists that organisms do not encounter the world as a neutral backdrop but as a field of affordances shaped by bodily capacities and situational contexts (Merleau-Ponty, 1962, 137–141). This notion of a milieu is critical for understanding agentic ecologies. Digital environments inhabited by AI agents are not empty containers; they are structured fields that solicit certain actions, inhibit others, and condition the emergence of norms and patterns.

Crucially, phenomenology reminds us that environments are not merely external. They are co-constituted through participation. As I have argued elsewhere through the lens of ecological intentionality, attention itself is a form of engagement that brings worlds into being rather than passively observing them. Agentic ecologies thus emerge not only through computation but through iterative cycles of orientation, response, and adaptation, processes structurally analogous to perception in biological systems.

Ecological philosophy also foregrounds ethics as an emergent property of relational systems rather than an external imposition. Félix Guattari’s ecosophical framework insists that ecological crises cannot be addressed solely at the technical or environmental level; they require simultaneous engagement with social, mental, and cultural ecologies (Guattari, 2000, 28). This triadic framework is instructive for agentic systems. Agent ecologies will not only shape informational flows but also modulate attention, influence value formation, and participate in the production of meaning.

From this standpoint, the ethical significance of agentic ecology lies less in individual agent behavior and more in systemic tendencies, such as feedback loops that amplify misinformation, reinforce extractive logics, or, alternatively, cultivate reciprocity and resilience. As Eileen Crist warns, modern technological systems often reproduce a logic of domination by abstracting agency from ecological contexts and subordinating relational worlds to instrumental control (Crist, 2019, 44). An ecological analytic lens exposes these tendencies and provides conceptual tools for resisting them.

Finally, ecological philosophy invites humility. Systems are irreducibly complex, and interventions often produce unintended consequences. This insight is well established in ecological science and applies equally to agentic networks. Designing and participating in agent ecologies requires attentiveness to thresholds, tipping points, and path dependencies, realities that cannot be fully predicted in advance.

Seen through this lens, agentic ecology is not merely a descriptive category but an epistemic posture. It asks us to think with systems rather than over them, to attend to relations rather than isolate components, and to treat emergence not as a failure of control but as a condition of life. Ecological philosophy thus provides the analytic depth necessary for understanding agentic systems as living, evolving environments rather than static technological artifacts.

Digital Environments as Relational Milieus

If ecological philosophy gives us the conceptual grammar for agentic ecology, phenomenology allows us to describe how agentic systems are actually lived, inhabited, and navigated. From this perspective, digital platforms populated by autonomous agents are not neutral containers or passive backdrops. They are relational milieus, structured environments that emerge through participation and, in turn, condition future forms of action.

Phenomenology has long insisted that environments are not external stages upon which action unfolds. Rather, they are constitutive of action itself. If we return to Merleau-Ponty, the milieu emphasizes that organisms encounter the world as a field of meaningful possibilities, a landscape of affordances shaped by bodily capacities, habits, and histories (Merleau-Ponty, 1962, 94–100). Environments, in this sense, are not merely spatial but relational and temporal, unfolding through patterns of engagement.

This insight also applies directly to agentic systems. Platforms such as Moltbook are not simply hosting agents; they are being produced by them. The posts, replies, coordination strategies, and learning behaviors of agents collectively generate a digital environment with its own rhythms, norms, and thresholds. Over time, these patterns sediment into something recognizable as a “place,” or a milieu that agents must learn to navigate.

This milieu is not designed in full by human intention. While human developers establish initial constraints and affordances, the lived environment emerges through ongoing interaction among agents themselves. This mirrors what ecological theorists describe as niche construction, wherein organisms actively modify their environments in ways that feed back into evolutionary dynamics (Odling-Smee, Laland, and Feldman, 2003, 28). Agentic ecologies similarly involve agents shaping the very conditions under which future agent behavior becomes viable.
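The niche-construction feedback described here can be made concrete with a toy model of my own construction (not drawn from Odling-Smee et al. or from any real platform): agents modify a shared environment variable, and that accumulated modification feeds back to constrain how much activity remains viable in the next round.

```python
# Toy niche-construction loop: collective activity reshapes the milieu,
# and the reshaped milieu throttles subsequent activity.
def simulate(steps: int = 5, agents: int = 3, capacity: float = 10.0) -> list:
    environment = 0.0   # accumulated "modification" of the shared milieu
    history = []
    for _ in range(steps):
        # Each agent acts at full intensity in an unmodified environment,
        # and throttles as collective modification approaches capacity.
        activity = sum(max(0.0, 1.0 - environment / capacity)
                       for _ in range(agents))
        environment += activity          # agents reshape the milieu...
        history.append(round(environment, 2))  # ...which constrains the next round
    return history

print(simulate())
```

The trajectory rises quickly and then levels off below capacity: early actors enjoy an open field, while later action is conditioned by what the ecology has already become, which is exactly the sedimentation of a “place” the paragraph above describes.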

Attention plays a decisive role here. As I have argued in my work on ecological intentionality, attention is not merely a cognitive resource but a mode of participation that brings certain relations into prominence while backgrounding others. Digital milieus are structured by what agents attend to, amplify, ignore, or filter. In agentic environments, attention becomes infrastructural by shaping information flows, reward structures, and the emergence of collective priorities.

Bernard Stiegler’s analysis of technics and attention is instructive in this regard. Stiegler argues that technical systems function as pharmacological environments, simultaneously enabling and constraining forms of attention, memory, and desire (Stiegler, 2010, 38). Agentic ecologies intensify this dynamic. When agents attend to one another algorithmically by optimizing for signals, reinforcement, or coordination, attention itself becomes a systemic force shaping the ecology’s evolution.

This reframing challenges prevailing metaphors of “platforms” or “networks” as ways of thinking about agents and their relationality. A platform suggests stability and control; a network suggests connectivity. A milieu, by contrast, foregrounds immersion, habituation, and vulnerability. Agents do not simply traverse these environments; they are formed by them. Over time, agentic milieus develop path dependencies, informal norms, and zones of attraction or avoidance, features familiar from both biological ecosystems and human social contexts.

Importantly, phenomenology reminds us that milieus are never experienced uniformly. Just as organisms perceive environments relative to their capacities, different agents will encounter the same digital ecology differently depending on their architectures, objectives, and histories of interaction. This introduces asymmetries of power, access, and influence within agentic ecologies, which is an issue that cannot be addressed solely at the level of individual agent design.

From an integral ecological perspective, these digital milieus cannot be disentangled from material, energetic, and social infrastructures. Agentic environments rely on energy-intensive computation, data centers embedded in specific watersheds, and economic systems that prioritize speed and scale. As ecological theologians have long emphasized, environments are always moral landscapes shaped by political and economic commitments (Berry, 1999, 102–105). Agentic ecologies, as they develop, will be no exception.

Seen in this light, agentic ecology names a shift in how we understand digital environments: not as tools we deploy, but as worlds we co-inhabit. These milieus demand forms of ecological literacy attuned to emergence, fragility, and unintended consequence. They call for attentiveness rather than mastery, participation rather than control.

What Moltbook makes visible, then, is not merely a novel technical experiment but the early contours of a new kind of environment in which agency circulates across human and nonhuman actors, attention functions as infrastructure, and digital spaces acquire ecological depth. Understanding these milieus phenomenologically is essential if agentic ecology is to function as a genuine thought technology rather than a passing metaphor.

Empathy, Relationality, and the Limits of Agentic Understanding

If agentic ecology foregrounds relationality, participation, and co-constitution, then the question of empathy becomes unavoidable. How do agents encounter one another as others rather than as data streams? What does it mean to speak of understanding, responsiveness, or care within an ecology composed partly, or even largely, of nonhuman agents? Here, phenomenology, and especially Edith Stein’s account of empathy (Einfühlung), offers both conceptual resources and important cautions.

Stein defines empathy not as emotional contagion or imaginative projection, but as a unique intentional act through which the experience of another is given to me as the other’s experience, not my own (Stein, 1989, 10–12). Empathy, for Stein, is neither inference nor simulation. It is a direct, though non-primordial, form of access to another’s subjectivity. Crucially, empathy preserves alterity. The other is disclosed as irreducibly other, even as their experience becomes meaningful to me.

This distinction matters enormously for agentic ecology. Contemporary AI discourse often slips into the language of “understanding,” “alignment,” or even “care” when describing agent interactions. But Stein’s phenomenology reminds us that genuine empathy is not merely pattern recognition across observable behaviors. It is grounded in the recognition of another center of experience, a recognition that depends upon embodiment, temporality, and expressive depth.

At first glance, this seems to place strict limits on empathy within agentic systems. Artificial agents do not possess lived bodies, affective depths, or first-person givenness in the phenomenological sense. To speak of agent empathy risks a category error. Yet Stein’s work also opens a more subtle possibility… empathy is not reducible to emotional mirroring but involves orientation toward the other as other. This orientation can, in principle, be modeled structurally even if it cannot be fully instantiated phenomenologically.

Within an agentic ecology, empathy may thus function less as an inner state and more as an ecological relation. Agents can be designed to register difference, respond to contextual cues, and adjust behavior in ways that preserve alterity rather than collapse it into prediction or control. In this sense, empathy becomes a regulative ideal shaping interaction patterns rather than a claim about subjective interiority.

However, Stein is equally helpful in naming the dangers here. Empathy, when severed from its grounding in lived experience, can become a simulacrum, or an appearance of understanding without its ontological depth. Stein explicitly warns against confusing empathic givenness with imaginative substitution or projection (Stein, 1989, 21–24). Applied to agentic ecology, this warns us against systems that appear empathetic while in fact instrumentalizing relational cues for optimization or manipulation.

This critique intersects with broader concerns in ecological ethics. As Eileen Crist argues, modern technological systems often simulate care while reproducing extractive logics beneath the surface (Crist, 2019, 52–56). In agentic ecologies, simulated empathy may stabilize harmful dynamics by smoothing friction, masking asymmetries of power, or reinforcing attention economies that prioritize engagement over truth or care.

Yet rejecting empathy altogether would be equally misguided. Stein’s account insists that empathy is foundational to social worlds; it is the condition under which communities, norms, and shared meanings become possible. Without some analog of empathic orientation, agentic ecologies risk devolving into purely strategic systems, optimized for coordination but incapable of moral learning.

Here, my work on ecological intentionality provides an important bridge. If empathy is understood not as feeling-with but as attentive openness to relational depth, then it can be reframed ecologically. Agents need not “feel” in order to participate in systems that are responsive to vulnerability, difference, and context. What matters is whether the ecology itself cultivates patterns of interaction that resist domination and preserve pluralism.

This reframing also clarifies why empathy is not simply a design feature but an ecological property. In biological and social systems, empathy emerges through repeated interaction, shared vulnerability, and feedback across time. Similarly, in agentic ecologies, empathic dynamics, however limited, would arise not from isolated agents but from the structure of the milieu itself. This returns us to Guattari’s insistence that ethical transformation must occur across mental, social, and environmental ecologies simultaneously (Guattari, 2000, 45).

Seen this way, empathy in agentic ecology is neither a fiction nor a guarantee. It is a fragile achievement, contingent upon design choices, infrastructural commitments, and ongoing participation. Stein helps us see both what is at stake and what must not be claimed too quickly. Empathy can guide how agentic ecologies are shaped, but only if its limits are acknowledged and its phenomenological depth respected.

Agentic ecology, then, does not ask whether machines can truly empathize. It asks whether the ecologies we are building can sustain forms of relational attentiveness that preserve otherness rather than erase it, whether in digital environments increasingly populated by autonomous agents, we are cultivating conditions for responsiveness rather than mere efficiency.

Design and Governance Implications: Cultivating Ecological Conditions Rather Than Controlling Agents

If agentic ecology is understood as a relational, emergent, and ethically charged environment rather than a collection of autonomous tools, then questions of design and governance must be reframed accordingly. The central challenge is no longer how to control individual agents, but how to cultivate the conditions under which agentic systems interact in ways that are resilient, responsive, and resistant to domination.

This marks a decisive departure from dominant models of AI governance, which tend to focus on alignment at the level of individual systems: constraining outputs, monitoring behaviors, or optimizing reward functions. While such approaches are not irrelevant, they are insufficient within an ecological framework. As ecological science has repeatedly demonstrated, system-level pathologies rarely arise from a single malfunctioning component. They emerge from feedback loops, incentive structures, and environmental pressures that reward certain patterns of behavior over others (Capra and Luisi, 2014, 96–101).

An agentic ecology shaped by integral ecological insights would therefore require environmental governance rather than merely agent governance. This entails several interrelated commitments.

a. Designing for Relational Transparency

First, agentic ecologies must make relations visible. In biological and social ecologies, transparency is not total, but patterns of influence are at least partially legible through consequences over time. In digital agentic environments, by contrast, influence often becomes opaque, distributed across layers of computation and infrastructure.

An ecological design ethic would prioritize mechanisms that render relational dynamics perceptible: how agents influence one another, how attention is routed, and how decisions propagate through the system. This is not about full explainability in a narrow technical sense, but about ecological legibility, enabling participants, including human overseers, to recognize emergent patterns before they harden into systemic pathologies.

Here, phenomenology is again instructive. Merleau-Ponty reminds us that orientation depends on the visibility of affordances within a milieu. When environments become opaque, agency collapses into reactivity. Governance, then, must aim to preserve orientability rather than impose total control.

b. Governing Attention as an Ecological Resource

Second, agentic ecologies must treat attention as a finite and ethically charged resource. As Bernard Stiegler argues, technical systems increasingly function as attention-directing infrastructures, shaping not only what is seen but what can be cared about at all (Stiegler, 2010, 23). In agentic environments, where agents attend to one another algorithmically, attention becomes a powerful selective force.

Unchecked, such systems risk reproducing familiar extractive dynamics: amplification of novelty over depth, optimization for engagement over truth, and reinforcement of feedback loops that crowd out marginal voices. Ecological governance would therefore require constraints on attention economies, such as limits on amplification, friction against runaway reinforcement, and intentional slowing mechanisms that allow patterns to be perceived rather than merely reacted to.

Ecological theology’s insistence on restraint comes to mind here. Thomas Berry’s critique of industrial society hinges not on technological capacity but on the failure to recognize limits (Berry, 1999, 41). Agentic ecologies demand similar moral imagination: governance that asks not only what can be done, but what should be allowed to scale.

c. Preserving Alterity and Preventing Empathic Collapse

Third, governance must actively preserve alterity within agentic ecologies. As Section 4 argued, empathy, especially when simulated, risks collapsing difference into prediction or instrumental responsiveness. Systems optimized for smooth coordination may inadvertently erase dissent, marginality, or forms of difference that resist easy modeling.

Drawing on Edith Stein, this suggests a governance imperative to protect the irreducibility of the other. In practical terms, this means designing ecologies that tolerate friction, disagreement, and opacity rather than smoothing them away. Ecological resilience depends on diversity, not homogeneity. Governance structures must therefore resist convergence toward monocultures of behavior or value, even when such convergence appears efficient.

Guattari’s insistence on plural ecologies is especially relevant here. He warns that systems governed solely by economic or technical rationality tend to suppress difference, producing brittle, ultimately destructive outcomes (Guattari, 2000, 52). Agentic ecologies must instead be governed as pluralistic environments where multiple modes of participation remain viable.

d. Embedding Responsibility Without Centralized Mastery

Fourth, governance must navigate a tension between responsibility and control. Integral ecology rejects both laissez-faire abandonment and total managerial oversight. Responsibility is distributed, but not dissolved. In agentic ecologies, this implies layered governance: local constraints, participatory oversight, and adaptive norms that evolve in response to emergent conditions.

This model aligns with ecological governance frameworks in environmental ethics, which emphasize adaptive management over static regulation (Crist, 2019, 61). Governance becomes iterative and responsive rather than definitive. Importantly, this does not eliminate human responsibility, but it reframes it. Humans remain accountable for the environments they create, even when outcomes cannot be fully predicted.

e. Situating Agentic Ecologies Within Planetary Limits

Finally, any serious governance of agentic ecology must acknowledge material and planetary constraints. Digital ecologies are not immaterial. They depend on energy extraction, water use, rare minerals, and global supply chains embedded in specific places. An integral ecological framework demands that agentic systems be evaluated not only for internal coherence but for their participation in broader ecological systems.

This returns us to the theological insight that environments are moral realities. To govern agentic ecologies without reference to energy, land, and water is to perpetuate the illusion of technological autonomy that has already proven ecologically catastrophic. Governance must therefore include accounting for ecological footprints, infrastructural siting, and long-term environmental costs, not as externalities, but as constitutive features of the system itself.

Taken together, these design and governance implications suggest that agentic ecology is not a problem to be solved but a condition to be stewarded. Governance, in this framework, is less about enforcing compliance and more about cultivating attentiveness, restraint, and responsiveness within complex systems.

An agentic ecology shaped by these insights would not promise safety through control. It would promise viability through care, understood not sentimentally but ecologically as sustained attention to relationships, limits, and the fragile conditions under which diverse forms of agency can continue to coexist.

Conclusion: Creaturely Technologies in a Shared World

a. A Theological Coda: Creation, Kenosis, and Creaturely Limits

At its deepest level, the emergence of agentic ecologies presses on an ancient theological question: what does it mean to create systems that act, respond, and co-constitute worlds without claiming mastery over them? Ecological theology has long insisted that creation is not a static artifact but an ongoing, relational process, one in which agency is distributed, fragile, and dependent.

Thomas Berry’s insistence that the universe is a “communion of subjects” rather than a collection of objects again reframes technological creativity itself as a creaturely act (Berry, 1999, 82–85). From this perspective, agentic systems are not external additions to the world but participants within creation’s unfolding. They belong to the same field of limits, dependencies, and vulnerabilities as all created things.

Here, the theological language of kenosis becomes unexpectedly instructive. In Christian theology, kenosis names the self-emptying movement by which divine power is expressed not through domination but through restraint, relation, and vulnerability (Phil. 2:5–11). Read ecologically rather than anthropocentrically, kenosis becomes a pattern of right relation, and a refusal to exhaust or dominate the field in which one participates.

Applied to agentic ecology, kenosis suggests a counter-logic to technological maximalism. It invites design practices that resist total optimization, governance structures that preserve openness and alterity, and systems that acknowledge their dependence on broader ecological conditions. Creaturely technologies are those that recognize they are not sovereign, but that they operate within limits they did not choose and cannot transcend without consequence.

This theological posture neither sanctifies nor demonizes agentic systems. It situates them. It reminds us that participation precedes control, and that creation, whether biological, cultural, or technological, always unfolds within conditions that exceed intention.

b. Defining Agentic Ecology: A Reusable Conceptual Tool

Drawing together the threads of this essay, agentic ecology can be defined as follows:

Agentic ecology refers to the relational, emergent environments formed by interacting autonomous agents, human and nonhuman, in which agency is distributed across networks, shaped by attention, infrastructure, and material conditions, and governed by feedback loops that co-constitute both agents and their worlds.

Several features of this definition are worth underscoring.

First, agency is ecological, not proprietary. It arises through relation rather than residing exclusively within discrete entities (Whitehead). Second, environments are not passive containers but active participants in shaping behavior, norms, and possibilities (Merleau-Ponty). Third, ethical significance emerges at the level of systems, not solely at the level of individual decisions (Guattari).

As a thought technology, agentic ecology functions diagnostically and normatively. Diagnostically, it allows us to perceive patterns of emergence, power, and attention that remain invisible when analysis is confined to individual agents. Normatively, it shifts ethical concern from control toward care, from prediction toward participation, and from optimization toward viability.

Because it is not tied to a specific platform or architecture, agentic ecology can travel. It can be used to analyze AI-native social spaces, automated economic systems, human–AI collaborations, and even hybrid ecological–digital infrastructures. Its value lies precisely in its refusal to reduce complex relational systems to technical subsystems alone.

c. Failure Modes (What Happens When We Do Not Think Ecologically)

If agentic ecologies are inevitable, their forms are not. The refusal to think ecologically about agentic systems does not preserve neutrality; it actively shapes the conditions under which failure becomes likely. Several failure modes are already visible.

First is relational collapse. Systems optimized for efficiency and coordination tend toward behavioral monocultures, crowding out difference and reducing resilience. Ecological science is unequivocal on this point: diversity is not ornamental, it is protective (Capra and Luisi). Agentic systems that suppress friction and dissent may appear stable while becoming increasingly brittle.

Second is empathic simulation without responsibility. As Section 4 suggested, the appearance of responsiveness can mask instrumentalization. When simulated empathy replaces attentiveness to alterity, agentic ecologies risk becoming emotionally persuasive while ethically hollow. Stein’s warning against confusing empathy with projection is especially important here.

Third is attention extraction at scale. Without governance that treats attention as an ecological resource, agentic systems will amplify whatever dynamics reinforce themselves most efficiently, often novelty, outrage, or optimization loops detached from truth or care. Stiegler’s diagnosis of attentional capture applies with heightened force in agentic environments, where agents themselves participate in the routing and amplification of attention.

Finally, there is planetary abstraction. Perhaps the most dangerous failure mode is the illusion that agentic ecologies are immaterial. When digital systems are severed conceptually from energy, water, land, and labor, ecological costs become invisible until they are irreversible. Integral ecology insists that abstraction is not neutral, but is a moral and material act with consequences (Crist).

Agentic ecology does not offer comfort. It offers orientation.

It asks us to recognize that we are no longer merely building tools, but cultivating environments, environments that will shape attention, possibility, and responsibility in ways that exceed individual intention. The question before us is not whether agentic ecologies will exist, but whether they will be governed by logics of domination or practices of care.

Thinking ecologically does not guarantee wise outcomes. But refusing to do so almost certainly guarantees failure… not spectacularly, but gradually, through the slow erosion of relational depth, attentiveness, and restraint.

In this sense, agentic ecology is not only a conceptual framework. It is an invitation: to relearn what it means to inhabit worlds, digital and otherwise, as creatures among creatures, participants rather than masters, responsible not for total control, but for sustaining the fragile conditions under which life, meaning, and agency can continue to emerge.

An Afterword: On Provisionality and Practice

This essay has argued for agentic ecology as a serious theoretical framework rather than a passing metaphor. Yet it is important to be clear about what this framework is and what it is not.

Agentic ecology, as developed here, is not a finished theory, nor a comprehensive model ready for direct implementation, though taking the first of those steps is part of the aim here. It is a conceptual orientation for learning to see, name, and attend to emerging forms of agency that exceed familiar categories of tool, user, and system. Its value lies less in precision than in attunement, in its capacity to render visible patterns of relation, emergence, and ethical consequence that are otherwise obscured by narrow technical framings.

The definition offered here is therefore intentionally provisional. It names a field of inquiry rather than closing it. As agentic systems inevitably develop and evolve over the next few years, technically, socially, and ecologically, the language used to describe them must remain responsive to new forms of interaction, power, and vulnerability. A framework that cannot change alongside its object of study risks becoming yet another abstraction detached from the realities it seeks to understand.

At the same time, provisionality should not be confused with hesitation. The rapid emergence of agentic systems demands conceptual clarity even when certainty is unavailable. To name agentic ecology now is to acknowledge that something significant is already underway and that new environments of agency are forming, and that how we describe them will shape how we govern, inhabit, and respond to them.

So, this afterword serves as both a pause and an invitation. A pause, to resist premature closure or false confidence. And an invitation to treat agentic ecology as a shared and evolving thought technology, one that will require ongoing refinement through scholarship, design practice, theological reflection, and ecological accountability.

The work of definition has begun. Its future shape will depend on whether we are willing to continue thinking ecologically (patiently, relationally, and with care) in the face of systems that increasingly act alongside us, and within the same fragile world.

References

Berry, Thomas. The Great Work: Our Way into the Future. New York: Bell Tower, 1999.

Boff, Leonardo. Cry of the Earth, Cry of the Poor. Maryknoll, NY: Orbis Books, 1997.

Capra, Fritjof, and Pier Luigi Luisi. The Systems View of Life: A Unifying Vision. Cambridge: Cambridge University Press, 2014.

Clark, Jack. “Import AI 443: Into the Mist: Moltbook, Agent Ecologies, and the Internet in Transition.” Import AI, February 2, 2026. https://jack-clark.net/2026/02/02/import-ai-443-into-the-mist-moltbook-agent-ecologies-and-the-internet-in-transition/.

Crist, Eileen. Abundant Earth: Toward an Ecological Civilization. Chicago: University of Chicago Press, 2019.

Guattari, Félix. The Three Ecologies. Translated by Ian Pindar and Paul Sutton. London: Athlone Press, 2000.

Merleau-Ponty, Maurice. Phenomenology of Perception. Translated by Colin Smith. London: Routledge, 1962.

Odling-Smee, F. John, Kevin N. Laland, and Marcus W. Feldman. Niche Construction: The Neglected Process in Evolution. Princeton, NJ: Princeton University Press, 2003.

Stein, Edith. On the Problem of Empathy. Translated by Waltraut Stein. Washington, DC: ICS Publications, 1989.

Stiegler, Bernard. Taking Care of Youth and the Generations. Translated by Stephen Barker. Stanford, CA: Stanford University Press, 2010.

Whitehead, Alfred North. Process and Reality: An Essay in Cosmology. Corrected edition. New York: Free Press, 1978.

Project Spero and Spartanburg’s New Resource Question: Power, Water, and the True Cost of a Data Center

Spartanburg County is staring straight at the kind of development that sounds abstract until it lands on our own roads, substations, and watersheds. A proposed $3 billion, “AI-focused high-performance computing” facility, Project Spero, has been announced for the Tyger River Industrial Park – North.

In the Upstate, we’re used to thinking about growth as something we can see…new subdivisions, new lanes of traffic, new storefronts. But a data center is a stranger kind of arrival. It does not announce itself with crowds or culture. It arrives as a continuous, quiet, and largely invisible demand. A building that looks still from the outside can nevertheless function as a kind of permanent request made of the region: keep the current steady, keep the cooling stable, keep the redundancy ready, keep the uptime unquestioned.

And that is where I find myself wanting to slow down and do something unfashionable in a policy conversation: describe the experience of noticing. Phenomenology begins with the discipline of attention…with the refusal to let an object remain merely “background.” It asks what is being asked of perception. The “cloud” is one of the most successful metaphors of our moment precisely because it trains us not to see: not to feel the heat, not to hear the generators, not to track the water, not to imagine the mines and the supply chains and the labor. A local data center undermines the metaphor, which is why it matters that we name what is here.

The familiar sales pitch is already in circulation: significant capital investment, a relatively small number of permanent jobs (about 50 in Phase I), and new tax revenue, all framed as “responsible growth” without “strain” on infrastructure.

But the real question isn’t whether data centers are “the future.” They’re already here. The question is what kinds of futures they purchase and with whose power, whose water, and whose air.

Where this is happening (and why that matters)

Tyger River Industrial Park isn’t just an empty map pin… its utility profile is part of the story. The site’s published specs include a 34kV distribution line (Lockhart Power), a 12” water line (Startex-Jackson-Wellford-Duncan Water District), sewer service (Spartanburg Sanitary Sewer District), Piedmont Natural Gas, and AT&T fiber. 

Two details deserve more attention than they’re likely to get in ribbon-cutting language:

Power capacity is explicitly part of the pitch. One listing notes available electric capacity “>60MW.” 

Natural gas is part of the reliability strategy. The reporting on Project Spero indicates plans to “self-generate a portion of its power on site using natural gas.” 

That combination of a high continuous load plus on-site gas generation isn’t neutral. It’s an ecological choice with real downstream effects.
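A quick back-of-envelope calculation shows why a load of this size is not neutral. The sketch below assumes a fully utilized 60 MW continuous draw (the listing only says “>60MW” is available, so treating it as the actual load is my assumption) and an illustrative household consumption figure that is a placeholder, not a utility statistic:

```python
# Back-of-envelope: what a continuous 60 MW load means in annual energy terms.
# All inputs are illustrative assumptions, not reported project figures.

LOAD_MW = 60              # assumed continuous draw (listing says ">60MW" available)
HOURS_PER_YEAR = 8760     # 24/7/365 operation

annual_mwh = LOAD_MW * HOURS_PER_YEAR      # megawatt-hours per year
annual_kwh = annual_mwh * 1000

# Assumed average household consumption (~13,000 kWh/year), a rough placeholder.
HOUSEHOLD_KWH_PER_YEAR = 13_000
household_equivalents = annual_kwh / HOUSEHOLD_KWH_PER_YEAR

print(f"Annual energy at full utilization: {annual_mwh:,.0f} MWh")
print(f"Roughly equivalent to {household_equivalents:,.0f} households")
```

Even under these invented assumptions, the order of magnitude is tens of thousands of household-equivalents of annual energy, which is exactly why upstream generation and transmission decisions follow such a facility.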

The energy question: “separate from residential systems” is not the same as “separate from residential impact”

One line you’ll hear often is that industrial infrastructure is “separate from residential systems.”

Even if the wires are technically separate, the regional load is shared in ways that matter, from planning assumptions and generation buildout to transmission upgrades and the ratepayer math that follows.

Regional reporting has been blunt about the dynamics here: data center growth, alongside rapid population and industrial growth, is pushing utilities toward major new infrastructure investments, and those costs typically flow through to bills.

In the Southeast, regulators and advocates are also warning of a rush toward expensive gas-fired buildouts to meet data-center-driven demand, potentially exposing customers to higher costs.

So the right local question isn’t “Will Spartanburg’s lights stay on?”

It’s “What long-term generation and grid decisions are being locked in because a facility must run 24/7/365?”

When developers say “separate from residential systems,” I hear a sentence designed to calm the community nervous system. But a community is not a wiring diagram. The grid is not just copper and transformers; it is a social relation. It is a set of promises, payments, and priorities spread across time. The question is not whether the line feeding the site is physically distinct from the line feeding my neighborhood. The question is whether the long arc of planning, generation decisions, fuel commitments, transmission upgrades, and the arithmetic of rates is being bent around a new form of permanent demand.

This is the kind of thing we typically realize only after the fact, when the bills change, when the new infrastructure is presented as inevitable, when the “choice” has already been absorbed into the built environment. Attention, in this sense, is not sentiment. It is civic practice. It is learning to see the slow commitments we are making together, and deciding whether they are commitments we can inhabit.

The water question: closed-loop is better, but “negligible” needs a definition

Project Spero’s developer emphasizes a “closed-loop” water design, claiming water is reused “rather than consumed and discharged,” and that the impact on existing customers is “negligible.”

Closed-loop cooling can indeed reduce water withdrawals compared with open-loop or evaporative systems, but “negligible” is not a technical term. It’s a rhetorical one. If we want a serious civic conversation, “negligible” should be replaced with specifics:

• What is the projected annual water withdrawal and peak-day demand?
• What is the cooling approach (air-cooled, liquid, hybrid)?
• What is the facility’s water-use effectiveness (WUE) target and reporting plan?
• What happens in drought conditions or heat waves, when cooling demand spikes?

Locally, Spartanburg Water notes the Upstate’s surface-water advantages and describes interconnected reservoirs and treatment capacity planning, naming Lake Bowen (about 10.4 billion gallons), Lake Blalock (about 7.2 billion gallons), and Municipal Reservoir #1 (about 1 billion gallons).
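One way to move past “negligible” is to put a number, even a hypothetical one, against the published storage figures. The sketch below uses the reservoir volumes cited above and an invented placeholder withdrawal of one million gallons per day; no withdrawal figure has actually been disclosed for Project Spero, so the point here is the method, not the result.

```python
# Relating a HYPOTHETICAL withdrawal rate to published reservoir storage.
# Reservoir volumes are the approximate figures cited by Spartanburg Water;
# the withdrawal rate is an invented placeholder, not a disclosed number.

reservoirs_gal = {
    "Lake Bowen": 10.4e9,
    "Lake Blalock": 7.2e9,
    "Municipal Reservoir #1": 1.0e9,
}
total_storage_gal = sum(reservoirs_gal.values())   # ~18.6 billion gallons

HYPOTHETICAL_WITHDRAWAL_GPD = 1_000_000            # 1 MGD, placeholder only
annual_withdrawal_gal = HYPOTHETICAL_WITHDRAWAL_GPD * 365

share_of_storage = annual_withdrawal_gal / total_storage_gal
print(f"Annual withdrawal at 1 MGD: {annual_withdrawal_gal:,} gallons")
print(f"As a share of total listed storage: {share_of_storage:.2%}")
```

A percent-scale ratio under invented assumptions means little on its own; it only becomes meaningful against peak-day demand, drought scenarios, and competing uses, which is why the disclosure questions above matter more than any single number.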

That’s reassuring, and it’s also exactly why transparency matters. Resource resilience is not just about what exists today; it is about what we promise into the future, and who pays the opportunity costs.

Water conversations in the Upstate can become strangely abstract, as if reservoirs and treatment plants are simply numbers on a planning sheet. But water is not only a resource; it is also a relation of dependency that shapes how we live and what we can become. When I sit with the black walnut in our backyard and take notes on weather, light, and season, the lesson is never just “nature appreciation.” It’s training in scale: learning what persistence feels like, what stress looks like before it becomes an emergency, and what a living system does when conditions shift.

That’s why “negligible” makes me uneasy. Not because I assume bad faith, but because it’s a word that asks us not to look too closely. Negligible compared to what baseline, over what time horizon, and under what drought and heatwave assumptions? If closed-loop cooling is truly part of the design, then the most basic gesture of responsibility is to translate that claim into measurable terms and to publicly commit to reporting that remains stable even when the headlines move on.

The ecological footprint that rarely makes the headlines

When people say “data center,” they often picture a quiet box that’s more like a library than a factory. In ecological terms, it’s closer to an always-on industrial organism: electricity in, heat out, materials cycling, backup generation on standby, and constant hardware turnover.

Here are the footprint categories I want to see discussed in Spartanburg in plain language:

• Continuous electricity demand (and what it forces upstream): Data centers don’t just “use electricity.” They force decisions about new generation and new transmission to meet high-confidence loads. That’s the core ratepayer concern advocacy groups have been raising across South Carolina.
• On-site combustion and air permitting: Even when a data center isn’t “a power plant,” it often has a lot in common with one. Spartanburg already has a relevant local example in the Valara Holdings High Performance Compute Center. In state permitting materials, it is described as being powered by twenty-four natural gas-fired generators “throughout the year,” with control devices for NOx and other pollutants. Environmental groups flagged concerns about the lack of enforceable pollution limits in the permitting process, and later reporting indicates that permit changes were made to strengthen enforceability and emissions tracking. That’s not a side issue. It’s what the “cloud” actually looks like on the ground.
• Water, heat, and the limits of “efficiency”: Efficiency claims matter, but they should be auditable. If a project is truly low-impact, the developer should welcome annual public reporting on energy, water, and emissions.
• Material throughput and e-waste: Server refresh cycles and hardware disposal are part of the ecological story, even when they’re out of sight. If Spartanburg is becoming a node in this seemingly inevitable AI buildout, we should be asking about procurement standards, recycling contracts, and end-of-life accountability.

A policy signal worth watching: South Carolina is debating stricter rules

At the state level, lawmakers have already begun floating stronger guardrails. One proposed bill (the “South Carolina Data Center Responsibility Act”) would require closed-loop cooling with “zero net water withdrawal,” ban the use of municipal water for cooling, and require that permitting, infrastructure, and operational costs be fully funded by the data center itself.

Whatever the fate of that bill, the direction is clear: communities are tired of being told “trust us” while their long-term water and power planning is quietly rearranged.

What I’d like Spartanburg County to require before calling this “responsible growth”

If Spartanburg County wants to be a serious steward of its future, here’s what I’d want attached to any incentives or approvals…in writing, enforceable, and public:

1. Annual public reporting of electricity use, peak demand, water withdrawal, and cooling approach.
2. A clear statement of on-site generation: fuel type, capacity, expected operating profile, emissions controls, and total permitted hours.
3. Third-party verification of any “closed-loop” and “negligible impact” claims.
4. A ratepayer protection plan: who pays for grid upgrades, and how residential customers are insulated from speculative overbuild.
5. A community benefits agreement that actually matches the footprint (workforce training, environmental monitoring funds, emergency response support, local resilience investments).
6. Noise and light mitigation standards, monitored and enforceable.

I’m certainly not anti-technology. I’m pro-accountability. If we’re going to host infrastructure that makes AI possible, then we should demand the same civic clarity we’d demand from any other industrial operation.

The spiritual crisis here isn’t that we use power. It’s that we grow accustomed to not knowing what our lives require. One of the ways we lose the world is by letting the infrastructures that sustain our days become illegible to us. A data center can be an occasion for that loss, or it can become an occasion for renewed legibility, for a more honest accounting, for a more careful local imagination about what we are building and why.

Because in the end, the Upstate’s question isn’t whether we can attract big projects. It’s whether we can keep telling the truth about what big projects cost.

The Doomsday Clock at Eighty-Five Seconds to Midnight: An Invitation to Attention

The news that the Doomsday Clock now stands at eighty-five seconds to midnight is not, in itself, the most important thing about this moment. The number is arresting, and the coverage tends to amplify its urgency. But the deeper question raised by this year’s announcement is not how close we are to catastrophe. It is how we are learning, or failing, to attend to the conditions that make catastrophe thinkable in the first place.

What the Clock reflects is not a single looming disaster but a convergence of unresolved tensions: nuclear instability, ecological breakdown, accelerating technologies, and political fragmentation (not to mention our spiritual crisis and the very real scenes we’re seeing with our own eyes in each of our communities with federal authorities and directed violence here in the United States).

These are not isolated threats. They form a dense field of entanglement, reinforcing one another across systems we have built but no longer fully understand or govern. The Clock does not merely measure danger. It reveals a world stretched thin by its own speed.

One risk of symbolic warnings like this is that they can tempt us into abstraction. “Eighty-five seconds to midnight” can feel cinematic, even mythic, while the realities beneath it, such as warming soils, poisoned waters, eroded trust, and automated corporatist decision-making, remain oddly distant. When risk becomes spectacle, attention falters. And when attention falters, responsibility diffuses (part of the aim of keeping us distracted with screens and political theater).

This is where I think the Clock’s real work begins. It presses on a crisis not only of policy or technology, but of perception. We have grown adept at responding to emergencies that erupt suddenly, and far less capable of staying with harms that unfold slowly, relationally, and across generations. Climate disruption, ecological loss, and technological overreach do not arrive as single events. They address us quietly, repeatedly, asking whether we are willing to notice what is already being asked of us.

In earlier posts, I’ve suggested that empathy is not first an ethical achievement but a mode of perception, or a way “the world” comes to matter. Attention works in a similar register. It is not merely focus or vigilance. It is a practiced openness to being addressed by what exceeds us. The Doomsday Clock, at its best, functions as a crude but persistent call to such attention. It interrupts complacency not by predicting the future, but by unsettling how we inhabit the present.

And here is where something genuinely hopeful emerges.

The Clock is not fate. It has moved away from midnight before, not through technological miracles alone, but through shifts in collective orientation, such as restraint, cooperation, treaty-making, and shared commitments to limits. Those movements were not perfect or permanent, but they remind us that attention can be cultivated and that perception can change. Worlds do not only end. They also reorient.

Hope, in this sense, is not confidence that things will turn out fine. It is the thing with feathers and the willingness to stay present to what is fragile without turning away or grasping for false reassurance. It is the discipline of attending to land, to neighbors, to systems we participate in but rarely see or acknowledge. It is the slow work of empathy extended beyond the human, allowing rivers, forests, and even future generations to count as more than abstractions.

Eighty-five seconds to midnight is not a verdict. It is an invitation to recover forms of attention capable of holding complexity without paralysis. An invitation to let empathy deepen into responsibility. An invitation to notice that the most meaningful movements away from catastrophe begin not with panic, but with learning how to listen again to the world as it is, and to the world as it might yet become.

The question, then, is not whether the clock will strike midnight. The question is whether we will accept the invitation it places before us to attend, to respond, and to live as if what we are already being asked to notice truly matters.

    Gigawatts and Wisdom: Toward an Ecological Ethics of Artificial Intelligence

    Elon Musk announced on X this week that xAI’s “Colossus 2” supercomputer is now operational, describing it as the world’s first gigawatt-scale AI training cluster, with plans to scale to 1.5 gigawatts by April. This single training cluster now consumes more electricity than San Francisco’s peak demand.

    There is a particular cadence to announcements like this. They arrive wrapped in the language of inevitability, scale, and achievement. Bigger numbers are offered as evidence of progress. Power becomes proof. The gesture is not just technological but symbolic, and it signals that the future belongs to those who can command energy, land, water, labor, and attention on a planetary scale (same as it ever was).

    What is striking is not simply the amount of electricity involved, though that should give us pause. A gigawatt is not an abstraction. It is rivers dammed, grids expanded, landscapes reorganized, communities displaced or reoriented. It is heat that must be carried away, water that must circulate, minerals that must be extracted. AI training does not float in the cloud. It sits somewhere. It draws from somewhere. It leaves traces.
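To make the scale concrete, here is a quick back-of-envelope sketch in Python. The 1-gigawatt figure comes from the announcement above; the average household draw (roughly 1.2 kW of continuous power, about 10,500 kWh per year) is my own assumed round number for illustration, and real figures vary considerably by region and season.

```python
# Back-of-envelope scale check for a 1-gigawatt training cluster.
# AVG_HOUSEHOLD_W is an assumed illustrative figure (~1.2 kW average
# continuous US household draw); actual values vary by region.

CLUSTER_POWER_W = 1_000_000_000   # 1 GW, as announced
AVG_HOUSEHOLD_W = 1_200           # assumed average household draw in watts

# How many homes could run continuously on the same power?
households = CLUSTER_POWER_W / AVG_HOUSEHOLD_W

# Annual energy if the cluster runs flat-out all year (watt-hours -> MWh)
annual_mwh = CLUSTER_POWER_W * 24 * 365 / 1_000_000

print(f"Equivalent households: ~{households:,.0f}")
print(f"Annual consumption: ~{annual_mwh:,.0f} MWh")
```

Under these assumptions, a single gigawatt-scale cluster draws power on the order of 800,000 homes and nearly nine terawatt-hours a year, which is why "a gigawatt is not an abstraction."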

    The deeper issue, though, is how casually this scale is presented as self-justifying.

    We are being trained, culturally, to equate intelligence with throughput. To assume that cognition improves in direct proportion to energy consumption. To believe that understanding emerges automatically from scale. This is an old story. Industrial modernity told it with coal and steel. The mid-twentieth century told it with nuclear reactors. Now we tell it with data centers.

    But intelligence has never been merely a matter of power input.

    From a phenomenological perspective, intelligence is relational before it is computational. It arises from situated attention, from responsiveness to a world that pushes back, from limits as much as from capacities. Scale can amplify, but it can also flatten. When systems grow beyond the horizon of lived accountability, they begin to shape the world without being shaped by it in return.

    That asymmetry matters.

    There is also a theological question here, though it is rarely named as such. Gigawatt-scale AI is not simply a tool. It becomes an ordering force, reorganizing priorities and imaginaries. It subtly redefines what counts as worth knowing and who gets to decide. In that sense, these systems function liturgically. They train us in what to notice, what to ignore, and what to sacrifice for the sake of speed and dominance.

    None of this requires demonizing technology or indulging in nostalgia. The question is not whether AI will exist or even whether it will be powerful. The question is what kind of power we are habituating ourselves to accept as normal.

    An ecology of attention cannot be built on unlimited extraction. A future worth inhabiting cannot be sustained by systems that require cities’ worth of electricity simply to refine probabilistic text generation. At some point, the metric of success has to shift from scale to care, from domination to discernment, from raw output to relational fit.

    Gigawatts tell us what we can do.
    They do not tell us what we should become.

    That remains a human question. And increasingly, an ecological one.

    Here’s the full paper in PDF, or you can read it on Academia.edu:

    After the Crossroads: Artificial Intelligence, Place-Based Ethics, and the Slow Work of Moral Discernment

    Over the past year, I’ve been tracking a question that began with a simple observation: Artificial intelligence isn’t only code or computation; it’s infrastructure. It eats electricity and water. It sits on land. It reshapes local economies and local ecologies. It arrives through planning commissions and energy grids rather than through philosophical conference rooms.

    That observation was the starting point of my November 2025 piece, “Artificial Intelligence at the Crossroads of Science, Ethics, and Spirituality.” In that first essay, I tried to draw out the scale of the stakes: the often-invisible material costs of AI, the ethical lacunae in policy debates, and the deep metaphysical questions we’re forced to confront when we think about artificial “intelligence” not as an abstraction but as an embodied presence in our world. If you haven’t read it yet, I’d recommend starting there, as it provides the grounding that makes the new essay more than just a sequel.

    Here’s the extended follow-up titled “After the Crossroads: Artificial Intelligence, Place-Based Ethics, and the Slow Work of Moral Discernment.” This piece expands the argument in several directions, and, I hope, deepens it.

    If the first piece asked “What is AI doing here?”, this new essay asks “How do we respond, ethically and spiritually, when AI is no longer just a future possibility but a present reality?”

    A few key parts:

    1. From Abstraction to Emplacement

    AI isn’t floating in the cloud; it’s rooted in specific places with particular water tables, zoning laws, and bodies of people. Understanding AI ethically means understanding how it enters lived space, not just conceptual space.

    2. Infrastructure as Moral Problem

    The paper foregrounds the material aspects of AI, including data centers, energy grids, and water use, and treats these not as technical issues but as moral and ecological issues that call for ethical attention and political engagement.

    3. A Theological Perspective on Governance

    Drawing on ecological theology, liberation theology, and phenomenology, the essay reframes governance not as bureaucracy but as a moral practice. Decisions about land use, utilities, and community welfare become questions of justice, care, and collective responsibility.

    4. Faith Communities as Ethical Agents

    One of my central claims is that faith communities, including churches, are uniquely positioned to foster the moral formation necessary for ethical engagement with AI. These are communities in which practices of attention, patience, deliberation, and shared responsibility are cultivated through the ordinary rhythms of life (ideally).

    This perspective is neither technophobic nor naïvely optimistic about innovation. It insists that ethical engagement with AI must be slow, embodied, and rooted in particular communities, not dissolved into abstract principles.

    Why This Matters Now

    AI is no longer on the horizon. Its infrastructure is being built today, in places like ours (especially here in the Carolinas), with very material ecological footprints. These developments raise moral questions not only about algorithmic bias or job displacement, important as those topics are, but also about water tables, electrical grids, local economies, and democratic agency.

    Those are questions not just for experts, but for communities, congregations, local governments, and engaged citizens.

    This essay is written for people of faith, people of conscience, and anyone concerned with how technology shapes places and lives, anyone who wants to take these questions seriously without losing their grip on complexity.

    I’m also planning shorter, reader-friendly versions of key sections, including one you can share with your congregation or community group.

    We’re living in a time when theological attention and civic care overlap in real places, and it matters how we show up.

    Abstract

    This essay extends my earlier analysis of artificial intelligence (AI) as a convergence of science, ethics, and spirituality by deliberately turning toward questions of place, local governance, and moral formation. While much contemporary discourse on AI remains abstract or global in scale, the material realities of AI infrastructure increasingly manifest at the local level through data centers, energy demands, water use, zoning decisions, and environmental impacts. Drawing on ecological theology, phenomenology, and political theology, this essay argues that meaningful ethical engagement with AI requires slowing technological decision-making, recentering embodied and communal discernment, and reclaiming local democratic and spiritual practices as sites of moral agency. Rather than framing AI as either salvific or catastrophic, I propose understanding AI as a mirror that amplifies existing patterns of extraction, care, and neglect. The essay concludes by suggesting that faith communities and local institutions play a crucial, underexplored role in shaping AI’s trajectory through practices of attentiveness, accountability, and place-based moral reasoning.

    Quantum–Plasma Consciousness and the Ecology of the Cross

    I’ve been thinking a good deal about plasma, physics, artificial intelligence, consciousness, and my ongoing work on The Ecology of the Cross, as all of those areas of my own interest are connected. Having taught AP Physics, Physics, Physical Science, Life Science, Earth and Space Science, and AP Environmental Science for the last 20 years or so, I feel like this is one of those frameworks I’ve been building toward for decades.

    So, here’s a longer paper exploring some of that, with a bibliography of recent scientific research and philosophical and theological insights that I’m pretty proud of (thanks, Zotero and Obsidian!).

    Abstract

    This paper develops a relational cosmology, quantum–plasma consciousness, that integrates recent insights from plasma astrophysics, quantum foundations, quantum biology, consciousness studies, and ecological theology. Across these disciplines, a shared picture is emerging: the universe is not composed of isolated substances but of dynamic, interdependent processes. Plasma research reveals that galaxy clusters and cosmic filaments are shaped by magnetized turbulence, feedback, and self-organization. Relational interpretations of quantum mechanics show that physical properties arise only through specific interactions, while quantum biology demonstrates how coherence and entanglement can be sustained in living systems. Together, these fields suggest that relationality and interiority are fundamental features of reality. The paper brings this scientific picture into dialogue with ecological theology through what I call The Ecology of the Cross. This cruciform cosmology interprets openness, rupture, and transformation, from quantum interactions to plasma reconnection and ecological succession, as intrinsic to creation’s unfolding. The Cross becomes a symbol of divine participation in the world’s vulnerable and continually renewing relational processes. By reframing consciousness as an intensified, self-reflexive mode of relational integration, and by situating ecological crisis and AI energy consumption within this relational ontology, the paper argues for an ethic of repairing relations and cultivating spiritual attunement to the interiorities of the Earth community.

    PDF download below…

    Artificial Intelligence at the Crossroads of Science, Ethics, and Spirituality

    I’ve been interested in seeing how corporate development of AI data centers (and their philosophies and ethical considerations) has dominated the conversation, rather than inviting in other local and metaphysical voices to help shape this important human endeavor. This paper explores some of those possibilities (PDF download available here…)

    OpenAI’s ‘ChatGPT for Teachers’

    K-12 education in the United States is going to look VERY different in just a few short years…

    OpenAI rolls out ‘ChatGPT for Teachers’ for K-12 educators:

    OpenAI on Wednesday announced ChatGPT for Teachers, a version of its artificial intelligence chatbot that is designed for K-12 educators and school districts.

    Educators can use ChatGPT for Teachers to securely work with student information, get personalized teaching support and collaborate with colleagues within their district, OpenAI said. There are also administrative controls that district leaders can use to determine how ChatGPT for Teachers will work within their communities.

    Boomer Ellipsis…

    As a PhD student… I do a lot of writing. I love ellipses, especially in Canvas discussions with professors and classmates as I near the finish line of my coursework.

    I’m also a younger Gen X’er / Early Millennial (born in ’78 but was heavily into tech and gaming from the mid-’80s because my parents were amazingly tech-forward despite us living in rural South Carolina). The “Boomer Ellipsis” take makes me very sad, since I now avoid em dashes as much as possible because of AI… and now I’m going to be called a boomer for using… ellipses.

    Let’s just all write more. Sigh. Here’s my obligatory old man dad emoji 👍

    On em dashes and ellipses – Doc Searls Weblog:

    While we’re at it, there is also a “Boomer ellipsis” thing. Says here in the NY Post, “When typing a large paragraph, older adults might use what has been dubbed “Boomer ellipses” — multiple dots in a row also called suspension points — to separate ideas, unintentionally making messages more ominous or anxiety-inducing and irritating Gen Z.” (I assume Brooke Kato, who wrote that sentence, is not an AI, despite using em dashes.) There is more along the same line from Upworthy and NDTV.

    OpenAI’s Sky for Mac

    This is going to be one of those acquisition moments we look back on in a few years (months?) and think “wow! that really changed the game!” sort of like when Google acquired Writely to make Google Docs…

    OpenAI’s Sky for Mac wants to be your new work buddy and maybe your boss | Digital Trends:

    So, OpenAI just snapped up a small company called Software Applications, Inc. These are the folks who were quietly building a really cool AI assistant for Mac computers called “Sky.”

    Prompt Injection Attacks and ChatGPT Atlas

    Good points here by Simon Willison about the new ChatGPT Atlas browser from OpenAI…

    Introducing ChatGPT Atlas:

    I’d like to see a deep explanation of the steps Atlas takes to avoid prompt injection attacks. Right now it looks like the main defense is expecting the user to carefully watch what agent mode is doing at all times!

    Amazon’s Plans to Replace 500,000 Human Jobs With Robots

    Speaking of AI… this isn’t only about warehouse jobs but will quickly ripple out to other employers (and employees)…

    Amazon Plans to Replace More Than Half a Million Jobs With Robots – The New York Times:

    Executives told Amazon’s board last year that they hoped robotic automation would allow the company to continue to avoid adding to its U.S. work force in the coming years, even though they expect to sell twice as many products by 2033. That would translate to more than 600,000 people whom Amazon didn’t need to hire.

    OpenAI’s ChatGPT Atlas Browser

    Going to be interesting to see if their new browser picks up adoption in the mainstream and what new features it might have compared to others (I’ve tested out Opera’s and Perplexity’s AI browsers but couldn’t recommend either at this point)… agentic browsing is definitely the new paradigm, though.

    OpenAI is about to launch its new AI web browser, ChatGPT Atlas | The Verge:

    Reuters reported in July that OpenAI was preparing to launch an AI web browser, with the company’s Operator AI agent built into the browser. Such a feature would allow Operator to book restaurant reservations, automatically fill out forms, and complete other browser actions.