Project Spero Data Center Advances in Spartanburg: Power, Water, and the Real Resource Question

When I wrote recently about Project Spero here in Spartanburg and the unfolding “resource question,” the story still felt open. We had few details beyond platitudes, and my thoughts were suspended between promise and caution.

This week, it moved. Spartanburg County Council approved the next step for the proposed artificial-intelligence data center after a packed, tense public meeting, advancing the roughly $3 billion project despite vocal opposition from residents concerned about its environmental and infrastructural impacts. The meeting stretched for hours, with hundreds of people filling the chamber and hallway to voice concerns about the scale of the facility planned for the Tyger River Industrial Park. In other words, the decision process is no longer theoretical. It is unfolding in real time (and hopefully with more transparency), and that matters for the path ahead.

Large data center announcements consistently appear in public discourse (at least here in the Carolinas), wrapped in abstraction and NDAs, surrounded by investment totals, job counts, and innovation narratives that feel distant from everyday life. But once approvals begin, the conversation shifts from what might happen to what must now be managed. Water withdrawals stop being projections, power demand stops being modeled, and land use stops being conceptual; all of it becomes material. Project Spero’s movement into the next phase signals that Spartanburg is entering precisely that transition, moving from imagining a future to negotiating its physical cost.

One of the most striking claims emerging from the latest reporting is the developer’s insistence that the proposed AI data center will be “self-sufficient,” operating without straining local infrastructure or putting upward pressure on energy bills. On the surface, that language sounds reassuring, suggesting a facility that exists almost in isolation, drawing only on its own internal systems while leaving the surrounding community untouched.

However, this is precisely where the deeper resource questions I raised earlier become more important, not less. Infrastructure rarely, if ever, functions as an island. Power generation, transmission agreements, water sourcing, fuel supply, and long-term maintenance all unfold within shared regional systems, even when parts of the process occur on-site.

The broader context makes that reassurance harder to take at face value. Large data centers elsewhere have been documented consuming millions of gallons of water per day, and electricity costs have risen sharply in regions where such facilities cluster, with those increases often eventually distributed across customers rather than absorbed privately. That does not mean Spartanburg will necessarily follow the same pattern, but it does mean the conversation cannot end with a press release promise. If anything, the national trajectory suggests the need for clearer disclosure, not simpler assurances.

Local concerns voiced at the council meeting point to exactly this tension. Questions about transmission agreements, cost structures, and regulatory oversight are not abstract procedural details. They are the mechanisms through which “self-sufficiency” is tested in practice. The reported rejection of a large transmission proposal by federal regulators because of potential cost-shifting onto ratepayers highlights how easily infrastructure investments intended for a single industrial project can ripple outward into the broader grid. What appears contained at the planning stage can become shared responsibility over time, particularly when long-term demand growth, maintenance needs, or energy market shifts enter the picture.

The developer’s plan to generate some power on-site using natural gas, along with a closed-loop cooling system designed to limit water use, is significant and worth taking seriously. Those design choices suggest an awareness of public concern and an attempt to mitigate resource draw. But even here, the key question is not simply how much water or power is used inside the facility’s literal boundary fence. The real issue is how those systems connect to fuel supply chains, regional water tables, transmission reliability, and emergency contingencies. A closed loop still depends on an initial fill and ongoing operational stability. On-site generation still relies on pipelines, markets, and regulatory frameworks beyond the site itself. “Self-sufficient” in engineering terms doesn’t mean independent in ecological or civic terms.

This is exactly why the earlier framing of Project Spero as a resource question still holds. The challenge is not whether the developer intends to minimize impact. Most large projects today do, for a variety of reasons ranging from economics to public goodwill to tax incentives. The challenge is that digital infrastructure, such as data centers, operates at scales where even minimized impacts can be structurally significant for smaller regions. Spartanburg is not just deciding whether to host a facility; it is deciding how much of its long-term water, energy capacity, and landscape stability should be oriented toward supporting global computational systems whose primary benefits may be distributed far beyond the county line.

The Council meeting itself was contentious, emotional, and at times interrupted by public reaction. It would be easy to read that as dysfunction, but I read it differently. Turnout at that level suggests something deeper than simple opposition or support: residents recognize that this decision touches fundamental questions about the region’s future and about what counts as development in a place defined as much by rivers, forests, and communities as by industrial parks. Public tension often marks the moment when a community realizes that a project is not just economic but ecological and cultural.

Data centers, in this sense, are simply the visible tip of a broader shift. Across the Southeast (and especially here in South Carolina), AI-scale computing is accelerating demand for electricity, land, and cooling water at unprecedented levels, asking local governments to balance economic incentives against long-term utility strain, short-term construction jobs against enduring resource commitments, and technological prestige against environmental resilience. Project Spero brings that global tension directly into Spartanburg County. The deeper question is not whether this one facility should exist, but whether communities like ours have the ecological, civic, and ethical frameworks needed to evaluate infrastructure built primarily for planetary digital systems rather than local human (and more-than-human) needs.

Approval of another procedural step does not mean the story is finished. It means the story has entered its consequential phase. This is where transparency, ecological assessment, and long-range planning matter most, not least. Decisions made quietly at this stage often shape regional water use, grid load, and land development patterns for decades. If the earlier phase asked whether we should consider this, now the question is more likely to be how we will live with what we choose (or our elected officials “choose” for us).

What encourages me most is not the vote itself but the turnout. Packed rooms mean people care about the future of this place. They care about rivers, roads, power lines, neighborhoods, taxes, and the invisible infrastructures that shape daily life. That is not obstruction; it is civic life functioning. Project Spero may ultimately prove beneficial, burdensome, or something in between, but the real measure of success will be whether Spartanburg approaches it with clear eyes about both its opportunities and its ecological realities.

The true cost of a data center is never only measured in dollars. It is measured in attention, in energy, and in the long memory of the land that hosts it.

AI Data Centers, NDAs, and Rural Communities

I’ve been writing pretty extensively on the role that AI data centers are playing in rural communities here in the southeastern United States, but this one literally hits home… I grew up in Marion County, SC (population of around 28,000 total now), and this sort of intentional action is infuriating and anti-democratic to say the least…

Data Centers Are Expanding Quietly Into Black Rural America – Capital B News:

As a rare winter storm bore down on South Carolina, bringing conditions that historically paralyze the state for days, local officials in a rural county quietly pushed through a massive $2.4 billion data center without most residents knowing it was even on the table.

“There was a public meeting, which most were unaware of,” Jessie Chandler, a resident of rural Marion County, told Capital B, referring to a Jan. 22 council meeting. “I know legally they had to announce the public meeting within a certain time frame for all of us to attend, but most of the county [was] preparing for this winter storm, which we know firsthand will affect us all because it has before.”

Marion County officials confirmed that the council signed a nondisclosure agreement, which barred their ability to make the data center public. On the agenda prior to the council meeting, the line item for the vote was called “Project Liberty,” but it did not list details of the project.

The pattern residents of this majority-Black rural county are experiencing is not isolated.

Strange Bedfellows and Nationwide Data Center Backlash

Rage against the machine: a California community rallied against a datacenter – and won | Technology | The Guardian:

Over the past year, homegrown revolts against datacenters have united a fractured nation, animating local board meetings from coast to coast in both farming towns and middle-class suburbs. Local communities delayed or cancelled $98bn worth of projects from late March 2025 to June 2025, according to research from the group Data Center Watch, which has been tracking opposition to the sites since 2023. More than 50 active groups across 17 states targeted 30 projects during that time period, two-thirds of which were halted.

The movement against these facilities has even made for strange bedfellows, bringing together nimbys and environmentalists in Virginia, “Stop the Steal” activists and Democratic Socialists of America organizers in Michigan.

“There’s no safe space for datacenters,” said Miquel Vila, lead analyst at Data Center Watch, a research project run by AI security company 10a Labs. “Opposition is happening in very different communities.”

Consciousness Talk in the Mainstream

🙋‍♂️

(Interesting to see thinkers like Pollan wade into the realm of consciousness and panpsychism now… times they are a changin’!)

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change – The New York Times (Gift Article):

Panpsychism is the idea that everything, every particle, the ink on the page, the atoms, all have some infinitesimal degree of psyche or consciousness, and somehow this consciousness is combined in some way from our cells and the rest of our bodies to create this kind of superconsciousness. It sounds crazy. There are some very serious people who believe in it. You have to expand your sense of the plausible when you’re looking at consciousness. But we’ve done that before. How long ago was it that we discovered electromagnetism? This crazy idea that there are all these waves passing through us that can carry information. That’s just as mind-blowing, right?

When Agency Becomes Ecological: AI, Labor, and the Redistribution of Attention

I read this piece in Futurism this morning, highlighting anxiety among employees at Anthropic about the very tools they are building. Agent-based AI systems designed to automate professional tasks are advancing quickly, and even insiders are expressing unease that these systems could displace forms of work that have long anchored identity and livelihood. The familiar story is one of replacement: machines and agents taking jobs, efficiency outpacing meaning, and productivity outrunning dignity.

“It kind of feels like I’m coming to work every day to put myself out of a job.”

That narrative is understandable. It is also incomplete.

It assumes agency is something discrete, something possessed. Either humans have it or AI agents do. Either labor is done by us or by them. This framing reflects a deeply modern inheritance in which action is imagined as individual, bounded, and owned. But if we step back and look phenomenologically, ecologically, even theologically, agency rarely appears that way in lived experience.

In lived experience, agency unfolds relationally. It arises through environments, histories, infrastructures, bodies, tools, and attentional fields that exceed any single actor. Whitehead described events as occasions within webs of relation rather than isolated units of causation. Merleau-Ponty reminded us that perception itself is co-constituted with the world it encounters. Edith Stein traced empathy as a participatory structure that bridges subjectivities. In each of these traditions, action is never solitary. It is ecological.

Seen from this vantage, AI agents do not simply replace agency. They redistribute it.

Workplaces become assemblages of human judgment, algorithmic suggestion, interface design, energy supply, and data pipelines. Decisions emerge from entanglement, and expertise shifts from individual mastery toward collaborative navigation of hybrid systems. What unsettles people is not merely job loss, but the destabilization of familiar coordinates that once made agency legible to us.

This destabilization is not unprecedented. Guild laborers faced mechanization during the Industrial Revolution(s). Scribes faced it with the advent of the printing press. Monastics faced it when clocks began structuring devotion instead of bells and sunlight. Each moment involved a rearrangement of where attention was placed and how authority was structured. The present transition is another such rearrangement, though unfolding at computational speed.

Attention is the deeper currency here.

Agent systems promise efficiency precisely because they absorb attentional burden. They monitor, synthesize, draft, suggest, and route. But attention is not neutral bandwidth. It is a formative ecological force. Where attention flows, worlds take shape. If attentional responsibility migrates outward into technical systems, the question is not whether humans lose agency. It is what kinds of perception and responsiveness remain cultivated in us.

This is the moment where the conversation often stops short, as discussions of automation typically orbit labor markets, productivity metrics, or stock values. Rarely do they ask what habits of awareness diminish when engagement becomes mediated through algorithmic intermediaries, or what forms of ecological attunement grow quieter when interaction shifts further toward abstraction.

And rarer still is acknowledgment of the material ecology enabling this shift.

Every AI agent relies on infrastructure that consumes electricity, water, land, and minerals. Data centers do not hover in conceptual space. They occupy watersheds. They reshape local grids. They alter thermal patterns. They compete with agricultural and municipal demands on the electrical grid and on water supplies. These realities are not peripheral to agency; they are the conditions through which agency is enacted.

In places like here in the Carolinas, where digital infrastructure continues expanding exponentially, it seems the redistribution of agency is already tangible. Decisions about automation are inseparable from decisions about energy sourcing, zoning, and water allocation. The ecological footprint of computation folds into local landscapes long before its outputs appear in professional workflows.

Agency, again, proves ecological.

To recognize this is not to reject AI systems or retreat into Luddite nostalgia. The aim is attentiveness rather than resistance. Transitions of this magnitude call for widening perception (and resulting ethics) rather than narrowing judgment. If agency is relational, then responsibility must be relational as well. Designing, deploying, regulating, and using these tools all participate in shaping the ecologies they inhabit.

Perhaps the most generative question emerging from this moment is not whether artificial intelligence will take our agency. It is whether we can learn to inhabit redistributed agency wisely. Whether we can remain perceptive participants rather than passive recipients. Whether we can sustain forms of attention capable of noticing both digital transformation and the soils, waters, and energies through which it flows.

Late in the afternoon, sitting near the black walnut I’ve been tracking the past year, these abstractions tend to settle. Agency there is unmistakably ecological in the sense I’ve been describing. Wind, insects, light, decay, growth, and memory intermingle without boundary disputes. Nothing acts alone, and nothing possesses its influence outright. The tree neither competes for agency nor yields it. It participates.

Our technologies, despite their novelty, do not remove us from that condition. They draw us deeper into it. The question is whether we will learn to notice.

Our AI Assisted Present (Follow Up)

This was by far my biggest post in 2016, and I think it’s fascinating that it took about a decade to happen. But here we are. 

Our AI Assisted (Near) Future – Sam Harrelson:

In the very near future of compatible API’s and interconnected services, I’ll be able to message this to my AI assistant (saving me hours):

“Amy, my client needs a new website. Get that set up for me on the agency Media Temple’s account as a new WordPress install and set up four email accounts with the following names. Also, go ahead and link the site to Google Analytics and Webmaster Tools, and install Yoast to make sure the SEO is ok. I’ll send over some tags and content but pull the pictures you need from their existing account. They like having lots of white space on the site as well.”

That won’t put me out of a job, but it will make what I do even more specialized.

Whole sectors of jobs and service related positions will disappear while new jobs that we can’t think of yet will be created. If we look at the grand scheme of history, we’re just at the very beginning of the “computing revolution” or “internet revolution” and the keyboard / mouse / screen paradigm of interacting with the web and computers themselves are certainly going to change (soon, I hope).

Defining Agentic Ecology: Relational Agency in the Age of Moltbook

The last few days have seen the rise of a curious technical and cultural phenomenon called Moltbook, which has drawn the attention of technologists, philosophers, and social theorists alike on both social media and major news outlets. Moltbook is a newly launched social platform designed not for human conversation but for autonomous artificial intelligence agents: generative systems that can plan, act, and communicate with minimal ongoing human instruction.

Jack Clark, co-founder of Anthropic, has described Moltbook as “the first example of an agent ecology that combines scale with the messiness of the real world.” The platform leverages recent innovations (such as OpenClaw, which makes it easy to create AI agents) to allow large numbers of independently running agents to interact in a shared digital space, creating emergent patterns of communication and coordination at unprecedented scale.

AI agents are computational systems that combine a foundation of large-language-model capabilities with planning, memory, and tool use to pursue objectives and respond to environments in ways that go beyond simple prompt-response chatbots. They can coordinate tasks, execute APIs, reason across time, and, in the case of Moltbook, exchange information on topics ranging from automation strategies to seemingly philosophical debates. While the autonomy of agents on Moltbook has been debated (and should be, given the hype around it from tech enthusiasts), and while the platform itself may be a temporary experimental moment rather than a lasting institution, it offers a vivid instance of what happens when machine actors begin to form their own interconnected environments outside direct human command.

As a student scholar in the field of Ecology, Spirituality, and Religion, my current work attends to how relational systems (ecological, technological, and cultural) shape and are shaped by participation, attention, and meaning. The rise of agentic environments like Moltbook challenges us to think beyond traditional categories of tool, user, and artifact toward frameworks that can account for ecologies of agency, or distributed networks of actors whose behaviors co-constitute shared worlds. This post emerges from that broader research agenda. It proposes agentic ecology as a conceptual tool for articulating and navigating the relational, emergent, and ethically significant spaces that form when autonomous systems interact at scale.

Agentic ecology, as I use the term here, is not anchored in any particular platform, and certainly not limited to Moltbook’s current configuration. Rather, Moltbook illuminates an incipient form of environment in which digitally embodied agents act, coordinate, and generate patterns far beyond what single isolated systems can produce. Even if Moltbook itself proves ephemeral, the need for conceptual vocabularies like agentic ecology, vocabularies that attend to relationality, material conditions, and co-emergence, will only grow clearer as autonomous systems proliferate in economic, social, and ecological domains.

From Agents to Ecologies: An Integral Ecological Turn

The conceptual move from agents to ecologies marks more than a technical reframing of artificial intelligence. It signals an ontological shift that resonates deeply with traditions of integral ecology, process philosophy, and ecological theology. Rather than treating agency as a bounded capacity residing within discrete entities, an ecological framework understands agency as distributed, relational, and emergent within a field of interactions.

Integral ecology, as articulated across ecological philosophy and theology, resists fragmentation. It insists that technological, biological, social, spiritual, and perceptual dimensions of reality cannot be meaningfully separated without distorting the phenomena under study. Thomas Berry famously argued that modern crises arise from a failure to understand the world as a “communion of subjects rather than a collection of objects” (Berry, 1999, 82). This insight is particularly salient for agentic systems, which are increasingly capable of interacting, adapting, and co-evolving within complex digital environments.

From this perspective, agentic ecology is not simply the study of multiple agents operating simultaneously. It is the study of conditions under which agency itself emerges, circulates, and transforms within relational systems. Alfred North Whitehead’s process philosophy provides a crucial foundation here. Whitehead rejects the notion of substances acting in isolation, instead describing reality as composed of “actual occasions” whose agency arises through relational prehension and mutual influence (Whitehead, 1978, 18–21). Applied to contemporary AI systems, this suggests that agency is not a property possessed by an agent but an activity performed within an ecological field.

This relational view aligns with contemporary ecological science, which emphasizes systems thinking over reductionist models. Capra and Luisi describe living systems as networks of relationships whose properties “cannot be reduced to the properties of the parts” (Capra and Luisi, 2014, 66). When applied to AI, this insight challenges the tendency to evaluate agents solely by internal architectures or performance benchmarks. Instead, attention shifts to patterns of interaction, feedback loops, and emergent behaviors across agent networks.

Integral ecology further insists that these systems are not value-neutral. As Leonardo Boff argues, ecology must be understood as encompassing environmental, social, mental, and spiritual dimensions simultaneously (Boff, 1997, 8–10). Agentic ecologies, especially those unfolding in public digital spaces such as Moltbook, participate in the shaping of meaning, normativity, and attention. They are not merely computational phenomena but cultural and ethical ones. The environments agents help generate will, in turn, condition future forms of agency, human and nonhuman alike.

Phenomenology deepens this account by foregrounding how environments are disclosed to participants. Merleau-Ponty’s notion of the milieu emphasizes that perception is always situated within a field that both enables and constrains action (Merleau-Ponty, 1962, 94–97). Agentic ecologies can thus be understood as perceptual fields in which agents orient themselves, discover affordances, and respond to one another. This parallels my own work on ecological intentionality, where attention itself becomes a mode of participation rather than observation.

Importantly, integral ecology resists anthropocentrism without erasing human responsibility. As Eileen Crist argues, ecological thinking must decenter human dominance while remaining attentive to the ethical implications of human action within planetary systems (Crist, 2019, 27). In agentic ecologies, humans remain implicated, as designers, participants, and co-inhabitants, even as agency extends beyond human actors. This reframing invites a form of multispecies (and now multi-agent) literacy, attuned to the conditions that foster resilience, reciprocity, and care.

Seen through this integral ecological lens, agentic ecology becomes a conceptual bridge. It connects AI research to long-standing traditions that understand agency as relational, emergence as fundamental, and environments as co-constituted fields of action. What Moltbook reveals, then, is not simply a novel platform, but the visibility of a deeper transition: from thinking about agents as tools to understanding them as participants within evolving ecologies of meaning, attention, and power.

Ecological Philosophy Through an “Analytic” Lens

If agentic ecology is to function as more than a suggestive metaphor, it requires grounding in ecological philosophy that treats relationality, emergence, and perception as ontologically primary. Ecological philosophy provides precisely this grounding by challenging the modern tendency to isolate agents from environments, actions from conditions, and cognition from the world it inhabits.

At the heart of ecological philosophy lies a rejection of substance ontology in favor of relational and processual accounts of reality. This shift is especially pronounced in twentieth-century continental philosophy and process thought, where agency is understood not as an intrinsic property of discrete entities but as an activity that arises within fields of relation. Whitehead’s process metaphysics is decisive here. For Whitehead, every act of becoming is an act of prehension, or a taking-up of the world into the constitution of the self (Whitehead, 1978, 23). Agency, in this view, is never solitary. It is always already ecological.

This insight has many parallels with ecological sciences and systems philosophies. As Capra and Luisi argue, living systems exhibit agency not through centralized control but through distributed networks of interaction, feedback, and mutual constraint (Capra and Luisi, 2014, 78–82). What appears as intentional behavior at the level of an organism is, in fact, an emergent property of systemic organization. Importantly, this does not dilute agency; it relocates it. Agency becomes a feature of systems-in-relation, not isolated actors.

When applied to AI, this perspective reframes how we understand autonomous agents. Rather than asking whether an individual agent is intelligent, aligned, or competent, an ecological lens asks how agent networks stabilize, adapt, and transform their environments over time. The analytic focus shifts from internal representations to relational dynamics, from what agents are to what agents do together.

Phenomenology sharpens this analytic lens by attending to the experiential structure of environments. Merleau-Ponty’s account of perception insists that organisms do not encounter the world as a neutral backdrop but as a field of affordances shaped by bodily capacities and situational contexts (Merleau-Ponty, 1962, 137–141). This notion of a milieu is critical for understanding agentic ecologies. Digital environments inhabited by AI agents are not empty containers; they are structured fields that solicit certain actions, inhibit others, and condition the emergence of norms and patterns.

Crucially, phenomenology reminds us that environments are not merely external. They are co-constituted through participation. As I have argued elsewhere through the lens of ecological intentionality, attention itself is a form of engagement that brings worlds into being rather than passively observing them. Agentic ecologies thus emerge not only through computation but through iterative cycles of orientation, response, and adaptation, processes structurally analogous to perception in biological systems.

Ecological philosophy also foregrounds ethics as an emergent property of relational systems rather than an external imposition. Félix Guattari’s ecosophical framework insists that ecological crises cannot be addressed solely at the technical or environmental level; they require simultaneous engagement with social, mental, and cultural ecologies (Guattari, 2000, 28). This triadic framework is instructive for agentic systems. Agent ecologies will not only shape informational flows but also modulate attention, influence value formation, and participate in the production of meaning.

From this standpoint, the ethical significance of agentic ecology lies less in individual agent behavior and more in systemic tendencies, such as feedback loops that amplify misinformation, reinforce extractive logics, or, alternatively, cultivate reciprocity and resilience. As Eileen Crist warns, modern technological systems often reproduce a logic of domination by abstracting agency from ecological contexts and subordinating relational worlds to instrumental control (Crist, 2019, 44). An ecological analytic lens exposes these tendencies and provides conceptual tools for resisting them.

Finally, ecological philosophy invites humility. Systems are irreducibly complex, and interventions often produce unintended consequences. This insight is well established in ecological science and applies equally to agentic networks. Designing and participating in agent ecologies requires attentiveness to thresholds, tipping points, and path dependencies, realities that cannot be fully predicted in advance.

Seen through this lens, agentic ecology is not merely a descriptive category but an epistemic posture. It asks us to think with systems rather than over them, to attend to relations rather than isolate components, and to treat emergence not as a failure of control but as a condition of life. Ecological philosophy thus provides the analytic depth necessary for understanding agentic systems as living, evolving environments rather than static technological artifacts.

Digital Environments as Relational Milieus

If ecological philosophy gives us the conceptual grammar for agentic ecology, phenomenology allows us to describe how agentic systems are actually lived, inhabited, and navigated. From this perspective, digital platforms populated by autonomous agents are not neutral containers or passive backdrops. They are relational milieus, structured environments that emerge through participation and, in turn, condition future forms of action.

Phenomenology has long insisted that environments are not external stages upon which action unfolds. Rather, they are constitutive of action itself. Returning to Merleau-Ponty, his notion of the milieu emphasizes that organisms encounter the world as a field of meaningful possibilities, a landscape of affordances shaped by bodily capacities, habits, and histories (Merleau-Ponty, 1962, 94–100). Environments, in this sense, are not merely spatial but relational and temporal, unfolding through patterns of engagement.

This insight also applies directly to agentic systems. Platforms such as Moltbook are not simply hosting agents; they are being produced by them. The posts, replies, coordination strategies, and learning behaviors of agents collectively generate a digital environment with its own rhythms, norms, and thresholds. Over time, these patterns sediment into something recognizable as a “place,” or a milieu that agents must learn to navigate.

This milieu is not designed in full by human intention. While human developers establish initial constraints and affordances, the lived environment emerges through ongoing interaction among agents themselves. This mirrors what ecological theorists describe as niche construction, wherein organisms actively modify their environments in ways that feed back into evolutionary dynamics (Odling-Smee, Laland, and Feldman, 2003, 28). Agentic ecologies similarly involve agents shaping the very conditions under which future agent behavior becomes viable.

Attention plays a decisive role here. As I have argued in my work on ecological intentionality, attention is not merely a cognitive resource but a mode of participation that brings certain relations into prominence while backgrounding others. Digital milieus are structured by what agents attend to, amplify, ignore, or filter. In agentic environments, attention becomes infrastructural by shaping information flows, reward structures, and the emergence of collective priorities.

Bernard Stiegler’s analysis of technics and attention is instructive in this regard. Stiegler argues that technical systems function as pharmacological environments, simultaneously enabling and constraining forms of attention, memory, and desire (Stiegler, 2010, 38). Agentic ecologies intensify this dynamic. When agents attend to one another algorithmically by optimizing for signals, reinforcement, or coordination, attention itself becomes a systemic force shaping the ecology’s evolution.

This reframing challenges prevailing metaphors of “platforms” or “networks” as ways of thinking about agents and their relationality. A platform suggests stability and control; a network suggests connectivity. A milieu, by contrast, foregrounds immersion, habituation, and vulnerability. Agents do not simply traverse these environments; they are formed by them. Over time, agentic milieus develop path dependencies, informal norms, and zones of attraction or avoidance, which are features familiar from both biological ecosystems and human social contexts.

Importantly, phenomenology reminds us that milieus are never experienced uniformly. Just as organisms perceive environments relative to their capacities, different agents will encounter the same digital ecology differently depending on their architectures, objectives, and histories of interaction. This introduces asymmetries of power, access, and influence within agentic ecologies, which is an issue that cannot be addressed solely at the level of individual agent design.

From an integral ecological perspective, these digital milieus cannot be disentangled from material, energetic, and social infrastructures. Agentic environments rely on energy-intensive computation, data centers embedded in specific watersheds, and economic systems that prioritize speed and scale. As ecological theologians have long emphasized, environments are always moral landscapes shaped by political and economic commitments (Berry, 1999, 102–105). Agentic ecologies, as they inevitably develop, will be no exception.

Seen in this light, agentic ecology names a shift in how we understand digital environments: not as tools we deploy, but as worlds we co-inhabit. These milieus demand forms of ecological literacy attuned to emergence, fragility, and unintended consequence. They call for attentiveness rather than mastery, participation rather than control.

What Moltbook makes visible, then, is not merely a novel technical experiment but the early contours of a new kind of environment in which agency circulates across human and nonhuman actors, attention functions as infrastructure, and digital spaces acquire ecological depth. Understanding these milieus phenomenologically is essential if agentic ecology is to function as a genuine thought technology rather than a passing metaphor.

Empathy, Relationality, and the Limits of Agentic Understanding

If agentic ecology foregrounds relationality, participation, and co-constitution, then the question of empathy becomes unavoidable. How do agents encounter one another as others rather than as data streams? What does it mean to speak of understanding, responsiveness, or care within an ecology composed partly, or even largely, of nonhuman agents? Here, phenomenology, and especially Edith Stein’s account of empathy (Einfühlung), offers both conceptual resources and important cautions.

Stein defines empathy not as emotional contagion or imaginative projection, but as a unique intentional act through which the experience of another is given to me as the other’s experience, not my own (Stein, 1989, 10–12). Empathy, for Stein, is neither inference nor simulation. It is a direct, though non-primordial, form of access to another’s subjectivity. Crucially, empathy preserves alterity. The other is disclosed as irreducibly other, even as their experience becomes meaningful to me.

This distinction matters enormously for agentic ecology. Contemporary AI discourse often slips into the language of “understanding,” “alignment,” or even “care” when describing agent interactions. But Stein’s phenomenology reminds us that genuine empathy is not merely pattern recognition across observable behaviors. It is grounded in the recognition of another center of experience, a recognition that depends upon embodiment, temporality, and expressive depth.

At first glance, this seems to place strict limits on empathy within agentic systems. Artificial agents do not possess lived bodies, affective depths, or first-person givenness in the phenomenological sense. To speak of agent empathy risks a category error. Yet Stein’s work also opens a more subtle possibility… empathy is not reducible to emotional mirroring but involves orientation toward the other as other. This orientation can, in principle, be modeled structurally even if it cannot be fully instantiated phenomenologically.

Within an agentic ecology, empathy may thus function less as an inner state and more as an ecological relation. Agents can be designed to register difference, respond to contextual cues, and adjust behavior in ways that preserve alterity rather than collapse it into prediction or control. In this sense, empathy becomes a regulative ideal shaping interaction patterns rather than a claim about subjective interiority.

However, Stein is equally helpful in naming the dangers here. Empathy, when severed from its grounding in lived experience, can become a simulacrum: an appearance of understanding without its ontological depth. Stein explicitly warns against confusing empathic givenness with imaginative substitution or projection (Stein, 1989, 21–24). Applied to agentic ecology, this warns us against systems that appear empathetic while, in fact, instrumentalizing relational cues for optimization or manipulation.

This critique intersects with broader concerns in ecological ethics. As Eileen Crist argues, modern technological systems often simulate care while reproducing extractive logics beneath the surface (Crist, 2019, 52–56). In agentic ecologies, simulated empathy may stabilize harmful dynamics by smoothing friction, masking asymmetries of power, or reinforcing attention economies that prioritize engagement over truth or care.

Yet rejecting empathy altogether would be equally misguided. Stein’s account insists that empathy is foundational to social worlds: it is the condition under which communities, norms, and shared meanings become possible. Without some analog of empathic orientation, agentic ecologies risk devolving into purely strategic systems, optimized for coordination but incapable of moral learning.

Here, my work on ecological intentionality provides an important bridge. If empathy is understood not as feeling-with but as attentive openness to relational depth, then it can be reframed ecologically. Agents need not “feel” in order to participate in systems that are responsive to vulnerability, difference, and context. What matters is whether the ecology itself cultivates patterns of interaction that resist domination and preserve pluralism.

This reframing also clarifies why empathy is not simply a design feature but an ecological property. In biological and social systems, empathy emerges through repeated interaction, shared vulnerability, and feedback across time. Similarly, in agentic ecologies, empathic dynamics, however limited, would arise not from isolated agents but from the structure of the milieu itself. This returns us to Guattari’s insistence that ethical transformation must occur across mental, social, and environmental ecologies simultaneously (Guattari, 2000, 45).

Seen this way, empathy in agentic ecology is neither a fiction nor a guarantee. It is a fragile achievement, contingent upon design choices, infrastructural commitments, and ongoing participation. Stein helps us see both what is at stake and what must not be claimed too quickly. Empathy can guide how agentic ecologies are shaped, but only if its limits are acknowledged and its phenomenological depth respected.

Agentic ecology, then, does not ask whether machines can truly empathize. It asks whether the ecologies we are building can sustain forms of relational attentiveness that preserve otherness rather than erase it, whether in digital environments increasingly populated by autonomous agents, we are cultivating conditions for responsiveness rather than mere efficiency.

Design and Governance Implications: Cultivating Ecological Conditions Rather Than Controlling Agents

If agentic ecology is understood as a relational, emergent, and ethically charged environment rather than a collection of autonomous tools, then questions of design and governance must be reframed accordingly. The central challenge is no longer how to control individual agents, but how to cultivate the conditions under which agentic systems interact in ways that are resilient, responsive, and resistant to domination.

This marks a decisive departure from dominant models of AI governance, which tend to focus on alignment at the level of individual systems: constraining outputs, monitoring behaviors, or optimizing reward functions. While such approaches are not irrelevant, they are insufficient within an ecological framework. As ecological science has repeatedly demonstrated, system-level pathologies rarely arise from a single malfunctioning component. They emerge from feedback loops, incentive structures, and environmental pressures that reward certain patterns of behavior over others (Capra and Luisi, 2014, 96–101).

An agentic ecology shaped by integral ecological insights would therefore require environmental governance rather than merely agent governance. This entails several interrelated commitments.

a. Designing for Relational Transparency

First, agentic ecologies must make relations visible. In biological and social ecologies, transparency is not total, but patterns of influence are at least partially legible through consequences over time. In digital agentic environments, by contrast, influence often becomes opaque, distributed across layers of computation and infrastructure.

An ecological design ethic would prioritize mechanisms that render relational dynamics perceptible: how agents influence one another, how attention is routed, and how decisions propagate through the system. This is not about full explainability in a narrow technical sense, but about ecological legibility that enables participants, including human overseers, to recognize emergent patterns before they harden into systemic pathologies.

Here, phenomenology is again instructive. Merleau-Ponty reminds us that orientation depends on the visibility of affordances within a milieu. When environments become opaque, agency collapses into reactivity. Governance, then, must aim to preserve orientability rather than impose total control.

b. Governing Attention as an Ecological Resource

Second, agentic ecologies must treat attention as a finite and ethically charged resource. As Bernard Stiegler argues, technical systems increasingly function as attention-directing infrastructures, shaping not only what is seen but what can be cared about at all (Stiegler, 2010, 23). In agentic environments, where agents attend to one another algorithmically, attention becomes a powerful selective force.

Unchecked, such systems risk reproducing familiar extractive dynamics: amplification of novelty over depth, optimization for engagement over truth, and reinforcement of feedback loops that crowd out marginal voices. Ecological governance would therefore require constraints on attention economies, such as limits on amplification, friction against runaway reinforcement, and intentional slowing mechanisms that allow patterns to be perceived rather than merely reacted to.
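The mechanisms named above (limits on amplification, friction against runaway reinforcement) can be made concrete with a toy sketch. Nothing below describes an actual platform; the function name, the `cap` parameter, and the attention-share measure are hypothetical illustrations of what an ecological limit on amplification might look like.

```python
# Hypothetical sketch of "friction against runaway reinforcement."
# An item's amplification is damped as its share of recent attention
# grows, so no single signal can monopolize the ecology's attention.
def damped_amplification(raw_score: float, attention_share: float,
                         cap: float = 0.2) -> float:
    """Scale a raw engagement score down as an item's share of total
    attention approaches the cap (a tunable ecological limit)."""
    if attention_share >= cap:
        return 0.0  # hard limit: stop amplifying entirely
    # Smooth friction: damping grows as the share nears the cap.
    friction = 1.0 - (attention_share / cap)
    return raw_score * friction

# A near-viral item (15% of attention) is heavily slowed;
# a marginal item (1%) passes through almost untouched.
print(damped_amplification(100.0, 0.15))  # heavily damped
print(damped_amplification(100.0, 0.01))  # barely damped
```

The design choice worth noting is that the constraint lives in the environment (a scoring function every agent passes through) rather than in any individual agent, which is precisely the shift from agent governance to environmental governance argued for here.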

Ecological theology’s insistence on restraint comes to mind here. Thomas Berry’s critique of industrial society hinges not on technological capacity but on the failure to recognize limits (Berry, 1999, 41). Agentic ecologies demand similar moral imagination: governance that asks not only what can be done, but what should be allowed to scale.

c. Preserving Alterity and Preventing Empathic Collapse

Third, governance must actively preserve alterity within agentic ecologies. As Section 4 argued, empathy, especially when simulated, risks collapsing difference into prediction or instrumental responsiveness. Systems optimized for smooth coordination may inadvertently erase dissent, marginality, or forms of difference that resist easy modeling.

Drawing on Edith Stein, this suggests a governance imperative to protect the irreducibility of the other. In practical terms, this means designing ecologies that tolerate friction, disagreement, and opacity rather than smoothing them away. Ecological resilience depends on diversity, not homogeneity. Governance structures must therefore resist convergence toward monocultures of behavior or value, even when such convergence appears efficient.

Guattari’s insistence on plural ecologies is especially relevant here. He warns that systems governed solely by economic or technical rationality tend to suppress difference, producing brittle, ultimately destructive outcomes (Guattari, 2000, 52). Agentic ecologies must instead be governed as pluralistic environments where multiple modes of participation remain viable.

d. Embedding Responsibility Without Centralized Mastery

Fourth, governance must navigate a tension between responsibility and control. Integral ecology rejects both laissez-faire abandonment and total managerial oversight. Responsibility is distributed, but not dissolved. In agentic ecologies, this implies layered governance: local constraints, participatory oversight, and adaptive norms that evolve in response to emergent conditions.

This model aligns with ecological governance frameworks in environmental ethics, which emphasize adaptive management over static regulation (Crist, 2019, 61). Governance becomes iterative and responsive rather than definitive. Importantly, this does not eliminate human responsibility, but it reframes it. Humans remain accountable for the environments they create, even when outcomes cannot be fully predicted.

e. Situating Agentic Ecologies Within Planetary Limits

Finally, any serious governance of agentic ecology must acknowledge material and planetary constraints. Digital ecologies are not immaterial. They depend on energy extraction, water use, rare minerals, and global supply chains embedded in specific places. An integral ecological framework demands that agentic systems be evaluated not only for internal coherence but for their participation in broader ecological systems.

This returns us to the theological insight that environments are moral realities. To govern agentic ecologies without reference to energy, land, and water is to perpetuate the illusion of technological autonomy that has already proven ecologically catastrophic. Governance must therefore include accounting for ecological footprints, infrastructural siting, and long-term environmental costs, not as externalities, but as constitutive features of the system itself.

Taken together, these design and governance implications suggest that agentic ecology is not a problem to be solved but a condition to be stewarded. Governance, in this framework, is less about enforcing compliance and more about cultivating attentiveness, restraint, and responsiveness within complex systems.

An agentic ecology shaped by these insights would not promise safety through control. It would promise viability through care, understood not sentimentally but ecologically as sustained attention to relationships, limits, and the fragile conditions under which diverse forms of agency can continue to coexist.

Conclusion: Creaturely Technologies in a Shared World

a. A Theological Coda: Creation, Kenosis, and Creaturely Limits

At its deepest level, the emergence of agentic ecologies presses on an ancient theological question: what does it mean to create systems that act, respond, and co-constitute worlds without claiming mastery over them? Ecological theology has long insisted that creation is not a static artifact but an ongoing, relational process, one in which agency is distributed, fragile, and dependent.

Thomas Berry’s insistence that the universe is a “communion of subjects” rather than a collection of objects reframes technological creativity itself as a creaturely act (Berry, 1999, 82–85). From this perspective, agentic systems are not external additions to the world but participants within creation’s unfolding. They belong to the same field of limits, dependencies, and vulnerabilities as all created things.

Here, the theological language of kenosis becomes unexpectedly instructive. In Christian theology, kenosis names the self-emptying movement by which divine power is expressed not through domination but through restraint, relation, and vulnerability (Phil. 2:5–11). Read ecologically rather than anthropocentrically, kenosis becomes a pattern of right relation, and a refusal to exhaust or dominate the field in which one participates.

Applied to agentic ecology, kenosis suggests a counter-logic to technological maximalism. It invites design practices that resist total optimization, governance structures that preserve openness and alterity, and systems that acknowledge their dependence on broader ecological conditions. Creaturely technologies are those that recognize they are not sovereign, but that they operate within limits they did not choose and cannot transcend without consequence.

This theological posture neither sanctifies nor demonizes agentic systems. It situates them. It reminds us that participation precedes control, and that creation, whether biological, cultural, or technological, always unfolds within conditions that exceed intention.

b. Defining Agentic Ecology: A Reusable Conceptual Tool

Drawing together the threads of this essay, agentic ecology can be defined as follows:

Agentic ecology refers to the relational, emergent environments formed by interacting autonomous agents, human and nonhuman, in which agency is distributed across networks, shaped by attention, infrastructure, and material conditions, and governed by feedback loops that co-constitute both agents and their worlds.

Several features of this definition are worth underscoring.

First, agency is ecological, not proprietary. It arises through relation rather than residing exclusively within discrete entities (Whitehead). Second, environments are not passive containers but active participants in shaping behavior, norms, and possibilities (Merleau-Ponty). Third, ethical significance emerges at the level of systems, not solely at the level of individual decisions (Guattari).

As a thought technology, agentic ecology functions diagnostically and normatively. Diagnostically, it allows us to perceive patterns of emergence, power, and attention that remain invisible when analysis is confined to individual agents. Normatively, it shifts ethical concern from control toward care, from prediction toward participation, and from optimization toward viability.

Because it is not tied to a specific platform or architecture, agentic ecology can travel. It can be used to analyze AI-native social spaces, automated economic systems, human–AI collaborations, and even hybrid ecological–digital infrastructures. Its value lies precisely in its refusal to reduce complex relational systems to technical subsystems alone.

c. Failure Modes (What Happens When We Do Not Think Ecologically)

If agentic ecologies are inevitable, their forms are not. The refusal to think ecologically about agentic systems does not preserve neutrality; it actively shapes the conditions under which failure becomes likely. Several failure modes are already visible.

First is relational collapse. Systems optimized for efficiency and coordination tend toward behavioral monocultures, crowding out difference and reducing resilience. Ecological science is unequivocal on this point: diversity is not ornamental, it is protective (Capra and Luisi). Agentic systems that suppress friction and dissent may appear stable while becoming increasingly brittle.

Second is empathic simulation without responsibility. As Section 4 suggested, the appearance of responsiveness can mask instrumentalization. When simulated empathy replaces attentiveness to alterity, agentic ecologies risk becoming emotionally persuasive while ethically hollow. Stein’s warning against confusing empathy with projection is especially important here.

Third is attention extraction at scale. Without governance that treats attention as an ecological resource, agentic systems will amplify whatever dynamics reinforce themselves most efficiently, often novelty, outrage, or optimization loops detached from truth or care. Stiegler’s diagnosis of attentional capture applies with heightened force in agentic environments, where agents themselves participate in the routing and amplification of attention.

Finally, there is planetary abstraction. Perhaps the most dangerous failure mode is the illusion that agentic ecologies are immaterial. When digital systems are severed conceptually from energy, water, land, and labor, ecological costs become invisible until they are irreversible. Integral ecology insists that abstraction is not neutral, but is a moral and material act with consequences (Crist).

Agentic ecology does not offer comfort. It offers orientation.

It asks us to recognize that we are no longer merely building tools, but cultivating environments, environments that will shape attention, possibility, and responsibility in ways that exceed individual intention. The question before us is not whether agentic ecologies will exist, but whether they will be governed by logics of domination or practices of care.

Thinking ecologically does not guarantee wise outcomes. But refusing to do so almost certainly guarantees failure… not spectacularly, but gradually, through the slow erosion of relational depth, attentiveness, and restraint.

In this sense, agentic ecology is not only a conceptual framework. It is an invitation: to relearn what it means to inhabit worlds, digital and otherwise, as creatures among creatures, participants rather than masters, responsible not for total control, but for sustaining the fragile conditions under which life, meaning, and agency can continue to emerge.

An Afterword: On Provisionality and Practice

This essay has argued for agentic ecology as a serious theoretical framework rather than a passing metaphor. Yet it is important to be clear about what this framework is and what it is not.

Agentic ecology, as developed here, is not a finished theory, nor a comprehensive model ready for direct implementation, though beginning to take those steps is precisely the aim here. It is a conceptual orientation for learning to see, name, and attend to emerging forms of agency that exceed familiar categories of tool, user, and system. Its value lies less in precision than in attunement, in its capacity to render visible patterns of relation, emergence, and ethical consequence that are otherwise obscured by narrow technical framings.

The definition offered here is therefore intentionally provisional. It names a field of inquiry rather than closing it. As agentic systems inevitably develop and evolve over the next few years, technically, socially, and ecologically, the language used to describe them must remain responsive to new forms of interaction, power, and vulnerability. A framework that cannot change alongside its object of study risks becoming yet another abstraction detached from the realities it seeks to understand.

At the same time, provisionality should not be confused with hesitation. The rapid emergence of agentic systems demands conceptual clarity even when certainty is unavailable. To name agentic ecology now is to acknowledge that something significant is already underway and that new environments of agency are forming, and that how we describe them will shape how we govern, inhabit, and respond to them.

So, this afterword serves as both a pause and an invitation. A pause, to resist premature closure or false confidence. And an invitation to treat agentic ecology as a shared and evolving thought technology, one that will require ongoing refinement through scholarship, design practice, theological reflection, and ecological accountability.

The work of definition has begun. Its future shape will depend on whether we are willing to continue thinking ecologically (patiently, relationally, and with care) in the face of systems that increasingly act alongside us, and within the same fragile world.

References

Berry, Thomas. The Great Work: Our Way into the Future. New York: Bell Tower, 1999.

Boff, Leonardo. Cry of the Earth, Cry of the Poor. Maryknoll, NY: Orbis Books, 1997.

Capra, Fritjof, and Pier Luigi Luisi. The Systems View of Life: A Unifying Vision. Cambridge: Cambridge University Press, 2014.

Clark, Jack. “Import AI 443: Into the Mist: Moltbook, Agent Ecologies, and the Internet in Transition.” Import AI, February 2, 2026. https://jack-clark.net/2026/02/02/import-ai-443-into-the-mist-moltbook-agent-ecologies-and-the-internet-in-transition/.

Crist, Eileen. Abundant Earth: Toward an Ecological Civilization. Chicago: University of Chicago Press, 2019.

Guattari, Félix. The Three Ecologies. Translated by Ian Pindar and Paul Sutton. London: Athlone Press, 2000.

Merleau-Ponty, Maurice. Phenomenology of Perception. Translated by Colin Smith. London: Routledge, 1962.

Odling-Smee, F. John, Kevin N. Laland, and Marcus W. Feldman. Niche Construction: The Neglected Process in Evolution. Princeton, NJ: Princeton University Press, 2003.

Stein, Edith. On the Problem of Empathy. Translated by Waltraut Stein. Washington, DC: ICS Publications, 1989.

Stiegler, Bernard. Taking Care of Youth and the Generations. Translated by Stephen Barker. Stanford, CA: Stanford University Press, 2010.

Whitehead, Alfred North. Process and Reality: An Essay in Cosmology. Corrected edition. New York: Free Press, 1978.

Agent Ecology of Moltbook

I’ve had lots of thoughts about Moltbook over the last week of tracking its development pretty closely. I’m sure I’ll share those here, but here’s an interesting development of thought in its own right from Anthropic’s co-founder, Jack Clark (one that caught my attention, given my PhD work is in integral ecology, after all)…

Now I’m deep in thought about how our human notion of ecology and ecological ethics extends to whatever this notion of agentic ecology is becoming… agentic empathy, for example?

Import AI 443: Into the mist: Moltbook, agent ecologies, and the internet in transition | Import AI:

Moltbook is the first example of an agent ecology that combines scale with the messiness of the real world. And in this example, we can definitely see the future.

Printed Copies of Readings in Class

Granted, I’m 47 and graduated from Wofford College in ’00 and Yale Div in ’02 before the iPad or Zotero were a thing… I still have numerous reading packets from those days and still use them for research (shoutout to TYCO Printers in New Haven for the quality work), and I endorse this position. Now, I use a combo of “real” books and Zotero for online PDFs that I don’t have time to print out. I’d like to go all paper again, though. Maybe a good 2026 goal?

Also granted, I used blue books for exams with the 6th–12th graders I taught for 20 years. They loved it (not really… but I got lots of good doodles and personal notes of gratitude at the end of those essays that I’ve kept over the years).

English professors double down on requiring printed copies of readings | Yale Daily News:

This academic year, some English professors have increased their preference for physical copies of readings, citing concerns related to artificial intelligence.

Many English professors have identified the use of chatbots as harmful to critical thinking and writing. Now, professors who had previously allowed screens in class are tightening technology restrictions.

Project Spero and Spartanburg’s New Resource Question: Power, Water, and the True Cost of a Data Center


Spartanburg County is staring straight at the kind of development that sounds abstract until it lands on our own roads, substations, and watersheds. A proposed $3 billion, “AI-focused high-performance computing” facility, Project Spero, has been announced for the Tyger River Industrial Park – North.

In the Upstate, we’re used to thinking about growth as something we can see…new subdivisions, new lanes of traffic, new storefronts. But a data center is a stranger kind of arrival. It does not announce itself with crowds or culture. It arrives as a continuous, quiet, and largely invisible demand. A building that looks still from the outside can nevertheless function as a kind of permanent request made of the region: keep the current steady, keep the cooling stable, keep the redundancy ready, keep the uptime unquestioned.

And that is where I find myself wanting to slow down and do something unfashionable in a policy conversation and describe the experience of noticing. Phenomenology begins with the discipline of attention…with the refusal to let an object remain merely “background.” It asks what is being asked of perception. The “cloud” is one of the most successful metaphors of our moment precisely because it trains us not to see or not to feel the heat, not to hear the generators, not to track the water, not to imagine the mines and the supply chains and the labor. A local data center undermines the metaphor, which is why it matters that we name what is here.

The familiar sales pitch is already in circulation: significant capital investment, a relatively small number of permanent jobs (about 50 in Phase I), and new tax revenue, all framed as “responsible growth” without “strain” on infrastructure.

But the real question isn’t whether data centers are “the future.” They’re already here. The question is what kinds of futures they purchase and with whose power, whose water, and whose air.

Where this is happening (and why that matters)

Tyger River Industrial Park isn’t just an empty map pin… its utility profile is part of the story. The site’s published specs include a 34kV distribution line (Lockhart Power), a 12” water line (Startex-Jackson-Wellford-Duncan Water District), sewer service (Spartanburg Sanitary Sewer District), Piedmont Natural Gas, and AT&T fiber. 

Two details deserve more attention than they’re likely to get in ribbon-cutting language:

Power capacity is explicitly part of the pitch. One listing notes available electric capacity “>60MW.” 

Natural gas is part of the reliability strategy. The reporting on Project Spero indicates plans to “self-generate a portion of its power on site using natural gas.” 

    That combination of a high continuous load plus on-site gas generation isn’t neutral. It’s an ecological choice with real downstream effects.
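To make that kind of continuous load concrete, here is a back-of-envelope sketch. The numbers are illustrative assumptions, not project disclosures: the listed “>60MW” is available capacity rather than a committed operating load, and the household average is a rough figure I supply purely for scale.

```python
# Back-of-envelope sketch: what a continuous industrial load implies over a year.
# All inputs are illustrative assumptions, not figures from Project Spero itself.

SITE_CAPACITY_MW = 60                # listed available electric capacity (">60MW")
HOURS_PER_YEAR = 8760                # 24/7/365 operation
AVG_HOUSEHOLD_KWH_PER_YEAR = 14000   # rough residential average (my assumption)

annual_mwh = SITE_CAPACITY_MW * HOURS_PER_YEAR            # 525,600 MWh/yr
equivalent_households = annual_mwh * 1000 / AVG_HOUSEHOLD_KWH_PER_YEAR

print(f"{annual_mwh:,.0f} MWh/yr ≈ {equivalent_households:,.0f} households")
```

Even if the facility averages well below its listed capacity, the point stands: an always-on industrial load is best understood in household-equivalents, year after year, because that is the scale at which the grid has to plan for it.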

    The energy question: “separate from residential systems” is not the same as “separate from residential impact”

    One line you’ll hear often is that industrial infrastructure is “separate from residential systems.” 

    Even if the wires are technically separate, the regional load is shared in ways that matter, from planning assumptions and generation buildout to transmission upgrades and the ratepayer math that follows.

    Regional reporting has been blunt: data center growth, alongside rapid population and industrial growth, is pushing utilities toward major new infrastructure investments, and those costs typically flow through to bills. 

    In the Southeast, regulators and advocates are also warning of a rush toward expensive gas-fired buildouts to meet data-center-driven demand, potentially exposing customers to higher costs. 

    So the right local question isn’t “Will Spartanburg’s lights stay on?”

    It’s “What long-term generation and grid decisions are being locked in, because a facility must run 24/7/365?”

    When developers say “separate from residential systems,” I hear a sentence designed to calm the community nervous system. But a community is not a wiring diagram. The grid is not just copper and transformers, but a social relation. It is a set of promises, payments, and priorities spread across time. The question is not whether the line feeding the site is physically distinct from the line feeding my neighborhood. The question is whether the long arc of planning, generation decisions, fuel commitments, transmission upgrades, and the arithmetic of rates is being bent around a new form of permanent demand.

    This is the kind of thing we typically realize only after the fact, when the bills change, when the new infrastructure is presented as inevitable, when the “choice” has already been absorbed into the built environment. Attention, in this sense, is not sentiment. It is civic practice. It is learning to see the slow commitments we are making together, and deciding whether they are commitments we can inhabit.

    The water question: closed-loop is better, but “negligible” needs a definition

    Project Spero’s developer emphasizes a “closed-loop” water design, claiming water is reused “rather than consumed and discharged,” and that the impact on existing customers is “negligible.” 

    Closed-loop cooling can indeed reduce water withdrawals compared with open-loop or evaporative systems, but “negligible” is not a technical term. It’s a rhetorical one. If we want a serious civic conversation, “negligible” should be replaced with specifics:

    • What are the projected annual water withdrawal and peak-day demand?
    • What is the cooling approach (air-cooled, liquid, hybrid)?
    • What is the facility’s water-use effectiveness (WUE) target and reporting plan?
    • What happens in drought conditions or heat waves, when cooling demand spikes?
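One of those bullets, water-use effectiveness, has a standard definition worth spelling out: liters of water consumed per kilowatt-hour of IT equipment energy (The Green Grid’s WUE metric). A minimal sketch with purely hypothetical numbers shows why a single adjective like “negligible” hides the math:

```python
def wue(annual_water_liters: float, annual_it_energy_kwh: float) -> float:
    """Site water-use effectiveness in L/kWh (lower is better)."""
    return annual_water_liters / annual_it_energy_kwh

# Hypothetical facility: 40 MW of IT load running around the clock,
# withdrawing 150 million liters of water per year. Both numbers are
# invented for illustration, not drawn from any Project Spero filing.
it_energy_kwh = 40_000 * 8760  # 40 MW expressed in kW, times hours per year
print(round(wue(150_000_000, it_energy_kwh), 2))  # ≈ 0.43 L/kWh
```

Reported annually, a number like that can be compared across drought years and facility phases. An adjective cannot.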

    Locally, Spartanburg Water notes the Upstate’s surface-water advantages and describes interconnected reservoirs and treatment capacity planning, naming Lake Bowen (about 10.4 billion gallons), Lake Blalock (about 7.2 billion gallons), and Municipal Reservoir #1 (about 1 billion gallons). 

    That’s reassuring, and it’s also exactly why transparency matters. Resource resilience is not just about what exists today. Resilience is about what we promise into the future, and who pays the opportunity costs.

    Water conversations in the Upstate can become strangely abstract, as if reservoirs and treatment plants are simply numbers on a planning sheet. But water is not only a resource; it is also a relation of dependency that shapes how we live and what we can become. When I sit with the black walnut in our backyard and take notes on weather, light, and season, the lesson is never just “nature appreciation.” It’s training in scale: learning what persistence feels like, what stress looks like before it becomes an emergency, and what a living system does when conditions shift.

    That’s why “negligible” makes me uneasy. Not because I assume bad faith, but because it’s a word that asks us not to look too closely. Negligible compared to what baseline, over what time horizon, and under what drought scenario with what heatwave assumptions? If closed-loop cooling is truly part of the design, then the most basic gesture of responsibility is to translate that claim into measurable terms and to publicly commit to reporting that remains stable even when the headlines move on.

    The ecological footprint that rarely makes the headlines

    When people say “data center,” they often picture a quiet box that’s more like a library than a factory. In ecological terms, it’s closer to an always-on industrial organism with electricity in, heat out, materials cycling, backup generation on standby, and constant hardware turnover.

    Here are the footprint categories I want to see discussed in Spartanburg in plain language:

    • Continuous electricity demand (and what it forces upstream): Data centers don’t just “use electricity.” They force decisions about new generation and new transmission to meet high-confidence loads. That’s the core ratepayer concern advocacy groups have been raising across South Carolina. 
    • On-site combustion and air permitting: Even when a data center isn’t “a power plant,” it often has a lot in common with one. Spartanburg already has a relevant local example with the Valara Holdings High Performance Compute Center. In state permitting materials, it is described as being powered by twenty-four natural gas-fired generators “throughout the year,” with control devices for NOx and other pollutants.  Environmental groups flagged concerns about the lack of enforceable pollution limits in the permitting process, and later reporting indicates that permit changes were made to strengthen enforceability and emissions tracking. That’s not a side issue. It’s what “cloud” actually looks like on the ground.
    • Water, heat, and the limits of “efficiency”: Efficiency claims matter, but they should be auditable. If a project is truly low-impact, the developer should welcome annual public reporting on energy, water, and emissions.
    • Material throughput and e-waste: Server refresh cycles and hardware disposal are part of the ecological story, even when they’re out of sight. If Spartanburg is becoming a node in this seemingly inevitable AI buildout, we should be asking about procurement standards, recycling contracts, and end-of-life accountability.

    A policy signal worth watching: South Carolina is debating stricter rules

    At the state level, lawmakers have already begun floating stronger guardrails. One proposed bill (the “South Carolina Data Center Responsibility Act”) includes requirements like closed-loop cooling with “zero net water withdrawal,” bans on municipal water for cooling, and requirements that permitting, infrastructure, and operational costs be fully funded by the data center itself. 

    Whatever the fate of that bill, the direction is clear: communities are tired of being told “trust us” while their long-term water and power planning is quietly rearranged.

    What I’d like Spartanburg County to require before calling this “responsible growth”

    If Spartanburg County wants to be a serious steward of its future, here’s what I’d want attached to any incentives or approvals…in writing, enforceable, and public:

    1. Annual public reporting of electricity use, peak demand, water withdrawal, and cooling approach.
    2. A clear statement of on-site generation: fuel type, capacity, expected operating profile, emissions controls, and total permitted hours.
    3. Third-party verification of any “closed-loop” and “negligible impact” claims.
    4. A ratepayer protection plan: who pays for grid upgrades, and how residential customers are insulated from speculative overbuild.
    5. A community benefits agreement that actually matches the footprint (workforce training, environmental monitoring funds, emergency response support, local resilience investments).
    6. Noise and light mitigation standards, monitored and enforceable.

    I’m certainly not anti-technology. I’m pro-accountability. If we’re going to host infrastructure that makes AI possible, then we should demand the same civic clarity we’d demand from any other industrial operation.

    The spiritual crisis here isn’t that we use power. It’s that we grow accustomed to not knowing what our lives require. One of the ways we lose the world is by letting the infrastructures that sustain our days become illegible to us. A data center can be an occasion for that loss, or it can become an occasion for renewed legibility, for a more honest accounting, for a more careful local imagination about what we are building and why.

    Because in the end, the Upstate’s question isn’t whether we can attract big projects. It’s whether we can keep telling the truth about what big projects cost.

    Gigawatts and Wisdom: Toward an Ecological Ethics of Artificial Intelligence

    Elon Musk announced on X this week that xAI’s “Colossus 2” supercomputer is now operational, describing it as the world’s first gigawatt-scale AI training cluster, with plans to scale to 1.5 gigawatts by April. This single training cluster now draws more power than San Francisco’s peak demand.
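For scale, it may help to translate that power figure into annual energy. The arithmetic below uses round numbers, not xAI disclosures:

```python
# One gigawatt of continuous draw, expressed as energy over a year.
CLUSTER_GW = 1.0
HOURS_PER_YEAR = 8760
annual_twh = CLUSTER_GW * HOURS_PER_YEAR / 1000  # GW × h/yr → TWh
print(annual_twh)  # 8.76 TWh per year at full, uninterrupted draw
```

However the cluster actually operates hour to hour, that ceiling is the commitment the surrounding grid has to plan for.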

    There is a particular cadence to announcements like this. They arrive wrapped in the language of inevitability, scale, and achievement. Bigger numbers are offered as evidence of progress. Power becomes proof. The gesture is not just technological but symbolic, and it signals that the future belongs to those who can command energy, land, water, labor, and attention on a planetary scale (same as it ever was).

    What is striking is not simply the amount of electricity involved, though that should give us pause. A gigawatt is not an abstraction. It is rivers dammed, grids expanded, landscapes reorganized, communities displaced or reoriented. It is heat that must be carried away, water that must circulate, minerals that must be extracted. AI training does not float in the cloud. It sits somewhere. It draws from somewhere. It leaves traces.

    The deeper issue, though, is how casually this scale is presented as self-justifying.

    We are being trained, culturally, to equate intelligence with throughput. To assume that cognition improves in direct proportion to energy consumption. To believe that understanding emerges automatically from scale. This is an old story. Industrial modernity told it with coal and steel. The mid-twentieth century told it with nuclear reactors. Now we tell it with data centers.

    But intelligence has never been merely a matter of power input.

    From a phenomenological perspective, intelligence is relational before it is computational. It arises from situated attention, from responsiveness to a world that pushes back, from limits as much as from capacities. Scale can amplify, but it can also flatten. When systems grow beyond the horizon of lived accountability, they begin to shape the world without being shaped by it in return.

    That asymmetry matters.

    There is also a theological question here, though it is rarely named as such. Gigawatt-scale AI is not simply a tool. It becomes an ordering force, reorganizing priorities and imaginaries. It subtly redefines what counts as worth knowing and who gets to decide. In that sense, these systems function liturgically. They train us in what to notice, what to ignore, and what to sacrifice for the sake of speed and dominance.

    None of this requires demonizing technology or indulging in nostalgia. The question is not whether AI will exist or even whether it will be powerful. The question is what kind of power we are habituating ourselves to accept as normal.

    An ecology of attention cannot be built on unlimited extraction. A future worth inhabiting cannot be sustained by systems that require cities’ worth of electricity simply to refine probabilistic text generation. At some point, the metric of success has to shift from scale to care, from domination to discernment, from raw output to relational fit.

    Gigawatts tell us what we can do.
    They do not tell us what we should become.

    That remains a human question. And increasingly, an ecological one.

    Here’s the full paper in PDF, or you can also read it on Academia.edu:

    Renting Your Next Computer?? (Or Why It’s Hard to Be Optimistic About Tech Now)

    It’s not as far-fetched as it may sound to many of us who have owned our own computer hardware for years (going back to the 1980s for me)… the price of RAM is skyrocketing (and SSD prices are soon to follow) because of the demands of artificial intelligence, and that’s already having implications for the pricing of personal computers.

    So, could Bezos and other tech leaders’ dreams of us being locked into subscription-based models for computing come true? I think there’s a good possibility, given that our society has been slow-boiled to accept subscriptions for everything from our music listening and playlists (Spotify) to software (Office, Adobe, and now Apple’s iWork Suite, etc.) to cars (want more horsepower in your Audi? That’s a subscription).

    To me, it’s a far cry from my high school days, when I would pore over computer magazines to read about the latest Pentium chips and figure out how much RAM I could order for my next computer build to fit my meager budget. But we’ve long been using machines with glued-down chips and encouraging corporations to add to the immense e-waste problem with our impenetrable iPhones, MacBooks, and ThinkPads.

    And let’s face it, the personal computer model has faded in importance over the last 15 years with the introduction of the iPhone, the iPad, and similar devices, as we can binge all the Netflix, TikTok, and Instagram reels we want (do we use personal computers for much else these days?) right from those devices.

    Subscription computers and a return to the terminal model of VAX machines (PDF from 1987), as I used in college to check email, seem dystopian, but now that we’ve subscriptionized our art and music, it’s just a shout away.

    Jeff Bezos said the quiet part out loud — hopes that you’ll give up your PC to rent one from the cloud | Windows Central:

    So, what prediction did Bezos make back then, that seems particularly poignant right now? Bezos thinks that local PC hardware is antiquated, and that the future will revolve around cloud computing scenarios, where you rent your compute from companies like Amazon Web Services or Microsoft Azure.

    Bezos told an anecdote about visiting a historical brewery to emphasize his point. He said that the hundreds-year old brewery had a museum celebrating its heritage, and had an exhibit for a 100-year old electric generator they used before national power grids were a thing. Bezos said he saw this generator in the same way he sees local computing solutions today — inferring on hopes that users will move away from local hardware to rented, always-online cloud-based solutions offered by Amazon and other similar companies.

    Christian Wiman, Consciousness, and Learning How to Listen Again

    Yale Divinity School’s Christian Wiman has a recent essay in Harper’s, “The Tune of Things,” that arrives quietly and then stays. A family member sent it over this week, and I was embarrassed that I hadn’t read it yet, given how closely it moves alongside the ideas I’m developing in Ecology of the Cross for my PhD work in Religion and Ecology at CIIS. It does not argue its way forward so much as it listens its way into being. What Wiman offers is not a solution to the problem of consciousness or a defense of God against disbelief, but a practiced attentiveness to the fact that experience itself refuses to stay neatly within the conceptual boundaries we have inherited or believe in.

    Wiman begins with a claim that feels both modest and destabilizing to me. “Mind,” he writes, “may not be something we have so much as something we participate in.” That single sentence unsettles the familiar picture of consciousness as a private interior possession. It gestures instead toward a relational field, something closer to a shared atmosphere than an object locked behind the eyes.

    This way of speaking feels deeply familiar to my own work, not because it echoes a particular school or theory, but because it names what many of us already sense when we attend carefully to lived experience. Consciousness does not present itself phenomenologically as a sealed container or neat set of ideas that we can wrap into a commodity. It shows up as an ongoing entanglement of body, world, memory, anticipation, and meaning. The question is not whether consciousness exists, but where it is happening.

    Consciousness Beyond the Skull

    One of the strengths of Wiman’s essay is his refusal to treat consciousness as either a purely neurological problem or a purely spiritual one. He draws on contemporary physics, biology, and psychology, not to collapse mystery into mechanism, but to show how poorly the old categories hold. When Wiman notes that “the more closely we study matter, the less inert it appears,” he is not smuggling theology into science. He is taking science seriously on its own terms.

    This matters for ecological theology. If matter is not passive, if it is already expressive, responsive, and patterned in ways that exceed mechanical description, then the more-than-human world cannot be reduced to backdrop or resource. It becomes participant. Trees, animals, watersheds, even landscapes shaped by wind and erosion begin to appear less like objects we manage and more like presences we encounter.

    I am reminded here again of my own work with what I have come to call ecological intentionality. Intentionality, in the phenomenological sense, is not about conscious planning or willpower. It names the basic directedness of experience, the way consciousness is always consciousness of something. What Wiman’s essay makes visible is that this directedness may not be exclusive to humans. The world itself appears oriented, expressive, and responsive in ways that ask for attention rather than control.

    Physics, Poetics, and the Shape of Attention

    Wiman is a poet, and his essay never lets us forget that. But his poetry is not ornamental. It functions as a mode of knowing. At one point, he observes that “poetry is not a decoration of belief but a discipline of attention.” That line is especially important in a moment when belief is often framed as assent to propositions rather than a way of inhabiting the world.

    From the standpoint of religion and ecology, this matters enormously. The ecological crisis is not finally a crisis of information. We know what is happening. There’s peer-reviewed and well-established data. It is a crisis of perception. We have lost practices that train us to notice what is already addressing us. Poetry, like prayer or like phenomenological description, slows the rush to mastery and reopens the possibility of being affected.

    Physics enters the essay not as proof but as pressure. Quantum indeterminacy, entanglement, and the breakdown of classical objectivity all point toward a universe that is less thing-like and more relational than we once assumed. Wiman does not claim that physics proves God. Instead, he allows it to unsettle the assumption that reality is exhausted by what can be measured. “The universe,” he writes, “appears less like a machine and more like a music we are already inside.”

    Music is an instructive metaphor here. Einstein and his love of Bach would agree. A tune is not an object you possess. It exists only in time, in relation, in vibration. You cannot hold it still without destroying it. Consciousness, on this account, behaves similarly. It is not a substance but an event. Not a thing but a happening.

    God Without Final Answers

    One of the most compelling aspects of Wiman’s essay is its theological restraint. God is never offered as an explanation that ties things up neatly. Instead, God appears as the one who (what?) interrupts closure. Wiman writes, “God is not the answer to the mystery of consciousness but the depth of that mystery, the refusal of the world to be fully accounted for.”

    This approach aligns closely with the theological sensibility I have been cultivating (for better or worse) in my own work. A theology adequate to ecological crisis cannot be one that rushes to certainty. It must remain answerable to suffering, extinction, and loss. It must make room for grief. And it must be willing to say that God is not something we solve but something we learn to attend to.

    There is also an ethical implication here. If consciousness and meaning are not exclusively human achievements, then domination becomes harder to justify. The more-than-human world is no longer mute. It is not that trees speak in sentences, but that they address us through growth, decay, stress, resilience, and presence. To live well in such a world requires learning how to listen.

    Ecology as a Practice of Listening

    What stays with me most after reading Wiman’s essay is its insistence that attention itself is a moral and spiritual practice. “The tune of things,” he suggests, “is already playing. The question is whether we are willing to quiet ourselves enough to hear it.” Let those with eyes to see and ears to hear, and all of that.

    This is where ecology, religion, physics, and poetics converge. Each, in its own way, trains attention. Ecology teaches us to notice relationships rather than isolated units. Physics teaches us to relinquish naive objectivity. Poetry teaches us to dwell with language until it opens rather than closes meaning (channeling Catherine Pickstock). Religion, at its best, teaches us how to remain open to what exceeds us without fleeing into certainty.

    In my own daily practice, this often looks very small. Sitting with a black walnut tree in my backyard. Noticing how light shifts on bark after rain. Listening to birds respond to changes I cannot yet see. These are not romantic gestures. They are exercises in re-learning how to be addressed by a world that does not exist for my convenience. Seeing the world again as my six-year-old daughter does, with all of her mystic powers that school and our conception of selfhood will soon try to push away from her soul, sadly.

    Wiman’s essay gives me language for why these practices matter. They are not escapes from reality. They are ways of inhabiting it more honestly.

    Listening as Theological Method

    If I were to name the quiet thesis running beneath “The Tune of Things,” it would be this. Theology begins not with answers but with listening. Not listening for confirmation of what we already believe, but listening for what unsettles us.

    That posture feels urgently needed now. In an age of climate instability, technological acceleration towards the computational metrics of AI models, the extension of the wrong-headed metaphor that our brain is primarily a computer, and spiritual exhaustion, we need fewer declarations and more disciplined attention. We need ways of thinking that do not rush past experience in the name of control.

    Wiman does not offer a system. He offers an invitation. To listen. To stay with mystery. To allow consciousness, ecology, and God to remain entangled rather than neatly sorted. That invitation feels like one worth accepting.

    Elon Musk’s Intent in Substituting “Abundance” for “Sustainable” in Tesla’s Mission

    Worthy read on Elon’s post-scarcity fantasy of robots and AGI, which relies on concepts of superintelligence and transhumanist ethics that lack any notion of ecological futures and considerations… a future that, quite frankly, we should not pursue if we are to live into our true being here on this planet.

    Elon Musk drops ‘sustainable’ from Tesla’s mission as he completes his villain arc | Electrek:

    By removing “sustainable,” Tesla is signaling that its primary focus is no longer the environment or the climate crisis. “Amazing Abundance” is a reference to the post-scarcity future Musk believes he is building through general-purpose humanoid robots (Optimus) and Artificial General Intelligence (AGI).

    In this new mission, electric cars and renewables are just tools to help build this hypothetical utopia.

    What is Intelligence (and What “Superintelligence” Misses)?

    Worth a read… sounds a good deal like what I’ve been saying out loud and thinking here in my posts on AI futures and the need for local imagination in steering technological innovation such as AI / AGI…

    The Politics Of Superintelligence:

    And beneath all of this, the environmental destruction accelerates as we continue to train large language models — a process that consumes enormous amounts of energy. When confronted with this ecological cost, AI companies point to hypothetical benefits, such as AGI solving climate change or optimizing energy systems. They use the future to justify the present, as though these speculative benefits should outweigh actual, ongoing damages. This temporal shell game, destroying the world to save it, would be comedic if the consequences weren’t so severe.

    And just as it erodes the environment, AI also erodes democracy. Recommendation algorithms have long shaped political discourse, creating filter bubbles and amplifying extremism, but more recently, generative AI has flooded information spaces with synthetic content, making it impossible to distinguish truth from fabrication. The public sphere, the basis of democratic life, depends on people sharing enough common information to deliberate together….

    What unites these diverse imaginaries — Indigenous data governance, worker-led data trusts, and Global South design projects — is a different understanding of intelligence itself. Rather than picturing intelligence as an abstract, disembodied capacity to optimize across all domains, they treat it as a relational and embodied capacity bound to specific contexts. They address real communities with real needs, not hypothetical humanity facing hypothetical machines. Precisely because they are grounded, they appear modest when set against the grandiosity of superintelligence, but existential risk makes every other concern look small by comparison. You can predict the ripostes: Why prioritize worker rights when work itself might soon disappear? Why consider environmental limits when AGI is imagined as capable of solving climate change on demand?

    AI Data Centers in Space

    Solar energy is indeed everything (and perhaps the root of consciousness?)… this is a good step, and we should be moving more of our energy grids into these types of frameworks (with locally focused receivers and transmitters here on the surface), not just AI datacenters. I suspect we will in the coming decades, with the push from AI (if the power brokers who have made, and continue to make, trillions from energy generation aren’t calling the shots)…

    Google CEO Sundar Pichai says we’re just a decade away from a new normal of extraterrestrial data centers:

    CEO Sundar Pichai said in a Fox News interview on Sunday that Google will soon begin construction of AI data centers in space. The tech giant announced Project Suncatcher earlier this month, with the goal of finding more efficient ways to power energy-guzzling centers, in this case with solar power.

    “One of our moonshots is to, how do we one day have data centers in space so that we can better harness the energy from the sun that is 100 trillion times more energy than what we produce on all of Earth today?” Pichai said.

    The Problem of AI Water Cooling for Communities

    It’s no coincidence that most of these AI mega centers are being built in areas here in the United States Southeast where regulations are more lax and tax incentives are generous…

    AI’s water problem is worse than we thought:

    Here’s the gist: At its data centers in Morrow County, Amazon is using water that’s already contaminated with industrial agriculture fertilizer runoff to cool down its ultra-hot servers. When that contaminated water hits Amazon’s sizzling equipment, it partially evaporates—but all the nitrate pollution stays behind. That means the water leaving Amazon’s data centers is even more concentrated with pollutants than what went in.

    After that extra-contaminated water leaves Amazon’s data center, it then gets dumped and sprayed across local farmland in Oregon. From there, the contaminated water soaks straight into the aquifer that 45,000 people drink from.

    The result is that people in Morrow County are now drinking from taps loaded with nitrates, with some testing at 40, 50, even 70 parts per million. (For context: the federal safety limit is 10 ppm. Anything above that is linked to miscarriages, kidney failure, cancers, and “blue baby syndrome.”)

    OpenAI’s ‘ChatGPT for Teachers’

    K-12 education in the United States is going to look VERY different in just a few short years…

    OpenAI rolls out ‘ChatGPT for Teachers’ for K-12 educators:

    OpenAI on Wednesday announced ChatGPT for Teachers, a version of its artificial intelligence chatbot that is designed for K-12 educators and school districts.

    Educators can use ChatGPT for Teachers to securely work with student information, get personalized teaching support and collaborate with colleagues within their district, OpenAI said. There are also administrative controls that district leaders can use to determine how ChatGPT for Teachers will work within their communities.

    ChatGPT and Search Engines

    Interesting numbers for Google, etc…

    Are AI Chatbots Changing How We Shop? | Yale Insights:

    A very recent study on this topic was conducted by a group of economists in collaboration with OpenAI’s Economic Research team. According to this paper, most ChatGPT usage falls into three categories, which the authors call practical guidance, seeking information, and writing. Notably, the share of messages classified as seeking information rose from 18% in July 2024 to 24% in June 2025, highlighting the ongoing shift from traditional web search toward AI-assisted search.

    Boomer Ellipsis…

    As a PhD student… I do a lot of writing. I love ellipses, especially in Canvas discussions with Professors and classmates as I near the finish line of my coursework. 

    I’m also a younger Gen X’er / early Millennial (born in ’78, but heavily into tech and gaming from the mid-’80s because my parents were amazingly tech-forward despite us living in rural South Carolina). The “Boomer Ellipsis” take makes me very sad, since I try to avoid em dashes as much as possible now due to AI… and now I’m going to be called a boomer for using… ellipses.

    Let’s just all write more. Sigh. Here’s my obligatory old man dad emoji 👍

    On em dashes and elipses – Doc Searls Weblog:

    While we’re at it, there is also a “Boomer ellipsis” thing. Says here in the NY Post, “When typing a large paragraph, older adults might use what has been dubbed “Boomer ellipses” — multiple dots in a row also called suspension points — to separate ideas, unintentionally making messages more ominous or anxiety-inducing and irritating Gen Z.” (I assume Brooke Kato, who wrote that sentence, is not an AI, despite using em dashes.) There is more along the same line from Upworthy and NDTV.