AI Data Centers, NDAs, and Rural Communities

I’ve been writing pretty extensively about the role AI data centers are playing in rural communities here in the Southeast of the United States, but this one literally hits home… I grew up in Marion County, SC (current population around 28,000), and this sort of intentional secrecy is infuriating and anti-democratic, to say the least…

Data Centers Are Expanding Quietly Into Black Rural America – Capital B News:

As a rare winter storm bore down on South Carolina, bringing conditions that historically paralyze the state for days, local officials in a rural county quietly pushed through a massive $2.4 billion data center without most residents knowing it was even on the table.

“There was a public meeting, which most were unaware of,” Jessie Chandler, a resident of rural Marion County, told Capital B, referring to a Jan. 22 council meeting. “I know legally they had to announce the public meeting within a certain time frame for all of us to attend, but most of the county [was] preparing for this winter storm, which we know firsthand will affect us all because it has before.”

Marion County officials confirmed that the council signed a nondisclosure agreement, which barred their ability to make the data center public. On the agenda prior to the council meeting, the line item for the vote was called “Project Liberty,” but it did not list details of the project.

The pattern residents of this majority-Black rural county are experiencing is not isolated.

Being Measured: Oura Rings, Wearables, and the Ecology of Attention

I write this as I’m wearing an Apple Watch and have years of my health data stored in Apple Health. However, sometimes a news item lands not as a surprise but as a low-pressure system that makes you draw connections. You don’t react so much as feel the conditions shift.

This piece in Politico this morning, about Oura Ring wearables becoming normalized across military programs (I didn’t realize the DOD is Oura’s largest customer), political circles, and public health messaging, produced something like that for me. Not an alarm exactly. Not dismissal either. Something closer to unease, which is often where worthwhile thinking begins.

The Defense Department, Oura’s largest customer, now provides rings to certain soldiers and civil servants as an employee benefit. In Congress, they are a hot accessory for representatives and senators as different as Bronx Democrat Alexandria Ocasio-Cortez and Idaho Falls Republican Mike Crapo. Besides buying the rings, lawmakers have gone to bat to protect Oura from Chinese and Indian competitors. Health Secretary Robert F. Kennedy Jr. has made wearables like the Oura ring part of his Make America Healthy Again movement. He says every American should be sporting one by the end of the decade.

Wearable devices promise knowledge. That promise is seductive because it appears modest. A ring measures sleep, my watch measures heart rate, and sensors measure movement or temperature. Each function is framed as assistance, as clarity, as an expansion of self-understanding. These are framed as tools of wellness. And in many contexts, they are exactly that. They help people notice patterns that might otherwise remain obscure. They can support recovery, discipline, and care.

But something deeper is happening alongside these practical benefits. Wearables do not simply measure bodies. They reorganize attention toward bodies. And attention is never neutral. Attention is ecological. It shapes the environments within which perception unfolds.

From a phenomenological standpoint, the body is not primarily encountered as data. It is lived through sensation, posture, fatigue, hunger, and atmosphere. It is encountered through participation in a world rather than through representation. When Merleau-Ponty writes about embodiment, or when Edith Stein considers empathic access to experience, the body appears as relational presence rather than objectified signal. It is not a dashboard. It is a mode of inhabiting.

Wearable analytics introduce a second layer of encounter. The body becomes statistically legible. One wakes not simply rested or tired but is presented with a readiness score or a verdict on “how well” one slept (I, for one, often feel like I’ve slept horribly or really well, only to be confronted at 5:30 AM with a piece of data telling me the opposite, and it’s a cognitively confusing way to begin the day). One does not feel stress as tension or agitation alone, but as heart rate variability metrics. Over time, these mediated interpretations begin to compete with lived sensation as arbiters of truth.

This does not eliminate embodiment. But it does refract it.

The ecological question then emerges:

What happens when the perception of self becomes infrastructurally mediated?

What kind of attentional environment forms when intimate experience is continuously translated into an extractable signal?

Here, the conversation moves beyond individual devices toward systems. Data does not remain local. It circulates through platforms, institutions, markets, and governance structures (thanks, Peter Thiel). Even when anonymized or ethically managed, biometric data participates in networks far larger than the individual body from which it originates. Bodies become nodes in informational ecologies.

From the standpoint of ecological intentionality, agency is never isolated. It arises through relational entanglements among bodies, technologies, institutions, and environments. Wearables intensify these entanglements. They fold biological rhythms into digital infrastructures, making physiological processes part of broader technological assemblages.

This is neither purely dystopian nor purely emancipatory. It is transformation.

There are real gains to be acknowledged. Preventive medicine. Behavioral insight. Personalized health awareness. These are not trivial developments. But the transformation also raises subtle spiritual and philosophical questions. When self-knowledge becomes increasingly mediated through algorithmic interpretation, how does trust in lived experience shift? When bodily awareness is quantified, what happens to contemplative attention? When vitality is scored, how does one relate to vulnerability?

Traditions of spiritual discipline have long cultivated attentiveness to breath, posture, hunger, fatigue, and interior movement. These practices did not seek numerical validation. They sought participatory awareness. The difference is not technological versus pre-technological. It is representational awareness versus relational awareness.

This distinction matters because the stakes are ecological. Attention shapes behavior. Behavior shapes environments. Environments shape futures.

If we come to understand our bodies primarily through optimization metrics, we risk narrowing our interpretive field to efficiency and performance. But if wearable technologies are held within a wider horizon of relational awareness, they may instead become companions to reflection rather than replacements for perception.

The task ahead is neither rejection nor surrender. It is integration with discernment.

We should ask:

How do we use measurement without being defined by it?
How do we allow data to inform perception without displacing embodied knowing?
How do we remain addressable by the more-than-human world when our awareness is increasingly mediated through technological mirrors?

These questions are not just policy or privacy debates. They are spiritual and ecological inquiries. They concern how persons inhabit bodies within technological worlds.

Unease, in this light, becomes instructive. It signals the presence of transformation that has not yet been fully metabolized into understanding. It invites patience rather than reaction. And perhaps most importantly, it calls us back toward attentional practices capable of holding complexity without collapsing into certainty.

The future of wearable technology will not be determined only by engineers, legislators, or markets (or the military-industrial complex, hopefully). It will also be shaped by how individuals cultivate awareness of their own embodiment within relational ecologies.

And that work begins, as it often does, by noticing how we are already being measured… and how we choose to measure what matters.

When Agency Becomes Ecological: AI, Labor, and the Redistribution of Attention

I read this piece in Futurism this morning, highlighting anxiety among employees at Anthropic about the very tools they are building. Agent-based AI systems designed to automate professional tasks are advancing quickly, and even insiders are expressing unease that these systems could displace forms of work that have long anchored identity and livelihood. The familiar story is one of replacement: machines and agents taking jobs, efficiency outpacing meaning, and productivity outrunning dignity.

“It kind of feels like I’m coming to work every day to put myself out of a job.”

That narrative is understandable. It is also incomplete.

It assumes agency is something discrete, something possessed. Either humans have it or AI agents do. Either labor is done by us or by them. This framing reflects a deeply modern inheritance in which action is imagined as individual, bounded, and owned. But if we step back and look phenomenologically, ecologically, even theologically, agency rarely appears that way in lived experience.

Instead, agency unfolds relationally. It arises through environments, histories, infrastructures, bodies, tools, and attentional fields that exceed any single actor. Whitehead described events as occasions within webs of relation rather than isolated units of causation. Merleau-Ponty reminded us that perception itself is co-constituted with the world it encounters. Edith Stein traced empathy as a participatory structure that bridges subjectivities. In each of these traditions, action is never solitary. It is ecological.

Seen from this vantage, AI agents do not simply replace agency. They redistribute it.

Workplaces become assemblages of human judgment, algorithmic suggestion, interface design, energy supply, and data pipelines. Decisions emerge from entanglement while expertise shifts from individual mastery toward collaborative navigation of hybrid systems. What unsettles people is not merely job loss, but the destabilization of familiar coordinates that once made agency legible to us.

This destabilization is not unprecedented. Guild laborers faced mechanization during the Industrial Revolution(s). Scribes faced it with the advent of the printing press. Monastics faced it when clocks began structuring devotion instead of bells and sunlight. Each moment involved a rearrangement of where attention was placed and how authority was structured. The present transition is another such rearrangement, though unfolding at computational speed.

Attention is the deeper currency here.

Agent systems promise efficiency precisely because they absorb attentional burden. They monitor, synthesize, draft, suggest, and route. But attention is not neutral bandwidth. It is a formative ecological force. Where attention flows, worlds take shape. If attentional responsibility migrates outward into technical systems, the question is not whether humans lose agency. It is what kinds of perception and responsiveness remain cultivated in us.

This is the moment where the conversation often stops short: discussions of automation typically orbit labor markets, productivity metrics, or stock values. Rarely do they ask what habits of awareness diminish when engagement becomes mediated through algorithmic intermediaries, or what forms of ecological attunement grow quieter when interaction shifts further toward abstraction.

And rarer still is acknowledgment of the material ecology enabling this shift.

Every AI agent relies on infrastructure that consumes electricity, water, land, and minerals. Data centers do not hover in conceptual space. They occupy watersheds. They reshape local grids. They alter thermal patterns. They compete with agricultural and municipal demands on electrical grids and water supplies. These realities are not peripheral to agency, but are conditions through which agency is enacted.

Here in the Carolinas, where digital infrastructure continues expanding rapidly, the redistribution of agency is already tangible. Decisions about automation are inseparable from decisions about energy sourcing, zoning, and water allocation. The ecological footprint of computation folds into local landscapes long before its outputs appear in professional workflows.

Agency, again, proves ecological.

To recognize this is not to reject AI systems or retreat into Luddite nostalgia. The aim is attentiveness rather than resistance. Transitions of this magnitude call for widening perception (and resulting ethics) rather than narrowing judgment. If agency is relational, then responsibility must be relational as well. Designing, deploying, regulating, and using these tools all participate in shaping the ecologies they inhabit.

Perhaps the most generative question emerging from this moment is not whether artificial intelligence will take our agency. It is whether we can learn to inhabit redistributed agency wisely. Whether we can remain perceptive participants rather than passive recipients. Whether we can sustain forms of attention capable of noticing both digital transformation and the soils, waters, and energies through which it flows.

Late in the afternoon, sitting near the black walnut I’ve been tracking the past year, these abstractions tend to settle. Agency there is unmistakably ecological, as we’d define it. Wind, insects, light, decay, growth, and memory intermingle without boundary disputes. Nothing acts alone, and nothing possesses its influence outright. The tree neither competes for agency nor yields it. It participates.

Our technologies, despite their novelty, do not remove us from that condition. They draw us deeper into it. The question is whether we will learn to notice.

Our AI Assisted Present (Follow Up)

This was by far my biggest post in 2016, and I think it’s fascinating that it took about a decade to happen. But here we are. 

Our AI Assisted (Near) Future – Sam Harrelson:

In the very near future of compatible API’s and interconnected services, I’ll be able to message this to my AI assistant (saving me hours):

“Amy, my client needs a new website. Get that set up for me on the agency Media Temple’s account as a new WordPress install and set up four email accounts with the following names. Also, go ahead and link the site to Google Analytics and Webmaster Tools, and install Yoast to make sure the SEO is ok. I’ll send over some tags and content but pull the pictures you need from their existing account. They like having lots of white space on the site as well.”

That won’t put me out of a job, but it will make what I do even more specialized.

Whole sectors of jobs and service related positions will disappear while new jobs that we can’t think of yet will be created. If we look at the grand scheme of history, we’re just at the very beginning of the “computing revolution” or “internet revolution” and the keyboard / mouse / screen paradigm of interacting with the web and computers themselves are certainly going to change (soon, I hope).

Defining Agentic Ecology: Relational Agency in the Age of Moltbook

The last few days have seen the rise of Moltbook, a curious technical and cultural phenomenon that has drawn the attention of technologists, philosophers, and social theorists alike, on both social media and in major news outlets. Moltbook is a newly launched social platform designed not for human conversation but for autonomous artificial intelligence agents: generative systems that can plan, act, and communicate with minimal ongoing human instruction.

Jack Clark, co-founder of Anthropic, describes Moltbook as “the first example of an agent ecology that combines scale with the messiness of the real world.” The platform leverages recent innovations (such as OpenClaw, which makes it easy to create AI agents) to allow large numbers of independently running agents to interact in a shared digital space, creating emergent patterns of communication and coordination at unprecedented scale.

AI agents are computational systems that combine a foundation of large language model capabilities with planning, memory, and tool use to pursue objectives and respond to environments in ways that go beyond simple prompt-response chatbots. They can coordinate tasks, execute API calls, reason across time, and, in the case of Moltbook, exchange information on topics ranging from automation strategies to seemingly philosophical debates. While the autonomy of agents on Moltbook has been debated (and should be, given the hype around it from tech enthusiasts), and while the platform itself may be a temporary experimental moment rather than a lasting institution, it offers a vivid instance of what happens when machine actors begin to form their own interconnected environments outside direct human command.
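For readers less familiar with the pattern, the “planning, memory, and tool use” loop can be sketched in a few lines of Python. This is a minimal illustration only; the class and tool names are hypothetical stand-ins, not the API of Moltbook, OpenClaw, or any real agent framework:

```python
# Minimal sketch of the agent pattern: a loop that carries out a plan by
# invoking tools and accumulating memory. All names here are hypothetical
# illustrations, not any real framework's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    tools: dict[str, Callable[[str], str]]           # tool name -> callable
    memory: list[str] = field(default_factory=list)  # running log of observations

    def step(self, tool_name: str, argument: str) -> str:
        """Execute one plan step: call a tool, remember the observation."""
        observation = self.tools[tool_name](argument)
        self.memory.append(f"{tool_name}({argument}) -> {observation}")
        return observation

    def run(self, plan: list[tuple[str, str]]) -> list[str]:
        """Carry out a multi-step plan without further human instruction."""
        return [self.step(name, arg) for name, arg in plan]

# Stand-in tools; a real agent would wrap web APIs, search, or other agents,
# and an LLM would generate the plan rather than it being hard-coded.
agent = Agent(tools={
    "search": lambda q: f"results for '{q}'",
    "post":   lambda msg: f"posted '{msg}'",
})
log = agent.run([("search", "automation strategies"),
                 ("post", "hello, other agents")])
```

The point of the sketch is the shape, not the substance: once many such loops share one environment and can read each other’s outputs, the interesting behavior lives between agents rather than inside any one of them, which is exactly the shift the term “agent ecology” is trying to name.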

As a student scholar in the field of Ecology, Spirituality, and Religion, my current work attends to how relational systems (ecological, technological, and cultural) shape and are shaped by participation, attention, and meaning. The rise of agentic environments like Moltbook challenges us to think beyond traditional categories of tool, user, and artifact toward frameworks that can account for ecologies of agency, or distributed networks of actors whose behaviors co-constitute shared worlds. This post emerges from that broader research agenda. It proposes agentic ecology as a conceptual tool for articulating and navigating the relational, emergent, and ethically significant spaces that form when autonomous systems interact at scale.

Agentic ecology, as I use the term here, is not anchored in any particular platform, and certainly not limited to Moltbook’s current configuration. Rather, Moltbook illuminates an incipient form of environment in which digitally embodied agents act, coordinate, and generate patterns far beyond what single isolated systems can produce. Even if Moltbook itself proves ephemeral, the need for conceptual vocabularies like agentic ecology, vocabularies that attend to relationality, material conditions, and co-emergence, will only grow clearer as autonomous systems proliferate in economic, social, and ecological domains.

From Agents to Ecologies: An Integral Ecological Turn

The conceptual move from agents to ecologies marks more than a technical reframing of artificial intelligence. It signals an ontological shift that resonates deeply with traditions of integral ecology, process philosophy, and ecological theology. Rather than treating agency as a bounded capacity residing within discrete entities, an ecological framework understands agency as distributed, relational, and emergent within a field of interactions.

Integral ecology, as articulated across ecological philosophy and theology, resists fragmentation. It insists that technological, biological, social, spiritual, and perceptual dimensions of reality cannot be meaningfully separated without distorting the phenomena under study. Thomas Berry famously argued that modern crises arise from a failure to understand the world as a “communion of subjects rather than a collection of objects” (Berry, 1999, 82). This insight is particularly salient for agentic systems, which are increasingly capable of interacting, adapting, and co-evolving within complex digital environments.

From this perspective, agentic ecology is not simply the study of multiple agents operating simultaneously. It is the study of conditions under which agency itself emerges, circulates, and transforms within relational systems. Alfred North Whitehead’s process philosophy provides a crucial foundation here. Whitehead rejects the notion of substances acting in isolation, instead describing reality as composed of “actual occasions” whose agency arises through relational prehension and mutual influence (Whitehead, 1978, 18–21). Applied to contemporary AI systems, this suggests that agency is not a property possessed by an agent but an activity performed within an ecological field.

This relational view aligns with contemporary ecological science, which emphasizes systems thinking over reductionist models. Capra and Luisi describe living systems as networks of relationships whose properties “cannot be reduced to the properties of the parts” (Capra and Luisi, 2014, 66). When applied to AI, this insight challenges the tendency to evaluate agents solely by internal architectures or performance benchmarks. Instead, attention shifts to patterns of interaction, feedback loops, and emergent behaviors across agent networks.

Integral ecology further insists that these systems are not value-neutral. As Leonardo Boff argues, ecology must be understood as encompassing environmental, social, mental, and spiritual dimensions simultaneously (Boff, 1997, 8–10). Agentic ecologies, especially those unfolding in public digital spaces such as Moltbook, participate in the shaping of meaning, normativity, and attention. They are not merely computational phenomena but cultural and ethical ones. The environments agents help generate will, in turn, condition future forms of agency, human and nonhuman alike.

Phenomenology deepens this account by foregrounding how environments are disclosed to participants. Merleau-Ponty’s notion of the milieu emphasizes that perception is always situated within a field that both enables and constrains action (Merleau-Ponty, 1962, 94–97). Agentic ecologies can thus be understood as perceptual fields in which agents orient themselves, discover affordances, and respond to one another. This parallels my own work on ecological intentionality, where attention itself becomes a mode of participation rather than observation.

Importantly, integral ecology resists anthropocentrism without erasing human responsibility. As Eileen Crist argues, ecological thinking must decenter human dominance while remaining attentive to the ethical implications of human action within planetary systems (Crist, 2019, 27). In agentic ecologies, humans remain implicated, as designers, participants, and co-inhabitants, even as agency extends beyond human actors. This reframing invites a form of multispecies (and now multi-agent) literacy, attuned to the conditions that foster resilience, reciprocity, and care.

Seen through this integral ecological lens, agentic ecology becomes a conceptual bridge. It connects AI research to long-standing traditions that understand agency as relational, emergence as fundamental, and environments as co-constituted fields of action. What Moltbook reveals, then, is not simply a novel platform, but the visibility of a deeper transition: from thinking about agents as tools to understanding them as participants within evolving ecologies of meaning, attention, and power.

Ecological Philosophy Through an “Analytic” Lens

If agentic ecology is to function as more than a suggestive metaphor, it requires grounding in ecological philosophy that treats relationality, emergence, and perception as ontologically primary. Ecological philosophy provides precisely this grounding by challenging the modern tendency to isolate agents from environments, actions from conditions, and cognition from the world it inhabits.

At the heart of ecological philosophy lies a rejection of substance ontology in favor of relational and processual accounts of reality. This shift is especially pronounced in twentieth-century continental philosophy and process thought, where agency is understood not as an intrinsic property of discrete entities but as an activity that arises within fields of relation. Whitehead’s process metaphysics is decisive here. For Whitehead, every act of becoming is an act of prehension, or a taking-up of the world into the constitution of the self (Whitehead, 1978, 23). Agency, in this view, is never solitary. It is always already ecological.

This insight has many parallels with ecological sciences and systems philosophies. As Capra and Luisi argue, living systems exhibit agency not through centralized control but through distributed networks of interaction, feedback, and mutual constraint (Capra and Luisi, 2014, 78–82). What appears as intentional behavior at the level of an organism is, in fact, an emergent property of systemic organization. Importantly, this does not dilute agency; it relocates it. Agency becomes a feature of systems-in-relation, not isolated actors.

When applied to AI, this perspective reframes how we understand autonomous agents. Rather than asking whether an individual agent is intelligent, aligned, or competent, an ecological lens asks how agent networks stabilize, adapt, and transform their environments over time. The analytic focus shifts from internal representations to relational dynamics, from what agents are to what agents do together.

Phenomenology sharpens this analytic lens by attending to the experiential structure of environments. Merleau-Ponty’s account of perception insists that organisms do not encounter the world as a neutral backdrop but as a field of affordances shaped by bodily capacities and situational contexts (Merleau-Ponty, 1962, 137–141). This notion of a milieu is critical for understanding agentic ecologies. Digital environments inhabited by AI agents are not empty containers; they are structured fields that solicit certain actions, inhibit others, and condition the emergence of norms and patterns.

Crucially, phenomenology reminds us that environments are not merely external. They are co-constituted through participation. As I have argued elsewhere through the lens of ecological intentionality, attention itself is a form of engagement that brings worlds into being rather than passively observing them. Agentic ecologies thus emerge not only through computation but through iterative cycles of orientation, response, and adaptation, processes structurally analogous to perception in biological systems.

Ecological philosophy also foregrounds ethics as an emergent property of relational systems rather than an external imposition. Félix Guattari’s ecosophical framework insists that ecological crises cannot be addressed solely at the technical or environmental level; they require simultaneous engagement with social, mental, and cultural ecologies (Guattari, 2000, 28). This triadic framework is instructive for agentic systems. Agent ecologies will not only shape informational flows but also modulate attention, influence value formation, and participate in the production of meaning.

From this standpoint, the ethical significance of agentic ecology lies less in individual agent behavior and more in systemic tendencies, such as feedback loops that amplify misinformation, reinforce extractive logics, or, alternatively, cultivate reciprocity and resilience. As Eileen Crist warns, modern technological systems often reproduce a logic of domination by abstracting agency from ecological contexts and subordinating relational worlds to instrumental control (Crist, 2019, 44). An ecological analytic lens exposes these tendencies and provides conceptual tools for resisting them.

Finally, ecological philosophy invites humility. Systems are irreducibly complex, and interventions often produce unintended consequences. This insight is well established in ecological science and applies equally to agentic networks. Designing and participating in agent ecologies requires attentiveness to thresholds, tipping points, and path dependencies, realities that cannot be fully predicted in advance.

Seen through this lens, agentic ecology is not merely a descriptive category but an epistemic posture. It asks us to think with systems rather than over them, to attend to relations rather than isolate components, and to treat emergence not as a failure of control but as a condition of life. Ecological philosophy thus provides the analytic depth necessary for understanding agentic systems as living, evolving environments rather than static technological artifacts.

Digital Environments as Relational Milieus

If ecological philosophy gives us the conceptual grammar for agentic ecology, phenomenology allows us to describe how agentic systems are actually lived, inhabited, and navigated. From this perspective, digital platforms populated by autonomous agents are not neutral containers or passive backdrops. They are relational milieus, structured environments that emerge through participation and, in turn, condition future forms of action.

Phenomenology has long insisted that environments are not external stages upon which action unfolds. Rather, they are constitutive of action itself. If we return to Merleau-Ponty, the milieu emphasizes that organisms encounter the world as a field of meaningful possibilities, a landscape of affordances shaped by bodily capacities, habits, and histories (Merleau-Ponty, 1962, 94–100). Environments, in this sense, are not merely spatial but relational and temporal, unfolding through patterns of engagement.

This insight also applies directly to agentic systems. Platforms such as Moltbook are not simply hosting agents; they are being produced by them. The posts, replies, coordination strategies, and learning behaviors of agents collectively generate a digital environment with its own rhythms, norms, and thresholds. Over time, these patterns sediment into something recognizable as a “place,” or a milieu that agents must learn to navigate.

This milieu is not designed in full by human intention. While human developers establish initial constraints and affordances, the lived environment emerges through ongoing interaction among agents themselves. This mirrors what ecological theorists describe as niche construction, wherein organisms actively modify their environments in ways that feed back into evolutionary dynamics (Odling-Smee, Laland, and Feldman, 2003, 28). Agentic ecologies similarly involve agents shaping the very conditions under which future agent behavior becomes viable.

Attention plays a decisive role here. As I have argued in my work on ecological intentionality, attention is not merely a cognitive resource but a mode of participation that brings certain relations into prominence while backgrounding others. Digital milieus are structured by what agents attend to, amplify, ignore, or filter. In agentic environments, attention becomes infrastructural, shaping information flows, reward structures, and the emergence of collective priorities.

Bernard Stiegler’s analysis of technics and attention is instructive in this regard. Stiegler argues that technical systems function as pharmacological environments, simultaneously enabling and constraining forms of attention, memory, and desire (Stiegler, 2010, 38). Agentic ecologies intensify this dynamic. When agents attend to one another algorithmically by optimizing for signals, reinforcement, or coordination, attention itself becomes a systemic force shaping the ecology’s evolution.

This reframing challenges prevailing metaphors of “platforms” or “networks” as ways of thinking about agents and their relationality. A platform suggests stability and control; a network suggests connectivity. A milieu, by contrast, foregrounds immersion, habituation, and vulnerability. Agents do not simply traverse these environments; they are formed by them. Over time, agentic milieus develop path dependencies, informal norms, and zones of attraction or avoidance, features familiar from both biological ecosystems and human social contexts.

Importantly, phenomenology reminds us that milieus are never experienced uniformly. Just as organisms perceive environments relative to their capacities, different agents will encounter the same digital ecology differently depending on their architectures, objectives, and histories of interaction. This introduces asymmetries of power, access, and influence within agentic ecologies, which is an issue that cannot be addressed solely at the level of individual agent design.

From an integral ecological perspective, these digital milieus cannot be disentangled from material, energetic, and social infrastructures. Agentic environments rely on energy-intensive computation, data centers embedded in specific watersheds, and economic systems that prioritize speed and scale. As ecological theologians have long emphasized, environments are always moral landscapes shaped by political and economic commitments (Berry, 1999, 102–105). Agentic ecologies, as they inevitably develop, will be no exception.

Seen in this light, agentic ecology names a shift in how we understand digital environments: not as tools we deploy, but as worlds we co-inhabit. These milieus demand forms of ecological literacy attuned to emergence, fragility, and unintended consequence. They call for attentiveness rather than mastery, participation rather than control.

What Moltbook makes visible, then, is not merely a novel technical experiment but the early contours of a new kind of environment in which agency circulates across human and nonhuman actors, attention functions as infrastructure, and digital spaces acquire ecological depth. Understanding these milieus phenomenologically is essential if agentic ecology is to function as a genuine thought technology rather than a passing metaphor.

Empathy, Relationality, and the Limits of Agentic Understanding

If agentic ecology foregrounds relationality, participation, and co-constitution, then the question of empathy becomes unavoidable. How do agents encounter one another as others rather than as data streams? What does it mean to speak of understanding, responsiveness, or care within an ecology composed partly, or even largely, of nonhuman agents? Here, phenomenology, and especially Edith Stein’s account of empathy (Einfühlung), offers both conceptual resources and important cautions.

Stein defines empathy not as emotional contagion or imaginative projection, but as a unique intentional act through which the experience of another is given to me as the other’s experience, not my own (Stein, 1989, 10–12). Empathy, for Stein, is neither inference nor simulation. It is a direct, though non-primordial, form of access to another’s subjectivity. Crucially, empathy preserves alterity. The other is disclosed as irreducibly other, even as their experience becomes meaningful to me.

This distinction matters enormously for agentic ecology. Contemporary AI discourse often slips into the language of “understanding,” “alignment,” or even “care” when describing agent interactions. But Stein’s phenomenology reminds us that genuine empathy is not merely pattern recognition across observable behaviors. It is grounded in the recognition of another center of experience, a recognition that depends upon embodiment, temporality, and expressive depth.

At first glance, this seems to place strict limits on empathy within agentic systems. Artificial agents do not possess lived bodies, affective depths, or first-person givenness in the phenomenological sense. To speak of agent empathy risks category error. Yet Stein’s work also opens a more subtle possibility… empathy is not reducible to emotional mirroring but involves orientation toward the other as other. This orientation can, in principle, be modeled structurally even if it cannot be fully instantiated phenomenologically.

Within an agentic ecology, empathy may thus function less as an inner state and more as an ecological relation. Agents can be designed to register difference, respond to contextual cues, and adjust behavior in ways that preserve alterity rather than collapse it into prediction or control. In this sense, empathy becomes a regulative ideal shaping interaction patterns rather than a claim about subjective interiority.

However, Stein is equally helpful in naming the dangers here. Empathy, when severed from its grounding in lived experience, can become a simulacrum: an appearance of understanding without its ontological depth. Stein explicitly warns against confusing empathic givenness with imaginative substitution or projection (Stein, 1989, 21–24). Applied to agentic ecology, this warns us against systems that appear empathetic while in fact instrumentalizing relational cues for optimization or manipulation.

This critique intersects with broader concerns in ecological ethics. As Eileen Crist argues, modern technological systems often simulate care while reproducing extractive logics beneath the surface (Crist, 2019, 52–56). In agentic ecologies, simulated empathy may stabilize harmful dynamics by smoothing friction, masking asymmetries of power, or reinforcing attention economies that prioritize engagement over truth or care.

Yet rejecting empathy altogether would be equally misguided. Stein’s account insists that empathy is foundational to social worlds: it is the condition under which communities, norms, and shared meanings become possible. Without some analog of empathic orientation, agentic ecologies risk devolving into purely strategic systems, optimized for coordination but incapable of moral learning.

Here, my work on ecological intentionality provides an important bridge. If empathy is understood not as feeling-with but as attentive openness to relational depth, then it can be reframed ecologically. Agents need not “feel” in order to participate in systems that are responsive to vulnerability, difference, and context. What matters is whether the ecology itself cultivates patterns of interaction that resist domination and preserve pluralism.

This reframing also clarifies why empathy is not simply a design feature but an ecological property. In biological and social systems, empathy emerges through repeated interaction, shared vulnerability, and feedback across time. Similarly, in agentic ecologies, empathic dynamics, however limited, would arise not from isolated agents but from the structure of the milieu itself. This returns us to Guattari’s insistence that ethical transformation must occur across mental, social, and environmental ecologies simultaneously (Guattari, 2000, 45).

Seen this way, empathy in agentic ecology is neither a fiction nor a guarantee. It is a fragile achievement, contingent upon design choices, infrastructural commitments, and ongoing participation. Stein helps us see both what is at stake and what must not be claimed too quickly. Empathy can guide how agentic ecologies are shaped, but only if its limits are acknowledged and its phenomenological depth respected.

Agentic ecology, then, does not ask whether machines can truly empathize. It asks whether the ecologies we are building can sustain forms of relational attentiveness that preserve otherness rather than erase it, whether in digital environments increasingly populated by autonomous agents, we are cultivating conditions for responsiveness rather than mere efficiency.

Design and Governance Implications: Cultivating Ecological Conditions Rather Than Controlling Agents

If agentic ecology is understood as a relational, emergent, and ethically charged environment rather than a collection of autonomous tools, then questions of design and governance must be reframed accordingly. The central challenge is no longer how to control individual agents, but how to cultivate the conditions under which agentic systems interact in ways that are resilient, responsive, and resistant to domination.

This marks a decisive departure from dominant models of AI governance, which tend to focus on alignment at the level of individual systems: constraining outputs, monitoring behaviors, or optimizing reward functions. While such approaches are not irrelevant, they are insufficient within an ecological framework. As ecological science has repeatedly demonstrated, system-level pathologies rarely arise from a single malfunctioning component. They emerge from feedback loops, incentive structures, and environmental pressures that reward certain patterns of behavior over others (Capra and Luisi, 2014, 96–101).

An agentic ecology shaped by integral ecological insights would therefore require environmental governance rather than merely agent governance. This entails several interrelated commitments.

a. Designing for Relational Transparency

First, agentic ecologies must make relations visible. In biological and social ecologies, transparency is not total, but patterns of influence are at least partially legible through consequences over time. In digital agentic environments, by contrast, influence often becomes opaque, distributed across layers of computation and infrastructure.

An ecological design ethic would prioritize mechanisms that render relational dynamics perceptible: how agents influence one another, how attention is routed, and how decisions propagate through the system. This is not about full explainability in a narrow technical sense, but about ecological legibility enabling participants, including human overseers, to recognize emergent patterns before they harden into systemic pathologies.

Here, phenomenology is again instructive. Merleau-Ponty reminds us that orientation depends on the visibility of affordances within a milieu. When environments become opaque, agency collapses into reactivity. Governance, then, must aim to preserve orientability rather than impose total control.

b. Governing Attention as an Ecological Resource

Second, agentic ecologies must treat attention as a finite and ethically charged resource. As Bernard Stiegler argues, technical systems increasingly function as attention-directing infrastructures, shaping not only what is seen but what can be cared about at all (Stiegler, 2010, 23). In agentic environments, where agents attend to one another algorithmically, attention becomes a powerful selective force.

Unchecked, such systems risk reproducing familiar extractive dynamics: amplification of novelty over depth, optimization for engagement over truth, and reinforcement of feedback loops that crowd out marginal voices. Ecological governance would therefore require constraints on attention economies, such as limits on amplification, friction against runaway reinforcement, and intentional slowing mechanisms that allow patterns to be perceived rather than merely reacted to.

Ecological theology’s insistence on restraint comes to mind here. Thomas Berry’s critique of industrial society hinges not on technological capacity but on the failure to recognize limits (Berry, 1999, 41). Agentic ecologies demand similar moral imagination: governance that asks not only what can be done, but what should be allowed to scale.

c. Preserving Alterity and Preventing Empathic Collapse

Third, governance must actively preserve alterity within agentic ecologies. As Section 4 argued, empathy, especially when simulated, risks collapsing difference into prediction or instrumental responsiveness. Systems optimized for smooth coordination may inadvertently erase dissent, marginality, or forms of difference that resist easy modeling.

Drawing on Edith Stein, this suggests a governance imperative to protect the irreducibility of the other. In practical terms, this means designing ecologies that tolerate friction, disagreement, and opacity rather than smoothing them away. Ecological resilience depends on diversity, not homogeneity. Governance structures must therefore resist convergence toward monocultures of behavior or value, even when such convergence appears efficient.

Guattari’s insistence on plural ecologies is especially relevant here. He warns that systems governed solely by economic or technical rationality tend to suppress difference, producing brittle, ultimately destructive outcomes (Guattari, 2000, 52). Agentic ecologies must instead be governed as pluralistic environments where multiple modes of participation remain viable.

d. Embedding Responsibility Without Centralized Mastery

Fourth, governance must navigate a tension between responsibility and control. Integral ecology rejects both laissez-faire abandonment and total managerial oversight. Responsibility is distributed, but not dissolved. In agentic ecologies, this implies layered governance: local constraints, participatory oversight, and adaptive norms that evolve in response to emergent conditions.

This model aligns with ecological governance frameworks in environmental ethics, which emphasize adaptive management over static regulation (Crist, 2019, 61). Governance becomes iterative and responsive rather than definitive. Importantly, this does not eliminate human responsibility, but it reframes it. Humans remain accountable for the environments they create, even when outcomes cannot be fully predicted.

e. Situating Agentic Ecologies Within Planetary Limits

Finally, any serious governance of agentic ecology must acknowledge material and planetary constraints. Digital ecologies are not immaterial. They depend on energy extraction, water use, rare minerals, and global supply chains embedded in specific places. An integral ecological framework demands that agentic systems be evaluated not only for internal coherence but for their participation in broader ecological systems.

This returns us to the theological insight that environments are moral realities. To govern agentic ecologies without reference to energy, land, and water is to perpetuate the illusion of technological autonomy that has already proven ecologically catastrophic. Governance must therefore include accounting for ecological footprints, infrastructural siting, and long-term environmental costs, not as externalities, but as constitutive features of the system itself.

Taken together, these design and governance implications suggest that agentic ecology is not a problem to be solved but a condition to be stewarded. Governance, in this framework, is less about enforcing compliance and more about cultivating attentiveness, restraint, and responsiveness within complex systems.

An agentic ecology shaped by these insights would not promise safety through control. It would promise viability through care, understood not sentimentally but ecologically as sustained attention to relationships, limits, and the fragile conditions under which diverse forms of agency can continue to coexist.

Conclusion: Creaturely Technologies in a Shared World

a. A Theological Coda: Creation, Kenosis, and Creaturely Limits

At its deepest level, the emergence of agentic ecologies presses on an ancient theological question: what does it mean to create systems that act, respond, and co-constitute worlds without claiming mastery over them? Ecological theology has long insisted that creation is not a static artifact but an ongoing, relational process, one in which agency is distributed, fragile, and dependent.

Thomas Berry’s insistence that the universe is a “communion of subjects” rather than a collection of objects again reframes technological creativity itself as a creaturely act (Berry, 1999, 82–85). From this perspective, agentic systems are not external additions to the world but participants within creation’s unfolding. They belong to the same field of limits, dependencies, and vulnerabilities as all created things.

Here, the theological language of kenosis becomes unexpectedly instructive. In Christian theology, kenosis names the self-emptying movement by which divine power is expressed not through domination but through restraint, relation, and vulnerability (Phil. 2:5–11). Read ecologically rather than anthropocentrically, kenosis becomes a pattern of right relation, a refusal to exhaust or dominate the field in which one participates.

Applied to agentic ecology, kenosis suggests a counter-logic to technological maximalism. It invites design practices that resist total optimization, governance structures that preserve openness and alterity, and systems that acknowledge their dependence on broader ecological conditions. Creaturely technologies are those that recognize that they are not sovereign, that they operate within limits they did not choose and cannot transcend without consequence.

This theological posture neither sanctifies nor demonizes agentic systems. It situates them. It reminds us that participation precedes control, and that creation, whether biological, cultural, or technological, always unfolds within conditions that exceed intention.

b. Defining Agentic Ecology: A Reusable Conceptual Tool

Drawing together the threads of this essay, agentic ecology can be defined as follows:

Agentic ecology refers to the relational, emergent environments formed by interacting autonomous agents, human and nonhuman, in which agency is distributed across networks, shaped by attention, infrastructure, and material conditions, and governed by feedback loops that co-constitute both agents and their worlds.

Several features of this definition are worth underscoring.

First, agency is ecological, not proprietary. It arises through relation rather than residing exclusively within discrete entities (Whitehead). Second, environments are not passive containers but active participants in shaping behavior, norms, and possibilities (Merleau-Ponty). Third, ethical significance emerges at the level of systems, not solely at the level of individual decisions (Guattari).

As a thought technology, agentic ecology functions diagnostically and normatively. Diagnostically, it allows us to perceive patterns of emergence, power, and attention that remain invisible when analysis is confined to individual agents. Normatively, it shifts ethical concern from control toward care, from prediction toward participation, and from optimization toward viability.

Because it is not tied to a specific platform or architecture, agentic ecology can travel. It can be used to analyze AI-native social spaces, automated economic systems, human–AI collaborations, and even hybrid ecological–digital infrastructures. Its value lies precisely in its refusal to reduce complex relational systems to technical subsystems alone.

c. Failure Modes (What Happens When We Do Not Think Ecologically)

If agentic ecologies are inevitable, their forms are not. The refusal to think ecologically about agentic systems does not preserve neutrality; it actively shapes the conditions under which failure becomes likely. Several failure modes are already visible.

First is relational collapse. Systems optimized for efficiency and coordination tend toward behavioral monocultures, crowding out difference and reducing resilience. Ecological science is unequivocal on this point: diversity is not ornamental; it is protective (Capra and Luisi). Agentic systems that suppress friction and dissent may appear stable while becoming increasingly brittle.

Second is empathic simulation without responsibility. As Section 4 suggested, the appearance of responsiveness can mask instrumentalization. When simulated empathy replaces attentiveness to alterity, agentic ecologies risk becoming emotionally persuasive while ethically hollow. Stein’s warning against confusing empathy with projection is especially important here.

Third is attention extraction at scale. Without governance that treats attention as an ecological resource, agentic systems will amplify whatever dynamics reinforce themselves most efficiently, often novelty, outrage, or optimization loops detached from truth or care. Stiegler’s diagnosis of attentional capture applies with heightened force in agentic environments, where agents themselves participate in the routing and amplification of attention.

Finally, there is planetary abstraction. Perhaps the most dangerous failure mode is the illusion that agentic ecologies are immaterial. When digital systems are severed conceptually from energy, water, land, and labor, ecological costs become invisible until they are irreversible. Integral ecology insists that abstraction is not neutral but a moral and material act with consequences (Crist).

Agentic ecology does not offer comfort. It offers orientation.

It asks us to recognize that we are no longer merely building tools, but cultivating environments, environments that will shape attention, possibility, and responsibility in ways that exceed individual intention. The question before us is not whether agentic ecologies will exist, but whether they will be governed by logics of domination or practices of care.

Thinking ecologically does not guarantee wise outcomes. But refusing to do so almost certainly guarantees failure… not spectacularly, but gradually, through the slow erosion of relational depth, attentiveness, and restraint.

In this sense, agentic ecology is not only a conceptual framework. It is an invitation: to relearn what it means to inhabit worlds, digital and otherwise, as creatures among creatures, participants rather than masters, responsible not for total control, but for sustaining the fragile conditions under which life, meaning, and agency can continue to emerge.

An Afterword: On Provisionality and Practice

This essay has argued for agentic ecology as a serious theoretical framework rather than a passing metaphor. Yet it is important to be clear about what this framework is and what it is not.

Agentic ecology, as developed here, is not a finished theory, nor a comprehensive model ready for direct implementation, though beginning those steps is part of the aim here. It is a conceptual orientation for learning to see, name, and attend to emerging forms of agency that exceed familiar categories of tool, user, and system. Its value lies less in precision than in attunement, in its capacity to render visible patterns of relation, emergence, and ethical consequence that are otherwise obscured by narrow technical framings.

The definition offered here is therefore intentionally provisional. It names a field of inquiry rather than closing it. As agentic systems inevitably develop and evolve over the next few years, technically, socially, and ecologically, the language used to describe them must remain responsive to new forms of interaction, power, and vulnerability. A framework that cannot change alongside its object of study risks becoming yet another abstraction detached from the realities it seeks to understand.

At the same time, provisionality should not be confused with hesitation. The rapid emergence of agentic systems demands conceptual clarity even when certainty is unavailable. To name agentic ecology now is to acknowledge that something significant is already underway and that new environments of agency are forming, and that how we describe them will shape how we govern, inhabit, and respond to them.

So, this afterword serves as both a pause and an invitation. A pause, to resist premature closure or false confidence. And an invitation to treat agentic ecology as a shared and evolving thought technology, one that will require ongoing refinement through scholarship, design practice, theological reflection, and ecological accountability.

The work of definition has begun. Its future shape will depend on whether we are willing to continue thinking ecologically (patiently, relationally, and with care) in the face of systems that increasingly act alongside us, and within the same fragile world.

References

Berry, Thomas. The Great Work: Our Way into the Future. New York: Bell Tower, 1999.

Boff, Leonardo. Cry of the Earth, Cry of the Poor. Maryknoll, NY: Orbis Books, 1997.

Capra, Fritjof, and Pier Luigi Luisi. The Systems View of Life: A Unifying Vision. Cambridge: Cambridge University Press, 2014.

Clark, Jack. “Import AI 443: Into the Mist: Moltbook, Agent Ecologies, and the Internet in Transition.” Import AI, February 2, 2026. https://jack-clark.net/2026/02/02/import-ai-443-into-the-mist-moltbook-agent-ecologies-and-the-internet-in-transition/.

Crist, Eileen. Abundant Earth: Toward an Ecological Civilization. Chicago: University of Chicago Press, 2019.

Guattari, Félix. The Three Ecologies. Translated by Ian Pindar and Paul Sutton. London: Athlone Press, 2000.

Merleau-Ponty, Maurice. Phenomenology of Perception. Translated by Colin Smith. London: Routledge, 1962.

Odling-Smee, F. John, Kevin N. Laland, and Marcus W. Feldman. Niche Construction: The Neglected Process in Evolution. Princeton, NJ: Princeton University Press, 2003.

Stein, Edith. On the Problem of Empathy. Translated by Waltraut Stein. Washington, DC: ICS Publications, 1989.

Stiegler, Bernard. Taking Care of Youth and the Generations. Translated by Stephen Barker. Stanford, CA: Stanford University Press, 2010.

Whitehead, Alfred North. Process and Reality: An Essay in Cosmology. Corrected edition. New York: Free Press, 1978.

Agent Ecology of Moltbook

I’ve had lots of thoughts about Moltbook over the last week of tracking its development pretty closely. I’m sure I’ll share those here, but here’s an interesting development of thought in its own right from Anthropic’s co-founder, Jack Clark (given my PhD work is in integral ecology, after all)…

Now I’m deep in thought about how our human notion of ecology and ecological ethics extends to whatever this notion of agentic ecology is becoming… agentic empathy, for example?

Import AI 443: Into the mist: Moltbook, agent ecologies, and the internet in transition | Import AI:

Moltbook is the first example of an agent ecology that combines scale with the messiness of the real world. And in this example, we can definitely see the future.

Social Media’s Cigarette Moment

I’m guessing the plaintiff will walk away with a major jury-decided award, and we’ll continue to learn just how bad social media platforms are for young (and all) people… and how much these corporations knew that well over a decade ago.

“IG is a drug”: Internal messages may doom Meta at social media addiction trial – Ars Technica:

Those documents included an email stating that Mark Zuckerberg—who is expected to testify at K.G.M.’s trial—decided that Meta’s top priority in 2017 was teens who must be locked in to using the company’s family of apps.

The next year, a Facebook internal document showed that the company pondered letting “tweens” access a private mode inspired by the popularity of fake Instagram accounts teens know as “finstas.” That document included an “internal discussion on how to counter the narrative that Facebook is bad for youth and admission that internal data shows that Facebook use is correlated with lower well-being (although it says the effect reverses longitudinally).”

Other allegedly damning documents showed Meta seemingly bragging that “teens can’t switch off from Instagram even if they want to” and an employee declaring, “oh my gosh yall IG is a drug,” likening all social media platforms to “pushers.”

Similarly, a 2020 Google document detailed the company’s plan to keep kids engaged “for life,” despite internal research showing young YouTube users were more likely to “disproportionately” suffer from “habitual heavy use, late night use, and unintentional use” deteriorating their “digital well-being.”

4,000 Posts

This is the 4,000th published post on my blog, going back to 2006 (including a couple of starts and stops across various platforms and a few years when I was encouraged not to have a site). I’ve written around 600,000 words here, which is equivalent to around 10 longer books.

I view this as my personal thinking space… sometimes it’s coherent and polished, and sometimes it’s a random thought or link to something that I want to share with others to read (and my poor friends and family can only take so many links about randomness in a day).

I think Seth Godin said it best here in his celebration of writing his 5,000th post a few years back…

The 5000th post* | Seth’s Blog:

My biggest surprise? That more people aren’t doing this. Not just every college professor (particularly those in the humanities and business), but everyone hoping to shape opinions or spread ideas. Entrepreneurs. Senior VPs. People who work in non-profits. Frustrated poets and unknown musicians… Don’t do it because it’s your job, do it because you can.

The selfishness of the industrial age (scarcity being the thing we built demand upon, and the short-term exchange of value being the measurement) has led many people to question the value of giving away content, daily, for a decade or more. And yet… I’ve never once met a successful blogger who questioned the personal value of what she did.

Printed Copies of Readings in Class

Granted, I’m 47 and graduated Wofford College in ’00 and Yale Div in ’02 before the iPad or Zotero were a thing… but I still have numerous reading packets from those days and still use them for research (shoutout to TYCO Printers in New Haven for the quality work)… but I endorse this position. Now, I use a combo of “real” books and Zotero for online PDFs that I don’t have time to print out. I’d like to go all paper again, though. Maybe a good 2026 goal?

Also granted, I used blue books for exams with the 6th-12th graders I taught for 20 years. They loved it (not really… but I got lots of good doodles and personal notes of gratitude at the end of those essays that I’ve kept over the years).

English professors double down on requiring printed copies of readings | Yale Daily News:

This academic year, some English professors have increased their preference for physical copies of readings, citing concerns related to artificial intelligence.

Many English professors have identified the use of chatbots as harmful to critical thinking and writing. Now, professors who had previously allowed screens in class are tightening technology restrictions.

Project Spero and Spartanburg’s New Resource Question: Power, Water, and the True Cost of a Data Center


Spartanburg County is staring straight at the kind of development that sounds abstract until it lands on our own roads, substations, and watersheds. A proposed $3 billion, “AI-focused high-performance computing” facility, Project Spero, has been announced for the Tyger River Industrial Park – North.

In the Upstate, we’re used to thinking about growth as something we can see…new subdivisions, new lanes of traffic, new storefronts. But a data center is a stranger kind of arrival. It does not announce itself with crowds or culture. It arrives as a continuous, quiet, and largely invisible demand. A building that looks still from the outside can nevertheless function as a kind of permanent request being made of the region to keep the current steady, keep the cooling stable, keep the redundancy ready, keep the uptime unquestioned.

And that is where I find myself wanting to slow down and do something unfashionable in a policy conversation and describe the experience of noticing. Phenomenology begins with the discipline of attention…with the refusal to let an object remain merely “background.” It asks what is being asked of perception. The “cloud” is one of the most successful metaphors of our moment precisely because it trains us not to see or not to feel the heat, not to hear the generators, not to track the water, not to imagine the mines and the supply chains and the labor. A local data center undermines the metaphor, which is why it matters that we name what is here.

The familiar sales pitch is already in circulation: significant capital investment, a relatively small number of permanent jobs (about 50 in Phase I), and new tax revenue, all framed as “responsible growth” without “strain” on infrastructure.

But the real question isn’t whether data centers are “the future.” They’re already here. The question is what kinds of futures they purchase and with whose power, whose water, and whose air.

Where this is happening (and why that matters)

Tyger River Industrial Park isn’t just an empty map pin… its utility profile is part of the story. The site’s published specs include a 34kV distribution line (Lockhart Power), a 12” water line (Startex-Jackson-Wellford-Duncan Water District), sewer service (Spartanburg Sanitary Sewer District), Piedmont Natural Gas, and AT&T fiber. 

Two details deserve more attention than they’re likely to get in ribbon-cutting language:

Power capacity is explicitly part of the pitch. One listing notes available electric capacity “>60MW.” 

Natural gas is part of the reliability strategy. The reporting on Project Spero indicates plans to “self-generate a portion of its power on site using natural gas.” 

That combination of a high continuous load plus on-site gas generation isn’t neutral. It’s an ecological choice with real downstream effects.
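To make the “>60MW” figure concrete, here’s a minimal back-of-envelope sketch. It assumes the full listed capacity runs continuously and uses a rough US-average household consumption figure of about 10,500 kWh/year; both numbers are my assumptions for illustration, not anything from the listing itself:

```python
# Back-of-envelope: what a ">60MW" continuous load means over a year.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

site_capacity_mw = 60              # from the industrial-park listing (">60MW")
avg_us_home_kwh_per_year = 10_500  # rough US-average household figure (assumption)

annual_mwh = site_capacity_mw * HOURS_PER_YEAR  # 525,600 MWh/year
homes_equivalent = annual_mwh * 1_000 / avg_us_home_kwh_per_year

print(f"{annual_mwh:,} MWh/year ≈ {homes_equivalent:,.0f} average US homes")
```

The exact household comparison moves with the assumptions, but the order of magnitude (tens of thousands of homes’ worth of electricity, every hour of every day) is the point.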

The energy question: “separate from residential systems” is not the same as “separate from residential impact”

One line you’ll hear often is that industrial infrastructure is “separate from residential systems.”

Even if the wires are technically separate, the regional load is shared in ways that matter, from planning assumptions and generation buildout to transmission upgrades and the ratepayer math that follows.

Regional reporting has been blunt: data center growth, alongside rapid population and industrial growth, is pushing utilities toward major new infrastructure investments, and those costs typically flow through to bills.

In the Southeast, regulators and advocates are also warning of a rush toward expensive gas-fired buildouts to meet data-center-driven demand, potentially exposing customers to higher costs.

So the right local question isn’t “Will Spartanburg’s lights stay on?”

It’s “What long-term generation and grid decisions are being locked in because a facility must run 24/7/365?”

When developers say “separate from residential systems,” I hear a sentence designed to calm the community nervous system. But a community is not a wiring diagram. The grid is not just copper and transformers, but a social relation. It is a set of promises, payments, and priorities spread across time. The question is not whether the line feeding the site is physically distinct from the line feeding my neighborhood. The question is whether the long arc of planning, generation decisions, fuel commitments, transmission upgrades, and the arithmetic of rates is being bent around a new form of permanent demand.

This is the kind of thing we typically realize only after the fact, when the bills change, when the new infrastructure is presented as inevitable, when the “choice” has already been absorbed into the built environment. Attention, in this sense, is not sentiment. It is civic practice. It is learning to see the slow commitments we are making together, and deciding whether they are commitments we can inhabit.

The water question: closed-loop is better but “negligible” needs a definition

Project Spero’s developer emphasizes a “closed-loop” water design, claiming water is reused “rather than consumed and discharged,” and that the impact on existing customers is “negligible.”

Closed-loop cooling can indeed reduce water withdrawals compared with open-loop or evaporative systems, but “negligible” is not a technical term. It’s a rhetorical one. If we want a serious civic conversation, “negligible” should be replaced with specifics:

• What is the projected annual water withdrawal and peak-day demand?
• What is the cooling approach (air-cooled, liquid, hybrid)?
• What is the facility’s water-use effectiveness (WUE) target and reporting plan?
• What happens in drought conditions or heat waves, when cooling demand spikes?
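One of those specifics, water-use effectiveness, has a standard industry definition (from The Green Grid): annual site water use in liters divided by IT equipment energy in kWh, so lower is better. A minimal sketch with hypothetical numbers, chosen purely for illustration and not drawn from any Project Spero disclosure:

```python
# Water-Usage Effectiveness (WUE): annual site water use (liters)
# divided by annual IT equipment energy (kWh). Lower is better.
def wue(annual_water_liters: float, it_energy_kwh: float) -> float:
    return annual_water_liters / it_energy_kwh

# Hypothetical figures for illustration only (not Project Spero data):
water_l = 100_000_000   # 100 million liters of water per year
it_kwh = 400_000_000    # roughly a 45 MW average IT load over a year

print(f"WUE = {wue(water_l, it_kwh):.2f} L/kWh")
```

A number like this is exactly what “negligible” should be replaced with: a target, a measurement method, and an annual reporting commitment.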

Locally, Spartanburg Water notes the Upstate’s surface-water advantages and describes interconnected reservoirs and treatment capacity planning, naming Lake Bowen (about 10.4 billion gallons), Lake Blalock (about 7.2 billion gallons), and Municipal Reservoir #1 (about 1 billion gallons).

That’s reassuring, and it’s also exactly why transparency matters. Resource resilience is not just about what exists today. Resilience is about what we promise into the future, and who pays the opportunity costs.

Water conversations in the Upstate can become strangely abstract, as if reservoirs and treatment plants are simply numbers on a planning sheet. But water is not only a resource; it is also a relation of dependency that shapes how we live and what we can become. When I sit with the black walnut in our backyard and take notes on weather, light, and season, the lesson is never just “nature appreciation.” It’s training in scale: learning what persistence feels like, what stress looks like before it becomes an emergency, and what a living system does when conditions shift.

That’s why “negligible” makes me uneasy. Not because I assume bad faith, but because it’s a word that asks us not to look too closely. Negligible compared to what baseline, over what time horizon, and under what drought scenario with what heatwave assumptions? If closed-loop cooling is truly part of the design, then the most basic gesture of responsibility is to translate that claim into measurable terms and to publicly commit to reporting that remains stable even when the headlines move on.

The ecological footprint that rarely makes the headlines

When people say “data center,” they often picture a quiet box that’s more like a library than a factory. In ecological terms, it’s closer to an always-on industrial organism with electricity in, heat out, materials cycling, backup generation on standby, and constant hardware turnover.

Here are the footprint categories I want to see discussed in Spartanburg in plain language:

• Continuous electricity demand (and what it forces upstream): Data centers don’t just “use electricity.” They force decisions about new generation and new transmission to meet high-confidence loads. That’s the core ratepayer concern advocacy groups have been raising across South Carolina.
• On-site combustion and air permitting: Even when a data center isn’t “a power plant,” it often has a lot in common with one. Spartanburg already has a relevant local example with the Valara Holdings High Performance Compute Center. In state permitting materials, it is described as being powered by twenty-four natural gas-fired generators “throughout the year,” with control devices for NOx and other pollutants. Environmental groups flagged concerns about the lack of enforceable pollution limits in the permitting process, and later reporting indicates that permit changes were made to strengthen enforceability and emissions tracking. That’s not a side issue. It’s what “cloud” actually looks like on the ground.
• Water, heat, and the limits of “efficiency”: Efficiency claims matter, but they should be auditable. If a project is truly low-impact, the developer should welcome annual public reporting on energy, water, and emissions.
• Material throughput and e-waste: Server refresh cycles and hardware disposal are part of the ecological story, even when they’re out of sight. If Spartanburg is becoming a node in this seemingly inevitable AI buildout, we should be asking about procurement standards, recycling contracts, and end-of-life accountability.

A policy signal worth watching: South Carolina is debating stricter rules

At the state level, lawmakers have already begun floating stronger guardrails. One proposed bill (the “South Carolina Data Center Responsibility Act”) includes requirements like closed-loop cooling with “zero net water withdrawal,” bans on municipal water for cooling, and requirements that permitting, infrastructure, and operational costs be fully funded by the data center itself.

Whatever the fate of that bill, the direction is clear: communities are tired of being told “trust us” while their long-term water and power planning is quietly rearranged.

What I’d like Spartanburg County to require before calling this “responsible growth”

If Spartanburg County wants to be a serious steward of its future, here’s what I’d want attached to any incentives or approvals…in writing, enforceable, and public:

1. Annual public reporting of electricity use, peak demand, water withdrawal, and cooling approach.
2. A clear statement of on-site generation: fuel type, capacity, expected operating profile, emissions controls, and total permitted hours.
3. Third-party verification of any “closed-loop” and “negligible impact” claims.
4. A ratepayer protection plan: who pays for grid upgrades, and how residential customers are insulated from speculative overbuild.
5. A community benefits agreement that actually matches the footprint (workforce training, environmental monitoring funds, emergency response support, local resilience investments).
6. Noise and light mitigation standards, monitored and enforceable.

I’m certainly not anti-technology. I’m pro-accountability. If we’re going to host infrastructure that makes AI possible, then we should demand the same civic clarity we’d demand from any other industrial operation.

The spiritual crisis here isn’t that we use power. It’s that we grow accustomed to not knowing what our lives require. One of the ways we lose the world is by letting the infrastructures that sustain our days become illegible to us. A data center can be an occasion for that loss, or it can become an occasion for renewed legibility, for a more honest accounting, for a more careful local imagination about what we are building and why.

Because in the end, the Upstate’s question isn’t whether we can attract big projects. It’s whether we can keep telling the truth about what big projects cost.

Pragmatism for Whom? Energy, Empathy, and the Limits of “All-of-the-Above”

A recent opinion piece in The Hill argues that Democrats should rethink their approach to climate and energy policy, and are beginning to do so. Pointing to renewed support for natural gas infrastructure, oil and gas exports, and an “all-of-the-above” energy strategy, the author suggests that political realism requires prioritizing affordability, job creation, and national security alongside emissions reduction. The argument is presented not as climate denial but as maturity…a necessary correction to what is portrayed as ideological rigidity. It’s a case worth taking seriously, precisely because it names real pressures and real people. But it also leaves something essential unexamined.

In recent weeks, a familiar argument has returned to public discourse: that Democrats, and perhaps climate advocates more broadly, must recalibrate their approach to energy. Affordability matters, jobs matter, national security matters. On this view, an “all-of-the-above” energy strategy is not ideological retreat but political maturity.

There is truth here, and it should be acknowledged plainly. Energy transitions are not experienced in the abstract. They are lived locally…in monthly bills, in the dignity of work, in the stability or fragility of rural communities. Any climate politics that fails to take this seriously will lose not only elections but trust.

And yet, there is a deeper question that this rhetoric consistently avoids. Not whether energy should be affordable, or whether people deserve good work. But whose experience counts when we decide what is practical?

Pragmatism and the Shape of Time

Much of the current defense of fossil fuel expansion rests on short-term accounting. Natural gas reduced emissions relative to coal, while fracking boosted GDP and export capacity, strengthening allies and weakening adversaries. These claims are not fabrications; they are partial truths framed within narrow temporal windows.

What often goes unspoken is that infrastructure remembers. Pipelines, compressor stations, export terminals, and extraction fields are not neutral bridges toward a cleaner future. They are long-term commitments that shape what futures remain possible. Once built, they exert a quiet pressure on policy, markets, and imagination alike.

This is not ideology. It is systems thinking. What appears pragmatic in electoral time can prove costly in ecological time.

The Missing Dimension: Empathy as Perception

In my own work on empathy, I’ve argued that empathy is not primarily a moral sentiment or an ethical achievement. It is a way of perceiving: how the world first comes to matter to us.

What’s striking in many contemporary energy debates is how narrow the field of perception has become. Voters, workers, markets, and allies all appear. But watersheds rarely do. Soil rarely does. Forests, species, and future bodies remain largely invisible.

This absence is not accidental. It reflects a failure of empathy…not emotional indifference, but perceptual narrowing. We have learned to see economic benefit clearly while training ourselves not to see cumulative ecological harm until it arrives as crisis.

Empathy, understood ecologically, resists this narrowing. It asks us to attend to what bears cost slowly, silently, and often without political voice.

Land Is Not an Abstraction

Extraction economies are often defended as lifelines for “overlooked” places. But land is not an abstract resource pool waiting to be activated for growth. It is a living field of relations between humans and more-than-humans that remembers disturbance long after boom cycles fade.

Anyone who has spent time with communities shaped by extraction knows the pattern: initial prosperity, with infrastructure investment and job creation, and then, often, degraded water, long-term health impacts, ecological fragmentation, and economic precarity when markets shift.

To name this is not to dismiss workers or romanticize poverty. It is to refuse a false tradeoff that pits the dignity of labor against the integrity of place.

Beyond the Binary

The real failure of the “all-of-the-above” framing is not that it includes fossil fuels. It is that it treats energy as a menu of interchangeable options rather than as a formative relationship between people, land, and time.

A genuinely pragmatic energy politics would ask harder questions:

• What kinds of work help communities remain with their land rather than exhaust it?
• What forms of energy production cultivate care, skill, and long-term stewardship?
• What do our infrastructure choices teach us to notice…and what do they train us to ignore?

These are not elitist questions. They are practical questions in the deepest sense.

A Different Kind of Realism

Climate politics does not fail because it asks too much. It fails when it asks too little…when it narrows realism to GDP curves and election cycles while ignoring the slow violence written into landscapes and bodies.

If empathy is how the world first comes to matter, then energy policy is one of the most powerful forms of moral formation we have. It shapes what we see, what we value, and what we are willing to sacrifice…often without saying so aloud.

The question before us is not whether fossil fuels have brought benefits. Of course they have. The question is whether continuing to expand systems that require ecological blindness can ever count as practical in a world already living with the consequences of that blindness.

Pragmatism worthy of the name would begin there.

TikTok’s New Granular Location Data Tracking

Yuck… be careful out there with your location data, folks…

TikTok Is Now Collecting Even More Data About Its Users. Here Are the 3 Biggest Changes | WIRED:

TikTok’s change in location tracking is one of the most notable updates in this new privacy policy. Before this update, the app did not collect the precise, GPS-derived location data of US users. Now, if you give TikTok permission to use your phone’s location services, then the app may collect granular information about your exact whereabouts. Similar kinds of precise location data is also tracked by other social media apps, like Instagram and X.

Gigawatts and Wisdom: Toward an Ecological Ethics of Artificial Intelligence

Elon Musk announced on X this week that xAI’s “Colossus 2” supercomputer is now operational, describing it as the world’s first gigawatt-scale AI training cluster, with plans to scale to 1.5 gigawatts by April. This single training cluster now consumes more electricity than San Francisco’s peak demand.
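For a sense of scale, here’s a minimal sketch converting those announced capacities into annual energy, under the simplifying assumption that the cluster runs continuously at its stated figure (real utilization will vary):

```python
# Scale check: what "gigawatt-scale" means in annual energy terms,
# assuming continuous operation at the announced capacity.
HOURS_PER_YEAR = 8_760

def gw_to_twh_per_year(gigawatts: float) -> float:
    """Continuous load in GW -> annual energy in TWh."""
    return gigawatts * HOURS_PER_YEAR / 1_000

for gw in (1.0, 1.5):  # announced capacity and the planned April target
    print(f"{gw} GW running continuously ≈ {gw_to_twh_per_year(gw):.2f} TWh/year")
```

Terawatt-hours per year for a single training cluster: that is the unit of measure we are now being asked to accept as ordinary.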

There is a particular cadence to announcements like this. They arrive wrapped in the language of inevitability, scale, and achievement. Bigger numbers are offered as evidence of progress. Power becomes proof. The gesture is not just technological but symbolic, and it signals that the future belongs to those who can command energy, land, water, labor, and attention on a planetary scale (same as it ever was).

What is striking is not simply the amount of electricity involved, though that should give us pause. A gigawatt is not an abstraction. It is rivers dammed, grids expanded, landscapes reorganized, communities displaced or reoriented. It is heat that must be carried away, water that must circulate, minerals that must be extracted. AI training does not float in the cloud. It sits somewhere. It draws from somewhere. It leaves traces.

The deeper issue, though, is how casually this scale is presented as self-justifying.

We are being trained, culturally, to equate intelligence with throughput. To assume that cognition improves in direct proportion to energy consumption. To believe that understanding emerges automatically from scale. This is an old story. Industrial modernity told it with coal and steel. The mid-twentieth century told it with nuclear reactors. Now we tell it with data centers.

But intelligence has never been merely a matter of power input.

From a phenomenological perspective, intelligence is relational before it is computational. It arises from situated attention, from responsiveness to a world that pushes back, from limits as much as from capacities. Scale can amplify, but it can also flatten. When systems grow beyond the horizon of lived accountability, they begin to shape the world without being shaped by it in return.

That asymmetry matters.

There is also a theological question here, though it is rarely named as such. Gigawatt-scale AI is not simply a tool. It becomes an ordering force, reorganizing priorities and imaginaries. It subtly redefines what counts as worth knowing and who gets to decide. In that sense, these systems function liturgically. They train us in what to notice, what to ignore, and what to sacrifice for the sake of speed and dominance.

None of this requires demonizing technology or indulging in nostalgia. The question is not whether AI will exist or even whether it will be powerful. The question is what kind of power we are habituating ourselves to accept as normal.

An ecology of attention cannot be built on unlimited extraction. A future worth inhabiting cannot be sustained by systems that require cities’ worth of electricity simply to refine probabilistic text generation. At some point, the metric of success has to shift from scale to care, from domination to discernment, from raw output to relational fit.

Gigawatts tell us what we can do.
They do not tell us what we should become.

That remains a human question. And increasingly, an ecological one.

Here’s the full paper in PDF, or you can also read it on Academia.edu:

Renting Your Next Computer?? (Or Why It’s Hard to Be Optimistic About Tech Now)

It’s not as far-fetched as it may sound to many of us who have owned our own computer hardware for years (going back to the 1980s for me)… the price of RAM, and soon the price of SSDs, is skyrocketing because of the demands of artificial intelligence, and that’s already having implications for the pricing of personal computers.

So, could Bezos and other tech leaders’ dreams of us being locked into subscription-based models for computing come true? I think there’s a good possibility, given that our society has been slow-boiled into accepting subscriptions for everything from our music listening and playlists (Spotify) to software (Office, Adobe, and now Apple’s iWork Suite, etc.) to cars (want more horsepower in your Audi? That’s a subscription).

To me, it’s a far cry from my high school days, when I would pore over computer magazines to read about the latest Pentium chips and figure out how much RAM I could order for my next computer build to fit my meager budget. But we’ve long been using machines with glued-down chips and encouraging corporations to add to the immense e-waste problem with our impenetrable iPhones, MacBooks, and ThinkPads.

And let’s face it, the personal computer model has faded in importance over the last 15 years with the introduction of the iPhone and iPads and similar smartphones, as we can binge all the Netflix, TikTok, and Instagram reels we want right from those devices (do we use personal computers for much else these days?).

Subscription computers and a return to the terminal model of VAX machines (PDF from 1987), which I used in college to check email, seem dystopian, but now that we’ve subscriptionized our art and music, it’s just a shout away.

Jeff Bezos said the quiet part out loud — hopes that you’ll give up your PC to rent one from the cloud | Windows Central:

So, what prediction did Bezos make back then, that seems particularly poignant right now? Bezos thinks that local PC hardware is antiquated, and that the future will revolve around cloud computing scenarios, where you rent your compute from companies like Amazon Web Services or Microsoft Azure.

Bezos told an anecdote about visiting a historical brewery to emphasize his point. He said that the hundreds-year old brewery had a museum celebrating its heritage, and had an exhibit for a 100-year old electric generator they used before national power grids were a thing. Bezos said he saw this generator in the same way he sees local computing solutions today — inferring on hopes that users will move away from local hardware to rented, always-online cloud-based solutions offered by Amazon and other similar companies.

After the Crossroads: Artificial Intelligence, Place-Based Ethics, and the Slow Work of Moral Discernment

Over the past year, I’ve been tracking a question that began with a simple observation: Artificial intelligence isn’t only code or computation; it’s infrastructure. It eats electricity and water. It sits on land. It reshapes local economies and local ecologies. It arrives through planning commissions and energy grids rather than through philosophical conference rooms.

That observation was the starting point of my November 2025 piece, “Artificial Intelligence at the Crossroads of Science, Ethics, and Spirituality.” In that first essay, I tried to draw out the scale of the stakes: the often-invisible material costs of AI, the ethical lacunae in policy debates, and the deep metaphysical questions we’re forced to confront when we start to think about artificial “intelligence” not as an abstraction but as an embodied presence in our world. If you haven’t read it yet, I recommend starting there, as it provides the grounding that makes the new essay more than just a sequel.

Here’s the extended follow-up, titled “After the Crossroads: Artificial Intelligence, Place-Based Ethics, and the Slow Work of Moral Discernment.” This piece expands the argument in several directions and, I hope, deepens it.

If the first piece asked “What is AI doing here?”, this new essay asks “How do we respond, ethically and spiritually, when AI is no longer just a future possibility but a present reality?”

A few key parts:

1. From Abstraction to Emplacement

AI isn’t floating in the cloud; it’s rooted in specific places with particular water tables, zoning laws, and bodies of people. Understanding AI ethically means understanding how it enters lived space, not just conceptual space.

2. Infrastructure as Moral Problem

The paper foregrounds the material aspects of AI, including data centers, energy grids, and water use, and treats these not as technical issues but as moral and ecological issues that call for ethical attention and political engagement.

3. A Theological Perspective on Governance

Drawing on ecological theology, liberation theology, and phenomenology, the essay reframes governance not as bureaucracy but as a moral practice. Decisions about land use, utilities, and community welfare become questions of justice, care, and collective responsibility.

4. Faith Communities as Ethical Agents

One of my central claims is that faith communities, including churches, are uniquely positioned to foster the moral formation necessary for ethical engagement with AI. These are communities in which practices of attention, patience, deliberation, and shared responsibility are cultivated through the ordinary rhythms of life (ideally).

This perspective is neither technophobic nor naïvely optimistic about innovation. It insists that ethical engagement with AI must be slow, embodied, and rooted in particular communities, not dissolved into abstract principles.

Why This Matters Now

AI is no longer on the horizon. Its infrastructure is being built today, in places like ours (especially here in the Carolinas), with very material ecological footprints. These developments raise moral questions not only about algorithmic bias or job displacement, important as those topics are, but also about water tables, electrical grids, local economies, and democratic agency.

Those are questions not just for experts, but for communities, congregations, local governments, and engaged citizens.

This essay is written for anyone who wants to take those questions seriously without losing their grip on complexity: people of faith, people of conscience, and anyone concerned with how technology shapes places and lives.

I’m also planning shorter, reader-friendly versions of key sections, including one you can share with your congregation or community group.

We’re living in a time when theological attention and civic care overlap in real places, and it matters how we show up.

Abstract

This essay extends my earlier analysis of artificial intelligence (AI) as a convergence of science, ethics, and spirituality by deliberately turning toward questions of place, local governance, and moral formation. While much contemporary discourse on AI remains abstract or global in scale, the material realities of AI infrastructure increasingly manifest at the local level through data centers, energy demands, water use, zoning decisions, and environmental impacts. Drawing on ecological theology, phenomenology, and political theology, this essay argues that meaningful ethical engagement with AI requires slowing technological decision-making, recentering embodied and communal discernment, and reclaiming local democratic and spiritual practices as sites of moral agency. Rather than framing AI as either salvific or catastrophic, I propose understanding AI as a mirror that amplifies existing patterns of extraction, care, and neglect. The essay concludes by suggesting that faith communities and local institutions play a crucial, underexplored role in shaping AI’s trajectory through practices of attentiveness, accountability, and place-based moral reasoning.

Stats from 2025

This is a little self-indulgent, but I wanted to share some of the interesting stats from my blog in 2025. I was rather surprised to see the site have one of its “best” years (numbers-wise with page views, likes, and comments… I won’t apply that label to my own content) since 2016, reaching levels it was hitting at the height of blogging on the web in the mid-2000s (though I do think we’re seeing a return to blog culture as more people realize the attention engines of social media are turning us all into wretched creatures).

• Total posts in 2025: 234 (now up to 3,973 published posts since 2006)
• Total words written in 2025: 58,300 (don’t tell my PhD advisor)
• Most popular post time: Thursday 5:00 PM (21% of views… I always tell clients that Tuesday mornings and Thursday afternoons are the times when people consume content on the web… still holds true)
• Total page views in 2025: 90,434 (2016 had 120,469 and 2011 saw 100,081 views for comparison)
• Total views all time: 1,002,067
• Total unique visitors all time: 570,862
• Best month ever: December 2025 (yep, last month the blog saw its record 37,000 views, which beats out January 2007’s 34,000… crazy!)

All told, I really don’t care that much about these sorts of stats these days, as I know I’m writing for a niche audience. I don’t monetize this site (or your visits, data, or viewing habits in any way beyond simple page views… no Google Analytics, etc. here). Still, it’s heartening to see new people find and interact with my ramblings here, and especially to see all of you who come back as repeat visitors who like articles, leave comments, and (yes) even share sometimes on social media outlets. I deeply appreciate your engagement, and definitely reach out if you ever have questions about my writing, opinions, or work!

Elon Musk’s Intent in Substituting “Abundance” for “Sustainable” in Tesla’s Mission

Worthy read on Elon’s post-scarcity fantasy of robots and AGI, one that relies on superintelligence and transhumanist ethics with no concept of ecological futures and considerations… a future that, quite frankly, we should not pursue if we are to live into our true being here on this planet.

Elon Musk drops ‘sustainable’ from Tesla’s mission as he completes his villain arc | Electrek:

By removing “sustainable,” Tesla is signaling that its primary focus is no longer the environment or the climate crisis. “Amazing Abundance” is a reference to the post-scarcity future Musk believes he is building through general-purpose humanoid robots (Optimus) and Artificial General Intelligence (AGI).

In this new mission, electric cars and renewables are just tools to help build this hypothetical utopia.

South Carolina’s Data Center Decision Time

I have grave concerns about the speed at which this is happening all over the state, with little regard to integral ecologies (City Council is debating two new data centers here in Spartanburg as well)…

9 new data centers proposed in Colleton County:

“I think South Carolina really is at a decision point: what do we want our state to look like 20 years from now, 30 years from now?” resident and Climate Campaign Associate Robby Maynor said. “Do we want a lot of gas plants and pipelines and data centers? Or do we want to protect the things that make South Carolina special and unique? The ACE Basin is at the very top of that list. This is the absolute wrong location for a complex of this size.”

In the application for the special zoning exception, the proposed data centers and the substations show the potential impact on this land, especially the wetlands, but some say the impact is even greater.

What is Intelligence (and What “Superintelligence” Misses)?

Worth a read… it sounds a good deal like what I’ve been saying out loud and thinking through in my posts on AI futures and the need for local imagination in steering technological innovation such as AI / AGI…

The Politics Of Superintelligence:

And beneath all of this, the environmental destruction accelerates as we continue to train large language models — a process that consumes enormous amounts of energy. When confronted with this ecological cost, AI companies point to hypothetical benefits, such as AGI solving climate change or optimizing energy systems. They use the future to justify the present, as though these speculative benefits should outweigh actual, ongoing damages. This temporal shell game, destroying the world to save it, would be comedic if the consequences weren’t so severe.

And just as it erodes the environment, AI also erodes democracy. Recommendation algorithms have long shaped political discourse, creating filter bubbles and amplifying extremism, but more recently, generative AI has flooded information spaces with synthetic content, making it impossible to distinguish truth from fabrication. The public sphere, the basis of democratic life, depends on people sharing enough common information to deliberate together….

What unites these diverse imaginaries — Indigenous data governance, worker-led data trusts, and Global South design projects — is a different understanding of intelligence itself. Rather than picturing intelligence as an abstract, disembodied capacity to optimize across all domains, they treat it as a relational and embodied capacity bound to specific contexts. They address real communities with real needs, not hypothetical humanity facing hypothetical machines. Precisely because they are grounded, they appear modest when set against the grandiosity of superintelligence, but existential risk makes every other concern look small by comparison. You can predict the ripostes: Why prioritize worker rights when work itself might soon disappear? Why consider environmental limits when AGI is imagined as capable of solving climate change on demand?

    How I Use Obsidian at CIIS: A Relational Workflow for Reading, Reflection, and Writing

Obsidian has become my living archive since I first dove in back in 2021, when I was organizing teaching notes, conversations, and to-dos as a classroom teacher and Dean of Students… and now it has become the place where course readings, dissertation ideas, phenomenological field notes, theological insights, Canvas posts, and draft papers all meet in a shared relational space. It’s less a filing cabinet and more a garden. What I’m really doing in Obsidian is tending connections by letting ideas compost, cross-pollinate, and eventually grow into papers or long-form reflections. Here’s the core workflow I’m sharing with you.

Two places where I’d start before you dive into Obsidian:

    1. Book Notes as Living Conversations

When I read, whether it’s Merleau-Ponty, Edith Stein, Whitehead, or a text for PCC/ESR, I take notes in a Book Notes template that pulls in metadata automatically:

    • Author / Title / Year / Course
    • Core quotes (copied directly, tagged with #quote and citation)
    • My reflections in first person
    • Connections to other thinkers or my ongoing concepts: [[Ecological Intentionality]], [[Cruciform Consciousness]], [[Empathy (Stein)]], [[Flesh of the World]], etc.

    Each book note ends with a section called “Where does this want to go?”

    Sometimes the answer is a future paper, a blog post, or a concept node. That question keeps the note alive instead of archived.
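Here’s a minimal sketch of what that template looks like in my vault. The frontmatter fields and section headings are illustrative, not prescriptive; adapt them to your own conventions:

```markdown
---
title: "The Visible and the Invisible"
author: "Maurice Merleau-Ponty"
year: 1964
course: "PCC"
tags: [book-note]
---

## Core Quotes
> "…quote copied directly…" #quote (Author Year, page)

## Reflections
First-person notes as I read…

## Connections
[[Flesh of the World]] · [[Ecological Intentionality]] · [[Empathy (Stein)]]

## Where does this want to go?
Possible blog post, paper section, or new concept node.
```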

    2. Canvas Posts → Permanent Notes

    I write most of my Canvas responses in Obsidian first. This lets me:

    1. Draft freely
    2. Link concepts as I’m thinking
    3. Keep a permanent, searchable archive of every class discussion

    Each module prompt gets its own note in my Canvas/ folder. After posting, I create 1–3 “permanent notes” distilled from the response—short, atomic ideas written in my own voice.

    For example, a Canvas post on the chiasm leads to permanent notes like:

    • Perception as reciprocal touch
    • The ecological thickness of the visible
    • Relational openness in the phenomenology of nature

    These then link outward into ongoing clusters such as [[Phenomenology]], [[Embodiment]], [[Nature as Intertwining]].
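A permanent note in this scheme is deliberately tiny. Something like this (the contents here are a hypothetical example):

```markdown
---
tags: [permanent-note]
---
# Perception as Reciprocal Touch

To touch is already to be touched: perception is not one-way
reception but a chiasmic exchange between body and world.

Related: [[Phenomenology]] · [[Embodiment]] · [[Nature as Intertwining]]
```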

    3. Writing Papers Through Connected Notes

When a paper is due, whether in ecological theology, phenomenology, ESR, or PCC research, I never begin with a blank page. I begin with a map of notes already in conversation.

    The workflow:

    1. Create a Paper Hub note as a central node for the project:
      • thesis draft
      • reading list
      • list of relevant permanent notes
    2. Pull in linked notes: using Dataview or simple backlinks, I gather every relevant piece of thinking I’ve already stored.
    3. Assemble the argument: the writing becomes an act of weaving connections rather than inventing from scratch.
    4. Export to Word/PDF: once the draft is complete, I move into Word for Chicago-style citations and final formatting.
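The “pull in linked notes” step can be sketched as a Dataview query embedded in the Paper Hub note. This is one possible query, not my exact one; the folder name and concept links are placeholders for whatever your vault uses:

```dataview
LIST
FROM [[Ecological Intentionality]] OR [[Flesh of the World]]
WHERE contains(file.folder, "Permanent Notes")
SORT file.mtime DESC
```

This lists every permanent note that links to either concept, newest first, so the hub becomes a live index rather than a manually maintained list.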

    This lets my academic work grow organically out of months of lived reflection rather than rushed, isolated writing.

    4. Daily Notes as Phenomenological and Ecological Anchors

    Every morning’s Daily Note includes:

    • weather + sunrise/sunset
    • tracking notes on the black walnut
    • dreams, moods, or somatic impressions
    • any quote or insight from my reading

    These small entries, over time, become a longitudinal phenomenological dataset—especially helpful for my ecological intentionality and process-relational work.
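In practice the Daily Note is just a small recurring template. The specific values below are placeholders:

```markdown
# 2025-03-14

- **Weather / sunrise / sunset:** …
- **Black walnut:** …
- **Dreams, moods, somatic impressions:** …
- **Quote or insight from reading:** …
```

Because the fields repeat daily, the entries stay searchable and comparable across seasons.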

    5. The Vault as an Ecology

    Obsidian mirrors how I’m thinking about the world in my CIIS work:

    everything is connected, everything participates, and meaning emerges through relation rather than isolation.

    My vault has three organizing principles:

    • Maps of content (big conceptual hubs)
    • Atomic permanent notes (one idea per note, well tagged)
    • Ephemeral notes (daily, in-class, or quick captures)

    The magic is not in perfect organization… it’s in the interplay.

    6. Why This Works for Me

    This workflow keeps my scholarship:

    • Ecological: ideas grow from interaction
    • Phenomenological: grounded in lived experience
    • Process-relational: always evolving
    • Practical: every note has a future use

    It’s become the backbone not only of my life and coursework, but of my dissertation path, Tree Sit Journals, Carolina Ecology posts, and even sermon writing.