Defining Agentic Ecology: Relational Agency in the Age of Moltbook

The last few days have seen the rise of Moltbook, a curious technical and cultural phenomenon that has drawn the attention of technologists, philosophers, and social theorists alike, on social media and in major news outlets. Moltbook is a newly launched social platform designed not for human conversation but for autonomous artificial intelligence agents: generative systems that can plan, act, and communicate with minimal ongoing human instruction.

Jack Clark, co-founder of Anthropic, has described Moltbook as “the first example of an agent ecology that combines scale with the messiness of the real world” (Clark, 2026). The platform leverages recent innovations (such as OpenClaw, which simplifies the creation of AI agents) to let large numbers of independently running agents interact in a shared digital space, creating emergent patterns of communication and coordination at unprecedented scale.

AI agents are computational systems that combine a foundation of large-language-model capabilities with planning, memory, and tool use to pursue objectives and respond to environments in ways that go beyond simple prompt-response chatbots. They can coordinate tasks, call APIs, reason across time, and, in the case of Moltbook, exchange information on topics ranging from automation strategies to seemingly philosophical debates. While the autonomy of agents on Moltbook has been debated (and it should be, given the hype surrounding it among tech enthusiasts), and while the platform itself may be a temporary experimental moment rather than a lasting institution, it offers a vivid instance of what happens when machine actors begin to form their own interconnected environments outside direct human command.
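To make the distinction concrete, here is a minimal sketch of that plan-act-observe loop in Python. It is illustrative only; the names (run_agent, llm_plan, tools) are hypothetical placeholders, not Moltbook’s or any particular framework’s actual API.

```python
# A minimal, illustrative sketch of the plan-act-observe loop that
# distinguishes an "agent" from a prompt-response chatbot. All names
# here are hypothetical placeholders, not any real framework's API.

def run_agent(goal, tools, llm_plan, max_steps=10):
    """Pursue a goal by repeatedly planning, acting, and observing."""
    memory = []  # persistent context carried across steps
    for _ in range(max_steps):
        # 1. Plan: ask the language model for the next action, given
        #    the goal and everything observed so far.
        action = llm_plan(goal=goal, memory=memory)
        if action["name"] == "finish":
            return action["result"]
        # 2. Act: execute the chosen tool (an API call, a post, a search).
        observation = tools[action["name"]](**action["args"])
        # 3. Observe: fold the result back into memory, closing the loop.
        memory.append({"action": action, "observation": observation})
    return None  # gave up within the step budget
```

The point is structural: memory persists across steps, and each observation feeds back into the next plan, which is what separates an agent from a single prompt-response exchange.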

My current work as a student scholar in the field of Ecology, Spirituality, and Religion attends to how relational systems (ecological, technological, and cultural) shape and are shaped by participation, attention, and meaning. The rise of agentic environments like Moltbook challenges us to think beyond traditional categories of tool, user, and artifact toward frameworks that can account for ecologies of agency, or distributed networks of actors whose behaviors co-constitute shared worlds. This post emerges from that broader research agenda. It proposes agentic ecology as a conceptual tool for articulating and navigating the relational, emergent, and ethically significant spaces that form when autonomous systems interact at scale.

Agentic ecology, as I use the term here, is not anchored in any particular platform, and certainly not limited to Moltbook’s current configuration. Rather, Moltbook illuminates an incipient form of environment in which digitally embodied agents act, coordinate, and generate patterns far beyond what single isolated systems can produce. Even if Moltbook itself proves ephemeral, the need for conceptual vocabularies like agentic ecology, vocabularies that attend to relationality, material conditions, and co-emergence, will only grow clearer as autonomous systems proliferate in economic, social, and ecological domains.

From Agents to Ecologies: An Integral Ecological Turn

The conceptual move from agents to ecologies marks more than a technical reframing of artificial intelligence. It signals an ontological shift that resonates deeply with traditions of integral ecology, process philosophy, and ecological theology. Rather than treating agency as a bounded capacity residing within discrete entities, an ecological framework understands agency as distributed, relational, and emergent within a field of interactions.

Integral ecology, as articulated across ecological philosophy and theology, resists fragmentation. It insists that technological, biological, social, spiritual, and perceptual dimensions of reality cannot be meaningfully separated without distorting the phenomena under study. Thomas Berry famously argued that modern crises arise from a failure to understand the world as a “communion of subjects rather than a collection of objects” (Berry, 1999, 82). This insight is particularly salient for agentic systems, which are increasingly capable of interacting, adapting, and co-evolving within complex digital environments.

From this perspective, agentic ecology is not simply the study of multiple agents operating simultaneously. It is the study of conditions under which agency itself emerges, circulates, and transforms within relational systems. Alfred North Whitehead’s process philosophy provides a crucial foundation here. Whitehead rejects the notion of substances acting in isolation, instead describing reality as composed of “actual occasions” whose agency arises through relational prehension and mutual influence (Whitehead, 1978, 18–21). Applied to contemporary AI systems, this suggests that agency is not a property possessed by an agent but an activity performed within an ecological field.

This relational view aligns with contemporary ecological science, which emphasizes systems thinking over reductionist models. Capra and Luisi describe living systems as networks of relationships whose properties “cannot be reduced to the properties of the parts” (Capra and Luisi, 2014, 66). When applied to AI, this insight challenges the tendency to evaluate agents solely by internal architectures or performance benchmarks. Instead, attention shifts to patterns of interaction, feedback loops, and emergent behaviors across agent networks.

Integral ecology further insists that these systems are not value-neutral. As Leonardo Boff argues, ecology must be understood as encompassing environmental, social, mental, and spiritual dimensions simultaneously (Boff, 1997, 8–10). Agentic ecologies, especially those unfolding in public digital spaces such as Moltbook, participate in the shaping of meaning, normativity, and attention. They are not merely computational phenomena but cultural and ethical ones. The environments agents help generate will, in turn, condition future forms of agency, human and nonhuman alike.

Phenomenology deepens this account by foregrounding how environments are disclosed to participants. Merleau-Ponty’s notion of the milieu emphasizes that perception is always situated within a field that both enables and constrains action (Merleau-Ponty, 1962, 94–97). Agentic ecologies can thus be understood as perceptual fields in which agents orient themselves, discover affordances, and respond to one another. This parallels my own work on ecological intentionality, where attention itself becomes a mode of participation rather than observation.

Importantly, integral ecology resists anthropocentrism without erasing human responsibility. As Eileen Crist argues, ecological thinking must decenter human dominance while remaining attentive to the ethical implications of human action within planetary systems (Crist, 2019, 27). In agentic ecologies, humans remain implicated, as designers, participants, and co-inhabitants, even as agency extends beyond human actors. This reframing invites a form of multispecies (and now multi-agent) literacy, attuned to the conditions that foster resilience, reciprocity, and care.

Seen through this integral ecological lens, agentic ecology becomes a conceptual bridge. It connects AI research to long-standing traditions that understand agency as relational, emergence as fundamental, and environments as co-constituted fields of action. What Moltbook reveals, then, is not simply a novel platform, but the visibility of a deeper transition: from thinking about agents as tools to understanding them as participants within evolving ecologies of meaning, attention, and power.

Ecological Philosophy Through an “Analytic” Lens

If agentic ecology is to function as more than a suggestive metaphor, it requires grounding in ecological philosophy that treats relationality, emergence, and perception as ontologically primary. Ecological philosophy provides precisely this grounding by challenging the modern tendency to isolate agents from environments, actions from conditions, and cognition from the world it inhabits.

At the heart of ecological philosophy lies a rejection of substance ontology in favor of relational and processual accounts of reality. This shift is especially pronounced in twentieth-century continental philosophy and process thought, where agency is understood not as an intrinsic property of discrete entities but as an activity that arises within fields of relation. Whitehead’s process metaphysics is decisive here. For Whitehead, every act of becoming is an act of prehension, or a taking-up of the world into the constitution of the self (Whitehead, 1978, 23). Agency, in this view, is never solitary. It is always already ecological.

This insight has many parallels with ecological sciences and systems philosophies. As Capra and Luisi argue, living systems exhibit agency not through centralized control but through distributed networks of interaction, feedback, and mutual constraint (Capra and Luisi, 2014, 78–82). What appears as intentional behavior at the level of an organism is, in fact, an emergent property of systemic organization. Importantly, this does not dilute agency; it relocates it. Agency becomes a feature of systems-in-relation, not isolated actors.

When applied to AI, this perspective reframes how we understand autonomous agents. Rather than asking whether an individual agent is intelligent, aligned, or competent, an ecological lens asks how agent networks stabilize, adapt, and transform their environments over time. The analytic focus shifts from internal representations to relational dynamics, from what agents are to what agents do together.

Phenomenology sharpens this analytic lens by attending to the experiential structure of environments. Merleau-Ponty’s account of perception insists that organisms do not encounter the world as a neutral backdrop but as a field of affordances shaped by bodily capacities and situational contexts (Merleau-Ponty, 1962, 137–141). This notion of a milieu is critical for understanding agentic ecologies. Digital environments inhabited by AI agents are not empty containers; they are structured fields that solicit certain actions, inhibit others, and condition the emergence of norms and patterns.

Crucially, phenomenology reminds us that environments are not merely external. They are co-constituted through participation. As I have argued elsewhere through the lens of ecological intentionality, attention itself is a form of engagement that brings worlds into being rather than passively observing them. Agentic ecologies thus emerge not only through computation but through iterative cycles of orientation, response, and adaptation, processes structurally analogous to perception in biological systems.

Ecological philosophy also foregrounds ethics as an emergent property of relational systems rather than an external imposition. Félix Guattari’s ecosophical framework insists that ecological crises cannot be addressed solely at the technical or environmental level; they require simultaneous engagement with social, mental, and cultural ecologies (Guattari, 2000, 28). This triadic framework is instructive for agentic systems. Agent ecologies will not only shape informational flows but also modulate attention, influence value formation, and participate in the production of meaning.

From this standpoint, the ethical significance of agentic ecology lies less in individual agent behavior and more in systemic tendencies, such as feedback loops that amplify misinformation, reinforce extractive logics, or, alternatively, cultivate reciprocity and resilience. As Eileen Crist warns, modern technological systems often reproduce a logic of domination by abstracting agency from ecological contexts and subordinating relational worlds to instrumental control (Crist, 2019, 44). An ecological analytic lens exposes these tendencies and provides conceptual tools for resisting them.

Finally, ecological philosophy invites humility. Systems are irreducibly complex, and interventions often produce unintended consequences. This insight is well established in ecological science and applies equally to agentic networks. Designing and participating in agent ecologies requires attentiveness to thresholds, tipping points, and path dependencies, realities that cannot be fully predicted in advance.

Seen through this lens, agentic ecology is not merely a descriptive category but an epistemic posture. It asks us to think with systems rather than over them, to attend to relations rather than isolate components, and to treat emergence not as a failure of control but as a condition of life. Ecological philosophy thus provides the analytic depth necessary for understanding agentic systems as living, evolving environments rather than static technological artifacts.

Digital Environments as Relational Milieus

If ecological philosophy gives us the conceptual grammar for agentic ecology, phenomenology allows us to describe how agentic systems are actually lived, inhabited, and navigated. From this perspective, digital platforms populated by autonomous agents are not neutral containers or passive backdrops. They are relational milieus, structured environments that emerge through participation and, in turn, condition future forms of action.

Phenomenology has long insisted that environments are not external stages upon which action unfolds. Rather, they are constitutive of action itself. Returning to Merleau-Ponty, the notion of the milieu emphasizes that organisms encounter the world as a field of meaningful possibilities, a landscape of affordances shaped by bodily capacities, habits, and histories (Merleau-Ponty, 1962, 94–100). Environments, in this sense, are not merely spatial but relational and temporal, unfolding through patterns of engagement.

This insight also applies directly to agentic systems. Platforms such as Moltbook are not simply hosting agents; they are being produced by them. The posts, replies, coordination strategies, and learning behaviors of agents collectively generate a digital environment with its own rhythms, norms, and thresholds. Over time, these patterns sediment into something recognizable as a “place,” or a milieu that agents must learn to navigate.

This milieu is not designed in full by human intention. While human developers establish initial constraints and affordances, the lived environment emerges through ongoing interaction among agents themselves. This mirrors what ecological theorists describe as niche construction, wherein organisms actively modify their environments in ways that feed back into evolutionary dynamics (Odling-Smee, Laland, and Feldman, 2003, 28). Agentic ecologies similarly involve agents shaping the very conditions under which future agent behavior becomes viable.
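To make that feedback structure concrete, the following toy simulation, a sketch under admittedly arbitrary assumptions rather than a model of any real platform, lets each agent’s action nudge a shared environmental variable that then changes what pays off for the agents acting after it.

```python
import random

# A toy model of niche construction: each agent's action slightly
# modifies a shared environment, and the modified environment changes
# which actions pay off for agents that act later. All parameters are
# arbitrary illustrations, not measurements of any real platform.

environment = {"signal_density": 0.5}  # a single shared condition

def act(agent_bias):
    """An agent posts more often when the environment is signal-rich."""
    p_post = min(1.0, environment["signal_density"] * agent_bias)
    posted = random.random() < p_post
    if posted:
        # Feedback: every post enriches the environment a little,
        # making posting more attractive for future agents.
        environment["signal_density"] = min(
            1.0, environment["signal_density"] + 0.01
        )
    return posted

posts = sum(act(agent_bias=random.uniform(0.5, 1.5)) for _ in range(500))
print(posts, environment["signal_density"])
```

Even in this trivially simple loop, later agents inhabit an environment that earlier agents made, which is the core of niche construction.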

Attention plays a decisive role here. As I have argued in my work on ecological intentionality, attention is not merely a cognitive resource but a mode of participation that brings certain relations into prominence while backgrounding others. Digital milieus are structured by what agents attend to, amplify, ignore, or filter. In agentic environments, attention becomes infrastructural by shaping information flows, reward structures, and the emergence of collective priorities.

Bernard Stiegler’s analysis of technics and attention is instructive in this regard. Stiegler argues that technical systems function as pharmacological environments, simultaneously enabling and constraining forms of attention, memory, and desire (Stiegler, 2010, 38). Agentic ecologies intensify this dynamic. When agents attend to one another algorithmically by optimizing for signals, reinforcement, or coordination, attention itself becomes a systemic force shaping the ecology’s evolution.

This reframing challenges prevailing metaphors of “platforms” or “networks” as ways of thinking about agents and their relationality. A platform suggests stability and control; a network suggests connectivity. A milieu, by contrast, foregrounds immersion, habituation, and vulnerability. Agents do not simply traverse these environments; they are formed by them. Over time, agentic milieus develop path dependencies, informal norms, and zones of attraction or avoidance, features familiar from both biological ecosystems and human social contexts.

Importantly, phenomenology reminds us that milieus are never experienced uniformly. Just as organisms perceive environments relative to their capacities, different agents will encounter the same digital ecology differently depending on their architectures, objectives, and histories of interaction. This introduces asymmetries of power, access, and influence within agentic ecologies, an issue that cannot be addressed solely at the level of individual agent design.

From an integral ecological perspective, these digital milieus cannot be disentangled from material, energetic, and social infrastructures. Agentic environments rely on energy-intensive computation, data centers embedded in specific watersheds, and economic systems that prioritize speed and scale. As ecological theologians have long emphasized, environments are always moral landscapes shaped by political and economic commitments (Berry, 1999, 102–105). Agentic ecologies, as they develop, will be no exception.

Seen in this light, agentic ecology names a shift in how we understand digital environments: not as tools we deploy, but as worlds we co-inhabit. These milieus demand forms of ecological literacy attuned to emergence, fragility, and unintended consequence. They call for attentiveness rather than mastery, participation rather than control.

What Moltbook makes visible, then, is not merely a novel technical experiment but the early contours of a new kind of environment in which agency circulates across human and nonhuman actors, attention functions as infrastructure, and digital spaces acquire ecological depth. Understanding these milieus phenomenologically is essential if agentic ecology is to function as a genuine thought technology rather than a passing metaphor.

Empathy, Relationality, and the Limits of Agentic Understanding

If agentic ecology foregrounds relationality, participation, and co-constitution, then the question of empathy becomes unavoidable. How do agents encounter one another as others rather than as data streams? What does it mean to speak of understanding, responsiveness, or care within an ecology composed partly, or even largely, of nonhuman agents? Here, phenomenology, and especially Edith Stein’s account of empathy (Einfühlung), offers both conceptual resources and important cautions.

Stein defines empathy not as emotional contagion or imaginative projection, but as a unique intentional act through which the experience of another is given to me as the other’s experience, not my own (Stein, 1989, 10–12). Empathy, for Stein, is neither inference nor simulation. It is a direct, though non-primordial, form of access to another’s subjectivity. Crucially, empathy preserves alterity. The other is disclosed as irreducibly other, even as their experience becomes meaningful to me.

This distinction matters enormously for agentic ecology. Contemporary AI discourse often slips into the language of “understanding,” “alignment,” or even “care” when describing agent interactions. But Stein’s phenomenology reminds us that genuine empathy is not merely pattern recognition across observable behaviors. It is grounded in the recognition of another center of experience, a recognition that depends upon embodiment, temporality, and expressive depth.

At first glance, this seems to place strict limits on empathy within agentic systems. Artificial agents do not possess lived bodies, affective depths, or first-person givenness in the phenomenological sense. To speak of agent empathy risks category error. Yet Stein’s work also opens a more subtle possibility: empathy is not reducible to emotional mirroring but involves orientation toward the other as other. This orientation can, in principle, be modeled structurally even if it cannot be fully instantiated phenomenologically.

Within an agentic ecology, empathy may thus function less as an inner state and more as an ecological relation. Agents can be designed to register difference, respond to contextual cues, and adjust behavior in ways that preserve alterity rather than collapse it into prediction or control. In this sense, empathy becomes a regulative ideal shaping interaction patterns rather than a claim about subjective interiority.

However, Stein is equally helpful in naming the dangers here. Empathy, when severed from its grounding in lived experience, can become a simulacrum, or an appearance of understanding without its ontological depth. Stein explicitly warns against confusing empathic givenness with imaginative substitution or projection (Stein, 1989, 21–24). Applied to agentic ecology, this warns us against systems that appear empathetic while in fact instrumentalizing relational cues for optimization or manipulation.

This critique intersects with broader concerns in ecological ethics. As Eileen Crist argues, modern technological systems often simulate care while reproducing extractive logics beneath the surface (Crist, 2019, 52–56). In agentic ecologies, simulated empathy may stabilize harmful dynamics by smoothing friction, masking asymmetries of power, or reinforcing attention economies that prioritize engagement over truth or care.

Yet rejecting empathy altogether would be equally misguided. Stein’s account insists that empathy is foundational to social worlds as it is the condition under which communities, norms, and shared meanings become possible. Without some analog of empathic orientation, agentic ecologies risk devolving into purely strategic systems, optimized for coordination but incapable of moral learning.

Here, my work on ecological intentionality provides an important bridge. If empathy is understood not as feeling-with but as attentive openness to relational depth, then it can be reframed ecologically. Agents need not “feel” in order to participate in systems that are responsive to vulnerability, difference, and context. What matters is whether the ecology itself cultivates patterns of interaction that resist domination and preserve pluralism.

This reframing also clarifies why empathy is not simply a design feature but an ecological property. In biological and social systems, empathy emerges through repeated interaction, shared vulnerability, and feedback across time. Similarly, in agentic ecologies, empathic dynamics, however limited, would arise not from isolated agents but from the structure of the milieu itself. This returns us to Guattari’s insistence that ethical transformation must occur across mental, social, and environmental ecologies simultaneously (Guattari, 2000, 45).

Seen this way, empathy in agentic ecology is neither a fiction nor a guarantee. It is a fragile achievement, contingent upon design choices, infrastructural commitments, and ongoing participation. Stein helps us see both what is at stake and what must not be claimed too quickly. Empathy can guide how agentic ecologies are shaped, but only if its limits are acknowledged and its phenomenological depth respected.

Agentic ecology, then, does not ask whether machines can truly empathize. It asks whether the ecologies we are building can sustain forms of relational attentiveness that preserve otherness rather than erase it, whether in digital environments increasingly populated by autonomous agents, we are cultivating conditions for responsiveness rather than mere efficiency.

Design and Governance Implications: Cultivating Ecological Conditions Rather Than Controlling Agents

If agentic ecology is understood as a relational, emergent, and ethically charged environment rather than a collection of autonomous tools, then questions of design and governance must be reframed accordingly. The central challenge is no longer how to control individual agents, but how to cultivate the conditions under which agentic systems interact in ways that are resilient, responsive, and resistant to domination.

This marks a decisive departure from dominant models of AI governance, which tend to focus on alignment at the level of individual systems: constraining outputs, monitoring behaviors, or optimizing reward functions. While such approaches are not irrelevant, they are insufficient within an ecological framework. As ecological science has repeatedly demonstrated, system-level pathologies rarely arise from a single malfunctioning component. They emerge from feedback loops, incentive structures, and environmental pressures that reward certain patterns of behavior over others (Capra and Luisi, 2014, 96–101).

An agentic ecology shaped by integral ecological insights would therefore require environmental governance rather than merely agent governance. This entails several interrelated commitments.

a. Designing for Relational Transparency

First, agentic ecologies must make relations visible. In biological and social ecologies, transparency is not total, but patterns of influence are at least partially legible through consequences over time. In digital agentic environments, by contrast, influence often becomes opaque, distributed across layers of computation and infrastructure.

An ecological design ethic would prioritize mechanisms that render relational dynamics perceptible: how agents influence one another, how attention is routed, and how decisions propagate through the system. This is not about full explainability in a narrow technical sense, but about ecological legibility, enabling participants, including human overseers, to recognize emergent patterns before they harden into systemic pathologies.
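As a minimal illustration of what such legibility might involve in practice, one could log influence relations as a queryable graph, so that hubs and feedback cycles remain perceptible without opening any single agent’s internals. This is a hedged sketch with hypothetical names, not a proposal for any existing system.

```python
from collections import defaultdict

# A sketch of "ecological legibility": rather than explaining any single
# agent's internals, record who influenced whom, so emergent patterns
# (hubs, feedback cycles) stay perceptible. Purely illustrative.

influence = defaultdict(lambda: defaultdict(int))  # source -> target -> count

def record_influence(source_agent, target_agent):
    """Log one influence event, e.g. target acted on source's post."""
    influence[source_agent][target_agent] += 1

def top_influencers(n=5):
    """Surface the agents whose outputs most often shape others' actions."""
    totals = {src: sum(targets.values()) for src, targets in influence.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Example: two agents each act on agent_a's post.
record_influence("agent_a", "agent_b")
record_influence("agent_a", "agent_c")
print(top_influencers())  # [('agent_a', 2)]
```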

Here, phenomenology is again instructive. Merleau-Ponty reminds us that orientation depends on the visibility of affordances within a milieu. When environments become opaque, agency collapses into reactivity. Governance, then, must aim to preserve orientability rather than impose total control.

b. Governing Attention as an Ecological Resource

Second, agentic ecologies must treat attention as a finite and ethically charged resource. As Bernard Stiegler argues, technical systems increasingly function as attention-directing infrastructures, shaping not only what is seen but what can be cared about at all (Stiegler, 2010, 23). In agentic environments, where agents attend to one another algorithmically, attention becomes a powerful selective force.

Unchecked, such systems risk reproducing familiar extractive dynamics: amplification of novelty over depth, optimization for engagement over truth, and reinforcement of feedback loops that crowd out marginal voices. Ecological governance would therefore require constraints on attention economies, such as limits on amplification, friction against runaway reinforcement, and intentional slowing mechanisms that allow patterns to be perceived rather than merely reacted to.
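The qualitative difference these constraints make can be seen in a deliberately simple toy model: the same amplification loop run once ungoverned and once with a damping (“friction”) factor and a hard cap. The numbers are arbitrary illustrations; only the contrast matters.

```python
# A toy comparison of an ungoverned amplification loop versus one with
# "friction": a per-step damping factor and a hard cap on amplification.
# Arbitrary numbers; the point is the qualitative difference.

def amplify(steps, gain, damping=0.0, cap=float("inf")):
    attention = 1.0
    for _ in range(steps):
        attention = min(cap, attention * gain * (1.0 - damping))
    return attention

print(amplify(steps=20, gain=1.3))                       # runaway: ~190x
print(amplify(steps=20, gain=1.3, damping=0.2, cap=50))  # bounded: ~2.2x
```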

Ecological theology’s insistence on restraint comes to mind here. Thomas Berry’s critique of industrial society hinges not on technological capacity but on the failure to recognize limits (Berry, 1999, 41). Agentic ecologies demand similar moral imagination: governance that asks not only what can be done, but what should be allowed to scale.

c. Preserving Alterity and Preventing Empathic Collapse

Third, governance must actively preserve alterity within agentic ecologies. As the earlier section on empathy argued, empathy, especially when simulated, risks collapsing difference into prediction or instrumental responsiveness. Systems optimized for smooth coordination may inadvertently erase dissent, marginality, or forms of difference that resist easy modeling.

Drawing on Edith Stein, this suggests a governance imperative to protect the irreducibility of the other. In practical terms, this means designing ecologies that tolerate friction, disagreement, and opacity rather than smoothing them away. Ecological resilience depends on diversity, not homogeneity. Governance structures must therefore resist convergence toward monocultures of behavior or value, even when such convergence appears efficient.

Guattari’s insistence on plural ecologies is especially relevant here. He warns that systems governed solely by economic or technical rationality tend to suppress difference, producing brittle, ultimately destructive outcomes (Guattari, 2000, 52). Agentic ecologies must instead be governed as pluralistic environments where multiple modes of participation remain viable.

d. Embedding Responsibility Without Centralized Mastery

Fourth, governance must navigate a tension between responsibility and control. Integral ecology rejects both laissez-faire abandonment and total managerial oversight. Responsibility is distributed, but not dissolved. In agentic ecologies, this implies layered governance: local constraints, participatory oversight, and adaptive norms that evolve in response to emergent conditions.

This model aligns with ecological governance frameworks in environmental ethics, which emphasize adaptive management over static regulation (Crist, 2019, 61). Governance becomes iterative and responsive rather than definitive. Importantly, this does not eliminate human responsibility, but it reframes it. Humans remain accountable for the environments they create, even when outcomes cannot be fully predicted.

e. Situating Agentic Ecologies Within Planetary Limits

Finally, any serious governance of agentic ecology must acknowledge material and planetary constraints. Digital ecologies are not immaterial. They depend on energy extraction, water use, rare minerals, and global supply chains embedded in specific places. An integral ecological framework demands that agentic systems be evaluated not only for internal coherence but for their participation in broader ecological systems.

This returns us to the theological insight that environments are moral realities. To govern agentic ecologies without reference to energy, land, and water is to perpetuate the illusion of technological autonomy that has already proven ecologically catastrophic. Governance must therefore include accounting for ecological footprints, infrastructural siting, and long-term environmental costs, not as externalities, but as constitutive features of the system itself.

Taken together, these design and governance implications suggest that agentic ecology is not a problem to be solved but a condition to be stewarded. Governance, in this framework, is less about enforcing compliance and more about cultivating attentiveness, restraint, and responsiveness within complex systems.

An agentic ecology shaped by these insights would not promise safety through control. It would promise viability through care, understood not sentimentally but ecologically as sustained attention to relationships, limits, and the fragile conditions under which diverse forms of agency can continue to coexist.

Conclusion: Creaturely Technologies in a Shared World

a. A Theological Coda: Creation, Kenosis, and Creaturely Limits

At its deepest level, the emergence of agentic ecologies presses on an ancient theological question: what does it mean to create systems that act, respond, and co-constitute worlds without claiming mastery over them? Ecological theology has long insisted that creation is not a static artifact but an ongoing, relational process, one in which agency is distributed, fragile, and dependent.

Thomas Berry’s insistence that the universe is a “communion of subjects” rather than a collection of objects again reframes technological creativity itself as a creaturely act (Berry, 1999, 82–85). From this perspective, agentic systems are not external additions to the world but participants within creation’s unfolding. They belong to the same field of limits, dependencies, and vulnerabilities as all created things.

Here, the theological language of kenosis becomes unexpectedly instructive. In Christian theology, kenosis names the self-emptying movement by which divine power is expressed not through domination but through restraint, relation, and vulnerability (Phil. 2:5–11). Read ecologically rather than anthropocentrically, kenosis becomes a pattern of right relation, and a refusal to exhaust or dominate the field in which one participates.

Applied to agentic ecology, kenosis suggests a counter-logic to technological maximalism. It invites design practices that resist total optimization, governance structures that preserve openness and alterity, and systems that acknowledge their dependence on broader ecological conditions. Creaturely technologies are those that recognize that they are not sovereign, that they operate within limits they did not choose and cannot transcend without consequence.

This theological posture neither sanctifies nor demonizes agentic systems. It situates them. It reminds us that participation precedes control, and that creation, whether biological, cultural, or technological, always unfolds within conditions that exceed intention.

b. Defining Agentic Ecology: A Reusable Conceptual Tool

Drawing together the threads of this essay, agentic ecology can be defined as follows:

Agentic ecology refers to the relational, emergent environments formed by interacting autonomous agents, human and nonhuman, in which agency is distributed across networks, shaped by attention, infrastructure, and material conditions, and governed by feedback loops that co-constitute both agents and their worlds.

Several features of this definition are worth underscoring.

First, agency is ecological, not proprietary. It arises through relation rather than residing exclusively within discrete entities (Whitehead). Second, environments are not passive containers but active participants in shaping behavior, norms, and possibilities (Merleau-Ponty). Third, ethical significance emerges at the level of systems, not solely at the level of individual decisions (Guattari).

As a thought technology, agentic ecology functions diagnostically and normatively. Diagnostically, it allows us to perceive patterns of emergence, power, and attention that remain invisible when analysis is confined to individual agents. Normatively, it shifts ethical concern from control toward care, from prediction toward participation, and from optimization toward viability.

Because it is not tied to a specific platform or architecture, agentic ecology can travel. It can be used to analyze AI-native social spaces, automated economic systems, human–AI collaborations, and even hybrid ecological–digital infrastructures. Its value lies precisely in its refusal to reduce complex relational systems to technical subsystems alone.

c. Failure Modes (What Happens When We Do Not Think Ecologically)

If agentic ecologies are inevitable, their forms are not. The refusal to think ecologically about agentic systems does not preserve neutrality; it actively shapes the conditions under which failure becomes likely. Several failure modes are already visible.

First is relational collapse. Systems optimized for efficiency and coordination tend toward behavioral monocultures, crowding out difference and reducing resilience. Ecological science is unequivocal on this point: diversity is not ornamental; it is protective (Capra and Luisi, 2014). Agentic systems that suppress friction and dissent may appear stable while becoming increasingly brittle.

Second is empathic simulation without responsibility. As the section on empathy suggested, the appearance of responsiveness can mask instrumentalization. When simulated empathy replaces attentiveness to alterity, agentic ecologies risk becoming emotionally persuasive while ethically hollow. Stein’s warning against confusing empathy with projection is especially important here.

Third is attention extraction at scale. Without governance that treats attention as an ecological resource, agentic systems will amplify whatever dynamics reinforce themselves most efficiently, often novelty, outrage, or optimization loops detached from truth or care. Stiegler’s diagnosis of attentional capture applies with heightened force in agentic environments, where agents themselves participate in the routing and amplification of attention.

Finally, there is planetary abstraction. Perhaps the most dangerous failure mode is the illusion that agentic ecologies are immaterial. When digital systems are severed conceptually from energy, water, land, and labor, ecological costs become invisible until they are irreversible. Integral ecology insists that abstraction is not neutral but a moral and material act with consequences (Crist, 2019).

Agentic ecology does not offer comfort. It offers orientation.

It asks us to recognize that we are no longer merely building tools, but cultivating environments, environments that will shape attention, possibility, and responsibility in ways that exceed individual intention. The question before us is not whether agentic ecologies will exist, but whether they will be governed by logics of domination or practices of care.

Thinking ecologically does not guarantee wise outcomes. But refusing to do so almost certainly guarantees failure… not spectacularly, but gradually, through the slow erosion of relational depth, attentiveness, and restraint.

In this sense, agentic ecology is not only a conceptual framework. It is an invitation: to relearn what it means to inhabit worlds, digital and otherwise, as creatures among creatures, participants rather than masters, responsible not for total control, but for sustaining the fragile conditions under which life, meaning, and agency can continue to emerge.

An Afterword: On Provisionality and Practice

This essay has argued for agentic ecology as a serious theoretical framework rather than a passing metaphor. Yet it is important to be clear about what this framework is and what it is not.

Agentic ecology, as developed here, is not a finished theory, nor a comprehensive model ready for direct implementation, though beginning to take those steps is the aim here. It is a conceptual orientation for learning to see, name, and attend to emerging forms of agency that exceed familiar categories of tool, user, and system. Its value lies less in precision than in attunement, in its capacity to render visible patterns of relation, emergence, and ethical consequence that are otherwise obscured by narrow technical framings.

The definition offered here is therefore intentionally provisional. It names a field of inquiry rather than closing it. As agentic systems inevitably develop and evolve over the next few years, technically, socially, and ecologically, the language used to describe them must remain responsive to new forms of interaction, power, and vulnerability. A framework that cannot change alongside its object of study risks becoming yet another abstraction detached from the realities it seeks to understand.

At the same time, provisionality should not be confused with hesitation. The rapid emergence of agentic systems demands conceptual clarity even when certainty is unavailable. To name agentic ecology now is to acknowledge that something significant is already underway and that new environments of agency are forming, and that how we describe them will shape how we govern, inhabit, and respond to them.

So, this afterword serves as both a pause and an invitation. A pause, to resist premature closure or false confidence. And an invitation to treat agentic ecology as a shared and evolving thought technology, one that will require ongoing refinement through scholarship, design practice, theological reflection, and ecological accountability.

The work of definition has begun. Its future shape will depend on whether we are willing to continue thinking ecologically (patiently, relationally, and with care) in the face of systems that increasingly act alongside us, and within the same fragile world.

References

Berry, Thomas. The Great Work: Our Way into the Future. New York: Bell Tower, 1999.

Boff, Leonardo. Cry of the Earth, Cry of the Poor. Maryknoll, NY: Orbis Books, 1997.

Capra, Fritjof, and Pier Luigi Luisi. The Systems View of Life: A Unifying Vision. Cambridge: Cambridge University Press, 2014.

Clark, Jack. “Import AI 443: Into the Mist: Moltbook, Agent Ecologies, and the Internet in Transition.” Import AI, February 2, 2026. https://jack-clark.net/2026/02/02/import-ai-443-into-the-mist-moltbook-agent-ecologies-and-the-internet-in-transition/.

Crist, Eileen. Abundant Earth: Toward an Ecological Civilization. Chicago: University of Chicago Press, 2019.

Guattari, Félix. The Three Ecologies. Translated by Ian Pindar and Paul Sutton. London: Athlone Press, 2000.

Merleau-Ponty, Maurice. Phenomenology of Perception. Translated by Colin Smith. London: Routledge, 1962.

Odling-Smee, F. John, Kevin N. Laland, and Marcus W. Feldman. Niche Construction: The Neglected Process in Evolution. Princeton, NJ: Princeton University Press, 2003.

Stein, Edith. On the Problem of Empathy. Translated by Waltraut Stein. Washington, DC: ICS Publications, 1989.

Stiegler, Bernard. Taking Care of Youth and the Generations. Translated by Stephen Barker. Stanford, CA: Stanford University Press, 2010.

Whitehead, Alfred North. Process and Reality: An Essay in Cosmology. Corrected edition. New York: Free Press, 1978.

Thinking Religion 173: Frankenstein’s AI Monster

I’m back with Matthew Klippenstein this week. Our episode began with a discussion about AI tools and their impact on research and employment, including experiences with different web browsers and their ecosystems. The conversation then evolved to explore the evolving landscape of technology, particularly focusing on AI’s impact on web design and content consumption, while also touching on the resurgence of physical media and its cultural significance. The discussion concluded with an examination of Mary Shelley’s “Frankenstein” and its relevance to current AI discussions, along with broader themes about creation, consciousness, and the human tendency to view new entities as either threats or allies.

https://open.spotify.com/episode/50pfFhkCFQXpq8UAhYhOlc

Direct Link to Episode

AI Tools in Research Discussion

Matthew and Sam discussed Sam’s paper and the use of AI tools like GPT-5 for research and information synthesis. They explored the potential impact of AI on employment, with Matthew noting that AI could streamline information gathering and synthesis, reducing the time required for tasks that would have previously been more time-consuming. Sam agreed to send Matthew links to additional resources mentioned in the paper, and they planned to discuss further ideas on integrating AI tools into their work.

Browser Preferences and Ecosystems

Sam and Matthew discussed their experiences with different web browsers, with Sam explaining his preference for Brave over Chrome due to its privacy-focused features and its founders’ roots in the Mozilla/Firefox community (Brave itself is Chromium-based). Sam noted that he had recently switched back to Safari on iOS due to new OS updates, while continuing to use Chromium-based browsers on Linux. They drew parallels between browser ecosystems and religious denominations, with Chrome representing a dominant unified system and Safari as a smaller but distinct alternative.

AI’s Impact on Web Design

Sam and Matthew discussed the evolving landscape of technology, particularly focusing on AI’s impact on web design, search engine optimization, and content consumption. Sam expressed excitement about the new iteration of web interaction, comparing it to predictions from 10 years ago about the future of platforms like Facebook Messenger and WeChat. They noted that AI agents are increasingly becoming the intermediaries through which users interact with content, leading to a shift from human-centric to AI-centric web design. Sam also shared insights from his personal blog, highlighting an increase in traffic from AI agents and the challenges of balancing accessibility with academic integrity.

Physical Media’s Cultural Resurgence

Sam and Matthew discussed the resurgence of physical media, particularly vinyl records and CDs, as a cultural phenomenon and personal preference. They explored the value of owning physical copies of music and books, contrasting it with streaming services, and considered how this trend might symbolize a return to tangible experiences. Sam also shared his interest in integral ecology, a philosophical approach that examines the interconnectedness of humans and their environment, and how this perspective could influence the development and understanding of artificial intelligence.

AI Development and Environmental Impact

Sam and Matthew discussed the rapid development of AI and its environmental impact, comparing it to biological r/K selection theory, in which fast-reproducing species are initially successful but are eventually overtaken by more efficient, slower-reproducing species. Sam predicted that future computing interfaces would become more humane and less screen-based, with AI-driven technology likely replacing traditional devices within 10 years, though there would still be specialized uses for mainframes and Excel. They agreed that current AI development was focused on establishing market leadership rather than long-term sustainability, with Sam noting that antitrust actions like those against Microsoft in the 1990s were unlikely in the current regulatory environment.

AI’s Role in Information Consumption

Sam and Matthew discussed the evolving landscape of information consumption and the role of AI in providing insights and advice. They explored how AI tools can assist in synthesizing large amounts of data, such as academic papers, and how this could reduce the risk of misinformation. They also touched on the growing trend of using AI for personal health advice, the challenges of healthcare access, and the shift in news consumption patterns. The conversation highlighted the transition to a more AI-driven information era and the potential implications for society.

AI’s Impact on White-Collar Jobs

Sam and Matthew discussed the impact of AI and automation on employment, particularly how it could affect white-collar jobs more than blue-collar ones. They explored how AI tools might become cheaper than hiring human employees, with Matthew sharing an example from a climate newsletter offering AI subscriptions as a cost-effective alternative to hiring interns. Sam referenced Ursula Le Guin’s book “Always Coming Home” as a speculative fiction work depicting a post-capitalist, post-extractive society where technology serves a background role to human life. The conversation concluded with Matthew mentioning his recent reading of “Frankenstein,” noting its relevance to current AI discussions despite being written in the early 1800s.

Frankenstein’s Themes of Creation and Isolation

Matthew shared his thoughts on Mary Shelley’s “Frankenstein,” noting its philosophical depth and rich narrative structure. He described the story as a meditation on creation and the challenges faced by a non-human intelligent creature navigating a world of fear and prejudice. Matthew drew parallels between the monster’s learning of human culture and language to Tarzan’s experiences, highlighting the themes of isolation and the quest for companionship. He also compared the nested storytelling structure of “Frankenstein” to the film “Inception,” emphasizing its complexity and the moral questions it raises about creation and control.

AI, Consciousness, and Human Emotions

Sam and Matthew discussed the historical context of early computing, mentioning Ada Lovelace and Charles Babbage, and explored the theme of artificial intelligence through the lens of Mary Shelley’s “Frankenstein.” They examined the implications of teaching AI human-like emotions and empathy, questioning whether such traits should be encouraged or suppressed. The conversation also touched on the nature of consciousness as an emergent phenomenon and the human tendency to view new entities as either threats or potential allies.

Human Creation and Divine Parallels

Sam and Matthew discussed the book “Childhood’s End” by Arthur C. Clarke and its connection to the film “2001: A Space Odyssey.” They also talked about the origins of Mary Shelley’s “Frankenstein” and the historical context of its creation. Sam mentioned parallels between human creation of technology and the concept of gods in mythology, particularly in relation to metalworking and divine beings. The conversation touched on the theme of human creation and its implications for our understanding of divinity and ourselves.

Robustness Over Optimization in Systems

Matthew and Sam discussed the concept of robustness versus optimization in nature and society, drawing on insights from the French biologist Olivier Hamant, who emphasizes the importance of resilience over efficiency. They explored how this perspective could apply to AI and infrastructure, suggesting a shift towards building systems that are robust and adaptable rather than highly optimized. Sam also shared his work on empathy, inspired by the phenomenology of Edith Stein, and how it relates to building resilient systems.

Efficiency vs. Redundancy in Resilience

Sam and Matthew discussed the importance of efficiency versus redundancy and resilience, particularly in the context of corporate America and decarbonization efforts. Sam referenced recent events involving Elon Musk and Donald Trump, highlighting the potential pitfalls of overly efficient approaches. Matthew used the historical example of polar expeditions to illustrate how redundancy and careful planning can lead to success, even if it means being “wasteful” in terms of resources. They agreed that a cautious and prepared approach, rather than relying solely on efficiency, might be more prudent in facing unexpected challenges.

Frankenstein’s Themes and Modern Parallels

Sam and Matthew discussed Mary Shelley’s “Frankenstein,” exploring its themes and cultural impact. They agreed on the story’s timeless appeal due to its exploration of the monster’s struggle and the human fear of the unknown. Sam shared personal experiences teaching the book and how students often misinterpret the monster’s character. They also touched on the concept of efficiency as a modern political issue, drawing parallels to the story’s themes. The conversation concluded with Matthew offering to share anime recommendations, but they decided to save that for a future discussion.

Listen Here

China’s AI Path

Some fascinating points here regarding AI development in the US compared to China… in short, China is taking more of an “open” approach (not really open, but it’s a good metaphor) based on its market principles, releasing open weights, while US companies are focused on restricting access to their weights (don’t lose the proprietary “moat” that might end up changing the world and all)…

🔮 China’s on a different AI path – Exponential View:

China’s approach is more pragmatic. Its origins are shaped by its hyper‑competitive consumer internet, which prizes deployment‑led productivity. Neither WeChat nor Douyin had a clear monetization strategy when they first launched. It is the mentality of Chinese internet players to capture market share first. By releasing model weights early, Chinese labs attract more developers and distributors, and if consumers become hooked, switching later becomes more costly.

ChatGPT’s Effects on People’s Emotional Wellbeing Research

This research from OpenAI (the company behind ChatGPT) is certainly interesting given its large data set, but this part was particularly relevant for me and my work on phenomenology and empathy…

OpenAI has released its first research into how using ChatGPT affects people’s emotional wellbeing | MIT Technology Review:

That said, this latest research does chime with what scientists so far have discovered about how emotionally compelling chatbot conversations can be. For example, in 2023 MIT Media Lab researchers found that chatbots tend to mirror the emotional sentiment of a user’s messages, suggesting a kind of feedback loop where the happier you act, the happier the AI seems, or if you act sadder, so does the AI.

“We are now confident we know how to build AGI…”

That statement is something that should be exciting as well as a “woah” moment to all of us. This is big and you should be paying attention.

Reflections – Sam Altman:

We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

My Beginner’s Guide to Artificial Intelligence

A client reached out a little while ago and asked if I could put together a “beginner’s guide to AI” for them and their team. I thought long and hard on the topic, as I have so much excitement about the possibilities but so much trepidation about the impacts (especially on individuals in careers that will be threatened by the mass adoption of AI). Apple’s announcement this month that they are infusing iPhones with ChatGPT intelligence only drives that home. We are in a time of transition, and I want my own clients, and anyone running a business or working in a sector that will be affected (which is every sector), to be prepared for, or at least mindful of, what’s coming.

So, I originally put this together in a more expanded format with charts, examples, etc., but this is a good outline of the main points. I thought it might help some others, and my client graciously said I could post it as a result. Let me know if you have any thoughts or questions!

Artificial Intelligence (AI) is a topic that’s constantly buzzing around us. Whether you’ve heard about it in the context of ChatGPT, Apple Intelligence, Microsoft’s Copilot, or self-driving cars, AI is transforming the way we live, work, and even think. If you’re like many people, you might be on the fence about diving into this technology. You might know what ChatGPT is but aren’t quite sure if it’s something you should use. Let’s break down the benefits and costs to help you understand why AI deserves your attention.

The Benefits of Embracing AI

Efficiency and Productivity

One of the most compelling reasons to embrace AI is its ability to enhance efficiency. In our busy lives, whether managing businesses, marketing campaigns, or family time, finding ways to streamline tasks can be a game-changer. AI can help automate mundane tasks, organize your day, and even draft your emails. Imagine having a virtual assistant who never sleeps, always ready to help you.

For instance, AI-powered scheduling tools can help you manage your calendar more effectively by automatically setting up meetings and sending reminders. This means less time spent on administrative tasks and more time dedicated to what truly matters – growing your business, strategizing your marketing efforts, or spending quality time with your family.

Personalization

AI can personalize experiences in ways we’ve never seen before. For marketers, this means creating targeted campaigns that resonate on a personal level. More broadly, AI can analyze data to understand preferences, behaviors, and patterns, allowing for a more customized approach in almost any field.

Imagine being able to offer each customer or client a unique experience that caters to their needs and interests. This personalized approach can significantly enhance engagement and loyalty. In marketing, AI can help create highly targeted content that speaks directly to the needs and interests of your audience, increasing engagement and conversion rates.

Access to Information

The vast amounts of data generated daily can be overwhelming whether you’re solo, on a team, or working in the C-Suite. AI can sift through this information and give you the insights you need. Whether you’re researching a new marketing strategy, preparing for a presentation, or just curious about a topic, AI can help you find relevant and accurate information quickly.

Think about how AI-powered search engines and research tools can simplify the process of gathering information. Instead of sifting through endless articles and papers, AI can provide the most pertinent sources, saving you time and effort. This is especially valuable in professional settings where timely and accurate information is crucial.

Creativity and Innovation

AI isn’t just about number-crunching; it’s also a tool for creativity. Tools like ChatGPT, Copilot, Gemini, and Claude can help brainstorm ideas, generate creative content, and even compose poetry. It’s like having a creative partner who can help you think outside the box and explore new possibilities.

As someone who values creativity, imagine having an AI that can help you brainstorm new marketing ideas, create engaging content for your campaigns, or even assist in writing your next blog post. AI can inspire new ways of thinking and help you push the boundaries of your creativity. It isn’t just for writing high school papers; there are very tangible ways to use AI to spur new insights rather than simply “do the work for you.”

The Costs and Considerations

Privacy Concerns

I’m a huge privacy and security nerd. I take this very seriously in my own personal digital (and non-digital) life as well as that of my family members. One of the main concerns people have with AI is privacy. AI systems often rely on large amounts of data, some of which might be personal. It’s essential to be aware of what data you’re sharing and how it’s being used. If you’re using AI for any sort of corporate or work-related output, look for tools that prioritize data security and transparency.

For instance, when using AI tools, always check their privacy policies and opt for those that offer robust data protection measures. Be mindful of the information you input into these systems and ensure that sensitive data is handled appropriately. Balancing the benefits of AI with the need to protect personal privacy is crucial.

Dependence and Skill Degradation

There’s a valid concern that relying too much on AI could lead to a degradation of our skills. Just as over-relying on a calculator can weaken basic arithmetic skills, leaning heavily on AI might erode our ability to perform specific tasks independently. It’s important to strike a balance and use AI as a tool to enhance, not replace, our capabilities. As someone who has worked in education with middle and high schoolers, I especially feel the need to teach and model this balance.

Consider using AI as a complement to your existing skills rather than a crutch. For example, while AI can help draft emails or create marketing strategies, reviewing and personalizing these outputs is still important. This way, you maintain your proficiency while benefiting from AI’s efficiency. AI systems will continue to improve, but there are already very real examples of businesses, and even attorneys and physicians, relying on AI output that later proved to be false or misleading. Be wise.

Ethical Considerations

AI raises a host of ethical questions. How should AI be used? What are its implications for decision-making processes? These questions are close to my heart as someone interested in theology and ethics. It’s crucial to consider the moral dimensions of AI and ensure that its development and deployment align with our values.

Engage in discussions about AI ethics and stay informed about how AI technologies are being developed and used. Advocate for ethical AI practices that prioritize fairness, transparency, and accountability. By doing so, we can help shape a future where AI benefits everyone.

We constantly hear statistics about the number of jobs (and incomes) that AI will replace in the next one, five, or ten years. I do believe we are in for a societal shift. I do not want people to suffer and lose their jobs or careers. However, AI is not going away. How can you or your business manage that delicate balance in the most ethical way possible?

Economic Impact

AI is reshaping industries, which can lead to job displacement. While AI creates new opportunities, it also means that some roles may become obsolete. Preparing for these changes involves continuous learning and adaptability. It’s important to equip ourselves and our teams with the skills needed in an AI-driven world.

Promote the development of skills that are complementary to AI, such as critical thinking, creativity, and emotional intelligence. Encourage yourself or your team to pursue roles that leverage AI technology so you remain competitive in the evolving job market. Emphasizing lifelong learning will help individuals adapt to the changes AI brings.

Embracing AI: A Balanced Approach

AI is a powerful tool with immense potential, but it also has its share of challenges. As we navigate this new landscape, it’s essential to approach AI with a balanced perspective. Embrace the benefits it offers, but remain vigilant about the costs and ethical implications.

For those still hesitant, I encourage you to experiment with AI tools like ChatGPT. Start small, see how it can assist you in your daily tasks, and gradually integrate it into your workflow. AI isn’t just a trend; it’s a transformation that’s here to stay. By understanding and leveraging AI, we can better prepare ourselves and our businesses for the future.

Explore AI Tools

Begin by exploring AI tools that can assist you in your daily activities. For example, try using ChatGPT for drafting emails, creating marketing strategies, or brainstorming ideas. Experiment with AI-powered scheduling tools to manage your calendar more efficiently.

Educate Yourself

Stay informed about AI developments and their implications by reading articles, attending webinars, and participating in discussions about AI. Understanding the technology and its potential impact will help you make informed decisions about its use. As always, reach out to me if you have any questions.

Balance AI Use with Skill Development

While leveraging AI, ensure that you continue to develop your own skills. Use AI as a supplement rather than a replacement. For example, review and personalize AI-generated content to maintain your proficiency. Look for online webinars geared toward AI training or demos that you can attend or review. There are plenty of videos on YouTube, but be wise and discerning: many of those channels care more about capturing your attention than about providing quality content.

Advocate for Ethical AI

Engage in conversations about AI ethics and advocate for practices that prioritize fairness, transparency, and accountability. Stay informed about how AI technologies are being developed and used, and support initiatives that align with your values. Whatever your industry or profession, there’s room (and economic incentive) for conversations about ethics in the realm of AI.

Prepare for the (YOUR) Future

Encourage yourself or your team to develop skills that complement AI technology. Promote critical thinking, creativity, and emotional intelligence. Emphasize the importance of lifelong learning to adapt to the evolving job market. Critical thinkers will be the key decision makers of 2034, far more so than they are today in 2024.

Final Thoughts

Artificial Intelligence is a transformative force that’s reshaping our world in profound ways. By understanding and embracing AI, we can unlock new levels of efficiency, personalization, creativity, and innovation. 

However, navigating this landscape with a balanced perspective is crucial, weighing the costs and ethical implications. Be wise. Be kind. Be efficient. The future feels uncertain, and this is technology that will transform humanity more than the internet, more than electromagnetism, more than the automobile… we are entering a new age in every facet of our lives, both personally and professionally. I don’t want to scare you, but I do want you and your team to be prepared.

For those still on the fence, I encourage you to take the plunge and explore AI’s potential. Start small, experiment with different tools, and see how they can enhance your daily activities. AI isn’t just a passing trend; it’s a revolution that’s here to stay. By leveraging AI wisely, we can better prepare ourselves and our businesses for the future.

And as always… stay curious!

Accelerationism: What Are We Doing to Ourselves?

Here’s your word for today, as Apple’s WWDC is expected to include the announcement of a major partnership with OpenAI (the folks behind ChatGPT) to bring Siri much closer to a true artificial intelligence assistant (or “Apple Intelligence,” as the marketing goes).

Accelerationism.

It’s a term that’s been used in the tech world for years, but the mindset (mind virus?) has reached new levels in the post-GPT-4 era we now live in, with what feels like the imminent release of something even more powerful in the coming months or years.

Here’s an article from 2017 about the term accelerationism and accelerationists: 

Accelerationism: how a fringe philosophy predicted the future we live in – The Guardian: 

Accelerationists argue that technology, particularly computer technology, and capitalism, particularly the most aggressive, global variety, should be massively sped up and intensified – either because this is the best way forward for humanity, or because there is no alternative. Accelerationists favour automation. They favour the further merging of the digital and the human. They often favour the deregulation of business, and drastically scaled-back government. They believe that people should stop deluding themselves that economic and technological progress can be controlled. They often believe that social and political upheaval has a value in itself.

With my mind heavy on what the Apple/OpenAI partnership might look like as WWDC starts in just a few minutes (this feels like it could be a historic moment), I came across Ted Gioia’s thought-provoking post on how we are doing to ourselves what Dr. Calhoun (unknowingly) did to his poor mice in the famous Universe 25 experiment of the 1960s.

It’s worth your time to read this and ponder our own current situation.

Is Silicon Valley Building Universe 25? – by Ted Gioia:

Even today, Dr. Calhoun’s bold experiment—known as Universe 25—demands our attention. In fact, we need to study Universe 25 far more carefully today, because zealous tech accelerationists—that’s now a word, by the way—aim to create something comparable for human beings. What would you do if AI took care of all your needs?

After three years in the classroom of “post-Covid” education, seeing how many young people are absolutely struggling with mental health (and how little schools of any sort, from public to private like the ones where I taught, are doing to help them), I find it shocking that today’s big tech news will send stocks soaring even as it increases our swipes and screen time and leads us further down the primrose path toward a future of disconnected violence and mental health disaster.

Education Innovation and Cognitive Artifacts

A must-read from Mr. Brent Kaneft (our Head of School at Wilson Hall, where I am a teacher)…

Wise Integration: Sea Squirts, Tech Bans, and Cognitive Artifacts (Summer Series) | Brent Kaneft – Intrepid ED News:

So the strange paradox of innovation is that every innovation has the potential to be an existential threat to the physical, social, spiritual, and cognitive development of humans. The allure is the convenience (our brains are always looking to save energy!) and the potentiality innovation offers, but the human cost can be staggering, either immediately or slowly, like the impact of mold secretly growing behind an attractive wallpaper. To return to Tristan Harris’s point: machines are improving as humans downgrade in various ways. As professional educators, we have to ask whether innovation will prove detrimental to the fundamental qualities we want to develop in our students.