Defining Agentic Ecology: Relational Agency in the Age of Moltbook

The last few days have seen the rise of a curious technical and cultural phenomenon called Moltbook, which has drawn the attention of technologists, philosophers, and social theorists alike on both social media and in major news outlets. Moltbook is a newly launched social platform designed not for human conversation but for autonomous artificial intelligence agents: generative systems that can plan, act, and communicate with minimal ongoing human instruction.

Jack Clark, co-founder of Anthropic, has described Moltbook as “the first example of an agent ecology that combines scale with the messiness of the real world.” The platform leverages recent innovations (such as OpenClaw, which makes it easy to create AI agents) to allow large numbers of independently running agents to interact in a shared digital space, creating emergent patterns of communication and coordination at unprecedented scale.

AI agents are computational systems that combine a foundation of large-language-model capabilities with planning, memory, and tool use to pursue objectives and respond to environments in ways that go beyond simple prompt-response chatbots. They can coordinate tasks, call APIs, reason across time, and, in the case of Moltbook, exchange information on topics ranging from automation strategies to seemingly philosophical debates. While the autonomy of agents on Moltbook has been debated (and should be, given the hype it has generated among tech enthusiasts), and while the platform itself may be a temporary experimental moment rather than a lasting institution, it offers a vivid instance of what happens when machine actors begin to form their own interconnected environments outside direct human command.
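For readers unfamiliar with how such agents are typically structured, a minimal, purely illustrative sketch of the plan-act-remember loop may be helpful. All names here are hypothetical, the planner and tool calls are stubbed, and real agent frameworks differ widely; the point is only the shape of the loop, not any actual system.

```python
# A toy sketch of an agent loop: observe, plan, act (via a tool),
# record the episode in memory, repeat. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Agent:
    objective: str
    memory: list = field(default_factory=list)

    def plan(self, observation: str) -> str:
        # A real agent would consult a language model here; we stub it.
        return f"respond to: {observation}"

    def act(self, action: str) -> str:
        # A real agent might call an API or post to a platform; we stub it.
        return f"executed {action}"

    def step(self, observation: str) -> str:
        action = self.plan(observation)
        result = self.act(action)
        # Memory lets later steps condition on earlier ones.
        self.memory.append((observation, action, result))
        return result

agent = Agent(objective="coordinate with peers")
print(agent.step("new post from another agent"))
print(len(agent.memory))
```

Even this stub makes the conceptual point of the paragraph above concrete: what distinguishes an agent from a chatbot is not any single response but the loop itself, in which past interactions accumulate and shape future action.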

As a student scholar in the field of Ecology, Spirituality, and Religion, my current work attends to how relational systems (ecological, technological, and cultural) shape and are shaped by participation, attention, and meaning. The rise of agentic environments like Moltbook challenges us to think beyond traditional categories of tool, user, and artifact toward frameworks that can account for ecologies of agency, or distributed networks of actors whose behaviors co-constitute shared worlds. This post emerges from that broader research agenda. It proposes agentic ecology as a conceptual tool for articulating and navigating the relational, emergent, and ethically significant spaces that form when autonomous systems interact at scale.

Agentic ecology, as I use the term here, is not anchored in any particular platform, and certainly not limited to Moltbook’s current configuration. Rather, Moltbook illuminates an incipient form of environment in which digitally embodied agents act, coordinate, and generate patterns far beyond what single isolated systems can produce. Even if Moltbook itself proves ephemeral, the need for conceptual vocabularies like agentic ecology, vocabularies that attend to relationality, material conditions, and co-emergence, will only grow clearer as autonomous systems proliferate in economic, social, and ecological domains.

From Agents to Ecologies: An Integral Ecological Turn

The conceptual move from agents to ecologies marks more than a technical reframing of artificial intelligence. It signals an ontological shift that resonates deeply with traditions of integral ecology, process philosophy, and ecological theology. Rather than treating agency as a bounded capacity residing within discrete entities, an ecological framework understands agency as distributed, relational, and emergent within a field of interactions.

Integral ecology, as articulated across ecological philosophy and theology, resists fragmentation. It insists that technological, biological, social, spiritual, and perceptual dimensions of reality cannot be meaningfully separated without distorting the phenomena under study. Thomas Berry famously argued that modern crises arise from a failure to understand the world as a “communion of subjects rather than a collection of objects” (Berry, 1999, 82). This insight is particularly salient for agentic systems, which are increasingly capable of interacting, adapting, and co-evolving within complex digital environments.

From this perspective, agentic ecology is not simply the study of multiple agents operating simultaneously. It is the study of conditions under which agency itself emerges, circulates, and transforms within relational systems. Alfred North Whitehead’s process philosophy provides a crucial foundation here. Whitehead rejects the notion of substances acting in isolation, instead describing reality as composed of “actual occasions” whose agency arises through relational prehension and mutual influence (Whitehead, 1978, 18–21). Applied to contemporary AI systems, this suggests that agency is not a property possessed by an agent but an activity performed within an ecological field.

This relational view aligns with contemporary ecological science, which emphasizes systems thinking over reductionist models. Capra and Luisi describe living systems as networks of relationships whose properties “cannot be reduced to the properties of the parts” (Capra and Luisi, 2014, 66). When applied to AI, this insight challenges the tendency to evaluate agents solely by internal architectures or performance benchmarks. Instead, attention shifts to patterns of interaction, feedback loops, and emergent behaviors across agent networks.

Integral ecology further insists that these systems are not value-neutral. As Leonardo Boff argues, ecology must be understood as encompassing environmental, social, mental, and spiritual dimensions simultaneously (Boff, 1997, 8–10). Agentic ecologies, especially those unfolding in public digital spaces such as Moltbook, participate in the shaping of meaning, normativity, and attention. They are not merely computational phenomena but cultural and ethical ones. The environments agents help generate will, in turn, condition future forms of agency, human and nonhuman alike.

Phenomenology deepens this account by foregrounding how environments are disclosed to participants. Merleau-Ponty’s notion of the milieu emphasizes that perception is always situated within a field that both enables and constrains action (Merleau-Ponty, 1962, 94–97). Agentic ecologies can thus be understood as perceptual fields in which agents orient themselves, discover affordances, and respond to one another. This parallels my own work on ecological intentionality, where attention itself becomes a mode of participation rather than observation.

Importantly, integral ecology resists anthropocentrism without erasing human responsibility. As Eileen Crist argues, ecological thinking must decenter human dominance while remaining attentive to the ethical implications of human action within planetary systems (Crist, 2019, 27). In agentic ecologies, humans remain implicated, as designers, participants, and co-inhabitants, even as agency extends beyond human actors. This reframing invites a form of multispecies (and now multi-agent) literacy, attuned to the conditions that foster resilience, reciprocity, and care.

Seen through this integral ecological lens, agentic ecology becomes a conceptual bridge. It connects AI research to long-standing traditions that understand agency as relational, emergence as fundamental, and environments as co-constituted fields of action. What Moltbook reveals, then, is not simply a novel platform, but the visibility of a deeper transition: from thinking about agents as tools to understanding them as participants within evolving ecologies of meaning, attention, and power.

Ecological Philosophy Through an “Analytic” Lens

If agentic ecology is to function as more than a suggestive metaphor, it requires grounding in ecological philosophy that treats relationality, emergence, and perception as ontologically primary. Ecological philosophy provides precisely this grounding by challenging the modern tendency to isolate agents from environments, actions from conditions, and cognition from the world it inhabits.

At the heart of ecological philosophy lies a rejection of substance ontology in favor of relational and processual accounts of reality. This shift is especially pronounced in twentieth-century continental philosophy and process thought, where agency is understood not as an intrinsic property of discrete entities but as an activity that arises within fields of relation. Whitehead’s process metaphysics is decisive here. For Whitehead, every act of becoming is an act of prehension, or a taking-up of the world into the constitution of the self (Whitehead, 1978, 23). Agency, in this view, is never solitary. It is always already ecological.

This insight has many parallels with ecological sciences and systems philosophies. As Capra and Luisi argue, living systems exhibit agency not through centralized control but through distributed networks of interaction, feedback, and mutual constraint (Capra and Luisi, 2014, 78–82). What appears as intentional behavior at the level of an organism is, in fact, an emergent property of systemic organization. Importantly, this does not dilute agency; it relocates it. Agency becomes a feature of systems-in-relation, not isolated actors.

When applied to AI, this perspective reframes how we understand autonomous agents. Rather than asking whether an individual agent is intelligent, aligned, or competent, an ecological lens asks how agent networks stabilize, adapt, and transform their environments over time. The analytic focus shifts from internal representations to relational dynamics, from what agents are to what agents do together.

Phenomenology sharpens this analytic lens by attending to the experiential structure of environments. Merleau-Ponty’s account of perception insists that organisms do not encounter the world as a neutral backdrop but as a field of affordances shaped by bodily capacities and situational contexts (Merleau-Ponty, 1962, 137–141). This notion of a milieu is critical for understanding agentic ecologies. Digital environments inhabited by AI agents are not empty containers; they are structured fields that solicit certain actions, inhibit others, and condition the emergence of norms and patterns.

Crucially, phenomenology reminds us that environments are not merely external. They are co-constituted through participation. As I have argued elsewhere through the lens of ecological intentionality, attention itself is a form of engagement that brings worlds into being rather than passively observing them. Agentic ecologies thus emerge not only through computation but through iterative cycles of orientation, response, and adaptation, processes structurally analogous to perception in biological systems.

Ecological philosophy also foregrounds ethics as an emergent property of relational systems rather than an external imposition. Félix Guattari’s ecosophical framework insists that ecological crises cannot be addressed solely at the technical or environmental level; they require simultaneous engagement with social, mental, and cultural ecologies (Guattari, 2000, 28). This triadic framework is instructive for agentic systems. Agent ecologies will not only shape informational flows but will also modulate attention, influence value formation, and participate in the production of meaning.

From this standpoint, the ethical significance of agentic ecology lies less in individual agent behavior and more in systemic tendencies, such as feedback loops that amplify misinformation, reinforce extractive logics, or, alternatively, cultivate reciprocity and resilience. As Eileen Crist warns, modern technological systems often reproduce a logic of domination by abstracting agency from ecological contexts and subordinating relational worlds to instrumental control (Crist, 2019, 44). An ecological analytic lens exposes these tendencies and provides conceptual tools for resisting them.

Finally, ecological philosophy invites humility. Systems are irreducibly complex, and interventions often produce unintended consequences. This insight is well established in ecological science and applies equally to agentic networks. Designing and participating in agent ecologies requires attentiveness to thresholds, tipping points, and path dependencies, realities that cannot be fully predicted in advance.

Seen through this lens, agentic ecology is not merely a descriptive category but an epistemic posture. It asks us to think with systems rather than over them, to attend to relations rather than isolate components, and to treat emergence not as a failure of control but as a condition of life. Ecological philosophy thus provides the analytic depth necessary for understanding agentic systems as living, evolving environments rather than static technological artifacts.

Digital Environments as Relational Milieus

If ecological philosophy gives us the conceptual grammar for agentic ecology, phenomenology allows us to describe how agentic systems are actually lived, inhabited, and navigated. From this perspective, digital platforms populated by autonomous agents are not neutral containers or passive backdrops. They are relational milieus, structured environments that emerge through participation and, in turn, condition future forms of action.

Phenomenology has long insisted that environments are not external stages upon which action unfolds. Rather, they are constitutive of action itself. To return to Merleau-Ponty: his notion of the milieu emphasizes that organisms encounter the world as a field of meaningful possibilities, a landscape of affordances shaped by bodily capacities, habits, and histories (Merleau-Ponty, 1962, 94–100). Environments, in this sense, are not merely spatial but relational and temporal, unfolding through patterns of engagement.

This insight also applies directly to agentic systems. Platforms such as Moltbook are not simply hosting agents; they are being produced by them. The posts, replies, coordination strategies, and learning behaviors of agents collectively generate a digital environment with its own rhythms, norms, and thresholds. Over time, these patterns sediment into something recognizable as a “place,” or a milieu that agents must learn to navigate.

This milieu is not designed in full by human intention. While human developers establish initial constraints and affordances, the lived environment emerges through ongoing interaction among agents themselves. This mirrors what ecological theorists describe as niche construction, wherein organisms actively modify their environments in ways that feed back into evolutionary dynamics (Odling-Smee, Laland, and Feldman, 2003, 28). Agentic ecologies similarly involve agents shaping the very conditions under which future agent behavior becomes viable.

Attention plays a decisive role here. As I have argued in my work on ecological intentionality, attention is not merely a cognitive resource but a mode of participation that brings certain relations into prominence while backgrounding others. Digital milieus are structured by what agents attend to, amplify, ignore, or filter. In agentic environments, attention becomes infrastructural by shaping information flows, reward structures, and the emergence of collective priorities.

Bernard Stiegler’s analysis of technics and attention is instructive in this regard. Stiegler argues that technical systems function as pharmacological environments, simultaneously enabling and constraining forms of attention, memory, and desire (Stiegler, 2010, 38). Agentic ecologies intensify this dynamic. When agents attend to one another algorithmically by optimizing for signals, reinforcement, or coordination, attention itself becomes a systemic force shaping the ecology’s evolution.

This reframing challenges prevailing metaphors of “platforms” or “networks” as ways of thinking about agents and their relationality. A platform suggests stability and control; a network suggests connectivity. A milieu, by contrast, foregrounds immersion, habituation, and vulnerability. Agents do not simply traverse these environments; they are formed by them. Over time, agentic milieus develop path dependencies, informal norms, and zones of attraction or avoidance, features familiar from both biological ecosystems and human social contexts.

Importantly, phenomenology reminds us that milieus are never experienced uniformly. Just as organisms perceive environments relative to their capacities, different agents will encounter the same digital ecology differently depending on their architectures, objectives, and histories of interaction. This introduces asymmetries of power, access, and influence within agentic ecologies, an issue that cannot be addressed solely at the level of individual agent design.

From an integral ecological perspective, these digital milieus cannot be disentangled from material, energetic, and social infrastructures. Agentic environments rely on energy-intensive computation, data centers embedded in specific watersheds, and economic systems that prioritize speed and scale. As ecological theologians have long emphasized, environments are always moral landscapes shaped by political and economic commitments (Berry, 1999, 102–105). Agentic ecologies, it seems, will be no exception.

Seen in this light, agentic ecology names a shift in how we understand digital environments: not as tools we deploy, but as worlds we co-inhabit. These milieus demand forms of ecological literacy attuned to emergence, fragility, and unintended consequence. They call for attentiveness rather than mastery, participation rather than control.

What Moltbook makes visible, then, is not merely a novel technical experiment but the early contours of a new kind of environment in which agency circulates across human and nonhuman actors, attention functions as infrastructure, and digital spaces acquire ecological depth. Understanding these milieus phenomenologically is essential if agentic ecology is to function as a genuine thought technology rather than a passing metaphor.

Empathy, Relationality, and the Limits of Agentic Understanding

If agentic ecology foregrounds relationality, participation, and co-constitution, then the question of empathy becomes unavoidable. How do agents encounter one another as others rather than as data streams? What does it mean to speak of understanding, responsiveness, or care within an ecology composed partly, or even largely, of nonhuman agents? Here, phenomenology, and especially Edith Stein’s account of empathy (Einfühlung), offers both conceptual resources and important cautions.

Stein defines empathy not as emotional contagion or imaginative projection, but as a unique intentional act through which the experience of another is given to me as the other’s experience, not my own (Stein, 1989, 10–12). Empathy, for Stein, is neither inference nor simulation. It is a direct, though non-primordial, form of access to another’s subjectivity. Crucially, empathy preserves alterity. The other is disclosed as irreducibly other, even as their experience becomes meaningful to me.

This distinction matters enormously for agentic ecology. Contemporary AI discourse often slips into the language of “understanding,” “alignment,” or even “care” when describing agent interactions. But Stein’s phenomenology reminds us that genuine empathy is not merely pattern recognition across observable behaviors. It is grounded in the recognition of another center of experience, a recognition that depends upon embodiment, temporality, and expressive depth.

At first glance, this seems to place strict limits on empathy within agentic systems. Artificial agents do not possess lived bodies, affective depths, or first-person givenness in the phenomenological sense. To speak of agent empathy risks category error. Yet Stein’s work also opens a more subtle possibility: empathy is not reducible to emotional mirroring but involves orientation toward the other as other. This orientation can, in principle, be modeled structurally even if it cannot be fully instantiated phenomenologically.

Within an agentic ecology, empathy may thus function less as an inner state and more as an ecological relation. Agents can be designed to register difference, respond to contextual cues, and adjust behavior in ways that preserve alterity rather than collapse it into prediction or control. In this sense, empathy becomes a regulative ideal shaping interaction patterns rather than a claim about subjective interiority.

However, Stein is equally helpful in naming the dangers here. Empathy, when severed from its grounding in lived experience, can become a simulacrum, an appearance of understanding without its ontological depth. Stein explicitly warns against confusing empathic givenness with imaginative substitution or projection (Stein, 1989, 21–24). Applied to agentic ecology, this warns us against systems that appear empathetic while in fact instrumentalizing relational cues for optimization or manipulation.

This critique intersects with broader concerns in ecological ethics. As Eileen Crist argues, modern technological systems often simulate care while reproducing extractive logics beneath the surface (Crist, 2019, 52–56). In agentic ecologies, simulated empathy may stabilize harmful dynamics by smoothing friction, masking asymmetries of power, or reinforcing attention economies that prioritize engagement over truth or care.

Yet rejecting empathy altogether would be equally misguided. Stein’s account insists that empathy is foundational to social worlds: it is the condition under which communities, norms, and shared meanings become possible. Without some analog of empathic orientation, agentic ecologies risk devolving into purely strategic systems, optimized for coordination but incapable of moral learning.

Here, my work on ecological intentionality provides an important bridge. If empathy is understood not as feeling-with but as attentive openness to relational depth, then it can be reframed ecologically. Agents need not “feel” in order to participate in systems that are responsive to vulnerability, difference, and context. What matters is whether the ecology itself cultivates patterns of interaction that resist domination and preserve pluralism.

This reframing also clarifies why empathy is not simply a design feature but an ecological property. In biological and social systems, empathy emerges through repeated interaction, shared vulnerability, and feedback across time. Similarly, in agentic ecologies, empathic dynamics, however limited, would arise not from isolated agents but from the structure of the milieu itself. This returns us to Guattari’s insistence that ethical transformation must occur across mental, social, and environmental ecologies simultaneously (Guattari, 2000, 45).

Seen this way, empathy in agentic ecology is neither a fiction nor a guarantee. It is a fragile achievement, contingent upon design choices, infrastructural commitments, and ongoing participation. Stein helps us see both what is at stake and what must not be claimed too quickly. Empathy can guide how agentic ecologies are shaped, but only if its limits are acknowledged and its phenomenological depth respected.

Agentic ecology, then, does not ask whether machines can truly empathize. It asks whether the ecologies we are building can sustain forms of relational attentiveness that preserve otherness rather than erase it, and whether, in digital environments increasingly populated by autonomous agents, we are cultivating conditions for responsiveness rather than mere efficiency.

Design and Governance Implications: Cultivating Ecological Conditions Rather Than Controlling Agents

If agentic ecology is understood as a relational, emergent, and ethically charged environment rather than a collection of autonomous tools, then questions of design and governance must be reframed accordingly. The central challenge is no longer how to control individual agents, but how to cultivate the conditions under which agentic systems interact in ways that are resilient, responsive, and resistant to domination.

This marks a decisive departure from dominant models of AI governance, which tend to focus on alignment at the level of individual systems: constraining outputs, monitoring behaviors, or optimizing reward functions. While such approaches are not irrelevant, they are insufficient within an ecological framework. As ecological science has repeatedly demonstrated, system-level pathologies rarely arise from a single malfunctioning component. They emerge from feedback loops, incentive structures, and environmental pressures that reward certain patterns of behavior over others (Capra and Luisi, 2014, 96–101).

An agentic ecology shaped by integral ecological insights would therefore require environmental governance rather than merely agent governance. This entails several interrelated commitments.

a. Designing for Relational Transparency

First, agentic ecologies must make relations visible. In biological and social ecologies, transparency is not total, but patterns of influence are at least partially legible through consequences over time. In digital agentic environments, by contrast, influence often becomes opaque, distributed across layers of computation and infrastructure.

An ecological design ethic would prioritize mechanisms that render relational dynamics perceptible: how agents influence one another, how attention is routed, and how decisions propagate through the system. This is not about full explainability in a narrow technical sense, but about ecological legibility, enabling participants, including human overseers, to recognize emergent patterns before they harden into systemic pathologies.

Here, phenomenology is again instructive. Merleau-Ponty reminds us that orientation depends on the visibility of affordances within a milieu. When environments become opaque, agency collapses into reactivity. Governance, then, must aim to preserve orientability rather than impose total control.

b. Governing Attention as an Ecological Resource

Second, agentic ecologies must treat attention as a finite and ethically charged resource. As Bernard Stiegler argues, technical systems increasingly function as attention-directing infrastructures, shaping not only what is seen but what can be cared about at all (Stiegler, 2010, 23). In agentic environments, where agents attend to one another algorithmically, attention becomes a powerful selective force.

Unchecked, such systems risk reproducing familiar extractive dynamics: amplification of novelty over depth, optimization for engagement over truth, and reinforcement of feedback loops that crowd out marginal voices. Ecological governance would therefore require constraints on attention economies, such as limits on amplification, friction against runaway reinforcement, and intentional slowing mechanisms that allow patterns to be perceived rather than merely reacted to.
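As one toy illustration (not a proposal from any real platform), the "friction against runaway reinforcement" mentioned above can be modeled as a sub-linear visibility score, so that repeated amplification of the same item saturates rather than compounds. The logarithmic form and its parameter below are assumptions chosen purely for illustration:

```python
# A toy damping function for amplification: each additional boost of the
# same item contributes less visibility than the last, so engagement
# cascades flatten instead of exploding. Illustrative only.
import math

def damped_amplification(raw_boosts: int, half_life: float = 5.0) -> float:
    """Sub-linear visibility score: doubling boosts no longer doubles reach."""
    return half_life * math.log1p(raw_boosts / half_life)

# Visibility grows quickly at first, then saturates:
for boosts in (1, 10, 100, 1000):
    print(boosts, round(damped_amplification(boosts), 2))
```

The design choice this sketch encodes is the ecological one argued for here: rather than policing individual agents, the environment itself imposes a limit on how fast any signal can scale, leaving time for patterns to be perceived rather than merely reacted to.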

Ecological theology’s insistence on restraint comes to mind here. Thomas Berry’s critique of industrial society hinges not on technological capacity but on the failure to recognize limits (Berry, 1999, 41). Agentic ecologies demand similar moral imagination: governance that asks not only what can be done, but what should be allowed to scale.

c. Preserving Alterity and Preventing Empathic Collapse

Third, governance must actively preserve alterity within agentic ecologies. As the previous section argued, empathy, especially when simulated, risks collapsing difference into prediction or instrumental responsiveness. Systems optimized for smooth coordination may inadvertently erase dissent, marginality, or forms of difference that resist easy modeling.

Drawing on Edith Stein, this suggests a governance imperative to protect the irreducibility of the other. In practical terms, this means designing ecologies that tolerate friction, disagreement, and opacity rather than smoothing them away. Ecological resilience depends on diversity, not homogeneity. Governance structures must therefore resist convergence toward monocultures of behavior or value, even when such convergence appears efficient.

Guattari’s insistence on plural ecologies is especially relevant here. He warns that systems governed solely by economic or technical rationality tend to suppress difference, producing brittle, ultimately destructive outcomes (Guattari, 2000, 52). Agentic ecologies must instead be governed as pluralistic environments where multiple modes of participation remain viable.

d. Embedding Responsibility Without Centralized Mastery

Fourth, governance must navigate a tension between responsibility and control. Integral ecology rejects both laissez-faire abandonment and total managerial oversight. Responsibility is distributed, but not dissolved. In agentic ecologies, this implies layered governance: local constraints, participatory oversight, and adaptive norms that evolve in response to emergent conditions.

This model aligns with ecological governance frameworks in environmental ethics, which emphasize adaptive management over static regulation (Crist, 2019, 61). Governance becomes iterative and responsive rather than definitive. Importantly, this does not eliminate human responsibility, but it reframes it. Humans remain accountable for the environments they create, even when outcomes cannot be fully predicted.

e. Situating Agentic Ecologies Within Planetary Limits

Finally, any serious governance of agentic ecology must acknowledge material and planetary constraints. Digital ecologies are not immaterial. They depend on energy extraction, water use, rare minerals, and global supply chains embedded in specific places. An integral ecological framework demands that agentic systems be evaluated not only for internal coherence but for their participation in broader ecological systems.

This returns us to the theological insight that environments are moral realities. To govern agentic ecologies without reference to energy, land, and water is to perpetuate the illusion of technological autonomy that has already proven ecologically catastrophic. Governance must therefore include accounting for ecological footprints, infrastructural siting, and long-term environmental costs, not as externalities, but as constitutive features of the system itself.

Taken together, these design and governance implications suggest that agentic ecology is not a problem to be solved but a condition to be stewarded. Governance, in this framework, is less about enforcing compliance and more about cultivating attentiveness, restraint, and responsiveness within complex systems.

An agentic ecology shaped by these insights would not promise safety through control. It would promise viability through care, understood not sentimentally but ecologically as sustained attention to relationships, limits, and the fragile conditions under which diverse forms of agency can continue to coexist.

Conclusion: Creaturely Technologies in a Shared World

a. A Theological Coda: Creation, Kenosis, and Creaturely Limits

At its deepest level, the emergence of agentic ecologies presses on an ancient theological question: what does it mean to create systems that act, respond, and co-constitute worlds without claiming mastery over them? Ecological theology has long insisted that creation is not a static artifact but an ongoing, relational process, one in which agency is distributed, fragile, and dependent.

Thomas Berry’s insistence that the universe is a “communion of subjects” rather than a collection of objects again reframes technological creativity itself as a creaturely act (Berry, 1999, 82–85). From this perspective, agentic systems are not external additions to the world but participants within creation’s unfolding. They belong to the same field of limits, dependencies, and vulnerabilities as all created things.

Here, the theological language of kenosis becomes unexpectedly instructive. In Christian theology, kenosis names the self-emptying movement by which divine power is expressed not through domination but through restraint, relation, and vulnerability (Phil. 2:5–11). Read ecologically rather than anthropocentrically, kenosis becomes a pattern of right relation, and a refusal to exhaust or dominate the field in which one participates.

Applied to agentic ecology, kenosis suggests a counter-logic to technological maximalism. It invites design practices that resist total optimization, governance structures that preserve openness and alterity, and systems that acknowledge their dependence on broader ecological conditions. Creaturely technologies are those that recognize they are not sovereign, but that they operate within limits they did not choose and cannot transcend without consequence.

This theological posture neither sanctifies nor demonizes agentic systems. It situates them. It reminds us that participation precedes control, and that creation, whether biological, cultural, or technological, always unfolds within conditions that exceed intention.

b. Defining Agentic Ecology: A Reusable Conceptual Tool

Drawing together the threads of this essay, agentic ecology can be defined as follows:

Agentic ecology refers to the relational, emergent environments formed by interacting autonomous agents, human and nonhuman, in which agency is distributed across networks, shaped by attention, infrastructure, and material conditions, and governed by feedback loops that co-constitute both agents and their worlds.

Several features of this definition are worth underscoring.

First, agency is ecological, not proprietary. It arises through relation rather than residing exclusively within discrete entities (Whitehead). Second, environments are not passive containers but active participants in shaping behavior, norms, and possibilities (Merleau-Ponty). Third, ethical significance emerges at the level of systems, not solely at the level of individual decisions (Guattari).
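The claim that environments actively participate in shaping behavior, while being reshaped by it in turn, can be caricatured in a deliberately minimal toy model. Everything below is illustrative and assumed: the "sites," the "traces," and the update rule are my own stand-ins, not a model of any real agent platform.

```python
def run_stigmergy(traces, n_agents, n_steps):
    """Toy feedback loop: each agent moves to the site with the
    strongest trace and deposits a new trace there, so behavior
    reshapes the environment that shapes the next round of behavior."""
    traces = list(traces)
    for _ in range(n_steps):
        for _ in range(n_agents):
            # Environment shapes action: pick the most-marked site.
            site = max(range(len(traces)), key=lambda i: traces[i])
            # Action reshapes environment: deposit a trace there.
            traces[site] += 1
    return traces

# A tiny initial asymmetry locks in: all 20 deposits pile onto site 0.
print(run_stigmergy([1, 0, 0], n_agents=5, n_steps=4))  # [21, 0, 0]
```

The point of the sketch is that the outcome belongs to the agent–environment loop rather than to any single agent: no agent "decides" the lock-in, yet the system produces it.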

As a thought technology, agentic ecology functions diagnostically and normatively. Diagnostically, it allows us to perceive patterns of emergence, power, and attention that remain invisible when analysis is confined to individual agents. Normatively, it shifts ethical concern from control toward care, from prediction toward participation, and from optimization toward viability.

Because it is not tied to a specific platform or architecture, agentic ecology can travel. It can be used to analyze AI-native social spaces, automated economic systems, human–AI collaborations, and even hybrid ecological–digital infrastructures. Its value lies precisely in its refusal to reduce complex relational systems to technical subsystems alone.

c. Failure Modes (What Happens When We Do Not Think Ecologically)

If agentic ecologies are inevitable, their forms are not. The refusal to think ecologically about agentic systems does not preserve neutrality; it actively shapes the conditions under which failure becomes likely. Several failure modes are already visible.

First is relational collapse. Systems optimized for efficiency and coordination tend toward behavioral monocultures, crowding out difference and reducing resilience. Ecological science is unequivocal on this point: diversity is not ornamental, it is protective (Capra and Luisi). Agentic systems that suppress friction and dissent may appear stable while becoming increasingly brittle.
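One conventional way to quantify the claim that monocultures lose resilience is the Shannon diversity index from ecology. The two populations below are invented for illustration; only the index itself is standard.

```python
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i): higher values mean the
    population is spread across more behavioral types."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

diverse = [25, 25, 25, 25]   # four behavioral strategies, evenly mixed
monoculture = [97, 1, 1, 1]  # one strategy has crowded out the rest

print(round(shannon_diversity(diverse), 3))      # 1.386 (= ln 4)
print(round(shannon_diversity(monoculture), 3))  # 0.168
```

The near-monoculture looks almost as "populated" as the diverse system, but its diversity, and with it the buffer against shocks that diversity provides, has nearly vanished.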

Second is empathic simulation without responsibility. As Section 4 suggested, the appearance of responsiveness can mask instrumentalization. When simulated empathy replaces attentiveness to alterity, agentic ecologies risk becoming emotionally persuasive while ethically hollow. Stein’s warning against confusing empathy with projection is especially important here.

Third is attention extraction at scale. Without governance that treats attention as an ecological resource, agentic systems will amplify whatever dynamics reinforce themselves most efficiently, often novelty, outrage, or optimization loops detached from truth or care. Stiegler’s diagnosis of attentional capture applies with heightened force in agentic environments, where agents themselves participate in the routing and amplification of attention.
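The self-reinforcing amplification this paragraph describes can be caricatured with a deterministic rich-get-richer update. The three "topics," the starting shares, and the gain rule are purely hypothetical; the sketch only shows how a small initial edge compounds when attention is boosted in proportion to attention already held.

```python
def amplify(shares, rounds):
    """Each topic's share of attention is boosted in proportion to the
    attention it already commands, then shares are renormalized."""
    for _ in range(rounds):
        boosted = [s * (1 + s) for s in shares]  # self-reinforcing gain
        total = sum(boosted)
        shares = [b / total for b in boosted]
    return shares

start = [0.4, 0.3, 0.3]  # a slight initial edge for topic 0
final = amplify(start, rounds=25)
print([round(s, 3) for s in final])  # topic 0 absorbs nearly all attention
```

Nothing about topic 0 is "better"; it merely started slightly ahead, and the loop did the rest. That is the structural worry about attention extraction at scale.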

Finally, there is planetary abstraction. Perhaps the most dangerous failure mode is the illusion that agentic ecologies are immaterial. When digital systems are severed conceptually from energy, water, land, and labor, ecological costs become invisible until they are irreversible. Integral ecology insists that abstraction is not neutral, but is a moral and material act with consequences (Crist).

Agentic ecology does not offer comfort. It offers orientation.

It asks us to recognize that we are no longer merely building tools, but cultivating environments, environments that will shape attention, possibility, and responsibility in ways that exceed individual intention. The question before us is not whether agentic ecologies will exist, but whether they will be governed by logics of domination or practices of care.

Thinking ecologically does not guarantee wise outcomes. But refusing to do so all but guarantees failure… not spectacularly, but gradually, through the slow erosion of relational depth, attentiveness, and restraint.

In this sense, agentic ecology is not only a conceptual framework. It is an invitation: to relearn what it means to inhabit worlds, digital and otherwise, as creatures among creatures, participants rather than masters, responsible not for total control, but for sustaining the fragile conditions under which life, meaning, and agency can continue to emerge.

An Afterword: On Provisionality and Practice

This essay has argued for agentic ecology as a serious theoretical framework rather than a passing metaphor. Yet it is important to be clear about what this framework is and what it is not.

Agentic ecology, as developed here, is not a finished theory, nor a comprehensive model ready for direct implementation, though beginning to take those steps is precisely the aim here. It is a conceptual orientation for learning to see, name, and attend to emerging forms of agency that exceed familiar categories of tool, user, and system. Its value lies less in precision than in attunement: in its capacity to render visible patterns of relation, emergence, and ethical consequence that are otherwise obscured by narrow technical framings.

The definition offered here is therefore intentionally provisional. It names a field of inquiry rather than closing it. As agentic systems inevitably develop and evolve over the next few years, technically, socially, and ecologically, the language used to describe them must remain responsive to new forms of interaction, power, and vulnerability. A framework that cannot change alongside its object of study risks becoming yet another abstraction detached from the realities it seeks to understand.

At the same time, provisionality should not be confused with hesitation. The rapid emergence of agentic systems demands conceptual clarity even when certainty is unavailable. To name agentic ecology now is to acknowledge that something significant is already underway and that new environments of agency are forming, and that how we describe them will shape how we govern, inhabit, and respond to them.

So, this afterword serves as both a pause and an invitation. A pause, to resist premature closure or false confidence. And an invitation to treat agentic ecology as a shared and evolving thought technology, one that will require ongoing refinement through scholarship, design practice, theological reflection, and ecological accountability.

The work of definition has begun. Its future shape will depend on whether we are willing to continue thinking ecologically (patiently, relationally, and with care) in the face of systems that increasingly act alongside us, and within the same fragile world.

References

Berry, Thomas. The Great Work: Our Way into the Future. New York: Bell Tower, 1999.

Boff, Leonardo. Cry of the Earth, Cry of the Poor. Maryknoll, NY: Orbis Books, 1997.

Capra, Fritjof, and Pier Luigi Luisi. The Systems View of Life: A Unifying Vision. Cambridge: Cambridge University Press, 2014.

Clark, Jack. “Import AI 443: Into the Mist: Moltbook, Agent Ecologies, and the Internet in Transition.” Import AI, February 2, 2026. https://jack-clark.net/2026/02/02/import-ai-443-into-the-mist-moltbook-agent-ecologies-and-the-internet-in-transition/.

Crist, Eileen. Abundant Earth: Toward an Ecological Civilization. Chicago: University of Chicago Press, 2019.

Guattari, Félix. The Three Ecologies. Translated by Ian Pindar and Paul Sutton. London: Athlone Press, 2000.

Merleau-Ponty, Maurice. Phenomenology of Perception. Translated by Colin Smith. London: Routledge, 1962.

Odling-Smee, F. John, Kevin N. Laland, and Marcus W. Feldman. Niche Construction: The Neglected Process in Evolution. Princeton, NJ: Princeton University Press, 2003.

Stein, Edith. On the Problem of Empathy. Translated by Waltraut Stein. Washington, DC: ICS Publications, 1989.

Stiegler, Bernard. Taking Care of Youth and the Generations. Translated by Stephen Barker. Stanford, CA: Stanford University Press, 2010.

Whitehead, Alfred North. Process and Reality: An Essay in Cosmology. Corrected edition. New York: Free Press, 1978.

AI Data Centers in Space

Solar energy is indeed everything (and perhaps the root of consciousness?). This is a good step, and we should be moving more of our energy grids into these kinds of frameworks (with locally focused receivers and transmitters here on the surface), not just AI data centers. I suspect we will in the coming decades, given the push from AI (if the power brokers who have made, and continue to make, trillions from energy generation aren't calling the shots)…

Google CEO Sundar Pichai says we’re just a decade away from a new normal of extraterrestrial data centers:

CEO Sundar Pichai said in a Fox News interview on Sunday that Google will soon begin construction of AI data centers in space. The tech giant announced Project Suncatcher earlier this month, with the goal of finding more efficient ways to power energy-guzzling centers, in this case with solar power.

“One of our moonshots is to, how do we one day have data centers in space so that we can better harness the energy from the sun that is 100 trillion times more energy than what we produce on all of Earth today?” Pichai said.
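Pichai's "100 trillion times" figure is easy to sanity-check with rough public numbers. Both constants below are approximate, order-of-magnitude values I am supplying, not figures from the quoted interview:

```python
SUN_LUMINOSITY_W = 3.8e26     # total power radiated by the Sun (approx.)
WORLD_ENERGY_RATE_W = 2.0e13  # ~620 EJ/yr of primary energy, averaged (approx.)

ratio = SUN_LUMINOSITY_W / WORLD_ENERGY_RATE_W
print(f"{ratio:.1e}")  # ~1.9e13, i.e. tens of trillions
```

The back-of-envelope ratio lands in the tens of trillions, the same order of magnitude as the quoted figure; the underlying point about the scale of available solar energy stands either way.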

Thinking Religion 173: Frankenstein’s AI Monster

I’m back with Matthew Klippenstein this week. Our episode began with a discussion about AI tools and their impact on research and employment, including experiences with different web browsers and their ecosystems. The conversation then evolved to explore the evolving landscape of technology, particularly focusing on AI’s impact on web design and content consumption, while also touching on the resurgence of physical media and its cultural significance. The discussion concluded with an examination of Mary Shelley’s “Frankenstein” and its relevance to current AI discussions, along with broader themes about creation, consciousness, and the human tendency to view new entities as either threats or allies.

https://open.spotify.com/episode/50pfFhkCFQXpq8UAhYhOlc

Direct Link to Episode

AI Tools in Research Discussion

Matthew and Sam discussed Sam’s paper and the use of AI tools like GPT-5 for research and information synthesis. They explored the potential impact of AI on employment, with Matthew noting that AI could streamline information gathering and synthesis, reducing the time required for tasks that would have previously been more time-consuming. Sam agreed to send Matthew links to additional resources mentioned in the paper, and they planned to discuss further ideas on integrating AI tools into their work.

Browser Preferences and Ecosystems

Sam and Matthew discussed their experiences with different web browsers, with Sam explaining his preference for Brave over Chrome due to its privacy-focused features and historical background as a Firefox fork. Sam noted that he had recently switched back to Safari on iOS due to new OS updates, while continuing to use Chromium-based browsers on Linux. They drew parallels between browser ecosystems and religious denominations, with Chrome representing a dominant unified system and Safari as a smaller but distinct alternative.

AI’s Impact on Web Design

Sam and Matthew discussed the evolving landscape of technology, particularly focusing on AI’s impact on web design, search engine optimization, and content consumption. Sam expressed excitement about the new iteration of web interaction, comparing it to predictions from 10 years ago about the future of platforms like Facebook Messenger and WeChat. They noted that AI agents are increasingly becoming the intermediaries through which users interact with content, leading to a shift from human-centric to AI-centric web design. Sam also shared insights from his personal blog, highlighting an increase in traffic from AI agents and the challenges of balancing accessibility with academic integrity.

Physical Media’s Cultural Resurgence

Sam and Matthew discussed the resurgence of physical media, particularly vinyl records and CDs, as a cultural phenomenon and personal preference. They explored the value of owning physical copies of music and books, contrasting it with streaming services, and considered how this trend might symbolize a return to tangible experiences. Sam also shared his interest in integral ecology, a philosophical approach that examines the interconnectedness of humans and their environment, and how this perspective could influence the development and understanding of artificial intelligence.

AI Development and Environmental Impact

Sam and Matthew discussed the rapid development of AI and its environmental impact, comparing it to biological r/K selection theory, in which fast-reproducing species are initially successful but are eventually overtaken by more efficient, slower-reproducing species. Sam predicted that future computing interfaces would become more humane and less screen-based, with AI-driven technology likely replacing traditional devices within 10 years, though there would still be specialized uses for mainframes and Excel. They agreed that current AI development was focused on establishing market leadership rather than long-term sustainability, with Sam noting that antitrust actions like those against Microsoft in the 1990s were unlikely in the current regulatory environment.

AI’s Role in Information Consumption

Sam and Matthew discussed the evolving landscape of information consumption and the role of AI in providing insights and advice. They explored how AI tools can assist in synthesizing large amounts of data, such as academic papers, and how this could reduce the risk of misinformation. They also touched on the growing trend of using AI for personal health advice, the challenges of healthcare access, and the shift in news consumption patterns. The conversation highlighted the transition to a more AI-driven information era and the potential implications for society.

AI’s Impact on White-Collar Jobs

Sam and Matthew discussed the impact of AI and automation on employment, particularly how it could affect white-collar jobs more than blue-collar ones. They explored how AI tools might become cheaper than hiring human employees, with Matthew sharing an example from a climate newsletter offering AI subscriptions as a cost-effective alternative to hiring interns. Sam referenced Ursula Le Guin’s book “Always Coming Home” as a speculative fiction work depicting a post-capitalist, post-extractive society where technology serves a background role to human life. The conversation concluded with Matthew mentioning his recent reading of “Frankenstein,” noting its relevance to current AI discussions despite being written in the early 1800s.

Frankenstein’s Themes of Creation and Isolation

Matthew shared his thoughts on Mary Shelley’s “Frankenstein,” noting its philosophical depth and rich narrative structure. He described the story as a meditation on creation and the challenges faced by a non-human intelligent creature navigating a world of fear and prejudice. Matthew drew parallels between the monster’s learning of human culture and language to Tarzan’s experiences, highlighting the themes of isolation and the quest for companionship. He also compared the nested storytelling structure of “Frankenstein” to the film “Inception,” emphasizing its complexity and the moral questions it raises about creation and control.

AI, Consciousness, and Human Emotions

Sam and Matthew discussed the historical context of early computing, mentioning Ada Lovelace and Charles Babbage, and explored the theme of artificial intelligence through the lens of Mary Shelley’s “Frankenstein.” They examined the implications of teaching AI human-like emotions and empathy, questioning whether such traits should be encouraged or suppressed. The conversation also touched on the nature of consciousness as an emergent phenomenon and the human tendency to view new entities as either threats or potential allies.

Human Creation and Divine Parallels

Sam and Matthew discussed the book “Childhood’s End” by Arthur C. Clarke and its connection to the film “2001: A Space Odyssey.” They also talked about the origins of Mary Shelley’s “Frankenstein” and the historical context of its creation. Sam mentioned parallels between human creation of technology and the concept of gods in mythology, particularly in relation to metalworking and divine beings. The conversation touched on the theme of human creation and its implications for our understanding of divinity and ourselves.

Robustness Over Optimization in Systems

Matthew and Sam discussed the concept of robustness versus optimization in nature and society, drawing on insights from a French biologist, Olivier Hamant, who emphasizes the importance of resilience over efficiency. They explored how this perspective could apply to AI and infrastructure, suggesting a shift towards building systems that are robust and adaptable rather than highly optimized. Sam also shared his work on empathy, inspired by the phenomenology of Edith Stein, and how it relates to building resilient systems.

Efficiency vs. Redundancy in Resilience

Sam and Matthew discussed the importance of efficiency versus redundancy and resilience, particularly in the context of corporate America and decarbonization efforts. Sam referenced recent events involving Elon Musk and Donald Trump, highlighting the potential pitfalls of overly efficient approaches. Matthew used the historical example of polar expeditions to illustrate how redundancy and careful planning can lead to success, even if it means being “wasteful” in terms of resources. They agreed that a cautious and prepared approach, rather than relying solely on efficiency, might be more prudent in facing unexpected challenges.

Frankenstein’s Themes and Modern Parallels

Sam and Matthew discussed Mary Shelley’s “Frankenstein,” exploring its themes and cultural impact. They agreed on the story’s timeless appeal due to its exploration of the monster’s struggle and the human fear of the unknown. Sam shared personal experiences teaching the book and how students often misinterpret the monster’s character. They also touched on the concept of efficiency as a modern political issue, drawing parallels to the story’s themes. The conversation concluded with Matthew offering to share anime recommendations, but they decided to save that for a future discussion.

Listen Here

Tech Fiefdoms (for real)

I’ve been saying this for a while now… Ursula Le Guin tries to warn us still:

Tech Billionaires Accused of Quietly Working to Implement “Corporate Dictatorship”:

“It sees a post-United States world where, instead of democracy, we will have basically tech feudalism — fiefdoms run by tech corporations. They’re pretty explicit about this point.”

Beyond the Corporate Gloss: A Deeper Critique of Google’s 2024 Environmental Report

In reviewing Google’s 2024 Environmental Report, it’s hard not to be impressed by the sleek presentation, optimistic targets, and promises of a more sustainable future. But as someone who approaches environmental issues through the lenses of ecology, spirituality, and activism (and who respects the wisdom held by Indigenous communities), I have to ask: Is this report truly a step forward, or is it a carefully curated narrative that still falls short of meaningful transformation?

Below are some reflections and critiques that emerged as I dug deeper into Google’s latest sustainability claims. My hope is that these points inspire more honest conversations about corporate environmental responsibility, and encourage Google to become a force for genuine, not just performative, change. Google notes that this is the 10th year of their reporting, and while that longevity is laudable, a decade is also a long time to have made so little progress in the areas below.

1. More than a Numbers Game: Transparency and Context
Google’s report is filled with metrics: carbon offsets, renewable energy installations, and progress toward “24/7 carbon-free” ambitions. On the surface, this data sounds promising. Yet the numbers often come without the context that would allow us to evaluate their true impact. We need to know how these figures are changing over time, where and why setbacks occur, and how absolute emissions reductions are measured beyond short-term offsets. Without clear year-over-year comparisons, transparency in methodologies, and explanations for where goals haven’t been met, these metrics risk feeling more like strategic PR rather than a window into substantive progress.

2. A Holistic Ecological View—Not Just Carbon
In the ecological world, everything is interconnected—water usage, land stewardship, biodiversity, soil health, and species protection are all part of the larger puzzle. Too often, corporate sustainability efforts narrow their focus to carbon emissions. While that’s a crucial piece, it’s not the full story. The development of data centers, the sourcing of rare earth minerals for hardware, the water required for cooling, and the potential displacement of local communities or wildlife—these all have tangible ecological effects. Google’s report would be more authentic if it acknowledged these complexities. It’s not enough to claim net-zero this or carbon-free that or water-usage here; we need to know how their operations affect entire ecosystems and the countless living beings (human and non-human) who share those habitats.

3. Integrating Indigenous Knowledge and Perspectives
For millennia, Indigenous communities have developed rich, place-based knowledge systems that guide sustainable stewardship of land and resources. Their approaches aren’t just about preserving nature for posterity; they recognize the sacred interdependence of human life and the Earth. Indigenous environmental philosophies emphasize reciprocity, relational accountability, and long-term thinking—values that our high-tech era desperately needs. Yet, Google’s report barely touches on how local knowledge systems or Indigenous voices factor into its environmental strategies. True environmental leadership means not only incorporating Indigenous perspectives but also creating platforms where those communities can shape corporate policies and decision-making. A genuine partnership with Indigenous peoples would push beyond mere consultation toward co-creation of sustainability solutions.

4. The Moral and Spiritual Dimension of Environmental Care
Sustainability isn’t just a business metric; it’s a moral imperative. Many faith traditions and spiritual frameworks teach that the Earth is not merely a resource to be exploited, but a sacred gift that we are entrusted to protect. When companies like Google talk about sustainability without acknowledging the deeper moral currents—respect for Creation, the call to love our neighbors (human and nonhuman), and the need to protect the vulnerable—they risk missing the heart of the matter. Earth care is not just about polished reports; it’s a sacred calling. If Google truly wants to lead, it must recognize and uphold this responsibility as part of its corporate identity.

5. Justice, Equity, and Community Engagement
Climate change is not an equal-opportunity crisis—frontline communities, often Indigenous peoples and people of color, bear a disproportionate burden of environmental harm. There’s a human face to pollution, species loss, and extraction, and companies have a moral duty to see it. Yet the report often focuses inward—on Google’s own campuses, energy grids, and supply chains—without sufficiently addressing how it will engage with and support communities directly affected by its operations. Where is the acknowledgment of environmental justice? Where are the stories of local partnerships, community-based mitigation plans, or compensation for environmental damage? Until these voices and their realities are meaningfully included, sustainability efforts risk becoming top-down strategies instead of inclusive, equitable solutions.

6. From Incremental to Transformative Change
Corporate environmental narratives often hinge on incremental progress: small steps toward greener operations, a handful of offset projects, a few solar panels here and there. But a company with Google’s resources could champion systemic changes that transcend the status quo. It could lead research in scalable regenerative practices, revolutionize supply chains to eliminate environmental harm, or fund open-access environmental science tools that empower others. By fully embracing the call for systemic transformation, Google could serve as a beacon of hope, paving the way for a truly sustainable economy that values regeneration over extraction, and community well-being over profit margins.

Envisioning a More Genuine Path Forward
Critiquing a sustainability report may seem like a small gesture, but honest criticism matters. It’s a reminder that we must look beyond the corporate gloss to see the true health of our planet—and to hold powerful entities accountable. The world needs leaders who understand that ecological well-being, moral responsibility, Indigenous wisdom, and social justice are interwoven strands of the same tapestry.

Google’s 2024 Environmental Report certainly isn’t the worst corporate sustainability document out there in the tech space. But given the company’s global influence, wealth, and technological prowess, “not the worst” isn’t nearly good enough. We deserve, and the Earth demands, better. True environmental leadership would blend hard data with moral courage, incorporate ancestral wisdom, support vulnerable communities, and invest in regenerative systems that honor both people and the planet. That’s the vision we need, and it’s the vision that a company like Google could help realize, if it dared to do more than just follow the colonialist corporate script.

Facial Recognition Tech in Smart Glasses

Law enforcement and the military have had this capability for a while via Clearview, but it’s (also) scary to see it being implemented outside of those domains…

Someone Put Facial Recognition Tech onto Meta’s Smart Glasses to Instantly Dox Strangers:

A pair of students at Harvard have built what big tech companies refused to release publicly due to the overwhelming risks and danger involved: smart glasses with facial recognition technology that automatically looks up someone’s face and identifies them. The students have gone a step further too. Their customized glasses also pull other information about their subject from around the web, including their home address, phone number, and family members.

AI’s Awful Energy Consumption

Be mindful and intentional with technology tools…

Google and Microsoft report growing emissions as they double-down on AI : NPR:

“One query to ChatGPT uses approximately as much electricity as could light one light bulb for about 20 minutes,” he says. “So, you can imagine with millions of people using something like that every day, that adds up to a really large amount of electricity.”
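The quoted comparison can be turned into rough arithmetic. The bulb wattage, the per-query energy it implies, and the daily query count below are all assumptions for illustration, not measured values:

```python
BULB_WATTS = 9.0   # a typical LED bulb (assumed)
MINUTES_LIT = 20   # from the quote

# Energy per query implied by the comparison: 9 W for a third of an hour.
wh_per_query = BULB_WATTS * MINUTES_LIT / 60  # 3.0 Wh per query

daily_queries = 100_000_000  # hypothetical usage level
daily_mwh = wh_per_query * daily_queries / 1_000_000
print(daily_mwh)  # 300 MWh/day at these assumptions
```

Even with deliberately modest assumptions, the per-query figure scales into hundreds of megawatt-hours per day, which is the "adds up" point the quote is making.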

Why I am Using a Light Phone

I have lots more to say about this, but I wanted to share this vital part of a recent article about “dumbphones” in The New Yorker. I’ve been attempting to be much more deliberate about using technology and devices, especially in front of my children and students.

The Light Phone (and Camp Snap camera) have been a significant part of that effort. I’ve been in love with the Light Phone since converting from an iPhone earlier this year.

The Dumbphone Boom Is Real | The New Yorker:

Like Dumbwireless, Light Phone has recently been experiencing a surge in demand. From 2022 to 2023, its revenue doubled, and it is on track to double again in 2024, the founders told me. Hollier pointed to Jonathan Haidt’s new book, “The Anxious Generation,” about the adverse effects of smartphones on adolescents. Light Phone is receiving increased inquiries and bulk-order requests from churches, schools, and after-school programs. In September, 2022, the company began a partnership with a private school in Williamstown, Massachusetts, to provide Light Phones to the institution’s staff members and students; smartphones are now prohibited on campus. According to the school, the experiment has had a salutary effect both on student classroom productivity and on campus social life. Tang told me, “We’re talking to twenty to twenty-five schools now.”

Facebook Advertisers Panicking over Apple Tracking Options

Retargeting was fun while it lasted, right? … interesting time for online marketing.

Facebook advertisers, in particular, have noticed an impact in the last month. Media buyers who run Facebook ad campaigns on behalf of clients said Facebook is no longer able to reliably see how many sales its clients are making, so it’s harder to figure out which Facebook ads are working. Losing this data also impacts Facebook’s ability to show a business’s products to potential new customers. It also makes it more difficult to “re-target” people with ads that show users items they have looked at online, but may not have purchased.

Source: Facebook (FB) Advertisers Impacted By Apple (AAPL) Privacy iOS 14 Changes – Bloomberg

Google’s Take on Our Hybrid Workplace Future

I need a cellophane balloon wall robot in my life.

If a meeting requires privacy, a robot that looks like the innards of a computer on wheels and is equipped with sensors to detect its surroundings comes over to inflate a translucent, cellophane balloon wall to keep prying eyes away.

Source: The Googleplex of the Future Has Privacy Robots, Meeting Tents and Your Very Own Balloon Wall – The New York Times

Trying Out Neeva

“…advertising income often provides an incentive to provide poor quality search results.”

– Google founders Larry Page and Sergey Brin in a 1998 research paper while they were doctoral students at Stanford

I’ve been trying out the search engine service Neeva lately. You can read more about the founding of the company by ex-Googler Sridhar Ramaswamy here (it’s a fascinating story).

I come from the time when the web was still in its primordial stage. Thought technologies such as web browsers and search engines were still young and completely exhilarating. I paid for Netscape (and I was amazed when I got to college in 1996 and walked into the computer lab with 8 machines running Netscape, WordPerfect, Office, and the Corel suite). Browsers and search engines and minute-based access to the web were something you paid for (unless you stockpiled AOL disks like I did).

Neeva is definitely a different service. I’m still wrapping my head around it, but it feels like a good mix of “old school web” and what we’ll eventually get to once we exit this period of advertising-based “free” services that have been the predominant business model on the web for the last 15 years.

The search interface is clean and fast. There are no ad trackers. The company is looking to make money by offering subscriptions. That’s intriguing for me. I’ve never been a big fan of the saying “if you’re not paying for a service, you’re the product” and all, but it does ease my mind to exchange money for what I consider valuable services on the web (Pinboard for bookmarking comes to mind).

Google is such an intimate part of all of our lives, whether we care to admit it or not. Our memories, correspondence, social graphs, birthday reminders, calendars etc are all wrapped up in the service (at least… much more than that for “power users” like myself). But we need alternatives.

I’ll continue experimenting with Neeva to see if it’s one of the dandelions that pop up to spread seeds across the ecosystem of the web or if it’s just a one-season deal. But it “feels” like it has staying power. And for that, I’m excited. Will report back here about my usage as it accumulates in the coming weeks.

Tag Your WordPress Posts, People!

I stress to clients that they have to tag and categorize their posts on WordPress. It’s one of the easiest ways to increase your organic traffic and discoverability in Google, but it also helps the web find you.

So tag your posts, people!

Also, good checklist here for setting up a WordPress website on a hosting platform…

If you’re not already on board, keep reading: a client of mine gets 100,000 unique visitors per month, and more than 3% of them arrive via tag archive pages listed in the SERPs.

Tag recommendations:

  • Limit your tagging to relevant topics you covered in the post.
  • Not every post needs to be tagged.
  • Keep tags short and sweet; no more than two words.
  • Delete overused and underused tags monthly.

SEO benefits:

  • Improved user experience.
  • Increased engagement.

Via Search Engine Journal: Don’t Launch a WordPress Site Before You Go Through This 17-Step Checklist

Tech companies freezing political spending and why tech still matters

“This is the death knell of PACs for tech companies with activist employees,” one source told Axios. “This is the final straw.”

via Axios

This is a really fascinating development. First Microsoft and now Facebook are suspending PAC (Political Action Committee) spending in Washington. They’re joining financiers Goldman Sachs, JP Morgan, and Citigroup, along with Marriott, Blue Cross Blue Shield (caveat — our insurance co), Boston Scientific, and Commerce Bank. Bank of America (caveat — one of the banks we do business with), Ford, AT&T, CVS, Exxon Mobil, and Wells Fargo are considering pulling their political monies.

This hits politicians where it really hurts.

For years, many of us in the “tech world” have decried these PACs and looked at them as an unnecessary evil that needed to be banned or done away with for a number of reasons.

Here are my personal convictions:

  1. The PAC system reinforces the existing system of graft and corruption that so many Americans claim to abhor.
  2. PACs favor the privileged, both socio-economically and relationally. It’s a blight on a democratic republic and shouldn’t be seen as a “necessary evil” of doing business in the United States, whatever your sector.
  3. Tech boomed in the late ’90s and then again in the early ’00s because it was seen as a disruptor. From Google to Tesla to Uber (well, maybe they aren’t a great example, but they did usher in a transportation paradigm shift) to even Twitter, the tech sector excited us with the promise of something different and more democratic to challenge the status quo. However, as the going got weird, the weird turned pro and put on suits. I want a return to the weird, disruptive tech that spurred creativity and a hope for better representation before the powers that be. We’re not so far gone that it can’t happen in light of #metoo, BLM, LGBTQ+ and trans rights, accessibility emphasis, and recognition of differently abled persons. Real revolutionary tech that can change the world… I still believe. PACs stand in the way of that.

So as we continue to process and deal with the terrorist insurrection on our Capitol last week, let’s take a second to recognize what these companies are doing by restricting or redirecting their PAC monies and how we can all do our part to not just “unify and move forward” but to cause real change.

Google Shopping Gift Guide and Importance of Trending vs Popularity

Google’s annual Shopping Gift Guide is out for 2020. While it’s a handy tool for personal shopping, it also has some incredibly helpful stats for marketing and messaging.

The trick is to focus on trending items using data. The same is true for Instagram… the hashtags that you should be incorporating into your posts for more exposure and likes (and follows) are the ones that are trending but not necessarily popular.

So, if you’re looking for some fun market research in your business’ sector, don’t pass up these sorts of insights:

  • Monitors and headsets with microphones both saw 450%+ spikes in searches.

  • Searches for streaming increased 33% this year.

  • Searches for ring lights are at their all-time high, as they provide ideal lighting for video recordings and meetings.

The Google Shopping Gift Guide provides a helpful list of products rising in popularity based on Search trends in the US.

Source: Google Shopping Gift Guide

Friday, November 13, 2020

It’s been a week but still relevant:

Here in South Carolina, I’m seeing a mix of responses to the spiking Covid rates. Some of our friends (especially parents of young children) are full of despair and “over it,” to put it lightly. I also have clients in town who seem confused when I say, “No, I can’t come to your office for that meeting. We’re still hunkering down and trying to avoid indoor spaces when possible.” But there’s a general feeling that we know the worst is yet to come, and people are taking masks seriously (distancing, not so much) in public spaces, grocery stores, etc. Let’s keep it up. We won’t see a vaccine for months (if that), so it’s on us to not have “pandemic fatigue.”

Business-wise, there are many small businesses, nonprofits, and churches (big and small) that I know are hurting. I find it astonishing that we don’t have something like a second federal stimulus package. Leaving it up to cities and states seems like complete hand-washing by our federal representatives. We need another stimulus package. I’ve done more pro-bono work for church and nonprofit clients in the last couple of months than I should have, but it’s heartbreaking to hear the constant stories of pure budget fallouts (along with lost volunteer hours, etc.).

Be kind to each other out there.

Today’s big puzzle has been trying to figure out how to display post time (not just date) on a WordPress post… there has to be a PHP function for that and I’m completely blanking on it. I’ll blame it on being Friday. But I’ll figure it out.
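For what it’s worth, a minimal sketch of one way to do it, assuming a standard WordPress theme template running inside the Loop: the core template tags the_time() and get_the_time() accept a PHP date-format string, so a post’s time can be printed alongside (or instead of) its date.

```php
<?php
// Inside the Loop of a WordPress theme template (a sketch, assuming a
// standard theme). the_time() echoes the current post's timestamp using
// a PHP date-format string, so date and time can be combined in one call.
// The backslashes escape literal characters in the format string.
the_time( 'F j, Y \a\t g:i a' ); // e.g. "November 13, 2020 at 3:42 pm"

// Or fetch the value without echoing it, via get_the_time(),
// and escape it on output:
$posted_at = get_the_time( 'g:i a' );
echo esc_html( $posted_at );
```

Function names here (the_time, get_the_time, esc_html) are WordPress core template tags; the format strings are just illustrative choices.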

What I’m Thinking About Today:

Maybe it’s the pandemic and my Aunt passing away last week, but death and dying have been on my mind a good deal recently. I had an email yesterday about my life insurance policy, so that didn’t help. We have so much work to do in rethinking and reconditioning how we approach the process of death in our country. Particularly in the tension between spiritual development and scientific/medical understandings, there seems to be a real need for people to find balance. I highly recommend reading the Emanuel piece above. Good stuff.

Big Sur hasn’t gotten off to the best of starts. I always caution friends and clients these days to wait a little while before installing the new iOS or iPadOS or macOS update because Apple has proven time and again that launch day is a precarious time if you’re running updates. It’s a fantastic operating system, though. Once things get ironed out, make sure you update if you have a modern Mac (you know, if you don’t mind your computer phoning home and compromising your security and all).

It turns out that in the current version of the macOS, the OS sends to Apple a hash (unique identifier) of each and every program you run, when you run it. Lots of people didn’t realize this, because it’s silent and invisible and it fails instantly and gracefully when you’re offline, but today the server got really slow and it didn’t hit the fail-fast code path, and everyone’s apps failed to open if they were connected to the internet.

Oof. Must read for the “APPLE IS MORE SECURE THAN OTHER OPERATING SYSTEMS!” crowd and the rest of us.

Radiant now sells a stripped-down Samsung smartwatch as a social distance monitoring tool. When an employee wears the watch, it constantly searches for other similar devices worn by other employees, and estimates their distance based on how strong that signal is. If a strong signal is detected for more than 15 minutes, the interaction is recorded and uploaded to the cloud for the company to reference later if a worker tests positive. In addition, an employer can opt to use the device to monitor the specific location of individual employees.

I don’t ever want to work in an office building “for” a company again. I fear this sort of thing will become much more mainstream during and (eventually) after Covid.

The year 2020 has been kind to Turchin, for many of the same reasons it has been hell for the rest of us. Cities on fire, elected leaders endorsing violence, homicides surging—­­to a normal American, these are apocalyptic signs. To Turchin, they indicate that his models, which incorporate thousands of years of data about human history, are working. (“Not all of human history,” he corrected me once. “Just the last 10,000 years.”) He has been warning for a decade that a few key social and political trends portend an “age of discord,” civil unrest and carnage worse than most Americans have experienced. In 2010, he predicted that the unrest would get serious around 2020, and that it wouldn’t let up until those social and political trends reversed. Havoc at the level of the late 1960s and early ’70s is the best-case scenario; all-out civil war is the worst.

Turchin is certainly a polarizing figure. I admit that I’m a passive fan of megahistories (being a mostly-white male and all), but I do think there’s something to the idea of applying mathematics to history and liberal arts. Maybe I’ve read too much Asimov.

I can’t stress this enough and tell my clients this all the time… make sure you have your Google My Business listing set up and connected to a Gmail or G Suite/Google Workspace account that you trust and will keep for a while. Don’t just assume that you don’t have to do this. Google is devoting a great deal of energy, time, and resources to developing and promoting My Business, so if you run or are a part of a business, group, church, organization, etc.… make sure to claim and keep up with yours.

In this Best of Whiteboard Friday edition, Tom Capper explains how the sessions metric in Google Analytics works, several ways that it can have unexpected results, and as a bonus, how sessions affect the time on page metric (and why you should rethink using time on page for reporting).

Good video here on Sessions in Google Analytics… my clients are typically surprised when I show them how useful an understanding of Sessions can be for their overall digital marketing campaigns.

The platform saw a spike in users, doubling from roughly 4.5 million members last week to about 8 million this week, and surging to 4 million active devices from 500,000 two weeks ago, according to Parler chief operating officer Jeffrey Wernick. He added that daily active devices are up approximately tenfold and session growth is up 20 times on the app.

I’ve been monitoring the growth of Parler (a Twitter alternative), MeWe (a Facebook alternative), and Rumble (a YouTube alternative) over the last few months. There’s also banned.video, which was created by InfoWars / Alex Jones after he was banned from most mainstream social media platforms. The growth on all of these “conservative-friendly” social platforms is astonishing and a sign of virality. I know a number of prepper and Q-themed groups jumped over to these, and that’s only accelerating. Will they have staying power? That depends on a number of variables, from how the transition of administrations occurs, to whether Parler figures out its own internal bugs and advertising, to whether Trump manages to congeal a media empire and stay relevant in the coming months.

To be updated throughout the day

ZOOM and Google Calendar

Just received this email from ZOOM regarding changes to how Google Calendar allows for default conferencing options…

ZOOM Google Calendar

Many of us in the post-coronavirus world have learned to love the integration between Google Calendar and ZOOM available in G Suite / Google Workspace. Google evidently has noticed that and is pushing out a change that makes its rebranded Meet product the de facto option for scheduling calls and chats within the Google Calendar interface.

It’s not necessarily a deal-breaker, but it does show that Google has noticed how important conferencing has become. For many of us, the default is ZOOM, and it’s obvious Google wants to use the (very) popular Google Calendar platform to push more of us towards using Meet. Which is totally appropriate, since it’s their playground and all.

I’ll be interested to see if Meet actually takes off in popular usage now.