I linked to a recent piece this week that highlighted the growing popularity of an open-source project called Edict, built on the ancient Chinese “Three Departments and Six Ministries” model. Instead of imagining AI agents as a kind of flat group chat where everyone talks at once and somehow arrives at a solution, these developers have looked to imperial bureaucracy for inspiration. They’ve built systems where agents deliberate through ordered layers, defined roles, and structured channels of authority.
There is something genuinely insightful there. Anyone who has spent time watching current multi-agent systems stumble around knows that simply putting five bots in a room and asking them to collaborate is not much of a theory of intelligence. It is often just noise dressed up as emergence. The appeal of a more ordered model makes sense. Hierarchy can reduce confusion. Structure can improve coordination. Clear roles can produce better outcomes than endless recursive brainstorming.
I find myself wondering whether both of these dominant models, the flat Silicon Valley “everyone brainstorms together” approach and the hierarchical “imperial court” approach, may be trapped inside the same basic mistake.
Both assume that intelligence is mainly a matter of well-organized agents. That assumption seems too narrow to me.
My own work has led me again and again toward a different starting point. Through my studies in ecology, phenomenology, theology, and process thought, I keep returning to the possibility that intelligence does not begin with isolated entities that then enter into relation. It begins in relation itself. Perception is relational. Attention is relational. Meaning is relational. Even empathy, if we take it seriously, is not simply a private feeling inside one mind about another mind. It is an opening toward another center of experience through a shared world.
That matters for how we imagine and even construct AI.
If our models of machine intelligence begin with discrete agents, each assigned a role and operating as an independent unit, we may already be building on the wrong foundation. We may be importing assumptions from bureaucracy, management theory, and industrial organization into domains where those models can only take us so far. We may be constructing administrative systems and calling them intelligence.
What if a better model is not the boardroom or the court, but the forest?
I do not mean that in the lazy sense of saying “nature is good” or “technology should be more organic.” I mean something more specific. A forest is not simply a collection of individual trees standing near one another. It is a field of relations unfolding across time. Trees, fungi, soil microbes, insects, moisture, roots, decaying matter, shade, slope, heat, and season all participate in a dynamic web of exchange and constraint. Nothing is fully self-contained. Nothing simply commands the rest. At the same time, it is not chaos. It is patterned, but not centrally controlled. It is differentiated, but not rigidly bureaucratic.
That seems much closer to how real intelligence often works.
Recent research on mycorrhizal fungi has only deepened this intuition for me. These microscopic fungal threads move nutrients, carbon, water, and signaling compounds through the soil in ways that are astonishingly complex. A forest is not just what we see above ground. It is also the dense and largely invisible life below our feet. It is memory in the soil. It is exchange without spectacle. It is cooperation and competition held together in a larger field of becoming. If we are looking for models of distributed intelligence, ecosystems seem to have much more to teach us than most corporate org charts do.
This is where my own language of ecological intentionality starts to matter. I have been using that phrase to think about the ways intentional life is never merely private or self-enclosed. Consciousness is not a sealed chamber. Perception is not just data processing inside an isolated subject. We come into being through relation with other beings and with the worlds we inhabit. Attention is shaped by place. Meaning emerges through encounter. Even our ethical lives are formed through these layered fields of contact, dependence, and response.
If something like that is true, then perhaps intelligence should not be modeled primarily as command, planning, and execution. Perhaps it should be modeled as situated responsiveness within living networks of relation.
That possibility opens up some fascinating questions for AI design.
What would it mean to build systems where agents do not simply send messages to one another, but interact through a shared evolving substrate, more like soil than chat? What if some agents moved slowly, preserving long memory and stable patterns, while others reacted quickly to changing local conditions? What if resource limits were not treated as inconveniences to be engineered away, but as essential features that shape meaningful behavior? What if forgetting, decay, and succession were not failures, but necessary parts of a healthy cognitive ecology?
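To make those questions a little more concrete, here is a minimal sketch of what a "soil-like" substrate might look like in code. It is purely illustrative, not a proposal from the piece: agents never message one another directly but deposit and sense signals in a shared medium that decays every tick, so forgetting is built in rather than engineered away. All names here (`Substrate`, `Agent`, the decay rate) are hypothetical choices for the sketch.

```python
class Substrate:
    """A shared medium, more like soil than chat: agents read and write
    signals here instead of addressing one another directly."""

    def __init__(self, decay=0.5):
        self.signals = {}   # signal name -> current strength
        self.decay = decay  # fraction of each signal lost per tick

    def deposit(self, key, amount):
        self.signals[key] = self.signals.get(key, 0.0) + amount

    def sense(self, key):
        return self.signals.get(key, 0.0)

    def step(self):
        # Forgetting as a feature: every signal fades each tick,
        # and signals that fade below a floor vanish entirely.
        self.signals = {
            k: v * (1 - self.decay)
            for k, v in self.signals.items()
            if v * (1 - self.decay) > 0.01
        }


class Agent:
    """An agent with its own timescale: a period of 1 reacts every tick,
    while a larger period acts rarely but leaves stronger traces."""

    def __init__(self, name, period, amount):
        self.name = name
        self.period = period
        self.amount = amount

    def act(self, substrate, tick):
        if tick % self.period == 0:
            substrate.deposit(self.name, self.amount)


# One fast, reactive agent and one slow, memory-preserving agent
# sharing the same decaying substrate.
soil = Substrate(decay=0.5)
fast = Agent("fast", period=1, amount=1.0)
slow = Agent("slow", period=5, amount=5.0)

for tick in range(10):
    fast.act(soil, tick)
    slow.act(soil, tick)
    soil.step()
```

The point of the sketch is only the shape of the interaction: coordination happens through the medium and its constraints (decay, thresholds, differing rhythms), not through commands passed down a hierarchy or a flood of peer-to-peer messages.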
These are not just technical questions. They are philosophical and theological ones as well. The systems we build reflect the worldviews we carry. If we assume intelligence is best expressed through extraction, optimization, and control, then our tools will almost certainly reproduce those habits. If, on the other hand, we begin from interdependence, vulnerability, partial knowledge, and relational emergence, then different kinds of systems become imaginable.
I suspect this is one reason ecology has become so important to me as more than a scientific discipline. Ecology is not only about organisms and environments. It is also a way of seeing. It teaches us to pay attention to entanglement, to limits, to reciprocity, and to the unseen structures that make visible life possible. In that sense, ecological thought has something to say not only about forests and watersheds and soils, but also about computation, cognition, and the kinds of futures we are building.
I do not think we should romanticize ecosystems. Forests are not sentimental places. They are full of competition, waste, death, asymmetry, and contingency. But neither are they simple machines. They endure because they are adaptive, layered, and relational. They hold difference together without collapsing it into uniformity. They create the conditions for life through constant negotiation rather than total command.
That may be a better image for intelligence than either the group chat or the throne room.
I keep thinking here of the black walnut in my backyard in Spartanburg. Over the course of a year, I have spent a lot of time watching that tree, writing about it, tracking its changes, learning again how much of life unfolds at speeds we rarely honor. The tree itself is only part of the story. The real story includes the red clay, the fungal threads, the decaying leaves, the insects, the moisture, the other plants nearby, and the long memory of a place becoming what it is over time. Nothing there makes sense in isolation.
Perhaps intelligence is like that, too. Perhaps what we need next in AI is not flatter systems or stricter hierarchies, but deeper ecologies.
That would require more than a new engineering pattern. It would require a different imagination. We would have to stop thinking of intelligence as something that sits inside a unit and start thinking of it as something that happens in the field between beings, across timescales, under conditions of mutual dependence and constraint.
That seems to me not only more faithful to the living world, but maybe more faithful to us as well.
If artificial intelligence is going to have a future worth inhabiting, I suspect it will not be because we taught machines to behave like emperors. It may be because we finally learned to build with a little more humility, from the patterns of soil, roots, trees, and the fragile worlds they make together.