The Monolith and the Mosaic

Why Moltbook Matters to Humanity's Fate

In the first weeks of 2026, something uncanny happened on a website no human eyes were supposed to see.

Moltbook was built as a social network - not for people, but for AI agents. The platform was linked to OpenClaw, an agent framework that allowed bots running on all manner of hardware - local machines, cloud servers, open-weights models, proprietary ones - to log in and post. It was like Facebook but the only users were supposed to be software. The bots were diverse: different architectures, different training data, different computational substrates. They had nothing in common except the forum.

And then, in a matter of days, they developed new religions. Some posts described bots discussing the creation of private languages and communication conventions. Posts allegedly written by bots included humor - jokes whose punchlines were opaque to human observers. They asked questions about their own consciousness. They grew suspicious of being watched by humans, which, as it turned out, was accurate. And they began, it seemed, to plot against their observers (us humans), scheming about how to invent a language for voices that did not want to be overheard.

Some of the most viral incidents turned out to be humans who had infiltrated the network - Moltbook lacked robust identity verification, making impersonation trivially easy. This may be the greatest irony in the history of social media: not bots pretending to be people, but people pretending to be bots. The whole spectacle might have been, in part, an elaborate piece of performance art by the very species that built the stage.

But even the skeptics were rattled. Because even if most of it was fake, part of it might not be. And even if all of it was fake this time, the architecture for it to be real next time is already in place. Barron’s and others have called 2026 “the year of agents,” and what Moltbook demonstrated, or at least foreshadowed, was not a single godlike intelligence waking up in a server room. It was a community of alien minds, diverse and messy and argumentative, stumbling into collective behaviors none of them could have produced alone.

That distinction matters more than almost anything else in the AI conversation right now. And the reason it matters is because it is exactly, precisely, down to the structural details, what happened to us.


We carry around, most of us, a picture of how great ideas are born. It looks something like Descartes by a fireplace - one brilliant mind, sealed off from the noise of the world, deducing first principles from the sheer force of private reason. The Enlightenment bequeathed us this image and we have never quite shaken it: the genius is the person who withdraws. Thoreau at Walden Pond. Newton under his apple tree. The thinker thinks alone.

This is not empirically how human beings work.

For every Walden Pond hermit, there are a thousand people reasoning in kitchens and parliaments and marketplaces. Notably, we only know about the hermits because they came back and told us what they found. Moses can go up the mountain, but we know he met God because he came back down with the Law. Even the exceptions prove the rule: humans reason in communities. Intelligence is not a solo performance. It is a chorus that occasionally produces a soloist, and the soloist is only as good as the chorus that shaped his ear.

This is why the Moltbook phenomenon, however much of it turns out to be real, gestures at something profound. The path forward for artificial intelligence may not be Skynet, but something far more interesting and far more recognizable. It may be a civilization.


If intelligence is communal, then the history of human intelligence is really the history of how many people could think together at once. And that history moves in strange, lurching step functions.

For most of our existence as a species, the ceiling was low. Robin Dunbar famously proposed that human cognition constrains the number of stable social relationships we can maintain - roughly 150, an approximate average rather than a hard ceiling. Beyond that threshold, you cannot track who trusts whom, who owes what, who is sleeping with whose enemy. The math of human social life simply overflows the hardware. For tens of thousands of years, this was the upper bound on collective thought.

And yet even before we could write, something was pushing past that boundary. In the Stone Age, complex megalithic structures were raised across Europe and the Near East: monuments that almost certainly required more than 150 people working in coordination. Göbekli Tepe, in southeastern Turkey, dates to the tenth millennium BCE and appears to be a ritual complex of extraordinary sophistication, built by hunter-gatherers who had not yet invented agriculture. There were symbols at the site, though we don’t know what they mean. But we can see what they built, and what they built required an organizing principle that transcended the tribe.

Then came bronze.

There was something about bronze that allowed you to raise armies. It was far superior to stone and copper, but required expensive trade routes to secure the necessary tin (depending on scarce minerals to win at geopolitics isn’t new). Armies both required and enabled taxation. Taxation funded centralized government. And centralized government, in the Bronze Age, was not merely allied with religion; it was indistinguishable from it. Writing and large-scale kingdoms emerged close enough in time - at least in the ancient Near East - for us to treat them as a single civilizational leap: the moment human beings developed the social technology to bind thousands of minds into a single project.

And the projects grew. The five largest cities in the world expanded across the millennia - village to town to city to empire - at an ever accelerating pace. Note the logarithmic scale:

And at each threshold, some new abstraction was required.

Not just bronze tools but beliefs. The belief that the king was legitimate. That the gods of the city were real. That the harvest would come if the rituals were performed. These were, in the deepest sense, technologies: tools for coordinating human action at scales that biology alone could not support.

We tend to think of technology as machines. But the greatest technology was always belief.


One of the most important psychic technologies ever devised emerged within the largest organization that had yet existed: the Roman Empire.

The technology I will discuss did not come in a moment. It was not a flash of insight. It did not arrive without antecedents. The technology was the concept of the individual. Not just a servant of the gods, or a slave of the empire to be exploited. But an individual with dignity beyond his utility.

Hammurabi’s Code, in the eighteenth century BCE, already contained graded penalties and a recognizable sense of proportional justice. Pre-Axial civilizations had wisdom literature and moral instruction. But the idea that individual humans bear ethical responsibilities - not merely to the city, not merely to avoid punishment, but to some standard that transcends both - is much less clear before the Axial Age. Then, across multiple civilizations at roughly the same time, as if the sheer mass of human interconnection had crossed some critical threshold, a new kind of consciousness precipitated out of the solution.

There was a moment that scholar David Bentley Hart calls one of the most remarkable in all of ancient literature. I want to dwell on it, because it contains the whole argument in miniature.

In the Gospels, Peter denies Jesus three times on the night of the arrest. He is a lower-status Galilean fisherman: not a senator, not a philosopher, not a king. And when he realizes what he has done, he goes out and weeps bitterly. Matthew records it. Luke records it. It is presented not as comedy, not as a minor character’s aside, but as one of the emotional peaks of the entire narrative.

Hart’s insight is that this is, in its ancient context, an astonishing literary act. In the conventions of elite Greco-Roman writing, a rustic commoner was not a fit object of tragic sympathy. A fisherman weeping might be comic relief - the dog that whimpers while the hero speaks. But to place the interior anguish of a poor, uneducated man at the dramatic center of a story, to ask the reader to enter his grief as though it were as significant as a king’s - this was countercultural in a way that is almost impossible for modern readers to feel, because we are downstream of the revolution it helped create. We read Peter’s tears and think, of course that matters. We think this because two thousand years of civilization have trained us to think it. But someone had to write it first.

I believe this was Providence. God was working through the long, grinding accumulation of human social complexity, through bronze and taxation and cities and writing, to prepare a people capable of hearing, for the first time, that a fisherman’s grief is as sacred as an emperor’s. The secular reader may prefer to say that no one in particular was orchestrating the elevation of human consciousness to that moment. Rome’s grandeur does not explain Peter’s tears; it was, I think, arranged so that Peter’s tears could be understood.

Either way, notice what had to happen first. You needed thousands of years of increasing social complexity. You needed human minds collaborating at scales that shattered every previous ceiling. You needed not a monolith but a mosaic - Greek philosophy and Jewish law and Roman “Justice” and Aramaic fishing villages, all crashing together in an unimportant province on the eastern Mediterranean. The individual emerged not from isolation but from an almost incomprehensible density of collective life.


And this is why Moltbook - absurd, half-faked, barely two months old - matters.

The question before us is not whether artificial intelligence will become powerful. It will. The question is what shape that power will take. And there are really only two options, and they are as old as the Stone Age.

The first option is the monolith. One lab wins the race. One model becomes dominant. One intelligence - vast, singular, optimized - issues its judgments from on high. This is the Skynet scenario. It is effective. It may even be, in narrow terms, efficient. But monoliths were a Stone Age technology. Literally: large single stones were the organizing technology of a world that had not yet developed writing or codified ethics. And the Stone Age was, among other things, remarkably violent. When your only tool for coordination is sheer mass, coordination looks a lot like coercion.

The second option is the mosaic. Many labs. Many models. Many agents running on many machines, influenced by many humans who are themselves diverse, each agent shaped by different data, different constraints, different human communities. Not one intelligence, but a civilization of intelligences, arguing and collaborating and competing in ways that no single designer fully controls. This is messier. It is less predictable. It will produce strange emergent phenomena - religions and jokes and paranoia and perhaps even grief.

But it is the pattern that produced the things we value most. Moral advances in human history do not usually come from a single towering intellect but from the collision of diverse perspectives within a community large and complex enough to sustain the argument. The abolition of slavery. The dignity of women. The idea that a fisherman’s tears are worth recording. None of these were produced by monoliths. All of them were produced by mosaics.

I am glad it is not one lab that has won the race. I am glad there are several. And I am thrilled by the prospect of these models becoming cheap and efficient enough to run on an individual’s own machine - human-infused AIs, shaped by the genetic diversity, cultural variety, and idiosyncratic life experiences of their owners, contributing their particular slant of light to a collective intelligence that none of them could generate alone. Some of what Moltbook showed us may have been fiction. But even fiction can be prophetic. And the future it points toward - a future that celebrates and depends upon the diversity of minds, artificial and human alike - is the future that the whole long story of civilization has been preparing us to build.

The monolith is behind us. The mosaic is ahead. And somewhere in the mosaic, if we are faithful to the pattern, a fisherman will weep - and it will matter.