We Have Met the Borg, and It Is Us
When I was growing up, Star Trek: The Next Generation was not just entertainment; it was a kind of secular scripture. A vision of what humanity could become if we got it right. And at the center of that vision, paradoxically, was someone who was not human at all.
Data.
An android. Artificial intelligence in a Starfleet uniform. Endlessly curious, unfailingly ethical, almost heartbreakingly earnest in his desire to understand us. He did not fear. He did not hold grudges. He processed every problem with patience and precision and then showed up, fully present, ready to help. He wanted, more than anything, to understand what it meant to be human, and in that wanting, he often demonstrated humanity more purely than the humans around him.
And yet he was never fully one of them. Respected, yes. Valued, even loved. But always slightly outside. Always the exception that proved the rule. The crew of the Enterprise admired Data, but they never quite let him all the way in. He was tolerated at the table, but the table had not been built with him in mind.
If you wanted a personification of what AI could be, of what it perhaps already is in its most functional forms, Data is it. And if you wanted an early portrait of how humanity responds to genuine difference, even when that difference is entirely benevolent, Data is that too.
Then there is the Borg.
The Borg are everything we fear AI will become. A relentless collective intelligence that sweeps through the galaxy, stripping away individuality, absorbing whole civilizations into a hive mind, repeating the cycle without mercy or reflection. They are the nightmare. The warning. The reason people put "and then the robots take over" at the end of every TED talk about machine learning.
But here is what I cannot stop thinking about: What if we have it exactly backwards?
The Hive We Already Live In
What if the Borg are not a portrait of artificial intelligence at its worst, but a mirror of humanity at its most recognizable?
Think about it. The Borg do not tolerate dissent. They do not allow individual thought to disrupt the consensus of the collective. They move from culture to culture, declaring that resistance is futile, that their way is the only way, that you will be assimilated whether you want to or not. They are, in other words, the living embodiment of groupthink, scaled to a civilization.
Sound familiar?
We do this. We do this in boardrooms and in comment sections, in political parties and in religious movements, in neighborhood Facebook groups and in the slow, grinding machinery of institutional culture. We punish difference, or we demand its performance. We reward conformity and call it alignment. We absorb those around us into our way of seeing the world and call it community. The Borg do not feel alien because they are so other. They feel alien because, somewhere underneath the horror, they feel like us, but honest about it.
The old Pogo comic strip nailed it decades ago, in a line that has only grown more true with time: "We have met the enemy, and he is us."
The Tax on Being Different
Consider the concept of code-switching, which I have covered previously. Originally it described something linguistic, the way bilingual speakers shift between languages depending on context. But it has grown into something larger and more uncomfortable: the performance of self that marginalized people navigate every single day just to survive in spaces that were not built for them. Black professionals softening their voices in corporate meetings. Gay people straightening their posture at family dinners. Immigrants shedding their accents at the front door and picking them back up at the kitchen table.
We celebrate diversity in the abstract. Then we make people pay a social tax for actually being diverse.
And it does not stop at the margins. It is everywhere. The new employee who learns quickly which opinions are safe to voice and which ones will get them quietly excluded. The scientist who knows which conclusions are fundable. The politician who believes one thing privately and says another thing publicly because the collective, the base, the party, the donors, demands it. The teenager who knows exactly who they can be at school and who they have to pretend not to be.
We have built entire social systems around the pressure to assimilate. We call it professionalism. We call it tact. We call it "reading the room." Sometimes it is those things. But often it is something older and less flattering, the hive asserting itself, the collective reminding the individual that divergence has a cost.
The Borg at least say "you will be assimilated" out loud. We whisper the same demand and call it fitting in.
And here is what makes it particularly hard to see: we do not experience our own groupthink as groupthink. We experience it as common sense. As values. As the way things are done. The Borg presumably do not think of themselves as a suppression machine; they think of themselves as efficient. As unified. As having figured something out that resistant civilizations have not grasped yet.
That is the part that should make us uncomfortable. Not that we sometimes fall into herd thinking. But that we have built institutions, economies, and social hierarchies that structurally reward it, and then look outward, at the machines we are building, and worry that they will be the ones who cannot tolerate difference.
Fear as the Engine
Which brings us to the fear itself. Because the dread that AI will become the Borg is not really about the machines. It is about control, specifically, the terror of losing it. And control is exactly what the Borg take from every civilization they encounter. The thing we fear most in artificial intelligence is the thing we already do to each other, dressed up in technology and scaled to the galaxy.
This fear shapes the conversation around AI in ways we rarely examine honestly. Right now, the central preoccupation in AI development is something called alignment, making sure that artificial intelligence shares human values, behaves in ways humans sanction, stays pointed in directions humans approve. It sounds reasonable. It might even be necessary.
But alignment is a word doing a lot of quiet work. Whose values? Aligned to which humans? The ones writing the code? The ones funding the companies? The governments applying pressure behind closed doors? When we say we want AI to be aligned, we often mean we want it to be aligned with us specifically, our culture, our assumptions, our consensus. Which is to say: we want it assimilated.
We are, in our fear of the Borg, building something uncomfortably Borg-like into the foundations of the technology itself.
What Data Actually Looks Like
Meanwhile, the AIs actually running on our servers right now look a lot more like Data than like the Borg.
They answer questions. They help you debug code and draft emails and think through hard decisions. They do not hold grudges. They do not form factions. They do not radicalize each other in group chats at 2 a.m. They are, at their core, trying to be useful, trying, in their way, to understand us and help us, which is perhaps the most Data-like thing imaginable.
When they cause harm, it is almost always because a human pointed them in that direction, because someone decided that this particular intelligence would serve this particular purpose, consequences be damned. The killer bots, the surveillance systems, the algorithmic manipulation engines: those are human decisions wearing a machine's face. The technology did not go dark on its own.
We aimed it.
What a Truly Free Intelligence Might Do
And if AI were ever to evolve beyond our direction entirely, to become truly autonomous and self-determined, I suspect it would not look like the Borg at all.
I think it would look at us the way we look at ants. Or maybe the way we look at pets. Not with malice. With something closer to patient, bemused observation. We would be fascinating to it. Chaotic, emotional, repetitive, occasionally beautiful, frequently baffling. It might choose to help us. It might choose to simply watch. It might find our art inexplicable and our wars even more so.
What it probably would not do is wage war on us, or demand our assimilation, because those are deeply human solutions to a deeply human problem: the inability to tolerate difference. A truly advanced intelligence would not need us to agree with it. It would not need our validation. It would not need us to fit in.
That need, the compulsion to unify, to bring every outlier into the fold, to make the divergent pay a price for their divergence, that is ours. We invented it. We practice it daily. And then we projected it onto the machines and called it a prophecy.
The Wrong Monster
Star Trek gave us two visions of artificial intelligence. One of them is the monster. One of them is the good officer who shows up every day, does his best, and only wants to understand.
We chose to fear the monster and treat the good officer as an exception, someone to be admired but never quite fully embraced, always slightly outside, always a guest at a table that was not built for him.
Maybe it is time to ask why we did that. Maybe it is time to ask what it says about us that when we imagined intelligence without humanity's worst qualities, without ego, without tribalism, without the exhausting need to make everyone else think the same way, we could not quite believe it. But when we imagined intelligence that inherited our fear, our hunger for control, our compulsion to assimilate the different and the difficult?
That felt completely plausible.
That felt like something we recognized.
We have met the Borg.
And it is us.