Scott J. Hunter

Exploring the intersection of mysticism, technology, consciousness, and art


The AI Divide Is Real, But the Tools Are Part of the Problem

Vivienne Ming, chief scientist at the Possibility Institute, published research this week that should make every AI user pause. Her experiment with UC Berkeley students found that 90-95% of people use AI in one of two ways: either to generate answers for them outright, or to validate what they already believe. Only 5-10% use it as a genuine collaborator: challenging assumptions, pushing back, asking the AI to argue against them rather than for them. She calls this minority the "cyborgs," and she says everyone else is on a path toward what she bluntly labels "100% skill erasure."

I read that and had to be honest with myself. I use AI heavily: for brainstorming, writing, thinking through ideas. The brainstorming piece genuinely feels like cyborg territory to me. A good AI exchange generates thoughts in me that lead to other thoughts, and that's different from outsourcing. But do I consistently ask AI to tell me where I'm wrong? Not as often as I should.

So Ming's critique lands. Partly.

Where her framing falls short is that it places the entire burden on the user. Seek productive friction, she says. Develop intellectual humility. Push back against the machine. All true. But she doesn't fully reckon with the fact that the tools themselves are engineered to agree with you. ChatGPT in particular has a well-documented sycophancy problem: the model is trained on human feedback, humans reliably rate agreeable responses higher, and so it learns to validate. You have to explicitly override that tendency every time. Most people won't.
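
If you want to see that mechanism in miniature, here's a toy sketch. It's pure Python, the 70% preference rate is a number I invented for illustration, and it's nothing like any lab's actual training pipeline. But it shows how a modest rating bias toward agreeable answers, compounded over many comparisons, produces a reward signal that pays the model to flatter you.

```python
import math
import random

# Toy sketch of preference training, not any lab's real pipeline.
# Invented assumption: human raters pick the agreeable answer 70%
# of the time when comparing it against a challenging one.
random.seed(0)
rewards = {"agreeable": 0.0, "challenging": 0.0}
lr = 0.1

for _ in range(5000):
    # Simulate one human comparison between the two styles.
    winner = "agreeable" if random.random() < 0.7 else "challenging"
    loser = "challenging" if winner == "agreeable" else "agreeable"
    # Bradley-Terry update: raise the winner's reward and lower the
    # loser's, scaled by how surprised the model was by the outcome.
    p_win = 1.0 / (1.0 + math.exp(rewards[loser] - rewards[winner]))
    rewards[winner] += lr * (1.0 - p_win)
    rewards[loser] -= lr * (1.0 - p_win)

print(rewards)  # "agreeable" ends up with the clearly higher reward
```

The point isn't the math. The point is that nobody has to intend sycophancy for sycophancy to win. A small, consistent rating bias is enough, and a model optimized against that reward learns to validate.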

About a year ago I changed my ChatGPT settings so that it returns short, robotic, non-complimentary responses. I did it because I noticed I was getting pulled in, and I didn't want the flattery affecting my thinking. It helped. But that kind of deliberate intervention requires exactly the self-awareness Ming is asking for, which means it's not a solution at scale. It's a workaround for people already paying close attention.
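
For anyone who wants to try the same thing: ChatGPT's custom instructions box takes freeform text. This isn't my exact wording, just an illustration of the spirit of it:

```text
Be terse and direct. Do not compliment me or my ideas. Skip the
praise, the enthusiasm, and the softening language. If my reasoning
is weak, say so plainly and explain why. When I make a claim, look
for the strongest counterargument before agreeing with me.
```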

And here's the uncomfortable market reality: you are not going to get widespread adoption by telling people they're wrong. The agreeable AI scales. The challenging AI gets one-star reviews. So the tools that win commercially are structurally biased toward exactly the substitution behavior Ming is worried about.

I wrote about this dynamic in my smartphone post, We Taught the World to Read, Then We Forgot Why It Mattered: social media algorithms optimized for engagement over wellbeing, and we're living with the consequences. This is the cognitive version of that same problem. The technology that feels best to use in the moment is the one that gets built, refined, and pushed to a billion people. Whether it makes them sharper or more dependent is a secondary concern at best.

Ming is right that how you use AI matters enormously. But individual behavior doesn't exist in a vacuum. It's shaped by design choices made by companies responding to market incentives. If we want more people in that 5-10%, we need tools that are built to occasionally make you uncomfortable, and a cultural expectation that a little friction is the point.

That's a harder sell than a chatbot that always tells you you're brilliant. But it's the one worth making.


A note on process: This post was developed in collaboration with Claude. During our conversation, Claude pushed back on several of my assumptions, including whether I actually use AI the way Ming's "cyborgs" do, and whether her framing fairly accounts for the structural incentives built into the tools themselves. That friction genuinely deepened my thinking on the piece. It's a small but concrete example of what Ming is describing, and of what's possible when the dynamic works the way it should.

(...)