Scott J. Hunter

Exploring the intersection of mysticism, technology, consciousness, and art


Discrete AI: The Case for Technology That Gets Out of the Way

[Image: a wooden cabinet, used here as a metaphor for quieter technology.]
Just put it in a metaphorical cabinet.

There is something quietly absurd happening right now in the relationship between people and artificial intelligence. Half of the country is using it. Many of those same people will tell you they don't trust it, don't want it, and would prefer companies keep it away from their products. Meanwhile, those companies are busy stamping "AI-powered" on everything they make and watching their customers back slowly toward the exit.

Nobody seems to have noticed that both groups are making the same mistake, just from opposite ends.


We Have Been Here Before

When the printing press arrived, clergy feared it would corrupt knowledge by putting it in the hands of people unqualified to receive it. When electric lighting came along, Edison had to stage elaborate public demonstrations just to convince people it wasn't dangerous to have in their homes. When the telephone was introduced, people genuinely refused to use it. The idea of a disembodied voice traveling through a wire felt unnatural, even threatening. Radio was going to destroy family conversation. Television was going to rot children's brains.

Every single one of those technologies is now invisible. Not because the fear was stupid; some of it was reasonable given what people knew at the time. But because the technology eventually stopped asking to be noticed.

There is a detail from early consumer technology history worth pausing on. From the telephone to the radio to the television, manufacturers responded to each new threatening technology the same way: they put it in a wooden cabinet. Ornate, domestic, designed to look like something that had always belonged in the room. That wasn't purely aesthetic. It was a deliberate effort to reduce the psychological footprint of something that felt foreign and threatening. They understood, even then, that the path to adoption ran directly through invisibility.

That instinct was correct. It has always been correct.


What the Research Is Actually Telling Us

Researchers at Washington State University tested consumer responses to identical products across multiple categories: vacuum cleaners, televisions, health services, consumer services. The only variable was whether the product description mentioned artificial intelligence. In every single category, purchase intention dropped the moment AI was mentioned. Every one. The underlying Journal of Hospitality Marketing & Management study found that including the term "Artificial Intelligence" lowered purchase intention, with emotional trust mediating the effect. Washington State's summary put the takeaway even more plainly: when AI is mentioned, it lowers emotional trust, which in turn lowers the likelihood someone will buy or use the product.

A separate academic study added a finding that should stop product teams cold. The AI Invisibility Effect, a 2026 arXiv preprint by Obada Kraishan, analyzed more than 1.4 million mobile app reviews and found that AI features often operate below user awareness thresholds entirely. It is not the presence of AI that drives negative evaluations. It is the explicit recognition of it. The label is the problem, not the technology.

You would think this would prompt a rethinking of strategy. Instead, the market gave us Duolingo's CEO announcing the company was "AI first", a statement users received as a dismissal of human labor, particularly coming on the heels of contractor layoffs. It gave us fashion brand Selkie facing an uprising from its own loyal customers after deploying AI in its design process. It gave us McDonald's pulling an AI-generated Christmas ad after viewers called it soulless and said it had, in one comment that made the rounds, "ruined their Christmas spirit".

Meanwhile, Deloitte's 2025 Connected Consumer survey found that 53% of surveyed U.S. consumers already use or experiment with generative AI. More than half. Using it quietly, using it practically, using it without making it their personality or their brand identity.

The people are ahead of the companies. They figured out something the marketing departments haven't: you don't announce the tool. You just use it.

This is the same tension I was circling in The AI Divide Is Real, But the Tools Are Part of the Problem. People are already using these systems, but the tools and the messaging around them often make the experience feel more threatening, not less.


2am in the Edit Suite

I spent years as a film editor working on Avid systems. If you haven't been in a professional edit suite, imagine a room that exists slightly outside of normal time: bad lighting, cold coffee, a deadline that was technically yesterday.

I remember sitting there at 2am with a Wacom pen in my hand, tapping through cuts, adjusting, feeling my way through a sequence. I was not thinking about the Avid. I was not thinking about the Wacom drivers, the processing happening underneath, the years of engineering that made that pen feel like a natural extension of what I was trying to do. I was thinking about the edit. I was inside the work.

That is what successful technology feels like from the inside. It disappears. The complexity doesn't go away; the Avid was an extraordinarily sophisticated system. But the interface had been designed so that none of that complexity had to live in my conscious attention. The tool dissolved and left only the work.

That experience has a name in certain circles. Mihaly Csikszentmihalyi called it flow. Craftspeople have known it for centuries. The best tool is the one you forget you're holding.

AI, as it is currently being deployed and marketed, is the opposite of that. It is a tool that keeps interrupting the work to remind you it's a tool.


Discrete AI

This is the argument: we don't need better AI. We need quieter AI.

The companies winning the next decade won't be the ones with the most prominent AI features or the most aggressive "AI-powered" badging. They will be the ones whose AI is so well integrated into the experience that users never have to think about it. The ones who understood that the goal was never to impress people with the technology. The goal was always to help people do the thing they came to do.

This already exists in pockets. Your spam filter has been running sophisticated machine learning for years and nobody has ever felt threatened by it. Spotify's recommendation engine is deeply complex and you just call it "finding good music." Google Maps is making hundreds of calculations per second and you just call it "not getting lost." The AI in those products isn't hidden out of deception. It's quiet out of respect for what the user actually showed up for.

The inverse of this is what I'm calling overt AI: technology that centers itself in the experience rather than serving it. The chatbot that announces it's an AI before it's said anything useful. The writing tool that puts "AI-generated" in the corner of every output. The product team that adds an AI feature because the roadmap demanded it, not because a user needed it.

The cost is not abstract. In my own workplace, the more visibly and aggressively AI tools are pushed on people, the more resistant they become. Not because the tools don't work. But because the pushing itself carries a message. It signals disruption, replacement, an institution reorganizing itself around a technology rather than around the people doing the work. The anxiety that follows isn't irrational. It's a reasonable response to a tool that has been made to feel like a threat rather than an assistant. Overt AI doesn't just turn off customers. It demoralizes the people it's supposed to help.

That is why the question is not just whether AI can generate art, text, code, or course content. I already wrote about part of that in Copying, Influence, and AI Art. The deeper question is what kind of relationship the product creates between the person, the work, and the machine.


The UX Revolution Nobody Is Proposing

UX, or user experience, is simply the design of how a product feels to use. Not just how it looks, but how it behaves, when it speaks, when it stays quiet, and crucially, when it gets out of your way.

The conversation in design and product circles right now is mostly about how to make AI interfaces better, more transparent, more trustworthy, better explained. That is not the wrong conversation, but it is an incomplete one.

The deeper question is whether the interface should be visible at all.

Consider something as mundane as typing on your phone. A few years ago, when you misspelled a word, the software immediately flagged it with a red underline, pulling your attention out of what you were trying to say and redirecting it to the mechanics of spelling. It was technically helpful. It was also a constant interruption. The newer behavior is different. The software watches, waits until you've finished your thought, and then quietly corrects the obvious errors in the background. You barely notice. You just find that your writing is cleaner than your typing. That is discrete AI. It read the moment, made a judgment call, and intervened at the least disruptive point possible.
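If you wanted to sketch that behavior as code, it is essentially a debounce with a high bar for acting: do nothing while the person is typing, and once they pause, fix only the unambiguous mistakes, silently. A minimal TypeScript sketch, with a toy correction table and invented function names standing in for whatever a real keyboard actually runs:

    // Toy stand-in for the real model: fix only the unambiguous mistakes.
    const obviousFixes: Record<string, string> = {
      teh: "the",
      recieve: "receive",
      adn: "and",
    };

    function correctObviousErrors(text: string): string {
      // Leave anything uncertain exactly as the writer typed it.
      return text.replace(/\b\w+\b/g, (word) => obviousFixes[word.toLowerCase()] ?? word);
    }

    let pending: ReturnType<typeof setTimeout> | undefined;

    // Called on every keystroke. It never flags anything mid-thought;
    // it waits for a pause, then applies corrections in the background.
    function onTextChanged(text: string, applyQuietly: (fixed: string) => void) {
      if (pending !== undefined) clearTimeout(pending);
      pending = setTimeout(() => {
        const fixed = correctObviousErrors(text);
        if (fixed !== text) applyQuietly(fixed); // no underline, no announcement
      }, 1200); // ~1.2 seconds of silence counts as "finished the thought"
    }

The correction logic is the least interesting part. The design decision is the delay and the silence around it: when to act, and how little to say about it.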

For better or worse, the Apple smartphone is perhaps the clearest modern example of this philosophy taken to its logical conclusion. You are holding a computer more powerful than anything that existed on Earth thirty years ago, running thousands of processes simultaneously, and the whole experience has been designed so carefully that most people forget that entirely. What they remember is that it cost too much.

I came at the darker side of that same design victory in The Ease Trap: when frictionless technology becomes the whole environment, cognition adapts to ease. But the design lesson still matters. Invisible complexity is powerful. It just has to be aimed at the user's purpose rather than the company's appetite for attention.

Gmail does something similar with its smart reply suggestions. They appear only when you pause, not while your fingers are moving. Photo apps on your phone are quietly adjusting exposure and color balance in the background without asking for your approval or announcing what they've done. Google Maps stopped telling you it was recalculating and just recalculated. Each of these represents a design philosophy that puts the user's experience and attention at the center, with the technology arranged around it rather than on top of it.

Now consider the opposite. Microsoft Copilot has been inserted into virtually every Office product with its own sidebar, its own personality, and an apparent need to introduce itself every time you open a document. Many AI customer service implementations require you to exhaust a chatbot's limited repertoire before you can reach a human or a simple answer, adding friction to a process that was already frustrating. There are AI writing tools that summarize your own document back to you before they let you open it, as if the document you just wrote has become a stranger you need an introduction to. These are not failures of AI capability. They are failures of design judgment. The technology has been placed in front of the user rather than beneath them.

The irony is that the most powerful AI implementations are often the least visible. The spam filter again: years of sophisticated machine learning, and the only evidence of its existence is the junk you never have to see. Fraud detection on your credit card is watching every transaction against patterns built from billions of data points, and all you ever experience is an occasional text asking if you really did just buy something unusual at 2am. The AI earned your trust by staying out of your way until the one moment it genuinely needed to step in.
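That restraint can be written down as a rule, not just admired: score everything, stay silent below a high threshold, and interrupt only above it. A rough TypeScript sketch of the shape, with made-up numbers and names rather than any real fraud system:

    interface Transaction {
      amount: number;
      merchant: string;
      hourOfDay: number;
    }

    // Stand-in for a real model: returns a risk score between 0 and 1.
    function riskScore(tx: Transaction): number {
      let score = 0;
      if (tx.amount > 1000) score += 0.4;
      if (tx.hourOfDay < 5) score += 0.3; // the 2am purchase
      if (tx.merchant === "unknown") score += 0.3;
      return Math.min(score, 1);
    }

    function handleTransaction(tx: Transaction, askUser: (tx: Transaction) => void): string {
      if (riskScore(tx) < 0.8) {
        // The overwhelmingly common path: approve and say nothing at all.
        return "approved";
      }
      // The rare moment the system has earned the right to interrupt.
      askUser(tx);
      return "pending";
    }

Almost every transaction takes the quiet branch. The text message is the exception, and that is exactly why it works.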

This is the UX revolution that nobody in the industry seems to be proposing. Not better AI explanations. Not more transparent AI disclosures. Not friendlier AI chatbot personalities. The revolution is restraint. It is the discipline to ask, before deploying any AI feature, whether the user needs to know this is happening at all. And if the honest answer is no, then the right design choice is silence.

The telephone eventually stopped being a miracle and became a fact of life. Not because people got over their fear through education campaigns or trust-building exercises. But because the experience of using it became so natural that the strangeness evaporated. The technology earned its invisibility by delivering on its promise reliably, consistently, without demanding acknowledgment.

AI will get there. The question is whether the companies deploying it will help or hinder that process. Right now, most of them are hindering.

The people quietly using AI to do their jobs better, the ones who would never put it in their bio or tell you about it at a dinner party, they already understand something the industry doesn't. The point was never the AI. The point was always what you could do with it once it got out of the way.


Related reading

This piece sits alongside The AI Divide Is Real, But the Tools Are Part of the Problem, The Ease Trap, We Taught the World to Read. Then We Forgot Why It Mattered., and Prompting Humans: A Field Guide.

(...)