First, we polished our image.
During the pandemic, when everything shifted to virtual meetings, we began blurring our backgrounds, activating the ‘touch up my appearance’ feature, and muting any background noise. We traded the vulnerability of a noisy open-plan office for the safety of a controlled, perfectly lit window. Professional presentation matters, and nobody owes their coworkers a view of their chaos. But we got a taste for something beyond pragmatism: the comfort of a smoothed-out version of ourselves.
That was just the beginning.
Now, with AI writing assistants embedded in every enterprise workflow, we’ve extended the same instinct to how we think. Everyone sends perfectly formatted emails, perfectly structured proposals, perfectly polished briefs. The effort of organizing one’s own ideas (the part that makes an idea unique and personal) has been delegated to a model trained on the output of others.
I have coined a term for this: ‘freeze-dried competence’. It’s structurally sound, grammatically impeccable, and has about as much nutritional value as astronaut food: maximum shelf life, minimum vitality.
Here’s a test I’ve been running informally. When I receive a business email, I try to guess whether it was AI-assisted. Six months ago this was easy (the tells were obvious). Now it’s harder. Not because the AI has gotten better at mimicking human writing, but because human writing is converging toward AI writing. People are internalizing the patterns. The model isn’t just writing for us: it’s teaching us to write like it.
The productivity gains are real and I’m not dismissing them. But the productivity argument has a blind spot: it assumes the only function of writing is to transmit information. That the struggle of finding the right words, of sitting with the discomfort of not quite knowing what you think until you’ve tried to say it, is “waste” to be eliminated.
It isn’t waste. It’s the process through which original thought actually happens. Thinking and expression aren’t separable. The sentence you restructured three times, the paragraph you deleted because writing it revealed you didn’t actually believe it: those aren’t cosmetic edits to a pre-formed thought. They’re the thought forming. Skip the struggle and you don’t get the same thought faster. You get a shallower one.
When an entire organization runs communication through the same models trained on the same data, expression undergoes a massive regression to the mean. Large language models generate the statistically most likely output given the input. That’s not a flaw; it’s the mechanism. Every use pulls expression toward the center of the training distribution. The cracks in communication (the awkward phrasing, the unexpected tangent, the word choice that makes you pause) are exactly the signals that a specific human mind was at work. We’re engineering them out.
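The pull toward the center of the distribution is easy to sketch. Here is a toy illustration in Python (the corpus of sign-off phrases is invented for the example, and real models operate over tokens, not whole phrases): greedy decoding always returns the most common option, and low-temperature sampling converges to it, so the rarer, more personal phrasings simply stop appearing.

```python
import random
from collections import Counter

# Hypothetical frequency of sign-off phrases in a training corpus.
corpus = (["Best regards"] * 60 + ["Cheers"] * 25
          + ["Warmly"] * 10 + ["Onward and upward"] * 5)
counts = Counter(corpus)

def most_likely():
    # Greedy decoding: always emit the single most probable option.
    return counts.most_common(1)[0][0]

def sample(temperature=1.0):
    # Temperature scaling: raising counts to 1/temperature sharpens the
    # distribution toward the mode as temperature falls toward zero.
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(list(counts), weights=weights, k=1)[0]

print(most_likely())     # the mode wins every time
print(sample(0.05))      # near-greedy: rare phrasings effectively vanish
```

At temperature 1.0 the rare phrasings still surface occasionally; as temperature drops, every draw collapses to “Best regards”, which is the regression to the mean in miniature.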
The philosopher J.F. Martel draws a useful distinction between art and artifice. Art opens what he calls a “rift” in consensus reality: it discloses something unpredictable. Artifice closes the rift by engineering a predetermined response. Propaganda is artifice. A Hollywood blockbuster calibrated to hit every emotional beat on schedule is artifice. And an AI-polished email, optimized for the expected professional response, is artifice too.
Martel’s deeper point is structural. When you collapse the boundary between genuine exploration and instrumental output, the instrumental logic always wins, because it’s measurable and exploration is not. In enterprise terms: the productivity gains show up on dashboards. The slow flattening of an organization’s capacity to surprise itself does not.
Long-term innovation has never come from doing the expected thing faster. It comes from the deviations. The misread brief that sends a project somewhere better than the original plan. The junior employee who proposes something naive that contains a kernel of genuine insight. The disorganized hallway conversation that connects two unrelated problems. None of these moments are efficient. None would survive optimization. They exist because of friction, not despite it.
I don’t dispute the usefulness of AI tools; I use them myself. Rejecting them would be like rejecting email in 1998. What I am arguing is that there’s a category of friction that isn’t waste, and that organizations need to learn the difference between the efficiency that accelerates execution and the efficiency that flattens thinking.
AI can eliminate wasteful friction: data formatting, information retrieval, first-draft templating. But productive friction needs protecting: ideation without AI assistance, writing that forces people to think through their own positions, spaces where exploration rather than output is the goal.
That means saying, with a straight face, “we’re going to be deliberately less efficient here, because the efficiency is costing us something we can’t measure but can’t afford to lose.” A hard sell in a quarterly-earnings culture. But the right one.
I often find myself wondering: if the flaws and imperfections in communication are indicators of creative thought, then what are we constructing when we meticulously eliminate them all?