Why the Humanities Matter More Than Ever in the Age of AI
AI expands anyone's capability, but the output depends on the operator's judgment. My bet: the people who read more, look at more art, and train their critical sense are the ones who will make the difference.
Lately I've been living through a contradictory experience with technology, especially with artificial intelligence.
Four years ago, when AI first crossed into the mainstream, it looked like the technology we had been promised: it would make life easier, open new business opportunities, break through the limits of our capacity, push us further. Today I look around and find a world on the edge of mediocrity and boredom. Everything feels like more of the same.
The uncomfortable question is this: is AI the cause, or are we?
A short anchor
Before going further, it's worth saying where I'm speaking from. I've spent 20 years in digital design, and this year I'm studying Social System Design in Hiroshima. I say this not as a credential but because I see the gap between those who use AI well and those who don't from the inside: in clients, in teams, in students, and in myself whenever I stop training my own judgment.
The right question isn't about the tool
I've said it for years: AI is a remarkable tool that lets us expand and break limits in the process of creating or finding solutions. So the question isn't what the tool does; it's who is using it, and with what judgment.
Do we know how to handle it and draw out its potential? Or do we copy and paste until the result is just a pile of copies of copies, growing exponentially?
There's empirical evidence for the second option. A Microsoft Research study of 319 knowledge workers found that the more confidence people place in AI, the less critical thinking they apply to its output (Lee et al., 2025). Along the same lines, Gerlich documents that heavy AI use correlates with greater cognitive offloading and, through it, a decline in critical thinking (Gerlich, 2025). And a recent MIT Media Lab study using EEG showed that the users who lean most on ChatGPT exhibit the least cognitive engagement during the task (MIT Media Lab, 2025).
The tool isn't making us mediocre. It rewards whoever already has judgment, and disguises the lack of it in everyone else.
The prompt example and the search-engine analogy
I know this can sound abstract, so let me make it concrete.
When that "show me what AI knows about you" image went viral, most people copy-pasted the prompt they saw on social media, and that's exactly why almost all the results looked alike. Different features, different elements, but the same aesthetic.
Now think about this: what would the same image look like if it had been requested by someone with a real sense of art, photographic principles, and composition? Would the output be the same? Would the prompt be the same? I don't think so. And that's where the difference lives.
The same thing happened 25 years ago with another giant tool: search engines. Most people typed simple, literal queries; a smaller group, with better training, asked more precise questions and got deeper results. The difference wasn't Google. It was the operator.
AI is the new search box. Except this time the "operator" leaves more visible traces: an image, a text, a digital product.
Why the humanities, not just "read more"
This is where I want to take on the counter-position directly.
There's a current of thought that argues the exact opposite of what I'm about to claim: that AI makes humanities education obsolete, because we no longer need to read so much, or remember so much, or train interpretation when a model can summarize and rewrite in seconds. It's a comfortable idea. And I think it's wrong.
I think the opposite holds: the more capable the machine becomes, the more decisive the human judgment directing it. And that judgment isn't trained by productivity tutorials. It's trained by art, literature, history, philosophy, by anything that teaches you to look twice. Richard Sennett, in The Craftsman, frames it from another angle: the ethical problem of a craft appears at mastery, not at novelty; it's conscious repetition that sharpens judgment (Sennett, 2008). Judgment isn't a gift. It's training.
And if we're going to train it, let's name the training concretely, not "read more" in the abstract. Three small practices that actually shift the way you see:
- Two non-fiction books a year outside your field. One in philosophy or history, one in art or design. Not to quote them. So that they change the way you read a brief.
- One visit a month to a museum, a gallery, or an exhibition. Not to validate yourself. To be unsettled by work you don't fully understand.
- One "anchor" book you re-read every year. For me, John Berger's Ways of Seeing. The re-reading is what trains the muscle of critical observation.
It's not heroic. It's discipline.
Judgment as the edge
The deep question isn't whether AI will replace someone. It's who will stand out when everyone has access to the same tool.
My bet: the ones with judgment. The ones who have fed their intellect with enough art and enough reading to ask better questions, to hold a position when the model gets it wrong, to recognize when a "good" output is actually generic.
The tool flattens us. The humanities set us apart again.
And that, paradoxically, is the best news I've seen in years for those of us who have spent a long time inside the craft.