Enshallowification
I have no context but I have an opinion...
You know that feeling when you’ve read an entire article and absorbed nothing?
When you’ve scrolled for forty minutes and can’t name a single thing you saw?
When someone asks what you think about something and you realise you have a take but not a deep point of view?
That’s enshallowification. And it’s happening to all of us.
Everyone knows Cory Doctorow’s “enshittification” — the process by which platforms degrade their product to extract more profit. First they’re good to users, then they exploit users to attract advertisers, then they exploit everyone. It’s the business story of the internet’s decline.
But there’s a companion story: enshittification is what happens to the platform. Enshallowification is what happens to the person using it.
It’s the gradual, barely perceptible thinning of how deeply we think, create, relate, and decide. Not because we’re lazier or dumber than we were. Because the environments we inhabit every day are architecturally hostile to depth.
Nicholas Carr wrote about this in The Shallows back in 2010. The internet, he argued, was rewiring us: promoting rapid scanning over sustained reading, multitasking over contemplation, breadth over depth. He was talking about web browsers and hyperlinks. He hadn’t yet encountered AI-generated content farms, algorithmically-optimised 9-second videos, or an LLM that would hand you the answer before you’d finished forming the question.
Fifteen years later, the data tells us Carr was right.
The numbers
Gloria Mark, a researcher at UC Irvine, has been measuring how long people stay focused on a single screen before switching to something else. In the early 2000s, the average was 2.5 minutes. By 2020, it was 47 seconds.
Each switch costs roughly 25 minutes to fully re-engage with the original task. Mark describes it like a whiteboard: every time you switch, you erase your working context and start redrawing. The more you switch, the more cognitive resources you burn just getting back to where you were.
Neuroimaging studies show that heavy internet users have increased activity in the brain’s decision-making regions but decreased activity in areas associated with deep reading and contemplative thinking. We’re getting faster at choosing, and we’re optimising for speed at the expense of understanding.
“Slop” was Merriam-Webster’s 2025 Word of the Year. Engagement with AI-generated articles dropped 40% last year. Human-generated content now earns 5.4x more traffic than AI content. The audience knows. They can feel the difference between something crafted and something generated by an LLM.
The enshallowification of judgment
To be relevant in 2026, you need a position on a 15-second clip before the full context is even available. The most extreme, most immediate reaction wins. You see this on LinkedIn or X. One example that comes to mind is the recent Trump retweet featuring former presidents and first ladies as animals. It was and is inappropriate - but no one knew where the video originated and most people didn’t even watch the entire video.
And it’s not just online. Watch what happens in a strategy meeting when a competitor launches something unexpected. Within an hour, someone has a hot take in Slack. The pressure to respond fast eclipses the discipline to understand first. Rachel Karten put it well: “Attention doesn’t always tell the full story of resonance.” But in a Speed of Take culture, attention is all we’re measuring.
The cost isn’t just bad analysis. It’s the slow erosion of the habit of saying “I don’t know yet.” That phrase used to be a sign of intellectual honesty. Now it reads as being behind. When did withholding judgment become a professional liability?
Georgia Parke, who runs social for one of the most talked-about brands in London, made a confession that captures this perfectly: “The more time we put into making a post, the less likely it would perform well.” Think about what that incentive structure does over time. It trains you to go fast, go shallow, go now. The algorithm rewards the reflex and the raw take over anything polished. Partly that’s because AI has devalued polish, but it’s also that nobody takes the time to digest anything substantial. Someone used to tell me, “You have to dumb down what you write; it’s too complicated.” It’s like the Matt Damon interview where he claimed Netflix told him to make films for people looking at two screens.
The enshallowification of learning
When you skip the search and go straight to the LLM answer, you bypass the intellectual friction that builds understanding. You get the output but not the knowledge.
I’ve felt this one personally. When I first started using AI tools seriously, I noticed something uncomfortable - I was getting more done, but I was learning less. The rabbit holes I used to fall down while researching, the tangent that led to an unexpected insight, the article that reframed a problem I hadn’t even asked about - they were harder to surface because I wasn’t forming my own patterns. I was more efficient and less interesting.
We’re becoming what I’d call “digital magpies” - hoarding shiny fragments of information without a nest to put them in. We know the vibe of a topic without understanding the substance behind it. We can talk about tokenised assets or attention economics or neuroaesthetics, but press a layer deeper and the knowledge is a veneer.
Robin Good writes: “Curation is the new creation.” And that’s true, but only if the curation comes from genuine understanding rather than surface-level pattern matching. When supply becomes infinite (which it has), discovery becomes scarce (as App Economy Insights noted). The value isn’t in accessing information. It’s in having done the work to know what matters.
The fix, I think, is unglamorous. Read books. Not summaries, not Blinkist, not “key takeaways” threads. Actual books, the kind that force you to sustain attention across hundreds of pages. Step away from social media for stretches long enough that your brain stops scanning and starts processing. And most critically: stop trying to work on a hundred things and go deep on a few. AI makes us think we can pursue everything simultaneously. We can’t. What we get is average execution across the board, and a vague, unsettled feeling that we’re not actually expert in anything anymore.
Which raises a deeper question. If you spend all your time going deep on AI itself - learning the tools, the prompts, the workflows, the models - does that make you an AI expert or does it just make you someone who’s lost track of the domain they were actually an expert in before? The tools are seductive. They’re all-consuming. And we’re in a phase where AI might change what it means to be an expert in the first place. Or it might not. We don’t know yet. But the risk of enshallowification is that you never slow down long enough to find out.
Shallow but confident — the enshallowification of craft
AI slop is the most visible symptom: the Shrimp Jesus memes and the cat soap operas (which I really enjoy!). But the subtler version is worse.
It’s the strategic brief that’s thin but well-formatted. The brand positioning that sounds right but wasn’t pressure-tested against anything. Output that’s shallow but confident, which is more dangerous than obviously bad because it looks like it might be correct.
Rachel Karten nailed the underlying dynamic - “I worry that we’ve slowly boiled out substance in search of performance.” That’s the craft version of enshallowification in a single sentence. We’ve optimised for what performs, what gets engagement, what looks polished, what ships fast. We’ve systematically devalued the slow, unglamorous work of actually thinking.
AI doesn’t help with this in the way most people assume. It doesn’t reduce work, it intensifies it. The baseline expectation rises. If the machine can produce a passable first draft in thirty seconds, then the human’s job shifts from creation to evaluation. But evaluation requires taste, judgment, and domain expertise, exactly the muscles that atrophy when you stop doing the deep work yourself.
As Tiny Empires wrote: “‘I know what works’ is worth more than ‘I’ll do the work.’” That’s the right insight, but it comes with a warning: if you never do the work, you eventually lose the ability to know what works. The knowing comes from the doing. Separate them for too long and you’re left with pattern recognition detached from understanding - which is, not coincidentally, exactly how an LLM operates.
And then there’s the floor dropping out from under execution-based pricing. When everyone can execute with AI, execution isn’t the differentiator. Depth of thinking is. “The sugar high of a trend feels good,” Karten says, “but audiences want something more to chew on.” The market is already telling us this. Human-generated content earns 5.4x more traffic. Engagement with AI articles is dropping. People are hungry for substance. The question is whether the craft professions - marketing, design, strategy, writing - can rebuild the depth that got boiled out in the rush to perform.
The enshallowification of relationships
Reading an AI-written apology is like being hugged by a mannequin. Correct shape, zero warmth. We know it when we feel it. The reply that’s technically responsive but emotionally vacant. The personalised outreach that’s personal to nobody.
It’s not just AI that does this. AI industrialised a pattern that was already spreading. “Hope this email finds you well.” “Just circling back.” “Per my last message.” We were writing mannequin language before the machines arrived. The machines just made it infinitely scalable.
When you let AI write your apology, your thank-you note, your check-in with a colleague going through a hard time, do you lose the empathy muscle that would have been exercised by writing it yourself? The act of finding the right words is itself an act of attention.
This is true for brands too. When companies automate their customer communication with AI, customers can feel the mannequin. The response comes faster. It’s polished. It references the right details. And it feels like talking to a wall. Over time, that hollowness compounds. Loyalty isn’t built on efficiency. It’s built on the felt sense that someone on the other end actually cares. You can’t automate that. As Lia Haberman puts it, the next wave isn’t AI-generated brand content, it’s employee-generated content. Real humans, with real texture, saying things that don’t sound like they were approved by a committee of algorithms.
The enshallowification of experience
This one has crossed from digital into physical, which is how you know the pattern is structural rather than technological.
The fast-casual “slop bowl”: meals engineered to a crunch-to-mush ratio designed for mass appeal.
Fast fashion as the enshallowification of style.
Spotify’s infinite library as the enshallowification of music taste: endless choice, less and less actual listening.
Algorithmic content as the enshallowification of culture itself.
As First Floor writes, “If we’re all stuck in Instagram hell, I have to tip my hat to those who are at least attempting to have some fun with it.”
There’s a connection here to what I’ve been calling the Fatigue Economy (a few articles ago): the parallel streams of attention extraction and attention recovery that now constitute a significant chunk of the consumer landscape.
The thing that exhausts you (shallow digital consumption) creates demand for the thing that’s supposed to restore you (analog experiences, wellness products, depth retreats). But what happens when the restoration products themselves get optimised for mass appeal? When the meditation app gamifies mindfulness? When the “slow living” influencer posts daily? The cycle eats itself. Personally, I can’t look at any of it anymore. The purpose isn’t visible, and when it is, do you trust it?
The food version of “proof of human” is the restaurant with no Instagram presence. The sourdough that takes three days. The dinner party where nobody took a photo. If I can see it, maybe it isn’t that interesting anymore.
The enshallowification of identity
Algorithms reward scannable personas. To be seen, you flatten your complex, contradictory human self into a niche, a brand, a category the system can file. You trade depth for discoverability. I wrote about this on LinkedIn the other day and it went viral, not because it was a breakthrough, but because it gave people permission to stop asking for permission to be a complex human being.
This is the one that I think about most, because it affects everyone reading this. How many of us have compressed ourselves into “the AI marketing person” or “the Web3 strategist” or “the brand builder” because that’s what LinkedIn can categorise and promote? The platform wants you to be one thing, clearly. But the people who are actually good at what they do are typically several things, messily.
The generalist polymath - the person who can connect a poem to a P&L statement, who sees patterns across domains - is precisely the hardest to automate. But they’re also the hardest for an algorithm to recommend. If your expertise is narrow enough to fit cleanly in a content category, it’s narrow enough to be replicated by an LLM. The range that makes you strategically valuable is the same range that makes you algorithmically invisible. No idea how that plays out: do conformists stay in corporate life while polymaths become entrepreneurs? As social media dies, relationships become your gateway to success, which perhaps helps people like me who think it’s important to have range. Newsletters build communities too; I see that a lot on Substack.
The half-commitment — the enshallowification of ambition
The pressure from the culture right now is to keep up. To have a side hustle. A backup plan. A portfolio career. A personal brand alongside the day job. A newsletter alongside the personal brand. An AI project alongside the newsletter. The result is that nothing gets your full depth. Everything gets a competent fraction.
When you’re always hedging, always keeping one foot in the next thing, you can’t go deep in the thing you’re in. Commitment itself gets enshallowified. Not just professional commitment, but the willingness to bet on something fully, to let one thing be the main thing, to accept that going deep means saying no to other depths.
AI amplifies this beautifully. It makes it look possible to do everything. The machine handles the execution, you handle the “strategy” across twelve different projects. But what you actually get is thin strategy across the board. The person working on one thing with full attention will outthink the person skimming across ten things every time. Not because they’re smarter, but because depth is cumulative and shallow is not.
The swipe — the enshallowification of connection
You could argue that dating and relationships were the first domain to be enshallowified, before anyone had a name for it. The swipe reduced a human being to a photo and a line. The infinite scroll of potential partners created the same paralysis of choice that Spotify creates for music: the sense that something better is one swipe away, so nothing gets the investment it needs to become real.
Professional relationships get enshallowified when every interaction is transactional - a warm intro requested, a “let’s find time” that never materialises, a connection that exists on LinkedIn but nowhere else. The platforms made it easy to accumulate the appearance of relationship without the substance of it.
A16z wrote that “the internet is a miracle of universal access to inquiry and connection.” They’re right. It is a miracle. But miracles can be squandered. And the enshallowification of connection is what squandering looks like: infinite access to people, declining depth with any of them. Growing up, you had three best friends and your plans were always made. Now you’re expected to have 300 friends. Do you know everyone you’re connected to on Instagram? I do, but that’s a conscious decision, because the collection game felt insanely shallow.
The table nobody drew
Enshittification (the business) | Enshallowification (the human)
They’re siblings. One degrades the product. The other degrades the person.
The counter-movement
Dumbphone sales are up 25%.
Searches for “analog hobbies” surged 136% at Michaels in the past six months. Yarn kit searches are up 1,200%. CNN called it a cultural shift. Vogue called digital detoxing a status symbol. The Sprout Social CMO told Euronews that 2026 social media would “move decisively toward depth over scale.”
All real. All measurable. And the easy story to tell is: people are rejecting technology and returning to analog.
That’s not what’s happening.
The person who outsources their professional persona to an AI agent so they can spend four hours in a niche Discord community isn’t rejecting technology. They’re using technology to fund a deeper interior life. The person who automates their social media scheduling to protect three hours of uninterrupted reading time isn’t anti-tech. They’re pro-depth. I know: I’ve automated a lot of things to keep myself off social media and focused on achieving my goals without feeling the pressure to move quickly.
What’s actually emerging is something I’d call cognitive sovereignty - the deliberate reclaiming of your own attention, your own thinking patterns, your own depth. It shows up as sovereign mornings (no screen for the first hour). Sovereign inboxes (newsletters you chose rather than feeds that chose you). Sovereign hobbies (puzzles, reading, new recipes). It’s why X is going to become a payments platform and why more IRL events are filling up our calendars - yay!
The shift isn’t about whether you use technology. It’s about whether you control your relationship with it. Unoptimised time is becoming a luxury good. Doing things the long way is becoming essential for happiness.
If the environment you work in rewards speed of output over quality of thinking, your strategy gets thinner. Not because you’re incapable of depth, but because the system never asks for it. When the brief is shallow, the AI-generated response is shallow, the approval is shallow, and the campaign is shallow. That is where we are - hundreds of brands and businesses selling stuff through a creative veneer.
Linas’s Newsletter captured the consumer side: “Users don’t want to rent a tool; they want a result.” The same is true of stakeholders hiring marketers. They don’t want a process; they want an outcome. But the quality of the outcome is downstream of the quality of the thinking. Skip the depth, and the results are mid. Charging for outcomes is the new norm, but you need to bake the thinking into the price or the long-term outcomes are poor. Why? Because all of this change hasn’t changed the way our brains work. Resonance, relevance, and authenticity are still requirements, and we are more judgemental than ever.


