
The Layer You Think You Climbed

If you think AI moved you up to higher-value work, there is a good chance you just got faster at the work that's disappearing.


Higher-value work isn't a vibe.

It has a shape.

And if you want to know whether AI has actually moved you up or just made you faster at standing still, you have to be able to see the shape clearly enough to find yourself inside it.

Think about your work week as three layers stacked on top of each other.

The Three Layers

"Execution" is the bottom layer

It's the doing: drafting the deck, writing the status update, running the standard analysis, formatting the report, summarising the meeting, working through the checklist. Anything where "done" is definable in advance and the steps could, in principle, be written down for someone else to follow.

This is the layer where the floor has dropped out.

Not because AI does Execution work perfectly (it doesn't, and anyone who tells you otherwise hasn't shipped anything important recently), but because the time cost has collapsed far enough that nobody pays a premium for execution alone anymore.

What used to take three hours of competent effort now takes twenty minutes of competent direction.

The work still happens. It's just no longer scarce, and value tracks scarcity, not effort.

If most of your week is Execution hours, you're standing on the layer that's being commoditised under your feet, in real time. The discomfort of that sentence is the point.

"Judgement" is the layer above Execution

It comes down to two questions, neither of which AI can answer for you, no matter how good the model gets.

The Upstream Question

Of all the things you could point this machine at, which one actually matters here? For this stakeholder, in this situation, given everything you know that isn't written down anywhere: the history, the politics, the thing the last project taught you that nobody documented.

Choosing the right problem is a different skill from solving it, and AI helps with the second much more than the first.

The Downstream Question

Now that the output exists, is it right?

Not "does it look right", does it hold up?

Against reality, against context, against what could actually go wrong if someone acts on it?

Verification isn't checking grammar; it's the harder act of catching fluent wrongness, which is the defining failure mode of AI-assisted work.

AI doesn't shrink the Judgement layer; it expands it.

There's now ten times more output flying around than there used to be, and the people who can decide what's worth producing, and whether the result holds up, are scarcer, not more plentiful.

The premium on Judgement is going up, not down, and it will keep rising as models become more fluent without becoming more accountable.

"Creativity" is the Top Layer

The smallest of the three, the one most often misunderstood, and the one where AI is least dangerous and least useful at the same time.

Creativity here doesn't mean "being creative" in the art-class sense.

It means the move that reframes the problem:

  • Noticing that the question everyone is answering is the wrong question.

  • Seeing the connection between two things that nobody put together.

  • Producing the option that wasn't on the list because nobody knew the list was incomplete.

AI is a strong assistant to creativity. It can generate variations, surface adjacent ideas, break you out of a rut, give you ten bad options so the eleventh good one becomes visible.

But the originating spark, the "wait, what if we're solving the wrong problem entirely?", still comes from a human with context and stakes.

AI expands within the frame you give it.

It does not hand you a new frame, and it does not know when the old frame has become useless.

The reframing is yours.

AI eats the bottom, the upper layers get more valuable, and you climb. This is the version of the story everyone is telling each other, and it's directionally correct, but it's also incomplete in a way that matters.

The part that isn't told

AI makes Execution-layer work feel like Judgement-layer work.

The output is comprehensive, confident, and well-structured. You read it, and your brain quietly files it under "I thought this."

You didn't; you recognised it.

Those are two different operations, and only one of them is yours, but the interface gives you no way to tell them apart from the inside.

There's no indicator, no warning.

No little icon that lights up when you've stopped thinking and started just nodding along to fluent prose.

There's a name for what happens to people in this situation.

Metacognitive decoupling

Your confidence in your own work rises faster than your actual capability, and there is no internal alarm, because the outputs keep getting better.

The thing producing the outputs is improving; you are not. But because the only signal you can see is the output, you experience the improvement as your own.

The feeling of having climbed the stack is real; the climbing is not.

This isn't a hypothetical risk.

In 2025, researchers studied experienced gastroenterologists at four endoscopy centres in Poland after AI polyp-detection tools were introduced into their daily practice. The study tracked the doctors' ability to find polyps without AI assistance.

After a few months of routine use, their unaided detection rate had dropped measurably: around six percentage points below the pre-AI baseline.

These were experienced physicians. The tool adopted to make them better had quietly made them worse. Nobody noticed in real time, because the AI-assisted numbers looked great.

The decay was invisible until somebody specifically looked for it.

The same pattern shows up in legal work, software engineering, financial analysis, and strategic planning: wherever AI routinely takes over tasks that humans once relied on to build and maintain their skills.

The domain changes, but the pattern doesn't.

People who use AI a lot get better at using AI and worse at the underlying judgement, and they almost always think it's the other way around.

Call it Sleeping Driver mode.

The car is moving. You're in it, but you're not the one driving. You're monitoring, occasionally, when something feels off, and the rest of the time the automation is doing the thinking for you.

From the outside, it looks identical to driving. From the inside, when nothing has gone wrong yet, it feels identical too.

The difference only becomes visible the moment you're asked to take the wheel, and by then it's information you can no longer act on.

What naming it does and doesn't do

Be suspicious of any blog post that names a problem and implies the naming is half the solution.

It isn't.

Knowing about Sleeping Driver mode is like knowing about confirmation bias. You can name it, you can explain it to someone else, and you'll still fall into it tomorrow.

Naming a habit doesn't change the habit.

The trap lives in the small choices you make when nobody's watching: whether you read the AI output carefully or just skim it, whether you check the statistic or trust it, whether you ask the model to argue against your own instinct or take its first answer and move on.

So I won't pretend reading this has fixed it. It hasn't.

What you have, if any of this landed, is a sharper question to carry into next week.

Of the AI-assisted hours you put in, how many were actually Judgement hours, and how many were Execution hours wearing Judgement's clothes? You won't know in the moment. You'll only know later, when you try to reconstruct what you were thinking, without the document in front of you.

If you can reconstruct it, you were driving.

If you can't, the car was.

For most people who use AI a lot, the honest answer is some of both, and more of the second than we'd like to admit.

That's not a verdict, it's a baseline.

Once you can see it, you can do something about it, and that's a longer conversation than this post can hold. I'll come back to it in the next post.

For now, the work is just noticing.

Over the next week, when you finish a piece of AI-assisted work and feel that small flush of satisfaction at how good it looks, pause for the length of a breath and ask yourself which layer you were actually on while you were producing it.

Not the layer you wish you were on; the one you were actually on.

The answer will be more honest than you expect, and more useful.


From a book I'm writing on the five skills that separate people who use AI from people who think with it. If this landed, that's the gap it's about.


Reference

Budzyń K, Romańczyk M, Kitala D, et al. Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study. The Lancet Gastroenterology & Hepatology, 2025;10(10):896–903. https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract
