The future of work is burnout

The Designing for Human Limits series

What performance means in the age of AI

We’re living through a rewrite of what “work” means.

Most organizations are telling themselves a comforting story: AI will take the boring stuff. The repetitive stuff. The admin. The low-level grind. And people will finally be free to focus on the high‑impact, high‑quality tasks that actually move the needle.

It sounds humane. It sounds efficient. It sounds inevitable. And it’s missing the real problem…


The wrong questions we keep asking

When leaders do pause to think beyond “wow, look at the productivity gains,” they tend to ask one of three questions:

What do we do with the freed-up time?

More clients? More output? More projects? More internal initiatives? More “strategic work”? It’s always “more,” just dressed in different words.

Do we get cheaper?

Do we pass efficiency gains on to customers and compress prices? If so, do we pay the remaining workforce less? Or do we assume the productivity boost is so large that prices can fall while wages stay high? (This is where most stories fall apart.)

Do we move toward a 4‑hour workday?

Maybe. But if some companies keep people at 8 hours, won’t they win? And if they win, won’t others follow? So… are we really talking about fewer hours or just a different justification for the same hours?


These aren’t stupid questions. They’re just not the ones that matter most.

They’re “distribution questions.” The market will brutalize them into an answer over time, through pricing pressure, competition, labor dynamics and who can actually retain talent. You don’t need a philosophy degree to predict the outcome: organizations will absorb a large chunk of the efficiency dividend, customers will capture some of it and employees will capture some of it. Unevenly, politically and with the usual lag.

But the urgent issue isn’t distribution. It’s work design.


What nobody seems to notice: low‑cognitive work is doing a job

Most people dislike “stupid work”: the repetitive, boring, mechanical tasks that feel beneath them. Especially people who enjoy hard problems. If you hire ambitious professionals, they want to do meaningful work. They want to think.

And yet, everyone knows this feeling: you’re stuck on a hard task. You’re cognitively fried after a heavy delivery phase. Your brain is resisting the next hard decision. And somehow, doing something simple helps.
Not passive distraction, but something “mechanical”: cleaning, sorting, naming files, formatting slides, updating a tracker, answering easy emails, tidying a backlog, making small edits.
It feels like a break, but you’re still “productive.” It’s a cognitive palate cleanser: your brain gets to downshift while your day still moves forward.

That is not a character flaw. That’s a regulation mechanism.

Many roles in modern organizations unintentionally include these micro-recoveries. A day isn’t one long stretch of peak thinking; it’s a messy mix of:

  • short bursts of intense cognitive effort
  • interspersed with lighter tasks
  • punctuated by meetings that are sometimes useful and sometimes not
  • and filled with small administrative actions that keep the system moving


We love to complain about the waste. But that “waste” is also “structure”. It creates rhythm. It creates recovery.

Now enter AI.

AI doesn’t just remove low‑value work. It removes low‑cognitive‑load work. Or at least, it compresses it so aggressively that it stops functioning as recovery.

And that is where the future of burnout gets engineered.


The AI paradox: cognitive density goes up

When you automate the easy parts, what’s left for humans isn’t just “more meaningful.” It’s more demanding.

Because the remaining tasks are the ones that require:

  • judgment under uncertainty
  • trade‑offs without complete information
  • negotiation across conflicting incentives
  • creative synthesis
  • accountability for outcomes
  • decision-making with real consequences
  • emotional labor during conflict
  • systems thinking (the thing everyone says they value and almost nobody trains)


In other words: high cognitive load, sustained.

So the real shift isn’t “work becomes better.” It’s: work becomes cognitively denser.

If you remove the low-load segments from the day, you don’t get a cleaner day. You get a day with fewer gear changes. And humans aren’t built for eight hours of “high gear.”


“But that’s what managers do all day.”

Yes. Some executives just smiled while reading this.

Many senior roles already live in a world of constant context switching, pressure and judgment calls. They’re used to being cognitively overdrawn. But: senior roles also come with compensation, autonomy and control that make that overload survivable – sometimes even addictive.

Most employees don’t have that. And even in senior roles, the cost usually shows up as:

  • degraded decision quality
  • risk aversion masquerading as “prudence”
  • impatience with nuance
  • defensiveness and control behaviors
  • and eventually: exhaustion framed as “the price of leadership”


The goal shouldn’t be to scale executive burnout to the rest of the organization. The goal is to design a system where high performance is sustainable.


Performance will need a new definition

Today’s performance metrics were built for a different environment.
In many organizations, “high performance” still means some combination of:

  • speed
  • responsiveness
  • utilization
  • output volume
  • visible activity
  • meeting deadlines
  • “always available”
  • delivering under pressure


These measures reward intensity. They reward throughput. They reward pushing. They also create the perfect conditions for cognitive burnout when AI increases cognitive density.

In the age of AI, performance cannot only mean output. It must also mean:

  1. judgment quality – do we make better decisions, not just faster ones?
  2. resilience – can we still operate when tools fail, context shifts or assumptions break?
  3. learning velocity – do we compound intelligence or merely rent it from a model?
  4. cognitive sustainability – can people sustain high-quality thinking without breaking?


If your operating model defines success as “more output per hour,” then AI won’t free anyone. It will just raise the bar until the system snaps.


The real question: how do you prepare your workforce for high‑cognitive work?

So the question isn’t “what do we do with the freed-up time?” It’s:

How do we prepare the organization for workdays dominated by high-cognitive demand?

This is not a mindfulness problem.
This is not a “teach people resilience” problem.
This is not a “send them to a time management course” problem.
This is an operating model problem.

Because cognitive load is not just an individual experience. It’s an emergent property of how work is designed, routed, measured and rewarded.
Just as alignment pain isn’t a change‑management bug but the unavoidable cost of collapsing ambiguity into commitment, and just as transformations don’t fail because people don’t try hard enough but because ownership, capability building and outcome logic are left implicit, the same pattern applies here:

If cognitive sustainability is left implicit, the system will optimize for short‑term output and burn out its best people first.


What preparing for this future actually looks like

You don’t “prepare the workforce” by telling individuals to cope. You prepare the workforce by redesigning how work happens.

Here are the design moves that matter.

1) Build a cognitive rhythm into the operating system

If your day becomes “back-to-back high-load tasks,” you’ve designed burnout.
Systems that avoid cognitive burnout deliberately include alternation:

  • deep work blocks
  • lower-load blocks
  • decompression buffers
  • decision windows
  • and recovery built into the cadence, not granted as a favor


This can be as simple as: no‑meeting days, protected focus mornings, short administrative sweeps, structured end-of-day closure and real boundaries around “urgent.”

If the system doesn’t protect recovery, the individual can’t sustainably do it, because they’ll be punished for it.

2) Stop measuring the wrong thing (activity ≠ performance)

If performance is still measured by visible activity, responsiveness and utilization, people will behave accordingly. And AI will amplify that behavior.
The organization needs metrics that capture:

  • quality of decisions
  • reduction in rework
  • stability of execution under pressure
  • reliability of delivery over time
  • the rate at which lessons are turned into changed behavior


Or, put brutally: if your best people look “busy” all the time, that might be the problem, not proof of performance. And if you measure output rather than outcomes, you will reward the former, which is exactly why you may not get the latter.

3) Treat AI as a cognitive load reallocator, not a task killer

AI doesn’t eliminate work. It shifts it. Often, it shifts humans into:

  • oversight
  • review
  • exception handling
  • “last mile” judgment
  • accountability without full understanding


If you want a robust system, you need a principle for when AI is allowed to offload cognition versus when it must “augment” cognition.

Ask a simple question per use case:

When the model is wrong, do our people still understand the work well enough to catch it and defend the decision?

If the answer is no, you didn’t create efficiency. You created fragility and dependency.

4) Train for judgment, not for tools

Most AI “enablement” is tool training. That’s the easy part.
The hard part is building the human capabilities that AI increases demand for:

  • decision-making under ambiguity
  • sensemaking
  • prioritization and trade-off discipline
  • systems thinking
  • conflict navigation
  • cognitive debriefing (“what did we learn and how does it change how we operate?”)


You don’t get these capabilities by osmosis. You get them by engineering them into the work: debriefs, retros, decision logs, principle-based governance and coaching tied to real decisions.

5) Redesign roles and boundaries to reduce cognitive thrash

AI increases speed. Speed increases volume. Volume increases context switching. Context switching is not free. It is a cognitive tax.
If you want sustainable performance, you need clarity:

  • fewer active priorities
  • explicit ownership
  • fewer “shared accountability” ghosts
  • and decision rights that match responsibility


Otherwise, people drown in coordination and stay in permanent partial attention.


The bottom line

AI will make organizations more productive. But unless we redesign work itself, it will also make work more cognitively punishing.

The future of work isn’t “four hours a day.”
It’s not “everyone becomes strategic.” 
It’s not “busywork disappears.”

The future of work is a workforce pushed into sustained high-cognitive demand without the rhythms, incentives and operating model maturity required to survive it.

And if that’s what you’re building, burnout isn’t an accident. It’s the outcome your system is designed to produce.

So the real question is not whether AI will change work. It will.
The question is whether you will design the system so that your people can still think clearly, sustainably and for the long run. Because if “performance” in your organization still means “more, faster, always,” AI won’t free your workforce. It will just remove the last remaining recovery mechanisms, right before you ask them to do the hardest work of their lives.


Disclaimer

To be clear: burnout is not the only challenge the future of work must address.

Research shows that AI is reshaping task composition, skill requirements, accountability structures and learning dynamics. These are real and important questions – and many are already being discussed.

What is far less discussed is what happens when these changes systematically concentrate sustained high‑cognitive demand into daily work, without redesigning recovery, rhythm and decision load.

That is the gap this article focuses on.

Further readings

Burnout theory:
Demerouti et al. (2001). The job demands–resources model of burnout. Journal of Applied Psychology, 86(3), 499–512.
Summary: High job demands drive exhaustion, while insufficient resources drive disengagement.

Burnout & cognition evidence:
Gavelin, H. M., et al. (2022). Cognitive function in clinical burnout: A systematic review and meta-analysis. Work & Stress, 36(1), 86–104.
Summary: Burnout is associated with small‑to‑moderate impairments in executive function, attention, and working memory.

Koutsimani, P., Montgomery, A., & Georganta, K. (2021). The relationship between burnout, depression, and anxiety: A systematic review and meta‑analysis. Frontiers in Psychology, 12.
Summary: Burnout strongly correlates with mental health problems and reduced psychological functioning.

Workload & recovery:
Hetland, J., Saksvik‑Lehouillier, I., & Pallesen, S. (2022). The role of sleep and recovery in employee functioning under work pressure. Frontiers in Psychology.
Summary: Daily recovery and sleep quality buffer the negative effects of sustained work pressure on performance.

Cognitive load theory:
Paas, F., & van Merriënboer, J. J. G. (2020). Cognitive‑load theory: Methods to manage working memory load in the learning of complex tasks. Current Directions in Psychological Science, 29(4), 394–398.
Summary: Performance degrades when working‑memory demands exceed cognitive capacity, especially in complex tasks.

Automation & skills:
Frazier, S., Pitts, B. J., & McComb, S. (2022). Measuring cognitive workload in automated knowledge work environments: A systematic literature review. Cognition, Technology & Work, 24, 557–587.
Summary: Automation often shifts human work toward monitoring, judgment, and cognitive control rather than eliminating effort, increasing cognitive workload risks in knowledge‑intensive tasks.

Rinta‑Kahila, T. et al. (2023). The vicious circles of skill erosion: A case study of cognitive automation. Journal of the Association for Information Systems, 24(5), 1378–1412.
Summary: Heavy reliance on algorithmic decision‑making systems can weaken human judgment, reduce meaningful oversight, and undermine long‑term organizational capability when human expertise is not actively maintained.

OECD (2019). The future of work: OECD employment outlook 2019. OECD Publishing.
Summary: Automation primarily changes the composition of tasks within jobs rather than eliminating entire occupations.

Oliver Mišković

Oliver is a Partner at Fractional View GmbH and advises leadership teams in complex transformations where alignment looks sufficient on paper, but execution risk is high. His work focuses on making trade-offs explicit, connecting strategy to measurable outcomes and designing operating rhythms that hold under pressure. He brings 17+ years of experience across large-scale transformations in banking & finance, telco, logistics and the public sector.