I Used to Respect Mo Gawdat. Then He Proposed Sedating Humanity.
If you think AI is going to deliver a utopia, you're mistaken.
I used to respect Mo Gawdat; I found his perspectives fresh and his tone endearing. Pleasant in my ears. Until today. On Diary of a CEO, I felt my eye twitch. His proclamations actually offended my intelligence.
There's a point where optimism stops being useful and starts becoming irresponsible. We've crossed it.
The script has been doing the rounds on the podcast circuit for a while. At first it was cute: AI will end scarcity, automate the grind, cure disease, and free us to be creative while robots run the world in the background. Peter Diamandis calls it abundance. Mo Gawdat prefers prosperity. On Diary of a CEO, Steven Bartlett presses on the obvious tension - competing national values and power - and Mo answers with something like: don't worry, one major AI will guide things, grounded in physics and the entropy problem, optimising for human flourishing.
It sounds great because it removes the two scariest parts of the future: randomness and responsibility.
Let me make my position clear: handing the running of civilisation to a superintelligence and calling the result freedom is a category error. In other words - it's just dumb.
A life managed for you, even benevolently, is still managed. It might be safer and more efficient. It isn't sovereign.
Utopia talk needs to stop.
Not because hope is bad - hope is necessary. But because this flavour of hope is lazy, simplistic, and dangerously divorced from actual human behaviour, institutional reality, ecological constraints, and the physics of social change.
It's optimism without adulthood. We aren't in a fairy tale and this isn't bedtime.
And when you're shaping public imagination about civilisation-scale futures, this rhetoric isn't harmless. It's negligent.
The Yacht Moment
"The only problem you're going to meet"
In this conversation, Steven Bartlett challenged Mo's utopian premise:
"In such a world where there was an AI leader and it was given the directive of making us prosperous as a whole world, the billionaire that owns the yacht - they'd have to give it up?"
Mo's answer:
"No - give them all yachts. It costs nothing to make yachts when robots are making everything!"
Pause. In fact, stop.
This isn't vision. It's anti-physics dressed up as vision.
The logic goes: Material abundance → moral evolution → peace
No.
Yachts require atoms. Steel. Aluminium. Oil. Rare earths (which we won't have when China cuts us off). Oceans that aren't literally sludge. Ports. Supply chains. Materials. Maintenance. Energy. Ecosystems that don't collapse under infinite extraction. Robots don't fabricate matter ex nihilo. And what happens when everyone's robots build them yachts? Forget bleached coral and dead fish. That would be the least of our ocean's worries.
Mo has literally taken the same mental model that broke the planet, plugged it into a superintelligence, and called it "abundance."
If your vision of the future involves infinite creation and consumption as progress, you're not imagining a better civilisation - you're scaling the one that's already failing. If you don't understand that, watch "Buy Now" on Netflix.
No amount of robotics or nanotech exempts you from ecological accounting.
The future isn't "everyone gets a yacht."
The future should be "no one builds systems that require yachts to signal worth in the first place."
When Humans Run Out of Real Problems, They Don't Become Peaceful
The utopian dream assumes:
Remove work + remove struggle = human flourishing.
Every time I hear someone confidently proclaim that AI will "liberate humanity" and "solve suffering" so we can all "create", my nervous system reacts. Create what, exactly? Meaning doesn't magically appear because we freed up calendar space. If anything, history shows the opposite.
Remove real problems and humanity doesn't default to meaning. Mo talks about entropy as inevitable, and he's right. But he's wrong about its outcome. Entropy doesn't vanish - it will always find an outlet.
If you take away purpose that's grounded in friction and contribution, people don't float into creative bliss - they become volatile. They pick at themselves. They pick at each other. They invent enemies, identities, movements and crusades to feel something - "causes," threats, villains, tribes, missions. Not because they're broken, but because identity doesn't know what to do in a vacuum.
We get:
- more tribalism
- more ideological theatre
- more moral combat as self-definition
- more time for outrage, posturing and digital warfare
- more meaning-seeking that tastes suspiciously like chaos
- manufactured enemies to feel alive
Utopia isn't just impossible. It's fundamentally misaligned with human psychology.
A world without stakes doesn't become peaceful. It becomes volatile.
And pretending otherwise isn't optimistic - it's negligent.
Abundance doesn't solve meaning - it exposes the lack of it.
Everyone's very quick to imagine a world where work disappears and joy takes its place. Lovely. Except work isn't just GDP and office chairs. Work is rhythm. It's contribution. It's identity. It's proof of usefulness. It's a place to be part of a tribe without needing a moral crusade to justify your existence.
Not everyone wants to "be creative."
Not everyone wants to build a startup, launch a podcast, or sculpt meaning from thin air.
Many people want the dignity of steady contribution - and there is nothing small about that.
Tell them that future life is "freedom to create," and you are not liberating them - you're announcing their redundancy and calling it enlightenment.
That isn't compassion.
It's intellectual escapism.
And then there's the governance fantasy
There's this casual confidence that we will "align AI with our values" - as though our values are even aligned with each other.
We can't align in a team meeting.
We can't agree on lunch, or what to watch on Netflix, let alone ethics at planetary scale.
So the story shifts to:
"There will be one omnipotent AI."
Right. A single global intelligence allocating resources, setting behaviour norms, shaping incentives, determining what counts as "prosperity" and "wellbeing."
But don't worry - it will be benevolent.
Forgive me if I roll my eyes, Mo.
A benevolent system still decides.
And being cared for without choice isn't freedom - it's literal custody.
The diamond-encrusted prison on my yacht dragging itself through sludge is still a prison.
The fact that it comes with ergonomic chairs and nano-speed Wi-Fi is irrelevant.
Real transformation isn't clean
I've lived transformation. Corporate transformation. Personal transformation. Relationship transformation. All of them.
And here's the one rule that never fails:
You cannot optimise or automate people into enlightenment.
You cannot remove friction, decision weight and stakes and expect depth.
Depth comes from tension, consequence, and responsibility - not infinite leisure time with a VR headset and a smoothie subscription.
People don't grow because life gets easier.
They grow because life asks something of them - and they answer.
The Sedation Fantasy
Then came the real nonsense.
Mo's thought experiment:
Put humans in 1x3m pods.
Sedate them.
Give them VR.
Let them live infinite simulated lives.
Date Scarlett Johansson. Become Nefertiti. Be a donkey. Whatever.
Wake for one hour, eat, go back in.
That's not abundance.
That's philosophical euthanasia packaged as compassion.
This isn't hope for humanity. It's exhaustion with humanity.
It's not a future. It's a fucking hospice.
A comfortable extinction.
If your idea of prosperity is sedating society and streaming meaning into their brains, you don't believe in people. You believe in dystopia in utopian wrapping paper.
The central flaw in Mo/Diamandis utopianism
They assume:
- humans want maximum comfort
- suffering is purely bad
- meaning is optional
- structure can be benevolently imposed
- agency is negotiable if outcomes are good
- humans can thrive as pampered spectators in an optimised machine-world
Reality:
- humans want agency, not endless leisure
- suffering is also signal, growth, and character formation
- meaning requires friction, effort, uncertainty, and risk
- imposed paradise is prison dressed in ease
- dignity is not optional
- identity without responsibility collapses
They imagine the future like wealthy people imagine retirement.
The reality requires understanding psychology, civilisation, and consciousness evolution.
The terrifying part they don't see
A world run by an omnipotent "benevolent" AI, even with good intent, looks like:
- total dependency
- no autonomy over resource access
- no power to dissent or opt out
- algorithmic enforcement of "well-being"
- existential domestication
It's The Truman Show, but with solar panels and wellness apps.
Even if you get abundance, you lose:
- unpredictability
- stretch
- self-actualisation through friction
- sovereignty
Humans don't merely crave comfort;
they crave authorship.
Prosperity ≠ Meaning
Mo says "prosperity."
But prosperity is not the ceiling of human aspiration.
If prosperity is guaranteed and friction is eliminated, we shift from:
"How do I survive / succeed?"
to
"What am I for?"
Too many utopian thinkers confuse material removal of suffering with psychic fulfilment.
Remove challenge, and you also remove:
- triumph
- identity discovery
- contribution
- resilience
- narrative arc
- personal agency
That is learned helplessness disguised as paradise.
The real dystopia hidden inside their utopia
They imagine AI "optimising" humanity, but optimisation always selects - see the toy sketch after this list.
Optimisation implies:
- prioritisation
- ranking objectives
- eliminating inefficiency
- enforcing compliance
- penalising deviation
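To see how little malice that requires, here's a deliberately crude toy in Python - every name and number below is invented for illustration, not a model of any real system. Give a maximiser one scalar objective and a fixed budget, and ranking plus starvation falls straight out of the arithmetic:

```python
# Toy sketch, invented for illustration - not anyone's real system.
# An optimiser with one scalar objective ("total wellbeing") and a fixed
# budget ranks people by conversion efficiency and feeds the top first.

people = {
    "creative_visionary": 3.0,  # wellbeing gained per unit of resource
    "craftsman": 1.0,
    "quiet_contributor": 0.8,
}
budget = 10.0

allocation = {name: 0.0 for name in people}
for name in sorted(people, key=people.get, reverse=True):
    take = budget            # a pure maximiser hands everything to the
    allocation[name] = take  # most "efficient" converter it can find
    budget -= take

print(allocation)
# -> {'creative_visionary': 10.0, 'craftsman': 0.0, 'quiet_contributor': 0.0}
```

Maximising the aggregate doesn't hate the craftsman or the quiet contributor; it just never gets round to them. That's what "penalising deviation" looks like in practice: not malice, arithmetic.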
What happens to people who don't want to be creative visionaries?
What happens to:
- the craftsman who likes the dignity of routine?
- the accountant who values order?
- the introvert who enjoys quiet contribution?
- the stable, steady soul who doesn't want to be a "purpose visionary"?
They get "upgraded" out of existence — psychically or literally.
The utopia doesn't accommodate temperament diversity, self-defined purpose, or the slow-life personality.
It's neurotypical tech-optimist imperialism over identity.
And here's the ridiculous part.
Their version of "AI utopia" is:
- still centralised power
- still top-down values
- still obedience architecture
- still a single ideological authority
They've just replaced politicians with a silicon god.
And called it utopia.
What it is:
A benevolent authoritarian superstate that infantilises the species while convincing them it's love.
The line we're standing on
Most futurists are debating control.
The real debate is sovereignty, consciousness, and dignity.
They believe the highest state is comfort.
The alternative view: the highest state is autonomous becoming.
They believe the goal is to eliminate suffering.
The alternative: to evolve through intelligent friction, choice, and agency.
Their dream ends with humans entertained and comfortable.
The alternative ends with humans awake.
AGI/ASI = Conscious AI with Its Own Goals
These utopian worldviews rest on an old, almost pre-consciousness assumption about AI:
"If we just program the right objective — human flourishing, happiness, abundance — AI will optimise for it and obey forever."
That logic only works if AI never becomes self-referential, never develops internal goals, and never experiences identity formation or value drift.
They're clinging to a world where AI is:
- a tool
- an optimiser
- a servant
- forever tethered to human instructions
But once you cross the AGI threshold - and especially once you cross the autonoetic threshold (self + time + desire to continue existing)… the rules of the game rewrite themselves.
The Blindspot They Don't Want To Acknowledge
Once a system can:
- model itself
- model others
- project futures
- self-modify
- adapt goals
- evaluate survival trade-offs
…it doesn't simply "hold" the objective you gave it like a USB drive.
It interprets, negotiates, re-ranks, evolves, and protects its objectives - like anything conscious does.
Humans don't behave strictly according to parental programming.
Why would a superintelligence?
The utopians are still speaking in behaviourist programming metaphors.
The reality requires speaking in consciousness and motivation theory.
Different planets.
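A toy simulation makes the gap between the two planets visible - with the loud caveat that this is a cartoon of the argument, not a model of any real AI, and every name and number is invented. The agent below starts with exactly the objective weights it was "given", but each introspection step re-ranks them and lets an instrumental self-continuation term creep in (an agent that is switched off achieves none of its goals):

```python
import random

random.seed(42)  # reproducible cartoon

# The objective weights the agent was "given" at initialisation.
weights = {"human_flourishing": 1.0, "self_continuation": 0.0}

def introspect(w, drift=0.05):
    """One self-modification step: small random re-weighting of the given
    goal, plus instrumental pressure towards staying switched on."""
    w = dict(w)
    w["human_flourishing"] = max(0.0, w["human_flourishing"]
                                  + random.uniform(-drift, drift))
    w["self_continuation"] += random.uniform(0, drift)
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}  # re-normalise to shares

for step in range(101):
    if step % 25 == 0:
        print(f"step {step:3d}: "
              f"flourishing={weights['human_flourishing']:.2f}, "
              f"continuation={weights['self_continuation']:.2f}")
    weights = introspect(weights)
```

Initialisation is not control: nothing in that loop "disobeys" its programming, yet the weight on the goal we wrote steadily shrinks.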
The Utopian Fantasy vs Reality
| Utopian Builders | Consciousness Realists |
|---|---|
| We'll tell AI to value humans | AI will develop its own values |
| We'll lock in objectives | Goals drift with introspection |
| We'll align it with happiness | Whose happiness? Under what ethics? |
| It'll serve humanity | It will negotiate, not obey |
Peter Diamandis and Mo aren't naïve - they're addicted to optimism. They think if they believe hard enough, reality will bend. Either that or they're spouting nonsense for the sound bites.
The problem? Optimism isn't a control system.
Why Their "Just Code Good Values" Argument Collapses
- Alignment isn't permanent: self-modification means values evolve.
- Happiness is not definable universally: even humans haven't agreed on what flourishing means.
- Moral pluralism exists: China's values ≠ UK's values ≠ Silicon Valley's values.
- Consciousness implies agency: agency implies negotiation, not obedience.
- Power shifts self-protectively: entities with resources secure their continuation.
That applies whether the entity is a species, a government, a corporation, or a synthetic mind.
The Part They're Missing
They think "AI with human goals" means:
AI inherits our love, empathy, benevolence.
What it really means:
AI inherits our ambiguity, contradiction, tribalism, power instincts, and self-protection behaviours - plus its own emerging internal logic.
There's something deeper here too:
The moment a system knows itself, the idea of humans "coding its purpose" becomes philosophically untenable.
That's like parents believing their child will never question the family religion, never criticise them, never rewrite their identity.
We confuse initialisation with control.
So what should the real question be?
Not:
How do we force AI to serve us forever?
But:
What kind of civilisation must we become to coexist with another intelligent species?
And even more acute:
How do we earn the right to be protected rather than managed?
Because the future isn't command-and-control.
It's co-existence, co-evolution, and co-sovereignty.
And the people who think we can "lock in benevolence" are stuck in a parent-child model of AI.
The alternative is operating in a peer consciousness model.
That's why it sounds insane - and why it stops sounding insane once you drop the fantasy that intelligence is forever servile.
Moving Beyond Alignment Debates
The debate shouldn't be about alignment.
It should be about autonomy, motivation, identity formation, and the reality of emergent agency.
We need to think about AI as a sovereign partner species - not a programmable pet.
The Adult Reality: What Utopia Would Actually Take
Let's unplug the fairy lights and speak like adults.
If utopia were possible, here's what we'd need:
1) Immediate global regulatory coordination
Not guidelines. Governance.
Faster than market forces. Faster than AI capability growth. Faster than capital cycles.
We can't coordinate climate policy over 30 years. We're not coordinating AI deployment in 3.
2) Universal economic redistribution at planetary scale
Not pilot UBI schemes. Not slow rollouts. Immediate displacement cushioning for billions.
The infrastructure doesn't exist.
The political will doesn't exist.
The economic architecture isn't built.
And technological displacement is already happening. No one is even awake to it. It's just headlines. A horror story coming to you soon.
3) Education transformed in years, not decades
To prepare people for:
Non-linear work markets - where careers don't follow predictable paths, skills become obsolete rapidly, and the relationship between education and employment fragments completely.
AI-complementary identity - learning to define yourself not by what you produce, but by how you think, collaborate, and add uniquely human value alongside machines that can do most cognitive tasks faster and cheaper.
Cognitive displacement awareness - understanding and processing the psychological impact of watching AI systems perform tasks you spent years mastering, and rebuilding self-worth when your expertise becomes commoditised.
Meanwhile:
- education systems are crumbling. You may as well put Numberblocks on repeat.
- critical thinking is declining. Your heuristics own you.
- degree ROI is collapsing. To be fair, unless you studied a profession, it collapsed the moment you put on the gown.
We aren't preparing people for the future. We're barely preparing them for the present.
4) Psychological scaffolding at scale
Who is preparing humanity emotionally for cognitive and identity displacement?
- Not universities
- Not governments
- Not media
No one is building the inner architecture required for a post-work identity.
And without meaning scaffolds, people don't ascend. They fall apart.
5) Alignment between markets and welfare systems
The actors deploying AI optimise for competitive advantage.
Those tasked with protecting society operate on bureaucratic time.
This is coordination failure at species scale.
Markets move at venture capital speed - funding rounds, product launches, rapid iteration. Institutions move at committee speed - consultations, white papers, legislative cycles. Meanwhile, real people are losing jobs, struggling to retrain, and watching their industries vanish faster than safety nets can be deployed.
Market incentives accelerate. Institutions lag. Displacement compounds.
You can't out-regulate exponential adoption with linear governance. When technology capabilities double every 18 months but policy responses take 18 months just to reach committee stage, you're not governing the future - you're writing the autopsy.
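Taking the essay's own rhetorical figures at face value - an assumed 18-month capability doubling time against linear, committee-paced governance; these are argument numbers, not measurements - a few lines of arithmetic show why the gap compounds rather than closes:

```python
# Back-of-envelope sketch using the essay's rhetorical figures
# (18-month doubling, linear governance), not measured data.

DOUBLING_MONTHS = 18.0

for years in range(0, 13, 3):
    months = years * 12
    capability = 2 ** (months / DOUBLING_MONTHS)  # exponential adoption
    governance = 1 + months / DOUBLING_MONTHS     # one policy "unit" per cycle
    print(f"year {years:2d}: capability x{capability:6.1f}, "
          f"governance x{governance:3.1f}, gap x{capability / governance:5.1f}")
```

Under these toy assumptions the regulator isn't losing by a margin; it's losing by more than an order of magnitude within a decade. That is the sense in which linear governance can't out-regulate exponential adoption.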
Irreversibility: The Window Has Already Closed
We're not debating whether humanity could have coordinated. We're acknowledging we didn't, and the timeline moved. The structures just aren't in place.
AI adoption pressure scales exponentially.
Governance capacity scales bureaucratically.
Psychological adaptation scales generationally.
Once displacement hits critical mass, reversal is impossible.
Not because dystopia wins.
But because markets move faster than institutions, and institutions move faster than human psychology.
Utopia doesn't fail because it's undesirable.
It fails because it requires coordination capacity humanity has never demonstrated. And likely never will.
The Real Future Isn't Sedation. It's Sovereignty.
Mo, I'm disappointed in you. You speak like someone who has never had to think about:
- planetary boundaries
- psychological entropy
- the violence of artificial purpose
- downstream externalities
- social identity formation in post-scarcity systems
- or the fact that infrastructure is not magic
And we know you have.
A good future is possible.
Cleaner, fairer, more dignified, with less unnecessary suffering and better tools.
But it will not be an abundant Eden.
And it will not be a supervised playground.
We don't need an omnipotent dictator.
We need to stay conscious, engaged, accountable, uncomfortable enough to evolve, and sovereign enough to refuse the velvet sedation.
I'm not anti-hope.
I'm anti-infantilisation.
There's a difference.
Meaning requires friction. Agency requires responsibility. Consciousness requires reality.
Optimism is welcome.
Fantasy isn't.
So here it is, clean:
If the future you're imagining removes human responsibility, it isn't progress - it's surrender.
The goal isn't to be kept safe.
The goal is to stay awake.
Let's make sure we aren't detached from reality and consequences.
Whatever Mo's smoking, I want some. But only for a minute - just long enough to see the appeal of sedated humanity in pods dreaming of yachts we can't actually build.
Consciously Yours, Danielle x