We're Building Our Purpose on Quicksand
Stop Asking What Only Humans Can Do
Every day, I see people making the case that ability X or feature Y is ‘uniquely human’, and then proceeding to treat it as something we can:
a) use to make fun of AI,
b) pin our hopes of future employment on, or
c) console ourselves with: whew, we still have X or Y.
Sometimes all three.
And then, every few months, I watch another one of these “uniquely human” capabilities fall to machines. It’s not like this is a new phenomenon. We’ve seen it repeatedly over the past couple of hundred years.
Almost thirty years ago, it happened to chess, of which Garry Kasparov said:
Thousands of years of status quo human dominance,
a few decades of competition,
a few years of struggle for supremacy.
Then, game over.
Then Go; then writing; then the appearance of empathy. What’s next?

Each time, there’s a familiar pattern: initial denial, quite a bit of anger, gradual if reluctant acceptance, and then a hasty retreat to the next defensive position. Take the Turing test - for decades it was seen as the gold standard for machine intelligence. Then LLMs started passing it, and suddenly the discourse shifted: ‘Well, that’s not real thinking, that’s just pattern matching.’ We’re retroactively redefining what we supposedly meant all along.
“Ah, but humans are still uniquely good at this thing,” we say, pointing to whatever capability hasn’t quite fallen yet.
We’ve been playing this game for decades, and somehow we still haven’t learned the lesson.
Instead of recognizing the pattern, we’re doing it again – just more desperately and urgently this time. We scan the horizon for the remaining things AI cannot do, whether it’s complex reasoning, genuine creativity, moral judgment, or authentic emotional connection. And when we can’t find a genuine gap, we claim that, well, AI only seems to do these things rather than actually doing them.
We find these ostensibly “uniquely human” capabilities and extrapolate confidently: these will be the domains where the future of human work and purpose will be found.
Interestingly, we’ve never had this problem with animals.
Eagles fly better than we ever could. Dogs perceive smells in dimensions we can’t even imagine. Cheetahs run faster, octopuses solve spatial puzzles differently, whales navigate vast oceans with senses we don’t possess.
We don’t have existential crises about any of this. Why? Because we never tied our sense of human worth to being physically superior to other creatures. We just... coexist – or, I guess, hunt them to extinction, but mostly I’d like to think we appreciate what they can do, we do what we can do, and life goes on.
But the moment something we created might plausibly think, create, or judge better than us? Blasphemy! Suddenly it’s an existential threat.
Why is this? I’m sure part of the reason is that we’ve internalized our economic system so deeply that we’ve confused market value with human worth. For most of us, our livelihood depends on being economically productive in ways that capitalism rewards. We’ve spent our entire lives in a system where ‘what you do for money’ and ‘who you are’ became the same question; in some cultures even more so than in others. Your job became not just how you pay rent but your identity, your social position, your sense of contribution to society.
So when AI threatens to do our work more efficiently, it’s threatening more than our paychecks; it’s threatening our identity. We panic about the potential of AI creativity not because creativity itself is under threat, but because ‘creative professional’ might stop being an economically viable identity. The existential crisis is actually an economic crisis that we’ve mistaken for a philosophical one.
We’ve let capitalism define the terms of human purpose, and now we’re scrambling to find purpose within those terms as the ground shifts beneath us. The result? We’ve convinced ourselves that human worth flows from being the best at thinking, creating, deciding. Which means we’re playing a game with only one possible outcome: losing.
If your sense of purpose is built on competitive advantage, you’ve already lost. You’re just waiting for the timer to run out. It’s only a matter of time before the thing you’ve staked your identity on becomes something another entity does more efficiently, more quickly, or at greater scale.
You’ve built your house on quicksand and you’re surprised the foundation keeps shifting. Of course you’re angry at the situation, and in denial, and want the whole shift to just go away.

This isn’t how purpose works in any other part of human life, so why do we accept it here?
Think about it: you don’t love people because no other entity could love them more. You don’t cook dinner for your family because you’re uniquely capable of combining ingredients and nobody else could possibly do it. You don’t pursue curiosity about the world because you’ve cornered the market on asking good questions. The amateur chef doesn’t need to out-cook the head chef of a Michelin-starred restaurant to find meaning in cooking. The weekend runner doesn’t need to beat Kipchoge to value running. The hobbyist woodworker doesn’t need to be the best carpenter in the world for woodworking to matter to them.
We understand this instinctively in these domains. So why do we abandon this understanding the moment we talk about human purpose in the age of AI?
I think part of the confusion comes from conflating two entirely separate questions. On one hand, there’s the economic question: what will companies pay humans to do? On the other hand, there’s the existential question: what makes human life meaningful?
These are not the same question, even though we keep treating them as if they were.
Yes, capitalism will automate whatever is more efficient.
Of course it will.
If there’s a cheaper, faster way to do something, the market will find it and use it. That’s how the system works – this is how we designed it to work, and suddenly we’re all upset about it? I don’t like it, but this is a fact about our economic system, not a fact about human purpose or meaning.
What companies will pay for and what makes life worth living can be entirely different categories. Arguably they should be entirely different categories. The market values efficiency, scalability, competitive advantage. But human flourishing requires agency, meaning, direct engagement with what matters to us. These occasionally overlap, but they’re not the same thing.
If you hate the economic logic - if you think it’s wrong that human worth gets measured by market value, a statement that shouldn’t really be all that contentious - then the fight is to change the economic system, not to contort your sense of human purpose to fit market demands.
And let’s be direct about what change looks like: the way things are going, at minimum we need something like universal generous income. Not as welfare, but as infrastructure - the economic foundation that lets people pursue purpose independent of whether that purpose happens to be marketable in this particular configuration of capitalism. We can even keep, if we so want, capitalism, markets, innovation, entrepreneurship.
We could even keep billionaires - and when you really think about it, UGI (Universal Generous Income) would protect the billionaires. Nick Hanauer forecast they’d otherwise be facing pitchforks, and I am pretty sure most of them, while probably also itching to try out their apocalypse hideouts in New Zealand, would actually want to avoid the pitchforks.
UGI is just pragmatic. We just – and I’m fully conscious of just how much work goes into this ‘just’ – need to tweak the tax structure and redistribution so that human flourishing isn’t held hostage to having a job that a machine can’t do more cheaply. If – and, as it appears, when – we’re dead-set on building an economy where human labor becomes increasingly optional for production, we need an economic system where a significant part of human income is independent of human labor.
But even with better economic infrastructure, we still need to get the philosophy right.
Universal income isn’t permission to be purposeless - it’s the foundation that lets us build purpose on something more stable than market demand.
What do humans need to do to live meaningful lives?
There is a sea of philosophers who are better at answering that than me, but I think part of it is about agency - the capacity to choose what we engage with and how we engage with it. It’s about direct participation in our own lives. It’s about the difference between having things done for you versus doing them yourself, when doing them yourself is what gives the activity meaning.
This is where we need to rebuild the foundation: on the trinity of agency, autonomy, and purpose.
Agency - the capacity to act, to choose what we engage with and how we shape our engagement. Not just having options presented to us by an algorithm, but possessing the judgment to decide which path serves our flourishing – and the freedom to opt out of such algorithms altogether.
Autonomy - the ability to self-govern, to maintain independence of thought and action even within increasingly automated systems. The difference between using tools and being used by them. The preservation of our capability to function when the systems we depend on fail or mislead us.
Purpose - the meaning we derive not from being uniquely capable, but from direct participation in activities that matter to us. The intrinsic value of doing, regardless of whether someone or something else could do it “better.”
These three aren’t luxuries or philosophical abstractions. They’re the foundation of human flourishing. Without them, we drift toward atrophy - skills unused become skills lost. We drift toward alienation - lives lived at one remove from direct experience. We drift toward a slow hollowing out of what makes existence feel worth living.
We don’t need to be uniquely good at creativity to find meaning in creative work. We don’t need to be the best at empathy to value showing up for other people. We don’t need to monopolize judgment to make decisions worth making.
I’m not saying that the machines are better than we are at those things today, or even that they will be. But I am saying that IF they ever are, it would be better for us to have built our foundations of purpose on something more solid, or we could find ourselves dangerously adrift as a species.
We know from history that large populations of idle young men are dangerous to societal stability; now imagine that dynamic scaled up to entire populations being unmoored from the structures that gave their lives meaning and direction. If we automate work before fixing this, before rebuilding the foundations of human purpose, we’re not heading towards some peaceful post-scarcity utopia.
What we need is to preserve the agency to choose what we engage with directly, and the judgment to know when that direct engagement matters. We need to maintain the skills that let us participate in our own lives rather than just spectate. We need to remember that purpose comes from doing, not from being the best at doing.
The “uniquely human” crowd is accidentally arguing for their own obsolescence; if your worth is predicated on being the best or unique at something, and something better comes along, then yes - by that logic, you’re finished.
But if your worth is grounded in agency, in the choice to engage, in the meaning you derive from direct participation in your own life? That’s not threatened by something being more capable than you. It’s orthogonal to it.
Stop looking for what only humans can do, or can do better than something else, or do ‘more genuinely’.
The quicksand was always unstable. Instead of staking our identity on the next ‘uniquely human’ capability, we should look for solid ground - and ensure everyone can reach it.

Excellent to see you tackle this, Sami. You landed in a very similar place to my most recent essay, also on the conflation of market value and human worth.
And this:
“I’m not saying that the machines are better than we are at those things today, or even that they will be. But I am saying that IF they ever are, it would be better for us to have built our foundations of purpose on something more solid, or we could find ourselves dangerously adrift as a species.”
What might that “something more solid” be, do you think? What are the moorings of purpose? Is it just a Nietzschean self-authoring exercise or something else?
I've pondered what humans will still be needed for (in a productive sense) as machines get better and better. Two things I've landed on that are not skill-based: (i) people can take accountability for things, and are able to receive punishment if they fail, and (ii) people sometimes prefer to interact with other people for particular tasks.