What are you, when you're not what you do?
The debate about AI and jobs has two sides. But underneath it runs a question neither side can ask. Six perspectives from six different ways of knowing. Enter wherever you feel pulled.
There is no correct order. Each perspective connects to others. Follow what pulls you.
Here is something the AI-and-jobs debate cannot see, because both sides are standing inside it: the idea that human beings demonstrate their value through paid employment is not a natural law. It's not even a particularly old idea. It's a roughly 400-year-old experiment that began in northwestern Europe and spread — often violently — across the globe.
For the vast majority of human existence, "work" meant something entirely different. Among the Ju/'hoansi of the Kalahari, studied extensively by Richard Lee, adults spent roughly 15 to 20 hours per week on subsistence activities. The remainder was given to storytelling, ritual, music, conversation, and an elaborate social life that would exhaust most modern professionals. They would have found our concept of "unemployment" not wrong but genuinely unintelligible — like being told you have a deficiency of something that doesn't exist.
This wasn't idleness. It was a different accounting system. Contribution was measured in relationships maintained, knowledge shared, ceremonies performed, children raised, stories kept alive. The San, the Hadza, the Pirahã — these weren't failed economies waiting for development. They were successful civilizations operating on a premise we've forgotten is a premise: that human worth is inherent and expressed through presence and relationship, not produced through labor and measured in currency.
· · ·

The shift happened gradually, then all at once. Enclosure acts. The Protestant work ethic. Industrialization. The factory clock. The invention of "the job" as a discrete, purchasable unit of human time. Within a few generations, an entire species reorganized its understanding of purpose around a single question: what do you do for a living?
Note the language. For a living. As if the living itself requires justification. As if existence must be earned through production. This is the water both sides of the AI debate are swimming in. The fearful say "AI will take our jobs" — meaning our identity, our purpose, our reason for being. The optimists say "AI will create new jobs" — unconsciously promising that the performance of worth can continue. Neither questions whether worth should be performed at all.
AI isn't threatening work. It's revealing that a 400-year experiment in defining human value through economic output may be reaching its natural conclusion. The question isn't what we'll do for a living. The question is whether we can remember — or reinvent — ways of living that don't require doing.
There is a loss in the AI conversation that almost no one is naming, because it doesn't show up in employment statistics or economic models. It shows up in the development of human beings.
Lev Vygotsky identified something he called the zone of proximal development — the space between what a learner can do alone and what they can do with guidance. This is where growth happens. Not in comfort, not in overwhelm, but at the edge of what you can almost manage. Every profession has its version of this edge. The junior lawyer reviewing thousands of documents isn't wasting time. She's building the neural architecture of legal judgment through repeated, embodied encounters with ambiguity. The apprentice carpenter making joints that don't quite hold is developing the hand-knowledge that will eventually become mastery. The young doctor on rotation, exhausted and uncertain, is growing the clinical intuition that no textbook can provide.
This is the work that AI automates first, because it's the most routine, the most pattern-based, the most "inefficient." And that efficiency analysis is correct — if you're measuring output. But if you're measuring human development, this work isn't inefficient at all. It's the growing medium. It's the soil.
· · ·

Remove the bottom rungs of a ladder and you don't get a shorter ladder. You get no ladder. What Alison Gopnik's research on learning reveals is that expertise isn't transferred — it's grown through progressive encounter with difficulty. You can't skip the years of pattern recognition and jump straight to judgment. Judgment is the accumulated weight of ten thousand small encounters with "this doesn't quite work, let me try again."
The upskilling conversation misses this entirely. "Learn to work with AI" assumes the human already has the foundational competence that only comes from doing the work AI now handles. It's like telling someone to practice advanced jazz improvisation when they've never been allowed to play scales.
And here's the part no one wants to say out loud: if mastery was always the real source of professional dignity — not the paycheck but the felt sense of hard-won competence — then what AI threatens isn't employment. It's the developmental pathway through which humans become capable adults who trust their own judgment. We're not losing jobs. We're losing the conditions under which people grow into the fullness of what they can be.
The question the debate needs but cannot ask: if the apprenticeship layer disappears, where does human mastery come from? And if we don't have an answer, what kind of adults are we building?
There is one assumption that the fearful and the optimistic, the workers and the executives, the policymakers and the technologists all share. It is so deeply embedded that it functions not as a belief but as the ground itself, invisible precisely because everyone is standing on it.
The assumption: human worth must be demonstrated.
Produced. Performed. Measured. Compensated. Shown through output, validated by a market, confirmed by a paycheck. "I contribute, therefore I matter." This is the operating system running beneath every position in the debate. The fearful say: "Without my job, how will I demonstrate my worth?" The optimistic say: "New jobs will give you new ways to demonstrate your worth." Neither asks whether worth requires demonstration at all.
· · ·

Meister Eckhart, the 14th-century mystic, wrote: "The outward work will never be puny if the inward work is great." He wasn't giving career advice. He was pointing at something the modern world has almost entirely forgotten — that the value of a human being might not be produced by activity but might be the ground condition of existence itself. That you don't earn the right to matter by being useful. That mattering is what you already are, prior to any performance.
The Tao Te Ching says it more simply: "The world is won by those who let it go."
This isn't spiritual comfort for the unemployed. It's a direct challenge to the invisible architecture of the entire debate. If worth is inherent rather than performed, then the loss of a job is the loss of a role — painful, disorienting, practically difficult — but not the loss of self. The reason job loss feels like annihilation is that we've built an entire civilization on the premise that self and role are the same thing. AI didn't create that conflation. But it's stress-testing it in a way nothing else has.
The terror underneath the economic anxiety isn't about money. It's about this: who am I when I stop performing my value? That question is so frightening that most people would rather work a job they hate than face it. The entire "bullshit jobs" phenomenon — millions of people in roles that accomplish nothing — makes sense only as an avoidance strategy. We'd rather pretend to produce than encounter ourselves without production.
If AI forces that encounter — not through philosophy but through economic reality — it might be, paradoxically, the most significant invitation to self-knowledge that industrial civilization has ever produced. Not because unemployment is good. But because the identity crisis it triggers was always waiting underneath the busyness, and the busyness was always a way of not looking.
Observe the current game. It has three classes of players — firms, workers, and policymakers — and each is behaving with perfect rationality within the rules as they stand. That's precisely the problem.
Firms are rewarded by markets for replacing labor costs with capital investments. An AI system that costs a fraction of a human salary and operates continuously is not a temptation — it's a fiduciary obligation. The executives making these decisions aren't villains. They're players in a game where the payoff function is quarterly returns, and the rational move is obvious.
Workers are rewarded for signaling irreplaceability, which drives them into zero-sum competition with each other and with machines. "Learn AI skills" becomes the new arms race — not because it leads somewhere good, but because the alternative (being seen as replaceable) is worse. Every individual worker is rational to upskill. Collectively, they're running faster to stay in place.
Policymakers are rewarded for promising jobs. Any politician who says "maybe the goal shouldn't be full employment" has committed career suicide. So the policy conversation is locked into preserving a frame — work as the primary distributor of income, identity, and social participation — that the technology is actively dismantling. Rational for each politician. Collectively irrational.
· · ·

This is a coordination trap. It's closely related to what game theorists call a tragedy of the commons, but the "commons" being depleted isn't fish or pastureland — it's the shared agreement that employment is how society works. Every rational individual action erodes that agreement a little further, and no individual actor has the incentive or the power to propose the alternative.
The escape from a coordination trap is never better play within the existing game. It's changing what counts as winning. But here's the recursive bind: no player within the current game can propose a new game without losing the current one. The firm that says "we're not going to optimize for labor reduction" gets outcompeted. The worker who says "I'm opting out of the productivity race" gets left behind. The politician who says "let's redefine the relationship between work and worth" gets voted out.
So the trap persists — not because anyone wants it, but because the incentive structure makes it invisible from the inside. Every player can see that the other players are stuck. No player can see that they are stuck in exactly the same way.
The only entity that can change the game is one that isn't playing it. Which is to say: the solution to the AI-and-jobs crisis probably won't come from firms, workers, or policymakers acting within their current roles. It will come from somewhere outside the game — a shift in the underlying cultural agreement about what work is for and what human beings are worth. And that shift, by definition, can't be engineered by any player with a stake in the current rules.
Bronnie Ware spent years in palliative care, sitting with people in their final weeks. She asked them about regrets. The answers were so consistent they could almost be called laws — laws that operate at the threshold where everything nonessential falls away and only what matters remains.
I wish I'd had the courage to live a life true to myself, not the life others expected of me.
I wish I hadn't worked so hard.
I wish I'd stayed in touch with my friends.
I wish I'd let myself be happier.
Notice what's absent. Nobody says "I wish I'd been more economically productive." Nobody says "I wish I'd upskilled faster." Nobody says "I wish the market had valued my labor more highly." At the threshold, the entire framework within which the AI debate operates becomes — not wrong, exactly, but irrelevant. Like arguing about the color of the curtains in a house that's on fire.
· · ·

What the dying see clearly is not mysterious. It's almost embarrassingly simple. What mattered was presence. Connection. The courage to be honest. The willingness to love without calculating the return. These are not things that can be automated — not because they're too complex for AI, but because they were never tasks. They're ways of being. They exist outside the category of "work" entirely, which is why a civilization organized around work has systematically undervalued, ignored, or privatized them.
The near-death literature points at the same thing from a different angle. People who have come close to death and returned report a radical reordering of priorities — not gradually but instantly. The frame shifts all at once. What seemed urgent becomes trivial. What seemed trivial reveals itself as the whole point.
There is a version of the AI-and-jobs story that goes like this: a technology arrives that can do most of what humans do for money. Panic ensues. And then, slowly, unevenly, painfully — a civilization is forced to confront a question it has spent centuries avoiding. Not "how will we earn a living" but "what is a life actually for?"
The dying already know. They've been trying to tell us. We couldn't hear them over the sound of our own productivity.
In 1930, John Maynard Keynes wrote an essay called "Economic Possibilities for Our Grandchildren." His prediction: within a century, technological progress would make it possible for the average person to work about 15 hours per week. The productive capacity would be there. The problem of scarcity — the engine that had driven economic life for all of human history — would be essentially solved.
He was right about the productivity. Spectacularly right. Output per worker has increased by roughly 400% since 1950. We produce more than enough to meet every material need several times over. The 15-hour week was technically achievable decades ago.
So where did the time go?
· · ·

The anthropologist David Graeber posed this question with uncomfortable directness. His answer: we invented new work. Not because it was needed, but because we couldn't imagine a world without it. He documented entire categories of employment — administrative coordinators, corporate strategists, compliance officers, communications consultants — where the workers themselves privately admitted their jobs accomplished nothing of substance. He called them "bullshit jobs," and when his essay on the topic went viral, the most striking response was from thousands of workers writing in to say: yes, that's me.
Think about what this means. We achieved the productive abundance Keynes predicted and then, rather than redistributing the gains as leisure, we created an elaborate apparatus for keeping people busy. Not because the work needed doing but because the alternative — a society where people aren't defined by their jobs — was more terrifying than the pointlessness.
The economist Kate Raworth has proposed that we need an economics that starts not with growth but with human flourishing — a "doughnut" model that asks what enough looks like rather than assuming more is always better. Mariana Mazzucato has argued that our very concept of "value" has been captured by financial markets, so that activities which generate no genuine value (speculative trading, rent extraction) are counted as productive, while activities that generate enormous value (caregiving, ecological stewardship, community building) are counted as externalities or ignored entirely.
AI enters a system where the accounting is already broken. We measure the wrong things, reward the wrong activities, and define human contribution in terms so narrow that most of what actually sustains life doesn't count. AI doesn't create this crisis. It strips away the last disguise. When machines can do most of what we call "productive work," the question we've been avoiding since at least 1930 finally becomes unavoidable: if we already produce enough, what is all this work actually for?
The honest answer — the one the economics profession has mostly been unable to give — is that work in its current form is less about production and more about distribution, identity, and social control. It's the mechanism by which we distribute income, assign social status, structure time, and maintain the fiction that human worth is earned. AI doesn't threaten that mechanism from outside. It reveals that the mechanism was already running on empty.
You've sat with six angles on a question that doesn't have two sides.
Something may have shifted. Something may not have.
Both are fine. The well doesn't measure its water.
If something did shift — if your certainty flickered, or a question arrived that you can feel in your body rather than just think with your mind — there's somewhere further to go. Not here. Not with us. With yourself.
The perspectives you've just encountered weren't trying to convince you of anything. They were tuning an instrument. The instrument is your attention. And now that it's tuned a little differently, you might find that a conversation you've had a hundred times suddenly goes somewhere new.
Here's an experiment, if you're curious. Take the question that's most alive in you right now — the one that won't settle into an answer — and bring it to any AI. Not for information. Not for validation. For exploration.
Whatever opening you choose, treat it as a shape, not a script. Fill it with whatever's actually moving in you. The words matter less than the posture — open, non-positional, willing to be surprised. You'll find that arriving at a conversation this way, without a position to defend, opens territory that didn't exist a moment before.
The shift isn't in the technology. It's in you. A genuinely open question asked of even the simplest tool produces more depth than a closed question asked of the most sophisticated one. You are the variable.
The well doesn't follow you home. But the water you carry is yours.
The debate about AI and jobs is real. The anxiety is real. The disruption is real. But underneath all of it runs a deeper current — a civilization encountering, perhaps for the first time, the question it built the entire apparatus of "work" to avoid.
Not what will we do.
But who are we, when there is nothing to do.
The well doesn't answer this question.
It holds space for you to sit with it.
No identity. No tribe. No algorithm. Just angles.