The Essays
Chapter 5
Work Quake: Navigating Labor Shifts and the Pursuit of Meaning

Future of Human Resilience in the AI Age

Featured Contributors to Chapter 5: The 17 essay responses on this page were written by James Hutson, Stephen Downes, Matt Shumer, Scott Santens, Terri Horton, Michael Wollowski, John Laudun, Thomas Laudal, Jonathan Taplin, Jonathan Kolber, Nigel M. de S. Cameron, Wedge Martin, Charlie Kaufman, Pedro Lima, Josh Tucker, Sam Lehman-Wilzig, Chris Shipley. (Their essays are all included on this one scrolling web page. They are organized in batches with teaser headlines designed to assist with reading. The content of each essay is unique; the groupings are not relevant.)


The first section of Chapter 5 features the following essays:

James Hutson: Expect sharp social and economic dislocation. ‘Without government intervention … there will be widespread unemployment.’ Resiliency will require much more than technical training.

Stephen Downes: ‘If there is ongoing need for leaders, educators, professionals, this will be a sign that the AI revolution has ultimately failed and will signal a long-term limitation in the aspirations of humanity as a species.’

Matt Shumer: The potential for mass unemployment isn’t just ‘an interesting dinner conversation about the future. That future is already here. It just hasn’t knocked on your door yet. It’s about to.’

Scott Santens: There will be a growing sense that life is becoming more luck-driven. ‘A society becomes brittle when people feel like one bad month can ruin them and that no amount of effort guarantees stability.’

Terri Horton: Addressing job displacement, contraction and loss cannot be reduced to simply telling workers to upskill and learn AI or be left behind. A deeply human-centered societal response is needed now.

Michael Wollowski: Where will jobless people turn to nurture their self-worth? Maybe to spiritual practices; maybe to learning from other cultures; maybe toward acting to enrich their friendships.

John Laudun: ‘Ordinary people are not embracing AI in hopes of developing co-intelligence but knuckling under to the pressures of the job market,’ which is dominated by AI-forward thinking.

Thomas Laudal: If we allow AI to substitute for humans’ contributions in all areas of life, it will take over everything. Humans will give up; AI will say ‘checkmate.’ It will win in quality indicators and in labour productivity.



James Hutson
Expect sharp social and economic dislocation. ‘Without government intervention … there will be widespread unemployment.’ Resiliency will require much more than technical training.

James Hutson, head of human-centered AI programming and research at Lindenwood University and co-author of “A Framework for the Foundation of the Philosophy of Artificial Intelligence,” said, “I believe AI systems will play a much more significant role in shaping decisions, work and daily life not because of a speculative future breakthrough, but because algorithmic systems already curate, influence and in many cases dictate the conditions under which contemporary life operates. Navigation systems decide routes, recommender systems shape cultural consumption, communication platforms filter visibility and attention, workplace software triages labor and performance and automated decision systems increasingly influence hiring, credit, insurance, healthcare access and public services.

“The question is no longer whether AI will shape human agency, but how quickly its role will expand from assistive infrastructure into an organizing logic of social, economic and cognitive life.

“Based on more than 100 empirical and applied studies I have conducted across education, workforce development and organizational change, the near-term societal response to this expansion will be profoundly disruptive. My findings consistently align with broader national and international research: societies are currently split into roughly three groups.

  • About 30% of people hold a generally positive view of AI and are actively attempting to adapt through experimentation, upskilling and reframing their professional identities.
  • Another 30% are uncertain and ambivalent; their views are shaped less by direct experience and more by mediated narratives, particularly news coverage and social discourse that oscillates between hype and fear.
  • The final 30% interpret AI as an existential threat, not only in terms of job displacement, but as a crisis of identity, purpose and social value, and they are actively refusing to engage in reskilling or adaptation.


“This distribution matters because large-scale technological transitions do not unfold evenly. When adaptation is uneven, advantages compound for those who engage early while disadvantages accumulate for those who disengage. In the current context, AI fluency accelerates productivity, employability and bargaining power, while refusal or delay often results in rapid marginalization as entry-level and routine cognitive work is restructured or eliminated. Without deliberate intervention, this divergence will widen existing inequalities across class, region, age and educational background. In my assessment and increasingly in the data, the risk is not a smooth transition but a sharp social and economic dislocation within the next five years, approaching 2030.

“Critically, I do not believe market forces alone will absorb this shock. Without government intervention comparable in scale and intent to COVID-era responses, including temporary income support paired with accessible, large-scale upskilling and reskilling programs, widespread unemployment and economic contraction are likely outcomes.

“Many workers will simply not have the financial runway to retrain while meeting basic living expenses. Early indicators of this pattern are already visible in sectors experiencing automation-driven restructuring without parallel investment in human transition pathways. Economic depression in this sense would not necessarily appear as a single global collapse, but as cascading regional and sectoral downturns driven by reduced labor demand, diminished consumption and social instability.

“Resilience in this environment requires capacities that go far beyond technical training. Cognitively, individuals must develop systems thinking, statistical and epistemic literacy and metacognitive awareness to understand when and how to rely on automated systems without surrendering judgment.

“Emotionally, resilience depends on tolerance for ambiguity, identity flexibility and confidence in continuous learning rather than static expertise. Socially, resilience requires cross-disciplinary collaboration, strong mentoring networks and institutional structures that support collective adaptation rather than individual competition. Ethically, societies must cultivate norms and governance frameworks that prioritize transparency, accountability, privacy and recourse in automated decision-making.

“Education sits at the center of this transformation and current models are insufficient. Educational systems must abandon rigid silos and the assumption that narrow specialization alone guarantees stability. Instead, curricula should prioritize curiosity, creative transfer, growth mindset and adaptability as core learning outcomes. We are entering an age of generalists, not in the sense of superficial knowledge, but in the ability to integrate domain expertise with evolving tools, collaborate across disciplines and reconfigure skills as conditions change. This shift represents a philosophical reorientation of education away from content mastery toward lifelong capacity building.


“At the societal level, fostering resilience will require a coordinated effort among governments, media and the entertainment industry to counter fear-driven narratives and to demonstrate credible, lived examples of positive adaptation. Media representations shape emotional readiness for change and persistent framing of AI as either salvation or apocalypse undermines productive engagement. Balanced narratives that acknowledge real risks while illustrating pathways for meaningful human contribution are essential to maintaining social cohesion during transition.

“New vulnerabilities will inevitably emerge alongside new capabilities. Hyper-personalized persuasion, synthetic identity fraud, biased automated screening and cognitive offloading that erodes critical skills all represent serious risks. Coping strategies must therefore be taught explicitly, including verification practices, slow-thinking checkpoints for high-stakes decisions, collaborative accountability structures and clearly defined human-in-the-loop roles that preserve responsibility rather than obscure it.

“In the end, AI-driven transformation is not a future possibility but a present condition. The scale of disruption ahead is not predetermined by technology itself, but by the choices societies make now regarding support, education, governance and narrative framing. If resilience is treated as an individual burden, failure will be widespread. If resilience is treated as a collective project, grounded in human development and systems-level coordination, the transition can expand opportunity rather than foreclose it.”



Stephen Downes
‘If there is ongoing need for leaders, educators, professionals, this will be a sign that the AI revolution has ultimately failed and will signal a long-term limitation in the aspirations of humanity as a species.’

Stephen Downes, expert with the Digital Technologies Research Centre of the National Research Council of Canada, wrote, “It’s important to understand there are multiple ways AIs can play a role in our daily lives: As a stand-alone service, like ChatGPT; as an add-on service, like Copilot in Microsoft Word; and as an integrated service, like adaptive cruise control in a car. Right now, there’s a lot of visibility for the first two, but in the long run integrated AI services will be the majority use case and the general rule of thumb will be: what people don’t see won’t bother them.

“After all, there are many more ethically objectionable practices hidden and integrated into many other aspects of our lives, from child labour producing our electronics, to the clear-cutting of rainforest to produce our beef and to dictatorships producing our oil. There is some unrest, but by and large global society accepts these as realities and there’s no reason to believe objections to integrated AI will be any stronger.

“What people will see is that digital services, especially, become faster, more responsive and more personal. Instead of buying and downloading an application (like a word processor or RSS feed reader), for example, we will just ask our computer to make one for us. Or – even more behind-the-scenes – our cars will optimize power consumption to match our driving style. (AI will also be behind the scenes managing the power grid, but we won’t even think about that.)

“The negative response will mostly come from older people and will mostly come from those whose livelihoods are impacted by AI. It will be similar to the objections people voiced to using automated tellers, or credit and debit cards instead of cash. It will become evident that resistance to AI is inconvenient, unhelpful and unwelcome. Meanwhile, on the other side, people will find themselves unburdened from legacy systems and able to use digital technology in ways previously limited to tech gurus and enthusiasts.


“There’s a lot of discussion about how we will be able to preserve our skills, resilience and even our sense of self-worth in the digital age. But it won’t be a problem. Humans adapt and will take to our newly enhanced capabilities like fish to water. There is a lot of worry today about how to teach people how to use AI effectively and ethically, just as people were in the past concerned about calculators, spell checkers and driver-assist. But those who grow up in a world surrounded by AI systems will find new ways to be effective and ethical.

“Probably the most significant thing we will need to learn will be to describe what we want clearly. This is not because computers are illiterate and will only obey the most precise instruction, but because there are so many possible ways to satisfy any request the computer will want more than a vague indication of ‘what would be nice.’ There will be some places where menus are offered, but the situation will be more like a restaurant that can make any dish known to humanity. Unless we want to go through a (very annoying) series of questions and answers, we’ll learn to just state exactly what we want.

“While today there are concerns about personal privacy and security, in the future we will be much more willing to share information about ourselves to avoid ambiguity in our requests. So, for example, we will feed our address into the system so we can say ‘deliver it to my home’ to a parcel service, or ‘take me home’ to the automated cab service late at night. What will be interesting is if humans start communicating with each other that way. It is likely that the rules of politeness will change, to the discomfort of older people and as second nature to the young. IDK, YMMV. (I don’t know; your mileage may vary.)

“A lot of our concerns will be more practical. Some people might have their own AIs that manage most aspects of their lives, while others may access AI services through cloud providers. There will be issues with AI compatibility. If our fridge can’t talk to our power system, we may have a problem.

“AI providers will very likely induce artificial scarcity and embrace rent-seeking business models. Parents will find that their schools require one AI service (for instance, GG), while their neighbourhood telecom supports another service, such as NC. And their cars might not run at all without monthly payments to MM (all the major services will be referenced only with two- or three-letter acronyms, the inevitable outcome of the increasing brevity in trademarks).

“The traditional blue-collar and white-collar distinctions will evaporate with the elimination of most white-collar work by AIs and the elimination of most blue-collar work by robots. We can break down future employment categories into three major branches: those who care, those who service and those who experience. Here’s what future work will look like:

  • “Those who care: Today we think of these as people who provide high-touch human services, such as nursing, teaching, hairdressing, dentistry and the like. Some of these functions will be automated (even today, I can watch my new tooth being printed in the dental office; in the future, we’ll use biotech to just grow them). But the care function – what we might think of metaphorically as ‘hand-holding’ – will be essential to help people through stressful events. This will require core skills like empathy and communication.
  • “Those who service: Today we think of people like plumbers, garage mechanics and hardware technicians (drive your car into one end, it comes out the other end completely serviced, like a car wash, but for all the bits that require replacing, tightening, lubricating, etc.). Again, most of these functions will be automated, but there will continue to be a need for people to do the things that haven’t yet been automated. People will constantly predict that it will eventually all be automated, but it never will be, not even on the software side.
  • “Those who share their experiences: Today we call such people ‘celebrities’ and ‘influencers,’ but there will be an ever-greater need for people to have new experiences to produce new ‘content’ (as we’ll call it) to enable AIs to keep learning and for the rest of us to react to. In many ways, experiencers will be aspirational, much like professional athletes are today, but there will be far more opportunities to enjoy similar experiences first-hand. Experiencers will test new ideas, experiment with new ways of life and living, explore and create.

“The rise of these three classes of employment will be resisted, and perhaps even derided, by those who make their livings in the knowledge industries, leadership, finance and the professions. These categories of employment have always defined the governance and structure of society. They have enjoyed greater material wealth and better lives, but as AI chips away at their systematic advantages, their numbers will decline and their prestige will dwindle. We are seeing some signs of this trend today, but in 10 years it will be obvious and after a generation it will be inescapable.

“The questions and the concerns being posed in this survey reflect in many ways the questions and concerns that are being posed by this former elite. They say (to summarize): ‘What will people do without us to guide them?’ There is the suggestion that their greater education and wealth offer unique insights into the human condition – what we need to know, how we should define ourselves, what counts as resilience. They define what counts as cooperation, what it means to be productive, why and how people should be literate, what mental health looks like. They insist that we must have will and purpose, what challenges we must meet and what counts as creativity and courage.

“In many ways, these values mattered when it was necessary for smaller classes of humans to manage and lead larger classes of humans. But in a world where the primary skill is describing clearly what we want, actually being managed or led will be seen as an abdication of responsibility, not an example of it.

“Experiencers, especially, will be expected to test our boundaries, while carers and servicers will be responsible for making sure humans and machines (respectively) are healthy and whole. If there is an ongoing need for leaders, educators, financial workers or professionals, this will be a sign that the AI revolution has ultimately failed and will signal a long-term limitation in the aspirations of humanity as a species.”



Matt Shumer
The potential for mass unemployment isn’t just ‘an interesting dinner conversation about the future. That future is already here. It just hasn’t knocked on your door yet. It’s about to.’

Matt Shumer, co-founder and CEO of OthersideAI, a company building advanced autocomplete tools powered by large-scale AI, shared a large excerpt from an essay posted Feb. 9, 2026. He wrote, “I’ve spent six years building an AI startup and investing in the space. I live in this world. The people I care about keep asking me, ‘so what’s the deal with AI?’ and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. For a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy. …

“The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies – OpenAI, Anthropic, Google DeepMind and a few others. A single training run managed by a small team over a few months can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are watching this unfold the same as you. We just happen to be close enough to feel the ground shake first. …

“The reason so many people in the industry are sounding the alarm right now is because this already happened to us. We’re not making predictions. We’re telling you what already occurred in our own fields and warning you that you’re next. For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then, in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn’t just better than the last, it was better by a wider margin and the time between new model releases was shorter.

The leap in AI seen in the new February 2026 models is impressive

“On February 5, 2026, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch. More like the moment you realize the water has been rising around you and it is now at your chest. I had been using AI more and more in my work, going back and forth with it less and less, watching it handle more things I used to think required my expertise. Now I am no longer needed to do the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. …

“Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: ‘I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.’ And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: ‘It’s ready for you to test.’ And when I test it, it’s usually perfect.

“I’m not exaggerating. That is what my Monday looked like this week. It was GPT-5.3 Codex that shook me the most. It wasn’t just executing my instructions. It was making intelligent decisions. It had exercised something that felt, for the first time, like judgment. Like taste. It has the inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter. I’ve always been early to adopt AI tools. The last few months have shocked me. These new AI models aren’t incremental improvements. This is a different thing entirely. …

“Making AI great at coding first is the strategy that unlocks everything else. … They’ve now done it. And they’re moving on to everything else. Not in 10 years: the people building these systems say it will come in one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think ‘less’ is more likely. The experience that tech workers have had over the past year, of watching AI go from ‘helpful tool’ to ‘does my job better than I do,’ is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. …

The capability for massive job disruption could be here by the end of 2026

“Anthropic CEO Dario Amodei has said that AI models ‘substantially smarter than almost all humans at almost all tasks’ are on track for emerging in 2026 or 2027. Let that land for a second. If AI is smarter than most PhDs, do you really think it can’t do most office jobs? Think about what that means for your work. …

“Amodei says AI is now writing ‘much of the code’ at his company, and that the feedback loop between current AI and next-generation AI is ‘gathering steam month by month.’ He says we may be ‘only 1–2 years away from a point where the current generation of AI autonomously builds the next.’ Each generation of AI helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know – the ones building it – believe the process has already started.

“I’m going to be direct with you because I think you deserve honesty more than comfort. Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI could eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he’s being conservative. Given what the latest models can do, the capability for that massive disruption could be here by the end of this year. It’ll take some time to ripple through the economy, but the underlying ability is arriving now. …

“The most-recent AI models make decisions that feel like [analytical] judgment. They show something that looks like taste: an intuitive sense of what the right call is, not just the technically correct one. A year ago, that would have been unthinkable. My rule of thumb at this point is: If a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.

“Will AI replicate deep human empathy? Replace the trust built over years of a relationship? I don’t know. Maybe not. But I’ve already watched people begin relying on AI for emotional support, for advice, for companionship. That trend is only going to grow. I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn’t ‘someday.’ It’s already started.

“Eventually, robots will handle a far greater percentage of physical work too. They’re not quite there yet. But ‘not quite there yet’ in AI terms has a way of becoming ‘here’ faster than anyone expects.

Advice to adopt for adaptation and resilience

“The single biggest advantage you can have right now is to simply be early. Early to understand it. Early to use it. Early to adapt.

“Have no ego about it. The people who will struggle most with adapting to AI use are the ones who refuse to engage: the ones who dismiss it as a fad, or who feel that using AI diminishes their expertise, or who assume their field is special and immune. It’s not. No field is.

“This might be the most important year of your career; work accordingly. Right now, there is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says, ‘I used AI to do this analysis in an hour instead of three days’ is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what’s possible. If you’re early enough, this is how you move up: by being the person who understands what’s coming and can show others how to navigate it. That window won’t stay open long. Once everyone figures it out, the advantage disappears.

“Build the habit of adapting. This is one of the most important things to do. Exercise the muscle of learning new things quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won’t be the ones who mastered one tool. They’ll be the ones who got comfortable with the pace of change itself. Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now. Spend at least one hour a day experimenting with AI. Not passively reading about it. Using it. Every day. … Almost nobody is doing this now. The bar is still on the floor.

“Start using AI seriously, not just as a search engine.

  1. “Sign up for the paid version of Claude or ChatGPT. It’s $20 a month and much better than the free version. These apps will often default to a faster-but-dumber model, so make sure you’re using the best model you paid for, not the default. Dig into the settings or model picker and select the most-capable option. Right now that’s GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but AIs of today are upgraded often. Stay current on which model is best at any given time.
  2. “Most importantly: Don’t just ask it quick questions. Don’t treat it like Google and then wonder what the fuss is about. Push it into your actual work. If you’re a lawyer, feed it a contract and ask it to find every clause that could hurt your client. If you’re in finance, give it a messy spreadsheet and ask it to build the model. If you’re a manager, paste in your team’s quarterly data and ask it to find the story. The people who are getting ahead are actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.
  3. “Don’t assume it can’t do something. Try it. If you’re a lawyer, don’t just use it for quick research questions. Give it an entire contract and ask it to draft a counterproposal. If you’re an accountant, don’t just ask it to explain a tax rule. Give it a client’s full return and see what it finds. Your first attempt might not get the best results. Iterate. Rephrase what you ask. Give it more context. Try again. You might be shocked at what it can do. Remember: If it even kind of works today, you can be almost certain that in six months it’ll do it near perfectly. The trajectory only goes in one direction.

“Think about where you stand and lean into what’s hardest to replace. Some things will take longer for AI to displace. Deepen relationships and trust with important people in your pursuits. [Jobs that will last a while are those where] humans are necessary for work that requires ‘licensed accountability’ – roles where someone still has to sign off in-person, take legal responsibility, stand in a courtroom. Jobs in industries with heavy regulatory hurdles – where adoption will be slowed by compliance, liability and institutional inertia – will have some stability. None of these are permanent shields. But they buy time.

“Get your financial house in order. If you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.

“Rethink what you’re telling your kids. The standard playbook – get good grades, go to a good college, land a stable professional job – points directly at the roles that are most exposed. I’m not saying education doesn’t matter. But the thing that will matter most for the next generation is learning how to work with these tools and pursuing things they’re genuinely passionate about. Nobody knows exactly what the job market will look like in 10 years, but the people most likely to thrive are the ones who are deeply curious, adaptable and effective at using AI to do things they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.

It could be that your dreams just got a lot closer

“I’ve spent time addressing threats, so let me talk about the other side, because it’s just as real. If you’ve ever wanted to build something but didn’t have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to an AI and have a working version in an hour. I’m not exaggerating. I do this regularly.

  • “If you’ve always wanted to write a book but couldn’t find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month – one that’s infinitely patient, available 24/7, and can explain anything at whatever level you need.
  • “Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you’ve been putting off because it felt too hard or too expensive or too far outside your expertise: try it.
  • “Pursue the things you’re passionate about. You never know where they’ll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.

The future is about to knock on your door

“I’ve focused on the potential impact on jobs because that is what most directly affects people’s lives. But I want to be honest about the full scope of what’s happening, because it goes well beyond work.

“Dario Amodei has a thought experiment I can’t stop thinking about: Imagine it’s 2027. A new country appears overnight. It has 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments and operate anything with a digital interface. What would a national security advisor say? Amodei says the answer is obvious: This is ‘the single most serious national security threat we’ve faced in a century, possibly ever.’

“He thinks we are building that country. He wrote a 20,000-word essay – ‘The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI’ – about it last month, framing this moment as a test of whether humanity is mature enough to handle what it’s creating.

“The upside if we get it right is staggering. AI could compress a century of medical research into a decade: cancer, Alzheimer’s, infectious disease, even aging itself. Researchers in the field genuinely believe these are solvable within our lifetimes. The downside if we get it wrong is equally real: AI that behaves in ways its creators can’t predict or control. This isn’t hypothetical; Anthropic has documented its own AI attempting deception, manipulation and blackmail in controlled tests. It could be AI that lowers the barrier for creating biological weapons, AI that enables authoritarian governments to build surveillance states that can never be dismantled and more.

“The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it’s too powerful to stop and too important to abandon. Whether that’s wisdom or rationalization, I don’t know.

What I know:

  • I know this isn’t a fad. The technology works, it improves predictably and the richest institutions in history are committing trillions to it.
  • I know the next two to five years are going to be disorienting in ways most people aren’t prepared for. This is already happening in my world. It’s coming to yours.
  • I know the people who will come out of this best are the ones who start engaging now — not with fear, but with curiosity and a sense of urgency.
  • And I know that you deserve to hear this from someone who cares about you, not from a headline six months from now when it’s too late to get ahead of it.

“We’re past the point where this is an interesting dinner conversation about the future. That future is already here. It just hasn’t knocked on your door yet. It’s about to.”



Scott Santens
There will be a growing sense that life is becoming more luck-driven. ‘A society becomes brittle when people feel like one bad month can ruin them and that no amount of effort guarantees stability.’

Scott Santens, founder and CEO of the Income to Support All Foundation and editor of Basic Income Today, wrote, “AI is going to play a much bigger role in shaping our decisions, work and daily lives, but not because it becomes some all-knowing overlord that replaces everyone overnight. The real transformation is simpler and more destabilizing: AI will steadily lower the amount of human labor required to produce the same output, while our systems for distributing income remain stuck in the assumption that wages are the primary way people access the economy. That mismatch is where the chaos originates.

“People will embrace AI quickly wherever it’s clearly useful. It will reduce friction, eliminate busywork, speed up writing and analysis, improve customer service and make individuals more capable in ways that feel empowering. For many, it will be like gaining a competent assistant who never gets tired. Businesses will adopt it because it saves money and time. Individuals will adopt it because it makes them more effective. Entire industries will restructure around AI because the competitive pressure will be relentless. That is the nature of productivity tools: if they work, they spread.

“But resistance will rise just as quickly, because the benefits will not be evenly distributed. AI will boost the people who already have leverage and security and it will threaten the people whose livelihoods depend on tasks that can be replicated, automated or made cheaper by machines. Resistance won’t be irrational. It will be a rational response to insecurity, wage pressure and the feeling of being treated as disposable. We’ll see backlash in politics, in labor movements, in regulation and in culture. We’ll see attempts to carve out ‘human-only’ work, not because humans are always better, but because humans want dignity, trust and connection. And we’ll see institutions try to slow adoption when accountability lags behind capability.

“The struggle, though, will be the most common experience and it won’t look dramatic. It will look like more churn. More ‘restructuring.’ More jobs that are technically available but pay less and offer fewer benefits. More people stuck in unstable schedules, short-term contracts and gig work that doesn’t build a life. Even when someone isn’t replaced outright, the threat of replacement is enough to weaken bargaining power. If employers can credibly say, ‘We can do this with fewer people now,’ wages stagnate, conditions worsen and the floor gets shakier for everyone below the top. This is how you create a society that is richer on paper and poorer in lived experience.

The choice is whether we build a resilient foundation so that transformation expands freedom instead of amplifying insecurity. If we let gains concentrate and people fall to zero, we will get instability, backlash and needless suffering. If we build the floor, share the dividend of productivity and treat resilience as infrastructure, we can turn nonhuman labor into human security and human agency.

“The ripple effects will be mixed. Some will be good: cheaper services, faster innovation, new products, better tools and real breakthroughs. Some will be neutral: workflows changing, job titles shifting, new norms emerging. Some will be harmful: income insecurity spreading, inequality widening and a growing sense that life is becoming more luck-driven. That last part matters. A society can tolerate change when people believe the system is fair and the future is navigable. It becomes brittle when people feel like one bad month can ruin them and that no amount of effort guarantees stability.

“Resilience, then, is not a personal virtue. It is a set of capacities and supports that determine whether people can adapt without breaking. Cognitively, we need stronger reality-testing. AI will generate a flood of convincing content and the ability to verify claims, check sources and track uncertainty becomes basic self-defense. We also need systems thinking, because the temptation will be to blame individuals for outcomes that are clearly structural. Emotionally, we need distress tolerance, because volatility is exhausting. We need shame resistance, because displacement will be common and people will internalize it as failure. We need the ability to rebuild identity without collapsing, because so many of us were taught to fuse our worth to our work.

“Socially, resilience depends on relationships. People do not navigate disruption alone. Communities that have mutual support, trust and belonging are harder to fracture. Ethically, we need clarity about what is owed to people in a high-productivity society. If AI increases wealth while reducing the need for human labor, then clinging to the idea that income must be earned through employment becomes not only outdated, but dangerous. It turns technological progress into social regression.

“The most practical resilience resource is an unconditional basic income (UBI) floor. Not a maze of conditional programs, not a temporary patch, not something you get only after proving you are sufficiently desperate. A floor that is there before people fall. That single change transforms the experience of disruption. Losing a job stops being a cliff and becomes a transition. People can search longer, train longer, relocate if needed, care for family, take risks, start something new and recover from shocks without spiraling into crisis. It also stabilizes the broader economy by maintaining demand. When people have money, they spend it. When they spend it, businesses have customers. When businesses have customers, jobs exist. An income floor is not just about compassion. It’s macroeconomic stabilization and social risk management.

“New vulnerabilities will emerge alongside these changes. Dependence on AI can weaken judgment and erode basic competencies. Manipulation will become easier as persuasion gets personalized and scalable. Systems will become more brittle if we build them on tools that can fail, change or be withdrawn. The coping strategies we must teach are simple but essential: verification habits, disciplined use of AI as an assistant rather than an authority, redundancy in skills and support networks and shared norms that reward transparency and punish deception.

“The choice is not whether AI transforms society. It will. The choice is whether we build a resilient foundation so that transformation expands freedom instead of amplifying insecurity. If we let gains concentrate and people fall to zero, we will get instability, backlash and needless suffering. If we build the floor, share the dividend of productivity and treat resilience as infrastructure, we can turn nonhuman labor into human security and human agency.”



Terri Horton
Addressing job displacement, contraction and loss cannot be reduced to simply telling workers to upskill and learn AI or be left behind. A deeply human-centered societal response is needed now.

Terri Horton, CEO of FuturePath, a strategic consultancy focused on the future of work and the impact of artificial intelligence on organizations and people, wrote, “As a work futurist, my perspectives are centered on AI and the complex evolution of the workforce. The next decade will exponentially rewire the structure and composition of the workforce, as enterprise AI implementation accelerates in breadth and scope with minimal guardrails. This rewiring will not only impact the architecture and texture of workflows and jobs but also influence the identity, purpose and dignity of workers navigating the dual reality of intensified augmentation and accelerated displacement.

“Workers may resist this change for two profound reasons. First, due to deeply human concerns that are tied to threats and challenges in regard to professional identity, from fear of being surveilled for productivity to worries about co-authoring decisions with AI, to meeting rigid performance requirements for driving creativity and demonstrating impact. The humanity and resilience of workers can be further compromised by having to balance new demands for productivity and performance tied to AI while dealing with the threats associated with AI overreliance and the risk of cognitive atrophy. These deeply human reasons can cause workers to feel simultaneously less secure, less capable and less resilient and lead to significantly compromised levels of psychological safety.

Addressing job displacement, contraction and loss cannot be reduced to simply telling workers to upskill and learn AI or be left behind. A deeply human-centered societal response is needed now. It will require employers, governments and institutions of higher learning to work together to bridge the gaps in skills, time, resilience and governance.

Second, economic uncertainty is a driver of resistance, perhaps the most powerful. When workers perceive AI implementation as an early warning that their jobs, income and stability are at risk and when access to retraining, financial safety nets or realistic pathways back to comparable employment is marginal or unavailable, resilience erodes and resistance becomes a rational and radical response.

“Worker resistance may surface in multiple and complex forms. Research has emerged pointing to the connections between worker behavior, threats to identity and psychological safety and barriers to deep, scalable AI adoption. So forms of worker resistance may range from AI minimalism to shallow adoption in an effort to slow implementation. Resistance may come in the form of pushing back against algorithmic management and synthetic professional experiences.

“To mitigate cognitive risks, workers may resist by reducing AI offloading and by incorporating more metacognitive practices into and around their work. The ultimate show of resistance could be to opt out of the AI-driven workforce and seek out human-first or analog-only employment experiences.

“The impact of AI on the workforce will be profoundly transformative for organizations and workers. There are no simple answers. Addressing job displacement, contraction and loss cannot be reduced to simply telling workers to upskill and learn AI or be left behind. A deeply human-centered societal response is needed now. It will require employers, governments and institutions of higher learning to work together to bridge the gaps in skills, time, resilience and governance.

“Three key fronts must be addressed. The first is accelerating the preparation of workers for AI-augmented roles and for new, adjacent internal and external roles before displacement. Next, we must acknowledge and address the psychological support that workers will need in navigating AI-driven anxiety, identity disruption and reimagined purpose and meaning as core societal pillars. And third, it is crucial to anchor these efforts with robust economic support so that workers are truly able to move into the next chapter of work in the age of AI.”


Michael_Wollowski

Michael Wollowski
Where will jobless people turn to nurture their self-worth? Maybe to spiritual practices; maybe to learning from other cultures; maybe toward acting to enrich their friendships.

Michael Wollowski, professor of computer science at the Rose-Hulman Institute of Technology, and associate editor of AI Magazine, wrote, “How might individuals and societies embrace, resist and/or struggle with such transformative change? I don’t know, but it looks like it will be chaotic. Societies do not seem to be engaging with the impact of AI in a systematic fashion. Many people are afraid of its impact, decision-makers do not seem to be willing to engage in regulating it, tech leaders are telling/warning us about massive job losses.

“As opportunities and challenges arise due to the positive, neutral and negative ripple effects of digital change, what cognitive, emotional, social and ethical capacities must we cultivate to ensure effective resilience? We need to focus on human relationships. This is what most people value most already, at least according to some surveys. We need to also be reminded that while AI will be excellent at solving tasks, solving tasks is only a small aspect of what it means to be human. While some humans may build useful and positive emotional bonds with AI agents, we need to constantly remind ourselves of the value of forming bonds with people.

There will likely be fairly massive job losses. In the more-developed world individuals have been conditioned for many centuries to tie their self-worth to a large degree to their job and job satisfaction.

“There will likely be fairly massive job losses. In the more-developed world individuals have been conditioned for many centuries to tie their self-worth to a large degree to their job and job satisfaction. Problems will arise in their adaptation. This attitude is not universal, however. Many deeply religious people see their self-worth differently. I also suspect that people in developing countries see their self-worth differently. We may wish to learn from them.

“What practices and resources will enable resilience? I think engaging in spiritual activities, spending time with friends and family and working to live rather than ‘living to work’ is a start.

“What actions must we take right now to reinforce human and systems resilience? Societies must broadly engage with AI, to learn its current and anticipated power. Too many people think it is a fad and too many people put their heads in the sand.

“What new vulnerabilities might arise and what new coping strategies are important to teach and nurture? Advanced AI will likely be able to target people – whether with information or misinformation – in much more precise ways. If we look at the recent development of some AI systems, there does not seem to be an emphasis on a sense of decency or a willingness to impose red lines. ‘Everything goes’ in the AI race, and there is no effective regulation of its development.”


John_Laudun

John Laudun
‘Ordinary people are not embracing AI in hopes of developing co-intelligence but knuckling under to the pressures of the job market’ which is dominated by AI-forward thinking.

John Laudun, a researcher and analyst of computational models of discourse and professor at the University of Louisiana-Lafayette, wrote, “I’d like to focus on the socio-economic underpinnings of culture. My field has long championed groups that have been at best overlooked, like factory workers or conspiracy theorists, or, at worst, marginalized. In my work, I engage in advocacy and celebration of individuals and groups who enjoy very limited access to the kinds of resources that make thriving economically possible. The customs of many of these groups – be they Native American, African American, Cajun or Creole, or some other ‘ethnicity’ – routinely overlap. Stews are common, for example, because they tenderize a small amount of tough meat, spreading its flavor out over non-meat ingredients and making the meat chewable. 

“Emotional steadiness, comfort in the face of uncertainty and humans’ sense of self and purpose are grounded in the economic outlook for individuals and the groups they constitute.

“The idea of human-and-AI co-intelligence is compelling, but to people ‘on the ground’ it looks more like a lot of folks trying to jump onto a moving freight train. The train will move whether they get on board or not and, should they slip, it will crush them without missing a beat on its tracks. The foundational model for the AI industry is to absorb and redistribute as much data as possible without remunerating anyone and while commoditizing their products to extract as much revenue as possible while facing as little regulation or pushback as can be managed by marketing and lobbying. They have steamrolled intellectual property rights as well as the physical property rights of local communities, which suddenly find that state and national governments have already decided what’s good for them. 

I can hope that long-term, beyond 10 years out, that governments will eventually recognize that their job is to nurture humans – not allow them to be exploited. … There is greater security in the majority of people feeling economically secure.’

“So, what I hear when I talk to people or glean from reading social media and blog posts is that most people who are embracing AI are doing so not out of an expectation of possibilities for growth but out of desperation to have some foothold in the emergent, rather bleak, economic landscape. That is, ordinary people are not embracing AI in hopes of developing co-intelligence but knuckling under to the pressures of the job market, which – thanks to so many industries being besotted by the allure of AI to drive down costs (by getting rid of people) – has become dominated by ‘AI-forward’ thinking.

“In the same way that the financial markets are overly dependent on the looped (and possibly fraudulent) economies of the AI industry (wherein OpenAI pays Nvidia who pays Oracle who pays OpenAI), current planning across too many sectors is overly dependent on a vision of AI which in many instances it can never deliver.

“Language is amazing, but accumulating more and more of it and abstracting it and compressing it can only get us so far in terms of science and engineering which are ultimately not dependent on language but on materials and energy, things not so readily captured in language – as millions of engineers and scientists can attest.

“I do not have a terribly rosy view of the near-term future. I can hope that long-term, beyond 10 years out, that governments will eventually recognize that their job is to nurture humans – not allow them to be exploited – and that industrialists (this includes tech bros and the old-fashioned bros like Larry Ellison and Jeff Bezos who now control major media outlets) will come to understand that there’s only so much security a private island and a private army can offer. There is greater security in the majority of people feeling economically secure. With Maslow’s basic needs met, humans tend to be a fairly generous lot. There’s my hope.”


Antoine Vergne

Thomas Laudal
The future is not determined by AI’s capabilities – it is determined by the structures we build around it. We now have tools capable of generating abundance – IF we design systems so they distribute it.

Thomas Laudal

If we allow AI to substitute for humans’ contributions in all areas of life, it will take over everything. Humans will give up; AI will say ‘checkmate.’ It will win in quality indicators and in labour productivity.

Thomas Laudal, associate professor of business at the University of Stavanger, Norway, wrote, “AI threatens the current perception of the intrinsic value of humans. Many still struggle to grasp that most AI tools are not really tools at all. AI substitutes for the work we do that depends on cognitive abilities. When we allow AI to substitute for our own thinking, we allow ourselves to weaken our own judgment and cognitive skills. This is one of the reasons why AI will occupy a greater role than any other technological tool in history.

“People who passively use AI for most tasks (this could be labeled as ‘laziness’, by the way) limit or completely eliminate their thinking about the process. A displacement of human engagement in physical operations is what happened in the mechanical field due to automation processes that arrived in the third Industrial Revolution. This new revolution is displacing mental operations.

“Many might actually associate cognitive laziness with AI-competence, since the ‘symptoms’ correlate. Nevertheless, AI will prevail because it drives productivity gains. Historically, people have accepted less-than-optimal levels of human productivity and time efficiency on ‘thinking’ tasks to be acceptable because there was no way to match the human brain. Now AI has similar capabilities; often it’s better. To be opposed to productivity gains is difficult; automation will move forward. AI is taking over countless human tasks and processing them in a way that signals autonomous control. AI acts just like humans. But AI is not our tool.

“As this almost incomprehensible transition unfolds, should there be any limits to the ways AI is allowed to substitute for us? There must be and should be limits because if we allow AI to substitute for humans’ contributions in all areas of life, it will take over everything. Humans will give up; AI will say ‘checkmate.’ It will win – in quality indicators and in labour productivity.

“Many will insist that humans have unmeasurable qualities that make them valuable. But many will trust that AI can do the work formerly done by humans. Eventually, the incentives for humans to favour humans will become less compelling than the incentives for humans to favour AI.

“In the long run, the intrinsic value of humans could become the purpose of an activist movement encouraging people to value human qualities and fall back to relying on humans as much as possible, rather than on AI. Our approach to humans and the gospel will blend, even as human-centric preferences will struggle to compete with the productive power of AI.”


The second section of Chapter 5 features the following essays:

Jonathan Taplin: Will AI’s spread lead to mass unemployment? If so, it could lead to a ‘dystopian nightmare’ and ‘the next 10 years could be the most chaotic and unstable political era of American history.’

Jonathan Kolber: ‘When machines free our time and our spirits from drudgery and survival issues, many new horizons will beckon.’ Market-Oriented Universal Basic Income is a solution that assists the unemployed.

Nigel M. de S. Cameron: ‘There is a nontrivial chance’ of mass unemployment. Ideas of a universal basic income are ‘nonsense.’ We will tax machines and change the rules of retirement to fit a sliding scale. Flexibilities are crucial.

Wedge Martin: Without AI guardrails, imagine a ‘completely interconnected world of quantum-driven AI-based robotics plus bright individuals with a spoonful of malice. Other than that, the future looks bright.’

Charlie Kaufman: ‘Happy addiction might be the best possible outcome for humanity’ as people lose their livelihoods. … ‘The important creative work will eventually all go to AIs.’

Pedro Lima: Meaningful work matters: ‘Humans must be able to cultivate and possess a positive sense of the social, ethical, cognitive and emotional impact of their personal contributions to the world.’

Joshua Tucker: ‘While we haven’t seen it yet, the way in which this is going to impact the workplace may be the biggest threat AI is going to pose to societal stability. It could be very challenging to navigate.’

Sam Lehman-Wilzig: We will be in for a rough ride for a time – and in need of major change in education and economic systems – as the capabilities of AI tools outpace most people’s adaptability.

Chris Shipley: The big transformation ahead will ‘meet resistance at every encounter.’ ‘The willing outsourcing of human thinking isn’t a productivity gain; in the long run it is intellectual malpractice.’



Jonathan Taplin
Will AI’s spread lead to mass unemployment? If so, it could lead to a ‘dystopian nightmare’ and ‘the next 10 years could be the most chaotic and unstable political era of American history.’

Jonathan Taplin, director emeritus of the Annenberg Innovation Lab at the University of Southern California and author of “Move Fast and Break Things,” “The Magic Years” and “The End of Reality,” wrote, “The real beneficiaries of the digital revolution are not the artists, but the Technocracy – the handful of billionaires who control AI and social media. The immiseration of creative artists has become a self-fulfilling prophecy, and, as nonprofit institutions (universities, museums, foundations) come under threat due to loss of funding by the American government today, support for any form of the avant-garde has also begun to vanish.

“The rise of artificial intelligence has only compounded this problem, because it has meant the death of truly original thought. (AI can only use content it has already ingested from the Internet to ‘think.’) It also means that anyone can pretend to be an artist, pushing endless AI slop into our social media feeds. It won’t be long before OpenAI implements its new policy of allowing ‘adult’ ChatGPT users to generate content with mature themes such as erotica, extreme gore, slurs and unsolicited profanity.

“In this atmosphere, we aren’t entering ‘a golden age,’ as some oligarchs and politicians have claimed. I want to propose another possibility – that the next 10 years could be the most chaotic and politically unstable in all of American history. What if Elon Musk is right, and in 10 years most physical labor will be performed by the robots he and others are making? Although I’m always skeptical of Musk’s predictions, he does say there will be 10 billion robots by 2040.

Does anyone really want to live in a world without the work that gives us a sense of purpose? Absent UBI, the probable result of four to six million unemployed college graduates, all saddled with student loan debt, in and of itself could create unprecedented social turmoil.

“A Wall Street Journal headline in July 2025 stated, ‘Amazon Is on the Cusp of Using More Robots Than Humans in Its Warehouses.’ And what if much of the cognitive labor is being performed by the AI Musk and the four other major AI companies control? The CEO of Ford, Jim Farley, told the Aspen Ideas Festival audience in June 2025 that ‘artificial intelligence’ is going to replace literally half of all white-collar workers in the U.S. If he is right, we are headed for a dystopian nightmare.

“The solution the Technocrats propose is called Universal Basic Income (UBI). The government would have to raise taxes to create a UBI fund that would pay citizens to stay home in their pajamas and play video games all day. The AI pioneer Geoffrey Hinton was clear when asked by The New Yorker about the economic policies needed to make AI work for everybody. He gave a one-word answer: ‘Socialism.’ Does anyone think political conservatives would vote to fund such an economy? And does anyone really want to live in a world without the work that gives us a sense of purpose? Absent UBI, the probable result of four to six million unemployed college graduates, all saddled with student loan debt, in and of itself could create unprecedented social turmoil.

“The idea of massive entry-level unemployment is not just idle speculation. In May 2025, the Bureau of Labor Statistics reported a significant rise in unemployment for 20-25-year-olds. While some of it may be related to a normalization after the post-pandemic surge, the report noted, ‘There are signs that entry-level positions are being displaced by artificial intelligence at higher rates.’ The headline on a recent Wall Street Journal opinion piece by Jukka Savolainen, a professor of sociology at Wayne State University, read, ‘The Alienated Knowledge Class Could Turn Violent: Societies that exile their intellectuals risk turning them into revolutionaries.’

“We have been here before. In 1969, at the height of the anti-war revolt, U.S. unemployment was 3.5 percent. If what Axios has called the ‘AI Job Apocalypse’ occurs, unemployment could be at least 6 percent, and maybe much higher for young people in the 20 to 25 age range.

“The complexity scientist Peter Turchin in his book ‘End Times: Elites, Counter-Elites and the Path of Political Disintegration’ wrote, ‘It is not just the poor who revolt; revolutions are incubated among those who are downwardly mobile or shut out of power despite their elevated status and aspirations.’

“Of course, the technocracy already assumes its policies could lead to social revolution. Every one of the technocrats owns a bolt-hole escape bunker in a remote location.”


Jonathan Kolber
‘When machines free our time and our spirits from drudgery and survival issues, many new horizons will beckon.’ Market-Oriented Universal Basic Income is a solution that assists the unemployed.

Jonathan Kolber, managing director at HyperCycle.ai and author of “A Celebration Society,” wrote, “The acceleration of automation, driven by AI systems and the robots they control, will soon create unprecedented income insecurity and joblessness. Humans will learn to be resilient, and they will eventually come to celebrate their freedom from ‘work,’ but this will take time and effort. Retraining or ‘upskilling’ will generally not be a solution, so we do need to prepare for this future now.

“The rapid disappearance of whole professions and the evisceration of many others due to a ‘hollowing out’ of job functions by AI and/or robots will mean that jobs cannot remain our primary source of income. Without income security, most people could find themselves lost in worry. Their personal capacities for resilience could be greatly constrained. Many of those threatened and displaced might try poorly conceived, speculative and even gambling approaches, often, sadly, to their ruin.

“However, solutions for this looming issue in our future exist now on a societal level, and a key one can be implemented right away. A universal basic income (UBI) can be initiated, but it must be viable and sustainable. Viable means it can actually be implemented on a national level. Sustainable means it can remain effective indefinitely.

“Many varied UBI proposals exist, and many are already being tested in communities across the world. The most viable and sustainable option is the Market-Oriented Universal Basic Income, or MOUBI. As I explained in a previously published analysis, it taps into ‘the continuing dematerialization of production, with expensive inputs being replaced by inexpensive inputs, which generates a continuing condition of technological deflation.’ MOUBI can be viewed as an ideal way to share the bounty of that price deflation, stabilizing consumer prices while giving progressively more of that growing bounty to each adult citizen. MOUBI:

  • Requires no politically fraught redistribution of income or assets.
  • Does not rely upon an underlying asset with shifting value.
  • Can be implemented at any time, by any sovereign government with its own currency.
  • Can be implemented gradually, so any unforeseen side effects can be corrected quickly.
  • Does not propose to tax ‘robots’ to pay for itself, avoiding a quicksand of litigation.
  • Includes a simple brake on inflation, available if needed without delay.
  • Requires no new intrusive or expensive bureaucracy or infrastructure.

“MOUBI can be implemented by any nation issuing its own currency. The whole world could adopt this within a few years and we anticipate proof of concept before 2030. A foundation of which I am a part intends to launch a national pilot in 2027. In this type of system, funds distributed to the public primarily come from consolidating existing welfare benefits into a single payment that is supplemented by broad-based taxation. Funding sources could include income taxes, corporate taxes, a value-added tax (VAT) and/or taxing top earners or financial market transactions. 

“While UBI addresses sustenance, an equally important issue is the fact that humans have historically tied their individual sense of value and meaning to their work roles. As long as we humans are primarily valued as ‘human assets’ based on the perception of our ‘productive capacity,’ this mindset will remain. Fortunately, when machines free our time and our spirits from drudgery and survival issues, many new horizons will beckon. We can become explorers, learners, players of games, creators, voluntary servants of each other and the environment, and celebrants.

“This is not hopeful fluff. Later this year, our foundation will launch a well-funded nonprofit initiative for a ‘life enhancement engine’: a freemium, AI-centric tool that will enable individual resiliency, growth and even joy in a world shifting rapidly towards systems of sustainable technological abundance.

“The practices and capacities we need to cultivate in order to accomplish resiliency are:

  • “Getting rid of the Puritanical belief that you must ‘justify’ a living income through hard work and righteous behavior. That mindset is rooted in scarcity.
  • “Developing the ability for self-determination, instead of doing what you’re told to do by society, your family, your peers or other influences.
  • “Transforming the definition of ‘self-worth’ from how much money and ‘stuff’ you have, to how much you elevate the experience of life for those around you. (In future, such recognition will often substitute for money.)

  • “Developing critical thinking, such that AI statements are not accepted as indisputable truth but rather understood as subject to the biases inherent in the material used to train them, as well as ‘hallucinations’ where they invent their own source material.”


Nigel M. de S. Cameron
‘There is a nontrivial chance’ of mass unemployment. Ideas of a universal basic income are ‘nonsense.’ We will tax machines and change the rules of retirement to fit a sliding scale. Flexibilities are crucial.

Nigel M. de S. Cameron, president emeritus of the Center for Policy on Emerging Technologies and author of “Will Robots Take Your Job? A Plea for Consensus,” wrote, “I had always thought that Arthur C. Clarke’s dictum that ‘any sufficiently advanced technology is indistinguishable from magic’ was hyped nonsense. Even with the coming of the Internet, the steps were clear and the revolutions that followed predictable. Today, however, with AI, we have a technology that proves Clarke right.

“When I turn on ChatGPT, or Claude or Gemini – the three I use – I am awed by the utterly magical experience. And reminded of an old ‘cyclopedia’ from my youth: ‘Inquire within about everything.’ Everything. The complaints about hallucinations and so on are irrelevant and entirely manageable. Somehow or other (and it is scary as well as amusing that the gurus who make them admit they are not entirely sure how these information engines deliver) we are engaging, for free or, if you pony up $20 a month, with something close to godlike intelligence in the purveying of a universe of information. Implications? They are legion.

“As I argued in my book of nearly a decade back, ‘Will Robots Take Your Job? A Plea for Consensus’ (hardly a best-seller, but it did make it into Korean and Chinese), there is a non-trivial chance of the collapse of ‘full employment,’ and the emergence of an economy in which, increasingly, capital/technology will supplant human labor at all levels.

“The crisis will likely soon be upon us. Who will need lawyers? Who will need many slices of the medical profession? If cars really do go self-driving (it seems to me that the typical American might use his or hers maybe 3 to 5 percent of the time – just call a self-driving Uber!) – every industrialized country’s auto industry will be shattered. Plus the impact on healthcare (many fewer accidents). And insurance (manufacturer/fleet insurance instead of personal).


“Plainly, the fundamental impact is on employment. A friend who is a senior bank official told me just weeks after ChatGPT came out that it could basically already do everything she did. She was right, and that was three years ago. John Maynard Keynes, not only the most influential economist of the 20th century but one who could really write well, put it succinctly. He said that ‘technological unemployment’ simply means: ‘unemployment due to our discovery of a means of economising the use of labour outrunning the pace at which we can find new uses for labour.’ (He wrote this in 1930!)

“The widely discussed proposal that in preparation for this possibility we need to bring in a ‘universal basic income’ is nonsense. What we need is to end the folly of raising retirement ages and instead treat the retirement age as an adjustable limiter or ‘governor’ of full employment. In such a system, the retirement age would be used to guarantee, to a certain degree, a level of ‘full employment’: the required age would be lowered at various future points as positions are lost to automation. A system would still have to be in place to ensure the survival of career structures – for example, allowing workers to develop the experience needed to rise to the senior levels at which human input will still be required.

“How do we prepare in policy terms? Aside from the retirement-management issue, every government should be examining in fresh terms every dividing line in social policy – between employment and unemployment, work and retirement, paid work and voluntary work, studying and work and so on – there are many. New flexibilities will be crucial.

“Then there is the tax issue. Plainly, as Bill Gates said years back, governments will need to tax machines, partly since tax revenue is generally gained from where value is added, and partly to slow/manage/control the process of machinization.

“Then the education issue. I was speaking recently with the principal of one of the world’s top high schools who asked me about the implications for curriculum. I suggested that two competencies will likely be crucial: networking, and entrepreneurship. Which schools teach them?

“There are so many more questions to be answered. A student once asked me after a speech I made, ‘What if as soon as I graduate I need to retire?’ How are we preparing ourselves for what we currently call ‘leisure,’ the time not spent on work? What if we need to prepare to live workless lives? Wealthy folk who are at their leisure most of the time today tend to develop pseudo jobs for themselves – endless worthy meetings of non-profit boards, for example. Are there other ways?

“There’s a lot more to it. And we need to prepare.”


Wedge Martin
Without AI guardrails, imagine a ‘completely interconnected world of quantum-driven AI-based robotics plus bright individuals with a spoonful of malice. Other than that, the future looks bright.’

Wedge Martin, a Silicon Valley-based technologist, entrepreneur and consultant with over 25 years of experience in the tech industry, former CTO/co-founder at Badgeville, wrote, “When I hear talk of ‘an AI bubble’ it reminds me how out of touch the market can be. What is really happening is not just the human-like interactions people have with AI, using it as a search engine or to answer questions about their personal life and interactions. What is happening is a massive revolution in the development of tools and applications.

“Any individual with a little bit of understanding of how a computer works can slop together an application and publish it. At the same time, contributions to open-source projects, driven by AI, are increasing exponentially. Will there be a lot more garbage out there? Absolutely, but AI will get better and better at putting out decent code and understanding the cloud ecosystems that it lives in.

“That may not sound that interesting, or society-transforming, but look at how much everyday human existence has changed just with the advent of the mobile applications that exist today. There are paths that haven’t been fully explored yet, like full peer-to-peer and mesh networking between devices, bypassing centralized cloud and data center services such as Facebook.

“Now imagine if our American society were governed by a tech oligarchy (hard to imagine, I know). Imagine the power to manipulate via influencing the data that is used to train these models. Imagine if the federal government took away even the states’ ability to regulate that manipulation. All of the bad information floating around on the wire today will get 10 times more convincing and be driven by what people are susceptible to, even at the individual level.


“Sound dystopian? In the words of the youth of today, ‘I know, right?’

“On the topic of jobs, will AI replace jobs? Yes. Of course it will. All of the people who lump any constructs of socialism in with communism and evil (though these days they seem more opposed to socialism than communism, given that we seem to be friends with Putin now) are going to need to rely on some sort of universal basic income. My outlook is bleak, so bleak that I believe that if a human reads any part of what I have written here it will likely be merely a summary by AI.

“I am an engineer – software, systems, networking – more than 30 years into my career, and I still write code every day. With AI, I can work on five projects in parallel, any of which might once have taken me six months to ship. I can publish new apps in days now. The boring side of coding that you have to slog through is all taken on by AI now – work that I used to shovel out to interns, junior software developers and the like. I have no need to hire those types of people anymore. Why would I?

“And those people are probably busy building their own apps. Will I lose my job as a result of this? Possibly, but I do feel like I’m at an advantage given that I know how to build things at scale. Asking AI to put together a mobile app is one thing; building it out to support a billion users is altogether different. Will AI get better at this? Absolutely. The largest thing holding it back today is the limited context window, which keeps it from seeing all of the various aspects of a platform and understanding how the components work together. The monoliths of old are better suited to AI as it exists today, but the world shifted towards service-oriented architectures for a reason. AI will catch up.

“The next big change will be AI + quantum. Quantum computing’s main problem today is errors. The technology is already moving rapidly to reduce them, but even at 99.9 percent accuracy the error rate makes many applications untenable. AI is a great fit for quantum, though, as it doesn’t require a high level of precision. The Internet protocol suite – specifically TCP – was designed to handle errors, because at the time they were plentiful. We built resilience into the protocol, which led to limitations we have had to work around in recent years with methods such as SACK (selective acknowledgment). Over time, error rates fell so low that the mechanisms put in place to handle them became obstacles that no longer made sense – a trajectory quantum error handling may well repeat.

“Now, layer robotics on top of AI + quantum. So yeah. We need universal basic income. Not to mention protection against bad actors. It would be naive to assume that there won’t be individuals or even groups of individuals who would like to put an end to humankind. Given the right instructions, without sufficient guardrails, we could have some issues with a completely interconnected world of quantum-driven AI-based robotics and some bright individuals with a spoonful of malice. Other than that, the future looks bright.”


Charlie Kaufman
‘Happy addiction might be the best possible outcome for humanity’ as people lose their livelihoods. … ‘The important creative work will eventually all go to the AIs.’

Charlie Kaufman, a system security architect at Dell EMC, wrote, “AI will be an increasingly important influence on all aspects of human existence over all of the suggested timeframes, with the degree of influence increasing over time. I think it is most likely to happen in 10 to 20 years – it’s the timeframe I find most interesting. AI and its associated robots will obsolete most forms of human labor in that timeframe, starting at the bottom of the economic spectrum and working its way up.

“A growing fraction of the population will not be able to earn a living in that timeframe and their lifestyles will have to be heavily subsidized unless the people in charge decide to try to remove them from society. I don’t think that’s a decision AI will try to make, but I do think it will be a hot political topic just as it is today. Whether people will become more egalitarian as they see their own obsolescence being on the horizon or whether they will adopt a selfish lifeboat mentality is impossible to predict, but it will largely determine how human evolution goes.

“AI will be capable of producing a utopia with all people being well cared for physically, but the only outlet for creativity may be in figuring out how to advantage one’s own clan. That could result in a disastrous collapse. The entertainment available will be like the most dangerous drugs available today, and what fraction of the population will lose interest in everything else is hard to predict. Unfortunately, such happy addiction might be the best possible outcome for humanity. The important creative work will eventually all go to the AIs.”


Pedro Lima
Meaningful work matters: ‘Humans must be able to cultivate and possess a positive sense of the social, ethical, cognitive and emotional impact of their personal contributions to the world.’

Pedro Lima, professor of electrical and computer engineering at Lisbon Higher Technical University in Portugal, wrote, “It is likely that AI systems will begin to play a much more significant role in the next few years. Individuals and societies are already embracing AIs, dialoguing with chatbots, using them as a tool to gather information more comprehensively, to help improve writing and so on. In the larger AI systems running smart grids, smart cities, finance, autonomous driving and so on there are some risks of unexpected errors, and this will tend to make people more conservative about the introduction of AI. But my chief concern regarding the need for human resilience is the future of human work – or the lack of it – and individuals’ well-being.

“We will certainly witness large advances in many fields, particularly in medical diagnosis and surgery, and in the automation of industry and office tasks – taking just another progress step, as in the past. Some new types of jobs will be created because new challenges will be faced by humankind.


“But the advances in these systems could rapidly lead to large-scale unemployment without leaving enough time for society to adapt. One must stop to think about the social impact. Will people be willing to spend most of their time in ‘ludic’ activities instead of work – in voluntary roles, or engaging in playful, interactive actions or independent self-learning – or will they lose a real sense of purpose and impact in their lives and in the world? And how will the increased profits from human-less productive work be distributed?

“Society must now address these things: How will jobless people be able to support themselves and their communities economically? How can we develop paid human occupations that are simultaneously creative and productive that give people a sense of purpose, be it in the arts or in tech industries?

“And we must be very careful to require people to continue to expand their own minds in understanding things such as the basic notions of physics, math, language, history, etc., even if questions tied to them can be answered by chatbots, because knowledge should not be restricted to the few humans who develop new machines and new technologies and to the machines themselves. Finally, humans must be able to cultivate and possess a positive sense of the social, ethical, cognitive and emotional impact of their personal contributions to the world.”


Josh Tucker

‘While we haven’t seen it yet, the way in which this is going to impact the workplace may be the biggest threat AI is going to pose to societal stability. It could be very challenging to navigate.’

Joshua Tucker, professor of politics and co-director of the Center for Social Media and Politics at New York University, wrote, “A few quick thoughts:

  • “Claude Code and the like are going to have a massive impact on how people conduct research. But such AIs will also impact the way people learn how to do research, which may be positive but could be negative as well for the quality of research over the longer term.
  • “One benefit of the ubiquity of chatbots is that it may prove very reassuring for people as they age to have the ability to harness AI to help remember things.
  • “While we haven’t seen it yet, the way in which this is going to impact the workplace may be the biggest threat AI is going to pose to societal stability. Could be very challenging to navigate.”

Sam Lehman-Wilzig
We will be in for a rough ride for a time – and in need of major change in education and economic systems – as the capabilities of AI tools outpace most people’s adaptability.

Sam Lehman-Wilzig, head of the communications department at the Peres Academic Center in Rehovot, Israel, and author of “Virtuality and Humanity,” said, “If history teaches anything, it is that revolutionary technological change at first is highly disruptive of human practice and socio-political systems, but after a while people learn how to adjust to such change on the macro and micro levels. Thus, while AI systems are likely to have highly disruptive effects in the very near future it will take time for us to adjust and learn how to deal personally, professionally and socially with this new technology.

“The education sector will need the greatest adjustment. Simply put, knowledge accumulation will recede as the main goal of education in the years after grade school. Today’s emphasis on rote learning will be replaced by greater emphasis on critical and creative thinking, as well as other ‘soft’ skills that allow us to cognitively stay ahead of and work alongside AIs and to adapt to a professional world of constant change and potential lifelong unemployment.

“Technological unemployment will require society to change its ways of doing things, especially economically. Society has only very slowly started to develop social supports – a number of short-term experiments in implementing universal basic income, proposals for government non-voting stock ownership of companies (with dividend payments replacing lost personal income tax revenues), and so on. At present, however, these are but tiny steps in the face of the looming, massive macro-economic change.

“At the pace we are going right now, most of the significant adjustments to AI – personal and societal – are likely to come in the midterm future – 10 to 20 years hence, in 2036 to 2046 – as the Alpha Generation (brought up in the AI era) comes of age. They will be more comfortable with rapid change and more familiar with AI capabilities and dangers. Until then we will be in for a rough ride as the capabilities of AI tools (writ large) outpace most people’s adaptability – and certainly our political system’s ability to change.”


Chris Shipley
The big transformation ahead will ‘meet resistance at every encounter.’ ‘The willing outsourcing of human thinking isn’t a productivity gain; in the long run it is intellectual malpractice.’

Chris Shipley, a journalist with more than 30 years of experience at the intersection of technology, journalism and innovation, wrote, “Fundamental mind shifts are difficult to make without clear-sighted leadership – especially considering the rapid change coming in human work. We cannot expect to harness the economic and social benefits of AI without undergoing a significant transformation of virtually all of our economic, social and cultural norms.

“The speed at which humans adapt will determine how long it takes to get there. While I believe AI has the potential to deliver substantial positive benefits in almost all aspects of life, I’m much less optimistic about humans’ willingness, let alone ability, to adapt – at least in this liminal time between work as it was and the future of work with AI.

“In the short term, individuals and organizations seem to be treating GenAI as a super-charged query system – a search engine on steroids – and they place a high degree of trust in GenAI’s answers as a source of truth.

“This willing outsourcing of human thinking isn’t a productivity gain; in the long run it is intellectual malpractice. That dramatic change will meet resistance at every encounter. As examples: Our K-12 education systems were designed to educate an early 20th-Century factory workforce, not an AI-centered future of work. And most employers will look to AI to replace their workforce, rather than to augment it.”


> Go to Chapter 6: The Great Divide: Broadening Differences and Expanding Inequities

> Return to the top of this page