The question I get asked more than any other (except for ‘where is Dr Ruja?’) is ‘what should my children study?’ It reflects a widespread concern: that our economy is changing quickly and in unpredictable ways, and that children are learning things that will be ill-suited or irrelevant in 10 or 20 years’ time.
It’s not an unreasonable fear. My ‘career adviser’ — a grand title for someone who just read ‘bank manager’ and ‘supermarket manager’ from a pre-prepared list — offered me exactly zero useful advice. I am now doing work that didn’t exist when I left school. And if anyone had told me 20 years ago that by the mid-2020s most young people would spend their free time typing words on tiny computers, I would have laughed them out of the room.
In this two-part essay, I’m going to attempt to answer that question. In this first part I’ll explain roughly how I think the world of work is changing (and it’s not all scary); in the second, how education might be re-organised for the smart machine age. As they say, prediction is hard, especially about the future. But as you’ll see, while technology- or date-specific projections are often quite wrong, the broad contours tend to be about right.
All those reports about AI stealing your job
Fears about humans being out-competed by machines have been around forever - whether it was the Luddites (reasonably) worrying about skilled labour being devalued by the power loom, or Herbert Simon foreseeing in 1960 that vast administrative functions would be handed over to software. For at least a decade there have been regular, detailed reports about AI and robotics replacing workers. You have probably read at least one newspaper headline about how, in ten years, this or that industry will be gone. You probably ignored it, but then OpenAI arrived and the crazy-sounding reports didn’t seem so crazy anymore.
But in truth, these reports diverge widely, and no-one is quite sure what’s going to happen. Some say five per cent of jobs will be automated fairly soon – others predict nearly half. A little while back, the MIT Tech Review identified 18 separate and very different automation predictions. One study said 2 billion jobs would be liquidated by 2030; another that a mere 800 million would go, with over 800 million new ones arriving. No-one seems to agree. A large Oxford study estimated nearly half of all jobs were at risk of automation; soon after, an OECD study put the figure at only 14 per cent. The most recent in this confusing series came from Goldman Sachs, who last year estimated that two-thirds of occupations in the US are at some degree of risk from AI-driven automation (although they’re optimistic, as well-paid consultants usually are).
As an illustration of just how tricky this is, take the most famous of all these studies, the Oxford Martin School’s 2013 mega-report “The Future of Employment: How Susceptible Are Jobs to Computerisation?”. Its authors weren’t picking numbers out of a hat. They carefully analysed 20,000 ‘unique task descriptions’, looked at the skill sets of workers, got a load of machine learning experts to cross-reference that against the state of technology, and then made predictions about roughly 700 different jobs.
Their conclusion? That 47 per cent of US jobs would be at risk of automation (although they didn’t say exactly when). But ten years on, plenty of their ‘at risk’ professions are still going strong. The study suggested the following professions had a 99 per cent likelihood of being automated away: telemarketers, title examiners, hand sewers, mathematical technicians, insurance underwriters, watch repairers, cargo and freight agents, tax preparers, photographic process workers, new accounts clerks, library technicians and data entry keyers.
If you Google insurance underwriters, you’ll find dozens if not hundreds of jobs available. Tax accountants are doing just fine, thank you very much. (To their credit, five years after the initial report the authors wrote an article explaining why such estimates vary so much. In short, looking at the specific tasks workers carry out doesn’t capture every nuance of a job; and their estimates didn’t account for cases where automation or AI would complement rather than replace existing workers.)
I think there are other reasons this is so hard to get right. Pretty much every past prediction about jobs assumes that (a) humans are rational and (b) machines always work perfectly. When JG Ballard wrote that ‘the future is going to be boring’, this is what I imagine he meant: everything becoming smooth, predictable, repeatable.
If anything, the last 20 years of machine invasion suggest the opposite. For all their incredible powers of storage, calculation and prediction, IT systems break. Incompatibility problems multiply. Everyone gets hacked. My mother now needs to download and then update an app just to park her car. Automated phone helplines don’t answer your questions. Your fridge needs a software update. I’ve never been busier. Simply getting by is a full-time job. This is also true in the world of work. People get scared and can’t be bothered to learn yet another new bit of software. Legacy systems cling on because changing them is a pain, requiring new experts, ‘change management’ teams, and the rest.
Many years ago, Warren G Bennis said that “The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment”. In fact, the factory of the future will have no-one working the machinery. But it will need to employ 10 cyber-security officers; 3 procurement managers; 2 thought-leaders; 4 crisis-comms advisors; a data protection officer; 2 HR bosses; plus an in-house mental health and well-being guru to keep up motivation because everyone stares at a screen all day. If anything, the factory of the future might be bigger than before, even though hardly anyone is involved in the actual physical production of the product.
Every prediction about the future of work and automation seems to assume that machines and software all work perfectly, that humans are rational, and that nothing ever goes wrong. Which is obviously absurd, and why almost every IT ‘upgrade’ programme balloons over budget – and then needs replacing again just at the moment everyone has finally got used to it.
Although AI is often described as a ‘multi-purpose technology’, that isn’t always quite true at the level of individual occupations. I know of a few company CEOs who have noted with excitement the remarkable advances in ‘generative AI’. They see the flawless videos made by a machine; they read articles about AI-generated medical patents. They spend months wondering what wonders AI will offer, only to conclude: not much right now. (This also happened with blockchain, only worse.)
Just because OpenAI can produce a beautiful machine-made video from a few prompts, it does not follow that the bakery on your street corner can implement an ‘AI solution’ to ensure their delivery man turns up on time.