Will AI Reshape the Value and Nature of Work?

MIT economist David Autor argues that AI has the potential to level the playing field for employees

MIT IDE
MIT Initiative on the Digital Economy


By Irving Wladawsky-Berger

“A recent Gallup poll found that 75% of U.S. adults believe AI will lead to fewer jobs,” wrote MIT economist David Autor in a recent article, AI Could Actually Help Rebuild The Middle Class. “But this fear is misplaced. The industrialized world is awash in jobs, and it’s going to stay that way. Four years after the Covid pandemic’s onset, the U.S. unemployment rate has fallen back to its pre-Covid nadir while total employment has risen to nearly three million above its pre-Covid peak.”

“Due to plummeting birth rates and a cratering labor force, a comparable labor shortage is unfolding across the industrialized world (including in China),” he added. “This is not a prediction, it’s a demographic fact. All the people who will turn 30 in the year 2053 have already been born and we cannot make more of them. Barring a massive change in immigration policy, the U.S. and other rich countries will run out of workers before we run out of jobs.”

In his article, Autor argues that AI can amplify the economy's strengths and offset some of its inequities.

AI has the potential to transform the labor market by reshaping the nature of human expertise.

AI offers us the opportunity to extend the value of human expertise by enabling “a larger set of workers equipped with the necessary foundational training to perform higher-stakes decision-making tasks currently arrogated to elite experts, such as doctors, lawyers, software engineers and college professors.”

Expertise is the domain-specific knowledge or competency required to accomplish a particular goal. Expertise commands high wages if the goal is both necessary and relatively scarce, e.g., physicians, engineers, and lawyers. In contrast, jobs that require little expertise and training generally command low wages, e.g., waiters, janitors, and school crossing guards.


The article examines the historical evolution of expertise across three eras: the industrial era of the 19th and early 20th centuries, the computer and digital era of the past several decades, and the emerging AI era.

In summary, Autor says that mass expertise was narrowly defined in the industrial era. Expert judgment wasn’t needed or even wanted from workers in assembly lines or offices. “As a result, the narrow procedural content of mass expert work, with its requirement that workers follow rules but exercise little discretion, was perhaps uniquely vulnerable to technological displacement in the era that followed.”

The Computer Era

The second half of the 20th century saw the emergence of the digital computer era, a.k.a. the information age. A wide variety of scientific and business applications could now be precisely represented as software programs. Over the following decades, the exponential advances in the performance and price-performance of digital technologies — i.e., Moore’s Law — significantly increased the use of computers across our increasingly digital economy.

Computers automated a large share of the routine, mass-expertise tasks of the industrial era, replacing many mid-skill production and clerical workers or pushing them into lower-skill, lower-pay jobs.

At the same time, sophisticated computer tools enhanced the productivity and value of jobs requiring the kind of expert problem-solving and complex communication skills typical of managerial, professional, and technical occupations. While such jobs lay beyond the reach of computer automation, most were complemented by advanced computer tools.

Since the 1980s, high-skill jobs requiring elite expertise have significantly expanded, with the earnings of the highly educated workers needed to fill such jobs rising steadily.

The AI Era

Fast forward to the emergence of AI. “Like the industrial and computer revolutions before it, artificial intelligence marks an inflection point in the economic value of human expertise,” wrote Autor. “To appreciate why, consider what distinguishes AI from the computing era that we’re now leaving behind. Pre-AI, computing’s core capability was its faultless and nearly costless execution of routine, procedural tasks. Its Achilles’ heel was its inability to master non-routine tasks requiring tacit knowledge. AI capabilities are precisely the inverse.”

Many of the leading AI researchers of the 1960s and 1970s were convinced that AI systems with human-like cognitive capabilities could be developed within a generation, and they obtained considerable government funding to pursue that vision. Eventually it became clear that these projects had grossly underestimated the difficulty of building machines that exhibit human-like intelligence: in the end, you cannot express in software cognitive capabilities — like language, thinking, and reasoning — that are themselves barely understood. After years of unfulfilled promises and hype, these ambitious approaches were abandoned in the 1980s, and a so-called AI winter of reduced interest and funding set in that nearly killed the field.

AI was reborn in the 1990s. Instead of trying to program human-like intelligence, the field embraced a statistical, brute force approach based on searching for patterns in vast amounts of data with highly parallel supercomputers and sophisticated algorithms — an approach widely used in scientific applications like high energy physics, theoretical astronomy, and computational genomics. AI researchers discovered that such an information-based approach produced something akin to intelligence or knowledge. Moreover, unlike the earlier programming-based projects, the statistical approaches scaled very nicely.

The more information you have, the more powerful the supercomputers, the more sophisticated the algorithms, the better the results.

AI is fundamentally a data-centric discipline — a kind of software 2.0. The centrality of data is the common element in the key technologies that have advanced AI over the past few decades, including big data and analytics in the 2000s, machine and deep learning in the 2010s, and more recently foundation models, LLMs, and generative AI. “AI is remarkably effective at acquiring tacit knowledge. Rather than relying on hard-coded procedures, AI learns by example, gains mastery without explicit instruction and acquires capabilities that it was not explicitly engineered to possess.”


How does this tie back to the labor market? “AI’s capacity to depart from script, to improvise based on training and experience, enables it to engage in expert judgment — a capability that, until now, has fallen within the province of elite experts. Though only in its infancy, this is a superpower.

As AI’s facility in expert judgment becomes more reliable, incisive and accessible in the years ahead, it will emerge as a near-ubiquitous presence in our working lives. Its primary role will be to advise, coach and alert decision-makers as they apply expert judgment.”

The True Promise of AI

AI technologies have the potential to address serious labor challenges — such as diminishing opportunities for mid-wage, mid-skill jobs — by augmenting the procedural knowledge of less expert workers and enabling them to perform tasks requiring expert decision-making capabilities. “AI could enable a larger set of workers possessing complementary knowledge to perform some of the higher-stakes decision-making tasks currently arrogated to elite experts like doctors, lawyers, coders and educators. This would improve the quality of jobs for workers without college degrees, moderate earnings inequality, and — akin to what the Industrial Revolution did for consumer goods — lower the cost of key services such as healthcare, education and legal expertise.”

“For the lucky among us, work provides purpose, community and veneration,” Autor concludes. “But the quality, dignity and respect of a substantial minority of jobs has eroded over the past four decades as computerization has marched onward and inequality has grown more prevalent.”

“The unique opportunity that AI offers humanity is to turn back this tide — to extend the relevance, reach and value of human expertise for a larger set of workers.

Not only could this reduce earnings inequality and lower the costs of key services like healthcare and education, but it could also help restore the quality, stature and agency that has been lost to too many workers and jobs.”

“This alternative path is not an inevitable or intrinsic consequence of AI development. It is, however, technologically plausible, economically coherent and morally compelling. Recognizing this potential, we should ask not what AI will do to us, but what we want it to do for us.”

Read the original March 7 blog in full here.


Addressing one of the most critical issues of our time: the impact of digital technology on businesses, the economy, and society.