To Fear or Not to Fear: AI and the Future of Work
Recently at the Phoenix airport, I hopped into a taxi and, as driverless ride-hailing vehicles moved alongside us on the highway, the driver asked me what I thought about the advent of AI. Before I uttered a syllable, he launched into a diatribe about the dire consequences for human employment—in the era of AI, he exclaimed, humans will ultimately end up with no work prospects and no wages.
I could have argued that it’s indisputable that AI—from machine learning and deep learning to large language models—will bring dramatic, beneficial advancements to critical sectors such as healthcare and manufacturing (e.g., see Manufacturers Alliance’s recent report, Manufacturing Intelligence). Still, there are serious concerns about advances in AI, particularly in the hands of criminals and political kleptocrats—think increased cyber risks and deepfakes—as well as the dramatic uptick in the energy needed to run data centers. But human employment isn’t high on my list of concerns.
Why not? Because of “Polanyi’s Paradox”—the observation that humans know more than we can describe, which dramatically curtails AI’s capacity to mimic certain critical human abilities.
Luddites and ATMs
More on that in a moment. But first: fears of technology replacing human labor are centuries old. Automation has long been a bugaboo among workers, starting with the 19th-century English textile laborers—the “Luddites”—who took to destroying the new machinery purchased by textile factory owners. As technological advances expanded what automation could do, the cries of alarm continued. And yet the opposite occurred. As MIT economist David Autor observed in a 2015 paper in the Journal of Economic Perspectives, workplace technologies may substitute for some labor over time, but the labor that cannot be substituted—and there are many examples—is generally complemented by technology. In effect, helping, not hindering.
Take the dire consequences predicted for bank workers as the number of ATMs in the country quadrupled between 1980 and 2010. Instead of disappearing, bank employment increased. ATMs reduced the number of cash-handling tasks, but the technology also generated new data on customers and new opportunities for institutions to engage in relationship banking. Tellers became sales representatives.
But, you’ll argue, AI is different. Automation may complement blue-collar jobs, but AI will take over professional occupations that require intricate thought processes, because it not only substitutes for human brain function but dramatically improves on it. As Pew Research Center researcher Rakesh Kochhar told a Forbes writer last year, “[AI] is reaching up from the factory floors into the office spaces where white-collar, higher-paid workers tend to be.” The jobs most exposed are those focused on data analysis, financial reporting, and repetitive administrative tasks. This, of course, was the same risk bank tellers faced four decades ago, and we know how that worked out.
Polanyi’s Paradox
Which brings us back to what AI can and, practically speaking, cannot do. In the 1960s, the Hungarian-British philosopher Michael Polanyi theorized that there are certain kinds of knowledge we cannot articulate. How, for example, do you describe intuition, insight, morality, or experiential wisdom? MIT’s Dr. Autor took this to its logical conclusion: if you can’t articulate it, you can’t program it, and if you can’t program it, a machine won’t be able to achieve it. This he called Polanyi’s Paradox.
Thus there are practical limits to AI, rooted in the difference between explicit and tacit (or implicit) knowledge. In mathematics, we can describe processes so precisely that, through data collection and analysis, they can be taught to students from grade school through graduate school. That’s explicit knowledge, and it is what allows us to program a computer to use artificial neural networks to analyze and learn from data, i.e., deep learning. But many human activities draw on a cognitive ability we understand intuitively yet cannot clearly explain. That is implicit knowledge. How does a critic explain the difference between the creativity of Monet and Degas, or of Hemingway and Twain? How does a psychotherapist distinguish between marginally different psychological disorders? How does a CEO explain her reasoning behind a decision to acquire a specific firm? How does an HR professional know which of the candidates applying to work at a manufacturing facility is truly the best fit?
This latter challenge, in fact, stands out in a joint study to be published in May by Manufacturers Alliance Foundation and American Fidelity. While top HR executives at manufacturing firms are turning to AI to aid processes that hinge on data analysis, such as recruiting and onboarding employees, they express concern over the inherent risks of bias. They also overwhelmingly believe that psychological wisdom and empathy—forms of implicit knowledge—must play a significant role, which elevates the value HR professionals bring to their employers.
Winning vs. feeling
Human subjective experiences, beliefs, and empathy are core to another insurmountable hurdle that will keep AI from superseding humans in the workforce. As Dr. Tom McClelland of Clare College has observed, it is one thing to win a game of chess; it is another to feel the excitement of victory. Those internal experiences stem from human consciousness, which has played a fundamental role in how humans evolved biologically, culturally, and behaviorally, the very dimensions on which labor in our societal structure rests. You can’t program consciousness: mountains of literature and research have been published on the subject, and yet we still can’t agree on what it is.
These obstacles to AI’s complete domination of human labor don’t necessarily benefit my Phoenix taxi driver, who must compete with driverless cars. But they should give us an appreciation for the fact that we’ve been here before—and that ultimately, despite the most dire of predictions, technology has improved, not diminished, our quality of life over the centuries.