New technologies often bring calls for new regulation. A current example is artificial intelligence (AI)—the creation of machines that think and act in ways that resemble human intelligence.
There are plenty of AI optimists and AI pessimists. Both camps see the need for government intervention. Microsoft co-founder Bill Gates, who believes AI will “allow us to produce a lot more goods and services with less labor,” foresees labor force dislocations and has suggested a robot tax. Tesla’s Elon Musk, who believes AI presents a “fundamental risk to the existence of human civilization,” calls for proactive regulation “before it is too late.”
Both billionaires might be encouraged by President Trump’s new executive order (EO), “Maintaining American Leadership in Artificial Intelligence,” which calls for a coordinated approach to regulation of AI. The coming year will see the emergence of a U.S. regulatory framework.
Manufacturers need to pay very close attention.
The New Executive Order
The EO, issued February 11, outlines the policy of the U.S. government to ensure leadership in AI through development of a coordinated strategy. It is based on five principles: driving technological breakthroughs, developing appropriate technical standards, training a workforce for the future, fostering public trust and confidence, and ensuring international trade of AI-enabled products.
A significant part of the EO pertains to regulation. The Office of Management and Budget (OMB) is to issue guidance to regulatory agencies within six months, and the agencies are to develop their own plans consistent with the OMB guidance.
The National Institute of Standards and Technology (NIST) is given the lead role in developing a plan for federal engagement in technical standards to support systems that use AI technologies. This is timely given the push for such standards around the globe and the ongoing efforts of several standards development organizations (SDOs).
Profound Impact on Manufacturing
These federal actions will have a profound impact on U.S. manufacturers. The future of manufacturing lies in smart manufacturing—the digitalization of factories and their supply chains. Market analyses project global value-added as high as several trillion dollars by 2025. Smart manufacturing depends critically on using AI—and machine learning in particular—to find patterns in digital information using algorithms.
Today, manufacturers are using machine learning to gain a competitive edge. Applications include workforce training (Land Rover uses augmented reality to train new technicians), production process improvement (Ford uses cobots—collaborative robots that work side by side with humans—to install shock absorbers on its assembly line), quality control (the placement of labels on Tabasco sauce is checked using computer vision), supply chain optimization (IBM uses AI to better manage its global supply chain), predictive maintenance (Siemens places sensors on older motors to detect irregularities before a problem materializes), designing new products (Adidas uses generative design to create new athletic shoes), and distribution and transportation (many firms use semi-autonomous or autonomous vehicles such as forklifts, inventory robots, and low-payload drones in a factory or warehouse setting).
Regulation of current and future applications of AI is not straightforward. Regulators will have to balance their desire to address legitimate social concerns (e.g., rapid loss of manufacturing jobs due to adoption of AI) against potential erosion of innovation and productivity. Like Goldilocks in the story of the three bears, federal regulators must find the spot between too much and too little regulation that is just right. Whatever they decide, critics will pounce and judicial challenges will ensue. Inevitably, a regulatory system will be created.
International competitiveness is also at stake. Other nations—notably China—are moving quickly to establish standards and set regulations for AI to gain a competitive advantage. The EU aims to create rules for “ethical AI.” The country or region that can best influence global norms of behavior for AI will have a first-mover advantage. This fact is not lost on the Trump administration—the United States-Mexico-Canada Agreement (USMCA), which would replace NAFTA, includes provisions that reflect U.S. preferences on AI governance.
Three Key Questions
As the U.S. moves to create its own standards and develop a coordinated approach to regulation, three questions are critical.
How do existing regulatory programs address AI?
Existing regulatory programs already address many AI applications.
For example, the Food and Drug Administration (FDA) regulates medical devices that incorporate machine learning algorithms. FDA regulation of pharmaceutical manufacturing allows the agency to determine whether an AI-enabled production process meets current Good Manufacturing Practices (cGMP). The Federal Aviation Administration (FAA) is certifying new aerospace parts created using generative design. The National Highway Traffic Safety Administration (NHTSA) will be altering its federal motor vehicle safety standards—which refer to a human driver—to allow for self-driving cars. Both the Commerce and State Departments must determine whether the export of certain AI-enabled products creates a national security issue.
With the new executive order, the Administration seeks to provide some consistency and control over the regulatory decisions of dozens of regulatory agencies—lest regulatory requirements set a bad precedent. By learning how AI is currently being regulated, agencies can learn from each other and the administration can best determine if existing programs are consistent with the principles of the new executive order.
Does AI raise novel issues?
Perhaps AI raises unique issues that cannot be addressed adequately if left to the market or by using existing regulatory authorities. Which aspects of AI might raise novel issues? With respect to machine learning, debate centers on the potential for bias, the lack of auditability, and the evolution of performance over time. None of these concerns, however, is sufficient to prohibit AI.
Machine learning is based on training data, and patterns that emerge from real-world training data may reflect bias. For example, machine learning used to identify the best qualified candidates from among thousands of job applicants may inadvertently discriminate against certain classes of people. This problem, however, is not new to regulators, who have experience in identifying and enforcing against policies and practices that, in effect, discriminate against protected classes of people.
Machine learning is often a black box that defies explanation. For example, AlphaZero, a chess-playing program from Google’s DeepMind based on deep learning (a form of machine learning), is the best chess-playing machine on the planet—much better than other chess computers that do not rely on AI. Does it matter how it comes up with a winning move? It matters to professional chess players, who are seeking to improve their game. Similarly, there may be situations in which a regulator needs to know how AI drew a conclusion. For example, FAA might want to know why an autonomous plane (drone) crashed in a crowded residential area. Devising AI to be explainable may be important to regulators. Fortunately, world-class expertise in “explainable AI” resides within the federal government, at the Department of Defense. Such expertise can be tapped before devising suitable standards or regulations.
AI applications that evolve (i.e., learn) over time may create unique problems for regulators. Consider situations where regulatory approval is needed before a new product or service can come to market. How does FDA approve a new medical device if the device is based on an algorithm that changes over time? How does NHTSA set standards for the design of autonomous cars if vehicle performance continuously evolves as it is used? Such a situation suggests the need for regulatory performance standards, as opposed to prescriptive, command-and-control regulation.
How will regulators use AI?
Regulators themselves may leverage AI to better accomplish their mission. For example, regulators might use AI to identify violations within a massive set of compliance data, to determine best available technology for the purpose of establishing pollution control requirements for steel factories, or to evaluate the weight of scientific evidence for the toxicity of an industrial chemical based on hundreds of toxicological studies.
As the Administration develops its plan to regulate AI, it should also disclose how regulators will use AI, which will provide greater certainty for regulated entities and otherwise foster public trust and confidence consistent with the principles of the executive order.
Over the next year, the U.S. government is poised to develop a coordinated regulatory approach to AI. Although the need for such coordination is understandable, the process may lead to unnecessary or insufficient regulation that negatively impacts U.S. manufacturing competitiveness in the long term. To get regulation right, federal officials will need input from manufacturers on both current and projected applications. Only through a dialogue with all stakeholders can regulatory officials gather sufficient information to craft a constructive federal policy. As part of this process, key questions must be answered and the resulting information shared with the public.
Keith B. Belton is Director of the Manufacturing Policy Initiative at Indiana University in Bloomington, Indiana.