Biden’s AI Executive Order Will Actually Decrease Regulation in the Long Run
Last March, the House Oversight Committee held a hearing on the Biden administration’s executive order (EO) on artificial intelligence (AI). In it, tech lobbyists and centre-right think tanks attacked the executive order over fears that it is leading to excessive and burdensome regulation, stifling innovation, and slowing the economy. They are wrong. Biden’s executive order will actually lower the regulatory burden in the long run, for two reasons: better information about the state of AI, and the talent and expertise to understand it.
The main objection is Biden’s use of the 74-year-old Defense Production Act (DPA) to compel companies to share information about training runs and AI projects that exceed a certain threshold of computing power. Critics say this is not what the DPA was intended for and see it as an overreach of executive authority. But what is really at the heart of the issue is a fundamental disagreement over whether, when, and how to regulate private industry.
Opponents misunderstand the situation. The pace of AI progress has been unlike anything we’ve seen before, and record investment means it will continue. While we shouldn’t necessarily be regulating now, the point at which we should may be closer than we think. Countries attending the UK’s AI Safety Summit, including the U.S., agreed that future advanced AI systems could be powerful enough to enable cyberattacks on critical infrastructure or to help develop new bioweapons and pandemics. Whether Republicans like it or not, there will come a point when the government regulates AI. If we prepare for that day now, we can ensure those regulations are limited and efficient.
There is an information asymmetry between governments and AI companies, and the companies hold all the cards. Not only do they understand the present technology best, they also have the best information on where it is going. When it comes to regulating and safeguarding this technology, they are currently in a far better position to do so than the government.
To start, we should remove the information asymmetry. We need a better-informed government that understands where the technology is today and, crucially, where it is going. Uninformed regulation can have wide-reaching and unintended negative effects. But the better clued-in the government is about an industry, the better the chance that its regulations will be targeted, proportionate, and useful. Moves to better inform regulators will reduce and narrow any regulations that come in the future.
For AI, this means getting access to more information from the labs: a better understanding of what sort of datasets are being used, what resources are going into training models, and, crucially, evaluating and monitoring the outputs and capabilities of current and upcoming models. There is a wealth of data and metrics coming out of the AI industry that the government currently ignores. Collecting this data is the first step towards better-informed policymaking.
However, monitoring doesn’t just mean measuring; it also means understanding. Governments and regulators need not only the capacity to measure the AI industry but also the technical know-how to interpret the results. That means hiring more technical talent, ideally straight out of the industry, into these monitoring and evaluation roles. The UK’s AI Safety Institute recently hired Geoffrey Irving, a big name in the field who previously worked at Google DeepMind. The newly announced U.S. AI Safety Institute should aspire to do the same.
Beyond the direct benefits to AI policymaking and regulation, bringing more technical talent into government will have wider positive effects: these hires can share their expertise, help the broader government operate more efficiently (saving the taxpayer money), and build stronger relationships with the private sector. There are also benefits to having technical talent engage with non-technical staff, who can offer unique perspectives on the socioeconomic impacts of AI that can’t be found in AI labs.
Ninety days into its implementation, Biden’s executive order looks on track to meet this vision. The administration has already used the Defense Production Act to require AI labs to share important, safety-relevant information with the Department of Commerce. Alongside acquiring this information, the department is developing testbeds and testing environments for AI systems, as well as evaluation tools to analyse the datasets those systems are trained on, all of which will help build the necessary knowledge of and insight into modern AI.
Another big win of the executive order is its policies for bolstering AI talent in the U.S. and its government. There are 11 provisions on immigration and the international recruitment of top AI talent. A further 3 are aimed at kickstarting the domestic AI workforce. And 19 tasks and provisions are aimed at equipping and staffing the federal AI workforce, including the new AI.gov tech talent portal, direct-hire authority, and temporary excepted service appointments for some key technical positions in government.
These are just a few of the many policies in the executive order, but they show that the EO is prioritising the right things. Beyond these moves to improve state capacity, information, and talent, much of the EO simply tells departments and officials to “start making plans”: not to make regulations, but to plan, get better informed, and understand the space, so that we are prepared if we do need to regulate. This ensures that when we do, the rules are narrow, targeted, and proportionate.
Indeed, just look at the one piece of actual AI regulation that will soon come out of the EO. Based on risk assessments carried out by nine federal agencies, regulation is now being drafted to govern AI safety in critical infrastructure systems, like the electricity grid. Ensuring the safety of critical U.S. infrastructure is the perfect example of the targeted and proportionate regulation that will follow from the executive order. Without the EO, the alternative would be either vulnerable infrastructure or excessive regulation.
Concern about wide-reaching and cumbersome regulation is warranted. If we mess this up, we could end up in a far worse position than when we started. I think current governments are insufficiently prepared to regulate, and the speed of AI progress may mean we aren’t afforded the time for them to “catch up.” That’s why we need a novel approach to AI regulation. That’s why we need more information and technical talent in government. That’s why we need this EO. And that’s why the EO will lead to narrower, more proportionate, and frankly better AI regulation in the long term.