Employing a 1950 law enacted during the Korean War that gives the administration emergency powers, the directive requires agencies to set standards for the testing and use of AI systems, and to address chemical, biological, radiological, nuclear and cybersecurity risks.
It imposes safety obligations on AI developers, requiring that they perform "red-teaming" tests – using third parties to stress-test their systems – and disclose the results to the government before rolling out their products.
It also establishes a national AI research agency to develop standards, tools and tests for AI systems, and seeks an increase in the intake of immigrants with advanced AI skills to build America's capabilities in the sector.
There's a big emphasis on privacy protections and preventing "algorithmic discrimination" in employment, the justice system, healthcare and housing.
Executive orders have their limitations and are vulnerable to legal challenges. Biden's order will require the US Congress to pass legislation if the country is to properly regulate AI – which might be a challenge, given the dysfunctional state of the US Congress.
At present, the largest US tech companies – among them Google, Microsoft, Facebook's parent Meta Platforms, Amazon and OpenAI (ChatGPT's maker) – are operating under a voluntary code of conduct that commits them to external security testing of their systems, to sharing information about their capabilities and usage with government and the industry, and to watermarking AI-generated products.
Given that the executive order, and any legislation that might emerge, would be confined to US companies – even though they are the leaders in AI and in the development of the bleeding-edge semiconductors that power it – it is obvious that widespread international agreement on regulatory principles will be needed if AI's risks to individuals and humanity are to be managed and its opportunities realised.
From facial recognition technologies to the potential for AI to be used to develop nuclear and biological weapons or to create "deep fakes" and potent disinformation, there's a multitude of risks associated with a technology that is developing at a rate that regulators will struggle to keep up with. There are estimates that suggest the computational capacity of generative AI is doubling every six months.
Some sectors of the US tech industry have pushed back against the Biden administration's efforts to provide some regulatory framework around AI, even though the executive order is aimed at the next generation of AI rather than current systems.
Republican senator Ted Cruz, for instance, described the measures in the order as "barriers to innovation disguised as safety measures". It is inevitable that there will be similar attempts by politicians and industry stakeholders to play the innovation card wherever policymakers try to erect safeguards around a technology that even some of its pioneers have warned carries grave risks for humankind.
In Europe, there was a dispute between some countries and the members of the European parliament, for instance, over the use of live facial recognition technologies, with the states wanting to be able to use AI for border security but the European MPs regarding it as an invasion of privacy. The draft legislation was a win for those worried about the privacy implications.
China, of course, has deployed live AI facial recognition technology widely and has made the development of AI and global leadership of the sector one of its key policy ambitions – an ambition the US has attempted to thwart with its ever-intensifying bans on the supply of the largely American-designed advanced semiconductors needed for next-generation AI systems and applications.
China does regulate AI quite stringently, with companies required to obtain a licence from the state before they can release generative AI models. China wants to be a leader in AI development while retaining absolute state and Communist Party control.
In Australia, we're still at the discussion paper stage of developing a regulatory approach to AI, but the same concerns about managing the risks while trying to capture the opportunities exist here as they do in the US or Europe.
It is apparent that the first item on any regulatory agenda ought to be transparency, so that there is an understanding of how AI systems were developed and how they are being used. There needs to be clear identification – "watermarking" – of any material AI produces, and external human auditing to guarantee that risks are identified and can be responded to.
The tension between the risks and opportunities latent in the technology makes it unlikely there will be a single prescriptive global standard or approach to regulating AI, even though it appears obvious that the search for competitive advantage by companies, countries and "bad actors" will push risk tolerances to, or beyond, whatever boundaries the more cautious regimes impose.
The saving grace might be that the technology relies on the most advanced chips and software, whose design and production lie almost entirely within the major economies. That means there is at least the potential to control how AI develops, if there is the will to do so.