“Because it is hard to regulate AI, should we do nothing? No.” Former Australian chief scientist Alan Finkel. Credit: Eamon Gallagher
Cries for regulation are issued almost daily. The European parliament has approved sweeping draft laws which, if legislated, will restrict the use of AI and mandate human oversight, data quality and accountability. European companies are pushing back, concerned that the proposed legislation will not be effective at tackling AI risks and will damage the competitiveness of European businesses.
The trouble for regulators is that the horse has bolted. AI is everywhere. Regulations will be adhered to by high-integrity organisations, and they will dutifully label their products. However, the rules will inevitably be ignored by criminal organisations, terrorist groups and rogue states.
To add to the dilemma, if some countries were to apply the brakes on development, others with a history of supporting terrorism and developing nuclear weapons would ignore the call and leap ahead in the AI race.
Because it is hard to regulate AI, should we do nothing? No. The risks are so great that burying our collective heads in the sand is not the answer. Calls by technology leaders for a hiatus in AI development are not the answer. Regulations that only affect the well-intentioned, high-integrity developers and users of AI are not the answer.
Perhaps the answer will come from thinking laterally. What can we learn from the nuclear non-proliferation treaty that entered into force in 1970?
While not perfect, it slowed the spread of nuclear weapons, and arsenals today are about one-fifth of what they were 50 years ago. What made the treaty possible was that the cost and scale of developing nuclear weapons are so large that the regulated targets, such as silos, reactors and enrichment facilities, can be monitored by international audit teams and by satellite surveillance to verify compliance. The problem is not solved, but we have enjoyed decades of respite.