
Here’s a simple way to regulate powerful AI models

August 16, 2023

By Jason Matheny

Artificial intelligence is advancing so rapidly that many who have been part of its development are now among the most vocal about the need to regulate it. While AI will bring many benefits, it is also potentially dangerous; it could be used to create cyber or bio weapons or to launch massive disinformation attacks. And if an AI is stolen or leaked even once, it could be impossible to prevent it from spreading throughout the world.

These concerns are not hypothetical. Such a leak has, in fact, already occurred. In March, an AI model developed by Meta called LLaMA appeared online. LLaMA was not intended to be publicly accessible; the model was shared only with AI researchers who requested access to further their own projects. At least two of them abused Meta’s trust and released the model online, and Meta has been unable to remove LLaMA from the internet. The model can still be accessed by anyone.

Fortunately, LLaMA is relatively harmless. While it could be used to launch spear-phishing attacks, there is not yet cause for major alarm. The theft or leak of more capable AI models would be much worse. But the risks can be substantially reduced with effective oversight of three parts of the AI supply chain: hardware, training and deployment.


The first is hardware. The creation of advanced AI models requires thousands of specialized microchips, costing tens or even hundreds of millions of dollars. Only a few companies — such as Nvidia and AMD — design these chips, and most are sold to large cloud-computing providers such as Amazon, Microsoft and Google, as well as the U.S. government, a handful of foreign governments, and just a few other deep-pocketed tech companies. Because the pool of buyers is so small, a federal regulator could track and license large concentrations of AI chips. And cloud providers, which own the largest clusters of AI chips, could be subject to “know your customer” requirements so that they identify clients whose huge rental orders signal that an advanced AI system is being built.

The next stage of AI oversight focuses on the training of each model. A developer can — and should be required to — assess a model’s risky capabilities during training. Problems detected early can be more easily fixed, so a safer, less expensive final product can be built in less time.


Once training is complete, a powerful AI model should be subject to rigorous review by a regulator or third-party evaluator before it is released to the world. Expert red teams, pretending to be malicious adversaries, can try to make the AI perform unintended behaviors, including the design of weapons. Systems that exhibit dangerous capabilities should not be released until safety can be assured.

AI regulation is already underway in Britain, the European Union and China. But many breakthrough models — and most of the advanced AI systems that have brought us to this moment — have been developed in the United States. We would do well to establish a model of oversight for the world that focuses on the three parts of the AI supply chain. Increasing the safety of the American AI industry would boost public confidence at a time when consumers are growing wary of just what sort of future AI might bring.


Jason Matheny is the president and CEO of the Rand Corporation. He was previously the founding director of the Center for Security and Emerging Technology at Georgetown University. This article was previously published in The Washington Post.