A misguided California bill would make cutting-edge AI work effectively illegal | Opinion

The safety impacts of artificial intelligence are here, are real and are often very personal.

Innocent teenagers are being blackmailed with their faces inserted into photo-realistic pornography. Terrified grandparents are sending tens of thousands of dollars to “virtual kidnappers” who use AI to fake the voices of family members using snippets of audio from social media. Propaganda shops in China, Russia and Iran are using AI to create fake personas to manipulate political discussions around the world.

We are in desperate need of action at every level of government, but if you pay attention to the debate in Sacramento over AI, you will hear much more about science-fiction risks than solutions aimed at how people are being harmed today.


The centerpiece of California’s massive slate of potential AI regulations seems to be Senate Bill 1047, authored by Sen. Scott Wiener, D-San Francisco. It recently passed out of the Senate and awaits action by the Assembly. Among several reasonable proposals in this bill, however, is a poison pill for California’s continued dominance of the field: a requirement that makers of foundational (aka general purpose) AI models swear that their products could never be used or modified to enable widespread harm.

The bill specifically mentions nuclear and biological threats, which are reasonable areas where we wouldn’t want AI deployed. But it also includes catch-all language that holds AI developers responsible for any potential misuse of their product. Penalties apply even if that misuse is carried out by ill-intentioned third parties who intentionally defeat built-in safety protections.

California inventions such as the nicotine patch, WD-40 and the skateboard would never have survived such a standard. Responsible companies think a lot about how their models might be misused, and government agencies and private researchers are racing to create frameworks for predicting and testing for safety flaws. But general purpose AI models are just that: general purpose. They can be used both for good and bad in the same way as a car or hammer.

Requiring companies to foreclose every potential abuse is asking them to peer into a murky crystal ball. It would make it impossible to deploy any open-source AI code that could conceivably be modified into something dangerous.

Recently, 160 students in my Stanford University class presented their final projects applying open or closed-source AI to real online safety challenges. In just seven weeks, they built systems to detect the grooming of children online, prevent the creation of fake spam accounts and stop the spread of virtual revenge porn. For every negative use of AI, we will need positive protections.

To be clear, outlawing free tools will only benefit those individuals and countries that already operate outside the boundaries of civilized society.

A better approach for legislators would be to focus on the use of AI to hurt others, to bring our state’s resources to bear against bona fide bad actors and to encourage further responsible development.

California could create severe civil and criminal penalties for those unethical entrepreneurs who are providing online services that create virtual child abuse material or deep-faked pornography. Victims of cybercrime are often caught between FBI offices that won’t return their calls and local police officers who don’t have the resources to disrupt international gangs. California could lead the country in creating a state-level law enforcement agency to investigate and prosecute cybercrime, much of which is now AI-powered.

We could collectively clarify that AI vendors are liable when they knowingly enable harmful or discriminatory behavior, and we could require safety reviews before hooking AI models up to risky applications like the power grid. California could demonstrate to the world how governments can use AI for good, by protecting hospitals, school districts and local institutions that are under constant cyberattack by professional ransomware gangs.

I hope the Legislature shifts to a path that addresses the very real needs of Californians who are falling victim to those abusing AI.

Alex Stamos is the chief trust officer of SentinelOne, an AI cybersecurity company, and a lecturer in computer science at Stanford University. He is the former chief security officer of Facebook and Yahoo, a Sacramento native and a graduate of Bella Vista High School and UC Berkeley.