If California government wants to use AI, it will have to follow these new rules
As artificial intelligence technology advances, state agencies want to make use of it. As of today, California is one of the first states with formal rules for government departments to follow when buying AI tools.
The guidelines introduced this week are the product of an executive order issued by Governor Gavin Newsom late last year to address the challenges and opportunities posed by generative AI.
Generative AI produces text, imagery, audio, or video from simple text prompts. Since the release of ChatGPT in fall 2022, the technology has triggered fears of job loss, election interference, and human extinction. The technology can also produce toxic text and imagery that amplifies stereotypes and enables discrimination.
The guidelines require every state agency to designate an employee responsible for continuously monitoring generative AI tools and to assess the risks a proposed use poses to individuals and society before deploying it. State agencies must report their use of generative AI, determine whether it increases the risk that the agency could harm the public, and submit any contract involving generative AI to the California Department of Technology for review before signing it.
The guidelines also require that state agency executives, technical experts, and government workers receive training on what artificial intelligence is and on best practices for its use, such as how to prevent discrimination.
Though the guidelines extend protections against irresponsible use of generative AI, that’s only one form of artificial intelligence, a technology and scientific discipline that first emerged in the late 1950s.
The guidelines will not protect people from other forms of the technology that have already proven harmful to Californians.
For example, hundreds of thousands of people were wrongfully denied unemployment benefits by the California Employment Development Department. A February 2022 Legislative Analyst’s Office report found more than 600,000 unemployment claims were denied after the agency started using ID.me for identity verification and a fraud detection algorithm made by Thomson Reuters. The problems were cited in a Federal Trade Commission complaint filed in January by the Electronic Privacy Information Center against Thomson Reuters, whose fraud detection tool is used in 42 states.
Electronic Privacy Information Center fellow Grant Fergusson evaluated AI contracts signed by state agencies across the U.S. He found they total more than $700 million in value and roughly half involve fraud detection algorithms. The California unemployment benefits incident, he says, is one of the worst instances of harm he encountered while compiling the report and “a perfect example of everything that’s wrong with AI in government.”
Still, he thinks California deserves credit for being one of the first states to formalize AI purchasing rules. By his count, only about half a dozen U.S. states have implemented policy for automated decision-making systems.
State agency executives stress that California’s guidelines are an initial step, and that an update could occur following the completion of five pilot programs underway that aim to reduce traffic fatalities and give business owners tax advice, among other things.
Outside contributors to California’s efforts on generative AI include experts in academia like the Stanford University Human-Centered AI Institute, advocacy groups like the Algorithmic Justice League and Common Sense Media, and major AI companies, including Amazon, Apple, IBM, Google, Nvidia, and OpenAI.
Responsible AI rules
A fall 2023 report by state officials about potential risks and benefits says generative AI can produce convincing but inaccurate results and automate bias, but the report also lists several potential ways state agencies can use the technology.
Speaking from an Nvidia conference in San Jose, Government Operations Agency secretary Amy Tong said the intent of the framework is to ensure the state uses AI in an ethical, transparent, and trustworthy way.
Just because these guidelines wouldn’t have stopped California from inaccurately flagging unemployment claims doesn’t mean they’re weak, she said. Together with Tong, California State Chief Technology Officer Jonathan Porat likened the actions required by Newsom’s executive order to writing a book.
“The risks and benefits study last fall was the foreword, the contract rules are like an introduction or table of contents, and deliverables coming later in the year, like guidelines for use in marginalized communities, how to evaluate workforce impacts, and ongoing state employee training, will be the chapters,” he said.