If California government wants to use AI, it will have to follow these new rules

State agencies must follow a new set of rules when signing contracts that deal with AI.

As artificial intelligence technology advances, state agencies would like to make use of it. California is now one of the first states with formal rules for government departments to follow when buying AI tools.

The guidelines introduced this week are the product of an executive order on the challenges and opportunities of generative AI signed by Governor Gavin Newsom late last year.

Generative AI produces text, imagery, audio, or video from simple text prompts. Since the release of ChatGPT in fall 2022, the technology has triggered fear of job loss, election interference, and human extinction. The technology can also produce toxic text and imagery that amplifies stereotypes and enables discrimination.

The guidelines require all state agencies to designate an employee responsible for continuous monitoring of generative AI tools, and to assess the risks its use poses to individuals and society before deploying it. State agencies must report their use of generative AI, determine whether it increases the risk that a public agency could harm residents, and submit any contracts involving generative AI to the California Department of Technology for review before signing them.

The guidelines also require that state agency executives, technical experts, and government workers receive training on what artificial intelligence is and on best practices for its use, such as how to prevent discrimination.

Though the guidelines extend protections against irresponsible use of generative AI, that’s only one form of artificial intelligence, a technology and scientific discipline that first emerged in the 1950s.

The guidelines will not protect people from other forms of the technology that have already proven harmful to Californians.

For example, millions of people were wrongfully denied unemployment benefits by the California Employment Development Department. A February 2022 Legislative Analyst’s Office report found more than 600,000 unemployment claims were denied after the agency started using ID.me for identity verification and a fraud detection algorithm made by Thomson Reuters. The problems were detailed in a Federal Trade Commission complaint filed in January by the Electronic Privacy Information Center against Thomson Reuters, whose fraud detection software is used in 42 states.

Electronic Privacy Information Center fellow Grant Fergusson evaluated AI contracts signed by state agencies across the U.S. He found they total more than $700 million in value, and roughly half involve fraud detection algorithms. The California unemployment benefits incident, he says, is one of the worst instances of harm he encountered while compiling the report and “a perfect example of everything that’s wrong with AI in government.”

Still, he thinks California deserves credit for being one of the first states to formalize AI purchasing rules. By his count, only about half a dozen U.S. states have implemented policy for automated decision-making systems.

State agency executives stress that California’s guidelines are an initial step, and that an update could occur following the completion of five pilot programs underway that aim to reduce traffic fatalities and give business owners tax advice, among other things.

Outside contributors to California’s efforts on generative AI include experts in academia like the Stanford University Human-Centered AI Institute, advocacy groups like the Algorithmic Justice League and Common Sense Media, and major AI companies, including Amazon, Apple, IBM, Google, Nvidia, and OpenAI.

Responsible AI rules

A fall 2023 report by state officials about potential risks and benefits says generative AI can produce convincing but inaccurate results and automate bias, but the report also lists several potential ways state agencies can use the technology.

Speaking from an Nvidia conference in San Jose, Government Operations Agency secretary Amy Tong said the intent of the framework is to make sure the state uses AI in an ethical, transparent, and trustworthy way.

Just because these guidelines wouldn’t have stopped California from inaccurately flagging unemployment claims doesn’t mean they’re weak, she said. Together with Tong, California State Chief Technology Officer Jonathan Porat likened the actions required by Newsom’s executive order to writing a book.

“The risks and benefits study last fall was the foreword, the contract rules are like an introduction or table of contents, and deliverables coming later in the year, like guidelines for use in marginalized communities, how to evaluate workforce impacts, and ongoing state employee training, will be the chapters,” he said.

What the government attempts to monitor in risk assessments and initial uses of generative AI will be important to California residents and will help them understand the kinds of questions to ask to hold government officials accountable, Porat said.

In addition to Newsom’s 2023 executive order about AI, other government efforts to create rules around the technology include an AI executive order by President Biden and a forthcoming bill stemming from AI Forum discussions in the U.S. Senate, which also focuses on setting rules for government contracts.

Supporters of that approach in the responsible AI research community argue that the government should regulate private businesses in order to prevent human rights abuses.

Last week a group of 400 employees at local government agencies across the country, known as the GovAI Coalition, released a letter urging citizens to hold public agencies accountable to high standards when the agencies use AI. At the same time, the group released an AI policy manual with best practices for government contract rulemaking.

Next week the group is hosting its first public meeting, with representatives from the White House Office of Science and Technology Policy, in San Jose. City of San Jose Privacy Officer Albert Gehami helped form the group and advised state officials on the formation of the contract rules.

Gehami said the impetus for forming the coalition came from repeatedly encountering companies that cite proprietary claims to justify withholding information about their AI tools, yet still try to sell their technology to public agencies without first explaining key details. Government agencies, he said, need to know up front about factors like accuracy and performance for people from different demographic groups. He’s excited to see California take a stance on government contracts involving AI, and overall he calls the guidelines a net positive, but “I think many people could argue that some of the most harmful AIs are not what people will call generative, and so I think it lays a good foundation, and I think it’s something that we can expand upon.”

Debunking AI fears

Fear of generative AI algorithms has been exaggerated, said Stanford University Law School professor Daniel Ho, who helped train government employees tasked with buying AI tools following the passage of a U.S. Senate bill that requires government officials with the power to sign contracts to participate in training about AI. Ho coauthored a 2016 report that found that roughly half of the AI used by federal government agencies comes from private businesses.

He told a California Senate committee last month that he thinks effective policy should require AI companies to report adverse events, just as companies are required to report cybersecurity attacks and personal data breaches. He noted that the fear of large language models making biological weapons was recently debunked, an incident that shows the government cannot effectively regulate AI if state employees don’t understand it.

At the same hearing, State Sen. Tom Umberg, a Democrat from Santa Ana, said government uses of AI must meet a higher standard because of the potential impact on things like human rights. But to do so, government must overcome the pay gap between government procurement officers and the counterparts who negotiate such contracts in private industry.

Government agencies can’t compete with the kind of pay that private companies can afford, Ho said, but removing bureaucratic hurdles can help improve the current perception that it’s hard to make a difference in government.

Since the accuracy of results produced by AI models can degrade over time, a contract for AI must involve continuous monitoring. Ho thinks modernizing rules around contracts government agencies sign with AI tool makers is essential in the age of AI but also part of attracting and retaining talent in government. Signing AI contracts is fundamentally different, he said, than purchasing a bunch of staplers in bulk.

In that same hearing, Service Employees International Union spokesperson Sandra Barreiro said it’s important to consult rank-and-file workers before government agencies sign contracts because those workers are best suited to determine whether the public will benefit. TechEquity Collaborative chief program officer Samantha Gordon, who helps organize meetings between people in the tech industry and labor unions, urged state senators to adopt policy that ends AI contracts if testing finds the technology ineffective or harmful.
