AI companies will be required to disclose their testing protocols and what guardrails they have in place to the California Department of Technology. State attorneys general can sue the company if the technology causes “substantial harm.”
Wiener’s bill comes amid a surge of state bills addressing artificial intelligence. Policymakers across the country are increasingly alarmed that years of inaction in Congress have created a regulatory vacuum that benefits the tech industry. But California, home to many of the world’s biggest technology companies, plays a unique role in setting the precedent for guardrails in the tech industry.
“We can’t go about developing software and ignore what the state of California is saying and doing,” said Lawrence Norden, senior director of elections and government programs at the Brennan Center.
Members of Congress have held numerous public hearings on AI and proposed several bills, but none have passed. Advocates of AI regulation now worry that the same pattern of debate without action that played out on past technology issues, such as privacy and social media, will repeat itself.
“If Congress could pass strong pro-innovation, pro-safety AI legislation at some point, I would be the first to support it, but I’m not holding my breath,” Wiener said in an interview. “We need to get ahead of this issue to maintain public trust in AI.”
Wiener’s party holds a supermajority in the state Legislature, but tech companies have fought hard against regulation in California before and have powerful allies in Sacramento. Still, Wiener said he believes the bill can pass by the fall.
“We were able to pass some very tough technology policies,” he said. “So, yes, we can pass this bill.”
California is not the only state pushing AI legislation. According to an analysis by BSA, the software industry group whose members include Microsoft and IBM, 407 AI-related bills are under consideration in 44 states. That is a dramatic increase since BSA’s previous analysis in September, which counted 191 AI bills introduced by states.
Several states have already enacted legislation addressing serious risks of AI, including its potential to exacerbate employment discrimination and to create deepfakes that could interfere with elections. About a dozen states have passed laws requiring governments to study the technology’s impact on employment, privacy, and civil rights.
But as the most populous state in the United States, California has unique authority to set standards that affect the entire nation. For decades, California’s consumer protection regulations have essentially served as national and even international standards for everything from hazardous chemicals to automobiles.
For example, in 2018, after years of debate in the Legislature, the state passed the California Consumer Privacy Act, setting rules for how technology companies can collect and use people’s personal information. The United States does not yet have a federal privacy law.
Wiener’s bill largely builds on President Biden’s October executive order, which used emergency powers to require companies to conduct safety tests on powerful AI systems and share the results with the federal government. California’s measure goes further than the executive order, explicitly mandating protections against hacking, shielding AI-related whistleblowers, and forcing companies to conduct testing.
The bill will likely draw criticism from much of Silicon Valley, where many argue that regulators are moving too aggressively and risk enshrining a system that makes it difficult for startups to compete with larger companies. Both the executive order and the California bill target large-scale AI models, an approach some startups and venture capitalists have criticized as short-sighted about how the technology will evolve.
Debate over the risks of AI intensified in Silicon Valley last year. Prominent researchers and AI leaders from companies such as Google and OpenAI signed a letter stating that this technology is on par with nuclear weapons or pandemics in its potential to harm civilization. The organization that compiled the statement, the Center for AI Safety, was involved in drafting the new bill.
Wiener said he has also consulted technology stakeholders, CEOs, activists and others about how best to approach regulating AI. “We have done a lot of outreach to stakeholders over the past year,” he said.
What’s important is that there’s a real discussion going on about the risks and benefits of AI, said Josh Albrecht, co-founder of AI startup Imbue. “It’s good that people are thinking about this at all.”
Experts expect the pace of AI legislation to accelerate further as companies release increasingly powerful models this year. The proliferation of state-level bills could increase industry pressure on Congress to pass AI legislation, since complying with a single federal law may be easier than complying with a patchwork of different state laws.
“There are significant benefits to national clarity on laws governing artificial intelligence, and strong national legislation is the best way to achieve that clarity,” said Craig Albright, BSA’s senior vice president of U.S. government relations. “That way businesses, consumers and all enforcers know what is required and expected.”
California’s law could have a significant impact on the broader development of artificial intelligence, as many of the companies developing the technology are based in the state.
“California’s state legislature and advocates in the state are much more sensitive to technology and its potential impacts, and they are very likely to take the lead,” Norden said.
States have a long history of moving faster than the federal government on technology policy. Since California passed its privacy law in 2018, nearly a dozen other states have enacted their own laws, according to an analysis by the International Association of Privacy Professionals.
States are also trying to regulate social media and child safety, but the tech industry is challenging many of these laws in court. Later this month, the Supreme Court is scheduled to hear oral arguments in landmark cases over social media laws in Texas and Florida.
At the federal level, partisan infighting has kept lawmakers from focusing on bipartisan legislation. Senate Majority Leader Charles E. Schumer of New York has created a bipartisan group of senators focused on AI policy and is expected to release an AI framework soon. But the House’s efforts haven’t made much progress. At a Post Live event on Tuesday, Rep. Marcus J. Molinaro (R-N.Y.) said House Speaker Mike Johnson had called for the creation of a task force on artificial intelligence to help move legislation forward.
“Too often we’re behind the curve,” Molinaro said. “Last year put us further behind.”