Drew Liebert and David Evan Harris are the director and senior policy advisor, respectively, of the California Initiative for Technology and Democracy (CITED), a project of California Common Cause.
Amid the ongoing gridlock in Washington, California is stepping into the breach on technology policy. Governor Gavin Newsom's recent signing of the California AI Transparency Act marks a defining moment for the state, particularly for AI legislation that aims to protect children and bring transparency to digital content. The moment underscores California's role as a leader in tech policy reform and offers a counterpoint to federal inaction.
As the home of Silicon Valley, California has a unique responsibility, and its legislators are proving that even small, incremental steps can pave the way for significant changes. However, it’s clear that the battle against powerful tech interests and corporate lobbying is far from over. The groundwork laid in this legislative session is essential, but more challenges await as we look toward 2026.
In an ideal scenario, federal lawmakers would create a robust national framework that emphasizes innovation while also holding tech companies accountable. Unfortunately, the ongoing paralysis in Congress and misguided efforts to override state-level protections have led to a situation where state intervention has become a necessity. When children and vulnerable populations are left exposed to risks posed by deepfakes, data exploitation, and algorithmic discrimination, California’s proactive measures become all the more urgent.
This year, Sacramento has initiated several key legislative reforms aimed at addressing these issues. While none of these laws serve as an all-encompassing solution, taken together, they reflect the possibility of governing technology in the public interest—balancing innovation with responsibility.
One noteworthy piece of legislation is the California AI Transparency Act of 2025. Spearheaded by Assemblymember Buffy Wicks (D), this law mandates that major social media platforms, messaging applications, and search engines identify AI-generated content. It also obliges smartphone and camera manufacturers to incorporate digital provenance data, enabling users to verify the authenticity of images and videos. This is a crucial tool in an age when misinformation spreads rapidly, helping individuals distinguish real content from AI-generated deception.
Alongside the transparency act, Senator Scott Wiener (D) introduced SB 53, which sets baseline safety and transparency standards for advanced AI systems. This law aims to address the potential catastrophic misuse of AI and includes provisions to protect whistleblowers who disclose safety violations, thereby promoting accountability in tech operations.
Beyond these landmark reforms, several new laws target pressing concerns, including AI chatbots that have been linked to encouraging self-harm among children. Other measures mandate mental health warnings on social media platforms and clarify that companies cannot evade accountability for harmful algorithms or the harassment their systems amplify. Users can also more easily opt out of data sales, giving them greater control over personal information.
These pragmatic reforms collectively aim to empower individuals and ensure that technology firms begin bearing responsibility for their creations. Nevertheless, there’s a significant gap between these measures and the comprehensive protections offered by the European Union’s AI Act. Advocacy for meaningful consumer protection remains crucial as California continues its efforts to safeguard its residents.
Particularly concerning is the lack of measures to protect location privacy. Legislative efforts to outlaw the sale and misuse of precise geolocation data fell short this year. This practice poses a severe risk, allowing entities—including government agencies—to track individuals in sensitive spaces like workplaces and places of worship, thereby violating personal freedoms.
Another oversight is algorithmic fairness: proposals requiring companies to analyze and disclose how their automated systems affect decisions in housing, employment, and credit were shelved for the year. Without such safeguards, digital redlining and algorithmic bias will persist, at a time when AI already poses economic threats to millions.
Lastly, while the Legislature has made strides on children's online safety, it has yet to establish meaningful financial accountability for platforms that cause harm. Further legislative action in this area remains an urgent priority.
For California to truly fulfill its role as a tech policy leader, the upcoming legislative sessions must tackle these outstanding issues with urgency and ingenuity. Each represents not merely a policy challenge but a litmus test of whether democracy can keep pace—or catch up—with evolving technologies.
Critics of regulation often warn that it will stifle innovation. History suggests otherwise. The evolution of automobile safety shows that requirements like seat belts and air bags save lives while building consumer trust. A similar approach in the tech sector can bolster innovation, create jobs, and encourage responsible progress through transparent, accountable practices.
California's legislative actions in 2025 do not yet merit celebration, but they are essential early strides in a race against time. The Golden State must build on this momentum, safeguarding democracy in the digital age through courageous legislation that reflects the stakes at hand.

