Deepfakes, Elections, and Consent: What’s Illegal Under California’s New AI Laws

California’s groundbreaking AI laws crack down on deepfakes, protect election integrity, and safeguard consumers.

By Al Landes


Image credit: Wikimedia

Key Takeaways

  • California’s new AI laws prohibit the distribution of deceptive AI-generated content related to elections and require disclosures for AI-manipulated election ads.
  • Creating and sharing sexually explicit deepfakes without consent is now illegal, and social media platforms must provide ways for users to report such content.
  • Studios must obtain permission before creating AI-generated replicas of actors’ and performers’ voices or likenesses, including those of deceased individuals.

California is focused on regulating artificial intelligence (AI) with a series of controversial laws. Governor Gavin Newsom has signed nine AI-related bills into law, covering everything from deepfakes to election integrity. These laws aim to protect consumers, ensure transparency, and prevent the misuse of AI technology. However, they boil down to pure censorship.

VOA reports that under the new laws, it’s now illegal to distribute materially deceptive AI-generated content related to elections in the 120 days before an election and 60 days after. Social media platforms must block and remove such content within 72 hours of being notified. Election ads using AI-manipulated content must include disclosures alerting viewers that the content is AI-generated.

“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation — especially in today’s fraught political climate,” Newsom said.

The laws also crack down on sexually explicit deepfakes, as reported by TechCrunch. Creating and circulating sexually explicit AI-generated images of a real person without their consent, when the images appear authentic and cause serious emotional distress, is now a crime. Social media platforms must create ways for users to report these deepfakes and temporarily block the content while they investigate.

Actors and performers also gain new protections, as reported by The Hill. Studios must obtain permission before creating AI-generated replicas of their voices or likenesses. This protection extends to deceased performers, requiring consent from their estates.

Governor Newsom is still considering several more AI bills, including SB 1047, which would require developers to build safeguards into their AI models. Social media platforms and free speech advocates are expected to challenge the new laws, arguing that they infringe on First Amendment rights.

These laws have significant implications for election integrity and public trust in the democratic process. Platforms could simply remove any political speech they dislike and thereby sway actual elections, the very outcome the laws claim to prevent.

As AI technology continues to advance, California's laws could serve as a model for other states or for federal regulation. This cannot be allowed to happen.

