A bill aimed at preventing the spread of deceptively edited synthetic media generated by artificial intelligence, also known as deepfakes, in the lead-up to elections has raised a number of First Amendment concerns.

The bill, which prohibits anyone from knowingly distributing deepfakes within 90 days of an election or primary, does contain exceptions to that prohibition if deepfake videos or images are distributed by news media and carry a disclaimer that the media was generated by AI. But that language has not been enough to allay concerns that the bill infringes on constitutionally protected free speech and could make broadcasters liable for the content of advertising they rely on for income.

HB 5342, which received a public hearing in the Committee on Government Administration and Elections on March 4, defines deceptively synthetic media as “any image, audio or video of an individual, and any representation of such individual’s speech or conduct that is substantially derived from any such image, audio or video, which a reasonable person would believe depicts the speech or conduct of such individual when such individual did not in fact engage in speech or conduct.”

It prohibits the distribution of deceptively synthetic media, or deepfakes, within 90 days of an election or primary if it’s distributed knowingly, without consent, or if the intent is to injure a candidate or influence an election outcome. It also specifies that people can distribute deepfakes within 90 days of an election if they include a disclaimer and a link to the original media.

It further states that radio or television stations, news websites, magazines, and newspapers can publish deepfakes as part of news coverage as long as they add a disclaimer or, for breaking news where they may not be aware that the content is manipulated, later add a disclaimer.

The bill further specifies that individuals who purchase advertising can be required to attest that their ad does not contain manipulated media. It also includes a carveout for material that is parody or satire, “provided a reasonable person would not believe that such individual in fact engaged in speech or conduct as depicted.”

Violating any part of the law would be a Class A misdemeanor if the deepfake media was shared with the intent to cause bodily harm or was distributed to an audience exceeding 10,000 people. Subsequent violations committed within five years of a prior conviction would constitute a Class D felony. The attorney general would have the ability to bring a civil action to permanently enjoin anyone who is violating the law or could be “reasonably believed” to intend to imminently violate it.

Candidates or individuals injured by a violation of the law would also be able to bring a civil action.

The bill also specifies that Internet service providers and cable services are not liable under its restrictions.

This is not the first year a similar restriction on the distribution of deepfakes has been proposed. Last year, SB 2, Sen. James Maroney’s bill to regulate AI, contained a similar provision to prohibit the knowing distribution of deepfakes within 90 days prior to the availability of overseas ballots in a primary or election.

That bill did not specify that news media could broadcast deepfakes as part of coverage so long as they included a disclaimer, but did specify that it did not cover parody or satire.

Secretary of the State Stephanie Thomas testified in favor of the bill, arguing it balances free speech concerns by allowing labeled satire and parody, providing disclaimer requirements, and exempting “bona fide news reporting.” Thomas asked that the bill be expanded to cover not just candidates but also registrars of voters, moderators, town clerks, and poll workers.

Thomas cited faked robocalls purporting to be from former President Joe Biden telling people not to vote in New Hampshire’s 2024 presidential primary as evidence of the threat deepfakes pose to elections. The Democratic political consultant responsible for the fake calls was indicted on criminal charges in New Hampshire but was acquitted; he was separately fined $6 million by the Federal Communications Commission. To date, faked robocalls have not been reported in Connecticut elections.

But according to John Coleman, legislative counsel on AI and free expression at the Foundation for Individual Rights and Expression (FIRE), the provision of HB 5342 that allows media to distribute deepfakes with a disclaimer may still infringe on protected speech.

Coleman, who testified against the bill, told Inside Investigator that the bill raises serious First Amendment concerns because it broadly criminalizes the sharing of altered media, even though false or misleading speech is constitutionally protected.

“HB 5342 raises serious First Amendment concerns because it would make it a crime to share certain altered audio, images, or video that make it appear someone said or did something they never actually said or did. Outside of narrow categories like fraud and defamation, the First Amendment protects false or misleading speech,” Coleman told Inside Investigator. “That may sound surprising, but it reflects a basic reality that people exaggerate, joke, misunderstand facts, and get things wrong all the time. If every false statement could lead to jail time, many people would simply stop speaking about politics rather than risk punishment.”

Coleman added that the bill’s requirement that anyone who shares deepfakes include a disclaimer noting the media has been altered could amount to compelled speech.

“The bill focuses on whether someone intended to harm a candidate or influence an election, even though influencing voters is exactly what political speech is meant to do. It also requires speakers to attach government-mandated warning labels to their own messages and allows courts to block speech before it’s shared, raising serious concerns about compelled speech and prior restraint,” Coleman said.

He added that federal courts have already blocked similar laws in several states, including California and Hawaii.

In California, a federal judge struck down a law the state passed allowing political candidates and anyone who views deepfake videos to sue for damages. The law was passed after Elon Musk shared and amplified an AI-altered video of former presidential candidate Kamala Harris. Chris Kohls, who originally posted the video under his “Mr Reagan” account on X, sued the state, alleging the law was unconstitutional and the video he had posted was parody and therefore protected by the First Amendment.

In Hawaii, the parody news website The Babylon Bee sued the state over a law that banned the distribution of AI-altered media that could harm a candidate’s reputation or influence a voter’s behavior unless it included a disclaimer. A federal judge found the law violated the First Amendment.

The ACLU of Connecticut also flagged First Amendment concerns with the bill in its testimony.

“[T]he bill’s presumption that AI-generated or manipulated content is inherently deceptive fails to meet the high standards required for restricting political speech. This approach conflicts with established First Amendment principles, including the need for proof of actual malice in cases involving speech about public officials or public figures,” policy counsel Jess Zaccagnino said in written testimony.

“The government may regulate explicit fraud in elections, but attempts to regulate potentially misleading political speech are subject to strict scrutiny, even when well-intentioned. The Supreme Court has long held that discussion of public issues and debate about candidates’ qualifications are entitled to the broadest constitutional protection to ensure the ‘unfettered interchange of ideas for bringing about political and social changes desired by the people,’” Zaccagnino also wrote.

Coleman also said the bill’s definition of the media to whom the exemption for distributing deepfakes within 90 days of an election applies may not be broad enough.

“The Constitution protects speech broadly, and that protection applies to everyone, not just traditional news outlets. Bloggers, freelance journalists, independent commentators, and ordinary social media users all take part in political debate that the First Amendment protects. Yet this bill creates narrow exemptions only for certain news organizations, limited to routine news coverage and conditioned on adding a government-mandated disclaimer,” Coleman said.

He told Inside Investigator that the bill risks making members of the media responsible for determining whether statements advertisers are making are true.

While HB 5342 stipulates that broadcasters who require anyone purchasing advertising to attest that the media is not deceptively edited are not liable under the law, it also states that if broadcasters learn the attestation is false and distribute the content anyway, they may become liable.

“The bill ultimately risks making the government the arbiter of truth. Even if media outlets make an initial judgment about whether an ad contains what the law calls deceptive synthetic media, the final decision about what counts as deceptive would rest with government officials. That gives the government the last word over what political messages can be shared,” he said.

The Connecticut Broadcasters Association also raised this concern in its testimony opposing the bill.

“Connecticut’s broadcasters serve as distributors of political messages; we do not and cannot take cognizance of the authenticity, production methods, or underlying intent of each advertisement we air. For that reason, the statutory standard must reflect that stations need to be informed when an advertisement contains synthetic or deceptive AI-generated elements. The responsibility to disclose that information, and to obtain any required consent, should rest squarely with the advertisers and producers of such content,” the association said in written testimony.

“If passed as written, this bill would impose a financial hardship on CT Broadcasters. Stations do not have the resources necessary to assume legal responsibility and costs or financial penalties for content we do not produce. Shifting that burden onto stations from creators of the content has the potential to destabilize the very media infrastructure local and federal politicians in the state rely on,” Jennifer Parson, the president of the Connecticut Broadcasters Association, told Inside Investigator.

Parson noted that while 28 states have passed laws aimed at stopping the spread of deepfakes, most don’t make broadcasters responsible for the content of advertising they did not create. Parson pointed to laws passed by Delaware, Vermont, and Colorado “specifically for language that does not require broadcasters to assume liability for content they did not create.”

Delaware’s law, which also prohibits the distribution of deepfakes within 90 days of an election, makes specific exemptions for broadcasters if they have made a “good faith effort to establish that the depiction is not a deceptive and fraudulent deep fake, which shall be presumed if the broadcasting station receives a representation from the payor that the payor has not provided a deceptive and fraudulent deep fake” or if a station prohibits advertisers from using deepfakes in advertising.

Both Colorado’s law and Vermont’s law specifically exclude broadcasters who receive payments for advertisements.
