States are lagging in tackling political deepfakes, leaving potential threats unchecked heading into 2024

When it comes to policies tackling the challenges artificial intelligence and deepfakes pose in political campaigns, lawmakers in most states are still staring at a blank screen.

Just three states enacted laws addressing those rapidly growing policy areas in 2023, even as the scale of the threats that AI and deepfakes can pose came into clearer view throughout the year.

And with just weeks before the 2024 election year formally kicks off, proponents of regulating those spaces are warning that states must try to do more: not just because the federal government hasn’t taken action, but because different approaches in different state capitals could provide a strong sense of what works — and what doesn’t.

“It’s certainly the case that the states unquestionably need to do more,” said Daniel Weiner, who as director of the elections and government program at the nonpartisan Brennan Center is closely following the issue. “I don’t think we can afford to wait.”

The reasons states have been slow to tackle the issue are myriad, Weiner and other experts have explained: Potential regulations would need to be reconciled with First Amendment rights and survive legal challenges. Generative AI and deepfake technology are evolving rapidly. Many state lawmakers don’t yet understand these tools well enough to craft a response. And, crucially, any enforcement mechanism would depend on a broad raft of parties, including giant social media companies.

Still, Weiner and others warned, states need to start navigating these challenges now.

“The really corrosive possibilities [from deepfakes] have fully burst into consciousness in the last year to two years,” Weiner said. “But there are effective policy solutions on the table, so I think folks should roll up their sleeves and get to work.”

Deepfakes are videos that use artificial intelligence to create believable but false depictions of real people. They have become significantly more common online in recent months — an increase that has prompted some experts to warn that the 2024 race could be the first “deepfake election” because voters could see political disinformation videos online and not be able to determine what’s real and what’s not.

In 2023, only Minnesota, Michigan and Washington enacted laws attempting to tackle the issue, according to the National Conference of State Legislatures, which has tracked bills related to the subject. All passed with bipartisan support. Another seven states introduced bills designed to tackle the issue, but those proposals stalled or failed.

Dual state-level approaches

The bills all rely on two basic approaches, disclosure requirements and bans, and could serve as models for future legislation in other states.

A Washington state law enacted in May requires a disclosure on “synthetic” media used to influence an election.

The law defines “synthetic” as any image, audio or video “of an individual’s appearance, speech, or conduct that has been intentionally manipulated with the use of generative adversarial network techniques or other digital technology in a manner to create a realistic but false image, audio, or video.”

Minnesota lawmakers in August enacted a law that bans the publication of “deepfake media to influence an election” in the 90-day window prior to an election in the state.

A person can be charged under that law if they “know or reasonably should know that the item being disseminated is a deepfake”; if the media is shared “without the consent of the depicted individual”; and if it is “made with the intent to injure a candidate or influence the result of an election.”

The law defines the crime as a misdemeanor, with most offenses punishable by up to 90 days in jail or fines of up to $1,000.

A Michigan law enacted last month employs both a ban and a disclosure requirement. It prohibits the “distribution of materially deceptive media” 90 days prior to an election. That ban, however, will not be enforced if the material includes a disclosure stating that the media has been “manipulated.” The law defines manipulation differently depending on whether the media is an image, video, audio or text.

Under the Michigan law, the ban also applies only if the person responsible knows that the media “falsely represents” the people depicted in it and “intends the distribution to harm the reputation or electoral prospects of a candidate in an election.”

The law defines a first violation as a misdemeanor punishable by up to 90 days in jail or a fine of up to $500.

Prior to 2023, California, Texas and Wisconsin were the only other states that had enacted legislation designed to tackle AI in elections.

Many social media and tech giants have also taken steps in recent months.

In November, Meta, which owns Facebook and Instagram, and Microsoft said they would begin requiring political ads on their platforms to disclose whether they were made with the help of AI. Google made a similar announcement in September.

Lack of federal action

Experts said that state action will be particularly important in upcoming legislative sessions given that the federal government hasn’t addressed the issue.

Proposals in the U.S. Senate and House aiming to regulate the use of AI deepfakes in political campaigns haven’t moved forward. And while the Federal Election Commission announced an effort in August to begin regulating deepfakes in campaign ads, the agency hasn’t reported much progress on the initiative since.

President Joe Biden issued an executive order in October urging developers and other stakeholders to address AI safety concerns. Among other steps, the order tasked the Commerce Department with creating guidance on “watermarking” AI-generated content, so that it is clear when media, including deepfake videos, was not created by humans.

Those incremental moves at the federal level come as the U.S. heads into a chaotic election year that could be made even more unpredictable by the use of AI and deepfakes in campaign ads — a development that has already reared its head this year.

One of the most prominent examples came in June when Florida Gov. Ron DeSantis’ presidential campaign released an ad attacking Donald Trump that included AI-generated depictions of the former president hugging Dr. Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases and Biden’s former chief medical adviser.

Many more similar cases are likely on the way — and the results, experts warn, could be disruptive.

“A deepfake released shortly before Election Day — perhaps showing a candidate drunk, or speaking incoherently, or consorting with a disreputable figure — could easily sway a close election,” Robert Weissman, the president of government watchdog Public Citizen, which has petitioned the FEC to take more aggressive action against deepfakes, said in a statement. “A torrent of deepfakes could leave voters unable to distinguish what’s real from what’s synthetic.”

“And the prevalence of deepfakes could enable candidates to deny the validity of authentic content,” he added, “dismissing it simply as fake.”
