Anti-Deepfake Law in California Is Far Too Feeble

By Brandie M. Nonnecke | November 5, 2019

While well intentioned, the law has too many loopholes for malicious actors and puts too little responsibility on platforms.

Imagine it’s late October 2020, and there’s fierce competition for the remaining undecided voters in the presidential election. In a matter of hours, a deepfake video depicting a candidate engaged in unsavory behavior goes viral and, thanks to microtargeting, reaches the voters most susceptible to changing their vote. Deepfakes—the use of AI to generate deceptive audio or visual media depicting real people saying or doing things they did not—are a serious threat to democracy, and lawmakers are responding aggressively. Unfortunately, their current efforts will be largely ineffective.

Brandie M. Nonnecke (@BNonnecke), PhD, is founding director of the CITRIS Policy Lab at UC Berkeley and a fellow at the Aspen Institute’s Tech Policy Hub and the World Economic Forum. She studies human rights at the intersection of law, policy, and emerging technologies, with her current work focusing on issues of fairness and accountability in AI.

Last month, Governor Gavin Newsom signed California’s AB 730, known as the “Anti-Deepfake Bill,” into law. The intention to quell the spread of malicious deepfakes before the 2020 election is laudable. But four major flaws will significantly impede the law’s success: timing, misplaced responsibility, burden of proof, and inadequate remedies.

Timing

The law applies only to deepfake content distributed with “actual malice” within 60 days of an election—an arbitrary time constraint that ignores the enduring nature of material posted online. “What happens if content is created or posted 61 days before an election and remains online for months, years?” asks Hany Farid, a professor and digital forensics expert at UC Berkeley who works on deepfake detection.

To ensure that the law does not infringe on free speech rights, it includes exemptions for satire and parody. But AB 730 is ambiguous about how to efficiently and effectively determine whether content meets these criteria—ambiguity that nefarious actors are likely to game. By claiming satire or parody whenever material is contested, a bad actor can tie a deepfake up in a lengthy review process before removal. Consider the video of House Speaker Nancy Pelosi that was manipulated to make her appear intoxicated: while a drawn-out review weighs a video’s intent, the clip keeps gaining virality and spurring a contagion of negative effects.

Misplaced Responsibility

The law exempts platforms from any responsibility to monitor and stem the spread of deepfakes. This is due to Section 230 of the Communications Decency Act, which shields platforms from liability for harmful user-generated content, especially if they act in good faith to remove it. Court interpretations since the law’s passage in 1996 have broadened platforms’ immunity, even when they deliberately encourage the posting of harmful user-generated content.

Instead, the law places responsibility on producers of deepfakes to self-identify manipulated content and on users to flag suspicious content. This is like expecting a Wall Street broker trading on inside information to report his own suspicious transactions, or asking a con artist to have victims sign a terms-of-service agreement before they get swindled. These tactics will be unenforceable and ineffectual. Nefarious actors will not voluntarily disclose their creations as deepfakes. They’ll use botnets—connected communities of bots that interact with one another to quickly spread content through a social network—to evade detection. The damage to public perception will be done well before the content is flagged and reviewed for takedown.

Under the law, any registered voter may seek a temporary restraining order and an injunction prohibiting the spread of material in violation. It is not hard to imagine special interest groups exploiting this provision to tie up contentious content in a lengthy removal review while sowing public skepticism about its veracity. Once doubt is introduced, the damage to public perception is done, no removal required. This is especially problematic for content that truthfully depicts a candidate engaging in unsavory or illegal behavior but that supporters claim is a malicious deepfake. When there is no definitive truth, everything is a lie.

Content spread through platforms has tangible effects on our democracy and public safety. To mitigate the spread and impact of malicious deepfakes, platforms must be required to play a more proactive role. Last month, Senators Mark Warner, Democrat of Virginia, and Marco Rubio, Republican of Florida, sent identical letters to leading social media companies urging them to establish industry standards for dealing with deepfakes. If the California legislature really wants to address the spread of malicious deepfakes, it must put pressure on platforms.

Burden of Proof

Again, the law pertains only to deepfakes posted with “actual malice,” or “the knowledge that the image of a person has been superimposed on a picture or photograph to create a false representation,” and with the “intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.” Proving actual malice will not be straightforward. Establishing intent requires clear and convincing evidence, which is often difficult to obtain. Because of this high burden of proof, a lengthy review process will likely ensue, allowing the deepfake to continue to spread.

Inadequate Remedies

Detecting a malicious deepfake is not enough; the harm it causes must also be remedied. As with the spread of a virus, only those who receive the immunization will be spared. Under the law, malicious deepfake videos will have ample opportunity to spread widely before detection and removal, and there is no mechanism to ensure that those who were exposed receive a notification about the content’s intent and accuracy.

The “Anti-Deepfake Law” isn’t without value. According to Deeptrace, an Amsterdam-based company that specializes in detecting deepfakes, the prevalence of deepfakes online has increased by a staggering 84 percent over the past year. The law raises awareness of the risks malicious deepfakes pose to election integrity and creates an initial framework to monitor and stem their spread and impact—a critical step before the 2020 presidential election. Yet significantly more needs to be done: reconsidering the 60-day time constraint and defining a review mechanism to efficiently and effectively determine satire and parody; placing greater responsibility on platforms to monitor their content; establishing a credible review process to determine intent; and developing robust mechanisms to remedy the harms caused by malicious deepfakes.

California’s progressive tech legislation has a history of influencing other state and federal efforts, and the “Anti-Deepfake Law” is no exception. Language from the law restricting the spread of malicious deepfakes within 60 days of an election has already made its way into an amendment to a federal bill on foreign interference in elections now under review by the House Committee on Rules. That so feeble a law passed in the tech sector’s beating heart may be a blow to the implementation of adequate mechanisms to mitigate the harms of deepfakes before the 2020 presidential election. Future bills, especially those at the federal level, must do more.

