In document Counterfeit Campaign Speech (Page 45-47)

In the realm of protecting elections, timing is often everything. If the goal

is to protect the electoral process from faked candidate speech, could a ban

provide an adequate remedy before the damage is done? Particularly with

respect to counterfeit campaign speech unleashed on the eve of an election, an after-the-fact remedy may come too late to protect the vote. Sophisticated actors will surely time the

release of counterfeit campaign speech to maximize its damage to the candidate,

to voters, and to the electoral process.


One way to address the unique timing problem counterfeit campaign

speech presents is to implement a notice and take-down regime similar to the

system used in copyright law but confined to a short period before an election.


This could provide candidates who become aware of counterfeited campaign speech a means to require the platform hosting it to remove it immediately from circulation, followed by a process for determining liability that would unfold only after the speech's power to distort an electoral outcome is extinguished. The obvious retort

is that any requirement to immediately remove political speech from circulation

could be misused by political adversaries, resulting in a diminished marketplace

of ideas.
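To make the proposed mechanism concrete, the pre-election window logic could be sketched as follows. This is a minimal illustration only: the sixty-day window, the function names, and the two workflow outcomes are hypothetical assumptions for demonstration, not features of any enacted statute or of the DMCA regime the proposal borrows from.

```python
from datetime import date, timedelta

# Hypothetical sketch of the pre-election takedown window described above.
# The 60-day window and the workflow labels are illustrative assumptions.
TAKEDOWN_WINDOW_DAYS = 60

def handle_notice(notice_date: date, election_date: date) -> str:
    """Return the platform's required action for a takedown notice.

    Inside the pre-election window, the content is removed immediately and
    liability is litigated after the election; outside the window, an
    ordinary notice-and-counter-notice process applies.
    """
    window_start = election_date - timedelta(days=TAKEDOWN_WINDOW_DAYS)
    if window_start <= notice_date <= election_date:
        return "remove_now_litigate_later"
    return "standard_notice_and_takedown"

# Example: a notice filed five days before a November 3 election
print(handle_notice(date(2020, 10, 29), date(2020, 11, 3)))
# -> remove_now_litigate_later
```

The design choice the sketch isolates is the one the text emphasizes: inside the window, removal precedes adjudication, which is precisely what makes the misuse objection serious.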


Those considering this type of tool to address counterfeit campaign

speech must carefully balance these interests.

Importantly, if digital forensics continues to improve, detecting faked speech quickly and effectively may one day become possible. One fascinating breakthrough came when media forensics experts

realized that people portrayed in deep fake videos rarely blink, suggesting a

surprising avenue of detection.
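The blink-rate observation can be reduced to a simple heuristic. The sketch below is illustrative only: real forensic detectors are far more sophisticated, and the specific thresholds used here (an eye-aspect-ratio below 0.21 marking a closed eye, and fewer than six blinks per minute as suspicious) are assumptions chosen for demonstration, not published forensic constants.

```python
# Illustrative blink-rate heuristic; thresholds are assumed, not forensic.

def count_blinks(ear_series, closed_threshold=0.21):
    """Count blinks in a per-frame eye-aspect-ratio (EAR) series.

    A blink is counted at each transition from open (EAR above the
    threshold) to closed (EAR at or below it).
    """
    blinks = 0
    was_open = True
    for ear in ear_series:
        is_open = ear > closed_threshold
        if was_open and not is_open:
            blinks += 1
        was_open = is_open
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_minute=6):
    """Flag a clip whose subject blinks abnormally rarely."""
    minutes = len(ear_series) / fps / 60
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_minute

# A 10-second clip with one blink (exactly 6/min) vs. one with none
one_blink = [0.3] * 150 + [0.1] * 5 + [0.3] * 145
print(looks_suspicious(one_blink))    # -> False (blink rate at threshold)
print(looks_suspicious([0.3] * 300))  # -> True (no blinks at all)
```

As the footnoted reporting suggests, heuristics this simple are exactly what forgers can learn to defeat, which is one reason forensic scientists keep their stronger methods unpublished.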


For the most part, however, media forensic

scientists are keeping possible detection methods under wraps for fear of tipping

off deep fakers on ways to elude detection.


Despite Hany Farid’s gloomy

predictions, he and many others are on the job.


One can imagine that a technological fix may one day eclipse legal prohibition, allowing voters and/or trusted platforms to filter content to ensure that faked candidate speech does not reach voters' eyes and ears—or that, if it does, it is clearly identified as such.

259. This problem arises often in courts' appraisals of the problem of deception in campaigns generally. See, e.g., Commonwealth v. Lucas, 34 N.E.3d 1242, 1254 (Mass. 2015) (noting difficulty in discrediting a lie on the eve of an election); see also Chesney & Citron, supra note 5, at 20 (listing as a harm to society "fake audio clip that [sic] 'reveal[s]' criminal behavior by a candidate on the eve of an election.").

260. See Digital Millennium Copyright Act (DMCA) notice and take-down provisions codified at 17 U.S.C. § 512(c)(3), (g)(2) (2012). Scholars have proposed modified notice-and-takedown regimes to address harms such as revenge porn. See, e.g., Ariel Ronneburger, Sex, Privacy, and Webpages: Creating a Legal Remedy for Victims of Porn 2.0, 2009 SYRACUSE SCI. & TECH. L. REP. 1, 28–30; see also Derek E. Bambauer, Exposed, 98 MINN. L. REV. 2025, 2055 (2014) ("Internet intermediaries are familiar with analogous provisions of [the DMCA], which condition immunity on compliance with a notice-and-takedown system for material allegedly infringing copyright. It should be straightforward for Internet firms to implement the proposed notice-and-takedown system for non-consensual distribution of intimate media as well.").

261. See, e.g., Hasen, supra note 14, at 75 (noting the Court's concern with "partisan manipulation" of processes to police lies in campaigns).

262. Will Knight, The Defense Department Has Produced the First Tools for Catching Deepfakes, MIT TECH. REV. (Aug. 7, 2018), https://www.technologyreview.com/s/611726/the-defense-department-has-produced-the-first-tools-for-catching-deepfakes/.

263. Id. (explaining that scientists would "rather hold off at least for a little bit . . . . We have a little advantage over the forgers right now, and we want to keep that advantage." (internal quotation marks omitted)).


Some might object to detection filters of any kind. Surely a filter would be

imperfect, acting as a technological censor of speech that could be authentic and

important. Yet a filter is a useful placeholder; workable technological solutions to the

problem of counterfeit campaign speech may lie somewhere on the horizon. In

the meantime, a clear law imposing stiff penalties would serve as a deterrent, as

would well-publicized prosecutions even after elections have concluded.
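One way to reconcile the filtering idea with the censorship objection is to make labeling, rather than removal, the default response at intermediate confidence. The sketch below is a hypothetical illustration: the detector's confidence score, the two thresholds, and the action names are all assumptions standing in for whatever forensic classifier a platform might someday deploy.

```python
# Hypothetical "filter or label" policy; the thresholds are assumed values.

LABEL_THRESHOLD = 0.5   # assumed: above this, attach a warning label
BLOCK_THRESHOLD = 0.95  # assumed: above this, withhold pending review

def filter_decision(fake_confidence: float) -> str:
    """Map a detector's fake-speech confidence score to a platform action.

    Labeling rather than removing at intermediate confidence preserves
    speech that might be authentic, softening the censorship objection.
    """
    if fake_confidence >= BLOCK_THRESHOLD:
        return "withhold_pending_review"
    if fake_confidence >= LABEL_THRESHOLD:
        return "distribute_with_warning_label"
    return "distribute_unlabeled"

print(filter_decision(0.97))  # -> withhold_pending_review
print(filter_decision(0.70))  # -> distribute_with_warning_label
print(filter_decision(0.10))  # -> distribute_unlabeled
```

The three-tier structure mirrors the text's point that an imperfect detector need not act as an outright censor: only near-certain counterfeits are withheld, and borderline content still circulates, identified as suspect.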


A ban on counterfeit campaign speech is a natural extension of age-old

bans on counterfeit activity that seeks to defraud. Seen in this light, counterfeit

campaign speech could serve as the unicorn courts and scholars have suggested

exists to rein in at least one form of false election speech. Stemming the tide of

counterfeited candidate speech—or perhaps more achievably ensuring that it is

labeled as such—is no easy task. But some of the hardest problems of regulating

false campaign speech fall away when the targeted behavior is counterfeited

candidate speech. This narrow focus does not eliminate line-drawing problems,

nor ought it remove all First Amendment scrutiny given the core nature of

political speech involved. Still, to safeguard the faith of the voting public, to

preserve the essence of our democratic project, and to protect the ever-dwindling

dignity of candidates for office, the question of what can be done deserves our full attention. As argued here, recognizing that a ban on counterfeit campaign speech is

constitutionally permissible—and advisable—is a step in that direction.

265. See, e.g., Hany Farid, Reining in Online Abuses, 19 TECH. & INNOVATION 593, 594 (2018) (describing efforts to develop technologies to detect child pornography images).
