Can Congress Outlaw the Doppelgänger?

As artificial intelligence accelerates and harmful deepfakes proliferate, Congress faces a classic First Amendment dilemma: How do you protect people from identity theft and manipulation without choking off legitimate speech and creative experimentation? Put differently: Can lawmakers shield citizens from abusive digital replicas without building legal monopolies that stifle the very creativity that drives cultural progress?

The problem is real and urgent. Off-the-shelf tools can now fabricate faces and voices with unnerving fidelity, enabling scams, blackmail, and disinformation at scale. It is appropriate, even necessary, for lawmakers to deter those abuses. But how do you do so while respecting constitutional protections for free expression?

Congress’s latest attempt fumbles that balance. The NO FAKES Act, introduced in the Senate in July, would create a new “digital replication right”: a federal right granting individuals, their heirs, or licensees exclusive control over the use of their image and voice in digital replicas. The right would last for up to seventy years after death. Licenses issued during a person’s lifetime could run for up to ten years at a time and would transfer upon death. The bill also provides for statutory damages for unauthorized uses of a person’s digital likeness or voice.

On one level, the bill offers a straightforward framework for identity protection. It would give people the exclusive power to authorize or forbid the use of their image and voice, authority that has not previously existed in such sweeping, federalized form. On another level, it poses serious risks to free expression and to the everyday practices of artists, journalists, and creators.

Consider video games. Studios routinely design characters that evoke real people, simulate distinctive voices, or draw inspiration from public figures. Under the bill, any unlicensed resemblance could invite litigation, with statutory damages starting at $25,000 per work, plus actual damages and profits. Disclaimers noting that a character or performance is AI-generated would not necessarily help. Without explicit, written licenses, studios would be rolling the dice. This is not a hypothetical worry: celebrities have already tested the boundaries. Recall Lindsay Lohan’s failed suit against the makers of Grand Theft Auto V over a character she alleged resembled her.

Developers and performers are hardly blind to these risks. Many game actors are now signing contracts that include explicit protections against AI misuse and unauthorized digital replicas—evidence that the industry is capable of evolving guardrails through bargaining and norms. But imagine being a producer forbidden from including a character or a voice that merely resembles a real person—potentially for seventy years after that person’s death—unless you’ve secured permission.

And it’s not just games. The act would reshape the broader creative landscape. Filmmakers, musicians, documentarians, satirists, and biographers would all face a heightened threat of lawsuits for using any likeness or voice that could be construed as a “digital replica.” The likely result is risk-averse self-censorship: creators avoiding real-world references, historical reconstructions, or even obvious parody rather than inviting legal trouble. That would narrow the space for cultural expression, diminish realism in storytelling, and impoverish our collective ability to engage with true stories or characters inspired by actual people.

It’s also unclear that a sweeping new federal right is necessary. Existing laws—covering defamation, fraud, publicity rights, misappropriation of likeness, and related torts—already provide remedies when a fake causes concrete harm. Layering a broad, post-mortem replication right atop those tools risks duplication and complexity without clear proportional benefit.

Civil society groups are raising the alarm. In a coalition letter, organizations including the Association of Research Libraries, the Computer & Communications Industry Association, and the Center for Democracy and Technology warn that the NO FAKES Act, as drafted, could chill lawful speech, trigger expansive litigation, and erode fair use and other speech-protective doctrines. Their critique is not that deepfake harms are illusory, but that Congress is reaching for a blunt instrument where a scalpel is required.

If every plausible resemblance must be licensed to avoid a lawsuit, many creators—independent filmmakers, musicians, even social-media satirists—will simply steer clear of real people and public life. That would be a loss not just for artists, but for audiences, who rely on culture to interrogate power, history, and identity.

Protecting the public from deepfake abuse is urgent. Yet in its current form, the NO FAKES Act risks entrenching a legal monopoly over identity that curtails expression and innovation. The smarter path is targeted legislation: penalize malicious impersonation and deceptive commercial exploitation; preserve room for satire, commentary, biography, and historical reconstruction; and build clear safe harbors for good-faith creative work. The goal is not to deny that identity matters, but to ensure that protecting it does not come at the expense of a culture able to speak freely about the world and the people in it.