DTTW and its affiliates have observed with great interest, and particular unease, the proliferation of synthetic personae. We find ourselves ill at ease not because generative modeling is improper in itself (on the contrary, we regard it as a natural outgrowth of U.S. technological exceptionalism and market competition), but because a specific use case has emerged that profanes the civic right to, and social function of, identity. DTTW revels in the ascending capabilities of intelligent systems and decries their debasement in deepfakes that appropriate a person’s name, face, or voice without consent as an instrument of scam or slander.
Indeed, artificial intelligence technologies offer many creative commercial applications, ranging from decision-support and supply-chain optimization algorithms to the generative models that produce text, images, video, and audio. For many years, AI tools have increased productivity and optimized industrial processes by systematizing and automating vast, complex operations. Only in the past year or two, however, has the creative potential of generative AI blossomed into instruments that can conjure real art from the steep vectors of gradient descent. This near miracle of engineering wants only the spark of human spirit to be considered truly capable of creation.
This is why, setting aside the illegality of stealing somebody’s likeness to do them reputational harm or worse, we condemn perverse deepfakery in the strongest terms. That practice does not read to us as “innovation” so much as parasitism. It turns the individual’s hard-won public presence into an unpriced input.
Ours is, importantly, not a Luddite objection to synthetic media. The United States is presently evolving toward a more coherent federal right of publicity in response to AI replicas, as seen in the various “NO FAKES” and “No AI FRAUD” proposals, and that is a healthy nationalist instinct: to ringfence American personalities as economic assets that cannot be commandeered by anonymous model operators at trivial marginal cost.1 When and if individuals choose to license or sell their personality for such use, it should be only for fair compensation and via binding covenant.
What we see across states is an attempt to insist that the American body and voice are not common-pool resources for the model era, but proprietary signals with a price, a context, and a dignity.2 That is a capitalist claim, not a censorial one. Examples of this include:
- Tennessee’s ELVIS-style updates,
- election deepfake bills,
- and the emerging federal conversation.
There is a deeper reason to draw the line here. Identity functions in modern media economies a bit like a sigil: a concentrated sign that, when invoked, calls forth trust, audience, and sometimes capital. Unconsented deepfakes let an actor summon the social power and credibility attached to a name or visage while evading the relational and commercial obligations that normally attend that power. A republic that allows unlimited commercial ventriloquism of its citizen artists, officials, and workers is allowing the counterfeiting of civic presence. To excuse such behavior on free-speech grounds, as some might attempt, would be to shelter personal, civic, and economic harm under the hallowed allowances of the First Amendment.
Some will reply that AI itself is already dislocating labor and that we should be preparing for broader substitution of automation for humans. On that question, our view is still developing. Automated systems that outperform or displace human routines are not, in themselves, illegitimate; they are consistent with a long American tradition of innovating technologically to secure advantage over rivals, in both domestic and foreign markets. Productivity is not a sin.
Likeness is about who speaks in public under your banner; it concerns the ontology of the self in a media-saturated polity. Labor is about who performs which tasks in the production function. The former implicates deception, reputational misallocation, and even civic destabilization (e.g., political deepfakes timed to elections). The latter is merely a matter of increasing outputs and decreasing inputs. DTTW believes that choice is key: producers can choose whom (or what) to employ, and individuals can choose whom or what may masquerade as them. Our organization was founded to explore greater depths and higher dimensions of AI research, and any policy position we take must therefore curb abusive ends while preserving the means of automation and generative AI.
The United States competes in a world where some jurisdictions will be more permissive about personality expropriation, churning out celebrity facsimiles, deceased-artist “new” works, or fabricated endorsements at scale.3 If U.S. firms must fight tooth and nail in every city and state jurisdiction to self-govern, Congress will eventually do the job for all of us, and that path will most likely be broader, slower, and less technically aware than what industry can draft today.4 A narrow, consent-first standard for NIL (name, image, likeness) in synthetic media is the pro-innovation path: it would keep American models reputable, reduce litigation drag of the kind seen in recent right-of-publicity suits, and preserve the exportability of U.S. AI systems into global markets that are already moving this way.5
If we let unconsented deepfakes proliferate, we reinforce in the American civic system that the most essential individual property, oneself, is not inalienable. If, instead, we insist that every simulacrum of human likeness be bound by a contract, a license, or a statutory exception (news, parody, scholarship), we teach the system that American creativity is a resource rich in both abundance and market value. That is the capitalist liturgy: invoke, but remunerate.
Accordingly, DTTW recommends: (1) industry alignment around a consent-and-attribution registry for digital replicas; (2) contractual default clauses with performers, creators, and even employees that clarify AI re-use of their likeness; and (3) support for a federal, preemptive right of publicity that contains well-defined speech exceptions, so that satire and reporting remain untouched, while synthetic impersonation for commercial or reputational gain is presumptively unlawful. None of these measures prevent the U.S. from pursuing aggressive AI-driven productivity gains. They simply prevent innovation from degenerating into unlawful imitation.
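To make the first recommendation concrete, the sketch below illustrates, in Python, what a consent-and-attribution registry for digital replicas could record and check. Every name, field, and the set of exempt uses here is a hypothetical assumption for illustration, not a proposed standard or an existing system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReplicaLicense:
    """One consent record: who may replicate whom, for what, with what credit.
    All field names are illustrative assumptions."""
    subject: str          # person whose likeness is replicated
    licensee: str         # party authorized to generate replicas
    scope: frozenset      # permitted uses, e.g. frozenset({"advertising"})
    attribution: str      # attribution text the licensee must display

class ConsentRegistry:
    """Minimal in-memory registry; a real one would be auditable and shared."""
    def __init__(self):
        self._licenses = []

    def register(self, lic: ReplicaLicense) -> None:
        self._licenses.append(lic)

    def is_permitted(self, subject: str, licensee: str, use: str) -> bool:
        return any(
            lic.subject == subject and lic.licensee == licensee and use in lic.scope
            for lic in self._licenses
        )

# Statutory speech exceptions (news, parody, scholarship) bypass the consent check,
# mirroring recommendation (3); the exact categories are an assumption.
EXEMPT_USES = {"news", "parody", "scholarship"}

def check_use(registry: ConsentRegistry, subject: str, licensee: str, use: str) -> bool:
    """A use is lawful if it falls under a speech exception or a registered license."""
    if use in EXEMPT_USES:
        return True
    return registry.is_permitted(subject, licensee, use)
```

The design choice worth noting is the default: absent a matching license or a named exception, `check_use` returns `False`, which is the "presumptively unlawful" posture the recommendations describe.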