Taylor Swift Weighs Response as AI-Generated Nudes of the Pop Star Sweep the Internet

The singing sensation could prove to be a pioneer in launching litigation against ‘deepfake’ pornography, a new frontier of the law.

Taylor Swift performs during the opener of her Eras tour in Glendale, Arizona, in March. AP/Ashley Landis, file

The flood of fake, sexually explicit images of Taylor Swift across social media this week is raising new questions about how victims can pursue legal action against this novel kind of content, produced with ease by artificial intelligence technology.

AI-generated nude images of the pop sensation amassed more than 27 million views and more than 260,000 likes on the social media platform X in the 19 hours before the account that posted them was suspended. The most widely circulated deepfakes portrayed Ms. Swift nude in a football stadium, a commentary on her relationship with a Kansas City Chiefs player, Travis Kelce.

Ms. Swift has yet to speak publicly about the situation, but a source close to her reportedly told the Daily Mail that her “circle of family and friends are furious” about the dozens of graphic images and that she is considering legal action against the deepfake pornography website, Celeb Jihad, that uploaded them. Litigation over non-consensual content created with generative AI has little precedent, but that could change quickly.

“The creation of nude, bloody images of Taylor being groped at a football game — a male-dominated space — is the most violent deepfaking this firm has seen in the last 10 years,” the owner of C.A. Goldberg, a law firm founded to protect people harmed by technology, Carrie Goldberg, tells the Sun. “And this violence exponentially compounds with every single share, link, retweet.”

The White House weighed in on Friday, asserting that it was “alarmed” by the rise of deepfakes. “While social media companies make their own independent decisions about content management,” the press secretary, Karine Jean-Pierre, told reporters, “we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people.”

Ms. Swift could sue on the grounds of the privacy tort known as “false light,” the director of policy and advocacy at the Electronic Frontier Foundation, Katharine Trendacosta, tells the Sun. Whereas defamation cases concern damage to a person’s reputation, false light cases focus on damage to a person’s feelings or dignity.

“False light claims commonly address photo manipulation, embellishment, and distortion, as well as deceptive uses of non-manipulated photos for illustrative purposes,” Ms. Trendacosta says. “As always, the target is those making the images and not the companies the makers happen to use to share the images — companies with platforms overwhelmingly used for legal, protected expression.”

Ms. Swift has not been afraid to go to court, and her track record thus far has been largely successful. Last year, she beat a copyright lawsuit that accused her of stealing from a Mississippi woman’s poetry for her companion book to the album “Lover.” 

When the star herself was accused of defamation by a man who lost his job after a sexual assault incident at a 2013 meet-and-greet, she successfully countersued for $1. The incident is believed to have inspired one of her hit songs, “Look What You Made Me Do.”

Fans have swarmed to defend Ms. Swift. Just as quickly as “Taylor Swift AI” trended on X, “Protect Taylor Swift” arose in response, as fans flooded the platform with positive posts about the singer-songwriter in an effort to bury the AI-generated content. X, meanwhile, asserted in a statement that it is “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.”

The platform clarified that posting “Non-Consensual Nudity (NCN) images is strictly prohibited on X, and we have a zero-tolerance policy towards such content.” The fake images of Ms. Swift violated X’s prohibition against users sharing “misleading media,” meaning “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

Ms. Goldberg says that liability should rest with the companies that build deepfake tools and the platforms that distribute them. “The only way to tackle the problem at scale is to hold the companies that manufacture and distribute the deepfake products themselves liable,” Ms. Goldberg says. “So places that profit from the technology — like the App Store and Google Play — in my eyes are the most pernicious predator when it comes to deepfakes.”

Ninety-six percent of deepfakes are pornographic, and they almost always feature women, according to research from a cybersecurity firm, Deeptrace. That statistic predates the November 2022 launch of ChatGPT, the artificial intelligence chatbot developed by OpenAI, which brought with it an unprecedented ability for anyone, anywhere to easily generate fake images that look alarmingly real.

“A global financial sextortion crisis” is how the FBI and its international law enforcement partners described the rise in this kind of harassment in a joint warning issued in February 2023. They pointed to “an explosion in incidents of minor boys around the world being coerced into sending explicit images online and extorted for money.”

Recent examples include pornographic depictions of Hollywood celebrities, images of President Trump hugging and kissing Anthony Fauci, and a fake explosion at the Pentagon, which sent a brief shock through the American stock market before officials clarified that the Department of Defense headquarters was not, in fact, on fire.

In the absence of a specific law addressing pornographic deepfakes, the outcry surrounding Ms. Swift has prompted some members of Congress to call for legislation to halt this malicious use of artificial intelligence. “This is an issue,” a Democratic congresswoman of New York, Yvette Clarke, wrote on X, “both sides of the aisle & even Swifties should be able to come together to solve.”

States have been looking to regulate such content in recent years. Nine states have enacted laws regulating deepfakes, mostly in the context of pornography and election influence. The content is considered a criminal violation only in Hawaii, Texas, Virginia, and Wyoming. In New York and California, victims of deepfakes have a private right of action to bring civil suits.

“We are at a turning point in society where artificial intelligence has been unleashed on society without restraint and now we are seeing the harms that flow from it,” Ms. Goldberg says. “Ten years ago it was still legal to share and publish nude and sex images of people without their consent. Now the technology is in everybody’s hands to fabricate that sort of content into existence.”

Fake, non-consensual, pornographic images generated by AI tools could perhaps be an exception to the liability immunity granted to social platforms through Section 230 of the Communications Decency Act of 1996. That act enables web operators to moderate the user speech and content that they “publish,” consistent with their First Amendment right to decide what content they will distribute.

In 2018, the Fight Online Sex Trafficking Act carved federal sex trafficking laws out of Section 230. Sex trafficking that takes place with help from the internet is, therefore, one of the few limits on Section 230’s protections for social platforms. 

A press relations specialist at the Electronic Frontier Foundation, Hudson Hongo, tells the Sun he thinks Section 230 immunities would apply as usual in this context. Yet the legal waters surrounding AI and potential defamation remain murky and largely uncharted. If anyone could drive a sharp legal dagger into the heart of artificial intelligence, it would be the societal sensation that is Ms. Swift.

