Democrat Urges Action After Fake, Sexually Explicit Taylor Swift Images Go Viral
Rep. Yvette Clarke (D-N.Y.) on Thursday blasted the online circulation of fake, sexually explicit images of Taylor Swift that were likely generated by artificial intelligence, calling on lawmakers from both parties to find a solution to an issue that affects women across the country.
“What’s happened to Taylor Swift is nothing new,” Clarke wrote on social media, as she requested action from politicians as well as the singer’s fans. “This is an issue both sides of the aisle & even Swifties should be able to come together to solve.”
The Democrat noted that advancements in technology like AI have made it easier for bad actors to create such seemingly realistic images, sometimes known as deepfakes.
One post with fake Swift images on X, the social platform formerly known as Twitter, garnered over 45 million views before it was removed about 17 hours later, The Verge reported, even though the company has rules banning this type of content.
X issued a statement early Friday saying that it was monitoring its site for further violations, without directly mentioning Swift.
“Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content,” the company wrote on its @Safety account. “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.”
Under the ownership of Elon Musk, however, the platform has made deep cuts to its content moderation team. The company is also under investigation in the European Union over whether it violated the 27-nation bloc’s Digital Services Act, which covers content moderation.
Though it wasn’t immediately clear where the fake Swift images originated, 404 Media reported that they came from a Telegram group in which members create sexually explicit images of women without their consent, in some cases using an AI-powered tool called Microsoft Designer. Some of the group’s members also discussed the circulation of the images on X, the outlet reported.
HuffPost has reached out to Swift’s representatives for comment.
A handful of states have passed bills addressing deepfakes, but Congress has so far failed to legislate on the issue.
Rep. Joe Morelle (D-N.Y.) has introduced the Preventing Deepfakes of Intimate Images Act, which would make it a crime to intentionally put out images that have been “created or altered” with technology like AI to show a person engaging in sexually explicit conduct.
On Thursday, he reiterated the importance of getting his bill passed.
“The spread of AI-generated explicit images of Taylor Swift is appalling—and sadly, it’s happening to women everywhere, every day,” Morelle wrote. “It’s sexual exploitation, and I’m fighting to make it a federal crime with my legislation.”