US politicians and fans of Taylor Swift expressed outrage on Friday after fake pornographic images of the singer, created with the help of generative artificial intelligence, were widely shared on X (formerly Twitter) and other platforms in recent days, AFP reported, according to news.ro.

Taylor Swift. Photo: David Fisher / Shutterstock Editorial / Profimedia

One of these images was viewed more than 47 million times on the social network X. According to US media, the image remained on X for more than 17 hours before it was removed.

Fake pornographic images (“deepfakes”) of famous women, which also target many anonymous people, are nothing new. But the development of generative artificial intelligence (AI) programs risks creating an uncontrolled flow of offensive content, according to several activists and regulators.

The fact that such images have this time affected even Taylor Swift, the most listened-to artist in the world on Spotify, may help raise awareness of the scale of the problem, given the outrage of her millions of fans.

“The only ‘good’ side of this happening to Taylor Swift is that she has enough clout to pass legislation to eliminate it. You guys are sick,” wrote Danisha Carter, an influencer with an audience of several hundred thousand people on the network X.

X is known for having less strict nudity rules than Instagram or Facebook. Apple and Google have the power to control content distributed through apps via the rules they set for their mobile operating systems, but so far they have tolerated this situation on X.

In a statement, X said it has a “zero tolerance policy” for the publication of nude images without consent. The platform said it would “specifically remove all identifiable images” of the singer and “take appropriate action against the accounts that posted them.”

Representatives of the American singer have not yet commented.

“What happened to Taylor Swift is nothing new; women have been targeted with fake images without their consent for years,” said Democratic politician Yvette Clarke, who has backed legislation to combat the phenomenon. “With advances in artificial intelligence, it’s becoming easier and cheaper to create such images,” she said.

A 2019 study found that 96% of deepfake videos were pornographic. According to Wired magazine, 113,000 such videos were uploaded to major porn sites in the first nine months of 2023.

The Taylor Swift deepfake also reached the ears of the White House, which said on Friday it was concerned about fake images of the pop singer online and stressed that social networks have an important role to play in enforcing their own rules to prevent the spread of such misinformation, Reuters reported.

“This is very troubling. So we will do everything possible to address this problem,” White House Press Secretary Karine Jean-Pierre said at a press conference, adding that Congress should take legislative action on the matter.

According to Jean-Pierre, lax enforcement against fake images, including those created by artificial intelligence (AI), too often disproportionately affects women. “So while social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent misinformation and intimate images of real people from being shared without consent,” said Jean-Pierre.