American actress Jenna Ortega, star of the Netflix series “Wednesday,” has been the target of an ad campaign featuring fake nude images shared on Facebook and Instagram, owned by Meta, NBC News reported.

Jenna Ortega in Netflix’s viral “Wednesday” series. Photo: Netflix / Everett / Profimedia Images

The ad campaign was run by RichAds, a Cyprus-registered company, to promote an app that it claims can use artificial intelligence (AI) to create sexually suggestive images from ordinary photos.

NBC News found 12 ads for a program called “Perky AI” in Meta’s ad archive that use a photo of Ortega taken when she was 16, with the image edited and then faded to suggest the actress is nude.

The ads told Facebook and Instagram users that the “Perky AI” app could change Ortega’s clothes into different outfits until she was “unclothed.”

The 21-year-old American actress became an international star thanks to the huge success of the Netflix series “Wednesday,” shot in Romania under the direction of Tim Burton; clips of Ortega from the series have gone viral on social networks such as Instagram and TikTok.

RichAds, the company that created the fake images, describes itself on its website as a “global network of services” that offers various ways to create push ads and other types of pop-up ads or mixed messages.

Meta defends itself after the Jenna Ortega deepfakes

NBC News notes that Meta suspended the “Perky AI” app page after the outlet contacted Mark Zuckerberg’s company for comment, but by then the Cyprus-based firm had posted more than 260 ads on Meta’s platforms since last September.

“Meta strictly prohibits child nudity, content that sexualizes children, and services that offer AI-generated nudity without consent,” said Ryan Daniels, a spokesperson for Meta, in a press release.

But this case raises new questions about the ability of Meta’s platforms to deal with deepfake images, especially when paying advertisers are involved.

The company, led by Mark Zuckerberg, offered fresh assurances on this front as recently as last month, saying it was expanding its efforts to identify images created or altered by artificial intelligence in order to curb disinformation and “fake” content ahead of the elections taking place in the United States and other countries in 2024.

Apple also caught in controversy over “Perky AI”

The “Perky AI” app could be downloaded from the App Store, as Apple’s online store policy does not explicitly prohibit deepfake apps. However, the App Store’s terms of use prohibit apps that contain pornography, as well as those that create “defamatory, discriminatory, or malicious content” or that may “demean, intimidate, or harm an individual or group.”

But Apple, led by Tim Cook, said the “Perky AI” app had already been rejected from the App Store on February 16 for violating its policy against “explicitly sexual or pornographic content,” and that the app had been under review for 14 days when the company was contacted for comment.

During that time, however, the app could still be downloaded and used. Apple says it has since removed the app from its online store and suspended the Cypriot company behind it from its developer program.

This new controversy surrounding the distribution of sexual deepfakes comes just weeks after a major scandal erupted over Elon Musk’s social media platform X, which allowed pornographic images of singer Taylor Swift to be widely distributed.

“X” at one point blocked all searches for Taylor Swift’s name on its platform until the content was removed.