
Deepfake videos may be a magnet for millions of Internet users, but they pose a serious problem for millions of women, including underage girls, who are at risk of, or have already fallen victim to, “non-consensual pornography.”
The vast majority (96%) of the deepfake material circulating on the Internet is still non-consensual pornography: videos in which images of women’s faces – often well-known women – are synthesized with artificial intelligence and swapped onto bodies in existing footage.
Research on deepfake pornography is scarce, but a 2019 report by the artificial-intelligence firm DeepTrace Labs found that images of female actors from Europe and the US were used most often, followed by South Korean K-pop singers.
The underlying technology, generative adversarial networks, was developed in 2014 by Ian Goodfellow, who later became a director of machine learning at Apple. Deepfake tools allow someone to map one person’s face onto another’s body and mimic their voice, expressions, and movements, creating fake content that looks real.
The problem could get worse as new creative AI tools emerge, experts say. “The reality is that technology will continue to spread, evolve and become as simple as pushing a button,” said Adam Dodge, founder of EndTAB, an organization that provides information about deepfake-based revenge pornography.
The reaction of technology companies
Meanwhile, some AI programs have announced that they are already restricting access to deepfake images.
OpenAI, the company behind ChatGPT, says it has removed explicit content from the data used to train its DALL-E image generator, limiting its usefulness for creating deepfakes. Midjourney, another image generator, blocks the use of certain keywords and encourages users to flag potentially “problematic” images to moderators.
Meanwhile, the startup Stability AI released an update in November that removes the ability to create explicit images. The change came after reports that some users were creating celebrity-inspired deepfakes using its technology.
Some social networks have also tightened their rules to better protect their platforms from harmful content.
Last month, TikTok said that all deepfakes or edited content showing “realistic” scenes must be labeled as fake or altered in some way, and that deepfakes of private individuals are no longer allowed at all.
The streaming platform Twitch also recently updated its policy on deepfake images after a popular streamer known as Atrioc was found to have deepfake pornography open in his browser during a late-January livestream.
Other companies have also tried to ban deepfakes from their platforms.
Apple and Google recently said they removed from their app stores an app that had been using sexually suggestive deepfake videos of actresses to promote itself.
The same app, before its removal by Google and Apple, had also run ads on Meta’s platforms, which include Facebook, Instagram, and Messenger.
In February, Meta, along with adult sites such as OnlyFans and Pornhub, began participating in Take It Down, an online tool that allows teens to report explicit images and videos of themselves, including deepfakes, so they can be removed from the internet.
According to the Associated Press
Source: Kathimerini

Ashley Bailey is an author and journalist at 247 News Reel.