
Artificial intelligence has learned to write fake news


In October 2022, a brand-new, unknown website scored a global exclusive: it published a podcast hosted by Joe Rogan, and the first guest was one of the biggest names in the world. The guest was Steve Jobs, who “spoke” for the first time more than ten years after his death. Strangely, the world was already prepared: in 2020, a Guardian article had gone viral under the headline “A robot wrote this entire article. Are you scared yet, human?”

What these two stories have in common is artificial intelligence. The podcast with Rogan and Jobs was built entirely by podcast.ai, using advanced language models and play.ht, a program that (re)creates voices with AI technology. The Guardian text, in turn, was written by the text generator GPT-3, which formed the basis on which OpenAI built ChatGPT, one of the hottest topics of our time.

With language models that have evolved so much that they produce well-written texts (GPT-3’s output is said to be hard to tell apart from a human’s) and with other AI programs that can create images from scratch, it is easy to see how this technology could become a useful tool in the arsenal of disinformation.

“The risk of fake news spreading from AI algorithms that produce text or realistic images is great. First, the data on which these models are trained can, if researchers are not careful, have serious consequences for the images or texts they produce: they may contain racist language and other biases. In addition, some actors could exploit, for political gain, the creation of credible images of public figures in order to promote instability in a country. Efforts are being made to protect models from such vulnerabilities and attacks, but they are still under development and experimentation,” said Giorgos Papadopoulos, PhD researcher at Imperial College London and senior fellow at JPMorgan’s Quantum Research Group.

A racist chatbot

Back in 2016, Microsoft (which notably announced a $10 billion investment in OpenAI recently) created an artificial intelligence chatbot for Twitter named Tay. “The more you talk to it, the smarter it gets, learning to interact with people through casual and playful conversation,” the tech giant said.

However, the game ended quickly. In less than 24 hours, Tay had become racist, sexist and antisemitic, a “parrot” of the opinions fed to it by internet users. The tweets were hastily deleted, the account was shut down and the project was abandoned.

This example illustrates a key problem with all AI-based text generators: we do not know how reliable their data is. Moreover, ChatGPT and similar programs cannot truly create, have no “filter” on their answers and do not understand ethical issues. As a result, they can let through texts containing discrimination, prejudice, misinformation, hate speech or even plagiarism.

Raising another thorny issue for ChatGPT, Dr. Vassilis Vlachos, Associate Professor of Economics at the University of Thessaly, pointed, among other things, to the lack of transparency both in the data and in the reasoning the text-generating program follows to produce its answers. “Fake news can be created even unintentionally. These programs work with what you feed them. For example, if you give them Ku Klux Klan data, they will give racist answers,” he said.

“At the heart of these applications is the pool of information they are fed. AI can therefore very easily be ‘deceived’ and start producing inaccurate or false content if such data is fed to it in the first place, since it has neither critical faculties nor real experience of how the world and society work,” said Iosif Halavazis, PhD researcher in the Department of Communication and Media Studies at EKPA, who specializes in the study of conspiracy theories and the internet.

So, since ChatGPT creates well-written and believable texts, someone can easily take them and feed them to bots (automated accounts) that will flood the internet and social networks with all sorts of information. According to Mr. Halavazis, the automated production of fake news or the synthesis of conspiracy theories may be the result of malicious action, or may even happen by accident.

“We can already feel the effects of the fake Twitter accounts that run astroturfing campaigns in this way,” he says. Astroturfing is the phenomenon in which information on the internet (for example, a product advertisement, the defamation of a political opponent, fake news) is not presented by its official source (a company, a politician, a conspiracy theorist) but is made to look as if it comes from the grassroots of social media users, as if it represented popular opinion.

What ChatGPT itself says about fake news

We asked ChatGPT itself, on two separate occasions, how it could be used to spread fake news. The first time it gave a rather informative answer. Notably, since we had previously asked it questions about hackers, it proved “smart” in that it remembered the conversation, but it also tried to connect two unrelated topics.


The second time, perhaps because it had been updated by its creators, the AI bot clarified that writing fake news is prohibited by its terms of use. It did, of course, also give two extremely unfortunate examples of “fake news”, probably because of its limited Greek vocabulary.


Fake images and deepfakes

Of course, it is not only written text that can be used as a propaganda tool. With the Lensa app it is possible to create user avatars or remove objects from photos; programs such as Midjourney and DALL-E take a text description and produce an image (text-to-image); and Semaphore creates deepfake videos.

While all of these AI programs can serve good journalism, the opposite is also true. With Semaphore, the shocking audio testimonies of Ukrainian refugees were digitized into video. Yet the same app produced a video in which Volodymyr Zelensky tells the citizens of his country to surrender to the Russians. At the same time, deepfakes are often used to create pornographic content, run scams and spread fake news.

An existential crisis for journalists?

However, automated journalism, that is, the use of algorithms to produce journalistic texts, is not so new. “Some large organizations, with the Associated Press as a pioneer in 2014, already use the power of AI and machine learning to create news about the weather, stock prices and match results, and are gradually expanding the range of topics,” says Mr. Halavazis.

Many are investing in automated journalism: recent reports (not officially announced) that BuzzFeed will start using ChatGPT for content creation sent its shares up 200%.

According to the research firm Gartner, within a few years 25% of internet data will be the product of automation. The Reuters report on journalism trends for 2023 shows that the world of journalism is already taking full advantage of AI technology.

Text-generation tools are used for transcription, automatic translation, summaries and subtitles, and can handle the so-called “news stream”: creating news texts based on factual information, which are first checked by a journalist and then published.
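To make the “news stream” idea concrete, here is a minimal, hypothetical sketch of the template-based generation long used for weather, stock and sports reports: structured data is slotted into pre-written templates, and the resulting draft goes to a journalist before publication. All names and templates below are illustrative assumptions, not any newsroom’s actual system.

```python
# Minimal sketch of template-based automated news writing: structured
# data in, draft sentence out, human editor in the loop. Hypothetical
# names and templates, not any outlet's real pipeline.
from dataclasses import dataclass

@dataclass
class MatchResult:
    home: str
    away: str
    home_goals: int
    away_goals: int

# Pre-written templates, one per outcome, filled from the data fields.
TEMPLATES = {
    "home_win": "{home} beat {away} {home_goals}-{away_goals} at home.",
    "away_win": "{away} won {away_goals}-{home_goals} away against {home}.",
    "draw": "{home} and {away} drew {home_goals}-{away_goals}.",
}

def draft_report(result: MatchResult) -> str:
    """Turn structured match data into a draft for editorial review."""
    if result.home_goals > result.away_goals:
        key = "home_win"
    elif result.home_goals < result.away_goals:
        key = "away_win"
    else:
        key = "draw"
    return TEMPLATES[key].format(**vars(result))

# The draft is checked by a journalist before publication, as the
# article describes; nothing is published automatically here.
print(draft_report(MatchResult("AEK", "PAOK", 2, 1)))
```

The design point is that the facts come from a verified data feed and the wording from vetted templates, which is why this kind of automation scaled to weather, finance and sports long before free-form generators like ChatGPT appeared.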

Deepfakes are also particularly popular in Asian countries: the South Korean company Deep Brain AI produces digital “clones” of the anchors of leading channels. As a result, some shows are presented by the real, live anchors, while others are fronted by their AI-generated copies. With ChatGPT, the digital presenter will even be able to answer specific questions from the audience about topics such as elections, the weather and more.

While the use of artificial intelligence in journalism is welcomed, there are concerns that the low-cost, automated production of news content could further undermine citizens’ already low trust in the media and contribute further to their commercialization. “We want to use artificial intelligence to improve the journalist, not replace him,” the report notes.

Of course, there is still some way to go before the human factor is eliminated: a few days ago, the website CNET was forced to correct and withdraw dozens of its articles, written exclusively by AI, because of serious errors.

Measures, not technophobia

There is no doubt that news organizations and journalists themselves must take measures in an environment where fake news is already rampant on the internet and social media.

“It is very important that the design and development of AI applications be inclusive of sociologists, and especially communication scholars, who can set AI learning parameters with the post-truth era and its information disorders in mind. At the same time, legislation should be introduced that forces the media to inform readers when the content they distribute is a product of automation,” says Mr. Halavazis.

However, the world should not go to the other extreme, technophobia. “In managing the moral panic of recent days, extremes such as technological messianism and neo-Luddism should be avoided. Technology has always been present in people’s lives,” he concludes.

Authors: Sophia Haldayou, Lukas Velidakis

Source: Kathimerini
