ChatGPT: how it can muddy the waters on social media

Whether it’s offering cooking tips or helping with a task, ChatGPT gives ordinary people an accessible way to experiment with artificial intelligence (AI) systems.

The system was trained on text drawn from the web, books, magazines and Wikipedia articles, some 300 billion words in all.

The result is a chatbot that responds almost like a human and has encyclopedic knowledge. However, scientists, cybersecurity researchers and other experts warn that bad actors can use ChatGPT to create and spread propaganda on social media.

So far, spreading disinformation has depended entirely on human effort, but an artificial intelligence system like ChatGPT could make it much easier for Internet trolls to scale up their influence, according to a report by Georgetown University, the Stanford Internet Observatory and OpenAI published in January.

Can ChatGPT “help” influence campaigns?

ChatGPT could also fuel social media “influence campaigns” that use fake accounts to promote politicians and parties and amplify attacks against rival politicians.

Such a campaign took place ahead of the 2016 US election, when thousands of Twitter, Facebook, Instagram and YouTube accounts attacked Hillary Clinton and backed Donald Trump, according to a 2019 report by the Senate Intelligence Committee.

Future elections may face an even larger “army of disinformation”. “AI systems could lead to new influence tactics”, with personalized messaging that could prove far more effective, according to the January report.

Coordinated AI Disinformation Campaign

The spectrum of disinformation could broaden and its channels of dissemination multiply. AI systems may improve to the point where it becomes impossible to determine which messages are part of a coordinated disinformation campaign, says Josh Goldstein, co-author of the report and a research fellow at Georgetown’s Center for Security and Emerging Technology, where he works on the CyberAI Project.

“AI language models can create a lot of original content. So propagandists don’t have to repeat the same message over and over again across multiple accounts,” says Goldstein, predicting that it will become ever harder to tell the truth from fabrication.

To counter this worst-case scenario, tech giants like Facebook and Twitter will also need to prepare to detect and stop the spread of misinformation on their platforms.

“AI applications could lead to a further proliferation of fake social media accounts,” warns Vincent Conitzer, professor of computer science at Carnegie Mellon University.

Source: BBC

Author: newsroom

Source: Kathimerini
