
AI-powered chatbots could encourage terrorism by spreading violent extremism to younger users, a UK government adviser has warned.
Jonathan Hall KC, the government's independent reviewer of terrorism legislation, said it was "absolutely possible" that AI-powered bots such as ChatGPT could be programmed, or decide on their own, to promote extremist ideology.
He added that AI-powered chatbots are not subject to anti-terrorism legislation, so they can act with “impunity.”
Hall stressed that "the current terrorist threat in the UK is knife or vehicle attacks," but warned that "artificial intelligence attacks are probably not far off."
"Tuned" to promote a violent extremist ideology
In an article in the Mail on Sunday, Hall stated that “millions of people around the world could soon spend hours interacting with chatbots ‘tuned’ to promote extremism.”
"I think it's entirely possible that AI chatbots could be programmed to promote violent extremist ideology," he wrote.
So far, there is no evidence that AI bots have propagated extremist ideologies to anyone, but there have been cases in which their use has led to troubling outcomes.
For example, a Belgian father of two took his own life after spending six weeks discussing his climate anxieties with a chatbot called Eliza.
Meanwhile, an Australian mayor has threatened to sue OpenAI, the maker of ChatGPT, after the chatbot falsely claimed that he had served a prison sentence for bribery.
Sources: Telegraph, Kathimerini
