
A few weeks after its release, the ChatGPT app already threatens to upend many forms of daily communication, from the way we write emails to university assignments and countless other forms of writing.
The app, developed by OpenAI, is a chatbot that can automatically answer written questions in a way that often resembles a human's, to the point where it is hard to tell that a machine was involved at all.
But beyond the surprise at the prospect of machines replacing humans in forms of writing such as poetry and screenwriting, there is a much bigger threat: artificial intelligence replacing humans in democratic processes, not through voting, but through lobbying.
ChatGPT could automatically compose comments submitted during the legislative process. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog posts, and social media posts millions of times a day. It could mimic the work of Russia's Internet Research Agency in its attempt to influence the 2016 US election, but without the reported multimillion-dollar budget and the hundreds of employees hired for the purpose.
Automatically generated comments are not a new problem. For quite some time now, we have faced the threat of bots, that is, machines that post content automatically. It is estimated that at least one million automatically generated comments were submitted to the FCC about proposed net neutrality rules five years ago. In 2019, a Harvard undergraduate used an automated text generator to submit 1,001 public comments on a health policy issue. Back then, though, commenting was just a numbers game.
Danger to Members of Congress
Since then, platforms have gotten better at removing "coordinated inauthentic behavior." Facebook, for example, deletes over a billion fake accounts every year. But these kinds of messages are only the beginning. Instead of flooding legislators' inboxes with messages of support or jamming the Capitol switchboard with robocalls, an AI system as advanced as ChatGPT, trained on data about those processes, could target legislators and influential people in key positions, identify the weakest points in the policymaking process, and ruthlessly exploit them through direct communication, public relations campaigns, horse trading, or other means of influence.
When we humans do things like this, we call it lobbying. The most successful lobbyists combine precise messaging with smart targeting. Right now, the only thing keeping a ChatGPT-equipped lobbyist from executing something resembling a rhetorical drone war is the lack of precision targeting. But artificial intelligence has the potential to supply that as well.
A system that can interpret political networks, combined with the app's text-generation capabilities, could determine which member of Congress carries the most weight in a particular policy area, for example corporate taxation or military spending. Acting like a lobbyist, such a system could target undecided representatives who sit on the committees controlling the policy of interest, and then focus its efforts on members of the majority party when a bill comes up for a vote.
Once the individuals and strategies are identified, a chatbot like ChatGPT could generate text for use in emails, comments, and anywhere else writing is useful. Lobbyists could also target those people directly, and it is the combination that matters: comments on articles and social media have limited reach on their own, and knowing which legislators to target is not, by itself, enough to manipulate them.
The motivation to attack is strong
This ability to understand and target actors within a network would enable the creation of an AI hacking tool that exploits vulnerabilities in social, economic, and political systems with incredible speed and reach. Legislative systems would be a particular target, because the incentive to attack policymaking systems is so strong, because the data for training such systems is so widely available, and because the use of AI can be very difficult to detect, especially when it is used strategically to guide human actions.
Obtaining the data needed to build such strategic targeting systems is only a matter of time. Open societies generally rely on transparency, and most legislators are willing, at least formally, to receive and respond to messages that appear to come from their constituents.
Perhaps an AI system could identify which members of Congress hold decisive leadership power but whose public profiles are low enough that their attention is rarely in demand. It could then work out which interest groups have the greatest influence on that legislator's public positions. Perhaps it could even calculate the size of the donation needed to sway such an organization, or target an ad carrying a strategic message to its members. For every goal that requires a political decision, it needs only the right audience and the right message at the right time.
What makes the threat posed by AI-armed lobbyists greater than that posed by the high-priced lobbying firms of Washington's K Street is their capacity to accelerate. Human lobbyists draw on years of experience to find the strategic moves that successfully shape a policy outcome. That expertise is scarce, and therefore expensive.
Faster and cheaper
But in theory, AI could achieve the same result much faster and more cheaply. Speed out of the gate is a huge advantage in an ecosystem where public opinion and media narratives can become entrenched quickly, and can also shift rapidly in response to chaotic world events.
In addition, AI's flexibility could let it influence multiple policy processes and jurisdictions at the same time. Imagine an AI-assisted lobbying firm that could attempt to amend every bill before the US Congress, or even before every state legislature. Human lobbyists usually operate in only one state because of the complex variations in laws, processes, and political structures. With artificial intelligence, it may become easier to exert influence beyond the usual political boundaries.
Just as educators will have to change how they administer exams and assignments in light of ChatGPT, governments will have to change how they deal with lobbyists.
To be sure, this technology could also be good for democracy by making influence more accessible. Not everyone can afford an experienced lobbyist, but software to access AI systems could be available to everyone. With any luck, such strategic AI might even revitalize democracy itself by giving this kind of influence to the citizens who currently have the least of it.
However, the largest and most powerful institutions are the ones most likely to use AI techniques of influence successfully. After all, the best lobbying strategy still requires insiders, people who can navigate the corridors of the legislature, and money. Influence is more than delivering the right message to the right person at the right time. And while a chatbot may be able to determine who the targets of these influence campaigns should be, for the foreseeable future it will be humans who pay for them. So while it is impossible to predict what a future full of AI-armed lobbyists will look like, it will probably further increase the influence and power of those who already have both.
Source: Kathimerini

Ben is a respected technology journalist and author, known for his in-depth coverage of the latest developments and trends in the field. He works as a writer at 247 news reel, where he is a leading voice in the industry, known for his ability to explain complex technical concepts in an accessible way. He is a go-to source for those looking to stay informed about the latest developments in the world of technology.