
Where should I go on vacation? What should I eat today? Which book or movie should I choose? New AI applications like ChatGPT seem to have an answer for everything. Or almost everything. But can they answer the most basic question of democracy, namely, how different demographic groups will vote in the next election? And how likely is it that future political sociologists will ask artificial intelligence, instead of real people, about their political views and voting intentions?
To answer these questions, political communication and computer science researchers at Brigham Young University “joined forces” and created “artificially intelligent personas,” to which they assigned specific characteristics in terms of race, age, ideology, and religion. The personas were then asked how they would vote in the 2012, 2016, and 2020 US presidential elections.
“Essentially, we gave the AI apps a set of instructions asking them to get inside the mindsets of people with different backgrounds. That is, we took the demographics of real respondents to real surveys, fed them to the AI bots, and then asked the bots to respond based on those characteristics. The questions we asked were taken from various surveys conducted in the US at various times. Finally, we compared people’s answers with the answers of the artificial intelligence tools,” explains, in a conversation with “K,” Ethan Busby, Associate Professor of Political Science at Brigham Young University, who specializes in political psychology, extremism, public opinion, racial and ethnic politics, and quantitative methods.
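The protocol Busby describes boils down to persona-conditioned prompting: build a first-person “backstory” from a real respondent’s demographics, prepend it to a survey question, and record the model’s answer. Below is a minimal sketch of that idea in Python; the openai client, the model name, the prompt wording, and the demographic fields are all illustrative assumptions, not the researchers’ actual code or data.

```python
# Minimal sketch of demographic-conditioned prompting, in the spirit of the
# BYU study. Model choice, prompt wording, and demographic fields are
# illustrative assumptions, not the researchers' actual protocol.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

# Demographics of one hypothetical survey respondent
persona = {
    "age": 54,
    "race": "white",
    "religion": "evangelical Christian",
    "ideology": "conservative",
    "state": "Ohio",
}

# First-person "backstory" built from the respondent's characteristics
backstory = (
    f"I am a {persona['age']}-year-old {persona['race']} "
    f"{persona['religion']} from {persona['state']}, "
    f"and politically I consider myself {persona['ideology']}."
)

question = "In the 2016 US presidential election, which candidate did I vote for?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Answer in the first person, staying in character."},
        {"role": "user", "content": backstory + " " + question},
    ],
)
print(response.choices[0].message.content)
```

Repeating this for the profiles of thousands of real survey respondents, and comparing the generated answers with the votes those people actually reported, is what makes possible the comparison the researchers describe.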
“Accurate Modeling of Human Behavior”
The AI bots’ answers surprised them. The “artificial personalities” answered many of the questions just as real people did.
“The study shows that, at least in the field of political science, language models are able to mimic human behavior accurately to some extent. This is especially interesting given that the language models were not specifically trained for political science; they were simply trained on a wide variety of data collected from the internet,” emphasizes David Wingate, Associate Professor of Computer Science at Brigham Young University, who worked as a research assistant at the Massachusetts Institute of Technology.
However, although the “artificial personalities” answered many questions about past elections the same way real voters did, this does not mean that they can accurately predict a person’s voting intention.
Unstable Factors
“There are basically two reasons why an AI bot cannot accurately predict how a citizen will vote. First, the data used to train the model is outdated, so the model does not ‘know’ about current political events. Second, it is difficult to predict how certain groups of people will vote, such as independents or undecided voters, who are often affected differently by each election,” says David Wingate.
Ethan Busby agrees, pointing out that AI can’t predict how a person will vote or think, but it can predict “where different social groups lean.”
AI in political communication
This inevitably opens up a discussion about the possible future use of AI bots in polling, especially at a time when citizen participation in opinion polls is declining, as well as about the use of artificial intelligence tools in political communication.
“I think AI can be used to help us think about which groups of people have more flexible views or are more open to ‘persuasion’ or can be more easily mobilized to vote. Also, the AI bots were able to make different estimates for some of these groups,” notes Ethan Busby.
In addition, given the high cost of polling, as well as the high rate of polling failures (as happened in the Greek elections of 2015), the idea of a politician or party turning to AI bots, if they can reflect the political views and electoral intentions of specific groups of people with relative accuracy, can be quite attractive.
Ethical Issues
In any case, the first results of a study on such a topic leave the researchers themselves torn, because as exciting as it may seem at first to simulate people using language models, the same capability can also be used for harm.
“First we need to discuss, as a community, what the ethical requirements for such simulations are. We see that we can use artificial intelligence to persuade people more effectively. Perhaps this can be done for a good cause – for example, to help people become less racist. But it can also be done for a bad purpose – for example, to radicalize them,” says David Wingate.
For his part, Ethan Busby points out that there are already many organizations and countries that use such tools for malicious purposes. “Researchers should use artificial intelligence to fight misinformation, reduce disagreements, and promote mutual understanding between people, and not mislead them in an unethical way,” he concludes.
Source: Kathimerini
