Daniel Kahneman in “K”: We will get used to being judged by the algorithm
How would you feel if two judges handed down different sentences for the same crime? Or if two radiologists made different diagnoses from the same X-ray? Or, even worse, if the same judge or the same doctor made different decisions in similar cases, depending on mood or fatigue? These fluctuations in human judgment in critically important areas are what Nobel laureate psychologist and economist Daniel Kahneman and his colleagues have termed “noise.” Kahneman, emeritus professor of psychology at Princeton University, a founder of modern behavioral economics and author of a book that has become a touchstone for management consultants (Thinking, Fast and Slow), offers an analytical approach to “noise,” the factor that makes human judgment unreliable, and proposes ways to limit errors that we tend to treat as systemic (Noise: A Flaw in Human Judgment).

In a conversation with K, Kahneman responds to the dilemmas posed by the explosion of artificial intelligence such as ChatGPT, while making no secret of his concern about the era of AI autonomy and the impact it could have on the human role.

– We are talking about “noise” in the context of multiple decisions. There may be many decisions about different things, or many decisions about the same thing. It is actually easier to think of them as judgments or measurements: you are measuring either different things or the same thing multiple times. Then there is the concept of error. The average of these errors is what we call bias; bias is the predictable mean error. But not all errors are the same. Some are big, some are small. The technical term for what I am describing is “error variability.” That is a pity, because the word “noise” has many different meanings and it is very easy to forget the technical one. But the technical meaning is simply “error variability,” meaning you do not want variation in judgments of the same thing. Strictly speaking, the technical term is “undesirable variability in errors of judgment.”
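The distinction Kahneman draws between bias and noise can be sketched numerically. The judges and values below are purely hypothetical, chosen only to illustrate the two definitions (bias as the predictable mean error, noise as the variability of the errors):

```python
import statistics

# Hypothetical example: five judges each assess the same claim,
# whose true value is 100. All numbers are illustrative only.
true_value = 100.0
judgments = [108.0, 112.0, 104.0, 110.0, 106.0]

errors = [j - true_value for j in judgments]

# Bias: the predictable mean error (here, a systematic overestimate).
bias = statistics.mean(errors)

# Noise: the variability of the errors around that mean,
# i.e. how much the judges disagree about the same case.
noise = statistics.stdev(errors)

print(f"bias  = {bias:.1f}")   # bias  = 8.0
print(f"noise = {noise:.2f}")  # noise = 3.16
```

In this sketch every judge overestimates (a bias of 8.0), but they also disagree with one another about the same case (a noise of about 3.2); reducing one does not automatically reduce the other.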

– It used to be that mathematical algorithms had no “noise,” because every time you asked them the same question you got the same answer. That is not true of modern AI algorithms: if you ask ChatGPT the same question twice, you will not get exactly the same answer, so there is some “noise.” But there is certainly far less “noise” in algorithmic judgments; even a simple combination of variables shows much less “noise” than human judgment.

– I think it will be difficult at first, but it is something people can get used to. For example, I have heard that China is experimenting with artificial intelligence in banking decisions, and you can see that in this narrow area using artificial intelligence is clearly better than relying on human judges. Getting people used to judicial decisions made by machines will be a gradual process, but I would not be surprised if the use of artificial intelligence expands. Right now, for example, people want a doctor to tell them what is going on. What will happen is this: if the algorithm says one thing and the doctor another, people will be conflicted. I mean, citizens can come to see that the algorithm is right and the doctor is wrong. And since I believe that, in the end, algorithms will be right far more often than doctors, people will eventually learn to trust algorithms. All this will take time, but I do not think people are incapable of trusting algorithms.

– We are very far from that. On the one hand, in cases like bankruptcy the approach is quite simple: there are rules and the law is quite specific, so you can write a program that follows the rules. There you can build an algorithm that is fair and consistent. On the other hand, in cases where the problem is to determine who is telling the truth and who is not, we are far from algorithms that can solve this. Of course, we know that humans are not very good at it either, so there is certainly room for improvement in the work of judges.
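A minimal sketch of that idea: when the law reduces to explicit rules, those rules can be encoded so that identical inputs always produce identical outcomes, which is precisely the noise-free property Kahneman describes. The function name, thresholds, and fields below are invented for illustration and do not reflect any actual bankruptcy statute:

```python
# Hypothetical rule-based decision, loosely inspired by the bankruptcy
# example in the interview. All rules and thresholds are invented.

def meets_filing_rules(debt: float, assets: float, missed_payments: int) -> bool:
    """Apply the same fixed rules to every case, so identical
    inputs always yield identical outcomes (no noise)."""
    insolvent = debt > assets
    in_default = missed_payments >= 3
    return insolvent and in_default

# The same facts always produce the same answer, unlike a human judge
# whose ruling may vary with mood or fatigue.
print(meets_filing_rules(debt=50_000, assets=20_000, missed_payments=4))  # True
print(meets_filing_rules(debt=50_000, assets=80_000, missed_payments=4))  # False
```

The design point is not that the rules are good, but that they are applied identically every time; any unfairness in such a system is bias, which can be audited and corrected, rather than noise.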

– I’m worried, of course. I think that everyone watching this situation is concerned because in the short term it can be used in such a way that journalism becomes almost impossible, because it will be very difficult to distinguish truth from fake news. So that’s just one problem. And dozens of other problems. In the long run, living with a higher intelligence poses a greater threat. So I think a lot of people are worried. Me too.

– Well, there is a computer scientist at Berkeley, Stuart Russell, who is trying to achieve what is called “alignment”: ensuring that artificial intelligence has goals that correspond to human goals. It turns out to be a difficult technical task. It is not easy to make sure that an algorithm will actually be useful, even if you set a goal for it. The problem that worries many people is that, on the way to its final goal, it may pursue subgoals and harm people in pursuing them. If artificial intelligence is highly developed, it will also be very autonomous. These are some of the nightmare scenarios people worry about.

– Is it reasonable to expect human intelligence to improve through the use of artificial intelligence?

– Artificial intelligence will be very useful; it is already very useful for many people. That much is clear, but the problem is that in many professions people may need AI while AI does not need people. That is the problem. And, you know, this is already happening, e.g. in chess, where the programs are far better than the best human players. In any field where AI makes the same judgments as humans based on the same information, AI will ultimately outperform humans.

– What advice would you give to the younger generation, also called the Instagram generation, who are constantly influenced by “noise”?

– I have no advice for the younger generation. You know, I would need to know the future in order to give advice, and of one thing I am quite sure: I have no idea what will happen in the future, so I have no advice. And that is actually a good, definitive answer.

Author: Athanasios Katsikidis

Source: Kathimerini
