
Would you trust artificial intelligence (AI) to read your emotions after it has been trained to recognize them on millions of other faces, acquiring enduring biases along the way? How would you feel if an AI made a decision that mattered to you, denying you a job, for example, without being able to explain exactly how it reached that result? Would you share your health data with an AI that extracts information from thousands of other people at the same time? These questions carry a strong moral charge, but they are not theoretical at all. Many more like them could be raised, but the problem is that they seem too complicated for the layperson. At least that is what the Goethe-Institut’s program sets out to change, by trying to provoke reflection on the ethics of artificial intelligence.
“I believe that the ethics of AI should be the subject of public debate, because it is an issue that concerns all of us,” said Galina Dimitrova-Dimova, professor of digital arts at the National Academy of Arts in Sofia and curator of the program. “Today, algorithms are in the social media and apps we use, in our online shopping, in the media, in health products and services; they are everywhere. There are huge benefits, but there are also side effects that raise a number of ethical questions. What does AI mean for our humanity, how does it change our relationships with each other, how does it affect our lives and our relationship with devices and machine intelligence?”
“What are these applications doing to our humanity, how do they affect our lives and our relationships?” are among the questions that the Goethe-Institut’s interdisciplinary program examines.
Fifteen fellows of the Goethe-Institut were invited to examine these critical issues, each from their own disciplinary perspective. The AIMotion team, for example, used an interactive installation to highlight the problem of facial recognition software that reads emotions. Such programs have recently come under scrutiny from the European Union, which in some cases is even discussing banning them. “A smile, a frown, or raised eyebrows can say a lot about how a person is feeling at the moment,” the researchers report. “But how many of these characteristics are enough for an AI to really recognize a person’s feelings and act on them?” the participants ask. “Our project,” they conclude, “is based on the idea that AI could really read human emotions.” Those who encounter the machine enter into a non-verbal dialogue with it: depending on their emotions, they see it produce particular colors and images associated with each one.
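The installation’s internals are not described in the article, but the emotion-to-color mapping it implies can be sketched minimally in Python. Everything here is a hypothetical illustration: the classifier is a random stand-in where a real installation would run a trained facial-expression model on camera frames.

```python
import random

# Palette: each detected emotion drives a different color in the installation.
EMOTION_COLORS = {
    "joy": (255, 200, 0),        # warm yellow
    "sadness": (40, 80, 160),    # muted blue
    "anger": (200, 30, 30),      # red
    "surprise": (180, 60, 200),  # violet
}

def classify_emotion(frame):
    """Stand-in for a trained facial-expression model.
    A real system would score a camera frame; here we pick a label
    at random just to keep the sketch self-contained and runnable."""
    return random.choice(list(EMOTION_COLORS) + ["unknown"])

def render_response(frame):
    emotion = classify_emotion(frame)
    # Fall back to a neutral grey when the label is outside the palette:
    # an honest default, given how contested emotion recognition is.
    return EMOTION_COLORS.get(emotion, (128, 128, 128))

if __name__ == "__main__":
    print(render_response(frame=None))
```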
Here the answers are neither simple nor given, as some AI systems promise high accuracy in face recognition and, by extension, in reading emotions. But who will be responsible if a serious mistake is made?


New software and bias
“As self-learning machines become more and more involved in our lives,” notes Bettina Wenzel, director of information at the Goethe-Institut in Athens and Southeast Europe, “society needs new rules that must not be dictated solely by technologists and large technology companies. Until now, the potential of AI has been explored mainly by the private sector. Politics and civil society are lagging behind. In this context, it is important for our future to highlight existing approaches, as well as the opportunities and risks of AI, with the participation of all citizens, and to intensify a broad social dialogue on this issue.”
“We live in a historic moment,” notes the artist Marinos Koutsomichalis, who acted as a mentor in the program, “in which it has not yet been determined what the moral responsibility of AI is. We are at a transitional stage and do not yet have a common moral framework.” Tellingly, as the groups worked together, tensions arose between the more practical and the more critical approaches to AI.
The world still seems to oscillate between fear of AI and enthusiasm for all the good it does, and that makes it even harder to assess it dispassionately. The Glubot team, for example, chose to emphasize the positive impact of AI on a very practical basis rather than touch on ethical issues, “although that will certainly happen at some point,” as Mr. Koutsomichalis predicts, referring to the privacy concerns raised by AI health applications. Their project, called “AI, from you to you,” aims to vindicate the high and legitimate expectations placed on the use of AI in healthcare. It is designed to help people with diabetes using the data generated by continuous glucose monitoring sensors. Users who have such systems will be able to interact with the team’s AI and see that the fear of the unknown is unfounded: when you know how the machine works and feel in control of it, you have nothing to fear and you trust it.
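Glubot’s actual implementation is not public, but the basic idea of acting on continuous glucose monitoring data can be sketched as below. The 70–180 mg/dL range is a standard clinical reference; the names and advisory texts are illustrative assumptions, not the team’s code, and any real alerting logic would have to be set with clinicians.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class GlucoseReading:
    timestamp: datetime
    mg_dl: float  # blood glucose in mg/dL

# Conventional target range; real thresholds must come from clinicians.
LOW, HIGH = 70.0, 180.0

def flag_readings(readings):
    """Yield a simple advisory for each reading outside the target range."""
    for r in readings:
        if r.mg_dl < LOW:
            yield r.timestamp, "low glucose: consider fast-acting carbs"
        elif r.mg_dl > HIGH:
            yield r.timestamp, "high glucose: review insulin and activity"

# Example: three simulated sensor readings, five minutes apart.
now = datetime.now()
sample = [GlucoseReading(now + timedelta(minutes=5 * i), v)
          for i, v in enumerate([95.0, 64.0, 190.0])]
for ts, note in flag_readings(sample):
    print(ts.strftime("%H:%M"), note)
```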
But how do we ensure that automation operates correctly, honestly and impartially? The main problem with AI, widely discussed around the world, is that it reproduces the prejudices, biases and beliefs of its creators, often unintentionally. A team of male programmers may not initially take a female perspective into account when training a machine learning model, just as an all-white team cannot fully understand the situation of Black people. To avoid this, diagnostic checks, experiments and corrections must be carried out. “The ALMA team approached the issue of bias more conceptually and philosophically,” says Mr. Koutsomichalis, “but they also realized that they needed measurements.”
Debates about the ethics of AI thus raise a measurement question: can ethics be quantified, so that differences between algorithms become clearly visible? Can we compile an ethical index to prevent bias? Some experts answer in the affirmative. The ALMA team responds with a study of its own, one that Mr. Koutsomichalis would like to see become more aggressive.
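One concrete way such an “ethical index” is often quantified, though the article does not say ALMA used it, is the demographic parity gap: the difference in positive-outcome rates between two groups. A minimal sketch:

```python
# Demographic parity gap: a standard fairness metric, offered here only as
# an illustration of quantifying bias, not as ALMA's actual measurement.

def positive_rate(decisions):
    """Share of positive decisions (1 = e.g. hired/approved, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """A gap near 0 suggests the model treats the groups similarly on this
    one axis; it says nothing about other kinds of bias."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Toy example: 60% approvals for group A vs. 30% for group B -> gap of 0.3.
print(demographic_parity_gap([1, 1, 1, 0, 0, 1, 0, 1, 0, 1],
                             [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]))
```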
The system rejected you but you don’t know why
Another, perhaps even more fundamental danger of AI is its opacity and inexplicability: even its developers often do not understand how some algorithmic systems reach their decisions. An AI rates your performance at work or your credit prospects negatively, and you cannot figure out why. “You have inputs, outputs, and in the middle a process that no one can understand how it works,” Anastasia Nefeli Vidaki, a member of the Black Box team, which participates in the program with a more artistic contribution, tells K. The team’s work is a walkable labyrinth through which it invites all of us to pass. “The labyrinth symbolizes what AI is for many: a complex, chaotic and hard-to-understand process. We were interested in capturing this visually, in an interactive and experiential exhibition that, in addition to the pathologies, also explores possible escape routes. Our starting point was experiential. In our working lives, especially as women, we often feel disoriented, at a dead end,” notes Ms. Vidaki.
The group, however, does not want women, or anyone else, to remain locked in this dark place. “We want to show the path that leads to the light,” she explains, “and you could say that in a way we are recreating the myth of Ariadne. Our recipe is simple but effective: the point is not to leave the black box as it is. It must be made transparent; the functions hidden inside it must be made visible to everyone. We believe strongly in education and information, as well as in art and ethics. All of these together will solve the problem. Only in this way will we see what is happening inside the black box, and finally find our way out of the labyrinth of AI.” Perhaps the road leading beyond the misunderstandings about AI begins where Ms. Dimitrova-Dimova suggests: “Let’s all understand that there is much more to discuss than whether AI will one day take over the world.”
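The article does not say how the team would technically “make the functions visible,” but one standard technique for peering inside an opaque model is permutation importance: shuffle one input feature and watch how much performance drops. A self-contained toy sketch, with a stand-in model rather than any system from the article:

```python
import random

def model(features):
    """Stand-in scorer: pretends income matters and shoe size does not."""
    return 1 if features["income"] > 50 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature):
    """Accuracy drop after shuffling one feature: large drops mark the
    inputs the black box actually leans on."""
    base = accuracy(rows, labels)
    shuffled = [r[feature] for r in rows]
    random.shuffle(shuffled)
    perturbed = [dict(r, **{feature: v}) for r, v in zip(rows, shuffled)]
    return base - accuracy(perturbed, labels)

# 200 synthetic applicants; labels follow the income rule exactly.
rows = [{"income": random.uniform(0, 100), "shoe_size": random.uniform(35, 47)}
        for _ in range(200)]
labels = [1 if r["income"] > 50 else 0 for r in rows]
for f in ("income", "shoe_size"):
    print(f, round(permutation_importance(rows, labels, f), 3))
```

Running it shows a large importance for “income” and roughly zero for “shoe_size”: the shuffle test exposes what the opaque model depends on, which is the kind of transparency the team argues for.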
The presentation of the EthicAI=FORUM 2022 projects will take place on Thursday, November 24, starting at 17:00, in the lobby of the Goethe-Institut Athens, Omirou 14-16, Athens. It will close with a panel discussion at 19:30 entitled “How can artificial intelligence be fair?”
