After all, is ChatGPT always telling the truth?

The rapid development of artificial intelligence has surprised Internet users and researchers alike. The latter, concerned about the speed of these developments, asked the tech companies in an open letter for a six-month moratorium. "K" turned to Dr. Yiannis Vlassopoulos, a mathematician who works on understanding the mathematical structure of language as neural networks learn it, to explain to us as simply as possible the mechanisms underlying artificial intelligence.

The idea, he explains, was born in the 1940s, with the neurons of the human brain as a model. ChatGPT is the most advanced such model: it successfully passes the Alan Turing test, giving its interlocutors the false impression that it is a person. However, be careful! "Currently, no one can be sure that what ChatGPT says is true."

ChatGPT is an artificial neural network (ANN) that has learned to understand and reproduce language at a level comparable to that of a human. Traditionally, there have been two approaches to artificial intelligence: ANNs and expert systems. The idea of ANNs has been around since the 1940s and is inspired by the basic characteristics of neurons in the brain. Expert systems are built to follow a set of rules in order to perform a specific task. Neural networks, in contrast, learn from examples, like children who learn to speak by listening to others rather than from rules of grammar. For many years, expert systems were considered the right path to artificial intelligence, but neural networks have proven more effective in the long run. The reason they took so long to appear is that we had to wait for the development of the Internet, which made many examples available as digital data, and for progress in computer speed.
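The "learning from examples" idea described above can be illustrated with a toy sketch: a single artificial neuron (a perceptron, one of those 1940s-era ideas) that learns the logical OR function purely from labelled examples, with no rule ever programmed in. This is a didactic illustration only, not how ChatGPT itself works; all names and values here are invented for the example.

```python
# Toy "learning from examples": a single artificial neuron learns
# the logical OR function from labelled examples alone.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # The neuron fires (outputs 1) if its weighted input exceeds zero.
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# Show the neuron the examples repeatedly and nudge its weights
# whenever it makes a mistake (the classic perceptron learning rule).
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # [0, 1, 1, 1] — OR, learned, not programmed
```

The same principle, scaled up to billions of weights and trained on text from the Internet rather than four hand-made examples, is what allows models like ChatGPT to pick up language without explicit grammar rules.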

ChatGPT is what we call a chatbot. It can be used to write stories or poetry, or for text analysis; it has applications wherever language is used. However, currently no one can be sure that what ChatGPT says is true. How can it be improved so that it always tells the truth when asked? This concerns researchers around the world. The problem is that no one knows exactly how concepts are represented inside a neural network.

Dr. Yiannis Vlassopoulos, a mathematician, explains the mechanisms behind artificial intelligence in the simplest possible way.

Artificial intelligence exists in many areas beyond language: wherever there is a lot of data that we want to understand, transform, or reproduce, for example in image generation. Software like Midjourney and Stable Diffusion can create images from nothing more than a text description, for example: "make me a pink hippopotamus on skis, in the style of Salvador Dali." Artificial intelligence can also be used to create music; a recent hit on TikTok was a song in which the voices of famous singers were generated with AI. However, in my opinion, AI will not be able to create art that is better, deeper, or more moving than that of human artists, because it does not experience the totality of the human experience. ChatGPT can also write code, which is why developers use it. AI is also expected to find applications in the search for new drugs, and we can imagine it assisting lawyers and judges, accountants and educators in the future.

The problem is its reliability. Dangerous applications unfortunately do exist. One is the possibility of its being used to produce and disseminate propaganda on a large scale. Even worse, one can create images or videos showing a real person, for example Barack Obama, doing or saying something he never did or said. There is thus a danger that it will become harder to distinguish what is real from what is not.

The question of regulation is very complex. The best outcome would probably be an agreement at the global level, as with nuclear or biological weapons, but this seems impossible. There is also the question of who has access to the technology and who controls it. Another legal issue is who is responsible if a disaster occurs as a result of AI behavior.

One of the pioneers of ANNs, Geoffrey Hinton, has said that a key reason he got into them was to understand how the human brain might work. The Nobel laureate in physics Richard Feynman likewise said, "What I cannot create, I do not understand." The neural system of the brain is more complex than an ANN, but yes, I think it is possible that the underlying principle that allows ANNs to work has to do with something that happens in the human brain as well. Another angle on the question is whether ANN technology can help decipher the data the brain produces. This is already being attempted, for example at Elon Musk's company Neuralink, where they are trying to use ANNs to connect the brain directly to computers. The most immediate potential applications concern the treatment of diseases.

Author: Joanna Photiadis

Source: Kathimerini
