
On the dark side of the internet last December, a user going by the handle “USDoD” felt proud and a little nervous at the same time. He had just written his first piece of code and presented it to his “colleagues”. “This looks a lot like OpenAI code,” remarked a more experienced user. “Yes, OpenAI helped me finish the script,” the rookie hacker replied.
It may only have been out for two months, but ChatGPT’s staggering potential and the new territory it opens up have sparked discussion around the world. Much of that discussion focuses on its misuse: it has already become a growing problem in education, since it can churn out finished papers with no regard for plagiarism or copyright rules, and questions have been raised about the security of democracy itself.
But before worrying about our political system, perhaps we should look closer at the space where OpenAI’s creation lives and operates: the internet itself. A place that has turned out to be open and inhospitable at once, easy to use but full of dangers.
Among those dangers are online fraud, malware and hacking. Most technologies may not be unethical in and of themselves, but their use certainly can be. In this context, the question arises whether ChatGPT, and artificial intelligence more broadly, has opened up a wide new field of action for those who would steal our personal data and the files we keep on our devices.
Artificial intelligence technology at the service of hackers
According to a Check Point Research report, just one month after the launch of ChatGPT, would-be cybercriminals had already found ways to exploit it. The forum user “USDoD” used it to create an encryption tool, seemingly innocuous but with potentially malicious uses; someone else generated code that can copy files from a victim’s machine and transfer them to another computer; many produced fake images and “works of art” to sell to interested buyers. The end result does not reveal whether it was created by a person or a program.

“With ChatGPT, you can create a script that will potentially copy or delete files from a device, encrypt them (so their owner can’t access them), and attempt to communicate with a remote computer. The attacker can then demand a ransom in cryptocurrency or threaten to publish those files,” notes Dr. Vassilis Vlachos, Associate Professor of Economics at the University of Thessaly.
While that prospect is hardly reassuring, the good news is that ChatGPT cannot create anything from scratch on its own. It has to be given instructions and relevant data, within the limits and parameters set by each user.
Another obstacle is the human factor: “ChatGPT cannot by itself infect a computer with malware. Someone still has to write the malicious code, ‘hide’ it in a URL, email or SMS that urges you to ‘click here’, and somehow land it on the target device,” notes Vassilis Vasilopoulos, data protection officer at ERT and APE.
So it seems that its capabilities are still too limited for it to become a truly valuable tool in the cybercriminal’s quiver. In fact, in many cases the generated script contains mistakes and needs fixing.
Gateway for young cybercriminals
On the other hand, just as “USDoD” made his first attempt, so will many others (especially younger ones) with only a passing introduction to the basics of programming be able to take advantage of ChatGPT.
Jake Moore, global cybersecurity advisor at ESET, spoke to “K” about exactly this question. From well-crafted phishing emails to the writing of malicious code, ChatGPT’s endless possibilities and uses in the hands of hackers, he says, could make it even harder to protect users and devices from the inevitable attacks. “It is incredibly easy to take advantage of, both for career criminals and for those with ambitions of moving into cybercrime, as the restrictions on its use are minimal,” he notes.
“For the moment it won’t cause problems. But the bar is being lowered and the process simplified. Where it would once take a beginner a week to build something like this, it can now be done in a day, with much less knowledge. So the pool of potential hackers is growing, because it has become easier and more accessible,” notes Dr. Vassilis Vlachos.
In fact, the University of Thessaly professor argues that even someone without malicious intent can cause “unintended disasters” by experimenting, recalling the case of the Morris worm, the first internet worm, released in 1988, which caused huge problems for the internet of its day.
For his part, Vassilis Vasilopoulos stresses that ChatGPT is still taking its first steps, but warns that if safeguards are not built into its operation and its content, it could turn into a “complex malware production mechanism”.
ChatGPT is currently available to the general public free of charge while it is being tested, trained and fed with data. At some point OpenAI will begin charging for it, while promising that security measures will be in place to ensure its proper use. Even these, however, can be bypassed: although the service is “locked” in Russia, hackers based there have already found ways around the geographic restriction.

“The biggest concern is the automation and scaling of this technology. While there are currently no official ChatGPT APIs, community-created options are available. This has the potential to industrialize the creation and tailoring of malicious websites, targeted phishing campaigns and social engineering scams, among other things,” emphasizes Jake Moore of ESET.
What ChatGPT itself says about misuse
“K” also put the problem of its use by cybercriminals to ChatGPT itself. As you will see, its Greek vocabulary still has problems; after all, out of a database of 175 billion parameters, only about 20,000 concern Greek.
“Hackers can use ChatGPT to create chatbots, analyze text, and generate fake texts for the purpose of spreading misinformation, or for malicious phishing and fraudulent applications,” it replies, among other things, in its slow and… broken Greek.
In English it is more fluent and has more interesting things to say: “OpenAI is not responsible for any misuse of its technology by third parties. The company takes steps to prevent malicious use of its technology, such as enforcing terms of use that prohibit illegal or harmful activity.”

It even offers tips on how to protect yourself from hackers and from the misuse of artificial intelligence.

A cat-and-mouse fight
AI technology, however, can also be put to good use. It can, for instance, generate and review code for developers and cybersecurity professionals, flagging potential security gaps. It can also check for us whether an image is fake or doctored.
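To make that concrete, here is a minimal sketch of such a defensive use: asking a large language model to review a code snippet for vulnerabilities. It is purely illustrative and not from the article; it assumes the official openai Python package (v1 or later), an OPENAI_API_KEY environment variable, and a placeholder model name.

```python
# Illustrative only: ask an LLM to act as a security code reviewer.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

# A deliberately flawed snippet: the query is built by string
# concatenation, a classic SQL injection risk for the model to spot.
SNIPPET = """
import sqlite3

def get_user(conn, username):
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchone()
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {
            "role": "system",
            "content": "You are a security code reviewer. List any "
                       "vulnerabilities in the code and suggest fixes.",
        },
        {"role": "user", "content": SNIPPET},
    ],
)

print(response.choices[0].message.content)
```

Run against the snippet above, a model of this kind will typically flag the injection risk and suggest parameterized queries instead, which is exactly the sort of routine check that could ease the load on human reviewers.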
Speaking of a fight between cat and mouse, Jake Moore of ESET points out that ChatGPT can also be used to reinforce antivirus and security solutions, with programs that learn intelligently to stay ahead of the threats hackers pose.
According to him, precisely because we are at the start of a new era, we must keep training the human eye to better recognize every sort of possible attack, so that people remain constantly alert to new forms of danger.
Dr. Vlachos adds that some companies and websites have already taken action and banned the use of this particular tool.
“However, these are piecemeal decisions. We are at a crossroads. What is needed is critical thinking from citizens, and training the AI itself to recognize opportunities for its misuse.”
The dangers behind the flaws of AI
ChatGPT’s unimpressive Greek is one piece of proof that AI still has a long way to go, and that it depends entirely on the data it is “fed” and on the way its developers designed it.
And while it may be reasonable to worry about AI’s potential, it is its weaknesses that raise the greatest concerns and potential risks.

Vassilis Vasilopoulos, who oversees data protection at APE and ERT, identifies five problematic aspects of artificial intelligence:
- It understands neither the rules of each language (we saw the vast difference between its English and its Greek) nor the general context of the task it is asked to help with. For example, it might be commissioned to write copy for a soft drink without being told that it is for an ad campaign. That small clarification can drastically change the final result.
- It has neither originality nor creativity. It is trained on whatever data humans give it. (In an extreme case, if someone fed ChatGPT only Ku Klux Klan material, it would return overtly racist responses.)
- It does not grasp brand uniqueness (at least for now). For example, if ChatGPT is asked to pick the best brand of chocolate, it will not recommend a single brand. This matters greatly for the technology’s potential use in marketing.
- It does not understand its audience, their needs and their interests. It cannot personalize its answers to the person it is addressing.
- It lacks authenticity, and therefore various legal and ethical issues arise from its use.
Dr. Vassilis Vlachos of the University of Thessaly calls for experts from the fields of law, ethics and the other social sciences, alongside technologists, to take part in the development of artificial intelligence technologies.
He also stresses the need for greater transparency about the data from which ChatGPT draws its vocabulary, as well as about how its reasoning is programmed.
For example, AI can be used to decide whether someone gets a loan, whose résumé gets rejected, which prisoner is granted leave, and so on. By what criteria will it decide?
“It is a ‘black box’. We do not know what route it took to arrive at each answer or solution,” says the professor.
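A toy sketch of our own (not the professor’s) makes the point concrete: a small neural network trained on made-up past loan decisions will happily answer yes or no for a new applicant, yet its “criteria” are nothing more than matrices of learned weights. The data, feature names and model choice below are all invented for illustration.

```python
# Toy illustration of the "black box" problem (invented data).
# Requires scikit-learn: pip install scikit-learn
from sklearn.neural_network import MLPClassifier

# Hypothetical applicants: [income in thousands, years employed, open debts]
X = [[20, 1, 3], [55, 8, 1], [33, 2, 4], [70, 12, 0], [28, 3, 2], [45, 6, 1]]
y = [0, 1, 0, 1, 0, 1]  # past decisions: 1 = loan approved

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X, y)

applicant = [[40, 4, 2]]
print(model.predict(applicant))  # a yes/no verdict for a new applicant...
print(model.coefs_[0])           # ...justified only by opaque weight matrices
```

Nothing in those printed weights answers the question of criteria, which is precisely the transparency gap the professor describes.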
Source: Kathimerini
