
A growing number of scientists and researchers are expressing concern about academic papers and exam essays written with artificial intelligence (AI) systems. Apparently, AI can pass exams or write essays for students who want to cheat, and can even produce entire scientific papers that read as if they were written by real researchers (see, for example, here, here, or here).
How can we protect ourselves from such fraud? Shouldn’t we set some limits? Should we ban reports and essays? Should we ban the publication of articles altogether, since this has gone too far? The answer “yes!” comes immediately from the very people who created the problem. The problem is not that artificial intelligence can write texts that fool scientists, but that those texts are still empty chatter without substance. They are fragments and mixtures of textbook definitions, of general and vague statements. Yes, they can simulate scientific expertise. But only in front of those who have no such expertise themselves! And who today see themselves outclassed in competence by what amounts to (the horror!) a simple web page. A simple web page, with no file of “BDI research papers”, no “Hirsch index greater than 2”, no affiliation with a department or a rectorate, no connection to Professor Kutare of Kutare and Kutare.
Universities are only threatened by AI where the academic community does not know the difference between genuine scientific contributions and more or less plain nonsense.
Do we expect students to write essays that merely rehash, slightly rearranged, the wooden language we gave them in class, without showing any form of independent thinking? Then yes, AI is giving us what we deserve: a sinister travesty, and the chance to become even more worthless than we already were.
Do we accept vague and grandiose texts from colleagues about how important their “field of research” is… when in reality they only know how to clumsily copy some of the experiments of real researchers? Then we deserve to have AI-generated fraud show us for what we are: weak, irrelevant, incompetent, outdated, useless.
In the journal ACS Energy Letters, published by the American Chemical Society, a group of researchers from Amsterdam and Cambridge illustrated the danger with an example of a scientific article written by AI at their request. It runs to about nine pages of text on lead halide perovskites for LED devices. It is not a problem if you don’t know what perovskites are, or what halogens are, or what a lead atom is and how it differs from an ion, or how an LED works. Or if you know only a little, vaguely. The AI article’s text on this topic is laid out quite clearly and convincingly, without obvious errors. It’s just… a story, that’s all. Words, without any concrete picture, without expressing or generating any new idea. That is worthy of a Wikipedia article, which is no small feat, but it is not a scholarly article either. It talks about complex, poorly understood chemical structures and reactions, but does not actually show any of them. Instead of any scientific illustration of its own, it contains only an image of a puddle with what appears to be discarded glass in it (perhaps a “solar panel”). It talks about technical performance and toxicity, things that can be measured, counted, quantified, yet it has almost no numbers, no equations, no concrete measurements, just words. It gives only three numbers: 1978, 2014… and the permissible lead limit according to the US Environmental Protection Agency. Even the Wikipedia article would say more specific things.
A very large part of the academic community (and not only here) does not know the difference between a real scientific expert and a fake one. Between a genuine scientific contribution and a forgery. Between a researcher’s genuine record and a dossier prepared for a job competition, padded with imitations of professional competence. For them, the article showcased in ACS Energy Letters is a real threat, just as light is to a vampire. Their place is in the dark, where their incompetence cannot be seen. So yes, you will see them complaining about AI growing out of control. You will usually find them lamenting the good old days, what has become of this world, that there is no longer any respect for teachers and researchers. Because in the good old days of… that’s the word, you could be an honorary member of the Academy and president of the country, and no one could point out that you hadn’t even finished school, let alone deserved an academic title.
The good news is that artificial intelligence can take this kind of essay-writing off our hands. It can save us time so that we can focus our energy on generating genuinely new ideas, new concepts, and measurements that others have not made or understood. It can also help us better understand what a student exam should look like: not a poetry recitation contest, not a contest of copying and remixing wooden text, but a test of thinking, understanding, research and information skills, and the ability to apply theoretical ideas in practice. That takes hard work, too. Read the rest of the article on Contributors.ro
Source: Hot News

James Springer is a renowned author and opinion writer, known for his bold and thought-provoking articles on a wide range of topics. He currently works as a writer at 247 news reel, where he uses his unique voice and sharp wit to offer fresh perspectives on current events. His articles are widely read and shared, which has earned him a reputation as a talented and insightful writer.