The Cambridge Dictionary has announced that its word of the year for 2023 is “hallucinate”, after the term received an additional definition relating to the production of false information by artificial intelligence (AI), DPA and PA Media report, as cited by Agerpres.

Cambridge Dictionary website. Photo: Transversospinales / Dreamstime.com

AI hallucinations, also known as confabulations, sometimes sound absurd, but they can also appear entirely plausible even when they are factually inaccurate or patently illogical.

The traditional definition of the word “hallucination” refers to seeing, hearing, feeling, or smelling something that is not there, usually due to a medical condition or the administration of a drug. A new additional definition from the Cambridge Dictionary says:

“When a form of artificial intelligence (a computer system that has some of the qualities of a human brain, such as the ability to produce language that appears human) hallucinates, it creates false information.”

Talk of AI “hallucinations” and the technology’s shortcomings is growing

After an initial surge of interest in generative AI tools such as ChatGPT, public attention has shifted to the limitations of the technology and ways to overcome them.

Artificial intelligence tools, especially those using large language models (LLMs), have proven capable of generating believable language, but often do so using false, erroneous or fabricated “facts”. They “hallucinate” in a confident and sometimes believable way.

“The fact that artificial intelligence can ‘hallucinate’ reminds us that humans still need to use their critical thinking skills when using these tools,” said Wendalyn Nichols, publishing manager at Cambridge Dictionary.

“Artificial intelligence systems are fantastic at digging through large amounts of data to extract specific information and consolidate it,” she added. “But the more original you ask them to be, the likelier they are to go astray.”

“Human expertise is arguably more important, and more in demand, than ever in creating the authoritative and relevant information on which LLMs can be trained,” Nichols noted.

Ethicists are also interested in what artificial intelligence reveals about us

AI hallucinations have already had real-world consequences. A law firm in the United States used ChatGPT for legal research and ended up citing several fictitious cases in court. And in Google’s promotional video for its Bard chatbot, the AI tool made a factual error about the James Webb Space Telescope, a mistake that wiped roughly $100 billion off the company’s market value.

The new definition illustrates a growing trend to anthropomorphize AI technology, using human metaphors when we talk, write or think about machines.

“The widespread use of the term ‘hallucinate’ in relation to the errors produced by systems like ChatGPT offers a fascinating insight into how we think about artificial intelligence and how we anthropomorphize it. Inaccurate and misleading information has always existed, whether as rumours, propaganda or ‘fake news’,” said Henry Shevlin, an expert on the ethics of artificial intelligence at the University of Cambridge.

“Although hallucinations are usually considered a product of the human mind, ‘hallucinate’ is an evocative verb that implies a subject experiencing a disconnection from reality. The choice of language reflects a subtle but profound shift in perception: it is the AI, not the user, that is ‘hallucinating’,” he explained.

“While this does not indicate a widespread belief in the sentience of artificial intelligence, it does highlight our willingness to attribute human characteristics to it. As this decade progresses, I expect our psychological vocabulary to expand further to encompass the strange abilities of the new intelligences we are creating,” Shevlin said.

Article photo: © Transversospinales | Dreamstime.com