
The Chinese authorities have drafted rules that creators of ChatGPT-style chatbots will have to follow: the responses of these conversational agents must comply with a strict system of censorship. This will not be easy, as chatbots often give wrong or strange answers.
The Cyberspace Administration of China has developed a set of rules that will apply to the various chatbots due to be launched, the New York Times writes. The rules are not yet final, but they are already strict: chatbots will have to obey the same censorship restrictions that websites and online apps cannot break.
Answers provided by chatbots must follow the Communist Party “line”, not denigrate the leadership in any way, and generally avoid very sensitive topics such as the Tiananmen Square Massacre (1989).
Chatbot responses will have to “reflect socialist values” and avoid information that could in any way “undermine” the power of the state.
China has created a gigantic system of censorship on the Internet, millions of people work as censors, and technology companies have been able to grow, but only by helping the state implement censorship.
Engineers in China have been working for some time to make chatbots comply with such rules, but it is difficult: it is perfectly normal for these AI programs to give wrong answers or "make up" things, a consequence of the datasets the models are trained on and the sources their answers draw from.
Chinese giants such as Baidu or Alibaba are working hard on chatbots. However, strict censorship rules will make implementation much more difficult, and many programmatic changes will have to be made.
Chatbots are software that use technologies such as machine learning to communicate in language that is as close to normal conversation as possible. While their answers are grammatically correct, chatbots sometimes make factual errors and pick up conspiracy theories or misinformation. They have come into wide use in recent months.
Conversational bots still need improvement, as they do not "understand" all requests and are easily misled. A big problem is that it is not really known where they get their answers from. Image-generating bots raise problems of their own, because they are trained on pictures used without rights, taken by many people who never gave their consent.
Photo source: Dreamstime.com
Source: Hot News

Lori Barajas is an accomplished journalist, known for her insightful and thought-provoking writing on the economy. She currently works as a writer at 247 news reel. With a passion for understanding the economy, Lori's writing delves deep into the financial issues that matter most, providing readers with a unique perspective on current events.