On April 21, 2021, the European Union (EU) put on the table a pioneering measure for the regulation of Artificial Intelligence (AI). After three years of consultations, Brussels established the categorization of these systems according to their risk, and their prohibition when that risk is "inadmissible", as with mass surveillance, facial recognition or the social credit system used in China. However, the rapid emergence of generative models such as the popular ChatGPT has torpedoed the original proposal, forcing European legislators to study ways to bring AI under control.

In little more than four months, this technology has managed to spread dizzyingly into more and more aspects of our lives, sneaking into schools and multiple jobs. Its ability to simulate human conversation and to create fictitious images that seem increasingly real has fascinated many, but it has also aroused fear among experts due to its potential impact on areas such as the labor market, bias and misinformation. "What has been done with ChatGPT is reminiscent of Tesla: putting their cars on the market and correcting them while people drive. That's why they have more accidents than others," explains Lorena Jaume-Palasí, an expert in AI, philosophy of law and the ethics of technology.

The speed of this emergence has forced the EU to slow down the processing of the law in order to add new control mechanisms. The European Parliament will vote on it at the end of April, but even if approved it could take up to two years to enter into force. Although it is unclear what it will ultimately look like, the two lawmakers spearheading the legislation have proposed categorizing generative AI as "high risk," which would subject companies to stricter risk management and transparency requirements. A recent investigation by the NGO Corporate Europe Observatory points out that Microsoft and Google, the tech giants leading the business race for AI, are pressing Brussels hard not to impose that risk label on their services. For his part, the European Commissioner for the Internal Market and Services, Thierry Breton, said on Monday that all content generated with AI "will have to be marked". "It's an extremely important issue," he said.


Ban in Europe?

Italy has been the first country in the European bloc to block its citizens' access to ChatGPT. On March 31, the Italian data protection agency denounced the "absence of a legal basis to justify the massive collection and storage of personal data in order to train its algorithms" and gave OpenAI, the creator of the application, 20 days to demonstrate that it does not violate the EU General Data Protection Regulation (GDPR).

In Spain, the issue is also a concern. Asked by El Periódico de Catalunya, of the Prensa Ibérica group, the Spanish Data Protection Agency (AEPD) confirmed that it will request the inclusion of this debate in the next plenary session of the European Data Protection Board, to be held in April, so that "harmonized actions" can be launched and "coordinated at the European level".

On this point, expert opinion is divided. "Banning is a somewhat radical option. It worries me because it reinforces the role of Europe as a brake on innovation that occurs abroad," says analyst Antonio Ortiz. "The regulation of aeronautics means that if a plane crashes, everyone investigates it so that no more crash. It is not a perfect sector, but it is better than what happens with AI," says Ariel Guersenzvaig, a researcher in technological ethics. "No one denies that it is something fascinating, but you have to slow down a bit."

Complaints and pressure

Since 2016, 123 laws dealing with AI have been passed around the world, 37 of which were approved in 2022, according to the AI Index report from Stanford University. Until now, however, regulatory attempts have focused more on conventional AI, applied in areas such as supply chains or health, than on tools like ChatGPT or Midjourney.

Calculating the risks of these systems is complicated, since they have a very wide range of uses, some of which can benefit society. Even so, they also have harmful effects, which is prompting new lawsuits against text and image generators: from a mayor who has denounced the reputational damage caused by the falsehoods these systems produce, to artists who allege violations of their copyright.

This malaise is leading authorities to act. In Germany, the regulator is studying a restriction similar to the Italian one, which the federal government rejects. In France, the Montpellier city council wants to ban ChatGPT "as a precaution". In the UK, the government has asked regulators to set standards. China has required companies to provide details of their algorithms. Meanwhile, a lack of control in the United States has fueled a race among its corporate giants for AI dominance that critics have dismissed as "irresponsible." Even so, legislators and civil organizations are already beginning to sound the alarm.
