Table of Contents
1. AI Act Adopted by the EU
2. EU Tech Companies Oppose Stringent Rules for AI System Regulation
INTRODUCTION
The European Union’s landmark AI Act has received a mixed reaction from the region’s tech industry: some welcome the legislation’s attempt to regulate the development and use of artificial intelligence, while others worry it could stifle innovation.
- The EU recently reached a provisional agreement on rules to govern the rapidly expanding AI industry.
- The Act lays out a risk-based approach to regulating the AI systems in use across the region.
- Many worry that the rules’ strict standards could impede AI development in the region.
AI ACT ADOPTED BY THE EU
Approved on December 8, the Act establishes a risk-based framework for AI regulation, imposing the strictest rules on the systems deemed most dangerous.
The EU Commission states that systems posing “minimal risk,” such as spam filters or recommender systems, will not face regulatory scrutiny. Stringent regulations will apply to AI systems labeled “high-risk,” and those categorized as “unacceptable risk” will be prohibited.
Because the full text of the agreement has not yet been disclosed, it is not yet known which systems will be classified as high-risk. The tech industry has expressed concern about burdensome requirements for such systems, despite general approval of the regulatory approach.
High-risk systems must comply with requirements covering risk mitigation, data quality, activity logging, detailed documentation, clear user information, human oversight, and cybersecurity.
EU TECH COMPANIES OPPOSE STRINGENT RULES FOR AI SYSTEM REGULATION
There is widespread worry that these regulations could burden developers, driving skilled AI professionals out of the region and hindering EU AI initiatives.
Cecilia Bonefeld-Dahl, DigitalEurope’s director general, noted that meeting the new requirements, on top of laws like the Data Act, will demand substantial corporate resources, which will go toward legal compliance rather than hiring AI engineers.
According to France Digitale, high-risk AI projects or systems must undergo a costly and time-intensive certification process to obtain a CE label.
“Europe’s current solution amounts to regulating mathematics, which is illogical,” France Digitale said. Penalties for prohibited AI applications can reach €35 million or 7% of global annual turnover, whichever is higher; other rule violations can incur fines of up to €15 million or 3%, and supplying false information up to €7.5 million or 1.5%.
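The penalty structure pairs a fixed amount with a share of global annual turnover, and the applicable ceiling is the higher of the two. A minimal sketch, using the tier figures cited above (illustrative only, not legal advice):

```python
# Fine ceilings per violation tier: (fixed amount in euros, share of
# global annual turnover). Figures are those cited in the article.
PENALTY_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "false_information": (7_500_000, 0.015),
}

def max_fine(tier: str, global_annual_turnover: float) -> float:
    """Return the maximum possible fine: the higher of the fixed
    amount and the turnover-based amount for the given tier."""
    fixed, pct = PENALTY_TIERS[tier]
    return max(fixed, pct * global_annual_turnover)

# For a company with €1bn turnover, the prohibited-AI ceiling is
# 7% of turnover (€70m), which exceeds the €35m fixed amount.
print(max_fine("prohibited_ai", 1_000_000_000))  # → 70000000.0
```

For smaller companies the fixed amount dominates: with €100m turnover, 3% is only €3m, so the €15m fixed ceiling applies for other violations.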