European Union takes the global lead in AI legislation
By Martyn Warwick
Aug 6, 2024
- World’s first AI law comes into force, focusing on a pathway to ensure human-centric and safe development of the technology
- The Act is also the basis of a harmonised internal market for AI across all member states
- AI systems posing unacceptable risks to be banned within six months
- The majority of the sector is categorised as minimal risk, requiring the lowest level of oversight
The European Artificial Intelligence Act (AI Act), the world's first extensive and far-reaching legislation on AI, has come into force. Its purpose is to ensure that AI developed and used in the European Union’s member states is trustworthy, and thus the Act is hedged around with comprehensive safeguards to protect people’s fundamental rights. The law was three years in the making, having been first mooted by the European Commission (EC) in 2021 and agreed by the European Parliament in December 2023. It delineates the legal obligations of developers and deployers with regard to specific uses of AI.
The Act further prepares the ground for the establishment of a harmonised internal market for AI across all EU member states, encouraging the uptake of AI technology and the creation of “a supportive environment for innovation and investment.” It is also designed to reduce administrative and financial burdens for business in general, and for small and medium-sized enterprises (SMEs) in particular. In due course, we’ll see how that particular intent turns out in practice in those parts of the EU where bureaucracy is still the be-all and end-all of daily existence. Names could be named, but the people who live there know who they are.
The new AI Act is part of a broader, overarching European AI strategy that includes the AI Innovation Package – measures to support European startups and SMEs in the development of trustworthy AI that respects EU values and rules – and the Coordinated Plan on AI, which aims to increase investment in AI, implement AI strategies and programmes, and align AI policy to prevent fragmentation within Europe. The Act addresses the structure of AI applications, prohibits AI practices that pose unacceptable risks and sets parameters and controls for high-risk applications via governance structures set at both the European and the individual member state level. It also categorises AI systems into four tiers: unacceptable risk at the top, followed by high risk, then cascading down the stack via limited risk to minimal or no risk. Each category is subject to specific obligations that become more stringent as the risk level rises, as sketched below.
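To make that tiering concrete, here is a minimal illustrative sketch in Python. It is our own, not anything prescribed by the Act: all names are hypothetical and the obligation lists merely paraphrase the descriptions that follow in this article.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no obligations under the Act

# Hypothetical mapping of tiers to the obligations described below.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited within six months"],
    RiskTier.HIGH: [
        "risk-mitigation system",
        "high-quality datasets",
        "activity logging",
        "comprehensive documentation",
        "clear user information",
        "human oversight",
        "robustness, accuracy and cybersecurity",
    ],
    RiskTier.LIMITED: [
        "inform users they are interacting with AI",
        "label AI-generated content",
    ],
    RiskTier.MINIMAL: [],  # voluntary codes of conduct only
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))
```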
The great pyramid of AI risks
The category of unacceptable risk covers AI systems deemed to be a clear threat to the fundamental rights of people. This includes those that “manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance encouraging dangerous behaviour of minors, systems that allow ‘social scoring' by governments or companies, and certain applications of predictive policing.”
Additionally, some types of biometric systems will be prohibited, “for example emotion recognition systems used at the workplace and some systems for categorising people or real-time remote biometric identification for law enforcement purposes in publicly accessible spaces.” Any such AI systems will be banned in their entirety.
AI systems regarded as high-risk will have to comply with strictly enforced requirements, including risk-mitigation systems, high-quality datasets, the logging of all activities, comprehensive documentation, clear user information, human oversight at all times and under all conditions, and a high level of security, robustness, accuracy and cybersecurity. The EC cites as examples of high-risk AI systems those “used for recruitment, or to assess whether somebody is entitled to get a loan, or to run autonomous robots”, as well as critical infrastructure, such as transport, that could put the life and health of citizens at risk, and educational or vocational training that may determine access to education and the professional course of someone’s life (including the scoring/marking of exams).
Also included are the safety components of products, such as AI applications in robot-assisted surgery (quite a handy one, that), and the management of workers and access to self-employment, which includes areas such as CV-sorting software for recruitment procedures. Other sectors covered are essential private and public services – here, the example given is credit scoring denying citizens the opportunity to obtain a loan – and law enforcement that may interfere with people’s fundamental rights, such as the evaluation of the reliability of evidence. Other high-risk examples are AI applications used in migration, asylum and border-control management, and those attendant on the administration of justice and democratic processes.
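Of the duties listed above for high-risk systems, logging and human oversight are the most mechanical, and a toy sketch may help. The Python below is purely illustrative (the Act mandates outcomes, not implementations, and every name here is hypothetical): it shows one way a deployer might record each decision a high-risk system makes, together with the human who reviewed it, for later audit.

```python
import json
import logging
from datetime import datetime, timezone

# Purely illustrative: one way a deployer of a high-risk system might
# satisfy the "logging of all activities" duty. Names are hypothetical.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def log_decision(system_id: str, inputs: dict, output: str,
                 human_reviewer: str | None) -> None:
    """Append an auditable record of a single AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        # Human oversight: who, if anyone, reviewed this decision.
        "human_reviewer": human_reviewer,
    }
    logging.info(json.dumps(record))

log_decision("cv-screener-v2", {"applicant_id": "A123"},
             "shortlisted", human_reviewer="hr.lead@example.com")
```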
The limited-risk category covers AI systems that pose risks associated with a lack of transparency in their use. The new Act introduces specific transparency obligations to ensure that humans are informed when they are using systems such as chatbots, so they can decide whether or not to continue with the interaction.
In addition, certain AI-generated content, including deepfakes, must be labelled as such, and users must be informed when biometric categorisation or emotion-recognition systems are being used. Furthermore, providers will have to introduce systems that mark synthetic audio, video, text and image content and tell users that it has been artificially generated or manipulated.
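The Act does not prescribe any particular labelling mechanism, but as a hypothetical sketch of the duty, a provider might attach a machine-readable disclosure to everything its model emits:

```python
from dataclasses import dataclass

# Hypothetical sketch of the transparency duty: every piece of
# synthetic content carries an explicit AI-generated disclosure.
@dataclass
class LabelledContent:
    body: str
    ai_generated: bool = True
    disclosure: str = "This content was generated by an AI system."

def label(generated_text: str) -> LabelledContent:
    """Wrap model output with an AI-generated disclosure."""
    return LabelledContent(body=generated_text)

item = label("Quarterly results exceeded expectations...")
print(item.disclosure)
```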
The minimal-risk category applies to the great majority of today’s AI systems and includes AI-enabled recommender systems and spam filters. Such systems, though a bane for citizens across the EU (and the rest of the world), are not subject to the main obligations defined in the AI Act because, unconscionable nuisance and intrusion though they may be, they are categorised as posing no risk to the rights and safety of EU citizens.
Companies “can voluntarily adopt additional codes of conduct”, but the queue of those waiting to sign up is conspicuous by its absence.
To complement the above system, the AI Act also applies rules to “general-purpose AI models”, defined as being “highly capable of performing a wide variety of tasks, such as generating human-like text”. General-purpose AI models are increasingly used as components of AI applications, and the AI Act “will ensure transparency along the value chain and address possible systemic risks of the most capable models.”
An excellent start: The new AI law is pragmatic, sensible and easily adaptable as AI evolves
Now that the AI Act is EU law, member states have until this time next year to designate “national competent authorities” to oversee the application of the rules for AI systems and carry out market surveillance activities. The EC’s AI Office will be the key implementation body for the AI Act at the pan-EU level, as well as the enforcer of the rules for general-purpose AI systems. AI systems identified as posing unacceptable risk will be subject to full enforcement of the ban in six months’ time. To bridge the transitional period before complete implementation of the Act, the EC has introduced the “AI Pact”, which “invites AI developers voluntarily to adopt key obligations of the AI Act ahead of the legal deadlines.” Oh dear.
Implementation of the Act will be supported by the European Artificial Intelligence Board (EAIB), which will police the Act to ensure its uniform application across member states. It will oversee a panel of independent AI experts that will provide technical advice and input on enforcement of the new law, and it will be particularly responsible for issuing alerts to the AI Office on risks associated with general-purpose AI models.
The AI Act does not displace the EU’s GDPR: the rules on the protection of personal data, privacy and the confidentiality of communications will continue to apply to the processing of personal data in connection with AI systems.
Despite the scope of the legislation, penalties for those found to be in breach of it seem very limited. Companies and organisations found to have broken the law will face fines of up to 7% of global annual turnover where violations of banned AI applications are concerned, up to 3% of annual turnover for violations of other obligations, and up to 1.5% for supplying incorrect information.
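For a sense of scale, here is that arithmetic applied to an invented firm. The turnover figure is hypothetical; the percentages are those quoted above.

```python
# Illustrative only: maximum fine ceilings from the percentages the
# Act sets, applied to an invented global annual turnover.
TURNOVER_EUR = 10_000_000_000  # hypothetical 10bn euro global annual turnover

CEILINGS = {
    "banned AI applications": 0.07,            # up to 7% of turnover
    "other obligations": 0.03,                 # up to 3%
    "supplying incorrect information": 0.015,  # up to 1.5%
}

for violation, rate in CEILINGS.items():
    print(f"{violation}: up to EUR {TURNOVER_EUR * rate:,.0f}")
# banned AI applications: up to EUR 700,000,000, and so on down the list
```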
Yeah, that’ll put the fear of God into ‘em. The new guard dog will need bigger and better teeth.
- Martyn Warwick, Editor in Chief, TelecomTV