Today is the first day on which companies doing business in the European Union must comply with some provisions of the AI Act.
No matter where on the globe you’re headquartered, if your use of AI touches any of your dealings in the European Union (anywhere “its use has an impact on people located in the EU,” as the European Commission puts it), then today is the day you need to be compliant.
The EU’s AI Act outright bans some uses of AI and heavily regulates many others. Getting caught running banned applications will be quite costly once the EU begins actually enforcing the law in August, with fines of up to €35 million (about $36 million US) or 7% of annual revenue from the previous fiscal year, whichever is greater.
The act, which the European Parliament approved in March and which officially went into force August 1, breaks AI use into risk-based levels, with the two lowest levels receiving little to no regulation. Minimal-risk applications, such as AI-enabled video games or spam filters, aren’t regulated at all, although certain uses might be covered under other legislation. Limited-risk uses come with a small degree of oversight: chatbots, for example, must make users aware they’re interacting with a machine.
Allowed But Regulated Uses
High-risk AI systems, on the other hand, will be highly regulated going forward and can’t be marketed unless they comply with the law’s requirements. Examples of this level include, but are not limited to: critical infrastructure that could put the life and health of citizens at risk; educational or vocational training that may determine access to education and the professional course of someone’s life; safety components of products; and law enforcement tools that may interfere with people’s fundamental rights.
According to the European Commission, the latter category includes biometric identification systems, which are considered high-risk and subject to strict requirements. The Commission adds that “the use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.”
The cops do have some wriggle room, however:
“Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.
“Those usages are subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the data bases searched.”
Banned AI Uses
Topping the list are uses that the EU has found to pose an unacceptable risk, which are banned. These include uses for the following purposes:
- Exploitation of vulnerabilities of persons, manipulation, and use of subliminal techniques.
- Social scoring for public and private purposes.
- Individual predictive policing based solely on profiling people.
- Untargeted scraping of the internet or CCTV footage for facial images to build up or expand databases.
- Emotion recognition in the workplace and educational institutions, unless for medical or safety reasons (e.g., monitoring the tiredness levels of a pilot).
- Biometric categorization of natural persons to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation. Labeling or filtering of datasets and categorizing data in the field of law enforcement will still be possible.
- Real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions.
Many Have Signed the ‘AI Pact’
In September, the European Commission announced that 100 companies, including the likes of Amazon, Google, Microsoft, and OpenAI, had signed the AI Pact, an initiative in which participating companies commit to at least three core actions:
- Adopting an AI governance strategy to foster the uptake of AI in the organisation and work towards future compliance with the AI Act.
- Identifying and mapping AI systems likely to be categorized as high-risk under the AI Act.
- Promoting AI awareness and literacy among staff, ensuring ethical and responsible AI development.
Since that initial announcement, the number of companies getting on board has risen to 168.
The next big date on the timeline for the implementation of the AI Act is August 2, which is when many of the rules associated with the law are scheduled to go into effect. It’s also when enforcement of the rules that went into effect today is expected to begin.
Christine Hall has been a journalist since 1971. In 2001, she began writing a weekly consumer computer column and started covering Linux and FOSS in 2002 after making the switch to GNU/Linux. Follow her on Twitter: @BrideOfLinux