Check out this excellent piece written by Kelvin Graham
So, who’s heard of the EU Artificial Intelligence Act (EU AI Act)? Has it had any effect since phase one came into force on 1 August 2024? And will “unacceptable risk” systems be prohibited when phase two arrives this August?
There’s a very good reason why you might not have heard of the EU AI Act or seen any noticeable change. That’s because, after Brexit, the Act does not automatically apply in the UK.
However, compliance is required for ‘cross-border trading’. Therefore, the Act does apply to UK businesses providing:
· AI systems in the EU, or
· AI system outputs that are used within the EU
What’s it all about?
The aim is to make AI development safe, trustworthy, and transparent. Yeah, right, I hear you say. Bit late to the party, genie out of the bottle, and all that!
The EU AI Act is supposed to protect users from “harmful or high-risk” applications by ensuring “accountability” for how AI is designed and used within clear and consistent rules.
Who’s not sick of seeing an endless torrent of spammy AI emails, social messages, ad copy, site copy, OOH ads, images and videos?
Or worse, that feeling in the pit of your stomach when you realise your personal data is being quietly scraped each time you open an AI app and let it into your daily work/life routines.
The rules of compliance fall into 4 ‘risk’ categories, each with its own rules to be followed.
The 3 main ones are:
🤖 Unacceptable Risk - Prohibited
Aimed at ‘manipulative’ systems:
- Exploiting vulnerabilities
- Social scoring by public authorities
- Biometric categorisation based on sensitive characteristics
- Untargeted facial-image scraping for ‘recognition’ databases
- ‘Emotion recognition’ in workplaces or schools
🤖 High Risk - Heavily Regulated
Strict requirements are imposed upon:
- Risk management
- Data governance and documentation
- Human oversight
- Accuracy/robustness/cybersecurity
- Post-market monitoring
- Safety components
- Critical infrastructure
- Employment and education
- Migration/asylum
- Access to essential services
🤖 Limited Risk - Transparency needed
Here the Act requires a clear notification:
- When a user interacts with AI, typically a chatbot
- Labelling synthetic media in specific circumstances
There’s no doubt that companies and organisations whose operations involve AI and who trade with the EU will be more likely to comply.
But what about here in the UK? And what about the growing number of businesses using AI for all types of consumer-targeted activities?
Until we see some rules in place, expect not just more fake slop everywhere you look, but also artificial dodgers picking every info and data pocket they can.