The EU is gearing up to implement its first major set of regulations on artificial intelligence, designed to protect people who’ve been harmed by the technology.
In its first-ever legal framework on AI, the European Commission aims to address and mitigate the risks of artificial intelligence. Examples include:
- Facial recognition systems which contain discriminatory bias
- Algorithms which boost and reinforce misinformation
- The targeting of children with harmful content
- Predictive AI systems which approve or reject loan applications and could unfairly impact people from ethnic minorities
All of these technologies are so far unregulated. But alongside the EU's AI Act, the AI Liability Directive will introduce new protections for citizens. For the first time, individuals who've been harmed by AI technology will be able to sue the tech companies that produce and deploy it.
According to MIT Technology Review:
“The goal is to hold developers, producers and users of the technologies accountable and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.”
What will the AI Liability Directive mean for tech?
The European Commission's publication of the draft AI Liability Directive recently sent waves through the tech industry worldwide.
Some industry experts and leading tech brands have expressed concerns that more regulation will inhibit innovation within this rapidly growing sector.
The Computer & Communications Industry Association (CCIA) has written to the European commissioners responsible for the proposed AI-related acts. The letter stated:
“Applying strict liability would put a disproportionate burden on providers as non-material damages are less predictable and more complex to quantify than material damages. This could have a chilling effect on innovation, and/or materially increase the price of software for end-users, and could potentially hinder the uptake of useful advanced software applications, including AI, by the market.”
There is also the urgent issue that many European and UK organisations are unprepared for regulatory change. As with GDPR, UK companies are likely to be affected by the EU AI Act, so they should be paying close attention to the latest developments.
Research by McKinsey has found that many companies have significant work ahead of them to address AI risks and prepare for new regulations.
In 2020, its researchers found that less than half (48%) of organisations were able to recognise AI-related regulatory compliance risks. And even fewer (38%) were taking active measures to address them. These are alarming statistics, especially considering high-profile incidents in which AI has gone awry and caused major harm.
Unless they can get up to speed with the new EU AI rules fast, organisations face potential fines of up to €30 million or 6% of global annual turnover. These penalties are even harsher than those for GDPR violations.
Looking to hire in tech or AI, or searching for a new opportunity? Get in touch with our specialist tech recruitment team here at Fairmont Recruitment to start your search.