
Bart for dataroots

Originally published at dataroots.io

EU regulations for AI

Original author(s): Jan Yperman

While the advent of advanced artificial intelligence (AI) systems in our daily lives is an absolutely thrilling prospect (and increasingly a reality), it's important to keep in mind the impact such systems could have on society if left unchecked. And that doesn't just apply to killer robots or evil corporations either. There are quite a few ways AI can do more harm than good when implemented without the proper precautions, even when deployed with the best of intentions. For example, a company might use AI techniques to select the most relevant resumes for job openings. If certain demographics are not properly represented in the data used to create this filter, however, the results will be biased. Bias in the data is just one example of a risk associated with the use of AI; there are quite a few more pitfalls to look out for.

This is exactly what the newly proposed regulations for AI in the European Union seek to address. The main goal of these regulations is to protect European citizens from the potential negative side effects of AI without limiting the potential AI has to offer. The chosen approach is therefore risk-based, regulating only AI systems that pose a significant risk to people. This allows the bulk of AI systems to be developed with minimal legal friction, letting Europe stay competitive in this field. Should these regulations become law, deploying an AI system will require an initial assessment of the potential risk it poses to European citizens. Various levels of risk are considered:

  • Unacceptable risk: e.g. exploitation of specific groups of people, social scoring, ...
  • High risk: e.g. safety components of products, critical infrastructure, the justice system, ...
  • Limited risk: e.g. chatbots, generated content (including deepfakes), ...
  • Minimal risk: Anything else.

The vast majority of the regulations are aimed at high-risk systems, as unacceptable risks are simply not allowed, and limited-risk systems have only minor requirements for deployment, such as informing users of the nature of the system they're using.

For high-risk systems, a number of boxes must be checked in order to comply. Here's a non-exhaustive list:

  • Requirements for documentation
  • Model explainability (we'll be dedicating a post on this topic, so stay tuned!)
  • Model fairness
  • Continuous monitoring of the performance of the system while it's in use
  • Keeping records for reproducibility and auditability (see the sketch after this list)

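To make the last two points a bit more concrete, here's a minimal, hypothetical sketch of what prediction logging for monitoring and auditability could look like in practice. The `log_prediction` helper, the log file path, and the logged fields are illustrative assumptions on our part, not something prescribed by the proposed regulations.

```python
import json
import time
import uuid
from pathlib import Path

# Illustrative assumption: an append-only JSON-lines log of every prediction,
# so that model behaviour can be monitored over time and audited later.
AUDIT_LOG = Path("prediction_audit.jsonl")


def log_prediction(model_version: str, features: dict, prediction, confidence: float) -> str:
    """Record a single prediction with enough context to reproduce and audit it."""
    record = {
        "id": str(uuid.uuid4()),         # unique reference for audits
        "timestamp": time.time(),        # when the prediction was made
        "model_version": model_version,  # which model produced it
        "features": features,            # the inputs used
        "prediction": prediction,        # the output given to the user
        "confidence": confidence,        # useful for monitoring drift over time
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]


# Example usage with made-up values:
if __name__ == "__main__":
    ref = log_prediction(
        model_version="resume-filter-1.3.0",
        features={"years_experience": 4, "education": "MSc"},
        prediction="shortlist",
        confidence=0.87,
    )
    print(f"Logged prediction {ref}")
```

A log like this doesn't make a system compliant on its own, of course, but it's the kind of building block the record-keeping and monitoring requirements point towards.
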
These requirements are usually already part of any well-designed implementation of these types of systems and are far from unreasonable. In fact, companies may opt to comply with the high-risk regulations even for systems that pose no considerable risk, which would then serve to certify the trustworthiness of the system.

While it may still take a couple of years for these proposed regulations to become law, it's good to be aware of the coming changes, which is why at dataroots we are already starting to incorporate these regulations into our way of working. The fact that these regulations are being drafted at all is indicative of the impact AI is starting to have on our lives. These rules will allow us to further explore this potential in a responsible way, and I for one can't wait to see what the future has in store!
