Implementing the AI Regulation: What companies need to do

The AI Regulation marks an important step in the regulation of artificial intelligence (AI) in Europe.
It aims to make the use of AI technologies ethical, transparent and safe, while at the same time promoting innovation.

Companies are facing the challenge of adapting their AI systems to the new legal requirements.
This brings with it both opportunities and considerable requirements.

In this article, you will learn what the AI Regulation means for companies, what categories of AI systems there are and what measures are required to comply with the legal requirements.

This article is for information purposes only and does not constitute legal advice.
The information refers to the state of knowledge as of September 2024.
For specific legal questions and the implementation of the AI Regulation in your company, we recommend consulting a qualified legal advisor.

Key Takeaways:

  • First comprehensive AI regulation in Europe: The regulation sets out clear guidelines for the use of AI and protects the rights of citizens.
  • Four risk categories for AI systems: These categories range from minimal risk to outright prohibition, with strict regulations applying particularly to high-risk systems.
  • Obligations for providers and operators: Companies that develop or use AI must fulfill comprehensive documentation obligations and ensure transparency.
  • Gradual introduction: The regulation entered into force in August 2024.
    Prohibited practices must be phased out by February 2025, and high-risk systems must comply with the requirements by August 2026.
  • High penalties for non-compliance: Companies that do not comply with the regulations risk fines of up to 35 million euros or 7% of their global annual turnover, whichever is higher.

Implementing the AI Regulation: What is an AI system?

An AI system is software that uses algorithms to make decisions or automate processes with a degree of independence.
Many of these systems use machine learning to improve continuously as they process data.

The AI Regulation builds on the OECD definition and describes AI as a machine-based system designed to operate with varying levels of autonomy.

It processes inputs and generates results such as predictions, content or decisions that can influence physical or virtual environments.

Examples of AI systems include speech recognition programs, image recognition technologies and automated decision-making in companies.
These technologies perform tasks that normally require human intelligence, such as analyzing large amounts of data or predicting events.

However, the ability of such systems to learn also entails risks.
Incorrect data or unconscious biases can lead to discriminatory decisions.
This highlights the need for transparency and traceability when using AI systems.
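
To make the idea of traceability concrete, here is a minimal sketch in Python of how a team might log an AI system's inputs and outputs so that individual decisions can be audited later. All names here (log_decision, DECISION_LOG, the example fields) are illustrative assumptions; the regulation does not prescribe this format.

    # Minimal sketch: logging AI decisions for later audit.
    # Names and fields are illustrative, not mandated by the regulation.
    import json
    import uuid
    from datetime import datetime, timezone

    DECISION_LOG = "ai_decisions.jsonl"  # append-only audit log

    def log_decision(system_id: str, model_version: str, inputs: dict, output: dict) -> str:
        """Record one AI decision so it can be traced back later."""
        record = {
            "decision_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
        }
        with open(DECISION_LOG, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return record["decision_id"]

    # Example: a hypothetical applicant-screening model scores a CV.
    log_decision(
        system_id="cv-screening",
        model_version="2.3.1",
        inputs={"years_experience": 7, "degree": "MSc"},
        output={"score": 0.82, "recommendation": "interview"},
    )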

What risk categories are there for AI systems in the AI Regulation?

The AI Regulation divides AI systems into four risk categories to ensure the protection of citizens’ rights and maintain ethical standards.
This risk classification is central to determining which requirements apply to the use of AI technologies in a given area; a simplified code sketch of the four tiers follows the list below.

  1. AI systems with minimal risk: These include simple applications, such as predictive-maintenance tools that analyze product data.
    They involve little risk and are subject to hardly any regulatory requirements.

  2. AI systems with limited risk: These systems, such as chatbots or product recommendation systems, are used in less sensitive areas.
    However, they are subject to certain transparency requirements.
    One example is deepfakes, which are not prohibited but must be clearly labeled as such to prevent deception.
    Transparency is crucial in these systems to ensure trust.

  3. High-risk systems: These systems are used in sensitive areas such as healthcare, law enforcement or human resources.
    Due to their potential impact on the fundamental rights of citizens, they are subject to strict regulations in terms of security, transparency and traceability.
    For example, AI systems used to select applicants must ensure that their decisions are not discriminatory.
    Another area is healthcare, where AI systems are used to diagnose or treat patients and are subject to strict safety regulations.

  4. Prohibited systems: Some AI systems are completely banned in the EU due to their harmful or manipulative effects.
    These include, for example, systems that use subliminal techniques, such as hidden advertising that manipulates users’ behavior without their awareness.
    Another example is social scoring systems that analyze and evaluate people’s behavior, which is considered a violation of privacy and fundamental rights.
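
As a rough illustration of the four tiers, the following Python sketch maps example use cases to risk categories. This lookup table is a simplification we assume purely for illustration; the legal classification depends on a system's concrete purpose and context as defined in the regulation, not on a keyword table.

    # Simplified, hypothetical sketch of the four risk tiers.
    from enum import Enum

    class RiskTier(Enum):
        MINIMAL = "minimal"        # e.g. predictive-maintenance analytics
        LIMITED = "limited"        # e.g. chatbots, deepfakes (labeling duties)
        HIGH = "high"              # e.g. recruiting, healthcare (strict duties)
        PROHIBITED = "prohibited"  # e.g. social scoring (banned in the EU)

    # Illustrative examples only; a real assessment needs legal review.
    EXAMPLE_TIERS = {
        "predictive_maintenance": RiskTier.MINIMAL,
        "product_recommendation": RiskTier.LIMITED,
        "deepfake_generator": RiskTier.LIMITED,
        "cv_screening": RiskTier.HIGH,
        "medical_diagnosis": RiskTier.HIGH,
        "social_scoring": RiskTier.PROHIBITED,
    }

    def risk_tier(use_case: str) -> RiskTier:
        """Return the illustrative tier, defaulting conservatively to HIGH."""
        return EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)

    print(risk_tier("cv_screening"))  # RiskTier.HIGH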


What are the objectives of the AI Regulation?

The AI Regulation pursues several ambitious goals to ensure the responsible use of AI in Europe.

These objectives go far beyond mere regulation and include protecting citizens’ rights, strengthening trust in AI systems and promoting innovation and competitiveness in Europe.

  1. Protection of fundamental rights and freedoms: The regulation focuses on protecting the dignity, privacy and other fundamental rights of citizens.
    AI systems must not make decisions that lead to discrimination or manipulation.
    Particularly in sensitive areas such as human resources and healthcare, it must be ensured that AI acts fairly and justly.
    The regulation sends a clear signal to companies: AI can only be successful if it protects the rights of the people affected by its decisions.

  2. Strengthening trust in AI technologies: Trust is a key factor for the acceptance of AI technologies.
    The regulation aims to ensure that AI systems are transparent and comprehensible through clear rules.
    This transparency creates trust, especially in areas such as law enforcement or medicine, where decisions can have a profound impact on people’s lives.
    Trust in AI is crucial for its broad social acceptance.

  3. Promoting innovation and competitiveness: The AI Regulation should not only regulate, but also promote innovation.
    A uniform legal framework gives companies clarity on how they can develop and use AI technologies without breaking the law.
    This promotes an environment in which companies can invest safely and make their solutions competitive on the global market.
    Europe strives to secure its technological leadership while maintaining high ethical standards.

What roles and obligations are there in the AI Regulation?

The AI Regulation defines clear roles and obligations for the various stakeholders in connection with AI systems.
These include providers, operators (called “deployers” in the regulation’s English text), importers and distributors of AI systems.

  • Providers: Companies that develop or distribute AI systems bear the main responsibility for compliance.
    They must create detailed technical documentation that explains how the AI system works and what data it processes.
    For high-risk systems, it is essential to implement comprehensive risk management to identify and minimize potential risks at an early stage.
    Providers are responsible for ensuring that their systems comply with regulatory requirements and can be operated safely.

  • Operators: Companies that use AI systems must inform users when they are interacting with an AI system.
    They must also ensure that the results of AI systems are clearly labeled and made available in machine-readable form.
    Especially in the case of systems such as deepfakes, it is important to label outputs clearly as AI products to avoid deception.
    Operators are also obliged to ensure that their employees have the necessary technical knowledge (AI expertise) to operate the systems safely.
    Regular training and the maintenance of an AI register documenting the systems in use are also mandatory (a sketch of such a register entry follows this list).

  • Importers and distributors: Companies that place AI systems on the European market must ensure that these products comply with the requirements of the regulation.
    They may only sell AI systems that comply with the applicable regulations.
    If defects are found, they are obliged to recall the affected product or withdraw it from the market.
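
The regulation mandates the documentation duty, not a specific file format, so as an assumption-laden sketch, an entry in a company-internal AI register might capture fields like the following (all names here are illustrative, not prescribed):

    # Hypothetical sketch of an entry in a company-internal AI register.
    from dataclasses import dataclass

    @dataclass
    class AIRegisterEntry:
        system_name: str           # internal name of the AI system
        provider: str              # who developed or supplied the system
        purpose: str               # what the system is used for
        risk_tier: str             # minimal / limited / high
        data_processed: list[str]  # categories of input data
        responsible_contact: str   # accountable person or team
        users_informed: bool       # transparency duty toward users
        outputs_labeled: bool      # machine-readable labeling of AI outputs

    register = [
        AIRegisterEntry(
            system_name="cv-screening",
            provider="ExampleVendor GmbH",
            purpose="Pre-selection of job applicants",
            risk_tier="high",
            data_processed=["CV text", "work history"],
            responsible_contact="hr-compliance@example.com",
            users_informed=True,
            outputs_labeled=True,
        ),
    ]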


Implementing the AI Regulation: What companies should do

Companies that use or provide AI technologies must take extensive measures to meet the requirements of the AI Regulation.
This includes the creation of technical documentation that explains exactly how the AI system works, what data it processes and how decisions are made.

This documentation must be comprehensible and accessible to both users and regulatory authorities.
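
The regulation specifies what the documentation must convey rather than how it is laid out; one possible outline, whose section names are our assumption, could look like this:

    # One possible (assumed) outline for an AI system's technical documentation.
    # The regulation defines the required content, not this structure.
    TECH_DOC_OUTLINE = {
        "system_overview": "Purpose, intended use, and general logic of the system",
        "data": "Training and input data: sources, categories, preprocessing",
        "decision_logic": "How the model produces its outputs, and known limitations",
        "risk_management": "Identified risks and the measures that mitigate them",
        "human_oversight": "How humans can monitor, intervene, and override",
        "accuracy_and_testing": "Evaluation results, metrics, and test procedures",
    }

    for section, description in TECH_DOC_OUTLINE.items():
        print(f"{section}: {description}")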

Another important aspect is ensuring that AI systems work in a transparent and non-discriminatory manner.
This is particularly important in areas such as human resources, where faulty algorithms could lead to discriminatory hiring decisions.

Special care is also required when selecting and monitoring AI systems in the healthcare sector, where incorrect diagnoses can have serious consequences.

AI Regulation: What deadlines apply to companies?

The AI Regulation is being implemented gradually, and companies should start meeting the requirements early.
The regulation entered into force in August 2024; its obligations, including measures on transparency and documentation, follow in stages.

Prohibited AI practices must be taken out of operation by February 2025.
The situation is particularly critical for high-risk AI systems, for which the rules must be fully implemented by August 2026 in order to avoid legal consequences.

What are the penalties for non-compliance with the AI Regulation?

Companies that do not comply with the requirements of the AI Regulation risk significant fines.
The penalties can amount to up to 35 million euros or 7% of annual global turnover, whichever is higher.
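
As a quick worked example of the “whichever is higher” rule: for a company with an annual global turnover of 1 billion euros, 7% comes to 70 million euros, which exceeds the fixed 35-million-euro amount, so the percentage determines the maximum. In code:

    # Upper bound of the fine for the most serious violations
    # (35 million euros or 7% of worldwide annual turnover, whichever is higher).
    def max_fine(annual_turnover_eur: float) -> float:
        return max(35_000_000, 0.07 * annual_turnover_eur)

    print(max_fine(1_000_000_000))  # 70,000,000: the 7% rule dominates
    print(max_fine(100_000_000))    # 35,000,000: the fixed amount dominates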

In addition to fines, companies that fail to meet the requirements risk being excluded from the market.
High-risk AI systems in particular are subject to strict deadlines, and organizations should ensure that their compliance measures are implemented in good time.

FAQs: Implementing the AI Regulation

What is the AI Regulation?
The AI Regulation is a legal act of the European Union designed to make the use of artificial intelligence safe, ethical and transparent.
It protects fundamental rights by providing clear guidelines for the use of AI systems, while at the same time creating a conducive environment for innovation and competitiveness in Europe.

What risk categories does the AI Regulation define?
The regulation divides AI systems into four risk categories: minimal, limited, high and prohibited.
High-risk systems in particular, which are used in sensitive areas such as healthcare or human resources, are subject to strict requirements.
Prohibited systems, such as those that manipulate behavior subliminally or perform social scoring, are not permitted in the EU.

What do companies have to do to comply?
Companies that develop or use AI technologies must create detailed technical documentation, regularly monitor their systems and ensure that they operate transparently and in a non-discriminatory manner.

In addition, companies should train their employees and clearly label the use of AI systems to ensure compliance with legal regulations.

What penalties are there for violations?
Companies that violate the requirements of the AI Regulation can be fined up to 35 million euros or 7% of their global annual turnover, whichever is higher.
There is also the threat of exclusion from the market, especially for companies that do not adapt high-risk AI systems to the regulations within the deadline.


What opportunities does the AI Regulation offer?

Despite the challenges, the AI Regulation also offers companies considerable opportunities.
Companies that implement the regulations at an early stage can strengthen the trust of their customers and business partners.

This is especially true at a time when data protection and ethical responsibility are becoming increasingly important.
Companies that proactively implement the regulation can achieve long-term competitive advantages by positioning themselves as pioneers in the field of ethical AI.

Compliance with the AI Regulation not only provides legal certainty, but also strengthens the reputation of companies that use transparent and safe AI technologies.
This creates the basis for a sustainable and ethical future in the field of artificial intelligence.

Conclusion: Tackle the AI Regulation early and implement it correctly

The AI Regulation will have a lasting impact on the business world in Europe, similar to the GDPR.
Companies that do not act now risk not only severe penalties but also their competitive position.

It is no longer enough to simply use AI – it must be used safely, ethically and in compliance with the law. One important aspect will be the development of AI expertise within the company.
Employees must be properly trained in the use of AI systems.

Companies that take action at an early stage secure legal certainty and a strategic advantage.
They show their customers that they act responsibly and rely on transparent, fair technologies.


Oliver Breucker
Artificial Intelligence Expert
