EU regulation and initiatives on artificial intelligence - a few thoughts

August 5, 2024
Eric Williamson

WHETHER we wish to embrace it or not, artificial intelligence is not going away. In fact, its impact on all our lives will only continue to increase.

Thankfully, throughout the world, regulators are working to create safety nets and guardrails to ensure the technology remains under human control.

So I thought it timely to analyse the European Union's AI regulatory framework.

1. European Union AI Regulation Bullet Points

 

1.1 The AI Act

- The EU's comprehensive AI Act received a favourable vote from the European Parliament on March 13, 2024.

- Entered into force on August 1, 2024.

- Will be fully applicable from August 2, 2026, after a two-year transition period (bans on prohibited practices apply earlier, from February 2025).

 

1.2 Key features of the AI Act

- Aims to set strict rules for the development, deployment, and use of AI systems, including the gathering and use of data to train them.

- Complements GDPR and intends to give the EU significant control over AI development, use, and regulation.

- Guided by principles of transparency, accountability, and ethics.

- Imposes bans on:

  * Real-time facial recognition in public spaces and at border posts

  * "Emotional recognition" AI used by employers or police

  * Social scoring systems (like those used in China)

  * Predictive policing

  * Indiscriminate scraping of internet photographs

- Requires transparency in AI training data sources.

- Mandates disclosure of copyrighted material used in AI training.

- Requires generative AI tools to identify themselves as machines and mark content as artificially generated.

- Prohibits the generation of illegal content (child abuse material, terrorism, hate speech, etc.).

 

1.3 Enforcement and penalties

- Fines of up to €35 million or 7% of annual worldwide turnover (whichever is higher) for prohibited AI practices.

- €15 million or 3% of turnover for non-compliance with other obligations, including those for high-risk systems.

- €7.5 million or 1% of turnover for providing inaccurate or misleading information to authorities.

 

1.4 Other initiatives

- Proposed AI Liability Directive to clarify civil liability for AI-induced damages.

 

1.5 Regulatory authorities

- European Data Protection Board

- European Data Protection Supervisor

- The EU AI Board (as outlined in the AI Act)

- AI regulatory bodies of member states (e.g., Spanish AI Supervision Agency)

- Data Protection Authorities of Member States

 

2. Comparative analysis

 

2.1 Regulatory approach

- EU: Comprehensive regulation with strict guidelines and penalties.

- UK: Lighter-touch, pro-innovation approach that prioritises economic growth.

 

2.2 Timeline

- EU: AI Act fully applicable by August 2026.

- UK: No comprehensive AI statute yet; regulator-led guidance continues to evolve.

 

2.3 Scope

- EU: Broad, horizontal regulation covering various aspects of AI development and use across sectors.

- UK: Sector-specific, principles-based guidance applied by existing regulators rather than a single law.

  

3. Implications and future outlook

 

- The EU's AI Act will likely set a global standard for AI regulation, potentially influencing policies worldwide.

- The UK's approach focuses on leveraging AI for economic growth, which may lead to a more business-friendly environment but could face challenges in addressing ethical concerns.

- Both approaches aim to position their respective regions as AI development and application leaders, albeit through different strategies.

- The divergence in approaches may lead to interesting comparisons in the coming years, potentially influencing future regional and global policy decisions.

 

Regulations in more detail...

Which European authorities are overseeing AI compliance?

- European Data Protection Board and the European Data Protection Supervisor

- The EU AI Board, as outlined in the AI Act

- AI regulatory bodies of member states, such as the newly created Spanish AI Supervision Agency

- Data Protection Authorities of Member States

 

Notably, the proposed AI Liability Directive aims to clarify civil liability concerning damages induced by AI systems.

 

The risk-based approach

The AI Act relies on a risk-based approach: different requirements apply according to the level of risk. The four tiers are set out below, followed by a short illustrative sketch of how the tiering might be modelled.

- Unacceptable risk. Certain AI practices are considered a clear threat to fundamental rights and are prohibited. The list in the AI Act includes AI systems that manipulate human behaviour or exploit individuals' vulnerabilities (e.g., age or disability) with the objective or effect of distorting their behaviour. Other prohibited examples include certain biometric systems, such as emotion recognition in the workplace or biometric categorisation of individuals based on sensitive characteristics.

- High risk. AI systems identified as high-risk must comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Examples include AI used in critical infrastructure, such as energy and transport, in medical devices, and in systems that determine access to education or employment.

- Limited risk. Providers must ensure that AI systems intended to interact directly with natural persons, such as chatbots, are designed and developed so that individuals are informed they are interacting with an AI system. Similarly, deployers of AI systems that generate or manipulate deepfakes must disclose that the content has been artificially generated or manipulated.

- Minimal risk. Minimal-risk AI systems, such as AI-enabled video games or spam filters, are not restricted. Companies may, however, commit to voluntary codes of conduct.
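
To make the tiering concrete, here is a minimal Python sketch of how an organisation might model the four tiers and the headline obligations attached to each. It is purely illustrative: the tier names and obligations are paraphrased from the Act, and nothing about the structure or naming is official.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # strict requirements apply
        LIMITED = "limited"            # transparency duties apply
        MINIMAL = "minimal"            # unrestricted

    # Headline obligations per tier, paraphrased from the Act.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["do not deploy: the practice is banned"],
        RiskTier.HIGH: [
            "risk-mitigation system",
            "high-quality data sets",
            "activity logging",
            "detailed documentation",
            "clear user information",
            "human oversight",
            "robustness, accuracy, and cybersecurity",
        ],
        RiskTier.LIMITED: [
            "inform users they are interacting with an AI system",
            "label deepfakes as artificially generated",
        ],
        RiskTier.MINIMAL: ["none (voluntary codes of conduct optional)"],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """Return the headline obligations for a given risk tier."""
        return OBLIGATIONS[tier]

    print(obligations_for(RiskTier.HIGH))

In practice, deciding which tier a given system falls into is the hard part, and it depends on the Act's annexes and forthcoming guidance rather than anything this simple.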

 

Other regulation points

The AI Act curtails threats to individual rights and freedoms, including a ban on the deployment of real-time facial recognition on European streets and at border posts (with narrow law-enforcement exceptions).

Firms that breach the rules face heavy fines (detailed below) or could be barred from trading within the EU. The measures also ban “emotional recognition” AI, such as systems used by employers or police to identify tired workers or drivers.

European Parliament members have also sought to call time on AI that undertakes social scoring, such as in China, predictive policing, algorithms that indiscriminately scrape the internet for photographs, and real-time biometric recognition in public spaces.

The Act also forces those building generative AI to be transparent about which original literature, scientific research, music, and other copyrighted materials they use to train their models.

This will enable performers, writers, and others whose work has been used by AI machines to sue if they think copyright law has been breached.

Companies deploying generative AI tools such as ChatGPT must disclose if their models have been trained on copyrighted material, making lawsuits more likely. Text or image generators, such as Midjourney, are also required to identify themselves as machines and mark their content in a way that shows it is artificially generated. They must also ensure that their tools do not produce child abuse material, terrorism-related content, hate speech, or any other type of content that violates EU law.
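
As a rough illustration of what this kind of disclosure could look like in code, here is a short Python sketch. The labelling format, class names, and the sample source are my own invention for illustration; the Act does not prescribe any particular mechanism.

    from dataclasses import dataclass, field

    @dataclass
    class GeneratedContent:
        """Model output wrapped with a machine-generation disclosure."""
        text: str
        model_name: str

        def labelled(self) -> str:
            # Prepend a human-readable disclosure to the output.
            return f"[AI-generated by {self.model_name}] {self.text}"

    @dataclass
    class TrainingManifest:
        """Record of copyrighted sources used in training, for disclosure."""
        sources: list[str] = field(default_factory=list)

        def add_source(self, work: str) -> None:
            self.sources.append(work)

    manifest = TrainingManifest()
    manifest.add_source("Example newspaper archive (hypothetical)")
    output = GeneratedContent("Here is a summary...", model_name="demo-model")
    print(output.labelled())
    print("Disclosed training sources:", manifest.sources)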

The EU rules will likely set the gold standard of AI regulation. However, the proposals were watered down during negotiations due to industry lobbying.

A requirement for foundation models, which form the basis of tools like ChatGPT, to be audited by independent experts was removed.

 

As with GDPR, the EU has attached severe fines to violations of the AI Act...

€35 million or 7% of annual worldwide turnover (whichever is higher) for using a prohibited AI practice under Art. 5 of the AI Act.

€15 million or 3% of annual worldwide turnover (whichever is higher) for non-compliance with other obligations, including the requirements for high-risk AI systems on risk management (Art. 9), data quality (Art. 10), technical documentation, accuracy, robustness, and cybersecurity.

€7.5 million or 1% of annual worldwide turnover (whichever is higher) for supplying inaccurate, incomplete, or misleading information to the competent authorities in answer to a request for information.
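
To see how the "whichever is higher" rule plays out, here is a quick Python sketch of the arithmetic (the turnover figure is a made-up example):

    def max_fine_eur(fixed_cap: float, turnover_pct: float, annual_turnover: float) -> float:
        """Applicable ceiling: the higher of the fixed cap and the
        percentage of annual worldwide turnover."""
        return max(fixed_cap, turnover_pct * annual_turnover)

    # Hypothetical firm with EUR 2 billion annual worldwide turnover
    turnover = 2_000_000_000
    print(max_fine_eur(35_000_000, 0.07, turnover))  # prohibited practice: 140,000,000.0
    print(max_fine_eur(15_000_000, 0.03, turnover))  # high-risk obligations: 60,000,000.0
    print(max_fine_eur(7_500_000, 0.01, turnover))   # misleading information: 20,000,000.0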

The legislation entered into force on August 1, 2024, and the EU hopes it will encourage tech companies to develop and promote trustworthy AI.

 

Summary

While the EU is taking a more regulatory approach to AI, focusing on ethical considerations and potential risks, the UK prioritises economic opportunities and growth. Both strategies aim to position their respective regions at the forefront of AI development and application but through different means. The success of these approaches will likely shape the global landscape of AI governance and innovation in the coming years.