Has the future of AI regulation in the UK suddenly become brighter?

August 1, 2024
Eric Williamson

With the appointment of Matt Clifford - effectively as an artificial intelligence tsar - to lead the government's AI Opportunities Action Plan, I thought now would be a good time to take a deep dive into that appointment, the plan, and how it shapes the future of UK AI regulation.


AI initiative launch

UK Science Secretary Peter Kyle has commissioned an AI Opportunities Action Plan. The plan aims to identify how AI can drive economic growth and improve outcomes for people across the UK.

This initiative puts AI at the centre of the government's agenda for change, economic growth, and public service improvement.

Leadership and execution

Matt Clifford, tech entrepreneur and Chair of the Advanced Research and Invention Agency (ARIA), has been appointed to lead this work.

Clifford will deliver recommendations to the Science Secretary in September.

The Department for Science, Innovation and Technology (DSIT) will establish an AI Opportunities Unit to implement the Action Plan's recommendations.


Objectives of the action plan

   - Accelerate AI to improve people's lives through better services and new products

   - Build a UK AI sector capable of competing globally

   - Boost AI adoption across all sectors of the economy

   - Address infrastructure, talent, and data access needs to drive AI adoption in public and private sectors

   - Develop a roadmap to identify the most significant opportunities in AI

   - Consider the UK's compute and broader infrastructure requirements by 2030

   - Explore how to make AI infrastructure available for start-ups and scale-ups

   - Develop strategies to attract and develop top AI talent in public and private sectors


Economic impact

The International Monetary Fund (IMF) estimates that AI could increase UK productivity by up to 1.5% annually. The government sees this uplift as vital for kickstarting economic growth.
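To give that figure a rough sense of scale (an illustrative calculation of my own, not one from the IMF): if a 1.5% annual productivity gain were sustained for a decade, it would compound to (1.015)^10 ≈ 1.16, meaning output per hour worked roughly 16% higher than today.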

The action plan will engage key industry and civil society figures in its development. It aims to pool expertise and seize the benefits of AI across sectors.

A restructuring will see the Government Digital Service (GDS) and Central Digital and Data Office (CDDO) become part of the Department for Science, Innovation and Technology.


Timeline

The Action Plan starts immediately, with recommendations expected in September.

This initiative represents a significant push by the UK government to position the country as a leader in AI development and application, focusing on practical implementation, economic benefits, and societal improvements.

The Artificial Intelligence (Regulation) Bill

Now, let's not forget we had the Artificial Intelligence (Regulation) Bill - the brainchild of Lord Holmes - which was dropped following the dissolution of Parliament on 30 May 2024.

The King's Speech on 17 July 2024 mentioned nothing about an AI bill, although the Cyber Security and Resilience Bill and the Digital Information and Smart Data Bill were highlighted.

The UK government issued a whitepaper on its domestic AI regulation on 29 March 2023. The paper runs through the current regulatory landscape for artificial intelligence and details the UK's ambitious plans to enhance its regulatory environment. If brought to fruition, these plans would, in the government's view, make the UK one of the best places in the world to develop and deploy AI.

The whitepaper leaves no room for doubt: the UK is dead set on securing its position as a global leader in AI. It aims to create a pro-innovation regulatory framework that (hopefully) will make the UK the most attractive place in the world for AI innovation.

The Bill, introduced in the House of Lords, seeks to “put regulatory principles for artificial intelligence into law”. Its main objective is to establish a central ‘AI authority’ to oversee the regulatory approach to AI, guided by principles of trust, consumer protection, transparency, inclusion, innovation, interoperability and accountability.

While private members’ bills rarely succeed, they are often used to generate debate on important issues, thereby testing Parliament's opinion on areas where legislation might be required.

Lord Holmes of Richmond introduced the Bill, which would set up a central regulator to coordinate and manage the government’s current sectoral approach.

He advocates the construction of an agile but comprehensive regulatory framework for AI, arguing that:

  • The “right-sized regulation will support, not stifle, innovation and is essential for embedding the ethical principles and practical steps to ensure AI development flourishes in a way that benefits us all – citizens and state”.
  • The government must legislate quickly to preserve and promote the UK’s position on innovation. Pointing to eminent bodies such as the Alan Turing Institute, he noted that the UK has the requisite knowledge and expertise on critical issues such as citizens’ rights, consumer protection and IP. 
  • Failure to create regulatory certainty risks driving businesses to align with regulatory frameworks developed outside the UK.

Other peers support expediting legislation to create the necessary conditions for innovation and economic success – particularly for sectors like life sciences, which thrive in countries with strong regulation. Regulation is also required to address particular dangers, including copyright infringement. 

The whitepaper itself indicated a clear intention by the UK government to create a proportionate and pro-innovation regulatory framework, focusing on the context in which AI is deployed rather than the technology itself.

At the heart of the UK's framework are five guiding principles that govern the responsible development and use of AI across all sectors of the economy. These principles are:

  • Safety, security, and robustness: ensuring reliable and secure AI systems.
  • Appropriate transparency and explainability: ensuring AI operations are transparent and easily understood by users.
  • Fairness: ensuring AI does not contribute to unfair bias or discrimination.
  • Accountability and governance: holding AI systems and their operators accountable for their actions.
  • Contestability and redress: providing mechanisms for challenging AI decisions and seeking redress.

Ministers have already suggested that, without regulatory oversight, AI technologies could pose many risks, including to privacy and human dignity. As a result, the government intends to design its regulatory intervention to ensure the responsible use of AI while taking care not to stifle innovation. It focuses on high-risk AI systems in specific contexts, such as medical diagnostics, critical infrastructure monitoring, or robotics.

Nevertheless, while the whitepaper highlights the focus on high-risk AI systems, it acknowledges that lower-risk AI systems may also become subject to AI-specific regulation, depending on how and in what context a system is used: for example, where changes to the system or its use elevate it to a high-risk category.

The future of the UK’s AI regulation

Despite existing safeguards, some AI risks still arise across - or in the gaps between - existing regulatory remits. To mitigate this, the UK government is trying to introduce a more streamlined regulatory landscape, taking a principles-based approach built on the following four main pillars:

  • Defining AI. The UK is currently working on a clear definition of AI to help regulators and give clarity to those creating AI technologies.
  • Context-Specific Approach. The UK understands that AI can have different impacts depending on its use, so it plans to regulate AI based on its specific context, such as self-driving vehicles or foundation models and LLMs. The UK is also launching a Foundation Model Taskforce to help build capability in this area.
  • Cross-Sectoral Principles. The UK is also creating guiding rules for regulators to follow when dealing with AI. These rules will help ensure good practice across all stages of AI development and use.
  • Central Functions. The UK plans to establish a central body to support the AI regulatory framework. Its functions will include monitoring and evaluation to ensure the framework is working well and can adapt to changes in AI technology.

The UK also sees the value in creating safe spaces where AI innovations can be tested without the usual regulatory constraints. These “sandboxes” or “testbeds” would allow innovators to experiment in a controlled environment and help the regulators identify potential AI-related risks before full-scale deployment.

The UK is exploring different ways to implement these sandboxes to ensure they effectively support innovation and regulatory understanding.

Beyond that, the UK government is also focused on the international alignment of AI regulation, both to support UK businesses in global markets and to protect UK citizens from cross-border harms. In this context, it aims to ensure that the territorial application of its AI laws remains similar to that of key existing legislation, such as the Data Protection Act 2018 and the Equality Act 2010.

Contributing to the UK's AI regulatory strategy, other measures are being lined up, such as:

  • Conducting awareness campaigns to educate consumers and users about AI regulation and the associated risks.
  • Establishing frameworks to facilitate the assessment of AI-related risks by businesses and regulators.
  • Strengthening the capabilities of regulators to oversee and enforce AI regulations effectively.

Interestingly, though the whitepaper mentions measures to strengthen the UK's enforcement framework, it does not explicitly address any changes to enforcement powers. As such, it seems we should not expect more rigorous sanctions for non-compliance, in contrast to the EU's AI Act, which establishes fines of up to 7% of global annual turnover.
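For a rough sense of what that cap means in practice (an illustrative calculation using a hypothetical company, not a figure from the whitepaper or the AI Act): a firm with a global annual turnover of €2 billion could face a maximum fine of 7% × €2,000,000,000 = €140,000,000 under the EU regime.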

Key takeaways on the UK’s AI regulation plans

To sum up, the UK's regulatory approach to AI focuses on regulating specific use cases rather than the technology itself. Consequently, sector-specific rules will dominate the regulatory landscape, though universal cross-sectoral principles will provide a framework for those rules.

Some sectors in the UK have already implemented AI governance principles and issued guidance on AI requirements. However, sectoral regulation of AI is still in its early stages, as the UK develops a framework that balances support for innovation with the protection of users' and consumers' interests and rights.

Until the envisioned sector-based approach is fully implemented, AI will continue to be governed primarily by human rights and anti-discrimination laws, a few sectoral regulations, and the UK's data protection framework.

There will not be a “one size fits all” compliance solution for business, and every AI use case will have to be evaluated through the prism of the specific industry in which it operates. While these sector-specific rules are still mostly in development, compliance with existing data protection, human rights, and consumer safety laws will both keep you compliant today and lay robust groundwork for future regulation.