With the appointment of Matt Clifford - effectively an artificial intelligence tsar - to lead the government's AI Opportunities Action Plan, now seems a good time to take a deep dive into that appointment, the plan itself, and how it shapes the future of UK AI regulation.
UK Science Secretary Peter Kyle has commissioned an AI Opportunities Action Plan. The plan aims to identify how AI can drive economic growth and improve outcomes for people across the UK.
This initiative puts AI at the centre of the government's agenda for change, economic growth, and public service improvement.
Matt Clifford, tech entrepreneur and Chair of the Advanced Research and Invention Agency (ARIA), has been appointed to lead this work.
Clifford will deliver recommendations to the Science Secretary in September.
The Department for Science, Innovation and Technology (DSIT) will establish an AI Opportunities Unit to implement the Action Plan's recommendations.
The plan's stated aims are to:
- Accelerate AI to improve people's lives through better services and new products
- Build a UK AI sector capable of competing globally
- Boost AI adoption across all sectors of the economy
- Address infrastructure, talent, and data access needs to drive AI adoption in public and private sectors
- Develop a roadmap to identify the most significant opportunities in AI
- Consider the UK's compute and broader infrastructure requirements by 2030
- Explore how to make AI infrastructure available for start-ups and scale-ups
- Develop strategies to attract and develop top AI talent in public and private sectors
The International Monetary Fund (IMF) estimates that AI could increase UK productivity by up to 1.5% annually. The government sees this as vital for increasing productivity and kickstarting economic growth.
The action plan will engage key industry and civil society figures in its development. It aims to pool expertise and seize the benefits of AI across sectors.
A restructuring will see the Government Digital Service (GDS) and Central Digital and Data Office (CDDO) become part of the Department for Science, Innovation and Technology.
The Action Plan starts immediately, with recommendations expected in September.
This initiative represents a significant push by the UK government to position the country as a leader in AI development and application, focusing on practical implementation, economic benefits, and societal improvements.
Now, let's not forget the Artificial Intelligence (Regulation) Bill - the brainchild of Lord Holmes - which was dropped following the dissolution of Parliament on 30 May 2024.
The King's Speech on 17 July 2024 made no mention of an AI bill; instead, the Cyber Security and Resilience Bill and the Digital Information and Smart Data Bill were highlighted.
The UK government, on 29 March 2023, issued a whitepaper on its domestic AI regulation. The report runs through the current regulatory landscape for artificial intelligence and details the UK's ambitious plans to enhance its regulatory environment. If brought to fruition, these plans would make the UK one of the best places in the world to develop and deploy AI.
The report leaves no room for doubt - the UK is dead-set on securing its position as a global leader in AI. It aims to create a pro-innovation regulatory framework that (hopefully) will make the UK the most attractive place in the world for AI innovation.
The Bill (starting in the House of Lords) seeks to “put regulatory principles for artificial intelligence into law”. Its main objective is to establish a central ‘AI authority’ to oversee the regulatory approach to AI concerning principles of trust, consumer protection, transparency, inclusion, innovation, interoperability and accountability.
While the success rate for private members’ bills is not very high, they are often used to generate debates on important issues, thereby testing Parliament's opinion on areas where legislation might be required.
Lord Holmes of Richmond, who introduced the Bill, proposed a central regulator to coordinate and manage the government's current sectoral approach.
He advocates the construction of an agile but comprehensive regulatory framework for AI.
Other peers support expediting legislation to create the necessary conditions for innovation and economic success – particularly for sectors like life sciences, which thrive in countries with strong regulation. Regulation is also required to address particular dangers, including copyright infringement.
The whitepaper itself indicated a clear intention by the UK government to create a proportionate and pro-innovation regulatory framework, focusing on the context in which AI is deployed rather than the technology itself.
At the heart of the UK's framework are five guiding principles that govern the responsible development and use of AI across all sectors of the economy. These principles are:
- Safety, security, and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Ministers have already suggested that, without regulatory oversight, AI technologies could pose many risks, such as to privacy and human dignity. As a result, the government intends to design its regulatory intervention to ensure the responsible use of AI while taking care not to stifle innovation. It focuses on high-risk AI systems in specific contexts, such as medical diagnostics, critical infrastructure monitoring, or robotics.
Nevertheless, while the whitepaper highlights the focus on high-risk AI systems, it acknowledges that lower-risk AI systems may also be subject to AI-specific regulation, depending on how and in what context the AI system is used - for example, where changes to the system or its use elevate it into a high-risk category.
Despite existing safeguards, some AI risks still arise across - or in the gaps between - existing regulatory remits. The UK government is trying to introduce a more streamlined regulatory landscape to mitigate this. To achieve this, it is taking a principles-based approach, resting its AI regulation on four main pillars.
The UK also sees the value in creating safe spaces where AI innovations can be tested without the usual regulatory constraints. These “sandboxes” or “testbeds” would allow innovators to experiment in a controlled environment and help the regulators identify potential AI-related risks before full-scale deployment.
The UK is exploring different ways to implement these sandboxes to ensure they effectively support innovation and regulatory understanding.
Beyond that, the UK government also focuses on the international alignment of AI regulations to support UK businesses in global markets and protect UK citizens from cross-border harms. In this context, the UK tries to ensure that the territorial application of its AI laws will remain similar to its existing framework of crucial laws, such as the Data Protection Act 2018 and the Equality Act 2010.
Other planned measures are also being lined up to contribute to the UK's AI regulatory strategy.
Interestingly, though the whitepaper mentions measures to strengthen the UK's enforcement framework, it does not explicitly address any changes to enforcement powers. As such, we should not expect more rigorous sanctions for non-compliance - in contrast to the EU's AI Act, which establishes fines of up to 7% of global annual turnover.
To sum up, the UK's regulatory approach regarding AI focuses on regulating specific use cases rather than the technology itself. Consequently, sector-specific rules will dominate the regulatory landscape, though universal cross-sectoral principles will provide a framework for those rules.
Specific sectors in the UK have already implemented AI governance principles and guided AI requirements. However, sectoral regulation for AI is still in its early stages as the UK is developing a framework that balances an innovation-supporting approach and the protection of user and consumer interests and rights.
Until the envisioned sector-based approach is fully implemented, AI will continue to be governed primarily by human rights and anti-discrimination laws, a few sectoral regulations, and the UK's data protection framework.
There will not be a “one size fits all” compliance solution for businesses; every AI use case will have to be evaluated through the prism of the specific industry in which it operates. While these sector-specific rules are still mostly in development, baseline compliance with data protection, human rights, and consumer safety laws will help ensure your current compliance and lay a robust groundwork for future regulations.