The UK has an opportunity to be world leading in AI, with a focus on ethical AI for the common good and the benefit of humanity. The government’s current approach – light-touch regulation in an effort not to dampen innovation – is well intentioned but wrong. Right-sized regulation will support, not stifle, innovation, and is essential for embedding the ethical principles and practical steps that will ensure AI development flourishes in a way that benefits us all – citizens and state.

House of Lords Recommendations on AI

Six years ago, I sat in the London offices of DeepMind with colleagues on the House of Lords Select Committee on AI, learning about neural networks and thinking about what intelligence really means. When we published our committee report, AI in the UK: ready, willing and able?, in April 2018, we concluded that the UK was in a strong position to be a world leader in the development of AI. Assuming the wider adoption of AI, we concluded that UK leadership, with ethical AI at its heart, would be a major boost to the economy for years to come.

Since that report there has been an explosion of interest in AI. ChatGPT is contributing to many of the nation’s homework projects, and it sometimes feels like every news headline or conference has an AI angle. Even within the House of Lords there has been more AI-related activity: in the last couple of months we have had two excellent Select Committee reports, on AI in Weapons Systems and on Large language models and generative AI.

Autonomous Weapons Systems

The AI in Weapons Systems Committee found that, although the Government aims to be “ambitious, safe, responsible” in its use of AI in defence, reality has not lived up to that aspiration. The committee recommends that the Government lead by example in international engagement on the regulation of AWS.

Large language models and generative AI

The Communications Committee’s report on LLMs and generative AI focused on copyright and transparency – calling on the government to support copyright holders and not allow LLM developers to exploit the work of rights-holders. The committee also urged greater transparency in appointing expert advisers as an important part of retaining public confidence.

Trust and Public Engagement

Seeking, establishing, and maintaining public confidence and democratic endorsement in the development and deployment of AI is essential. It is referred to again and again, in report after report. We know this matters, but we are not doing enough to make it happen. I have drafted this AI Regulation Bill in part to address this essential piece of public engagement.

Current Approach

Currently, the government’s ‘pro-innovation’, context-based framework for AI regulation relies on existing regulators to interpret and apply five cross-sectoral principles: 1) Safety, security and robustness, 2) Appropriate transparency and explainability, 3) Fairness, 4) Accountability and governance, 5) Contestability and redress.

Wait and See

The government hosted the AI Safety Summit and is building an AI Safety Institute, but it “will not rush to legislate”, arguing that legislative action on AI should come only “once understanding of risk has matured” (A pro-innovation approach to AI regulation: government response, GOV.UK). But this ‘wait and see’ approach means we are losing our opportunity to lead, and it raises some important questions.

First and foremost: is the UK public comfortable with AI risk ‘maturing’ before tailored laws and protections are in place? Other jurisdictions are proceeding with regulation – the EU AI Act, the US Executive Orders. Are we happy to wait and see how that goes? Will waiting make it more, rather than less, likely that we will have to align with those rules rather than designing our own?

Data Protection and Digital Information Bill

The government is currently passing two pieces of legislation relevant to the regulation of AI. It has said that the Data Protection and Digital Information Bill will clarify the rights of data subjects to specific safeguards when they are subject to solely automated decisions that have significant effects on them. I do not think it goes far enough. Trying to patch holes in this bill will not fix the democratic deficit in public sector use of AI.

Digital Markets, Competition and Consumers Bill

We also have the Digital Markets, Competition and Consumers Bill in the Lords – I have proposed an amendment that would require a gap analysis of whether current legislation properly protects consumers impacted by AI tools, products, or services. I do not expect the government to accept this amendment, and I do not think that tinkering with either of these Bills is the right way to respond to the huge and exciting new opportunity we are facing.

House of Lords, Digital Markets, Competition and Consumers Bill, Committee Stage.

“Amendment 199 suggests that all legislation concerned with consumer protection be reviewed to assess its competence to deal with the challenges, opportunities and risks inherent in artificial intelligence. It is clear that a number of the concepts and provisions within consumer protection legislation and regulation will be applicable and competent to deal with AI, but there is a huge gulf between what is currently set out in statute and what we require when it comes to making the best of what we could call this future now.”

Conclusion

All this starts to explain why I have drafted my AI Regulation Bill: to proactively engage fellow parliamentarians and the government with the ideas, and the concrete steps, we need to take to ensure we shape AI positively for the public’s benefit and lead the international community in AI’s ethical development. The Ada Lovelace Institute’s report Regulating AI in the UK, published in July 2023, called for urgency. Eight months on, I join those calls for urgency, and also for leadership. We can do this – and we must.

Related posts:

Artificial Intelligence (Regulation) Bill – Lord Holmes of Richmond MBE (lordchrisholmes.com)
