My proposed legislation – the AI (Regulation) Bill – has been designed to create the right environment for the UK’s future AI economy by focusing on ethical AI. Right-sized regulation is essential for embedding the ethical principles and practical steps that will ensure AI development flourishes in a way that benefits us all – citizens and state. My bill would put core regulatory principles of safety, transparency, fairness, accountability and redress on a statutory basis.

What is Ethical AI?

Ethical AI means aligning AI with human values and well-being, promoting fairness and minimising harm. What could be more straightforward than that? But as so often with emerging technology, we have to revisit how we understand key concepts and our shared understanding of values such as safety and fairness. What exactly does ‘fair’ mean when you are applying an algorithm to manage large amounts of data and get the best results? Two examples, from education and healthcare, help us think about this.

Pandemic A-Level Results

During the pandemic, when A-Level students were unable to take exams, their results were instead calculated by an algorithm. The estimated grades were produced by combining two pieces of information: how students were ranked in ability and how well their school or college had performed in recent years. There would be continuity, of course, but it would lock in all the existing advantages and disadvantages. Anomalies, however real or hard-working – the talented student at a low-performing school, for example – would be ignored by the system. We know this is not fair. Students protested outside Number 10 and the decision was reversed. Trusted humans took back responsibility and grades were decided by teacher predictions.
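To see why this locks in existing advantage, consider a minimal sketch of the approach described above. This is not the actual algorithm used; the names and grade data are hypothetical. It simply maps each student’s rank within their cohort onto the school’s historical grade distribution, so a student’s ceiling is set by their school’s past results rather than their own ability.

```python
# Illustrative sketch only (hypothetical data): assign grades by mapping
# each student's class rank onto the school's historical results.
def assign_grades(ranked_students, historical_grades):
    """ranked_students: names, best first.
    historical_grades: the school's past grades, one per student.
    Each student simply inherits the grade at their rank position."""
    best_first = sorted(historical_grades)  # "A" sorts before "C"
    return dict(zip(ranked_students, best_first))

# A talented student ranked first at a school that historically
# produced no top grades can never receive one:
grades = assign_grades(["Asha", "Ben", "Cal"], ["C", "D", "E"])
print(grades["Asha"])  # "C" – capped by the school's past, not her ability
```

However well Asha performs, the system has no input through which her individual ability could change the outcome – which is exactly the unfairness students protested against.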

Organ Donation Allocation

In 2018, the National Liver Offering Scheme, or NLOS, shifted the decision about who should receive donated livers from hospital transplant surgeons to an algorithm. The intent behind predictive algorithms like this one is to make consequential decisions fairer. Now, every time a liver becomes available anywhere in the UK, the algorithm produces a score for each patient on the waiting list – the Transplant Benefit Score – and whoever has the highest score nationally gets the liver. The score uses 28 variables – seven from the donor and 21 from the recipient – to decide who goes to the top of the list. Essentially, the calculation subtracts a person’s predicted survival without transplantation (their need) from their predicted survival after transplantation (utility).
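The core of that calculation can be sketched in a few lines. This is a simplified illustration, not the real NLOS model – the actual score is built from the 28 variables mentioned above, and the patient names and survival figures here are entirely hypothetical.

```python
# Illustrative sketch (not the real NLOS model): rank patients by
# predicted survival gained, i.e. predicted survival with a transplant
# (utility) minus predicted survival without one (need).
def transplant_benefit_score(survival_with, survival_without):
    """Both arguments are predicted survival in years (hypothetical)."""
    return survival_with - survival_without

patients = {
    "patient_a": transplant_benefit_score(survival_with=15.0, survival_without=2.0),
    "patient_b": transplant_benefit_score(survival_with=20.0, survival_without=12.0),
}

# The liver goes to whoever has the highest score nationally:
top = max(patients, key=patients.get)
print(top)  # "patient_a" – gains 13 predicted years versus 8
```

Note what the subtraction optimises for: the difference between two survival predictions, and nothing else. Outcomes it does not measure – such as the healthy life years lost by a young patient kept waiting – simply cannot affect the ranking.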

It has been treated as a success because the overall number of deaths of individuals on the waiting list has dropped compared with before the algorithm was introduced. But analysis also found that the improvements were primarily for older patients. Patients in the 26–39 age group have gone from waiting on average 40 days longer than patients over 60 to waiting 156 days longer. If you are under 45, no matter how ill, it is impossible to score high enough to be given priority on the list. The system doesn’t account for other outcomes, such as the healthy life years lost by young patients kept waiting, their longer-term outcomes or reduced overall life expectancy. Taking these into account might paint a very different picture of whether the algorithm is beneficial and fair.

That’s not fair.

Computer scientists and statisticians have devised numerous mathematical criteria to define what it means for a model to be fair. These definitions of ‘fairness’ (at least 21) highlight attempts to make technical sense of the complex, shifting social understanding of fairness. Seemingly technical discussions about mathematical definitions in fact connect to weighty normative questions. A core component of these technical discussions has been the discovery of trade-offs between different (mathematical) notions of fairness; these trade-offs – often between utility and individual fairness – deserve attention beyond the technical community.

But AI systems can also help address existing biases.

A recent study examined data from 129,400 people visiting websites to refer themselves to 28 different NHS Talking Therapies services across England, half of which used a chatbot on their website and half of which used other data-collecting methods such as web forms. The number of referrals from services using the Limbic chatbot rose by 15% during the study’s three-month time period, compared with a 6% rise in referrals for the services that weren’t using it. Referrals among minority groups, including ethnic and sexual minorities, grew significantly when the chatbot was available – rising 179% among people who identified as non-binary, 39% for Asian patients and 40% for Black patients.

As soon as it works it’s not called AI anymore…

As AI tools are increasingly embedded in society, it is essential that we have a consolidated and meaningful sense of the principles that govern AI systems and protect citizens from harm – and that should be on a statutory basis. It is absolutely essential that we act now to shape AI with human values, and with a shared understanding of what those values mean and how they work.

Principles Based Regulation

My proposed law sets out high-level AI regulatory principles:

  • Standard AI good practice: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.
  • Developers and deployers should: (i) be transparent about using AI; (ii) test it thoroughly and transparently; and (iii) comply with laws, including on data protection, privacy and intellectual property.
  • AI should: (i) comply with equalities legislation; (ii) be inclusive by design; (iii) not discriminate unlawfully or perpetuate such discrimination from “input” data; (iv) meet the needs of those from lower socio-economic groups, older people and disabled people; and (v) generate data that is findable, accessible, interoperable and reusable.
  • Regulation should: (i) ensure that the restrictions imposed are proportionate to the benefits and risks of the AI application; and (ii) consider whether it enhances UK competitiveness.

The standard AI good practice principles above are the same five principles proposed in the Government’s white paper, which the Government does not plan to put on a statutory footing.

Our AI Futures

The transformative potential of artificial intelligence for society, at home and abroad, requires active engagement by one and all. We have an opportunity at this point in history to shape the development and deployment of artificial intelligence for the benefit of everyone, but we must all engage in these important questions about what it is to be human and what values we want to shape our AI futures.
