End of Year Round-Up
2022 has been a busy year for the APPG AI. As the buzz around the new chatbot ChatGPT has demonstrated, this technology is developing at pace and its use cases are increasingly visible and impressive. As all the speakers at an event in Parliament this week (6th December 2022) stressed, although the speed of development, improvement and adoption has markedly increased, the issues remain the same, most importantly AI safety. Ensuring safety is not something that can be left to industry (which is largely responsible for the technological developments); it must also be considered by policy makers, and here the APPG AI has an important role to play.
The APPG AI was set up in January 2017. Lord Clement-Jones and Stephen Metcalfe MP co-chair the group, and the Big Innovation Centre provides the secretariat. The group focuses on Machine Learning, Facial Recognition, National Security, Cybersecurity, Digital & Physical Infrastructure, Data Policy & Governance, Market Intelligence, Education (skills, jobs & the future of work), Health (diagnostics, COVID, telehealth), AI in the Boardroom, Fin-tech Automation, Innovation & Entrepreneurship, Autonomous Weapons, Sustainability, Policy and Regulation Landscaping, and more. The group brings evidence, use cases and future policy scenarios into the UK Parliament while considering the economic, social, ethical and business model implications of developing and deploying AI. I am a huge supporter of its work and also a vice-chair of the group.
Most recently the group submitted evidence to the House of Commons Science and Technology Committee Inquiry into the Governance of Artificial Intelligence (November 2022). The House of Lords has also just announced that it will be conducting a special inquiry next year into the use of AI in weapons systems, which I am sure the group will feed into.
A couple of weeks ago the group held an evidence session looking at how artificial intelligence is being used in the context of health and medicines, and how it can help with research and development in this area. The previous evidence session had looked at precarious work and the digital economy. There were excellent contributions from speakers exploring the ways in which micro-decisions are made by algorithm and considering what policy makers should be doing: the need for standard definitions and better data around gig work and the gig workforce; transparency and explainability; and regulators that are empowered and clear about their objectives.
Earlier in the year we had an incredibly important session considering AI, National Security & Defence. How can AI be used in military scenarios and with autonomous weapons? What are the risks, and are there opportunities for safe military solutions? Expert witnesses gave evidence about the risks of AI in autonomous weapons and considered whether there are opportunities for better military solutions. It was a thought-provoking session, and I believe this must be the most serious use case for AI; deep consideration is absolutely essential. Currently the difference of just a pixel can be game-changing, and whether this technology means Armageddon or transformative, life-saving systems, we absolutely have to get to grips with it: debate, educate, inform, set standards, regulate. We must do more.
One of the most invigorating discussions organised by the group was an event in the summer focused on smart cities and the metaverse. As ever with new technologies, the questions centre on how to realise the potential whilst minimising risk. The panel represented a wide range of expertise, applications and approaches: smart cities as art cities, immersive art galleries, and a welcome call for decentralised storytelling to “focus on diverse voices and celebrate the wisdom of crowds and collaboration”. There was an equally important plea to think about safety in design and to consider how to be architects for the future: can the metaverse be a safe place for children that promotes learning and healthy connections? We also heard a reminder that, whilst significant, the metaverse currently has a small population; the online gaming and Discord communities are perhaps a more substantial ‘metaverse’, and, again, the issue will be interoperability! It was timely at that point to have a definition of the metaverse: a natural development of the internet, moving from 2D to 3D, built upon the “3 pillars” of XR, AI and blockchain.
Right back at the start of 2022, meetings were taking place online. One evidence session on intellectual property rights and AI asked whether algorithms should be granted status as owners of patents, and how this would affect accountability and responsibility for the outcomes of AI-driven systems. A year ago, again online, we heard evidence on the future of work. It was fascinating to hear from Prithwiraj Choudhury about his research into “work from anywhere”, which he found can be a win for workers, organisations and society. How we make the most of this potential and ensure the benefits are realised is a significant task for all of us, and as our other witnesses made clear, this will not happen as a matter of course. We heard about Uber drivers who were delighted to learn that the algorithm assigning jobs would free them from (potentially) needing the favour of a human call handler. This enthusiasm for the potential of the technology was sadly soon followed by disillusionment over worker rights. James is one of the Uber drivers who fought, and won, a five-year legal battle that established Uber was breaking UK employment law by failing to offer basic worker rights such as holiday pay and the national minimum wage. This so clearly demonstrates both the tremendous transformational potential and, simultaneously, the fact that although great good is possible, it is not inevitable.
I have long campaigned for greater flexibility and innovation in the workplace and in recruitment processes as a way of enabling individuals and creating more inclusive systems and practices. “Working from anywhere” holds incredible potential benefits for disabled people, and for people who require greater flexibility for a wide range of reasons. The Covid pandemic swept away, almost overnight, the objection that it can’t be done. That the technology isn’t available or affordable, that people can’t adapt or adjust, that it’s not a good cultural fit: all these and many other ‘reasons’ why it couldn’t be done have been comprehensively dismantled by our experiences over the past 18 months or so. When I entered the Lords, I could not have imagined that I would one day be taking part in Lords debates via Zoom. Individuals and organisations are currently wrestling with how to make work, work. For some it’s mandating a return to the office, for others getting rid of the office altogether, with almost every option in between.
This is an excellent metaphor for considering the impact of AI on our lives. It is a moment rich with opportunity, but it is up to all of us to make sure we take part in shaping a world that works for us all. There is also a huge need for public engagement. “What’s in it for me?” is the question that everyone should be empowered, enabled and encouraged to ask. We must do this so that we can realise the potential: the economic, social and psychological good for all. The right access to virtual work, the right relationship with algorithms at work, a happy, healthy, humane metaverse, a consistent approach to autonomous weapons systems, transparency frameworks, ethical frameworks; ultimately, humans at the heart and a ruthless focus on safe AI.