Like electricity or the internet, artificial intelligence (“AI”) has the potential to change our world. Whilst it may bring huge benefits, there is also a considerable risk of harm.
Many jurisdictions, including the UK and the EU, are currently considering the extent to which AI needs to be regulated and the approach to be taken. As matters currently stand, the UK looks likely to take a different approach from the EU, though time will tell if divergent approaches are indeed implemented.
I. UK approach: the UK government’s March 2023 white paper
Whilst in the UK certain pieces of legislation already have a potential impact on the employment of AI, the UK government is currently considering the implementation of a focused regulatory approach to AI, having set out its proposals in a March 2023 white paper titled “A pro-innovation approach to AI regulation”.
A regulatory framework to ensure adherence to five key principles
The proposed approach is a somewhat “light touch” one, the intention being to introduce a “context-specific” regulatory framework which will focus on the potential outcomes AI is likely to generate in particular contexts so as to determine appropriate regulation.
The framework will be underpinned by five principles as follows:
i. safety, security and robustness – AI systems should function in a robust, secure and safe way with risks being continually identified, assessed and managed;
ii. appropriate transparency and “explainability” – it must be possible to understand how decisions are made by AI;
iii. fairness – AI systems should be fair in their outcomes and use, and should comply with relevant law;
iv. accountability and governance – to ensure effective oversight of the supply and use of AI systems;
v. contestability and redress – ensuring that the outcomes of AI systems can be challenged and redress obtained.
The intention is that the principles will be applied by the various regulators already existing within the UK and charged with regulating certain industries and activities. The view is that these regulators are best placed to determine the issues and risks in their existing areas of regulation, and to act accordingly to encourage adherence to the principles. There is therefore no proposal for a single new “AI regulator” in the UK.
Nor is there a proposal for the implementation, at least initially, of new legislation to provide the principles with a statutory footing. The paper reasons that:
“New rigid and onerous legislative requirements on business could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances. Instead, the principles will be issued on a non-statutory basis and implemented by existing regulators”.
However, the possibility of legislation is reserved for the future, and it is anticipated that a statutory duty will likely be introduced requiring regulators to have due regard to the principles.
The paper does, however, recognise the need for certain “support functions” to be undertaken by government to ensure that the regulatory framework is sufficient and proportionate in promoting innovation whilst protecting against risk. These functions include monitoring and evaluating the effectiveness of the regulatory framework and the implementation of the principles; “conducting horizon scanning” so that emerging new technologies are noted early and appropriate responses implemented; and providing education and awareness to businesses and individuals to ensure that their voices are heard as the regulatory framework is developed. In addition, the paper recognises a need to promote “interoperability” with international regulatory frameworks.
Next steps
The white paper, which was open for consultation until 21 June 2023, raised a number of questions.
The government will now consider the responses received as part of the consultation process, and then intends to publish its response and issue its cross-sectoral principles to regulators, together with initial guidance for their implementation.
It will also design and publish an “AI Regulation Roadmap” with plans for establishing the central support functions referenced above. These will be provided in conjunction with certain key partner organisations outside of government.
Research will be commissioned by the government to monitor the extent to which businesses face barriers to compliance with the regulations and the best way to overcome these.
The government anticipates that this will all occur by around September 2023.
Thereafter, in the period through to March 2024, the government anticipates that it will begin to deliver the key central support functions (entering partnership agreements to do so). It will encourage the various regulators to publish guidance on how the cross-sectoral principles will apply within their remit. It will also publish proposals for a central “monitoring and evaluation framework”, which will identify metrics, data sources and thresholds or triggers for further intervention or iteration of the framework.
Commentary on the UK proposals
There are some obvious issues with the proposals, and it will be interesting to consider whether responses to the consultation process raise similar concerns:
i. whilst there is sense in the notion – detailed in the paper – that existing regulators are best placed to understand the particular industry or sector issues that arise in their own spheres of existing regulatory responsibility, it is questionable whether they possess the necessary technical knowledge of AI (and, further, the capacity to keep on top of developments in the technology) to apply the principles effectively;
ii. the risk of certain activities or practices “falling between” the gaps of regulation when the approach is to rely on existing regulators to cooperate to ensure effective and proportionate regulation;
iii. the potential for existing regulators to be “overwhelmed” by the additional burden of now having to regulate in respect of AI as well.
Being “light touch”, the general approach of the UK government is also somewhat at odds with the approach that the European Union (EU) intends to take across the 27 states comprising its membership. The intended EU approach is addressed below. One has to wonder whether the eventual EU approach, given the size of the EU, will ultimately end up influencing the UK position (i.e. will the UK approach in due course fall in line with the EU approach?).
This is especially so given that a light touch approach to regulation also appears to be at odds with recent suggestions by the UK’s Prime Minister, Rishi Sunak, that the UK could serve as the hub for a future global regulator of AI technologies, modelled on the nuclear body, the International Atomic Energy Agency (IAEA), and with recent warnings from the Prime Minister’s own special advisor on AI, Matt Clifford (see e.g. The Times, 5 June 2023, “AI systems ‘could kill many humans’ within two years”).
II. EU approach
Whilst the UK is moving towards a light-touch regulatory framework, and without the implementation of new legislation at least initially, the EU is moving towards the adoption of legislation that will apply within the EU to AI systems.
On 14 June, the EU significantly advanced the adoption of its proposed legislation that is intended to guard against the serious possible harms that may be brought about by the uncontrolled development and use of AI technologies.
Thus, the European Parliament adopted its negotiating position on what has been termed the “Artificial Intelligence (AI) Act”, which now paves the way for talks with EU member states in the form of the European Council towards the final determination and implementation of the law.
In so doing, the EU is seeking to lead the way towards the control and implementation of safeguards in respect of AI and stands a very good chance of establishing itself as the global leader in this area and as the potential determiner of global standards and principles to be applied to AI.
The legislative approach will be a risk-based one – looking to the potential risks of the technology and its use, before providing for appropriate controls.
As such, the AI Act looks likely to prohibit certain uses of AI as being simply “too dangerous”, for example employing AI for “social scoring” (that is to say, classifying people according to their social behaviour or personal characteristics). Other uses of AI are also included in the list of prohibited uses: using AI to “recognise emotions” in law enforcement, border management, workplaces and educational institutions; using AI to “scrape” facial images from the internet or CCTV footage so as to build facial recognition databases; and predictive policing systems (i.e. systems that seek to identify persons likely to commit criminal acts based on profiling, location or past criminal behaviour).
Certain “high-risk” uses of AI will also be controlled – these being uses that pose a significant risk of harm to people’s health, safety, fundamental rights or the environment. The list includes systems used to influence voters and the outcome of elections, as well as systems that rank job applicants by automatically scanning CVs.
There will also be controls in respect of “generative” AI systems – requiring disclosure that material has been generated by AI, and for there to be safeguards against the generation of illegal content. There is also to be a requirement that summaries of copyrighted data used to train such systems be made publicly available.
In striding towards the implementation of legislation that will take a firm line towards guarding against the harms that may arise from this technology, the EU is on a potential collision course with the “big tech” companies, many of which are U.S.-based (where legislative control of AI is currently some way off). However, even with leading figures in the industry calling for protections to be implemented, it remains to be seen how strong industry opposition to the proposed legislation will be. This is likely to be determined by industry perceptions as to whether the EU approach excessively prioritises controls over AI technology, with the effect that its development and use is stifled, or whether the AI Act strikes the correct balance between the benefits and opportunities and the risks.
Current issues for consideration
For now, commercial parties with UK operations who are involved in the development, supply or use of AI systems should:
i. be aware of the above, and of the fact that regulation / legislation is under consideration and is coming;
ii. keep abreast of regulatory / legislative developments;
iii. in the UK, have regard to the five principles set out in the UK government’s white paper, and plan the development and/or use of AI technologies so as to adhere to these as far as possible;
iv. keep an eye on legislative and regulatory developments outside of the UK, in particular the EU approach. In the author’s view, it seems very likely that the approaches taken in other jurisdictions will influence the approach taken in the UK, and quite possibly that, in time, a more unified global approach will need to be implemented to regulate and control these new technologies.