Good Governance Around AI – What Does It Require?

By Natalie Donovan, Counsel PSL and Head of Knowledge Tech and Digital at Slaughter & May, and Rob Sumroy, Partner at Slaughter & May

As AI adoption increases, and new AI laws, regulation and guidance are published, organisations are rightly focusing on their AI governance.

Importance of good AI governance 

Good governance will help ensure that organisations:

  • have set an appropriate risk appetite around AI – acknowledging that not benefitting from the opportunities and efficiencies which AI can offer creates its own risks;
  • understand where and how AI is being used within the business, and the rules which therefore apply; and
  • can engage with all relevant stakeholders regarding their AI use – from ensuring the board is appropriately informed of current and proposed AI use, to managing engagement with customers, employees, suppliers and (where relevant) regulators.

Issues to consider

There are a range of issues to consider around AI governance:

  • Greater board scrutiny of AI: 
    With the spotlight on AI, we can expect greater scrutiny by investors of AI-related governance frameworks and, consequently, greater accountability for the board. We are beginning to see this expectation reflected in voting guidelines: the Pensions and Lifetime Savings Association (“PLSA”) recommends in its 2024 guidelines that investors consider voting against the re-election of a director where there is evidence of “egregious conduct” around the development and deployment of AI. The PLSA also recommends that companies have a governance framework for the acceptable use of AI, implement robust data anonymisation techniques and adopt a “zero-trust” approach when selecting AI tools and third-party services.

AI is also moving up the agenda of the Financial Reporting Council (the “FRC”). In its updated 2024 Corporate Governance Code Guidance, the FRC sets out an expectation that boards will at least consider whether controls over emerging technologies such as AI are material and should therefore be monitored, reviewed and reported on under the Code. Separately, the FRC commented on AI-related annual reporting for the first time in its November 2023 Review of Corporate Governance Reporting, stating that it is important that boards have a clear view of the responsible development and use of AI within the company and the governance around it, and noting that boards may need to increase their knowledge of AI.

  • Impact of existing governance structures: 
    Unsurprisingly, existing governance structures will shape approaches to AI governance. At board level, there are likely to be one or more committees (e.g., the risk or audit committee) where AI could be managed. While many companies are setting up separate committees for AI governance (see below), these typically sit below board level. For multi-nationals, the way in which global compliance strategies are usually managed should also inform the approach to global AI compliance.
  • Where should AI ‘sit’?
    The potential opportunities around AI are many and varied, and the risks involve a range of different stakeholders. AI touches all functions within an organisation, from operations, R&D and finance to BD and HR, and everything in between.
    From a legal, regulatory and compliance perspective, AI raises many potential issues around liability and risk allocation, intellectual property, data privacy, ESG, employment rights and more. Some organisations are expanding the scope of their privacy, IP or tech procurement teams to manage AI compliance, while others are choosing to manage it through their existing compliance and risk functions (particularly those operating in regulated sectors). Some clients (albeit, to our knowledge, only a few) have created a new role and function of ‘Chief AI Officer’.

Wherever AI sits within an organisation, it is vital that all relevant stakeholders are properly engaged. To achieve this, we are seeing organisations pull together stakeholders from across the business in a newly established AI committee, council or board, operating at executive and operational level. This dedicated body reports directly to the senior executive team and, beyond that, to the board. In our experience, there is also a keen (and understandable) emphasis on such a body being driven by the business, rather than being legal or compliance led, to ensure it enables responsible AI development and deployment.

  • Managing a shifting risk and compliance landscape: 
    Legislators and regulators across the globe are grappling with how to manage AI, and regulatory approaches are developing in all major jurisdictions. At first blush there appear to be fundamental differences of approach: for example, the EU has a new cross-sector AI law (the EU AI Act – see article), while the UK is taking a sector-specific approach underpinned by a central set of principles and functions (see article). However, closer observation shows common concerns and themes emerging around principles such as fairness, transparency, lack of bias and accountability. Multi-national initiatives such as the AI Safety Summits (see our blog) and the OECD AI Principles are also aiding global compliance. The area is fast-moving, however, and it is important that governance structures enable organisations to keep abreast of, and adapt to, a changing legal, regulatory and technological landscape. We saw, for example, how ChatGPT and GenAI raised a whole new range of issues a few years ago, and the pace of technological development means that new functionality and use cases continually raise new issues to consider.

In addition to new laws and regulation, it is also important to monitor developing international standards and assurance and compliance frameworks. Many new laws (including the EU AI Act) recognise that it is not practical to set out all the rules in rigid legislation, and that standards can provide practical guidance on how to ensure AI is developed and deployed responsibly. The US National Institute of Standards and Technology’s AI Risk Management Framework and ISO/IEC 42001 have been referred to (and adopted) by many organisations for the management of AI governance and risk, and a host of new AI standards are in the pipeline. It will therefore be important to monitor developments in this space to see what becomes the market standard for compliance.

Comment

While the AI hype cycle is set to continue for some time yet, AI use is real: the vast majority of organisations are already using it in some form or other, which means the risks around AI are already real too. A proper governance structure will help manage and mitigate those risks now, while also enabling organisations to benefit from the many opportunities AI brings.