Risk Matters: Cyber Risk and AI – The Changing Landscape

By Lawrence A. Gordon, PhD

In today’s world of interconnected computer-based information systems, cyber risk has become one of the critical risk factors affecting organizations. Indeed, several studies have shown that cyber risk (i.e., the probability of being the victim of a successful cyber-attack) is one of the top risk concerns, if not the top concern, of senior executives in both private- and public-sector organizations. Auditors have also recognized the critical nature of cyber risk to organizations, as evidenced by the American Institute of Certified Public Accountants’ development of its cybersecurity risk management reporting framework. Cybersecurity risk is also a key concern to the U.S. Securities and Exchange Commission (SEC), as evidenced by its 2023 disclosure rules requiring registrants to include Item 1C (Cybersecurity) in Form 10-K and to disclose material cyber incidents in Form 8-K (under Item 1.05).

AI Models

The technical arsenal used by organizations to manage their cyber risk includes such things as encryption, access controls, intrusion detection and prevention systems, firewalls, and system restoration. Over the last two decades, AI (artificial intelligence) models have been widely used to assist organizations in implementing the above methods for preventing and responding to cyber-attacks. For example, AI-generated machine learning models facilitate intrusion detection and remediation, predictive analytics, financial fraud detection, and real-time responses to cyber incidents.
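To make the defensive use of AI concrete, the following is a minimal sketch of anomaly-based intrusion detection using an unsupervised machine learning model (scikit-learn’s IsolationForest). The traffic features, parameter values, and data are hypothetical assumptions for illustration, not a production configuration.

```python
# Minimal sketch: anomaly-based intrusion detection with an unsupervised
# model. Feature vectors (hypothetical) are assumed to be extracted from
# network logs: [packets/sec, bytes/sec, distinct ports contacted].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Historical, presumed-clean traffic used to learn a baseline.
normal_traffic = rng.normal(loc=[100.0, 5e4, 3.0],
                            scale=[10.0, 5e3, 1.0],
                            size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new observations: predict() returns 1 (normal) or -1 (anomalous).
new_traffic = np.array([[105.0, 5.2e4, 3.0],    # looks routine
                        [900.0, 9.0e5, 60.0]])  # possible scan or exfiltration
print(model.predict(new_traffic))  # e.g., [ 1 -1]
```

In practice, such a model would be retrained as traffic baselines shift, and its alerts would feed the real-time response processes noted above.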

Although AI assists organizations in defending against cyber-attacks, it is a double-edged sword. More to the point, AI also provides cyber attackers with an array of cost-efficient techniques that facilitate their cyber-attacks. Sophisticated AI-generated phishing attacks, social engineering attacks, and ransomware attacks are just a few of the ways AI has made the cyber-attack landscape more lethal.

Game-Theoretic Aspects of Cyber Risk

AI-generated models used by cyber attackers and cyber defenders have been evolving at a rapid pace. As a result, the strategic interactions between cyber attackers and cyber defenders have become more automated, more dynamic, more adaptive, and more complex. These developments have increased, and substantially changed, the game-theoretic aspects associated with cyber risk.

Unfortunately, there is no dominant strategy that gives an organization (as a cyber defender) a clear path to minimizing the probability of becoming a victim of a successful cyber-attack. Notwithstanding the above, it is well known that organizations become less attractive targets to cyber attackers (i.e., their cyber risk is lowered) by investing in a variety of cybersecurity-related activities. This raises the following fundamental question: How much should an organization invest to prevent, or at least reduce, the probability of a cyber incident?
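The absence of a dominant strategy can be illustrated with a stylized, entirely hypothetical 2×2 game: a defender chooses which of two systems to harden, while an attacker chooses which system to target. Because the defender’s best choice depends on the attacker’s choice, no single defensive option is best in every case, as the sketch below verifies.

```python
# Stylized attacker-defender game with hypothetical payoffs.
# Rows: defender hardens system A or system B.
# Columns: attacker targets system A or system B.
# Entries: defender's payoff (losses avoided, in $M).
import numpy as np

defender_payoff = np.array([[9.0, 2.0],   # harden A: strong vs. attack on A
                            [1.0, 8.0]])  # harden B: strong vs. attack on B

# A row strategy is dominant if it is the best response to every column.
best_row_per_column = defender_payoff.argmax(axis=0)
if len(set(best_row_per_column)) == 1:
    print(f"Dominant strategy: harden system {'AB'[best_row_per_column[0]]}")
else:
    print("No dominant strategy: the best defense depends on the attack.")
```

In games of this form, equilibrium behavior typically involves mixed (randomized) strategies, which mirrors the unpredictability of real-world attacker-defender interactions.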

Cost-Benefit Considerations

Although there is no definitive answer to the above question, a well-established framework for deriving the optimal amount to invest in cybersecurity-related activities is provided by the Gordon-Loeb Model. The Gordon-Loeb Model, which is based on cost-benefit analysis, consists of the following three main components: (1) the potential cost associated with a cyber incident, (2) the probability that a cyber incident will occur, and (3) the benefits derived from investments in cybersecurity (i.e., how spending on cybersecurity reduces the probability that a cyber incident will occur).
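In the model’s notation (from Gordon and Loeb’s 2002 paper), L is the potential loss from a cyber incident, v is the probability of a breach absent additional investment, and S(z, v) is the breach probability after investing z in cybersecurity. The model’s best-known result is that, for the classes of breach-probability functions analyzed in the paper, the optimal investment never exceeds about 37 percent of the expected loss:

```latex
% Expected net benefit of investing z in cybersecurity (Gordon-Loeb):
\[
\mathrm{ENBIS}(z) \;=\; \underbrace{\bigl[\,v - S(z,v)\,\bigr] L}_{\text{expected loss avoided}} \;-\; z
\]
% The optimal investment z* maximizes ENBIS(z) and satisfies
\[
z^{*} \;\le\; \tfrac{1}{e}\, v L \;\approx\; 0.37\, v L
\]
```

For example, if v = 0.6 and L = $10 million, the expected loss is $6 million, so the model implies spending no more than roughly $2.2 million on cybersecurity.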

Besides considering the total amount to spend on cybersecurity-related activities, a subsidiary question for organizations to answer is: How much of our organization’s cybersecurity-related budget should be devoted to developing and implementing AI models designed to reduce the likelihood of a cyber incident? In answering this subsidiary question, organizations need to consider the costs associated with the AI models.

The costs of developing and implementing new AI models designed to reduce the likelihood of a cyber incident depend on many organization-specific factors. These factors include, but are not necessarily limited to: (1) whether the organization has to develop specialized AI models or can instead use existing open-source AI models, (2) whether the organization needs to hire new personnel to develop and implement the AI models, and (3) whether new software and/or hardware is required in order to properly integrate the AI models into the organization’s existing information systems.
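One way to approach the subsidiary allocation question is to treat it as the same cost-benefit exercise at a finer grain: split a fixed budget between AI-based and conventional controls so that the combined reduction in breach probability is maximized. The sketch below does this by grid search; the functional form and every parameter value are illustrative assumptions, not part of the Gordon-Loeb Model.

```python
# Hypothetical sketch: splitting a fixed cybersecurity budget between
# AI-based and conventional controls. The breach-probability function
# and all parameter values below are illustrative assumptions.
import numpy as np

BUDGET = 2.0   # total cybersecurity budget, $M
V = 0.6        # breach probability with no additional investment
L = 10.0       # potential loss from an incident, $M

def breach_prob(z_ai, z_conv):
    """Remaining breach probability after investing z_ai in AI-based
    controls and z_conv in conventional controls (diminishing returns)."""
    return V / ((1.0 + 2.0 * z_ai) * (1.0 + 1.2 * z_conv))

# Grid search over the share of the budget devoted to AI models.
shares = np.linspace(0.0, 1.0, 101)
net_benefit = [(V - breach_prob(s * BUDGET, (1.0 - s) * BUDGET)) * L - BUDGET
               for s in shares]
best_share = shares[int(np.argmax(net_benefit))]
print(f"AI share of budget that maximizes net benefit: {best_share:.0%}")
# e.g., 58% under these assumed parameters
```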

Concluding Comment

Ultimately, the economic aspects of managing an organization’s cyber risk program need to consider both the costs and benefits associated with defending against cyber-attacks. However, given the increasing utilization of AI-generated models by both cyber attackers and cyber defenders, the game-theoretic aspects of cyber risk have taken on new dimensions. The winners in this new game will likely be those organizations most adept at developing and implementing AI models.

Lawrence A. Gordon is the EY Alumni Professor of Managerial Accounting and Information Assurance at the Robert H. Smith School of Business, University of Maryland (UMD). He is also an Affiliate Professor in the UMD Institute for Advanced Computer Studies.