Navigating AI Regulations: Key Insights and Impacts for Businesses

Dr. David Marco

Abstract

The article discusses the lack of comprehensive federal legislation in the U.S. to regulate artificial intelligence (AI), despite its widespread use and potential risks, which include fraud and safety concerns associated with technologies like deepfakes and autonomous systems. The article emphasizes the urgent need for regulatory frameworks to ensure ethical AI practices, protect individuals and society from harm, and establish accountability for AI technologies as they become increasingly integrated into various aspects of our lives.

Introduction

Although artificial intelligence (AI) seems to be everywhere these days, there is currently no comprehensive U.S. federal legislation that regulates its development. There are several bills pending in the House and Senate, but, for now, there is nothing concrete that ensures AI will be safe.

There’s no doubt AI is revolutionizing business, but as AI becomes more ingrained in corporate systems and more integrated into an organization’s decision-making process, there will be a growing need for ethical guidelines to govern its use. Legislation can help define acceptable practices while promoting transparency in AI operations, which in turn helps ensure organizations adhere to strict ethical standards. Because AI technology advances so rapidly and its potential for misuse is so high, governments around the world have recognized the need for AI legislation to oversee and control it.

For AI to be trustworthy, it must be transparent and understandable. It must address any bias in the underlying data, incorporate human oversight and judgment, and be both robust and safe. It should comply with privacy laws and safeguard any personal data used in the model training. There should be mechanisms in place to hold developers and organizations accountable for their AI systems. This includes documenting decisions made by AI systems and allowing users to appeal decisions that might negatively affect them. To this end, many countries and jurisdictions have started to recognize the threat AI might pose. They are drafting and enacting legislation to address many of the potential problems implementing AI might entail.
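
As a purely illustrative sketch of what such an accountability mechanism could look like in practice, the Python structure below logs an automated decision so it can later be reviewed or appealed. The field names and values are hypothetical; they are not drawn from any statute or standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionRecord:
    """Hypothetical audit record for a single automated decision."""
    decision_id: str
    model_version: str                      # which model produced the decision
    inputs_summary: dict                    # the inputs the model actually saw
    outcome: str                            # e.g. "loan_denied", "claim_approved"
    explanation: str                        # human-readable reason given to the user
    human_reviewer: Optional[str] = None    # set when a person confirms or overrides
    appeal_requested: bool = False          # the affected user exercised a right to appeal
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: record a credit decision, then mark it as appealed.
record = AIDecisionRecord(
    decision_id="2024-000123",
    model_version="credit-scorer-v3.1",
    inputs_summary={"income_band": "B", "credit_history_years": 4},
    outcome="loan_denied",
    explanation="Debt-to-income ratio above the approved threshold.",
)
record.appeal_requested = True  # the applicant contests the decision
```

Even a simple record like this gives an organization something concrete to audit when a user challenges an outcome.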

What is Artificial Intelligence?

According to the analytics experts at SAS, “Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Most AI examples that you hear about today – from chess-playing computers to self-driving cars – rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data.”

Although the term “artificial intelligence” was coined in 1956, it took sixty years for businesses to fully embrace it. Today, the term is everywhere. It has become the panacea for just about any business problem. It is the buzzword of all buzzwords. AI can succeed now because of improvements in both computing power and storage capacity. Models can be built on vastly larger data volumes than in the past, and advanced algorithms can crunch far more data than before. Even more impressive, machine learning allows computers to model data without being explicitly programmed by a human. Machine learning models have been successfully applied to robotics, game playing, autonomous vehicles, and recommendation systems.

AI has evolved from being a quaint, fanciful idea to the analytical backbone of many business processes, from customer personalization to chatbots to robotics, warehousing, and logistics. In his article, Using Artificial Intelligence, McKinsey Maps 400+ AI Use Cases; Finds Trillions in Potential Value, Michael Chui notes that AI can add value in 69 percent of the use cases across the nineteen industries studied, including travel, logistics, retail, high tech, oil and gas, insurance, and semiconductors. AI is here to stay. It is a powerful technology, but not a perfect one. AI implementations carry plenty of risks, and anyone planning to use the technology should recognize them.

The Threats and Hazards of AI

Governments need to establish regulatory frameworks to ensure AI technologies are developed and used responsibly. These frameworks should promote ethical practices while also protecting individuals and society at large from potential harm. As AI continues to evolve, proactive legislative measures will be crucial in protecting society from the negative impacts this increasingly powerful technology poses. The rise of sophisticated AI technologies, such as deepfakes, has already led to alarming incidents of fraud.

In its A pro-innovation approach to AI regulation White Paper, the UK government identified several high-level risks that a principles-based AI framework might be able to mitigate with proportionate interventions. These include:

  • Risks to human rights — Generative AI can easily generate deepfake pornographic video and image content, potentially causing reputational harm.
  • Safety risks — A user might follow an AI assistant’s dangerous recommendation, unaware that the AI does not fully understand the context of the action or activity described. If the activity causes physical harm, who holds the responsibility and/or the liability?
  • Risks to fairness — An AI tool trained on biased data could assess the credit worthiness of particular individuals on different terms based on such characteristics as race and gender.
  • Risks to privacy and agency — Connected devices in a home might constantly gather data that stitches together a near-complete portrait of an individual’s home life. Privacy risks abound if parties other than the individual can access that data.
  • Risks to societal wellbeing — Disinformation generated and propagated by AI, including deepfake audio and video, can undermine the electoral process, threatening the foundations of a country’s democratic institutions.
  • Risks to security — AI’s ability to automate, accelerate and magnify content at scale could increase the effectiveness of cyber-attacks.

The Threat of Autonomous Systems

Autonomous systems, such as self-driving cars or drones, rely heavily on software and connectivity, which makes them susceptible to cyberattacks that can compromise their functionality or overall safety. In a worst-case scenario, hackers could gain access and control over these automated systems. This could lead to potentially catastrophic outcomes, such as vehicle crashes or unauthorized surveillance.

Autonomous systems, like self-driving cars, present safety risks that necessitate regulatory oversight. In 2018, an Uber self-driving car killed a pedestrian, and the backup driver was later charged with negligent homicide. This incident underscores the importance of establishing public safety standards and accountability measures for certain AI technologies.

Even if the outcomes aren’t deadly, they can be insidious, and governments around the world are taking notice. In the article, The Chinese government is wary of Tesla, Lianhe Zaobao reports that on 22 March 2022, “People familiar with the matter told The Wall Street Journal that the findings of the Chinese government’s security assessment of Tesla vehicles raised concerns because the cameras installed in the cars are able to record images continuously and obtain data on how, when and where the vehicles are used as well as the contact lists of the mobile numbers linked to the cars. Beijing’s fear is that some of that data could be sent back to the US.”

Any organization utilizing autonomous systems must implement robust verification processes to mitigate these risks. As autonomous systems become more prevalent, regulatory frameworks must adapt to address the unique cybersecurity risks they pose. Organizations may face compliance challenges as they work to meet evolving standards for data protection and system integrity.

The Rise of Deepfakes

According to CNN, a finance worker was tricked into transferring $25 million to fraudsters who impersonated company executives using deepfake technology during a video conference call. In another case, cybercriminals used deepfake audio to demand an urgent wire transfer of close to $250,000, which was sent to a Hungary-based supplier. These are among the earliest high-profile cases of fraudsters using deepfake technology to commit serious financial fraud, and such incidents highlight the need for regulatory frameworks to combat AI-driven fraud.

In her article, Half of Executives Expect More Deepfake Attacks on Financial and Accounting Data in Year Ahead, Christine Oh claims, “Deepfake financial fraud is rising, with bad actors increasingly leveraging illicit synthetic information like falsified invoices and customer service interactions to access sensitive financial data and even manipulate organizations’ AI models to wreak havoc on financial reports.”

Current verification methods may struggle against sophisticated deepfake technology and other forms of AI-generated content. This presents challenges for ensuring that communications and data inputs are legitimate.

Risks and Challenges Posed by AI

The historical risks and challenges posed by AI have evolved alongside its technological advancements. These risks encompass a wide range of ethical, social, and security concerns that have emerged as AI becomes increasingly integrated into various aspects of business.

AI systems often reflect the biases present in their training data, which leads to unfair outcomes in critical areas such as hiring, law enforcement, and lending. Biased algorithms can perpetuate historical inequalities, resulting in systemic discrimination against marginalized groups. For example, a U.S. hospital used AI to predict which patients would require additional medical care. The algorithm favored white patients over black patients, despite both sets of patients having similar health conditions. The bias arose because the algorithm relied on healthcare cost history, which was correlated with race: black patients typically had lower healthcare costs due to systemic barriers, which led to their underrepresentation in the data used for the predictions.
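
The mechanism at work here, often called proxy bias, can be illustrated with a small synthetic simulation. The sketch below is purely hypothetical (all numbers are invented; it is not the hospital’s actual algorithm): the score never uses the protected attribute directly, yet the disparity reappears because the proxy signal, recorded cost, already carries the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: the true need for care is identically distributed in both groups.
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
need = rng.normal(50, 10, n)                  # true (unobserved) care need

# Recorded healthcare cost is a biased proxy: group B incurs ~30% lower cost
# for the same need, e.g. because of barriers to accessing care.
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 5, n)

# A "risk score" trained to predict cost would simply reproduce cost here;
# patients with the top 20% of scores are flagged for extra care.
threshold = np.quantile(cost, 0.80)
flagged = cost >= threshold

print("Share of group A flagged:", flagged[group == 0].mean())
print("Share of group B flagged:", flagged[group == 1].mean())
# Group B is flagged far less often despite identical true need:
# the bias enters through the proxy label, not through any explicit use of `group`.
```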

The deployment of AI technologies, especially in surveillance contexts, poses significant threats to individual privacy. Governments and corporations can use AI to monitor citizens extensively, raising ethical questions about consent and civil liberties.

As AI systems become increasingly complex, ensuring their safety becomes more and more challenging. There are risks related to system failures or unintended consequences that could arise from the deployment of powerful AI systems without adequate oversight.

Transparency can be a big problem with AI as well. Many AI systems operate as “black boxes,” especially complex models such as deep neural networks. This makes it difficult for users to understand what is going on under the hood. This lack of transparency erodes trust in AI and can lead to resistance against its adoption.
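
Transparency tools do not fully solve this, but model-agnostic diagnostics can at least reveal what an opaque model relies on. The sketch below, built entirely on synthetic data, uses scikit-learn’s permutation importance to rank features by how much held-out accuracy drops when each one is shuffled; it is a minimal illustration of one such diagnostic, not a complete explainability solution.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on synthetic data.
X, y = make_classification(n_samples=2_000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data and
# measure how much accuracy drops -- a rough view of what the model depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```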

International Approaches to AI Regulation

Governments around the world recognize both the potential and the threat of AI and are acting accordingly.

European Union Initiatives

On April 21, 2021, the EU proposed the AI Act, which attempts to regulate AI technologies within the European Union. The Act focuses on ensuring safety, transparency, and accountability in AI applications. It aims to establish a comprehensive legal framework that addresses the risks associated with AI while still promoting innovation and protecting fundamental rights.

Objectives of the EU AI Act

  • Establishes safety standards for high-risk AI systems to prevent harm to individuals and society.
  • Promotes public trust in AI technologies by fostering discussion and providing ethical guidelines and full transparency.
  • Ensures AI systems do not breach privacy or other fundamental rights.
  • Positions the EU as a leader in setting standards for ethical and responsible AI development.

Key Aspects of the EU AI Act

  1. Classifies AI systems into four risk levels, with each category carrying different regulatory requirements that aim to ensure safety and accountability (a schematic sketch of this tiering follows the list). The risk levels are:
    • Unacceptable risk
    • High risk
    • Limited risk
    • Minimal risk
  2. AI applications that pose significant threats to safety or fundamental rights, such as social scoring by governments, are strictly prohibited.
  3. High-risk AI systems, which include applications in critical sectors like healthcare and transportation, must meet stringent requirements for safety, transparency, and accountability before deployment.
  4. Users must be informed when interacting with an AI system to ensure individuals understand how their data is being used.
  5. The establishment of a European Artificial Intelligence Board to oversee the implementation of the Act and ensure consistent application across member states.
  6. The protection of fundamental rights is emphasized to ensure that AI technologies are developed and used in ways that respect human dignity and do not perpetuate or promote discrimination.
  7. The Act aims to both regulate AI and foster innovation by providing a clear legal framework that encourages responsible development and deployment of AI.
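
As a schematic illustration of the Act’s tiered approach, the sketch below maps each risk level to a simplified paraphrase of the kinds of obligations attached to it. This is illustrative only and is not legal text; the obligation lists are rough abbreviations, not the Act’s actual requirements.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright (e.g. government social scoring)
    HIGH = "high"                   # allowed only with strict pre-deployment requirements
    LIMITED = "limited"             # mainly transparency duties (tell users it is AI)
    MINIMAL = "minimal"             # no specific obligations under the Act

# Simplified paraphrase of the duties attached to each tier -- illustrative only.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["conformity assessment", "risk management", "human oversight",
                    "logging and documentation"],
    RiskTier.LIMITED: ["disclose AI use to the user"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (simplified) duties an organization would face for a given tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```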

The EU AI Act represents a significant step towards creating a balanced approach to regulating AI. It focuses on both innovation and protection against potential risks. Its implementation will likely have far-reaching implications on how AI is both developed and used in Europe and globally.

United Kingdom Initiatives

While specific legislation directly targeting AI in the UK is still evolving, there are several frameworks and initiatives aimed at ensuring the responsible use of AI technologies. April 15th, 2024, marked a shift in the sentiment about AI policy in the UK, when regulators cited their concern about the large language models that underpin generative AI systems. These efforts focus on protecting individuals’ civil rights, promoting ethical standards, and addressing the potential risks associated with AI applications.

According to their article, New UK Government Announces AI and Cybersecurity Reforms, Nicola Kerr-Shaw and Aleksander J. Aleksiev state that, previously, the UK had “taken a principles-based, ‘pro-innovation’ approach to AI that relied on the application of existing laws rather than introducing dedicated AI legislation,” but was now changing that stance. Going forward, the government “indicated that regulation will focus on ‘the handful of companies developing the most powerful AI models.’”

Kerr-Shaw and Aleksiev claim the UK government plans to introduce a “statutory code” that requires “companies to release test data for scrutiny by the government’s AI safety institute.” The government isn’t looking to stop AI development. It plans “to stop short of the full prohibitions seen in the EU AI Act and instead impose targeted guardrails on the highest-risk models, such as those currently considered ‘general-purpose AI systems’ under the EU AI Act,” state Kerr-Shaw and Aleksiev.

As one of the members of the G7, the UK will continue discussions on global AI governance and regulatory alignment with its fellow members. This includes the G7 Code of Conduct and agreements struck between the UK and the US, Australia, Singapore, and Canada to strengthen collaboration on innovative technologies like AI.

UK Artificial Intelligence (Regulation) Bill

On 22 November 2023, Lord Holmes introduced the UK Artificial Intelligence (Regulation) Bill in the House of Lords. According to Kennedys Law, it aimed “to establish a central ‘AI Authority’ to oversee the regulatory approach to AI with reference to principles of trust, consumer protection, transparency, inclusion, innovation, interoperability and accountability.” However, the Bill died when Parliament was dissolved on 30 May 2024 ahead of the general election, which the Conservatives went on to lose. No one knows whether the new Labour government will move forward with similar legislation, but it would be unsurprising if it did.

Key UK Legislative Frameworks and Initiatives

  1. Data Protection Legislation:
    • The UK General Data Protection Regulation (GDPR), which is retained post-Brexit, provides a framework for data protection and privacy. It regulates how personal data is collected, processed, and stored, which will impact how AI systems utilize personal data.
  2. The Online Safety Act 2023:
    • This legislation, which received Royal Assent in October 2023, regulates online content and platforms. It addresses issues related to harmful content, including content generated by AI technologies, and seeks to protect users from misinformation and harmful practices online.
  3. Ethical Guidelines:
    • The UK has established ethical guidelines for AI development through various organizations, including the Centre for Data Ethics and Innovation (CDEI). These guidelines emphasize transparency, accountability, and fairness in AI systems.

United States Regulatory Landscape

Currently, the regulatory environment for AI in the United States is highly fragmented, with no comprehensive federal legislation governing AI development and its use. Several bills aimed at addressing safety, accountability, and ethical concerns are making their way through Congress. Future legislation is expected to focus on establishing guidelines for transparency, protecting against misuse (such as deepfake fraud), and ensuring that AI technologies are developed responsibly while promoting innovation and public trust.

Algorithmic Accountability Act

According to U.S. Senator Ron Wyden, “The Algorithmic Accountability Act of 2023 requires companies to assess the impacts of the AI systems they use and sell, creates new transparency about when and how such systems are used, and empowers consumers to make informed choices when they interact with AI systems.” The act takes aim at AI systems that may lead to fraud or safety risks, such as deepfakes and autonomous systems. As incidents of AI misuse, including deepfake fraud, continue to rise, these legislative efforts are crucial for establishing regulatory frameworks that protect individuals and organizations while promoting ethical AI development.

State by State

According to The National Conference of State Legislatures, in the 2024 legislative session, “at least 45 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills, and 31 states, Puerto Rico and the Virgin Islands adopted resolutions or enacted legislation.” These include:

  • Colorado’s AI legislation requires developers and deployers of high-risk AI systems “to use reasonable care to avoid algorithmic discrimination and requires disclosures to consumers.”
  • Maryland’s Department of Information Technology adopted policies and procedures around the development, procurement, deployment, use and assessment of AI systems when utilized by the state government.
  • New Hampshire criminalized the fraudulent use of deepfakes.
  • South Dakota criminalized the use of AI to create computer-generated child pornography.
  • Utah enacted the Artificial Intelligence Policy Act, which aims to establish transparency and consumer protection standards for the use of generative AI in the state.

Deepfake Legislation

At least 40 states have deepfake legislation pending and at least 50 bills have been enacted already, according to The National Conference of State Legislatures. These include:

  • Alabama criminalized knowingly creating, recording, or altering a private image without consent when the depicted individual had a reasonable expectation of privacy.
  • California allows individuals to report digital identity theft to a social media platform. The social media platform must permanently block and remove reported instances of digital identity theft.
  • Florida requires the addition of disclaimers on political advertisements, electioneering communications, or other miscellaneous advertisements when deepfakes are involved.
  • Louisiana criminalized the unlawful dissemination or sale of images of another individual created by AI.
  • Tennessee enacted the Ensuring Likeness, Voice and Image Security Act of 2024. This ensures every individual has a property right in the use of his or her name, photograph, voice, or likeness in any medium in any manner.
  • Utah updated the definition of counterfeit intimate image to include AI-generated depictions.

Future Directions for AI Regulation

As AI technology continues to advance, predictions regarding the evolution of AI regulations suggest the following key trends:

  1. Increased focus on accountability and transparency: future legislation will emphasize the need for organizations to demonstrate accountability in their AI systems, requiring them to disclose how algorithms make decisions, particularly in high-stakes areas such as finance, healthcare, and law enforcement.
  2. Stricter guidelines for high-risk applications: regulations may evolve to impose stricter guidelines on high-risk AI applications, such as autonomous vehicles and facial recognition technologies, ensuring that safety standards are met and that these systems are subject to rigorous testing before use.
  3. Enhanced consumer protection measures: as incidents of AI misuse, such as deepfake fraud, become more prevalent, regulatory frameworks may include specific provisions aimed at protecting consumers from deceptive practices and ensuring that individuals are informed any time they interact with AI.
  4. Collaboration between government and industry: fostering greater collaboration between government regulatory bodies and industry stakeholders to create best practices for ethical AI use will promote innovation while safeguarding public interests.
  5. Adaptability to rapid technological changes: because AI develops at a breakneck pace, regulatory frameworks will need built-in flexibility so they can adapt quickly and keep up.
  6. Global regulatory alignment: as countries around the world develop their own AI regulations, there may be efforts in the U.S., Europe, and Asia to align with international standards to facilitate cross-border cooperation while ensuring there is a cohesive approach to AI governance.

These predictions indicate a proactive approach towards creating a regulatory environment that balances innovation with the need for safety, accountability, and ethical considerations in the deployment of AI technologies.

Conclusion

The historical risks associated with AI highlight the need for careful consideration and proactive management as these technologies continue to evolve. Addressing these challenges requires collaboration among technologists, policymakers, ethicists, and society at large to ensure that the development and deployment of AI provides positive contributions to society while also minimizing potential harms.

AI systems raise significant data privacy concerns because they collect and process vast amounts of personal data. Regulatory frameworks establish guidelines for data protection. These ensure an individual’s information is handled securely, responsibly, and with their full consent.

AI systems must be understandable, fair, and ethical, and they must incorporate human judgment. Trustworthy AI systems should perform reliably across various conditions and be resilient to errors or attacks. Developers must comply with privacy laws and safeguard personal data used in training AI models. This includes obtaining user consent for data usage and implementing strong security measures to protect sensitive information. The protection of fundamental rights is emphasized to ensure that AI technologies are developed and used in ways that respect human dignity and do not perpetuate or promote discrimination.

Effective legislation can also foster public trust in this technology by ensuring it is safe, reliable, and ethical. Building trust is crucial for encouraging the adoption of AI while mitigating fears related to misuse or unintended consequences.

Since AI technology can infringe upon fundamental human rights, legislation is needed to ensure that AI systems are designed and implemented in ways that respect these rights and do not perpetuate biases or inequalities. However, regulation should not stifle innovation. Clear guidelines on how to use AI will help businesses navigate compliance without threatening creativity. A balanced approach will encourage responsible development as well as safeguard users against risks.

Dr. David P. Marco, PhD, Fellow IIM, CBIP, CDP is best known as the world’s foremost authority on data governance and metadata management. He is an internationally recognized expert in the fields of CDO, data management, data literacy, and advanced analytics. He has earned many industry honors, including Crain’s Chicago Business “Top 40 Under 40”, DePaul University’s “Top 14 Alumni Under 40”, and a Professional Fellowship in the Institute of Information Management. In 2022, CDO Magazine named Dr. Marco one of the Top Data Consultants in North America, and IDMMA named him their Data Management Professional of the Year. In 2023 he earned LinkedIn’s Top BI Voice, and in 2024 he won the prestigious BIG Innovation award. Dr. Marco is the author of the two widely acclaimed, top-selling books in metadata management history, “Universal Meta Data Models” and “Building and Managing the Meta Data Repository” (available in multiple languages). In addition, he has co-authored numerous books and published hundreds of articles, some of which have been translated into Mandarin, Russian, Portuguese, and other languages. He has taught at the University of Chicago and DePaul University. DMarco@EWSolutions.com