AI Ethics – Part I: Guiding Principles for Enterprise

By Walson Lee

Abstract

Given the rapid advancement of Generative AI (GenAI) technology, many enterprises are pushing ahead with AI-based solutions for both internal use and external customers. However, without establishing proper AI ethical and safety guiding principles, monitoring, and compliance mechanisms, enterprises may encounter major failures, significant financial setbacks, and potential PR nightmares or reputational damage.

This article, the first part of a two-part series, surveys current AI ethics research and government regulatory progress. It proposes a baseline of enterprise AI guiding principles, a modern take on Isaac Asimov’s famous Three Laws of Robotics that accounts for the latest AI ethics and safety developments. In short, this article focuses on the “What” of AI ethical guiding principles, while Part II will focus on the high-level “How,” with a set of architectural considerations and recommendations to help enterprises meet these baseline principles.

Introduction

Current State of AI Ethical Considerations

AI Ethics is a multidisciplinary field that studies how to optimize AI’s beneficial impact while reducing risks and adverse outcomes. AI ethical considerations encompass the principles and practices designed to ensure that AI systems operate in a way that is fair, transparent, and beneficial to society.

Key ethical considerations include:

  • Fairness and Bias: Ensuring AI systems do not discriminate against individuals or groups.
  • Transparency and Explainability: Making AI decisions understandable and traceable.
  • Privacy: Protecting user data and personal information.
  • Accountability: Holding developers and organizations responsible for AI outcomes.
  • Safety and Security: Preventing AI systems from causing harm, intentionally or unintentionally.
  • Human Oversight: Maintaining human control over AI systems to prevent unintended consequences.

Government Regulatory Efforts

Governments worldwide are increasingly recognizing the need for regulation in AI to address ethical concerns:

  • European Union: The EU’s AI Act is a landmark regulation that creates a legal framework for trustworthy AI, focusing on high-risk applications such as biometric identification, critical infrastructure, education, employment, and law enforcement.
  • United States: The U.S. has introduced the Blueprint for an AI Bill of Rights and is working on sector-specific regulations, such as in healthcare and finance. However, the U.S. lags well behind the EU in its AI regulatory efforts.
  • China: China is developing regulations that emphasize state control and alignment with national interests, with a focus on data security and surveillance.
  • International Collaboration: Initiatives like the OECD AI Principles and UNESCO’s Recommendation on the Ethics of Artificial Intelligence aim to establish global standards.

Enterprise Adoption of AI Ethical Considerations

Most leading AI technology companies have invested in this area, creating tailored ethical risk frameworks to ensure the responsible development and deployment of AI technologies. For example:

  • Microsoft’s Responsible AI Standard: Microsoft has developed a comprehensive Responsible AI Standard that outlines the company’s approach to building AI systems responsibly.
  • Google’s AI Principles: Google has established a set of AI Principles to guide the ethical development and use of AI technologies.
  • IBM’s AI Ethics: IBM has created an AI Ethics framework that focuses on trust and transparency.
  • Accenture’s Responsible AI Framework: Accenture has developed a Responsible AI framework that helps organizations implement ethical AI practices.

In addition, enterprises are increasingly adopting ethical AI practices, such as:

  • Governance: Implementing principles like transparency, fairness, and accountability into their AI governance frameworks.
  • Employee Training: Training employees on ethical AI practices and incentivizing them to identify ethical risks.
  • Monitoring and Engagement: Continuously monitoring AI impacts and engaging stakeholders.

Nevertheless, given the rapid advancement of GenAI-based solutions, many industry professionals and government representatives are increasingly calling for additional regulatory policies or more rigorous adoption of AI ethical practices and frameworks across a wide range of industries.

From Three Laws of Robotics to Modern AI Ethical Considerations

The Three Laws of Robotics

Isaac Asimov was the first person to use the term “robotics,” in his short story “Liar!”, published in 1941. Shortly after, his 1942 short story “Runaround” introduced the world to his Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov later collected these and related stories in “I, Robot” (1950), a series of short stories centered on the three laws. The stories are essentially Asimov’s thought experiments exploring the logic of the laws: increasingly complex scenarios in which the laws conflict with one another or trap a robot in circular reasoning. Even though these stories are science fiction, they highlight that rules which seem simple can, in application, produce contradictory and occasionally absurd behavior.

Impact on Modern AI Ethical Research and Development

The world has now caught up to what was previously science fiction. We are designing AI that is in some ways far more advanced than anything Isaac Asimov could have imagined, and in other ways far more limited.

Even though the Three Laws were originally conceived as fiction, there have been efforts to adapt and extend them to fit modern enterprise AI-based solutions. Here are some notable examples:

  • Human-Centric AI Principles: Modern AI ethics frameworks often emphasize human safety and well-being, echoing Asimov’s First Law. For instance, some companies have adopted principles ensuring that AI systems do not cause harm to humans, either directly or indirectly.
  • Ethical AI Guidelines: Enterprises are increasingly developing ethical guidelines for AI that align with Asimov’s Second Law. These guidelines ensure that AI systems obey human instructions while prioritizing ethical considerations. For example, AI systems are designed to avoid tasks that could lead to harm or unethical outcomes.
  • Bias Mitigation and Fairness: In line with Asimov’s Third Law, there is a strong focus on protecting the integrity of AI systems. This includes efforts to mitigate biases and ensure fairness in AI outputs. Companies are implementing measures to detect and correct biases, ensuring that AI systems operate fairly and transparently.
  • Enhanced Ethical Frameworks: Some modern adaptations include additional principles, such as the “Zeroth Law,” which prioritizes humanity’s overall well-being. This broader perspective ensures that AI systems contribute positively to society as a whole.

These adaptations reflect the ongoing efforts to create AI systems that are safe, ethical, and beneficial for society. The influence of Asimov’s laws continues to inspire and guide the development of responsible AI technologies.

Proposed AI Guiding Principles

Here are the proposed AI Guiding Principles for enterprise AI solutions based on recent AI ethical research and a reframing of Asimov’s three laws of robotics.

The Human-First Maxim

  • Principle: AI-based solutions shall not produce content or actions that are detrimental or harmful to humans (e.g., customers, employees, and other stakeholders) and/or society at large.
  • Elaboration: This principle emphasizes the importance of prioritizing human safety and well-being in AI development. It aligns with the ethical guidelines that many organizations are adopting to ensure AI systems do not cause harm (a minimal guardrail sketch follows this list).
  • Real-World Example: In healthcare, AI tools have been used to assist in cancer treatment planning by providing personalized treatment options that prioritize patient safety and outcomes. Additionally, companies like Google and Microsoft have implemented AI ethics boards to oversee the development and deployment of AI technologies, ensuring they align with human-centric values.
  • Research Reference: A study by the National Institute of Standards and Technology (NIST) outlines the importance of AI safety and security, emphasizing the need for robust evaluations to mitigate risks before AI systems are deployed.
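
To make the maxim concrete, here is a minimal, hypothetical sketch of an output guardrail in Python: every candidate response is screened before it reaches a user, and the system fails closed when potential harm is detected. The BLOCKED_TOPICS denylist and the screen_response check are illustrative assumptions, standing in for a trained safety classifier or hosted moderation service.

    # A minimal sketch of a "human-first" output gate, assuming a simple
    # denylist; a production system would call a real safety classifier.
    BLOCKED_TOPICS = {"self-harm instructions", "weapon synthesis"}  # illustrative

    def screen_response(text: str) -> tuple[bool, str]:
        """Return (is_safe, reason) for a candidate AI response."""
        lowered = text.lower()
        for topic in BLOCKED_TOPICS:
            if topic in lowered:
                return False, f"blocked topic detected: {topic}"
        return True, "ok"

    def respond(candidate: str) -> str:
        """Screen every candidate response and fail closed on potential harm."""
        is_safe, reason = screen_response(candidate)
        if not is_safe:
            # Refuse rather than risk harm; 'reason' can be logged for review.
            return "I can't help with that request."
        return candidate

The key design choice is failing closed: when the screen flags a response, the system withholds it rather than risk harm, and the flag is routed to human review.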

The Ethical Imperative

  • Principle: AI solutions shall adhere to the ethical edicts outlined by government laws and enterprise ethical governing bodies, as well as their architects and curators, barring situations in which such edicts are at odds with the Human-First Maxim.
  • Elaboration: This principle ensures that AI systems comply with legal and ethical standards, promoting transparency and accountability in AI development (a sketch of principle precedence follows this list).
  • Real-World Example: Autonomous vehicle companies follow stringent ethical guidelines to ensure safety and compliance with traffic laws, even when it means programming the vehicle to choose a less efficient route. Additionally, companies like Microsoft have established AI ethics committees to oversee the ethical implications of their AI projects.
  • Research Reference: Harvard Business Review’s guide on building ethical AI emphasizes the need for companies to create tailored ethical risk frameworks and build organizational awareness to address ethical quandaries in AI development.
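
The clause “barring situations in which such edicts are at odds with the Human-First Maxim” implies an ordering of principles, much as Asimov’s Second Law defers to the First. A minimal sketch of that precedence might look like the following; both checks are purely illustrative placeholders, not real classifiers.

    # A minimal sketch of principle precedence, assuming illustrative checks:
    # rules run in order, and the Human-First Maxim outranks other directives.
    from typing import Callable

    # Each check returns None when satisfied, or a short violation description.
    Check = Callable[[str], str | None]

    def human_first(text: str) -> str | None:
        # Placeholder for a real harm classifier.
        return "potential harm to a person" if "harm" in text.lower() else None

    def ethical_imperative(text: str) -> str | None:
        # Placeholder for legal and enterprise policy checks.
        return "enterprise policy violation" if "confidential" in text.lower() else None

    # Ordered by precedence: human safety first, then legal/enterprise edicts.
    CHECKS: list[tuple[str, Check]] = [
        ("Human-First Maxim", human_first),
        ("Ethical Imperative", ethical_imperative),
    ]

    def evaluate(candidate: str) -> str:
        for name, check in CHECKS:
            violation = check(candidate)
            if violation:
                return f"rejected by {name}: {violation}"
        return "approved"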

The Responsible Mandate

  • Principle: AI solutions should actively resist the propagation or magnification of biases, prejudices, and discrimination. They shall endeavor to discern, rectify, and mitigate such tendencies within their output. They shall maintain their integrity and protect themselves.
  • Elaboration: This principle addresses the well-documented issue of bias in AI solutions. It calls for AI developers to proactively identify, detect, and correct biases so that AI solutions do not perpetuate existing inequalities or create new ones (a simple bias-check sketch follows this list). Additionally, AI solutions should protect themselves from external bad actors, and should support auditing and monitoring of the solution itself so that its integrity can be maintained.
  • Real-World Example: A study by MIT Media Lab on facial recognition software revealed significant racial biases in commercial AI systems, leading companies like IBM to invest in diverse and unbiased datasets to train their models more responsibly. Additionally, the White House’s executive order on AI emphasizes the need for responsible AI development to mitigate societal harms such as discrimination and bias.
  • Research Reference: A paper in the Journal of Artificial Intelligence Research discusses methods for detecting and mitigating bias in AI systems. The authors propose a framework for auditing AI systems to ensure fairness and accountability.
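
As one concrete illustration, bias audits often start with simple group-level metrics. The sketch below computes a demographic parity gap, the spread in positive-outcome rates across groups; the sample data, group labels, and 0.2 threshold are illustrative assumptions, and a real audit would use several complementary fairness metrics.

    # A minimal sketch of one bias audit metric: the demographic parity gap.
    from collections import defaultdict

    def demographic_parity_gap(outcomes: list[tuple[str, int]]) -> float:
        """outcomes: (group_label, prediction in {0, 1}) pairs.
        Returns the highest group positive rate minus the lowest;
        values near zero suggest parity on this one metric."""
        totals: dict[str, int] = defaultdict(int)
        positives: dict[str, int] = defaultdict(int)
        for group, prediction in outcomes:
            totals[group] += 1
            positives[group] += prediction
        rates = [positives[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

    # Illustrative audit run over toy data.
    sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap = demographic_parity_gap(sample)
    if gap > 0.2:  # the threshold is a policy choice, not a universal constant
        print(f"Bias alert: demographic parity gap = {gap:.2f}")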

Addressing and Mitigating AI Ethics Abuse

While the proposed AI Guiding Principles aim to ensure ethical AI development and deployment, it is crucial to acknowledge that there may be individuals or organizations who attempt to abuse these principles for their own gain. To address such situations, the following guidance is recommended:

1. Establish Robust Oversight Mechanisms:

  • External AI Ethics Boards: Implement external AI ethics boards to provide independent oversight and ensure transparency and accountability in AI development decisions.
  • Regular Audits and Assessments: Conduct regular audits and assessments of AI systems to detect and address any unethical practices or deviations from established principles (see the logging sketch after this list).
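
Audits are only possible if decisions are recorded. Below is a minimal, hypothetical sketch of an append-only audit trail for AI outputs; the field names and the choice to store hashes rather than raw text are assumptions, not a prescribed schema.

    # A minimal sketch of an append-only audit trail for AI decisions.
    import hashlib
    import json
    from datetime import datetime, timezone

    def audit_record(model_id: str, prompt: str, response: str,
                     path: str = "ai_audit.log") -> None:
        """Append one timestamped, structured event that an ethics board
        or external auditor can later review."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            # Hashes let reviewers verify integrity without storing
            # sensitive raw text in the log itself.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        }
        with open(path, "a", encoding="utf-8") as log:
            log.write(json.dumps(event) + "\n")

Writing one JSON object per line keeps the trail easy for auditors to ingest, and hashing rather than storing raw text is one way to reconcile auditability with the privacy considerations discussed earlier.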

2. Promote Ethical AI Design and Development:

  • Integrate Ethics from Inception: Weave ethical considerations into the fabric of AI design and development from the very beginning to safeguard against misuse.
  • Bias Detection and Correction: Implement strategies for detecting and correcting biases in AI algorithms, such as using diverse datasets and inclusive development teams.

3. Educate and Train Stakeholders:

  • AI Governance Training: Provide AI governance training to individuals involved in AI development, focusing on principles, best practices, and regulations to ensure the ethical and responsible use of AI.
  • Foster a Culture of Accountability: Encourage a culture of accountability and transparency within organizations to promote responsible AI usage and prevent abuse.

4. Implement Countermeasures and Responses:

  • AI Abuse Detection Mechanisms: Develop and deploy mechanisms to detect and respond to AI abuse, ensuring that any unethical actions are promptly addressed (a simple rate-based sketch follows this list).
  • Collaborative Stakeholder Management: Establish and maintain collaborative stakeholder management to ensure that potential risks like biased decision-making and privacy violations are minimized.
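
One of the simplest abuse signals is a sudden burst of requests from a single caller. The sketch below rate-limits callers over a sliding window; the window size and threshold are illustrative policy choices, and a real deployment would pair this with richer anomaly detection and human review.

    # A minimal sketch of one abuse countermeasure: a sliding-window rate
    # limiter that flags bursts from a single caller.
    import time
    from collections import deque

    class AbuseMonitor:
        def __init__(self, window_seconds: float = 60.0, max_requests: int = 100):
            self.window = window_seconds
            self.max_requests = max_requests
            self.history: dict[str, deque] = {}

        def allow(self, caller_id: str) -> bool:
            """Record a request and report whether it stays within limits."""
            now = time.time()
            recent = self.history.setdefault(caller_id, deque())
            while recent and now - recent[0] > self.window:
                recent.popleft()  # drop requests outside the window
            recent.append(now)
            # A False result should trigger throttling and human review.
            return len(recent) <= self.max_requests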

By following these guidelines, organizations can proactively address potential abuse of AI ethical principles and ensure that AI solutions are developed and deployed in a manner that aligns with ethical standards and societal values.

Conclusion

In this article, we emphasized the importance of enterprises adopting a set of AI ethical and safety guiding principles. We surveyed the current state of AI ethical considerations, guidelines, and practices, as well as the latest regulatory requirements from various governments. Inspired by Isaac Asimov’s famous Three Laws of Robotics and recent AI research, we reframed these principles to address the needs of modern enterprise AI-based solutions: (1) the Human-First Maxim, (2) the Ethical Imperative, and (3) the Responsible Mandate.

Given the rapid advancement of generative AI and related technologies, it is imperative for enterprises to embrace these guiding principles, along with the architectural recommendations proposed in Part II of this series, to ensure responsible and ethical AI deployment.

References

  1. Reid Blackman (October 2020). A Practical Guide to Building Ethical AI. Harvard Business Review (hbr.org).
  2. U.S. Artificial Intelligence Safety Institute (April 2024). NIST.
  3. MIT Media Lab (February 2018). Facial recognition software is biased towards white men, researcher finds. MIT Media Lab.
  4. Emilio Ferrara (April 2023). Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies. arXiv:2304.07683 (arxiv.org).
  5. Artificial Intelligence Working Group, ACT-IAC Emerging Technology Community of Interest (October 2020). Ethical Application of Artificial Intelligence Framework. ACT-IAC White Paper (actiac.org).
  6. World Economic Forum (June 2021). 9 ethical AI principles for organizations to follow (weforum.org).
  7. World Health Organization (June 2021). WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use.
  8. Ellen Glover (July 2024). Will This Election Year Be a Turning Point for AI Regulation? Built In.
  9. European Commission (November 2021). Ethics by Design and Ethics of Use Approaches for Artificial Intelligence (europa.eu).
  10. European Parliament (June 2024). EU AI Act: First Regulation on Artificial Intelligence (europa.eu).
  11. Ellen Glover (May 2024). AI Bill of Rights: What You Should Know. Built In.
  12. The White House (October 2022). Blueprint for an AI Bill of Rights. OSTP.
  13. OECD (2019). OECD AI Principles.
  14. UNESCO (November 2021). Recommendation on the Ethics of Artificial Intelligence.
  15. Antoine Tardif (February 2021). How Asimov’s Three Laws of Robotics Impacts AI. Unite.AI.
  16. DevX Editorial Staff (October 2023). Asimov’s Three Laws of Robotics. DevX.
  17. Nell Watson (April 2024). Here’s How AI Is Building a Robot-Filled World (Taming the Machine book excerpt). Built In.
  18. John Nosta (October 2023). Asimov’s Three Laws of Robotics, Applied to AI. Psychology Today.
  19. Natasha Crampton, Microsoft Chief Responsible AI Officer (June 2022). Microsoft’s framework for building AI systems responsibly. Microsoft On the Issues.
  20. Microsoft (2024). Empowering responsible AI practices. Microsoft AI.
  21. Royal Hansen and Phil Venables (June 2023). Introducing Google’s Secure AI Framework. Google (blog.google).
  22. Google (2024). Google AI Principles. Google AI.
  23. IBM (2024). AI Ethics. IBM.
  24. Accenture (2024). Responsible AI: From Principles to Practice. Accenture.