By Perry Carpenter, Chief Human Risk Management Strategist at KnowBe4
Despite initial concerns and some ongoing skepticism about the role of generative AI, it’s fair to say the genie is out of the bottle and there’s no putting it back. In fact, about 97% of business owners believe that GenAI is poised to benefit their businesses, despite some obvious and well-publicized problems with its use.
- In its first public demo, Google’s Bard AI made a factual error about the James Webb Space Telescope. More than just an “oops,” the error wiped roughly $100 billion off the market value of Google’s parent company, Alphabet.
- A deepfake video of President Volodymyr Zelenskyy exhorting Ukrainian troops to surrender, while ultimately unconvincing, was a chilling preview of the role synthetic media could play in international security.
- In 2023, two lawyers were fined $5,000 for submitting a legal brief that cited fictitious case law generated by ChatGPT.
These and a host of other examples should cause companies to carefully consider how GenAI is used in their organizations.
Key Ethical Concerns
While GenAI offers real potential, it also brings concrete risks and ethical concerns. For instance:
- Misinformation and disinformation: GenAI’s superpower (its capacity for creativity) comes with a side effect: “hallucinations.” Hallucinations occur when these AI systems simply make up information that, while syntactically fluent, has no basis in fact. They can take the form of false historical claims, incorrect dates, non-existent citations, or entirely invented scenarios.
- Privacy and consent: GenAI platforms draw on large, publicly available datasets containing content whose creators may never have given explicit permission for its use. Authors, artists, designers, and academics have raised concerns about their intellectual property being ingested and reproduced by GenAI models. Companies are also at risk of leaking their own proprietary data if employees inadvertently feed it to the GenAI models they use (see the redaction sketch after this list).
- Bias and fairness: There have been several cases of AI perpetuating biases present in its training data. One widely reported hiring tool, for instance, rated male job applicants as more desirable than female applicants because it had been trained on historically male-dominated hiring data.
- Security risks: Without safeguards in place, the use of GenAI can create security risks that threaten your company’s intellectual property, data, and employee and customer information.
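To make the data-leakage risk concrete, here is a minimal sketch of a pre-submission filter that redacts obviously sensitive strings before a prompt ever reaches an external GenAI model. The patterns and function names are illustrative assumptions, not a recommended ruleset; a real deployment would rely on a dedicated data loss prevention (DLP) tool.

```python
# Illustrative sketch only: redact sensitive strings before a prompt is
# sent to an external GenAI model. Patterns are toy examples, not a real
# DLP ruleset.
import re

REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),  # hypothetical key format
}

def redact(prompt: str) -> str:
    """Replace each pattern match with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this email from jane@example.com; our key is sk-abcdef1234567890XYZ"
    print(redact(raw))
    # -> Summarize this email from [REDACTED EMAIL]; our key is [REDACTED API_KEY]
```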
It’s important to ensure that GenAI is used to create content in an ethical manner, protecting the company, its customers, and its stakeholders.
Steps Toward Ethical AI Practices
Do your employees know whether they should be using GenAI tools to create content? Do customers know how your organization is using AI content to communicate and interact with them? Do employees understand the risks related to the use of GenAI in terms of protecting both their own and the company’s information?
These issues should not be taken for granted or left to chance. To help build a strong foundation for the ethical use of GenAI, consider the following tips:
- Create a cross-functional AI ethics committee, drawing members from IT, legal, HR, and marketing, to regularly review and update policies on the use of AI.
- Develop a clear and comprehensive policy that identifies acceptable and unacceptable uses of AI-generated content. The policy should include ways to avoid the spread of misinformation, obtain consent, and ensure privacy.
- Implement verification steps that check AI-generated content for authenticity and accuracy before it is published or distributed.
- Promote transparency in the use of AI with customers, employees, and other audiences. This might include disclaimers on AI-generated content, as well as open discussion of how AI is, and is not, being used. Consider watermarking or metadata tagging to clearly identify AI-generated content (a minimal tagging sketch follows this list).
- Invest in AI bias detection tools such as content filtering systems that screen AI-generated text for potentially biased or inappropriate language. Adjust training data and modify algorithms to minimize the potential for bias.
- Take a “human-in-the-loop” (HITL) approach to the review and approval of AI-generated content, so that nothing is published without a person signing off (a simple publishing-gate sketch follows this list). Conduct regular audits to continually assess the quality and accuracy of content.
- Educate employees on the appropriate use of AI, and make education and security awareness training an ongoing program. Determine which tools are appropriate to use and which should not be used. Explain the “why” behind your policies and practices, and create open lines of communication to foster trust and transparency.
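On the transparency point above, the snippet below sketches one way metadata tagging might look: each piece of AI-generated text is wrapped with provenance fields before it enters a publishing pipeline. The field names are assumptions for illustration, not an established provenance standard such as C2PA.

```python
# Illustrative sketch: wrap AI-generated text with provenance metadata.
# Field names are assumptions, not a published standard.
import hashlib
import json
from datetime import datetime, timezone

def tag_ai_content(text: str, model: str, reviewer: str | None = None) -> dict:
    """Attach provenance metadata to a piece of AI-generated text."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "generator": model,  # which model produced the draft
            "created_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
            "reviewed_by": reviewer,  # stays None until a human signs off
        },
    }

if __name__ == "__main__":
    record = tag_ai_content("Draft customer FAQ ...", model="example-llm-v1")
    print(json.dumps(record, indent=2))
```

Carrying a hash of the content alongside the tag makes later tampering detectable, and the reviewed_by field ties directly into the human-in-the-loop step.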
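For the bias-screening and HITL recommendations, here is a minimal publishing gate under two stated assumptions: the flagged-term list is a toy stand-in for a real content filtering or bias-detection service, and human_approved is set by whatever review workflow your organization already runs.

```python
# Illustrative sketch: a publishing gate that combines a crude content
# screen with mandatory human approval. The term list is a toy example;
# a real deployment would call a dedicated moderation or bias-detection service.

FLAGGED_TERMS = {"guaranteed", "miracle", "always", "never"}  # illustrative only

def screen_text(text: str) -> list[str]:
    """Return flagged terms found in the text; an empty list means it passed."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return sorted(words & FLAGGED_TERMS)

def publish_decision(text: str, human_approved: bool) -> str:
    flags = screen_text(text)
    if flags:
        return f"HELD for review (flagged terms: {', '.join(flags)})"
    if not human_approved:
        return "HELD: awaiting human approval"  # nothing ships without sign-off
    return "PUBLISHED"

if __name__ == "__main__":
    draft = "Our product always works, guaranteed."
    print(publish_decision(draft, human_approved=False))
```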
AI-generated content holds significant promise for all kinds of organizations: automating routine tasks, enhancing creativity, and boosting productivity. However, safeguarding your data, systems, and reputation can’t be left to chance. Take steps to ensure the ethical and safe use of AI for content generation and communication, using a blend of human oversight and technological safeguards.
Perry Carpenter is a multi-award-winning author, podcaster, and speaker, with over two decades in cybersecurity focusing on how cybercriminals exploit human behavior. As the Chief Human Risk Management Strategist at KnowBe4, Perry helps build robust human-centric defenses against social engineering-based threats. His latest book, “FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions” [Wiley: Oct 2024], explores AI’s role in deception. Perry also hosts the award-winning podcasts 8th Layer Insights and Digital Folklore. He can be reached by email at perryc@knowbe4.com.