By Erich Kron
The doom, gloom, and dire predictions about the dangers of artificial intelligence are finally beginning to give way to the recognition that AI holds a lot of promise for many organizations if adopted responsibly and strategically.
AI holds the power to enhance efficiencies, streamline supply chains, and boost innovation and product development. Enterprises are recognizing that the rewards of AI outweigh the risks and are taking steps to ensure they leverage this technology responsibly.
Recognizing the Risks
Yes, there are risks related to the use of AI. But there are risks related to the use of any technology in the workplace. AI's risks center on accuracy, accountability, transparency, fairness, privacy, and security.
These risks can be addressed and minimized by applying standards and best practices such as those recommended by the National Institute of Standards and Technology (NIST). We also recommend focusing on creating and sustaining a strong AI security culture through five best practices:
- Transparency to drive awareness and action
The old PR advice of “tell it all, tell it now” applies to AI communications. Your communication should be clear and transparent. Policies, procedures, and best practices should be developed and shared with all stakeholders. Employees cannot possibly use AI responsibly if they do not understand what ‘responsible use’ means to the organization.
Keep in mind that AI tools are broadly and readily available and may seem innocuous enough to users. But if users do not understand how threats can emerge from the inappropriate use of these tools, your systems, data, and reputation may be at risk.
Make it clear what tools can be used and how they should be used—as well as what tools should not be used, and why. Explaining the “why” behind your policies and procedures can go a long way toward ensuring that employees are supportive and compliant.
- Training to keep the message alive
AI is still in its infancy and new to all of us. Employees cannot be expected to use AI appropriately if they do not fully understand what it is and how they should use it. AI can appear intimidating: it involves a learning curve, and people may fear that it will take their jobs away.
Training needs to focus on helping employees do their work, leveraging AI for gains in both efficiency and effectiveness. Because AI rules and regulations are rapidly emerging and evolving, training needs to be ongoing. Training should also be available through multiple channels, allowing employees to access information on demand when they need it.
- Targeting risks through a healthy security culture
Security culture drives both behaviors and beliefs. A security-first organization promotes information sharing, transparency, and collaboration. When risks are discovered, or when issues occur, communication should be immediate and designed to clearly convey to employees how their behaviors and actions can either support or detract from security efforts.
Enlist employees in these efforts by ensuring that your culture is positive and supportive. Make security and AI risk awareness an integral part of your organization’s values, norms, and unwritten rules. Help employees be allies by arming them with the resources, tools, and support they need.
- Accepting feedback and responding openly
Security culture does not exist in a vacuum and does not evolve in a silo. Input from a wide range of stakeholders, from employees and customers to partners, regulators, and the board, is critical for ensuring that you understand how AI is enabling efficiencies and where risks may be emerging.
Proactively seeking input from key constituents in an open and transparent manner makes them more likely to share their concerns and helps uncover potential risks while there is still time to adequately address them. Acknowledge and respond to feedback promptly, and highlight the positive impacts of that feedback.
- Tackling third-party risks
AI risks can be introduced to your organization through third-party suppliers and apps. Keep in mind that AI is pervasive today, incorporated into a wide range of programs and tools, which your company may already be using.
Take steps to ensure full awareness of how third-party partners are using AI and how that usage impacts your business. Use a documented approach to address these risks, and have contingency plans in place in the event of any third-party incidents or failures.
Make a concerted effort to capture the maximum benefits AI can deliver to the organization while safeguarding its systems, data, and people. Regularly measure and track your organization's security culture, identifying areas where progress has been made and those that still need improvement.
About the Author
A 25-year veteran information security professional with experience in the medical, aerospace, manufacturing, and defense fields, Erich Kron is Security Awareness Advocate for KnowBe4. An author and regular contributor to cybersecurity industry publications, he was a security manager for the U.S. Army's 2nd Regional Cyber Center-Western Hemisphere and holds CISSP, CISSP-ISSAP, SACP, and many other certifications. Erich has worked with information security professionals around the world to provide the tools, training, and educational opportunities they need to succeed in information security.