AI’s Impact on Privacy: How to Balance Innovation with Data Security

By Erich Kron, Security Awareness Advocate

It’s not an exaggeration to say that AI, and generative AI in particular, has had a disruptive impact on businesses of all kinds. GenAI tools hold promise to boost efficiency and aid in data analysis across professions, from HR to marketing to finance, touching virtually every role in every company around the globe.

These impacts are already being realized in both positive and potentially perilous ways. AI can be used to build or destroy. To help or harm.

AI as Villain—or Ally

AI tools can embed bias and threaten the reliability of data, sometimes producing outputs that are far from trustworthy. They can also be used to generate fake content, whether written, audio, or video. Consider, for instance, a phishing email written to perfectly replicate your CEO’s communication style.

But AI can also help us process vast amounts of data extremely quickly, offering insights that would never have been possible before.

Your company is likely already using these tools, whether or not they’ve been sanctioned. And while AI can certainly help boost productivity and drive innovation, it can also put company data at risk.

It’s important to ensure that your use of AI balances innovation with data security.

Ensuring the Appropriate Use of AI Tools

Organizations need to understand when and how to use AI tools, and which tools to use. Larger companies may have their own proprietary tools; others may rely on publicly available tools such as ChatGPT.

Having a policy in place to guide this use can help ensure your data isn’t inadvertently put at risk. Strong security measures, such as enterprise AI tools with enhanced security features, can also help minimize missteps.

AI as an Ally in Digital Defense

AI’s ability to analyze massive amounts of data and identify trends can be leveraged to surface abnormal patterns that may point to potential threats. AI can be used to detect suspicious user behavior, identify malware, and alert the organization to security breaches.
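To make that concrete, here is a minimal sketch of behavior-based anomaly detection using Python and scikit-learn. The features, sample data, and model settings are illustrative assumptions, not a production detection pipeline.

```python
# Minimal sketch: flagging anomalous user activity with an unsupervised model.
# Features, sample values, and settings are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy "session" records: [hour of day, MB transferred, failed login attempts]
normal_activity = np.array([
    [9, 120, 0], [10, 80, 1], [14, 200, 0], [11, 95, 0], [16, 150, 1],
    [9, 110, 0], [13, 130, 0], [15, 90, 0], [10, 105, 1], [12, 140, 0],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# A 3 a.m. session with a huge transfer and repeated failures stands out.
suspicious = np.array([[3, 5000, 6]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```

In practice, a flag like this would feed an analyst’s review queue rather than trigger automatic action, which is where the human defenses discussed below come in.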

For instance, researchers used a version of the Llama 2 large language model to identify suspicious social media posts that were then given to GPT-3.5 for analysis. The results were impressive. The AI assistant worked at lightning speed, was able to understand posts in multiple languages, and could quickly make sense of massive datasets.
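For illustration only, a triage step along these lines might look like the sketch below. It assumes the OpenAI Python client with an API key in the environment; the prompt wording and labels are hypothetical, not the researchers’ actual pipeline.

```python
# Illustrative sketch only: not the researchers' actual pipeline.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_post(post_text: str) -> str:
    """Ask the model to label a social media post as SUSPICIOUS or BENIGN."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Label the following social media post as SUSPICIOUS "
                        "or BENIGN based on signs of phishing, scams, or "
                        "disinformation. Reply with one word."},
            {"role": "user", "content": post_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(triage_post("URGENT: your account is locked, verify at hxxp://bank-login.example"))
```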

While the experiment showed that the technology isn’t infallible, it also pointed to AI’s potential to become an indispensable part of our digital defense toolkits, helping organizations stay a step ahead of increasingly clever threat actors.

So, no, these tools aren’t failsafe. But they still represent important aids in an ongoing battle to thwart cybercriminals and protect proprietary data.

In addition to technology tools, though, we also need strong human defenses.

Building Up Human Defenses

By combining tech solutions with education, awareness, and a healthy respect for human creativity, you can build and sustain a strong security culture. Employees represent your first line of defense. Collaborate with users to actively cultivate cognitive strategies, such as critical thinking, that help them navigate the digital communication environment.

Technological defenses like AI content checkers and deepfake detectors can be helpful, but they’re not perfect. Use these tools to augment human judgment rather than replace it. By teaching users to view the world through the eyes of a potential attacker, you can help them understand the importance of taking personal responsibility for verifying information before accepting or sharing it.

Strategies for Balancing Innovation With Security

There are several steps organizations can take to leverage technology while also arming users with the education, information, and resources they need to play an important role in digital defense.

  • Enhance media literacy. Educating employees about both the potential and the peril of AI tools is critical. Work with users to develop a healthy sense of skepticism, helping them to better navigate the digital environment at work and in their personal lives. Help users learn to spot the telltale signs of misinformation like emotional manipulation, false context, and unverified claims.
  • Use technology to authenticate digital content. Watermarking is one example; while not without limitations, it can provide value. For instance, when dealing with large volumes of user-generated content, it can be used to flag potentially synthetic media.
  • Implement strong data governance. Prioritize data security through a comprehensive governance framework that ensures data transparency, consent management, and adherence to privacy regulations. Keep in mind that when users understand the why behind these actions, they’re more likely to follow desired policies and processes.
  • Keep up the conversation. Share information about the pros and cons of AI tools and offer examples of both the benefits and potential pitfalls. An environment of trust and transparency can help avoid misuse and ensure the proper use of AI in ways that benefit both employees and the organization.

By collaborating, staying vigilant, and being proactive, we can harness the positive power of AI tools and navigate their challenges, boosting innovation without sacrificing data security.

A 25-year veteran information security professional with experience in the medical, aerospace, manufacturing, and defense fields, Erich Kron is Security Awareness Advocate for KnowBe4. An author and regular contributor to cybersecurity industry publications, he was a security manager for the U.S. Army’s 2nd Regional Cyber Center-Western Hemisphere and holds CISSP, CISSP-ISSAP, SACP, and many other certifications. Erich has worked with information security professionals around the world to provide the tools, training, and educational opportunities to succeed in information security.


LinkedIn: https://www.linkedin.com/in/erichkron/