AI Bias and the Dangers It Poses for the World of Cybersecurity

By Khurram Mir of Kualitatem

AI technology is becoming increasingly popular across a range of industries, which are using it to speed up their processes and improve productivity. While this promises greater efficiency, the technology itself is far from perfect and is often biased.

Unfair Targeting of Certain Groups

It’s no secret that racism is a significant issue in the world, so it should come as no surprise that AI systems created by humans are also influenced by our perceptions, stereotypes, and biases.

AI systems, after all, are trained on vast amounts of information drawn from different periods of human history, and they therefore often lack the moral compass we possess today. In the end, this can lead to unfair decision-making and perpetuate injustices by relying on cold, historical data to make decisions.

For instance, AI systems could unfairly target specific groups of people and raise a red flag simply because someone belongs to a certain demographic. A cybersecurity tool could, for example, send alerts for a piece of software primarily used by people of color: skewed training data could lead the system to deem that software malicious, producing unfair outcomes and needless delays.
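To make this concrete, here is a minimal, self-contained Python sketch of how a team might audit a detector for this kind of skew by comparing false-positive rates across groups. Every record, group name, and number here is synthetic and purely illustrative.

from collections import defaultdict

# Each record: (demographic group of the primary users, true label,
# model verdict). 1 = malicious / flagged, 0 = benign / not flagged.
# All of this data is made up for the example.
events = [
    ("group_a", 0, 1), ("group_a", 0, 1), ("group_a", 0, 0),
    ("group_a", 1, 1),
    ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 0, 1),
    ("group_b", 1, 1),
]

false_positives = defaultdict(int)  # benign items the model flagged
benign_total = defaultdict(int)     # all benign items, per group

for group, true_label, flagged in events:
    if true_label == 0:             # only benign items can be false positives
        benign_total[group] += 1
        false_positives[group] += flagged

for group in sorted(benign_total):
    rate = false_positives[group] / benign_total[group]
    print(f"{group}: false-positive rate = {rate:.0%}")

# A large gap between groups (here 67% vs. 33%) is exactly the kind of
# disparity a human reviewer should investigate before trusting the
# model's alerts.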

Cyber-Attacks Could Be Misread

AI has frequently been used to enhance cybersecurity, but these incomplete and flawed systems are perfect targets for hackers. The same issues that make an AI system flag one group as malicious could paint another demographic as harmless, even when it is not.

For instance, facial recognition technology has already been shown to discriminate against people of color, painting a minor threat as something major. Meanwhile, fair-skinned people can be pushed to the bottom of the priority list even if they pose a greater danger. This could lead to security breaches, as a biased detection system overlooks the most significant threats.

Without careful monitoring, these biases could delay threat detection and result in data leakage. For this reason, companies combine AI’s power with human intelligence to reduce the bias the technology exhibits. Human judgment and a moral compass often keep AI systems from making decisions that would otherwise leave a business vulnerable.
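One common hybrid pattern is to let the model act autonomously only when it is highly confident and to escalate everything else to an analyst. Below is a minimal Python sketch of that idea; the threshold, function names, and scores are assumptions made for illustration, not taken from any particular product.

AUTO_THRESHOLD = 0.95  # assumed cut-off; a real team would tune this

def triage(alert_score: float) -> str:
    """Route a model alert score in [0, 1] (higher = more likely
    malicious) to an action."""
    if alert_score >= AUTO_THRESHOLD:
        return "block_and_notify"   # confident it is malicious: automate
    if alert_score <= 1 - AUTO_THRESHOLD:
        return "allow"              # confident it is benign: automate
    return "human_review"           # uncertain: escalate to an analyst

for score in (0.99, 0.60, 0.02):
    print(f"score {score:.2f} -> {triage(score)}")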

The Bias Could Generate False Positives

AI bias often affects companies because the system has been trained on incomplete data and categorizes threats unfairly. This can result in genuine threats being categorized as “safe,” because the system cannot recognize them without human input.

The opposite can also occur: AI may label harmless activity as malicious. This can produce a stream of false positives that is difficult to diagnose from within the company. For example, a detection system could flag slang in an email as a phishing attempt and send an alert to the security team.
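As a hedged illustration, the deliberately naive keyword filter below shows how this failure mode arises. The phrase list and messages are invented for this example; production systems are far more sophisticated, but overly simplistic or biased features fail in the same way.

SUSPICIOUS_PHRASES = {"urgent", "act now", "free", "click here"}

def looks_like_phishing(message: str) -> bool:
    """Flag a message if it contains any 'suspicious' phrase."""
    text = message.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

# A casual note between colleagues trips the filter (false positive)...
print(looks_like_phishing("Pizza is free in the break room, act now!"))    # True
# ...while a carefully worded real attack slips through (false negative).
print(looks_like_phishing("Please re-verify your payroll details today.")) # False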

Casual emails between employees could be labeled as spam, preventing potentially crucial information from reaching its destination. While some might argue that this proves “the algorithm works,” it can just as easily lead to alert fatigue.

AI threat detection systems were adopted to ease the workload on human analysts by reducing the number of alerts they must handle. Constant red flags, however, can create more work for security teams, leaving them with more tickets to resolve than they started with. The result is employee fatigue and human error, with attention drawn away from the genuine threats that actually endanger security.

The Bottom Line

AI systems can be crucial for cybersecurity, as they can detect threats faster than humans. However, an incomplete data pool and biased information can cause them to mislabel threats, leaving a company vulnerable. While AI can reduce the chances of error, an incorrectly trained system can create more work than it saves. For this reason, businesses prefer a hybrid approach, in which AI systems are used in conjunction with human intelligence.

Khurram Javed Mir

Mir is the Chief Marketing Officer at Kualitatem, a software testing and cybersecurity company.