Security

AI – and machine learning in particular – has been touted as the solution to myriad security problems, from detecting advanced threats to easing the cyberskills crisis.

When a ransomware syndicate launches a million attacks, it counts a success rate of just 0.001% (10 successful attacks) as a victory. Defenders, however, need to repel 100% of attacks or risk serious consequences for their organization. And that’s simply not something that humans alone can do.

AI is therefore seen as a way of cutting through the noise to help analysts in security operations centers focus on the incidents that truly threaten their organization’s security posture.

But is it working? Or is AI creating even more unnecessary noise? The answer lies in how the technology is implemented rather than in the solution itself.

Just as the mere existence of a firewall doesn’t stop cyberattackers at your system perimeter, adding AI to a solution without considering your threat environment and tuning the algorithm accordingly will lead to an excess of false positives – or, even worse, too many false negatives.
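To make that trade-off concrete, here is a minimal sketch in Python – with made-up anomaly scores and labels rather than output from any real detector – showing how moving a single alert threshold shifts errors between false positives and false negatives:

```python
# Illustrative only: hypothetical anomaly scores from a detector,
# paired with ground truth (1 = actual attack, 0 = benign).
events = [
    (0.95, 1), (0.90, 1), (0.62, 1), (0.40, 1),   # real attacks
    (0.70, 0), (0.55, 0), (0.30, 0), (0.10, 0),   # benign activity
]

def confusion(threshold):
    """Count the two error types at a given alert threshold."""
    false_positives = sum(1 for score, label in events
                          if score >= threshold and label == 0)
    false_negatives = sum(1 for score, label in events
                          if score < threshold and label == 1)
    return false_positives, false_negatives

for threshold in (0.25, 0.50, 0.85):
    fp, fn = confusion(threshold)
    print(f"threshold={threshold:.2f}  false positives={fp}  false negatives={fn}")
```

Lowering the threshold drowns analysts in false alarms; raising it lets real attacks slip through. Tuning means finding the point between those extremes that matches your threat environment.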

For AI to be effective, it must be trained with a lot of data  

MIT Sloan School of Management describes AI and machine learning as different but closely related disciplines: although the terms are often used interchangeably, machine learning is “a subfield of AI that gives computers the ability to learn without explicitly being programmed”. Machine learning, then, is just one way of achieving AI, but it is the dominant one today.

The key to successful AI is a good fit between past data and future events. Broadly, once an algorithm has been defined, it is fed large amounts of historical data to fine-tune its output. If the output doesn’t match the known outcomes well enough, the algorithm adjusts the weightings of the factors in its equations, then runs against the data sets again, repeating until it achieves a good fit. Once operational, it continues to be updated with real data.
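As a deliberately simplified sketch of that loop – synthetic data, a single weighting and a bias term, nothing resembling a production security model – consider fitting a straight line to historical points that follow y = 2x + 1:

```python
# Synthetic "historical data": outcomes that follow y = 2x + 1.
history = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (4.0, 9.0)]

w, b = 0.0, 0.0          # the weightings the algorithm will adjust
learning_rate = 0.01

for step in range(5000):
    grad_w = grad_b = squared_error = 0.0
    for x, y in history:
        diff = (w * x + b) - y          # prediction vs. known outcome
        squared_error += diff ** 2
        grad_w += 2 * diff * x
        grad_b += 2 * diff
    # Adjust the weightings in the direction that reduces the error,
    # then run against the data set again.
    w -= learning_rate * grad_w / len(history)
    b -= learning_rate * grad_b / len(history)
    if squared_error / len(history) < 1e-9:   # good enough fit: stop
        break

print(f"learned fit: outcome ≈ {w:.2f} * input + {b:.2f}")
```

Real systems learn thousands of weightings over far messier data, but the fit-adjust-repeat cycle is the same.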

An algorithm can predict an outcome accurately only if it has enough definitive past data to infer a future event.

AI must balance false positives and false negatives

As Google’s manual on model training explains, reducing false positives by raising a model’s classification threshold tends to increase false negatives, and vice versa – the threshold has to be set where the two are in acceptable balance for the task at hand.
