At its most basic, the argument in favour of automation runs as follows: if a machine can do the work of a human in a way that allows that human to get on with more complex or analytical tasks, why wouldn’t you do that?
This is particularly relevant to cyber security operations because of the sometimes delicate nature of the work, and because threat detection often depends on a level of human comprehension and reasoning that machines can't yet match.
And there are plenty of examples within a functioning Security Operations Centre (SOC) of jobs that can be automated, freeing analysts to work at second- and third-line capacity.
Recording the number of viruses blocked for clients each day on a SharePoint site or creating tickets for intrusion detection events are simple examples of generic, menial SOC tasks. With automation products and processes, a machine can complete these time-consuming tasks, delivering a clear benefit over tying up an analyst to do them.
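As an illustrative sketch of the ticketing example, the loop below raises a ticket for every intrusion detection event. The event fields and the in-memory queue are invented for the example; a real SOC would call a ticketing product's API instead.

```python
# Sketch only: automatically raise a ticket per intrusion detection event.
# The event format and ticket structure here are hypothetical placeholders,
# not any specific product's API.

def create_tickets(queue, events):
    """Append one ticket dict to the queue for each IDS event."""
    for event in events:
        queue.append({
            "title": f"IDS alert: {event['signature']}",
            "host": event["host"],
            "severity": event["severity"],
        })
    return queue

events = [
    {"signature": "port scan", "host": "10.0.0.5", "severity": "low"},
    {"signature": "SQL injection attempt", "host": "10.0.0.9", "severity": "high"},
]
tickets = create_tickets([], events)
```

Even a simple script like this removes a repetitive copy-and-paste job from an analyst's day; the value compounds across hundreds of events.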
An effective cyber security programme involves a balance of processing large amounts of data and acting on a very human intuition to uncover and respond to threats.
To the left, to the left
When it comes to automation and cyber security, we often talk about the concept of ‘shifting left.’
In this instance, shifting left means improving the efficiency of a project so it's completed more quickly. If you picture a timeline of milestones on a project, you can see how shifting each event to the left brings your endpoint nearer to the beginning of the timeline.
Efficiency and productivity are the lifeblood of any successful organisation, but within the context of a SOC, it could mean the difference between a threat being detected and stopped or not.
At Fujitsu, we’re already well into developing proof-of-concepts to record, collate and provide risk scores from multiple different sources simultaneously.
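The article doesn't describe how those proof-of-concepts work internally, but one common way to combine risk scores from multiple sources is a weighted average per asset. The source names and weights below are invented purely for illustration.

```python
# Sketch only: combine per-source risk scores (0-100 scale) into one
# weighted score. Source names and weights are assumptions for the
# example, not Fujitsu's actual method.

def combined_risk(scores, weights):
    """Weighted average of the risk scores reported by each source."""
    total_weight = sum(weights[src] for src in scores)
    return sum(scores[src] * weights[src] for src in scores) / total_weight

weights = {"ids": 0.5, "antivirus": 0.3, "threat_intel": 0.2}
scores = {"ids": 80, "antivirus": 40, "threat_intel": 90}
print(round(combined_risk(scores, weights), 2))  # 70.0
```

A single consolidated score lets an analyst triage at a glance instead of reading each feed separately.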
Time saved for SOC engineers and analysts is time that can be spent on completing the complex, non-administrative tasks they’ve been trained for.
An unknown future
The growth of the Internet of Things has broadened the threat landscape and introduced new risks.
Mirai botnets are the most prominent representation of the scale of these threats, and yet there’s every indication that these examples are yet to reach their full potential.
The combination of this great unknown and the UK’s skills shortage makes engineers’ time all the more valuable.
If we’re going to tackle the risks we already know about today and those unknowns that will come in the future, we need to have as much human brain power at our disposal as possible.
Artificial Intelligence (AI) will be vital in supporting this.
Deploying a machine workforce
The ability of machines to parse huge data sets in a relatively short amount of time is already proving invaluable to cyber security engineers and analysts.
The same principles are already built into products whose algorithms spot anomalous behaviour and potential malicious activity by highlighting deviations from a perceived norm of safe or 'good' behaviour.
Products are being trained to monitor and recognise patterns in harmless, everyday behaviour to spot any abnormality, regardless of whether it is yet known to be threatening or not.
If, for instance, a reception desk computer is typically logged in and active on a steady 9-5 routine throughout the working week, a machine learning program that has observed this would be able to flag if the same unit is being used outside of those hours or if it’s connected to databases that it would usually have no reason to be accessing.
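The reception-desk example above can be sketched in a few lines: learn the hours a machine is normally active during an observation period, then flag any activity outside that window. The data and the hour-based baseline are deliberately simplified; real products build far richer behavioural profiles.

```python
# Sketch of the reception-desk example: baseline the hours a machine is
# normally active, then flag out-of-hours activity. The observation data
# is invented for the example.

def learn_active_hours(observed_hours):
    """Baseline = the set of hours seen during the observation period."""
    return set(observed_hours)

def is_anomalous(hour, baseline):
    """Flag any activity at an hour never seen in the baseline."""
    return hour not in baseline

# A week of observations: logins only between 09:00 and 17:00.
baseline = learn_active_hours(h for day in range(5) for h in range(9, 17))

print(is_anomalous(10, baseline))  # False: normal working hours
print(is_anomalous(23, baseline))  # True: an 11pm login is flagged
```

The same idea extends to which databases a host connects to: anything absent from the baseline set gets flagged for a human to review.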
The overlay is a SOC analyst who can quickly and easily investigate and take the appropriate action. It's a change in approach for security operations centres: more and more products coming to market identify anomalous behaviour for further investigation, rather than relying on the traditional approach of spotting viruses by their signatures.
Again, all of this depends on freeing analysts and engineers to use their time more effectively – taking the basic or more labour-intensive tasks out of their daily routines and 'shifting left.'
Achieving the correct balance
Ultimately, success will come from a combined approach – people working in tandem with machines.
With its insatiable appetite for learning, AI's potential applications within cyber security are limited only by what we can imagine.
It’s an entirely realistic possibility, too, that in the coming years we’ll be using AI to help us to identify threats and vulnerabilities that are yet to be developed.