People and machines: making security stronger together

Friday 5th February 2021 | 3 minute read

It’s been well documented that there’s a serious shortage of good cyber security skills: one recent survey cited a talent shortage of 140,000, and another found that over three quarters (78%) of UK hiring managers say that attracting and retaining skilled IT security professionals is a major challenge.

We’ve written some pieces recently about the advantages of incorporating artificial intelligence (AI) and machine learning (ML) into your security operations, including how they reduce white noise and why security is an ideal use case for them. And while AI and ML do provide huge benefits in terms of cost savings, operational efficiency and smarter strategic decision making, it’s important to remember that experienced security people still have a key role to play.


Rubbish in – rubbish out

AI and ML tools are, of course, based on the premise of self-learning, but initial set-up and ongoing monitoring still need to be undertaken by a person or team. If that initial configuration is done wrongly, everything that subsequently comes out is likely to cause problems. This could mean a security team drowning in false positive alerts, or analysts being sent down rabbit holes in search of malicious activity that isn’t there – the product of misconfigured ML algorithms.

Merritt Maxim, research director for security at analyst firm Forrester, said: "If the inputs are bad and it's passing things through it says are okay, but it's actually passing real vulnerabilities through because the model hasn't been properly tuned or adjusted – that's the worst case because you think you're fully protected because you have AI".

The opportunities for customisation provided by tools like Azure Sentinel, via features such as ‘bring your own machine learning’ (BYOML), are also double-edged. Custom rules can be built to meet very specific needs, but they require specific human input to make them work effectively, and any errors could have serious consequences.


Creating a new sort of danger?

The increasing availability of AI and ML technologies means, by definition, that they are also available to hackers and other malicious actors. Europol has reported that AI could make cyberattacks more difficult to spot, and ML algorithms are being targeted by criminals with data poisoning – feeding a model crafted inputs that ‘normalise’ untoward activity so it’s no longer flagged as anomalous – making these attacks potentially far more dangerous. Fighting AI and ML with more of the same is simply not enough, but having an experienced team doing human-actioned analysis, combined with correlation and signature-based rules, can make the difference.
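To illustrate why data poisoning works, here is a minimal sketch (not any product’s real detection logic – the metric and numbers are invented for illustration) of a simple statistical baseline being drifted by an attacker who drip-feeds gradually larger values until a genuine spike blends into the ‘normal’ range:

```python
import statistics

def is_anomalous(history, value, z_threshold=3.0):
    """Flag a value more than z_threshold standard deviations
    from the mean of the observed history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(value - mean) / stdev > z_threshold

# Clean baseline: roughly 100 login attempts per hour.
clean = [98, 101, 99, 102, 100, 97, 103, 100, 99, 101]
print(is_anomalous(clean, 500))   # True: a spike of 500 stands out

# Poisoned baseline: the attacker drip-feeds steadily larger
# counts so the model learns that big spikes are "normal".
poisoned = clean + [150, 200, 260, 330, 410, 500]
print(is_anomalous(poisoned, 500))  # False: the same spike now blends in
```

The same spike of 500 attempts is flagged against the clean baseline but slips through once the history has been poisoned – which is exactly why human review of what a model has ‘learned’ still matters.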

There have already been reports of AI being used in cybercrime to fraudulently obtain funds, and ML is apparently being used in a variety of criminal scenarios, particularly around social engineering, impersonation and password cracking.


Rise with the machines

AI-based tools still have huge potential to benefit businesses and help bridge the skills gap that exists. If they continue to develop and improve, and are applied in the right circumstances alongside people-based security operations rather than as a replacement for them, businesses can have confidence that they will stay secure in the face of increasingly clever cyberattacks.

For example, using both AI and ML to establish a baseline of activity to support detection of threats and potential breaches takes a huge resource burden off the humans in the team. They can then focus on assessing the threat and undertaking any required remediation tasks, with peace of mind that a critical application or server is not being taken offline just because it’s operating outside of its ‘normal baseline’ for a legitimate reason.
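A sketch of that division of labour might look like the following – a hypothetical triage function (the alert fields and thresholds are illustrative, not from any particular SIEM) where the baseline model only flags deviations and a human always makes the remediation decision:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    metric: str
    value: float
    baseline_mean: float
    baseline_stdev: float

def triage(alert, z_threshold=3.0):
    """The model flags deviations from baseline; a human decides.
    Nothing is ever taken offline automatically on baseline data alone."""
    z = abs(alert.value - alert.baseline_mean) / alert.baseline_stdev
    if z <= z_threshold:
        return "ignore"              # within normal bounds, no human time spent
    return "escalate_to_analyst"     # an analyst assesses and remediates

alert = Alert("app-server-01", "cpu_percent", 96.0, 40.0, 8.0)
print(triage(alert))  # escalate_to_analyst: z = 7, so a human reviews it
```

The machine handles the bulk filtering that would otherwise consume analyst hours, while the final call on a critical application or server stays with a person who can recognise a legitimate reason for the deviation.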


Working smarter

With the skills shortage set to remain an issue, many organisations are taking a two-pronged approach to improving their security posture. The first prong is making use of technology and tools like AI and ML to help them do more with the experienced security people they do have.

The second prong is working with strategic, experienced partners: 65% of organisations currently outsource some or all of their IT security efforts, a figure expected to rise to 72% by 2022, according to research from Sophos. An experienced partner can act as an extension of your IT team and provide exactly the kind of support you need, from a security health check, to out-of-hours cover, to full 24/7 support.

To learn more about how Maple Networks can support your organisation, please get in touch.

Contact us