Back in what we’d call the Bad Old Days, information security was both easier and more difficult than it is today. The attack surfaces were smaller and less complex. Threats were as likely to be a kid defacing a website as a complex attack designed to damage business infrastructure. And security logs were measured in megabytes a day. An easier time. Except our tools consisted of shell scripts, grep, and whatever we could cobble together for visualization. Security automation consisted of whatever scripts we’d written to handle repetitive tasks, and really not much else.
Things have progressed a lot in the twenty-plus years since those dark days.
Things Got Better
Security Event Management and Security Information Management tools merged into SIEM, for Security Information and Event Management, a term coined by analysts at Gartner in 2005. Life got better. Analysts now had a consolidated view of what was going on in the environment – assuming they could set up their displays and filters to remove the cruft. It wasn’t just false positives. It was information overload. Anyone who’s seen a SIEM dashboard can tell you how confusing that wall of color-coded events is to the uninitiated.
When SOAR, for Security Orchestration, Automation, and Response, officially entered the picture after Gartner defined it in 2017, we gained tools that improved orchestration and response for the SecOps team. Life got better again, but there was still room for improvement.
Machines Up Their Game
This is where advanced behavioral analytics comes into the picture, bringing along machine learning and artificial intelligence. While many SIEM and SOAR solutions have some layers of built-in security automation, the majority of them are rules-based: if I see Event A, I respond with Action B and display Alert C. While the systems ship with a lot of rules, and it’s gotten easy to modify existing ones and create new ones, they’re still rules. They still have the limitations imposed by being rules.
Rules have trouble dealing with edge cases and outliers. If you’re worried about people logging into the system after hours, you have to decide what constitutes “after hours” as a rule – say, any time after 2330 (11:30 PM) and before 0630 (6:30 AM). Now, any login event during that time throws an alert. Chances are, that alert is configured at some set threat level and color-coded to match. So when someone logs in 15 minutes before midnight, the on-duty analyst in the SOC sees a low-severity orange alert pop up, indicating nothing serious, and it promptly scrolls out of sight and out of mind.
Nine times out of ten, or more likely ninety-nine times out of a hundred, it is a low-threat incident. The analyst knows that, so they ignore it. But that means they’ll miss the one time out of many when it really is something unusual. That late login, combined with access to systems the user rarely touches, combined with a few other indicators, all point to a hostile actor logging in with stolen credentials. A rules-based system won’t catch that. It doesn’t correlate all of those events, so it can’t present the analyst with a unified risk score.
Adapt at Machine Speed
Behavioral analytics does exactly that. That after-hours login shows up as a moderate risk at worst and doesn’t even hit the analyst’s radar. But when the user starts poking around in places they don’t normally go, the risk score goes up. It continues to rise when the user triggers other events – until the alert pops up on the analyst’s dashboard with a high-confidence risk score. Of course, by then, the system could have already triggered an automated response that locked the wayward user out of sensitive systems.
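The cumulative scoring described above can be sketched as a simple weighted sum with a response threshold. The indicator names, weights, and threshold here are illustrative assumptions, not how any particular product scores risk.

```python
# Hypothetical indicator weights -- chosen for illustration only.
INDICATOR_WEIGHTS = {
    "after_hours_login": 10,      # moderate risk at worst, on its own
    "unusual_system_access": 30,  # poking around in unfamiliar places
    "privilege_escalation": 40,
    "bulk_data_read": 35,
}
LOCKOUT_THRESHOLD = 70  # assumed cutoff for automated response

def assess(session_indicators: list[str]) -> tuple[int, bool]:
    """Correlate a session's indicators into one unified risk score,
    and decide whether to trigger the automated lockout."""
    score = sum(INDICATOR_WEIGHTS.get(i, 0) for i in session_indicators)
    return score, score >= LOCKOUT_THRESHOLD
```

The point of the sketch: an after-hours login alone scores 10 and never reaches the dashboard, but the same login plus unusual system access plus a bulk data read crosses the threshold and can trigger the lockout before an analyst ever looks at it.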
Machine learning also makes the behavioral analytics engine more effective by letting the rules be a bit fuzzy around the edges. That 2330-to-0630 rule doesn’t trigger if the user logs in at 2325, or at 0635. But behavioral analytics can see the outliers. They’re close, so they raise the threat level. It can also lower the threat level if the user has a habit of logging in at weird hours. That aspect of their behavior becomes normalized, while the other indicators aren’t affected.
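Those two ideas – soft edges on the time window and per-user normalization – can be sketched together. The 30-minute taper, the discount factor, and the habit cap are all assumptions for illustration; a real engine would learn these from the user’s history rather than hard-code them.

```python
from datetime import time

def minutes_into_night(t: time) -> int:
    """Minutes elapsed since 23:30, wrapping around midnight."""
    return ((t.hour * 60 + t.minute) - (23 * 60 + 30)) % (24 * 60)

def night_login_risk(t: time, habitual_night_logins: int) -> float:
    """Fuzzy after-hours score: full risk inside 23:30-06:30, half
    risk within an assumed 30-minute taper on either edge, then
    discounted by how routinely this user logs in at night."""
    m = minutes_into_night(t)
    window = 7 * 60   # 23:30 -> 06:30 is seven hours
    taper = 30        # assumed soft edge width, in minutes
    if m < window:
        raw = 1.0
    elif m < window + taper or m >= (24 * 60) - taper:
        raw = 0.5     # 23:25 or 06:35 still scores, unlike a hard rule
    else:
        raw = 0.0
    # Normalize habitual night owls: up to an assumed 80% discount
    # after 20 observed night logins. Other indicators are untouched.
    discount = 0.8 * min(habitual_night_logins, 20) / 20
    return raw * (1 - discount)
```

A first-time 23:25 login scores 0.5 instead of the zero a hard rule would give it, while a user with twenty prior night logins sees even a mid-window login discounted to a fraction of full risk.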
It all rolls into an effective system that can adapt to a changing environment at Machine Speed and provide automated reactions and intelligent risk-based alerts for the live humans in the loop.
Watch the Webinar
Learn more about how Artificial Intelligence can drive security automation at Machine Speed.