When the Microsoft Exchange Server vulnerabilities dropped in early March 2021, they heralded a wave of compromises tied to the exploits. While patches rolled out across enterprises running on-premises Exchange servers, malicious actors scrambled to leverage the vulnerabilities as much as possible before the patch closed the hole. As March wound down, the number of vulnerable systems shrank and, with it, the number of reported compromises dropped as well.
It’s a pattern we’ve seen repeated hundreds of times over the years. Literally, hundreds of times. While the scale hasn’t always been so grand as this recent wave, the pattern is still the same. Someone discovers a vulnerability in some piece of software, they weaponize it into an exploit, and they start using it against targets in the wild. Once the vulnerability and exploit become public knowledge, a race starts between malicious actors trying to use the ‘sploit while it’s still effective and sysadmins scrambling to patch against it. The scale usually isn’t Code Red level, but the pattern is the same.
The Scary Time: Discovery to Exposure
The scary part of this pattern is the gap between someone finding the vulnerability and figuring out how to exploit it, and the developers finding out about it and creating a patch. If the vulnerability was discovered by an ethical hacker, chances are good it will be reported to the application’s developer, who will then issue a patch in a timely manner, before the vulnerability is revealed to the public. That’s kind of an ideal situation.
When the vulnerability is discovered by someone with a less ethical bent, it is often concealed while the person who found it develops an exploit that takes advantage of it. Then, it’s either used by them to attack vulnerable targets, or given – or sold – to someone who will deploy the attack for their own gain. This is even worse when a vulnerability is discovered by a state or state-sponsored threat actor. In that case, the exploit is likely to end up in some intelligence agency’s toolkit for use against specific targets in support of some nation state’s agenda.
Unless, of course, the exploits get leaked to the criminal underground, in which case the previously classified tools become the bane of security professionals worldwide.
How Did That Get There?
One of the biggest challenges in dealing with novel vulnerabilities is that they can be very hard to identify before their public release. Even when someone sees some variation of the dreaded “your files have been encrypted” message, they may not know how the bad guys got in. While it’s often the result of an attack against the user, it’s entirely possible the bad guys leveraged a Zero Day exploit to get in and drop their payload. Since it’s a brand-new exploit against a vulnerability people haven’t seen before, there are only limited tools available to thwart the attack.
That’s the window between discovery and deployment of an exploit on one end, and announcement and patching of the vulnerability on the other. How do you defend during that window?
Without identified Indicators of Attack (IOA) or Indicators of Compromise (IOC), it can be hard to know when you’re under attack by a new exploit. There are some broad indicators that can show something is amiss, but only if the SOC is paying attention and has the tools to highlight the events. Deception technologies can trap an attacker on a decoy system, but they may not be sensitive enough to raise an alert for a newly released exploit. In fact, a few tools can show something is amiss but may treat it as little more than background noise.
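To see why signature-driven detection struggles here, consider a toy IOC lookup. Any known hash or domain matches exactly, but a brand-new exploit with no published indicator sails straight through. All of the values below are made up for illustration; this is a sketch of the limitation, not any vendor's actual detection logic.

```python
# Hypothetical IOC set: values are invented placeholders, not real indicators.
KNOWN_IOCS = {
    "0123abcd0123abcd0123abcd0123abcd",  # a previously published file hash
    "bad-c2-domain.example",             # a previously published C2 domain
}

def matches_known_ioc(observed: str) -> bool:
    """Exact-match lookup: the whole approach depends on prior knowledge."""
    return observed in KNOWN_IOCS

print(matches_known_ioc("bad-c2-domain.example"))  # True: already-known IOC
print(matches_known_ioc("novel-exploit-artifact"))  # False: zero day goes unseen
```

The second call is the problem in miniature: during the window before an exploit's indicators are published, the lookup has nothing to match against.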
It Does A Thing. We See The Thing.
That’s where User and Entity Behavior Analytics (UEBA) can fill in the gaps, even for a novel attack. By looking at the activity of users and entities (basically all the people and hosts in the environment), a UEBA system like Gurucul’s can react. It looks for anomalies in the telemetry rather than specific IOCs. This way, even if it’s facing a newly deployed exploit with no known signature, it can still spot it by what it does.
That’s one of the advantages of analyzing behaviors overall. Whether it’s a person or a host, patterns emerge in normal day-to-day activities. When things start to happen outside those normal ranges, it can be a good indication that something is amiss. The more context you have on that telemetry, the more accurate the assessment. Since it’s reasonably easy to compromise a user’s identity, but really hard to duplicate that user’s activity, behavior becomes a reliable indicator, especially when you have a good idea of what’s normal.
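The baseline idea can be sketched with a few lines of statistics. This is a deliberately minimal illustration, assuming invented per-user telemetry (daily counts of one event type); real UEBA products model many signals with far richer context than a single z-score.

```python
import statistics

# Hypothetical telemetry: one user's daily count of files accessed over the
# past week. The numbers are made up for illustration.
baseline = [112, 98, 105, 120, 101, 95, 108]

# Today's count spikes, e.g., mass file reads during a ransomware run.
today = 940

mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)  # population stdev of the baseline window
z_score = (today - mean) / stdev

# Flag activity far outside this user's established range. No signature or
# IOC is consulted; only the deviation from the user's own normal matters.
THRESHOLD = 3.0
if z_score > THRESHOLD:
    print(f"anomaly: z={z_score:.1f} vs baseline mean {mean:.0f}")
```

The key design point is that the threshold is relative to each user's own history, so the same detector flags a spike for a quiet account that it would ignore for a busy one.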
It Can’t Do AND Hide
Since exploits almost always produce behavior outside the norm, the UEBA software recognizes the attack and reacts. It doesn’t need a known IOC. It doesn’t need a signature. It just needs to see an unusual behavior, and it can do its job.
So even when the exploit is against a new and unique vulnerability, your defenses can still do their job.