Cleaning House: Getting Rid of Malicious Insiders


It’s that time of year when we throw out the old before starting afresh with the new. It’s called spring cleaning, and it’s a long-cherished tradition. We take a hard look at our inventory and clean out the assets that are no longer of practical use. When it comes to cybersecurity practices, there’s nothing more important than getting rid of malicious insiders.

Bad Actors Gotta Go

Cybersecurity practitioners have a lot on their plate. Corporate systems are constantly under attack in one way or another by malicious actors. It’s the job of security operations to keep those systems safe, whether the threat is an outsider probing the perimeter or someone on the inside acting on bad intentions.

The fact is, there are people within many organizations who don’t have their organization’s best interests at heart. It may be a disgruntled employee who believes they have been mistreated, or an employee who has been bribed or blackmailed into doing the bidding of an external attacker. Perhaps worse is an individual who joined the organization specifically intending to perform hostile acts. There’s also the otherwise trusted individual looking to profit. Whatever the motive, the result is the same: someone within the organization is acting maliciously. These malicious insiders gotta go, and fast. But first you must identify them…

SOC It to Them

Because a disgruntled employee is often acting on emotion, they are apt to make mistakes that reveal their nefarious activity. The same can be true for a compromised employee, bribed or blackmailed, who is forced to operate under time constraints set by whoever is coercing them. Malicious insiders in trusted positions, however, are a special challenge, as is anyone who has experience exfiltrating data or otherwise knows how to move unnoticed through the environment. These are the adversaries that require advanced tools and experienced SecOps personnel to catch.

Identity and Access Management (IAM) tools can help validate that users are who they say they are and thwart some forms of credential theft. These work hand in hand with multi-factor authentication, which goes a long way toward blunting credential-based attacks. But IAM tools aren’t perfect, and they may do nothing to stop a privileged user or hostile insider who has legitimate access to valuable assets.

Data Loss Prevention (DLP) systems have a similar effect in protecting valuable digital assets. But again, they may not be able to prevent an authorized user from reaching those assets and doing with them what they will. Additionally, DLP systems can do little to protect non-data assets if the attack isn’t targeting files that can be copied or encrypted. There are, in fact, a range of other defense technologies that can be quite effective within their scope but do little to thwart attacks that fall outside their design parameters.

It All Comes Down to Monitoring User Behavior

Behavior analytics tools that leverage machine learning are the most useful for detecting and remediating insider threats. By aggregating telemetry from the entire security stack, Gurucul User and Entity Behavior Analytics (UEBA) can examine behaviors in context. That context lets it extract meaning from what users and other entities, such as assets or systems, are doing in relation to one another. Where a careful attacker inside the environment could once move low and slow to avoid detection, because isolated events aren’t enough to draw an analyst’s attention, the system can now pick the attacker’s activity out of the noise.
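The "low and slow" idea above can be illustrated with a toy baseline model. This is a minimal sketch only, not Gurucul's actual UEBA algorithms: it assumes a per-user history of daily event counts and scores how far a new observation deviates from that user's own norm.

```python
# Illustrative sketch only, not Gurucul's UEBA model: score a user's daily
# activity against their own historical baseline, so that a sudden spike
# stands out even when each individual event looks unremarkable.
from statistics import mean, stdev

def anomaly_score(history, observed):
    """Return how many standard deviations `observed` sits above the
    user's historical baseline (0.0 at or below the mean)."""
    mu = mean(history)
    sigma = stdev(history) or 1.0  # guard against a zero-variance history
    return max(0.0, (observed - mu) / sigma)

# Hypothetical daily counts of sensitive-file reads for one user.
baseline = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2]

print(anomaly_score(baseline, 3))   # a typical day: near zero
print(anomaly_score(baseline, 40))  # a sudden spike: strongly anomalous
```

In practice a real system would model many signals per entity and weigh them in context; the point here is only that the comparison is against each user's own behavior, not a global threshold.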

This ability to see threat actors’ behaviors through the constant noise of most environments is vital.  Without it, the attackers can move unseen for days, or even weeks, before they are discovered.  In the case of malicious insiders who know what their target is and where to find it, this form of analysis may be the only way to detect their nefarious behavior before they are able to damage the organization and its assets.

Behavior analytics powered by machine learning is a vital piece of a modern security stack.  By delivering unified risk scores, in context, it lets the Security Operations team detect and defeat insider threats in ways that other tools can’t.
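The idea of a unified risk score can be sketched as a simple fusion of per-tool signals. The signal names and weights below are invented for illustration and do not reflect Gurucul's actual scoring model.

```python
# Illustrative sketch only: fuse risk signals from several tools into one
# unified score an analyst can triage against. Names/weights are invented.
def unified_risk(signals, weights):
    """Weighted average of per-source risk signals, each in [0, 100]."""
    total = sum(weights.values())
    return sum(signals[name] * w for name, w in weights.items()) / total

signals = {"iam": 20.0, "dlp": 35.0, "ueba": 90.0}  # hypothetical feeds
weights = {"iam": 1.0, "dlp": 1.0, "ueba": 2.0}     # trust behavioral context more

score = unified_risk(signals, weights)
print(round(score, 2))  # one number instead of three separate alerts
```

The design point is that a single contextual score lets the SOC rank entities by risk rather than chase each tool's alerts in isolation.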

Take Out Compromised Accounts with The Self-Audit

A form of insider threat is account compromise.  This is where an external attacker compromises a valid user account and uses it to gain access to the network.  Detecting these sophisticated attacks also centers on understanding context, relevance, and risk for anomalous behaviors.  It also helps if employees are part of the solution.

Compromised accounts can be quickly identified by users if they are given the right information. We recommend providing employees with Self-Audit reports that risk-score their access and activity using machine learning behavior models. Statements are delivered via email or online and resemble the format of a credit card statement. As anyone with a credit card knows, reviewing a list of one’s own activities is a simple process that takes little time. Users can then validate, question, or refute any activities they believe are anomalous. If a user’s account has been compromised, the user quickly identifies it for resolution by the Security Operations team.
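A self-audit statement of this kind can be sketched as follows. This is a hypothetical illustration, not Gurucul's product code: the record fields, dates, and the out-of-office flag are all invented to show the credit-card-statement review pattern.

```python
# Hypothetical sketch of a self-audit "statement": list one user's recent
# account activity, flagging events on days the user reported being out of
# office, so each line can be confirmed or disputed like a card statement.
from datetime import date

def self_audit(events, username, out_of_office):
    """Return (event, flagged) pairs for one user, newest first."""
    mine = [e for e in events if e["user"] == username]
    mine.sort(key=lambda e: e["when"], reverse=True)
    return [(e, e["when"] in out_of_office) for e in mine]

events = [  # invented audit records
    {"user": "alice", "when": date(2021, 4, 5), "host": "vpn-01"},
    {"user": "alice", "when": date(2021, 4, 7), "host": "db-prod"},  # a Wednesday
    {"user": "bob",   "when": date(2021, 4, 7), "host": "vpn-01"},
]

report = self_audit(events, "alice", out_of_office={date(2021, 4, 7)})
for event, flagged in report:
    marker = "<-- review" if flagged else ""
    print(event["when"], event["host"], marker)
```

The user, not the SOC, is the one who knows they never logged in that Wednesday, which is exactly the signal the report surfaces.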

Years-Long Nefarious Activity Discovered

This is a true story. An insurance company sought to raise security awareness by providing employees with these self-audit reports once a week. One employee spent a Wednesday out of the office and never logged into her high-privilege accounts that day. The self-audit report, however, showed account login and activity for that Wednesday. The employee quickly recognized this was not her. Deeper investigation by the security team revealed that one of her high-privilege accounts had been compromised by an external intruder for over three years. The self-audit report’s context, with machine learning risk scores and visibility, added a set of eyes beyond the SOC team’s own.

Attend the Webinar

Do you want to learn more about how to get rid of malicious insiders? Attend our upcoming webinar to explore how Gurucul’s machine learning risk analytics platform can help you identify and remove malicious insiders before they generate a newsworthy incident.

Webinar: Cleaning House: Getting Rid of Malicious Insiders

Date: Thursday, April 15, 2021 @ 11:00am PT
