Managing insider threats, especially malicious and compromised insiders, is notoriously difficult.
To truly be effective (and efficient) at detecting, predicting, and preventing insider threats, an organization must combine people, processes, and technology within a dedicated insider threat program.
However, creating such a program from scratch can be intimidating. From our experience working with enterprises around the world, we recommend a seven-step process:
Initiate the program by engaging with all stakeholders: While an insider threat program is typically owned by Information Security or perhaps Information Technology, implementation and ongoing success of the program depends heavily on collaboration. Other stakeholders should represent a wide range — much wider than most might think — of groups or departments.
Define your use cases and threat indicators: Insider threats manifest in a wide variety of ways and the threat indicators are far different from the indicators of compromise (IoCs) of external cyber-attacks. To get the most out of your insider threat program, it’s helpful to focus your efforts on a subset of the most common or highest-impact use cases.
Select a platform that provides a unified view of security and risk: It’s important to have one place where you can consolidate all the available data, run the analytics, and get actionable results.
Link information across multiple sources to build a holistic view: A good insider threat platform should have link analysis algorithms (machine learning model chaining) built-in to interconnect all the data sources and build context of user and entity (e.g., a device or resource) activity to better confirm real threats.
Establish a baseline of normal behavior for all users and their peer groups: To identify anomalous behavior that could be indicative of malicious or risky activity, you first need to know what ‘normal’ activity looks like — and the insider threat platform requires a high level of sophistication and customizability to account for the dynamism of today’s organizations.
Monitor ongoing behavior and respond to potential threats: Now it’s a matter of the insider threat platform operating in a steady state (e.g., looking for anomalies, assessing their risk levels, and sending alerts when appropriate), continued collaboration between all stakeholders, and responding accordingly to suspicious and risky events.
Operationalize your processes: Continuously review the results using key performance indicators (KPIs) to track progress, optimize security investments, and keep stakeholders informed.
Following these steps will help to get an insider threat program off the ground and keep it running efficiently and effectively while adapting to an ever-changing threat landscape.
An insider threat refers to the risk associated with or attributable to individuals within an organization — particularly those who have authorized access to sensitive information, systems, or resources.
With today’s extended workforces, insiders include employees, contractors, temporary workers, and business partners (e.g., vendors, suppliers, customers), any of whom can become an insider threat actor:
Careless Insider: An insider who unknowingly introduces risk, for example through negligence, lack of awareness, or failure to follow established security protocols.
Malicious Insider: An insider who intentionally breaches security policies and misuses their privileges, typically for personal gain or to harm the organization.
Compromised Insider: An insider whose credentials or access rights have been compromised, enabling threat actors to gain unauthorized access to privileged resources.
The most common form of insider threat is unauthorized disclosure of sensitive information. This type of incident occurs when an individual with authorized access to confidential data — e.g., customer data, trade secrets, personally identifiable information (PII), protected health information (PHI), financial information, etc. — shares it with unauthorized individuals or entities.
The cause can be as innocent as misaddressing an email or including the wrong attachment (e.g., more data than intended, or the wrong data), or as malicious as deliberately disclosing proprietary knowledge to a competitor for personal profit.
“68% of respondents are concerned about insider risks as their organizations return to the office or transition to hybrid work.”
– Insider Threat Report by Cybersecurity Insiders
Malicious and compromised insiders generally present more risk to the organization.
For example, an insider who deliberately compromises security by stealing sensitive data, engaging in fraud, introducing malware, or sabotaging systems can, to list just a few consequences:
Verizon’s 2023 Data Breach Investigations Report (DBIR) found that the top three motivations for the Privilege Misuse attack pattern are:
In fact, IBM’s Cost of a Data Breach Report 2023 found that attacks initiated by a malicious insider (6% of breaches, compared to 15% for compromised insiders) were the costliest, at an average of $4.9M USD — nearly 10% higher than the average cost of a data breach. These costs are based on post-breach expenditures driven by four process-related activities: detection and escalation, notification, post-breach response, and lost business.
Unfortunately, managing insider threats — and especially malicious and compromised insiders — is notoriously difficult. Preventative measures like identity and access management (IAM) and security policies must balance minimizing risk with ensuring members of the workforce have the privileges they need to perform their functions efficiently and effectively.
And detection and response are even more challenging.
Detection requires distinguishing between acceptable activities and those that either put the organization at risk or are outright malicious. Doing so is easier said than done, and the reality is that many organizations simply don’t have the systems and solutions in place to identify such threats in a timely manner.
Effective response might demand real-time and coordinated intervention in a variety of forms — from common technical measures such as isolating endpoints, suspending access privileges, and locking out certain devices, to procedural actions including alerting the HR and IT departments, and initiating full incident response (IR) plans.
Since 2010, Gurucul has helped companies of all sizes, nearly every industry, and around the globe to implement effective insider threat programs.
This document distills some of our best practice recommendations based upon our experiences.
Note: Insider threat programs are complementary to, but distinct from, general cybersecurity programs that include measures like implementing layered security controls, enforcing least-privilege access, deploying privileged access management (PAM), using FIDO2 authentication, requiring all members of the extended workforce to complete phishing and security awareness training (PSAT), etc.
The foundation of a successful insider threat program is engagement and continuous collaboration with all stakeholders throughout the entire organization.
Involving all stakeholders is essential for two reasons:
Cybersecurity Insiders’ 2023 Insider Threat Report found that insider threat programs are most frequently led by:
Line of business leaders from across the business areas should be given responsibility for informing the role-based risk assessment and managing the implementation of program requirements in their domains.
In our experience, the insider threat program is typically ‘owned’ by Information Security or perhaps Information Technology departments. Other stakeholders should represent groups or departments including:
It’s especially important to include the HR department in the implementation effort. This group is generally the keeper of the official employment records, which include each person’s position and employment status.
This information is critical for understanding key elements of user behavior such as access permissions, group affiliations, and account provisioning/deprovisioning.
For example, if an employee leaves the company, this information needs to quickly feed into the system that will be looking at user behaviors and managing access privileges. If a departed employee’s account is utilized, the activity should immediately be flagged as high risk and stopped/contained to prevent malicious actions such as data exfiltration or deletion.
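The departed-employee check above can be sketched in a few lines. This is a hypothetical illustration only: the HR feed, event fields, and response actions are assumptions for the example, not the behavior of any specific product.

```python
from datetime import datetime, timezone

# Stand-in for an HR deprovisioning feed of terminated accounts (illustrative).
TERMINATED = {"jdoe", "asmith"}

def score_event(event: dict) -> dict:
    """Attach a risk verdict to a raw activity event."""
    if event["user"] in TERMINATED:
        # Any use of a departed employee's account is high risk by definition.
        return {**event, "risk": "high", "action": "suspend_session"}
    return {**event, "risk": "baseline", "action": "none"}

event = {"user": "jdoe", "resource": "finance-share", "ts": datetime.now(timezone.utc)}
verdict = score_event(event)
print(verdict["risk"], verdict["action"])   # high suspend_session
```

The point of the sketch is the data dependency: the detection logic is only as timely as the HR feed that populates the terminated-account list.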
Company culture should be considered as the insider threat program is being created and, where applicable, communicated. Culture informs the appropriate messaging to the workforce about the program and how it will be managed, so that people recognize that the program itself is about preserving security for everyone’s benefit — while respecting privacy.
Practically, the culture around risk differs by industry and organization.
A financial services company, for instance, generally accepts that insider risk is a problem, and the company as a whole understands this reality and the need for employee monitoring. For them, it’s not a trust issue but a security control.
In contrast, other organizations may have more relaxed cultures. In such an environment, implementing an insider risk program might be viewed as a significant departure from the norm, causing the workforce to wonder why management has lost trust.
Getting communications right is essential to implementing an insider threat program without undermining the very culture that has made the organization successful to date.
Contact us for templates that can help guide you through these crucial communications.
Depending on your organization, you may or may not need Board-level engagement.
In recent years, cyber risk has gained Board-level attention in many organizations, alongside the usual array of business risks that need to be managed. Insider threats may be included within the broader umbrella of cyber risk, or may be considered as an overlapping area — particularly if your organization’s insider threat concerns extend beyond the cyber realm.
At a minimum, the Board of Directors should be informed of the program, but deeper engagement might be needed for budget approval and to translate the strategic vision for the insider threat program into practice.
Plus, the support of the Board is a strong signal to the rest of the organization that the insider threat program is a priority.
“Gurucul really stood out because the analytics engine was the most powerful. I don’t think there’s a day that goes by where we don’t have a new interesting use case we didn’t think of before. We’re down to the level of ingesting physical security logs from our parking ramp to determine who is here. Could they really have done what they did? They weren’t even at the building. These types of use cases, there’s really no end to it.”
– William Scandrett, CISO, Allina Health
Insider threats manifest in a wide variety of ways, and the greatest risks often vary by industry and organization. To get the most out of your insider threat program, it’s helpful to focus your efforts on a subset of the most common or highest-impact use cases — and collaborating with stakeholders is very helpful in this regard.
The 12 fundamental insider threat use cases that you may wish to consider are:
In an external attack, you would look for things like communication with known malicious URLs, traffic to command-and-control (C&C) channels, suspicious process execution, and so on.
Insiders typically don’t use those sorts of tactics to execute their bad behavior, so instead you might look for behavioral conditions like:
Engaging with a wide range of stakeholders can help to create a comprehensive list of threat indicators.
After all, no one has a better grasp of the activities involved in a particular role or department’s function than those who are deeply familiar with it on a day-to-day basis.
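One common behavioral condition of this kind is a data transfer that is unusually large relative to the user’s own history. The following is a minimal sketch of such an indicator; the z-score threshold, field names, and sample values are illustrative assumptions, not prescribed settings.

```python
import statistics

def is_volume_anomaly(history_mb: list[float], today_mb: float, z_cutoff: float = 3.0) -> bool:
    """Flag today's transfer volume if it sits far outside the user's own history."""
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb) or 1.0   # avoid divide-by-zero on flat history
    return (today_mb - mean) / stdev > z_cutoff

history = [40, 55, 48, 52, 45, 50]   # daily MB uploaded over recent weeks (illustrative)
print(is_volume_anomaly(history, 51))    # False: within normal range
print(is_volume_anomaly(history, 900))   # True: large spike worth an alert
```

In practice a platform would combine many such indicators and weigh them by context, rather than alerting on any single condition.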
Insider threat detection requires a completely different way of looking at information and signals: from an inside-out perspective, using context rather than individual transactions.
Choose a platform that gives you a unified view of security and risk so you can consolidate all the available data, run the analytics, and get actionable results, all in one place. Siloed systems only further complicate an already complex domain.
The platform should inform you, “Here is the confirmed threat, here is the evidence, and this is what should be done in response.”
The next step is to build the context across various systems, looking at users’ access activity and any alerts, and building that holistic view and linking it together through model chaining.
While the data sources available vary by organization, we recommend integrating your insider threat platform with:
This avoids relying upon correlation rules, which are too basic to deliver the highest efficacy. A good insider threat platform should have link analysis algorithms built-in to properly link all the data sources and build a 360-degree contextual view of user and entity (e.g., a device or resource) activity.
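The linking idea can be illustrated with a toy example: records from separate sources are merged on a shared identity to build one per-user context. The source names (identity, DLP) and fields here are illustrative assumptions, and real link analysis must also resolve identities that differ across systems.

```python
from collections import defaultdict

def link_sources(*sources):
    """Merge events keyed on a shared user identity into a 360-degree view."""
    context = defaultdict(lambda: {"devices": set(), "resources": set(), "alerts": []})
    for source in sources:
        for rec in source:
            user = context[rec["user"]]
            if "device" in rec:
                user["devices"].add(rec["device"])
            if "resource" in rec:
                user["resources"].add(rec["resource"])
            if "alert" in rec:
                user["alerts"].append(rec["alert"])
    return context

# Illustrative records from two separate sources:
iam = [{"user": "alex", "device": "laptop-17"}]
dlp = [{"user": "alex", "resource": "annual-report.xlsx", "alert": "bulk_download"}]
view = link_sources(iam, dlp)
print(sorted(view["alex"]["devices"]), view["alex"]["alerts"])   # ['laptop-17'] ['bulk_download']
```

Even this toy version shows why linked context beats siloed rules: the DLP alert alone says little, but joined with identity and device data it becomes part of a coherent story about one user.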
To identify anomalous behavior that could be indicative of malicious or risky activity, you first need to know what ‘normal’ activity looks like. Thus, you must establish the baseline of normal behavior for all users, their dynamic peer groups, and the devices they use.
Establishing baseline behaviors takes considerable effort, as you must consider an employee’s behavior relative to that of their peers.
For example, suppose Alex, a junior staff member in the Finance Department, accesses the company’s annual report file. Is this something that other junior staff members can do, or is this activity unusual for someone in Alex’s position?
However, Alex’s peer group can change. Perhaps their manager, who has legitimate access to the annual report, is on leave and has temporarily granted Alex some managerial privileges. In this temporary context, Alex accessing the file is not by itself cause for concern.
Once the user and peer group baselines have been established, ongoing monitoring begins. You’ve already defined your use cases and threat indicators, linked your data sources to create context, and established normal behavioral baselines, so now it’s a matter of the insider threat platform operating in a steady state: looking for anomalies, assessing their risk levels, and sending alerts when appropriate.
A fit-for-purpose solution should provide a minimal volume of alerts. To illustrate the point: in our experience, a well-governed organization with 100,000 users should be getting ~100 confirmed insider threat alerts per month — a far cry from the thousands of alerts typical of signature or rules-based systems.
The next step is to set the right response mechanisms, build the right playbooks, and have the right collaboration and communication processes established with stakeholders. As your program matures, you can automate these controls, which is the end state you should be pursuing.
Organizations should not underestimate the importance of process-driven responses when dealing with insider threats. Many responses will require collaboration between different functions, including the owner of the insider threat program and other stakeholders including HR and the legal team — with the latter being especially important for ensuring compliance with privacy laws and other legal considerations.
For instance, gaining access to an employee’s HR or SAP data might first require clearance from HR and legal, and may be dependent upon preliminary evidence that a user is engaged in anomalous or suspicious activity. Relevant information — e.g., a complaint, disciplinary write-up, or performance improvement plan (PIP) — can then be incorporated into the overall package of evidence, potentially leading to actions including suspending digital and physical access, and termination.
To have an effective insider threat program, you need to have all the steps above working well and then continuously review the results and provide feedback.
The establish, monitor, respond, and operationalize loop should evolve with your business and risk, while providing key performance indicators (KPIs) that can be used to track progress, optimize security investments, and keep stakeholders informed.