Best Practices for Implementing an Insider Threat Program

Summary

Managing insider threats — especially malicious and compromised insiders — is notoriously difficult:

  • Preventative measures are a balancing act between minimizing risk and ensuring workers have the privileges they need to perform their functions efficiently and effectively.
  • Detection is tricky since you must distinguish between acceptable activities and those that put the organization at risk or are outright malicious.

To truly be effective (and efficient) at detecting, predicting, and preventing insider threats, an organization must combine people, processes, and technology within a dedicated insider threat program.

However, creating such a program from scratch can be intimidating. From our experience working with enterprises around the world, we recommend a seven-step process:

Initiate the program by engaging with all stakeholders: While an insider threat program is typically owned by Information Security or perhaps Information Technology, implementation and ongoing success of the program depends heavily on collaboration. Other stakeholders should represent a wide range of groups or departments, much wider than most might think.

Define your use cases and threat indicators: Insider threats manifest in a wide variety of ways and the threat indicators are far different from the indicators of compromise (IoCs) of external cyber-attacks. To get the most out of your insider threat program, it’s helpful to focus your efforts on a subset of the most common or highest-impact use cases.

Select a platform that provides a unified view of security and risk: It’s important to have one place where you can consolidate all the available data, run the analytics, and get actionable results.

Link information across multiple sources to build a holistic view: A good insider threat platform should have link analysis algorithms (machine learning model chaining) built-in to interconnect all the data sources and build context of user and entity (e.g., a device or resource) activity to better confirm real threats.

Establish a baseline of normal behavior for all users and their peer groups: To identify anomalous behavior that could be indicative of malicious or risky activity, you first need to know what ‘normal’ activity looks like — and the insider threat platform requires a high level of sophistication and customizability to account for the dynamism of today’s organizations.

Monitor ongoing behavior and respond to potential threats: Now it’s a matter of the insider threat platform operating in a steady state (e.g., looking for anomalies, assessing their risk levels, and sending alerts when appropriate), continued collaboration between all stakeholders, and responding accordingly to suspicious and risky events.

Operationalize your processes: Continuously review the results using key performance indicators (KPIs) to track progress, optimize security investments, and keep stakeholders informed.

Following these steps will help to get an insider threat program off the ground and keep it running efficiently and effectively while adapting to an ever-changing threat landscape.

Introduction: The reality of insider threats

An insider threat refers to the risk associated with or attributable to individuals within an organization — particularly those who have authorized access to sensitive information, systems, or resources.

With today’s extended workforces, insiders include employees, contractors, temporary workers, and business partners (e.g., vendors, suppliers, customers), any of whom can become an insider threat actor:

Careless Insider: An insider who unknowingly introduces risk, for example through negligence, lack of awareness, or failure to follow established security protocols.

Malicious Insider: An insider who intentionally breaches security policies and misuses their privileges, typically for personal gain or to harm the organization.

Compromised Insider: An insider whose credentials or access rights have been compromised, enabling threat actors to gain unauthorized access to privileged resources.

The most common form of insider threat is unauthorized disclosure of sensitive information. This type of incident occurs when an individual with authorized access to confidential data — e.g., customer data, trade secrets, personally identifiable information (PII), protected health information (PHI), financial information, etc. — shares it with unauthorized individuals or entities.

The cause can be as innocent as misaddressing an email or including the wrong attachment (e.g., more data than intended, or the wrong data), or as malicious as deliberately disclosing proprietary knowledge to a competitor for personal profit.

“68% of respondents are concerned about insider risks as their organizations return to the office or transition to hybrid work.”

– Insider Threat Report by Cybersecurity Insiders

Malicious and compromised insiders are the biggest risk — but are also the hardest to manage

Malicious and compromised insiders generally present more risk to the organization.

For example, a malicious insider who deliberately compromises security by stealing sensitive data, engaging in fraud, introducing malware, or sabotaging systems can (to list just a few consequences):

  • Inflict significant direct financial losses
  • Expose the organization to regulatory penalties
  • Cause long-lasting reputational damage

Verizon’s 2023 Data Breach Investigations Report (DBIR) found that the top three motivations for the Privilege Misuse attack pattern are:

  1. Financial (89%)
  2. Grudge (13%)
  3. Espionage (5%)

In fact, IBM’s Cost of a Data Breach Report 2023 found that attacks initiated by a malicious insider (6% of breaches, compared to 15% for compromised insiders) were the costliest, at an average of $4.9M USD, nearly 10% higher than the average cost of a data breach. These costs are based on post-breach expenditures driven by four process-related activities: detection and escalation, notification, post-breach response, and lost business.

Unfortunately, managing insider threats — and especially malicious and compromised insiders — is notoriously difficult. Preventative measures like identity and access management (IAM) and security policies must balance minimizing risk with ensuring members of the workforce have the privileges they need to perform their functions efficiently and effectively.

And detection and response are even more challenging.

IBM’s Cost of a Data Breach Report 2023 found that the mean time to identify and contain a data breach was 277 days. When a breach is initiated by a malicious insider, that figure rises by 11% to 308 days; breaches resulting from stolen or compromised credentials take 328 days, more than 18% longer than the average.

Detection requires distinguishing between acceptable activities and those that either put the organization at risk or are outright malicious. Doing so is easier said than done, and the reality is that many organizations simply don’t have the systems and solutions in place to identify such threats in a timely manner.

Effective response might demand real-time and coordinated intervention in a variety of forms — from common technical measures such as isolating endpoints, suspending access privileges, and locking out certain devices, to procedural actions including alerting the HR and IT departments, and initiating full incident response (IR) plans.

Cybersecurity Insiders’ 2023 Insider Threat Report revealed that 48% of cybersecurity professionals consider insider attacks more difficult to prevent and detect than external cyber attacks, with an additional 44% regarding them as equally challenging.

Managing organizational risk with an insider threat program

Since 2010, Gurucul has helped companies of all sizes, in nearly every industry, around the globe to implement effective insider threat programs.

This document distills some of our best practice recommendations based upon our experiences.

Note: Insider threat programs are complementary to, but distinct from, general cybersecurity programs that include measures like implementing layered security controls, enforcing least-privilege access, deploying privileged access management (PAM), using FIDO2 authentication, requiring all members of the extended workforce to complete phishing and security awareness training (PSAT), etc.

Step #1: Initiate the program by engaging with all stakeholders

The foundation of a successful insider threat program is engagement and continuous collaboration with all stakeholders throughout the entire organization.

Involving all stakeholders is essential for two reasons:

  • Their contributions and specialist insights can make the program more effective and efficient
  • Their buy-in and support are necessary conditions for program success

Cybersecurity Insiders’ 2023 Insider Threat Report found that insider threat programs are most frequently led by:

  1. CISO (25% of organizations)
  2. IT Security Managers (24%)
  3. Director of Security (14%)
  4. Information Security Officer (13%)
  5. VP of Security (4%)

Line-of-business leaders from across the organization should be given responsibility for informing the role-based risk assessment and managing the implementation of program requirements in their domains.

In our experience, the insider threat program is typically ‘owned’ by Information Security or perhaps Information Technology departments. Other stakeholders should represent groups or departments including:

  • Facilities Management and Physical Security
  • Operational Technology (OT)
  • Contract Management and Procurement
  • Finance
  • Legal
  • Corporate Communications
  • Employee Development/Training
  • Human Resources (HR)
  • Staff/trade union representatives

It’s especially important to include the HR department in the implementation effort. This group is generally the keeper of the official employment records, which include each person’s position and employment status.

This information is critical for understanding key elements of user behavior such as access permissions, group affiliations, and account provisioning/deprovisioning.

For example, if an employee leaves the company, this information needs to quickly feed into the system that will be looking at user behaviors and managing access privileges. If a departed employee’s account is utilized, the activity should immediately be flagged as high risk and stopped/contained to prevent malicious actions such as data exfiltration or deletion.
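As a minimal sketch of this handoff, assuming a hypothetical HR feed and invented employee IDs, the snippet below flags any login on an account whose owner has left the company. A production integration would consume events from the HR system or identity governance platform rather than an in-memory dictionary.

```python
from datetime import datetime, timezone

# Hypothetical HR feed: employee ID -> termination date (None while employed).
hr_records = {
    "e1001": None,
    "e1002": datetime(2024, 3, 15, tzinfo=timezone.utc),
}

def is_high_risk_login(employee_id: str, login_time: datetime) -> bool:
    """Flag any authentication on an account whose owner has left the company."""
    termination_date = hr_records.get(employee_id)
    return termination_date is not None and login_time >= termination_date

# A login on a departed employee's account should be flagged and contained.
if is_high_risk_login("e1002", datetime(2024, 4, 2, 9, 30, tzinfo=timezone.utc)):
    print("HIGH RISK: activity on a departed user's account")
```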

How does company culture impact the program?

Company culture should be considered as the insider threat program is being created and, where applicable, communicated. Culture informs the appropriate messaging to the workforce about the program and how it will be managed, so that people recognize that the program itself is about preserving security for everyone’s benefit — while respecting privacy.

Practically, the culture around risk differs by industry and organization.

A financial services company, for example, generally accepts that insider risk is a problem; the company as a whole understands this reality and the need for employee monitoring. For them, it’s not a trust issue but a security control.

In contrast, other organizations may have more relaxed cultures. In such an environment, implementing an insider risk program might be viewed as a significant departure from the norm, causing the workforce to wonder why management has lost trust.

Getting communications right is essential to implementing an insider threat program without undermining the very culture that has made the organization successful to date.

Contact us for templates that can help guide you through these crucial communications.

What role does the board play?

Depending on your organization, you may or may not need Board-level engagement.

In recent years, cyber risk has gained Board-level attention in many organizations, alongside the usual array of business risks that need to be managed. Insider threats may be included within the broader umbrella of cyber risk, or may be considered as an overlapping area — particularly if your organization’s insider threat concerns extend beyond the cyber realm.

At a minimum, the Board of Directors should be informed of the program, but deeper engagement might be needed for budget approval and to translate the strategic vision for the insider threat program into practice.

Plus, the support of the Board is a strong signal to the rest of the organization that the insider threat program is a priority.

“Gurucul really stood out because the analytics engine was the most powerful. I don’t think there’s a day that goes by where we don’t have a new interesting use case we didn’t think of before. We’re down to the level of ingesting physical security logs from our parking ramp to determine who is here. Could they really have done what they did? They weren’t even at the building. These types of use cases, there’s really no end to it.”

– William Scandrett, CISO, Allina Health

Step #2: Define your use cases and threat indicators

Insider threats manifest in a wide variety of ways, and the greatest risks often vary by industry and organization. To get the most out of your insider threat program, it’s helpful to focus your efforts on a subset of the most common or highest-impact use cases — and collaborating with stakeholders is very helpful in this regard.

The 12 fundamental insider threat use cases that you may wish to consider are:

  1. Flight risk users: Detect users who may be preparing to leave the organization
  2. Data hoarding/collectors: Detect users gathering data from corporate assets
  3. Remote access monitoring: Detect any suspicious remote connections or unusual user behavior patterns while connected remotely
  4. Off-hour activities: Detect suspicious user activities performed during out-of-norm working hours
  5. Unauthorized access to critical assets: Detect users gaining unauthorized access or performing unauthorized activities on critical infrastructure or information outside of their normal job duties
  6. Network scan/wanderer: Detect users attempting to scan through the organization’s assets and information
  7. Privileged access misuse: Detect users abusing administrative, super user, etc. access privileges
  8. Account compromise: Detect unusual account login patterns and potential compromise
  9. Unusual host processes execution: Detect hosts/entities running processes that may indicate the presence of malware
  10. Disgruntled behavior: Detect users whose behavior suggests discontent
  11. Data or service destruction/disruption: Detect users attempting to cause interruption to / destruction of the organization’s infrastructure or information
  12. Data exfiltration: Detect unauthorized movement of intellectual property (IP), customer data, sensitive information, etc. outside the corporate environment
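To make this concrete, here is one way two of these use cases might be encoded as declarative detection definitions for an analytics pipeline to iterate over. This is a sketch only; the names, indicators, and severities are illustrative, not drawn from any specific product.

```python
# Illustrative only: two of the twelve use cases expressed as declarative
# detection definitions that an analytics pipeline could iterate over.
USE_CASES = [
    {
        "name": "off_hour_activities",
        "description": "Suspicious user activity outside normal working hours",
        "indicators": ["login_outside_baseline_hours", "bulk_access_at_night"],
        "severity": "medium",
    },
    {
        "name": "data_exfiltration",
        "description": "Unauthorized movement of IP or sensitive data",
        "indicators": ["large_upload_to_personal_cloud", "mass_usb_copy"],
        "severity": "high",
    },
]

for use_case in USE_CASES:
    print(f"{use_case['name']} ({use_case['severity']}): "
          f"{len(use_case['indicators'])} indicators defined")
```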

Indicators of insider threats are far different from the indicators of compromise (IoCs) of external cyber-attacks. In an external attack, you would look for things like communication with known malicious URLs, traffic to command-and-control (C&C) channels, suspicious process execution, and so on.

Insiders typically don’t use those sorts of tactics to execute their bad behavior, so instead you might look for behavioral conditions like:

  • Hours at which an insider accesses resources
  • Locations from which an insider is connecting
  • An insider logging in from two locations at the same time
  • Frequent failed login attempts from one or multiple insiders
  • Insider attempts to access systems or data outside the scope of their role
  • An insider copying, downloading, deleting, or altering large amounts of data
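Two of these conditions are simple enough to illustrate directly. The sketch below, which assumes a made-up event schema and a fixed 08:00-18:00 working window, flags off-hours activity and a user appearing in two locations within a short interval; a real platform would learn such thresholds per user rather than hard-coding them.

```python
from datetime import datetime

# Assumed event schema (user, time, location); not any specific product's format.
events = [
    {"user": "alex", "time": datetime(2024, 5, 1, 2, 14), "location": "London"},
    {"user": "alex", "time": datetime(2024, 5, 1, 2, 20), "location": "Singapore"},
]

def off_hours(event, start_hour=8, end_hour=18):
    """True if the event falls outside a simple 08:00-18:00 working window."""
    return not (start_hour <= event["time"].hour < end_hour)

def concurrent_locations(events, window_minutes=60):
    """True if one user appears in two different locations within the window."""
    seen = {}
    for event in sorted(events, key=lambda e: e["time"]):
        for prior in seen.get(event["user"], []):
            minutes_apart = (event["time"] - prior["time"]).total_seconds() / 60
            if prior["location"] != event["location"] and minutes_apart <= window_minutes:
                return True
        seen.setdefault(event["user"], []).append(event)
    return False

print(any(off_hours(e) for e in events))  # True: activity at 02:14
print(concurrent_locations(events))       # True: two cities six minutes apart
```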

Engaging with a wide range of stakeholders can help to create a comprehensive list of threat indicators.

After all, no one has a better grasp of the activities involved in a particular role or department’s function than those who are deeply familiar with it on a day-to-day basis.

Step #3: Select a platform that provides a unified view of security and risk

Insider threat detection requires a completely different way of looking at information: signals must be examined from an inside-out perspective, using context rather than mere transactions.

Choose a platform that gives you a unified view of security and risk so you can consolidate all the available data, run the analytics, and get actionable results, all in one place. Siloed systems only further complicate an already complex domain.

Threat indicators should be specific to your business, but it can be hard to get started from scratch. Threat indicators for each of the 12 use cases listed above are available to help you begin.

The platform should inform you, “Here is the confirmed threat, here is the evidence, and this is what should be done in response.”

Step #4: Link information across multiple sources to build a holistic view

The next step is to build context across the various systems, looking at users’ access activity and any alerts, and linking it all together into a holistic view through model chaining.

While the data sources available vary by organization, we recommend integrating your insider threat platform with:

  • Human Resources (HR) & Enterprise Resource Planning (ERP) systems
  • Identity systems (e.g., IAM, IGA, PAM, other directories)
  • Logs
  • The wider security stack

This avoids relying upon correlation rules, which are too basic to give you the highest efficacy. A good insider threat platform should have link analysis algorithms built in to properly link all the data sources and build a 360-degree contextual view of user and entity (e.g., a device or resource) activity.
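As a toy illustration of that linking, the sketch below stitches records from three hypothetical sources (an HR feed, an IAM export, and SIEM alerts) into a single per-person view keyed on employee ID. Genuine link analysis performs ML-driven entity resolution across far messier data; every field name here is an assumption.

```python
from collections import defaultdict

# Three hypothetical sources, keyed differently; the goal is one contextual
# view per person. Every field name here is an assumption.
hr = [{"employee_id": "e1001", "name": "Alex", "dept": "Finance"}]
iam = [{"employee_id": "e1001", "account": "alex.j"}]
siem_alerts = [{"account": "alex.j", "alert": "unusual_file_download"}]

identity_graph = defaultdict(lambda: {"accounts": set(), "alerts": []})

for record in hr:                          # enrich with HR attributes
    identity_graph[record["employee_id"]].update(record)
for record in iam:                         # link accounts to people
    identity_graph[record["employee_id"]]["accounts"].add(record["account"])

account_owner = {r["account"]: r["employee_id"] for r in iam}
for alert in siem_alerts:                  # attach alerts via account ownership
    owner = account_owner.get(alert["account"])
    if owner:
        identity_graph[owner]["alerts"].append(alert["alert"])

print(identity_graph["e1001"])  # one 360-degree view of the user's context
```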

Step #5: Establish a baseline of normal behavior for all users and their peer groups

To identify anomalous behavior that could be indicative of malicious or risky activity, you first need to know what ‘normal’ activity looks like. Thus, you must establish the baseline of normal behavior for all users, their dynamic peer groups, and the devices they use.

Establishing baseline behaviors takes considerable effort, as you must consider an employee’s behavior relative to that of their peers.

The insider threat platform requires a high level of sophistication and customizability to create baselines: the algorithms should be able to create dynamic peer groups and learn time-based norms.

For example, suppose Alex, a junior staff member in the Finance Department, accesses the company’s annual report file. Is this something that other junior staff members can do, or is this activity unusual for someone in Alex’s position?

However, Alex’s peer group can change. Perhaps their manager has legitimate access to the annual report but is on leave, and has temporarily granted Alex some managerial privileges. In this temporary context, Alex accessing the file is not by itself cause for concern.
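A heavily simplified way to express ‘unusual for someone in Alex’s position’ is a z-score against the peer group, as sketched below with made-up counts. A production platform would rely on learned, multi-dimensional behavioral models and dynamic peer grouping rather than a single statistic.

```python
from statistics import mean, stdev

# Made-up weekly counts of sensitive-file accesses for a Finance peer group.
weekly_access_counts = {"alex": 3, "bo": 2, "cam": 4, "dee": 3, "eli": 2}

def peer_zscore(user: str, counts: dict) -> float:
    """How far the user's activity sits from the peer-group norm, in std devs."""
    peers = [count for name, count in counts.items() if name != user]
    mu, sigma = mean(peers), stdev(peers)
    return 0.0 if sigma == 0 else (counts[user] - mu) / sigma

# A sudden jump for one user stands out against an otherwise stable group.
weekly_access_counts["alex"] = 40
print(round(peer_zscore("alex", weekly_access_counts), 1))  # ~38.9
```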

Step #6: Monitor ongoing behavior and respond to potential threats

Once the user and peer group baselines have been established, ongoing monitoring begins. You’ve already defined your use cases and threat indicators, linked your data sources to create context, and established normal behavioral baselines, so now it’s a matter of the insider threat platform operating in a steady state: looking for anomalies, assessing their risk levels, and sending alerts when appropriate.

A fit-for-purpose solution should provide a minimal volume of alerts. To illustrate the point: in our experience, a well-governed organization with 100,000 users should be getting ~100 confirmed insider threat alerts per month — a far cry from the thousands of alerts typical of signature or rules-based systems.
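One common way to keep alert volume this low is to aggregate weak signals into a single risk score and alert only above a threshold. The sketch below shows the idea with invented weights on an arbitrary 0-100 scale; actual platforms derive risk scores from trained models rather than hand-set weights.

```python
# Invented weights and threshold on an arbitrary 0-100 scale; a real platform
# derives risk scores from trained models rather than hand-set weights.
INDICATOR_WEIGHTS = {
    "off_hours_access": 15,
    "unusual_data_volume": 30,
    "departed_user_login": 60,
    "new_device": 10,
}
ALERT_THRESHOLD = 70

def risk_score(observed_indicators: list[str]) -> int:
    """Aggregate weak signals into one capped risk score."""
    return min(100, sum(INDICATOR_WEIGHTS.get(i, 0) for i in observed_indicators))

score = risk_score(["off_hours_access", "unusual_data_volume", "new_device"])
if score >= ALERT_THRESHOLD:
    print(f"ALERT: combined risk score {score}")
else:
    print(f"Noted, no alert: score {score} below threshold")  # 55
```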

Many organizations struggle with this aspect of insider threat management, as it requires a different mentality and way of looking at things as compared to how a SOC monitors for external threats.

The next step is to set the right response mechanisms, build the right playbooks, and have the right collaboration and communication processes established with stakeholders. As your program matures, you can automate these controls, which is the end state you should be pursuing.
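A playbook can start as something as simple as an ordered list of actions and owning teams, as sketched below with hypothetical action names. In a mature program, each step would invoke SOAR, ITSM, or IAM integrations instead of printing.

```python
# Hypothetical playbook: ordered response actions with owning teams. In a
# mature program each step would invoke SOAR, ITSM, or IAM integrations.
PLAYBOOK = [
    ("suspend_access", "Information Security"),
    ("isolate_endpoint", "IT Operations"),
    ("notify_hr_and_legal", "Program Owner"),
    ("open_incident_ticket", "SOC"),
]

def run_playbook(user_id: str, alert_id: str) -> None:
    """Walk the response steps in order, recording who owns each action."""
    for action, owner in PLAYBOOK:
        print(f"[{alert_id}] {action} for {user_id} -> {owner}")

run_playbook("e1002", "INS-0042")
```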

Organizations should not underestimate the importance of process-driven responses when dealing with insider threats. Many responses will require collaboration between different functions, including the owner of the insider threat program and other stakeholders including HR and the legal team — with the latter being especially important for ensuring compliance with privacy laws and other legal considerations.

For instance, gaining access to an employee’s HR or SAP data might first require clearance from HR and legal, and may be dependent upon preliminary evidence that a user is engaged in anomalous or suspicious activity. Relevant information — e.g., a complaint, disciplinary write-up, or performance improvement plan (PIP) — can then be incorporated into the overall package of evidence, potentially leading to actions including suspending digital and physical access, and termination.

Step #7: Operationalize your processes

To have an effective insider threat program, you need to have all the steps above working well and then continuously review the results and provide feedback.

In a modern, fit-for-purpose insider threat solution, the algorithms should be able to tune themselves because they’re self-learning — leading to higher efficacy over time.

The establish, monitor, respond, and operationalize loop should evolve with your business and risk, while providing key performance indicators (KPIs) that can be used to track progress, optimize security investments, and keep stakeholders informed.
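As an illustration, two KPIs that many programs track, alert precision and mean time to respond, can be computed from monthly alert dispositions as sketched below; the records and field names are hypothetical.

```python
from statistics import mean

# Hypothetical monthly alert dispositions; field names are illustrative.
alerts = [
    {"confirmed": True,  "hours_to_respond": 4.0},
    {"confirmed": False, "hours_to_respond": 1.5},
    {"confirmed": True,  "hours_to_respond": 6.0},
]

precision = sum(a["confirmed"] for a in alerts) / len(alerts)
mttr = mean(a["hours_to_respond"] for a in alerts)
print(f"Alert precision: {precision:.0%}, mean time to respond: {mttr:.1f}h")
```

Reviewing such metrics with stakeholders on a regular cadence closes the loop that keeps the program aligned with the business.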

 
