Insider Threats Are on the Rise and Growing More Costly. You Need the Right Tools to Detect Them


A recent report on cybersecurity spending found that companies have been steadily raising their budgets in this area; in fact, security spending has been growing faster than general IT budgets. This is a testament to the risk an organization faces if it suffers a breach of any significant size.

Each company has its own spending priorities, of course, but the areas getting the most attention right now seem to be cloud security, network security, endpoint protection, and strong authentication. This aligns with my observation that most organizations devote the majority of their IT security budgets and other resources to preventing and detecting cyber-attacks that originate outside the network perimeter.

Meanwhile, a more pernicious threat is already sitting inside the network and wearing an employee badge—if not literally, then at least figuratively. We’re talking about the insider threat. This actor can be a true insider – employee, contractor, business partner or vendor – or an external intruder who is mimicking an employee through subversion of a legitimate set of credentials, perhaps obtained off the Dark Web or through a phishing campaign.

Information security professionals say that insider attacks are far more difficult to detect and prevent than external attacks. Unfortunately, according to a recent Ponemon Institute study, insider threats have become more frequent in recent years. You need the right tools to detect insider threats.

Ponemon’s study found that insider cybersecurity incidents have risen 47% since 2018. As if that’s not bad enough, the average annual cost of an insider-caused breach also increased, up 31% to $11.45 million. The incidents are attributed to negligent insiders (62%), criminal insiders (23%), and credential thieves (14%). Clearly, this is a significant problem that should not be ignored.

What’s Needed to Detect Insider Threats

Traditional defense and detection systems are largely ineffective at detecting and surfacing insider threats. These systems are primarily looking for indicators of compromise (IoCs) – signatures of known attack methods – whereas workers inside the perimeter don’t need to use malware, phishing or other attack techniques to gain access to sensitive servers, databases and applications. They already have legitimate access to these systems, giving them the opportunity to expose or steal information, corrupt essential computer systems, or simply disrupt business as usual.

Rather than IoCs, the defining characteristic of an insider attack is privilege abuse, which entails doing things the person doesn’t have legitimate permission to do. One of the best means to detect an insider’s privilege abuse is with User and Entity Behavior Analytics (UEBA). UEBA involves keeping track of what users are doing – particularly those with elevated privileges such as system administrators, and workers with access to highly sensitive information like intellectual property (IP) or customer account data – and looking for behaviors that are outside the range of their normal activities.
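As a minimal sketch of the baselining idea, here is one simple way to flag activity outside a user’s normal range. The feature (files touched per day), the z-score method, and the threshold are all illustrative assumptions, not Gurucul’s actual implementation:

```python
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's activity count if it deviates more than z_threshold
    standard deviations from the user's own historical baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# A sysadmin who normally touches ~20 files a day suddenly touches 400.
baseline = [18, 22, 19, 21, 20, 23, 17]
print(is_anomalous(baseline, 400))  # far outside the normal range
print(is_anomalous(baseline, 21))   # within the normal range
```

Real UEBA models track many such features at once and learn the baselines continuously, but the core idea is the same: score each action against what is normal for that specific user.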

But just looking at behavioral patterns isn’t enough. There are too many opportunities for false positives or, worse, false negatives. The scrutiny of activity has to be further refined, because a determined actor, especially one who knows the internal systems intimately, can do damage without raising behavioral red flags.

Gurucul uses a much more effective approach that combines UEBA with in-depth intelligence about a user’s identity attributes and the privileges he has on the network. People often have multiple digital identities for the various systems they log into and applications they use. And for each identity they might have multiple entitlements; for example, the right to upload and download data, to change or update records, to delete data or files, and so on.

Altogether, these numerous identities and privileges create quite a threat plane—places where data or information can be stolen or damaged in some way.

Ultimately, this combined approach involves analyzing the access rights and entitlements a person has; the activities he has been performing across multiple accounts, both now and in the past; and the typical activities that members of his peer groups are doing. It takes a combination of the right data sources, sophisticated machine learning, and perceptive data science to pinpoint truly aberrant actions that are good indicators of misuse of assigned privileges.
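One way to picture this combination is a weighted blend of signals: how broad the identity’s entitlements are, how far today’s activity deviates from the user’s own baseline, and how far it deviates from peers. This is an illustrative sketch only; the weights and normalization are assumptions, not Gurucul’s model:

```python
def risk_score(entitlement_breadth, behavior_deviation, peer_deviation,
               weights=(0.2, 0.4, 0.4)):
    """Blend three normalized signals (each in [0, 1]) into one risk score.
    - entitlement_breadth: how broad the identity's access rights are
    - behavior_deviation: distance from the user's own baseline
    - peer_deviation: distance from the user's peer group's baseline
    Higher output means riskier. Weights here are illustrative.
    """
    w_e, w_b, w_p = weights
    return round(w_e * entitlement_breadth
                 + w_b * behavior_deviation
                 + w_p * peer_deviation, 3)

# Broad access alone is not alarming; combined with activity far outside
# both the personal and peer baselines, the score rises sharply.
print(risk_score(0.9, 0.1, 0.1))   # privileged, but behaving normally
print(risk_score(0.9, 0.95, 0.9))  # privileged and anomalous
```

The point of weighting behavior more heavily than entitlements is that access rights create opportunity, but it is the anomalous use of those rights that signals abuse.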

Determining the Risk of a User Identity and Its Activities

Identities and entitlements are often in a state of excess access due to manual processes built upon legacy rules for identity management. This provides the insider or hijacked account user more room to roam than desired, creating undue risk waiting for abuse. The perfect balance is the right access for the right user when they need it for their job, and no access when they do not need it.

Applying user and entity behavior analytics to risk-rank access is changing identity management: it reduces excess access, manages high-privilege access, and detects orphan and dormant accounts.

To really understand a user identity, and to determine the risk of that identity as a threat plane, Gurucul collects relevant data from a variety of sources, including:

  • Identity management systems – Data is drawn from internal directory services, identity and access management platforms, human resources systems—wherever “people” and “account” information is kept. Gurucul has out-of-the-box connectors to ingest this data from other vendors’ systems. Collecting this identity data allows us to understand who the people are within the organization, and what legitimate access rights have been assigned to them.
  • Privileged account management systems – Many enterprises use specialized tools to control and track the activities of powerful accounts, such as those belonging to system administrators, database administrators, security professionals, and so on. Moreover, these identities are often the target of spear phishing attacks. With all the things that these accounts can do, they are prime for privilege abuse and must be monitored closely. Gurucul collects this data in order to understand what the most privileged accounts are doing.
  • Directories – The most common is Active Directory (AD) for on-premises; this source may also include LDAP directories or other directory servers.
  • Log sources – These sources track every activity that goes on in an environment. The data can be collected from log aggregators, SIEMs, syslogs, databases, applications, and individual end systems. By collecting this information, we get every bit of activity that is taking place throughout the enterprise, and who those activities are attributed to.
  • Defense-in-depth systems – Systems such as DLP, anti-malware, IDS, IPS, firewalls, SIEM, etc. raise alerts when they find suspicious activity. We want to include those alerts in our analysis and correlate them with specific network identities.
  • Intelligence sources – This information typically comes from external sources that are tracking a broad scope of indicators of compromise and threat patterns. In addition, Gurucul has built our own out-of-the-box algorithms to quickly detect risk situations. We overlay these algorithms and libraries to the identity information to help detect internal threats.
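The collection step above amounts to mapping heterogeneous raw events onto a common, identity-keyed schema so that activity from every source can be attributed to the same person. A hypothetical sketch (the field names and source labels are invented for illustration; real connectors map far more attributes):

```python
def normalize(source, raw):
    """Map a raw event from a named source onto a common schema.
    Field names here are illustrative, not a real connector's mapping."""
    if source == "ad":        # directory / authentication log
        return {"user": raw["sAMAccountName"], "action": raw["event"],
                "source": "ad"}
    if source == "syslog":    # end-system activity log
        return {"user": raw["uid"], "action": raw["msg"],
                "source": "syslog"}
    if source == "dlp":       # defense-in-depth alert
        return {"user": raw["subject"], "action": "dlp:" + raw["rule"],
                "source": "dlp"}
    raise ValueError("unknown source: " + source)

events = [
    normalize("ad", {"sAMAccountName": "jsmith", "event": "logon"}),
    normalize("syslog", {"uid": "jsmith", "msg": "sudo su"}),
    normalize("dlp", {"subject": "jsmith", "rule": "pii-exfil"}),
]
# All three activities now attribute to the same identity.
print([e["user"] for e in events])
```

Once events share a schema, a logon, a privilege escalation, and a DLP alert can be analyzed as one story about one identity rather than three disconnected records.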

Gurucul gets very fine-grained with all this data. We build our machine learning algorithms to accommodate 254 different identity-related attributes. What’s more, the architecture is open to various structured and unstructured data sources from the cloud or from on-premises systems using a flexible metadata framework.

Once this data is collected, normalized, and stored in a big data repository, it’s ready for machine learning to perform the analytics. The ML algorithms can look at every new transaction by a given identity and score it according to risk. Using clustering and outlier machine learning makes suspicious behaviors stand out from benign activities.
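As a toy stand-in for the clustering and outlier step, consider scoring each session’s feature vector by its distance from known-benign sessions: a point far from the benign cluster stands out. The features and numbers below are invented for illustration:

```python
import math

def outlier_score(point, others):
    """Mean Euclidean distance to the other points; higher = more anomalous.
    A crude distance-based stand-in for clustering/outlier ML."""
    return sum(math.dist(point, o) for o in others) / len(others)

# Feature vectors per session: (logins, files read, records deleted).
benign = [(5, 20, 0), (6, 18, 0), (4, 22, 1), (5, 19, 0)]
suspect = (5, 500, 40)  # mass read plus deletion in one session

print(outlier_score(suspect, benign))        # far from the benign cluster
print(outlier_score(benign[0], benign[1:]))  # sits inside it
```

Production systems use far richer models, but the principle holds: transactions that cluster with normal behavior score low, and genuine outliers separate sharply.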

But Wait, There’s More Fine Tuning!

For even more accurate analysis, there’s one more step to take: baselining a user’s behavior and comparing it to his dynamic peer group. People in a dynamic peer group generally perform the same types of activities and hold the same permissions, even though they may not be in the same directory group.

The directory services that enterprises use – Active Directory and similar products – tend to put people into static groups to facilitate access provisioning. People are grouped by department, by job roles, by location, and so on. This information is somewhat useful for analyzing identities, privileges, and activities, but a more accurate measure is to look at a person’s dynamic peer group.

These groups define people according to the types of activities they typically perform, as well as the types of identities and privileges they hold. Dynamic peer groups yield a much tighter clustering of behavior and much more accuracy in terms of highlighting outlier activities in behavior patterns. Thus, when someone is abusing the privileges of their digital identity, the behavior really stands out. This significantly reduces the chance of false positive alerts often seen with static peer group analysis.
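A minimal sketch of the dynamic-peer-group idea: group users by the similarity of their activity profiles rather than by their static directory group. The profiles, similarity measure, and threshold below are illustrative assumptions:

```python
def cosine(a, b):
    """Cosine similarity between two activity-profile vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# Weekly activity profiles: (db queries, file downloads, config changes).
profiles = {
    "dba1": (900, 10, 5),
    "dba2": (850, 12, 4),
    "dev1": (50, 200, 80),
}

def dynamic_peers(user, profiles, threshold=0.99):
    """Users whose activity profile is highly similar to this user's,
    regardless of which directory group they sit in."""
    me = profiles[user]
    return [u for u, p in profiles.items()
            if u != user and cosine(me, p) >= threshold]

print(dynamic_peers("dba1", profiles))  # dba2 behaves like dba1; dev1 does not
```

Comparing dba1 against the tight dba peer cluster, rather than against everyone in a broad "IT" directory group, is what makes a sudden burst of file downloads by dba1 stand out.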

Add a Self-Audit for One More Security Measure

Any behavioral anomalies that surface from the processes outlined above are very likely to be insider threats. Certainly, they would be setting off alerts to prompt investigation. This, in its own right, would constitute a strong insider threat detection program. But there is one more safeguard that provides the cherry on top: a user self-audit.

Much like a credit card statement shows every transaction in a time period, individual users can be shown their own risk-ranked anomalous activities, identities, access rights, devices and other key data points via a web portal. When users themselves flag an anomaly, the false positive rate is very low, and they can supply richer context faster than IT could gather on its own. What’s more, visibility into which data sources are monitored and analyzed against dynamic peer groups also acts as a deterrent against insider threats.

Conclusion

To detect insider threats, organizations need to implement a completely different approach and set of tools from those that are used to detect threats coming from the outside. A combination of user behavior analysis and identity attributes and privileges can be used to surface truly anomalous activity that is well out of the realm of normal behavior, thus setting off alerts prompting response and mitigation.

Want to learn more about what you can do to detect insider threats? Read the whitepaper Best Practices for Implementing an Insider Threat Program.