Real-world use cases illustrate the power of analytics for detecting stealth threats
Machine learning has emerged as the ideal tool for exploiting the opportunities hidden in big data, because traditional methods are not well suited to extracting the full value buried in data lakes. It is especially useful when the volume of data is too large for comprehensive analysis, and when the range of potential linkages and relationships among disparate data sources is too great for humans to process, test every hypothesis and extract the intelligence the data contains. Machine learning shines where human-scale thinking and analysis reach their limits.
While machine learning can deliver many benefits for security practitioners, it is not the panacea that it is sometimes purported to be. Let’s consider what we can expect from applying machine learning to advanced security analytics. The following real-world use cases have been anonymized to protect the privacy of the organizations involved.
Too much access
Applying machine learning and security analytics to directory, identity, access and activity data can reveal access outliers and excess access in user accounts that have been over-provisioned. The findings can be eye-opening. A large financial organization was able to reduce account privileges and entitlements by 83% over a nine-month period based on the visibility into users’ access and actions achieved with machine learning. In another example, a manufacturing firm achieved an 89% reduction in account privileges and entitlements. Access is often managed from a CIO perspective of enabling access, while CISOs view compromise and misuse of access as a risk to the organization. This often creates a divide between the CIO and CISO camps on this issue, sometimes called the one-hundred-foot wall between identity and security.
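The underlying idea can be illustrated with a simple sketch. Assuming access data is available as user-to-entitlement mappings and each user has a defined peer group (for example, others in the same job role), entitlements that are rare among a user's peers become candidates for excess access. The function name, data shapes and threshold below are illustrative, not taken from any particular product:

```python
from collections import Counter

def find_excess_access(entitlements, peer_groups, threshold=0.25):
    """Flag entitlements a user holds that are rare among peers.

    entitlements: dict user -> set of entitlement names (hypothetical shape)
    peer_groups:  dict user -> list of peer user names (e.g. same job role)
    threshold:    an entitlement held by fewer than this fraction of
                  peers is flagged as potential over-provisioning
    """
    flagged = {}
    for user, ents in entitlements.items():
        peers = peer_groups.get(user, [])
        if not peers:
            continue  # no peer group to compare against
        # Count how many peers hold each entitlement
        counts = Counter(e for p in peers for e in entitlements.get(p, set()))
        rare = {e for e in ents if counts[e] / len(peers) < threshold}
        if rare:
            flagged[user] = rare
    return flagged
```

Production systems would use richer peer models and learned thresholds, but even this set-comparison view tends to surface accounts worth reviewing.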
Hidden privileged access
Monitoring administrators with the keys to the kingdom is one of the most popular real-world use cases for machine learning, since it can drill down past the account level into privileged entitlements to uncover unexpected results. A healthcare insurance company discovered that 70% of the privileged access entitlements in user accounts within its environment were unknown or undocumented. This visibility enabled the company to monitor the newly found high-access privileges of those individuals. The lesson is that entitlements define privileged access, not the account. By replacing legacy methods that track privileged access at the account level, machine learning provides advanced visibility into privileged access and activity at the entitlement level.
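A minimal sketch of the entitlement-level idea: given a set of entitlements classified as privileged and a documented privileged-access inventory, accounts holding privileged entitlements absent from the inventory can be flagged. All names and data shapes here are hypothetical:

```python
def undocumented_privileged(account_entitlements, privileged, documented):
    """For each account, list privileged entitlements it holds that do
    not appear in the documented privileged-access inventory.

    account_entitlements: dict account -> set of entitlements it holds
    privileged: set of entitlement names classified as privileged
    documented: dict account -> set of entitlements on record for it
    """
    findings = {}
    for acct, ents in account_entitlements.items():
        # Privileged entitlements actually held, minus those on record
        hidden = (ents & privileged) - documented.get(acct, set())
        if hidden:
            findings[acct] = hidden
    return findings
```

The point the use case makes is visible even in this toy: the check runs against entitlements, not account names, so a "normal" account quietly holding an admin entitlement still surfaces.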
Leveraging user context beyond a SOC
With so many users and so much activity, manually monitoring individual accounts for security incidents is not feasible. Machine learning can routinely comb through millions of access and activity logs to identify anomalous behavior and high-risk individuals. There are many real-world use cases here. One company discovered that a highly privileged account had been compromised for three and a half years. The anomalous behavior came to light only after the company instituted a weekly self-audit risk report, generated with machine learning and shared with employees for their review and feedback. The employee whose account had been compromised reviewed the report with personal context that was unavailable to security operations center (SOC) personnel: the account was being used on a day the employee was on PTO, a clear sign of a shadow account login. Context is important for machine learning, and it is often found outside security teams, among employees, partners and customers.
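At its simplest, per-user behavioral baselining can look like the following sketch, which scores a login's hour of day against that user's own history. Real systems model far richer features than a single statistic; the function name, tuple shapes and z-score threshold are assumptions for illustration:

```python
from collections import defaultdict
from statistics import mean, pstdev

def flag_anomalous_logins(history, new_events, z=3.0):
    """Flag logins whose hour-of-day deviates strongly from the user's
    own historical pattern (a simple per-user baseline, not a full model).

    history:    list of (user, hour) tuples from past activity logs
    new_events: list of (user, hour) tuples to score
    """
    hours = defaultdict(list)
    for user, hour in history:
        hours[user].append(hour)
    flagged = []
    for user, hour in new_events:
        past = hours.get(user)
        if not past or len(past) < 2:
            continue  # not enough baseline to judge this user
        mu, sigma = mean(past), pstdev(past)
        if sigma == 0:
            if hour != mu:  # perfectly regular user doing something new
                flagged.append((user, hour))
        elif abs(hour - mu) / sigma > z:
            flagged.append((user, hour))
    return flagged
```

Note what the baseline cannot know: whether the user was on PTO that day. That context lives with the employee, which is exactly why sharing risk reports beyond the SOC paid off in the case above.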
Hybrid cloud/data center risks
Hybrid environments that blend cloud and on-premises infrastructure inevitably create security blind spots. One company applied machine learning to data from both on-premises and cloud sources to identify risky activity. It discovered that confidential data was being accessed and downloaded from the cloud by authorized users, then shared with unauthorized users via both on-premises file shares and cloud file-sharing apps. Machine learning can expose new and high-risk uses of access and activity in hybrid environments where data access and flows are unexpected. This becomes increasingly important as environments grow more fragmented across users, apps and data.
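The download-then-reshare pattern described above can be sketched as a simple correlation of two event streams, one from the cloud side and one covering shares anywhere. The event shapes and names below are hypothetical:

```python
def risky_reshare(downloads, shares, authorized):
    """Correlate cloud download events with share events to find
    confidential files passed to users outside the authorized set.

    downloads:  list of (user, file) cloud-download events
    shares:     list of (user, file, recipient) share events
                (on-premises file shares or cloud file-sharing apps)
    authorized: dict file -> set of users allowed to access it
    """
    downloaded = {(u, f) for u, f in downloads}
    findings = []
    for user, f, recipient in shares:
        # Flag only files the sharer actually pulled from the cloud,
        # shared onward to someone outside the authorized set
        if (user, f) in downloaded and recipient not in authorized.get(f, set()):
            findings.append((user, f, recipient))
    return findings
```

The value comes from joining the two sources: either stream alone looks like routine, authorized activity.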
Running a proof of concept with new technologies such as machine learning is always interesting. For example, you don’t typically expect a member of the security team to be flagged with one of the highest risk scores during the project! Occasionally, the most trusted users in an environment can be the biggest security policy violators, since they sometimes operate under the assumption that they are not subject to the security controls they are responsible for enforcing. In one case, a user began performing unusual activity on external employment websites 20 days before the PoC began. Because machine learning models are not based on rules, they can discover anomalies regardless of the security clearance and privileges associated with the user account.
Good apps behaving badly
SaaS applications such as Microsoft Office 365 and Google Apps are now commonplace in most enterprise environments. Detecting when the execution space of these known-good applications has been hijacked, and unrecognized executables are communicating with external IP addresses or unauthorized or unknown domains, is difficult to do manually. Machine learning can surface these suspicious activities for immediate investigation.
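A crude version of this check can be sketched as an inventory comparison: flag connections made by executables whose hash is not in the known-good inventory, or to domains outside an allowlist. In practice machine learning learns these baselines from observed behavior rather than relying on static lists; the names and event shapes here are illustrative:

```python
def suspicious_connections(events, known_hashes, known_domains):
    """Flag network connections from unrecognized executables or to
    domains outside the allowlist.

    events:        list of (exe_hash, domain) connection events
    known_hashes:  set of hashes for known-good executables
    known_domains: set of domains the application is expected to contact
    """
    return [
        (h, d) for h, d in events
        if h not in known_hashes or d not in known_domains
    ]
```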
These real-world use cases all share a common thread: machine learning picks up where human analysis of large security data sets, events and activity leaves off. By shining a light on what SOC teams cannot discern through manual inspection, machine learning makes it possible to isolate high-risk behaviors that warrant the attention and investigation of security analysts.