Using big data and machine learning to assess the risk, in near-real time, of user activity
Day after day, an employee uses legitimate credentials to access corporate systems, from a company office, during business hours. The system remains secure. But suddenly the same credentials are used after midnight to connect to a database server and run queries that this user has never performed before. Is the system still secure?
Maybe it is. Database administrators have to do maintenance, after all, and maintenance is generally performed after hours. It could be that certain maintenance operations require the execution of new queries. But maybe it isn’t. The user’s credentials could have been compromised and are being used to commit a data breach.
With conventional security controls there's no clear-cut answer. Static perimeter defenses are no longer adequate in a world where data breaches are increasingly carried out using stolen user credentials. And they have never been of much use against malicious insiders, who abuse their legitimate privileges. Today's BYOD environment can also leave a static perimeter in tatters, as new rules must continually be added for external access.
A new approach, called User Behavior Analytics (UBA), can eliminate this guesswork by using big data and machine learning algorithms to assess the risk, in near-real time, of user activity. UBA employs behavior modeling to establish what normal activity looks like.
This modeling incorporates information about: user roles and titles from HR applications or directories, including access, accounts, and permissions; activity and geographic location data gathered from the network infrastructure; alerts from defense-in-depth security solutions; and more. This data is correlated and analyzed based on past and ongoing activity.
Such analysis takes into account, among other things, transaction types, resources used, session duration, connectivity, and typical peer group behavior. From this, UBA determines what normal behavior is, and what constitutes outlier or anomalous activity. If one person's apparently anomalous behavior (e.g., midnight database queries) turns out to be shared by others in their peer group, it is no longer considered medium or high risk.
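The peer-group check described above can be sketched in a few lines. This is an illustrative toy, not any vendor's algorithm: it scores a new observation (here, a login hour) against both the user's own baseline and the peer group's baseline, and flags it only when it is unusual for both. The function names and the z-score threshold are assumptions.

```python
# Toy sketch of peer-group-aware anomaly detection (illustrative only).
# An event is flagged only if it is rare for the user AND rare for the
# user's peer group -- e.g., midnight queries shared by other DBAs pass.
from statistics import mean, stdev

def anomaly_score(value, history):
    """Z-score of a new observation against a behavioral baseline."""
    if len(history) < 2:
        return 0.0  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if value == mu else float("inf")
    return abs(value - mu) / sigma

def is_anomalous(value, user_history, peer_history, threshold=3.0):
    """True only when the value is an outlier for user AND peers."""
    return (anomaly_score(value, user_history) > threshold
            and anomaly_score(value, peer_history) > threshold)
```

For example, a 1 a.m. login is flagged when the user and their peers all work daytime hours, but not when other members of the peer group also log in after hours.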
Next, UBA performs risk modeling. Anomalous behavior is not automatically considered a risk. It must first be evaluated in light of its potential impact. If apparently anomalous activity involves resources that are not sensitive, like conference room scheduling information, the potential impact is low. However, attempts to access sensitive files, such as intellectual property, carry a higher impact score.
Consequently, the risk posed to the system by a particular transaction is determined using the formula Risk = Likelihood × Impact.
Likelihood refers to the probability that the user behavior in question is anomalous. It is determined by behavior modeling algorithms.
Meanwhile, impact is based on the classification and criticality of the information accessed, and what controls have been imposed on that data.
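Putting the formula together, a single transaction's risk score might be computed as follows. The classification tiers and their numeric impact values here are illustrative assumptions, not part of any standard or product.

```python
# Minimal sketch of Risk = Likelihood x Impact for one transaction.
# Classification tiers and impact values are illustrative assumptions.
IMPACT = {
    "public": 1,        # e.g., conference room scheduling information
    "internal": 3,
    "confidential": 7,
    "restricted": 10,   # e.g., intellectual property
}

def transaction_risk(likelihood, classification):
    """likelihood: probability (0-1), from the behavior model, that the
    activity is anomalous; classification: sensitivity of the data touched."""
    return likelihood * IMPACT[classification]
```

The same 90%-likely anomaly thus scores ten times higher against restricted intellectual property (0.9 × 10 = 9.0) than against public scheduling data (0.9 × 1 = 0.9).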
Transactions and their computed risks can then be associated with the user performing them, to determine that user's overall risk level. The calculation of user risk typically includes additional factors, such as asset classification, permissions, potential vulnerabilities, and policies. Any increase in these factors will increase that user's risk score.
Custom weighting values can be used for all the factors in these calculations, to automatically tune the overall model.
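One plausible way to combine transaction risk with weighted contextual factors is sketched below. The aggregation rule, factor names, and weights are all assumptions made for illustration; a real UBA product would tune these (or learn them) rather than hard-code them.

```python
# Sketch of per-user risk aggregation with custom factor weights.
# Aggregation rule, factor names, and weights are illustrative assumptions.
def user_risk(transaction_risks, factors, weights):
    """Scale the user's worst recent transaction risk by weighted
    contextual factors (permissions, vulnerabilities, etc.)."""
    base = max(transaction_risks, default=0.0)
    context = sum(weights[name] * value for name, value in factors.items())
    return base * (1.0 + context)
```

For example, the same transaction history produces a higher score for a user who holds a privileged account on a host with open vulnerabilities, because each weighted factor multiplies up the base risk.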
In the end, UBA collects, correlates, and analyzes hundreds of attributes, including situational information and third-party threat information. The result is a rich, context-aware petabyte-scale dataset.
UBA's machine learning algorithms can not only weed out false positives and provide actionable risk intelligence, but also revise norms, predictions, and overall risk scoring processes based on the information collected.
Changes in information classification, as well as operational changes (such as new departments, new job codes, or new locations), are automatically incorporated into the system's datasets. For example, if an IT administrator is temporarily granted a higher level of system access, their risk scores will be altered during that period. UBA can also, in automated fashion, determine which custom weighting values have the most operational significance in reducing false positives.
The resulting intelligence can be mined offline for insights into the enterprise's security posture, often uncovering unsuspected vulnerabilities, such as the provisioning of more user groups than users, the presence of unused credentials, or users with significantly more or fewer access privileges than they should have.
Less obvious malicious behavior, such as sabotage, the theft of an enterprise’s trade secrets, or longer-term activity like financial fraud, will also produce patterns of anomalous behavior that a UBA system can detect.
Finally, if a user is found to pose a significant risk, the system can react accordingly, from blocking further access to imposing risk-based adaptive authentication that will challenge them for a second form of identification. The user’s post-login activities may also be restricted.
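The graduated responses described above amount to a simple policy over the risk score. The thresholds and action names below are assumptions for illustration; in practice they would be tuned to the organization's tolerance for friction versus risk.

```python
# Sketch of risk-based adaptive response (thresholds are assumptions).
def respond(risk_score):
    """Map a user's risk score (0-100) to a graduated response."""
    if risk_score >= 80:
        return "block_access"        # deny further access outright
    if risk_score >= 50:
        return "step_up_auth"        # challenge for a second factor
    if risk_score >= 25:
        return "restrict_activity"   # limit post-login activities
    return "allow"
```

Low-risk users proceed unchallenged, while rising scores trigger progressively stronger controls, ending in an outright block.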
UBA is transforming security and fraud management because it enables enterprises to detect when legitimate user accounts/identities have been compromised by external attackers or are being abused by insiders for malicious purposes.
Gurucul is a provider of identity-based threat deterrence technology. The author is a recognized expert in information security, identity and access management, and security risk management. Prior to founding Gurucul, Saryu was a member of the founding team at Vaau, an enterprise role-management start-up acquired by Sun Microsystems. She has held leadership roles in product strategy for security products at Oracle and Sun Microsystems and spent several years in senior positions at the IT security practice of Ernst & Young.
External Link: http://www.networkworld.com/article/2904356/security0/detecting-advanced-threats-with-user-behavior-analytics.html