How self-audits expose threats only users can detect
Leslie K. Lambert
CSO | Aug 8, 2016
Sometimes it takes a village. In the case of information security, sometimes it takes an employee. Forward-thinking enterprises can go beyond simply providing users with IT security awareness training and hygiene tips, and enlist them in the threat-monitoring process.
Take, for example, the financial services firm that decided to provide users of high-privilege accounts with weekly ‘self-audit’ reports in which all of their access and activity were given a risk score. Upon receiving the report on a Friday, one employee became suspicious: the report showed account login and activity on Wednesday, a day the employee was out of the office and never logged in. Upon further investigation, the security team discovered that one of the company’s high-privilege accounts had been compromised by an external intruder for more than 3.5 years. Without the unique context provided by the self-audit report (machine learning risk scores combined with user visibility), the breach might have continued for several more years.
To bring this closer to our own experience, let’s use an example we’re all familiar with: our monthly credit card statements. When a statement arrives, most of us review the list of charges to verify that they are legitimate. When we notice something amiss, we immediately contact the credit card company to identify and dispute the fraudulent charges.
Beyond monthly statements, we sometimes receive urgent phone calls or emails from credit card fraud departments alerting us to anomalous account use or potential fraud. Sometimes, suspicious transactions are blocked outright. Many of us have experienced the embarrassment and inconvenience of a ‘false positive,’ when a legitimate transaction is blocked unnecessarily because we didn’t notify the credit card company that we were taking a well-deserved vacation to the Greek islands. Who knew that travel to Greece would be an anomaly?
How do the credit card companies identify these anomalies? Data, and lots of it. They review and monitor hundreds of thousands of transactions on a global basis, and in near real-time they block and/or alert us to suspicious activity. It’s time to apply this powerful protection partnership model to enterprise information security.
In the same fashion, let’s ask our information security departments to send us regular ‘statements’ that outline our online activities and assign them risk scores based on past behavior. Applying the same principles of data collection, normalization, and link analysis (i.e., machine learning) employed by credit card companies will enable employees to review and easily pinpoint any online actions that are risky, out of character, or that they simply didn’t take.
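To make the idea concrete, here is a minimal sketch of how such a ‘statement’ might score activity against a user’s baseline. The `risk_score` function, the sample login counts, and the 75-point flagging threshold are all hypothetical illustrations (a simple z-score heuristic), not how any particular product works; real systems use far richer behavioral models.

```python
from statistics import mean, stdev

def risk_score(history, observed):
    """Score how far an observed value deviates from a user's historical
    baseline, mapped onto a 0-100 scale. Hypothetical heuristic: the
    absolute z-score, scaled so that z >= 4 saturates at 100."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return 0.0 if observed == mu else 100.0
    z = abs(observed - mu) / sigma
    return min(100.0, round(z * 25, 1))

# Hypothetical weekly self-audit data: logins per day in past weeks,
# plus this week's observed activity.
login_history = [3, 4, 3, 5, 4, 3, 4]
this_week = {"Mon": 4, "Tue": 3, "Wed": 41, "Thu": 5, "Fri": 4}

# Flag only the days whose activity is wildly out of character,
# so the report the user reviews stays short and actionable.
flagged = {day: risk_score(login_history, count)
           for day, count in this_week.items()
           if risk_score(login_history, count) >= 75}
print(flagged)  # Wednesday's burst of logins stands out
```

A user receiving this report would see only Wednesday flagged, and, like the employee in the story above, would know immediately whether that activity was really theirs.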
Deputizing users and asking them to join the fight against security threats is a new concept, and it can dramatically increase the size and power of the internal information security team. There’s one additional benefit: it may deter insider threat activity, since users know their access and online activities are being monitored.
There’s more to it, though. Success depends on the ability to communicate the need for, and to build, a company culture of partnership and transparency between users and the information security team. Too often, a company’s information security function is viewed by users with suspicion and seen as “Dr. No.”
The use of ‘self-audit’ processes holds great potential, but like any double-edged sword, it requires careful planning, communication, and management to win over potentially distrustful users.