Articles

Anomaly detection and log management; State of virtualization security and more

Anomaly Detection and Log Management: What We Can (and Can’t) Learn from the Financial Fraud Space

Have you ever been in a store with an important purchase, rolled up to the cash register and handed over your card only to have it denied? You scramble to think why: “Has my identity been stolen?” “Is there something wrong with the purchase approval network?” “Did I forget to pay my bill?” While all of the above are possible explanations, there’s a very common one you may not think of immediately: anomaly detection. Specifically, if the purchase you have in hand doesn’t match up with your buying history, your bank might think it’s fraud and refuse the transaction. Even small changes in buying habits can trigger an alert. For example, credit card holders traveling outside the US for the first time may find their card declined in Paris on a European vacation. Buyers who rarely charge items over a couple of hundred dollars could find their first big-ticket purchase (like a couch or a piece of jewelry) blocked, at least temporarily.

While moderately annoying, this kind of card block can usually be cleared up quickly with a call to the card company’s customer service department. And over the years, the sensitivity of these alerting mechanisms has become so accurate that a card may be simultaneously used for legitimate purchases while fraudulent ones are being denied. Online banking fraud detection has also made significant advances in the past few years. Using information like originating IP (internet protocol) address and time of login, many banks can now flag suspicious activity and block fraudulent transactions before they occur.
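The IP-and-time flagging described above can be sketched very simply. The following is an illustrative toy model, not any bank’s actual system: the function and field names are hypothetical, and it flags a login when it comes from a country never seen in the account’s history or at an hour the account holder has never logged in before.

```python
from datetime import datetime

def is_suspicious_login(login, history):
    """Flag a login from an unseen country or at an unusual hour.

    `login` and each `history` entry are dicts with "country" and an
    ISO-8601 "time" string (hypothetical format for this sketch).
    """
    known_countries = {h["country"] for h in history}
    usual_hours = {datetime.fromisoformat(h["time"]).hour for h in history}
    hour = datetime.fromisoformat(login["time"]).hour
    return (login["country"] not in known_countries
            or hour not in usual_hours)

history = [
    {"country": "US", "time": "2010-03-01T09:15:00"},
    {"country": "US", "time": "2010-03-02T10:05:00"},
]
# New country and a 3 a.m. login: both anomalous relative to history.
print(is_suspicious_login({"country": "RO", "time": "2010-03-03T03:30:00"}, history))  # True
```

A real system would weigh many more signals (device fingerprint, transaction velocity, geolocation distance) and score them probabilistically rather than applying a hard set-membership test, but the narrow, well-bounded history is what makes the decision tractable.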

Anomaly Detection Complexities
One of the early promises from anomaly detection solution providers was that misuse on the corporate network could be flagged as quickly as credit card companies call out financial fraud. While this is a very appealing notion for IT managers, the reality is significantly more complex. Why? Well, in large part because our IT network use is more complex than our financial activities. An average corporate IT user may access 10 or even 100 different systems, applications, and services during the course of a standard business week. And to achieve highly accurate anomaly awareness, solutions must analyze activity on each of those systems with the same level of contextual awareness as a single credit card or bank account.

On the surface, the IT network complexity argument may sound a little specious. Aren’t these all simply transactions? Can’t the same metrics and algorithms be applied? Yes and no. Interdependency and rapidly shifting responsibilities in an organization directly affect the ability to correlate events and detect anomalies. Consider a retail company that is going through a merger and is simultaneously moving messaging hygiene, some storage, and CRM (customer relationship management) to a cloud computing model. Roles and access from the company being merged aren’t normalized to the existing organization’s roles, and new rules for DLP (data leak prevention) and management of customer PII (personally identifiable information) must be applied to maintain compliance in the cloud.

Trying to learn what’s “normal” for all of these new models may take months, or even years, from an anomaly detection standpoint, while “traditional” access control methods (rules and policy controls) can be implemented immediately based on corporate requirements. As use of the new systems normalizes, and anomaly monitoring is tuned to decrease false positives, the company will probably be able to achieve higher accuracy with the anomaly detection assessment of the log data.

What can we learn?
Financial services anomaly awareness for fraud detection teaches us that when the scope and dependencies of the data being analyzed are kept to a relatively narrow space, high accuracy rates can be achieved. Contributing to that accuracy is the history of usage data available to card companies and banks for deciding what’s “normal” and what’s “abnormal.”

To leverage this success, use anomaly detection for well-defined and well-known purposes. An activity that changes frequently or has unpredictable types of access is probably a poor fit; going too broad with assessment criteria will lead to too many false positives. For example, alerting on any access to a critical server’s configuration file is too loosely defined – if there is a business need to update that file frequently, this would result in significant false positive alert activity. Tuning the anomaly trigger to a more tightly defined set of criteria would bring the alerts down to a manageable number. In this case, setting rules for approved roles (administrator access only), devices (only approved IP addresses) and time (only during normal business hours) will enable the anomaly detection engine to be far more accurate.
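The tightening described above (approved roles, approved IP addresses, business hours) can be expressed as a simple rule check. This is a minimal sketch with hypothetical names and values, not a representation of any particular product’s rule engine:

```python
# Hypothetical rule set for access to a critical server's configuration file.
APPROVED_ROLES = {"administrator"}
APPROVED_IPS = {"10.0.0.5", "10.0.0.6"}   # approved admin workstations
BUSINESS_HOURS = range(8, 18)             # 08:00-17:59 local time

def should_alert(event):
    """Alert only when access falls outside all three approved criteria.

    `event` is a dict with "role", "source_ip", and "hour" keys
    (an assumed shape for this illustration).
    """
    return not (event["role"] in APPROVED_ROLES
                and event["source_ip"] in APPROVED_IPS
                and event["hour"] in BUSINESS_HOURS)

# An admin updating the file from an approved workstation at 2 p.m.: no alert.
print(should_alert({"role": "administrator", "source_ip": "10.0.0.5", "hour": 14}))  # False
# The same file touched at 3 a.m. from an unknown address: alert.
print(should_alert({"role": "administrator", "source_ip": "203.0.113.9", "hour": 3}))  # True
```

The point is the narrowing itself: routine, business-approved access to the file generates no noise, so the alerts that do fire are worth investigating.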

Another key area where anomaly detection can be of use in risk assessment is in the realm of device and system activity – independent of user or service activity. A DDoS (distributed denial of service) attack will usually show up clearly in the log files and can be tied directly to generated alerts. While a SYN flood as recorded in the log files of a web server may not indicate a user is attempting to perpetrate fraud, it is anomalous and, in most instances, indicates an attack or other unwanted activity in progress.
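Spotting a SYN flood in log data can be as simple as counting connection-setup events per source address against a baseline threshold. The sketch below assumes a made-up log format (`<timestamp> SYN <source_ip>`); a real firewall or web server log would need its own parser, and the threshold would come from the tuned baseline discussed earlier:

```python
from collections import Counter

# Hypothetical log excerpt in an assumed "<timestamp> SYN <source_ip>" format.
LOG = """\
12:00:01 SYN 198.51.100.7
12:00:01 SYN 198.51.100.7
12:00:02 SYN 203.0.113.4
12:00:02 SYN 198.51.100.7
12:00:02 SYN 198.51.100.7
"""

def syn_sources_over_threshold(log_text, threshold):
    """Return source IPs whose SYN count exceeds the baseline threshold."""
    counts = Counter(
        fields[2]
        for fields in (line.split() for line in log_text.splitlines())
        if len(fields) == 3 and fields[1] == "SYN"
    )
    return [ip for ip, n in counts.items() if n > threshold]

print(syn_sources_over_threshold(LOG, 3))  # ['198.51.100.7']
```

Because device-level activity like this has a well-understood “ordinary” (a handful of SYNs per source per second), it is exactly the kind of narrow, well-defined space where anomaly detection performs well.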

Summary
Behavior that’s out of the ordinary is often a red flag that fraud or misuse is occurring. Analyzing a single user’s credit card transaction history for fraud requires understanding what’s “ordinary” in a relatively narrow assessment space. Understanding how a user interacts with a complex corporate IT ecosystem by parsing the log data from multiple services and systems is far less narrow. This doesn’t mean anomaly detection can’t be accomplished with intelligent log management solutions, but it does mean that to be successful with anomaly detection on corporate IT systems, companies must narrow the scope of their rule sets and look for flags in areas where “ordinary” can be determined with a level of certainty. Other assessment criteria such as rule- and role-based access control settings, policy enforcement, and compliance requirements are necessary components of a comprehensive log and event management solution. Carefully tuned anomaly detection, however, is also a critical part of the misuse puzzle.

Related resource: EventTracker’s Anomaly Detection module detects new and significant variations from normal operations, a baseline that can be configured by users to match activity patterns specific to their organizations.

Next month: Stay tuned for a new series on Log Management and Compliance by Anton Chuvakin

Did you know? As the article states, “The problem with bans is that employees find ways around them resulting in an even worse cybersecurity posture”. A more effective way to prevent loss/theft of data via USB devices is with the Trust and Verify approach that EventTracker provides.

Info Security Products Guide, a leading publication on security-related products and technologies, has named EventTracker a finalist in the 2010 Global Product Excellence Awards under the Security Information and Event Management (SIEM) category. These awards were launched to recognize cutting-edge, advanced IT security products that have the highest level of customer trust worldwide.
