
Data Aggregation and Bayes Classifiers

By Arunava Chatterjee, October 24, 2009

CEP can monitor business events and adapt business processes in a near real-time fashion.

With the advent of distributed systems and paradigms such as Service-Oriented Architecture (SOA), data can be combined from disparate sources to create information in a manner that was previously not possible. While this leads to an unprecedented level of insight into the behavior of an organization, it also allows inferences about activities that may not be desirable -- for instance, leading to a security breach. This has been previously called the "Security Inference Problem" and has been an area of some study [1].

In this article, I discuss this problem and present possible solutions in the context of a metadata-driven SOA implementation. Fundamentally, I tailor the "Document Classification Problem" for the purposes of security. The solution I present involves a combined deterministic and probabilistic approach to ascertain the security level of data composites prior to presenting the data to the user. This is achieved through Complex Event Processing (CEP) as an implementation of Operational Business Intelligence.

Background

Historically, analytic processing has been conducted offline. However, with improvements in hardware, the notion of analytic processing in operational environments is gaining interest. CEP has been proposed in the SOA world as a mechanism to monitor business events and adapt business processes in a near real-time fashion. The introduction of CEP and related approaches such as Event Stream Processing (ESP) into an operational scenario implies a level of business agility that has thus far been difficult to achieve. In principle, operational intelligence allows a business to monitor trends and act on them with unprecedented speed. Such activities include:

Fraud Analysis

User Behavior

Intrusion Detection

Risk Analysis

Scheduling and Control

Another possible activity for CEP is security enforcement. Of the possible scenarios that can be encompassed by security enforcement, I focus on the possibility of users inferring information beyond their security level by combining data to which they already have access. This is sometimes called the "Security Inference Problem." While it may be impossible to eliminate this problem altogether (i.e., a user may have access to multiple unrelated systems and thus combine information external to any given report), it should be possible to reduce the likelihood that any one system provides data composites that are themselves a security breach.

Problem Context

The situation in which I've encountered the data aggregation problem is a large-scale SOA implementation requiring data to be combined from numerous data sources. Metadata is maintained for the data and data sources, and provides the body of information used to determine the security level of any particular datum. The desire for ad hoc combinations of data through client- and server-side mashups leads to the possibility of the Security Inference Problem. I present one possible solution in this context.

Reducing the Security Inferencing Problem: Bayesian Approaches

The Security Inference Problem has been discussed in terms of information flow and database security [2,3]. Among the proposed solutions, approaches leveraging Bayesian statistics have been considered [4]. A detailed discussion of Bayesian approaches is beyond the scope of this article; for more information, I suggest you check the "References" for further reading [5,6,7]. I am not interested in the religious battles between Bayesians and non-Bayesians. My only interest is to find a solution for a particular scenario. I briefly discuss Bayesian statistics, focusing on Naive Bayes Classifiers.

Bayes Theorem can be interpreted in terms of a hypothesis and evidence supporting that hypothesis. In effect, it states that the probability P that a given hypothesis h is true is related to a collection of data, G = {ei}, supporting the hypothesis, weighed against the chance that the evidence consists of false positives. More succinctly, it can be written as:

P(h|G) = P(G|h)P(h) / [P(G|h)P(h) + P(G|¬h)P(¬h)]   (Equation 1)

Taking the denominator as the probability of G leads to:

P(h|G) = P(G|h)P(h) / P(G)   (Equation 2)

Bayes Theorem provides a means for updating the probability of h given new evidence, ei. Given this relationship, you can conceive of probability thresholds beyond which a hypothesis could be considered "of interest."
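To make the updating idea concrete, here is a minimal sketch in Python of iterative Bayesian updating against a threshold. The prior, the per-element likelihoods, and the threshold value are all invented for illustration; in a real system they would be estimated from the metadata repository. The sketch assumes the evidence items are conditionally independent, so Bayes Theorem can be applied once per item, with each posterior becoming the next prior.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayes update: return P(h|e) from P(h) and the two likelihoods."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / evidence

# Illustrative values only: initial belief that the aggregate is a breach,
# and (P(e|h), P(e|not h)) pairs for three observed data elements.
prior = 0.05
evidence_stream = [(0.9, 0.3), (0.8, 0.4), (0.7, 0.2)]

THRESHOLD = 0.5  # beyond this, the hypothesis is "of interest"
for p_eh, p_enh in evidence_stream:
    prior = update(prior, p_eh, p_enh)

print(prior, prior > THRESHOLD)
```

Note how a low initial belief can still cross the threshold after a few mutually reinforcing pieces of evidence; this is exactly the aggregation effect the Security Inference Problem describes.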

As mentioned earlier, in my case, contextual metadata is maintained on all data used for reports. The evidence is a collection of metadata and data points. The hypothesis, then, is that some grouping G = {ei} of n unique data elements, each at possibly distinct security levels, has a combined security level given by h. That is, the n elements taken together lead to an inference that breaches the security boundary.
