Researchers at IBM Haifa (Israel) have developed a software program called MAGEN (Masking Gateway for Enterprises); the acronym is also the Hebrew word for “shield”. It’s designed to keep sensitive or personal information from appearing on computer screens in the presence of unauthorized personnel.

This is a proof-of-concept creation, designed primarily for enterprise customers, but it has merit and could eventually filter down into our daily lives. IBM has applied for two patents related specifically to this software, one for manipulating images and one for scrambling words.

MAGEN’s value comes from the fact that it does not change the software program or the data itself. It simply filters out information before it reaches the monitor, and its rules database can be modified as needed to keep up with changes in confidentiality requirements, regulations, or users.

Examples of envisioned use include health insurance companies that outsource customer service and claims processing to third parties. Rather than maintaining separate databases, some with confidential information and some without, the same source can serve both, with only the portions necessary for the task being displayed.
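IBM hasn’t published implementation details, but the general idea of a rules database sitting between the data source and the display can be sketched roughly like this (every name below is hypothetical, invented for illustration, and not IBM’s API):

```python
# Hypothetical sketch of rules-driven masking between a data source and the
# screen. None of these names come from IBM; MAGEN's actual design is not public.

RULES = {
    # role -> set of fields that role is allowed to see
    "claims_processor": {"claim_id", "diagnosis_code"},
    "customer_service": {"claim_id", "customer_name"},
}

def mask_record(record: dict, role: str) -> dict:
    """Return the record with every field the role may not see masked out."""
    allowed = RULES.get(role, set())
    return {k: (v if k in allowed else "***") for k, v in record.items()}

record = {"claim_id": "C-1009", "customer_name": "J. Doe",
          "ssn": "123-45-6789", "diagnosis_code": "J45"}

print(mask_record(record, "claims_processor"))
# ssn and customer_name are masked; claim_id and diagnosis_code pass through
```

The appeal of this architecture, if the press release is to be believed, is that the rules can change without touching the database or the applications that produce the screens.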

It irritates me when I read press releases like this that don’t give any real details about how the technology works. Since MAGEN reportedly does not change the underlying software, only the presentation on the screen, how does it know what to mask?

Is there some snooper software that looks for keywords? Is it looking for data patterns, such as the xxx-xx-xxxx format of a Social Security number? What if I wrote a custom application that used that same format for some other, non-private purpose? Would my software be affected?

According to Haim Nelken, manager of Integration Technologies at IBM’s Haifa Research Lab in Israel:

“The system works at the screen level by ‘catching’ the information before it hits the screen, analyzing the screen content, and then masking those details that need to be hidden from the person logged in.”

That sounds peachy, but how does it analyze this? What cues is it given to tell one pixelated form from another? At the time of this writing, the linked demo was inaccessible, as were the other demo links on the main research page.
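If the analysis really is pattern-based, as I’m speculating above, it would amount to something like the following, and the false-positive problem becomes obvious (a purely speculative sketch, not IBM’s actual method):

```python
import re

# Speculative: mask anything on screen shaped like a Social Security number.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_ssns(screen_text: str) -> str:
    """Replace every SSN-shaped token with a masked placeholder."""
    return SSN_PATTERN.sub("***-**-****", screen_text)

print(mask_ssns("SSN: 123-45-6789"))          # masked, as intended
print(mask_ssns("Part number: 555-12-3456"))  # also masked: a false positive
```

A custom application whose part numbers happen to share the SSN format would get masked right along with the real secrets, which is exactly the worry raised above.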

To be honest, this sounds a little Big Brotherish, as if the machine were capable of deciding for people what they should and shouldn’t see, based on parameters that (at least from my current perspective) seem whimsically close to magic.

Reader Comments

Alon (http://www.active-base.com)

Masking at the screen level is not a realistic approach; development tools and reporting tools, for example, can outsmart it…
We apply Dynamic Data Masking (DDM) by rewriting the user’s request in real time into a ‘masked’ request, thus preventing a security breach.
As a simple example, when the user runs ‘select credit_card…’, ActiveBase Security rewrites the request in real time, before it reaches the database, into ‘select scrmble(credit_card)…’, making the result set safe and anonymous…
Try it out: http://www.active-base.com/activesecurity.html
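The rewrite the commenter describes can be sketched as a simple transformation of the SQL text before it reaches the database. This is an illustrative toy, not ActiveBase’s product; the column list and the scramble-function name are made up, and a real product would parse the SQL properly rather than string-match it:

```python
import re

# Columns considered sensitive; selecting one gets wrapped in a scrambling
# function. Both the column set and the function name are illustrative only.
SENSITIVE_COLUMNS = {"credit_card", "ssn"}

def rewrite_query(sql: str) -> str:
    """Wrap each sensitive column reference in a scramble() call."""
    def wrap(match: re.Match) -> str:
        word = match.group(0)
        return f"scramble({word})" if word.lower() in SENSITIVE_COLUMNS else word
    return re.sub(r"\b\w+\b", wrap, sql)

print(rewrite_query("select credit_card, name from customers"))
# -> select scramble(credit_card), name from customers
```

Because the rewrite happens before the query reaches the database, the application and the data both stay unchanged, which is the same selling point IBM claims for MAGEN, just applied one layer down.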