ISVs with popular online computing offers (notably Apple, Google, and Microsoft) have each adopted and endorsed an "App" model. This writer has considerable conceptual familiarity with Microsoft's version of the approach. Microsoft has positioned its Office 2013 App Model as a better approach to online security, but is it really?

For readers unfamiliar with the broad technical structure of "Apps" and how they might enhance online security for consumers, the key principle is "isolation". In theory, Apps shift a great deal of processing from servers to clients. In other words, activity once handled by the server is transferred to the PCs, smartphones, tablets, and even game consoles consumers use for online computing tasks. These clients process the activity by executing commands written in some version of the JavaScript programming language, or in the latest version of HTML (HTML5 at the time of this post).

In the case of the Office 2013 App Model, developers rely heavily on the jQuery function library to quickly add procedures that already exist somewhere online, along with all of the supporting libraries required for successful execution. But this practice poses several difficulties, a couple of which directly affect online security for consumers. First, there are several versions of the jQuery library in circulation. When an App is developed against one version, and another App built against a different version is added to the same computing environment (for example, Office 365), the potential for conflict arises, which can degrade service for the end consumer.
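The collision is easy to model. The sketch below simulates, in plain JavaScript, two copies of a library claiming the same global name, and the save-and-restore pattern (modeled on jQuery's real noConflict mechanism) that lets an App keep a private reference. The loadLib helper, the scope object, and the version strings are all illustrative, not part of any real API:

```javascript
// Two "apps" each load their own copy of a library under the same global
// name. Without a save-and-restore step, the second load clobbers the first.
const globalScope = {};

function loadLib(scope, version) {
  const previous = scope.$;          // remember whatever owned "$" before
  const lib = { version };
  lib.noConflict = function () {     // hand "$" back to the earlier owner
    scope.$ = previous;
    return lib;
  };
  scope.$ = lib;                     // claim the shared global name
  return lib;
}

loadLib(globalScope, "1.9.1");                             // host page's copy
const appLib = loadLib(globalScope, "2.1.1").noConflict(); // app's copy

console.log(globalScope.$.version); // the host's copy is restored
console.log(appLib.version);        // the app keeps its private reference
```

An App that calls the real `jQuery.noConflict()` immediately after loading its own copy follows the same discipline, and is far less likely to break a neighboring App in the same environment.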

Second, the App model's reliance on a client-side method like JavaScript can be said to insulate the server, but this approach inadvertently shifts the burden of security onto the client. Since there are hundreds, thousands, or even millions of different clients interacting with one server (or with many servers behind a load balancer acting as one), the likelihood of a security breach on a client machine is much higher. Once clients are compromised, they can be added to botnets and repurposed for other types of malicious activity.

For better or worse, in late 2014 the best defense against malicious online activity remains a correct set of operational risk management processes, at least for large organizations of users.

As both the number and intensity of successful attempts to subvert popular cloud SaaS offers increase, some prominent industry experts are calling for mandatory two-step verification procedures. But if history provides any reliable metric on the usefulness of these added security controls, two-step verification methods must be tightly managed if they are to provide a useful deterrent to subversive attempts.

Just two days ago a post was published to this blog on a related topic, addressing the recent, highly publicized success of hackers in penetrating a celebrity's account on Apple's iCloud storage service. That post advocated a broader, perhaps mandatory, requirement for consumers of services like iCloud, OneDrive, Google Drive, etc.: any and all users of these services should be required to implement two-step identity verification.

It was therefore encouraging to review a short video interview with Tim Bucher, a respected authority on online security topics. The interview, titled "Apple iCloud options buried: Expert", records opinions from Bucher very similar to those voiced in the post to this blog.

But readers should be aware of a couple of recent instances in which two-step verification methods (including the RSA system Bucher describes in the interview) have been compromised.

Back in March 2011, RSA's SecurID system was, unfortunately, successfully hacked. Of course RSA has long since cleaned up the damage, and, to its credit, the fact that an expert of Bucher's authority still refers to the system as a reliable safeguard is good news.

Back in 2013, Duo Labs identified, and subsequently publicized, potentially dangerous problems with Google's two-factor authentication system. Once again, these problems have since been corrected.

The point of offering these examples is not to discourage readers from implementing similar trusted solutions, but rather to illustrate that any and all of these controls have vulnerabilities. Outside the context of a sound operational risk management policy truly capable of safeguarding online interaction with a cloud SaaS offer, no single control should ever be considered an infallible defense against hackers.

Readers may wonder just what constitutes "a sound attempt to implement an operational risk management policy". Such an attempt is one enforced persistently across any and all daily online computing activity. Any breakdown in the persistence of these procedures can, and unfortunately often does, open the door to successful subversion.

Unfortunately, "dumbing down" does not work when the activity at hand is online computing and the need is to safeguard confidential information.

Small to medium-sized businesses (SMBs) in the U.S. are starting to feel the pain of the increased daily volume of cyber attacks directly, not to mention the malicious intent of the payloads those attacks often carry. Whether that pain amounts to persistent, annoying junk email, or to the mess resulting from a mistaken click on a link in one of those messages, or worse, the end result is the same: SMBs are growing more aware of the risks inherent in what this writer refers to as our consumerized, mono-protocol data communications world.

Anyone with an interest in the Internet of Things marketing communications theme, echoed by a number of participants from Cisco to Microsoft and beyond, should take note of what impact, if any, a more skeptical SMB market will have on its success. Perhaps it is worth a sentence or two to explain why the Internet of Things is actually little more than a marketing communications theme.

"Things" were being connected for data communications purposes long before the Internet became the average consumer's notion of data communications between computing devices over a wide area network. Whether the protocol was one of the industrial fieldbuses (MODBUS, PROFIBUS, etc.), or a hardwired RS-232 serial connection between a computer running a Human Machine Interface (HMI) application and a remote process, or just a sensor, smart machines have been connected to computers since the mid-1970s.
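As a concrete illustration of how spare these legacy links were, a Modbus RTU request is just a handful of bytes protected by a CRC-16 checksum on the serial line, with no authentication at all. The sketch below, in plain JavaScript with frame values chosen for illustration, builds a standard read-holding-registers request and shows the receiver-side check: recomputing the CRC over the whole frame, checksum included, yields zero.

```javascript
// CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001, no final XOR.
function crc16Modbus(bytes) {
  let crc = 0xffff;
  for (const b of bytes) {
    crc ^= b;
    for (let i = 0; i < 8; i++) {
      crc = crc & 1 ? (crc >>> 1) ^ 0xa001 : crc >>> 1;
    }
  }
  return crc;
}

// Read 1 holding register at address 0 from slave 1 (function code 0x03).
const frame = [0x01, 0x03, 0x00, 0x00, 0x00, 0x01];
const crc = crc16Modbus(frame);
const wire = [...frame, crc & 0xff, crc >> 8]; // CRC appended low byte first

console.log(crc16Modbus(wire)); // 0: the frame checks out at the receiver
```

A checksum like this catches line noise, but it does nothing to verify who sent the frame, which is precisely why physical access to the wiring was once the only attack surface that mattered.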

With many different protocols in use for data communications, the threat of malicious individuals manipulating data communications sessions was generally limited to someone physically rearranging wires on a Plain Old Telephone Service (POTS) peg board.

So for anyone familiar with industrial automation and process control, the Internet of Things is little more than a marketing theme promoted by some of the "also-ran" players who did not participate in the birth of Computer Numerical Control (CNC) machining, SCADA, and the like.

But what makes this trendy image particularly scary, and what may, in this writer's opinion, leave a strangely disinterested market should this cycle of hacking continue and accelerate, is the reluctance of the businesses committed to the theme to consider diversifying the data communications protocols they use, which would patch the near defenselessness of traffic carried over TCP/IP and web pages, the combination we call the Internet.