The Risks of Blurring Embedded, Industrial, and Desktop Systems

The traditional separation between embedded and industrial computing blurred some years ago. The definitions were once as simple as this: embedded meant computing built into a larger system, whereas industrial tended to mean a standalone system. In those early days, that system invariably comprised a 19-in. rack or something comparable.

I’ve no personal qualms with this amalgamation, as the two technologies share the same core values and, of course, the same raison d’être. What I do take issue with is a trend I began witnessing last year: the lines between embedded/industrial computing and its (rightly) distant cousin, desktop computing, are now themselves becoming blurred.

There are several reasons why the public perceives that gap to be shrinking, tempting them to consider the two under one umbrella term. For desktop computers, the historical primary focus on performance has given way to reducing power consumption and improving reliability, which are naturally key drivers in embedded/industrial computing. The need for raw processing power still exists, of course, and demands active (fan-based) cooling to satisfy high heat-dissipation requirements, so at the mid- to high-performance level at least, there remains a clear distinction from our industry’s almost ubiquitous use of passive cooling.

At the lower-performance end of the commercial market, passively cooled, ultra-compact desktop PCs appeared in the guise of Intel’s x86 NUC (Next Unit of Computing). Shrinking further still with the Compute Stick, Intel can offer Atom/Core M performance that fits in your shirt pocket! Such products naturally catch the eye of those specifying embedded/industrial systems because, at first glance, they appear suitable.

With the embedded/industrial computing market more competitive than ever before, costs are being aggressively driven down, closing the price gap with commercial counterparts. A good example of how this is changing our industry is the increasing use of ruggedized touchscreen PCs in the point-of-sale and retail sectors.

Finally, the “one-stop-shop” approach of embedded/industrial vendors, designed to maximize profits by providing the most complete solution a project will accommodate, has vastly reduced the integration effort compared with the days when components were invariably purchased from multiple sources.

This is good for all parties, right? Actually, no. The shrinking gap between what were once polar-opposite implementations has led many companies to reassign embedded/industrial computing sourcing to their IT departments, a move that carries many dangers.

Whilst the gap is narrower, it can’t be ignored that, at least currently, IT departments’ expertise lies in commercial computing. For example, it’s true that a desktop PC from a reputable manufacturer will typically operate for a decade or more, but its reliability testing assumes a friendly, air-conditioned office environment, not a dusty and/or wet industrial location. The image below shows an extreme example.

Our industry also has far wider considerations than the product itself. Obsolescence and supply-chain management remain paramount where complex and expensive industry-specific or geographical approval requirements exist. The commercial sector still struggles to implement such concepts; the reality is that maintaining continuity of supply is never the lowest-cost route, and price will always be king in commercial sectors.

To the untrained eye, commercial and industrial motherboards can appear similar, but their “guts” are of course designed with different primary aims. Commercial motherboards tend to specify the lowest-cost, multi-vendor components, so board makers can simply fit whichever vendor’s part happens to be cheapest, or available, at the time.

Industrial motherboards are designed with long-term reliability as the chief influence. One example is the common use of more expensive solid-state capacitors in place of traditional electrolytic types. In commercial computing’s defense, this isn’t just about cost: its products are expected to be upgraded within a few years, so extending a projected lifetime from, say, five to ten years at any cost is rarely viable.

It’s true that IT departments’ KPIs include monitoring downtime and responding rapidly to failures, but a desktop failure generally means a user temporarily working at an alternative machine while theirs is resolved. That differs vastly from an entire production line going down, where even short periods of downtime cause major financial losses.

In these scenarios, prevention, not cure, must be the priority, and that often means spending more on hardware to attain a higher level of reliability and stop the failure from occurring in the first place. IT departments are quick to compare industrial solutions financially with commercial alternatives, yet they often lack a sufficient understanding of the bigger picture and of what the consequences of saving on hardware today could be.

An analogy is the difference between a traditional desktop deployment and a server. Downtime can be acceptable for a desktop, but not for a server, so IT departments pay a premium for servers to be more robust. They may also invest in an uninterruptible power supply (UPS) or complete backup systems to take over if the server does go down.

One day this will all change and lessons will be learned. In the interim, if you find that the IT department is the primary decision maker in your client’s project, go back to square one: don’t just describe the USPs of your specific solution, but sell the advantages of embedded/industrial computing itself.