Werner Dr., Walter

Werner Mgt. Services, CEO

Biography

Dr. Walter Werner is head of Werner Management Services, a consultancy in the field of lighting and the Internet of Things. He was Head of System Architecture at the Austrian lighting enterprise Zumtobel Group from 2011 until 2014. From 2009 to 2011 he worked as an innovation consultant and, in parallel, taught at the Institution for Higher Education in Rankweil, Austria. From 2006 to 2008 he was Managing Director of mivune, a Swiss software startup based in Zurich. He was employed at Moeller in Germany as Technical Manager Switchgear from 2004 to 2006, and before that shaped the smart lighting agenda of Zumtobel from 1985 to 2004. Dr. Werner holds a PhD in Experimental Physics from Innsbruck University.

In classic controls environments, the system standards (like DALI, KNX, BACnet, LON, and many others, naming only the less proprietary ones) cover almost every aspect, from the electrical properties of the signal through the interpretation rules and response times of specific bit patterns, to ensure interoperability and common functionality. The upside of strict and comprehensive standards is, or should be, well-working systems: multiple suppliers adhering to the standard can supply their devices into a mixed system that works well as a whole.

The obvious downside is the huge effort it takes to keep the promise of real interoperability: rigorous and exhaustive testing on one end, and ongoing maintenance of the standard itself on the other. Neither effort is ever complete, so the promise of full interoperability is never kept completely. The more hidden downside of adhering to such strict standards is slow innovation and missing vendor competition. The missing vendor competition on system features is a direct consequence of the standardization process: only standardized features may be used when adhering to the standard! The slow uptake of innovation, on the other hand, is caused by the complexity of the negotiation and decision process, especially for features that only the innovators care about. These processes dominated, for example, the DALI and BACnet standardization efforts and the resulting standards. Both work well, of course, but from an innovator's perspective neither is helpful at all. Now, with IT methods gaining momentum in local controls (under the name "IoT"), things may start to change and be arranged differently.

I have to say that, surprisingly, the IT world is organized completely differently from the controls world. In IT there is no problem with competing devices, including competing features, on the same network! Multiple computer vendors with multiple operating systems and multiple innovation speeds (including the entire mobile world) coexist on the same network, and they work well together. AND: there is no central lab that fully tests all devices and software in all combinations and strictly controls an "internet compatible" logo. Had we waited for that, we would not have Google, Facebook, or Amazon. Never ever. But how does IT provide interoperability? Or, in simpler words: why does it work in the end? Who watches over the strict implementation of the standards?

The main ingredients of IT interoperability are strict layering and 1:1 relations. Strict layering is organized along the ISO/OSI layering model: each layer has its own set of standards and reference documents. The independence of the layers proved to be one of the main drivers of innovation, as layers can evolve at different speeds, and a mix of different layer qualities may be used in a given system. (You can easily mix 10/100/1000 Mbit LAN segments without jeopardizing overall functionality.) The simplicity of layering and mixing layer technologies also rests on a strict 1:1 relationship for communication: a LAN connector of one device connects to exactly one other LAN connector. Remember the old days, when fax machines started up with a beep, changed to a different kind of hum, and finally produced some random noise before going quiet again? What happened there is typical for IT: the two endpoints negotiated the connection parameters.
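The kind of 1:1 negotiation described above (a fax handshake, or Ethernet auto-negotiation between a device and a switch port) can be sketched in a few lines. This is a minimal, hypothetical illustration, not any real protocol: each endpoint advertises the link speeds it supports, and the pair settles on the best common one.

```python
# Hypothetical sketch of 1:1 capability negotiation, in the spirit of
# Ethernet auto-negotiation: each endpoint advertises what it supports,
# and the pair agrees on the best mode they share. Names and values
# here are illustrative only.

def negotiate(local_caps: set, remote_caps: set) -> int:
    """Pick the fastest link speed (in Mbit/s) both endpoints support."""
    common = local_caps & remote_caps
    if not common:
        raise ConnectionError("no common link speed")
    return max(common)

# A 100 Mbit device plugged into a gigabit switch port:
speed = negotiate({10, 100}, {10, 100, 1000})
print(speed)  # 100 -- the link runs at the best rate both ends share
```

Note that nothing outside the pair is involved: the rest of the network never needs to know which speed this particular link negotiated.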

Behind the scenes, IT has built in a lot of adaptability that (nowadays literally) "negotiates" how the two ends work together, and this principle is used throughout the layers. The transceivers of a switch adapt to the capabilities of the transceivers on the other end of the LAN cable, web servers adapt to the browser and the display properties on the other end of the connection, and so on. This adaptability allows a relatively loose standardization environment: it is always a 1:1 relation that needs to work, so "pair compatibility" is what is requested, tested, and advertised. The kind of "system compatibility" demanded in the legacy controls world is neither needed nor provided. Switching from strict system compatibility to simple pair compatibility is, however, not quite sufficient for lighting. Some lighting communication needs low latency or high bandwidth efficiency, and low latency is often realized using group communication. When talking to a group, a 1:n relation is created, and more than pair compatibility is needed. Group communication is also used to achieve high bandwidth efficiency (needed in some RF environments), and again group compatibility is required to allow this optimization.
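The web-server example above is a case of per-pair adaptation: the server shapes each response to what that one client advertised, much like HTTP content negotiation via the Accept header. The sketch below is a simplified, hypothetical version of that idea (the format list and function name are invented); the point is that only the pair must agree, with no system-wide format standard.

```python
# Hypothetical sketch of "pair compatibility" through adaptation:
# the server adapts each 1:1 connection to the peer's advertised
# capabilities (a simplified HTTP-style Accept header). Different
# clients can get different encodings from the same server.

SERVER_FORMATS = ["application/json", "text/html"]  # what we can emit

def pick_format(accept_header: str) -> str:
    """Return the first format the client lists that we also support."""
    client_prefs = [f.strip() for f in accept_header.split(",")]
    for fmt in client_prefs:
        if fmt in SERVER_FORMATS:
            return fmt
    return SERVER_FORMATS[0]  # no overlap: fall back to our default

print(pick_format("text/html, application/xml"))  # text/html
print(pick_format("application/xml"))             # application/json
```

Each connection negotiates its own outcome; two clients with different capabilities coexist on the same network without either one constraining the other.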

Summarizing: an IT-based lighting controls standard needs to standardize only the basic group communication required for latency and bandwidth-efficiency reasons. All 1:1 messaging in lighting controls needs no complicated standardization; a "pair compatibility statement" is sufficient. Example: a "Group_on" command needs to be standardized, as all participants of the group must interpret the same signal identically. But a "set_scene_value" command needs no standard: it is sufficient for a device to be pair compatible with the controller or commissioning tool that wants to write the scene value, and the tool may use a different coding for the next device. This way, multi-compatible tools and multi-compatible devices interact easily in a common system WITHOUT having to support one specific comprehensive standard. Only for low-latency and bandwidth-efficient group communication does a small standard need to be supported on top of pair compatibility. Whether future devices will support a multitude of pair protocols (like LWM2M, OIC, etc.), or the more central control servers will simply support many pair protocols, or both will happen together, is not yet clear, but none of these creates a problem. The only requirement is that a basic group protocol is shared in systems that need low-latency or bandwidth-efficient group communication.
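The split argued for above can be made concrete in a small sketch. Everything here is invented for illustration (the opcode, the class names, and both vendors' scene APIs are assumptions, not any real protocol): the group command is the one thing all devices must decode identically, while each vendor's 1:1 scene-writing interface only needs to be understood by the tool that talks to it.

```python
# Hypothetical sketch of the proposed split: one standardized group
# command ("Group_on") that every group member interprets identically,
# versus vendor-specific 1:1 commands ("set scene value") where the
# commissioning tool only needs pair compatibility with each device.

GROUP_ON = b"\x01"  # the single standardized group opcode (invented)

class VendorALuminaire:
    def __init__(self):
        self.on = False
        self.scenes = {}
    def on_group_message(self, msg: bytes):
        if msg == GROUP_ON:            # standardized: same for everyone
            self.on = True
    def set_scene(self, scene: int, level: int):
        self.scenes[scene] = level     # vendor A's own 1:1 coding

class VendorBLuminaire:
    def __init__(self):
        self.on = False
        self.scene_table = []
    def on_group_message(self, msg: bytes):
        if msg == GROUP_ON:            # identical group interpretation
            self.on = True
    def write_scene(self, payload: dict):
        self.scene_table.append(payload)  # vendor B codes it differently

# The tool is pair compatible with each vendor separately (1:1):
a, b = VendorALuminaire(), VendorBLuminaire()
a.set_scene(1, 80)
b.write_scene({"scene": 1, "level": 80})

# The group command is shared: one broadcast, identical meaning (1:n):
for device in (a, b):
    device.on_group_message(GROUP_ON)
print(a.on, b.on)  # True True
```

The tool carries two codecs but the system needs only one shared group protocol, which is exactly the "small standard on top of pair compatibility" the text describes.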