New Technology Together With Old Thinking Equates to Random Outcomes

It seems whenever a new method or new technology makes its way into the IT world, it takes a while for people to come to grips with it. People are quick to embrace the externalities but often slower to shift their thinking.

When relational database technology was new, indexed files were the norm. For several years, people designed their databases as if they were indexed file systems. Every table was laid out like a file with the primary key as the file key. It took some time for people to start to think in relational terms; to stop fighting referential-integrity (RI) constraints and start using them.

When iterative development methods were new, linear methods were the norm. For several years, people ran their iterations as if they were mini-waterfalls, complete with specialized functional silos and formal hand-offs. It took some time for people to get the hang of incremental delivery through iterative development; to deliver something early rather than nothing for a long time.

When HTTP was introduced and people perceived the potential of the Web as an application platform, stateful client-server CRUD applications were common. For several years (well, even today), people designed their Web applications as stateful client-server CRUD apps, with most of the functionality stuffed into POST requests. It’s only relatively recently that people have begun to take advantage of the Web’s natural RESTful architecture.
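As a toy illustration of that contrast (the request shapes and helper below are invented for this sketch, not any real framework's API), compare tunneling every operation through POST with letting HTTP's own verbs and URIs carry the meaning:

```python
# Pre-REST style: one endpoint, the real operation hidden inside the body.
rpc_request = {
    "method": "POST",
    "path": "/api",
    "body": {"action": "deleteOrder", "id": 42},
}

# RESTful style: the verb and the resource URI say what happens.
rest_request = {"method": "DELETE", "path": "/orders/42"}

def is_safe(request):
    """Safe methods (GET, HEAD) can be cached or retried by intermediaries."""
    return request["method"] in ("GET", "HEAD")

print(is_safe({"method": "GET", "path": "/orders/42"}))  # True
print(is_safe(rpc_request))  # False -- every POST looks the same to a proxy
```

The point of the sketch: when everything is a POST, caches, proxies, and other intermediaries can infer nothing from the request; with RESTful verbs, the method alone tells them what is safe.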

Today, people are becoming aware of devops, continuous delivery, and cloud-as-data-center. A couple of concepts that are fairly new have begun to catch on; namely, cloud-first development and dynamic infrastructure management.

And people are generally following the same pattern of adoption: They’re embracing the externalities without adjusting their thinking to align with the nature and capabilities of the new concepts and technologies.

Hence, we see eyebrow-raising questions online, such as:

Do containers cause reliability problems?

Is the use of immutable servers creating security risks?

Our eyebrows go up because such questions do not seem to be based on a clear understanding of the subject. Our eyebrows relax once we realize that people are asking these questions from the perspective of the pre-cloud, pre-devops world. Questions of the same kind were asked by people who designed CRUD apps inside the HTTP POST method, who treated iterative development as a series of mini-waterfalls, and who designed relational databases as indexed file systems. It will pass.

It turns out the person who asked the question about containers was freshly frustrated by a negative experience with his first attempt to use containers. Based on a single experience with Docker, still high on the novice learning curve, he was ready to dismiss the idea of containers categorically. There are ways to make containers work reliably, and there are other ways to deploy to the cloud besides using containers.

Maybe a little more practice would be useful, to learn the “gotchas” and workarounds of the tools. You know, build up some chops. It’s as if he concluded brass instruments were categorically useless because he, personally, had been unable to produce a high C on the trumpet after his first lesson.

It turns out the person asking about security risks associated with immutable servers was assuming the immutable server instances would run for weeks or months. Traditionally, it was seen as desirable to have a server instance stay up and running indefinitely, once you worked out how to get it to run at all (using manual methods to configure it). Sysadmins used to be rewarded at performance review time based on the up-time statistics of the servers they supported.

This person had not made the mental leap to a world in which server instances are destroyed and re-created many times daily using automated scripts, on an infrastructure that maintains seamless logging and database/file support across instance creations so that applications appear to be always-on. Instead of a long-running OS instance watching applications come and go all day, applications treat an OS instance as just-another-resource.
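A toy simulation of that mental model (all names here are illustrative; no real cloud API is involved) might look like this: instances are disposable copies of an image definition, while state lives outside them and survives every re-creation.

```python
import uuid

class Instance:
    """A server instance built from an immutable image definition."""
    def __init__(self, image):
        self.id = uuid.uuid4().hex[:8]           # every instance is a fresh resource
        self.packages = list(image["packages"])  # copied from source, never patched in place

DATABASE = {"orders": 3}  # state lives OUTSIDE the instance, so it survives re-creation

def redeploy(image, old_instance=None):
    """Destroy the old instance (if any) and build a new one from source."""
    new = Instance(image)  # re-created from the image definition
    del old_instance       # the old instance simply goes away
    return new

image = {"packages": ["nginx", "app-1.4.2"]}
server = redeploy(image)
for _ in range(3):         # many times daily, via automation
    server = redeploy(image, server)

print(DATABASE["orders"])  # prints 3 -- external state is untouched
```

The design choice the sketch captures: the application's durable state was never on the instance in the first place, so destroying and re-creating instances is routine rather than an outage.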

Security? Well, an instance that stays alive for one hour and then goes away won’t offer a hacker much time to install malware or explore the system. And when the server is re-created from source, the malware won’t be there. With an immutable server strategy, there’s no need for anyone (or any software) to know the root passwords of servers, because they won’t need to log into them. The only open inbound ports will be connected to the application. Hopefully, the application doesn’t run as root. (If it does, you have more serious issues.)
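The security reasoning above can be sketched in a few lines (again purely illustrative; this models no real system): any runtime modification to an instance disappears at the next rebuild, because every instance is an exact copy of the source image.

```python
IMAGE = {"files": ["app.bin", "config.yml"]}  # the source of truth

def build_instance():
    """Every instance starts as an exact copy of the image."""
    return {"files": list(IMAGE["files"])}

server = build_instance()
server["files"].append("malware.sh")    # attacker tampers with the running instance
assert "malware.sh" in server["files"]  # the tampering exists, for now

server = build_instance()               # routine re-creation from source
print("malware.sh" in server["files"])  # prints False -- the tampering is gone
```

Note that the image itself is never modified at runtime; an attacker would have to compromise the build pipeline, not a running server, to persist.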

To take advantage of this (or any) technology, it’s necessary to align our thinking with the way the technology works.

Of course, people don’t remember their adoption of new technologies in the way I’m describing it. They’ve adjusted their memories so that they always knew what they were doing. I certainly did; at least, that’s how I remember it.

And when the next new thing comes along, the next generation of IT professionals will go through the same learning process before they adjust their memories. It’s as the Fargo character V. M. Varga said: The past is unpredictable; the future is certain.

Dave Nicolette has been an IT professional since 1977. He has served in a variety of technical and managerial roles. He has worked mainly as a consultant since 1984, keeping one foot in the technical camp and one in the management camp.