10 old-school technology strategies that CIOs should not forget

Technology races ahead with new methods and capabilities, but that doesn't mean that CIOs should forget some of the management strategies that have always worked.


A host of new and reformed practices have IT departments reinventing themselves: collaborative software development; rapid application prototyping and placement in production; new project meeting methodologies; the growth of BYOD, which encourages democratic device usage in the field; and more. Nevertheless, the fundamental requirements for quality systems that work right the first time are not going away. The rudiments of IT asset protection, disaster recovery, and business continuation also remain. Consequently, many tried and proven "old school" IT practices still make valuable companion strategies for emerging IT trends. Here are ten "old school" technology strategies that CIOs should not forget:

1. Project management by walking around

IT is a project-driven discipline. However, no matter how collaborative and informational your project management software is, it can never replace simply walking around to see how staff members feel about the projects they are working on. Body language and face-to-face communication will tell you much more about the health of a project than any software can. The technique worked thirty years ago, and it still works today.

2. Data retention and access meetings

A myriad of rules can be built into automated systems that patrol for security clearances to applications, or that automate data backup and purge operations. But none of this means anything in the context of enterprise data governance if business units aren't on board with it. Data retention meetings can be long and arduous, because everyone these days is mindful of proliferating regulations. Understandably, users are hesitant to get rid of data. They are also cautious about who gets access to sensitive data within their own work groups. Discussions and decisions about data retention and access are still best facilitated in old-fashioned, face-to-face meetings because of the complexity of the issues that can arise. A system portal with fill-in parameters for data retention and security can never do the process justice.

3. Tape and slow, but cheap, hard disks for backup and archiving

We’ve been hearing about the impending demise of tape backups for decades, but tape is still here and companies are continuing to invest in it. Slow but cheap hard disks likewise remain a mainstay of data backup and archiving, while faster flash storage (and in-memory storage) serves rapidly accessed data. It is doubtful that older disk and tape storage will be replaced anytime soon in the province of data backup and archiving because of their dependability, economy, and the number of backup systems and procedures that enterprises have built around them through the years.

4. Life cycle “spend downs” of old servers

Workstations of power users can be redeployed as they age to average or light users, and aging “workhorse” servers in IT production can be redeployed for testing applications or even for use as network proxy servers. The object is getting every ounce of capability out of IT assets. In the “old days,” this meant “spending down” resources even after their depreciation cycles were met. The practice still works.

5. Respect for the traditional software development life cycle

It’s not uncommon for some companies today to design applications on the fly, briefly test-drive them, and drop them into production. In these cases, users and IT know that apps won’t work perfectly—but they concede that it’s better to be fast and agile than to drag out software development and deployment. Especially in Web app environments, this can work to competitive advantage. However, for mission-critical applications that must work right every time and also comply with industry regulations and security standards, software has to be of very high quality. Accordingly, it is important to cycle this software through requirements definition, application design and development, quality assurance and deployment to production. These steps are codified in traditional software development methodologies that have been in place for over thirty years. With so much at stake, the checkpoints for quality that are inherent in these traditional methodologies shouldn’t be overlooked.

6. Application stress testing

The U.S. HealthCare.gov website is the latest example of a software application that didn't work because it was never adequately stress-tested. When you are under the gun, it's easy to skip important steps in the quality assurance process, such as ensuring that your application can handle the maximum number of users or transactions that could ever arrive at one time. Today as in the past, there are proven, automated test tools that simulate maximum application stress loads. This QA checkpoint should never be skipped.
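The core idea behind those tools can be illustrated with a minimal load-generation sketch. Everything here is illustrative: `handle_request` is a hypothetical stand-in for the application under test (in practice you would point a dedicated tool, or a real HTTP client, at a staging server), and `peak_users` models the maximum concurrent load you expect.

```python
import concurrent.futures
import time

def handle_request(txn_id: int) -> float:
    """Hypothetical stand-in for the application under test.
    Replace with a real call against a staging environment."""
    start = time.perf_counter()
    time.sleep(0.001)  # simulated service time per transaction
    return time.perf_counter() - start

def stress_test(peak_users: int) -> dict:
    """Fire `peak_users` concurrent requests and summarize latency --
    the same pattern automated load-test tools carry out at scale."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=peak_users) as pool:
        latencies = list(pool.map(handle_request, range(peak_users)))
    return {
        "requests": len(latencies),
        "max_latency_s": max(latencies),
        "avg_latency_s": sum(latencies) / len(latencies),
    }

report = stress_test(peak_users=50)
print(report["requests"])  # 50 simulated concurrent requests completed
```

The point of the exercise is the report, not the tool: QA sign-off should require evidence that latency stays acceptable at the projected peak, not just at typical load.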

7. Change management and version control

Documenting changes to systems and applications, and ensuring that software on distributed workstations and mobile devices is synchronized to the latest release levels, continue to be weak spots in IT—despite the fact that change management and version tracking software has been around for years. The problem is not a lack of tools to do the job, but loose enforcement of the IT practices and policies that ensure change management and version control are always done. As part of the process, application developers should be given guidelines on how to document the software they develop, and documentation review should be part of QA checkout. Too often, software documentation is skipped in the rush to get software out quickly. This places a great burden on the software maintenance staff, which must deal with software that is almost "black box" in nature at the same time that they are trying to troubleshoot a production problem.
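The enforcement side of version control amounts to regularly comparing what is deployed against what the release manifest says should be deployed. A minimal sketch of that drift check follows; the application names and version numbers are hypothetical, and a real version-tracking suite would gather the deployed versions automatically from each workstation or device.

```python
# Hypothetical release manifest: the version each app *should* be running.
EXPECTED = {"order-entry": "4.2.1", "reporting": "2.0.0"}

def out_of_sync(deployed: dict) -> list:
    """Return (app, deployed_version, expected_version) for every
    application whose installed version differs from the manifest --
    the kind of drift report version-tracking tools produce."""
    drift = []
    for app, expected in EXPECTED.items():
        actual = deployed.get(app, "missing")
        if actual != expected:
            drift.append((app, actual, expected))
    return drift

print(out_of_sync({"order-entry": "4.2.1", "reporting": "1.9.3"}))
# [('reporting', '1.9.3', '2.0.0')]
```

An empty drift report becomes an auditable artifact: running the check on a schedule, and treating any non-empty result as an incident, is what turns a version-control tool into an enforced policy.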

8. IT asset and mobile device tracking

There is software on the market that tracks IT assets, including "moving" assets such as mobile devices in the field. What's more challenging for IT is crafting user policies for company-owned devices: what (if anything) should be stored on them, who (if anyone) makes upgrades to them, and who may use them. Ten years ago, it was relatively straightforward to enact these policies—but with BYOD (bring your own device) and changing attitudes about personal use of devices, IT needs to revisit (or in some cases, enact) policies that will meet corporate security and regulatory standards. Older policy statements can be helpful in new policy formulation.

9. In-person system and application walk-throughs

Despite breakthroughs in collaboration software, nothing is better for a technical design review of a complex system or application than getting every expert into the room for a detailed walk-through of the system. Having the DBA, the network specialist, the application developer, the business analyst, and the system specialist together in a live, interactive session is still the best way to flush out hidden problems in technical design that individual or virtual design reviews miss.

10. Manual procedures

Ten years ago, banks still had "old hands" on board who remembered how to use a paper ledger to record bank transactions when the core banking system went down. This need hasn't changed. Although we now have many automated failover systems and methodologies, organizations are also more dependent on IT than they were a decade ago, so a major Internet or technology outage carries greater consequences. This is why individual business units, including IT, should be encouraged to maintain a set of manual procedures for operation. Hopefully, these guidelines will just gather dust in desk drawers—but if you ever have a total outage, you will appreciate how valuable it is to have "old fashioned" methods of doing business—and employees who are trained to operate "by hand" if they have to.

About Mary Shacklett

Mary E. Shacklett is president of Transworld Data, a technology research and market development firm. Prior to founding the company, Mary was Senior Vice President of Marketing and Technology at TCCU, Inc., a financial services firm; Vice President of Product Research and Software Development for Summit Information Systems, a computer software company; and Vice President of Strategic Planning and Technology at FSI International, a multinational manufacturing company in the semiconductor industry. Mary is a keynote speaker and has more than 1,000 articles, research studies, and technology publications in print.

Item 10 is important for continuing critical business functions in the worst of scenarios. Would you stop selling and repairing your products or taking orders? How would that work with your customers? Would hospitals stop medicating their patients? Airlines ground their flights? Delivery vans sit idle?

If I need a repair from one of your dealers, or a meal at one of your restaurants, why do I need to wait for your computers to work? If a customer can't buy from me, some will buy someplace else. Maybe that's not a big deal for a restaurant, but what if I'm selling cars, air conditioners, or houses? Big-ticket sales like those will be lost forever!

Maybe the continuation solution is paper, or maybe it's local systems with store and forward. It can be just about anything that keeps your critical business activities running, temporarily.

Obviously you can't redesign a 767 without full computer capability, but if an airline that spends hundreds of billions with you needs a repair part... are you going to tell them to ground the aircraft for a few more days because your data center flooded from a burst chiller?

Disaster planning, especially for alternative critical process continuation, is certainly important. Today you can put a million-part database copy on a portable hard drive! Innovative ways exist to continue most critical business activity, but planning will certainly be required up front.

I'm an old school guy and we had IT groups dedicated to disaster planning not just to recover operations as quickly as possible, but to assure critical business processes could continue with little delay.

As for number 10 - many, and perhaps most, companies do not have a possible manual process that they can use to keep operating when the systems are down.

If your company has over a million part numbers, a call center person cannot take an order (assuming that their phone is working).

If you are shipping something that has already been ordered, connectivity to the shipping system is necessary to print a packing list and shipping label. You also need to scan the packing list to confirm that it has shipped. To do otherwise could leave product in a limbo state that you could not bill for.