Yet another tech blog…

Monthly Archives: October 2013

Bishop Fox (formerly Stach & Liu), a very well-respected ICT security firm, has raised some issues with LinkedIn's latest app offering, Intro. There appears to be a real risk in using this app, as it routes all email on your iOS devices through LinkedIn's servers.

If Bishop Fox's findings are correct, the implications for any data-reliant business could be quite severe. Even if LinkedIn is only looking at the metadata, it is still a gross invasion of privacy, and in terms of regulatory compliance it could well spell trouble too.

This is why, when looking at new business practices like "bring your own device", there are inherent risks the business needs to be aware of. In this case it isn't even a BYOD issue; it is one of trust, possibly hugely violated by LinkedIn. When you look at implementing new technologies, there are good reasons to test the technology, understand it in some depth, and understand its implications and the security surrounding it.

Just sitting back and hitting next on the installer without some research is a path to pain.

And a possible violation of your organisation's ICT policies.

I would also suggest looking into suitable data/email encryption technologies as a standard business practice.

For me it's all about the availability of data based on the needs of the business.

If you are offering ICT services (internal or external) you will always encounter (or rather, you should encounter!) the words Service Level Agreement (SLA).

SLAs are all about setting a level of expectation between a service provider and a customer. Yes, your service desk, your engineers and your developers are all service providers to the customer (the business). All work towards the delivery of a solution within a certain time frame, and if you go outside that time frame there are usually penalties of some form.

What does this have to do with the single most important KPI (key performance indicator)? Well, from my perspective, data essentially is the business. Sure, you can manufacture something and sell it, but to build something you need the data first, be it the plans, the code running the robots, or the work instructions for those on the assembly line. Data drives everything in a business.

What happens if that data is not available? What is the impact on the business? Certainly some data is more critical to a business than other data, but all data serves some role in the business, yes, even the list of customers' birthdays. So when designing services, the primary question should focus on how the business views its data and how it uses it. That knowledge gives you an idea of what kind of availability should be built around your ICT services.
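One practical way to see what an SLA availability target really commits you to is to convert the percentage into allowed downtime. The sketch below does that arithmetic; the targets and the monthly period are illustrative figures, not taken from any particular SLA.

```python
# Convert an SLA availability target into allowed downtime per period.
# The 730-hour period approximates one month; targets are illustrative.

def allowed_downtime_minutes(availability_pct, period_hours=730.0):
    """Minutes of downtime permitted per period for a given availability %."""
    return period_hours * 60 * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99):
    print(f"{target}%: {allowed_downtime_minutes(target):.1f} minutes/month")
```

Seeing that 99.9% allows roughly 44 minutes a month, while 99.99% allows under five, makes the conversation with the business about what its data is worth much more concrete.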

I'm pretty much a support guy. I say this with the knowledge that something I build will either require maintenance to continue operating effectively or will break. So I like to ensure that what I build is supportable. I take great pride in working with people who are also able to fix gnarly, hideous problems. There is nothing worse for a client than losing their production systems, at which point their business starts to lose money. Huge headaches for them. One of the reasons proper fixers get paid well is that we tackle the problem with a number of tools and, most importantly, our own knowledge and skills, gained over many years of dealing with the unexpected.

What about the home user? I certainly do not expect home users to have the same use of, and view on, computing resources as an enterprise does. Nor do I expect a home user to take on people like me or my colleagues to fix their home PC. However, I do expect that when they have a problem they can turn to someone who is professional, capable and not a rip-off artist.

However, when a company contacts a home user "proactively" to report an issue, that raises my ire. Most of us in ICT are aware of this scam, where a company cold-calls a telephone number and scams the person on the other end by creating concerns about their home computer. Until recently the targets have usually been people with Windows machines. Well, now we can add Apple Macs to the list. The scammers are broadening their reach. This is nothing more than outright fraud.

Note: this is not pro- or anti-Apple; it is anti-scammer, and I hate scammers as much as spammers.

We all know about "Patch Tuesday", Microsoft's release day for patches to its operating systems, .NET and other software. We all have plans around this activity, and many of us have processes set up to facilitate these updates. Well, OK, most of us do.

However, your ICT estate does not comprise Microsoft products alone. You will have other kit with software running on it. You might well have Linux servers, as well as switches and routers from a variety of vendors.

Do you know if these are up to date?

I recall a conversation about patching with a group of data centre techies and managers. We addressed the Microsoft side quite easily and, for the most part, with little headache. However, after an audit, it was apparent that the Linux servers had not been patched in years, and by years I mean more than six or seven. Mainly because it was felt they didn't need patching when they were rolled out, Linux being considered more secure.

However, patching is not only about killing exploits; it is also about improving the efficiency of the software and hardware. These servers were running flavours of Linux that were very much out of date and in some cases carried known vulnerabilities, and all were behind on the latest revisions of firmware, drivers and OS.

Then we have the other parts of the estate: switches, routers, firewalls, load balancers, storage devices. Certainly all of these can be compromised and end up embarrassing the IT department. As IT professionals we certainly don't want that, right?

So how do you weigh the risks? There are costs involved, obviously, as well as a possible impact on production. I recall one organisation running an update on their backup software that resulted in a pretty major catastrophe by taking the storage array offline.

Keeping up to date with your vendors' updates is vital, as is understanding what an update will do, both during the update and afterwards. Will it break applications (with storage, for example)? Will the device reboot and leave your network unprotected (a firewall)? Hopefully the answer is no. However, if there is one thing you should do, it is to test updates before releasing them, if at all possible. Unless it is a hugely critical security update (and even then I seriously suggest testing before release, with a backout plan to hand in case it goes wrong), you can take the time to do this.

If you have to wait for a suitable window in which to do an update, that is also an ideal time to do testing. Testing tells you whether the update is stable, gives you an opportunity to learn about any changes, and should certainly tell you whether to expect any major issues.
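The "test before release" discipline above can be thought of as a series of rollout rings: an update only moves towards production once it has passed in each earlier ring. The sketch below models that gate; the ring names and the pass/fail results are placeholders for whatever your own environment and health checks provide.

```python
# A staged-rollout gate: deploy ring by ring, stopping on any failure.
# Ring names and result values are illustrative placeholders.

ROLLOUT_RINGS = ["lab", "pilot", "production"]

def next_ring(update_results):
    """Return the next ring to deploy to, or None if a ring failed
    or the update has already reached every ring."""
    for ring in ROLLOUT_RINGS:
        result = update_results.get(ring)
        if result is None:
            return ring   # not yet deployed here: this is the next step
        if result != "passed":
            return None   # failed an earlier ring: stop and invoke the backout plan
    return None           # rolled out everywhere

print(next_ring({"lab": "passed"}))   # → pilot
print(next_ring({"lab": "failed"}))   # → None
```

The point is not the code itself but the discipline it encodes: production is never the first place an update runs, and a failure anywhere earlier halts the rollout.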

Sure, patching is a pain and no one really likes doing it because of the pain it can cause, but as indicated above there are steps that help reduce or limit that pain. However, you need to weigh up the pros and cons as a business.

During a meeting last week I recalled an incident from early in my career, something that should never have happened.

I had occasion to visit our company data centre, newly built and opened, for a meeting (my life really is that exciting!). As I had not seen the data centre, I was invited over, with the carrot being a tour of the machine room. Being an inveterate lover of data centres and technology (yes, buttons and flashing lights; not my fault, I grew up in the 70s on a diet of 1960s sci-fi!), I could not refuse. This facility was not what one would call small or cheap. It had all the latest kit, from servers, switches and storage to the best in fire suppression, HVAC (heating, ventilation, air conditioning) and power. Fantastic structured cabling, managed server racks. Enough good stuff to make you want to move in, although the chillers would have been a bit problematic, given that I rather like being warm.

So, while waiting at the entrance to the actual machine room, looking through a very large reinforced plate-glass window and admiring the view, I noticed some movement out of the corner of my eye. As one does, waiting at the entrance of a sealed, access-controlled facility. At first I thought my eyes were acting up, or that I had a particularly bad case of floaters. So I did what every sane person would do: rubbed my eyes and tried to find whatever it was I had seen.

After a while I managed to track down the thing that had caught my attention. It was a flurry of white and grey and, I suppose, a bit of pink. Yes. It was a pigeon. In the data centre. Being rather taken aback, I rubbed my eyes again and, yes, it was a pigeon. Flying. Loose. In the data centre.

Around the corner from where I was standing was the office of the centre director. I popped my head in and asked if they had a minute, and any experience in animal control, being the humorous kind of guy that I am.

The sight of several engineers, managers and a director chasing down a pigeon, which turned out to be one of a breeding pair, made my trip well worthwhile. The downside was that the tour was cancelled, which was thoroughly understandable given the circumstances. It's not often one sees a pigeon in a data centre.

Eventually we discovered what had happened. It seems that during the site survey of the original building, some ducting holes running from the room to the outside had not been filled in. The building had originally been a secure telco exchange, so assumptions were made.

Chronic electrical surges at the massive new data-storage facility central to the National Security Agency’s spying operation have destroyed hundreds of thousands of dollars worth of machinery and delayed the center’s opening for a year, according to project documents and current and former officials.

There have been 10 meltdowns in the past 13 months that have prevented the NSA from using computers at its new Utah data-storage center, slated to be the spy agency’s largest, according to project documents reviewed by The Wall Street Journal.

To my mind, if you are spending millions or billions you should be following some kind of methodology, rather than plugging everything in and hoping it all works. In my first example, assuming that a building is secure because of its past life is not a good thing. You need to approach the survey (in this case) with a clean slate and probe every aspect of the facility.

In the case of the NSA power issues, I find it hard to believe that the power is so flaky as to cause actual damage and what seems to be a huge risk to life and limb. A unique facility, with perhaps unique power needs, cannot be approached in a business-as-usual manner; there has to be a methodology to follow. My first questions in any kind of root cause analysis would be: was the system's uniqueness recognised, to what level had the vendors been involved in the design, and, most importantly, had any testing been done?

Of course, each outage (or pigeon) we experience is not a time for finger-pointing, but rather a time to learn from the issue and ensure such events do not happen again.

(Some circumstances regarding the pigeon incident have been changed to protect the innocent, and the pigeon, but the salient points are true: there was a pigeon, it was a new DC, it was expensive, the building was considered secure, and yes, the director did chase the bird.)

Not really a massive (!) surprise, but still very well deserved and a vindication of theoretical physics, science and so-called "Big Science" experiments.

The official citation for Englert and Higgs reads:

“For the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider”.