Commentary from a long-term NetWorker consultant and Backup Theorist

This blog has moved!

This blog has now moved to nsrd.info/blog. Please jump across to the new site for the latest articles (and all old archived articles).

Enterprise Systems Backup and Recovery

If you find this blog interesting, and either have an interest in or work in data protection/backup and recovery environments, you should check out my book, Enterprise Systems Backup and Recovery: A Corporate Insurance Policy. Designed for system administrators and managers alike, it focuses on the features, policies, procedures and human element of ensuring that your company has a suitable and working backup system, rather than just a bunch of copies made by unrelated software, hardware and processes.

I’d like to take a moment to wish all the regular readers – and any new visitors – a very safe and happy holiday season. There are too many of you to send cards to (even if I knew all your contact details), so I thought I’d give season’s greetings in a suitably NetWorker way:

At University, I had a fascinating lecturer. His typical mode of dress was a t-shirt and stubbies, and he went barefoot around the campus. He had a great big bushy beard that barrelled along in front of him and at times looked like a mane. He had a reputation for reciting the entirety of The Ballad of Eskimo Nell (a rather ribald poem – I’m not providing a link) – though by the time I was at University, he could only ever be encouraged to let fly with a single verse.

None of this though made him fascinating.

What made him fascinating was his name. For your reference, his full name is:

Simon

That’s right, Simon. Just a first name, no last name. You see, at some point in the past Simon had decided to legally remove his surname. So he literally did not have a last name.

Simon was a walking case study in the implications of unexpected input in computer programmes – almost exclusively due to his name. (This led me to have some joy in pointing out this XKCD cartoon to him a couple of years ago.) Every year, the people who made the phone book struggled to work out where to put him. He confounded registration systems everywhere, and turned compulsory fields on forms to rubbish. Simon was a walking lesson in designing interfaces to handle unexpected inputs.

Not long after I finished University, I decided to change my name. Not anything so drastic as a removal of my surname; in fact, it was to add to my surname. You see, when my family emigrated to Australia several generations ago, they changed their surname from “de Guise” to just “Guise” so they could more easily assimilate. (So the story goes.)

Not being all that interested in blending in, and having an appreciation of the long term history of the name “de Guise”, I decided to reinstate it. (Some might question why I didn’t remove my middle name or at least change it from “Macdonald” – but that’s another story, to be told another time.)

It was at that point that I started to get an appreciation of the daily struggle Simon must have had in dealing with systems that were not adequately designed to work with non-conformist input.

I’ve learned over the years, therefore, that there are far too many programmers with names like:

Mary Jones

Bob Smith

David Peterson

Jane Davidson

And far too few programmers with names like:

Simon

Preston de Guise

Carlos de la Cruz

Peter O’Toole

(I had already learned, by the way, that there were far too few companies that simultaneously employed a McDonald, Macdonald and MacDonald.)

So here’s my pet peeve in interface design, stated as examples:

I am not Preston De Guise

I am not Preston De guise

I am not Preston de

I am not Preston De

I am not Preston Deguise

I am not Preston DeGuise

I am not De, Preston Guise

I am not even, any longer, just Preston Guise (and I certainly don’t have a middle name of de).

There are too many lazy and/or inconsiderate programmers out there. (There are too many lazy and/or inconsiderate data entry operators as well.*)

If you’re a programmer, and want to get onto my good side in 2010, make sure your system gets my name right.
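To make the peeve concrete, here’s a minimal sketch of the sort of over-eager name “validation” that causes all this grief. (The regex and function below are my own invention for illustration – they’re not from any real registration system.)

```python
import re

# A deliberately naive pattern of the kind too many forms use:
# exactly two words, each one capital letter followed by lowercase.
NAIVE_NAME = re.compile(r"^[A-Z][a-z]+ [A-Z][a-z]+$")

def naive_validate(name):
    """Return True only for 'Firstname Lastname' shaped names."""
    return bool(NAIVE_NAME.match(name))

for name in ["Bob Smith", "Simon", "Preston de Guise",
             "Carlos de la Cruz", "Peter O'Toole"]:
    print(name, "->", "accepted" if naive_validate(name) else "rejected")
```

Only Bob Smith survives; every other (entirely legitimate) name in that list gets rejected. The fix isn’t a cleverer regex – it’s treating a name as an opaque string and validating as little as possible.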

—
* In the past I have been guilty of name mutilation myself. In my last job I set up an account for someone, mistaking the first word of her surname for a middle name, and egregiously never got around to correcting it. It is something I genuinely regret.

There’s still been a lot of chatter about FAST, EMC’s new system for automatically moving LUNs between different tiers of storage. I clearly need to read up more about FAST, because so far I don’t see it so much as FAST but as 2nd gear. Sure, being able to automatically move whole LUNs around is nice, but I thought the magic was in sub-LUN migration when I saw a Compellent demo of it years ago. Clearly I’m missing something, because a lot of people have been getting very excited. Then again, my focus in storage has been protecting it rather than optimising primary access, so it’s likely I’ll get more interested as I read more about it.

Over at Search Storage, there was an interesting article about data reduction vs data deduplication and compression. This prompted me to pull my finger out, and so sometime soon I’ll have another blog article myself here called All this Deduplication is Making me Thirsty. I’ll leave it as an exercise to the reader what I’m talking about (or a surprise for the article.)

Moving on, I found this excellent article by Drew Robb over at ServerWatch called Tape vs Disk: Tape Refuses to be Evicted. Regular readers of my blog will know that my opinion on the “tape is dead” pronouncements we get every few months is to blow a big fat raspberry and move on. Drew’s article was particularly useful, in that he dug into the IDC studies on tape sales. While a lot of the media is enjoying running around saying that IDC shows tape sales are declining, they’re not telling the whole truth, as Drew points out. In actual fact, it’s the low-end tape sales that are falling off quite a bit, while the enterprise stuff is still running along quite nicely. This is completely understandable – I’ve seen a lot of small businesses that used to rely on cheap and cheerful tapes like DAT transition to more reliable and longer-lasting media, but I don’t see a lot of enterprises killing tape. (What’s the old saying from that old ad – linear serpentine, good for data; helical scan, good for parties … I think the party is fizzling out, but the data is still going strong…)

Over at Grumpy Storage, Ian penned a fantastic article called Show me the Money. I think this should be mandatory reading for every sales person and consultant in the tech industry – certainly in previous companies I’ve dealt with sales people who have been shot down and lost deals for failing to follow these rules that should be self evident.

On the lighter side, someone (apologies, I can’t remember who) twittered a link to Death By Powerpoint. This should be mandatory watching for everyone in business, full stop.

On a slightly different note, our Australian government has decided it’s going to attempt to introduce national mandatory net censorship laws next year. I would say what I think about such draconian subjects except it would undoubtedly be retroactively censored next year and this article deleted. I’d hate to upset my article count, so suffice to say, in a more polite way, that I hope they take notice of online polls that show upwards of 95% of people against the idea.

Over at Daring Fireball (yes, I know, not a storage blog), John Gruber has a rather excellent analysis of the garbage that’s been coming out of AT&T lately. Apparently they only want customers to buy, not use bandwidth. Silly, selfish users who expect to be able to buy and then use their services are apparently to blame for profit obsessed companies that have no interest in upgrading their infrastructure to meet needs.

Next to last, as a bit of self-advertising, I took time out from this blog to write up some thoughts on all those marketing slides that encourage people to do TCO calculations to compare pushing infrastructure out of the data centre and into the Public Cloud, and suggested that an alternate calculation needs to follow, one that I call Total Cost of Impact.

One final comment: please go and see Avatar. If you don’t, you’ll be missing out on the greatest blockbuster of all time (so far). If you’re a Star Wars fan, you doubly need to see it, so you can understand what a good movie looks like.

Regular visitors may note that there’s a new addition to the pages on this blog – one covering Support and Services.

I run this blog in my own time (probably using up a little too much of my own time to be quite truthful) and ask for no payment or reimbursement from my readers – well, other than an occasional pitch for people to buy my book, that is.

My day job, however, is a consultancy and support role at IDATA Resolutions, and the Support and Services page outlines some of the key things IDATA could do for you, if you happen to be looking for service, support, consulting or training for your environment.

If you’re looking for an independent review of your environment, or considering support options, looking at a new solution or needing some training (whether that’s one-on-one, customised or general), I’d invite you to check out the Support and Services page above to see what IDATA can do for you.

In the land of Dilbert, I’d probably be obligated to wear suspenders and have my socks pulled up past my knees, but ultimately I think I’m becoming an old Unix hack. Why?

Not because of my disdain for Windows. (Though that probably helps.)

Not because of my passion for Linux. (I have little in that regard.)

Not because of my rigid adherence to a particular Unix platform. (Used to be Solaris, now Mac OS X.)

Because of my ongoing use of vi.

I’ve been using Mac OS X now since 2005. The date is fairly well fixed in my head simply because it happened about a month after 10.4 (Tiger) was released. It’s also fixed in my head since I’ve never been as productive on a computer as I am on a Mac.

The Mac has changed a lot of my workflows, but the one thing it hasn’t changed is the absolutely automatic way I lunge for vi whenever I need to edit text, source code, etc. Now, I’ll admit I have the absolutely fantastic BBEdit program from Bare Bones Software. I even use it a lot of the time for in-depth coding across a lot of files. I’d certainly recommend that anyone doing lots of software development on the Mac outside of Xcode buy a license.

But it’s never what I open first when I need to edit a file. There’s something so spartan and uncomplicated about vi. (Which incidentally is probably why emacs just never appealed. It was never spartan or uncomplicated – at least in my opinion.)

I know it’s arcane. The idea of an editor mode and a control mode freaks a lot of people out. The use of freaky control commands that make WordStar look like the Paragon of User Interface Design takes a lot of getting used to. Yet, whenever I’m in Word, or OpenOffice*, or even BBEdit, I still find myself automatically trying to type in vi search and replace commands. (Hint to any Bare Bones product manager who stumbles across this. Please please, pretty please, can we get a “vi” mode in BBEdit?)

To me, and I know a lot of Mac users out there will probably have a conniption in response to what I’m going to say: vi is a lot like Mac OS X. It’s like a butler. It doesn’t jump up and down and pester you every 5 minutes (like Windows) about what you want to do, or that you’ve got an icon not being used on your desktop, or that a new network was found, or any other garbage like that. It just hangs back, lets you work, and jumps to your assistance when you want it.

Call me an old Unix hack if you want, but I can’t go a day without vi. Being able to do things such as the following:

(esc) :.,$s/^/insert into blah(x) values('/

(esc) :.,$s/$/');/

is for some reason vitally important to my ability to work productively. Heck, I even use vi in NetWorker, thanks to default editor settings and nsradmin's response to the keyword 'edit' on Unix platforms.
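For anyone who doesn’t read vi substitutions, those two commands simply wrap every line from the cursor to the end of the file in an SQL INSERT statement. A rough Python equivalent would be the sketch below (the table blah and column x are just the placeholder names from the vi example):

```python
def lines_to_inserts(text):
    """Mirror the two vi substitutions: prepend the INSERT prefix
    to each line, and append the closing quote and parenthesis."""
    return "\n".join(
        "insert into blah(x) values('%s');" % line
        for line in text.splitlines()
    )

print(lines_to_inserts("apple\nbanana"))
# insert into blah(x) values('apple');
# insert into blah(x) values('banana');
```

Which, of course, requires a script and an interpreter – whereas in vi it’s two lines typed without ever leaving the editor.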

I think every technical person who works on heterogeneous systems should learn vi. It’s pretty much the one interactive editor you can guarantee being available on every Unix system. (Discounting 'ed', and disrespecting emacs ;-) ) I can also guarantee that anyone who has used vi for more than 5 minutes and successfully saved a document can navigate around the user interface behaviours of the Windows default editor, 'notepad', or the Mac OS X default editor, 'Text Edit'. The same isn’t true in reverse, and I find that a lot of, say, Windows admins who start doing bits and pieces of work on Unix systems are usually hampered by the entire vi experience. vi, it seems, is sufficiently foreign to people who grew up in GUI-only environments that it taints the entire Unix interactivity process. However, being an old Unix hack, I don’t think this is vi's fault. Indeed, I’d suggest that anyone who can’t type “vi quick reference card” into Google and then use the results productively is doing themselves a disservice.

If you’re a Windows admin and you’ve just assumed I’m having a dig at you for not knowing vi, I’m not. As with knowing a cross-platform scripting language (e.g., Perl), I merely suggest that administrators in heterogeneous environments will enjoy their job more, and can do their job more easily, if they know vi.

Oh, and as a final point, can someone please explain why almost everyone else on the planet except me seems to save and quit in vi either through multiple actions or through more obscure commands (e.g., esc :wq), rather than just:

(esc) :x

—
* And if someone could explain the arrogance of having OpenOffice on the Mac take over all possible document types whenever it is first run, I’ll be very interested in rebutting your arguments.

We’re now rapidly heading towards 2010. The world did not collapse at the start of the year 2000, thanks in no small part to the efforts of developers and system administrators across the globe in mitigating Y2K risks.

So where was I, 10 years ago?

Well, I was still working for the most part as a system/backup administrator for a large resources company. I was neck deep in Y2K mitigation projects, notably:

Major efforts on a Tru64 environment where the core application could not be upgraded to a supported Y2K compliant version, so the surrounding OS and underlying database had to be upgraded instead to mitigate the risk.

Ensuring all my NetWorker servers were running 5.1 so that I could rest easy, knowing that anything that might fail could still be recovered.

The first project was the most frustrating for me. Not because of the work, but because of the “technical project manager” assigned to it. I knew I was in for a long hard haul the first time I had a conversation with the TPM and it boiled down to a 1 hour discussion where I kept on trying to explain why you had to add disks to a machine if you needed additional storage. From that point on my colleagues always knew when I was on the phone to that TPM due to the look of exasperated frustration I would wear throughout the conversation. It was even more maddening that the TPM had a laptop so old and clunky that he could run MS Project or Outlook, but never both at once. That would mean significant delays in responses to emails…

The funny thing is – when I reflect back on that project these days, I realise that it helped to turn me into a consultant. The standard engineer/sysadmin approach to such challenging people usually doesn’t work, so you have to learn to be a consultant to actually make any headway at all. So thank you, challenging TPM. 10 years on and I don’t find myself silently screaming when I remember that project – instead I’m grateful that I was assigned such a complex project where self-management became very important almost immediately out of the gates; it taught me that I was interested in far more than regular system administration. Instead of just being part of a team that did managed services and consulting, it made me want to actually be a consultant myself.

(As to Y2K itself, I spent around 8pm through to around 1am for the crossover at my desk, waiting for the world to fall down around our ears if we’d got it wrong. In a beautiful case of irony, the only major system that fell over during the Y2K transition was the Microsoft Access database designed by some pseudo-admin in another division of the company at the last minute to record Y2K failures…)

For those of you interested in setting this up for testing purposes, I’d also recommend reading the follow-up article I wrote this month, “NetWorker and LinuxVTL, redux”, which details recent advances Mark Harvey made in the code to allow NetWorker to use multiple virtual tape drives in the VTL. This makes LinuxVTL very capable as a supplement to a test or lab environment.

(As an aside, if you haven’t yet visited my new blog, I am the Anti-Cloud, you may want to flag it for reading. At Anti-Cloud, my goal is to point out the inadequacies of current attitudes by Public Cloud providers towards their customers, deflate some of the ridiculous hype that has grown out of Cloud Buzzword levels, and point out that not all of the revolutionary features are all that new, or revolutionary.)

In my spare time I tend to either write, watch a bit of TV or whip out my camera and take some photos. When I bought my 50D, we were lucky enough to also snag a nice 50mm macro lens, which does beautiful depth of field as well as actual macro photography.

My cats are particularly enjoying the 50D – while it’s not as good as my partner’s 5D, it does have one advantage over older cameras – the ability to take decent shots in lower light situations without needing a flash. Since the cats are often the subject of my photography, they’re at least starting to put up with a camera that doesn’t half blind them every time the shutter goes.

Even a very average amateur photographer like myself can manage to get great shots with a great camera, thus proving that if you want to take good photos, you should step up to a digital SLR – even on full program mode, it’ll outstrip the best of the “prosumer” point-and-shoot devices.

I’m particularly pleased with this morning’s efforts:

In case you’re not familiar, that’s a Burmese. They’re identifiable almost instantly by their yellow to golden eyes. They’re amongst the most intelligent cats you can get whilst simultaneously being the most loving, frequently putting dogs to shame. And yes, they fetch too.

My partner for the last 13 years is a graphic designer (and an excellent photographer to boot). As you may imagine, over the years I’ve watched him use a succession of Adobe products. I even periodically got cast-offs myself – e.g., floating around on my system is a first-generation copy of InDesign in which I worked on my book for a while.

One thing I’ve noticed over the years is that it doesn’t matter how fast his systems are, the one thing that will generally slow them down is Adobe products. The only time this isn’t the case is when he has 10GB of RAM or more.

But this isn’t just limited to the “pro” Adobe products. So when John Nack over at Adobe had the gall to say that Adobe are “sensitive to bloat”, I couldn’t believe that anyone from Adobe could say that with a straight face.

For goodness sakes, even Acrobat Reader, the absolute (should-be-minimalist) workhorse of Adobe, is bloated and slow. It’s practically zombie-esque. You launch it and it staggers out of its directory/crypt, shakes off the spider webs, looks around for some fresh brains to snack on, begrudgingly brings up a window, freezes for a little while in case there’s a zombie hunter around, then eventually opens the requested document.

Then there’s Flash. That travesty of an animation product that’s actually electronic tar. You know what I do when I see a vendor using Flash for something I need to do? I clear my calendar. I quit every app I have to give the memory leaking sucker enough room to work for a while before crashing, then I sit and wait for it to do its best at trashing my systems. Honestly, the best speed up I got with web browsing was when I installed ClickToFlash for Safari. It makes web browsing a dream, and means I can actually go to the Sydney Morning Herald without 6-10 flash apps starting every time I go to the front page*. It’s amazing the number of sites you can go to and have a smooth clean web browsing experience when you’ve got Flash turned off.

There’s no excuse for such tired and frumpy software. Anyone with a Mac for instance will tell you how quickly Apple’s Preview launches and starts displaying PDFs. On the rare instances where you have to, for some esoteric compatibility reason, open Acrobat Reader instead, well, you want to go fix yourself a coffee while you wait for it to load.

Honestly, Adobe should maybe spend a year or two demanding that developers stop adding code and functionality, and instead learn some lessons and start subtracting code.

—

* SMH would undoubtedly argue that it’s a “Mac problem”. Honestly, much as I love that paper, it’s so anti-Apple that it’s a wonder someone hasn’t founded a Journalistic Bias Awards just to give them the inaugural golden bong to celebrate whatever crap they smoke before they write stories about Apple. They make that insidious piece of online garbage, The Inquirer, look fair and balanced a lot of the time!

As an employee of an EMC partner, I periodically get access to nifty demos as VMs. Unfortunately these are usually heavily geared towards running within a VMware hosted environment, and rarely if ever port across to Parallels.

While this wasn’t previously an issue having an ESX server in my lab, I’ve slowly become less tolerant of noisy computers and so it’s been less desirable to have on – part of the reason why I went out and bought a Mac Pro. (Honestly, PC server manufacturers just don’t even try to make their systems quiet. How Dull.)

With the recent upgrade to Parallels v5 being a mixed bag (much better performance, but Coherence broken for 3+ weeks whenever multiple monitors are attached), on Thursday I decided I’d had enough and felt it was time to at least start trying VMware Fusion. As I only have one VM on my MacBook Pro, as opposed to 34 on my Mac Pro, I felt that testing Fusion out on the MacBook Pro to start with would be a good idea.

[Edit 2009-12-08 – Parallels tech support came through, the solution is to decrease the amount of VRAM available to a virtual machine. Having more than 64MB of VRAM assigned in v5 currently prevents Parallels from entering Coherence mode.]

Areas where Fusion beats Parallels:

VMware’s Unity feature actually merges Coherence and Crystal without needing to just drop all barriers between the VM and the host.

VMware Fusion will happily install ESX as a guest machine.

(For the above reason, I suspect – though I’ve not yet had time to test – that I’ll be able to install all the other cool demos I’ve got sitting on a spare drive.)

VMware’s Unity feature extends across multiple monitors in a way that doesn’t suck. Coherence, when it extends across multiple monitors, extends the Windows Task Bar across multiple monitors in the same position. This means that it can run across the middle of the secondary monitor, depending on how your monitors are laid out. (Maybe Coherence in v5 works better … oops, no, wait, it doesn’t work at all for multiple monitors, so I can’t even begin to think that.)

Areas where Parallels kicks Fusion’s Butt:

Even under Parallels Desktop v4, Coherence mode was significantly faster than Unity. I’m talking seamless window movement in Coherence, with noticeable ghosting in Unity. It’s distracting and I can live with it, but it’s pretty shoddy.

For standard Linux and Windows guests, I’ve imported at least 30 different machines from VMware ESX and VMware Server hosted environments into Parallels Desktop. Not once did I have a problem with “standard” machines. I tried to use VMware’s import utility this morning on both a Windows 2003 guest and a Linux guest and both were completely unusable. The Windows 2003 guest went through a non-stop boot cycle where after 5 seconds or so of booting it would reset. The Linux guest wouldn’t even get past the LILO prompt. Bad VMware, very Bad.

When creating pre-allocated disks, Parallels is at least twice as fast as Fusion. Creating a pre-allocated 60GB disk this morning took almost an hour. That’s someone’s idea of a bad joke. Testing creating a few other drives all exhibited similarly terrible performance.

Areas where Desktop Virtualisation continues to suck, no matter what product you use:

Why do I have to buy a server class virtualisation product to simulate turning the monitor off and putting the keyboard away? That’s not minimising the window, it’s called closing the window, and I should be able to do that regardless of what virtualisation software I’m running.

Why does the default for new drives remain splitting them into 2GB chunks? Honestly, I have no sympathy for anyone still running an OS old enough that it can’t (as the virtual machine host) support files bigger than 2GB. At least give me a preference to turn the damn behaviour off.

I’ll be continuing to trial Fusion for the next few weeks before I decide whether I want to transition my Mac Pro from Parallels Desktop to Fusion. The big factor will be whether I think the advantages of running more interesting operating systems (e.g., ESX) within the virtualisation system is worth the potential hassle of having to recreate all my VMs, given how terribly VMware’s Fusion import routine works…
