Thursday, December 17, 2009

Last spring I directed a bunch of excitement this way about the first trailer for Where The Wild Things Are, based on the book of the same name by Maurice Sendak. I finally got to see the movie last night, just barely squeaking in before it left the main theatres in the city.

In addition to the excitement, I approached the whole thing with a little bit of fear, too. Wild Things is an iconic book from my childhood, and I had to wonder how it was going to be possible to fill a couple of hours from a 350-word story and still be true to the original. Oh, and to do it without looking distractingly silly with people in Wild Thing suits.

Happily, not only was the movie not a disappointment, it completely outstripped all my hopes. Jonze and Eggers managed to add depth and detail to Max's life without breaking the feel of the original story. They painted a picture of a wildly struggling young boy with all the fear and loneliness of a difficult life at home and the pure happiness and joy of his refuge in the simplicity of childhood games and his own imagination.

Max Records was a brilliant pick for Max. He played the role honestly and perfectly, and incidentally, wins my award for best actor's name ever.

Tuesday, September 29, 2009

As of the end of last week, I was all set to put together a new post tonight about the latest load of BS from Nominum, but now that will have to wait a few days. There's something much more pressing (and probably more relevant to the five or six people reading this on the day I post it) that I think I should talk about. This morning, the new issue of a weekly podcast that I listen to was posted.

Jesse Brown is the host of Search Engine, a weekly technology news podcast that I have been listening to for about six months, every week, no exceptions (well... actually... excepting weeks when there's no show). Jesse – I hope he doesn't mind if I call him Jesse – seems to me to concentrate on the social aspects and effects of technology (and he may have even stated this himself, on his show), and by and large I think he does it well. This week's issue, titled Are You Gay? (The Internet Wants To Know), was posted this morning, and the second interview (starting at about seven and a half minutes in) was about two subjects I know very well: the DNS, and CIRA. Unfortunately, what I heard on Jesse's show this morning was both shocking and disappointing. And I told him so. In retrospect I was perhaps overly harsh in some of my criticism, but only insofar as Twitter is a terrible medium for conveying nuance or detail.

My problem with the interview is that both interviewer and interviewee were wholly unprepared. I'll get to the interviewee in a moment, because what he had to say is my main source of outrage. For the next paragraph or two I'll quickly sum up what I thought was wrong with the way Jesse approached the whole thing, and then get on with the meat of the matter.

The interviewee in this case clearly came to the table with an agenda, and it's this agenda that the whole interview was really about: the desire to make sweeping changes to the status quo. Jesse not only failed to challenge the position that change is needed, he admits that he arrived at the interview without knowing anything about the subject at hand. He did no background research and didn't even look for dissenting opinions. This approach is full of fail. If the interviewer doesn't challenge the agenda, and isn't informed enough for critical analysis of the interviewee's answers, then the result is the interviewer's equivalent of publishing a press release as a news item. Of course the person with the agenda is going to present the facts in a way that supports their argument, not a balanced view. Worse, in this case the "facts" weren't even really facts for the most part. One would like to believe that a director of an organization would be able to coherently discuss what that organization does, and why it does it that way, but one might be wrong. This whole thing will cause me to reassess Search Engine as a source of information. When it covers subjects that I really know nothing about, will I be able to trust that the expert on the air is really an expert? Certainly not as I have trusted in the past.

I first became aware that Jesse was working on something like this interview last Saturday when he asked a question on Twitter that was right up my alley (I missed an earlier, more direct reference). I responded there and in email, offering to help fill in the blanks. I also happen to know that several other people made similar offers. Jesse didn't take me up on my offer, or anyone else's that I'm aware of, and unfortunately it's clear now the reason is that the interview had already been completed at that point, and was simply waiting to be aired. That is far too late to look for supporting information.

So what was this whole thing about?

Barry Shell would like you to elect him to the board of directors for the Canadian Internet Registration Authority (CIRA), the organization that manages the .ca Internet domain. This in itself is not news. What's newsworthy is why he would like you to elect him. He claims that CIRA is too big and expensive and that the organization should be run more like free online services like craigslist.org, or perhaps like the .ca domain was run back in the late '90s. The problem is that Barry's views are hopelessly naïve, are based on a simplistic understanding of what CIRA actually does and, if implemented, would not only threaten the stability of your Internet (well... the two or three Canadians reading this) but would also threaten the stability of your economy, and possibly even your life.

A bold claim, I know. I plan to back it up. But first, some background on me and where I'm coming from so that you can judge my agenda.

As anyone who has read the sidebar will know, until fairly recently I was the DNS Operations Manager for CIRA. I no longer work there, and I am not a part of, nor a candidate for the board of directors being elected right now. I am still a DNS specialist, and I work for a different (some would say competing) domain registry. So my only association with CIRA at this point is that I am a .ca domain-holder concerned about how the organization is run because it directly impacts the Internet that I use every day. Really, the only difference between me and most of the people likely reading this is that I happen to possess some very detailed information about how CIRA is run currently, about the environment in which it operates, and about the possible side effects of changing either of those things. My agenda is to try to share as much of that information as I can, and hopefully to convince you that the way in which CIRA is run is far more important than Barry Shell would have you believe.

Before I get into the finances, which believe me will be brief, I think it's important to correct the collection of misinformation, bad assumptions, and vague statements that permeated the interview, and which would lead the uninformed to incorrect assumptions.

To begin with, CIRA is not "a server." CIRA isn't even an organization that just runs "a server." CIRA is what is known as a domain name registry, but what is that? To explain, I'll step back a bit from CIRA and start with two other related groups.

First, there are the people who register .ca domains: the registrants. These registrants go to a web hosting company, or their ISP, or some other company to pay to register a new domain. This company, known as a registrar, will charge anywhere from $10 to around $50 to take care of the process, possibly also setting up some email or a web site or some other service to go along with the newly registered domain. In the background, this registrar submits the newly registered domain to CIRA, and pays them $8.50. What does CIRA do for that $8.50? Well, there are two core services that CIRA provides.

As a domain registry, CIRA is responsible for ensuring uniqueness. Just like the land registry, CIRA ensures that no two people or organizations think they've registered the right to use the same space; the difference is that CIRA deals in domains rather than plots of land. The second core service that CIRA provides is to include that registration in a global directory known as the Domain Name System, the DNS. Note that it's Domain Name System, not Domain Name Server, as Barry says. The key here is that the DNS is a large interconnected database made up of at least hundreds of thousands, more likely millions, of servers. The directory is structured like a tree, with the root branching out to the top level domains, or TLDs, like .ca, .com, .net, .org, .info, and others. The TLDs branch out to registrants' domains like cbc.ca or craigslist.org, and so on.

What this directory system does is convert those memorable host names you type into a web browser (like www.tvo.ca) or into your mail client as part of an email address, into numeric addresses and other information that the computers of the Internet actually use to talk to each other. This is no simple task, but Ben Lucier has a great little layman's explanation of how part of it works.
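As a toy sketch of that tree walk – not real DNS, and with a made-up, in-memory delegation map standing in for the actual servers – the resolution process described above looks something like this:

```python
# Toy illustration (NOT real DNS): a hypothetical in-memory "tree" showing
# how a lookup descends from the root, to a TLD like .ca, to the
# registrant's domain. The names and address below are invented examples.
DNS_TREE = {
    ".": {"ca": "ca-tld-servers"},           # root delegates .ca downward
    "ca": {"tvo.ca": "tvo-nameservers"},     # .ca delegates tvo.ca downward
    "tvo.ca": {"www.tvo.ca": "192.0.2.10"},  # the domain holds the address
}

def resolve(name):
    """Walk the delegation chain for a name like 'www.tvo.ca'."""
    labels = name.split(".")        # ['www', 'tvo', 'ca']
    zone = "."
    path = []
    # Descend one label at a time, starting from the rightmost (the TLD).
    for i in range(len(labels) - 1, -1, -1):
        child = ".".join(labels[i:])
        path.append(zone)
        if child in DNS_TREE.get(zone, {}):
            if child == name:
                return DNS_TREE[zone][child], path
            zone = child
    return None, path

addr, path = resolve("www.tvo.ca")
print(addr, path)  # the walk touches the root, then .ca, then tvo.ca
```

In the real DNS each step of that walk can be a query to a different set of servers, which is why the servers at the top of a branch – like CIRA's, for .ca – matter so much.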

CIRA's position in this directory is at the top of the .ca branch of this tree. It is responsible for making sure that any computer that looks up a .ca domain gets to the right place. Due to some shortcuts built into this system, the DNS servers at the top of the tree only see a tiny fraction of the total lookups that occur, but even that tiny fraction means that the servers responsible for the .ca domain answer about 13,500 of these lookups every second.

Every. Second.

Now, that's actually pretty easy work for a bunch of DNS servers, but the statistic starts to underline the importance of CIRA's position in making sure that all of those .ca domains continue to function. And that number doubles approximately every 18 months. When you take into account that most Internet businesses keep equipment in service for three to five years, that means that equipment CIRA is putting into service today to handle 13,500 DNS lookups every second must be able to handle over 100,000 per second by the time it is replaced. It's important that this DNS service that CIRA provides never be unavailable, or those lookups go unanswered.
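That doubling arithmetic is easy to check. Using the figures from above (13,500 queries per second today, doubling every 18 months, over a three-to-five-year equipment lifespan):

```python
# Back-of-the-envelope check of the growth claim: query load doubling
# roughly every 18 months, projected over the life of the equipment.
current_qps = 13_500        # .ca lookups per second today (from the post)
doubling_months = 18

def projected_qps(months_out):
    return current_qps * 2 ** (months_out / doubling_months)

for years in (3, 4, 5):
    print(f"{years} years: ~{projected_qps(years * 12):,.0f} queries/sec")
# After 5 years the projection lands well over 100,000 queries per second.
```

So hardware commissioned today really does need to be sized for roughly ten times its day-one load by the end of its service life.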

But what happens if CIRA's DNS servers are unable to answer those queries for a few seconds... or a few minutes, or hours, or days? Does it really matter that much?

Perhaps not, if you're just talking about someone's personal web site, as Barry seems to mostly be concerned with, or a blog, or the place you download a weekly podcast. And in 1998, before CIRA existed, when the .ca domain was run by a bunch of volunteers led by John Demco (not just the two or three people Barry says it was), perhaps it wouldn't have been important if these sites failed to work for some period of time. But of course, we don't live in 1998 anymore. And the model of running a domain registry with a handful of volunteers and some donated servers was replaced with CIRA precisely because the old way of doing things was no longer working.

Today these uses of the Internet are all important, and they're part of what has made the Internet such a fundamental part of our daily lives. But, one must also remember that because the Internet has become a fundamental part of our daily lives, it has also become a major engine in the world economy. According to a study commissioned by the Interactive Advertising Bureau, the Internet is responsible for $300 billion in economic activity in the US every year. I'm certainly no economist, but if one were to very simplistically scale that back to the size of the Canadian economy, that would mean the Internet injects $27 billion into our economy every year. That's not chump change. If the Internet is unreliable, what happens to that money?

But let's move beyond corporate uses of the 'net and the economy. What's this poppycock about a broken Internet threatening my life?

Well, the Internet is now a part of daily business. What most people forget is that doing business online doesn't just mean shopping for gifts or playing online games; most organizations now use the Internet for internal communication. Organizations like online stores and social media companies, sure, but also our governments, critical infrastructure like our water, power and gas distribution... and our emergency services.

You may hear claims from your ISP that they guarantee "five nines" of availability. It's a fairly common service guarantee on the Internet, and it means that they are up and running 99.999% of the time. Put another way, it means that they permit themselves about five minutes of down time per year. Domain registries like CIRA don't do "five nines". They can't afford five minutes of outage every year. The DNS at that level must be a 100% uptime proposition, or Bad Things happen.
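The "five nines" figure above is simple arithmetic to verify:

```python
# The "five nines" arithmetic: how much downtime per year does 99.999%
# availability actually allow?
availability = 0.99999
minutes_per_year = 365.25 * 24 * 60     # ~525,960 minutes in a year

downtime_minutes = (1 - availability) * minutes_per_year
print(f"Permitted downtime: ~{downtime_minutes:.2f} minutes/year")
# ~5.26 minutes per year: "about five minutes", as claimed.
```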

When this is taken into account, a pretty high level of redundancy to ensure availability seems warranted. Not only does CIRA need to ensure that there's enough capacity to handle all of the DNS queries their servers receive, without the servers becoming overloaded, they must also ensure that servers can be taken offline for regular maintenance, and that unexpected failures like power loss, crashed computers, network failures at an ISP, or other breakages don't take down the whole system. And that doesn't even address the threat of deliberate vandalism.

You may have heard of a style of attack against Internet services known as "denial of service", or DoS. One form of this type of attack involves sending extremely large volumes of requests to a service in order to tie it up, and reduce the resources it has available for legitimate requests. It's becoming increasingly popular to direct these attacks at DNS services. Today, these attacks are carried out using what's known as a "botnet": tens of thousands to hundreds of thousands of computers on the Internet that have been taken over, often for illegal purposes. Remember how I mentioned that CIRA's servers have to be ready to handle 13,500 queries per second today, and over 100,000 in five years? Well, as it turns out, it's quite simple for a small botnet to dwarf those numbers.
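To see just how easily a botnet dwarfs those numbers, here's a rough illustration. The botnet size and per-machine query rate below are assumptions picked for the example, not figures from the interview or from CIRA:

```python
# Hedged illustration: how a modest botnet swamps normal query load.
# The botnet size and per-bot query rate are assumptions for illustration.
normal_qps = 13_500       # typical .ca query load (from the post)
bots = 20_000             # a relatively small botnet (assumed)
queries_per_bot = 50      # DNS queries/sec one hijacked PC might send (assumed)

attack_qps = bots * queries_per_bot
print(f"Attack load: {attack_qps:,} queries/sec "
      f"({attack_qps / normal_qps:.0f}x normal)")
```

Even with those conservative assumptions, the attack traffic is many tens of times the everyday load, which is why registries overprovision so heavily.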

If CIRA simply built to the expected normal load, and added a bit to handle broken servers, they would still be vulnerable to being taken out by a bored high school student. And that is nothing compared to the resources available to organized crime, or to nation states. If you think nations attacking each other over the Internet sounds like a bad spy flick, then get out your popcorn, because it's already happening. In order to prepare for these potential attacks, some registries build out their DNS infrastructure to support well over 100 times the expected load.

Given all of this, the money Barry Shell seems to think that CIRA is wasting seems pretty well spent, to me. And I've really only scratched the surface of one part of CIRA's budget, which is available online as part of the annual report, by the way. The side of the registry which is responsible for actually taking registrations may not be quite as essential a service as the DNS is, but many Canadian businesses, the registrars I mentioned earlier, depend on that registry being available to take registrations or they start to lose money, so those systems still need to be well built, if to a different level of tolerance for failure or attack. Then there's CIRA's customer service department, programmers to write the software, systems people like me to make it run, the back-office functions required by any business, like finance and administration staff... it adds up pretty quickly. CIRA actually operates pretty modestly compared to most domain registries.

Now, getting to those finances...

It's been suggested that CIRA should reduce the wholesale cost of registering a domain from $8.50 per year to something smaller. However, it's been demonstrated in past reductions in price that wholesale price reductions don't get passed on to the general public as you might expect. $10 or $15 a year to register a domain isn't really an onerous sum for the average Internet user who wants their own domain, and a few cents' to a dollar's reduction in that cost really doesn't benefit the average Canadian all that much. The only people it does benefit are organizations that buy domains in very large numbers, like the domain registrars that sell to the general public, and another class of Internet user known as a "domainer". Domainers are those people who own literally thousands to millions of domains each, and frequently use them to put up those web pages that are nothing but advertising, hoping that when you mistype amazon.com and accidentally go to azamon.com you'll click on one of their ads and make them a few cents. CIRA takes its stewardship of the .ca domain – a national public resource – very seriously, and has no interest in supporting the interests of domainers over the interests of average Canadians who may want to register those domains that are just being used for ads.

It's true that most years CIRA operates with a budget surplus, in recent years as much as $2M. Where does that money go? Not-for-profit organizations are required by law to not have a profit. It's in the name. If a not-for-profit organization does find itself actually making profit, then the Canada Revenue Agency steps in for its cut. There are some pretty specific rules about when a not-for-profit is permitted to have a surplus, and what it can do with those surplus funds. What CIRA has done with its surpluses so far is to pay off a debt owed to UBC in exchange for all of the years UBC volunteers managed the service before CIRA existed, and to pay into a fund which is meant to support CIRA through any lean or financially disastrous years that may be yet to come. This is standard operating procedure for many companies, and is an especially important layer of insurance for an organization that operates a piece of critical national infrastructure.

Barry suggests that instead of these things, CIRA should be supporting research and other concerns of benefit to Canadians' use of the Internet, as if this is his own idea. In actual fact, CIRA staff have been lobbying the board to do just that for several years, and in the last year or two CIRA has already engaged in operational support and direct funding for several programmes, to the extent that it has been able to do that without stressing its rainy day fund or regular budget.

For having been on CIRA's board for a year, Barry shows pretty intense ignorance of CIRA's business and the environment it operates in. It's one thing for a new candidate, without any prior experience on a board, or without experience in the domain sector of the Internet industry, to arrive fresh-faced with misconceptions about what CIRA does. It is essential for anyone who wishes to serve on a board of directors to inform him or herself to the best of their ability about the business they're operating, and the industry in which it operates. For someone who has been doing the job for a year to be as uninformed as Barry Shell is requires almost willful ignorance. It's actually a shame that this interview aired when it did, because the election in which Barry is running ends tomorrow at noon, and I'd like everyone voting to listen to it; this week's Search Engine is the best argument there is not to vote for Barry Shell.

Friday, September 4, 2009

For years now I've been watching interest in words printed on paper steadily decline among many of the people that I deal with on a day to day basis. Being in high tech, and the Internet in particular, the people around me are on the leading edge of this decline. It freaks me out, partly because I can't completely comprehend it, but mostly because I think there is a lot to be lost if the same disinterest permeates average folks to the same degree.

A couple of months ago I moved from Ottawa back to Toronto. For various reasons, in this move I chose to hire a professional moving company rather than just rent a truck and move everything myself. The cost of the move has come up in conversation a few times, and since the cost was based entirely on the weight of the stuff I was moving, every conversation eventually leads to the same question: "How could you possibly have so much stuff?!" The reason for the surprise should be self-evident when you hear that I had nearly 4,500 pounds of possessions packed into a one bedroom apartment. The answer to the question lies partly in my upbringing as a pack rat, but mostly in the size of my library; nearly half of the boxes (and therefore well over half the weight) were books.

Some people react to this news in the way I originally expected, with a look that says, "oooooh, that explains it!" There are a significant number of people in my circle of friends (who are mostly geeks) and in the group of people I work with (virtually all geeks) who react in a completely different way.

"Haven't you ever heard of a PDF?" "You know about the Gutenberg Project, right?" "Why don't you just get a Kindle or something?" "Dude, sell that shit. You need to do a purge."

Every one of these people, at some point, references the same argument in some way. Sooner or later they all get around to saying that paper is obsolete, and that I should get with the times and move it all to digital formats. I can't express strongly enough how much I disagree with this view without sounding ridiculous, even to myself. My reasons are many.

On the practical side, there are all the usual arguments about the stability of the two technologies: paper doesn't crash, get corrupted, or become unreadable when the power is off. Sure, there are counter arguments to these, but none that I take very seriously. Someone once tried to counter the "books don't crash" argument by saying, "yeah, but they burn real nice." I pointed out that drive crashes that result in a total loss of all data have been far more frequent than fires that gut my apartment (so far, five to nil). Besides, any fire that's likely to take out my library is going to take out any hard drives in my computer at the same time.

I have more than purely practical reasons for preferring paper, though. There's a comfort with paper that simply hasn't been reproduced with any electronic medium so far, and I dare to predict won't be even when we have paper-thin computer displays. I mentioned some of this back in January. Electronic books don't let me flip quite as easily between pages. They don't take pencil marks in the margins all that well, and even when that's possible it's never quite as simple or convenient as with a book. They don't balance quite so comfortably over my head when I'm lying back on my couch engrossed in that pulpy novel. And browsing a list of book titles on a computer is nothing like reading the spines along a shelf.

Incidentally, I'm the same with my music. I have encoded my entire CD collection into digital formats for ease of listening, but I still have all 600 or so discs on display in shelves because, unless I'm searching for a specific song, or specific artist, it's way easier to flip through a stack of CDs and find something I want to listen to than to scan through a cold list of 7,000 individual tracks.

This sensual aspect to the printed word – the tactile experience and several thousand years of ergonomic refinement – can't be replaced by any combination of technology we have today. Books have a smell, and a weight, and a unique feel that we connect to as much as we connect to the information they contain. And anyway, let's face it: there's something awe-inspiring about the visible mass of knowledge in a library, or in the care and craft put into many books. This is something you just can't get from standing in front of a computer, no matter how many electronic books it contains.

To cement my reputation as a complete geek, I'm going to quote an old episode of Buffy The Vampire Slayer, because all the truth you need is in fiction. In the first season, the episode I Robot... You Jane introduced Jenny Calendar, the school computer science teacher. In a conversation at the end of the episode, Rupert Giles, the librarian and Buffy's handler and advisor in all things ancient and supernatural, explains to Calendar why books are so important:

"Honestly, what is it about them that bothers you so much?" Jenny asks, referring to computers.

"The smell."

"Computers don't smell, Rupert," she protests.

"I know. Smell is the most powerful trigger to the memory there is. A certain flower, or a whiff of smoke can bring up experiences long forgotten. Books smell: musty and rich. The knowledge gained from a computer is – it has no texture, no context. It's there and then it's gone. If it's to last, then the getting of knowledge should be tangible, it should be... um... smelly."

It's because of all of this that I reacted with a particularly strong and unpleasant combination of confusion, astonishment, and disgust when I heard that Cushing Academy, a prep school in Ashburnham, MA, had gotten rid of virtually its entire library, to be replaced with a coffee shop, study space, a handful of Kindles, and a subscription to an online library. Yes, you read that right: according to The Boston Globe, aside from a small collection of rare volumes, Cushing has either sold or donated its entire library to other organizations and individuals.

It's one thing for someone to convert their, relatively speaking, small personal library into electronic formats. It's quite another for a school, of all places, to eliminate all of its books and hope that an electronic equivalent will fill the void. I believe it's foolish to think it could be a substitute even in the best of circumstances, and utter folly to hope that students who are still learning to learn will have any hope of getting the same education sitting in front of a computer, with its myriad distractions in email, instant messaging, and other in-your-face social media, as they would sitting at a desk with a textbook and some note paper. And that's just the textbooks I'm thinking of. I can't help but think literature is entirely doomed among the students of this particular school.

And I know I'm not the only one who has this sort of reaction. Earlier this afternoon I was witness to a short exchange (online, no less) between the friend who pointed this story out to me, and a friend of hers.

Refasionista may have thought she was being glib, but she reinforces the point about the visceral connection people have with knowledge gained through books. This is something that just can't be replaced by any other technology we have today, and may never be replaced.

As I've thought about this more today, my disgust at James Tracy, the headmaster at Cushing, has turned slowly to fear.

I'm behind by a few years, but I've just recently finished watching The Wire, an astonishingly good HBO crime series that aired from 2002 to 2008. One of the major themes of the fifth and final season was an examination of how print news is reacting to the pressures of an increasingly digital world. The move to an online format, where news is given away for free, is setting the entire industry up for an epic fail, and I fear that a new, functional business model won't be found in time to save print news from disappearing in a puff of blogger commentary.

Distribution of the traditional printed newspaper is dropping like the proverbial stone, and online advertising based on page views and click-throughs is unpredictable, and a slim income at best. The financial foundation of the print media is a sandy beach, and the tide is coming in. And I'm part of the problem. Practically my entire generation has turned away from print media for our news. I don't have a good explanation for this, except perhaps for our desire for less time-consuming pursuits, or the simple fact that most of the print news is available online for free anyway.

If this important pillar of the fourth estate were to completely collapse, I don't see how it could ever be recovered, or how the void it would leave could ever be filled. It could spell the doom of current events knowledge among the general population. TV news doesn't have the same ability to surround a story, and examine it in any sort of depth, and bloggers by and large don't do news. To use myself as an example, other than linking to a few outside sources, I'm not reporting any facts here; this is all opinion. Somebody who links to a news story and writes a few pages about how that news affects people isn't doing news, they're doing commentary. Real news takes time and dedication. It takes full-time professionals with access to resources, a beat, contacts, and a certain set of ethics. The few bloggers out there who are trying to do news are lacking those things to varying degrees. Without some sort of in-depth reporting going on, people's knowledge of the world at large is at risk.

I fear that print news is on its way out, and I worry that it may be the toad in the environment of print media, whose death is an early warning that the books I love so much aren't long for this world.

Tuesday, August 25, 2009

It started off innocently enough: a bit of cutting edge physics here, a new idea in ergonomics there. Before I knew it, I was spending hours at a time mainlining new talks... as many as I could get my grubby little hands on.

For those who are not familiar with TED, I'll say only that it's a highly addictive source of fascinating information, compelling thought, and inspired discourse. Those already familiar with TED know what I'm talking about. For the rest of you, begin viewing it at your peril because, like Lay's, you can't stop at just one.

You can go watch Dan Pink's talk later, if you want to risk visiting the site. For now, I'll sum up by saying that most of what he talks about is extrinsic motivators ("I'll give you a bonus if you get this done faster") and their effects on tasks that require creative thought. He references several studies which state, put simply, that typical carrot-and-stick motivators work great for simple mechanical or procedural tasks that have a clear path and end result, but that for virtually everything else these sorts of motivators either do not work, or in many cases actually harm productivity. Extrinsic motivators narrow our focus, and restrict creativity, reducing productivity in areas that require us to be creative.

The solution he proposes, and again he references studies to support the idea, is to use intrinsic motivators... incentives built into the work itself. Specifically, the incentives he refers to are autonomy (our desire to direct our own lives), mastery (our desire to improve at something that matters to us) and purpose (the desire to work as part of something larger than ourselves). Autonomy is key to his talk, and he references several cases where high levels of autonomy result in even higher levels of productivity from creative workers.

My first reaction to this talk was, "okay, here we go, more studies into the obvious." Like the recent study by UK Music that states that lots of kids download music. But the more that I thought about it, the more I realized that this isn't such an obvious conclusion for most people. I happen to work in an industry where the most productive people tend to be those quirky loners, wearing all black, with an unusually high tendency toward Asperger's. There's a certain percentage of us that are treated just a bit differently, partly because we're all socially stunted to wildly varying degrees, and partly because big business depends on us so much to keep the lights on, and you don't want to upset the emotionally fragile guy in the basement who could cripple your entire business. And as we progress in our careers, and we can make more demands of our employers, that special treatment only grows.

In the last ten years, I don't think there have been more than two days in any given week that I've shown up at the office before 10am. I often work long hours, especially compared to most nine-to-fivers, but I've tended to work for employers that recognize that my job puts odd time demands on me. For example, it's not unusual for me to have to do scheduled maintenance outside of regular work hours, or to be paged in the middle of the night. And so, I've experienced an incredible degree of autonomy over the years, culminating in my current position where I can work pretty much where and when I want, as long as the work gets done. This is how I'm able to be sitting on my balcony at 4am, with a beer, writing, instead of sleeping so that I can be at work at 8:30 in the morning.

I occasionally forget that most of the workforce doesn't experience this degree of self-determination, and thus my flawed first reaction that Dan Pink is telling us things we all already know. So maybe it's not so obvious to everyone, but I think the fact that it seemed obvious to me, due to my experience, shows what truth there is to what he has to say. And anyway, as Pink says in his talk, this isn't a feeling, or philosophy, it's science.

So if science tells us that virtually the entire industrialized world is doing it wrong, why are we still doing it that way? My answer: the momentum and dogma of middle management. I've seen both at work in several managers I've watched in action.

The dogma is that employees will not work unless someone is watching them. This is perhaps true for some employees... but if you have these employees, you already have a problem. It's best to let them be lazy, and fail, so that they can be discovered and removed, rather than keep them on a tight leash to make sure they're engaged in some passable, minimum effort.

The momentum is the managers' own work styles. I've had this conflict with a couple of managers in the past: because my career is all about managing servers on networks in many locations, there is no requirement that I do my job from any one place. Some of my managers, though, have been incapable of handling a remote employee they cannot see, or speak to in person. I use the metaphor of momentum to describe this because I believe it will taper off over time as the older managers, who never got used to an online world with dozens of methods of instant communication, slowly retire. As younger workers who are used to communicating in ways that lack the additional bandwidth of face-to-face conversation take over, this will be less of an issue.

I'm quite happy that my current managers do not have either of these problems, but I have had to deal with them in the past. And most people deal with them on a day-to-day basis, though they might not realize it.

So then how do we solve the problem? Is it simply a waiting game, where we hold our breath and wait for a slow evolution in the way businesses manage their people, or is there some revolutionary step we can take to change the minds of hundreds of thousands of managers convinced that this is the right way, as well as the millions of employees sold on the idea of bonuses and stock options? I've only been thinking about this for a few hours, so I don't have a solution yet, but I'd bet that Dan Pink has some ideas. After all, he has a book on this very subject due out soon.

I, for one, will be looking forward to the rest of what he has to say.

Tuesday, June 2, 2009

I don't plan to make a habit of talking about things specific to my job here... in fact, I will almost never do that. It's just easier than always having to disclaim any relationship to the views of my employer, and so forth. However, today I can't help but toot our own horn.

A few hours ago, Public Interest Registry (PIR – the manager of the .ORG Internet domain name) announced that the .ORG zone has been secured with DNSSEC, the DNS Security Extensions. This makes ORG the largest Top Level Domain that has been signed to date, and the only open registry to implement DNSSEC (open in the sense that all of the other signed TLDs are at registries which have restricted registration policies: six national TLDs, and .GOV).

Monday, May 4, 2009

I'm not sure what it is... maybe it's the economy... maybe it's part of the overreaction to the swine flu (more on that later)... or maybe these things just come in cycles like cicada broods... but whatever the reason, zombies seem to be on the rise right now.

Just this morning I was tweeting that in 28 days it will be the six-year anniversary of my move to Ottawa. Of course, that led to an obvious joke. Almost immediately I heard back from @bestswineflu about his posted defense tips for the coming swine-flu-fed zombie apocalypse. It turns out that right about the same time, a friend of mine was announcing the grand opening of his new web site, The Daily Zombie – a news site for the informed zombie. Not to be left out of the zombie action today, @thinkgeek chose this afternoon to pass on this video showing the zombie defense training being inflicted on two Japanese kids. And that's just today.

Does the collective unconscious know something about the future that we poor individuals are missing?

Wednesday, March 25, 2009

So it's out. The first trailer for the Where The Wild Things Are movie was released this morning, and it makes the movie look like everything I hope it will be. I'm still a little afraid though... it really is an iconic book from my childhood, and I have to wonder if any movie, especially a live-action film with people in Wild Thing suits, will be able to live up.

A few friends that I've directed my excitement at have responded with weird looks and statements like, "I have no idea what you're talking about." To those people I say, "Your childhood is incomplete. Go back and get your Sendak credit."

Saturday, March 7, 2009

A couple of days ago a link started going around for Thru You, a musical project by Kutiman, an Israeli musician and producer. What he's done is take several dozen YouTube videos, chosen for their audio content, and cut and splice them together into a collection of seven brand new, original pieces of music. It's an amazing technical feat to begin with, but the music he's created is incredible in its own right.

The first and last time I saw anyone do something like this was in 1998 when Coldcut and Hexstatic got together to release the Timber EP (Timber remixes video of logging operations, and has a strong anti-clearcutting message). But even Coldcut/Hexstatic didn't quite commit to the concept the way Kutiman has; Timber contains several audio tracks that clearly aren't from the video sources, including some synthesizer sounds.

Thru You is also an incredible mix of styles: funk, dub, R&B, and even big-beat electronic. All seven tracks are excellent, and if I can get my hands on some clean mp3s they'll be going into high rotation on the home stereo. The first track, The Mother of all Funk Chords, is the best demonstration of the video remix concept, but my favourites are probably This is What it Became, an awesome dub track, Babylon Band, which is a bit like an Eastern-European Prodigy meets Nusrat Fateh Ali Khan, and Just a Lady, a nice slow R&B tune.

The Thru You web site has gone down at least once, probably due to its popularity, so Kutiman has posted some alternates on YouTube itself. The videos on the main site seem to be better quality, so it's probably best to view them there (you can also see the original video sources that way by clicking on the Credits link). But, just in case, I'm including links below to the YouTube postings.

Sunday, February 22, 2009

It's generally accepted that using any sort of stateful load-balancer in front of a set of DNS servers is a bad idea. There are several reasons for this, but my favourites are that:

it means adding an unnecessary potential point of failure

the state engines in load-balancers aren't scaled for DNS, and will be the first thing to fail under heavy load or a DoS attack

The issue with the state engine in a load-balancer is that it is scaled for average Internet traffic, which DNS is not. Each DNS connection (each flow) is typically one UDP packet each way, and well under 1KB of data. By contrast, with most other common protocols (smtp, http, etc.) each flow is hundreds to millions of packets, and probably tens, hundreds or even millions of KB of data. The key metric here is the flows:bandwidth ratio. Devices are built so that when they reach their maximum bandwidth capability, there's room in memory to track all of the flows. The problem is, they're typically scaled for average traffic. Since the flows:bandwidth ratio for DNS is so very much higher than for other types of traffic, you can expect that a load-balancer in front of busy DNS servers will exhaust its memory trying to track all the flows long before the maximum bandwidth of the device is reached. To put it another way, by putting a load-balancer scaled for 1Gb/s of traffic in front of DNS servers scaled for the same amount of traffic, you actually drastically reduce the amount of DNS traffic those servers can handle.
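A back-of-envelope calculation makes the ratio concrete. The per-flow sizes below are my own illustrative assumptions, not measurements:

```python
# Illustrative, assumed numbers: flow-table pressure on a stateful device
# at the same bandwidth, for DNS versus a typical web transfer.

link_bytes_per_sec = 125_000_000   # a 1Gb/s link, in bytes

dns_bytes_per_flow = 500           # one small UDP query + response (assumed)
http_bytes_per_flow = 500_000      # a modest web transfer (assumed)

# New flows per second needed to fill the link in each case:
dns_flows = link_bytes_per_sec // dns_bytes_per_flow      # 250,000 flows/s
http_flows = link_bytes_per_sec // http_bytes_per_flow    # 250 flows/s

# DNS generates three orders of magnitude more flow-table churn
# for the same bandwidth.
print(dns_flows // http_flows)
```

Whatever exact numbers you plug in, the state table fills up roughly a thousand times faster for DNS than for bulk traffic at the same bit rate.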

There are better ways.

ISC, the maker of BIND, has an excellent technote which describes using OSPF Equal Cost Multi-Path (ECMP) routing to distribute load between a set of DNS servers. In effect, it's a scheme for doing anycast on a LAN scale, rather than WAN. Put simply, it involves using Quagga or some other software routing daemon on each DNS server to announce a route to the DNS service address. A wrapper script around the DNS process adds a route just before the process starts, and removes it just after the process exits. The approach works quite well as long as the local router can handle OSPF ECMP, and as long as it uses a route hashing algorithm to maintain a consistent route choice for each source address without needing a state table. For example, the Cisco Express Forwarding (CEF) algorithm uses a hash of source address, destination address, and number of available routes to produce a route selection.
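The wrapper script the technote describes might be sketched like this (my own sketch with hypothetical paths and a BSD-style route command, not ISC's actual script; adapt to your OS and daemon):

```sh
#!/bin/sh
# Sketch: announce the service route only while the DNS daemon is running.

SERVICE_ADDR=192.0.2.253

# Add a host route to the service address via the loopback clone, so the
# local routing daemon (e.g. Quagga's ospfd) picks it up and announces it.
route add -host "$SERVICE_ADDR" -iface lo1

# Run the DNS daemon in the foreground; this script blocks until it exits.
/usr/local/sbin/named -f

# The daemon exited (cleanly or not): withdraw the route so OSPF stops
# offering this server as a path to the service address.
route delete -host "$SERVICE_ADDR" -iface lo1
```

The weakness mentioned below follows directly from this structure: if the daemon wedges without exiting, the script never reaches the route delete.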

The downsides to the ISC method are that there's a small amount of complexity added to the management of the DNS server itself (for example, you can no longer use the standard application start/stop mechanisms of your OS for the DNS software) and the risk that a failure may occur which causes the DNS software to stop answering queries, but not exit. If the latter occurs, the route to that server will not be removed. This is pretty safe with BIND, as it's designed to exit on any critical error, however that's not necessarily the case with all DNS server applications.

There's another method available (that I'm going to describe here) which, while being very similar to the ISC methodology, does not have these particular flaws. I should point out here that the method I'm about to describe is not my invention. It was pieced together from the ISC technote and some suggestions that came from Tony Kapella while chatting about this stuff in the hallway at a NANOG meeting a while back. After confirming how easy it is to get this method to work I've been singing its praises to anyone who will listen.

At a high level it's quite similar to the OSPF method. The DNS service address is bound to a clone of the loopback interface on each server, and ECMP routing is used, but rather than populating the routes with OSPF and running routing protocols on the DNS servers, route management is done with static routes on the local router linked to service checks which verify the functionality of the DNS service.

Setting It All Up

In this example, we'll use the RFC 3330 TEST-NET. The service address for the DNS service will be 192.0.2.253. This is the address that would be associated with a name server in a delegation for authoritative DNS service, or would be listed as the local recursive DNS server in a DHCP configuration or desktop network config. The network between the local router and the DNS servers will be numbered out of 192.0.2.0/28 (or 192.0.2.0 through 192.0.2.15). The server-facing side of the router will be 192.0.2.1, and that will be the default route for each of the DNS servers, which will be 192.0.2.10, 192.0.2.11 and 192.0.2.12. This network will be the administrative interfaces for the DNS servers.

Once the servers are reachable via their administrative addresses, make a clone of the loopback interface on all three servers. Configure the second loopback interface with the DNS service address.

On FreeBSD, the rc.conf entries for the network should look something like this:
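Something like the following, assuming a hypothetical em0 administrative interface (substitute your actual NIC name):

```
# administrative address on the physical interface
ifconfig_em0="inet 192.0.2.10 netmask 255.255.255.240"

# clone of the loopback interface, carrying the DNS service address
cloned_interfaces="lo1"
ifconfig_lo1="inet 192.0.2.253 netmask 255.255.255.255"
```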

It's a little more difficult to represent the configuration under Linux since it's spread across several config files, but the above should give you a pretty good idea of where to start.

Once the network setup is finished, configure your DNS server software to listen to both the administrative address and the service address. So, on the first DNS server, it should listen to 192.0.2.10 and 192.0.2.253.
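If the servers happen to run BIND, for instance, the relevant named.conf fragment might look like this (a sketch for illustration; other DNS software has its own equivalent):

```
options {
        // administrative address and the shared DNS service address
        listen-on { 192.0.2.10; 192.0.2.253; };
};
```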

That's all that needs to be done on the servers. Note that doing this was far simpler than configuring the servers to run OSPF and automatically add and remove routes as the DNS service is started or stopped.

The last few steps need to be taken on the local router. The first of these is to configure the router to check up on the DNS service on each of the three servers and make sure it's running; this is where Cisco's IP SLA feature comes into play. Configure three service monitors, and then set up three "tracks" which will provide the link to the service monitors.
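On the router, that might look something like the following (a sketch; exact syntax varies across IOS releases, and older trains use the `ip sla monitor` and `track N rtr N` forms instead):

```
ip sla 1
 dns www.example.ca name-server 192.0.2.10
 frequency 1
 timeout 500
ip sla schedule 1 life forever start-time now
!
ip sla 2
 dns www.example.ca name-server 192.0.2.11
 frequency 1
 timeout 500
ip sla schedule 2 life forever start-time now
!
ip sla 3
 dns www.example.ca name-server 192.0.2.12
 frequency 1
 timeout 500
ip sla schedule 3 life forever start-time now
!
track 1 ip sla 1 reachability
track 2 ip sla 2 reachability
track 3 ip sla 3 reachability
```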

This sets up three IP SLA Monitors which repeatedly query the administrative address on each server for the A record www.example.ca. The DNS server must respond with an A record for the QNAME you use; if it is unable to respond, or responds with a different record type, the monitor fails. In the example above the monitor attempts the lookup every second (frequency) and fails if it doesn't receive a valid A record within 500ms (timeout). You may need to experiment with the timeout value, depending on how responsive your DNS servers are. If you find individual servers appear to be going out of service when the daemon is still operating fine, you might have the timeout value set too low.

With the monitors in place, turn on CEF and then configure three static routes to the service address via each server's administrative address. The routes are linked to the service monitors using the track argument:
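Again as a sketch (IOS syntax varies by release), the CEF and static route configuration would look something like:

```
ip cef
!
ip route 192.0.2.253 255.255.255.255 192.0.2.10 track 1
ip route 192.0.2.253 255.255.255.255 192.0.2.11 track 2
ip route 192.0.2.253 255.255.255.255 192.0.2.12 track 3
```

When a track goes down, its route is withdrawn and traffic redistributes over the remaining servers; when the monitor succeeds again, the route returns automatically.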

And that should be it. DNS queries arriving at the external interface of the router bound for 192.0.2.253 should now be routed to one of the DNS servers behind it, with a fairly equal load distribution. Since the router is using a hashing algorithm to select routes the load distribution can't be perfect, but in practice I've found that it's incredibly even. The only likely reason to see an imbalance is if your DNS servers receive an unusually high percentage of their queries from just one or two source addresses.
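A toy simulation (my own stand-in hash, not Cisco's actual CEF algorithm) illustrates why a per-source hash gives a near-even split without any per-flow state:

```python
import random
from collections import Counter

# Toy stand-in for a CEF-style hash: the route choice is a deterministic
# function of (source, destination), so a given resolver always lands on
# the same server, and the router keeps no state table at all.
def pick_route(src: int, dst: int, n_routes: int) -> int:
    return hash((src, dst)) % n_routes

dst = 0xC00002FD  # 192.0.2.253 as a 32-bit integer
rng = random.Random(1)
sources = [rng.getrandbits(32) for _ in range(30_000)]

counts = Counter(pick_route(s, dst, 3) for s in sources)
print(sorted(counts.values()))  # three counts, each close to 10,000
```

With many distinct sources the three buckets come out within a percent or two of each other; a single source sending a large share of the queries skews it, exactly as described above.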

It's important to point out that most of the cautions in the ISC technote, particularly in reference to zone transfers and TCP DNS, apply equally here. I highly recommend reviewing the ISC document before implementing this in production.

Of course, there is still one big downside to this particular method of load balancing: it's dependent on one particular vendor. I have not yet found a way to reproduce this configuration using non-Cisco routers. If anyone is aware of a similar feature available from other major routing vendors please let me know and I'll integrate instructions for those routers here.

Tuesday, January 20, 2009

I think it's amazing the degree to which Obama has managed to inspire a new kind of patriotism among American citizens. For too long, American patriotism has been about how the US is cool just for being there. But really, what has the country done lately that Americans can be proud of? In the last few years a lot of people have been waking up to this, and attitudes are starting to change... slowly... and I think Obama will be the catalyst to cause a new attitude of doing something about it to spread like wildfire.

People who criticize artists for speaking out about issues should love this video. Personally, I love the idea that there are people willing to try to use their celebrity to educate and affect important issues. But, for those who don't like to hear what celebrities think you should do, here is what they pledge to do themselves, and a challenge to find your own.

For the other geeks out there, this seems like a great start. Are there other service projects geeks can get involved in?

Sunday, January 18, 2009

The song stuck in my head today is Tournament of Hearts by The Weakerthans (you can sample it at Amazon). Aside from it being a solid, up-beat, alt rock track, I just love the idea of using curling as the metaphor in a love song. It's pretty uniquely Canadian.

Saturday, January 17, 2009

I'm one of those people who hasn't been able to completely give up paper. Particularly when I'm dealing with reference material, I like having a book I can flip through, shove bookmarks in, or hold over my head while I lie back on the couch and read. It's hard to do any of those things with a laptop and a PDF. As a result, I've got a shelf behind me that's a veritable rainbow of O'Reilly titles and other technical references. But here's my problem... what to do with outdated editions?

I think I've got three or four copies of DNS & Bind, of varying editions, multiple versions of the bat book and some extras of C and Perl references. I'd like to try to find a way to reuse these before just dumping them in the weekly recycling. Surely someone out there might find these older editions useful for something..

Friday, January 16, 2009

In wandering the Ether this afternoon I rediscovered a friend's blog and his latest post, Cloud computing is a sea change. Mark Mayo has a lot to say about how cloud computing is going to change the career path of systems administrators in every corner of the industry, and he references Hosting Apocalypse by Trevor Orsztynowicz, another excellent article on the subject.

There's definitely a change coming. Cloud computing has the potential to put us all out of work, or at least severely hamper our employability, if we don't keep on top of the changes and keep our skills up to date... but that's been true of every shift in the industry since it came into being. Every time a new technology or shift in business practises comes along, individual sysadmins either adapt or limit their pool of potential employers. The difference with cloud computing is that it promises a lot of change all at once, where in the past we've mostly dealt with at most two or three new things to think about in a year.

I think there are some potential drags on the takeover by cloud computing that will slow the changes Mark and Trevor warn of, however.

A few times now we've been warned that the desktop computer is doomed, and that it's all going to be replaced by thin clients like the Sun Ray from Sun Microsystems, or more recently mostly-web-based Application Service Providers like Google Apps. Despite years of availability of thin clients, and cheap, easy access to more recent offerings like Google's, this hasn't happened. Individual users, and even some small organizations may have embraced web apps as free alternatives to expensive packages like Microsoft Office, but I'm not aware of any significant corporations that have gone down this road. The main reason? I think it has to do with control of data. Most companies just don't want to hand all their data over to someone else. In many cases, simple reluctance can become a statutory hurdle, particularly when you're talking about customer data and there's a national border between the user and the data store. I think this same reasoning will come into play with even stronger force when companies start considering putting their entire data centre in the hands of another company. The difference in who has access to your data between co-location and cloud computing is significant.

Additionally, I think the geography problem will keep cloud computing out of some sectors entirely. As I noted in the comments to Mark's article, the current architecture of choice for cloud computing products is the monolithic data centre. Having only a handful of large data centres around the world will keep the cloud from consuming CDNs like Akamai, and keep it entirely out of sectors where wide topographic or geographic distribution is required and a large number of small data centres are used, like root or TLD DNS infrastructures.

Mark correctly points out that the geography problem will be solved in some ways as cloud computing becomes more ubiquitous and the big players grow even larger, and in others as the cloud providers become the CDNs. But until a cloud provider can guarantee my data won't leave my local legal jurisdiction I'd never recommend the service to whoever my employer happens to be... and even once they can I'd still recommend the lawyers have a good hard look at the liability of handing over all the company's data to another party.

Mark's core point remains valid however: change is coming. Whether it's fast and sudden, or slow and gradual, sysadmins had better be prepared to learn to deal with the cloud computing APIs, and be comfortable working with virtual machines and networks, or they'll be left behind.

Tuesday, January 13, 2009

I've been thinking for some time about starting to regularly post thoughts about random things somewhere visible — blogging, if you will. This afternoon I was thinking about it somewhat more earnestly, trying to choose between several topics for a first post, when a friend pointed out this interview. Seeing as the subject (the DNS) is what I spend most of my days doing, it seemed like an excellent place to start.

First let me say that I disagree with Tom Tovar's basic position on open source DNS software. As a general class of software, there is absolutely nothing wrong with using it for mission critical applications — not even for critical infrastructure. The various arguments about "security through obscurity" vs. open analysis and vetting have been made so many times that I won't bother with the details here. Suffice to say that, all other things being equal, I'd be far more inclined to trust the security of my DNS to open source software than closed, proprietary commercial software. Not that his position is that big a surprise. After all, he is the CEO of a company that sells some extremely expensive DNS software.

Mr. Tovar's early statements about DNS security, Kaminsky, and the current concerns of the DNS industry as a whole are hard to argue with. However, as one gets further into the interview, some serious problems with what Mr. Tovar has to say crop up, including one glaring error in a basic statement of fact. I'll approach those in turn as I work my way through the article, addressing flawed points in the order they come.

I'll approach one particular point here as a pair of answers, since there's tightly related information in both.

GCN: The fix that was issued for this vulnerability has been acknowledged as a stopgap. What are its strengths and weaknesses?

TOVAR: Even to say that it is pretty good is a scary proposition. There was a Russian security researcher who published an article within 24 hours of the release of the [User Datagram Protocol] source port randomization patch that was able to crack the fix in under 10 hours using two laptops. The strength of the patch is that it adds a probabilistic hurdle to an attack. The downside is it is a probabilistic defense and therefore a patient hacker with two laptops or a determined hacker with a data center can eventually overcome that defense. The danger of referring to it as a fix is that it allows administrators and owners of major networks to have a false sense of security.

GCN: Are there other problems in DNS that are as serious as this vulnerability?

TOVAR: I think there are a lot of others that are just as bad or worse. One of the challenges is that there is no notification mechanism in most DNS solutions, no gauntlet that the attacker has to run so that the administrator can see that some malicious code or individual is trying to access the server in an inappropriate way. If UDP source port randomization were overcome and the network owner or operator were running an open-source server, there would be no way to know that was happening. This has been a wake-up call for any network that is relying on open source for this function.

Mr Tovar's last point in the first question is right on the money: the patches released for dozens of DNS implementations this summer do not constitute "a fix." Collectively they are an improvement in the use of the protocol that makes a security problem tougher for the bad guys to crack, but does not make that problem go away.

As for the rest of what he has to say here, on the surface it seems perfectly reasonable, if you make a couple of assumptions.

First, you have to assume that commercial DNS software, or at least his commercial DNS software, has some special sauce which prevents (or at least identifies) Kaminsky-style attacks against a DNS server, which cannot be available to operators of open source DNS software. It may be reasonable to take the position that it is beneficial for DNS software to notice and identify when it is under attack; however, it is not reasonable to suggest that this is the only way to detect attacks against the software. When the details of Kaminsky's exploit were eventually leaked, almost immediately tools began to spring up to help operators detect poisoning attempts, such as the Snort rule discussed in this Sourcefire Vulnerability Research Team white paper [PDF].

The second assumption you must make to consider this point reasonable is that a network operator is going to fail to notice a concerted effort to attack one or more of her DNS servers. Overcoming source port randomization, the "fix" under discussion, requires such a concerted effort — so much bandwidth consumed for so long a period — that it is generally considered unlikely that a competent network operator is going to fail to notice a cache poisoning attempt in progress when it is directed at a patched server. It's a bit like suggesting that it takes 10 hours of someone banging on your front door with a sledgehammer to break through it, and that it's unlikely you'll notice this going on.
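A rough back-of-envelope (simplified, assumed numbers; it ignores the birthday-style speedups from multiple in-flight queries) shows the scale of the hurdle the patch adds:

```python
# Before the patch, an off-path attacker only has to match the 16-bit
# DNS transaction ID. After it, the TXID *and* the randomized source port.
txid_space = 2 ** 16          # 65,536 possible transaction IDs
port_space = 2 ** 16 - 1024   # assumed usable ephemeral port range

guesses_fixed_port = txid_space
guesses_random_port = txid_space * port_space

# Roughly 64,500x more spoofed replies needed on average -- hence the
# hours of sustained, very noisy traffic described above.
print(guesses_random_port // guesses_fixed_port)
```

That multiplier is exactly why the patch is a "probabilistic hurdle" and not a fix: it buys detection time, it doesn't remove the vulnerability.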

This is the only reason informed DNS operators will tolerate anyone even coming close to implying that the patch is a "fix" of some kind. In fact, the only currently available fix for this vulnerability in the protocol is securing the protocol, which is where DNSSEC comes in.

TOVAR: The challenge of an open-source solution is that you cannot put anything other than a probabilistic defense mechanism in open source. If you put deterministic protections in, you are going to make them widely available because it is open source, so you essentially give the hacker a road map on how to obviate or avoid those layers of protection. The whole point of open source is that it is open and its code is widely available. It offers the promise of ease of deployment, but it is likely having a complex lock system on your house and then handing out the keys.

There actually is a widely available deterministic defense mechanism which is implemented in open source, as well as commercial software. It's known as DNSSEC, and it comes up later in the interview. Putting that aside though, I'd be curious to know what other deterministic defense mechanisms Tovar is implying are available in commercial software, but are not available in open source software. Obviously he doesn't say; he can't. His own point addresses why non-revealable deterministic countermeasures are insufficient for securing DNS software: should they be revealed (and let's face it, corporate espionage happens) then they are immediately invalidated.

GCN: BIND, which is the most widely used DNS server, is open source. How safe are the latest versions of it?

TOVAR: For a lot of environments, it is perfectly suitable. But in any mission-critical network in the government sector, any financial institution, anything that has the specter of identity theft or impact on national security, I think using open source is just folly.

He is, of course, welcome to this opinion. I do not share this opinion, and I would bet real money that neither do a majority of the other operators who manage critical DNS infrastructure. Refuting the basis for this opinion is what this post is all about, so I'll move on.

GCN: Why is BIND so widely used if it should not be used in critical areas?

TOVAR: The Internet is still relatively young. We don’t think poorly of open source.

The implication being... as the Internet ages and we gain more experience, we should think more poorly of open source?

In fact, many of our engineers wrote BIND versions 8 and 9, so we do believe there is a role for that software. But the proliferation of DNS has occurred in the background as the Internet has exploded. DNS commonly is thought of as just a translation protocol for browsing behavior, and that belies the complexity of the networks that DNS supports. Everything from e-mail to [voice over IP] to anything IP hits the DNS multiple times. Security applications access it, anti-spam applications access it, firewalls access it. When administrators are building out networks it is very easy to think of DNS as a background technology that you just throw in and then go on to think about the applications.

The rest of this statement is about how people deploy DNS, not about how the software is designed or works. Nothing here explains why he feels open source DNS software can't be used in mission critical situations. All he explains is how he thinks it can be deployed in sloppy ways. Commercial software can be deployed sloppily just as easily as open source software. Nothing inherent in the software prevents or aids the lazy administrator in being lazy.

GCN: Why is DNSsec not more widely deployed?

TOVAR: Not all DNS implementations support it today, and getting vendors to upgrade their systems is critical. Just signing one side of the equation, the authoritative side, is a good step, but it’s only half the battle. You need to have both sides of the DNS traffic signed. And there is no centralized authority for .com. It is a widely distributed database and you have to have every DNS server speaking the same version of DNSsec. So the obstacle to DNSsec deployment is fairly huge. It is going to take government intervention and wide-scale industry acknowledgment that this is needed.

And here, at the end, is where I think Mr Tovar shows his ignorance of DNS operations in general, and in particular the operation of critical infrastructure. There is, in fact, a central authority for the .COM zone. That authority is Verisign; they manage and publish the .COM zone from one central database. Mr Tovar is, perhaps, thinking of the .COM WHOIS database. The WHOIS database holds the details of who holds each registered domain, and is indeed distributed among the .COM registrars.

This sort of fundamental misunderstanding of the DNS infrastructure is a good indicator of just how much weight should be given to Mr. Tovar's opinions on how that infrastructure is managed.

UPDATE: Florian Weimer raises an interesting question, one that I'm embarrassed to say didn't occur to me when I was originally writing this. Even though the question itself is facetious, the point is valid. The data sheets [PDF] for the main product that Mr. Tovar's company markets say that it only runs on open source operating systems. How can he keep a straight face while on the one hand claiming that his product is suitable for mission critical applications but open source isn't, while on the other hand his software only runs in open source environments? It's quite baffling.

During the day, and often at night, Matt works as a DNS infrastructure operator. He is the former DNS Operations Manager for the Canadian Internet Registration Authority (the organization that manages the .ca Internet domain), has worked DNS operations at Afilias (.info, .org, and several others), has served on the board of directors of DNS-OARC, and currently works as the Sr. DNS Engineer for Rightside (.ninja, .rocks, and many others) ... though he doesn't speak for any of the above here.

He enjoys live music, good beer, and speaking about himself in the third person.