Very nearly every company I've worked for seems to have a constitutional aversion to "slack" in the work pipeline for development. Sometimes this manifests as a pile of inward-facing operational development that passes hands every few weeks and never really gets done, because the current developer is urgently re-allocated to a paying client. In consulting, sometimes it just means a stretch without pay when there's nothing to bill on. Sometimes it results in developers being assigned increasingly low-value, low-clarity, or low-interest busywork.

All of these are extremely poor uses of available developer resources.

These situations seem to originate in a view of engineering manpower as either a cost center (e.g. in an IT department) or a kind of inventory (e.g. in a contracting/consulting firm). For accounting purposes, fine. But that doesn't mean you have to actually treat them that way. I'm not sure if my view is any more valid, but I tend to think of engineering slack as a surplus. We have the developers' time. It's probably paid for. And even if it's not, it's probably bad for morale not to pay for it. Sure, you can burn off the excess and get little more than waste heat out of it. Or you can invest it, and make back the cost in dividends.

Nearly any company has systems that can benefit from a little more automation, a little more customization, a little more integration. This is what the vast majority of developers in business are doing. Most of them are just doing it because the work is going to actively mitigate an operational cost or support a revenue stream. Sometimes the direct value added isn't worth the cost of the developer's time. But good engineering often pays dividends in indirect value, via force multipliers or ongoing and compounding efficiency.

Not all work is created equal, but if you look carefully, there's probably some benefit to be had from a little extra development time. And the next bit can compound on that. And the next. And the next. Before you know it, your operations could be humming like finely tuned machinery. Or you have an experimental beta feature that could be your next surprise hit. But you'll never know if you keep burning off your excess instead of investing your surplus.

I have historically had a lot of trouble finding motivation to work on toy projects solely for the benefit of learning things. I tend to have a much easier time if I can find something to build that will have value after I finish it. I view this as a deficiency in myself, though. So I continue to try to work on toy projects as time allows, because I think it is a good way to learn.

One of my more successful instances of self-teaching happened a few years ago. I was trying to learn about this new thing all the hip JavaScript kids were talking about called "promises". It seemed to have some conceptual clarity around it, but there was disagreement on what an API would look like. Multiple standards had been proposed, and one was slightly more popular than the others. So I thought it might be fun to learn about promises by implementing the standard.

And it was! Thankfully the Promises/A+ spec came with a test suite. It required node.js to run it, so I got to learn a little bit about that as well. I spent a few evenings total on the effort, and it was totally worth it. I came away with as deep an understanding of promises (and by extension most other types of "futures") as it is probably possible to have. This prepared me better to make use of promises in real code than any other method of trial-and-error on-the-fly learning could have.

Here's the end result on GitHub: after.js. The code is just 173 measly lines--far shorter than I expected it to be. It also hosts far more lines of documentation about promises in general and my API specifically. It has a convenient NPM command for running my API through the spec tests. And most satisfying of all, it can now serve as a reference implementation for whoever might care to see one. I think it's a great example of the benefits of a toy project done well.

Evaluating the technical chops of job candidates is difficult. Especially in a "screener" situation where the whole point is to decide if a deeper interview is a waste of time. I haven't done a lot of it, so I'm still developing my technique. Here are a few things that I think I like to do, so far.

As long as the candidate is in the ballpark of someone we could use, I don't like to cut the interview short. There's always the chance that nerves or social awkwardness are causing an awful lot of what might appear to be ignorance or confusion.

I like to ask the candidate what technologies (languages, platforms, frameworks) they most enjoy and most dislike, and why. This gives me a peek into how they think about their work and what their expectations are of their tools and platforms. I want to see at least one strong, reasoned opinion in either direction. Not having one is an indication that they either lack experience, or are not in the habit of thinking deeply about their work.

Here's the big one: In order to figure out what questions to ask and how to word and weight them, I also like to ask the candidate to evaluate their skills in a few of the technologies that are relevant to the job they are applying for. Even if they have identified their relative skill levels on their resume, I ask them to put themselves on a 5-point scale: 0 is no experience, 1 is beginner, 2 is still beginner, 3 is comfortable, 4 is formidable, and 5 is expert.

At a self-rating of 1, I mostly just want to find out what they've built and what tools they used. Anyone who rates themselves at a 2 or 3 is a candidate for expert beginner syndrome. They'll probably grow out of it as they get more experience. I ask questions all over the spectrum to establish what they know and what they don't.

A self-rating of 4 is probably the easiest to interview. A legitimate 4 should have the self-awareness and perspective to see that they know a lot, but also a good conception of where their gaps are. 2s and 3s are more likely to self-label as 4, but they are easy to weed out with a couple of challenging questions. Beyond that, I mostly care about how they answer questions, because this candidate's value is as much in their ability to communicate about tough problems and solutions as it is in coding and design.

A self-rating of 5 is essentially a challenge. I'm not interested in playing a stumping game. But I do care whether the confidence is earned. Someone who is too willing to rate themselves an expert is dangerous both on their own and on a team. A 5 doesn't need to know everything I can think to ask. But I expect an honest "I don't know" or at least an attempt to verbally walk it through. And instead of confusion and misunderstanding, I expect clarifying questions. Communication and self-awareness are crucial here. Confident wrong answers or unqualified speculations are bad news for a self-proclaimed expert.

The .NET runtime has two broad categories that types fall into. There are value types and there are reference types. There are a lot of minor differences and implementation details that distinguish these two categories. Only a couple of differences are relevant to the daily experience of most developers.

Reference Types

A reference type is a type whose instances are copied by reference. This means that when you have an instance of one in a variable, and then you assign that to another variable, both variables point to the same object. Apply changes via the first variable, and you'll see the effects in the second.
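To make that concrete, here's a minimal sketch. The `Person` class here is invented purely for illustration:

```csharp
// A hypothetical reference type.
class Person { public string Name; }

class Demo
{
    static void Main()
    {
        var a = new Person { Name = "Ada" };
        var b = a;                 // copies the reference, not the object
        b.Name = "Grace";          // change it through the second variable...

        System.Console.WriteLine(a.Name); // "Grace" -- both variables
                                          // point to the same object
    }
}
```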

This reference copy happens any time you bind the value to a new variable, whether that's a private field on an object, a local variable, a function parameter, or a static field on a class. The runtime keeps track of these references as they float around, and doesn't allow the memory holding the actual object to be freed until it is sure that none of them are reachable from any active code.

Value Types

A value type is a type whose instances are copied by value. This means that when you have an instance of one in a variable, and then you assign that to another variable, the second variable gets a brand new object, with a copy of each property's value, which you can change independently of the original.

Value types can get tricky. The thing to remember is that this copy-by-value policy goes only one level deep. Each property of a value type has its own copy semantics, and that determines how it gets copied into the new containing object.
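A sketch of both behaviors at once, using hypothetical `Item` and `Tag` types--a struct with one value-typed field and one reference-typed field:

```csharp
class Tag { public string Label; }   // reference type

struct Item                          // value type
{
    public int Count;   // value-typed field: duplicated on copy
    public Tag Tag;     // reference-typed field: only the reference is duplicated
}

class Demo
{
    static void Main()
    {
        var a = new Item { Count = 1, Tag = new Tag { Label = "first" } };
        var b = a;                // b gets its own Count, but shares a's Tag

        b.Count = 99;
        b.Tag.Label = "changed";

        System.Console.WriteLine(a.Count);     // 1 -- the int was copied
        System.Console.WriteLine(a.Tag.Label); // "changed" -- the Tag was shared
    }
}
```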

If a value type has a reference-typed property, changes made through one copy are visible through the others, because it's only the bit of information that points at the reference type that is duplicated, not the object that's pointed to.

Memory

The last thing we should talk about is memory. Unfortunately, this bit is complicated despite most often being inconsequential. But it's a question that a prickly interviewer might decide to quiz you on if you make the mistake of claiming to be an expert.

You might guess, based on this difference in copying behavior, that passing around complex value types would be computationally expensive. It is. And potentially memory-consuming as well. Every new binding is a new variable with new copies of its value-typed properties. Instances also tend to be short-lived, though, so you have to work to actually keep the memory filled with value types. Unless you box them.

"Boxing" is what happens when you assign a value type to an object variable. The value type gets wrapped into an object, which does not get copied when you assign it to other variables. This means that you can end up with very long-lived value types, with lots of references to them, if you keep them in an object variable. Fortunately, you're not allowed to modify these values without assigning them back to a value-typed variable first.
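A quick sketch of what that looks like in practice:

```csharp
int n = 42;
object boxed = n;          // boxing: the value is copied into a heap object
n = 7;                     // the boxed copy is unaffected

// You can't modify the boxed value in place. To change it, you unbox
// it into a value-typed variable, modify that, and box it again.
int unboxed = (int)boxed;
unboxed++;
boxed = unboxed;           // a brand new box; the old one becomes garbage

System.Console.WriteLine(boxed); // 43
```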

Folks will often talk about stack and heap when asked about the differences between value types and reference types, because stack allocation is way faster. But value types are only guaranteed to be stored on the stack when they are unboxed local variables that aren't used in certain ways. The decision whether to do so in other cases is often not dictated by the CLR spec, so depending on the platform it might or might not do so in a given situation. In short, it's not worth thinking about unless you are bumping into out-of-memory errors. And even then, there are almost certainly more substantial wins to be had than by worrying about the whereabouts of your local variables.

I have at times in my past been called a very "conservative" developer. I think that title fits in some ways. I don't like to do something unless I have some idea what the impact will be. I don't like to commit to a design until I have some idea whether it is "good" or not, or how it feels to consume the API or work within the framework.

And I used to believe very strongly in designing things such that they were hard to misuse. This was so important to me that I would even compromise ease of proper use if it meant that it would create a barrier of effort in the way of using something in a way that I considered "inappropriate".

I once built a fluent API for defining object graph projections in .NET. While designing the API, I spent a lot of time making sure there was only one way to use it, and you would know very quickly if you were doing something that I didn't plan for. Ideally, it wouldn't compile, but I would settle for it blowing up. I also took great care to ensure that you always had a graceful retrograde option when the framework couldn't do exactly what you needed. But that didn't matter.

Once the framework got into other peoples' hands I realized fairly quickly that all this care had been a tremendous waste of time. The framework was supposed to be a force multiplier for the build teams at my company, but what happened was very different. Because the API had to be used in a very particular way, developers were confused when they couldn't find the right sequence of commands. When what I considered to be the perfect structure didn't occur to them, they assumed their situation wasn't supported.

I gave my fellow developers a finicky tool that the practice leads told them was fast and easy and that they needed to use. So when it wasn't clear how to do so, they just stopped and raised their hand, rather than doing what a developer is paid to do: solve problems. By trying to protect the other developers from themselves, I had actually taught them to be helpless. And the ones that didn't go that route just totally side-stepped or subverted the tools.

All this came about because I didn't trust the people who would use or maintain my software after I was gone. I thought I needed to make sure that it was hard or impossible to do what I considered to be unwise things. In reality all I did was remove degrees of freedom and discourage learning and problem solving.

We are developers. Our reason for being is to solve problems. Our mode of professional advancement is to solve harder, broader, more impactful problems. If I can't trust other developers at least to learn from painful design decisions, then why are they even in this business, and what business do I have trying to lead them?

In JavaScript, we have objects, and closures, but we don't have data types. At least, not custom ones. There are the built in types and there are objects. But just because the runtime doesn't distinguish one from the next doesn't mean you can't build your own data types and benefit from them.

All you *really* need to build data types is closures and hashes. And those, JavaScript has. In fact, a JavaScript object *is* just a hash with some extra conveniences added on. This makes the exercise really straightforward in JavaScript.
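Here's a minimal sketch of the idea. The `makeCounter` name and its members are invented for illustration:

```javascript
// A factory function: closures + a hash standing in for a data type.
function makeCounter(start) {
  var count = start; // private state, captured by the closures below

  // The returned hash is the "instance"; its functions close over count.
  return {
    increment: function () { count += 1; return count; },
    current: function () { return count; }
  };
}

var counter = makeCounter(10);
counter.increment(); // 11
counter.increment(); // 12
console.log(counter.current()); // 12
// "count" is fully encapsulated: nothing outside can touch it except
// through the functions that close over it.
```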

Every time you call such a factory function, you will get back an object that obeys a certain contract. You can always be sure that what you get has the same properties and that the functions all have the same signatures. It even has the ability to encapsulate data in variables that you define locally to the function. It turns out you can even interact with this the way you would any other object, because of the way JavaScript works.

But that's just gravy. You could do this same exact thing in C# with anonymous delegates and a Dictionary<string, object>. It's more verbose, and the syntax for actually making use of it doesn't sync with what the type system and compiler provide. But the result is the same as in JavaScript. You get a constructor that produces a structure that has both state and behavior, both private and public, and whose contract is consistent every time you call the function.
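Here's a rough sketch of that C# analogue. The names are invented, and I'm using lambdas (which capture variables the same way anonymous delegates do):

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    // The same factory idea in C#: a Dictionary<string, object> holding
    // delegates that close over "private" local state.
    static Dictionary<string, object> MakeCounter(int start)
    {
        int count = start; // captured by the delegates below

        return new Dictionary<string, object>
        {
            ["increment"] = (Func<int>)(() => ++count),
            ["current"]   = (Func<int>)(() => count)
        };
    }

    static void Main()
    {
        var counter = MakeCounter(10);
        ((Func<int>)counter["increment"])();                  // count is now 11
        Console.WriteLine(((Func<int>)counter["current"])()); // 11
    }
}
```

The casts are exactly the verbosity the post is talking about: the type system can't see the "contract" the dictionary obeys, so you have to restate it at every call site.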

We know what a closure is. It's a function that gets a little bubble of data pinned to it at runtime, which it can then make use of wherever it goes.

If we really want to shorten this explanation down, we might say that a closure is a bit of behavior tied to a bit of data. As it happens, this is very similar to how objects were described to me when I was first learning object-oriented programming. In his book "Object-Oriented Analysis and Design with Applications", Grady Booch says that "an object is an entity that has state, behavior, and identity." Most other things that an object is are derived from these attributes. Identity is just a bonus for our purposes, but in JavaScript at least, it also happens to be true of functions.

An object is an entity that has state, [and] behavior...

— Grady Booch

Now let's squint a little at that object. Focus on some things and let others fade into the background. For example, let's imagine an object with just one public member function. And then, imagine it also has no public properties or fields. It does have some private fields, though. And those private fields are initialized by the constructor of the object's type.

So now what do we have here? A member function, with a little bubble of data pinned to its object at runtime via the constructor function, which it can then make use of wherever it goes.
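Side by side, the two forms are nearly interchangeable. The `Greeter` and `makeGreet` names here are hypothetical:

```javascript
// 1. An object: state set in the constructor, one public member function.
function Greeter(name) {
  this._name = name; // field initialized by the constructor
}
Greeter.prototype.greet = function () {
  return 'Hello, ' + this._name;
};

// 2. A closure: the same state, captured from the enclosing scope.
function makeGreet(name) {
  return function () {
    return 'Hello, ' + name;
  };
}

var obj = new Greeter('Ada');
var fn = makeGreet('Ada');
obj.greet(); // 'Hello, Ada'
fn();        // 'Hello, Ada' -- same behavior, same pinned data
```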

If you're working in a modern programming language, you likely use closures from time to time. It's a fancy-sounding word, but the meaning is simple. A closure is a function defined in some other function's scope and then either returned or passed off to yet another function--specifically, one that references variables from the scope of the outer function in which it was defined. One place they are very commonly used today is in event handlers.

As of this writing, the three most common uses of closures are probably DOM event handlers and AJAX callbacks in browser JavaScript, and callbacks in node.js.

The mechanism defines a function that will be called by some other piece of code somewhere else. This function can make use of a piece of information that it doesn't create or fetch on its own, and which the caller has no knowledge of. And yet, this information is also not passed in as a parameter. It's carried in by the function, in its pocket, ready to pull out only at the appropriate time.
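A small sketch of that pocket in action. The `fetchUser` and `loadProfile` names are invented, and the "fetch" is a synchronous stand-in for an async lookup:

```javascript
// Stand-in for an async lookup: calls back with a result object.
function fetchUser(id, callback) {
  return callback({ id: id, name: 'user' + id });
}

function loadProfile(requestId) {
  var label = 'request-' + requestId; // local variable, invisible to fetchUser

  return fetchUser(requestId, function (user) {
    // This callback uses `label`, which fetchUser never sees and which
    // is not a parameter: the closure carried it in, in its pocket.
    return label + ': ' + user.name;
  });
}

console.log(loadProfile(7)); // "request-7: user7"
```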

So why the funny name? A function that doesn't return anything and doesn't call any other functions encloses its local variables, like a bubble of air. The variables live and die with the function call. But if you define a dynamic function and return that, or hand it off to some other function or object, it forms another bubble. This inner bubble carries with it the bits of information it uses, closing around them as it leaves the safety of the parent function's bubble. This is a closure.

Anyone who has worked on "internal" software--the stuff you write that your customers never see, but your co-workers use every day--is probably familiar with the idea of "forms over data" or "CRUD apps". The idea is that the quickest path between a schema in an RDBMS and an app to maintain the data in it is to make two screens for each table. One screen has a grid of the data in the records in the table, and the other screen is a form with fields for each column used to edit or create rows in the table.

If the users express a need to do "mass edits", you might find yourself searching for an editable data grid control that will let you edit data right in the list/table display, rather than editing one row at a time in a specialized form. And if you're especially unlucky your users will ask that fateful question: "Can you just make it work like Excel?"

The answer is that of course you can. You can buy a dev license for some fancy user control for your platform of choice that will attempt to imitate Excel's grid interaction paradigm as closely as possible, while giving you all sorts of knobs, dials, and hooks with which to customize, extend, or otherwise deviate from said paradigm. Or you can just commit to the devil's deal and use VBA to customize and extend actual Excel spreadsheets.

Actually, if you're not in a position to resist the demands, that's probably the least of all evils. I mean, VBA is pretty gross as a development platform. But it also has an extremely low barrier to entry. And you don't have to answer questions about why things don't quite work like Excel, and why you can't make them.

But regardless of which direction you go, if you find yourself giving your users an Excel-like interface, there are probably some decent reasons why. Either your team can't afford to take the time to build a task-oriented UI (because doing UI well is hard), or you just honestly don't know what your users need to do with their data. So you need to provide a way for users to work with their data when neither they nor you know what their task flows are yet, without a big engineering investment.

Don't know what your users need to do? Can't afford to find out? No shame in that, do what you gotta do. Don't care? Have no intention of ever finding out? Then... Well... Maybe a little shame is appropriate.

I know of one surefire way to destroy the morale of a group of engineers. That is to take away their ability to finish things. There are a number of root causes that can lead to this, but the proximate cause is usually a lack of vision, focus, or courage in the people responsible for setting priorities.

A lot of developers got into this gig because of an intrinsic motivation rooted in the feeling of accomplishment derived from completing the construction of a useful thing. When you tell these folks to solve a problem and let them pour their focus and effort into it, then stop them and say "actually, that's not so important, work on this other problem instead," you rob them of their intrinsic reward for good work.

Jerking a group of developers around like this is a good way to end up with sad and bitter employees. And that's the best case. More likely you'll end up with ones that feel betrayed and act belligerent, or even disloyal. Developers don't like thrash. They don't all react the same, but very few handle it well. One thing is for sure: it's no way to get people to do their best.

Last year I read a book that broke my brain, called "Leaders Eat Last". I had begun to think that a "management" role might be somewhere in my future, and I knew enough about the topic to realize that people tend to disrespect "managers" and respect "leaders". I thought I should try to learn the difference, and whether and why it matters.

What "Leaders Eat Last" is about is essentially the idea of "servant leadership." The book, through a number of stories, illustrations, and explanations, argues that what makes a leader out of a manager, or a person in any other role, is a dedication to putting the team ahead of individual ambitions and obligations. The book is full of examples of managers, executives, military officers, and other people in leadership roles, taking on what some might consider undignified tasks or unnecessary risks, to make sure that the team gets a win, that the more vulnerable members aren't left behind, etc.

As a person with no official authority I have struggled with translating this into advice I can enact. But lately it occurred to me that if you take the consideration of official position or role out of the picture, what's left is very simply a good teammate, a good follower. This has proven a jagged pill to swallow. While I think I have good instincts, valuable perspective, and decent judgement... I don't think I have historically been a good follower.

Whether it was sowing dissent because I believed the manager was making bad choices, or keeping learning opportunities away from junior devs because I wanted to make sure things got done right, at the time I felt certain I was doing the right thing. In hindsight, and in the context of the idea of servant leadership, it's clear that at best I was being selfish, and at worst I was a seed of dysfunction hiding behind the guise of shipping product.

I am humbled. Clearly I have plenty of room to grow. But for once, I think I know what direction I need to grow in, and that is comforting. And it's strangely empowering to know that the path to greater impact and responsibility lies in taking on a bigger burden and building up others, rather than worrying about positioning and appearances and aggressively pursuing and defending correctness.

When I was first learning to program, one of the fundamental things that every introductory language course covered was the idea of a "constant". I never thought too much about them at first, because the idea seemed so simple and obvious. A thing that is constant does not change. So a constant variable is one that does not change. Easy. What's next?

In the courses, what's next was invariably something like functions or pointers or string interpolation. But I think new programmers might be better served instead putting a follow-up question next.

What does it mean for a variable to "change"?

The C++ language, as an example, embraces the ambiguity of the question and tries to answer all interpretations. Consider a pointer to an object. You can change the address the pointer holds, replace the object it points to, or mutate bits of the object itself. C++ allows you to lock down each of those kinds of change independently, even down to individual fields internal to the object.

C# walked away from this madness and made things very simple, but still not very intuitive for newbies. C# has two different kinds of invariable variables: "const" and "readonly". How can we understand them as easily as possible? We ask, "What does change mean?"

With regards to "readonly" and "const" in C#, change means simply to be assigned different instances over time. And the distinction between readonly and const is at what point in the life of the variable this change becomes prohibited.

A readonly field is one whose value cannot be reassigned after it is initialized. Initialization can happen at the point of definition, or in a constructor. Local variables cannot be readonly. I haven't seen a reason for this except that the CLR doesn't have this feature, and the benefit to adding it at the language level was low.

A const variable is one whose value cannot be reassigned after the program is compiled. This has a couple of natural consequences that are surprising unless you start from this definition. At compile time you cannot construct complicated objects or assign references; you can only assign primitive literals and nulls. And while local variables can be const, fields that are const are implicitly also made static. Why? Because it saves memory, and there's no reason for them not to be.
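A short sketch of both in one place. The `Config` class and its members are hypothetical:

```csharp
class Config
{
    public const int MaxRetries = 3;  // compile-time constant; implicitly static
    public readonly string Name;      // one value per instance, fixed after construction

    public Config(string name)
    {
        Name = name;        // legal: readonly can be assigned in a constructor
    }

    public void Rename(string name)
    {
        // Name = name;     // illegal: readonly can't be reassigned after construction
        // MaxRetries = 5;  // illegal: const can never be reassigned at all
    }
}
```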

There are a lot of things that we do, as developers, that we tend to feel are very important. A lot of these activities are often viewed by "business people" as a necessary evil, a cost center, symptoms of the finicky nature of perfectionists, or even a distraction to be minimized.

I think in reality developers, even experienced ones, tend to do a very poor job of defending our practices in terms that make sense to anyone else. I think it's an easy argument to make that, generally, carefully built software is more valuable than hastily built software. But software is also not valuable unless it is marketed, sold, delivered, explained, and supported. The tool for coordinating these things is called a business. And in business most questions are ones of degree. To misquote the apostle Paul, all things are possible, but not all things are profitable.

In a business, we are all supposed to be working toward a common goal. If you're lucky, that goal is not just to make money, but to actively improve the world for your customers. Even in that wonderful scenario, the software you deliver is meant to contribute to one of those goals, at least indirectly.

The unfortunate thing about indirect contributions is that they tend to be invisible to everyone who hasn't been involved in them. If you have a good manager, they might be able to make the connection for others on your behalf. But you won't always, or probably even usually, have a good manager. So in the end it's no one's responsibility but our own to defend the activities that we know are important, by explaining how they contribute to the common goals of the organization.

There's a line connecting what you do to why it's worth other people's money for you to do it. You can see it, implicitly. But sometimes you have to draw that line in nice fat marker for other people to see it.

I took a new job at the beginning of August. I wasn't at my last gig very long, but I have only good things to say about them. There's a decent chance I'll end up back there at some point in my career. But I'm excited for the new job in all sorts of ways I'd have a hard time putting into words.

One thing I can talk about is how the daily worky-work is going to be different. My last job was spent almost entirely writing a node web server and a single-page web app, and all development was done in an Ubuntu VM. The new job will be like the one before the last one. I will be doing .NET web and service development, fully immersed in the Microsoft ecosystem.

Things I'm going to miss:

The power and concision of working with a *nix CLI. PowerShell just ain't the same.

OS-level package manager (apt-get)

Git. 'Nuff said.

The light footprint of Sublime Text

The compactness of server-side JavaScript. You can accomplish *so much* with so few lines, due to the absence of type system noise and the robust FOSS package ecosystem.

A culture of command-line build tools.

Things I'm not looking forward to:

Static types. For all their strengths, you can end up jumping through a lot of hoops and contorting your code into very unpleasant shapes in order to make the static type system happy. It only gets worse if a framework or library has decided to leverage it to solve a problem for which it's a poor fit.

The complexity of Visual Studio. So. Many. Features. It's really a mess in a lot of places.

TFS version control. After using Git exclusively for almost a year, going back to TFS feels like using a computer without a keyboard. Sad panda.

Things I'm not going to miss:

Ubuntu as a desktop environment. It looks and feels good, for a Linux. But that's an awfully low bar.

Linux video drivers. Just.... Ugh. What a mess.

The Unix Way: 20 ways to do anything. All of them involve text processing. 16 of them are kludgy, and the other 4 don't work consistently.

Things I'm looking forward to:

Static types. Apart from the typical discussion of the benefits of static typing, there are some things that are just quicker and simpler to express via the type system. In JavaScript, for those problems you've got to write the engine and define the declarative data structures that will be processed by it.

The power and convenience of Visual Studio. I've never encountered a dev tool quite like it. It can understand your code and give feedback like no other. And it has tons of really handy tools to do things for you that would be annoying to deal with by hand.

Applying the new perspective and wisdom I've gained from working in a dynamic language to write better static-typed code.

In the last post, I talked about why I decided to get a new domain and use a new service for my blog. More importantly, I also explained why I decided to move all the old content to the new domain and service. I left off with the unanswered question:

How do I move the content without breaking people's old links or destroying my "Google Juice"?

The short answer is 301 redirects. 301 is the HTTP status code for "Moved Permanently." It basically is a signal to anyone--whether human, browser, or web robot--who comes looking for a URL that it should go look in a specific other spot for it. And oh by the way that is the new place to always look for the content that used to be at the old URL, so just forget the old one and replace it with this new one in all of your records.

Put simply: A 301 is how you tell search engines to transfer the rep from one domain to another. It also conveniently pushes any browsers on to the correct URL if they happen to click an old link somewhere.

You can set up a domain-wide 301 in most any DNS service by way of a URL record, but it generally only allows you to take a particular domain or subdomain and send it straight to an unadorned other domain or subdomain. Doing the redirects this way would mean that any link to a specific page of my old blog would go to my *landing* page instead of the new home of that specific content. That would effectively break old links and destroy the ranking position of any individual piece of content--exactly what I wanted to avoid.

Next I went looking at whether the blogging services themselves could do the job for me. It turns out that Blogger does have a way to configure 301s, but only for individual URLs, and only within the exact same domain and sub-domain. Disappoint.

Squarespace, as it turns out, is much more amenable. They will let you configure a page to redirect to an external URL. They also feature customizable blog post URLs. This hatched a hare-brained scheme in my mind: Move Turbulent Intellect, whole and complete, over to Squarespace, and then redirect everything to the other blog.

1. Pay for a second Squarespace blog.

2. Import the old content to the second Squarespace blog.

3. Ensure all new URLs are identical to the old ones.

4. Reassign the old domain from my Blogger blog to my second Squarespace blog.

5. Add redirects for each of the imported blog posts in the Squarespace Turbulent Intellect blog over to the new blog.

Alas, it was not to be. Blogger blog URLs get a .html appended to the end of the slug, and Squarespace does not allow dots in its custom URLs.

So now I was back to the option I had been hoping all along to avoid: setting up a server somewhere just to do the redirecting. I've never set up a public web server before. I've barely set up private web servers, to be honest, and those I did were IIS. So I took my first tremulous steps into these waters and signed up for a Digital Ocean account.

I went with Digital Ocean because friends recommended it, and because they have a $5/mo tier with a tiny virtual server that would provide plenty of power and very little learning overhead. I went with an empty Ubuntu server and installed only just what I needed. Fortunately, my most recent gig afforded me a chance to get comfortable with rudimentary Linux administration and SSH. Without that, even needs and tasks as simple as mine would have been painful, because I wouldn't have known where to start.

I grabbed the Ubuntu dev VM I had lying around from my node.js experiments, used the RSA key I set up for Github, SSH'ed into my shiny new virtual server, and started walking through Digital Ocean's great tutorials on configuring an Apache web host and installing the mod_rewrite module I knew I would need for regex-based dynamic redirects.

The file where you set up redirects is called .htaccess. It took some playing to figure out what I needed in order to redirect with and without the www subdomain, and how to make sure I correctly captured and converted the slugs I cared about. In the end, I had a handful of lines that explicitly redirect a few URLs that don't follow any predictable pattern, and then use regex to handle the vast majority of posts.
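As a rough sketch of that kind of ruleset (the specific slugs here are placeholders I've made up, not the real ones; only whilenotdeadlearn.com is the actual destination), the .htaccess might look something like:

```apache
# Make sure rewriting is on for this directory.
RewriteEngine On

# A few posts with unpredictable slugs get explicit one-off rules.
# (The slugs below are illustrative placeholders.)
RewriteRule ^2010/03/some-odd-post\.html$ http://whilenotdeadlearn.com/blog/renamed-post [R=301,L]

# Everything else: capture Blogger's year/month/slug path, drop the
# trailing .html, and send it to the matching Squarespace URL.
RewriteRule ^([0-9]{4})/([0-9]{2})/(.+)\.html$ http://whilenotdeadlearn.com/blog/$3 [R=301,L]
```

The R=301 flag is what makes these permanent redirects rather than mod_rewrite's default temporary (302) ones, and L stops processing at the first rule that matches.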

With this in place, all that was left was to hook everything up in the right order. Order turns out to be sort of important if you want to make sure Google doesn't think you're content scraping. Step by step, I:

I recently moved my blog. I moved it between blogging service providers, and between domains. This turned out to be a fair amount of work and decision-making. I thought that other people who are considering a similar transition might benefit from seeing how I dealt with it.

I considered leaving the old content at the old service and domain, especially because it was at Blogger, which was free. But I had grown dissatisfied over the years with Blogger's available templates, and have never been super happy with their composition tools. And I never really was satisfied with the old domain. It was good enough, but I was always on the lookout for something that really sang to me. And more, I didn't want to split up my blogging history over such an arbitrary line in the sand.

Moving services can be a pain, but it seems like most modern services and tools can handle exporting and importing, especially from an 800lb gorilla like Blogger. The domain change is a big deal, though, for a couple of reasons. Firstly, there are a few people and sites that have linked to me over the years. While I'm no Jeff Atwood or Rands, I didn't want to "break the web" in even a small way, if I could avoid it. Secondly, you can really screw yourself by way of the search engines, if you're not careful in how you copy or move content.

I ruled out just duplicating the content fairly early on. For starters, I didn't want to split my traffic. (Also a reason not to just leave the old stuff where it was and post only new stuff to the new domain.) It would dilute search rankings, and it would give anyone who ends up at the old domain a potential dead end that might cause them never to find my new stuff. And most importantly, if you just duplicate the content, you're all but sure to get one or the other domain flagged as a content-scraper and de-listed from the search engines.

So that left moving. Fortunately, Squarespace has a convenient tool that scrapes posts and comments from Blogger's RSS feeds, cleans them up, and plops them right into a fresh Squarespace blog. So that grunt work was dealt with.

The next step was figuring out how to move it without suffering undesired consequences. How do I move the content without breaking people's old links or destroying my "Google Juice"?

All the old Turbulent Intellect content has now been imported, and every post and content page URL, plus the RSS feed, should all be 301'ing on over here. I may eventually kill off the old domain, so take appropriate measures. ;)

Welcome to my new blog! Chances are if you are here, you came by way of my old blog, Turbulent Intellect. Thanks for following me over to the new digs. There's not a lot here, now. But anything new I write will show up here, and soon all the old stuff will be backported and redirected to here.

I'm happy to announce that I will be moving my blog to a new domain, and to Squarespace, sometime in the next week. The RSS feed may or may not continue to work, depending on how your reader handles the redirect, and Blogger's search and monthly list pages will stop working. But the main page, resume, and all blog posts should permanently redirect to the corresponding pages at the new domain.

The new blog will be at whilenotdeadlearn.com, and everything is already set up over there except for the content transfer, if you want to go check it out. And just to be safe, I'd recommend you subscribe to the new RSS feed at whilenotdeadlearn.com/blog?format=rss, even if the redirect does end up working for you.