
Welcome to my weblog. My name is Jeff Smith; I am a software developer in Boston, MA, and I was recently named a 2009 SQL Server MVP. Check in frequently for tips, tricks, commentary, and ideas on SQL Server and .NET programming.

You write and maintain millions of lines of code, compiling your applications takes several hours, and your databases contain hundreds of tables with millions of rows.

Even the best and brightest and most experienced programmers, now and then, come upon a class or library or technique that is new to them. That's right, even you! You can admit it, no one's looking. But it really has no effect on your productivity, because you can quickly refer to the documentation or books or Google or many other resources to obtain the knowledge you need without missing a beat. Heck, you don't even need to post a question in a programming forum -- who could help you, after all? It's usually the other way around, right?

So, you've done your research and gained the knowledge to use this feature and it's time to put it to use. Implementation -- that's what separates the men from the boys! You have a hectic schedule and you are very, very important with no time to waste, but you're armed with knowledge, you're feeling good, you just drank 4 diet Pepsis for the caffeine and you have an unlimited supply of Cheetos ready to go. No pressure, but the fate of the entire organization is counting on you.

You open up your enterprise application in the development environment, wait 10 minutes for the massive project to load, and then you spend 20 more minutes carefully going through code and checking to see where the new feature needs to be implemented. It might be a new T-SQL feature in SQL 2005 that you are using, a library in .NET that you haven't worked with yet, a general programming technique that you are about to employ for the first time, or maybe an external library that you will incorporate into your project. It's really irrelevant -- the key is, it's time to make some changes and you, and you alone, have the expertise to get it done on such a large scale application.

After all, who else can go through millions of lines of code to figure it out? Not those junior programmers, that's for sure. They sure have a quirky way of doing things, mostly because they are not under the same time constraints or at the same level as you.

You see, those junior programmers are just not capable of implementing and testing this new technique on your enterprise applications. They don't have the patience, the skill, or the know-how to do it. When you see them "working" on applications, you notice that a lot of time they seem to instead be working on "Hello World" projects! Simple 10 line projects! Single SELECT SQL statements! Talk about programmers who have much to learn! You haven't touched a simple project or SQL statement in years and you wouldn't be caught dead doing so! The horror! Could you imagine such a scenario?

So, the development environment is ready to go. You've read the documentation, you know exactly how to implement the new technique into your code, and after 10 or 15 minutes -- it's done! Not too shabby, eh? Time for a compile, another Pepsi, and a test run.

Now, even the big-time programmers get bugs. You may be the best, but even the best experiences these, right? Sure enough, you get a run-time error. Back to the documentation. Ah ha! This class library requires a few more properties to be set up or there's some other tweak to make that wasn't specified in the documentation. Clickety-clickity-click -- done! Recompile ... wait ... execute the application ... wait ... drink a Pepsi .... wait ... bathroom break ...wait ... run the test ... wait .. Damn! Another run-time error. You notice some other quirk that wasn't clear in the documentation. Hmm, you may have to try this a few ways to get it to work right. Fair enough, that's how it goes for enterprise level important programmers, right? This stuff almost literally is rocket science, after all!

Well, after several more changes and compiles, now it runs, but something doesn't quite look right. The new technique isn't doing just what you thought; time to make some changes. Back to the source code, check it out line by line, and run it again. You'll get it figured out before too long, right? Unfortunately, this is what happens when you work with complicated systems, and it's what you get paid for.

One day, while waiting for the latest compile and then for the testing process to run, you look over and see "junior programmer #5" working again on yet another silly little program.

"Yeah, I guess I am pretty lame," junior responds, ashamed. "Remember last week when you recommended that I exclusively use FULL OUTER JOINS in my SQL code? I didn't understand how they work, so I did something similar to this ... I created two small tables, put in a few rows of sample data, ran some simple scripts in Query Analyzer and looked at the results, testing different joins and verifying the effect on the results. Because I am so new and inexperienced, it was the easiest way for me to learn how it works. Kind of embarrassing, huh?"

You laugh. "Yikes! A newbie!" you exclaim, crumbs of Cheetos spitting from your mouth. "When I first used the new T-SQL features in 2005, I went straight to the General Ledger and started changing the reports! I don't have time to mess around! The data looked good to me after a few runs, the totals seem to match, it all compiled and worked after only a few tries. There were a few bugs here and there we are still looking into, but I think they're unrelated. The real problem is waiting for those damn QA guys to finish testing and tying out the ledger reports to let me know if things are working right. How the hell can I be sure that these T-SQL enhancements work correctly until they get back to me with the results??!! It takes them forever! Idiots!"

Junior nods and goes back to "work" on his "important project." Ah, kids these days ....

Anyway, the compile is done, the program is executing -- and it looks OK! It took several weeks of work and lots of tweaking, but it looks right, as best you can tell; the new technique you've used seems to work fine. There was some trial and error (the damn documentation wasn't clear!) but you've got it now. You send off the latest build to the QA guys, and add another bullet point to your resume!

A job well done, you sit back, put your stocking-and-sandaled feet up on your desk and look over at Junior while you absent-mindedly play with your pony tail.... Poor Junior, he has a lot to learn from you about programming!

It is interesting speaking with guys who approach development this way; they don't seem to recognize that half their job as a good engineer/developer is ensuring quality, not just quantity, in what they produce. That's where the real ROI for a company is in custom software. Your time is cheap compared to the tens, hundreds, or even thousands of people using your software, and how much they cost the company with every bug they encounter.

What senior level dev is this targeted at?? I think it's the junior guys who do what you describe. Every longtime dev I work with is perfectly comfortable writing small test programs, and does so quite often. It would be impossible to work with huge APIs like the .NET Framework without testing in at least some kind of isolation...

BTW, was "I mean, come on!" a reference to the South Park "Cripple Fight" episode?

Well, I call myself a Senior Developer, having 20 years of experience, and I have been doing unit tests for the last 7 years or so. I am also trying to convince all of the colleagues I get to work with to unit-test their stuff, if they don't already. But I was simply baffled at the answer I got from someone "more senior" than me:

"If you program correctly from the start and do a little thinking beforehand, then you don't need no stinking test cases, then they're all superfluous"

Anyway, my joke is that the programmer isn't as senior or as important as he thinks he is ... we see these people all the time, especially in the sqlteam forums. Someone will dump a huge SQL statement and ask for help on a LEFT OUTER JOIN and how it works, and I always tell them: If you don't know how a left outer join works, don't just try it in your production code! Use Northwind or a table of fruits or numbers or letters and LEARN how it works! Don't just run it on your actual data and hope that it "looks ok"! Of course, they always refuse, because they just don't have time and they don't see the value in doing that. Same with reports, even Excel formulas ... I see it over and over ... I just find it funny.
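The "table of fruits" exercise takes only a few minutes. Here is a minimal sketch of it in Python using the standard library's sqlite3 (the table names, columns, and sample rows are all invented for illustration; in the blog's own world you'd do the same thing with T-SQL in Query Analyzer):

```python
import sqlite3

# Two tiny throwaway tables -- small enough to predict every result by hand.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fruits (id INTEGER, name TEXT);
    CREATE TABLE colors (fruit_id INTEGER, color TEXT);
    INSERT INTO fruits VALUES (1, 'apple'), (2, 'banana'), (3, 'kiwi');
    INSERT INTO colors VALUES (1, 'red'), (2, 'yellow');
""")

# A LEFT OUTER JOIN keeps every row from fruits;
# rows with no match in colors come back with NULL (None in Python).
rows = conn.execute("""
    SELECT f.name, c.color
    FROM fruits f
    LEFT OUTER JOIN colors c ON c.fruit_id = f.id
    ORDER BY f.id
""").fetchall()

for name, color in rows:
    print(name, color)
```

Because the data set is tiny, you can verify by inspection that 'kiwi' survives the join with a NULL color; swap in an INNER JOIN and watch it disappear. That is the whole lesson, learned safely, before the query ever touches production data.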

You're right, Adam, a true senior developer will ALWAYS break things down into small, simple apps to test and learn new techniques. You're lucky to have worked with good developers over the years, because I sure haven't!

I have on more than one occasion had to bring a 'senior developer' down to earth by teaching them of Failure's Rich Rewards. One of the most effective ways of doing this is locking them out of source control until they can demonstrate that their solution works in isolation and in a controlled (sandboxed) environment. They kick and scream for the first couple of times, but when they miss their first deadline and the managers descend, they're usually more than happy to do all the testing they can.

This topic could also be extended to many other engineering related topics. Like Real Programmers don't use branches, or Real Programmers don't use source control.

Unit tests can take too long to design, code, and run; they have their own bugs; and they would never reproduce the crazy source data I have seen, so the product would still have to be tested with live data to reveal the issues.

Real programmers need to continuously learn and try out ideas to unlearn bad techniques and improve their coding, e.g. thread and GUI based programming can be very subtle and full of traps, which can cause many types of sneaky bugs if you don't understand the interdependencies.

Real programmers don't only test applications; that's what QA testers do, a limited sub-set of people with a specific mindset and, IMHO, a tolerance for boring work. Some consultants and advanced programmers do a broader type of code checking as part of their work, e.g. debugging and code reviews.

Junior programmers are just programmers who are learning how to be real programmers (if they ever make it), we had to start somewhere!

BTW I'm using Java because .Net sux (and is only a Microsoft copy of Java and Delphi) and mono is a bad joke.

I've been writing code for (passing bus drowns out embarrassingly large number) years and my attitudes towards code have changed radically. At this point, I literally don't believe it's code if there isn't some sort of automatic test. I look at a line of code and if I don't know that there's a test that covers that line, I doubt it works.

Code will inevitably break or need to be changed; the chances are good that you'll have to build some sort of test code to debug it sooner or later; but mainly I think: how do I know that the code works at all? It looks good but...?

If you think of your code as a scrappy fighter, going out to battle the demons of Real World Data, then you can't just get a fighter that looks good in the mirror or dances around a lot; you actually have to test him(*) in combat to see how he performs. This also has the advantage that you can find problems in the test arena and fix them without getting your fighter killed (i.e. having your program collapse in production).

The other part that doesn't get mentioned very often is that you often find bugs while coding the test cases, before you even run them. I'll say, "What would happen if this list were empty? Well, it'd definitely fail! Hmm, what should I do in that case, in fact?"
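That "what if the list is empty?" moment can be made concrete with a small sketch. The function and its tests below are invented for illustration; the point is that writing the second assertion forces the design decision before the code ever runs against real data:

```python
def average(xs):
    """Mean of a list of numbers.

    The empty-list guard exists only because writing the test below
    raised the question 'what should happen then?' -- here we chose
    to return None rather than raise ZeroDivisionError.
    """
    if not xs:
        return None
    return sum(xs) / len(xs)

# Tests written before the first run -- the second one found the bug:
assert average([2, 4, 6]) == 4
assert average([]) is None  # without the guard: ZeroDivisionError
```

Returning None is just one reasonable choice; raising a ValueError is another. Either way, the decision got made deliberately while writing the test, not discovered in production.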

The other advantage is that designing for testability makes your code better overall. For example, once you get used to mocking everything up, you start to write your software as decoupled as possible. Once you're used to mocks, you make less use of the "magical null" antipattern (where you use the null pointer as a special class instance with specific behaviour implemented by if statements in the code; the key smell is lots of code looking like if (foo)...).
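One standard cure for the "magical null" smell is the Null Object pattern: a do-nothing stand-in with the same interface, so callers never need the `if (foo)` checks. A minimal Python sketch (all class and function names here are invented for illustration):

```python
class NullLogger:
    """Null Object: a safe do-nothing logger with the real interface."""
    def log(self, msg):
        pass


class ListLogger:
    """A real implementation that records messages (also usable as a test double)."""
    def __init__(self):
        self.lines = []

    def log(self, msg):
        self.lines.append(msg)


def process(items, logger=NullLogger()):
    """Sum the items, logging each step.

    Because the default is a NullLogger rather than None, the body needs
    no 'if logger:' checks -- the magical-null smell never appears.
    """
    total = 0
    for i in items:
        total += i
        logger.log(f"added {i}")
    return total
```

In a test you pass a `ListLogger` and assert on `logger.lines`; in production you pass a real logger or nothing at all. The conditional logic lives in the type system, not scattered through the code.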

In C++, I've been replacing my complex classes with a single function a lot recently, for the same reason: it makes it much easier to test and it has the side-benefit that your header files become tiny.

DoThings has a tiny header file that just contains SomeSimpleStruct and a single function.

Only the implementation knows about SomeComplicatedClass; even better, you can save quite a lot of duplication by eliminating the forward declarations in the .h altogether and having the class declared and implemented in the same place.

At this point I'm sure you're thinking, "Wait, how can I test all the methods on SomeComplicatedClass now? It's hidden from me!"

The beauty of this is that you simply don't need to test anything other than that one function. The whole reason for SomeComplicatedClass to exist is to implement that function; if you removed half the class and replaced it, it'd be fine as long as that function continues to work. Of course, you still want test coverage of all the lines in the code; but code that cannot ever be executed by calling DoThings (and so cannot be tested) shouldn't exist in the first place.

This is a splendid technique because it reduces the number of things you need to test, but increases their importance, so you are motivated to create a very comprehensive set of tests and you have the time to do so.
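The same shape translates directly to other languages. Here is a Python sketch of the idea: `SomeSimpleStruct` and `do_things` mirror the names used above, while `_SomeComplicatedClass` and its internals are invented for illustration. The leading underscore plays the role of the hidden implementation file:

```python
from dataclasses import dataclass


@dataclass
class SomeSimpleStruct:
    """The entire public data surface: plain fields, no behaviour."""
    text: str
    repeat: int


class _SomeComplicatedClass:
    """Implementation detail -- free to be rewritten as long as do_things() holds."""
    def __init__(self, cfg):
        self.cfg = cfg

    def _expand(self):
        return [self.cfg.text] * self.cfg.repeat

    def run(self):
        return ",".join(self._expand())


def do_things(cfg: SomeSimpleStruct) -> str:
    """The single public entry point; all tests target this function."""
    return _SomeComplicatedClass(cfg).run()
```

A test suite then exercises `do_things(SomeSimpleStruct("hi", 3))` and friends, never touching `_SomeComplicatedClass` directly, so half the class can be ripped out and replaced without a single test changing.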

Hmm, I have a rule: "test interface, not implementation." Probably someone thought of this first. :-D

You sure are a senior programmer... but you act worse than a junior!!! My senior programmer was laughing at the top of his lungs when he read your blog... You must be one of those nerds who think he's some sort of a freakin' oversized braineater... I guess you never had friends... what a sad life you have...;(