A man's got to know his limitations

I talk a lot about unit testing. And why shouldn’t I? Unit testing is the key to writing clean, maintainable code. If you concentrate on writing code that is easily testable, you can’t help but end up with decoupled, clean, high-quality code that is easy to maintain. What’s not to like?

But sometimes, there are questions over the definition of terms when it comes to unit testing. For instance, what, exactly, is a “unit”? What does “mocking” mean? How do I know whether I actually am doing unit testing? In this article, I try to clear up some of those definitions.

What is a “Unit”?

The first question that comes up when discussing unit testing is, well, what is a unit? You can’t do unit testing without knowing what a unit is.

When it comes to unit testing, I view a “unit” as any discrete module of code that can be tested in isolation. It can be something as simple as a stand-alone routine (think StringReplace or IncMonth), but normally it will be a single class and its methods. A class is the basic, discrete code entity of modern languages. In Delphi, classes (and records, which are conceptually very similar) are the base building blocks of your code. They are the data structures that, when used together, form a system.

In the world of unit testing, that class is generally referred to as the “Class Under Test” (CUT) or the “System Under Test” (SUT). You’ll see those terms used extensively – to the point where it is strongly recommended that you use CUT as the variable name for the class being tested.

Definition: A unit is any code entity that can be tested in isolation, usually a class.

Am I Actually Doing Unit Testing?

So when you are doing unit testing, you are generally testing classes. (And for the sake of the discussion, that will be the assumption hereafter…) But the key thing to note is that when unit testing a class, you are testing the given class and only the given class. Unit testing is always done in isolation – that is, the class under test needs to be completely isolated from any other classes or any other systems. If you are testing a class and you need some external entity, then you are no longer unit testing. A class is only “testable” when its dependencies can be and are “faked”, so that it can be tested without any of its real external dependencies. So if you are running what you think is a unit test, and that test needs to access a database, a file system, or any other external system, then you have stopped doing unit testing and you’ve started doing integration testing.

One thing I want to be clear about: there’s no shame in doing integration testing. Integration testing is really important and should be done. Unit testing frameworks are often a very good way to do integration testing. I don’t want to leave folks with the impression that because integration testing is not unit testing, you shouldn’t be doing it – quite the contrary. Nevertheless, it is an important distinction. The point here is to recognize what unit tests are and to strive to write them when you intend to write them. By all means, write integration tests, but don’t write them in lieu of unit tests.

Think of it this way: Every unit test framework – DUnit included – creates a test executable. If you can’t take that test executable and run it successfully on your mother’s computer in a directory that is read only, then you aren’t unit testing anymore.

Definition: Unit testing is the act of testing a single class in isolation, completely apart from any of its actual dependencies.

Definition: Integration testing is the act of testing a single class along with one or more of its actual external dependencies.

What is an Isolation Framework?

Commonly, developers have used the term “mocking framework” to describe code that provides faking services to allow classes to be tested in isolation. However, as we’ll see below, a “mock” is actually one specific kind of fake class; the stub is the other. Thus, it is probably more accurate to use the term “Isolation Framework” instead of “Mocking Framework”. A good isolation framework will allow for the easy creation of both types of fakes – mocks and stubs.

Fakes allow you to test a class in isolation by providing implementations of dependencies without requiring the real dependencies.

Definition: An isolation framework is a collection of code that enables the easy creation of fakes.

Definition: A Fake Class is any class that provides functionality sufficient to pretend that it is a dependency needed by a class under test. There are two kinds of fakes – stubs and mocks.

If you really want to learn about this stuff in depth, I strongly recommend you read The Art of Unit Testing: With Examples in .NET by Roy Osherove. For you Delphi guys, don’t be scared off by the C# examples -- this book is a great treatise on unit testing, and gives plenty of descriptions, proper techniques, and definitions of unit testing in far more detail than I’ve done here. Or, you can listen to Roy talk to Scott Hanselman on the Hanselminutes podcast. I openly confess that this blog post is a faint echo of the great stuff that is included in both the book and the podcast. If you really want to get your geek on, get a hold of a copy of xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros. This heavy tome is a tour de force of unit testing, outlining a complete taxonomy of tests and testing patterns. It’s not for the faint of heart, but if you read that book, you’ll know everything there is to know and then some.

Stubs

A stub is a class that does the absolute minimum to appear to be an actual dependency of the Class Under Test. It provides no functionality required by the test, other than appearing to implement a given interface or descend from a given base class. When the CUT calls it, a stub usually does nothing. Stubs are completely peripheral to testing the CUT, and exist purely to enable the CUT to run. A typical example is a stub that provides logging services. The CUT may need an implementation of, say, ILogger in order to execute, but none of the tests care about the logging. In fact, you specifically don’t want the CUT logging anything. Thus, the stub pretends to be the logging service by implementing the interface, but that implementation actually does nothing. Its implementing methods might literally be empty. Furthermore, while a stub might return data for the purpose of keeping the CUT happy and running, it can never take any action that will fail a test. If it does, then it ceases to be a stub, and it becomes a “mock”.
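To make that concrete, here’s a sketch of a do-nothing logging stub. (The ILogger declaration isn’t shown in this post, so the single Log method here is an assumption.)

```delphi
type
  // Assumed interface -- the real ILogger may well have more methods.
  ILogger = interface
    ['{D4E1A2B3-0001-4000-8000-000000000001}']
    procedure Log(const aMessage: string);
  end;

  // The stub: it implements ILogger purely so the CUT can run.
  TStubLogger = class(TInterfacedObject, ILogger)
  public
    procedure Log(const aMessage: string);
  end;

procedure TStubLogger.Log(const aMessage: string);
begin
  // Deliberately empty -- a stub can never cause a test to pass or fail.
end;
```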

Definition: A stub is a fake that has no effect on the passing or failing of the test, and that exists purely to allow the test to run.

Mocks

Mocks are a bit more complicated. Mocks do what stubs do in that they provide a fake implementation of a dependency needed by the CUT. However, a mock goes beyond being a mere stub by recording the interaction between itself and the CUT. A mock keeps a record of all the interactions with the CUT and reports back, passing the test if the CUT behaved correctly, and failing the test if it did not. Thus, it is actually the mock, and not the CUT itself, that determines whether a test passes or fails.

Here is an example – say you have a class TWidgetProcessor. It has two dependencies, an ILogger and an IVerifier. In order to test TWidgetProcessor, you need to fake both of those dependencies. However, in order to really test TWidgetProcessor, you’ll want to do two tests – one where you stub ILogger and test the interaction with IVerifier, and another where you stub IVerifier and test the interaction with ILogger. Both require fakes, but in each case, you’ll provide a stub class for one and a mock class for the other.

Let’s look a bit closer at the first scenario – where we stub out ILogger and use a mock for IVerifier. The stub we’ve discussed – you either write an empty implementation of ILogger, or you use an isolation framework to implement the interface to do nothing. However, the fake for IVerifier becomes a bit more interesting – it needs a mock class. Say the process of verifying a widget takes two steps – first the processor needs to see if the widget is in the system, and then, if it is, the processor needs to check if the widget is properly configured. Thus, if you are testing the TWidgetProcessor, you need to run a test that checks whether TWidgetProcessor makes the second call if it gets True back from the first call. This test will require the mock class to do two things: first, it needs to return True from the first call, and then it needs to keep track of whether or not the resulting configuration call actually gets made. Then, it becomes the job of the mock class to provide the pass/fail information – if the second call is made after the first call returns True, then the test passes; if not, the test fails. This is what makes this fake class a mock: The mock itself contains the information that needs to be checked for the pass/fail criteria.
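Here’s a sketch of what a hand-rolled mock for that scenario might look like. (The IVerifier method names are invented for illustration; the interface isn’t shown in this post.)

```delphi
type
  // Assumed interface -- method names are illustrative only.
  IVerifier = interface
    ['{D4E1A2B3-0002-4000-8000-000000000002}']
    function WidgetExists(const aWidgetID: integer): Boolean;
    function WidgetIsConfigured(const aWidgetID: integer): Boolean;
  end;

  // A hand-rolled mock: it records the interaction with the CUT
  // and it -- not the CUT -- decides whether the test passed.
  TMockVerifier = class(TInterfacedObject, IVerifier)
  private
    FExistsWasCalled: Boolean;
    FConfiguredWasCalled: Boolean;
  public
    function WidgetExists(const aWidgetID: integer): Boolean;
    function WidgetIsConfigured(const aWidgetID: integer): Boolean;
    function TestPassed: Boolean;
  end;

function TMockVerifier.WidgetExists(const aWidgetID: integer): Boolean;
begin
  FExistsWasCalled := True;
  Result := True; // return True so the CUT should make the second call
end;

function TMockVerifier.WidgetIsConfigured(const aWidgetID: integer): Boolean;
begin
  FConfiguredWasCalled := True;
  Result := True;
end;

function TMockVerifier.TestPassed: Boolean;
begin
  // Pass only if the configuration check followed the existence check.
  Result := FExistsWasCalled and FConfiguredWasCalled;
end;
```

A test would hand a TMockVerifier to the TWidgetProcessor, run the processing, and then assert that TestPassed returns True.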

Definition: A mock is a fake that keeps track of the behavior of the Class Under Test and passes or fails the test based on that behavior.

Most isolation frameworks include the ability to do extensive and sophisticated tracking of exactly what happens inside a mock class. For instance, a mock can not only tell whether a given method was called, it can track the number of times given methods are called and the parameters that are passed to those calls. It can detect whether something is called that isn’t supposed to be, or whether something isn’t called that is supposed to be. As part of the test setup, you can tell the mock exactly what to expect, and to fail if that exact sequence of events and parameters is not executed as expected. Stubs are fairly easy and straightforward, but mocks can get rather sophisticated.

I’ve written about the Delphi Mocks framework in a previous post. It takes advantage of some cool new RTL features in Delphi XE2. It’s also a very generous gift to the Delphi community from Vince Parrett, who makes the very awesome FinalBuilder. If you have XE2 and are doing unit testing, you should get Delphi Mocks and use it. If you don’t have XE2 and are doing unit testing, you should upgrade so you can start using this very valuable isolation framework.

But again, the whole point here is to test your classes in isolation; you want your CUT to be able to perform its duties without any outside, real, external dependencies.

Thus, a final definition: Unit testing is the testing of a single code entity when isolated completely from its dependencies.

Some tweets call for further explanation and expansion, and so I’ve added a new category called Tweet Expansion to cover posts that do just that.

Here at Gateway Ticketing, we have an interesting development and business model. At our core, we are an ISV. We sell a software package that we build to customers. But, we also will customize our software to meet customer specifications. That makes us sort of a VAR to our own product. Thus, we do both enhancements based on customer requests and our own internal product development projects to make our product more valuable in the marketplace.

This distinction – between internal projects and projects driven by specific customer requirements – makes for some challenging project management. But we have a solid team here at Gateway, and we make it all work. But because we do work that amounts to us being consultants, we end up having to closely track our developer time. We do that for a number of business reasons, the main one, of course, is profitability. You need to track your time to ensure that you are actually making money on a given endeavor.

Now I know that no one really likes to track their time. It’s a pain. It’s challenging to be accurate. It’s hard to get time properly categorized. But it is also invaluable for making those important business decisions.

But there is a bigger problem with measuring time. The only thing you can really measure is “Butt Time”. Butt Time is the actual amount of time someone has their butt in a chair while working on an issue. You need Butt Time to get any work done, of course.

But Butt Time isn’t really what you want from your developers. Butt Time isn’t at all equivalent to productive development time. Butt Time includes reading email, answering questions, and generally handling any number of interruptions, meetings, and other distractions. And those distractions, however minor, break a developer’s concentration.

And when you get right down to it, development is all about concentration. With a coding project of any size at all, doing development requires your complete focus and concentration. You need to build up complex data structures and patterns in your mind in order to really be productive. Doing so takes time -- I’ve heard it described as building a house of cards in your mind. Building that house of cards can take upwards of fifteen minutes depending upon the project. But here’s the bummer: It only takes a second for that house of cards to come tumbling down. One distraction can make your fifteen minute investment disappear.

And of course, time spent in “The Zone” – with that house of cards constructed and true, highly productive work getting done – is what we all are seeking. We’ve all probably had that awesome experience of getting into that zone for hours at a time and wondering where the time went. We know how cool that is – and how precious. That’s what we want to measure – Brain Time.

But that’s a really hard thing to do. Getting accurate, meaningful Butt Time measurements is difficult enough. But how in the world can you actually measure Brain Time?

I’ll argue that you really can’t. In the end, Butt Time is only a poor proxy for Brain Time. What we need to do is to try to increase the Butt Time to Brain Time ratio by providing an environment where Brain Time is maximized as a percentage of total Butt Time.

There are ways to do that – ensuring that your developers have an office with a door that they can close is an important first step. The book Peopleware is really the Bible for this, and Joel Spolsky has talked a lot about it as well. Uninterrupted quiet time – leaving developers alone – is the key to maximizing Brain Time.

Seriously – you need to get and read Peopleware if you have anything at all to do with leading developers. This is the definitive book on managing software developers. Be sure to get the Second Edition. The physical book is, I believe, out of print, causing the price to be pretty high, but I was delighted to notice that you can now order it on the Kindle.

Another thing we need to do is to respect – indeed, celebrate – those developers that are quiet and don’t say much. We have a not small number of developers here at Gateway that, well, don’t say much at all. They come to work, write great code, and go home. They don’t have a lot to say. But sadly, we don’t always respect this. I’m as guilty as anyone of too often saying super-clever things like “Stop being so boisterous, Meredith” or “Did you say six words today?”, instead of recognizing that Meredith is maximizing her Brain Time and, by being quiet, not breaking other team members’ concentration. Being quiet is a very valuable virtue in a software developer – both because quiet developers are productive themselves and because they aren’t breaking others’ concentration – and we should give honor and respect to that.

Butt Time is easy to come by and fairly easy to measure. Brain Time, however, is a precious and hard to measure commodity that needs to be nurtured and respected.

Now I’m going to discuss another way you can invert the control of your code – by using anonymous methods.

Anonymous methods were one of two major language features added in Delphi 2009 – the other being generics. Generics are probably easier to understand and use as they have a great use case with collections. Anonymous methods are a little harder to understand because their use and their purpose are not as easily seen. I hope that you’ll start seeing their advantage after reading this post.

Basic Idea

The basic idea is pretty simple – anonymous methods allow you to pass code, i.e. functionality, like any other variable. Thus, you can ask for functionality via an anonymous method just like you can ask for a dependency via an interface. Because you can assign code to a variable – or even pass code directly into a method – you can easily invert the control of your class by passing in code to your class instead of embedding it in the class itself.

One of the principles of Inversion of Control is “Don’t call us, we’ll call you”. That is, if a class needs something, the class itself should ask for (call for) the functionality, rather than doing that functionality itself. If a class needs to, say, record the processing of widgets as it happens, it should ask for a widget recording class and record things using the dependency that is passed in. A class should not create its own instance of a widget recording class and then use it.

Given that example, it’s not difficult to see how one could pass recording functionality into a class as an anonymous method.

Doing it the “Normal” Way

Let’s use the above discussion in a code example. As noted, perhaps your class needs to record the processing of widgets. Typically, you might do something like this in order to use good dependency injection techniques:
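The original listing isn’t reproduced here, but a sketch of the constructor-injection version might look like this (IWidgetRecorder and its method name are assumptions):

```delphi
type
  // Assumed interface for the widget recording dependency.
  IWidgetRecorder = interface
    ['{D4E1A2B3-0003-4000-8000-000000000003}']
    procedure RecordWidget(const aCount: integer);
  end;

  TWidgetProcessor = class
  private
    FRecorder: IWidgetRecorder;
  public
    // Constructor injection: the dependency is asked for, not created.
    constructor Create(const aRecorder: IWidgetRecorder);
    procedure ProcessWidgets(const aCount: integer);
  end;

constructor TWidgetProcessor.Create(const aRecorder: IWidgetRecorder);
begin
  inherited Create;
  FRecorder := aRecorder;
end;

procedure TWidgetProcessor.ProcessWidgets(const aCount: integer);
begin
  // ... do the actual widget processing here ...
  FRecorder.RecordWidget(aCount);
end;
```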

You’d store the widget recording interface and use it in your class as needed. And that’s an excellent thing to do – your class is decoupled completely from the recorder, and you are using constructor injection to ask for your dependency.

Again, this is a great technique, and probably preferable most of the time. I don’t want to leave the impression that what I discuss below is always a replacement for good, sound use of interfaces. However, it may not always be what you want to do.

IOC with Anonymous Methods

As noted above, you can use anonymous methods to invert the control of your class. Perhaps declaring an interface and instantiating a class to implement it is too heavy. Perhaps the dependency is really simple and straightforward. In those cases, using a simple anonymous method to inject your dependency might be a better solution for decoupling your code.
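A sketch of such a class might look like this (the exact declaration is an assumption, but the TWidgetRecorderProc signature matches the one used later in this post):

```delphi
type
  // An anonymous method type: "a chunk of code in a variable".
  TWidgetRecorderProc = reference to procedure(const aCount: integer);

  TWidgetProcessor = class
  private
    FRecorderProc: TWidgetRecorderProc;
  public
    constructor Create(aRecorderProc: TWidgetRecorderProc);
    procedure ProcessWidgets(const aCount: integer);
    // The injected code is also exposed as a property, so it can be
    // assigned (or reassigned) after construction.
    property RecorderProc: TWidgetRecorderProc
      read FRecorderProc write FRecorderProc;
  end;

constructor TWidgetProcessor.Create(aRecorderProc: TWidgetRecorderProc);
begin
  inherited Create;
  FRecorderProc := aRecorderProc;
end;

procedure TWidgetProcessor.ProcessWidgets(const aCount: integer);
begin
  // ... do the actual widget processing here ...
  if Assigned(FRecorderProc) then
    FRecorderProc(aCount);
end;
```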

This simple class declaration takes an anonymous method as a parameter to its constructor, and then exposes that function as a property. That’s a bit of a commitment – you may not always want to do widget recording. If you wanted, you could skip passing the anonymous method to the constructor and simply assign it to the property as desired. In any event, the widget recording functionality is implemented externally, and your class merely depends on the anonymous method type.

Now, you have a class that can record widgets any way you want to – to the console, to a file, to email, a database -- whatever. And you don’t have to know anything about the implementation of how the widgets are recorded. You just have a variable that refers to the code, which you can easily call in your code.

Using Your New Widget Recording Tool

So, now we have a TWidgetProcessor that can record widgets with an anonymous method. Cool – but how? Well, you simply declare an anonymous method and pass it in. Here’s an example which just writes out to the console:
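Something like this sketch (assuming TWidgetProcessor takes a TWidgetRecorderProc in its constructor):

```delphi
var
  Processor: TWidgetProcessor;
begin
  // Inject console-writing behavior without TWidgetProcessor
  // knowing anything about the console.
  Processor := TWidgetProcessor.Create(
    procedure(const aCount: integer)
    begin
      WriteLn('Processed ', aCount, ' widgets.');
    end);
  try
    Processor.ProcessWidgets(42);
  finally
    Processor.Free;
  end;
end;
```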

All this code does is declare an anonymous method that matches the declaration of TWidgetRecorderProc and writes the information out to the console. That’s pretty simple. But what if you wanted to write it out to a file? Easily done without changing the TWidgetProcessor class at all.
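A file-writing recorder might look something like this sketch (FileExists comes from SysUtils; the log file name is arbitrary):

```delphi
// Swap in file-writing behavior via the property -- TWidgetProcessor
// itself is untouched.
Processor.RecorderProc :=
  procedure(const aCount: integer)
  var
    LogFile: TextFile;
  begin
    AssignFile(LogFile, 'widgets.log');
    if FileExists('widgets.log') then
      Append(LogFile)
    else
      Rewrite(LogFile);
    try
      WriteLn(LogFile, 'Processed ', aCount, ' widgets.');
    finally
      CloseFile(LogFile);
    end;
  end;
```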

Of course, you could write a similar anonymous method to record the information to a database, send an email, or any combination thereof.

And just as importantly, if you want to test the TWidgetProcessor class, you might want to do so by passing in a TWidgetRecorderProc that does absolutely nothing:

RecorderProc :=
  procedure(const aCount: integer)
  begin
    // Do nothing, as this is a stub for testing purposes
  end;

And of course if your classes are easy to test, then they are very likely well designed. The ability to stub out functionality as above is a hallmark of well designed classes.

Conclusion

So now you have another tool in your Dependency Injection/Inversion of Control tool box. The power here is in the ability to decouple functionality from the class that needs that functionality. Anonymous methods provide a nice way to wrap up functionality in a single variable. Instead of injecting an implementation of an interface, with anonymous methods you can inject a specific chunk of code that will provide the functionality. Either way, you end up with testable code that will be more maintainable in the future.

Consumer or Producer?

By now I suspect that most of you have heard of generics. Perhaps you have started using them via the Generics.Collections unit, where you can find TList<T>, TStack<T>, etc. The Delphi Spring Framework contains an even more sophisticated set of collections and interfaces to those collections, including the very powerful IEnumerable<T>.

These are all very useful classes, and our code is made much simpler, easier to maintain, and more type-safe because of them. But using them merely makes us consumers of generics. If we want to truly take advantage of generics, we need to become producers of generic classes. It’s a big step to see the benefit of generic collections, but the truly big step is to start seeing opportunities for the use of generics in your own code.

A Contrived, Simple Example

This past weekend, some of my team members and I attended the C-Sharpen event here in Philly. It was a valuable day, made particularly so by the presentations given by Steve Bohlen. He gave an excellent presentation on the topic I discussed above – how we need to start thinking as producers of generics and not merely consumers. In that presentation, he gave a simple example of how generics are powerful and can make your code simpler, more effective, and more powerful. So, I’m going to show that example here. It’s a bit contrived, but very illustrative of how to think about generics.
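The original listing isn’t reproduced here, but a sketch of the kind of code in question – three unrelated entity classes in an order entry system (field names are illustrative) – might be:

```delphi
type
  TCustomer = class
    ID: TGUID;          // customers are keyed by a GUID
    Name: string;
  end;

  TOrder = class
    ID: integer;
    OrderDate: TDate;   // TDate is declared in SysUtils
  end;

  TOrderItem = class
    ID: integer;
    Description: string;
  end;
```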

Now this code is very simple – a set of classes that might represent a simple order entry system. But right away, something should occur to you. All three have something in common – they are entities in your system. Down the road, you may want to act upon all the entities in your system, so you might create a superclass like so:
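As a sketch:

```delphi
type
  TEntity = class
    ID: integer;   // fine for TOrder and TOrderItem... but not TCustomer
  end;

  TOrder = class(TEntity)
    OrderDate: TDate;
  end;

  TOrderItem = class(TEntity)
    Description: string;
  end;
```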

But you might be frustrated because you want your TEntity class to have an ID field that is in use by all descendants, but that pesky TCustomer class can’t oblige – it needs a GUID for its ID tag instead of an integer like the other classes.

Instead of fretting about the different types of ID tags, how about just creating one that doesn’t care what type it is? Well, this is where the power of generics comes in. How about you give the TEntity class a parameterized type – a generic – as its ID, and then just tell all the classes what type their ID tag will be?
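Along these lines:

```delphi
type
  // The type of the ID is now a parameter of the class itself.
  TEntity<T> = class
    ID: T;
  end;

  TCustomer = class(TEntity<TGUID>)
    Name: string;
  end;

  TOrder = class(TEntity<integer>)
    OrderDate: TDate;
  end;

  TOrderItem = class(TEntity<integer>)
    Description: string;
  end;
```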

Now, given the above, your entities can all have IDs, but you don’t have to have the same type for all of them. If a TEntity descendant needs an ID of a different type, you can just descend from TEntity and pass the correct ID type in the class declaration.

Again, the example is quite contrived, but I think it does a nice job of showing how you can start “thinking generically” and not just accept a rigid type structure.

In addition, it illustrates why the more formal name for generics is “parameterized types”. The type in the brackets is passed in to the type declaration, and then used within the class, just as method parameters are passed in to functions and procedures.

A simple, contrived example, sure. But hopefully it illustrates how generics can turn you from a mere consumer of generic classes into a producer, making them a common tool in your code.

Update:

As a number of commenters have pointed out, my original example was sub-optimal. Rather than post a "correction" post, I've taken the liberty of updating and improving the original article.

I have this friend who totally doesn’t get Twitter. He doesn’t get it so much that it actually kind of makes him angry that anyone does get – and like – Twitter. To him, it is a total waste of time in every way. He can’t imagine why anyone would spend any time at all having anything to do with Twitter. And that’s fine – to each his own. But I do find it amusing that someone would feel that way about a service used and enjoyed by millions of people.

I’m a Twitter lover. I find it entertaining, amusing, interesting, and good for my brain. I really enjoy reading it, and I really enjoy posting tweets. I get good news, good development information, and a good laugh when I read twitter. I get to express my self in short bursts that help me formulate my thoughts. What’s not to like?

The common impression is that Twitter is for “letting you know what your friends are doing” – or at least that is how it was originally marketed. The common misconception is that Twitter is just a bunch of people posting “Now I’m eating lunch. Yum!”, and perhaps it was that in the very beginning. But in the spirit of “Let a thousand flowers bloom”, Twitter became a lot more than that. Twitter is a forum for expressing not only what you are doing, but what you are thinking, what you are watching, what you are reading, and anything else you are up to. It is many things. It is a means of conducting a conversation across the world. It is a means of sharing information with like-minded people. It can help you track what your customers are thinking. It can be a source of entertainment. It can ensure you are up to date on the latest news, and it can even help foment a real live revolution. Not bad for a site that posts things 140 characters at a time.

Short and Sweet

Here’s the main reason why I like Twitter: it forces us to express ourselves in short, concise sentences. 140 characters isn’t a lot, but it’s not nothing. It is a sort of real-world application of Strunk & White’s exhortation to “Use fewer words”. It’s really quite amazing the amount of humor, wisdom, and pith people can cram into that little chunk of text. Anyone who cruises around the internets knows that, while the web allows people to publish openly without the barrier of a publishing house, folks – this blog included – don’t get the benefit of an editor. Twitter is the one place where you can know that if you read it, you’ll get things in short, concise, crisp chunks. (And I should add that I pride myself on never using ‘4’ and ‘2’ and other similar shortcuts….)

I find that pleasing as a reader, and valuable as a writer. If I can express my thought in 140 characters, then I know that I’ve really distilled it to the essence of the thought. For instance, it took a while for me to get Hodges Law down to 140 characters, but I did. (And I also now have the added advantage of a single place that I can refer to for my awesome idea – as well as proof that I was the first to think of it.)

The other big reason I like Twitter is that it is an “easy read”. If I have a few minutes to kill, I can pull out my Kindle and flip through the latest on my Twitter feed. I’ve knocked out more than a few pages of Twitter feeds at the Doctor’s office. It also is perfect for solving that First World Problem of being bored while, ahem, “indisposed”.

Third reason? It’s a wealth of information about Delphi and software development. Many great developers tweet, and point to articles and blog posts that teach and explain about development. Want to know what the latest Delphi articles are? Follow DelphiFeeds. You can’t say it is impossible to know what is going on with Embarcadero – they have a Twitter feed that updates with just about everything going on with the company. You can tailor your feed to bring you whatever you are interested in. I’m interested in Software Development, the NBA, specifically the Timberwolves, certain American Idol contestants, and generally funny stuff. That’s exactly what my feed brings me.

In The End

Yes, Twitter would be boring if it were nothing more than people telling you what they were doing. It may have been that at one point, but it’s become way more than that. Shoot, a recent cover of Sports Illustrated had a hashtag for the cover article about Jeremy Lin. In the end, Twitter can be what you want it to be. You can follow technologists, pop stars, actors, authors, characters from books and movies, businesses, news sites, newspapers, bloggers, family members, humorists, athletes, historical characters, and more. You can keep up with technology, world events, politics, all manner of news and events, and what your favorite authors/actors/singers are up to. It’s really a huge, fascinating party that you can control to your heart’s content.

I’d like to show one way that you might go about it for the purpose of starting a discussion about the technique. I’ve seen the method I’m going to describe below used in a number of places, most notably back in WebSnap. (Remember that? WebSnap was actually a pretty cool library of code, doing a lot of things in a somewhat MVC fashion before MVC and web development was in, well, fashion.)

Anyway, here is how the implementation pattern I’m talking about works: you create a “base” class that implements each method and property of the interface. Then, you add a virtual, abstract method corresponding to each interface method, using a “Do” naming convention (DoMethodName). You then call that virtual, abstract method inside the implementation of the interface methods.

This base class then becomes the parent for any class that implements the interface. In essence, you create an easy-to-inherit class that easily implements the interface so you don’t actually have to worry about implementing the interface over and over again.
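The original listing isn’t shown here, but a sketch of the pattern, assuming a two-method IDemoInterface, might look like this:

```delphi
type
  // Assumed interface -- two methods, matching the "two methods to
  // override" mentioned below.
  IDemoInterface = interface
    ['{D4E1A2B3-0004-4000-8000-000000000004}']
    procedure SayHello;
    function GetMeaning: integer;
  end;

  TBaseDemoInterfaceImplementor = class(TInterfacedObject, IDemoInterface)
  protected
    // Descendants override these to supply the real behavior.
    procedure DoSayHello; virtual; abstract;
    function DoGetMeaning: integer; virtual; abstract;
  public
    // The interface methods simply forward to the Do* methods.
    procedure SayHello;
    function GetMeaning: integer;
  end;

procedure TBaseDemoInterfaceImplementor.SayHello;
begin
  DoSayHello;
end;

function TBaseDemoInterfaceImplementor.GetMeaning: integer;
begin
  Result := DoGetMeaning;
end;
```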

So, what you have now is a class – TBaseDemoInterfaceImplementor – that can be used to really implement the functionality of the IDemoInterface. You have an abstract class to inherit from, and two methods to override to implement the functionality.

So, of course, the class that you’ll actually use would look something like this:
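A sketch – all the interface plumbing lives in the base class, and the descendant just overrides the two Do methods:

```delphi
type
  TDemoImplementor = class(TBaseDemoInterfaceImplementor)
  protected
    procedure DoSayHello; override;
    function DoGetMeaning: integer; override;
  end;

procedure TDemoImplementor.DoSayHello;
begin
  WriteLn('Hello from TDemoImplementor');
end;

function TDemoImplementor.DoGetMeaning: integer;
begin
  Result := 42;
end;
```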

What happens is that the implementation becomes much less painful once you have the base class defined with the details of implementing the interface neatly tucked away. You are doing simple class inheritance at that point.

Now I’ve thought about this pattern for a while, but I’m not sure about it. I’m very interested in hearing what you all think. I guess one downside is that the class doesn’t clearly declare that it implements a given interface. Perhaps a naming convention would do that?

On the one hand, it seems a nice way to create a class library that implements an interface. To me, it seems like a good use of polymorphism. There may be times when you want a large chunk of your library to implement a “basic” interface, and using this inheritance trick can make that easy. Once you create a base class, you can now build a class hierarchy based on that class, with all the classes implementing the interface with much less hassle and more cleanly. In addition, it makes it easier to create any number of descendants from the base class if you want to have a number of different implementations for the interface. For instance, if you had ICreditCard, you might create TBaseCreditCardImplementor, making it easy to descend from and create TVisa and TMastercard, etc.

Pattern? Anti-pattern? Maybe I’m crazy and this is a really silly idea. Maybe this is a known pattern already and I’m just ignorant. Maybe this is the coolest idea ever.

Okay, I want you all to notice that in my previous post about the use of Create, I was pretty careful not to say “Never call Create”. And near the start I said “For the most part”. So let’s be clear – just as I didn’t say “Never ever call FreeAndNil”, I’m not saying “Never call Create”. And while I am at it, I didn’t say “Use interfaces everywhere” either. Those of you who are accusing me of saying so are wrong. Incorrect. Inaccurate. You should be ashamed and you should correct yourselves. You should also read more carefully and think more precisely.

Enough about what I didn’t say. What am I saying? I’m saying “Architect your code so that you avoid calling Create, because doing so causes problems”.

Problem number one is that your code becomes difficult to test. This is a fact: when you call Create – when you create an instance of a specific class – you are tightly coupled to that class and its entire dependency graph. (If you don’t know what a dependency graph is, you can find out more here.) Some classes have huge dependency graphs. That is, the creation of that class causes a cascade of creation of other classes, which can initiate any number of things: connections to databases, locking of files, attachment of limited resources like turnstiles or cash registers – who knows what?

Classes that have dependency graphs are classes that should not be manually created, but instead should be created by a class that can control if and when that class is created. The creation of such classes should be definable and controllable. They should be referenced by an abstraction and they should be created in a configurable way. If you want to test the dependent class, then you can have your system create a mock instead of the real thing. If you call Create on that class, you can’t do that.
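As a sketch of what “referenced by an abstraction” looks like in practice – all the names here are invented for illustration – the consuming class takes its dependency as an interface rather than constructing it:

```pascal
type
  IPaymentGateway = interface
    procedure Charge(Amount: Currency);
  end;

  // TOrderProcessor never calls a concrete gateway's Create itself; the
  // dependency is handed in, so a test can hand in a mock instead.
  TOrderProcessor = class
  private
    FGateway: IPaymentGateway;
  public
    constructor Create(const AGateway: IPaymentGateway);
    procedure Process(Amount: Currency);
  end;

constructor TOrderProcessor.Create(const AGateway: IPaymentGateway);
begin
  inherited Create;
  FGateway := AGateway;  // stored as an abstraction, not a concrete class
end;

procedure TOrderProcessor.Process(Amount: Currency);
begin
  FGateway.Charge(Amount);  // could be the real gateway or a test mock
end;
```

In a unit test, you pass in a TInterfacedObject descendant that implements IPaymentGateway but touches no external system at all.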

How should we refer to these types of classes – classes that have dependency graphs of any sort? Let’s refer to them as “complex classes”. (I looked for the commonly accepted term here, and couldn’t seem to find one. I haven’t the slightest doubt that one of you will tell me what the “right” term here is. I’ve seen the term “volatile” used, but I don’t like that because of the keyword volatile that some languages use.) So for the purpose of our discussion, a complex class is one that has a dependency graph; it’s a class that, when created, brings in other external dependencies. Sometimes those external dependencies are easy to define, and sometimes the dependency graph gets so complex it is hard to know what is happening exactly when that class is instantiated. For instance, think of TForm. When you create TForm, holy mackerel – a long, complex host of other things are being allocated – Windows handles, other VCL classes, fonts and this and that and the TKitchenSink. TForm is most definitely a complex class.

But what about something like TStringList? TStringList has, as far as I can tell, no real dependency graph. Create one of those, and you really aren’t creating a host of external dependencies. TStringList is basically self-contained. It is a known, limited entity. (Look at its constructor and the constructor of TStrings. It basically doesn’t create anything, so it really can’t have any dependency graph.) It can be created without worrying that you are going to end up calling some charge-money-every-time-you-use-it web service. It’s simple, clean, and limited. It’s not a complex class. It’s a simple class.

So, to answer the question posed in the title of this post – you can (and should) call Create on what I’m calling simple classes. Generally, these will be classes defined in the RTL that are clearly stable or simple, in that they don’t have dependencies that are non-deterministic, unknown, or complex. Classes like TStringList, TStringBuilder, TList, the classes in Generics.Collections.pas, and many others are examples of classes that you should feel fine about creating. These classes are known and proven by virtue of being part of the RTL. You probably have many similar classes in your own library – classes that you have bathed in unit tests so you know that they are isolated, stable, and simple. You can determine the complexity of such classes by looking at the constructor of a given class. (And note: the use or lack of use of Create in those classes indicates whether they are complex or simple. Hmmm.)

You should feel safe creating such simple classes because you know that you aren’t dragging in the TKitchenSink when you create them.

I think in this case, the “fight” (I don’t like that word for this particular case, but we’ll roll with the metaphor…) is really about the approach one takes towards memory management – specifically, how one views the role of class creation (and thus memory allocation) when writing code. In my Dependency Injection Series (which I really need to continue…), I spoke about the notion of creating classes without constructors (other than the default one, of course). In a sense, this post might be considered part of my DI series, because in this post I am basically making the case for using a Dependency Injection container. In this post, I’m going to argue that for the most part, you should not call Create. And you should design your code in such a way that you don’t need to call Create.

Life Too Short?

Okay, so I was watching a terrific video by Neal Ford (a former Delphi guy, actually) in which he introduces the notions of and “way of thinking” behind Functional Programming. It’s a great video, and you should watch it. Near the end of it, Neal says something that I’ve been pondering a while: “Life is too short to call malloc”. He doesn’t think he should have to worry about memory management anymore. This somewhat provocative thought dovetailed with a notion I ran across as I was investigating and thinking about unit testing – the idea from Misko Hevery, who is deeply suspicious of the new operator, Java’s equivalent of .Create. At first, this seems like a really weird notion, particularly since Delphi is a native language, and many Delphi developers appreciate the ability to control the lifetime of their objects and directly manage memory. But I started to see the wisdom in it, and have finally come to agree with Neal – life is too short to call .Create. I don’t want to spend my limited thinking and coding time worrying about memory, particularly since there are now frameworks and language features that can do this automatically. In languages like Java and C# (among others), developers use garbage collection to manage memory. We don’t have garbage collection in Delphi, but we do have ways of making memory management something that isn’t a huge concern.

And hey, if you are a Delphi developer and you want to completely control your objects’ lifetimes manually, then hey, you can. More power to you. But again, I’m going to show you how you can spend less time worrying about that and more time doing what we developers really are trying to do – come up with solutions and products.

This isn’t the first time that I’ve thought about or mentioned all of this stuff.

I’m Lazy

First, I am not the sharpest knife in the drawer, and I have a limited number of CPU cycles in my brain. If I don’t have to, I don’t want to spend them doing repetitive “busy work”. And coding the Create/Free cycle is, to a large degree, busy work. We all do it – we carefully ensure that if we call Create, we code the try…finally block and call .Free. If we create an instance variable in a constructor, we immediately call .Free in the destructor. Those are all good practices, and bravo to you for following them, but I think we’ve now gotten to the point where we don’t always have to do these kinds of tedious, rote coding activities anymore. If we can spend more time thinking about higher-level solutions and less about lower-level intricacies, we are more productive. (One could argue that the entire field of computer science has been nothing other than a giant march towards spending time on higher-level solutions and away from tedious, low-level coding…)
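To be concrete, here’s the ritual in question – a minimal sketch using TStringList, one of those RTL classes that is safe to create directly:

```pascal
var
  List: TStringList;
begin
  List := TStringList.Create;
  try
    List.Add('some busy work');
    // ... do something useful with List ...
  finally
    List.Free;  // the boilerplate we type over and over
  end;
end;
```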

Don’t Cause Dependencies

But that is more of a basic, low-level reason in itself – the idea that you can save brain power. There are some deeper technical and practical reasons why you should eschew creating anything. The first is that calling .Create creates more than just an object instance – it creates a dependency. If you create a class instance yourself and use that class via a direct class reference – that is, do something like this:
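(The original snippet appears to have gone missing here; a minimal reconstruction of the kind of direct construction being described might look like this – TSprocket comes from the surrounding text, while Spin is a placeholder method invented purely for illustration.)

```pascal
var
  Sprocket: TSprocket;
begin
  Sprocket := TSprocket.Create;  // hard-wired to this exact concrete class
  try
    Sprocket.Spin;               // placeholder method, for illustration
  finally
    Sprocket.Free;
  end;
end;
```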

you are dependent on that class and that class alone. You’ve created an unbreakable dependency on that particular implementation. You have to include that class’s unit in your uses clause. You’ve very tightly coupled yourself to a specific means of getting something done. If you want to change it, you actually have to go and alter the code itself. Yikes! If you want to test it, your tests have to rely on the entire dependency chain of TSprocket, and who knows what the heck that entails? And I hope we can all agree that tightly coupling yourself to specific implementations is bad, right? (If we can’t, I’m afraid that we can’t be friends anymore. Alas.)
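(Again, the original snippet is missing; presumably it showed something like the following – an interface-typed variable that is nonetheless assigned by calling the concrete class’s constructor directly. ISprocket and Spin are placeholder names here.)

```pascal
var
  Sprocket: ISprocket;
begin
  Sprocket := TSprocket.Create;  // interface reference, but the concrete class is still named
  Sprocket.Spin;
  // no Free needed – reference counting cleans up when Sprocket goes out of scope
end;
```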

you are still creating a dependency on that particular class. Your uses clause still has to include the unit for that class, and your code is explicitly tied to that particular implementation. Using the interface is a step in the right direction – it means you don’t have to worry about memory management – but you still end up with a dependency on a specific implementation. And we agree that a dependency on a specific implementation should be avoided, right?

So, we agree that calling Create causes a hard-coded dependency, and that this is bad. And if you think about it, any time you call Create in one class, you create a dependency. One reason such dependencies are bad is that they make your code hard to test. True and proper unit testing means that each class should be able to be tested in isolation. If ClassA is irrevocably dependent on ClassB, then you cannot test ClassA without invoking ClassB. You can’t easily provide a mock class for ClassB either. Thus, a call to Create can make your code hard to test.

Following SOLID

But there’s yet another reason to avoid calling Create – the ‘S’ in the SOLID principles. The ‘S’ stands for the “Single Responsibility Principle”, which is “the notion that an object should have only a single responsibility.” If your class has a mission and it is creating stuff to do that mission, then it actually has multiple responsibilities – doing its mission and creating stuff. That’s two things, not one. Instead, let your class do its main mission, and leave creating stuff up to another class whose main mission is to create stuff. Plus, if you have dug into the SOLID principles, you know that the ‘D’ stands for the “Dependency Inversion Principle”, which is the notion that “one should depend upon abstractions [and] not depend upon concretions.” Or, as you might have heard me say, “Program against abstractions, not against implementations”. (And of course, it wasn’t me that said that – it was Erich Gamma of “Gang of Four” fame.)

And where does that leave us? Well, right back with not calling Create, and having a class whose specific, single purpose in life is to create stuff for you. Sound familiar? It should, we just described either a class based on the Factory pattern, or a Dependency Injection container.
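(Here too the original code sample is missing; the GetSprocket call mentioned below presumably looked something like this – SprocketFactory is a placeholder name for whatever factory or container is in play, and ISprocket and Spin are likewise placeholders.)

```pascal
var
  Sprocket: ISprocket;
begin
  // Ask the factory for an implementation; no concrete class name appears here.
  Sprocket := SprocketFactory.GetSprocket;
  Sprocket.Spin;  // placeholder method, for illustration
end;
```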

That way, there is no Create call, and thus you aren’t creating any dependency other than on the interface. You get instances of your objects from a Factory or from a Dependency Injection container -- and either one can produce any implementation you want or ask for without creating a dependency. Either way, it gives you control over how the interface is implemented without coupling to that implementation. You could even add a parameter to the GetSprocket call to ask for a specific kind of Sprocket implementation, and even that wouldn’t cause you to be dependent on that implementation.

In the end, calling Create merely causes a dependency, takes brain cycles, and violates the SOLID principle. No wonder you shouldn’t use it much!

So then the question becomes, When should you call Create? Well, I’m glad you asked. I’ll answer that in a future post.

I’ve been railing on why you should be coding against interfaces, and a number of you have been asking me to write an article about it, so I did. It turned out to be pretty long, so rather than make it a blog post, I turned it into an article.

I love a good, testy comment section in a blog post. The discussion in the comment section of my FreeAndNil post has been interesting and lively. In addition, the thread in non-technical continues apace, with new threads springing up! Along the way, a couple of interesting points were made that I’d like to highlight here because I think they are germane.

First, I can sum up the article as follows: “Don’t use FreeAndNil. If you feel the need to use FreeAndNil, then your code almost certainly needs to be refactored to limit the scope of your references.” People arguing for its use are, in my mind, simply saying “I don’t know how to, or don’t care to, control the scope of my pointers.”

Second, I want to stress again that I totally get that sometimes you have to use FreeAndNil. Sometimes you maintain old code that played fast and loose with pointer scope, and didn’t contain things the way we now know you should. I get that. But those situations should cause you shame and embarrassment, and should motivate you to refactor your code to control references and make the need for FreeAndNil go away. The reason that I totally get this is that I manage such a codebase. The point isn’t about maintaining legacy code; it’s about how to write new code the right way. And if you are writing new code while feeling the need to FreeAndNil stuff, then you aren’t doing it right, quite frankly.

Jolyon Smith wrote the following in the comments: “Surely you must also admonish everyone to write code that never requires the use of if Assigned(someReference) then…” Well, yes, I suppose that is exactly correct in most cases, unless you are doing lazy initialization, which I wouldn’t recommend using anyway. If you are using Dependency Injection – and you should be – there is never a reason to be worried about your references not being assigned.

John Jacobson writes: “Reference-counted interfaces are what you should be using anyway, keeping your actual class implementations private and hidden in the implementation section of their unit.” He’s absolutely correct. Ultimately, you should be coding against abstractions, not implementations. If you are doing that – and of course you should be in Delphi via reference-counted interfaces – then you shouldn’t be freeing anything other than local, non-volatile variables, and you should never need to FreeAndNil those.

By now many of you all are probably saying “This guy is a nut! Get over the FreeAndNil thing already!” Well, okay, I agree I’ve beaten this drum quite a bit. And I agree that I’m pretty much a nut. But I really think that it represents a critical point that we all need to understand in order to move forward: Code needs to be under control. Good development dictates that we limit the scope of references. And finally, out of control references are the cause of many, many bugs, and are the cause of 100% of access violations. The use of FreeAndNil is a blatant symptom of this problem.

Here’s the bottom line, and I’ll be blunt: If you are arguing in favor of using FreeAndNil, what I really hear you saying is “I learned to code in 1991 and haven’t learned a thing since.”

Okay, that’s a little harsh. I’ll put it a bit softer: Worrying about pointers and references is, well, an old-fashioned way of thinking. There are ways to code so that worrying about whether a pointer is valid or not is no longer something you have to worry about. This way of coding is more productive, cleaner, and more effective. It produces high-quality, testable code.

Doesn’t that sound enticing and intriguing? Why wouldn’t you want to learn more?

I am in the latter camp. I am in the latter camp for a very good reason: because the latter camp is right. There’s almost never a reason to use FreeAndNil in the new code that you write.

And I want to be clear about that – I’m talking specifically about new code. If you have old code that was designed in such a way that the scope of your pointers wasn’t tightly contained, then yes, you’ll probably have to use FreeAndNil to make that code work right. But if you are doing that, I hope that you recognize that it is a problem and plan to refactor the code to contain the scope of your pointers. I’m totally aware that legacy code may very well require that you nil pointers because the scope of those pointers is not well managed. I know this because our system has such code, and thus contains calls to FreeAndNil.

So, anyway, here’s an explanation why I think that FreeAndNil should only be used very, very sparingly.

Before I start, I want to add that this blog post is heavily influenced by the eloquent wisdom and excellent explanations of a number of people who participated in the thread, including Bob Dawson, Wayne Niddery, Rudy Velthuis, Joanna Carter, Mark Edington, and Pieter Zijlstra. Any profundity, excellent examples, pithy similes, or clear descriptions of things are very likely a result of me reading their posts in the thread.

Introduction

FreeAndNil is a function declared in the SysUtils unit and was introduced in Delphi 4, if I recall correctly. I myself suspect that it was added more because of customer demand than because the R&D Team felt some need for it, and I’m reasonably sure that if they had it to do over again, they would not have added it at all. But there it is.

Given what it does, you might expect FreeAndNil to be declared as taking a typed TObject parameter – but it doesn’t. It looks the way it does for a couple of reasons. First, the parameter passed needs to be a var parameter because two things need to happen: the object referenced needs to be freed, and the reference itself needs to be altered – that is, set to nil. Thus, you need the freedom to change both the reference and the thing being referenced, which the var parameter gives you. Second, the parameter is untyped because when you pass a var parameter, “Types of actual and formal var parameters must be identical.” Given that, if you declared the parameter as a TObject, then you could pass only a TObject to the procedure and not any of its descendants.
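For reference, the implementation in SysUtils – as best I recall it from the Delphi versions of this era – looks roughly like this:

```pascal
procedure FreeAndNil(var Obj);
var
  Temp: TObject;
begin
  Temp := TObject(Obj);   // grab the object through the untyped var parameter
  Pointer(Obj) := nil;    // nil the reference first...
  Temp.Free;              // ...then free the object
end;
```

Note the order of operations: the reference is set to nil before Free is called, so any code that runs during destruction sees nil rather than a reference to a half-destroyed object.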

I should point out that the use (or non-use) of FreeAndNil is not an insignificant and uncontroversial issue. The thread that spawned this post is, typically, a long one. Allen Bauer, the Chief Scientist at Embarcadero, blogged about it, and quite a discussion ensued in the comments – so much so that he felt the need to blog about it again. StackOverflow has a whole bunch of questions on the subject. The VCL uses FreeAndNil in places that I wouldn’t necessarily approve of. I think that in most places its use indicates, uhm, an “older” design choice that probably wouldn’t be made today, given newer language features. In any event, clearly folks have strong views on this, and the use (or not) of FreeAndNil is not “settled science” (though I believe it should be…).

Okay, So When Should You Use FreeAndNil?

In my mind, the answer to the question “When should I use FreeAndNil?” is “never”, or at least “Almost never, and if you must use it, make sure that there is a really, really good reason to do so and that you clearly document that reason”. I myself have never (to my best recollection – I fully expect someone to find some obscure reference to code I wrote years ago that uses it….) used the procedure and see no possible scenario where I would want or need to in the code I write. My recommendation is that you never use it either, because I don’t believe that you are writing code that needs it either (unless you are on the Delphi R&D team working in the bowels of the RTL, I suppose).

Why I Don’t Use FreeAndNil and Why You Shouldn’t Either

There are a number of reasons why I don’t use FreeAndNil.

First, a call to Free is sufficient. It gets the job done. Free will, well, free the memory associated with your reference. It does the job completely and totally. Can’t do any more. Setting a pointer to nil doesn’t get you anything. The memory isn’t going to be more free or freed faster as a result of calling FreeAndNil. Since it’s always a good practice to use exactly the right tool and nothing more, there’s no need to make the extra call. Consider this – there’s no SetToZero call for integers, and if there were, why would you use it? All code should be written with “considered intent,” and the indiscriminate use of FreeAndNil shows a lack of consideration and intent.

Second, using FreeAndNil where Free alone will do just fine obfuscates your code. Using a call that executes unneeded instructions sends a message to future readers of the code that shouldn’t be sent. A subsequent developer maintaining your code might look at the call and say “What the heck? Why is FreeAndNil being used here and not just Free? Is something going on here that I don’t know about?” Time might then be wasted investigating, and a satisfactory answer may never be found. Code that uses Free and FreeAndNil as exactly the same thing has reduced the amount of information that your code can convey. And when you are dealing with something as important as memory management, you certainly don’t want to reduce the amount of information your code can convey.

FreeAndNil has a clear meaning – it is a very clear indicator that the pointer being freed has meaning outside of the scope where it is used. If it doesn’t say that, then you shouldn’t use it. If you use FreeAndNil when that is not the case, then you’ve sent a bad message to future maintainers. Clarity in code is paramount – nothing should be done to decrease that clarity. Code should be intentional and there for a reason. Code that is there that doesn’t need to be can be misleading and distracting. Misleading and distracting are not two thoughts that developers want crossing their minds while maintaining code.

Free has meaning as well – it clearly states that the use of that pointer reference is now done and over with. As noted above, there’s no need to call anything else. The indiscriminate use of FreeAndNil fails to draw the clear distinction between Free and FreeAndNil. Losing clarity in your code is bad, right?

Third, one of the justifications for using FreeAndNil is that it is defensive and that it protects against using a pointer at the wrong time. The claim is that if a pointer is nil and you then use that pointer, you’ll know right away, and the bug will be easy to find. Fair enough, as far as it goes. But if you feel the need to use FreeAndNil to ensure that you don’t misuse a pointer somewhere, then it is very likely you have a design problem: the scope of the pointer in question is larger than the use of that pointer in your code. Or, in other words, the scope of a pointer and the scope of the use of that pointer aren’t the same, and they should be. If they aren’t, you are simply begging for trouble.

If you want to really be persnickety, a variable that is broader in scope than its use is a form of a global variable. And I would hope that we agree that global variables are bad. If we can’t agree on that, well, then we can’t agree on anything.

Maintaining proper scope is critical to good, clean code. I’ve discussed this before, and so I won’t go on about it here. The germane point here is that if a pointer is of the “lying around waiting to be used” variety, then there is no limit to the mischief that this wayward pointer can cause. So, if you don’t “leave a pointer lying around”, you can’t misuse it. So, well, don’t leave a pointer lying around. If you don’t leave roller skates at the bottom of the stairs, you can’t go careening down the hallway. Keep your pointers and the use of those pointers in the same scope, and you can’t misuse a pointer. And you won’t feel the need to use FreeAndNil.

And if you do use it for defensive reasons, you have to use it everywhere. You have to use it in every single place it is needed, and you can’t miss a single one. And every single maintainer of the code after you has to as well. One instance of not using it basically removes all the reasons for using it. It’s a much better plan to simply control your scope and never feel the need for it.

So, in the end…

In the end, I guess the argument for using FreeAndNil seems to boil down to:

“Of course I use FreeAndNil – it protects against bugs and makes other bugs easy to find, and besides, what’s the harm?”

Well, it would seem that none of those reasons is really true. The real argument is:

“If your code requires you to use FreeAndNil to reveal and easily find bugs, then your design is wrong. Good, clean code never feels the need to worry about errant pointers.”

Hey, look: design your code however you like. However, if you were to ask me, I’d say to design your code in such a way that FreeAndNil sends no signal, doesn’t find any bugs any sooner, doesn’t protect against anything, and thus becomes utterly superfluous.

If you missed CodeRage 6, or you didn’t get to every session that you wanted to see (hear?), it is now all online. That link also points to the latest offers and ways to find out more about XE2. I love XE2, and think it’s the best Delphi ever. And I say that not even using the FireMonkey/cross-platform stuff – so it’s even better than I think.

I was digging around in my boxes in the basement – we’ve moved a ton, and so I’ve got stuff scattered all over – and came across a CD labeled “Website”. I opened it up, and lo and behold, there was a copy of one of my very first web sites, built with NetObjects Fusion. It was fun to poke around and see some of my really old content. (As a point of reference, the homepage has “This site was last updated on Tuesday, December 18, 2001” at the bottom. Remember when we used to do that?) Actually, I think some of the stuff will end up on my current site. Not all the links work, but if you’ve been around a while, you might remember some of it. Most of it was hand-maintained, but you can see where I tried to integrate in some early Delphi-based CGI stuff. I actually still like the colors and the template.

The Generics.Defaults.pas unit is an interesting one – you may never have cause to use it directly, but it contains a lot of interesting stuff in support of the classes in Generics.Collections.pas. It’s worth poking around in. I was doing just that, and came across some interesting code – a function called (and I quote) BobJenkinsHash. It is used rather extensively throughout the unit, and appears to be a general-purpose hash function. Who is Bob Jenkins, you may ask? Well, apparently he’s a guy that wrote a very powerful and useful hash function, and Embarcadero has utilized it as part of their generics library. And here’s the interesting part – they implemented it using a set of GOTO(!!) statements whose use, well – I seriously can’t believe I’m actually saying this – actually kind of makes sense. The C code depends on the “fall through” nature of C’s switch statement, and the GOTO calls actually mimic that rather nicely. I’m open to suggestions on how it might have been written better. (Again – I can’t believe I just said that, but there it is.) And to redeem myself, I’ll chastise the author for not defining his interfaces in their own unit. (Sorry, Barry – I had to do something to restore my street cred for actually liking the way the GOTOs worked…) Anyway, an interesting little find in the bowels of the Delphi RTL.

I’ve added a new category, Three Sentence Movie Reviews. I watch a lot of movies, and have all these aspirations of writing up movie reviews when I watch, but I never do because it takes too long. So I thought I’d simply limit myself to three sentences in reviewing the film, and that way I might actually get the review done. I might have to get a bit creative – sort of like keeping tweets to under 140 characters. Should be fun. If you read this blog via DelphiFeeds, you won’t see it as I’ll not be putting the Delphi category tag on them. Just another reason to subscribe to my real feed.

I thought that a little more explanation would be in order. Let’s say you have ten cool features on your “Things Customers are Screaming For” list. There are two basic approaches you can take to getting them done: You can do them in series or in parallel. If you do them in parallel, you’ll get them all done sooner, but you may not get them done as thoroughly. If you do them in series, it will take you longer to do them all, but you’ll likely get each one done more thoroughly.

However, doing them in series – that is, sequentially doing only the most important remaining item – has an added benefit: it can help you not do things that you shouldn’t do. You may have ten things on your “We need to get these done right away” list, but as time passes, some of those things may prove to be not needed, overtaken by events, or just plain dumb ideas. Doing things in parallel may mean that you get everything done sooner, but it also means that you might do something that proves to be a waste of time later on.

For example, if you have a team of five folks, and you have five ideas that take six man months each, you might give each person one idea to work on, and then six months later, you have all five ideas done. Great! But uh oh! -- as it turns out, over the course of those six months, things changed and events transpired in such a way that two of the ideas weren’t really good ideas after all, and at the end of the six months you regret ever starting on them. So in the end, you have three things done that needed doing, but have wasted your time on two ideas that you should have left undone. Furthermore, since you only had one person working on each idea, you may not get a fully fleshed out solution, but instead, one that may have missing features or is not complete in some way.

But consider what happens if you work on them in series: say that instead of starting in all at once on the entire list, you pick the single most important of the ideas on the list. You focus your whole team on doing that one idea. You will likely be able to get it done somewhat sooner, say in one or two months instead of the six months in our example. (Five team members working on a six man-month project will likely take a bit longer because of transaction costs.) In addition, you will get a “five-headed” solution instead of a “one-headed” one, and thus the solution would likely be more complete, fleshed out, and feature rich. In other words, you might very well end up doing one thing properly and thoroughly instead of doing five things not so completely.

The added benefit comes when, after doing the most important project, you realize that one of the ideas you had originally thought was awesome isn’t really that awesome, and that you can take it off the list and not waste time on it. You might add another item to the list, or another item that was on the list might suddenly become vastly more important than it was at the start of the first project. Either way, you repeat the process and start working on the next most important thing. You end up with a very nice implementation of each project you do undertake, and you don’t do the projects that shouldn’t be done.

In a rapidly changing technical environment, that which looks like a no-brainer in January might be old news by July. Obviously you want to avoid working on that project. A practical example: say you are a software tools vendor, and people are pressing you to do, say, a development tool for Windows Mobile 6. You could choose to add staff and get that request done sooner, or you could stay the course and do more important things, only to discover with massive relief that you never did Windows Mobile 6 at all by the time it becomes a legacy technology. (Sound familiar?)

Now, I’ll grant that if you follow this plan, you’ll end up with fewer features in the long run. But you’ll also end up with more complete features with less wasted effort. You won’t have spent time on things you ultimately should not have. It might take a little longer to get any particular feature to market, but in the above example, you’ll end up with three really solid features and no time spent working on things that you should not have worked on at all instead of five half-baked features, two of which were a waste of time.

Repeat this process enough, and it becomes much more likely that you will end up with a product that has the right – and fully realized – feature set. In many ways, inefficiency is the result of choosing to do the wrong thing. If you keep your choices fine-grained – that is, always put your effort only into the thing that is obviously the very most important thing to do right now – you will end up doing the right thing every time, even if there is slightly less of it.

It’s often been said that knowing what you should do is easy; it’s knowing what you shouldn’t do that’s hard. If you repeatedly focus on and complete the one single thing you absolutely should do, and do it well, it becomes much clearer which things you should not do. So ultimately, you have to choose: more features done less thoroughly, with time spent on things that turn out to be a waste, or fewer, more complete features and fewer projects you shouldn’t have done at all.

WooHoo! Made it to #50 of these Flotsam and Jetsam articles. I published the first one of these right after I left Embarcadero on July 20, 2010. That’s about once every ten days or so. Not bad. Seems like a long time ago in a galaxy far, far away when that first one came out. A lot of good things have happened since then. Thanks for sticking around for the fiftieth one.

Hallvard Vassbotn has been sighted in the wild! Hallvard is an amazing developer and a great writer, and I’m delighted at the prospect of him starting to blog again, especially given his propensity to stretch Delphi language and RTL features to the limit. Given all the new things that have happened in these areas since Hallvard’s last blog post, one can only hope for really interesting stuff.

Ninite.com is a really cool site that provides a valuable and helpful service: it lets you create a single installer for a broad, selectable collection of popular software. This is particularly useful when setting up a new machine. There are always a million of these things that you want to install – Skype, Chrome, Firefox, your favorite IM client, media players, various utilities, etc. – and Ninite.com allows you to select all of them and get a single installer for the lot. It does all the “smart” things you want it to, like getting the 64-bit version where possible and ensuring you have the latest versions – and, best of all, it clicks all the “Next” buttons so you don’t have to. Sweet.

I tweeted this notion this week – “Crazy Idea of the Day: Every class that raises an exception unique to itself should have its own exception type to raise.” – and I thought I’d expand on it a bit here. One of my pet peeves is unclear error messages. Somewhat of a corollary to that is the irritated feeling you get when an exception is raised but you can’t tell right away where it came from. Thus was born the notion above: if your class raises an exception unique to itself (i.e., not something “normal” like EConvertError or EFileNotFoundException), it should raise an exception type defined specifically for that class. This way, you can put a very descriptive error message in the exception, and the developer or user seeing it knows exactly where it came from. In addition, it allows anyone using your class to easily trap exceptions specific to it. I’ve been doing this for years, but I’ve never seen anyone in the Delphi community really discuss it.
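As a rough sketch of the idea – the class and exception names here are made up purely for illustration – a Delphi class with its own exception type might look like this:

```delphi
unit Widget;

interface

uses
  SysUtils;

type
  // An exception type specific to TWidgetProcessor, so callers can
  // trap it distinctly and know exactly which class raised it.
  EWidgetProcessorError = class(Exception);

  TWidgetProcessor = class
  public
    procedure Process(aCount: Integer);
  end;

implementation

procedure TWidgetProcessor.Process(aCount: Integer);
begin
  // Raise the class-specific type with a descriptive message that
  // names the class and method, so the error is self-locating.
  if aCount < 0 then
    raise EWidgetProcessorError.CreateFmt(
      'TWidgetProcessor.Process: Count must be non-negative, but got %d',
      [aCount]);
  // ...normal processing...
end;

end.
```

A caller can then write `on E: EWidgetProcessorError do` in an `except` block to handle errors from this class specifically, while letting every other exception propagate untouched.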

A Pithy Quote for You

"No matter how rich you become, how famous or powerful, when you die the size of your funeral will still pretty much depend on the weather."
– Michael Pritchard

General Disclaimer

The views I express here are entirely my own and not necessarily those of any other rational person or organization. However, I strongly recommend that you agree with pretty much everything I say because, well, I'm right. Most of the time. Except when I'm not, in which case, you shouldn't agree with me.