A team using C++.. When I walked in, they had just finished writing the second for loop. Both for loops sat one right after the other in the same method.

One iterated a map that associated users and subscriptions with IDs (map int -> std::pair<std::string, Subscription>). The other iterated a map that associated time limits with IDs (map int -> unsigned long). I asked a few questions. Was there a time limit for each subscription and user, and vice versa? Yes. When we add or delete an entry in one map, do we have to do the same thing in the other map? Yes. Why wasn't there just one map, one which associated each time limit, user, and subscription with an ID?

Why didn't they do it that way?

...

A team using C#.. A model class needs to tell a view that a new item has been added. Seems like a job for .NET's events. A programmer adds an event to the model and registers the view on the event with a delegate. Works fine. Later the team discovers that sometimes when an item is updated the view needs to be notified. A programmer adds a second event and registers the view on the model a second time with another delegate. A few days later, they discover that the view needs to know something else. Another programmer adds yet another event to the model. Now there are three events on the model and only one object registers for them, but you can't tell by looking at the model class.

> Oddly (to me) many people, even those working in software design,
> simply do not consider such issues. Those who do become 'system
> designers' or 'software architects'!
>
> The people you comment on, Michael, are not in your league. I think
> it's as simple as that.

I don't know. I'm tempted to approach things the same way they did. I stop myself, but what makes it so tempting?

It's tempting because it's easy to just add to the code what you need to make your bit work. That way you don't have to look at and understand what others have done. It's less work.

We all (I certainly) feel the temptation. It's just that some people can leave it at that, and others know that if they do, they'll be walking around with a bad feeling for the rest of the day. So they take the extra time to do it properly, leave the code a little clearer, and actually feel better for having set something *right*.

> It's tempting because it's easy to just add to the code what you need
> to make your bit work. That way you don't have to look at and
> understand what others have done. It's less work.
>
> We all (I certainly) feel the temptation. It's just that some people
> can leave it at that, and others know that if they do, they'll be
> walking around with a bad feeling for the rest of the day. So they
> take the extra time to do it properly, leave the code a little
> clearer, and actually feel better for having set something *right*.

I agree, but let's get a little more specific. In both those scenarios, it seems that it was easier to do it the wrong way when it was time to change. Why?

> I agree, but let's get a little more specific. In both those
> scenarios, it seems that it was easier to do it the wrong way when it
> was time to change. Why?

I'm going to play devil's advocate and say that the "wrong" way is sometimes the right thing to do. If you think of programming as a purely economic endeavor, where you are trying to minimize your short- and long-term cost and maximize your revenue, then sometimes the cost/benefit analysis favors the kludge.

It's always tempting to do the kludge because the short-term cost is generally lower, sometimes a lot lower, but usually the long-term (maintenance) cost is a lot higher, so you should do it the right way. However, there are some mitigating factors that can minimize the long-term cost of the kludge:

1) The effects of the kludge are highly local, that is, they don't affect a lot of other code. For example, a kludgy implementation of a side-effect-less method is going to be pretty local, whereas a kludge in an interface or the signature of a commonly used method is going to be very non-local.

2) The cost of fixing the kludge will not increase with time. This includes the programmer(s) needing to take time to rediscover the fix. So if the fix for the kludge is obvious, or well-documented, that helps.

I personally don't have a problem with occasionally cutting corners on a method body (but preserving correctness), and putting in a comment at the top that begins with "HACK:". This is especially true when the main benefit of the "right" solution would be improved performance--it's amazing to me how rarely I need to come back and fix these.

Of course, most good programmers are concerned not only with economics, but also with aesthetics, and therefore are willing to incur more cost to make their code more aesthetically pleasing. But if that's what we're doing--paying more for beauty--let's do so with our eyes open.

> I'm going to play devil's advocate and say that the "wrong" way is
> sometimes the right thing to do. If you think of programming as a
> purely economic endeavor, where you are trying to minimize your
> short- and long-term cost and maximize your revenue, then sometimes
> the cost/benefit analysis favors the kludge.
>
> It's always tempting to do the kludge because the short-term cost is
> generally lower, sometimes a lot lower, but usually the long-term
> (maintenance) cost is a lot higher, so you should do it the right
> way. However, there are some mitigating factors that can minimize the
> long-term cost of the kludge:
>
> 1) The effects of the kludge are highly local, that is, they don't
> affect a lot of other code. For example, a kludgy implementation of a
> side-effect-less method is going to be pretty local, whereas a kludge
> in an interface or the signature of a commonly used method is going
> to be very non-local.
>
> 2) The cost of fixing the kludge will not increase with time. This
> includes the programmer(s) needing to take time to rediscover the
> fix. So if the fix for the kludge is obvious, or well-documented,
> that helps.

It's interesting to look at both of these cases then. It looks like the perceived cost of fixing the kludges did increase with time, so much so that the programmers started to do the wrong thing. And, it all happened very quickly. We can say that they just weren't being diligent, but is there more to this?

There are probably as many reasons as there are programmers, but here are some patterns to consider. First, the programmer adding the new code must understand the original code. Second, the programmer must understand the commonalities between the new code and the old code. Third, the programmer must be confident that changing the old code won't break anything.

It's a shame how many shops I've been in where the first step isn't even possible. The second step is where the real understanding begins. When the original code is already poorly factored, teasing out its common behavior becomes as complex as the number of side effects and inappropriate factorings it contains. Without collective code ownership and good communication (or pairing), understanding code written by another programmer is slow and subject to error.

The third step, of course, requires the team to have and confidently use a safety net of unit tests. Without unit tests, it's costly in time and effort to verify the code works as it appears to work and continues to work after changes are made. The "voodoo" of leaving the old code alone almost always does the job -- it takes no time, and whatever was working before will still be working.

> There are probably as many reasons as there are programmers, but here
> are some patterns to consider. First, the programmer adding the new
> code must understand the original code. Second, the programmer must
> understand the commonalities between the new code and the old code.
> Third, the programmer must be confident that changing the old code
> won't break anything.

I agree. Those are all important, but in this case I think that the programmers could've avoided a bit of trouble if they picked their abstractions more carefully.

For instance, holding a std::pair in a map is fine, until you need to add another item. And using an event is fine, until you need to send another message to the objects that are registered for it. Some abstractions don't evolve as well as others. And, it's a shame. When we pick a way to implement something for one, sometimes the choice we make can make it harder to change to two. If the price of two is too high, doing the wrong thing is easier. We've created our own roadblock, and we have to back up and correct.

I am a little curious about the second item. Without separate events, if you somehow managed to cram all that data into one event callback, wouldn't you end up with a single callback that used case analysis to figure out what happened, which is a code smell of its own? Why is it important for the model to know that there's only one view that registers events? What would have been a better implementation? (In Java the Listener interface would have just been expanded by one method for each event. Would it have been okay in Java, but not in C#?)

> I am a little curious about the second item. Without separate events,
> if you somehow managed to cram all that data into one event callback,
> wouldn't you end up with a single callback that used case analysis to
> figure out what happened, which is a code smell of its own? Why is it
> important for the model to know that there's only one view that
> registers events? What would have been a better implementation? (In
> Java the Listener interface would have just been expanded by one
> method for each event. Would it have been okay in Java, but not in
> C#?)

That's the unfortunate thing about events in C#. It's great for one, but the 'price of two' is too high. You have to refactor back to a Java-style listener to make it sensible.

It would be nice if C# had something like an event that had an interface type. Then you could use += and -= and send whatever messages the interface supports. Sort of a built-in composite type.

Re why it is important that the model know that a single view registered for the events.. To me, it is just cleaner to see that all of the events are being received by the same object. If we make that explicit in the code by using an interface, it can make the design clearer.

> Re why it is important that the model know that a single view
> registered for the events.. To me, it is just cleaner to see that all
> of the events are being received by the same object. If we make that
> explicit in the code by using an interface, it can make the design
> clearer.

One could argue that it's not the model's place to know who is going to listen to what combination of its events.

(Personally, I sympathize with your interface-based approach, since I like to have the compiler know as much as possible. But I know some talented programmers who would vigorously argue the other side.)

> I agree. Those are all important, but in this case I think that the
> programmers could've avoided a bit of trouble if they picked their
> abstractions more carefully.

One thing that has become more apparent to me as I practice object-oriented programming more is that it is important to pick the right abstractions. Rarely are the types built into a language good abstractions for a business problem domain. However, I have never learned to pick the right abstraction in advance very well, so I do a lot of refactoring. But who was it that said, "you're a lot smarter than me, so we'll try it your way"?