Thursday, July 5, 2012

The Misuse of Reuse

The software industry has been pursuing reuse for at least four decades. The approach has changed over that time. It started with structured programming promising reusable snippets of code. We then moved to object-oriented programming promising reuse through inheritance. Today we are focusing on service-oriented architectures promising reusable services that can be written once and then used by multiple applications.

For forty years we have been pursuing reuse and for forty years we have been failing. Perhaps it is time to reexamine the goal itself.

Let's start by reviewing the arguments in favor of reuse. They are quite simple. And, as we will soon see, they are quite flawed.

The argument goes as follows. Let's say we have three systems that all implement the same function, say Function 1. This situation is shown in Figure 1.

Figure 1. Three Systems Implementing Function 1

It seems fairly obvious that implementing Function 1 three times is an ineffective use of resources. A much better way of implementing these three systems is to share a single implementation of Function 1, as shown in Figure 2.

Figure 2. Three Systems Sharing A Single Function 1

In general, if there are S systems implementing Function 1 and it costs D dollars to implement Function 1, then the cost savings from reuse is given by

D * (S - 1)

If D is $10,000 and S is 5, then reuse should save us $40,000. Right? Not so fast.
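
The naive arithmetic can be sketched in a few lines. The function name is mine; the figures are the article's example.

```python
# Naive savings model from the article: S systems each need a function
# that costs D dollars to implement. Sharing one implementation "saves"
# the cost of the S - 1 redundant copies.
def naive_reuse_savings(d_cost: int, n_systems: int) -> int:
    """Theoretical savings: D * (S - 1)."""
    return d_cost * (n_systems - 1)

print(naive_reuse_savings(10_000, 5))  # 40000, the article's $40,000 figure
```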

In order to evaluate the claim of cost savings through reuse, we need to apply some principles of IT Complexity Analytics. IT Complexity Analytics tells us that the complexity of Function 1 is exponentially related to the number of systems using the function. This is because each system is not using the exact same function; it is using some variant of it. Function 1 needs to be generalized for every possible system that might someday use it, not only those we know about, but those we don't know about. This adds considerable complexity to Function 1.

If the size of the circle reflects the complexity of the functionality, then a much more realistic depiction of the reuse scenario is shown in Figure 3.

Figure 3. Realistic Depiction of Sharing Functionality

Since system cost is directly related to system complexity (one of the axioms of IT Complexity Analytics), we can say that in most cases the theoretical cost savings from reusing functionality is overwhelmed by the actual cost of the newly introduced complexity.
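
As a toy illustration of that claim, compare the naive savings with an exponential generalization penalty. The growth factor here is an invented assumption for illustration only, not a constant from the article.

```python
# Toy model, illustrative only. Assume the shared function's cost grows
# exponentially with the number of consuming systems, because each new
# consumer forces further generalization. The growth factor 1.5 is an
# invented assumption, not a measured value.
def generalized_cost(d_cost: float, n_systems: int, growth: float = 1.5) -> float:
    return d_cost * growth ** (n_systems - 1)

d, s = 10_000, 5
theoretical_savings = d * (s - 1)              # the naive D * (S - 1) figure
complexity_penalty = generalized_cost(d, s) - d

print(f"theoretical savings: ${theoretical_savings:,.0f}")  # $40,000
print(f"complexity penalty:  ${complexity_penalty:,.0f}")   # $40,625
```

Under these (assumed) numbers, the penalty alone already exceeds the entire theoretical savings.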

However, the situation is even worse than this. Not only is the cost savings from reuse rarely achieved, but a number of additional problems are introduced.

For example, we now have a single point of failure. If the system implementing Function 1 fails, all three of our systems fail.

We have also compromised our security. As IT Complexity Analytics predicts, the overall security of a system is inversely related to its complexity. The more complex a system is, the lower its inherent ability to maintain security.

And we have created a highly inefficient system for running on a Cloud. The extra cloud segments we will need to pull in to support our reuse will dramatically increase our cloud costs.

Given all of the problems we have created, we most likely would have been better off not attempting to create a reusable function in the first place.

Now I should point out that I am not totally opposed to reuse. There are situations in which reuse can pay dividends.

In general, a reuse strategy is indicated when the inherent complexity of the functionality being shared is high and the usage of that functionality is relatively standard. In these situations, the complexity of the functionality dominates over the complexity of the sharing of the functionality. But this situation is unusual.

When should you pursue reuse? It all comes down to complexity. Will your overall system be more complex with or without shared functionality? This requires a careful measure of system complexity with and without the proposed sharing. If you can lower system complexity by sharing, do it. If you can't, don't.
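
A hypothetical worked example of that rule follows. The complexity scores are invented placeholders; substitute whatever complexity metric you actually use.

```python
# Invented complexity scores, in arbitrary units, for a portfolio of
# three systems. The decision rule itself is just a comparison: share
# only if it lowers total complexity.
without_sharing = 3 * 12       # three independent implementations of Function 1
with_sharing = 30 + 3 * 4      # one generalized function plus per-system integration

if with_sharing < without_sharing:
    print("share: lower overall complexity")
else:
    print("don't share: the generalized function costs more than it saves")
```

With these numbers the shared design scores 42 against 36, so the rule says don't share; with a more standard, higher-complexity function the comparison could easily go the other way.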

Complexity trumps reuse. Reuse is not our goal, it is a possible path to our goal. And more often than not, it isn't even a path, it is a distraction. Our real goal is not more reusable IT systems, it is simpler IT systems. Simpler systems are cheaper to build, easier to maintain, more secure, and more reliable. That is something you can bank on. Unlike reuse.

...............................

Roger Sessions writes about the topic of Organizational Complexity and IT. If you would like to get email notifications about new posts, use the widget on the right.

23 comments:

For the same function in 3 different systems, I think there is no one model to rule them all: actually there are possibly 3 contexts at play with the 3 systems. As described in Domain-Driven Design, each model is tied to a context (a context is the environment that sets the meaning of a word).

But that's right simplification is the key: here are my thoughts about complexity and simplicity (with some thoughts borrowed from Rich Hickey) http://www.zenmodeler.com/design-matters/software-design/simple-and-easy-software-design-qcon-london-2012

It's always tough to generalize... Actually the first and leading use (Fortran II?) of reuse was with subroutine libraries, which by any measure have been highly successful. The particular guideline "In general, a reuse strategy is indicated when the inherent complexity of the functionality being shared is high and the usage of that functionality is relatively standard" is particularly misleading. Consider for instance a square root routine. Simplicity of functionality and amount of use are important. In any case, principles of design that support reuse are always useful, since a major use of reuse is maintenance. Without this, maintenance and evolution quickly lead to patchwork and can eventually lead, when reuse is no longer possible, to total replacement of systems.

A good, thought-provoking article! A few points, though. A lot depends upon what you mean by 'reuse'. If we consider 'reuse' and 'complexity' in isolation, or in too narrow a sense, we miss many opportunities for architecture to add value to software development and for getting good software to market quickly.

I think we have been doing pretty well with 'reuse' of one kind or another over the past 40 years. For example, POSIX has had a measure of success in ensuring that code is portable across different operating platforms, enabling C library implementations to be written that enable application programmers to perform both 'complex' and 'simple' tasks using standard system calls. The BSDs and Linux have been ported to many platforms, from z390 mainframes to Android phones, with enormous code reuse. More recently, we have seen the rise of projects such as Eclipse, Struts, Hibernate, JSF, Camel and a host of others with masses of portable, reusable code, all of it free (as in 'beer' and as in 'freedom'), some of it doing 'simple' stuff and some of it doing more 'complex' stuff.

I think that what trumps everything is 'cost effectiveness'. In other words, what delivers the required functionality to market with the best overall combination of time, cost and quality? This is, after all, what our customers expect and are paying us for. They want good features in their hands, in good time, at a good price.

For example, for our next world-beating shoot-em-up, we may 'reuse' PC hardware and firmware and a particular operating system. We will probably 'reuse' an IDE, a physics engine and maybe some rendering and animation software. On the other hand, we may discover that a critical bit of our polygon rendering subsystem just won't cut it in today's market and we have to hand-code that bit in assembler ourselves; it's complex and difficult, but it will beat the competition and bring the money in. And we can 'reuse' the assembler and the shared library we've developed on this project and sell it on to others. Maybe we have been enlightened enough to save a lot of legal costs and 'reuse' the GPL, release our code under that and make money doing support and consultancy. Maybe this is a big project and we need to 'reuse' our Human Resources department and the services they provide (including the HR system that helps us get a programmer's bum in a chair more quickly). We may also succeed in 'reusing' agile software development techniques and 'reusing' established supply chains to market and distribute our software.

I think that architects really earn their pay when they understand the many types of use/reuse, complexity/simplicity and come up with a good blueprint aligning everything to deliver the best time-to-market/cost/quality combination.

The article only focuses on software programming and code reuse. In actual practice, reuse occurs in all other phases of the software project in addition to development: systems analysis, design, testing, implementation and maintenance.

I agree with the points made in the post. However, reuse is not only a tool for reducing initial development costs; it is also a tool for achieving increased maintainability on an ongoing basis. If a change is needed, it only needs to be made once, versus many times. The problem is that this can also have an unintended side effect: increased testing cost, and therefore increased time to market, any time that shared code needs to be modified. So I think another factor to be considered for reuse is an estimate of how frequently that shared bit of code will need to be modified.

Code reusability is not the target, it's just an expected result of good programming.

When you create a solution for case A and then work on case B, which is quite similar, a good solution should be generic, covering both cases A and B. That usually results in reusable code: a class, library, script, service, etc.

The same could be said of simplicity: it's not the target, it's an expected result of good programming.

Reusability and simplicity are not mutually exclusive, and often both result from proper code refactoring and optimization.

I think we are on the right track, software solutions and tools are evolving.

I must say it is one of the best articles I have read. Probably I liked it more because it is in sync with the principles I believe in and share with others. The beauty of this article is the way it is structured and articulated. Also, I would like to add a few notes from one of my work-in-progress articles on my blog - http://beyondyourcode.com.

-- When making architectural and design decisions, add the simple rule "Return On Investment" to your design requirements. ROI is most often framed from the technical point of view, like maintainability and extensibility, but you should try to add a business sense as well: "How does the customer benefit from this, or otherwise?" Take a common businessman's approach: "If I spend 2 days coding for reusability, will it save me 10 days later on?"

-- I liked the point that 'reusability is not the goal, it is just a path'. It should emerge by itself from the behavior of the problem, which is then followed by the behavior of the program.

I do agree with you that complexity is an important point to take into account when reusing code. We also have to take into account that the more complex the functionality is, the more impact a change to it has on the system as a whole. If we don't have good control over the points where we reused it, we will have headaches. So even though reusing code could save money, we may have to spend more time analyzing the impacts when changes are necessary, and that time costs money.

Télcio, you raise a good point. There is a bad harmonic at play. The more reusable a system is, the more complex it is. And the more complex it is, the more likely it is to fail. And the more the system has been reused, the greater the impact of that failure. - Roger

Roger, what you describe is really a practical example of the conundrum between specialisation and generalisation found in nature, where successful 'design' is always a compromise.

You can also turn your example around to show that the biggest cost saving in IT is provided not by designing to cope with the complexity of many variations in business requirements, but by reducing those variations. That is, start with 're-use' (generalisation) in the business, then worry about re-use in IT. Best wishes - Ron

Reuse has the most bang for the buck during design. Once I learn a design that works to solve a type of problem, I will reuse it. (Some people like to give everything a name and call this design patterns).

Reuse during construction and execution really only works for a few simple cases, for example math subroutines. Even then, there is no agreement on what a string object should be able to do or not do. Reuse during construction and execution is hard, adds to complexity, and may not be worth the effort. Look at how many packages bring their own copies of Java or DLLs.

Hi Roger, I think your point is very valid, though it is very situation-based. And I have seen situations where people tend to forget the basic principle of architecture, which is 'keep it simple', and they over-architect and over-design things.

> Now I should point out that I am not totally opposed to reuse. There are situations in which reuse can pay dividends.

I was forwarded this article and I know it will be used as an argument against reuse; perhaps you can be more explicit about the reasons "for reuse". Personally, I find reuse of limited use. The worst case is "Helper"/"Util" classes. But composition/delegation makes a good argument for reuse. Inheritance is good for very specific cases but not much else. What do you think?

About Me

Roger Sessions is the CTO of Roger Sessions, Inc. and ObjectWatch. He has written seven books and dozens of influential white papers. He is recognized as a Fellow of the International Association of Software Architects. He has spoken at hundreds of conferences around the world. He holds multiple patents in software and Enterprise Architecture. He is the inventor of the SIP methodology, a patented Enterprise Architecture Methodology for minimizing the complexity of large IT systems. Join him on Twitter: @RSessions.