A Summary of OO Principles

This article is a brief collection of OO rules that can guide us in writing well-designed code.

Introduction

OO Design is more than just using an OO language. Over the years many bright
programmers have built up a collection of rules that help to build
well-designed, maintainable code. This article lists the main rules of OO
programming. The intention is to inspire the reader to think about these rules
and to pursue further reading. There is a lot of material on the web that
drills down into more detail, with plenty of examples.

Ivar Jacobson said, "All systems change during their life cycles. This must be
borne in mind when developing systems expected to last longer than the first
version." In other words, software requirements change with time. The goal of
Object Oriented Design is to program in such a way that such changes to the
software are predictable and do not have a large impact on the program. In
other words, the design should be stable in the presence of change.

Bad design is characterized by:

A single change affects many other parts of the system (Rigidity)

A single change affects unexpected parts of the system (Fragility)

It is hard to reuse in another application (Immobility)

The Dependency Inversion Principle (DIP)

Imagine you have a simple database program. You don't want to change the
entire application when changing the database. This principle is targeted at
removing such unwanted interdependency, which can make a design fragile. The
rule states:

High level modules should not depend upon low level modules. Both should
depend upon abstractions.

Abstractions should not depend upon details. Details should depend upon
abstractions.

Booch said, "All well structured object oriented architectures have
clearly-defined layers, with each layer providing some coherent set of
services through a well defined and controlled interface."

In other words, design applications in layers, where high level layers call
lower level layers through abstract interfaces. To conform to the principle of
dependency inversion, we must separate the abstractions from the details of
the problem, and then direct the dependencies of the design upon the
abstractions.

Good dependencies are extremely unlikely to change. In other words they are
stable. We would like to base our architectural design around stable,
non-volatile modules.
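
As a minimal sketch of this idea (the class names SwitchableDevice, Button
and Lamp are illustrative, not from the article), both the high level Button
and the low level Lamp depend on an abstraction rather than on each other:

```cpp
#include <cassert>

// The abstraction that both layers depend upon.
class SwitchableDevice {
public:
    virtual ~SwitchableDevice() = default;
    virtual void turnOn() = 0;
    virtual void turnOff() = 0;
};

// High level module: knows only the abstract interface.
class Button {
public:
    explicit Button(SwitchableDevice& device) : device_(device) {}
    void press() {
        on_ = !on_;
        if (on_) device_.turnOn(); else device_.turnOff();
    }
private:
    SwitchableDevice& device_;
    bool on_ = false;
};

// Low level module: a detail that depends on the abstraction.
class Lamp : public SwitchableDevice {
public:
    void turnOn() override { lit_ = true; }
    void turnOff() override { lit_ = false; }
    bool isLit() const { return lit_; }
private:
    bool lit_ = false;
};
```

Swapping Lamp for, say, a motor only requires another class that implements
SwitchableDevice; Button is never edited.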

The Open-Closed Principle (OCP)

Software entities (classes, modules, functions, etc.) should be open for
extension, but closed for modification.

In other words, design classes that never change. When new requirements come,
add new code; don't edit existing code. It is not possible to close against
all possible changes, so an experienced developer needs to understand the
likely future wishes of users in order to apply strategic closure. There are
two ways to achieve closure:

Using abstraction to gain explicit closure - the programmer applies
abstraction to those parts of the program that the designer feels are subject
to change.

Using Data Driven Approach to achieve closure
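
A hedged sketch of closure through abstraction (Shape, Square and Circle are
hypothetical names, not from the article): totalArea is closed against
modification, yet the design stays open because new shapes can be added
without editing any existing code:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Closed for modification: this hierarchy's clients never change.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

class Square : public Shape {
public:
    explicit Square(double side) : side_(side) {}
    double area() const override { return side_ * side_; }
private:
    double side_;
};

// Open for extension: Circle was added later, with no edits elsewhere.
class Circle : public Shape {
public:
    explicit Circle(double r) : r_(r) {}
    double area() const override { return 3.14159265358979 * r_ * r_; }
private:
    double r_;
};

// This function never needs to be modified when new shapes appear.
double totalArea(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double total = 0.0;
    for (const auto& s : shapes) total += s->area();
    return total;
}
```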

The Liskov Substitution Principle (LSP)

Every function that operates upon a reference or pointer to a base class
should be able to operate on derivatives of that base class without knowing
it. This means that the overridden virtual member functions of a derived
class must honour the contract of the corresponding member functions of the
base class: they may expect no more and promise no less. In other words, any
function that uses a base class must not be confused when a derived class is
substituted for the base class.

This is a difficult principle to apply. To conform, avoid overriding base
class functions in ways that change their expected behaviour, because this
involves programming with details; instead, try to program in abstractions.

If this principle is violated, then functions that operate on such pointers
or references must first check the type of the actual object in order to work
correctly.
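
The classic Rectangle/Square illustration (a common sketch of this principle,
not taken from the article) shows a derived class breaking the base class
contract: code written against Rectangle assumes the sides are independent,
so a substituted Square "confuses" it:

```cpp
#include <cassert>

class Rectangle {
public:
    virtual ~Rectangle() = default;
    virtual void setWidth(double w)  { width_ = w; }
    virtual void setHeight(double h) { height_ = h; }
    double area() const { return width_ * height_; }
private:
    double width_ = 0, height_ = 0;
};

class Square : public Rectangle {
public:
    // Keeping the sides equal silently changes the base class behaviour.
    void setWidth(double w) override {
        Rectangle::setWidth(w);
        Rectangle::setHeight(w);
    }
    void setHeight(double h) override {
        Rectangle::setWidth(h);
        Rectangle::setHeight(h);
    }
};

// Any function written against Rectangle assumes width and height are
// independent; that assumption holds for Rectangle but not for Square.
bool behavesLikeRectangle(Rectangle& r) {
    r.setWidth(5);
    r.setHeight(4);
    return r.area() == 20.0;
}
```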

Heuristics and Conventions

Make all member variables private: otherwise no function that uses the class
can be closed against change. For example, a status variable may change from
a Boolean to an enumeration; if it is not accessed through a property or
member function, then clients of status cannot be closed. This is called
encapsulation.
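
A small sketch of the status example above (the Job class and its names are
hypothetical): because the representation is private and clients only call
isDone(), the status was able to change from a bool to an enumeration without
any client changing:

```cpp
#include <cassert>

class Job {
public:
    // Was originally a bool; grew into an enumeration later.
    enum class Status { Pending, Running, Done };

    void start()  { status_ = Status::Running; }
    void finish() { status_ = Status::Done; }

    // Stable interface: clients are closed against representation changes.
    bool isDone() const { return status_ == Status::Done; }

private:
    Status status_ = Status::Pending;  // private, so free to change
};
```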

No global variables: a misbehaving module may write erroneous data to a
global variable, and the effect can be felt in many places throughout the
program. Global variables are sometimes useful, e.g. cout and cin in C++; if
they do not violate the Open-Closed Principle, they can be worth the style
violation.

The Stable Dependencies Principle (SDP)

The dependencies between packages in a design should be in the direction of
stability of the packages. A package should only depend upon packages that are
more stable than it is.

Some volatility is necessary if the design is to be maintained. This is
achieved by using the Common Closure Principle: in this way we design packages
to be volatile and we expect them to change. Any package that we expect to be
volatile should not be depended upon by a package that is difficult to
change.

Some things we don’t want to change. For example architectural decisions
should be stable and not at all volatile. Therefore classes that encapsulate the
high level design should be stable.

The Stable Abstractions Principle (SAP)

Packages that are maximally stable should be maximally abstract. Unstable
packages should be concrete. The abstraction of a package should be in
proportion to its stability.

The Common Reuse Principle (CRP)

If you reuse one class of a package, you reuse them all. This is because any
delivered package contains a released set of classes, so a change in any
class means a new release of the entire package.

The Reuse/Release Equivalence Principle (REP)

The granule of reuse is the granule of release. Only components that are
released through a tracking system can be effectively reused. This principle
is important when there are several teams working on an application. To avoid
one team disrupting another, all packages used are tested and released; in
this way, modified packages are introduced in a controlled way.

The Common Closure Principle (CCP)

The classes in a package should be closed together against the same kinds of
changes, since any change to the package affects all the classes in it. Just
as a well organized team has a common goal because its members have to work
together, this principle means that you should have a common strategic
closure concept used throughout all classes in a package, because they all
have to be released together.

Designs that are highly interdependent tend to be rigid, hard to reuse, and
hard to maintain.

The Acyclic Dependencies Principle (ADP)

The dependency structure between packages must be a Directed Acyclic Graph
(DAG). This means that if you plot out all the packages, it should be
possible to arrange the dependencies so that they always point from top to
bottom, and it should not be possible to follow any line of dependence and
end up back at the same package. Packages in a cycle would have to be
released all at the same time, defeating the purpose of having them as
separate packages.

The Interface Segregation Principle (ISP)

Clients should not be forced to depend upon interfaces that they don’t
use.

This principle deals with the disadvantages of fat interfaces. Fat interfaces
are not cohesive. In other words, the interface of a class should be broken
into groups of member functions, where each group serves a different set of
clients. Separation can be achieved by:

Separation through Delegation

Separation through multiple inheritance

If this principle is violated then there is a coupling between all clients.
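
A hedged sketch of separation through multiple inheritance, as listed above
(Printer, Scanner and the other names are hypothetical): a multi-function
device implements two small interfaces, while each client depends only on
the group of functions it actually uses:

```cpp
#include <cassert>
#include <string>

// Two small, cohesive interfaces instead of one fat "Machine" interface.
class Printer {
public:
    virtual ~Printer() = default;
    virtual std::string print(const std::string& doc) = 0;
};

class Scanner {
public:
    virtual ~Scanner() = default;
    virtual std::string scan() = 0;
};

// Separation through multiple inheritance: the device offers both groups...
class MultiFunctionDevice : public Printer, public Scanner {
public:
    std::string print(const std::string& doc) override {
        return "printed:" + doc;
    }
    std::string scan() override { return "scan-data"; }
};

// ...but a print-only client is coupled only to the interface it needs.
class ReportClient {
public:
    explicit ReportClient(Printer& p) : printer_(p) {}
    std::string run() { return printer_.print("report"); }
private:
    Printer& printer_;
};
```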

Polyad vs Monad

A monad is when properties are grouped into one single object that is then
passed as a function parameter. Unfortunately this creates a dependency on
all the properties in that single object. It is therefore better to pass
smaller objects (a polyad); in this way the dependencies are broken into
smaller groups.
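
For illustration (the structs and function below are hypothetical), a
function that takes only a small Identity object does not depend on the
unrelated payroll fields it would inherit by taking one big object:

```cpp
#include <cassert>
#include <string>

// Monad style: one big object. Any function taking Everything depends on
// salary and vacationDays even if it only needs the name.
struct Everything {
    std::string name;
    std::string address;
    double salary;
    int vacationDays;
};

// Polyad style: a small object carrying only one cohesive group.
struct Identity {
    std::string name;
};

// Depends only on Identity; changes to payroll fields cannot affect it.
std::string greeting(const Identity& id) {
    return "Hello, " + id.name;
}
```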

Interface Pollution

As we build up classes there is a tendency to add functionality that is
specific to a particular implementation. In this way the interface becomes
populated by functions and properties that would not be required if the class
were used in a different context, making the interface fat. This also
violates the Liskov Substitution Principle. Separate clients mean separate
interfaces.

There is a backward force applied by clients upon interfaces. For example, a
user may wish to add a trivial extra function that cannot be cleanly placed
in any existing interface.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

Comments and Discussions

Except a million pages of pseudo-documentation that nobody can read, a billion lines of bloated code, abysmal performance?

The very first commercial program is supposed to have been a payroll system for General Electric. How is that program improved by the application of OOD, OOP, AOP, C++, Java, EJB, ASP, ADO, Design Patterns, Rational Rose, RDBMS, and a whole host of nonsense that has been thrust upon the programming community?

I do grant that it has provided us with a livelihood for the last 50 years, though

I don't think it IS a troll: it CAN BE in certain circumstances, but not always:

For example: saying "data members cannot be public" (and, for some purists,
even "protected") means "you need one Get and one Set function for each
independent datum".

Independently of each person's opinion, this IS "code bloating" (in the sense
that the code gets longer, and so does the time to code).

The real problem, at this point, becomes: is it really necessary?
Is it really necessary to add two months to the development of an app to make
it a well-built app, if that app is expected to live no more than 5-6 months?

And what do we "save" after those 6 months, to be reused? A Set(bool) and
Get(bool)? It's faster to recode than to think about how to code to be
reusable!

It's not a problem of "yes" or "no". It's a problem of good balance.

If I read you carefully, you question the point of wasting time thinking
about reuse for an application that doesn't last more than 6 months. But
isn't it a waste of time for an app to last that short?

"What do we save by reusing get(bool) and set(bool)?" In that form? Nothing,
because you've missed one point: abstraction. Isn't get(state)/put(state)
much more meaningful to you?

I think this is how many people react nowadays: do things fast and care
afterwards. I am part of those who think that a little bit of attention
before you rush can save time. But the saving occurs long after the
application has been released: in terms of maintenance, support, evolution
and extension, things are much more convenient for everybody when the base is
stable, when the initial developers took care of what they had to create.

Rules exposed here mean taking time. When you go far enough you sometimes realize that it's not even worth developing because there is a human problem no software on earth would be able to fix...

Instead, now, we rush, because we have to go fast (the faster the better,
huh?). Fast? Why? If you don't put in comments - for instance - who else but
you will be able to decode the awful spaghetti mess that you've put here and
there? And if you leave? And...

Thinking of the future and not only of the present will enable you to
consider how to design before coding. For instance, is creating a
6-months-long application really required? Can't it be extended for a larger
period? Is the need worth the effort? Was there a need at all? Wouldn't it
have been better to spend time on a long-term solution?

I reckon we've been educated to change fast with no actual need but the
designer's will (Microsoft?). Or "there is enough memory, so why optimize?".
Or "it's faster to upgrade a machine than to optimize code". Or even "disk
slow? Change to serial ATA". What else?

Things are so easy today that we cannot even imagine, we developers, too, to spend time designing strong, reliable applications. After all if a customer is not satisfied, he can use something else made by somebody else... Time is often spent in the wrong direction(s).

If we want stuff we create to be simple for use we MUST take care of the difficulties ourselves. We must pass every single step our users would encounter. It takes time but we don't want to take it - or we mostly are told not to, right?

Vince C. wrote:But isn't it a waste of time for an app' to last that short?

Yes... or no. It depends: if you are changing a data structure, maybe you
write a program that does the reformatting, but you'll throw it away once the
migration is complete.

To stay with your assertions, I can also propose something about not only
"time" but even "material": how much space on our disks is full of copies of
something else? Is that real information or just entropy?

My consideration was intended for some "small" activities.
But thinking of set(bool) in terms of a template<class State> set(State), you
code mystate = val only once. But... do all of your programmers remember it
next year?

I had a similar experience when - as a student - I was coordinating the
deployment of a complex project with many junior students. I discovered that
the strchr function is one of the most rewritten functions. Why? Simple: they
were faster recoding it themselves than searching - in rigorous alphabetical
order, in the C library documentation - for "a function that seeks a
character in a string".

That's where reusability fails: when you don't remember what you can reuse.
Sometimes it seems that a good programmer is the one who best remembers the
past, or best remembers which functions or classes to use.

I don't have a solution for that. But I find it strange to see someone
sometimes go out and say "I've got it!"

I see your point and I've been in such a situation many times - I mean the
student reinventing the strchr wheel. This is - IMHO - a usual trap. I often
avoid it by thinking "Isn't there someone out there who needed it and wrote
it in a more powerful way than I would?". That's how I'm trying to bear with
the human factor...

For single and simple things like this (e.g. strchr) it works. But, as you
said, they were faster coding it again. Apparently: because who thinks of
controlling bounds, trailing zeroes, 8 or 16 bit chars or multibyte chars,
and so on? The developers of strchr made it so that it is secure enough and
handles all possible cases. Coding it again can only result in loss of
<whatever> - hence bugs - since it is limited in features: too fast, not
secure enough.

I think we agree on that. To avoid it... should it be avoided at all, in
fact? This is what makes our experience. Of course seniors can tell the
younger ones there is that strchr that will do what they want. This is
education.

My point reaches yours in that younger developers, for instance, should read
first before starting to code - be patient, somehow. You work best when you
know your tools, and so it is with the library or anything else. And they
should be told early at school that one never does a good job by skipping
important steps. But they should also be taught what the important steps are,
of course.

I think it's nothing more than willing to know first before implementing. If one knows the tools (API, editors, hardware,...) at his/her disposal, he/she won't reinvent the wheel. Provided one wants to learn.

But I admit I'm going far. But I always see now in things more than what they look like. Maybe I should have studied philosophy instead of electronics

I notice how all of you OO guys have been going ga-ga over Design Patterns. If design patterns are so general and so useful, why doesn't someone knock off a complete ERP in, let us say, about a month's time? You should be able to, right?

Hello Vivic,
To a certain extent I can understand what you mean. A few weeks ago I met a
software architect who would make a VS solution with about 8 sub-projects
even for temporary applications. In this case there is a lot of
maintenance-simplifying overhead that is not needed; in other words, I would
not use a cannon to swat a fly. Also, sometimes it is necessary to
restructure for performance reasons.

But in the case of a Payroll system for General Electric I am convinced an OO approach would be better for my health. It just must be applied wisely. It is not necessary to describe every detail in the documentation because these take too much time to maintain, instead only those of architectural importance.

I think the message is to use common sense when building applications and learn from the lessons of other programmers. Also perhaps think about buying shares in Intel

I have two views on this. Firstly, I think there is a great tendency on the part of developers to resist any attempt to impose standards on what they do. Many started tinkering with code in their spare time and are unwilling to submit to any systematic methodology, regarding themselves as clever enough to always select the best method of working (which is always ad hoc keyboard bashing). This is clearly arrogant nonsense.

Secondly, however, I think we have to be careful before we call on false
analogies to decide how software should be developed. I myself have been
quite fond of the construction analogy - you wouldn't start putting up a
building without first making plans. However, it has its limitations, because
buildings, due to the cost of the labour and materials, have to be around for
quite a few years and so they have to be done right first time. It's quite
correct to point out that software is not always like this; it costs less to
build a program than a skyscraper, and less to rip it down again. So the cost
of designing and reuse must be weighed against the gains.

What happens in practice is that indiscipline in development often causes design and reuse to be skipped when they would be useful, whereas, at the other end of the spectrum, rigid and ignorant management tries to impose monolithic methodologies irrespective of their appropriateness.

I myself absolutely favour design and reuse in moderation, unless the project is very temporary. Unless you are Einstein, it is very difficult to detect the fundamental problems you will encounter in any significant project early on without designing. And making a habit of packaging basic functionality into libraries which can be reused avoids reinventing the wheel thousands of times; providing your time horizon is longer than eighteen months the investment should pay off.

Vivic wrote:Except a million pages of pseudo-documentation that nobody can read, a billion lines of bloated code, abysmal performance?

Ah, such heart-warming cynicism

So let's remove all the hyperbole and get down to the nub of what you are trying to say.

(i) Assertion: OO code is poorly documented.
Actually most of the engineers I work with find OO diagrams much clearer and easier to take in than reams of contractual interface definitions. They're easier to work with in design sessions and very easy to manipulate. Crucially, they are also understandable to non-programmers because they deal with relationships in the problem domain, aiding team communication. And of course the diagrams map in a direct way to code, making implementation straightforward.

(ii) Assertion: OO code is bloated.
It's not that difficult to write bloated procedural code. Perhaps the
assertion is that it's easier to write bloated OO code than non-OO code. I
would assert that it's even easier to write code that doesn't work, but we're
hardly going to advocate that, are we? If you're looking to OOD for something
for nothing, then forget it; you know TANSTAAFL. If your view is "make effort
here, reap the benefit elsewhere", then that's closer to the mark.

In any event, I've seen many C projects trip themselves up trying to be OO,
e.g. with tables of function pointers, alongside claims that the code is
"better" than C++ (faster - even though it's not; smaller - even though it's
not) and so on.

(iii) Assertion: OO code performs poorly.
Nonsense. Poorly designed architectures, OO or otherwise, perform poorly.
Critically, when considering performance in a project, one must understand
the target hardware AND the implementation of the language, and design data
flow to change as little as possible, as infrequently as possible. In my
experience it is easier to design and understand multi-platform architectures
that satisfy the requirements for high-performance code using OO principles.

I've used all these principles in several game architectures, systems that
require very high data throughput in real time on consoles that have puny
amounts of RAM and CPU cycles compared to a PC. Bloated and poorly
performing? I think not.

Read more in my book, "Object Oriented Game Design" (Addison-Wesley) available on Amazon or elsewhere.

Julian Gold wrote:I've used all these principles in several game
architectures, systems that require very high data throughput in real time on
consoles that have puny amounts of RAM and CPU cycles compared to a PC.
Bloated and poorly performing? I think not.

I don't know what game consoles you used but the Sony's game machine released a couple of years back was supposed to have half the power of a Cray.

Julian Gold wrote:Read more in my book, "Object Oriented Game Design" (Addison-Wesley) available on Amazon or elsewhere.

Not very interested in game programming. Thanks anyway.

However, I did ask -- if OO design enables reuse and design patterns provide
basic structural components that are sufficiently general to be used in all
applications -- why one of you guys doesn't knock off a complete ERP in a
month's time. I was met with thunderous silence. With all your OO design, UML
modeling, use-case analysis, automatic code generation, etc., one would think
that you should be able to produce a product competitive with SAP or Oracle
ERP pretty quickly.

Let me ask you one simple question: since your company's sales orders are
another company's (or several companies') purchase orders, shouldn't you be
able to write just one system and change a few screens by inheriting from the
various classes used for the other system? Since receiving into inventory is
the opposite of shipping from inventory, shouldn't one set of classes be
sufficient? Has a single system been written this way?

Julian Gold wrote:Ah, such heart-warming cynicism

What else can one resort to when one is constantly assaulted with the superiority of OO design, C++, Unix, etc., etc., without one single proof that all these new-fangled "technologies" have actually contributed to the solution space?

Vivic wrote:I don't know what game consoles you used but the Sony's game machine released a couple of years back was supposed to have half the power of a Cray.

I used to work for Sony so I know the machine well. Playstation 2 is very powerful... but 32MB RAM? Sod all VRAM? A 300MHz CPU? Parallelism is the key to exploiting the power of PS2. And guess what: we have an abstract OO framework just ready to take some of the pain out of writing synchronous processes.

Vivic wrote:Not very interested in game programming. Thanks anyway.

Sorry, couldn't resist! But the book's about using OO in games, not about games per se - there's still a lot of generic stuff there.

Vivic wrote:However, I did ask -- if OO design enables reuse and design
patterns provide basic structural components that are sufficiently general to
be used in all applications -- why one of you guys doesn't knock off a
complete ERP in a month's time.

Pardon my dumbitude, I don't know what an ERP is. However, I can answer the question in a general fashion. First off, OO is not a magic bullet. It's a tool. As a design paradigm it mirrors certain aspects of the way that we model the world in our heads, but there are obvious limits. Defining an abstract behaviour does not define all possible behaviours, it merely provides a framework by which behaviours may be implemented. Much of the work of production is to create the concrete subclasses from the abstract. The OO paradigm means we know - or equivalently, don't need to care - about how the rest of the system operates. Our class interface fulfils its contract and that's all we need to focus on, but there's still work to be done.

One big problem is that, yes, writing reusable code takes effort and
specialised skill. If you don't want to put the work in and acquire the
know-how, that's fine. But your acerbic cynicism doesn't seem to be backed by
anything other than limited anecdotal experience. FYI I've implemented
several OO architectures whose components have been reused in many projects
and products. IMHO there's a strong business reason for doing so which
overrides any technical argument. And we all work for businesses, right?

It's become my view that (generally speaking) people who are "anti-OO" often...

- prefer to tap away writing rubber-band & sticky-tape code, rather than sitting down to analyse and plan (after all, it takes a lot less discipline).

- find OO too difficult to understand (or usually simply can't be bothered to work at understanding it).

- are happy that the software development industry is renowned for rarely delivering on time or within budget.

- are happy to deliver unreliable / inflexible software - after all, they are generally the people that get employed to fix it.... or used to be.

When properly applied, OO really (really) works. E.g. we delivered a system with 14,000 lines of code after 2 months design and 2 weeks coding in January 2004. We have had a total of 6 bugs - each of which was fixed in no time at all. You can imagine how well it was received.

It is interesting that you simply make convenient assumptions about people
who question your beliefs and cherished approaches, because their questions
threaten your basic assumptions.

It's become my view that (generally speaking) people who are "anti-OO" often...

- prefer to tap away writing rubber-band & sticky-tape code, rather than sitting down to analyse and plan (after all, it takes a lot less discipline).

For your information, I don't write code anymore. I haven't written any for the last 15 years even for fun and professionally for the last 25 years. Writing code is the most boring clerical activity ever invented and I got over it after about 3 years though financial circumstances kept me in the code cutting business for another four. After which I became a manager responsible for delivering projects on time and within budget.

- find OO too difficult to understand (or usually simply can't be bothered to work at understanding it).

I never had to learn a language by going to class. I have taught myself Fortran, Algol, Cobol, Assembler (for a couple of machines), Pascal, C and C++. Of these, I have delivered commercial-grade software in Cobol and Assembler to US government entities, written to FIPS PUBS standards as demanded by the contract. And I have done it on machines as varied as a Xerox Sigma 5, a Unisys 1108, an IBM 360 series system, using hierarchical, network, relational DBMS, etc. (No, they didn't support Unix, OO, or C++.) Today, my teams deliver code in Java, C, C++, ASP or any other buzzword customers throw at us on boxes from Sun, HP, etc.

Syntactic sugar doesn't excite me the way it seems to make you all hyper-active.

- are happy that the software development industry is renowned for rarely delivering on time or within budget.

Read my response to the above two points.

are happy to deliver unreliable / inflexible software - after all, they are generally the people that get employed to fix it.... or used to be.

If you have ever worked for a government agency in the US, you would know
that maintenance contracts are NOT awarded on the basis that a particular
company wrote the original software, but strictly on price. And maintenance
is not easy when you have government employees destroying all source code
except the one paper copy that they maintain at their homes in order to keep
their jobs. I know; I have been there.

I visit this site and others similar to it to see what new ideas guys like you are hatching up . Surely, you don't have any objection to people continuing their education by whatever means at hand, do you?

For what it’s worth I am enjoying reading this conversation. Sometimes we can get so religious about one method that we don’t see the wood for the trees. So I really appreciate your view point because it really makes me think about why/how to use OO.

I forgot to add that I work for a company certified at CMM Level 5. Which means that all our work is either error-free or we have measures in place to track and fix bugs and reduce our error rates from year to year. We write as well as maintain/enhance programs written in Cobol, C, C++, Java, CGI, HTML and anything else the customer has in his legacy systems. We have no trouble maintaining our Level 5 certification despite dealing with a language like Cobol or HTML, which is why I question the opinion propagated by OO fans that OO is the saviour of the programming community.