One often hears that OOP naturally corresponds to the way people think about the world. But I strongly disagree: we (or at least I) conceptualize the world in terms of relationships between the things we encounter, whereas the focus of OOP is designing individual classes and their hierarchies.

Note that, in everyday life, relationships and actions exist mostly between objects that would have been instances of unrelated classes in OOP. Examples of such relationships are: "my screen is on top of the table"; "I (a human being) am sitting on a chair"; "a car is on the road"; "I am typing on the keyboard"; "the coffee machine boils water"; "the text is shown in the terminal window."

We think in terms of bivalent (sometimes trivalent, as, for example, in "I gave you flowers") verbs, where the verb is the action (relation) that operates on two objects to produce some result/action. The focus is on the action, and the two (or three) [grammatical] objects have equal importance.

Contrast that with OOP where you first have to find one object (noun) and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns -- it is as if everything is being said in passive or reflexive voice, e.g., "the text is being shown by the terminal window". Or maybe "the text draws itself on the terminal window".

Not only is the focus shifted to nouns, but one of the nouns (let's call it the grammatical subject) is given higher "importance" than the other (the grammatical object). Thus one must decide whether to say terminalWindow.show(someText) or someText.show(terminalWindow). But why burden people with such trivial decisions with no operational consequences when one really means show(terminalWindow, someText)? [The consequences are operationally insignificant -- in both cases the text is shown on the terminal window -- but can be very serious in the design of class hierarchies, and a "wrong" choice can lead to convoluted and hard-to-maintain code.]

I would therefore argue that the mainstream way of doing OOP (class-based, single-dispatch) is hard because it IS UNNATURAL and does not correspond to how humans think about the world. Generic methods from CLOS are closer to my way of thinking, but, alas, this is not a widespread approach.
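To make the contrast concrete, here is a rough sketch of a CLOS-style generic function emulated in Python (the registry-based dispatch below is a simplification of what CLOS actually provides, and all names are invented):

    _show_methods = {}

    def defshow(type_a, type_b):
        """Register an implementation of show() for a pair of argument types."""
        def register(fn):
            _show_methods[(type_a, type_b)] = fn
            return fn
        return register

    def show(a, b):
        # The verb is a free-standing operation: the *pair* of runtime types
        # selects the implementation, and neither argument is "the" receiver.
        impl = _show_methods.get((type(a), type(b)))
        if impl is None:
            raise TypeError("no show() method for (%s, %s)"
                            % (type(a).__name__, type(b).__name__))
        return impl(a, b)

    class TerminalWindow: pass

    class Text:
        def __init__(self, value):
            self.value = value

    @defshow(TerminalWindow, Text)
    def _(window, text):
        print("drawing %r in the terminal window" % text.value)

    show(TerminalWindow(), Text("hello"))

Real CLOS also handles subtype matching and method combination; the point here is only that show(terminalWindow, someText) needs no privileged owner.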

Given these problems, how/why did it happen that the currently mainstream way of doing OOP became so popular? And what, if anything, can be done to dethrone it?

OOP is for 'programming'; it is made for compilers to enjoy. It is not a way to model the real world in any way. It is common to use real-world examples, such as an animal class or a rectangle class, when teaching OOP, but these are just examples to simplify the concepts. Unfortunately, programming languages and paradigms 'happen' in a non-evolutionary and non-scientific fashion. With few exceptions, the language design process is usually the child of a few people's best guess!
– NoChance, Jan 2 '12 at 13:48

I would disagree with the claim "the focus of OOP is designing individual classes and their hierarchies", since maybe that is the focus of certain implementation(s) of OOP; I, for example, have found that dynamic OOP languages usually don't have large hierarchies, or even classes. Well, the OOP definition may change in the future; there is even an attempt to change the definition of Object: wcook.blogspot.com/2012/07/proposal-for-simplified-modern.html
– Azder, Aug 2 '12 at 7:30

OOP is identifying objects in the system and constructing classes based on those objects. Not the other way around. This is why I also disagree with "the focus of OOP is designing individual classes and their hierarchies".
– Radu Murzea, Aug 2 '12 at 8:42

I think this is a highly subjective question, and it is full of weird assumptions about OOP, about what the "real world" is, and about how people think about problems. And finally it even asks "And what, if anything, can be done to dethrone it?", which really shows that the OP has a very strong opinion about this. If you are that "religious" about programming paradigms, I guess you don't need any answers... you already know what you want to hear. If I had known about this question when it was first asked, I'd probably have voted to close it...
– Simon Lehmann, May 7 '13 at 19:52

22 Answers

OOP is unnatural for some problems. So's procedural. So's functional. I think OOP has two problems that really make it seem hard.

Some people act like it's the One True Way to program and all other paradigms are wrong. IMHO everyone should use a multiparadigm language and choose the best paradigm for the subproblem they're currently working on. Some parts of your code will have an OO style. Some will be functional. Some will have a straight procedural style. With experience it becomes obvious which paradigm is best for what (a minimal sketch of all three follows the list):

a. OO is generally best when you have behaviors that are strongly coupled to the state they operate on, and the exact nature of the state is an implementation detail, but its existence cannot easily be abstracted away. Example: collection classes.

b. Procedural is best when you have a bunch of behaviors that are not strongly coupled to any particular data; for example, they operate on primitive data types. It's easiest to think of the behavior and the data as separate entities here. Example: numerics code.

c. Functional is best when you have something that's fairly easy to write declaratively such that the existence of any state at all is an implementation detail that can be easily abstracted away. Example: Map/Reduce parallelism.
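A compact (and admittedly contrived) Python sketch of the three styles, with invented examples:

    from functools import reduce

    # a. OO style: behaviour tightly coupled to encapsulated state
    class Stack:
        def __init__(self):
            self._items = []        # the state is an implementation detail

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

    # b. procedural style: free functions over primitive data
    def mean(xs):
        return sum(xs) / len(xs)

    # c. functional style: declarative, no visible state at all
    def total_length(words):
        return reduce(lambda acc, n: acc + n, map(len, words), 0)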

OOP generally shows its usefulness on large projects where having well-encapsulated pieces of code is really necessary. This doesn't happen too much in beginner projects.

I think an important point in the question was not whether OOP is natural, but whether the mainstream approach to OOP is the most natural OOP approach. (Good answer anyway.)
– Giorgio, Sep 21 '11 at 19:11

"OOP generally shows its usefulness on large projects where having well-encapsulated pieces of code is really necessary.": but this is not a specific property of OOP, since there are good module concepts also for imperative, functional, ... programming languages.
– Giorgio, Sep 21 '11 at 19:27

OOP's really on a different axis to procedural/functional; it tackles how to organize and encapsulate the code, not how to describe the operations. (You always partner OOP with an execution paradigm. Yes, there are OOP+functional languages.)
– Donal Fellows, Sep 22 '11 at 21:33

Couldn't it be said that OOP is just the boxes we put the procedural stuff in?
– Erik Reppen, Aug 2 '12 at 2:25

In OOP, you can still write methods. What makes OOP weaker than Procedural programming?
– Mert Akcakaya, Aug 2 '12 at 6:39

IMHO it is a question of personal taste and ways of thinking. I don't have much problem with OO.

We (or at least I) conceptualize the world in terms of relationships between things we encounter, but the focus of OOP is designing individual classes and their hierarchies.

IMHO this is not quite so, although it may be a common perception. Alan Kay, the inventor of the term OO, always emphasized the messages sent between objects, rather than the objects themselves. And the messages, at least to me, denote relationships.

Note that, in everyday life, relationships and actions exist mostly between objects that would have been instances of unrelated classes in OOP.

If there is a relationship between objects, then they are related, by definition. In OO you could express it with an association / aggregation / usage dependency between two classes.
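For instance, "I (a human being) am sitting on a chair" needs no common superclass, only a plain association between two otherwise unrelated classes (a hypothetical sketch):

    class Chair:
        pass

    class Person:
        def __init__(self, name):
            self.name = name
            self.seat = None        # association: a Person *uses* a Chair

        def sit_on(self, chair):
            self.seat = chair       # the relationship, with no hierarchy involved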

We think in terms of bivalent (sometimes trivalent, as, for example in, "I gave you flowers") verbs where the verb is the action (relation) that operates on two objects to produce some result/action. The focus is on action, and the two (or three) [grammatical] objects have equal importance.

But they still have their well-defined roles in the context: subject, object, action etc. "I gave you flowers" or "You gave me flowers" aren't the same (not to mention "Flower gave you to me" :-)

Contrast that with OOP where you first have to find one object (noun) and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns -- it is as if everything is being said in passive or reflexive voice, e.g., "the text is being shown by the terminal window". Or maybe "the text draws itself on the terminal window".

I disagree with this. IMHO the English sentence "Bill, go to hell" reads more naturally in program code as bill.moveTo(hell) rather than move(bill, hell). And the former is in fact more analogous to the original active voice.

Thus one must decide whether one will say terminalWindow.show(someText) or someText.show(terminalWindow)

Again, it is not the same to ask the terminal to show some text, or ask the text to show the terminal. IMHO it is pretty obvious which one is more natural.

Given these problems, how/why did it happen that the currently mainstream way of doing OOP became so popular?

Maybe because the majority of OO developers see OO differently than you?

+1 for directly mentioning Alan Kay's emphasis on the messages between objects and their importance
– bunglestink, Mar 18 '11 at 12:41

@zvrba, indeed, there were other OO pioneers as well, but that's beside the point of this discussion. "The most natural thing is to ask for the text to be shown." - now you are using the same passive voice you claimed is so "unnatural" in OOP.
– Péter Török, Mar 18 '11 at 22:32

@zvrba, you say 'there is always an "agent" [...] doing some stuff', yet protest against the OO approach of identifying and naming those agents (objects) and handling them as first-class citizens in the language. To me this is part of modeling the problem domain - you need to do that work anyway, and then map the resulting domain model to code in a different way, depending on whether your language is OO or not.
– Péter Török, Mar 19 '11 at 20:18

@zvrba, it is not OO per se, but a developer/designer who decides about the roles and responsibilities of different objects. Using the OO approach can yield good or bad domain models and designs, just as using a knife can get you a slice of fine bread, or a bleeding finger - still we don't blame the knife, because it is just a tool. Note also that the concepts/objects/agents in the model are abstractions of things in the real world, and as such are always limited and distorted, focusing on specific "interesting" aspects only.
– Péter Török, Mar 20 '11 at 20:19

@zvrba, a car indeed doesn't start of its own will - just as a Car object would start() only when its method is explicitly called by someone.
– Péter Török, Mar 20 '11 at 20:23

Some programmers find OOD hard because those programmers like to think about how to solve the problem for the computer, not about how the problem should be solved.

... but the focus of OOP is designing individual classes and their hierarchies.

OOD is NOT about that. OOD is about figuring out how things behave and interact.

We think in terms of bivalent (sometimes trivalent, as, for example in, "I gave you flowers") verbs where the verb is the action (relation) that operates on two objects to produce some result/action. The focus is on action, and the two (or three) [grammatical] objects have equal importance.

The focus of OOD is always on action, where actions are the behaviours of objects. Objects are nothing without their behaviours. The only constraint OOD imposes is that everything has to be done by something.

I don't see the doer as more important than the thing having something done to it.

But why burden people with such trivial decisions with no operational consequences when one really means show(terminalWindow, someText)?

To me that's the same thing with a different notation. You still have to decide who is the show-er and who is the show-ee. Once you know that, then there is no decision in OOD. Windows show text -> Window.Show(text).

A lot of stuff out there (especially in the legacy area) says it is OO when it is not. For example there is a huge amount of C++ code that does not implement a fig of OOD.

OOD is easy once you break out of the mindset that you are solving a problem for computers. You are solving problems for things that do stuff.

I do not like OOD exactly because I prefer to think of how to solve the problem, period. I don't want to think of how to translate the natural language of a problem domain into an alien world of objects, messages, inheritance and so on. I don't want to invent hierarchical taxonomies where they do not naturally exist. I want to describe the problem in its natural language and solve it the easiest way possible. And, by the way, the world is not made only of "things that do stuff". Your view of reality is extremely narrow.
– SK-logic, Mar 18 '11 at 13:22

@Matt Ellen, I write programs that model the behaviour of entities that are not "things" and do not "do" any "stuff". Any immutable entity does not do anything. Any mathematical pure function does not do anything - it is just a mapping from one set to another, and this world is mostly described by such functions. Any logical formalism does not "do" anything. Yes, OOD won't help me, because I'm able to use much, much more powerful abstractions and models. Going back to a primitive, limited, limiting OOD would handicap me severely.
– SK-logic, Mar 18 '11 at 17:44

@Matt Ellen, yes, I'd like to see references that back such strange claims that the world is better viewed as a collection of "things that do stuff", or "objects that interact via messages". It is completely unnatural. Take any scientific theory (as they are all as close to modelling the real world as is possible at this stage) and try to rewrite it using your terminology. The result will be clumsy and awkward.
– SK-logic, Mar 18 '11 at 17:47

@Antonio2011a, there is no single programming or modelling paradigm which can cover all the possible problem domains efficiently. OOP, functional, dataflow, first-order logic, whatever else - they are all problem-domain-specific paradigms and nothing more. I'm only advocating diversity and an open-minded approach to problem solving. Narrowing your thinking down to a single paradigm is just stupid. The closest thing to a universal approach, not limiting you to a single semantic framework, is en.wikipedia.org/wiki/Language-oriented_programming
– SK-logic, Jan 3 '12 at 10:37

@Calmarius why would you make programming easier for computers? That's just silly. It's humans who have to do the harder work.
– Matt Ellen, Aug 1 '12 at 21:14

Reading my first book on OOP (early '90s, Borland's thin Pascal manual), I was simply amazed by its simplicity and potential. (Before that, I had been using Cobol, Fortran, assembly language and other prehistoric stuff.)

For me, it is pretty clear: a dog is an animal, an animal must eat, my dog is a dog, so it must eat...

On the other hand, programming itself is inherently unnatural (i.e., artificial). Human speech is artificial too (do not blame me; we all learned our languages from others, and nobody knows the person who invented English). According to some scientists, a human's mind is shaped by the first language they learn.

I admit, some constructs in modern OO languages are a little bit awkward, but that is evolution.

What exactly was your job, for you to move from Cobol to Fortran, then to assembly (and the rest)?
– Rook, Mar 18 '11 at 11:15

Wrong order. In fact, I started with Basic, then moved to Fortran and then to Assembly and Cobol (yes, that is the correct order: Assembly first). The first computer in my life was a "gigantic" mainframe with 256 kB of memory and a typewriter for a console, fed with punch cards, because I was its operator. When I had risen enough to become a systems programmer, a suspect thing called a PC-AT landed on my desk. So I plunged into GW-Basic, Turbo Pascal, ... and so on.
– Nerevar, Mar 18 '11 at 17:55

I wasn't referring to that, more to your job domain; I don't know many people (anyone, really) who dealt with COBOL (business-oriented), Fortran (scientific), assembler (more CS-oriented) and then those other ones, whether as professional programmers or not.
– Rook, Mar 20 '11 at 12:00

No mystery: joining teams and projects where the decision on language approach had been made before, plus some non-trivial level of curiosity about the tools they were using.
– Nerevar, Mar 21 '11 at 11:29

One thing that made it hard for me was thinking that OOP was about modelling the world. I thought that if I didn't get that right, something might come and bite me in the ass (a thing is either true or it's not). I was very aware of the problems that come from pretending everything is an object or entity. This made me very tentative and under-confident about programming in OOP.

Then I read SICP and came to a new understanding that it really was about data types and controlling access to them. All the problems I had melted away because they were based on a false premise, that in OOP you are modelling the world.

I still marvel at the immense difficulty that this false premise gave me (and how I let myself be beholden to it).

Yes, OOP itself is very unnatural - the real world is not entirely made of hierarchical taxonomies. Some little parts of it are made of that stuff, and those parts are the only things that can be adequately expressed in terms of OO. All the rest cannot be naturally fitted into such a trivial and limited way of thinking. Look at the natural sciences: see how many different mathematical languages have been invented in order to express the complexity of the real world in the simplest, or at least a comprehensible, way. And almost none of them can be easily translated into a language of objects and messages.

Well, that is equally true of any kind of formal notation trying to precisely describe the real world.
– Péter Török, Mar 18 '11 at 10:50

@Péter Török, precisely my point. There is no single formalism that covers everything. You have to use them all, in all their scary multitude. And that's the reason why I believe in language-oriented programming - it allows adopting various, often incompatible formalisms into a solid code entity.
– SK-logic, Mar 18 '11 at 10:58

Everything can be categorized into hierarchical taxonomies. The trick is coming up with schemes that make sense. Multiple inheritance is often involved. A big difference between software and the real world is that in the real world, we often have to infer or discover the categorization schemes; in software, we invent them.
– Caleb, Sep 22 '11 at 16:20

By using a term like "hierarchical taxonomy", I think you're thinking of, well, inheritance. Inheritance is indeed hard to reason about. That's why people suggest using composition over inheritance.
– Frank Shearar, Sep 22 '11 at 17:47

@Frank: There are many types of hierarchical taxonomies in the real world, of which inheritance is only one. Doing this all properly requires better tools than OOP (e.g., ontological reasoning) and has caused problems for philosophers for millennia…
– Donal Fellows, Sep 22 '11 at 21:38

We (or at least I) conceptualize the world in terms of relationships
between things we encounter, but the focus of OOP is designing
individual classes and their hierarchies.

You're starting from (IMO) a false premise. The relationships between objects are arguably more important than the objects themselves. It's the relationships that give an object-oriented program structure. Inheritance, the relationship between classes, is of course important, because an object's class determines what that object can do. But it's the relationships between individual objects that determine what an object actually does within the bounds defined by its class, and therefore how the program behaves.

The object-oriented paradigm can be difficult at first not because it's difficult to think up new categories of objects, but because it's difficult to envision a graph of objects and understand what the relationships between them should be, particularly when you don't have a way to describe those relationships. This is why design patterns are so useful. Design patterns are almost entirely about the relationships between objects. Patterns give us both the building blocks that we can use to design object relationships at a higher level and a language that we can use to describe those relationships.

The same is true in creative fields that work in the physical world. Anyone could throw together a bunch of rooms and call it a building. The rooms might even be fully furnished with all the latest accoutrements, but that doesn't make the building work. The job of an architect is to optimize the relationships between those rooms with respect to the requirements of the people using those rooms and the building's environment. Getting those relationships right is what makes a building work from both functional and aesthetic perspectives.

If you're having trouble getting used to OOP, I'd encourage you to think more about how your objects fit together and how their responsibilities are arranged. If you haven't already, read about design patterns -- you'll likely realize that you've already seen the patterns you read about, but giving them names will let you see not just trees, but also stands, copses, thickets, woods, groves, woodlots, and eventually forests.

OOP seems very natural to me, it's in fact hard for me to think about programming tasks any other way. Still, over the years I've come to understand that OOP tends to overemphasize nouns over verbs. This rant from 2006 helped me understand this distinction: steve-yegge.blogspot.com/2006/03/…
– Jim In Texas, Sep 22 '11 at 17:49

OOP is not hard. What makes it difficult to use well is a shallow understanding of what it's good for: programmers hear random maxims and repeat them to themselves under their breath in order to receive the blessings of the divine Gang of Four, the blessed Martin Fowler, or whoever else they've been reading.

First of all I would like to say that I never found OOP hard, or harder than other programming paradigms. Programming is inherently hard because it tries to solve problems from the real world and the real world is extremely complex. On the other hand, when I read this question I asked myself: Is OOP "more natural" than other paradigms? And therefore more effective?

I once found an article (I wish I could find it again so I could post it as a reference) about a comparative study of imperative programming (IP) and object-oriented programming (OOP). They had measured the productivity of professional programmers using IP and OOP in different projects, and the claim was that there was no big difference in productivity between the two groups: what really counts is experience.

On the other hand, proponents of object-orientation claim that, while during the early development of a system OOP may even take more time than imperative, in the long run the code is easier to maintain and extend due to the tight integration between data and operations.

I have worked mainly with OOP languages (C++, Java) but I often have the feeling that I could be as productive using Pascal or Ada even though I never tried them out for large projects.

Contrast that with OOP where you first have to find one object (noun)
and tell it to perform some action on another object.

[cut]

I would therefore argue that the mainstream way of doing OOP
(class-based, single-dispatch) is hard because it IS UNNATURAL and
does not correspond to how humans think about the world. Generic
methods from CLOS are closer to my way of thinking, but, alas, this is
not widespread approach.

When I read this last paragraph more carefully I finally understood the main point of your question and I had to rewrite my answer from scratch. :-)

I know of other OO proposals where multiple objects receive a message instead of only one, i.e. several objects play a symmetric role when receiving a message. YES, this seems a more general and maybe more natural (less restrictive) OOP approach to me.

On the other hand, "multiple dispatch" can be easily simulated using "single dispatch" and "single dispatch" is easier to implement. Maybe this is one reason why "multiple dispatch" hasn't become mainstream.
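For example, the classic visitor-style trick simulates dispatch on two arguments by chaining two single dispatches (a minimal sketch with invented names):

    class Circle:
        def collide(self, other):
            # first dispatch on self, second dispatch bounces to `other`
            return other.collide_with_circle(self)
        def collide_with_circle(self, other):
            return "circle meets circle"
        def collide_with_square(self, other):
            return "square meets circle"

    class Square:
        def collide(self, other):
            return other.collide_with_square(self)
        def collide_with_circle(self, other):
            return "circle meets square"
        def collide_with_square(self, other):
            return "square meets square"

    print(Circle().collide(Square()))   # the result depends on BOTH runtime types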

Stop looking for an exclusively OOP paradigm and try some JavaScript.

I currently have something of a scheme worked out where my UI objects operate under an event-driven interface. That is to say, I'll have what looks like a typical public method that, when fired, results in an internally defined action. But what really happens is that I trigger an event on the object itself and a pre-defined handler inside that object responds. The event carries an event object you can attach properties to, which gets passed to every listener, and the event can be heard by anything that cares to listen. You can listen directly to the object, or you can listen generally for that event type (events are also triggered on a generic object that all objects built by the factory can listen to). So now, for instance, I've got a combo box where you select a new item from the dropdown list. The combo box knows what to do to set its own display value and update the server if it needs to, but I can also have as many other combo boxes as I want listening in, swapping out their select lists, which are tied to the current value of the first combo box.

If you want (and I was surprised to discover that I usually don't want - it's a legibility issue when you can't see where the event comes from), you can have complete object decoupling and establish context via a passed event object. Regardless, you're still dodging single dispatch by being able to register multiple responders.
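The underlying pattern is an ordinary event emitter (observer); a stripped-down sketch of the idea, written in Python for brevity and with invented names:

    class EventEmitter:
        def __init__(self):
            self._listeners = {}

        def on(self, event, handler):
            self._listeners.setdefault(event, []).append(handler)

        def trigger(self, event, payload=None):
            # every registered responder runs: no single privileged receiver
            for handler in self._listeners.get(event, []):
                handler(payload)

    combo = EventEmitter()
    # the combo box responds to its own change event...
    combo.on("change", lambda value: print("combo now shows", value))
    # ...and any number of sibling widgets can listen in on the same event
    combo.on("change", lambda value: print("sibling list refreshed for", value))
    combo.trigger("change", "new item")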

But I'm not doing this with OOP alone, and JS is not even 'properly' OOP by some definitions, which I find hilarious. 'Proper', for the higher levels of app development in my opinion, means having the power to bend the paradigm to whatever works for your situation, and we can emulate classes just fine if we care to. In this case, though, I'm mixing aspects of functional programming (passing handlers around) with OOP.

More importantly, what I have feels pretty powerful. I'm not thinking in terms of one object acting on another. I'm basically deciding what the objects care about, giving them the tools they need to sort things out and just dropping them into a mixer and letting them react to each other.

So I guess what I'm saying is this: it's not a switch-statement kind of problem. It's mix and match. The problem is languages and fads that want you to believe it's one thing uber alles. How can a junior Java dev, for instance, truly appreciate OOP when they think they're always doing it properly by default?

The way it was explained to me was with a toaster and a car. Both have springs, so you'd have a "spring" object; they'd be different sizes, strengths and whatever, but they'd both be "springs". Then you extend that metaphor onto the car, where you have lots of wheels (the road wheels, obviously, plus the steering wheel, etc.), and that made a lot of sense.

You can then think of the program as a list of objects, and "a list of things that do stuff" is much simpler to visualize than the list of instructions you saw before.

I think the real problem with OOP is how it's explained to people. Often (in my uni classes) I see it explained as "it's about lots of classes that do little things, and you can create objects out of that", and that confuses a lot of people, because it uses essentially abstract terms to explain these concepts, rather than the concrete ideas people grasped when they were five years old playing with Lego.

But do you categorize according to whether objects have common attributes or common behaviors? Is everything that has a name and a surname a person? Is everything that walks an animal?
– zvrba, Mar 18 '11 at 14:03

When it comes to categorization, being able to perform a certain behavior is an attribute. Zebras and horses can breed, but the offspring is a hybrid. It is very difficult to predict this result based on their possessing the mating function, unless you know they are not the same species.
– Jeff O, Mar 19 '11 at 0:26

@zvrba: My answer to the question posed in the comment is that it doesn't matter. Everything that has a name and a surname is a person to every program that only cares about people. For any program that has no knowledge of people or non-people, it's an IHasNameAndSurname. Objects only need to solve the problem at hand.
– Tom W, Sep 22 '11 at 16:24

@Jeff O: Even within the same species/breed/population there is variation and no ideal (essential) example. So yes, OO isn't how nature actually is, but it is a good fit for how humans naturally think.
– Tom Hawtin - tackline, Oct 5 '11 at 8:20

I think some of the difficulty comes in when people try to use OOP to represent reality. Everybody knows that a car has four wheels and an engine. Everybody knows that cars can Start(), Move() and SoundHorn().

A light clicked on in my head when I realised I ought to stop trying to do this all the time. An object is not the thing that it shares a name with. An object is (i.e. should be) a sufficiently detailed partition of data relevant to the scope of the problem. It ought to have exactly what the solution to the problem needs it to have, no more and no less. If making an object responsible for some behaviour results in more lines of code than giving the same behaviour to some nebulous third party (some might call it a 'poltergeist'), then the poltergeist earns its chips.

In order to manage complexity, we need to group functionality into modules, and this is a difficult problem in general. It's like the old saying about capitalism, OOP is the worst system for organizing software out there, except for everything else we've tried.

The reason we group interactions inside the nouns, even though there is frequently ambiguity about which of the two nouns to group them with, is that the number of nouns happens to work out to manageably sized classes, whereas grouping by verbs tends to produce either very small groups, like one-offs, or very large groups, as for a function like show. Reuse concepts like inheritance also happen to work out much more easily when grouping by nouns.

Also, the question of whether to put show with the window or with the text is almost always much clearer in practice than in theory. For example, nearly all GUI toolkits group add with the container but show with the widget. If you try to write code the other way, the reason becomes apparent fairly quickly, even though, thinking about it abstractly, the two methods seem interchangeable.
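A sketch of that conventional split, with hypothetical classes:

    class Widget:
        def show(self):                 # "show" is grouped with the widget
            print("drawing a", type(self).__name__)

    class Container(Widget):
        def __init__(self):
            self.children = []

        def add(self, widget):          # "add" is grouped with the container
            self.children.append(widget)

        def show(self):                 # showing a container recurses into children
            for child in self.children:
                child.show()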

Have you ever seen a proper module system? How can you say that OOP is the best thing available? SML modules are much, much more powerful. But, even Ada packages are enough for most of the cases, even without a tiny hint of OOP.
– SK-logic, Mar 18 '11 at 13:40

@SK-logic, I think you're getting hung up on overly-precise definitions. By OOP, I don't mean classes, I mean logically grouping "verbs" by the "nouns" they operate on, and being able to reuse and specialize those verbs based on the particular nouns they happen to be operating on. Classes happen to be the most well-known implementation of that, but are not the only implementation. I admit to ignorance of SML modules, but the first example I saw when I looked it up was an implementation of a queue that could have come from any OO design book with syntax changes to make it functional.
– Karl Bielefeldt, Mar 18 '11 at 14:52

It simply would not be fair to give the unfortunate OOP too much credit for something that does not belong to it. Modules are a great invention. First-class modules are fantastic. But OOP has nothing to do with them at all. Some OOP languages adopted some of the module features (most notably namespaces), but modules are a much more general and powerful concept than classes. You said that classes are "the best", which is far from the truth. First-class modules are much better. And, of course, type systems are much wider and deeper than just OO with its subtyping thingy.
– SK-logic, Mar 18 '11 at 14:57

"It's like the old saying about capitalism, OOP is the worst system for organizing software out there, except for everything else we've tried.": maybe we haven't been trying long enough. :-)
– Giorgio, Sep 21 '11 at 19:21

No. There are several ways to solve a problem using programming: functional, procedural, logical, O.O.P., and others.

In the real world, sometimes people use the functional paradigm, sometimes the procedural paradigm, and so on. And sometimes we mix them up. Eventually we represent each as a particular style or paradigm of programming.

There is also the "everything is a list or an item" paradigm, used in LISP. I like to mention it as a different thing from functional programming. PHP uses it in associative arrays.

O.O.P. and "Everything is a list or item" paradigms are consider 2 of the MORE NATURAL programming styles, as I remember in some Artificial Intelligence classes.

It sounds weird to me that "O.O.P. is not natural"; maybe the way you learned O.O.P., or the way you were taught it, is wrong, but not O.O.P. itself.

Given these problems, how/why did it happen that the currently
mainstream way of doing OOP became so popular? And what, if anything,
can be done to dethrone it?

OOP became popular because it offers tools to organize your program at a higher level of abstraction than the popular procedural languages that preceded it. It was also relatively easy to make a language that had procedural structure inside of methods, and object-oriented structure surrounding them. This let programmers who already knew how to program procedurally pick up OO principles one at a time. This also led to lots of OO-in-name-only programs that were procedural programs wrapped inside a class or two.

To dethrone OO, build a language that makes it easy to transition incrementally from what most programmers know today (mostly procedural, with a little OO) to your preferred paradigm. Make sure it provides convenient APIs for common tasks, and promote it well. People will soon be making X-in-name-only programs in your language. Then you can expect it to take years and years for people to get good at actually doing X.

The OP does not argue that OO is bad in general and should be dethroned but that the "currently mainstream way of doing OOP" is not the most natural one (compared to "multiple dispatch").
– Giorgio, Sep 22 '11 at 14:14

The OP also seems overly focused on defining type hierarchies, where the best OO programs tend to rely more on interfaces and composition. If multiple dispatch is X, then making a language that lets people gradually learn the skills associated with multiple dispatch is still the key to changing the environment.
– Sean McMillan, Sep 22 '11 at 14:32

If I understand it correctly, OOP is about black boxes (objects) which have push buttons on them that can be pushed (methods). Classes are just there to help organize these black boxes.

One problem arises when the programmer puts the push buttons on the wrong object. The terminal cannot show text on itself; the text cannot show itself on the terminal. It is the window-manager component of the operating system that can do that. The terminal window and the text are just passive entities. But if we think this way, we realize that most entities are passive things, and we would have only very few objects that actually do anything (or simply one: the computer). Indeed, when you use C you organize your code into modules, and these modules represent those few objects.

Another point is that the computer just executes instructions sequentially. Let's assume you have a VCR and a Television object; how would you play a video? You would probably write something like this (a sketch with hypothetical classes and method names):
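    # a plausible sketch with hypothetical classes and method names
    class VCR:
        def play(self):
            print("VCR: playing the tape")

    class Television:
        def show(self, source):
            print("TV: displaying the picture from the", type(source).__name__)

    vcr = VCR()
    tv = Television()
    vcr.play()      # "you" press play on the VCR...
    tv.show(vcr)    # ...and tell the TV to display it -- yet one processor
                    # executes these calls strictly one after the other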

It would be this simple, but you would need at least three processors (or processes) for that: one plays the role of you, the second is the VCR, the third is the TV. But normally you have only one core (at least, not enough for all your objects). In college, a lot of my classmates did not understand why the GUI freezes when a push button triggers an expensive operation.

So I think an object oriented design may describe the world quite well, but it's not the best abstraction for the computer.

Take a look at DCI (Data, Context and Interaction), invented by Trygve Reenskaug, the inventor of the MVC pattern.

The goal of DCI is (quoted from Wikipedia):

Give system behavior first-class status, above objects (nouns).

To cleanly separate code for rapidly changing system behavior (what the system does) from code for slowly changing domain knowledge (what the system is), instead of combining both in one class interface.

To support an object style of thinking that is close to peoples' mental models, rather than the class style of thinking.
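A minimal flavour of the idea, using the customary money-transfer example (the sketch and its names are mine, not taken verbatim from the DCI literature):

    # "what the system is": dumb, slowly changing domain data
    class Account:
        def __init__(self, balance):
            self.balance = balance

    # "what the system does": a Context binds plain objects to the roles
    # (source, sink) for the duration of one interaction
    class TransferContext:
        def __init__(self, source, sink):
            self.source = source
            self.sink = sink

        def execute(self, amount):
            self.source.balance -= amount
            self.sink.balance += amount

    a, b = Account(100), Account(0)
    TransferContext(a, b).execute(40)   # the behaviour lives in the context,
    print(a.balance, b.balance)         # not in the Account class -> prints 60 40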

One can often hear that OOP naturally corresponds to the way people
think about the world. But I would strongly disagree with this
statement (...)

As it has been evangelized in books and elsewhere for decades, I disagree with it too. Nevertheless, I think Nygaard and Dahl were the ones who put it that way, and I think they were focusing on how much easier it was to think about designing simulations compared with the alternatives of the time.

(...) but the focus of OOP is designing
individual classes and their hierarchies.

This assertion enters sensitive territory, given how popular misconceptions of OOP are and how sensitive OO is to definition. I have more than ten years in the field, doing both industry work and academic research on programming languages, and I can tell you I spent many years unlearning "mainstream OO" because I started noticing how different (and inferior) it is from what the earlier creators were aiming at. For a modern, up-to-date treatment of the subject, I would refer to W. Cook's recent effort.

Given these problems, how/why did it happen that the currently
mainstream way of doing OOP became so popular?

Maybe for the same reason QWERTY keyboards came to be popular, or the same reason the DOS operating system became popular. Things just get a ride on popular vehicles, despite their properties, and become popular themselves. And sometimes, similar but worse versions of something are taken to be the actual thing.

And what, if anything, can be done to dethrone it?

Write a program using a superior approach. Write the same program using an OO approach. Show the former has better properties than the latter in every aspect that is significant (both properties of the system itself and engineering properties). Show that the program chosen is relevant and the proposed approach sustains the high quality of properties if applied to other kinds of programs. Be rigorous in your analysis and use precise and accepted definitions when necessary.

Take Java: objects are a kind of abstraction bottleneck; most "things" are either strictly sub-components of objects or use objects as their sub-components. Objects must be multi-purpose enough to form an entire abstraction layer between these two kinds of things -- multi-purpose meaning there is no one metaphor that they embody. In particular, Java makes objects (and classes) the sole layer through which you call/dispatch code. The number of things objects embody makes them, frankly, far too complex. Any useful description of them must be restricted to some specialized or limited form.

The inheritance & interface hierarchies are "things" that use objects as sub-components. This is one specialized way of describing objects, not a way to derive general understanding of objects.

"Objects" can be said to have or to be many things because they are a multi-purpose abstraction that is more-or-less universal in an "OO Langauge". If they are used to contain local mutable state or to access some external world state then they look very much like a "noun".

By contrast, an object that represents a process, e.g. "encrypt" or "compress" or "sort", looks like a "verb".

Some objects are used for their role as a namespace, e.g. a place to put "static" function in Java.

I am inclined to agree with the argument that Java is too heavy on object method calls, with dispatch on the object. This is probably because I prefer Haskell's type classes for controlling dispatch. This dispatch limitation is a feature of Java but not of most languages, or even of most OO languages.

I have witnessed and participated in many online debates about OOP. The proponents of OOP usually do not know how to write proper procedural code. It is possible to write procedural code that is highly modular. It is possible to separate code and data and ensure that functions can only write to their own data store. It is possible to implement the concept of inheritance using procedural code. More importantly, procedural code is slimmer, faster and easier to debug.

If you build single-file modules with strict naming conventions, procedural code is easier to write and maintain than OO and it will do the same or more and faster. And don't forget that when your application runs, it is always procedural no matter how many classes feature in your script.

Then you have the issue of languages like PHP, which are not truly object-oriented and rely on hacks to fake things like multiple inheritance. The big shots who influence the direction of the language have turned PHP into a patchwork of inconsistent rules that has become too big for what it was initially intended to be. When I see developers write huge templating classes for what was intended as a procedural templating language, I can't help but smile.

If you compare properly-written OO code with poorly-written procedural code, then you will always come to the wrong conclusion. Very few projects warrant an Object Oriented design. If you are IBM and manage a huge project that needs to be maintained for years by multiple developers, go for Object Oriented. If you are writing a small blog or shopping website for a client, think twice.

To answer the original question, OOP is hard because it does not solve real-life programming dilemmas without resorting to solutions that are 100 times more complicated than they should be. One of the most powerful solutions to many programming problems is the judicious use of global data. Yet the new-wave university graduates will tell you that it is a big no-no. Globally-available data is dangerous only if you are a clumsy programming bunny. If you have a strict set of rules in place and proper naming conventions, you can get away with having all of your data global.

It should be a mandatory requirement for any Object Oriented programmer to know how to write a chess-playing application in assembler for a maximum available memory of 16K. They would then learn how to trim the fat, cut the laziness and generate ingenious solutions.