When I started using an object-oriented language (Java), I pretty much just went "Cool" and started coding. I'd never really thought about it until just recently, after having read lots of questions about OOP. The general impression I get is that people struggle with it. Since I haven't found it hard, and I wouldn't say I'm any genius, I'm thinking that I must have missed something or misunderstood it.

I counter with: It's not difficult to understand, just difficult to master. Programming as a whole is this way.
–
Steve Evers Nov 1 '10 at 14:44

2

Umm, isn't it? Maybe programmers who can't program find it hard and blame OOP instead of their own lines of code? I don't know, but I've never heard anyone say it's hard to understand. However, I have seen people say functional programming is weird, especially since they can't change the value of their variables.
–
acidzombie24 Nov 1 '10 at 20:56

3

@acidzombie24: Functional isn't weird at all. You don't have to write a list of commands that change the contents of variables representing locations in RAM. Variables represent values, and you do not change values; 42 stays 42, x stays x, whatever x is. You write functions that map from their parameters to results. Values are: numbers, lists of values, functions from values to values, programs that produce side effects and a resulting value on execution, and more. That way, the result of the main function is the program that can be executed.
–
comonad Jun 7 '11 at 3:21

4

I've been learning Haskell recently. The more I learn and understand about functional programming, the more I feel OOP is hard. I think the main reason is that OOP tries to tie the data (object/class) together with the functions that operate on it (methods). This is the source of the problem, and many OOP design patterns are devised to avoid tying data and functions together. For example, the factory pattern is used to separate the constructor (which is a function) from the actual object it's operating on.
–
lightblade Apr 13 '12 at 21:10

1

Because it's misnamed. It should really be Subject oriented programming, in that the Subject performs the Action (verb) and the Object receives the Action -- John threw the ball. So johnsBall.throw() doesn't really make sense.
–
Chris Cudmore Aug 15 '12 at 16:22

22 Answers

I personally found the mechanics of OOP fairly easy to grasp. The hard part for me was the "why" of it. When I was first exposed to it, it seemed like a solution in search of a problem. Here are a few reasons why I think most people find it hard:

IMHO teaching OO from the beginning is a terrible idea. Procedural coding is not a "bad habit" and is the right tool for some jobs. Individual methods in an OO program tend to be pretty procedural looking anyhow. Furthermore, before learning procedural programming well enough for its limitations to become visible, OO doesn't seem very useful to the student.

Before you can really grasp OO, you need to know the basics of data structures and late binding/higher order functions. It's hard to grok polymorphism (which is basically passing around a pointer to data and a bunch of functions that operate on the data) if you don't even understand the concepts of structuring data instead of just using primitives and passing around higher order functions/pointers to functions.
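That "data plus a bundle of functions" view of polymorphism can be sketched in Python (the class and function names here are invented for illustration):

```python
# The OO view: each subclass supplies its own implementation,
# and the call site doesn't care which one it gets.
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius * self.radius

# The "under the hood" view described above: a piece of data
# passed around together with a function that operates on it.
def square_area(data):
    return data["side"] * data["side"]

square = {"side": 3, "area": square_area}

for shape in [Square(3), Circle(1)]:
    shape.area()            # dispatch picks the right implementation
square["area"](square)      # the same idea, wired up by hand
```

Seen this way, a virtual method call is just a convenient, language-managed version of the hand-wired dictionary.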

Design patterns should be taught as something fundamental to OO, not something more advanced. Design patterns help you to see the forest for the trees and give relatively concrete examples of where OO can simplify real problems, and you're going to want to learn them eventually anyhow. Furthermore, once you really get OO, most design patterns become obvious in hindsight.

Fabulous answer, especially point #1. Of course methods in OO look like methods in a procedural language, but now they are housed in objects that also have state.
–
Yar Nov 1 '10 at 14:21

1

I especially like your 1st and 3rd points. My friend is gaga for design patterns, and I keep telling him that if you just use good OOP, most of them derive very naturally from the initial problem.
–
CodexArcanum Nov 1 '10 at 14:35

5

+1 for "The hard part for me was the "why" of it". It takes some time and effort to understand and apply the basics of the language (encapsulation) and the design principles and the refactoring methods, toward common solutions (design patterns).
–
Belun Nov 1 '10 at 15:34

7

I agree with #1, with the caveat that you need to do a better job of explaining OO to people who already know how to program procedurally than was done when I was learning. No, it's not useful or informative to talk about a Cat class which inherits from Mammal, use a real example from a real program that we would have written procedurally. Like a simple data structure.
–
Carson63000 Nov 1 '10 at 22:58

3

-1 Design patterns are so overemphasised today. They are important, sure, but: first, they are important to all paradigms: functional, structured, as well as OO; and second, they are not part of the paradigm, just a convenient add-on that you can use, as many others. @SoftwareRockstar explains this nicely in his/her answer.
–
CesarGon Feb 11 '11 at 9:36

First of all, at least in "pure OOP" (e.g., Smalltalk) where everything is an object, you have to twist your mind into a rather unnatural configuration to think of a number (for only one example) as an intelligent object instead of just a value -- since in reality, 21 (for example) really is just a value. This becomes especially problematic when on one hand you're told that a big advantage of OOP is modeling reality more closely, but you start off by taking what looks an awful lot like an LSD-inspired view of even the most basic and obvious parts of reality.
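Python, which also treats numbers as full objects, makes the point concrete; this is just a sketch of the idea, not Smalltalk syntax:

```python
# In a "pure OO" view, 21 is not merely a value sitting in memory;
# it is an object that responds to messages.
n = 21
n.bit_length()          # the number answers questions about itself
(21).__add__(2)         # even 21 + 2 is a method call under the hood
isinstance(21, object)  # literals are instances like everything else
```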

Second, inheritance in OOP doesn't follow most people's mental models very closely either. For most people, classifying things most specifically does not have anywhere close to the absolute rules necessary to create a class hierarchy that works. In particular, creating a class D that inherits from another class B means that objects of class D share absolutely, positively all the characteristics of class B. class D can add new and different characteristics of its own, but all the characteristics of class B must remain intact.

By contrast, when people classify things mentally, they typically follow a much looser model. For one example, if a person makes some rules about what constitutes a class of objects, it's pretty typical that almost any one rule can be broken as long as enough other rules are followed. Even the few rules that can't really be broken can almost always be "stretched" a little bit anyway.

Just for example, consider "car" as a class. It's pretty easy to see that the vast majority of what most people think of as "cars" have four wheels. Most people, however, have seen (at least a picture of) a car with only three wheels. A few of us of the right age also remember a race car or two from the early '80s (or so) that had six wheels -- and so on. This leaves us with basically three choices:

Don't assert anything about how many wheels a car has -- but this tends to lead to the implicit assumption that it'll always be 4, and code that's likely to break for another number.

Assert that all cars have four wheels, and just classify those others as "not cars" even though we know they really are.

Design the class to allow variation in the number of wheels, just in case, even though there's a good chance this capability will never be needed, used, or properly tested.
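The third option might look like this in Python (a sketch with invented names):

```python
class Car:
    def __init__(self, wheels=4):
        # Default to the common case, but allow the oddballs:
        # three-wheelers, six-wheeled racers, and so on.
        self.wheels = wheels

family_sedan = Car()            # the usual four wheels
three_wheeler = Car(wheels=3)
six_wheel_racer = Car(wheels=6)
```

The cost, as noted above, is flexibility that may never be exercised, needed, or tested.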

Teaching about OOP often focuses on building huge taxonomies -- e.g., bits and pieces of what would be a giant hierarchy of all known life on earth, or something on that order. This raises two problems: first and foremost, it tends to lead many people toward focusing on huge amounts of information that's utterly irrelevant to the question at hand. At one point I saw a rather lengthy discussion of how to model breeds of dogs, and whether (for example) "miniature poodle" should inherit from "full sized poodle", or vice versa, or whether there should be an abstract base "Poodle" class, with "full-size poodle" and "miniature poodle" both inheriting from it. What they all seemed to ignore was that the application was supposed to deal with keeping track of licenses for dogs, and for the purpose at hand it was entirely adequate to have a single field named "breed" (or something on that order) with no modeling of the relationship between breeds at all.

Second, and almost as importantly, it leads to focusing on the characteristics of the items themselves, instead of focusing on the characteristics that are important for the task at hand. It leads toward modeling things as they are, where (most of the time) what's really needed is building the simplest model that will fill our needs, and using abstraction to fit the necessary sub-classes to the abstraction we've built.

Finally, I'll say once again: we're slowly following the same path taken by databases over the years. Early databases followed the hierarchical model. Other than focusing exclusively on data, this is single inheritance. For a short time, a few databases followed the network model -- essentially identical to multiple inheritance (and viewed from this angle, multiple interfaces aren't enough different from multiple base classes to notice or care about).

Long ago, however, databases largely converged on the relational model (and even though they aren't SQL, at this level of abstraction the current "NoSQL" databases are relational too). The advantages of the relational model are sufficiently well known that I won't bother repeating them here. I'll just note that the closest analog of the relational model we have in programming is generic programming (and sorry, but despite the name, Java generics, for one example, don't really qualify, though they are a tiny step in the right direction).

Heh, I was just thinking about this the other day. How all the first learning material I read on OOP seemed to focus on inheritance, while my experience more and more tells me that inheritance is mostly useless if not harmful.
–
rmac Nov 1 '10 at 19:45

2

That's why interfaces are so great. You take the inheritance modeling and invert it to a breadth-first approach. For instance, with the dog example it can be assumed that every dog has a genus and up to two super-species (one for each parent) for a particular breed. There may also be a list property containing traits, but the possible variety makes it pointless to shoehorn them into a definite structure. It's better to implement a deep search to crawl the traits and combine similar breeds based on those results.
–
Evan Plaice Aug 15 '12 at 22:12

@Jeff O - I disagree. Programming only requires the ability to tell someone how to do something in a step-by-step manner. If you can tell someone how to make a peanut butter & jelly sandwich, you have the ability to type commands into a programming interface. That is a completely different skill set than abstractly modelling a p,b&j sandwich and how it interacts with the world.
–
John Kraft Nov 1 '10 at 13:54

14

Handing someone 2 pencils and then handing them 1 pencil and asking how many pencils they have is concrete. 2 + 1 = 3 is abstract.
–
JeffO Nov 1 '10 at 15:53

13

I agree with Jeff. Programming basically is managing abstractions. At least that’s true for anything but the most basic program flow (because everything else will be too complex without abstractions). There’s a distinct phase in learning to program when the novice learns how to control abstractions. That’s where the recipe metaphor falls down. Programming is nothing like cooking and while an individual algorithm may be likened to a recipe, programming is fundamentally different from implementing isolated algorithms.
–
Konrad Rudolph Nov 1 '10 at 16:24

2

@KonradRudolph Great point. +1 for "Everything else will be too complex without abstractions".
–
Karthik Sreenivasan Jan 24 '12 at 5:32

Any paradigm requires a certain push "over the edge" to grasp, for most people. By definition, it's a new mode of thought and so it requires a certain amount of letting go of old notions and a certain amount of fully grasping why the new notions are useful.

I think a lot of the problem is that the methods used to teach computer programming are pretty poor in general. OOP is so common now that it's not as noticeable, but you still see it often in functional programming:

important concepts are hidden behind odd names (FP: What's a monad? OOP: Why do they call them functions sometimes and methods other times?)

odd concepts are explained in metaphor instead of in terms of what they actually do, or why you'd use them, or why anyone ever thought to use them (FP: A monad is a spacesuit, it wraps up some code. OOP: An object is like a duck, it can make noise, walk and inherits from Animal)

the good stuff varies from person to person, so it's not quite clear what will be the tipping point for any student, and often the teacher can't even remember. (FP: Oh, monads let you hide something in the type itself and carry it on without having to explicitly write out what's happening each time. OOP: Oh, objects let you keep the functions for a kind of data with that data.)

The worst of it is that, as the question indicates, some people will immediately snap to understanding why the concept is good, and some won't. It really depends on what the tipping point is. For me, grasping that objects store data and methods for that data was the key; after that, everything else just fit as a natural extension. Then I had later jumps, like realizing that a method call from an object is very similar to making a static call with that object as the first parameter.
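That last jump is easy to demonstrate in Python, where the two call forms are literally interchangeable (Counter is an invented example):

```python
class Counter:
    def __init__(self):
        self.value = 0

    def bump(self, by):
        self.value += by
        return self.value

c = Counter()
c.bump(2)           # the usual method-call syntax...
Counter.bump(c, 3)  # ...is the same as passing the object explicitly
print(c.value)      # 5
```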

The little jumps later on help refine understanding, but it's the initial one that takes a person from "OOP doesn't make sense, why do people do this?" to "OOP is the best, why do people do anything else?"

I particularly hate the metaphors; more often than not, they confuse rather than describe.
–
Lie Ryan Nov 1 '10 at 17:20

2

"Then I had later jumps like realizing that a method call from an object is very similar to making a static call with that object as the first parameter." I wish this were emphasized more from the beginning, as it is in languages like Python.
–
Jared Updike Nov 1 '10 at 20:43

1

My problem with using metaphors to explain concepts is that the teacher often stops at the metaphor, as if that explains the entire thing. No that isn't a full explanation; that's just an illustration to help us wrap our heads around the actual explanation.
–
jhocking Jun 29 '11 at 15:36

Sure, people can get used to the mapping of "left" as 270, and yeah, saying "Car.Turn" instead of "turn the car" isn't such a huge leap. BUT, to deal well with these objects and to create them, you have to invert the way you normally think.

Instead of manipulating an object, we're telling the object to actually do things on its own. It may not feel difficult any more, but telling a window to open itself sounds odd. People unused to this way of thinking have to struggle with that oddness over and over until finally it somehow becomes natural.

Good insight. I think the problem is that in real life, there's not that much an "object" can do that isn't relating to other objects. OO works well so far as objects are told to modify their internal state: rectangle.enlarge(margin_in_pixels), but I realized the limits years ago. One day we programmers were installing hardware. Someone wisecracked "screw.turn". Funny, but it got me thinking: sure, a screw can change its orientation, but it's really an operation between cabinet and screw; neither object can do the task itself. OO just isn't good enough.
–
DarenW Nov 13 '10 at 0:18

Teaching OO from the beginning is not really a bad idea within itself, neither is teaching procedural languages. What's important is that we teach people to write clear, concise, cohesive code, regardless of OO or procedural.

Individual methods in good OO programs DO NOT tend to be procedural looking at all. This is becoming more and more true with evolution of OO languages (read C# because other than C++ that's the only other OO language I know) and their syntax that's getting more complex by the day (lambdas, LINQ to objects, etc.). The only similarity between OO methods and procedures in procedural languages is the linear nature of each, which I doubt would change anytime soon.

You can't master a procedural language without understanding data structures either. The pointer concept is as important for procedural languages as for OO languages. Passing parameters by reference, for example, which is quite common in procedural languages, requires you to understand pointers as much as it's required to learn any OO language.

I don't think that design patterns should be taught early in OO programming at all, because they are not fundamental to OO programming at all. One can definitely be a good OO programmer without knowing anything about design patterns. In fact a person can even be using well-known design patterns without knowing that they are documented as such with proper names and that books are written about them. What should be taught fundamentally is design principles such as Single Responsibility, Open/Closed, and Interface Segregation. Unfortunately, many people who consider themselves OO programmers these days are either not familiar with these fundamental concepts or just choose to ignore them, and that's why we have so much garbage OO code out there. Only after a thorough understanding of these and other principles should design patterns be introduced.
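As a rough illustration of one of those principles, Interface Segregation, here is a Python sketch (the device classes are invented):

```python
from abc import ABC, abstractmethod

# Small, focused interfaces instead of one fat "do everything" interface.
class Printer(ABC):
    @abstractmethod
    def print_doc(self, doc): ...

class Scanner(ABC):
    @abstractmethod
    def scan(self): ...

# A basic printer implements only what it can actually do;
# it is never forced to stub out scan().
class BasicPrinter(Printer):
    def print_doc(self, doc):
        return f"printed: {doc}"

# A multi-function device opts into both capabilities.
class MultiFunctionDevice(Printer, Scanner):
    def print_doc(self, doc):
        return f"printed: {doc}"

    def scan(self):
        return "scanned one page"
```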

To answer the original poster's question: yes, OO is a harder concept to understand than procedural programming. This is because we do not think in terms of properties and methods of real-life objects. For example, the human brain does not readily think of "TurnOn" as a method of the TV, but sees it as a function of a human turning on the TV. Similarly, polymorphism is a foreign concept to a human brain that generally sees each real-life object with only one "face". Inheritance, again, is not natural to our brains. Just because I am a developer does not mean that my son would be one. Generally speaking, the human brain needs to be trained to learn OO, while procedural languages are more natural to it.

+1 - I too don't think design patterns are necessary for OOP teaching at the fundamental level, you can be a good OOP programmer and not know any design pattern. On the flip side you tend to see known design patterns emerge naturally from good OOP programmers. Design patterns are always discovered, not invented.
–
Gary Willoughby Nov 2 '10 at 20:10

1

+1 - Great answer. I like the 4th point you have stated. It is very true that one can be a good OO programmer without knowing what design patterns really are!
–
Karthik Sreenivasan Jan 24 '12 at 5:47

Because the basic explanation of OOP has very, very little to do with how it's used in the field. Most programs for teaching it try to use a physical model, such as "Think of a car as an object, and wheels as objects, and the doors, and the transmission ...", but outside of some obscure cases of simulation programming, objects are much more often used to represent non-physical concepts or to introduce indirection. The effect is that it makes people understand it intuitively in the wrong way.

Teaching from design patterns is a much better way to describe OOP, as it shows programmers how some actual modeling problems can be effectively attacked with objects, rather than describing it in the abstract.

I think many programmers have difficulty with upfront design and planning to begin with. Even if someone does all the design for you, it is still possible to break away from OOP principles. If I take a bunch of spaghetti code and dump it into a class, is that really OOP? Someone who doesn't understand OOP can still program in Java. Also, don't confuse difficulty in understanding with unwillingness to follow a certain methodology or disagreement with it.

You should read Objects Never? Well, Hardly Ever. (ACM membership required) by Mordechai Ben-Ari who suggests that OOP is so difficult, because it's not a paradigm that's actually natural for modeling anything. (Though I have reservations about the article, because it's not clear what criteria he feels a program needs to satisfy to say that it's written on the OOP paradigm as opposed to a procedural paradigm using an OO language.)

The hard part comes in doing it well. Where to put the cut between code so you can easily move things to the common base object, and extend them later? How to make your code usable by others (extend classes, wrap in proxies, override method) without jumping through hoops to do so.

That is the hard part, and if done right can be very elegant, and if done badly can be very clumsy. My personal experience is that it requires a lot of practice to have been in all the situations where you would WISH that you did it differently, in order to do it well enough this time.

I'd done GW-Basic and Turbo Pascal programming a fair bit before being introduced to OO, so initially it DID do my head in.

No idea if this is what happens to others, but to me it was like this: my thought process about programming was purely procedural. As in: "such and such happens, then such and such happens next", etc. I never considered the variables and data to be anything more than fleeting actors in the flow of the program. Programming was "the flow of actions".

I suppose what wasn't easy to grasp (as stupid as that looks to me now), was the idea that the data/variables actually truly matter, in a deeper sense than just being fleeting actors in program "flow". Or to put this another way: I kept trying to understand it via what happens, rather than via what is, which is the real key to grasping it.

I actually have a blog called "Struggles in Object Oriented Programming," that was born out of some of my struggles with learning it. I think it was particularly difficult for me to understand because I spent so much time using procedural programming, and I had a tough time getting my head around the idea that an object could be represented by a collection of attributes and behaviors (I was used to simply a collection of variables and methods).

Also, there's a lot of concepts that make a language object oriented - inheritance, interfaces, polymorphism, composition, etc. There really is a lot to learn about the theory of it before you can actually write code effectively, and in an object-oriented way, whereas with procedural programming, it's simply a matter of understanding things like memory allocation for variables, and entry point calls to other methods.

I don't think it is difficult to understand, but it may be that a lot of the programmers asking about it are new to the concept, coming from procedural languages.

From what I have seen/read, lots of people (in forums at least) look for a 'result' from OOP. If you are a procedural programmer who doesn't go back and modify or extend your code, it can probably be hard to understand the benefits.

Also, there is a lot of bad OOP out there, if people are reading/seeing that then it is easy to see why they might find it difficult.

IMO you need to wait until it 'clicks' or be taught by someone with real knowledge, I don't think you can rush.

If you come from an academic setting, the types of toy programs you write there really seem pointless in OOP as well. I started learning C in college, and that was easy enough to grasp because every program was 1-page, less than 100 lines. Then you try to learn OOP and it requires all this baggage and overhead of objects just to do the same thing, and it seems pointless. But until you've written a program across many files and a few thousands lines, it's hard to see why any programming style is useful.
–
CodexArcanum Nov 1 '10 at 14:07

Motivation. It's harder to learn something when you don't see why, and also when you can't look at what you did and figure whether you did it right or not.

What's needed is small projects that use OO to do useful things. I'd suggest looking through a book on design patterns and coming up with one that is obviously useful and works well with OO. (I used Strategy the one time I tried it. Something like Flyweight or Singleton would be a bad choice, since they're ways of using objects in general, not ways of using objects to accomplish something.)
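A minimal Strategy example along those lines might look like this in Python (the shipping scenario is invented):

```python
# Strategy: the varying behavior is handed to the object as an
# interchangeable part, instead of being hard-coded into it.
class Shipping:
    def __init__(self, pricing):
        self.pricing = pricing  # the strategy; a plain callable here

    def cost(self, weight_kg):
        return self.pricing(weight_kg)

def flat_rate(weight_kg):
    return 5.0

def per_kilo(weight_kg):
    return 1.5 * weight_kg

cheap = Shipping(flat_rate)
metered = Shipping(per_kilo)
cheap.cost(10)    # 5.0
metered.cost(10)  # 15.0
```

The payoff is concrete and visible to a student: swapping pricing policies requires no change to Shipping itself.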

I think it depends on age (age as a proxy for experience) and, more importantly, interest. If you're "young" (i.e., green, perhaps) and you've never thought any other way, it seems quite straightforward. On the other hand, if you think it's the coolest thing you've ever seen -- happened to me at age 28 or something -- it's easy to grok.

On the other hand, if you think, as many of my Java students did, "why are we learning this, it's just a fad," it's practically impossible to learn. This is true with most technologies.

@Murph to be honest these guys were mainframe programmers whom they were trying to convert to Java. It was actually an amusing and unique experience (not normal fare for my Java Trainer days). It is true, however, that they had seen many things come and go, but obviously OO is more than a fad. On a separate note: I was kind of hoping this Unit Testing thing would blow over as a fad, but now I find that I have a lot of unit tests to write ... :)
–
Yar Nov 2 '10 at 23:36

Terminologies were my bump in the road when learning the principles of object-oriented programming (POOP). It's when you get a grasp of the fundamentals that the pieces start to fall into place. As with all things, learning new concepts is a little hard.

Agreed that design patterns should be taught at least in parallel with OOP.

The main jump for me was just understanding the abstract concept of OOP. Now, I'm very new to programming in general (I've been programming for a year to a year and a half now), so my introduction to OOP was with Actionscript and Processing. When I first learned Actionscript coding, it wasn't in OOP. I learned to code directly in the Actions panel, and that is how I learned the basic fundamentals of programming (variables, functions, loops, etc.). So I learned it as doing something directly to the stage in Flash or Processing.

When OOP came into things, realizing that I could create methods and properties within an object to use and reuse was a little difficult for me to grasp at first. Everything was very abstract and difficult to process, but it made the programming languages themselves a lot better; it kind of took a leap of faith to make those connections at first.

Regardless which paradigm (OOP, functional, etc.) you choose, in order to write a computer program, you need to know what steps your program will do.

The natural way of defining a process is writing down its steps, for larger tasks you break down the task into smaller steps. This is the procedural way, this is how the computer works, this is how you go through your checklist step by step.

OOP is a different way of thinking. Instead of thinking of a checklist of tasks which need to be done step by step, you think of objects, their abilities and relationships. So you will write a lot of objects and small methods, and your program will magically work. To achieve this, you need to twist your mind...

And this is why OOP is difficult. Since everything is an object, all they do is ask other objects to do something, and those other objects basically do the same. So the control in an OOP program can wildly jump back and forth between the objects.

As someone who is currently learning programming and having some issues in this area, I don't think it is so much that the concept is difficult to understand as are the specific implementations of said concept. I say this because I get the idea of OOP, and I've used it in PHP for about a year, but as I move on to C# and look at other programmers' usage of objects, I find that many people do so in ways that I just don't understand. It is this specifically that has led me down the road to finding a better understanding of the principles of OOP.

Of course, I realize that the issue is most likely my lack of experience with a natively-OOP language, and that as time goes by I will find new ways to utilize objects that will be just as unclear to a new programmer as what I am currently experiencing. Jerry Coffin touches on this a few times, particularly in his comment:

This becomes especially problematic when on one hand you're told that
a big advantage of OOP is modeling reality more closely, but you start
off by taking what looks an awful lot like an LSD-inspired view of
even the most basic and obvious parts of reality.

I find this to be very accurate, as it's the impression I often get when seeing someone creating classes for things that aren't really things - a specific example escapes me, but the closest I can come up with on the fly is treating distance like an object (I will edit the next time I see something that causes this same confusion). At times, OOP seems to temporarily disregard its own rules and becomes less intuitive. This more often than not occurs when objects are producing objects, inherit from a class that is encapsulating them, et cetera.

I think for someone like me, it helps to think of the concept of objects as having multiple facets, one of which includes treating something like an object when it otherwise wouldn't be. Something like distance, with just a little paradigm shift, could come across as a theoretical object, but not one that could be held in your hand. I have to think of it as having a set of properties but a more abstract set of behaviors, such as accessing its properties. I'm not positive that this is the key to my understanding, but it seems to be where my current studies are leading.

A pair of points (or objects) should be regarded as objects, and distance should be taken as a function of those objects. Using objects does not mean you make an object out of everything.
–
Gangnus Jul 28 '14 at 13:54
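The comment's suggestion, points as the objects and distance as a function between them, can be sketched in Python:

```python
import math

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def distance_to(self, other):
        # Distance is not itself an object; it is a value
        # computed from two Point objects.
        return math.hypot(self.x - other.x, self.y - other.y)

a = Point(0, 0)
b = Point(3, 4)
a.distance_to(b)  # 5.0
```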