Scheme has some very important advantages. It has effectively no syntax to learn. It's very consistent (unlike Python). It does not have OO built in, so you can introduce not only how to use OO but how it works at a lower level (with state and messages).

Unlike Python, Scheme is not hostile to functional programming. It also makes mutation more evident--in Python, mutation is omnipresent and built right into the language; in Scheme you have to use functions like set! and set-cdr! so the mutation is more explicit.

However, just because it supports functional programming, Scheme does not eschew imperative programming or OOP. In fact, it can support either fairly uniformly--you write your code in a similar fashion; the only thing that changes is what the code means. That is, not only does it support OOP, but it does so in a way that isn't built into the language or specified as part of the syntax.

Scheme is also great because it has very little incidental complexity. This means that you can concentrate on the fundamental ideas you're teaching rather than on grappling with language issues. Additionally, it makes writing an interpreter really easy, which is very instructive about the nature of computation.

So: Scheme is simple, elegant and flexible. It's a great choice to not just teach programming but also the fundamental ideas behind computation.

I definitely agree that ML would also be a good choice. However, I still think Scheme is better.

In particular, I suspect Scheme is better for teaching about other paradigms like OOP. Additionally, while I personally love statically typed languages for actual development, I think a type system is just extra complexity in the very beginning. The main reason I like Scheme is that it's extremely simple, with surprisingly few concepts getting in the way of your programming. Once you've spent a little time learning how to program with Scheme, learning about static typing (and good static typing at that) becomes very important--which is where ML or Haskell comes in.

However, I did find one major advantage to learning ML (well, in my case, Haskell): pattern matching and the syntax for functions made understanding recursion much simpler. Separating the function into cases was much more intuitive than just using if or cond for the same purpose, I found.
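For instance (my own toy example, not from any particular course), here's list length in Haskell--the base case and the recursive case are two separate, visible equations rather than branches of a conditional:

-- each case of the recursion is its own equation
len :: [a] -> Int
len [] = 0
len (_:xs) = 1 + len xs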

Interesting. I also always thought that learning to program should be done
low-to-high: asm, C / Forth, maybe Java, then Python / Ruby, LISP/Prolog, ... As for where OOP belongs, I think it's certainly after functional programming. I myself learned/understood the power of OOP after learning Python and was quite at a loss with C++ "OOP". However, it is important to learn the imperative way (and fundamental algorithms) before OOP. This makes: imperative-functional-OO-logical.

OTOH, some applied programming done by advanced users may never require anything but using ready-made objects. E.g., in environments like Squeak. But this may be a very special case.

In a certain sense, the high-level languages (like Scheme but not Java or Python) are as fundamental as the low-level ones. The difference is that the low-level ones are fundamental because that's how the computer works--they describe how your program runs. The high-level ones are fundamental because that's how the math works--they describe what your program means.

Is this for purely philosophical reasons, or have you had success teaching students "low-to-high"?

Also, why start at assembly? A computer science program will typically go down to at least the gate level, and if you take it as part of an engineering or physics program, it could go much lower. Why did you pick that as the "low"?

I ask because a similar order was tried for me, and it did not work at all. After learning assembly, I was completely unprepared for something as (relatively) complex as C.

I would agree but only if the new programmer is at a high-school level or younger. I started learning programming in college and we jumped into OOP immediately (with Java). OO is used in so many places that it would be foolish to ignore it for too long.

I started learning C++ on my own while I was in high school, at age 16. It was my first programming language and I had a good experience with it.

I wanted to program a game at the time and OOP just seemed to make perfect sense. Objects could be used to represent actual things in the game world. Methods would implement behaviors of game objects, etc. I later learned that Simula, the original OOP language, had been invented with that kind of usage in mind (simulations).

Lots of people, perhaps especially the functional (read: Scheme) community, love to bash OOP, but it definitely has its uses. Like many other things, it's a question of balance. Making everything OOP, being forced to use objects for everything, makes little sense. Functional programming has cool applications, but perhaps functional everything is not ideal either.

To be clear, it's mutable objects that we hate. The idea of an object as a thing composed of other things is fine; what we don't like is the things inside being subject to change. Pretty simple in concept. We are perfectly happy to use immutable objects in FP langs.

The issue of forcing OOP is exactly why I prefer C++ to languages like Java. WTF does main() have to do with a class? Forced oddities like that are what makes things confusing, not the structure. You can also get monstrosities of static functions that don't really belong in a class.

The reason to become fanatical is out of compassion. We who like to think a lot about programming languages believe that we perceive the magnitude of the negative systemic effects of using tools that represent concepts in a way that does not promote rigor and quality. That said, the rational among us should acknowledge that our expertise and focus in the area can lead to a very non-pragmatic bias that colors the programming world perhaps too extremely. It is easy to get carried away with things that don't matter to most people, even when we think they are direly important for advancing our discipline.

I learned C++ at about the same age after a childhood using BASIC, and I hated it and still hate it. It's a terrible language to inflict on someone who isn't getting paid to use it. Why any teacher would select it is beyond me.

If we're thinking of teaching the basics, why not? You have to have the basic skill to separate your code into functions before you can grok OOP properly. It's bad Java style, but maybe a good pedagogical move.

I've (literally, rather than idiomatically) translated C programs into Java before now. I ended up with one class, plus a couple of exception types (to handle translating goto and setjmp), and hacks to emulate things like pointers being used as references, and one method per function, looking very like C except for the occasional capital letter and different memory allocation syntax.

There's pretty much no benefit to doing things this way unless you happen to really need pre-existing code in a particular language, but it can be done.

For the interested, the code. It's horrific Java, but moderately idiomatic C. And the original C, which I didn't write.

Other paradigms have similar ways of partitioning tasks. Most languages have concepts of packages, modules, namespaces, etc. Object orientation gives nothing unique with regards to encapsulation.

In fact, the interaction between inheritance and managing what terms are provided by a package leads to an explosion of reserved keywords and terminology. The reason that OOP is hard for new programmers to learn is that it literally has no reason to be constructed the way it is in modern languages - it makes no sense.

In Java, a method can be public / protected / private / package-protected / static / abstract. This is a surplus of terminology just to define who can see and call what.

Instead, in Haskell, we can just specify what a module exports (giving the capabilities of public / private), and can re-export names from other modules (giving the capabilities of protected). Since we don't have inheritance (because it's kinda broken), abstract and overriding are handled by our interface mechanism (typeclasses).
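As a rough sketch of what I mean (all the names here are made up):

-- Counter's representation is hidden: the MkCounter constructor is not
-- exported, so client code can only go through new/increment/value
module Counter (Counter, new, increment, value) where

newtype Counter = MkCounter Int

new :: Counter
new = MkCounter 0

increment :: Counter -> Counter
increment (MkCounter n) = MkCounter (n + 1)

value :: Counter -> Int
value (MkCounter n) = n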

Programmers use their tools to conquer problems. One of the biggest problems in large projects is partitioning the task, which is where OO's tools for data hiding and encapsulation really shine.

Except it doesn't actually do that in practice, because OO is not temporally aware. Since objects do not guarantee that you will see consistent state when accessing them, the whole thing is a giant farce. The only way this would work is if access to objects was atomic, which is not the case in any mainstream OO language.

It's a problem with OOP in a threaded environment. But even outside threading, mutability presents a huge problem in large code bases, as it becomes very difficult to tell who's modifying what data and when.

When you work with functional data structures all changes are guaranteed to be contextualized. If I have some data and I make some change to it, I create a delta, and I don't have to worry that this piece of data is also being used somewhere else.
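A tiny illustration (hypothetical data, using the standard Data.Map from containers):

import qualified Data.Map as Map

original :: Map.Map String Int
original = Map.fromList [("alice", 1), ("bob", 2)]

-- "changing" the map yields a new version; any code still holding
-- original sees exactly what it always saw
updated :: Map.Map String Int
updated = Map.insert "alice" 99 original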

Referential transparency means that you can reason about pieces of code in isolation. When you have unchecked mutable state you must necessarily understand the entire program to know what each piece of code affects.

FP also deals quite nicely with encapsulation and data hiding, using closures. Actually, closures are so similar to objects that they do most of the same things, except that they usually don't natively support inheritance.
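For example, here's a sketch of the correspondence in Haskell (made-up names; the same trick is usually shown in Scheme)--a closure's captured state is as hidden as any private field:

import Data.IORef

-- makeCounter returns two "methods" sharing a hidden "instance variable"
makeCounter :: IO (IO (), IO Int)
makeCounter = do
  ref <- newIORef (0 :: Int)             -- the hidden state
  let increment = modifyIORef ref (+ 1)  -- one "method"
      value     = readIORef ref          -- another "method"
  return (increment, value)

main :: IO ()
main = do
  (increment, value) <- makeCounter
  increment
  increment
  value >>= print  -- prints 2; ref itself is unreachable from outside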

Sorry to sound like a jerk but a big pet peeve of mine is when people say that things like encapsulation or polymorphism are the great exclusive features of OO, while at the same time not really explaining the point of things like inheritance that actually are exclusive to OO.

(btw, I personally believe that the whole mutable state thing is kind of overblown. Good code will avoid mutability irrespective of paradigm and functional languages just nudge you a bit in the right direction)

Also, mutable state often goes hand-in-hand with being able to point at an object from multiple locations. While this is handy, it also means that when you are writing a function that modifies its argument, you are essentially saying "I have global knowledge of my program; everything that points to this object wants to see this change".

There is an analogy between mutation and manual memory management: when you mutate an object, you're saying "I don't want that old one, I want this new one". In this way, pure languages such as Haskell are supplying you with a more advanced form of garbage collection, in which you don't need to worry about implicitly freeing the old version of the object. This seems like it would lead to a lot of copying of memory, and it does lead to some; however, the optimizer and runtime system can make it efficient: mutations on objects that no one else points to can be implemented as actual mutations.

Congratulations, you've just used encapsulation in Haskell - you have an algebraic data type whose constructors are out of scope everywhere but the file it was defined in, so you can never refer to its implementation.

Sure, OO has tools for encapsulation, but plenty of other languages have equivalent tools. Really, the only thing that makes OOP unique at this point is subtype polymorphism & open recursion, and I'm not convinced either are good ideas.

It is debatable, but when you see the spaghetti-messes that some developers I have encountered manage to create, and the time that is then wasted on maintaining their code, the basic principles of encapsulating things and decoupling things come to seem like very good ideas indeed. For me, the most valuable aspect of OOP is not so much that you can code that a triangle and a circle are both shapes, but that it encourages encapsulation and decoupling and makes code easier to maintain.

Encapsulation isn't an OOP-only thing, and the problem with OOP (or rather with its bigger fans) is that they can only solve a problem with OOP - and not everything needs to be an object.

His suggestion about Python is good, imo, in that you don't need to know about OOP but can transition into it relatively easily, and hopefully through the process you become someone who uses OOP in a reasonable fashion rather than someone who won't write bash because it has no factory methods.

Encapsulation and decoupling are not really tied to OO. The first language that made encapsulation a religion was Ada 83, which is both far from being "pure OO" and much stronger in these areas than OO ambassadors like Java or Smalltalk (you can forbid copying or comparison of objects if you think it is safer, for example).

FP is rather good in this field, too. See how Erlang deals with it, for example.

Yeah, we probably will. FP actually gives you strictly more ways to partition a problem into different units. This is its power, and its source of potential for abuse.

You can literally take any expression tree, select a subset of it, and straightforwardly put the subset you selected into one function, and the rest into another. In other words, any partitioning of code is possible, though some are smaller, and some would be idiomatically considered to be better FP.
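A trivial illustration (my own toy example):

-- one expression...
hyp :: Double -> Double -> Double
hyp a b = sqrt (a * a + b * b)

-- ...and the same expression with one subtree pulled out into its own
-- function; any subtree would have worked
sumSquares :: Double -> Double -> Double
sumSquares a b = a * a + b * b

hyp' :: Double -> Double -> Double
hyp' a b = sqrt (sumSquares a b)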

This allows us to partition code in the way that most naturally matches our mental models, and allows others to construct libraries consisting of the mental model that they find useful for a particular type of problem. But yeah, those with unclear mental models will always write awful code. The good thing is, re-factoring code will lead such programmers to clear mental models. This re-factoring (potentially automated) is more straightforwardly accomplished in FP, whereas in other paradigms it is largely intractable due to the lack of sufficient information about a program's behavior.

the basic principles of encapsulating things and decoupling things come to seem like very good ideas indeed

These principles are not exclusive to OOP, and frankly other paradigms do a better job of facilitating them than OOP does.

For me, the most valuable aspect of OOP is not so much that you can code that a triangle and a circle are both shapes, but that it encourages encapsulation and decoupling and makes code easier to maintain.

Easier to maintain than what exactly? I find functional code is far, far easier to maintain than OO code, because you have referential transparency in the code, and you can reason about pieces of it independently. Meanwhile you can use exactly the same encapsulation techniques without marrying your logic to the data.

Objects, inherently, are little bundles of state. The most trivial example is that somebody's calling a.setFoo affects my calling a.getFoo, even though there is no obvious connection between the two calls. The connections flow to the object (that is, we know which object you call the method on) but not from the object (that is, we do not know who calls the object's methods). This means that others' code can affect my code without an obvious path between the two.

In functional programming, this does not happen. My function can only depend on other functions it references explicitly.

Also, in my experience, functional programs lead to smaller, more atomic units of code. In particular, writing in Haskell and OCaml, I rarely see functions more than five lines long. The really long functions tend to have a bunch of helper functions defined internally (using something like where bindings in Haskell). In OO languages, methods of twenty lines or more are common.

So yes: there is no question that OOP is better than procedural programming for decoupling code and encapsulation, which helps with maintenance. But it's also worse than functional programming.

I wouldn't say that, but it definitely consists of programmers that are somewhat trained to think of complex problems in one way, and I think it does them a disservice when trying to think of things in a "functional" manner -- as an example, some of the Java guys I've worked with flat-out don't know wtf to do with 'enterprise' (that is, well-formed, pattern-heavy, reliable) JavaScript... it just scares the hell out of them.

That's true of anyone that only learns one paradigm, or one approach to structuring code.

I've seen plenty of programmers who learnt at home, who churn out pages of horrendous procedural code. Everything is crammed into a couple of giant functions, endless globals, and tonnes of copy+pasta. The problem is that's all they know.

I think you just showed the strongest possible argument to avoid OOP for new programmers. OO is used in some places. Students often leave college thinking OOP is in the top 5 most important things you'll learn with a CS degree... It's not even top-50.

Functional programming is actually simpler (but not necessarily easier) than imperative programming, so I think it's a better place to start. I spent some time teaching high school kids programming and they uniformly had problems with mutation. In fact, I think complete beginners (with no experience at all) had as many problems with mutation as people do with recursion.

We, as experienced imperative programmers, are all innately familiar with mutation. But somebody starting out isn't. In math, you never see any sort of mutation. To somebody like that (and most new programmers will know at least high school math), a line like x = x + 1 just doesn't make sense.

There is an important difference between OOP and FP: the first is just an extension of imperative programming while the other is a fundamentally different approach to computation. If you've never used mutable state, FP doesn't seem unnatural in the least; if anything, it's closer to what you would expect coming from math.

So, in fact, I think starting out with functional programming is better than starting out with any sort of imperative programming. Coincidentally, this is exactly what SICP does, and does brilliantly--it really is a great introductory text.

I'm trying to teach myself programming. I'd say I'm still a beginner. Jumping around from language to language, many of them imperative, I find I've been gravitating towards the functional languages--Haskell most of all. There was this problem in a Python subreddit where someone boasted about solving a problem in 5 lines of Ruby. In the subreddit, people did it in 5 lines or less of Python. I did it in two lines of Haskell. My Haskell solution was just a series of 5 or 6 functions strung together, and I noticed that 3 of those functions could be pulled out into a single reusable function and used in other programs without change. I also noticed that the Python solutions tended to couple the presentation with the calculation--so they couldn't easily be reused in other programs the way my Haskell solution could, at least not without modification. This revealed to me a very important insight about coupling, which I think would have taken years to understand if I had gone the imperative route. Also, even when I knew what all the functions in the Python solutions did, their solutions felt full of distractions compared to the Haskell approach--though that's just my opinion as a beginner programmer. In the Python solutions, I had to keep a mental register of all the changes in "state" for each function, whereas in Haskell I did not.

This is exactly why I say that functional programming is simpler (as opposed to easier). You're certainly understanding some of the very important ideas in code organization early on :).

This is off-topic and maybe sounds a little condescending, but your comment would be much easier to read if you broke it up into paragraphs. Even if you split the text in a sub-optimal way, it would still be easier to read than in one chunk. So try starting a new paragraph whenever you start talking about a new idea.

And if you really can't figure out where to break it up, throw a newline in every five or six sentences--it's still better than nothing :P.

(*24) and (+5) are functions in and of themselves--I can use either one in other programs. "g" can be used in other programs too. "g" can feed its answer to another function or program that displays the answer however I want (html, text, string, json).
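Roughly, the shape of that Haskell solution looked like this (a reconstruction of the idea, not the exact code):

-- the calculation, free of any I/O
g :: Int -> Int
g = (* 24) . (+ 5)

-- the presentation, chosen separately--print here, but it could just
-- as well be html, text, or json
main :: IO ()
main = print (g 10)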

What I was seeing in the Python solutions was this:

def g(x):
    x = x + 5
    x = x * 24
    print(x)

Keep in mind this isn't the best Python; it's just a simple version of what I was seeing in some of the solutions. To understand this as a beginner, I have to keep a mental tally of what "x" is each time it changes. Also, the print() call was coupled with the function, so if I wanted to display the result in some other way, I'd have to modify the function. In some of the solutions, you could not easily remove the print; doing so would require rethinking the entire solution.

A little while back I was teaching some high school students Java, and they had similar problems with mutation.

The problem is that almost every language and popular learning track pushes mutation at you. By the time you're at a place where you can consider learning a bunch of new languages, chances are you've used mutation a lot. And so the programming community at large believes there is something inherently simple about mutation, when it's actually just cultural.

I've found that this difficulty in keeping track of mutation becomes even more apparent in larger Python projects. Passing a list or object into some function and not knowing if it will be mutated is really annoying. Now that I'm using Haskell and OCaml far more widely, I'm much happier in that regard.

In my quest to learn how to program, I dropped out of my C++ and Java classes. All the professors wanted to do was have us teach ourselves the language by reading their chosen textbook, while in class they taught us "software design"--yet another software design class. I finally just decided to go my own way.

I spent some time teaching high school kids programming and they uniformly had problems with mutation [...] We, as experienced imperative programmers, are all innately familiar with mutation. But somebody starting isn't. In math, you never see any sort of mutation. [...] a line like x = x + 1 just doesn't make sense.

Is it possible the real problem is students already educated in math have trouble un-learning the syntax and style?

I would expect your average non-math-club learner to have a much easier time with imperative programming, since the semantics are much more accessible. Consider the metaphors of baking a cake, an assembly line, or clockwork--all of which involve state changes.

Something missing from the article that may have advocates from either camp up in arms here is a time frame and the level the author assumes the learner is at (is the learner a high school student, a college student, a graduate who suddenly needs to learn to program?)

At school I learned to program with Pascal and procedural programming; a few weeks later, when we were comfortable with it, we moved to C++ and were introduced to objects, and I believe it worked ok.

At college we started with Scheme/Lisp, moved to Perl and then Java. I think that the author is right about getting the OOP paradigm out of the way for a bit of time, making sure he is teaching a mindset and not a language and then introducing the OOP concepts.

I think the real issue is that the way OOP is implemented in all the popular OO languages is fundamentally broken. Each object is essentially a state machine and a DSL for poking at that state implemented as methods on that object. But since the object makes no guarantees that you'll see a consistent state, it's very easy to fuck things up. If accessors were atomic in nature then objects would be a lot more useful. The only language that I know that actually behaves like that is Erlang.

But the cardinal mistake of any OOP programmer is to think in terms of objects. No, for that you need higher-level abstractions--thus the rise of patterns, which are then projected onto objects. So the OOP programmer learns to reason in terms of patterns, mentally project the implementation onto objects, and write his code appropriately.

Thus the common criticism of patterns--half these things don't even exist in non-OO languages. Functional & multi-paradigm languages don't need two discrete vocabularies like that. Instead you get one vocabulary, augmented with emergent idioms specific to the language.

All of this design pattern bashing is really just misdirection. Patterns are simply ways to manage information flow between objects. Patterns are always present in your code, no matter if you're using Java or Haskell. Some languages can do without some classes of patterns because of their stronger abstraction capabilities. But this isn't an indictment of OOP at all. This is a problem of the particular language only.

There is much to be learned from having an index of a set of patterns that are frequently reinvented independently, and a common vocabulary to refer to them. This is all the GOF was trying to do. All this bitching about patterns these days is the biggest strawman in CS.

Object oriented programming is one of the world's greatest module systems. It just isn't a good way to structure code inside the same module, especially if internal communication or state-sharing need to be done.

I don't see how these things are of more practical utility than others. OO allows you to structure programs poorly and not bother to have proper state management, which inevitably ends up being a huge mess in large programs.

I think teaching OOP first only makes sense if you are teaching in a fully OOP environment like Smalltalk or specialized environments like Alice. In these cases, objects are living things that exist in a world that you can interact with. You add loops and variables as ways to make your interactions more sophisticated.

In general purpose OOP languages, objects are abstractions, useful for program organization. But thinking in abstractions is a lot harder for a beginner to do, in which case dealing with concrete things like numbers and strings is a lot better place to start.

I'm the author of the linked article. Good comments everyone! It's interesting to see all the different backgrounds folks are coming from and where they place OOP in the continuum of necessary skills.

I would like to point out that the article isn't anti-OOP by any means. If you've ever taught a programming class, you'll know how much effort it can be to get students thinking about designing algorithms and solving problems. OOP doesn't help with that initial hurdle at all; it just slows down the class as a whole.

I like your example because it lends itself to common knowledge. Someone posted this comic in r/education yesterday and I wonder if this comic wasn't somehow the precursor to the "don't distract new programmers with OOP" post. Whatever the case, it is true that there is often a usability gap in intro to programming texts. There are probably many reasons for it but I think much of it has to do with the advanced programmers writing these intro books subconsciously not wanting to look like a simpleton amongst peers.

At my college the first semester was pure C. After that there was C++ and we branched out to Java and C# after that. I think it worked on the right timeframe: about the time you start being most interested in grouping behaviour into modules, you get introduced to objects, and with your fundamental knowledge it should simply 'click' with your desire to arrange behaviour.

I think the big thing is learning to string lines of code together successfully before worrying too much about how to structure that code. I've known too many programmers who become obsessed with architecture, yet they can't write functioning code to solve even trivial problems.

Learn how to make it work at all before you worry about making it right.

Seriously, a basic spellchecker can be implemented in a few lines of Python.

As the linked article suggests, this is mostly (but not entirely) because your computer today has several gigabytes of RAM (i.e., orders of magnitude more than a plaintext dictionary file takes), not because Python has any special magic in it for writing spellcheckers.

Nobody writing a spellchecker in C today would write it like a programmer in 1985 did. The old constraints are gone. Time spent fitting a dictionary into 256 KB of RAM would be a phenomenal waste.

You can write a basic spellchecker in a few lines of C, as well, especially if you're willing to live with compile-time static limits, O(n) lookup performance, or using a third-party library (any of which is entirely reasonable for a first program).

Python's a good first language, definitely, but even "C on an average computer in 2012" is a far, far better language (first or fiftieth) than "C on an average computer in 1985".

Why not teach people using a language in which OO is an integral part, and not just a graft that was added later on? If there is NO way to do something but OO, and you start using objects very early on, even if they are premade (such as Fixnum in Ruby), isn't it easier to pick up the concept?

Programming is complicated enough without introducing OO at the beginning. To actually understand OO programming you need to understand how to decompose a problem into functions, what a variable is, memory (stack and heap) and so forth.

I disagree with the author's decision that Python is the best first programming language though. I believe Python lets the writer get away with too much and it makes understanding the code too difficult. If I'm a new programmer I might be able to write a program in Python, but will I really understand what is going on? Probably not.

My favourite language for new students is Pascal. I think it's perfect for learning to program because it's typed and has a relatively easy-to-understand syntax. One of my favourite features of Pascal is its reference types: they let you teach the general concepts of pointers without worrying about the actual syntax of pointers, in an easy-to-understand manner (using the var, const, out keywords rather than symbols like & which mean nothing to new programmers).

Pascal makes talking about programming easy with regards to the loops because they follow normal English patterns (while x do... for 0 to 10 do... repeat... until y).

I'm obviously a Pascal nut... but there you go! I guess everyone loves the first programming language they learnt.

Pascal is not too bad as a first language, but I think Scheme or Standard ML (maybe simplified) would be better options. Basically, I'd pick a simple language that supports both functional and imperative programming.

I think understanding what is going on is not important at all when you're learning a language. It comes when you need to actually do stuff. If you don't require it for what you're doing, why should you learn it in the first place?

I don't think so. JavaScript is considered dynamic and weakly typed because of its implicit type conversions. I'm not sure if JavaScript stores a variable's type in memory or if it always evaluates its type just as it's needed, but I don't think that detail matters in deciding whether the language is considered weakly typed.

I agree with the other commenters that weak vs strong typing is more about how easy it is to convert values from one type to another.

For example,
Python's types are strong and dynamic
Haskell's types are strong and static
PHP's types are weak and dynamic
C's types are weak and static (it's easy and common to do lots of casting, and integer types have lots of coercion rules)

Python doesn't convert on the fly between anything, which is the main defining difference between strong typing and weak typing. Perl is very advanced in weak typing, turning a string into an array on the fly when you use it as such.

I mainly resent the use of + and * to mean something so far from their mathematical meaning, but in any case statically typed languages can easily accommodate that code. Of course most of them would not (out of the box) because + in a sane language would be a commutative operator.

Haskell also lets numeric literals be any numeric type you want, but we only have instances for Int here, so let's just force the values we use to be Ints as a quick workaround, by applying the i function on them:

i :: Int -> Int
i = id

Now that our silly library code is out of the way, we can replace the
Python code:

I'd pick Python over Pascal for a kid because I get the feeling it has more real-life usage these days (I could be wrong), but what I'd probably pick over those two would be Corona / Lua... as it lets you create games for Android and iOS, which might be terrific motivation (take the little games you made to show around in class, for instance). Adults [here] don't like that it's not open source, but your kid can still take up all the open source tools they want later on once they are on their way to being great programmers.

I agree with this, though I also say just teach them C++ and don't teach them classes for a couple weeks.

The author's rationale is fucking stupid: "It's easy to do a spell checker in Python". So we're supposed to pretend programming is easy for new programmers? Programming isn't calling a few functions; programming is figuring out what the problem is (remember, it's not always the stated problem), what the solution is, and how to design the code around both.

Python is a good language for scripters, but if you're trying to be a programmer teach them a language they might actually use or that is similar to the code they'll be writing for the rest of their life (Pascal).

I'm a fan of Pascal myself (although not my first language), but I teach programming (ages 6-10) with JavaScript. The ability to do things in a browser wins out over most pros and cons.

I introduce objects fairly early on but avoid most of OOP. I'm a fairly firm believer in teaching things that address problems that the programmer has already encountered. That way you are not giving them a new thing they have to remember but making an existing situation easier.

I agree with this article that OOP isn't necessarily a good place to start. I don't agree that Python is the place to start. He mentions C, but dismisses it, saying simple tasks are too much effort in C... this may be true, but the things you'll learn by accomplishing those tasks in C are worth it. Most common modern languages are derived from C (C++, Objective-C, Java, C#). If you get a base in C, then the OOP concepts that come along in C++ and beyond are easier to comprehend.

Python is a great place to get started with scripting and web development, but I disagree that it's the place to start overall. There is too much to gain by learning C first.

I helped build a tutorial website using this very concept, C to C#. It's http://www.wibit.net and we go C>>C++>>Obj-C>>Java>>C#.

Classes are useful things and should be introduced soon. People who are naturals will start trying to group their behavior and their data early on. This is a pattern people use in every language and have since before the structured programming revolution. It's not unique to object-oriented programming.

You see, writing a class isn't about architecture. The whole cult of Object Oriented Programming is built on lies, among them the myth that the abstraction of object hierarchies will engineer your software. That doesn't end up being a way to work at all.

Don't try to hide classes or soon people will just be implementing them haphazardly using functions with tons of arguments or using global state. (There's something to be said for letting them do a little of this at first, but that's not because classes are bad; it's because something similar to classes is essential and that's how they are comfortable thinking about it at the moment).

So yes, don't distract them with OOP. But do empower them with grouping data and operations. That's a pattern that is used by all software, even non-OOP software. If you keep a feature from them for a while, let it be inheritance.

It would be handy if we taught OOP rather than OO orthodoxy. It is literally nothing more than packaging data and functionality into a coherent unit to confine functionality to appropriate scopes. The key understanding is the scoping. We write classes to minimise the number of things we need to think about at once. We also write classes to keep things that commonly need to be considered together in the same place. Not enough thought is given to this (mainly because it is hard to mark, and lots of lecturers haven't done enough OOP to actually understand the nuances of what is effectively about communication rather than computational theory).

Scoping is such a huge part of programming and so under treated in academia. All that is taught is often the raw "private/public/protected" stuff which is just mechanism. The mechanism is hardly important at all. Access modifiers convey intent but understanding the intent is what is important.

Instead schools spend weeks on end teaching about inheritance. Something hardly relevant and something you should be taught to be frightened of.

It should be a little bit more than that. If you think it through and group your data and methods correctly, it makes your code highly readable and you can make use of all sorts of abstractions across your data.

TSRH. Information hiding + scope are two extremely important aspects of writing good software. OOP simply provides a mechanism for implementation. It's when you stop thinking in terms of encapsulation + scope, and start thinking in terms of objects themselves, that shit hits the fan.

When students are still at the stage of trying to understand language syntax and basic control structures like loops and conditionals, I think it's best to keep OOP away. They are not at the point where they can start looking at abstracting concepts within programs.

Once they are past that point and they are structuring and architecting non-trivial programs (non-trivial from a new student's point of view), then it is worth informing them of all the effective techniques of abstracting problems in programs, like OOP (among others).

The whole cult of Object Oriented Programming is built on lies, among them the myth that the abstraction of object hierarchies will engineer your software. That doesn't end up being a way to work at all.

This. THIS, RIGHT HERE.

There is absolutely nothing wrong with classes -- packaging state and the functions that work on that state should be a fundamental aspect for any person learning procedural programming.

What could probably be avoided are those unnecessarily in depth discussions on inheritance that have jack-all to do with any real-world programming.

Except that it's more nuanced than that. I think maybe @rocco chose the wrong phrase. What I think (s)he meant to say was "don't distract them with object oriented analysis", something that's almost always taught at the same time as objects and classes themselves. Teach the concept of custom types, messages/methods, and maybe even encapsulation as part of learning to program. Avoid the omnipresent Cat > Mammal > Animal > Organism inheritance hierarchy that only ever confuses programmers and isn't actually useful for real world programming anyway.

"It is better to have 100 functions operate on one data structure than to have 10 functions operate on 10 data structures." - Alan J. Perlis

Classes are the opposite of that idea: you should group related functions into namespaces, but you should leave data as data. As soon as you start grouping functions and data together you end up with a mess, where each class is essentially a DSL with its own quirks, when fundamentally it's just a dictionary that any function should be able to work with.

"It is better to have 100 functions operate on one data structure than to have 10 functions operate on 10 data structures." - Alan J. Perlis

As a longtime Scheme programmer and Haskell student, Perlis is really, really wrong about this, and the slavish worship of this stupid quote has only hurt Lisp by encouraging the "all data can be encoded as cons pairs" disease that way too many Lisp programmers suffer from (and which somehow inevitably I'm the one who ends up having to clean up after).

Record types are more fundamental than lists. Heck, cons pairs are a record type.
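In Haskell terms (a toy sketch):

-- a cons cell is just a two-field record...
data Pair a b = Pair { first :: a, second :: b }

-- ...and a list is what you get by iterating that idea
data List a = Nil | Cons a (List a)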

Nonetheless, there's a problem with OOP in this regard, in that in practice it leads to an abuse of class-based object graphs and functions/methods excessively specialized to their details. The alternative is the explicit use of a wide variety of generic data structures (lists, arrays, various types of trees, etc.) and generic operations on these types (folds, maps, filters, etc.)

This is so bad, I don't even know where to start. The point of classes and information hiding is that the object can enforce certain constraints and make guarantees about the data as a consumer is accessing it. A dumb hash cannot do any of that. The burden is then shifted onto the consumer to use the data correctly following any implicit rules and constraints. This is a Very Bad Idea(tm).

By the time you group data, you're already grouping data and functionality. Some languages make you mention it different places, but that's the necessary result of grouping the data together to use as a group.

The proliferation of types is a problem, but writing out a small number of useful types is a good response to that problem. Ignoring a language's feature for grouping is counterproductive, even in the reasonably-short run.

As soon as you start grouping functions and data together you end up with a mess, where each class is essentially a DSL with its own quirks.

This isn't any more true when you create something to contain your state than when you don't.

While fundamentally it's just a dictionary that any function should be able to work with.

By the time you group data, you're already grouping data and functionality. Some languages make you mention it different places, but that's the necessary result of grouping the data together to use as a group.

That is absolutely wrong. Data is data, and the context you're using it in is what's relevant for the grouping of functionality. The same data may be used in many different contexts, and it's plain idiotic to associate a particular set of functions with it permanently.

For example, you might have a database of users and when a user does user centric tasks the workflow is completely different than when you might be doing statistics gathering on your users. Yet, it's the same user data!

Grouping functions into namespaces groups them by context, and those functions can now be reused for many different scenarios, as opposed to just the one you happened to run into when first writing them.
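A sketch of what I mean (all names made up):

data User = User { name :: String, age :: Int }

-- the "user-centric workflow" context
greet :: User -> String
greet u = "Hello, " ++ name u

-- the "statistics gathering" context: the very same data, grouped
-- with entirely different functions (assumes a non-empty list)
averageAge :: [User] -> Double
averageAge us = fromIntegral (sum (map age us)) / fromIntegral (length us)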

This isn't any more true when you create something to contain your state than when you don't.

Actually, yes it is, because now you have to learn functions specific to your class and all their quirks. When you have a set of standard functions and data structures, they always behave the same.

Yes, the author isn't saying NEVER show them it. But give them time just writing simplistic functions, loops, and recursion. Classes come when you start working with data you need to store.

However, I disagree with the author about starting in Python. Why start in a language they will likely never use? He says "it's easy to do a spellchecker", but easy code isn't good code. A programmer needs to understand that they will work for a living if they want to be a programmer. I'd say start in C++ but introduce classes later.

Basically teach them how to code rather than architecture from the start. And then add in the architecture soon after that.

In my opinion it would be simple input/output (because what's the point without that), functions, loops, recursion, and then structs/classes.

Ehhhhh. I can't quite figure out if this guy doesn't want people who are just learning to program to learn OOP, or if he doesn't want anyone ever to learn OOP. Yes, there are plenty of pitfalls and mistakes to be made, but you should learn how to avoid them. I mean, yeah, if someone's brand new to programming I'm not going to slam a copy of Design Patterns down in front of them, but surely we agree that OOP is a useful way to decompose problems, and that people who really want to learn programming should learn it at some point?

He's advocating that people learning to program for the first time should stay away from OOP because that introduces noise to the signal, as OOP has more to do with architectural concerns than the nuts and bolts of programming: conditionals, loops, function calls, etc.

I don't think your language distinctions are accurate: C, Haskell, and Java all share a core of variables, assignment (one-time in Haskell), conditionals, loops, function calls, records, pointers/references, data structures and collections, and higher-order functions (even if awkwardly so in C and Java).

One could quite easily program in a procedural style in Java by just using statics and treating classes as mere namespaces. All the rest of the OOP in Java is clearly additional mechanism that the learner can ignore (even if it would produce un-Java Java code).

Nope. Haskell does support a bunch of those things, but they aren't fundamental to the language.

At its core, Haskell is just an extension of the simply typed lambda calculus. Even in practice, this is pretty clear: despite the fancy features and syntax, it's still pretty evident that you're basically using the lambda calculus underneath.

This means that the core abstractions are: lambdas, applications, variables and types. Loops, conditions, records, pointers, references and so on are not fundamental.
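A small taste of what that looks like (toy example):

-- 'let x = e in b' is the same program as '(\x -> b) e'
example :: Int
example = let x = 2 + 3 in x * x

example' :: Int
example' = (\x -> x * x) (2 + 3)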

Yes, that's true. That said, System F is an extension of the simply typed lambda calculus and Haskell is now actually based on System FC, which is an extension of System F.

However, for this discussion, it's simpler to think about the simply typed lambda calculus. Anybody familiar with the untyped lambda calculus (and that should be everyone :)) can easily guess how the simply typed lambda calculus works just from its name; people unfamiliar with the field would not glean anything from the name "System F" and people familiar with it know about the distinction already.

And a Java programmer might tell you that the nuts and bolts are method calls, object references, conditionals, loops, collections.

Method calls = functions, object references = pointers, collections = arrays. People might use fancy names, but they're really just variations on a theme. The point is to start with the fundamentals. Imperative will always be easier on the newbie than functional, and the fundamentals of imperative languages are always the same, even if they are called by different names or at different levels of abstraction.

If somebody doesn't understand flow control at a conceptual level, the difference between blocks, functions, methods and closures will be lost on them.

Haskell programmers don't actually like conditionals (that is, people think the if statement should be removed). Instead, you can implement if and such in terms of pattern matching. And pattern matching is just a clever pattern of lambdas (from a theoretical standpoint).

So conditionals are not fundamental.

In general, for functional programming, the fundamental bits are functions, applications and variables. For a language like Haskell, there are also types. Everything else is just syntax sugar :).
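To make "conditionals aren't fundamental" concrete, here are Church booleans--a standard toy encoding from the lambda calculus, not something Haskell itself does:

-- a Church boolean just selects one of its two arguments
true, false :: a -> a -> a
true  x _ = x
false _ y = y

-- "if b then t else e" becomes plain application: b t e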

Haskell programmers don't actually like conditionals (that is, people think the if statement should be removed).

Ok, I've been following Haskell for several years now, and I don't recall people claiming anything like this.

Instead, you can implement if and such in terms of pattern matching.

And I'm going to step in and call this a terrible idea. Why? Because to pattern match on a type you need to export its constructors, which leads to clients being bound to implementation details.

If you wanted to eliminate if, the superior option is just to use laziness and higher order functions in its place, not pattern matching. For example, the if ... then ... else ... construct can be replaced with this function:

ifte :: Bool -> a -> a -> a
ifte True x _ = x
ifte False _ y = y

And in many cases where you use pattern matching to take apart a type, you can use higher-order functions like foldr. Note the type and implementation of foldr:
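-- the standard definition:
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr _ z []     = z
foldr f z (x:xs) = f x (foldr f z xs)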

I think thinking in objects (beyond toy examples) is, in itself, rather hard to learn. If you start from there, it will take a long time before you write (and understand) your first non-toy program, which can be rather de-motivating.

The "do this, then do that" state of mind is probably easier to grasp than the "first find the classes you will use, then make a hierarchy tree, then describe all the methods you will need ; be sure you understand why you use methods rather than classes, composition rather than inheritance, and why foo is a method of A with an argument of type B rather than a method of B with an argument of type A".

Then, after a few "do this, then that" programs, the learner soon realizes that it does not really work when programs get too big: he needs a way to organize stuff. That's a good time to introduce OO.

I agree that programmers who are learning should not turn to condescending sources of information only, but I question the value at looking at more real-life examples in your first steps into programming, which is what the article was about.

As a side note, I went back and edited the 'Greeter' package twice while I was typing it out. Imagine how terrifyingly confusing that sort of thing might be for someone brand new to the whole "programming" thing.

I don't think the author meant never teach OO, but don't start out with "Good morning class, welcome to CS 101. Today, we'll be building our first C++ abstract class."

It seems like it's a lot more logical to teach OOP starting from procedural programming, where you have functions and you pass references to structures. Then, OOP makes sense to people, since it's an easier way to keep everything together. But if you start throwing around OOP concepts before someone is comfortable with procedural programming, that's just a recipe for failure.

As with any idea OOP has advantages and disadvantages, and all the advantages it provides can be achieved by other means just as easily. The way OOP decomposes problems is not unique to OOP, but a lot of problems it introduces are.

And my point is that I'm not sure that has to be the progression. Yes, that's the way most CS classes go, but is that really effective? In my experience, there are 2 main groups of computer science students. The first group has known basic programming since elementary school, it comes naturally. They're bored most of the time in class, they're the ones who help other students figure out their assignments, etc. But the basics are the easy part to learn for them. They know how to get a computer to do what they want, but they don't generally know how to do it in a sustainable, maintainable way. The other group is starting at 0. They don't know the nitty gritty OR the overall architecture. So what we do in CS classes is teach a bunch of nitty gritty to get them up to speed, and we ignore the thing that BOTH groups need to learn.

He is not saying that at all. He's saying learn Python to get a feel for programming. Also, do not use Python for OOP. Maybe I am old school, but I agree with not teaching OOP in Python. Leave that to Java. Teach memory management and threading in C, and then combine all that you learned from the previous languages in C++.

I think that's a pretty limited view of those programming languages. You don't need to learn 4 languages in order to understand those concepts.

I'll reiterate. To me, the "trick" of programming is learning how to go from a problem statement or set of requirements to an architecture with different pieces that fit and work together. Doing that decomposition is in many ways language-agnostic (although a given language may enable a certain structure that another doesn't). I think Python is perfect for that sort of learning.

I would just like to insert that I also hate how OOP is normally taught. "Class square is a subclass of rectangle is a subclass of quadrilateral is a subclass of polygon is a subclass of shape". That sort of thing. And then you just have "perimeter" and "area" functions that are inherited and overridden between those classes. No. Teach students how to use those concepts within their program structure, rather than just with their data.

Which sort of leads to my bigger problem - we teach students how to write code snippets, not software. Then we're surprised when they write "programs" that are just poorly-connected code snippets that work together just enough to give a correct result. The "house-of-cards" or "spaghetti" architecture. Try to change out one card and the whole thing crashes down. Pull on one strand and the whole thing falls apart.

I tutored a HS student for his AP computer science class a while ago. It was shocking to me. He never started with an empty file. All of the structure was already done for him; he just filled in functions. Teaching someone how to think in terms of objects and functions and data structures and patterns takes far more time than teaching them to "fix this line of code so it adds a 20% tip rather than a 15% tip", so shouldn't we start it sooner? Especially since I would still argue that you do NOT have to know how to write a "for" loop in order to start learning architecture?

The first language should be assembly. They should teach Z80 assembly on a graphing calculator. Everything is dead simple. Z80 is the simplest assembly language. Reading inputs requires a few commands. Drawing to the screen is just writing to a specific memory address. Very simple, yet you understand the speed and power of a computer's hardware. From there, as you get more and more complex, you will realize why C is important.

After a few apps in C, you will realize why languages like python and java and the whole OOP is important.

That's the approach for training people how a particular architecture works. I think that if you want people to understand how programming works, you should teach them how lambda calculus works with SICP. Then, once they understand what computation is doing independent of a particular implementation, you can explain how it maps to our current hardware. If you learn functional programming first, it's trivial to understand imperative, but it's very difficult for most people to go the other way.

I think that if you want people to understand how programming works, you should teach them how lambda calculus works with SICP

A lot of people approaching programming for the first time don't have sufficient background in math for that to be a practical starting point for everyone. High schools offer programming classes to 15 year olds who are learning algebra/geometry.

I don't know. When I took SICP, the only prerequisite was one semester of calculus. The professor explained that that prerequisite was there just to scare away business majors; in reality, only a single small example used it: we talked about a derivative function for part of a lecture. Apart from that, there was little math.

Speaking as a student just learning to program, could someone explain a bit more why learning about OOP first is so bad? The CS track at our school has students start learning OOP and Java, and then data structures in Java, and then C/C++ and memory management and algorithms.

Personally, it seemed to me that having learned OOP prior made data structures a lot easier to digest.

I've been tutoring a few 13 year olds in programming for the last couple months and I took a similar approach to the article. I first exposed them to the mindset and basics of computer programming, using loose Python in the very beginning and transitioning to writing larger working programs that they could execute and test (as our goals became more complex). After they had the ins and outs of the basics--loops, arrays, functions, recursion, etc.--I introduced them to Java and OOP. I think they were able to make sense of it a lot more since they were familiar with what code looked like, and they had seen the uses and (debatable) limitations of non-OOP patterns.

I had always thought this kind of teaching method would work really well, and I was excited to experiment with it. Another helpful technique I found was to explain every concept deeply enough that they could relate different aspects of CS to each other. For example, before I introduced ifs and logic, I did a quick crash course in Boolean algebra that eventually led to explaining the electrical components of a circuit and how they are used for logic. No, they are not experts in any of these topics, but they can now relate them and understand computing as a whole better for future learning.
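The whole crash course fit in something like this (a hypothetical Python sketch, not the actual lesson): the truth tables for AND, OR, and NOT are exactly the tables that describe the behavior of the corresponding logic gates.

    # Print truth tables for AND, OR, NOT -- the same tables
    # that describe the corresponding hardware logic gates.
    from itertools import product

    for a, b in product([False, True], repeat=2):
        print(a, b, "| AND:", a and b, " OR:", a or b, " NOT a:", not a)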

I should also note that I introduced those topics early because they had a loose curriculum to cover in preparation for advanced classes in school in later years.

EDIT: Like many of the commenters in this thread, I have often theorized about the best way to not just teach a kid to code but to really teach a kid to PROGRAM and think. Has anyone else had real experience trying different methods with completely new students and seeing what worked best?

I agree with this. When I went to college to complete a diploma in Technical Studies (systems administration, essentially), the program included a little Linux and Java. I bombed Java so hard; it was just difficult for me. I completed my diploma and started working as a sysadmin. Gradually I found a need for scripting and discovered its usefulness. I started with bash, moved on to Perl (reading a book to teach myself), and touched Python a little, but for my needs the Perl and bash combination worked just fine. Then an employee who had written a bunch of applications in Java left, and I needed to learn it. What. A. Breeze. I did the intro course (the same one I took in college) with ease, then the one after that, and once again it was easy. Getting a good grounding in how scripting is interpreted made it easier to push that aside when I started learning Java, since the biggest thing I had left to learn was OOP; I already understood most everything else from scripting.

With all due respect, I fundamentally disagree, mostly on the grounds that OOP methods are so baked into the language that it's almost more effort to avoid them than to simply get the idea across in the first place. Even an array is an object: my_array[2] expresses that I am accessing a numbered attribute of an "object." Getting this simple notion across is not difficult and will only aid their understanding of the fundamentals in the long term. Of course their very first introduction to the language should be quite simple: basic printing and variable assignment. But I really do disagree that avoiding OOP is at all a viable way of actually teaching Python fundamentals.
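To make that concrete (an illustrative snippet; my_array is just a throwaway name): in Python the indexing syntax is literally sugar for a method call on the object.

    my_array = [10, 20, 30]

    # The bracket syntax and the method call are the same operation:
    print(my_array[2])              # 30
    print(my_array.__getitem__(2))  # 30

    # Even integers are objects with methods and attributes.
    print((42).bit_length())        # 6
    print(type(my_array))           # <class 'list'>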

I disagree, mainly because if a person is planning on a serious coding career they are better off starting with a language and OOP. Sure, at first you can write trivial code without OOP in a language like Java and work your way up to OOP-style projects. The problem with a scripting language (Python, PHP, JavaScript, etc.) is that you get used to how easy it is to just add whatever you want and code essentially without rules. That leads a new programmer down the path of "if the other stuff is harder, why learn those languages?", so they make their projects completely disconnected from design principles, and it turns into a pile of smelly crap, as you would see here.

I have personally inherited my own pile of crap code in PHP: zero object design, mostly free-standing functions, for a "dynamic" website-making tool. Copy-and-pasted loops that span 10k lines, with variables like i, j, and k as indexes of who knows what. Scripting is good for prototyping, in my opinion, but when you are building a complex application you had better switch to a language that can be structured so that someone who has never seen the code can start to understand it just by looking through it. You miss all of this when using an "easy" language.

I am in my first year of teaching programming at a community college. We spent one term (10 weeks) working with VB.NET and one term programming Android with VB. This next term I teach C#.

When I taught VB.NET, my plan was to have OOP right up front as one of the early topics. It worked OK, but since we only make small projects that primarily serve to showcase language features and embed the basic skills, it seemed superfluous to the students. When you only have 50 lines of code, you really don't need classes.

So this time with C# I am going to teach all the fundamentals first, then introduce OOP once they have those down. They will be aware of it from the beginning, though, as everything is made of classes and everything is an object.

It seems better to introduce them to the tools of programming first and the structure afterwards than to confuse them by dumping it all on them at once.

I do not agree! I have found that many new programmers who learn only procedural programming have a great deal of trouble adjusting to an OO point of view. I can't comment on Python, but a more structured programming environment like C# or Pascal is what I suggest to new programmers.

OOP is a high-level architectural organizing principle. For new programmers, is it more important to figure out architecture, or basic data structures and control flow? OOP can come later, and will be easier to figure out than knowing when it is appropriate to use a list vs. a map.
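To illustrate the kind of judgment I mean (a toy Python example; the names are made up), the same data fits either structure, and the choice depends on the question you need to ask:

    # A list keeps order and is indexed by position --
    # good when "the first score entered" is the meaningful question.
    scores_in_order = [92, 85, 77]
    print(scores_in_order[0])        # 92

    # A dict is indexed by key -- good when "Alice's score"
    # is the meaningful question, regardless of entry order.
    scores_by_name = {"alice": 92, "bob": 85, "carol": 77}
    print(scores_by_name["alice"])   # 92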

Scripting languages like Python are great because you can write code without creating classes or the other boilerplate required by Java or C#; you can just start typing without all the syntactic noise. That makes them great for beginners, giving them a simple scratchpad for trying things out.
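For instance, this is a complete, runnable Python program: no class, no main method, no imports.

    # The entire program -- nothing else is required to run it.
    name = input("Your name? ")
    print("hello,", name)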

Agreed. A prime example is Java, where you can't even write "hello, world" without a bunch of nonsensical garbage, and someone learning to program has to figure out what the fuck "public static void" means. I remember trying to learn Java in high school and saying "fuck it" just because every single book immediately launched deep into OOP bullshit. Not to mention, the Java approach of forcing objects down your throat is a poor fit for a lot of problems: some just don't map well to the OOP way of doing things, and wrapping functions in objects only adds extra boilerplate.

Java does have a lot of boilerplate, that is for sure. But, the Java approach is not to force objects down your throat. That's why it has static methods. You can treat a class as a namespace full of static methods. Java will certainly not get in your way if you want to do that. Java's approach is very pragmatic, which some have criticized for being schizophrenic (numbers are not objects, arrays are not collections, == vs equals(), etc.)
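Here is the same class-as-namespace pattern, sketched in Python for consistency with the other snippets in this thread (MathUtils, clamp, and lerp are made-up names; in Java you would mark the methods static and typically make the constructor private so the class cannot be instantiated):

    # A class used purely as a namespace: never instantiated,
    # just a bundle of related functions under one name.
    class MathUtils:
        @staticmethod
        def clamp(x, lo, hi):
            return max(lo, min(x, hi))

        @staticmethod
        def lerp(a, b, t):
            return a + (b - a) * t

    print(MathUtils.clamp(15, 0, 10))  # 10
    print(MathUtils.lerp(0, 10, 0.5))  # 5.0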

I think you are demonstrating a misconception about OOP. OOP is just a higher-level architectural principle; the inner workings of objects can be functional, imperative, whatever. OOP doesn't "forbid" general-purpose functions where they make sense. For example, in Java a lot of useful algorithms on collections are grouped together as general-purpose functions in the Collections class (which, since it cannot be instantiated, is really a namespace).

Well, sure, except that using classes as namespaces for static methods is kind of silly and needlessly confusing for beginners (at minimum you have to learn what classes, access qualifiers, and static methods are). I much prefer the C++ approach, where you are pretty much free to write anything from Java-style OOP to plain C-style procedural code. But I agree there is nothing magical about OOP; you can do pretty much all of the same things even in C (with a LOT of boilerplate code).

With Java, this hack drags several other annoyances along with it. Unlike C++ namespaces, the whole class has to live in a single file, and you can only have one public class per file. In the end it doesn't really matter, but it definitely makes things more annoying for beginners.

Sadly, the opposite is also true: students who learn intensive OO first can hardly write good procedural or functional code even when it is the obvious solution, and prefer sticking to "pure OO" approaches.

OO, functional, and procedural are each well suited to a particular class of problems. Becoming proficient in only one of them is like having only screwdrivers in your toolbox.

I agree that some people take concepts too far. I consider that a problem of not having enough tools in the toolbox: if all you have is a hammer, you're going to try to fix everything with it. Many new students have this kind of mindset... hell, who am I kidding, many older programmers (language bigots) have the same problem. If we can just give them a simple introduction to each mindset, rather than trying to drill a single mindset into them, I would be happy. Nothing sucks more than hiring a programmer fresh from college and finding you have to reteach them.