You Don’t Have to Be Good at Math to Learn to Code – The Atlantic

It’s an interesting and open question. Nathan Ensmenger suggests that we have no evidence that computer scientists need a lot of mathematics (math background has been correlated with success in CS classes, not with success in a CS career), but that the emphasis on mathematics helped make computing a male field (see discussion here). Mathematics background has been found to correlate with success in CS classes, yet not to correlate with success in object-oriented programming (there is an excellent discussion of these prerequisite-skill studies in Michael Caspersen’s dissertation). It may be true that you don’t have to be good at mathematics to learn to code, but you may have to be good at mathematics to succeed in CS classes and to get along with others in a CS culture who assume a strong math background.

People who program video games probably need more math than the average web designer. But if you just want to code some stuff that appears on the Internet, you got all the math you’ll need when you completed the final level of Math Blaster. (Here’s a good overview of the math skills required for entry-level coding. The hardest thing appears to be the Pythagorean theorem.)
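To make the bar concrete: the Pythagorean theorem mentioned above is about as hard as entry-level coding math gets, and even it usually shows up as a one-liner. Here is a minimal sketch (my own illustration, not from the article; the function name and the button scenario are made up) of how it might appear in everyday web-ish code:

```python
import math

def within_radius(x1, y1, x2, y2, radius):
    """Use the Pythagorean theorem to check whether two points on a
    page are within `radius` pixels of each other."""
    # math.hypot computes sqrt(dx**2 + dy**2) -- the Pythagorean distance
    distance = math.hypot(x2 - x1, y2 - y1)
    return distance <= radius

# Did a click at (100, 120) land inside a 50px-radius button centered at (80, 90)?
print(within_radius(100, 120, 80, 90, 50))
```

That really is the level of math the article has in mind: plug two points into a formula, compare to a threshold.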

First of all, I cannot tell you how much I hate the current trend to use the term “coding” instead of programming or developing software. It makes us sound like a bunch of trained monkeys tapping out Morse code or maybe spies sending secret messages. It devalues our field and sends a message to employers that all they need look for are people who can mindlessly crank out lines of Java or PHP.

For people who are going to be doing very routine tasks in their careers – perhaps writing simple programs to generate bills or putting together websites – it is probably true that little mathematical skill is required. But lack of mathematical skill will cut people out of a lot of interesting computing careers. You can’t get hired in the financial industry without a lot of mathematical ability because pretty much everything they do is quantitative. If you don’t have the math background to analyze performance, you will be cut out of careers in high-performance computing, which includes a lot of defense and scientific work. And most importantly, Big Data is becoming very important in computing careers. People without mathematical and statistical skills will not be able to take jobs that involve crunching through large datasets.

Anecdotally, I have noticed that the ability to write well, particularly to write analysis well, is more predictive of success in CS than math ability, though some level of math competence is needed to make it through courses like data structures. But students who learn only to “code” and never learn any mathematical skills will be consigned to the kinds of jobs that are easy to offshore the next time the tech bubble bursts.

Yes, this is exactly it. Different people are using the word “coding” in completely different ways. Look at its use in the Atlantic article: “if you just want to code some stuff that appears on the Internet” – what? From my Computer Science perspective, that doesn’t even parse. What the heck does “stuff that appears on the Internet” have to do with coding? And then I realize that this person is talking about HTML – that uses “codes” to indicate things, but it is not “coding” in the sense that I would use the term. And no, you don’t need math for that. You also don’t need Computer Science for that, and it’s only tangentially related to C.S.

In fact, the Atlantic article never once uses the phrase “Computer Science” or suggests that what they are talking about is related to C.S. When Mark puts this out there and talks about math skills for C.S. classes or for other types of what WE would call “coding” he’s talking about something else, and not what is in the Atlantic article.

Maybe people in C.S. need to just stop using the term “coding” and let others who have appropriated that term have it. When both communities use the term in different ways, that’s when we have miscommunication and confusion.

And yes, math is very important for Computer Science, even if it’s not used a lot in the professions people with C.S. degrees enter into (some of which look more like “coding” than C.S.). Fundamental to the SCIENCE of computing are things like solving recurrence relations to analyze time complexity, proving correctness of algorithms, and designing things like linear programming algorithms for optimization problems. I have a hard time believing that people who are bad at math are going to be very good at those fundamental C.S. tasks.
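To illustrate what “solving recurrence relations to analyze time complexity” looks like in practice (a standard textbook example, not from the comment above): merge sort splits the input in half, recurses on both halves, and merges in linear time, giving

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + cn \\
     &= 4\,T(n/4) + 2cn \\
     &= \cdots \\
     &= 2^{k}\,T(n/2^{k}) + k\,cn
       && \text{after } k \text{ levels of unrolling} \\
     &= n\,T(1) + cn\log_2 n
       && \text{at } k = \log_2 n \\
     &= \Theta(n \log n).
\end{aligned}
```

Someone who is uncomfortable with this kind of manipulation will struggle with algorithm analysis no matter how fluent their “coding” is.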

That “computer science” is “software development”? Or that all “programming” is “software development”? I don’t think either of those is true.

I don’t teach software development. I teach people who hate the idea of being called a “software developer” how to write programs to test out ideas, to automate repetitive tasks, and for creative expression.

No, that software development, or even programming, is a better term than coding. And no, I do not think computer science = coding, or programming, or scripting for creativity. Actually, the term coding may well better fit your idea of writing small-scale scripts.

What is wrong with software development anyway? It is properly part of computer science, a big part in fact. I would guess that the majority of our students aspire to software development positions of one kind or another, whether in Silicon Valley, at a gaming company, or in an engineering or financial company. The people who are going to these coding academies think they are going to end up in a career doing software development of some kind. I don’t think most of the adults signing up for these programs are doing it for creative expression.

What do you mean by “the majority of our students”??? If you mean Computer Science majors, then you’re certainly right that many want to become software developers. What about the people (like the author of the Atlantic piece) who want to learn some “coding” as a supplemental skill – not the main thing they want to be doing? I think it would be great if a lot of people who had no interest in “software development positions” took a basic class that taught them about coding, software development, and computer science – including the different ways those terms are used. CS Principles anyone?

Most of the students I teach will not become software developers. They will do script writing and some rapid prototyping, but only about 10% will go on to do development of programs that will be used repeatedly by other people. That subgroup needs to learn hardcore software development, the rest need only a few basic principles of good programming (mainly in-program documentation, which our CS department does a terrible job of teaching). Note: I’m teaching bioinformatics, not CS, and few if any of the students I teach are CS majors.
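The kind of throwaway script such students write might look like the following (my own sketch, not the commenter’s code; the sequences are made-up placeholders). It computes GC content per sequence – quick to write, run once, and not engineered for reuse by other people:

```python
def gc_content(seq):
    """Fraction of bases in a DNA sequence that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Hypothetical input; in practice this would be parsed from a data file.
sequences = {"read1": "ATGCGC", "read2": "ATATAT"}
for name, seq in sequences.items():
    print(f"{name}\t{gc_content(seq):.2f}")
```

The gap between this and software meant for repeated use by others – error handling, documentation, tests, interfaces – is exactly what the 10% who go on to real development need to learn.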

I would argue that object oriented programming is often misused. One of its primary goals – code reuse – can easily prevent the sort of recoding which is necessary for architectural changes. In most business contexts, data reuse is much more important. (There are some significant exceptions to this observation. Also, OO programming can be quite good when OO boundaries correspond to administrative boundaries.)

Architectural changes can be necessary for some kinds of added functionality, as well as order-of-magnitude efficiency gains (the sort which come with deep insight into how the problem is being solved).

Meanwhile, I would argue – while a lot of mathematics can be nonsensical, in the context of programming – that mathematical insights can be crucial for getting things done in a useful fashion. (For example: numbers can be defined recursively, which means that some recursive processes can be replaced by numerical approaches, sometimes leading to massive speedups when the programmer is willing to talk with the people who have the requirements, about what problem needs to be solved.)
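A classic illustration of that point (my example, not the commenter’s): the Fibonacci numbers are defined recursively, and transcribing the definition directly gives an exponential-time process, while computing the same numbers iteratively takes linear time:

```python
def fib_recursive(n):
    """Direct transcription of the recursive definition.
    Makes on the order of phi**n calls -- unusable for large n."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    """The same function computed numerically, in O(n) steps."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

`fib_iterative(50)` returns instantly; `fib_recursive(50)` would run for hours. Seeing that the recursion can be replaced by a numerical loop is precisely the kind of mathematical insight being described.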

If all you’re going for is making sure people seem to be engaged, you don’t need any of this. And, of course, when working on an existing system you need people skilled in the technologies used to build that system. But “X doesn’t need math” can be quite true in some contexts while being misleading at best in other contexts.

Several years ago I had some videos of interviews with some people who did the hiring for big computer game companies. (I lost them somewhere. Bummer.) Very interesting. All three said they hired three types of people most regularly: mathematicians, physicists, and artists. The interviewer asked why they did not look for programmers. The reply was that they could train people to program, but they could not train people to be good at math, physics, or art. Admittedly game programming is a special field, but if someone is looking to be a programmer nowadays, being able to program is not a stand-alone skill.

Both programming and mathematics require the ability to manipulate formal systems and reason about them. Doing well at one is likely to be indicative of ability to do well at the other, even if neither is “necessary” for the other.

Math classes that only ask students to follow already set algorithms are likely to be less predictive of programming skills than math classes that require more advanced problem solving skills.

(Note: I’ve been careful in this comment to use neither “coding” nor “computer science”, which seem to mean too many different things to be useful terminology.)

I argue that a lot of programming needs a skill that looks so similar to mathematical maturity that it is often mistaken for it, but is actually a somewhat different skill. Lacking another name for it, I call it “structuralism”. (Yes, I know, overloaded term in the social sciences and humanities.)

When you say “a lot of programming,” what do you mean? Most code that gets written needs those skills? Most professional software developers need those skills? Most programmers of any type need those skills?