The Beginner's Crash Course on Computer Programming

Computer Programming: Every PC user should know how to program, and there’s never been a better time to learn

With the huge variety of computing devices all around us, it’s important to remember what it is that’s special about a full-fledged personal computer. We think the main difference can be summed up in one word: mastery. No matter how much time you spend with an iPad or an Android phone or in a web browser, you can never truly master it. There’s just not enough there to learn. But the PC? That’s different. The PC goes deep.

As you develop your mastery over the PC, you move past all sorts of boundaries. First, you learn to replace the software that came on the computer. You discover the command prompt and how to tweak the OS. You learn to build your own PC, and to benchmark it. And then, at the very bottom of it all, there’s one last boundary standing between you and true PC mastery. You have to learn computer programming.

Why is coding the ultimate test of PC mastery? Because learning how to program is the thing that breaks down the wall between you and your computer—it makes it possible for you to truly understand what’s going on underneath your desktop.

And, all philosophical ramblings aside, it’s a pretty great skill to have. Whether you need to automate a process on your computer or whip up a quick web app for a family member’s website, knowing how to code is a big boon. Who knows, you might even be able to earn some money.

Learning to program isn’t something you can do in an hour, or even in an afternoon. You have to learn to think in a whole new way, which takes dedication and patience. Fortunately, it can also be a lot of fun. In this article, we’re going to take a whirlwind tour through some of the most important concepts in computer programming, and we’ll direct you to resources that’ll help you start your adventures in coding.

Basic Information

A Q&A on the ABCs of programming

Before we can do anything, we’ve got to cover the basics. Here’s what you have to know before you can get started.

When we say computer “programming,” what does that really mean?

For this article, we’re going to use a fairly narrow definition of programming, and say that what we’re talking about is the process of creating software on a computer. That process involves writing out a series of commands for the computer to execute, which will create our desired behavior. We write those commands using a programming language.

What’s a programming language?

A programming language is the set of rules that define how we express what the computer should do when it executes the program. There’s an incredible variety of programming languages available for use, but the vast majority of commercial and personal software is written in one of a core group of languages including C/C++, Java, C#, Python, and a few others. Modern programming languages share a lot of the same basic concepts and some syntax, so learning your second, third, or fourth programming language is much easier than learning your first.

What makes one programming language different from another?

Each programming language has its own strengths and weaknesses. C and C++ are low-level languages, meaning that code written in C is closer to the machine code that your CPU understands (see below). Low-level languages can produce faster, more efficient software, so they’re used where performance is at a premium—for programming an operating system or a 3D gaming engine, for instance. High-level languages, like Java and Python, have the advantage of being much easier to program in, and the same program can generally be written with fewer lines of code in a high-level language.

But which one’s the best?

There’s no one best language—it really depends on what kind of programming you want to do. If you want to program native Windows applications, you’ll use C#; if you want to program sophisticated web applications, Ruby would be a good choice; if you want to be the next John Carmack, you should probably start with C.

No, for real, which language should I start with?

The secret is not to stress too much about which particular language you start with. The important things you’ll be learning are basic concepts that work pretty much the same in every programming language. You’ll learn how to use data structures, conditionals, and loops to manage how your code flows. You’ll learn to structure your program in a way that’s readable and organized. Once you’ve done all that, learning a bit of syntax to pick up a new language won’t seem like much work at all.

But, if you really want a suggestion, start with JavaScript. It’s an easy language to learn, it’s got some practical applications, and its syntax is similar enough to some more-powerful languages like C# and Java that making the transition later on won’t be too hard.
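
To give you a rough taste of what those shared basics look like, here’s a minimal JavaScript sketch (the variable names and numbers are made up purely for illustration). You can paste it into any browser’s developer console, and the same logic would look almost identical in C# or Java.

// A simple data structure: an array of test scores (invented numbers)
var scores = [72, 88, 95, 64];
var passing = 0;

// A loop that walks through the array...
for (var i = 0; i < scores.length; i++) {
    // ...and a conditional that decides what to do with each entry
    if (scores[i] >= 70) {
        passing++;
    }
}

console.log(passing + " of " + scores.length + " scores are passing.");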

Is HTML a programming language?

Not quite! HTML is a markup language, used to define the contents of a webpage. Although HTML has a specific syntax (a set of rules defining how you have to write things), it doesn’t express logic or behavior the way a programming language does. An HTML document is rendered, rather than executed. That said, if you have written an HTML document, you at least have experience writing a formalized computer language, which may make the jump to programming easier.

What’s an IDE?

An IDE (short for integrated development environment) is the software suite programmers use to actually write programs. IDEs generally include a specialized text editor for writing the source code, as well as the ability to test and debug your program. Two of the most popular IDEs are Eclipse (open source, free, and available at www.eclipse.org) and Microsoft Visual Studio (proprietary and expensive, but with free “Express” editions, each limited to a single language such as C++, C#, or Visual Basic).

Visual Studio is one of the most advanced IDEs around, and is used by nearly all Windows programmers.

How can I start writing a program? Like, right now?

Unfortunately, it can be a bit of a hassle to get started coding in most programming languages. You generally have to install and configure an SDK (software development kit), and sometimes an IDE as well, in order to write and compile code in a new language. It’s rarely super hard, but be prepared to spend 15–30 minutes Googling, reading a guide for your chosen language, and setting things up.

Fortunately, JavaScript is much easier to get started with. In fact, you can start writing code right this second, using an in-browser coding environment like JSFiddle.net. An in-browser IDE isn’t a good solution for serious programming projects, but it’s a great way to get started as a beginner. To start writing JavaScript in an interactive environment with structured lessons, you can visit www.codecademy.com (but more on that later).
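
If you’d like a concrete (and entirely made-up) example of the kind of thing you might paste into JSFiddle or your browser’s developer console as a very first experiment, something like this works:

// Define a function, then call it and show the result on the page.
// In JSFiddle, put this in the JavaScript pane and click "Run".
function greet(name) {
    return "Hello, " + name + "! You are now a programmer.";
}

document.body.textContent = greet("Maximum PC reader");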

If you can talk to a specific RAM address in memory, then it's low-level, regardless of how many high-level things it can do. Please try talking to RAM with Java, Python, anything else; I'd like to see how that goes for you.

Not to mention the constant assertions of things like, "Almost all Windows programmers use Visual Studio", "If you want to program native Windows applications, you’ll use C#", etc. And the document never suggests you might actually audit a class at the local community college or vo-tech, or even just buy a book. I think you're being awfully generous to call the article "good." I wouldn't grill /you/ for glossing over details like C and C++ being compiled to intermediate object code before being linked into machine code, because you're not writing a professional piece on the matter.

I can't fathom why Max PC would run an article about programming languages written by a novice without consulting an expert. I didn't review this repost in great detail, but the print run contained more bad information than any other article I can remember reading in any publication, ever. The article indicated confusion between Java and ECMAScript and had a ton of unsubstantiated bias with respect to categorizing and comparing languages. I miss the old Boot Magazine days where we occasionally saw some hard science; I know that Max PC isn't Dr. Dobb's, but better to avoid a topic than blunder through it so horribly.

I've spoken with aspiring indie gamers who talk about using Unity (unfortunately they wanted to go for that 8-bit retro look...). I hadn't really investigated it so I didn't realize that it was this easy to use.

The best, and truly only language, is assembly. Just as was mentioned in the article, ALL languages have to get converted to assembly. The obvious problem is that it takes significantly longer to program and isn't portable between architectures (sometimes not even between companies, such as AMD and Intel, which share only about 90% of the same CISC instructions).

If you're more into hardware stuff, do not get an Arduino UNO. They are overpriced and underpowered. Consider instead a Teensy 3.1 if you need the power, as it is compatible with Arduino code and better in just about every way.

The best language is what makes the most sense for any given task. Assembly is rarely the best choice for the vast majority of modern programming tasks.

Assembly is certainly the core language of computer programming, but to claim it's the "truly only language" shows some amount of ignorance and disrespect to the fundamental philosophies of computer science.

99% of programmers will never and should never have to write a line of assembly in their lives, and that is the great success of computing and computer science.

I think I have more respect for "Computer Science" than you think. I am an EE. I've programmed chip paths (using the horror that is VHDL) all the way up to Java programs. No matter what language you use, it is built off of someone else's code base unless you're writing in assembly. Compilers do so well because there is a lot riding on them translating your code as optimally as possible.

I didn't say assembly was the only language to use. But it is the best. You cannot write better than assembly from a performance standpoint, and that's a fact. End of story.
I also say just under it that programming in assembly isn't really all that practical. With every layer you move up, you need less and less code to get the same thing, at the sacrifice of performance. This isn't necessarily a bad thing.

"99% of programmers will never and should never have to write a line of assembly in their lives, and that is the great success of computing and computer science."- I'd have to disagree with you on that. Never writing assembly means never understanding all the wonders that go on inside of a chip that let you do the higher languages. It's worth doing at least once for every architecture you use.

stradric said, "99% of programmers will never and should never have to write a line of assembly in their lives, and that is the great success of computing and computer science."

I understand what you're trying to say: abstraction is hugely important, and we have to be able to abstract the problem domains away from the hardware in order to tackle the really big and worthy problems. But there's still a great need for people who can interpret a core dump or use a debugger. Granted, that's not the same as writing assembly, but being able to do one implies the other. I also find it hard to believe that anyone could have a truly great understanding of high-level languages without a certain amount of low-level knowledge. I think that's why universities still teach comp org and assembly.

I don't really agree. Assembly has its uses, but it isn't any good for 99.99% of things. The only way I see assembly commonly used anymore is a) to better understand what the compiler is turning your code into, which is sometimes very useful and b) manually optimizing code that is extremely performance sensitive, like a game engine's implementation of linear math. Honestly, I doubt many people could write better assembly than a compiler anyway.

You are better off (in my opinion) focusing on writing good, fast C/C++ code (about as low as one should reasonably go) and using your knowledge of assembly to optimize said C/C++ code, not replace it. For example, if you see that writing a function a certain way produces some slower assembly construct, you could figure out how to write the function to better take advantage of the compiler output. This is something I see people do all the time; it's often the reasoning behind really strange-looking C/C++ code.

QuantumCD said, "You are better off focusing on writing good, fast C/C++ code and using your knowledge of assembly to optimize said C/C++ code"

100% agree. After using a profiler or other means of analysis to first determine which parts would actually benefit from the optimizations, of course. It's amazing how much time and simplicity is lost to premature optimization.

Although there are a few cases (especially working against APIs whose developers went nuts creating an inordinate number of typedefs for basic types and macros to convert between them, e.g., some of the horrid Win32 APIs) where ASM is dramatically more concise.

I believe that assembly should be taught as it gives both an understanding and appreciation for what happens "inside the chip". Too many programmers barely have a greater understanding of what happens "inside the box" than does the stereotypical "tech illiterate".

Computer Science classes today will tell you how great Functional Programming is (another lasting influence of Google lol), and how, in general, the recursive solution that saves a programmer's time is more desirable than the iterative solution that is faster for the computer to process. Assembly will give programmers an appreciation for how much of a PITA recursion truly is to implement; the "recursion is easier for programmers to implement than iteration" mantra rapidly falls apart in assembly, especially when programming in "microcontroller-land".

vrmlbasic said, "Computer Science classes today will tell you how great Functional Programming is (another lasting influence of Google lol)"

Many of us cut our teeth in Lisp (even turtle lisp) long before Google was a thing.

vrmlbasic said, "Assembly will give programmers an appreciation for how much of a PITA recursion truly is to implement"

It isn't really fair to talk about the difficulty of expressing recursion in assembly language as being a reflection on functional programming. Computers operate in an iterative fashion, so it's only natural that their native language constructs are better suited to iterative programming techniques. That said, tail recursion can be done simply in asm with a label and a jump, so it's not a major hurdle for a capable programmer.

If you were programming with LISP before Google was a thing then you were incapable of getting my point by virtue of veneration. I'd wager that you're better for it.

Learning assembly will give one an appreciation for the apparent simplicity of recursion in higher level languages. It will also offer the beginnings of an explanation as to why functional programming can be so dog-slow and what those pesky stack errors really are when a recursive algorithm screws up.

Assembly knowledge will also allow someone who is steeped in higher-level languages to have some skill with microcontroller programming. Not because assembly language must be employed by the programmer to code for them but because the techniques of higher-level languages that gain simplicity at the expense of requiring more resources will not work well, if at all.

Besides, I consider some knowledge of how a processor works to be essential for a true programmer. Perhaps not to the level of semiconductor physics (or even the transistor/gate level), but without knowledge of how a processor executes a program, the programmer might as well just be typing the incantation of a spell into the computer, as it is akin to magic to him.

Yeah, there is a common misunderstanding that less code = faster performance. It's often the exact opposite, because the compiler generally has to generate even more machine code to implement that "nice"-looking code. Recursion is a common example. It looks really nice to the programmer, but iterative = faster in most cases.
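
To make that concrete, here's a quick JavaScript sketch (sticking with the article's suggested language; the function names are just for illustration). Both functions compute the same factorial, but the recursive one pays for a function call and a fresh stack frame on every step, while the iterative one is a plain loop:

// Recursive version: each call pushes another stack frame until n reaches 1,
// so a large n costs call overhead and can even blow the stack.
function factorialRecursive(n) {
    if (n <= 1) {
        return 1;
    }
    return n * factorialRecursive(n - 1);
}

// Iterative version: the same result from a simple loop, no extra stack frames.
function factorialIterative(n) {
    var result = 1;
    for (var i = 2; i <= n; i++) {
        result *= i;
    }
    return result;
}

console.log(factorialRecursive(10)); // 3628800
console.log(factorialIterative(10)); // 3628800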

...that's part of the modern CS narrative. It leads to recursion, which leads to extolling the virtues of functional programming, which, odds are, will be accompanied by a spiel about Google's success allegedly coming in some capacity from functional programming.

QuantumCD said, "Recursion is a common example. It looks really nice to the programmer, but iterative = faster in most cases."

vrmlbasic said, "I disagree about QuantumCD's statement being false"

Why? What, specifically, is it that you think causes recursive algorithms to be slower in "most" cases? The only cases I can see would be those where tail recursion isn't possible, but that certainly doesn't include "most" cases.

I really don't know which one is the fastest, but I hate functional programming. Also, in regard to the assembler debate, you can write inline assembler in most C/C++ compilers. So why not have the best of both worlds? Also, if your job is to manage terabytes of corporate data, have fun writing all those queries in assembler. Oh wait, that's right, you can't write a database in assembler. If your code has to talk to an Oracle database, PL/SQL is your only option. Don't get me wrong, I love assembler. But it isn't the solution to every problem. I strongly believe in using the programming language best suited to the problem at hand.

Please note that most of what I'm saying isn't really in response to you. In fact, I pretty much share your thoughts. This just seemed like an appropriate place to express my sentiments on the various issues I've seen in this thread.

Really? I like some aspects of FP, well, mostly not having mutable state in certain things, avoidance of side effects (well, apparent side effects), and recursive functions...

And to an objection to your assembly point, that's not entirely accurate. Of course, you could write a compatibility layer between the DB and the assembly. It's just push to eax and call. XD Of course.

Or you can look at NoSQL, the datab... Er... Key-mapped storage system.

>>And to an objection to your assembly point, that's not entirely accurate. Of course, you could write a compatibility layer between the DB and the assembly. It's just push to eax and call. XD Of course.

My point was really just to show a situation where you couldn't just sit down and bang out assembly to solve a problem. But in keeping with my original Oracle scenario, even a compatibility layer wouldn't solve all your problems. Sure, it would allow you to talk to the database but there's no getting around the fact that all DDL and DML in Oracle is handled by the SQL engine. Oracle doesn't allow you to just peek into the tables from the outside and do whatever you like. At some point you are going to be embedding SQL into your assembler in order to accomplish those tasks. And you can't get that done with only knowledge of assembler.

That's why I specifically mentioned DDL and DML. Blackboxing wouldn't allow you to create, modify, or update tables in the database. Your only options are to feed SQL to the SQL Engine or to try and hack the database files yourself. But after doing all the requisite reading to successfully hack the files without breaking Oracle, I'd say you'd be plenty proficient enough to write SQL.

And if you ever wanted to commit the greatest programming taboo of all time and actually verify that your code worked as you intended, e.g., created and updated tables without the use of SQL and without breaking Oracle, you would still have to use SQL to ask Oracle to look at everything and tell you if it was having any problems.

The only way out is to ignore Oracle altogether and to write your database in assembler. But then I would point out to you that a bunch of assembler and C++ programmers already got together and did that, and the result was Oracle. To get back to the level of sophistication of a current Oracle DBMS from assembler would be akin to reinventing the automobile, but starting over from the discovery of fire and using only the tools that man was able to make with it. Why?

Oy! I object to that classification that puts me in the "might be useful pool"! I only write for shits and giggles. And also to horrify proper programmers with my AbstractTemplateFactoryAtomicSingletonInterface.

It's an interface that is implemented by a template class that is a type of abstract factory that also is a singleton for that specific abstract factory. All the functions are atomic (in the implementation though, not the interface).

It's basically me being too lazy to make a new factory for every object I have, and instead just using templates to make them.

Honestly, I don't get to do very much C++ these days, and abstract factories are a little over my head. But I did Google the concept and found a good example that created the shell for a factory using classes. I didn't delve too deeply into it, but it doesn't look like it does anything more tricky than basic polymorphism. I think if I kicked it around for a few hours, I'd have a good handle on it. But I shudder at the thought of doing this with templates.

But if you ever find yourself gracious enough to share the code, I'm probably masochistic enough to take a peek.

First off, this is an excellent article and I would like to thank the MPC staff for putting it out there.

I completely agree with the suggestion that a newish (but not brand-new) programmer use Unity. I started trying to program in C++ in high school, and, while it was always a good skill to have, I never really created anything worthwhile. It was always just simple command-line programs to do certain complex math and organize data. About a year ago, I found Unity, and it has completely rejuvenated my desire to program. It's still extremely simple, but at the same time gives you a tangible product that is more than just words in a black box; I have even used it as an environment for programming non-game software. It's definitely a good thing to point new programmers to, and it allows one to concentrate on the final product by being completely cross-platform instead of having to target one OS or architecture at a time.

Also, once you learn the basics of programming, a good skill set I would suggest learning is purely functional programming. It's something I was just recently introduced to by another user of MPC (thanks, QuantumCD) through the programming language Haskell, and it's a great way of thinking about programming that can make up for some of the shortcomings of imperative languages like C++. While it *is* an excellent concept, I wouldn't suggest that a complete newb go straight for it as a first language.

Finally, I don't think that this article covered APIs. An API is a set of libraries and interfaces that extends what a language can do. There are a lot of them out there that perform a lot of different tasks; OpenCL, CUDA, RenderScript, and DirectCompute all allow a programming language to use the GPU (and in OpenCL's case, a lot of other devices) for general-purpose computing, letting the dev get a lot more performance out of a particular system. OpenGL, Mantle, and DirectX allow a programming language to output complex graphics instead of just text (or, in some languages, simple 2D graphics).

I started with C++ when I was 8 (Python about 8 months earlier). Python taught me object-oriented programming, but I really hated the syntax and constructs (still do to this day). I prefer C-style languages like C/C++, C#, Java, etc. About a year and a half later, I had mastered (if one can) pointers, references, polymorphism, and all that garbage. :P For about a year, I kind of got into a slump, and then I found Qt. I *love* Qt. Without it, I don't know if I would have even written desktop applications before. So a word of advice: find a nice library to use C++ with, as you probably aren't going to do anything in "pure" C++ besides Computer Science work. Boost is also good, but still not a "platform" like Qt is.

Unity is good, but I don't think many professional game studios use Unity. Also, C# is kind of an outlier in terms of what most studios use to make games (it's nearly always C/C++ and assembly). So if you plan on wanting to work somewhere like DICE or BioWare, you are better off working with a game engine that primarily uses C++ and allows you lower-level access to the rendering pipeline, and so on. Unity is great if you just want to work on games on the side; the editor itself is great, the graphics are good enough in the free version (Pro offers much better capabilities by way of graphics), and there is a great community that will help you out with basically any problem you have. Take CryEngine, by contrast: I haven't found the same community there that is willing to help you fix your stack or heap corruption and all those great things (C/C++ nightmares... got to love .NET). :P

Also, as a quick note, purely functional programming isn't exactly something that is great for games. :P Most of your game objects have to be acted upon. True, you can write pure functions (an example would be a MoveToPoint function that takes in an object and interpolates its position to another point). However, a much more common method (in my experience) is that you write a behavior for an object to act upon that object. This is called an entity framework, and it's also found (albeit more rarely, in my experience) outside of Unity and other game engines. The concept makes a lot of sense for small to medium games, especially. When you get into really, really big games, there are some "better" methods that can ease the pain of managing a hundred different behaviors in tandem (bigger games generally have more objects "talking to each other").

AFDozerman has a good point about APIs--you won't get very far with programming without using one. For example, if you start using JavaScript, you will quickly run into jQuery, which is an awesome library (less of an API) that will help you immensely. Basically every site in existence uses it. And I'm sure that Maximum PC readers are familiar with the big three graphics APIs--OpenGL, DirectX, and recently, Mantle. ;)
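
As a rough illustration (the element ID and class name here are invented, and this assumes jQuery is already loaded on the page), this is the kind of convenience that makes jQuery so popular compared to the plain DOM API:

// Plain DOM: wait for the page to load, find an element, change its text.
document.addEventListener("DOMContentLoaded", function () {
    document.getElementById("status").textContent = "Ready!";
});

// jQuery version of the same thing, plus hiding every element
// with the (hypothetical) "loading-spinner" class.
$(function () {
    $("#status").text("Ready!");
    $(".loading-spinner").hide();
});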

It's mainly the post-processing effects that you can't get in the indie version of Unity (and there are even hacks that will let you get certain "Pro" effects!). They recently allowed the indie version a single directional (sun) light to cast real-time hard shadows, which look pretty good if you don't leave the strength at 1 and play with your other lighting settings. However, even with the best art (models, animations, textures, lighting, etc.), you aren't going to achieve that "professional" look without a healthy amount of Pro-only graphical features--namely post-processing effects.