
Death Metal Maniac writes "A few lucky British students are taking a computing class at the National Museum of Computing (TNMOC) at Bletchley Park using 30-year-old or older machines. From the article: '"The computing A-level is about how computers work and if you ask anyone how it works they will not be able to tell you," said Doug Abrams, an ICT teacher from Ousedale School in Newport Pagnell, who was one of the first to use the machines in lessons. For Mr Abrams the old machines have two cardinal virtues; their sluggishness and the direct connection they have with the user. "Modern computers go too fast," said Mr Abrams. "You can see the instructions happening for real with these machines. They need to have that understanding for the A-level."'"

I was going to post and suggest that they should use 29-year-old computers instead, because the BBC Micro was (designed as) an absolutely superb platform for teaching programming. Then I read TFA, and it turns out that, in fact, that is exactly what they are doing.

It was a huge step backwards when the BBC Model Bs in my school were replaced with 386 PCs. The PCs were networked and so had to have some security. The BBCs let you poke absolutely anything and booted straight into a programming language (a dialect of BASIC, but one that supported structured programming and included a built-in assembler), and included a collection of I/O ports that were trivial to connect your own projects to and drive from BASIC. The OS was stored in ROM, and if you did anything especially wrong, you just turned it off and on again and were back to the pristine condition.

That got me into DR-DOS (I forget what version and can't be bothered to try and reason it out due to alcoholic indifference).

Oh, and a couple of months after I was playing with that crap my IT teacher at school turned up with a Schneider 8086 machine with an orange monochrome monitor and admitted that he had no idea how to use it but that it was the latest generation of computer that businesses were using. I actually got

The problem with using an emulator is that you're then using an emulator. When you're learning to program, it's much more satisfying having something running on real hardware - even obsolete real hardware - than in an emulator. If you're going to use a virtual platform, you may as well use something like Java, in terms of how enjoyable it is.

One of the problems that they are trying to address with this idea is that modern programming is so abstracted from the real hardware that you don't

We had to remote into this old Unix System V box and do a few exercises for our course. No, it's not as far back as these students were going, but it was helpful to become familiar with that kind of architecture, because you never know what's still going to be kicking around when you get on the job.

Hey man, don't bag on ARCnet. That shit was insane - you could pull coax for as long as the eye could see and still maintain decent throughput (for the day), and it would tolerate about as much 'stupid' as any network I've ever encountered. I saw instances of some serious 'stupid' on ARCnet networks - including one length of cable that didn't quite reach, so they spliced it using two pieces of coathanger soldered on both ends to the frayed ends of each piece of coax. They used cardboard to keep the two pieces apart.

I like and use Java everyday but I would not suggest learning Computer Science with it.

I learned data structures and algorithms with plain C and pointers which IMHO is the proper way to do it.

Just recently I was reading a book about data structures and algorithms in Java, and it was very funny to see the hoops people have to jump through to create a simple linked list or stack... because Java is *not* made for that...

I disagree -- being exposed to how early x86 or other platform programmers worked around the limits of the time teaches new programmers how to innovate and think outside the box (in addition to the very valuable insights on performance tuning & optimization on physical hardware mentioned by many others above).

Modern programming is about algorithms and interfaces. Knowing how to simulate 4GB memory space with only 8 bit registers is not important.

B-b-but the article isn't about modern programming. It's about the A-level program, which is about how computers work, as per TFS/A. Some people need to actually work on hardware in order for your modern programmers to implement their algorithms using an interface.

I feel we programmers have gotten too far away from the lower-level aspects of the craft and are now too focused on the high level. While this isn't entirely a bad thing (why should you rewrite a framework every time you start a new application?), it really perverts one's respect for how things work and for efficiency.

I am getting back into assembly programming after 8 years of C# and it is a bit of a shift in thought. My college switched from C/C++ to Java my senior year for incoming freshmen - a real shame. Programming is totally different when you have no respect for memory management.

I hate programmers. They need to stop writing such bloated crap, hogging my cycles and eating my RAM. Sure, it's OK on one system, but when you've got to push that resource use out across 10, 20, 30, or more servers?

That gets expensive, especially with the recent memory cost hike (yikes!) It's infuriating when the only significant change is the library/framework getting upgraded, and then you've got to upgrade a couple dozen clients and/or servers as a result due to poor performance.

Now, making those 5 apps run at 1/2 of the memory footprint takes 40% more time.

Thankfully, you only need to spend six months beating your developers to be aware of memory bloat once. So, count the 40% as a capital cost and move on. (Added bonus: your cultural shift will cause new programmers to adapt or fail, thus extending the value of your investment in proper discipline!)

You would get exactly the same "feel" as you get with an old C=64 or Atari or Amiga machine. If your goal is to get down to the bare metal, then go ahead and do so. There's no need to dust off old machines that are on the verge of death (from age).

You're an old hacker and may relate to this. I found free on craigslist a hand built (hand wrapped) Z80 CP/M box with dual 8" drives and a case of diskettes. No instructions or schematics. This winter I'm going to dig into it with my scope and logic probe and see if I can get the old baby working again. I was amazed that I was still able to just look at what components were on the boards and get a fairly clear idea of how it was put together. I figure the hard part will be following the address lines a

... if you want to know how computers work, learn microcontrollers with the Atmel 8 bit family of controllers (ATMEGA8, for example). These things are wonderfully documented, there is a free C/ASM development environment with emulator (single-step, breakpoints, etc.). The real deal is just a few dollars for a development board (or get an Arduino, same thing). You don't get the absolutely down to the transistor insight, but that's really just a few experiments with TTL gate chips and LEDs away.

New cars are wonderfully simple under the hood, once you strip away all the plastic. Ever taken apart an old carburetor before? Ever try to get it back together in working order? Give me a FI computer, airflow sensor, and fuel injector any day. Not surprisingly, cars went from a maintenance interval of 1,000 miles with a life expectancy of 50,000 miles to a maintenance interval of 10,000 miles and a life expectancy of 250,000 miles by *avoiding* complexity.

Ever taken apart an old carburetor before? Ever try to get it back together in working order?

Yes, lots. While I appreciate the old joke that "carburetor is a French word, meaning 'leave it alone'", I never found carburetors to be capricious, only more complex than 'screwdriver mechanics'. You have to know how the carburetor works, and you have to have the correct service manual, and you have to have the tools to assure the precision parts are all in spec.

There's a bit of truth to that Model T stuff. Everybody thinks they know how to drive a standard transmission until you throw 'em in an antique without synchros... I would guess you could say the same about automatic spark advance, but I've never personally experienced that...

The Model T was a rear-wheel drive vehicle. Its transmission was a planetary gear type billed as "three speed". In today's terms it would be considered a two speed, because one of the three speeds was actually reverse.

The Model T's transmission was controlled with three foot pedals and a lever that was mounted to the road side of the driver's seat. The throttle was controlled with a lever on the steering wheel. The left pedal was used to engage the gear. With the handbrake in either the mid position or fully forward and the pedal pressed and held forward the car entered low gear. When held in an intermediate position the car was in neutral, a state that could also be achieved by pulling the floor-mounted lever to an upright position. If the lever was pushed forward and the driver took his foot off the left pedal, the Model T entered high gear, but only when the handbrake lever was fully forward. The car could thus cruise without the driver having to press any of the pedals. There was no separate clutch pedal.

The middle pedal was used to engage reverse gear, and the right pedal operated the engine brake. The floor lever also controlled the parking brake, which was activated by pulling the lever all the way back. This doubled as an emergency brake.

We ran some older machines in my first programming course. When you can see the direct results in speed (or the lack of it), it can help teach better approaches. Writing a game and seeing the screen flicker when you ask the CPU to do too much is good motivation to find a more efficient approach. One of our instructors also did something like this with visual sorting procedures. If you can see the difference in speed between one sorting approach and another, it sinks in.

That can much more effectively be done by concentrating purely on Big O rather than hardware tweaks. Just tell them to do problems from Project Euler [projecteuler.net] and they'll get a good appreciation for algorithm efficiency.

There's still nothing like having your actual computer take another 10s to run the same sort someone else's does in 1s. Our current machines are so fast that sorting 1000 items in .1s vs .01s means pretty much nothing to a learning programmer, even though the order of magnitude difference is the same. And harping on Big O isn't "getting your hands dirty".

There are people who learn quite well from theory. But that's not everybody, and actual, perceptible feedback is a very effective learning tool.

I wasn't alone in keeping '286, 386, and '486 boxes around until Pentiums became prolific...and the same goes for dual cores etc...you write code that runs fast on the older generations, and you never hear user-land complaints about your stuff's performance on the new.

Of course, with the advent of .NET... well, now you're only as good as Microsoft is.

One of the great things about the early micros (and probably the even-earlier minis) is that they were Knowable. With a little time, an intelligent person could become familiar with the workings of the entire architecture. I used to have a map of every memory location in the 64KB of ye olde C64 (most of it was user RAM of course) explaining what each byte was for. POKE a different value to a certain address, and the background color changes. PEEK at a certain address and it tells you the current hour. You could learn this... all of it. Obviously that's just not possible with modern computers (probably not even modern phones); no one person can grok the whole system.

The professor does have a point. I remember writing one of my first BASIC programs. All it did was wait for a key to be pressed, then fill all character positions on the screen with that letter. I could watch it go line by line. Then I wrote the same program in 6502 assembler. The entire screen of characters changed instantly, as fast as I could type. A feeling of awe and sudden empowerment rushed through my veins.

Still, the parent author has a good point about the nature of computers. There was a time when you could grasp every detail of the system. Now, to understand any one component means a lifetime of specialization. This is good because it makes computers extremely powerful, but it is bad in a way because it makes the world a mysterious place.

When I started learning html around 1997, it was simple. I could know every tag. I kept up with it through frames and into a small amount of javascript and css. Thirteen years later, many sites are extremely sophisticated, accounting for the nuances of various browsers, etc. It's certainly a formidable hobby at this point.

When the world is a mysterious place, it is a frightening place. A child growing up in the modern world sees all of our amazing technology as magic. Everything is a mysterious black box that just works, or when it doesn't, you call someone to fix it, or toss it and replace it. That's why I always respected Fred Rogers. Even if it was something as simple as taking children "behind the scenes" at the supermarket, he always strived to debunk the modern world at a pace anyone could keep up with.

We shouldn't resist technological progress, but we should be aware of how living in a high tech world affects people. Happiness is linked strongly to control. If you choose only to control your own thoughts, you will probably be very happy because success is almost guaranteed. But if, like most people, you choose to try to control the world around you, technology is a double edged sword. While in some senses it has afforded people unprecedented power, in other ways, growing up surrounded by inscrutable sophisticated electronics can be alienating.

Being able to know things is good for your health. Scientia potentia est!

"Now we have these things called drivers and libraries that do all the basic work for us"

And where do drivers come from, faeries? Unless you want a generation of aging programmers who understand the workings of the machine to die off completely and become reliant on drivers originating from unknown mystical places, the younger generation of programmers MUST learn these things.

I do not think it is a coincidence that computers get faster every year, but my experiences as an end-user are not one bit improved since Windows 3.11. Things still break constantly. Mysterious transient problems. Multi-vendor finger-pointing because NONE of them truly understand the complete working system or want to be responsible for it. etc. etc. I still spend about 30% of my time just *waiting* for my PC to do God-knows-what between certain actions.

It's horrendous, and the problem, in my opinion, is that we have become reliant on "high level" app guys like yourself who just "trust" that whatever is beneath their app is going to do what it needs to magically and that it will all come to pass. And then the apps get heavier, and heavier, and round and round we go in perpetual mediocrity. I've been stuck in this feedback loop since 1989, and I'm telling you it makes me want to just swing a hammer for a living instead sometimes...

There's still high performance computing used in such commercial ventures as oil exploration or manufacturing design. A one percent performance improvement saves well over an hour in a week-long processing job - so the old software is seen as obviously slow, gets ditched, and the optimised software from somewhere else is used instead. We've had things finish three days earlier on fewer nodes when comparing "brand name" software stuck in the 1990s with bug fixes outsourced to India versus something from a smaller outfit.

The five soon discovered that just because a program was simple did not mean the underlying code was straight-forward. To make matters more testing, the BBC Micro offers a very unforgiving programming environment.

My first piece of commercial programming was on a BBC Micro, and having that environment didn't teach me anything; it just made programming more of a pain than being able to cut and paste, set debug breakpoints and so forth. And it doesn't teach any more than using C#/VB, because it's a machine designed around using BASIC, which is itself an abstraction (and IIRC, you didn't have functions, so you had to endure the horror of GOSUB/RETURN).

I first used edlin on DOS 1.0 and kept using it until better alternatives (Norton edit, anyone?) appeared. Edlin makes vi seem like a walk in the park. I've used edlin for assembly and Pascal programming, and I say "curse you!" to anyone who jokes about those dark days.

Wouldn't it be better to use the emulation route? For example, writing a program for the original gameboy, and running it through the emulator. I remember at university we learned assembly on an emulated MIPS. We could focus on the individual instructions, on hardware that was simple and clean, but it all ran on the unix servers (x terminals).

From the summary: "using 30-year-old or older machines." From the fine article: "First released in 1981; discontinued in 1994."

I recently (three weekends ago) fired up my Commodore PET 2001 [flickr.com] (a *genuine* pre-1980 computer) and have been writing a Forth [6502.org] for it. It's really a lot of fun, and I'm finding that 30 years experience in various high-level languages has improved my "6502 assembler golf" game a lot. It's very incomplete, but the inner interpreter mostly works. Feel free to throw down on it here [github.com]

10 years ago when I went through University, the core of the mandatory Assembly programming course was taught on the PDP-11 architecture, then 30 and now 40 years old.

Granted it's not quite the same. We used emulators and not the real things. Also it was for different motivations. The prof felt it was simpler to teach the cleaner PDP-11 instruction set than the 80x86 or 680x0, although the course did eventually also extend to both. Also he happened to be an expert in systems like the PDP-11.

However the idea of using old systems as teaching aids is hardly new - or news IMO.

My first job was writing software that controlled scientific instruments, and there was an awful lot of educational software written for them because they were designed to be used in schools. The BASIC was more structured, and it could use microcassettes or 5 1/4" floppies with its own DOS.

In short, if you are going to use a dinosaur, it is the best dinosaur to choose

For Mr Abrams the old machines have two cardinal virtues; their sluggishness and the direct connection they have with the user.

Another hacker skill you must obtain -- one he forgot to mention -- is how to completely master a system. This is different from merely learning enough.

At one point, I could tell you every minute detail of OS-9 (the Motorola 6809 CPU OS, not the Apple product two decades later), and I also nearly mastered 68HC11 assembly, Z-80 assembler, and the PDP-8.

There is no point trying to teach kids how to master something using, say, the Linux kernel, because it's too freaking big, at least for a one or two semester course.

The mastery skill requires figuring out what you don't know and then figuring out how to find it. It's very much like spatial mapping: I see a blank spot in my map of how it all works, so how will I get from where I know to where I don't know? You also learn the philosophy of a complete working system, sort of a C/S ecology mindset. Finally, there is a bit of reflective thinking that cuts across what are now usually broadly separated problem areas: look at how the memory allocation system has reflected onto the design of the I/O drivers, and vice versa.

Learning how to master a topic is a valuable skill, and at least for CS students, frankly best learned on the smaller, older stuff. Too many newbies think asking small specific questions of Google is all they need, and think they can scale up to a big project merely by asking more little questions, without thinking through the big picture.

A fourth thing the dude forgot is that older computers were MORE powerful. Power is what comes out of the barrel of a gun; it's not P=I*V or MIPS. A single old MVS mainframe could run a small govt department or a multinational corp.

This reminds me of a quote I read from Philippe Khan back in the really old days. He used the original IBM PC (4.77 MHz) to test code (Turbo Pascal) when much faster (8 MHz) machines were available. He said he "liked to watch the computer work".

I noticed a bunch of low (even 4 digit)/. user ids in this thread -- like the guy who got the CP/M box off craigslist. I think it would be quite interesting to do a correlation between low/. user IDs and opinion on the subject. The hypothesis is that older people will have a softer spot for older machines.

Myself? I think learning to program in older machines is a great idea. But then again I learned to program in Sinclair ZX-81's BASIC language -- back when 16kb was a memory expansion...

I'm reminded of something that happened to me while I was a student assistant at a remote job entry location of a university's computer facilities.

The incoming batch of engineering freshmen were being taught, as was the tradition, to program badly in FORTRAN. An instructor assigned them the problem of counting the ways to make change for a dollar, assuming you had plenty of all the denominations of coins. How did he have them do it? Nested DO loops, one per denomination, with each denomination running from 0 to 100 / the denomination's value, of course!

The result? Bunches of students exceeding the thirty-second time limit for WATFIV jobs so their programs were cancelled before they finished. They'd run them again, of course--maybe the first time was a fluke. (The university ran on a 370/138 at the time....) Then they'd come in and ask how to run in a different job class so they weren't limited to thirty seconds.

I wrote a program in Algol W with a recursive function that would solve the general change making problem. It solved the specific one in 0.01 seconds. A friend and coworker (alas, no longer with us) wrote a non-recursive program in FORTRAN that took less than 0.01 seconds, so that the output showed it as running in 0.00 seconds. Our boss took the listings and output and had a discussion with the instructor. He, and I hope his students, learned something.

Nowadays, they wouldn't. Today's computers would run the horribly inefficient version so quickly that nobody would care, and they'd move on to the next thing.

How will the student then apply his knowledge to modern languages such as Java, C# ?
He'll have to optimize his code by doing a bunch of tests, just as he would have done without that class.
With flags and the time (in ms) required by each of the different methods, he will understand, for example, that quicksort is faster than bubble sort. And so it goes.

How will the student then apply his knowledge to modern languages such as Java, C# ?

It's really pretty simple. After seeing what a computer can do with code intimately optimized for the machine it's running on, they will be exposed to the status quo in Java or C# and their heads will explode. Problem solved on our end!

I was in AP computer science over a decade ago. We used C++ using the "apstring" and "apvector" classes that were similar to the STL.

We of course had to implement bubble sort, quicksort, insertion sort, and so on.

It was fairly slow on our computers (386s/486s/maybe one Pentium!) and you could REALLY see a visible difference between the different sorts. It was very obvious.

I rewrote the sorts using standard C arrays instead of apvector. Even on those ancient computers, the differences were suddenly almost gone. Bubblesort using straight arrays was faster than apvector quicksort--at least for fairly small arrays. I don't remember the specifics anymore, but you had to be sorting IIRC several thousand things before there was much of a recognizable difference.

So yeah, that made a big impression on me. Then again that class, and intro classes in college were the last time I've had to write my own sorting algorithm...

I think it's a good thing that people who have maybe only used 2ghz+ computers are given a chance to experience something else. I guess a better question would be, why is expanding your horizons ever a bad thing?

I think it's because apstring and apvector were both simplified versions of the real deal. And the entire source code for both was pretty small and understandable for people just getting into C++, templates, etc.

We at least created modified versions of them as well, extending or re-implementing certain functions. I don't really remember too many specifics!

How will the student then apply his knowledge to modern languages such as Java, C# ?

Do you believe that a school should teach Java, or teach programming?

BTW, C++ and kernel-mode C programming jobs aren't going away, and tend to pay better than Java jobs as the talent pool is growing smaller. Especially for kernel-mode programming, very few schools are turning out bright young talent with any relevant skills in that area, so the labor pool is aging out but the demand isn't shrinking.

I don't know what the A-level syllabus is, but I suspect it is more about learning how computers work in preparation for a university degree than about learning how to program in any particular language. Quite frankly, I think they should keep things as fundamental as possible at this stage. Students can always go to community college if they wish to learn how to set up Outlook, operate Excel, or write Java, etc.

This is true. A-levels aren't about getting a trade job immediately. They're essentially college prep classes, though very different from US style. Why teach and indoctrinate in a particular language today when by the time they're out of school the fashions will have changed? Principles are far more important.

It's a little over a decade since I did mine, and I don't know how much they've changed. But FWIW mine involved partly learning algorithms / programming - in Pascal, with tiny bits of assembly - and partly a bunch of theoretical stuff such as binary (floating point) arithmetic, BNF, Codd's normal forms, basic hardware/architecture principles & protocols, etc. I can't claim to remember the proportion very accurately. Somewhere between 30:70 and 50:50, I think.

I don't think this is something that's worth doing for *vocational* reasons. You don't do this because you'll produce a supply of programmers who are better at the flavor du jour of programming language. You don't do it as an *alternative* to access to modern machines either.

You do it for *educational* purposes, to produce people who understand on a deeper level what is going on than somebody who has studied for some kind of vocational certification. Perhaps they'll go on to be hardware designers, or systems programmers.

If you want to get an intimate feel for writing programs without being able to waste resources, try embedded systems programming. The microchip 10F series has only a few dozen bytes of ram, and a couple hundred words of flash. And no hardware multiply. Making it do useful things is an art. Oh, and unlike some relic from the 70's, you can actually get a job programming for tiny microcontrollers.

That said, it does seem like a cool class. One I'd like to take, but for personal interest, not professional development.

They're A-level students, i.e. the final two years of school, ages 16-17 and 17-18. It's probably more interesting than making some crappy VB application, which is what I remember the A-level computing students doing (I didn't do the subject, I did extra maths instead -- it was much more useful for finding a place on a good CS course at university).

If you want to get an intimate feel for writing programs without being able to waste resources, try embedded systems programming. The microchip 10F series has only a few dozen bytes of ram, and a couple hundred words of flash. And no hardware multiply. Making it do useful things is an art. Oh, and unlike some relic from the 70's, you can actually get a job programming for tiny microcontrollers.

Agreed on all of the above, but the experience of working on the relics will translate to modern embedded systems sufficiently well that I think there is value. In many cases the relics will be even slower and be more RAM- and ROM-constrained than all but the tiniest of today's embedded microcontrollers.

In my digital design course we had to build a simple computer. After we'd demonstrated adders made out of NAND gates we were allowed to use an arithmetic unit chip, and wired things up with latches etc. so we had a workable bus. Programming was accomplished through DIP switches and output via an LED bank. Just like they used to do (substituting LEDs for light bulbs). When you programmed those things you made sure your code was efficient, otherwise your hand would get tired flipping the switches.

I have programmed on TRS-80s and 8088 w/8087s. Compiled C and Read & Go BASIC.

But now I'm programming Python on an 8-core Xeon. When I'm writing a stored procedure or a nested loop over two recordsets, I ***STILL*** catch myself thinking about how long those instructions would take on a slower machine. "Do you know how LONG that looping will take?... oh. 0.000006 seconds. heh heh." I catch myself "subvocalizing" the loops, and I shy away from something "so resource intensive" and look for another, more efficient way.

And yet, when that routine needs to run three billion times per execution, it completes a few hours faster than a slightly less-efficient algorithm! That's a significant improvement for any company today working with large-scale data. A lesser programmer simply wouldn't be able to do the job.

A) It teaches people how to use unfamiliar hardware/software. Chances are the thing you are going to be running at your job is not going to be the thing you studied in university for.

B) It teaches kids how to not make mistakes in coding. Make a big enough mistake and the entire system goes down. Compilers are also a lot less fault tolerant.

C) It teaches kids how computers actually work by peeling back layers of abstraction. Think about it: has the average person under 20 ever used a CLI? For anything? I think the closest people come these days to actually using a CLI is typing in something on the Windows "Run" dialog.

D) It puts things in perspective. It shows how you don't need a Core i7 to play games, that a graphics card with 100 times the memory of the entire computer isn't required to make art, etc.

E) It's fun. The old computers had a lot more easter eggs built in and little tiny quirks. These days you get a Dell/HP/Gateway/Acer/Asus/etc, slap Windows/Linux/OS X on it, and it's the same as any other Windows/Linux/OS X box. The old computers all had little things different - some frustrating, of course - but when you don't have to use it for any serious work, it can be kinda fun digging out the old Commodore 64.

I love this idea. I learned 6502 assembly way back, and spent time in grad school hand optimizing fortran subroutines to extract the optimal performance out of a well aged (in 1985) Cyber 730 system.

Most programmers I run into today have never once touched assembly, and many of them think that ANSI C is the same as assembly. While it has been a LONG time since I have coded anything, I have shown a couple of coders some old-school optimization techniques that blew their minds.

I think the closest people come these days to actually using a CLI is typing in something on the Windows "Run" dialog.

Most people don’t even use that. When the Run dialog opens, it contains the last thing that was executed from it, and on a non-technical user’s computer this always seems to be the last thing I executed from it however long ago it was that I used their computer last.

If you ask me, the closest thing people come these days to a CLI is the Location bar in Internet Explorer... maybe even the search bar in Google.

Another way to reach this level of demand is to have them work on extremely large datasets using mathematically sophisticated models. Even on a modern computer, building a sparse-matrix representation of a tensor mapping R^{60 000} to R^{7500^2} will take a bit of planning.

This will train (mostly) the same skill set and also prepare them for real work. Unfortunately, most CS teachers don't know jack about numerical optimization/statistics/data mining.
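Whatever the exact model, the discipline being trained is the same: exploit sparsity so you only ever touch the nonzero entries. A minimal pure-Python sketch (the dict-of-keys layout is just one illustrative choice, not a claim about any particular library):

```python
# Sparse matrix stored as {(row, col): value} -- only nonzeros exist.
# Multiplying by a dense vector then costs O(nonzeros), not O(rows * cols).
def sparse_matvec(entries, vec, nrows):
    out = [0.0] * nrows
    for (i, j), v in entries.items():
        out[i] += v * vec[j]
    return out

A = {(0, 0): 2.0, (2, 1): 3.0}            # a 3x2 matrix with two nonzeros
print(sparse_matvec(A, [1.0, 10.0], 3))   # [2.0, 0.0, 30.0]
```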

Another factor is that the conceptual model is simpler. It is possible to know the entire layout of one of these classic machines: the CPU, instruction set, registers, I/O chips and memory layout. You know exactly where a program will load in memory. A 6502 has an accumulator, an X register, a Y register, six flags, a stack pointer and a program counter. It is possible to know exactly how the computer works on both a hardware and a software level.

Try that with a PC: what happens when you flip the power switch? Well, which PC?
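That "hold the whole machine in your head" claim is easy to make concrete. Here is a toy sketch (not a real emulator) of the 6502's entire programmer-visible state, with a single instruction, LDA immediate (opcode $A9), stepped by hand:

```python
from dataclasses import dataclass

@dataclass
class CPU6502:
    # The ENTIRE programmer-visible state of a 6502 fits in six fields.
    a: int = 0       # accumulator
    x: int = 0       # X index register
    y: int = 0       # Y index register
    sp: int = 0xFD   # stack pointer (into page $01)
    pc: int = 0      # program counter
    p: int = 0x24    # status flags: N V - B D I Z C

def step(cpu, mem):
    """Execute one instruction; only LDA #imm ($A9) is sketched here."""
    op = mem[cpu.pc]
    if op == 0xA9:              # LDA #imm: load the next byte into A
        cpu.a = mem[cpu.pc + 1]
        cpu.pc += 2
    else:
        raise NotImplementedError(hex(op))

cpu = CPU6502()
mem = [0xA9, 0x42]              # machine code for: LDA #$42
step(cpu, mem)
print(hex(cpu.a))               # 0x42 -- you can watch every state change
```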

A) It teaches people how to use unfamiliar hardware/software. Chances are, whatever you end up running at your job won't be the thing you studied at university.

C) It teaches kids how computers actually work by peeling back layers of abstraction. Think about it: has the average person under 20 ever used a CLI? For anything? I think the closest people come these days to actually using a CLI is typing something into the Windows "Run" dialog.

I can't stress this enough. I'm 22, so close to the age range you mentioned, and I had only ever used Windows 3.1 when I was around 3 to 5 years old, and even then it was just to boot up some old King's Quest or Math Tutor game. Beyond that I only ever used the MS-DOS prompt on Windows 95 for ipconfig so that we could get a good Age of Empires game going. Once I got into the Polytechnic that changed a lot, because one of our professors was very Linux-happy and had us learning to use the terminal.

It's not about "understanding low-level programming" - it's about having a direct connection between what you do and what happens. No virtual machine, no garbage collector, no super-fast compile/link/run/modify cycle (so you're going to take a few minutes to THINK about why something didn't work instead of just taking the "quick fix, let's test it and see if we got it right this time" route).

You don't need to screw your head onto a crippling dinosaur to understand low-level programming.

Absolutely. Better teach them C so they will know how data structures and memory management work.

Languages that try to do everything may help you write code faster but can be treacherous.

Let's look at a simple example. In Python there is a subtle matter of memory management that can trip up the untrained programmer. When you copy a list like this: a = b, you are just creating another reference to the same list; when you copy like this: a = b[:], you are allocating memory for a new list and copying the contents.

When you know C, the difference between the two copy instructions above is obvious, but if you don't know what memory management is, it can be very difficult to understand. I bet there are many bugs in Java, Python, and other modern languages that come from this inability to understand how the language works under the hood.
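The aliasing behaviour described above takes all of eight lines to demonstrate:

```python
b = [1, 2, 3]
a = b           # a and b now name the SAME list object
                # (in C terms: two pointers to one buffer)
a.append(4)
print(b)        # [1, 2, 3, 4] -- the mutation shows through both names

c = b[:]        # shallow copy: a new list object, contents copied over
c.append(5)
print(b)        # [1, 2, 3, 4] -- b is unaffected this time
print(a is b, c is b)   # True False
```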

Working on old computers can be fun for some people, but to train programmers nothing beats learning C. C is close enough to the hardware to let one understand the details of how software runs, yet abstract enough to represent any typical von Neumann computer.

Exactly. Higher-level languages are _for_ people who already understand lower-level languages. Just as calculators are _for_ people who already understand arithmetic. Schools don't give calculators to kindergarteners or to any child who hasn't yet understood arithmetic. First understand arithmetic, and demonstrate you do so by working _without_ a calculator, _then_ be allowed a calculator. Handing people a high-level language on a super-fast machine with "retina" pixel density before they've understood the lower levels is the same mistake.

Yes, it makes great sense. When getting started, it really helps if you're forced to deal with the low level, and even more if you can actually see the low level.

I've spent a large part of my career writing software related to tape drives. It really helped me getting started that I could sit down in front of an old IBM 9-track reel-to-reel to test my code. Not the most useful thing for production data storage, but terrific for seeing problems with production code. Miss the end-of-tape marker? Flap-flap-flap-flap doh!

Similarly, writing and debugging production assembly code made me very comfortable with debugging and crash analysis in higher-level languages, even if I didn't quite have matching source. And that experience in turn lets me understand "what really happens" with a language like C# or Java, and, for example, explain to people why the .NET file rename function is no substitute for the Win32 file rename system call, despite the fact that "they both just rename a file". Stuff that should be obvious to even a junior programmer but, well, isn't.

These systems have high-level I/O options (keyboard, monitor) that embedded systems don't, while avoiding all of the complexity (multi-level caching, speculative execution, out-of-order execution, etc.) of a modern desktop.

Actually the BBC Micro isn't far from the perfect embedded-system trainer. From Wikipedia: "The machine included a number of extra I/O interfaces: serial and parallel printer ports; an 8-bit general purpose digital I/O port; a port offering four analogue inputs, a light pen input, and switch inputs; and an expansion connector (the "1 MHz bus") that enabled other hardware to be connected. Extra ROMs could be fitted (four on the PCB or sixteen with expansion hardware) and accessed via paged memory. An Econet network interface and a disk drive interface were available as options. All motherboards had space for the electronic components, but Econet was rarely fitted. Additionally, an Acorn proprietary interface called the "Tube" allowed a second processor to be added. Three models of second processor were offered by Acorn, based on the 6502, Z80 and 32016 CPUs. The Tube was later used in third-party add-ons, including a Zilog Z80 board and hard disk drive from Torch that allowed the BBC machine to run CP/M programs."

Four ADCs, 8 bits of GPIO, and switch inputs, all available from BASIC on a machine with a floppy, keyboard, and monitor. Sweet. I so wanted one of these back in the day. Too expensive and not really available in the US at the time.

Seriously, how is this useful in modern computing, other than as a "Back in my day..." quote?

Learning how to use older/simpler machines is an excellent way to learn about a number of fundamental concepts. Modern computing, for all its advances, still operates off the same fundamental principles as it did fifty years ago; it's simply become orders of magnitude more complex.

Now, while it's perfectly possible to learn how to do this sort of thing using emulation or specialized training software, there's real value to having an appreciation of the history of the field you're planning to enter, and working with machines that were once considered state-of-the-art is a very effective way to gain a sense of just how insanely far computing has come. Note, too, that simply because you're never going to be called upon to program a PDP-8 in real life doesn't mean that you can't learn a fair amount of generally-applicable knowledge about hardware, logic, branching, execution, input, output, and instruction sets. In fact, by pulling yourself out of a familiar environment, you're forced to pay attention to important things that you'd otherwise happily ignore--like "well, how does what is in my head actually get into a computer's inner workings?"

Finally, always remember that programming is a subset of computer science. Even if all you ever expect to do is write code, a deeper knowledge of what goes on between the compiler and the electrons is going to be quite useful--and will make you a better coder, to boot.