
Consul writes "What is the oldest piece of code that is still in use today, that has not actually been retyped or reimplemented in some way? By 'piece of code,' I'm of course referring to a complete algorithm, and not just a single line." The question would have a different answer if emulation, in multiple layers, is allowed.

Interesting. A quick search on Google reveals that there isn't much on this topic other than people talking about the oldest computer they have. One post talks about some old IBM Series/1s and an S/360 Model 30. One good answer is the computers onboard some of the oldest spacecraft, like Pioneer 10 (1972) and Voyagers 1 and 2 (1977). Nothing has been received from Pioneer 10 since 2002, but you could say that the computer in it might still be running.

Somehow I doubt that many of the people who would be running computers that old, ones from before 1970, are reading Slashdot. And if you think about it, people conceptualized computers differently back then. I think you'd be hard pressed to find mention of a specific program rather than of the computer itself. It's too bad there is such a big disconnect between the generations of computer programmers and administrators.

Back in 1994 I did some contract work for a banking site that was still using some code that another firm I had worked for wrote in 1969, though it wasn't entirely unmodified. The source had somehow disappeared into the great filesystem in the sky, and it was my job to patch the binary directly.

Sadly, that sort of procedure has pretty much gone out of fashion, along with the Real Programmer. (Sigh) That's why I am no longer in IT...

Essentially they just modify the executable itself rather than having the code and recompiling it. The types of people who do this also tend to be good at things like debugging programs by reading a raw core dump.
From the quintessential article on the matter:
"For this reason, Real Programmers are reluctant to actually edit a program that is close to working. They find it much easier to just patch the binary object code directly, using a wonderful program called SUPERZAP (or its equivalent on non-IBM machines). This works so well that many working programs on IBM systems bear no relation to the original Fortran code. In many cases, the original source code is no longer available. When it comes time to fix a program like this, no manager would even think of sending anything less than a Real Programmer to do the job-- no Quiche Eating structured programmer would even know where to start. This is called "job security"."
http://www.pbm.com/~lindahl/real.programmers.html [pbm.com]

And why should they? It works. It does precisely the job it was designed to do, and continues to do it at least as well as it originally did, often better if the hardware underneath has been upgraded. Something only truly becomes obsolete when it no longer satisfies today's needs. A well-designed, task-specific system could theoretically never become obsolete.

At my job they're replacing a bunch of Tandem code that runs some of our core IT infrastructure with Wintel servers. It makes me ill to even be near the work, because they're taking something that just quietly works and "upgrading" it to something that doesn't.

For those who don't know, Tandem is a high-availability platform designed to never go down. Power to the building was off earlier in the year, and the Tandem folks weren't sure they knew how to power the system back on properly - that's how long it had been running.

If there was a power outage, they might not be able to find the guy to turn on the machine? Then it's time to upgrade.

I agree with you that if it works, why fix it? But when a product has reached end of support, because 1) the manufacturer has stopped supporting it or 2) no one in the working population knows what to do with it, then you have to get it out of your infrastructure. You cannot continue to rely on products that you have no way of fixing if they break. Just because it hasn't broken in the past 30 years is no indicator that you won't hit something in the next 30 that will break it.

>If there was a power outage, they might not be able to find the guy to turn on the machine? Then it's time to upgrade.

If it's a mission-critical system, then power outages aren't a concern: The system itself will have a UPS capable of keeping it running for quite a while once mains power drops, and will also have a generator of some sort backing that up. The generator starts up after a specified amount of time, far in advance of when the UPS will fail.

Once mains power drops, and the UPS starts, alerts are generated to those responsible for keeping the system running, and one of the first things that those people will do is call the company that provides their electricity to ascertain the nature of the outage.

From there, they will arrange for additional fuel for the generator, should the outage be prolonged, and most likely will already have such arrangements in place, if they are doing their jobs properly. In addition, they will start alerting the people in charge of the department(s) that rely upon it, and will keep them informed as well, so that they can plan for it being shut down, should such be required.

However, for the most critical systems, plans will be in place for a transfer of services off-site, should such be necessary.

And, again, if it's mission-critical, regardless of its age - all of these things have been planned for, years since, and, if done properly, they are tested on a regular basis as well: Contracts are in place, points of contact as well, and all are updated regularly: Part and parcel of keeping the system running.

And trust me, if all else fails, and it needs to be shut down, then such has been planned for as well, including having "a guy" available to turn it back on, once reliable power is available.

In addition, such things as handling "what happens if it breaks" have also been planned for, and that includes migrating, when such is deemed necessary.

I'm not sure why you got modded up to +5 Insightful, since there's nothing really insightful at all about your post: It reeks of assumptions that simply do not apply in the real world for those of us in IT who actually support mission-critical systems daily, and do so with an eye towards service and availability for those that rely upon them.

But, this *is* Slashdot: Many here think that those of us in IT exist only to thwart them, because we are clueless, and afraid of their superior "skillz", by their estimation.

I trust I've proven that such isn't always the case:)

Captcha: archfool

That made me laugh - it's an amazingly appropriate summation of my opinion of the parent poster:)

And I say that with NO anger. If anything, I'm saddened that such a post was found to be insightful by anyone.

What's really fascinating about Win95 (and something I've actually tried) is that you can run it fully within the L2 cache of Intel's latest generation of Core 2 processors... It was blooming hilarious to see it never need to page out to system memory, because the entire OS was living on-die. -nB

I once worked for Pfizer and "owned" a critical system to support emergency (as in explosions, firefighting, health, etc) operations globally. Dual servers, raids, back-up power supply, the whole works. It had run for years with no outages. The one thing I didn't do was put redundant servers in one of our European data centers because, I was assured, it was nearly impossible for the power to this NJ farm to go away because of backup generators, etc. One day I get a call from IT and they were going to take my emergency information system off-line for half a day! Why? The power switch on the UPS was broken and they couldn't turn it OFF! They brought down my critical never-to-be-offline system that was running perfectly because they couldn't turn it OFF! It was, without a doubt, the dumbest thing I ever saw.

There are system/390 mainframes well into their second decade of uptime, with no original electrical part still in place. Every board is upgradable as faster hardware comes along, without downtime, and in some of these systems only the actual frame is original.

One place I worked for (a major IBM software development lab) had a very old mainframe computer that they used for a few things. Although they could have replaced it with newer systems, I heard that part of the reason they did not do so was because the building's heating/cooling system was designed around this computer. If they removed it, it would be very difficult to re-balance the heating/cooling system. I don't know if this is really true, but I thought it was amusing anyway.

For that matter, how often does it need to run in order to be "still running"? If you run the oldest piece of hardware with the earliest software ever written once or twice per decade for historical reasons, is that code "still running"?

It's amazing to me that NASA had the foresight to design such a remote update system years before the concept of a "firmware update" was ever applied to consumer technology. The innovations that have come out of NASA's labs are vastly underappreciated -- one wonders where our technology would be today if we invested more in the space program and less in killing one another (that is _not_ a condemnation of any particular country; pointing fingers doesn't solve problems... if anyone is offended by that remark I apologize).

We probably wouldn't be as far along. Military technology, especially in times of conflict, has resulted in a great deal of progress. Among other things, there are clearly defined failures (e.g., someone defeats your army in battle, or you have to abandon some location or policy). In comparison, what's failure in space development? It's obvious when things blow up. But what happens when things just aren't done? Is that a failure, or just something that can't yet be accomplished? As I see it, it's far easier for a space program to plug along without any real measure of success and failure. That has complicated our efforts to do things in space.

one wonders where our technology would be today if we invested more in the space program and less in killing one another

Sadly, we would probably never have developed any sort of rocket (beyond the toy phase) if they weren't such a darned convenient way of delivering explosives...

if anyone is offended by that remark I apologize

Please don't. I for one am fed up with our modern PC climate where everyone is afraid to exercise their right of free speech in case someone isn't mature enough to deal with different views. Save the self-censorship for when you're tempted to shout "Fire" in a crowded theatre, or "Jesus loves gays" in a crowded fundamentalist church, or some other speech act that's actually likely to endanger people's lives.

yeah, communicating effectively with people instead of flaming them is certainly cause to get "fed up".

When you try to express a concept that might piss people off, and you aren't trying to piss people off, saying so and expressing sensitivity to their beliefs isn't "PC", it's basic technique of a civilized person in conversation.

note the word "civilized" typically connotes that you are attempting to be a civil person. While being an opinionated asshole is easy and fun (believe me, I know!) it is not effective communication unless your goal is to intimidate your listeners.

I share your impatience with people with thin skins; I also share, on a personal level, your disdain for those people's "maturity". But the fact is, people are different, and some people have thin skins for legitimate reasons you have no knowledge of. Recognizing that is simply showing your listener that you have a basic respect for them as a human being, and it typically goes a lot further toward achieving final understanding than just beating them about the head with their own "hot buttons".

in short, showing a little respect, deserved or not, is what it means to be civilized, IMHO. I don't always follow this. But whining about PC stuff is old and tired. Yeah, some people suck and are stupid and wussy; and it's still cool to be cool to people, by and large.

Actually - it wasn't always this way, although this technology was deployed fairly early in the space program.

I remember reading an article about one of the earliest Mars probes. Both the US and the USSR launched probes around the same time. However, when the probes began to approach Mars, a huge dust storm ensued, obscuring most of the surface for quite a while. The US probe was reprogrammable, while the Russian probe was not. The US was able to put their probe into hibernation during the storm, while the Russian probe expended its energy relaying photos of haze.

So, the value of this ability was proven fairly early in the space program. I'm not sure what the timing was relative to Pioneer but it almost certainly predated Voyager.

One of the original IBM System/360 programs, IEFBR14, is still in wide use today.

IEFBR14  CSECT
         SR    15,15
         BR    14
         END

Only two changes in over 40 years. It doesn't do much, in fact nothing except set a zero return code, but it is widely used for dataset allocation in batch processing.

The joke is that not only does it take four lines of unintelligible gibberish to do with JCL what we would today write as 'rm my/file/name', but also that, against all odds (and all that is holy), it still works today and is used in exactly the same way it was when somebody's grandfather first wrote it.
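For anyone who has never seen it, the 'rm' equivalent looks something like this; a sketch from memory, so take the details (step name, dataset name, the exact DISP and SPACE parameters) as illustrative rather than gospel:

```jcl
//DELSTEP  EXEC PGM=IEFBR14
//DD1      DD   DSN=MY.FILE.NAME,DISP=(MOD,DELETE),SPACE=(TRK,(1,1))
```

The program itself does nothing; the job scheduler, honoring the DISP=(MOD,DELETE) disposition, allocates the dataset if it doesn't exist and deletes it when the step ends.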

Somehow I doubt that many of the people who would be running computers that old, ones from before 1970, are reading Slashdot. And if you think about it, people conceptualized computers differently back then. I think you'd be hard pressed to find mention of a specific program rather than of the computer itself. It's too bad there is such a big disconnect between the generations of computer programmers and administrators.

As someone who has been programming computers since 1966, I beg to differ with you. Code is more persistent than computers, since one can still run code written for an Intel 8080 on a modern dual-core Pentium. The one main difference between programming then and programming now is that the cost of computers then meant that machine efficiency was more important than human efficiency. Unfortunately too many programmers still think that way and are not willing to put in the code for security checks, clean user interfaces, etc. that is required.
In many ways, computer science had a huge regression after the development of microcomputers. Instead of extending the lessons of mainframe-era projects like Multics about security, we returned to the "efficiency" goal because of the lack of power of early micros, and we still use that mindset when we have iPods more powerful than the largest mainframe of 1970.

Thanks for the response; it's great to read your input on this. What I'm saying in my post, though, is that from my point of view, and I think from others my age (32), you're more likely to hear about computers from before the '70s than about the software they ran. I'm sure you have a different viewpoint because you actually experienced that era. But I didn't, and all I have to go on is what is written in books and on the net.

I'm glad that there isn't a complete disconnect between the generations.

On an 8080? 8086 or 8088, sure, but I don't think the 8080 was really compatible with the x86 instruction set. Similar, sure, but not compatible in either direction.

It really is scary just how powerful computers are today. I recently built a new computer, using a Xeon E3110 (everyone was out of Core 2 Duo E8400's recently, and the Xeon was only about $10 more, and I didn't feel like waiting around). I used to work at Thinking Machines, and a group of us were planning a reunion later that week, and it occurred to me that in just about every measure -- floating point, memory capacity, disk bandwidth, and even memory bandwidth -- my new machine was at least equal to a full CM-2. In some ways -- storage capacity and total I/O bandwidth -- it blows the CM-2 out of the water.

When the CM-2 was introduced in 1987, it was way faster than anything else out there -- if you could figure out how to actually program it. These days, even a distinctly midrange home system (we're not talking an "Extreme" here) gives it at least an honest run for the money. There aren't any CM-2s still running that I know of, and apparently the last running CM-5 was shut down a few years ago, although any of us who ever worked on these remarkable machines would be thrilled to be proven wrong on those two statements.

The US DoD has a system called MOCAS ("Mechanization of Contract Administration Services") that was originally brought on-line in 1958 [portfolio.com].

I'm not too familiar with it, so I don't know if the code has ever been changed -- I suspect the hardware has been updated periodically, probably various IBM mainframes -- but based on my experience with government systems there is probably a fair bit of original code in there that nobody understands anymore, and thus doesn't touch.

I don't know for sure, but I suspect that the oldest code still in use is probably the FORTRAN differential equation libraries used in aerodynamic and thermodynamic applications. These were developed and extensively tested in the 1950s, and were much of the reason why FORTRAN got the funding it needed. The cost of rewriting these libraries from scratch, including complete re-testing, is very high. Yet the final cost of an inaccurate result is orders of magnitude greater.

My understanding is that when these libraries are migrated to new environments, it is generally considered better to test the emulations and tweak them until their results agree with the results of vintage systems, rather than messing about in the library code.
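That migration practice can be sketched in a few lines: record the vintage system's outputs for a set of test inputs, then accept the port only when it reproduces them within tolerance. Everything here (the function names, the tolerance, the recorded values) is illustrative, not any real library's test suite:

```python
import math

# Reference outputs recorded from the vintage system (illustrative data).
vintage_results = {0.5: 0.479426, 1.0: 0.841471, 2.0: 0.909297}

def migrated_routine(x):
    """Stand-in for a library routine ported to the new environment."""
    return math.sin(x)

def agrees_with_vintage(fn, reference, tol=1e-5):
    """Accept the port only if every recorded case matches within tol."""
    return all(abs(fn(x) - y) <= tol for x, y in reference.items())

print(agrees_with_vintage(migrated_routine, vintage_results))  # True
```

In practice the reference set would be the full vintage regression suite, and any disagreement would be fixed by tweaking the emulation, not by touching the library code.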

I guess they could be, depending on how far you're willing to stretch the definition of computer. It seems quite obvious that isn't the definition used in the question, though. But assuming we somehow agree that DNA is code, you'd still need a religion to refer to it as 'written' instead of 'generated'.

There's still code running in nuclear power plants that was written in the '60s or earlier; given the challenge of certifying emulators, we ran it on the original machines. Embedded code in machinery is probably older still, although most really old equipment was mechanical, not based on ICs.

Some military hardware may be even older; reliability and certainty is often more important than the latest and greatest.

Check the various satellites. Voyager 1 is about 31 years old and significant portions of its programming remain unchanged. It is expected to keep running until about 2020. There are older operational satellites, but I'm not sure which ones were hardwired vs programmable controllers.

Knowing full well that I haven't got a clue, my guess would still be microcode embedded in some special purpose device - i.e. not a general purpose computer.

I don't remember when digital watches started appearing, but I suppose there's a bit of code in there? Various industrial machines from waaay back that are still in use ought to be good candidates as well.

Kudos to Consul for a remarkably interesting Ask Slashdot. The best one I've seen in a long while:)

Old school devices such as digital watches use ICs. ICs are really nothing more than assemblies of discrete components (resistors, transistors, etc). To count, the device would have to use at least a PLC (Programmable Logic Controller). These devices could be considered to use 'Code'. The next challenge would be to find the oldest device STILL RUNNING.

I don't remember when digital watches started appearing, but I suppose there's a bit of code in there?

There almost certainly isn't a line of code in them. "Digital" != "Computer". Digital watches are nothing but a clock, a counter, a display matrix and a little bit of logic for setting/resetting the counter.
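The parent's point can be made concrete with a toy model: a watch is a crystal oscillator feeding a chain of counters, and the display falls out of pure division, with no stored program anywhere. A Python sketch of the counting (obviously not how the hardware is built; the 32768 Hz crystal is the usual choice, but that detail is incidental):

```python
def watch_display(ticks, crystal_hz=32768):
    """Divide a raw tick count down to H:MM:SS, as cascaded counters would."""
    seconds = ticks // crystal_hz     # divider chain: 32768 Hz down to 1 Hz
    s = seconds % 60                  # seconds counter, wraps at 60
    m = (seconds // 60) % 60          # minutes counter, wraps at 60
    h = (seconds // 3600) % 24        # hours counter, wraps at 24
    return f"{h}:{m:02d}:{s:02d}"

print(watch_display(32768 * 3661))  # 1:01:01
```

Every step is fixed wiring; there is nothing to fetch, decode, or branch on, which is why "digital" doesn't imply "code".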

...because even if the code hasn't been replaced, you can bet the source control software has. My guess would be the old cores of, for example, banking systems. I know our company has COBOL code written in the '60s, and the system is still in COBOL and in use today. If someone wrote a correct, useful algorithm back then it could very easily still exist today. I can at least assure you that they don't exactly do rewrites very often...

The Science Museum has card decks for Jacquard looms that are more than a century old. Bletchley Park has a replica Colossus machine, which needs programming in the shape of switch positions. IDK if the code they use was preserved, or reverse engineered along with the rest of the machine, though.

IEFBR14, the good old chunk of do-nothing code, the most universal executable used by anyone who ever wrote JCL.

It really does that - nothing. IEF is the code prefix, since all code *must* be prefixed, after all. BR 14 stands for "Branch to Register 14", which under the old code linkage conventions means "return and exit". In JCL it's commonly used simply to attach, allocate, and deallocate files. In other words, it is used for its side effects via the file allocation parameters. I haven't written any JCL in probably 20+ years, or I'd give an example. Anything I'd show now would likely be too badly riddled with errors to give the true, scrumptious feel.

It came about because the developers of the IBM 360 Operating System suddenly realized, a few weeks before it was to be released, that they had no method of actually allocating resources for a job. In a panic, they hacked together a version of their assembler macro language to parse the control statements, so the format of the language was the same as assembler language macro calls:

label opcode operands

Spaces were significant, everything had to be upper case, syntax was arcane.

FANG [fourmilab.ch], from 1972, is probably one of the oldest applications you can still download and run [crewstone.com]. It's a copying utility for UNIVAC mainframes. UNIVAC Exec 8 was way ahead of its time, with full support for threads, multiprocessors, and concurrent I/O from the late 1960s. FANG was one of the first applications to use that concurrency effectively. You could put in a series of commands to operate on multiple files, and it would do them as concurrently as possible, keeping track of any dependencies in the file copies.
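The scheduling idea described above translates directly into modern terms: issue all the copies concurrently, but make any copy that reads a file wait on the copy that produces it. A toy Python model of that dependency tracking (nothing to do with Exec 8 internals; the "files" are just dictionary entries):

```python
from concurrent.futures import ThreadPoolExecutor

# Each command copies source -> destination; a command depends on any
# earlier command whose destination is its source.
commands = [("a", "b"), ("b", "c"), ("x", "y")]

def run(commands, store):
    producers = {}  # destination -> Future that will have written it

    def copy(src, dst):
        if src in producers:
            producers[src].result()   # wait for the producing copy to finish
        store[dst] = store[src]       # the "copy" itself

    with ThreadPoolExecutor(max_workers=4) as pool:
        for src, dst in commands:
            producers[dst] = pool.submit(copy, src, dst)
    return store

print(run(commands, {"a": 1, "x": 2})["c"])  # 1: copied a -> b -> c
```

Here the a-to-b and x-to-y copies run in parallel, while b-to-c blocks until its input exists, which is the same "as concurrently as possible" behavior the post describes.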

We wrote a land-leveling program on an Apple II+ in 1979. That code has remained pretty much unchanged since a port to GW-BASIC for the PC. That is the oldest code we have written that is still being used.
Before that we used a strip-programmable HP calculator (computer?) to run the numbers.

By which I mean production code, not the 'Hello World!' you did in Jr. High. I'll go first. In the mid '90s I wrote a COBOL program to link a mainframe to an HP printer to print transcripts at a uni. The SYSPROG set up the VTAM lines and I glued the PCL together with COBOL. I checked in about 3 years ago and a friend of mine said they were still running it. So at that time it was pushing 10 years. Which makes me proud, actually.

If by "program" you mean a stored program on what is conventionally meant by a computer today, I have a candidate. IEFBR14 was used on the earliest version of OS/360 in 1964 as a do-nothing program. It is still in use today, unchanged on the latest version of z/OS. Its function is to execute a JCL step which does nothing, but in the process of doing nothing, the job scheduler is invoked. This is one method of creating and deleting datasets (files). It is also the shortest valid OS/360 (and z/OS) program, containing two executable assembler statements and two assembler directives. The comments are mine.

IEFBR14  CSECT            START PROGRAM SECTION
         SR    15,15      SET EXIT CODE TO 0
         BR    14         RETURN AND EXIT
         END              TELL ASSEMBLER END OF PROGRAM

Interestingly, the first version of this program had a bug, which was subsequently corrected by doubling the program length. It omitted the SR 15,15 statement, which meant that at program exit register 15 had an unpredictable value -- and the program exit code was therefore unpredictable. Since a zero exit code is used to guide the conditional execution of subsequent steps, a failure could be indicated when there was none.
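A toy simulation makes the bug concrete. This only models the two registers involved, not real System/360 semantics: SR 15,15 subtracts register 15 from itself, forcing it to zero, and the exit code is whatever register 15 holds when the program branches back through register 14:

```python
def iefbr14(regs, fixed=True):
    """Model the program's effect on the exit code; regs maps register -> value."""
    if fixed:
        regs[15] = regs[15] - regs[15]   # SR 15,15: R15 minus itself is 0
    return regs[15]                      # exit code at BR 14 is whatever R15 holds

leftover = {15: 42}  # garbage left in R15 by whatever ran before
print(iefbr14(dict(leftover), fixed=False))  # 42: the original, buggy behavior
print(iefbr14(dict(leftover), fixed=True))   # 0: after the one-instruction fix
```

The buggy version only "works" when the leftover register happens to be zero, which is exactly the kind of intermittent failure the post describes.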

And contrary to another post, I believe there are a lot of people with computer experience predating 1970 who read Slashdot. But I don't want to start a flame war over that.

Of course, it depends on what you count as code and what you count as running.

People have already mentioned DNA, and I guess I'd give that high marks.
But maybe we mean things invented by man.

An abacus is a hardware program that is programmable with data and will yield numeric results. So is a slide rule. And there are others, like the card sorters for punch cards, which predate programmable computers by several decades and yet performed very useful computation long before general purpose computers. And there are analog computers for predicting the motions of planets or for controlling the locks of the Panama Canal. But maybe we meant code implemented in software.

The Babbage Machine is mechanical, so if it stops, does that mean the machine has crashed, or does it just have a long cycle time? People have mentioned that, and it's certainly a worthwhile contender.

Mathematics also codes up algorithms, some of which are extremely old, and some of which you might regard as code, so there might be something there that's competitive. But in a forum like this, full of nerds, I think "math" is too easy an answer and isn't provocative enough to get people thinking, so I'll go with this one:

My personal favorite is just something done in human language. Human language has codified the execution structure of organizations and processes for quite a long time. The US Constitution defines an engine that runs the United States, for example. Robert's Rules of Order is an interrupt-driven program that runs meetings. Contract law in the US (and perhaps worldwide) reminds me a lot of bootstrapping TCP (reliable transport of packets under a contract) from unreliable pieces (the contract terms and offers); the whole business of how you can send an offer and what constitutes acceptance in the face of data loss and things arriving in the wrong order is very much analogous to what you see in modern networking systems; it just used to work via pony express instead. So I'd put my vote on one of those. I just don't have the time to work out the timelines to figure out which one came first... probably something in English Common Law. It also depends on whether you want a "framework" or a "packaged application" or whatever, because some of the things I've mentioned are in different categories in that regard. These may not be quite as old as some mathematical algorithms, but I bet they're more overlooked.

Now that I think of it, though, I bet food recipes (which are algorithmic in nature) predate even the earliest work of mathematicians, and it wouldn't surprise me if the recipe for making hot tea is the oldest, even if it's been upgraded a few times for changes in available hardware.

Just a few weeks ago, one of my guys was looking at an old system that we have running. It is an old IMS application running on an IBM mainframe used to manage some factory equipment. We want to replace that system (even though "it just works"), so my guy was looking into it to see how it worked, as documentation is, of course, non-existent.

The source code was written by my first CIO in the mid 1980s (who retired in the early 1990s), and it had a comment at the top which stated that it was created in January, 1968. It is quite sloppy... clearly before anyone thought about writing pretty code. There is no doubt in my mind that it was originally written on coding forms, and subsequently loaded into a machine via the long-defunct keypunch department. The program, of course, is running on much newer hardware now, but the code that is running was written in 1968.

I speculate that there is a bunch of older code outside of my company.

1. The US air traffic control system is 1960s vintage and I'd bet that there's still code in it that is unchanged since it was written.

2. Some airline reservation systems are of equally antique origins. Although I'm sure the hardware has been updated in the ensuing years, I'd say there's probably a lot of code that hasn't been rewritten. Back in the '80s when I was doing some work with an airline and asked about that, I was told, "That code is older than you are."

3. Don't know if this is still the case, but back in the late '70s, Navy carriers had computers so old that they were having to scrounge up germanium transistors to keep them operating. They wanted to keep them operating because nobody wanted to pay to rewrite the gazillion lines of reliable and tested assembly-language code that ran on them. If any of those are still around, they'd be my top candidate for having unchanged code still in operation. I'd guess that, in general, military systems (of the non-COTS [commercial off-the-shelf] type) are the most likely "oldest code" candidates, because of the lengthy and expensive qualification process and the long service life of such systems.

I was looking for some mathematical routines to port into Python and ended up poking around at http://www.netlib.org/ [netlib.org] and http://www.nist.gov/ [nist.gov] where there are huge repositories of mathematical functions, most written in Fortran.

One of the most interesting things, after perusing much of the code, was that instead of using integration routines to calculate things like Bessel functions, Hankel functions, and other differential-equation-related functions, they simply used lookup tables and curve fitting.

I suppose in the 1960s that made perfect sense, as computers were so slow. But even today, I don't know why I shouldn't do the same thing. With EM and circuit simulation software it's GIGO: there are so many parasitics to model that you can only ever get an approximation anyway, so what difference does it make if you get a tiny error from a lookup table instead of the "exact" integration routine value?
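The technique is easy to sketch: tabulate the function once on a grid, then answer queries by linear interpolation between neighboring entries. Here sin() stands in for a Bessel function; real tables would be denser and would use better curve fits than straight lines:

```python
import math
from bisect import bisect_left

# Precomputed table on a uniform grid of 0.1 steps (illustrative density).
XS = [i * 0.1 for i in range(32)]
YS = [math.sin(x) for x in XS]

def lookup(x):
    """Approximate the tabulated function by linear interpolation."""
    i = bisect_left(XS, x)
    if i == 0:
        return YS[0]
    if i >= len(XS):
        return YS[-1]
    x0, x1 = XS[i - 1], XS[i]
    return YS[i - 1] + (YS[i] - YS[i - 1]) * (x - x0) / (x1 - x0)

err = abs(lookup(1.234) - math.sin(1.234))
print(err < 2e-3)  # True: a tiny error, easily swamped by modeling error
```

For a grid spacing h, the linear-interpolation error is bounded by roughly h²/8 times the function's second derivative, so the table density directly buys accuracy; that trade-off is exactly the point the post is making.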

I saved this post from alt.folklore.computers. Terribly impressive. I'm not sure his age estimate is necessarily accurate -- the final incarnation of the LEO ceased to be manufactured in the latter half of the '60s.

I don't know if some modern incarnation of the Orange Leo made it past Y2k. If it did, my guess is it will still be around for a long time...

From: Deryk Barker
Subject: Re: Multics
Newsgroups: alt.folklore.computers, alt.os.multics
Date: 1998/11/09

[*snip*]

When my wife was working for Honeywell, in the 1980s, one of the customers she had dealings with was British Telecom.

BT, at one location, had what they called the "orange Leos".

Now, for those who don't know this, the LEO was the world's first-ever commercially-oriented machine (1951). Even more amazingly, the Lyons Electronic Office was designed and built by the J. Lyons company, best known as manufacturers of cakes and for their nationwide chain of corner tea shops.

Anyway, an "orange Leo" was an ICL 2900 mainframe (they came in orangecabinets), emulating an ICL 1900 mainframe, emulating a GEC System 4mainframe emulating a LEO.

My uncle used to run embroidery machines in Union, New Jersey. These were built in the late 1800s and were about 100 feet long, 10 feet wide, and two stories tall, with thousands of needles stitching constantly. They were literally built as part of the building they were housed in.

Where it gets interesting is that these were driven by a large mechanical computer that ran from paper punch cards. The device itself was about a 1-meter cube. There were adders, carries, multiplies, and I think even branches and loops. It would move paper cards back and forth as it created patches or the frilly parts of ladies' undergarments.

Don't know if this counts, though, and I think it's decommissioned anyhow, but it sure was cool to watch.

Sort of depends on the definition of "still running". If you mean in use when necessary, with essentially unchanged algorithm and logic, we have a lot of FORTRAN code written in the early 60s still running in daily use. It predates Fortran IV, and I would suspect the same code started in ALGOL. They are generally math function routines (convert Euler angles to quaternions, that sort of thing). Originally it was on cards but was later moved into files; I still have some of the card decks. I would guess that with some work I could find code even older than that (character-wise identical except for the comment cards).
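For anyone curious what such a routine actually computes: the Euler-to-quaternion conversion is only a handful of sines and cosines, which is part of why code like that survives unchanged. A modern Python sketch (using the common ZYX yaw-pitch-roll convention -- the old FORTRAN decks may well have used a different angle order):

```python
import math

def euler_to_quat(roll, pitch, yaw):
    """Convert ZYX (yaw-pitch-roll) Euler angles in radians to a
    unit quaternion (w, x, y, z)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (
        cr * cp * cy + sr * sp * sy,  # w (scalar part)
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
    )
```

Zero angles give the identity quaternion (1, 0, 0, 0), and the result is always unit length -- the same invariants the punched-card original would have satisfied.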

Believe it or not, the Jacquard loom (1801), which is still in operation today, is the oldest known powered, programmable 'computer'. Its output is not text or numbers but textile. If there is a hole (a binary 1), it allows thread to go through, so it is digital and not an analog computer. http://en.wikipedia.org/wiki/Jacquard [wikipedia.org]

It is debatable whether it is a computer, but the original post wanted to know about code running today. Well, the code is there as punch cards: each set of cards makes a particular pattern in textiles, and copies of the code still run today. Babbage, too, wanted to use a similar punch-card system to program his engines.

Now, if we are talking about analog computing 'code', that is a different story. :) It's all there, folks!

The oldest extant computer architectures are IBM System/360 (now called System z, but able to run object code from the 360) and Burroughs B5000 descendants (now called Libra). Both architectures date from the early 1960s (1964 for the System/360 and 1961 for the B5000), so we can guess that the oldest running programs date from the same period, or about 40 years ago.

This also fits well with one of the unwritten requirements of the question: that there be a language in which to write the lines of code. The earliest computer languages (LISP, COBOL and Fortran) date from only a few years prior to the introductions of these systems (LISP was invented in 1958, COBOL in 1959 and Fortran in 1957).

This also fits well with a couple of long-lived software systems with which I am familiar. The IRS tax return processing system dates from 1964; written in a combination of COBOL and System/360 machine code, it is only now being replaced by C++ code (the project is called CADE and has been featured in a number of newspaper articles over the past 10 years as a monumental failure). The airline reservation system, SABRE, dates from around 1960 and has been in constant use since it went live in 1964. While SABRE was originally written for IBM 7090 mainframes, it was transitioned to System/360 in the early 70s.

Embedded systems aren't a consideration at this time scale (the first microprocessor didn't appear until 1971), so we don't need to worry that some washing machine from the 1950s is still running some program written at that time. Still, it sounds like the oldest running programs must be about 50 years old.

I find it amusing that most conspiracy theorists - whether the conspiracies are true or not is immaterial - tend to write long rambling screeds like that one, which cause people to lose interest after the first sentence, and then use that as proof that the world is against them.

As late as 1998, one of my former employers was running applications written in 1401 assembler in the late 50s/early 60s, which in turn had been translated from IBM accounting-machine commands. I can't say whether they are still running, since I am no longer there, but given the size and resulting inertia of that entity I would not bet against at least one of those apps still being in service.

Once they rebuilt the Manchester Mk. 1 ten years ago, Alan Turing's program became the oldest program runnable without emulation. It clocks in at 60 years old, having been written in 1948. The code finds the highest common factor of any two integers expressible in 32 bits. Not bad, given that the Mk. 1 had only one arithmetic operator: subtract.
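The subtract-only constraint isn't as limiting as it sounds, because the highest common factor can be found with nothing but subtraction and comparison. A rough Python sketch of the idea (the original, of course, was Mk. 1 machine code, not Python, and I'm not claiming this is Turing's exact routine):

```python
def hcf(a, b):
    """Highest common factor using only subtraction and comparison --
    the kind of method feasible on a machine whose sole arithmetic
    operation was subtract."""
    if a <= 0 or b <= 0:
        raise ValueError("arguments must be positive")
    while a != b:
        # Repeatedly subtract the smaller value from the larger;
        # the HCF of the pair is preserved at every step.
        if a > b:
            a -= b
        else:
            b -= a
    return a
```

This is the subtraction form of Euclid's algorithm; it is slow for lopsided inputs (it takes a - 1 steps for hcf(a, 1)), but it terminates for any pair of positive integers, 32-bit or otherwise.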

That's true enough, and presumably it was because Turing used it on the world's first stored-program computer - it is easier to spot defects on the hardware side of the logic if the software side can be trusted as correct. The program and data were both in volatile memory, and instructions were fetched via an instruction pointer rather than by advancing to the next piece of punched tape or following hard-wired instructions. (Conditional branches on a pre-stored-program computer must have been a bugger, especially with something as fragile and slow as punched tape.) There were known problems with the computer - invalid instructions might do anything - and although it stored 40-bit words, it could only handle the first 32 bits.