In Re^2: Programming is more like:, moritz brought up the wobbliness of the mathematical foundations of physics when I complained about the lack of proper mathematical tools in programming. True, most scientists who use mathematics daily never worry about the actual proofs behind theorems -- except, of course, mathematicians themselves. In physics in particular there are few purely mathematical proofs; most things are "proven" by pointing at a physical fact, or by hand-waving, as some physicist friends call it. However, this doesn't detract from the usefulness and effectiveness of the method!

I know of a few attempts to conjure "the algebra of programming". Interested monks can search for articles by E. W. Dijkstra (et al., including W. Feijen), David Gries, or Ralph-Johan Back. The approaches are very interesting, and having tried the methods out, I can tell that they do work on a small scale -- if-then-else statements, loops, and even individual functions.

The reasons these methods have not been successful -- if we measure success by the number of programmers using them -- are partly the same as those listed in the parent post, Re: Programming is more like:. An additional reason is that most people seem to regard mathematics as extra labour, a waste of time, and something that will lead them away from the actual goal.

A further problem is that the field of programming is very young compared to other big mathematical disciplines. Physics, for instance, began with astronomy thousands of years ago, and has had rigorous mathematical methods since Newton's time (the 17th century). Programming is not more than 60 years old, and during that time programmable machines have changed a great deal, not to mention programming itself. Even though the von Neumann architecture, for example, has been very successful, there is no saying there aren't better ones out there. If and when such architectures are invented, programming languages themselves will change, because most today are (bluntly put) abstractions of that architecture. A likely catalyst will be quantum computing, unless an additional abstraction layer ends up mimicking a traditional non-quantum machine.

So, we don't even know yet what programming is, or what it should or will be. We haven't laid down a good enough framework either. You can do loops and conditionals in all programming languages nowadays. Ignoring differences in syntax, they can all be treated the same at the abstract level, and useful theorems can be proven about them, if you define useful semantics. But we -- the "average" programmers -- don't know any of the theorems. Given a sorted array and a value we know is in it, why can we be sure that if we search it in linear order starting at the beginning, we will always eventually find the value at some index? It sounds very basic, and it's not difficult to prove, but the mythical algebra could be built on simple theorems like this, much like Euclidean geometry, and then expanded to cover deeper and more meaningful theorems.
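As an illustration (a Python sketch of my own, not from the post), the theorem above can be written as a loop invariant and checked right in the code:

```python
def linear_search(sorted_values, target):
    """Return the first index holding target.

    Precondition (as in the theorem above): target is known to be
    present in sorted_values, so the loop must terminate with a hit
    before it can run off the end.
    """
    i = 0
    while True:
        # Invariant: target occurs at no index smaller than i.
        assert target not in sorted_values[:i]
        if sorted_values[i] == target:
            return i
        i += 1

print(linear_search([1, 3, 3, 7, 9], 7))  # → 3
```

The invariant plus the precondition is the whole proof: each iteration either finds the target or re-establishes the invariant one index further along, and since the target is present, the loop cannot pass it by.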

Coming back to physics, programming may not have similar laws as physics does, but if we had something analogous, we could perform similar hand-waving in analyzing problems and actually deriving pieces of the program. This is where the usefulness would lie.

Ironically, the proofs that professional mathematicians create are almost never truly rigorous either. The difference is that a mathematician knows how, in principle, the proof could be made rigorous -- assuming there are no errors, which is often a dubious assumption.

How do I explain it? Imagine that you wrote down a detailed outline for your computer program in pseudocode, but you never tried running it. Suppose you convinced yourself that the outline was complete and unambiguous. That's kind of like a mathematical proof. Suppose two other programmers were selected to read your spec closely, agreed with your assessment, thought it was interesting, and found no major mistakes. That's kind of like a published mathematical proof.

Based on your experience programming, how many small, niggling problems do you think you'd encounter if you tried to turn that outline into a real program? How possible is it that you'd encounter something that would require a larger rethinking? Do you think you might encounter a fatal flaw that would make your program useless?

That's how math is. Lots of proofs have errors. It has happened (for instance, to an Italian school of geometry in the late 1800s) that entire areas of mathematics collapsed under the weight of accumulated errors. There are some results that nobody knows whether to believe because nobody has been able to audit the proof. (Famously, the classification of finite simple groups.)

Just waving a mathematical wand and saying we have a proof doesn't mean that we're right. That's not to say that trying to construct proofs about our programs is not a worthwhile exercise. But it isn't a magic cure either.

Edit: planetscape noticed that I said areas instead of errors. I found this very ironic, but fixed it anyway.

If so, as a corollary your statement "Just waving a mathematical wand and saying we have a proof doesn't mean that we're right" is equivalent to Dominus' (in)famous statement that "You can't just make shit up and expect the computer to know what you mean".

That isomorphism is interesting, but orthogonal to what I was trying to say. I would say that your point is part of the reason why there is such a good parallel between doing mathematics and writing programs. However the point that I was trying to make is that our customary way of doing mathematics is like programming where you only outline the program. You don't write it out completely, you don't run it through a compiler, you don't do QA on it. You just do desk checking on a program that has never actually been written and convince yourself that it would work.

If I wanted to draw a parallel to a famous quote, I would go with Knuth's: "Beware of bugs in the above code; I have only proved it correct, not tried it."

Another example is Andrew Wiles's proof of Fermat's Last Theorem. The proof is so long and involved that it took over a year of expert review before a gap was uncovered, and more work still before Wiles, together with Richard Taylor, repaired it.

I take it you are advocating that the only good way to ensure a program is correct is to run it on a computer. But here's an inherent problem: your mental model of the semantics of the programming language you use and what is actually implemented on the computer are bound to differ. Which one is correct is subject to debate (i.e. whether the documentation -- which is the only thing consumable by humans -- or the implementation -- which is consumable by computers -- is definitive). A document may be ambiguous, yet a human reading it can still understand what is meant. A computer program cannot be ambiguous. What may happen is that you interpret the program differently from the computer.

A further problem is the level of explicitness required. Machines do not have creative thinking, so every last little detail has to be spelled out. Humans, on the other hand, can guess, find connections, and draw on existing knowledge when interpreting a document. Excessive detail will only hamper human understanding, which is the opposite of how computer programs work. A computer will make all the idiot mistakes you never thought anyone reviewing the program would.

I'm not advocating that the only good way to ensure a program is correct is to run it on a computer. However I am saying that actually running it on a computer tends to catch a lot of things you don't catch otherwise. Furthermore when you add in unit tests, QA, and other best practices that can only happen after the code can be run, you catch even more. Therefore a program that has only been desk checked should be viewed with suspicion.
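To make the "running catches what desk checking misses" point concrete, here is a purely illustrative Python sketch (the `first_at_least` function and its cases are hypothetical, not from the thread):

```python
def first_at_least(sorted_values, threshold):
    """Return the index of the first element >= threshold, or None.

    Easy to desk-check for the 'middle' cases, easy to get wrong
    at the boundaries.
    """
    for i, value in enumerate(sorted_values):
        if value >= threshold:
            return i
    return None

# A minimal stand-in for a unit-test suite: exactly the boundary
# cases that desk checking most often glosses over.
assert first_at_least([], 5) is None          # empty input
assert first_at_least([1, 2, 3], 5) is None   # nothing qualifies
assert first_at_least([5, 6], 5) == 0         # hit at the very first index
print("all edge cases pass")
```

The middle cases are the ones a reviewer mentally traces; the empty list and the boundary hit are the ones that only a test run reliably exercises.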

As for why mathematicians don't do this, you've hit on some of the reasons. Mathematicians only seek to convince other mathematicians of their results. They strive to be critical of themselves and others, but at the end of the day if you can convince other mathematicians that you have a proof, that's what matters. Therefore there is no need or desire to write proofs out in unreadable formal detail, but there is a need to clarify the central point in a way that other humans will understand.

If you are looking for more mathematical influence in programming, then you're probably looking in the wrong directions for it. Math is very useful to programming in some ways and not very helpful in others.

One place to look to math is for the notation and theory of the programming language itself. Lisp and its many dialects are based on the lambda calculus. Forth can be very simple because it borrows postfix notation, which makes the order of operations unambiguous.
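The postfix point can be made concrete with a tiny evaluator (an illustrative Python sketch, not how Forth is actually implemented): no precedence table and no parentheses are needed.

```python
def eval_postfix(tokens):
    """Evaluate a postfix expression over integers.

    Order of operations is explicit in the token order itself,
    so there is no ambiguity for the evaluator to resolve.
    """
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # second operand is on top
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

# "3 4 + 2 *" can only mean (3 + 4) * 2 -- nothing to disambiguate.
print(eval_postfix("3 4 + 2 *".split()))  # → 14
```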

Another place math is very useful to programming is in optimization. By developing software using algorithms which grow slowly in necessary operations compared to the volume of data, we can make major improvements in the performance of code at a high level. By simplifying expressions, we can get a few more cycles here and there as microoptimizations when needed. Math can be used to prove the equivalency of two solutions, but regression testing should still be used just in case.
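For instance (an illustrative Python sketch of my own): math proves a closed form equivalent to a loop, and a regression test checks the equivalence anyway, just in case.

```python
def sum_loop(n):
    """O(n): add the integers 1..n one by one."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """O(1): Gauss's identity 1 + 2 + ... + n = n(n+1)/2."""
    return n * (n + 1) // 2

# The identity is provable by induction; the regression test
# guards against a transcription slip in either version.
for n in range(1000):
    assert sum_loop(n) == sum_closed_form(n)
print("equivalence holds for n = 0..999")
```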

Math can be useful in approximation routines for heuristic solutions that would take too long to solve using strict algorithms. It can also be useful for choosing when it's necessary to use a heuristic and when the body of data is small enough to use an exact but fast-growing algorithm.
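A sketch of that exact-versus-heuristic decision, using the travelling salesman problem as a stand-in (illustrative Python; the size threshold is an arbitrary guess, not a rule):

```python
from itertools import permutations

def tour_length(order, dist):
    """Total length of a closed tour visiting cities in `order`."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))

def plan_tour(dist):
    """Exact search while factorial growth is affordable;
    a greedy nearest-neighbour heuristic once it isn't."""
    n = len(dist)
    if n <= 8:  # illustrative cutoff: 8! orders is still cheap
        return min(permutations(range(n)),
                   key=lambda order: tour_length(order, dist))
    # Heuristic: always hop to the nearest unvisited city.
    tour, left = [0], set(range(1, n))
    while left:
        nxt = min(left, key=lambda city: dist[tour[-1]][city])
        tour.append(nxt)
        left.remove(nxt)
    return tuple(tour)
```

The cutoff itself is where the math comes in: knowing the exact search costs O(n!) and the heuristic O(n^2) tells you where the switch has to happen, even before profiling.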

Some people will say that math is useful to programming for simulations, accounting, data mining, and statistics. I'd say that's not quite right. I'd say in those cases the math is useful to the programmer and the user. It's not really useful to the act of programming as an external influence, though, because that part of the programming itself is math.

All of these are good reasons that a math or hard science background can help one as a programmer. Being able to program things that use math is different from using math to improve programs, though. The former is domain-specific programming where the domain is mathematical in nature. The latter is the field of Computer Science.

Many programmers can get by writing software with just a solid grasp of high-school algebra and a rudimentary understanding of CS. This is possible by using guides to algorithms written by other people who have done the research, and by not writing programs that require much math in themselves. To write a solid mathematically-centered program, you have to know the math yourself. It helps to have the math background even if you're not programming mathematically intensive code, because math can describe many things about a program that are quite useful to know.

The connection to physics in the OP is apropos. Physics straddles the line between mathematical purity and empirical data, and sometimes you have to give up some elegance and do some hand waving to get stuff done in the real world.

Similarly, in programming we deal with real world events that hamper our ability to make assumptions, like BrowserUK's lift motor. I spend much more of my time defending against those events (customers entering VISA in the "name on card" field, customers putting URLs in the search box, etc.) than I do contemplating the lack of rigorous mathematical backing of my actions.

But, I don't feel like the study of "the algebra of programming" is useless to me, even if I don't care about it. As a physics major I really disliked mathematics but it provided me the tools to get the job done. As a programmer, do I care much about this subject? Not really, but I know that work in this direction will inform the tools I'll be using to program 20 years from now.

It's not clear to me whether you're talking about the "mathematics of programming" or the "mathematics of computer programs", which I think is an important distinction to make. Programming is a creative process: you start from a specification and construct a program to achieve it. A computer program is a static object, unambiguous, with formally defined operational semantics. The computer program is easily amenable to mathematical analysis (in principle), but the creative process of producing programs is not as easily amenable to automation.

The entire field of Formal methods is devoted to the automated, mathematical analysis of computer programs. It is a very active research area.

Of course, many people (including some respondents in this thread) would object that your mathematical models can never include all of the real-world concerns that might be relevant. That's true, and it's a very legitimate concern. Einstein said it best:

"As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."

Some people consider this a shortcoming of trying to model things mathematically. I prefer to think of it as also a shortcoming of the real world ;)

Of course, you could have an interesting epistemological discussion about whether logic and mathematics are necessary to say anything about external reality with any degree of certainty, but I'm not really equipped to have such a discussion...

Perhaps people are overestimating the application of mathematics to rocket science and engineering...
Plotting a set of courses that will reach the destination is all well and good, but merely imagining the trip there isn't the point.

You still have an army of engineers designing the rocket, its parts and sub-parts. Testing each part, and thoroughly testing the assemblies before sending the resulting design out into the field.

On the tiny scale:

Physics will say: resistors do this, capacitors do that, inductors do this. Combine a resistor and a capacitor in this way to get a low-pass filter.

Computer science will say: if statements do this, jumps do that, and assignments do this. Combine a jump with an if statement in this way to get a while loop.
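The "jump plus if statement makes a while loop" construction can be demonstrated with a toy instruction interpreter (an illustrative Python sketch, not any real machine model):

```python
def run(program, env):
    """Execute a list of instructions with an explicit program counter.

    The only control constructs are a conditional forward jump and an
    unconditional backward jump -- together they form a while loop.
    """
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":                    # set VAR VALUE
            env[args[0]] = args[1]
        elif op == "add":                  # add VAR DELTA
            env[args[0]] += args[1]
        elif op == "jump_if_false":        # exit loop if not VAR < LIMIT
            if not env[args[0]] < args[1]:
                pc = args[2]
                continue
        elif op == "jump":                 # back to the test
            pc = args[0]
            continue
        pc += 1
    return env

# while i < 5: total += 1; i += 1   -- spelled out as test + jumps
program = [
    ("set", "i", 0), ("set", "total", 0),
    ("jump_if_false", "i", 5, 6),   # if not (i < 5), jump past the loop
    ("add", "total", 1),            # loop body: count iterations
    ("add", "i", 1),
    ("jump", 2),                    # jump back to the test
]
print(run(program, {}))  # → {'i': 5, 'total': 5}
```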

On the small scale:

Physics will say: this AC/DC circuit should handle X volts, and convert the electricity with N% efficiency.

Computer science will say: this function should convert inputs of type/range A to type/range B, in O(N^M) time and memory.

On the large scale:

The engineer can handwave: yes, we can make a rocket that will reach the moon.

The programmer can handwave: yes, we can provide remote access using a web client.

To get back around to the parent post: I believe average programmers DO know at least the core theorems. Advanced programmers know more high-level constructs as well as low-level details and implementations. We can all analyse problems and derive solutions by handwaving, to various degrees of detail depending on experience.
If you try to program without knowing the core theories, you won't get very far: you will be limited not only to plugging prefab modules together, but also to a subset of their features and capabilities. In terms of engineering: anybody can wire together a battery and a motor to make their own fan, reasonably easily. But that same random person trying to create their own air conditioner, battery, or motor could be in for quite a disastrous project.

No, we can't do loops in all programming languages. And there are quite a few languages that have nothing in common with von Neumann at all. (Heck... where's von Neumann in SQL?) And it is actually often easier to reason about programs in those languages.

I imagine w-ber is only considering Turing-complete programming languages in this discussion (of which SQL is not one). Since all Turing-complete languages can simulate one another, and since at least one Turing-complete language has loops (Perl, for instance), then all Turing-complete languages must have some equivalent to looping. It may have to be simulated, and that simulation may not be efficient, but it's there nonetheless from a mathematical standpoint.

I think the school system gives students the false impression that mathematics is "pure truth", an exact science. The way it is taught is for bean-counters: exact accounting and equations.
This is good for those purposes.

But physical reality doesn't conform to our equations, and engineers and scientists are always making non-real simplifying assumptions, like "assume no friction", "assume constant temperature", "assume whatever makes the equation work". Computers are still excellent for that, using simulation modeling, but you don't see it in Perl (or most languages, because they are slow).
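For example, a bare-bones simulation built on exactly such an "assume no friction" simplification (an illustrative Python sketch):

```python
def simulate_fall(height, dt=0.001, g=9.81):
    """Time for an object to fall from `height` metres.

    Step-by-step (Euler) integration, with air resistance
    assumed away -- the classic non-real special condition.
    """
    y, v, t = height, 0.0, 0.0
    while y > 0:
        v += g * dt       # gravity only; drag is ignored
        y -= v * dt
        t += dt
    return t

# The frictionless closed form sqrt(2h/g) gives about 2.02 s for
# h = 20 m; the step-by-step simulation should land close to it.
print(round(simulate_fall(20.0), 2))
```

The same loop with a drag term added has no neat closed form at all, which is precisely why the heavy lifting moves from equations to simulation.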

So you basically have two types of math in programming: the bean counting, which Perl does well, and the computer simulations, which are best done in C or assembly on supercomputers.

Then there is the theoretical math -- sets, forms, and the like -- and this has yet to be modeled well.

So to lump all math together and talk about its usefulness in programming would need a book or two to do it justice.

To me, simulations and numerical analysis are probably where physics is going. There is not going to be some "great formula for everything" that can be computed. Instead there will be giant models on supercomputers, analyzing billions of variables every microsecond, trying to predict outcomes.

They will teach waveform analysis in math classes, and the students walk away feeling they know about waves; but they still can't come up with an accurate formula for waves breaking on the surf.
