
A casual discussion of algorithms ranging from abs to numerical calculus.

Who & What

Jack W. Crenshaw, Ph.D. (Physics)
wrote his first computer program in
1956 for an IBM 650. He has been working with real-time software for
embedded systems ever since -- contributing several years to NASA
during the Mercury, Gemini, and Apollo programs. In addition to other activities, he is currently a contributing editor for Embedded Systems Programming magazine and author of the
Programmer's Toolbox column.

In Math Toolkit for Real-Time Programming, his effort is
focused on describing the pitfalls of vendor-provided math libraries
and providing robust replacements. In section one he gives a thorough overview of constants and the various ways to declare them, naming conventions, and error handling. As the work progresses, in section two, he builds a library of proven algorithms ranging from
square roots to trigonometric functions to logarithms. Did you suffer through calculus in college with a barely passing grade?
Section three will teach you more about numerical calculus in a half-hour than you may have learned in three semesters.

Kudos

Math Toolkit is written in an easy-to-understand,
anecdotal manner. You might be tempted to think that the author was
animatedly relating the history of computing square roots while having
lunch with you. This method works very well and keeps what could be a
rather heavy subject from becoming too much of a burden. Most chapters
have historical tidbits liberally sprinkled throughout.

Even if college algebra left you with post-traumatic stress disorder, you will not have any trouble with section two. Indeed, you may find
yourself intently following the author on the trail of the perfect arctangent algorithm -- much as a sleuth on the trail of a villain.

The depth of knowledge shown, and its presentation, is exceptional. The author's years of experience are evident in his
self-confident writing style. You will rarely see a clearer overview of numerical calculus.

Quibbles

The cover of the book states: "Do big math on small machines." This, combined with the Real-Time
Programming phrase in the title, might lead one to believe that the book's primary audience is the embedded microcontroller crowd. Sadly, not so. There is very little here for
the die-hard assembler programmer other than some very handy integer square root and sine routines - and these examples are in C++. Based on the cover, I would have liked to see a greater emphasis on
processors lacking a floating-point unit. Some code examples in pseudo-assembler would also have been welcome, since the author chose C++
for all examples.

Crimes

As is so often the case nowadays, there are various typographical
errors scattered throughout. This seems to be an epidemic in current
technical books. Fortunately, it didn't affect the readability of Math Toolkit.

Conclusions

I believe Math Toolkit for Real-Time Programming would be a great, perhaps mandatory, addition to the bookshelf of anyone who
is involved in writing code that has a heavy math component. Other
than the somewhat misleading cover, I cannot find anything truly negative to say about this work. Congratulations are in order to
Mr. Crenshaw on a job well done.

The book also includes a CD-ROM of all example source code. In reality, to get the best benefit from the book, you should mostly
ignore the CD-ROM and work through the examples. To quote the author: "Never trust a person who merely hands you an equation."

Table of Contents

Getting The Constants Right

A Few Easy Pieces

Dealing with Errors

Fundamental Functions

Getting the Sines Right

Arctangents: An Angle-Space Odyssey

Logging in the Answers

Numerical Calculus

Calculus by the Numbers

Putting Numerical Calculus to Work

The Runge-Kutta Method

Dynamic Simulation

Appendix A: A C++ Tools Library

Disclosure

I received a review copy of this book from the publisher. Thus, my loyalties and opinions may be completely skewed. Caveat Lector.

Back in the day, a book like this would have been a real life saver for those of us slugging it out with brain-damaged operating systems (e.g. MS-DOS). From things like MIDI sequencers to guidance systems, the need for real-time speed was a real issue.

However, with the maturity of operating systems, many of them now include device drivers, APIs, objects and other goodies that insulate the average programmer from the hassle of issues like latency. So my question is, other than good academic study, would it pay for the rest of us to spend the $$ on such a book?

Though I admit, having to write my share of real-time apps back in the day has me curious enough to put the book on my wishlist.

Someone marked this question redundant? Guess that shows you jerks are everywhere.

Hey, I understand completely what you're saying. I for one am glad I don't have to deal with such things as latency and pre-emption. In fact, here is a link to a nifty article entitled "Real Time Issues in Linux [helsinki.fi]" that essentially sums up what you asked with a resounding yes.

This deserves some more explanation, since everyone here seems to have missed this point.

A Real Time system is one where the output isn't correct unless it arrives on time. Real Time systems are deterministic - not necessarily fast. The key is to use bounded-time algorithms so that you can predict the worst-case execution time at compile time. RTOSes aren't designed to be fast; they are designed to have deterministic schedulers and kernel services.

Of course, faster processors make it easier to meet real time deadlines, but as processors get faster I'm seeing engineers ignore the real time analysis and design because the code passed the last test they ran. Then they are surprised when it fails in the field...
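That bounded-time point is easy to picture in code. Here's a toy C++ sketch (my own, not from the book): two Taylor-series evaluators for e^x on [0,1] that produce the same answer, but only one of which has a worst-case execution time you can state before the code ships.

```cpp
// Unbounded: iterate until the next term stops mattering.  Fast on
// average, but the iteration count depends on the data and on eps,
// so the worst-case execution time is hard to state up front.
double exp_converge(double x, double eps = 1e-12) {
    double term = 1.0, sum = 1.0;
    int n = 1;
    while (term > eps) {               // data-dependent loop bound
        term *= x / n++;
        sum  += term;
    }
    return sum;
}

// Bounded: always exactly N terms.  Slightly wasteful on easy inputs,
// but the worst-case time is fixed at compile time -- the property a
// hard real-time deadline actually needs.
double exp_bounded(double x) {
    const int N = 16;                  // enough for ~1e-13 error on [0,1]
    double term = 1.0, sum = 1.0;
    for (int n = 1; n <= N; ++n) {     // compile-time loop bound
        term *= x / n;
        sum  += term;
    }
    return sum;
}
```

A real-time analysis can put a cycle count on the second function by inspection; the first one it can only bound by arguing about the data.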

However, with the maturity of operating systems, many of them now include device drivers, APIs, objects and other goodies that insulate the average programmer from the hassle of issues like latency. So my question is, other than good academic study, would it pay for the rest of us to spend the $$ on such a book?

The above generally doesn't apply to anyone doing serious embedded work with small and midrange microcontrollers. Often an operating system is thin to non-existent on these platforms. Some of the lower-range parts may have a 2-byte hardware stack, 28 bytes of RAM and maybe 512 bytes of program memory. Obviously, you won't be doing much sophisticated numerical work on these smallest of microcontrollers, but for more midrange parts, I've found this book to be a godsend.

Well... Realtime programming might not be an issue for you if you use an advanced OS coupled with a mighty CPU. But in many situations you might find yourself programming for, say, a small 1 MHz CPU in a time-critical control system at a factory or chemical plant or something like that. That's when you'll need your skills in realtime programming.

I work with a group of eight other people updating 40 year old Assembler on an IBM Series 1. Something tells me that if this was included in our training programs, those that are
SUF
FER
ING
through the digit-crunching wouldn't have such a hard time. Most people consider this back-in-the-day, but there's an aaaawwwful lot out there that still reeks of old german engineering, and chunk-button ATMs.

Real-time and going fast are two totally different problems. Satellite controllers may need real-time programming - there's physical stuff moving, and if a signal needs to be responded to in the 100ms before the bird turns another degree, you need hard real-time. But there's nothing that a bank does that needs real-time, unless the device in an ATM that hands out the cash is really badly designed. Yes, you need to know that the customer has taken the cash out of the slot or that the receipt-printer's finished, but if you find out 100ms late some of the time, it isn't going to hand out the wrong amount of money; it's just going to be slightly later drawing the next screenful of customer interaction. Some of their stuff needs to get high volumes of work done quickly, but that's a throughput problem, not a real-time problem, and you might get better throughput if some of the transactions have to wait their turn rather than preempting other ones.

I actually have this book. It reads fairly well with some good examples, although I should note that I haven't finished it yet. One thing it is especially useful for is defining a math library that's accurate. Crenshaw talks about how a lot of compilers' built-in functions don't hold up to rigorous math, and he's right. But instead of just complaining about it he walks through solid alternatives. Overall it's pretty good and would provide some quality code for open projects. IMHO anyway.

Not all computers are desktop PCs. Have you ever heard of Palm Pilots? These things are slow!
I searched some time to find a decent integer square root routine to calculate object distances in my elite for palmos game [harbaum.org]. I would have loved such a book...

If you still need a decent integer square root algo, check out
this page. [azillionmonkeys.com] I used the mborg_isqrt2 variant on that page as a starting point for writing my
highly optimized Intellivision
version [spatula-city.org] for SDK-1600. [spatula-city.org] My optimized version takes about 600 - 700 cycles for a 16-bit square root, on a machine where most operations take 6 to 8 cycles. (The version I was replacing took 4000 - 10000 cycles.)
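For anyone who just wants the flavor of these routines, here's a C++ sketch of the classic shift-and-subtract integer square root (my own rendering of the textbook algorithm, not the mborg_isqrt2 code or the Intellivision version above): only shifts, adds and compares.

```cpp
#include <cstdint>

// Shift-and-subtract integer square root: floor(sqrt(x)) for a 16-bit
// input, using no division and no floating point.  The loop always
// runs exactly eight times (two input bits per pass), so the
// worst-case execution time is trivially bounded.
uint16_t isqrt16(uint16_t x) {
    uint16_t root = 0;   // result bits, built one per iteration
    uint16_t rem  = 0;   // running remainder: bits_so_far - root*root
    for (int i = 0; i < 8; ++i) {
        rem = (uint16_t)((rem << 2) | (x >> 14));  // bring down 2 bits
        x <<= 2;
        root <<= 1;
        uint16_t trial = (uint16_t)((root << 1) | 1);
        if (trial <= rem) {                        // next result bit is 1
            rem -= trial;
            root |= 1;
        }
    }
    return root;
}
```

Because the iteration count is fixed, the cycle budget on a small micro can be stated exactly - which is precisely the property the real-time crowd cares about.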

This book looks like it might be interesting to me. Here at work, we had our own math expert,
but he's retired (or semi-retired). We've contracted with him to do math libraries, and
that works for now. But what about 10 years
from now? There's a lot of subtlety in some of these algorithms (it's not always just as easy as
whipping through a Taylor series expansion),
so it's probably time someone in our group started learning. :-)

I am currently trying to get a data-acquisition computer to keep up with a five thousand frame-per-second video feed [redshirtimaging.com] while doing processing between the frames. Hard real-time is a real issue for me.

Of course. Real time doesn't mean low latency. It means predictable (bounded) latency! It's a secondary issue if that latency is low or high. My Linux is reasonably fast, but it's still far from real time: each time I touch my xawtv window, the whole machine freezes for a second...

However, with the maturity of operating systems, many of them now include device drivers, APIs, objects and other goodies that insulate the average programmer from the hassle of issues like latency.

While mature OS'es are indeed very nice to have, they are not universally available. Mature OS == larger code size == larger HW demands == uses more power == larger battery == heavier equipment. And yes, this is still a very real issue. You are not always in a situation where you can throw more hardware at the problem.

As for latency, there are situations where you need absolute control over the timing. I recently participated in the development of a portable heart defibrillator. If there is a delay between the order to give a shock and the actual delivery of that shock, you may kill the patient instead of reviving him. For such jobs, you need guarantees, not promises.

I remember when having a solid math background was de rigueur for a programmer. Of course, I'm talking the mid 80's, engineering school and Fortran, so I'm kind of crufty.

I wonder how much better we could be if coders knew basic math, if they knew how those little bitty chips actually computed the sine of something instead of assuming it works. We would probably have rock-solid operating systems without all the glitzy GUI stuff.

I wonder how much better we could be if coders knew basic math, if they knew how those little bitty chips actually computed the sine of something instead of assuming it works. We would probably have rock-solid operating systems without all the glitzy GUI stuff.

Huh? What's the use of sine in an OS besides to draw glitzy GUI stuff?

Funny this topic should come up. I just did a 'Store Locator' for the company I work for (I'm the IT Manager, believe it or not). All I have is your basic HS diploma, and in creating the search, I realized I don't know a damn thing about sine and cosine. I don't know how they're used, or how they're applied. I have a feeling that they're somehow related to geometry (which makes sense, seeing I have to get a distance between two points on a curve - the earth), but I'm not sure.

Sure, it's probably taken me longer to write this post, than it took to find the php code I used as a basis for the search, but how much math is REALLY needed overall?

I slept through school and did really badly, all because I felt it was worthless. I did feel that my business class, business law, and basic Algebra have been useful. But overall, it wasn't worth my time. Hell, I had a physics teacher who'd pick on me because I was flunking (it's amazing what good test grades + 0 homework does to you), but I just found physics interesting - jeez, it was only HS. I was testing the waters, not padding my GPA. I believe that's what HS is FOR.

And if you KNOW what you want to do (I knew I wanted to fix/program computers when I played on my Apple ][ in 6th grade), what the hell is college for?

Well yeah, man, if you want to be grinding out php and html or doing admin work for the rest of your life, sure, there's no reason to get a higher education, and if you're happy with what you're doing then that's great.

But to get a job writing computer graphics software, or audio processing, or designing any sort of embedded hardware, knowledge of advanced math is required. The people who want to do this kind of work pursue higher educations, and if they enjoy what they're doing then that's great, too.

Well yeah, man, if you want to be grinding out php and html or doing admin work for the rest of your life, sure, there's no reason to get a higher education, and if you're happy with what you're doing then that's great.

Well, that sounds a bit belittling. I think building networks (I'm beyond admin, I just do EVERYTHING - including PBX) can be just as difficult as programming, and you get the same rewards.

I don't really grind out anything. Hell, I put up a TV antenna last summer, and hooked up the security cameras to a linux box for motion detection around xmas. I'd much rather be doing 90 different things, than concentrate on programming in 'X'..

Maybe I should have left out the 'Manager' part:) (I'm just the only one here.)

Then again, maybe I AM that good, and you're all just jealous! muhuhuhahaha!:)

It wasn't meant to sound belittling. I meant what I said: if you find it rewarding then that's great.

What came out as a belittling tone probably slipped through because I know that colleges around the country are churning out graduates with BSes in Information Technology or similar majors, all of whom are going to be going after YOUR JOB. Now it sounds like you've got a good mind and a good head start in the IT world so I wouldn't be too worried, but just know that your field isn't going to be getting any less competitive.

What came out as a belittling tone probably slipped through because I know that colleges around the country are churning out graduates with BSes in Information Technology or similar majors, all of whom are going to be going after YOUR JOB.

From what I've seen, I think they're mostly programmers and MCSE's. That's not too damaging to me. For practical, in-house purposes, I can pick up whatever I need programming-wise. I completely understand I won't be programming games or advanced simulations any time soon (Hell, you can find my pitiful posts on wine-devel about trying to get FoxPro running.. rick@v a leoinc). But those positions always seemed like a small percentage of the market as a whole. Everybody needs a network, internet access, firewalls, phones - infrastructure. It just seems like a bigger target to me.

Fortunately for me, most people I run into are sorely lacking on what I would lump together as basic infrastructure.

(But at this moment, I have to put php aside, so I can figure out an EDI issue with FoxPro.) I love having so many different things. How many EDI people know PCs? Networks? The consultant who interviewed me for this job didn't know many, so here I am!

Ok.. enough of the ego-boosting stuff for now:)

Personally, I think experience can replace college. You just have to be resourceful, and create a resume that shows it. I think I did a good job doing that.

Now, Social Skills OTOH.... It would have been good for me to live in a dorm for a few years. I dormed weekends with my girlfriend - which got me to where I am today, family-wise :)

Almost everybody in the EDI field sort of stumbled their way into it accidentally. :-) I used to be the primary EDI guy where I worked a few years ago, but I haven't touched it since. I think I can still look at an unparsed 850 (X12) transaction set and tell you what's on it without thinking about it, and writing a program to parse all those nested loops is pretty fun. I can probably belch out a 997 FA after chugging a pint of beer. That would impress a very small number of people, unfortunately.

But I also know a lot about PC's (served time doing desktop support) and a little networking.

I've been programming since 1968, and very little of it had anything to do with math. People give me the same line - "wow, I'm no good with math, I couldn't program" - and don't believe me when I say computers add and subtract, multiply once in a while (array subscripting, usually), and hardly ever divide.

Scientific or engineering programming needs the math because it is math programming. The rest? Forget it - maybe you add some numbers for a shopping cart or multiply for sales tax, but that kind of programming has little use for math.

I learned long ago that when an 8 bitter needs trig functions, you use a look up table generated externally.
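For illustration, here's roughly what that looks like in C++ (my sketch, not anyone's production code). The table is filled at startup here purely for convenience; on a real 8-bitter it would be generated offline and burned into ROM, exactly as the comment above says.

```cpp
#include <cmath>
#include <cstdint>

// Quarter-wave sine table, 0..90 degrees, scaled to Q15 (0..32767).
const double PI = 3.14159265358979323846;
int16_t sin_table[91];

// Stand-in for the external generator: fill the table once at init.
void init_sin_table() {
    for (int d = 0; d <= 90; ++d)
        sin_table[d] = (int16_t)std::lround(std::sin(d * PI / 180.0) * 32767.0);
}

// Integer sine for any whole-degree angle, via quarter-wave symmetry:
// one 91-entry table covers the full circle, no runtime trig at all.
int16_t isin_deg(int deg) {
    deg %= 360;
    if (deg < 0) deg += 360;
    if (deg <= 90)  return  sin_table[deg];
    if (deg <= 180) return  sin_table[180 - deg];
    if (deg <= 270) return -sin_table[deg - 180];
    return -sin_table[360 - deg];
}
```

The lookup itself is a modulo, a compare or two, and one array read - well within reach of an 8-bit part.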

mathematical algorithms and nonmathematical algorithms. To a computer scientist, this makes no sense, because every algorithm is as mathematical as anything could be. An algorithm is an abstract concept unrelated to physical laws of the universe.

Nor is it possible to distinguish between "numerical" and "nonnumerical" algorithms, as if numbers were somehow different from other kinds of precise information. All data are numbers, and all numbers are data.

So maybe most of the math is trivial, but that's not the same as being useless...:-p

What's sad is that discrete math isn't really taught in public school. (At least, it wasn't when I was in school.) One day, I found a Discrete Math textbook at the local library in the 'For sale, $0.25' bin. I opened it up and thought "Oh my goodness, this is a programming and algorithms book!" To my mind, 'math' had always meant either calculation (symbolic or otherwise, your typical Algebra and Calculus), or geometry and proofs. While geometric proofs may border on discrete math, they really seem different to me. They're not algorithms.

Discrete Math branches into useful concepts such as graph theory (you couldn't do network routing successfully without it!), some of the basics of sorting, and so on. Basically, it was the math of "machines" -- that branch of mathematics which concerns itself with stepwise algorithms. Dijkstra's algorithm (least-cost path through a weighted graph) and Prim's and Kruskal's algorithms (minimum-cost spanning trees) were all in there.
I thought the book was great.

And, of course, not a single line of code in it.
(At least, not in any computer programming language.)
But I still thought of it as a programming book.
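For the record, Dijkstra's algorithm really is short enough to carry in your head. A standard C++ sketch (the textbook priority-queue formulation, not anything from that particular textbook):

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Textbook Dijkstra: least-cost distances from src in a graph given as
// adjacency lists of (neighbor, weight) pairs, weights non-negative.
std::vector<int> dijkstra(
        const std::vector<std::vector<std::pair<int, int>>>& adj, int src) {
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(adj.size(), INF);
    // Min-heap ordered by tentative distance.
    std::priority_queue<std::pair<int, int>,
                        std::vector<std::pair<int, int>>,
                        std::greater<>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;          // stale heap entry, skip it
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {    // found a cheaper route to v
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```

Unreachable vertices simply keep their INF distance - the heap only ever holds vertices a path has actually touched.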

I'm a mathematician who just wrestled his BS from the UC regents and who has also been programming for the last 7 years; and I agree that it's a shame that discrete math isn't taught to pre-college students. It really is a wonderful foundation for the structured proofs of much higher math in (what I think is) a friendly form, as well as the foundation of programming.

I find it's also true that there isn't much math involved in general programming; be it abstract algebra, topology, or analysis. Crap, even linear algebra doesn't come up explicitly that often unless you're writing a numeric package (or graphics, or interpolation, or sim.).

I say this with a tear in my eye b/c I'm in the middle of hunting for a more rewarding way to spend my math knowledge (not a lot of open spaces in crypto or AI programs). Grad school is the only option (if I can get in).

The one thing I do think higher math helps with is understanding the structure of problems. Math really is the science of patterns. It may take a long time to understand what topic in higher math applies, but I guarantee that there is an area of math that fits your problem.

Actually, even if you're just doing basic sorts, searches and data-structure manipulation, it's amazing how much math goes into it. Ever considered the algorithmic complexity of using binary trees versus randomized data structures like skip lists [nec.com]?

You can be a "computer programmer", but to be a good one that actually has a brain and knows the pros and cons of the algorithms you're coding out requires math. At least the basics of probability theory and calculus.

How did you write the search function? Did you come up with an algorithm on your own? Did you use a prewritten, off-the-shelf search routine?

Note that I'm not making judgements here, I'm just underscoring the point that there are some jobs that require certain knowledge, and others that don't.

And FYI (so you can impress your coworkers and/or significant other <g>): The sine of an angle refers to the y-coordinate of the point at which a line drawn from a starting point at that angle would intersect with a circle of radius=1 drawn around the starting point.
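And that definition is all a store locator actually needs: from sine and cosine you get the great-circle distance between two latitude/longitude points. A C++ sketch of the standard haversine formula (my example, not whatever the PHP snippet in question used):

```cpp
#include <cmath>

// Great-circle distance in kilometres between two (latitude, longitude)
// points given in degrees, using the haversine formula.  This treats
// the Earth as a sphere, which is plenty for "nearest store" work.
double haversine_km(double lat1, double lon1, double lat2, double lon2) {
    const double R   = 6371.0;                     // mean Earth radius, km
    const double RAD = 3.14159265358979323846 / 180.0;
    double dlat = (lat2 - lat1) * RAD;
    double dlon = (lon2 - lon1) * RAD;
    double a = std::sin(dlat / 2) * std::sin(dlat / 2)
             + std::cos(lat1 * RAD) * std::cos(lat2 * RAD)
               * std::sin(dlon / 2) * std::sin(dlon / 2);
    // atan2 form is numerically better-behaved than asin for far points.
    return 2.0 * R * std::atan2(std::sqrt(a), std::sqrt(1.0 - a));
}
```

Feed it two store coordinates and a customer's coordinates, sort by the result, and you have a store locator.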

I won't bash you like some of the other replies to your post, nor will I give you hope that you can advance past a limited set of jobs in the IT industry.

College (esp for computer engineering and CS) fundamentally teaches you:

1. How to solve problems

2. A toolset (ie math, algorithms) to go about solving those problems

True, you may not ever use calculus, but as a computer scientist you will use matrix theory because it is the best way to solve some problems.

This is not only for scientific/research either. If you try to write anything performance related, you'll have to use higher math. Computer science ain't easy.

Let me stress again that college teaches you about your subject matter and how to solve problems for it. You can come up with this stuff by yourself, but in my experience only a tiny percentage of those working without a college degree will ever accrue enough to offset what they missed in college.

I won't bash you like some of the other replies to your post, nor will I give you hope that you can advance past a limited set of jobs in the IT industry.

Thanks for not bashing, and please don't think I'm attacking when I say this but:
If anything was learned from this post, it's that there are a lot of PROGRAMMERS who read Slashdot. IMHO, Programming in itself is a limited set of jobs in the IT industry.

Let me stress again that college teaches you about your subject matter and how to solve problems for it. You can come up with this stuff by yourself, but in my experience only a tiny percentage of those working without a college degree will ever accrue enough to offset what they missed in college.

Your post sounds depressing, but don't worry about me, I'm all set (maybe I'm even in that small percentage). Maybe I'll go back to college when my kids are teenagers. I'll still be less than 40. Yes, I did EVERYTHING early - against the grain, thankyouverymuch :)

Your code may do the job, but does it do the job efficiently? And if it didn't, how would you know?

I changed majors from CS to Mathematics halfway through because I realized that programming is easy; you can always learn a new language or a new technique by picking up the appropriate O'Reilly book on the subject. But writing good programs -- programs that are robust, that scale well, that do as much as possible as quickly as possible -- is really applied math. And math is hard.

You simply have no idea how much you don't know, and with the attitude you have, you probably never will.

Your code may do the job, but does it do the job efficiently? And if it didn't, how would you know?

But what kind of programs? Like another poster said, math isn't really involved in the mainstream too much anymore. The dealer locator is the only thing that has made me think of anything math related in quite a while.

I also didn't spell out my duties. While the initial post may be directed at programmers, how many programmers are directly affected by math? I don't just program, and my programming isn't very intense. I've taken ONE week-long C++ crash course, and that's it. While I still haven't done anything in C++, I've done things in FoxPro, C, PHP, Perl - simple stuff. Want an example of what I've done? www.havokmon.com/stuff. Little blurbs. Nothing major. I didn't need advanced math, and yes, they apply to my job :)

How do I compare with the rest of the industry? I don't know. I have NOT worked for a company that produces applications. I have worked for companies that produce their OWN applications. The only app I know of that had any intense math in it was a 'sales tool'. You could visually zoom in on any location on a US map, and get population, sales density, and some other figures. I'm SURE that required heavy math. But that's one app out of MANY.

You make a good point, and I understand where you're coming from. But IMHO, database knowledge is much more important. If you want to know if it's efficient, you watch it execute. If it seems fast enough, it's fast enough. Remember the 90/10 theory. I learned a long time ago (from Netware server performance, actually) that spending 90% of my time trying to tweak out 10% more performance really isn't worth it in the end.

Now, if you're talking embedded systems, or console game programming - ok. But otherwise there are WAY too many constantly changing variables to try and tweak stuff over 90%.

Well, see, I'm a database guy too. I split my day pretty much equally between SQL and PHP. A lot of people may not consider that "real programming." I do, and in fact I've done some pretty heavy-duty scientific application programming in the past, and I'm here to tell you, I use my math skills just as much now as I did then. Because I don't just write queries and interfaces; I write them with absolutely fanatical attention to detail, and I subject everything I put up on my company's server to the kind of rigorous scrutiny I learned in set theory and algorithm analysis classes. And as a result, it works, and it works well, and on the rare occasions that something doesn't work as well as it should, I know how to fix it. The bigger applications get, the less meaningful the 90/10 theory really is -- for very big applications, a whole bunch of seemingly trivial speed and scalability tweaks add up to big improvements down the road.

Funny, I'm someone with a lot of mathematical training (Ph.D. in Applied Math) but only a few courses in computer science. Somehow, I've managed to pick up a humungous amount of CS along the way, things like algorithm design and analysis, designing and coding industrial-strength C/C++ libraries and applications (yes I get paid for this), high-performance computing, OpenGL coding to roll my own volume visualization apps, doing all of my own unix system administration, setting up all of my own hardware...I've always thought that the best way to become really good at coding and software engineering is to first get a degree in mathematics. If you can do that, the rest is easy.

(Okay, I am a bit biased; I'm a college math professor, and in addition I do a lot of research and consulting related to numerical computation).

Funny, I'm someone with a lot of mathematical training (Ph.D. in Applied Math) but only a few courses in computer science. Somehow, I've managed to pick up a humungous amount of CS along the way, things like algorithm design and analysis, designing and coding industrial-strength C/C++ libraries and applications (yes I get paid for this), high-performance computing, OpenGL coding to roll my own volume visualization apps, doing all of my own unix system administration, setting up all of my own hardware...I've always thought that the best way to become really good at coding and software engineering is to first get a degree in mathematics. If you can do that, the rest is easy.

Hehe, I'm coming from the opposite direction: A little BASIC on a TI99/4a, a little Apple ][ hardware install, a little PICK navigation to FDISK and boot to my games on my mom's PC, hardware, networking, OS...

At this point, my next 'advanced' task is to write a replacement shipping EDI application. Java I hope, but I don't see any advanced math coming into play. You know what's really scary? Sometimes I forget (like now) what you call the base of a number, without the decimals. Is that an integer? Pretty bad, but I haven't needed it. I can't even remember what I used in my C++ class last month.. Not an int, a float? Ah well.

Maybe when I get to OpenGL programming, which I assume may require algorithm design, I'll go take some math classes :) But I don't see that day coming any time soon.

At this point, with advanced math, I'm like Sean Connery in 'Indiana Jones and the Last Crusade', "I wrote it down so I wouldn't HAVE to remember".

Sure, it's probably taken me longer to write this post, than it took to find the php code I used as a basis for the search, but how much math is REALLY needed overall?

Lord Kelvin put it best (though the notebooks of Lazarus Long come darned close):

I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.

Lord Kelvin

Anyone who doesn't know at least some basic math and statistics is a sucker for all the fallacies pushed by advertisers, politicians, and groups with an agenda. And once you've done your Google search for a formula to plug and chug, how do you test it if you don't know enough math to really understand it?

When I went for my Comp Sci bachelor's I was amazed at how many math-phobics there were in my Comp Sci classes. As part of earning the degree you had to take 4 specified math classes (Calc 1 & 2, Linear Systems, Probability). You only had to take one more math class to get a minor in mathematics, Calculus 3.

Now I've always been big on math but I was kind of surprised at how few people were willing to take a single class to earn a full-fledged minor.

May I be the first to say, that is a sad Math minor. You should at least need to get to Number Theory or Algebra and Real Analysis to qualify for a Math minor. Those five classes are (or should be) requirements for any science course, including Comp Sci.

<rant>Oh, the majority of coders know basic math, all right, or at least the most important concepts that are needed to hold down a job in today's IT business. They know that time equals money, and that taking the time to get the thing done right in the first place costs too much. They know (or they think they know) that it isn't necessary to worry overmuch about program size and speed any longer, because they can always depend on the hardware engineers rescuing them with the next set of more powerful products. They know that they get a greater return on effort spent on making pretty presentation slides of all the wonderful new features that are to be put into the next release, and then transcribing the slides into the product's GUI, than on analyzing whether the new features are being provided in a consistent and unconfusing way, or even if they are needed in the first place.</rant>

It's not that trading raw power against development costs is unreasonable where that choice exists - far from it - but rather that hand-waving away questions of efficiency on the assumption that God (or Moore's Law[1]) will provide is a sure recipe for the sort of bloated and near-unmaintainable messes that are so common today. A Mbyte here, a Mbyte there, an assumption that the compiler will find and optimize the invariant components of loops... if you're not careful these all start adding up to measurable numbers ("why is this so s l o w...").

[1]And, of course, one can always paraphrase Parkinson's Law [google.com] for IT: programs and data expand to fill the processor power and storage available.

Most of us wouldn't be any better at all. My maths is rather rusty, though I used to be good at it at school. It's rusty because in the seven years I've been doing commercial software development I have almost never needed any maths.

When I have, I've never needed anything beyond what I could find in a textbook or online in less than 5 minutes.

Heck, I've practically never even had any reason to use floating point math over that period of time.

Sure, there are lots of areas where you do need maths beyond what you can pick up from a book in 5 minutes, but there are far more where maths is irrelevant.

Beyond basic algebra, maths is just another set of domain knowledge that you'll need to acquire to do particular types of software development, not something that is an inherent requirement in order to be a good coder.

Embedded systems gurus are primarily ECE types. They are engineers that know programming. The math knowledge and emphasis will depend primarily on your background. There are a lot of so-called programmers out there that come from a variety of background and consequently have varying exposures to mathematics. You even tend to find programmers with MIS backgrounds who have never taken a calculus course.

It will really depend on what you call yourself. I am an engineer and I have been programming for almost 25 years; however, my background is definitely skewed towards scientific programming. You can even see it in the sequence of programming languages that I learned over my career:

BASIC->FORTRAN->ASSEMBLER->PASCAL->C->LISP->XLISP->C++->JAVA

I don't call myself a programmer, but an engineer who programs. This is because, as you will notice, there are some important tools missing from the above list. Things such as Perl, which we know every real programmer would have in their toolbox.

I don't know how most schools are... but the school I just got my BS from lumps math and comp sci right together. Though we had to take a lot of math as it was (especially higher math, discrete math), we also had some math professors teaching CS courses (algorithms, intro to computer systems, even operating systems).

I think this prepared us pretty well for what would be a more theoretical-type CS career (i.e. not just going to work as a programmer or web developer, but also continuing on to your masters or PhD).

One of the ideas the department was really big on was proving correctness, for example by induction. Instead of giving you a compiler and API and saying here, do this, they made you write it out and actually write a proof about why/how your program works (now imagine people actually doing that for their OS's CreateProcessEx function!).

If all programmers knew math there would be too much competition for those of us who do, and not too many H1-Bs that can do physics simulation, stochastic analysis, etc. I'm glad an old dude can still find a niche in the post-dot-com apocalypse.
Mark

The subject matter of this book is slightly different, since it has an emphasis on real-time [techtarget.com]. If you're just interested in crunching a large problem as fast as possible, then latency [netacquire.com] is not an issue.

BTW, if anyone wants to take a gander at Numerical Recipes in C/Fortran they are available here [nr.com].

It seems to me that the book itself is
pretty flimsy, content-wise. Yeah, if you
need a lot of hand-holding about the various
polynomial approximations and iterative approaches
to calculating special functions, you'll probably
learn something. But the end
of the book is Runge-Kutta; that's a technique that's covered pretty
early on in Numerical Recipes or even a freshman
class in numerical analysis.
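For what it's worth, the Runge-Kutta method in question is small enough to state outright; a generic fourth-order step looks something like this (an illustrative Python sketch, not anything taken from the book):

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for the ODE y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = y from t = 0 to t = 1 in ten steps; the exact answer is e.
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
```

Even with the coarse step h = 0.1 the result agrees with e to about six decimal places, which is the fourth-order convergence doing its work.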

Interesting read. Still, I would challenge all these numerical specialists to come up with a tome that is equally comprehensive and equally readable by scientists without extensive numerical analysis training. The book has been so successful because it hits the target audience perfectly.

For every algorithm in NR there is a better one published somewhere. But probably the same could be said of any book on algorithms. Also, numerical algorithms people are very opinionated and form distinct camps with different favorite algorithms. But pick up the actual papers where timing and results comparisons are made and you'll often find the tests were made on some standard set of test data that doesn't reflect how you might want to use the algorithm. So although I have many complaints about NR (especially the terrible coding style) I still think it's the best all-rounder.

In my field, it is absolutely essential that one squeeze every last bit of math power out of the CPU. So this book occupies a place of honor on my shelf and I refer to it almost daily when I write my Perl scripts.

I'm surprised to see it posted on /., though, because he's pretty harsh towards the gaming community. In fact, he says near the beginning that game-related technology in CPUs (MMX and so forth) is taking away much-needed brainpower from research that should be reaching towards making chips do more math per unit time (not to mention driving up production costs for toy-obsessed, joyless loners). He calls for an immediate end to the pandering that Intel et al do to get into the pocketbooks of the socially-inept, technology pseudo-elite and wants real reform in the area of empowering science.

You know, it's really easy to write complete bullshit on Slashdot and get +5, so to add a little challenge, good trolls add a small self-contradiction to signal to other trolls that their post is a troll. Kudos to PhysicsGenius for mastering the art of good trolling to such precision!

I have to ask. If you're trying to squeeze all the math power you can out of your computer, why are you using an interpreted language? Use something compiled so your computer can spend its time doing the math, not parsing the code.

If you're doing a lot of number crunching or data manipulation (in big sets, with hashes, etc) you're probably spending most of your time in the libraries which are written in C. In fact, being that they're written by programmers skilled in that specific area, you're probably getting better performance than if you wrote them yourself.

Perl isn't an interpreted language, in the traditional sense. In most BASICs, when the execution comes back to a given line it's parsed again, executed, and dumped. If anything, they usually only cache a line or two to help tight loops. Perl is interpreted/compiled all at once, when you start.

Runtime speed is a little slower than other languages, but it's mainly because you've got a lot of runtime checks and hidden memory allocation turned on. Use C++ with automatic array expanding and garbage collecting and you'll see the same kind of performance hits.

That said, the ease of perl causes a lot of features to be misused by programmers who don't know how long it'll take. If you have two pieces of data (a header name and value for instance) it's common to toss them into a hash to keep them related. This isn't really a good idea unless you need to look them up by the header name. If you're just going to dump them out in arbitrary order, you should probably use two arrays in sync. Pre-allocate them to avoid a little delay at every operation. This way you avoid the overhead of the hashing algorithm that you're never going to use, and the slightly-slower lookups compared to an array.

You can also do more complex things this way. I've seen people use hashes here and read the list out by sorting the keys to the hash and iterating through. They'll then do this a few times, sorting at every step. If you want these arrays sorted, but you still don't care about finding a specific header, use an array of two-element arrays, sort the master based on the first element, and not only do you avoid almost all the overhead of hashes, but you have a permanently sorted array, no need to sort at every use.
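The same pattern outside Perl, for the curious: a list of (name, value) pairs sorted once, instead of a hash whose keys get re-sorted at every pass (a Python sketch of the parent's suggestion, with made-up header data):

```python
# Headers kept as (name, value) pairs; no hashing overhead is paid for
# lookups by name that will never happen.
headers = [("Host", "example.com"), ("Date", "Mon"), ("Accept", "text/html")]

# Sort the master list on the first element, once.  It then stays sorted
# for every later pass, instead of sorting the keys at each use.
headers.sort(key=lambda pair: pair[0])

lines = [f"{name}: {value}" for name, value in headers]
```

After the single sort, every traversal sees the pairs in order with plain array indexing.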

These programming "errors" are worse in Perl than most older languages because they're easier to implement. In C you'd have to find a library function to create hashes, or write your own. If you started to write your own you'd quickly realize how many cycles you were burning and probably find an easier way unless your application demanded it. In perl (and many little "scripting" languages) you can do so much in a single command that you may not realize.

This is why if I were hiring I'd only take programmers with a "traditional" background of C or other low-level language, before they got to the Perl, Java, Python, or whatever modern rapid-development language we were using. ASM experience is even a plus. Nobody understands the cost of a routine like someone who programmed in ASM. And it's worth thinking about. Usually you say that requiring 512MB of ram ($40 these days) is worth it to save an hour or two of programming, but hopefully at other times you realize a CGI on a busy site can't be that greedy.

So, in conclusion: Perl isn't traditionally interpreted. It's almost as fast as, or faster than, C for anything that spends much time in libraries (most code). Most of what slows down Perl (or Python, or Java, or C++, etc.) is programmers who don't know what routines they really need to use. The cause of this is usually not enough experience in less "helpful" languages.

Agreed, much like using Matlab (which incidentally requires a rudimentary understanding of matrix algebra) - it's very fast when used correctly and yet painfully slow when used incorrectly: loops = bad, vectorization = good.

Grrr. It's one thing to do a physics simulation with 64-bit doubles, and another thing to keep it stable with 32-bit floats. It's an art, and not for the shallow thinker; talk to real physicists (and gamers) at companies like Havok and MathEngine. As for Intel pandering, he ought to read a book like "Platform Leadership" to learn just what Intel have done to get stuff into the hands of peasants like me, for whom Cray did diddly squat. Scientists? Hah!

The Forth literature contains many examples of high-performance hardware-integer-math-only routines. A core feature of most Forth algos is rescaling to a power of two space at the start of the algo and from it at the end. This allows bit shift operators to do their stuff. It can take non-trivial fiddling to rescale algorithms - hence, it's nice to just look them up.
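The rescaling idea in miniature (a Python sketch of the technique, not actual Forth; the Q14 scale is an arbitrary choice for illustration):

```python
SHIFT = 14            # scale numbers by 2**14, i.e. Q14 fixed point
ONE = 1 << SHIFT

def to_fixed(x):
    """Rescale a real value into the power-of-two space at the start."""
    return int(round(x * ONE))

def fixed_mul(a, b):
    """Fixed-point multiply: the bit shift stands in for a divide by 2**14."""
    return (a * b) >> SHIFT

def from_fixed(a):
    """Rescale back out of the power-of-two space at the end."""
    return a / ONE

# 0.75 * 0.5 = 0.375, computed with nothing but integer multiplies and shifts.
result = from_fixed(fixed_mul(to_fixed(0.75), to_fixed(0.5)))
```

Everything between the two rescalings is hardware integer math, which is exactly why the style suited coprocessor-less machines.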

Unfortunately, it's tricky to find Forth books these days.

That's a shame, because along with Smalltalk, Lisp and APL, I think Forth is one of the "mind expanding" languages all programmers should at least experience, instead of just deciding C/Java/C++/VB is the one true language.

I don't know of other programs, but I know at the University of Waterloo (where I am a computer science student), we must take quite a lot of math courses, ranging from linear algebra, calc, classical algebra, combinatorics & optimization and statistics. The math content for the CS program is very high, and in the end you get a BMath degree.

Maybe this is different at other schools (well, actually I know it is at most, most don't do nearly as much math), but I would hope not. I think to be a solid programmer a solid math background is a requirement.

oh, and btw, for anyone nitpicking, UW now offers a BCS program, as well as the typical BMath Honours CS. The BCS seems to offer a bit more flexibility, so BCS students may not choose to take 'as much' math.

This is pretty much Waterloo's claim to uniqueness, is it not? Something about being the only University in Canada with a math _Faculty_?

I think this may provide some insight into whether or not it's a GoodThing for CS students to have more math in their degrees. Microsoft hires more programmers from Waterloo than anywhere else. And just look at the QUALITY of their code.:)

On a somewhat tangential note, I'm in Communications Engineering at Carleton, and we badly need a stochastics course in our program, so Digital Comm doesn't keep flying over our heads. Sometimes more math is good.

The schools in my area all have at least one or maybe two big employers. The curriculum is generally based on the needs of these few employers.

See, that is where your problem is! The school is setting curriculum based on employers. It should not happen this way. Your school is shortchanging every student who goes there, by effectively (though obviously not completely) limiting their students' employment choices after school. Post-secondary education, especially at the university level, should educate its students in a way in which they can work almost anywhere, not just the 1 or 2 big companies in the area.

And oh, as a side note for another reply, yes, MS hires more grads from U. Waterloo than anywhere else, and when even the slightest controversy comes about over MS controlling curriculum [slashdot.org], people get angry and fights start.

This is a subject that's rather neglected -
three years of college math didn't go very far in letting me understand
how math (fp and otherwise) is actually done in discrete systems.

A year (or so) ago I attended a lecture given by Guy Steele (of Lisp/Java/
Crunchly fame) on his proposal to alter how IEEE floating point numbers
are mapped to real numbers. It quickly flew over my head, but gave a great
insight into the whole field. Steele then had a fair old "discussion" with
the one person in the audience whose head hadn't been overflown (sic), as
there was plainly still much controversy left in this area. On trying to do
some "why didn't I get this stuff at college" reading, I found there wasn't
a great deal of literature.

The reviewer's concern that coprocessor-less systems should be covered is
valid, but I'm not sure going as far as assembly is necessary. For example,
I once had the privilege of reading through Hitachi's libm implementation for
their H8 series microprocessor/microcontroller (one would be generous to
call H8 a 16-bit system, and ungenerous to call it an 8-bit system). With one
small exception (I think the cos table lookup) the whole thing was in (quite
readable) C, and (at least for basic libm stuff) performance was perfectly
acceptable. For didactic purposes, a C (or sane C++) implementation would
be the thing one would want to find in a book - I get very annoyed at embedded
books where the examples are written in asm for the author's favourite (obscure)
microcontroller.

A year (or so) ago I attended a lecture given by Guy Steele (of Lisp/Java/ Crunchly fame) on his proposal to alter how IEEE floating point numbers are mapped to real numbers. It quickly flew over my head, but gave a great insight into the whole field.

Steele is God. He also invented Scheme, wrote the original Common Lisp manual, co-wrote with Harbison a classic reference manual for C, and wrote parallel languages for the Connection Machine.

On trying to do some "why didn't I get this stuff at college" reading, I found there wasn't a great deal of literature.

To be honest, a lot of embedded coding is done with C or C++ these days. I've been following Crenshaw's articles in Embedded Developer magazine for years now. He explains a lot of what they try to teach in college Calc, etc. in simple, practical terms, and reduces it to usable algorithms.

I graduated as a Math/CS double major from Drake University, where almost all CS majors also got a Math degree because the CS prereqs covered all but 3 of the Math prereqs. It has actually helped me enormously as a programmer to know math: in the past month, I've needed transformation matrices, sine/cosine stuff, and a bunch of other things that, granted, could have been lifted verbatim from Google groups, but it's often faster (and the code is better) if I just do it myself.

Not being exactly a math wizard myself, I found this book extremely entertaining. It's pretty easy to see that the author is a heavy follower of the KISS philosophy. He tries to keep it simple not just in his code, but also in his explanations. It is possible to understand most of his explanations even if you don't know much about differential equations, FFTs or anything else.

As for the title, I agree it's a bit misleading. The book has very little to do with real-time (in fact nothing, as far as I could see). What it really should be called is "Computer arithmetic and a little numerical methods for dummies". This book will help you understand how to write your own libm, and give you some ideas for more advanced tasks, but that's about it.

For me, who didn't know much of this stuff, it was very interesting. It will probably not save you that course in numerical algorithms (which I for one haven't taken), but even then, it will probably contain some interesting tidbits you didn't know.

On the other hand, if you have years of experience in writing computer math routines, it will probably quickly become dull, but that's true about anything you already know.
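As a taste of that "write your own libm" material: the square-root chapters build on Newton's iteration, whose bare skeleton is only a few lines (a minimal sketch; the book's tuned version adds range reduction and a much better starting guess):

```python
def newton_sqrt(x, iterations=6):
    """Approximate sqrt(x) by repeatedly averaging the guess with x / guess.

    Converges quadratically: the number of correct digits roughly doubles
    with every pass, given a sane seed.
    """
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0   # crude seed; good seeds are half the battle
    for _ in range(iterations):
        guess = 0.5 * (guess + x / guess)
    return guess
```

Six iterations already pin sqrt(2) down to machine precision from the crude seed.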

As a biz-app programmer, I am bothered by too much attention given to floating point math and not enough to decimal math. A decimal-centric approach would give better results for monetary calculations, because any truncation and rounding are at decimal (base-10) boundaries instead of base-2 boundaries. It gives results more like one expects doing it by hand on paper, which shapes most peoples' perceptions of what they expect (and the customer is always right, even if they are boneheads).

The only library I know that supports it is the BC-library sometimes used with PHP. (Well, I guess you could say that COBOL has such also.) It actually uses strings to hold the results so that there is no machine-based limitation on precision size. Plus, that improves its cross-language use since almost everything supports dynamic strings these days.

(Not the fastest approach I suppose, but most biz apps are not math intensive anyhow. Most code is devoted to comparing strings, codes, and IDs and moving things around from place to place. IBM used to include decimal-friendly operations in its CPUs. Those days seem gone for some reason, yet biz apps are still a huge domain.)
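For what it's worth, this niche is a little better served than the above suggests; Python's standard decimal module, for one, keeps truncation and rounding at base-10 boundaries with a user-chosen precision and rounding mode:

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floating point picks up base-2 truncation error...
binary_sum = 0.1 + 0.2                         # not exactly 0.3
decimal_sum = Decimal("0.1") + Decimal("0.2")  # exactly Decimal('0.3')

# ...while Decimal rounds the way a clerk would on paper: half up, to the cent.
price = (Decimal("19.99") * Decimal("1.0825")).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_UP)   # 21.639175 -> 21.64
```

Like the BC library, it is arbitrary-precision rather than machine-word limited, so precision is set by the programmer, not the hardware.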

The reason so little focus is given to what you call "decimal math", and most people would call "fixed point" is that there's a very simple way of doing it: You do everything with integers scaled sufficiently high up, and move the comma to the right the prerequisite number of steps to get the number of decimal points you want.

Oh, and there are lots of old fixed point code floating around. Looking for "fixed point" instead of "decimal math" might help you find what you want...
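Spelled out, that scaled-integer recipe is short (a sketch with three implied decimal places and positive amounts only; production code would wrap this in a proper money type):

```python
SCALE = 1000   # three implied decimal places

def parse(amount):
    """'1.23' -> 1230: integer thousandths, so addition is exact."""
    units, _, frac = amount.partition(".")
    return int(units) * SCALE + int((frac + "000")[:3])

def fmt(value):
    """1230 -> '1.230': move the decimal point back only for display."""
    units, frac = divmod(value, SCALE)
    return f"{units}.{frac:03d}"

total = fmt(parse("1.23") + parse("3.45"))   # exact base-10 arithmetic inside
```

All intermediate arithmetic is plain integer math; the "comma" only moves at the input and output edges.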

(* there's a very simple way of doing it: You do everything with integers scaled sufficiently high up, and move the comma [euro decimal?] to the right the prerequisite number of steps to get the number of decimal points you want.*)

Integers have a limited length in most built-in stuff. What if you want to store 0.666666666666666666666 in a variable?

Besides, one should still wrap such behind a library rather than manually manage the decimal position. You would then have an integer version of the BC library I mentioned.

Most people that use fixed point deal with 2-3 decimal points for financial transactions, so precision is rarely an issue. Of course there are always some people with weird requirements. But in particular the guy I replied to brought up business/financial usage, where a 32- or 64-bit integer would be more than enough for fixed point with 3 decimals.

Add 1.23 to 3.45 and store the result with one decimal place. IBM's Packed Decimal maxes out at 15 digits plus sign. COBOL does a good job of keeping track of the (implicit) decimal points. If you need predictable results, you need to be aware of rounding issues. In general the round of the sum is not equal to the sum of the rounds. In business calculations, if you add a list of numbers from top to bottom and add the same list of numbers from bottom to top, you get the same answers, both right. In some scientific calculations, if you add a list of numbers from top to bottom and add the same list of numbers from bottom to top, you get different answers, both wrong.
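The round-of-the-sum point in one concrete case (decimal strings keep binary float noise out of the demonstration):

```python
from decimal import Decimal

a = Decimal("1.14")
b = Decimal("1.14")

# Round the sum once: 2.28 -> 2.3
sum_then_round = (a + b).quantize(Decimal("0.1"))

# Round each term first, then sum: 1.1 + 1.1 = 2.2
round_then_sum = a.quantize(Decimal("0.1")) + b.quantize(Decimal("0.1"))
```

Which order the business rules require is exactly the kind of thing auditors care about.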

In most business calculations, your requirement is to be able to round with sufficient accuracy to meet demands from your auditors and the government. That typically means that between 2-5 decimals fixed point will be enough. Yes, you still have to deal with rounding, but usually because there are lots of business rules and laws that regulate how you deal with financial data: Inland Revenue in the UK, for instance, recognizes two ways of calculating aggregate VAT for an order consisting of multiple items, and one of them is specified as calculating the VAT for each item to three decimal points, summing them up and rounding to two, with the rounding rule to use explicitly stated.

Doing that with integers is trivial: Keep sums to the tenth of a pence, sum them together, and write your own one line inland_revenue_round() function that rounds the way Inland Revenue requires it.
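A sketch of that scheme, all in integers (the 17.5% rate and the half-up rounding here are illustrative assumptions from the comment above, not verified Inland Revenue practice):

```python
VAT_PER_MILLE = 175   # 17.5% as parts per thousand (assumed rate)

def item_vat_tenths(price_pence):
    """Per-item VAT kept to a tenth of a penny (three decimals of a pound)."""
    return (price_pence * VAT_PER_MILLE + 50) // 100   # integer, rounded half up

def inland_revenue_round(total_tenths):
    """Collapse a sum of tenths-of-a-penny to whole pence, rounding half up."""
    return (total_tenths + 5) // 10

order = [999, 1499, 250]   # item prices in pence
vat_pence = inland_revenue_round(sum(item_vat_tenths(p) for p in order))
```

No floating point anywhere, so the result is bit-for-bit reproducible across machines.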

Same applies for what you're saying. Of course you need to care about rounding, but doing rounding of integer fixed point representations is trivial if you know the rounding rule you need to apply.

But you have to deal with rounding issues and precision with floating point as well, though lots of people don't realize that and screw up because the results are much closer to expected most of the time.

Right. The basic rule of floating point is that numbers are NOT equal. Specifically, an input number is not necessarily equal to the same number expressed as a constant. Your example works well with integers, but the fun comes when they change the rules and you have other things that depend on those intermediate numbers. One problem with holding numbers to more places than shown is that you get columns of numbers that do not add up to the total shown.
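That "numbers are NOT equal" rule in its most famous form, plus the standard workaround of comparing within a tolerance:

```python
a = 0.1 + 0.2   # carries base-2 truncation error
b = 0.3         # the "same" number written as a constant

exactly_equal = (a == b)   # False on IEEE 754 doubles

def nearly_equal(x, y, rel_tol=1e-9):
    """Compare floats within a relative tolerance instead of with ==."""
    return abs(x - y) <= rel_tol * max(abs(x), abs(y))
```

Python 3.5+ ships math.isclose with essentially this contract, so there is rarely a reason to roll your own.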

If memory serves, the Mac's math libraries initially used decimal strings to
represent numbers - it's been a decade since I wrote a Mac program - perhaps
someone with more current knowledge can shed some light as to whether this is
still the case?

Also, Java's java.lang.math.BigDecimal class contains just the kind of
functionality you describe - its docs are
here [sun.com].

In general, I think you'll find lots of fixed point math libraries around - they're
mostly intended for numerical computation and mathematical cryptography (e.g RSA),
but they should be quite applicable (if sometimes overkill) for your biz-app
uses.

I think you will find that floating point calculations have many more applications in realtime environments.

Why is that?

BTW, could you clarify what you mean by "real-time"? I have seen 2 different definitions before. One is that the response time has to be within a specified tolerance 100% of the time. The other is "interactive". I did not use that term IIRC.

Mr. Crenshaw is also the author of the popular Let's Build a Compiler [iecc.com] series of articles a while back.

These articles don't go into a lot of the complicated stuff that's involved in modern compiler design -- Crenshaw keeps it simple, keeps it straightforward, and still produces a working (if not optimizing) compiler by the end of the second or third article.

No, it won't let you code a C compiler that will beat the pants off of gcc or Borland's latest offering, but the end result is pretty useful.

For those who don't support Slashdot's Amazon embargo, here's their link to the book [amazon.com]. Not only are they selling the book for $35, they have 25 sample pages, including the entire index and the first half of the first chapter. (And no, I'm not in Amazon's affiliates program and don't make a dime if you buy the book using the link that I provided, as a quick glance at the URL will prove.)

Many of the responders to this claim loudly and insistently say that they've been programming for years and have never used any math. This is one of those perennial topics - I've seen it on Usenet and on web sites more times than I'd like to admit.

But by "math" the reference is almost always to calculus.

But math is not just calculus.

Math includes (and this is a MINIMAL list):

algebra Every program using symbols to represent things that might vary is using algebra. Algebra isn't just manipulating big expressions to find values of x and y - it is really about using names to refer to values. (For example, x=y+1 is fundamentally an algebraic expression.)

boolean logic Using logical expressions and understanding what they do is just the predicate calculus. Using logic languages (prolog primarily) is, well, logic.

Linear Algebra Try to program more than minimal graphics without linear algebra.

The structure of numbers computing square roots and the like. This kind of computing also typically involves calculus and its relatives

Calculus many parts of computational mathematics, including things like square roots, sin/cos and the like. Also, finding tangents and normals to surfaces, which is a big part of reflection models in graphics. The logic involved is also used in the analysis of algorithms.

logical reasoning Every time someone writes a loop or a recursive function, they are essentially using mathematical induction (albeit informally). Propagation of pre/post conditions (not just in procedure calls, but at the statement-to-statement level) is also logical reasoning (and informal proofs).

Graph Theory Where doesn't graph theory show up? Dependency graphs, path algorithms of all sorts. Trees are graphs. Garbage collection involves graph theory. Programs are (on several levels) graphs. The internet as a network is a graph. Websites are graphs (and it can be interesting and revealing to look at them as such).

Number Theory Cryptography!

And there's more - check out Glassner's "Digital Image Synthesis" or Knuth's "Art of Computer Programming" and try to find places where mathematics is not mentioned. Let alone such things as wavelets, the Mandelbrot set, grammars, text (or UI) layout, automata (and on, and on, and on...). I can show you a very hard mathematical problem (which I'm still working on) based on an algorithm you all know, but that is often coded incorrectly.

If you're not doing any of these things, you may be programming, but you're probably not programming well.

Juris Hartmanis said (half jokingly) in his Turing Award lecture that "Computer Science is the engineering of mathematics." I think it's about as good a definition as any I have ever heard.

I used this book as one of the references for my Game Developer Conference course on "Faster Math Functions", and the book is good but has holes. Crenshaw's style shows his crusty old engineer roots at times - his coverage of Minimax polynomials is way behind the times, and he seriously needs to get into Mathematica or Maple as his basic high-precision tool. Work by Tang on combating destructive cancellation in range reduction, the new semi-table-based exponent and log methods, Intel's research into using Estrin's-method SIMD for evaluating polynomials, or Muller's book on Elementary Functions are beyond Crenshaw's experience, and it shows. This is a homebrew book rather than an introduction to the state of the art. More information at SCEA R&D Website [scea.com].
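For context, Estrin's method mentioned above pairs up polynomial terms so that independent multiplies can proceed in parallel, where Horner's rule is strictly sequential (a scalar Python sketch of the idea; real code would map the pairs onto SIMD lanes):

```python
def horner(coeffs, x):
    """Evaluate c0 + c1*x + ... + cn*x**n; each step depends on the last."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def estrin_deg3(c0, c1, c2, c3, x):
    """Degree-3 Estrin step: the two parenthesized pairs are independent
    sub-expressions, so superscalar/SIMD hardware can overlap them."""
    x2 = x * x
    return (c0 + c1 * x) + x2 * (c2 + c3 * x)
```

Both produce identical results; Estrin just exposes more instruction-level parallelism at the cost of the extra x-squared multiply.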

Jack should just about be a household name. When Apollo 13 went berserk, NASA dusted off some calculations of trajectories that he had previously done (he was no longer working for NASA) and used one of them to bring the astronauts back.

That's it. For those who want the quick link for Let's Build a Compiler, it's right here [iecc.com] (http://compilers.iecc.com/crenshaw/.)
I really hope that Crenshaw might write again about compilers. I agree with the Pascal and 68k part -- they're old, and even some of the approach taken by the tutorial is probably not up to speed with modern practices. But hey, at least it gives a good historical account.