A New Way To Think About Parallel Programming

Monthly Archives: March 2013

Why do we even use languages to generate programs? Why isn’t there something available that is better (or at least less error prone) at generating software applications than pseudo-English programming languages?

I believe that there is a very simple reason that we keep getting new languages instead of new solutions. The reason is that we always pick the people who were most successful using the “last big thing” to design the “next big thing”. And since the last big thing was a programming language, they will probably suggest a programming language to be the next big thing.

This isn’t a criticism of language developers; it’s an observation of human nature. If you asked Babe Ruth how to become a better batter, he would probably tell you all the things that he did that helped him hit so many home runs. Stance, swing-plane, power techniques, etc. But if what you really wanted to do was get on base more often, his advice might not help.

So too with the rock-stars of programming. If you ask them how to improve software development, they will probably suggest new programming languages that would make their own individual future development efforts easier and/or more comfortable.

But if what we want are innovative, outside-the-box solutions, then we are putting the wrong people in charge of finding them. Instead of picking the best programmers, we should be asking ordinary people how to create error-free programs.

Instead of studying the best and the brightest programmers, we should be studying competent people to see where they make mistakes and how to prevent those mistakes. Perhaps then we will have the insight necessary to create systems that the rest of us mere mortals can use to effectively and efficiently develop software applications.

In the next blogs, we'll investigate human shortcomings first, then language shortcomings, and how the two interact to produce the same slow-motion development cycle.

Have you ever entered a 1,000-line computer program and NOT compiled until the end? And did it work the first time? C'mon, don't count the lines that are pre-compiled in the libraries you used. Count only the lines that you wrote and keyed in yourself.

Honestly, I'd be surprised if you could enter a 100-line program, compile it, and then run it without any errors or corrections the first time. I'd be surprised if you could copy just 30 lines of existing code at the keyboard and have it work the first time.

Programming languages provide just too many ways to fail. Maybe some letters got transposed or a variable was capitalized when it shouldn’t have been, or maybe you just forgot to add some punctuation.
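The point is easy to demonstrate. Here's a hypothetical Java snippet (invented for illustration, not taken from any real project) annotated with the one-character slips described above, each of which would either break compilation or silently change the program's behavior:

```java
// Hypothetical example: each comment marks a one-character slip and its consequence.
public class Fragile {
    public static void main(String[] args) {
        int total = 0;
        for (int i = 1; i <= 10; i++) {
            total += i;            // transposed to "total =+ i;" this compiles
                                   // but silently means total = (+i)
        }
        System.out.println(total); // "system.out" (lowercase s) fails to compile
    }                              // dropping this brace produces a cascade of errors
}
```

As written it prints 55; any one of the marked slips would cost a compile-fix-recompile cycle, or worse, a silent logic bug.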

And these are just the mechanical failures! There is a seemingly infinite number of logical problems that can cause your program to fail or to provide “unexpected enhancements” to the expected behavior.

If programming languages are so damn terrific, why are there so many ways to fail?

If programming languages are so good, why do computer programs still take so long to develop?

If programming languages really are the best method of creating software, why do the programs written in them still have bugs waiting to be found after they have been completed and released?

It seems like with programming languages, it's not "Failure is not an option." Instead it's "Failure is the only option," followed by "Repeat the only option until it runs well enough to release."

With all the really smart people who have developed software over the years, why do we still have a program development methodology that produces such dismal results?

Read the next blog for some preliminary thoughts and an outline of where these blogs are going.

Did you ever notice that computer programming languages are pretty much failures? Sure, we can use them to create programs that do what we want them to do. Eventually. More or less. But how many hundreds or thousands of development hours were required to make the program do most of what we want? And how many buggy versions does it take to finally get that fully functional version? You know, the one without too many latent bugs. Because no program of any size and complexity is 100% bug free.

Prove me wrong. Build a tiny, 1,000-line program that compiles and runs bug free. On the first compile. Oh, yeah, and have it do something useful. Compare your results to an electronics hobbyist, who can breadboard a small audio amplifier or a light sensor in an afternoon, or a backyard mechanic, who can remove, mill, and reinstall the heads on his hot rod in a day and have it run the first time.

We are so conditioned to accept that programming languages will generate unusable outputs that we never even notice the failures anymore; we just note the error messages and try again. We blame ourselves and our inability to think flawlessly instead of blaming the tool for allowing us to build invalid programs. It’s like we’re all using hammers with little tiny strike faces and then blaming ourselves because we keep missing the nails and hitting our fingers instead.

Do you know any other development system where it normally takes until version 3.x before a usable product is available? What if version 1.0 of the Oakland Bay Bridge had only worked with Chryslers on Tuesdays? What if v1.0 of the Model T Ford could only turn right, and you had to wait for v2.0 to turn either left or right? Can you imagine any sane person buying an airline ticket if the person behind the ticket counter said, "Rebooting the airplane is only necessary now and then. As long as you're above 5,000 feet, you'll be fine"?

Sure, we've gotten a lot of mileage out of programming languages, but isn't it about time we found a tool that actually helped us generate the software that we want instead of trapping us in a swamp of syntax errors and compilation errors? Programming languages are just the tools that we've gotten used to, not the only tools possible.

Thank you for visiting the Avian Computing blog, a blog dedicated to improving how we think about parallel programming. The current ways of thinking about parallel programming are ineffective and inefficient because they fail to capitalize on the strengths of human thinking and fail to leverage the strengths of computers. These deficiencies result in parallel programs that are slow to develop, are difficult to debug, show unpredictable performance, and contain potential run-time failures that may occur only intermittently.

This blog will look at many of the issues associated with parallel programming and will try to provide new perspectives on solving these issues, specifically keeping in mind the strengths (and weaknesses) of the human mind.

Currently, we attempt to solve our parallel programming problems using the tools and techniques developed to create single-threaded programs and then attempt to brute-force them into parallel programs with the application of "pure logic" and our massive intellect. This approach is no more effective today than it was 50 years ago. This blog will search for more effective solutions for the rapid development of parallel programs, primarily by using the Concurrency Explorer (ConcX), available soon on this web site. This open-source software should not be confused with Microsoft's ConcurrencyExplorer (no space between the words), which is used in their CHESS system.

An underlying assumption of this blog is that 1,000-core and 10,000-core systems are in our near future. To be able to use these kilo-core systems, we need a better way of generating parallel programs. Currently, we develop parallel programs the old-fashioned way, with "blood, sweat, toil, and tears". The goal of this blog is to investigate how to more efficiently develop the software that will run on the kilo-core systems that are coming.