I got into a debate with a computer science professor a few months ago when I made a controversial blanket statement: “the code inside loop bodies is the only code that matters for performance.” Some context: I was discussing how multi-threading is about speeding up loops, and I said I don’t care about straight-line code that executes only once (or just a few times). My argument is that programmers do not write billions of lines of straight-line code; it’s the repetition of code (via loops or recursion) that makes code “slow.” In fact, I would argue that any time we wait on a computer program to do something useful, we are in fact waiting on a loop (e.g., grep, loading emails, spell checking, photo editing, database transactions, HTML rendering, you name it). It is a rather silly argument, but I would like to see some counter-arguments/examples. Question: Is parallel programming all about loops/recursion, or are there cases where code that executes only once is worth optimizing?

Please note that if code executing once calls a function that contains a loop, that counts as a loop, not straight-line code. Comments would be great, but at least take the time to indicate your vote below.
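To make the claim concrete, here is a minimal sketch (all names are mine, chosen just for this illustration): a grep-like counter where the straight-line setup runs exactly once, while the loop body runs once per input line, so total runtime grows with the loop and the setup cost becomes irrelevant.

```python
# Toy illustration: the setup code executes once, but the loop body executes
# once per input line, so runtime is dominated by the loop as input grows.

def count_matches(lines, needle):
    # straight-line setup: runs exactly once per call
    count = 0
    needle = needle.lower()
    # the loop: runs len(lines) times -- this is where the time goes
    for line in lines:
        if needle in line.lower():
            count += 1
    return count

log = ["error: disk full", "all good", "ERROR: timeout"] * 100_000
print(count_matches(log, "error"))  # 200000
```

Doubling the input doubles the loop’s cost while the setup cost stays fixed, which is exactly why parallelizing the loop (and not the setup) is what pays off.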

While talking to Owais Khan, a friend studying communication systems, I mentioned that multicore systems are becoming memory bandwidth limited even though the bandwidth of the latest chips exceeds several GB/s. He was puzzled and then corrected my terminology, pointing me to a common mistake made by computer scientists. I decided to write about it and collect opinions from computer scientists here.

After reading my post on the shortcomings of Amdahl’s law, Bjoern Knafla (@bjoernknafla), a reader of the Future Chips blog, suggested on Twitter that I add a discussion of Gustafson’s law. Fortunately, I had the honor of meeting Dr. John Gustafson in person when he came to Austin in 2009. The following are my mental notes from my discussion with him. It is a very simple concept that should be understood by all parallel programmers and computer architects designing multicore machines.
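Gustafson’s observation can be stated in a few lines of code: if the workload scales with the machine, the serial fraction s is measured on the parallel run, and the scaled speedup is S(N) = N − s·(N − 1). A minimal sketch (the function name is my own, not from the post):

```python
def gustafson_speedup(n_procs, serial_fraction):
    """Scaled speedup S(N) = N - s*(N - 1) for N processors and serial fraction s."""
    return n_procs - serial_fraction * (n_procs - 1)

# Unlike Amdahl's fixed-size view, the scaled speedup keeps growing with N:
print(round(gustafson_speedup(64, 0.05), 2))    # 60.85
print(round(gustafson_speedup(1024, 0.05), 2))  # 972.85
```

The contrast with Amdahl’s law is the point: for the same 5% serial fraction, Amdahl’s fixed-workload speedup saturates, while Gustafson’s scaled speedup grows nearly linearly with processor count because the parallel part of the problem grows too.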

I received an interesting comment from ParallelAxiom, a parallel programming expert, on my post titled “When Amdahl’s law is inapplicable?” His comment made me rethink my post: I needed an example to hammer the point home. Thus, I have added an example to the original post, and I am adding this new post so that RSS subscribers can see the update as well. Please read the original article first in case you have not read it.

I see a lot of industry and academic folks use the term Amdahl’s law without understanding what it really means. Today I will discuss what Gene Amdahl said in 1967, what has become of it, and how it is often misused.
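For reference, what Amdahl’s 1967 observation boils down to as a formula: if a fraction p of the work can be parallelized across N processors, the speedup over the serial run is S = 1 / ((1 − p) + p/N). A quick sketch (names are mine):

```python
def amdahl_speedup(n_procs, parallel_fraction):
    """Fixed-workload speedup: 1 / ((1 - p) + p / N)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_procs)

# Even with 95% of the work parallel, the serial 5% caps speedup near 20x:
print(round(amdahl_speedup(1_000_000, 0.95), 2))  # 20.0
```

The limit as N grows is 1 / (1 − p), which is where the common “20x ceiling for 95% parallel code” figure comes from; much of the misuse comes from applying this fixed-workload formula to workloads that actually scale with the machine.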

This question has been bugging me for the last few days. Why would anyone use a linked list instead of an array? I argue that linked lists have become irrelevant in the context of a modern computer system. I have asked around among a few colleagues, and I include their counter-arguments and my rebuttals for your reference. Please point out if I am missing something.
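One concrete form of the argument is memory locality: an array traversal walks sequential addresses, which caches and hardware prefetchers handle well, while a linked-list traversal chases pointers scattered through memory, making each step a dependent load. A minimal sketch of the two traversals (class and function names are mine; Python blunts the cache effect, so treat this as an illustration of the access patterns, not a benchmark):

```python
class Node:
    """A singly linked list node."""
    __slots__ = ("value", "next")
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def build_list(values):
    """Build a linked list from an iterable, preserving order."""
    head = None
    for v in reversed(list(values)):
        head = Node(v, head)
    return head

def sum_linked(head):
    # pointer chasing: each iteration must wait on the previous node's load
    total = 0
    while head is not None:
        total += head.value
        head = head.next
    return total

def sum_array(values):
    # sequential access: contiguous addresses the hardware can prefetch
    total = 0
    for v in values:
        total += v
    return total

data = list(range(100_000))
assert sum_linked(build_list(data)) == sum_array(data)
```

In C the same pair of loops shows the gap directly: the array loop streams through contiguous memory while each list step can stall on a cache miss.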

Timothy Morgan at The Register brings good news: we can expect steady growth in chip sales in the coming years. The bar chart shows the actual and projected yearly growth for 12 years.

The following are the projected numbers for the individual segments:

Category                  Growth (%)
Wireless Communication        17.6
Industrial Electronics         7.3
PCs, Tablets, Phones*          6.2
Consumer Electronics           3.1
Automotive                     3.0
Wired Communication           -1.7

*The grouping of PCs with tablets and phones is unfortunate given that sales of PCs are declining while the sales of the other two are increasing.

The article also brings bad news:

It’s reasonable to expect that replacement cycles for all consumer devices will be extended a little – maybe a year or so – as we introduce one or two new types of devices into our lives. We’ll have more devices and will make them last a little longer than perhaps we might have in the past.

I am not willing to buy this argument. I can see how owning more devices can reduce the chances of my buying brand new devices, but I don’t think it impacts upgrades. In my opinion, consumers upgrade when the newer generation product offers “better” features. In fact, many upgrades are forced by the industry leaders. For example, PC upgrades were artificially induced through connectivity changes (e.g., USB, DVI-D) for many years. In summary, I do not expect the upgrade cycles to lengthen just because we will own more devices. After all, owning a Mac mini, a laptop, an iPad, an iPhone, and an Apple TV 2 did not stop me when the iPad 2 came out.

What is your take? Does owning a phone and a tablet make an upgrade less likely?

I originally wrote this article for Technorati.com. It was published on 6/21/2011. Click here to read it there.

It has been a week since the AMD Fusion Developer Forum, and I have been reading about what was said by the AMD, ARM, and Microsoft speakers. While there were a lot of talks, the one that jumps out at me most is the one by AMD Fellow Phil Rogers. The following are my top three inferences from his talk.

“If debugging is the process of removing bugs, then programming must be the process of putting them in.” (Edsger W. Dijkstra)

“I don’t care if it works on your machine! We are not shipping your machine!” (Vidiu Platon)

“No matter how slick the demo is in rehearsal, when you do it in front of a live audience, the probability of a flawless presentation is inversely proportional to the number of people watching, raised to the power of the amount of money involved.” (Mark Gibbs)

“The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.” (Henry Petroski)

“A computer once beat me at chess, but it was no match for me at kick boxing.” (Emo Philips)

“Programming can be fun, so can cryptography; however they should not be combined.” (Kreitzberg and Shneiderman)

“Perl: The only language that looks the same before and after RSA encryption.” (Keith Bostic)