Rule #6 - Adapt and Adopt

The roguish Richard Feynman after winning his Nobel Prize in Physics in 1965.

In 1965, Dr. Richard Feynman won the Nobel Prize in Physics. He was brash, rebellious and rarely passed up an opportunity to point out hypocrisy and irrationality.

For example, when he was a young man working on the Manhattan Project at Los Alamos National Lab, he noticed a big wash under the perimeter fence.

While the gated entrances were heavily guarded, this unprotected gully was deep enough for a man to pass under without bending over.

Feynman walked up to the guarded entrance one day, said hello, then turned, walked down the fence line, walked under the fence and went to work.

Yet another case of the front door being heavily padlocked while the back door swings wide open.

Sounds like a lot of our technology security efforts today. Isn't it comforting to know this stuff has been going on forever?

Anyway, that's not what I want to talk about today.

Feynman's New Trig Notation

When Feynman was much younger, he became annoyed with the standard mathematical notation used for trigonometry and so invented his own. Here is how he describes it in his autobiography:

"While I was doing all this trigonometry, I didn't like the symbols for sine, cosine, tangent, and so on. To me, "sin f" looked like s times i times n times f! So I invented another symbol, like a square root sign, that was a sigma with a long arm sticking out of it, and I put the f underneath. For the tangent it was a tau with the top of the tau extended, and for the cosine I made a kind of gamma, but it looked a little bit like the square root sign. Now the inverse sine was the same sigma, but left-to-right reflected so that it started with the horizontal line with the value underneath, and then the sigma. That was the inverse sine, NOT sin⁻¹ f--that was crazy! They had that in books! To me, sin⁻¹ meant 1/sine, the reciprocal. So my symbols were better."

He goes on to acknowledge that while this notation was better for him, no one else understood it, and therefore it was not very useful.

"I thought my symbols were just as good, if not better, than the regular symbols - it doesn't make any difference what symbols you use - but I discovered later that it does make a difference. Once when I was explaining something to another kid in high school, without thinking I started to make these symbols, and he said, "What the hell are those?" I realized then that if I'm going to talk to anybody else, I'll have to use the standard symbols, so I eventually gave up my own symbols."

— "Surely You're Joking, Mr. Feynman"

Standard ways of doing things make communication and interaction possible. It's often not the best way, but it's the common way.

And that makes it the best way.

The reasons one approach gets adopted over another are usually the stuff of history and politics.

Why do Britain and the U.S. still use imperial-derived units for distance and weight while most of the rest of the world uses the metric system?

I'm sure it's more about the powerful influence of the British empire during a certain period in world history than a rejection of the greater logical consistency of the metric system.

When something becomes a standard there is a high cost to changing that standard.

For better or worse we have certain standard notations in math and physics.

Yet notation is completely arbitrary.

You could come up with a thousand different ways to represent written mathematics.

Right or Wrong, Programming Standards Matter

Well, that brings us to programming.

Just as in math, computer programming has its standards...and its battles over standards.

One coder likes the Kernighan & Ritchie style of putting the opening brace at the end of the line, while the Microsoft standard for C# is to put the opening brace on its own line.
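For instance, the same trivial bit of C# in both styles (the method and variable names here are invented purely for illustration):

```csharp
// K&R style: opening brace at the end of the line
if (isReady) {
    ProcessOrder();
}

// Microsoft C# style: opening brace on its own line
if (isReady)
{
    ProcessOrder();
}
```

The compiler treats both identically; the only thing at stake is whether the next person reading the code recognizes the shape at a glance.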

Another C# coder prefers to use the var keyword instead of the full, explicit name of the type.

// ---------------------
// The var shortcut
// ---------------------
// A bad use of var: the type is not obvious from the line of code
var result = MethodThatReturnsSomething();
// A better use of var: the type is obvious from the right-hand side
var item = new SomeClassType();
// -------------------------------------------
// Explicit specification of the variable type
// -------------------------------------------
// Redundant, but without ambiguity
SomeClassType item2 = new SomeClassType();

Still another coder uses an "m_" prefix to denote private member variables, while someone else uses just a single underscore with camel casing.
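Side by side, the two conventions look like this (the class and field names are made up for the sake of the example):

```csharp
class OrderProcessor
{
    // "m_" prefix convention for private member variables
    private int m_orderCount;

    // Single-underscore, camel-case convention
    private int _retryLimit;
}
```

Neither choice changes what the program does; what matters is that a team picks one and everyone on it can read the other's code without pausing.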