Tuesday, December 31, 2013

If you're not already reading Don Marti, you might not know about the fascinating topics he covers on his irregularly-updated blog.

He spends most of his time writing deeply about the online advertising industry, a fitting subject given that it's probably the most important part of the high-tech industry nowadays, but he occasionally posts on other topics, too. His essays often pursue the man behind the curtain: the lesser-known ways in which big money and large organizations manipulate the world in the search for Better Advertising.

I had an interesting conversation with a California resident a few days ago. A door-to-door sales rep had just come by, and as soon as he left, she called the police. The non-emergency police number, but still.

It turns out that other people in the neighborhood had also called. We've gone from a society in which door-to-door sales was totally normal, even the subject of underground NSFW comics, to something that regular people call the police about.

Schneier's snake oilers were always trying to re-use one-time pads. You can't do that. Likewise, you can't collect and store PII—and it's all PII—and not have it come back to bite the people that it's about.

Non-creepy advertising isn't perfect, and doesn't solve all the customer/vendor match-up problems in the world. We have a lot of non-advertising tools for that. But it's a fallacy to say that just because non-creepy ads have a problem doing something, creepy ads are any better.

It's an example of the Internet making an industry less efficient. Which should be in an Economics paper somewhere, but really, we need to find honest work for software developers now stuck in adtech. And for real advertisers to quit the adtech-captured IAB, but you knew that.

Print continues to command an unreasonably large share of advertising budgets. Spending is down, but proportionally not as much as time.

With the trendiness and bubbliness of digital, we'd expect it to go the other way.

Something deeper than click fraud is going on here. Print is inherently more valuable because it's less trackable, and carries a better signal, and we keep seeing that in these Internet Trends reports.

Monday, December 30, 2013

Ever since the third bore beneath the Oakland hills opened in 1964, Caltrans has shifted the traffic direction of the center bore to accommodate the heaviest traffic. That has forced drivers heading in the opposite direction to funnel from four lanes into two to squeeze through a single bore.

I think the writers for the Chronicle, like perhaps many of us, thought that this massive project would never complete, as they kept referring to the three-tunnel arrangements in the present tense.

But that time is over! The fourth bore is here!

The new tunnel is, frankly, gorgeous. There's something about those overhanging ventilation fans that reminds me, say, of NCC-1701, or maybe the Warthog.

You can only get to the new tunnel, natch, coming East-to-West. Simply stay in the right lanes as you climb up toward the hilltop and you'll find yourself smoothly guided through this beautiful tunnel.

For now, it's all good news. The delays through the tunnels are eased; the aggravating lane-switching protocols are no longer needed; our tax monies went to some needed infrastructure and it's serving its purpose.

This is our second Stefan Feld game; about a year ago I got his Trajan, which I thought was nice but rather busy.

Castles of Burgundy is, I think, quite enjoyable.

Castles of Burgundy has a fair amount of randomness (i.e., luck) in it. Each of your overall turns is controlled by a roll of the dice, and the selection and timing of the pieces on the board is also a matter of luck. In any particular game, not all the pieces will be used, and they will not all arrive on the board at the same time.

A further element of luck is that you may choose one of a variety of possible game boards at the start of the game, and the particular board you choose may or may not fit well to the pieces that actually appear as possible selections when it's your turn.

What this means is that Castles of Burgundy is a game where tactical optimization is quite important. Rarely can you successfully plan many moves in advance.

You may start the game with a lucky opportunity to fetch two Sheep tiles, and think that your winning strategy will be to fill your animal area with Sheep. And then, you may go three rounds without ever seeing another Sheep tile! The game will punish your stubborn inflexibility and refusal to adjust your course.

Another interesting aspect of this game is that, to be successful, you must, must, must be willing to pay attention to the strategies your opponents are using and make the occasional defensive play, compromising your own Grand Strategy from time to time in order to deny your opponent a particularly crucial tile. If they are working to fill their 5-tile Buildings region, and are desperate for that Watchtower, grab it! If they're favoring animals, and you can grab those Chickens, make it so!

Another nice thing about Castles of Burgundy is that it isn't terribly fatiguing to learn. 15 minutes with the rulebook, and one rather slow practice game, and you'll have it down, at which point you can switch from learning mode into playing mode, which is much better.

And Castles of Burgundy has plenty of mental challenge, even given the dice and board randomness. You'll find that there's just enough opportunity for thinking and planning to keep you entertained, without the deep multi-turn planning and organization issues that can turn a game like Agricola or Le Havre into a 4 hour event. Castles of Burgundy cleverly foils those approaches without making you feel like it's dumbed down to the point of dullness.

Saturday, December 28, 2013

Holt, a journalist (the New Yorker, the New York Times) and amateur philosopher, takes as the subject for his book the following question: "Why is there something rather than nothing?"

One day I went to the local college library and checked out some impressive-looking tomes: Sartre's Being and Nothingness and Heidegger's Introduction to Metaphysics. It was in the opening pages of the latter book, with its promising title, that I was first confronted by the question Why is there something rather than nothing at all? I can still recall being bowled over by its starkness, its purity, its sheer power. Here was the super-ultimate why question, the one that loomed behind all the others that mankind had ever asked.

Holt undertakes to study this question from a variety of perspectives: physical, astronomical, theological, mathematical, linguistic, philosophical, literary. And hence he begins a rather meandering journey around the globe, tracking down mathematicians, astronomers, philosophers, linguists, physicists, religious thinkers, and authors to ask their opinion of The Question.

Interspersed with his accounts of the conversations he has with all these deep thinkers, Holt digs into various historical figures who considered these topics: Leibniz, Plato, Hegel, Sartre, Wittgenstein, and more.

Such an omnibus approach must necessarily provide such a varied buffet that every reader of Holt's book will, at some point, throw up their hands and say: "what rubbish! Why did he bother to include that?"

But, on the other hand, there is such a collection of fascinating characters, with such intriguing and well-presented theories and proposals, that every reader will also, at some point, gasp in recognition and say: "yes! Yes, of course! How clear is that!"

Still, at times, Holt's book becomes somewhat of a catalogue:

My purpose, though, is a serious one. What I am struggling to do is to see the world in the most abstract way possible. That, it seems to me, is the best remaining hope for puzzling out why the world exists at all. All of the thinkers I had already spoken to fell short of complete ontological generality. They saw the world under some limited aspect. To Richard Swinburne, it was a manifestation of divine will. To Alex Vilenkin, it was a runaway fluctuation in a quantum vacuum. To Roger Penrose, it was the expression of a Platonic mathematical essence. To John Leslie, it was an outcropping of timeless value. Each of these ways of seeing the world purported to yield the answer to why it exists. But none of these answers struck me as satisfactory. They didn't penetrate to the root of the existential mystery -- to what Aristotle, in his Metaphysics, called "being qua being." What does it mean to be?

Surely, you cannot criticize Holt for being over-timid. If you are going to take on a Big Question, how much bigger could a question get? Holt's book has no easy answers, but it excites the mind and motivates the reader to think and consider and contemplate ideas that are big and complex and worthy of study.

How much more can you hope from a book than that?

I enjoyed this book a lot, and happily passed it on to the next reader, who I hope will enjoy it as well.

Wednesday, December 25, 2013

Ikeuchi spent the better part of the last year building this incredible machine, a creation that isn’t so much a case mod as a full-blown diorama. It’s a deliriously detailed little world that just happens to take place in and around a functioning computer. It also redefines the idea of what it means to have a cluttered desk.

Ikeuchi, a designer by trade, likes to call it his “secret base.” Inspired by mecha anime like Gundam and Macross, every surface is packed with something to discover. Soldiers tend to intricate, forbidding machinery. Mechs await repair. The work seamlessly blends plastic toys, gizmo components, and scraps of other materials with the computer itself.

Ikeuchi is quoted as saying: "The most important thing is to think of the computer as a living thing."

My aunt names all her inanimate objects: her computer, her car, her coffee maker, her knife sharpener.

And certainly I'm known to have conversations with my 20-year-old mini-van with some regularity: "please start! please start! please!!"

We invest much of our own selves in the objects we create, both hardware and software, and Ikeuchi's magnificent artwork makes that vividly clear.

For many years there has been a growing awareness that something is rotten in the state of mathematics education. Studies have been commissioned, conferences assembled, and countless committees of teachers, textbook publishers, and educators (whatever they are) have been formed to "fix the problem." Quite apart from the self-serving interest paid to reform by the textbook industry (which profits from any minute political fluctuation by offering up "new" editions of their unreadable monstrosities), the entire reform movement has always missed the point. The mathematics curriculum doesn’t need to be reformed, it needs to be scrapped.

There is a certain simplicity and power that comes from not having a compiling vs. immediate mode as in regular Forth. Immediate words are just yellow and can be anywhere. Red words can also be anywhere; they simply give names to the current address in the instruction stream. There is no colon (:) word, and definitions don't necessarily end with a return (;); there need not even be a return (;) between consecutive definitions, so you can have multiple entry points and exit points in definitions! It's also common to have an early return rather than an else. It's a different way to program, even for a Forth.

Like all currency systems, Bitcoin comes with an implicit political agenda attached. Decisions we take about how to manage money, taxation, and the economy have consequences: by its consequences you may judge a finance system. Our current global system is pretty crap, but I submit that Bitcoin is worse.

This sort of thing can destroy our country. Trust is essential in our society. And if we can't trust either our government or the corporations that have intimate access into so much of our lives, society suffers. Study after study demonstrates the value of living in a high-trust society and the costs of living in a low-trust one.

So RSA’s defense is essentially that they didn’t undermine their customers’ security deliberately but only through bad judgment. That’s cold comfort for RSA customers—good security judgment is one of the main things one is looking for in a security company.

The final interesting finding is that IP enforcement does not seem to have any impact on piracy rates. This is no surprise to anyone who understands how the Internet works, but may be a surprise to policy-makers and corporations lobbying for anti-piracy measures. So when the UK banned the Pirate Bay in 2012, here is what happened to piracy rates:

See that? Nothing. Zip. Nada. In other words, anti-piracy measures are possibly the most ineffective policy instrument ever. Why? Because the Internet.

A pardon is normally granted only when the person is innocent of the offence and where a request has been made by someone with a vested interest, such as a family member. On this occasion, a pardon has been issued without either requirement being met.

As a part of the Oracle Marketing Cloud, we’ll be able to accelerate our vision of giving marketers across all industries the most advanced platform for orchestrating customer experiences over time and across channels. We couldn’t be more thrilled about what this means for our customers and employees.

Really, now: have you ever seen a more perfect juxtaposition of three words than "Oracle Marketing Cloud"?

Saturday, December 21, 2013

Despite this, the people of Qunu were undeterred. They were welcoming their beloved "Tata" home. Everywhere I went I was greeted with a smile, a handshake, and the words, "Molo, Sisi" or "Hello, Sister." There was a spirit of South African ubuntu -- or unity -- just as President Obama noted in his moving memorial speech, and I knew all would come together to honor a great man.

"One afternoon several of us had the same experience -- typesetting something, feeding the paper through the developer, only to find a single, beautifully typeset line: "cannot open file foobar" The grumbles were loud enough and in the presence of the right people, and a couple of days later the standard error file was born..."

There are Bitcoin mining installations in Hong Kong and Washington State, among other places, but Mr. Abiodun chose Iceland, where geothermal and hydroelectric energy are plentiful and cheap. And the arctic air is free and piped in to cool the machines, which often overheat when they are pushed to the outer limits of their computing capacity.

“I think the biggest NoSQL proponent of non-ACID has been historically a guy named Jeff Dean at Google, who’s responsible for, essentially, most to all of their database offerings. And he recently … wrote a system called Spanner,” Stonebraker explained. “Spanner is a pure ACID system. So Google is moving to ACID and I think the NoSQL market will move away from eventual consistency and toward ACID.”

But this store has earned a special reputation for selling quality “dumps,” data stolen from the magnetic stripe on the backs of credit and debit cards. Armed with that information, thieves can effectively clone the cards and use them in stores. If the dumps are from debit cards and the thieves also have access to the PINs for those cards, they can use the cloned cards at ATMs to pull cash out of the victim’s bank account.

Each interviewer has a limited amount of time to convince themselves that you will be a great hire, and they want to spend that time in the most efficient way. Therefore once you are in a technical interview, our interviewers will mostly focus on programming problems, not the resume, which we find to be the best use of your time.

So Pond is not email. Pond is forward secure, asynchronous messaging for the discerning. Pond messages are asynchronous, but are not a record; they expire automatically a week after they are received. Pond seeks to prevent leaking traffic information against everyone except a global passive attacker.

Let’s do this right and build a real Open Source secure asynchronous messaging solution that is more than snake oil and marketing gimmicks. TextSecure, the Open Source app we’ve been developing at Open WhisperSystems, uses the Axolotl ratchet, which we believe should represent the core of any secure asynchronous messaging solution today. We’ve worked with Cyanogen to transparently integrate the TextSecure protocol into CyanogenMod.

The Hoarder is a cautious creature, perpetually unsure of itself. The Hoarder lives in a world of perpetual cognitive dissonance: extremely proud of his work, but so unsure of himself that he won’t let anyone see it if it can be helped.

So he hides his code. Carefully avoiding check-ins until the last possible minute, when he crams it all into one monolithic commit and hopes no one can trace the changes back to him. His greatest fear is the dreaded merge conflict, where the risk of exposure is greatest.

See all the crazy angles in the following shots. Nothing is by accident; each perspective was chosen purposefully to help tell the story visually, whether to show a character's point of view or to establish a character's dominance through a certain interplay. Close-ups show what a character is thinking or feeling, over-the-shoulder shots place the audience right into the conversation, and the whole time there are shapes and lines in the foreground and background that lead the viewer's eyes to where they need to look.

Google is a highly collaborative workplace, so the open floor plan suits our engineering process. Project teams composed of Software Engineers (SWEs), Software Engineers in Test (SETs), and Test Engineers (TEs) all sit near each other or in large rooms together. The test-focused engineers are involved in every step of the development process, so it’s critical for them to sit with the product developers. This keeps the lines of communication open.

The office space is far from rigid, and teams often rearrange desks to suit their preferences. The facilities team recently finished renovating a new floor in the New York City office, and after a day of engineering debates on optimal arrangements and white board diagrams, the floor was completely transformed.

Besides the main office areas, there are lounge areas to which Googlers go for a change of scenery or a little peace and quiet. If you are trying to avoid becoming a casualty of The Great Foam Dart War, lounges are a great place to hide.

Everyone collects utilities, and most folks have a list of a few that they feel are indispensable. Here's mine. Each has a distinct purpose, and I probably touch each at least a few times a week. For me, "util" means utilitarian and it means don't clutter my tray. If it saves me time, and seamlessly integrates with my life, it's the bomb.

This style of communication, which I see quite often in security topics, makes it very easy for newcomers to feel utterly helpless. You are told that a particular practice is bad security-wise, but do not know how to improve, and it’s too abstract to figure it out on your own. I happen to know that strncat is safer than strcat because the latter can cause buffer overflows, and otherwise the manpage of strcat is quite vocal in explaining this. I have only a vague idea of how the block size of MACs influences timing channels. In other words, if we help developers discover what is wrong, it is most vital that we show them a clear path towards improvement.

It seems perverse that in our digital society, where we are freer than ever to work where we like, with whom we like, on what we like, that our communities are more gender biased than ever. The most egalitarian societies, in the gender sense, are totalitarian states. This is surely a sign that we're doing something profoundly wrong when it comes to large-scale organization. The long-standing accusation of sexism may be accurate data yet it's done nothing to improve things, and instead widens the disagreeable, and enduring, split between the genders.

So what is the real purpose of the massive $10.9 million surveillance system? The records we examined show that the DAC is an open-ended project that would create a surveillance system that could watch the entire city and is designed to easily incorporate new high-tech features in the future. And one of the uses that has piqued the interest of city staffers is the deployment of the DAC to track political protesters and monitor large demonstrations.

I've only scratched the surface. There is too much to learn, and not enough time.

Motivation Meter: Who doesn't want to spend New Year's in Shreveport? OK, so neither team spent the summer running gassers with the goal of playing in this game – but it beats being home watching other teams play. Both teams should be moderately motivated.

assessment of watchability:

Sun Bowl (24): UCLA vs. Virginia Tech, Dec. 31.

Watchability (scale of 1-5): 3.5. Much like the Advocare/Independence/Weed Eater Bowl above, you have to respect the longevity of a mid-level bowl like this in an out-of-the-way locale like El Paso. It has carved out a niche in the college football landscape. Long live the Sun Bowl. Take time to watch it.

Belk Bowl (15): North Carolina vs. Cincinnati, Dec. 28.

Watchability (scale of 1-5): 2.5. This game gets the garden-spot, mid-afternoon time slot on a Saturday. During the regular season that means a major SEC showdown on CBS; during bowl season that means the fifth-place team in the ACC Coastal Division against the third-place team in the AAC. Oh well.

and, with a straight face this time, games not to be missed:

Rose Bowl (31): Stanford vs. Michigan State, Jan. 1.

Watchability (scale of 1-5): 5. In terms of pure aesthetics, it's the most watchable game of the year. Every year. It wins in terms of tradition, too. And this year's matchup is the second-best bowl game, for The Dash's money.

Orange Bowl (35): Ohio State vs. Clemson, Jan. 3.

Watchability (scale of 1-5): 5. Urban Meyer coaches again in the state of Florida. How have he and his team handled the devastating loss to Michigan State in the Big Ten title game? And Clemson returns to the scene of its January 2012 bowl crime, where it surrendered 70 to West Virginia.

Motivation Meter: Buckeyes talked a good game after the loss to the Spartans about being ready and finishing the season right in Miami, but that’s far easier said than done given what was lost. For Clemson, the motivation should be very high – especially after another damning loss to South Carolina.

Best Orange Bowl Ever: Miami 31, Nebraska 30 in 1984. A massive upset that spawned the Miami dynasty and derailed one of the greatest teams in college football history. Cornhuskers scored late and Tom Osborne gamely opted for a two-point conversion, when playing to tie (in the pre-overtime days) likely would have cemented the national title. When Turner Gill’s pass fell incomplete, we had a new world order in college football for the next decade.

Well, at least it's something to do when you suddenly find yourself with three hours while everyone else is at the mall, on airport runs, or asleep.

Friday, December 20, 2013

I've always felt "young at heart." I stubbornly continue doing things like playing soccer, riding my bike to work, going backpacking and hiking, and so forth.

One of the things that's paramount in my mind, as I age, is to retain mobility. I've got close family members who aren't mobile, and I can see that it really impacts their quality of life. So being able to get around freely is extremely important to me.

Recently, our beautiful 6 1/2 year old Labrador, the best companion a family could ever ask for, has suddenly developed a severe case of arthritis or some other sort of joint damage. In hindsight, this has been coming on for a while, but to us it seemed very sudden. She had recovered from all her previous injuries quite quickly, but this time it's been a stubborn several weeks with no improvement.

She's always been a fetch-and-retrieve dog, so the restrictions on carrying balls, sticks, pine cones, and other toys have come as a big change in her lifestyle. If you had seen her 5 years ago, racing through a meadow at top speed, eyes focused on a flying frisbee, snatching it out of the air at the last moment, and then watched her now, limping down the hall, barely able to make it 100 yards from the house, it would break your heart.

Besides the mobility, the other thing that terrifies me is losing my eyesight, as I'm so incredibly dependent on my vision, not only for my employment, but also for my general lifestyle.

So I recently made my regular trip to the eye doctor, worrying about vision problems that have been affecting family members recently.

Happily, after a thorough, if exhausting, two hours at the doctor (did you know they make an MRI of your eyeball now?!), he pronounced himself thoroughly pleased: "perfect visual field, no blind spots, no evidence of glaucoma, no retinal tears or scars, no evidence of tumors, no cataracts, optic nerve looks healthy and strong, eyesight remains stable and prescription has changed just slightly."

I've got lots of things to do, lots left on my list, so I'm trying to keep healthy and active and stay busy.

This is the sort of academic research that there should be more of: with bold eyes, they take a fresh look at some precepts that were held to be Truth, run them through the maw of Hard Data, turn them on their heads, offer some suggestions as to why the surprising results might actually hold, and point to areas that have been under-considered.

So, what is the Received Wisdom that they investigate? Well, they consider the 40-year-old notion of Software Reliability Models (SRM), and the younger, but still widely known, notion of Vulnerability Discovery Models (VDM), both of which make statements about the expected behavior of a body of software over time.

The implications of such a VDM are significant for software security. It would suggest, for example, that once the rate of vulnerability discovery was sufficiently small, the software is "safe" and needs little attention. It also suggests that software modules or components that have stood the "test of time" are appropriate candidates for reuse in other software systems. If this VDM model is wrong, these implications will be false and may have undesirable consequences for software security.

In other words, how do we assess risk when we are building software? If we reuse software that has a long pedigree, should we trust that reused code more, less, or about the same as we trust new code?

For other measures, such as the cost and speed of development, the reuse of existing code has many well-known benefits, but here the authors are specifically considering the implications for security. As they say:

It seems reasonable, then, to presume that users of software are at their most vulnerable, with software suffering from the most serious latent vulnerabilities, immediately after a new release. That is, we would expect attackers (and legitimate security researchers) who are looking for bugs to exploit to have the easiest time of it early in the life cycle.

But after crunching lots of numbers, they find out that this presumption does not hold:

In fact, new software overwhelmingly enjoys a honeymoon from attack for a period after it is released. The time between release and the first 0-day vulnerability in a given software release tends to be markedly longer than the interval between the first and the second vulnerability discovered, which in turn tends to be longer than the time between the second and the third.

Furthermore, this effect seems pervasive:

Remarkably, positive honeymoons occur across our entire dataset for all classes of software and across the entire period under analysis. The honeymoon effect is strong whether the software is open- or closed-source, whether it is an OS, web client, server, text processor, or something else, and regardless of the year in which the release occurred.

So, why might this be?

The researchers have several ideas:

One possibility is that a second vulnerability might be of similar type of the first, so that finding it is facilitated by knowledge derived from finding the first one. A second possibility is that the methodology or tools developed to find the first vulnerability lowers the effort required to find a subsequent one. A third possible cause might be that a discovered vulnerability would signal weakness to other attackers (i.e., blood in the water), causing them to focus more attention on that area.

Basically, time is (somewhat) more on the side of the attackers than the defenders here.

From my own experience, I'd like to offer a few additional observations that are generally in agreement with the authors' findings, although from a slightly different perspective.

Firstly, bug fixers often fail, when fixing a particular bug, to consider whether the same (or similar) bugs might exist elsewhere in the code base. I often refer to this as "widening the bug", and it's an important step that only the most expert and experienced engineers will take. Plus, it takes time, which is all too often scarce. Attackers, though, are well known users of this technique. When an attack of a certain type is known, it is common to see attackers attempt the same attack with slight adjustments over and over, looking for other places where the same mistake was made.

Secondly, fixing old code is just plain scary. In practice, you don't have as many test suites for the old code; you are unsure of how subtle changes in the behavior of old code will affect all of the various places where it's used; the original authors of that old code may have departed, or may have forgotten how it worked or why they did it that way. Developers may well be aware of bugs in older code, but simply decide it's too expensive or too risky to fix it, and hence allow a bug to remain present in older code even though they know it exists.

Lastly, that "blood in the water" sentiment is real, even though it's hard to interpret. The reluctance by the manufacturer to advertise information about known vulnerabilities, often labeled "security by obscurity," is in many ways an easy emotion to comprehend: "If we tell people there's a security bug in that old release, aren't we just inviting the bad guys to attack it?" Although security researchers have done excellent work to educate us all about the problems with vulnerability secrecy, it's hard to educate away an instinct.

Overall, this is a fascinating paper, and I'm glad I stumbled across it, as there's a lot to think about here. I'm certainly not going to give up my decades-long approach of reusing software; it has served me well. And I'm far from persuaded by some of the alternative solutions proposed by the authors:

research into alternative architectures or execution models which focuses on properties extrinsic to software, such as automated diversity, redundant execution, software design diversity might be used to extend the honeymoon period of newly released software, or even give old software a second honeymoon.

Some of these ideas seem quite valid to me. For example, Address Space Layout Randomization is quite clever, and indeed is a very good technique for making the life of the attacker much harder. But "software design diversity"? Harumph, I say.

I suspect that, in this area, there is no easy answer. Software security is one of the most challenging intellectual efforts of our time, with many complexities to consider.

For now, I'm pleased to have had my eyes opened by the paper, and that's a good thing to say about a research project. Thanks much to the authors for sharing their intriguing work.

Around Jingletown -- Respect (2007). This is the view, more or less, from my office. Just off the port (left) side of the Respect, you can see, canted at about a 20 degree angle, the jack-up legs of one of the sunken barges. This barge still remains to be recovered; hopefully now that the Respect is free of the estuary floor, the barge will follow suit soon.

It's been a busy several months here on the Oakland-Alameda estuary, with a lot of successful cleanup performed.

Although it suffers somewhat from being overly long, there is still a lot of wisdom in this essay, and it's certainly worth the time to read.

I suppose I was hooked when he started with perhaps my favorite topic: debugging.

Debugging is the cornerstone of being a programmer. The first meaning of the verb to debug is to remove errors, but the meaning that really matters is to see into the execution of a program by examining it. A programmer that cannot debug effectively is blind.

I like that much of Read's advice is very practical. For example, when talking about debugging, he notes that you can use a debugger, but that there are many other ways to debug: reading the code, reading log files and traces, inserting print statements, adding assertions to the code, etc.

Since debugging is one of those activities where the goal is not to win a beauty prize but to get the job done, a practical approach is the only sensible one, and I knew right away that Read spoke from experience.
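To make those alternatives concrete, here's a minimal Python sketch (the function and its messages are my own invention, not Read's) showing three of the techniques side by side: an assertion, a log trace, and the humble print statement:

```python
import logging

# Enable DEBUG-level output so the logging trace below is visible.
logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")

def running_mean(values):
    # Assertion: fail fast, and loudly, on a violated precondition.
    assert values, "running_mean called with an empty list"
    total = 0
    for i, v in enumerate(values):
        total += v
        # Log trace: a record of execution you can read after the fact.
        logging.debug("step %d: value=%s, running total=%s", i, v, total)
    result = total / len(values)
    # Print statement: the oldest debugging tool of all.
    print(f"mean of {values} is {result}")
    return result
```

None of these requires a debugger; each is a different way to "see into the execution of a program by examining it."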

It's also interesting to see how Read approaches a problem I've seen many a time: fear of making things worse:

Some beginners fear debugging when it requires modifying code. This is understandable---it is a little like exploratory surgery. But you have to learn to poke at the code and make it jump; you have to learn to experiment on it, and understand that nothing that you temporarily do to it will make it worse. If you feel this fear, seek out a mentor---we lose a lot of good programmers at the delicate onset of their learning to this fear.

Or, as my old friend Tom always says, "We call it software because you can change it."

Of course, some bugs are simply hard, and it's a measure of Read's real-world experience that he's seen those before, too:

If you can't reproduce it, set a trap for it by building a logging system, a special one if you have to, that can log what you guess you need when it really does occur. Resign yourself to the fact that if the bug only occurs in production and not at your whim, this may be a long process. The hints that you get from the log may not provide the solution but may give you enough information to improve the logging. The improved logging system may take a long time to be put into production. Then, you have to wait for the bug to reoccur to get more information. This cycle can go on for some time.

How true this is! I have several bugs, important ones that I'm enormously concerned about, that I'm deep into this "cycle" with, myself. A few of them have been plaguing me for years. Sadly, one of the only ways you know you're working on an important body of software is when you encounter situations like these.

If it were an easy bug, we'd have fixed it by now. The hard bugs are the only ones left.
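Read's "logging trap" can be sketched in a few lines of Python; this is my own illustration, not code from his essay. The idea is to keep a bounded buffer of recent diagnostic records in memory, and dump them only when the elusive condition finally fires:

```python
import collections

class BugTrap:
    """Keep the last N diagnostic records and dump them when
    the hard-to-reproduce condition finally occurs."""

    def __init__(self, capacity=1000):
        # A bounded deque: old records fall off the far end,
        # so the trap can run in production indefinitely.
        self.records = collections.deque(maxlen=capacity)

    def note(self, **info):
        # Cheap enough to call on every interesting event.
        self.records.append(info)

    def sprung(self, reason):
        # The bug happened: snapshot everything seen leading up to it.
        return {"reason": reason, "recent": list(self.records)}
```

Each pass through the cycle Read describes amounts to widening what `note()` records until the snapshot finally contains the clue you need.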

Read's essay, like any deeply personal essay, is uneven. He spends a lot of time covering topics such as:

How to Recognize When to Go Home

How to Stay Motivated

How to Disagree Honestly and Get Away with It

How to Tell People Things They Don't Want to Hear

How to Get a Promotion

and How to Deal with Organizational Chaos

which indicate that he has spent quite a bit of time in the trenches of the modern software development world, with its pressure, emotion, and chaos.

But Read will win no friends for making a suggestion such as:

commonly, some stressed-out person who does not have the magic power will come into your cube and tell you to do something stupid. If you are really sure that it is stupid, it is best to smile and nod until they go away and then carry on doing what you know is best for the company.

This, of course, is exactly why software engineers have the prima-donna reputation that they do.

And it's not clear to me if Read even realizes what he's saying. There's no hint that this is tongue-in-cheek, or that he understands the implications of his suggested approach to the cacophony of opinions that characterizes software development, and how it risks branding the practitioner as one of Those Guys You Never Want To Work With Ever Again.

Yet it is also valuable advice, if you understand how and when to use it properly.

I suspect that Read and I would find many more topics on which to Disagree Honestly. For example, I don't think he gives anywhere near enough credit to the writing of tests, which is the one technique that I've spent the most time developing in my own repertoire over the last decade, and which I think remains the most underused technique among the stellar programmers that I know.

Read is at least honest about his weaknesses in this area, noting, for example, that

I was late to appreciate the benefits of source code control systems but now I wouldn't live without one even on a one-person project. Generally they are necessary when you have a team working on the same code base. However, they have another great advantage: they encourage thinking about the code as a growing, organic system. Since each change is marked as a new revision with a new name or number, one begins to think of the software as a visibly progressive series of improvements. I think this is especially useful for beginners.

This is all well and good, but it only begins to touch the power of what you can do with a source code control system. Code archaeology, project branching, configuration tracking, bug bisection, system organization, code security, build automation; all these techniques and many more become possible once you start to learn how to really use your source code control tools effectively.
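Bug bisection, for instance, is just binary search applied to your revision history. This sketch is my own; tools like git's bisect command automate the same idea against a real repository:

```python
def first_bad_revision(revisions, is_bad):
    """Binary-search an ordered revision history for the first
    revision at which a bug appears. Assumes the history is
    'good' up to some point and 'bad' from then on."""
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid        # the bug is here or earlier
        else:
            lo = mid + 1    # the bug was introduced later
    return revisions[lo]
```

With a history of n revisions, roughly log2(n) build-and-test cycles pinpoint the guilty change.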

Similarly, one of the breakthrough moments for me was a point where I was, frankly, stumped on a subtle and elusive problem, battering my head against the same walls. A wise programmer of my acquaintance, when I approached him for advice, took a holistic view, asking questions like:

From the historical records of your build automation system, can you tell when the problem began to appear? what platforms and configurations it appears most commonly on? what patterns appear in the problem occurrences?

Can you correlate the problem to performance behaviors using your performance tools?

How has the configuration of your test systems changed since the code was written?

What shows up when you search the bug tracking database for that message?

What is the history of that bit of code? When was it last modified, why, and what bugs were being fixed? Do you have tests for those bugs? Can your coverage tools tell you if they're exercising that code?

A few days later, after writing some tools to crunch the data, we spotted the trigger. In retrospect, of course, it was obvious what the problem was, but in the heat of the moment it was a revelation to me to understand how this programmer, more expert than I, had developed ways to break through roadblocks by continually searching for new evidence, developing and disproving theories, and using as many tools as possible to improve the power and accuracy of his evaluation. There's a great Sherlock Holmes quote about this use of tools:

Now the skillful workman is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order.

But Read's essay is over a decade old; I'm sure that his views have changed during the years, and perhaps if he and I were to meet now, we'd find that we agree on more things, and disagree on fewer.

If you're considering a career as a programmer, if you know a programmer and find that you must spend a lot of time with her or him, or if you're just curious about what goes through the mind of a programmer, Read's essay is well worth your time.

And if you're already a programmer, and want to become a better one, I'm confident that you'll find lots of ideas to chew on in Read's essay; self-improvement never stops, and there is always more to learn.

It was a surreal scene for onlookers standing on the shore at Shine Tidelands State Park, many of whom waited all day to see the lift that, by engineering standards, was no big deal considering the DB General's 700-ton lift capacity.

Once the 280-foot-long, 70-foot-wide and 40-foot-tall truss was in the air -- almost a million pounds of steel hanging there like a toy -- tugs gently pulled the DB General directly backward. Two other tugs then nudged another barge into perfect alignment under the truss.

Although the General was once the largest barge crane on the West Coast, it is now of course dwarfed by the Left Coast Lifter.

Sunday, December 15, 2013

Maria Semple's Where'd You Go, Bernadette? is sort of the fiction equivalent of Beaujolais Nouveau, strawberries at Wimbledon, or a bouquet of roses: it is not meant to be placed on a shelf, stored for later enjoyment; it needs to be consumed and enjoyed now, or it will spoil.

This is not to say that Where'd You Go, Bernadette is an inferior pleasure. It is a comic delight, bubbly and energetic, vivid and captivating. It's just to note that a book which is so thoroughly stylish and au courant runs the risk of being simply baffling to readers just a few years later.

But underneath all the pop culture references is an engaging story populated with a delightful cast of characters, told with a light touch, a wink, and a smile.

I suspect this story will particularly appeal to the bit jockeys of the world, those who can instantly appreciate a passage such as

Mr. Branch's administrator knocked and asked if Mr. Branch had reviewed a code fix. Mr. Branch looked at his cell phone and shuddered. Apparently, forty-five emails had come in while we were talking. He said, "If Bernadette doesn't kill me, Reply All will." He scrolled through the emails and barked some code talk about submitting a change list, which his administrator furiously copied down before dashing out.

But even if you've never submitted a change list in your life, I think you'll enjoy Where'd You Go, Bernadette?. The holidays are busy times for everyone, but if you have a bit of downtime, and you're looking for something to brighten up your afternoon, and drive away those winter grays, give Where'd You Go, Bernadette? a try.

Saturday, December 14, 2013

First, you'll download the game and install it. When you start it up, you'll gape at the beauty of the screen.

You'll work your way through the tutorials, getting an understanding for how the interface works, and what the various screens and dialogs are trying to tell you.

All too soon, you'll be finished with the tutorial, and you'll be invited to play a game. "Pick a country!", the computer says, "and let's get going."

So you do.

And you are immediately lost, baffled, overwhelmed.

But you'll persevere. You'll take some time to read the manual, which Paradox have so thoughtfully made available for all.

You'll spend time reading about the game on the Internet, considering the various advice, strategies, and tips that people discuss.

And you'll keep trying to play.

After a while, you'll learn how to pause the game, and how to slow the game speed way down, so that you can see what's happening.

And at some point, when you least expect it, you'll find that you suddenly can't stop thinking about the game.

"France needs to ally with Aragon," you'll think to yourself, "so what would make that happen?" Should you investigate a Royal Marriage? Send a diplomat to improve relations? Offer a bribe? Eventually, you learn enough about the game to discern that Aragon's reluctance is due to the active war that France is prosecuting with England.

And so you wonder if suing for peace with England, to your north, will actually aid relations elsewhere on the continent. How will you manage to unite the various dukedoms, and form a unified France?

Meanwhile, you've got troops to lead, an economy to run, diplomatic enquiries from all fronts, and a trading empire to build.

Each time you play, you'll realize how crude your previous attempts were, and how horribly you've mis-managed affairs, and so you'll start anew again, and again, and again.

Before you know it, Steam will tell you that you've been playing a total of 74 hours so far, and you still feel like such a rank amateur.

Will you enjoy it? I don't know. This is a vast and complex game, like nothing you've played before.

But if you think you might enjoy it, I encourage you to give it a try, as the game is truly a work of art, and enormously repays the time you devote to it.

Tuesday, December 10, 2013

I'm not sure if it's because the days are cold and dark, so everyone is staying inside and reading and writing, or because there is some sort of harmonic convergence, or perhaps I just got lucky, but I've been reading some very interesting longer-form CS writing recently.

Bueno's e-book is a clear, compact, well-organized treatment of performance optimization. The title is a riff on Don Knuth's 40-year-old tongue-in-cheek sound bite ("premature optimization is the root of all evil"), which sadly is all that many computer professionals ever learn about performance work.

If you want to go further, start with Bueno's superb book: he points you in the right direction, saves you from several basic pitfalls, arms you with a collection of useful tools and techniques, and points at resources to help you move further once you've grown comfortable with the basics. I particularly like the fact that Bueno links to some of Richard Cook's work, as I think Cook has some fascinating ideas and deserves more attention.

And I just love this hard-won advice:

Your instrumentation should cover the important use cases in production. Make all the measurements you want in the lab, but nothing substitutes continuous real-world data. Think about it this way: optimizing based on measurements you take in a lab environment is itself a falsifiable theory, ie, that lab conditions are sufficiently similar to production. The only way to test that theory is to collect measurements in production too.
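As one illustration of what "continuous real-world data" can mean (my own sketch, not code from Bueno's book), here is a decorator that keeps always-on call counts and cumulative latency for a function:

```python
import functools
import time

def timed(metrics):
    """Decorator factory: record call counts and cumulative wall-clock
    time for each wrapped function into the shared `metrics` dict."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Update the counters even when the call raises.
                entry = metrics.setdefault(fn.__name__,
                                           {"calls": 0, "seconds": 0.0})
                entry["calls"] += 1
                entry["seconds"] += time.perf_counter() - start
        return wrapper
    return decorator
```

Instrumentation this cheap can stay enabled in production, so the measurements describe real traffic rather than lab conditions.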

I've read, oh, approximately 8 trillion articles about Bitcoin; these days, writing a "here, let me explain Bitcoin to you" article seems to be one of the rites of passage.

Most of them are rubbish.

But Nielsen's exposition is clear, nicely paced, compactly worded without being dense, and somehow hits just the right level of explanation for me. As I read it, I was reminded of another great document that, while wildly different, is still very similar in approach and technique: Bill Bryant's Designing an Authentication System: a Dialogue in Four Scenes. In both documents, the approach is to start with a solution that seems like it should work, identify the problems with that solution, and evolve from there:

My strategy in the post is to build Bitcoin up in stages. I’ll begin by explaining a very simple digital currency, based on ideas that are almost obvious. We’ll call that currency Infocoin, to distinguish it from Bitcoin. Of course, our first version of Infocoin will have many deficiencies, and so we’ll go through several iterations of Infocoin, with each iteration introducing just one or two simple new ideas. After several such iterations, we’ll arrive at the full Bitcoin protocol. We will have reinvented Bitcoin!

This strategy is slower than if I explained the entire Bitcoin protocol in one shot. But while you can understand the mechanics of Bitcoin through such a one-shot explanation, it would be difficult to understand why Bitcoin is designed the way it is. The advantage of the slower iterative explanation is that it gives us a much sharper understanding of each element of Bitcoin.
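The very first Infocoin-style iteration can even be sketched in a dozen lines of Python. This toy (mine, not Nielsen's) shows only the hash-chaining idea -- each record embeds the hash of its predecessor, so tampering with any earlier transaction invalidates every later hash -- and omits signatures, proof-of-work, and everything else that makes Bitcoin Bitcoin:

```python
import hashlib
import json

def chain(transactions):
    """Link transaction strings into a tamper-evident chain."""
    prev = "0" * 64  # sentinel hash for the first record
    blocks = []
    for tx in transactions:
        record = {"tx": tx, "prev": prev}
        # Hash a canonical serialization of the whole record.
        prev = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        blocks.append({**record, "hash": prev})
    return blocks
```

Nielsen's essay then walks through exactly why a scheme this naive fails, and what each successive fix buys you.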

Langner has devoted 6 years of his life to studying, analyzing, deconstructing, and, most importantly, explaining Stuxnet, the most sophisticated and fascinating piece of malware yet unleashed upon the world.

Although Langner is not the most natural of writers (in his defense, I suspect English is not his first language), he more than makes up for his dry prose with the amazing depth of detail and knowledge that he includes in this work.

The attack continues until the attackers decide that enough is enough, based on monitoring centrifuge status, most likely vibration sensors, which suggests a mission abort before the matter hits the fan. If the idea was catastrophic destruction, one would simply have to sit and wait. But causing a solidification of process gas would have resulted in simultaneous destruction of hundreds of centrifuges per infected controller. While at first glance this may sound like a goal worthwhile achieving, it would also have blown cover since its cause would have been detected fairly easily by Iranian engineers in post mortem analysis. The implementation of the attack with its extremely close monitoring of pressures and centrifuge status suggests that the attackers instead took great care to avoid catastrophic damage. The intent of the overpressure attack was more likely to increase rotor stress, thereby causing rotors to break early – but not necessarily during the attack run.

If there is something you want to know about Stuxnet, you will find it here.

Much of what Langner writes remains controversial, and certainly this story is far from complete. But Langner has done the world a tremendous service by sharing his deep and broad knowledge of Stuxnet widely and openly. Read it. Think about it. Understand just that little bit more about the strange new world we occupy.

So if, wherever you should be, the days are short and the nights are cold, pull up a comfy chair, grab your reading device, and sink your brain into some deep thoughts. Enjoy!

I'm no expert, but it seems to me that Brazil, Argentina, and Colombia must count themselves pleased, while Germany, Spain, and Uruguay have to steel themselves for stiff competition. Switzerland and Belgium fell somewhere in the middle, I think.

Both England and the U.S. were destined to have hard schedules, and certainly both now have their work cut out for them.

Masters of Doom is the well-written and engaging story of two guys named John (John Carmack and John Romero), who came together and in a few brief and intense months completely re-invented the entire computer gaming world, inspiring all sorts of critical innovations along the way, and then burst apart, in a collision event that captivated the (relatively) large world centered around them.

I knew much of the outlines of the story of Carmack and Romero, but there were just oodles and oodles of details that I had never been aware of, and Masters of Doom was full of surprises and things that fascinated me:

I had absolutely no idea that Carmack and Romero first met and started working together in Shreveport, Louisiana. Masters of Doom is polite and generous about this, but, really: Shreveport? Even in 1989 there were identifiable centers of software activity: Boston; San Francisco; North Carolina's Research Triangle; Redmond, Washington. But Shreveport? How unlikely it was that even one of the Johns would end up in Shreveport, and how spectacularly unlikely it was that the two of them ended up there at the same time. The description of this completely accidental happenstance was, by itself, worth reading the book for. Of course, once they all got together, it wasn't long before they created the classic hacker's house, in a way that could happen anywhere, whether it was in Shreveport, Louisiana or on the face of the moon:

Carmack, Lane, Jay, and an Apple II programmer at Softdisk named Jason Blochowiak had scored an enviable coup not long before when they found a four-bedroom house for rent right along these shores. Jay had bought a cheap boat, which they docked there and used for frequent outings of kneeboarding and skiing. In the large backyard was a swimming pool and a barbecue, with which Jay, a cooking enthusiast, grilled up Flintstonian slabs of ribs. The house itself had plenty of windows looking out on the scene, a large living room, even a big tiled bathroom with a deep earth-tone-tiled Jacuzzi tub. Jay had installed a beer keg in the fridge. It was a perfect place to make games.

I never knew the origin of the name "id Software". I thought it was some sort of psychology reference, but in fact it was a shortened and merged amalgam of the company originally formed by John Romero and Lane Roathe:

While in New Hampshire, the two even decided to merge their one-man-band companies -- Romero's Capitol Ideas and Lane's Blue Mountain Micro -- under one roof as Ideas from the Deep.

...

When the guys christened their company, they shortened the Ideas from the Deep initialism and simply called themselves id, for "in demand". They also didn't mind that, as Tom pointed out, id has another meaning: "the part of the brain that behaves by the pleasure principle."

I was fascinated by the description of the special synergy that Romero and Carmack found, each one's strengths complementing the other's. You can't plan for this sort of thing; you can't cause it to be; it just happens, or it doesn't.

Romero and Carmack were now in a perfect groove, with Carmack improving the new Keen engine -- the code that made the graphics -- while Romero worked on the editor and tools -- the software used to create the game elements. Nothing could distract them.

...

Carmack and Romero had developed another aspect of their collaboration. Though Carmack was gifted at creating game graphics, he had little interest in keeping up with the gaming world. He was never a player, really, he only made the games, just as he was the Dungeon Master but not a player of D&D. Romero, by contrast, kept up with everything, all the new games and developers.

...

Romero immediately saw the potential in Carmack's technology, potential that Carmack was, by his own admission, not capable of envisioning himself. And because Romero was a programmer, he could speak to Carmack in a language he understood, translating his own artistic vision into the code Carmack would employ to help bring it to life.

...

He played around with rooms that flashed strobe light, with walls that soared and receded at different heights. Every decision he made was based on how he could best show off Carmack's technology. Carmack couldn't have been happier; what more could someone want, after all, than to be both appreciated and celebrated? Romero was just as energized; with Carmack's innovations, he too could reach new heights.

I loved the bit of back-story about how Carmack and Romero found themselves inventing the approach of having an extensible gaming engine, with level editing tools that allowed others to create new levels and build entire new games:

The Right Thing was programming Doom in such a way that willing players could more easily create something like this: StarDoom, a modification, or mod, of their original game.

...

For Doom, Carmack organized the data so players could replace sound and graphics in a nondestructive manner. He created a subsystem that separated the media data, called WADs (an acronym suggested by Tom Hall, it stood for Where's All the Data?), from the main program. Every time someone booted up the game, the program would look for the WAD file of sounds and images to load in. This way, someone could simply point the main program to a different WAD without damaging the original contents. Carmack would also upload the source code for the Doom level-editing and utilities program so that the hackers could have the proper tools with which to create new stuff for the game.

The best, and most important, part of the book, in my opinion, is the long and detailed retelling of the split of the Johns, as the intense worldwide pressure to follow-up Doom with Quake drove immense tension between them, culminating in the days following the release of Quake to the world.

The chasm between Carmack and Romero was too wide. Both of them had their views of what it meant to make games and how games should be made. Carmack thought Romero had lost touch with being a programmer. Romero thought Carmack had lost touch as a gamer. Carmack wanted to stay small, Romero wanted to get big. The two visions that had once forged this company were irreparably tearing it apart.

There are many other fine parts to this book. I was amazed how the hours just flew by, reading about all the vivid personalities, wild escapades, and spurts of undeniably brilliant creativity.

Perhaps surprisingly, the world of software engineering has been fairly free of the "celebrity biography" genre of reporting. Most of the external coverage of the software industry has focused on the CEOs and entrepreneurs (Bill Gates, Larry Ellison, Michael Dell, Steve Jobs, etc.). I suppose these people are interesting to many, although to me their stories always feel dull.

I think there is an important reason for this: nearly all world-changing software isn't written by a single person, it's written by a team of people, working together, bringing all sorts of different talents, personalities, and approaches to the project.

They were casually talking about how a large team, in the range of 1000 people, is required to create games with such minute detail, and when the question popped up about GTA 5 team’s size, Benzies revealed that it’s much, much more than 1000.

Yes, you read that right: more than 1000 people worked together on GTA V. According to Masters of Doom, the Wolfenstein 3D team was rather smaller:

They had finally fired Jason, narrowing the group to Carmack, Romero, Adrian, and Tom.

Perhaps there's no going back to the days when Carmack and Romero could sit side-by-side in a single room, building breakthrough games and tools like never before.

But just because things are different now, doesn't mean they are worse. I wasn't programming computers in Tony Hoare's day, but after 3 1/2 decades of programming, I've seen lots of approaches, lots of techniques, and lots of personalities.

I've been part of small teams, and part of immense efforts. I've seen solitary geniuses, and engaging, extroverted, and inspirational team builders.

And it's all great.

May there be many more years of programming ahead of us all.

And may there be more books like Masters of Doom, to share the excitement, thrills, and drama with us all.

The group you are in determines where you play, which involves both travel and climate considerations:

The seeded team in Group H will have a relatively easy first round schedule with matches in the milder conditions of Belo Horizonte, Rio de Janeiro and Sao Paulo.

But the seeds in Group G will play in the intense heat of northeastern cities Fortaleza, Natal, Salvador or Recife.

The travel issues are substantial, and add to the controversy:

In such a vast country, there was an early plan to revert to the arrangements once used in World Cups where teams were based in one region instead of travelling all over the country but Brazilian organisers did not want one region to stage all of Brazil's first round games.

As that was not politically expedient, FIFA agreed every team had to travel all over, resulting in the huge distances covered, apart from those in Group H where the venues are relatively close.

The locations are relevant because the game times are set on a perhaps unexpected basis:

From June 12 until June 22 when there are three matches a day -- the programme switches to four a day from June 23 to June 26 for the last round of group games -- matches are due to start at 1pm, 4pm and 7pm local time which is 1600GMT, 1900GMT and 2200GMT to maximise European television audiences.

However, the early kickoff time has sparked some unease as it will be very hot in the northeast at that time of day.

In my time zone, and remembering that June is daylight-saving time (PDT is UTC-7), I think that means the games are at 9 AM, noon, and 3 PM.

Hmmm... those noon games should make for fine lunchtime viewing!

It's important not to get confused about what time it is:

From June 23 until June 26 a pair of games will kick off at 1pm and the other pair at the same time later in the afternoon, although the clock will show 4pm in one stadium and 5pm at the other because they are in different time zones.

So there you go:

The games won't be played at a convenient location for your team

The games won't be played at a convenient time for your team

Your team will have to travel extensively throughout the fifth largest country in the world

Sunday, December 1, 2013

It looks like you’re paying a lot for slower clock speeds as the cores increase, but that’s not the entire story. Those weird Turbo Boost numbers, which are easy to pull from here and here, are worth understanding before choosing a modern Intel processor.

They indicate the number of extra 100 MHz increments by which the CPU may ramp up its speed with a given number of cores in an active, high-power state. The sequence begins with all cores active, then counts down to just one core active.
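For example (with made-up numbers, not any particular SKU), a chip with a 2.0 GHz base clock and turbo bins of 2/4/6/8 can run at 2.2 GHz with all four cores busy, but 2.8 GHz with only one. A small Python helper makes the arithmetic explicit:

```python
def turbo_frequencies(base_ghz, turbo_bins):
    """turbo_bins[i] is the count of extra 100 MHz increments
    available when (len(turbo_bins) - i) cores are active; the
    list starts with all cores active and counts down to one."""
    n = len(turbo_bins)
    # Map active-core count -> maximum clock speed in GHz.
    return {n - i: round(base_ghz + bins * 0.1, 2)
            for i, bins in enumerate(turbo_bins)}
```

Run against the hypothetical numbers above, `turbo_frequencies(2.0, [2, 4, 6, 8])` shows why a lower-base-clock, higher-core-count part isn't necessarily slower for lightly threaded work.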

Out of that tense Oval Office meeting grew a frantic effort aimed at rescuing not only the insurance portal and Mr. Obama’s credibility, but also the Democratic philosophy that an activist government can solve big, complex social problems.

What I like about this approach is: 1) no clear cutting was required to prepare the land for generation and the land remains multi-use, 2) it’s not fossil fuel powered, 3) the facility will be run by a major power generation operator rather than as a sideline by the datacenter operator, and 4) far more clean power is being produced than will be actually used by the datacenter so they are actually adding more clean power to the grid than they are consuming by a fairly significant margin.

Sublime Text is a text editor. A very fast, efficient, cross-platform text editor written explicitly for editing code. It is not an IDE, debugger, or builder. It’s made to be super kick ass at editing text and not much else.

Cross Region Read Replicas are available for MySQL 5.6 and enable you to maintain a nearly up-to-date copy of your master database in a different AWS Region. In case of a regional disaster, you can simply promote your read replica in a different region to a master and point your application to it to resume operations. Cross Region Read Replicas also enable you to serve read traffic for your global customer base from regions that are nearest to them.

when you make software with a UI that's hard to use and confusing in its design, you create a situation where error is inevitable. In networking software, errors are really, really bad. When a misplaced semicolon can kill Internet routing to a large part of the country, your software design has an issue. And I don't believe that these situations are solvable by "knowing what you're doing." If that was the case, why bother with anything past binary? Why make anything easy to use?

Neural networks are one of the most beautiful programming paradigms ever invented. In the conventional approach to programming, we tell the computer what to do, breaking big problems up into many small, precisely defined tasks that the computer can easily perform. By contrast, in a neural network we don't tell the computer how to solve our problem. Instead, it learns from observational data, figuring out its own solution to the problem at hand.

Did Hungary, under your guidance, drive the Turks from Europe, unifying the Balkans along the way? Did your Aztecs hold off the Spanish, English, and French and maintain an Empire in Central America? Did your England win the Two Hundred Years War, conquering the French and building the mightiest Empire in Europe? Did your Iroquois launch a reverse colonial war, overwhelming the stunned nations of Europe after centuries of bitter warfare?