Prescription drugs are important elements of our lives. There is a strict, scientific, testing-based process to assure that drugs that become widely used are safe and effective, with known side effects. Computers and software are also important elements of our lives. There is a chaotic, fashion-trend-based process used to select the mixture of tools and techniques used to build, maintain and operate our IT systems, resulting in widespread failures, along with cost and quality problems. Worse, there is no recognition that this is the state of affairs, and no movement to correct the situation.

Pharmaceuticals

Everyone knows that drugs are important, and an important part of our economy. Here are some numbers from the CDC. In 2013, we spent about $271 billion on prescription drugs. That's quite a bit, but just about 10% of national health spending.

I won't recount the process drugs go through to get approval from the FDA, but I think everyone knows it's an elaborate, multi-year and multi-stage process, with testing at each step to assure that we know how a proposed new drug will work in human beings. While I have my complaints, there is a process, and it's scientific and evidence-based.

IT

The IT industry is also a large one. Here's a breakdown of it worldwide.

There are conflicting estimates of its size in the US, but here's a representative one.

Note that the definition of IT does not include the activities of well-known IT-centric companies like Google.

I was fascinated to see that in 2013, IT was three times the size of the entire pharmaceutical industry. Amazing.

Drugs and IT

Drugs are developed by scientists. They are vetted by a strict scientific process. Only drugs that make it through all the tests are widely used. As a result, the vast majority of drugs are used safely and effectively by the vast majority of patients, with a few experiencing side effects that have already been identified.

IT is run by professionals and staffed with computer scientists and engineers, using tools and techniques developed over many years by scientists and engineers. No matter how high-profile and important the project, regardless of the involvement by government or private companies, a shocking fraction of IT projects end up late, too costly, ineffective or worse. Industry-accepted certifications seem to make no difference. New methods and techniques emerge, become talked about and are deployed widely without any evidence-based process being used to assure their safety and effectiveness. The industry is rife with warring camps, each passionately committed to the effectiveness of its set of tools and techniques. But there isn't even postmortem testing to see which ones were better at gaining their adherents admittance into IT "heaven."

Conclusion

I think the FDA-run drug acceptance process could be much better than it is. But the important thing is, everyone involved in prescription drugs understands and acts scientifically about the process. No one, including me, wants that to change.

The IT industry is at least three times the size of the drug industry. There are computer science and computer engineering departments in every major university, and their graduates staff the industry. It's hard to imagine that they don't understand science, scientific process and evidence-based reasoning. However: they adhere to faith-based processes and vendor-driven products that yield horrible results year after year. None of them say, "hey, this stinks, maybe we can apply that thing that Galileo, Newton and Einstein did, what's it called, science?"

The last thing I want is government involvement in IT, given how horribly government handles its own IT affairs, and I'm not suggesting it here. But it's a sign of just how bad things are in IT that the bureaucratic, government-run FDA does a more scientific job with drugs than anyone does with IT.

They say that cognitive computing, the term-du-jour for Artificial Intelligence (AI), is in the process of transforming healthcare. Billions of dollars of investment are behind the effort. Sadly, there are good reasons to believe that little good will come of it.

Cognitive Computing

Whatever it is, people are pretty sure it's BIG. Here's what a major investor and the former GM of IBM's Watson unit says about it:

It's so hot that IBM has created a separate division for Watson, investing more than $1 billion just to get it started, and will have a headquarters group employing more than 2,000 people.

So What's the Problem?

Big investments like this should mean that there's a big problem to be solved. What is it? Not enough doctors? The doctors are too expensive, and somehow automating what they do with this mega-expensive effort will help that? The doctors aren't as smart or educated as Watson will (by presumption) be?

Someone involved should let the rest of us know.

Meanwhile, count me a skeptic. The reason is simple: there is a decades-long history of researchers and big companies making claims more modest than the ones being made for "cognitive computing," and they've all failed, technically and/or in business terms. In the end, computers do get used for more and more, as we all know from personal experience. That's a trend that will certainly continue. But "cognitive computing," i.e., AI reincarnated and re-named? Uh-uh.

The message appears to be: if you're not way into Big Data, you're missing out on important things! For vendors and job seekers, I'm sure this is true, without reservation. For the companies that wish to benefit from the investment? Maybe not.

The Big Data Trend is BIG!

There's one thing for sure about Big Data: it's a Big trend.

We have been assured that Big Data is now the driving force in computing.

If you scan through the books, conferences and other things whose focus is Big Data, it's clearly a major fashion trend.

Whenever something like this catches fire, everyone wants to jump on. Lots of people talk about their own "big data" that, on a closer look, isn't so big after all.

Probability

Let's start with something simple and universal: flipping coins. Suppose we place ads. We make money when the coin comes up heads, and lose money when it comes up tails. Our data people tell us that the odds of getting heads are 0.5, with an uncertainty of 0.1 -- i.e., the chance of it coming up heads is probably 0.5, but it might be as little as 0.4 or as much as 0.6. Now we have 100X more coin flips to apply to our measurement. Great, now we're really going to start making money!

They come back, sweaty and proud with the answer: the probability of getting heads is 0.50, with an uncertainty of 0.01 -- i.e., the chance of it coming up heads is probably 0.5, but it might be as little as 0.49 or as much as 0.51. (With 100X more flips, the margin of error shrinks by the square root of 100 -- tenfold.) Wow, we've increased our level of precision massively! How much does that increase the money we make? Hmmmmm.
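Here's a minimal sketch of that arithmetic, using only the Python standard library. The flip counts are hypothetical; the point is that the margin of error shrinks only with the square root of the sample size, so 100X more data buys just 10X more precision.

```python
import math
import random

def estimate_heads(n_flips, p=0.5, seed=42):
    """Flip a coin n_flips times; return the estimated probability of
    heads and an approximate 95% margin of error for that estimate."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p for _ in range(n_flips))
    p_hat = heads / n_flips
    # The standard error of a proportion shrinks with sqrt(n):
    # 100X more flips -> only 10X less uncertainty.
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_flips)
    return p_hat, margin

for n in (100, 10_000):  # 100X more data
    p_hat, margin = estimate_heads(n)
    print(f"{n:>6} flips: p(heads) = {p_hat:.3f} +/- {margin:.3f}")
```

Run it and the second line shows roughly a tenth the uncertainty of the first: more precision about something we already knew, and not a penny more revenue.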

Quality/Integrity

Maybe the problem is that we just got lots more data points about the same thing. It didn't broaden our knowledge. Maybe we need to expand, check out the odds for not just nickels, but also dimes and quarters. Hmmmm. Let's get more ambitious. Let's track users, not just on our website, but also on 100 other websites. Tell the programmers to get going! We're going to be rolling in money from Big Data!

The programmers seem to be having trouble matching people over different web sites. Are all these people who claim to be David Black the same person? What about that David B. Black guy? And there appear to be two really different patterns of use coming from the same IP address -- maybe someone else is sharing the computer? And I just discovered that there's a David Black who appears to use the internet from Manhattan, and a David Black who uses it from some place out in New Jersey. We already know there are multiple David Blacks. This could be one person or two. Which is it? This is getting hard.
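To make the matching problem concrete, here's a toy sketch in Python. The records, names, and IP addresses are all hypothetical; the point is that different matching rules give contradictory answers, and nothing in the data tells you which rule is right.

```python
# Hypothetical visitor records gathered from three different web sites.
records = [
    {"site": "ours.example", "name": "David Black",    "ip": "203.0.113.7",  "region": "Manhattan"},
    {"site": "news.example", "name": "David B. Black", "ip": "203.0.113.7",  "region": "Manhattan"},
    {"site": "shop.example", "name": "David Black",    "ip": "198.51.100.4", "region": "New Jersey"},
]

def exact_name_match(a, b):
    """Strict rule: same person only if the names match exactly."""
    return a["name"] == b["name"]

def loose_match(a, b):
    """Loose rule: same first initial and surname, or same IP address."""
    fa, fb = a["name"].split(), b["name"].split()
    same_name = fa[0][0] == fb[0][0] and fa[-1] == fb[-1]
    return same_name or a["ip"] == b["ip"]

# The strict rule says records 0 and 2 are one person and record 1 is
# someone else; the loose rule merges all three. Both are defensible,
# both may be wrong, and more data alone doesn't settle it.
print(exact_name_match(records[0], records[1]), loose_match(records[0], records[1]))
print(exact_name_match(records[0], records[2]), loose_match(records[0], records[2]))
```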

Darn. I thought all I had to do was get loads more data and a Hadoop cluster and the money would start pouring in. Getting all that data to match up and make sense is harder than it looks. And then, when I've done it, is all I'm achieving an increase in my certainty about what I already knew?

Data Coverage and Lift

Alright. I've got my 100X more data. I've FINALLY sorted it out so it's high quality and matches up. Now I've got to make sure it really broadens my knowledge and gives me uplift in my results.

So far, all I've been doing is looking at my customers' actions. I bet if I look at demographics and social media -- that's lots of data, surely it qualifies as "Big" -- I'll get better results. Big Data team -- mush!

Darn, darn, DARN! Yeah, all this big data stuff changes what I offer to whom for how much -- but it's not making a whole lot of difference in my results. And I'm getting hammered with complaints from people who want me to stop making offers to their kids, and old customers who wonder why we don't love them anymore. Yeah, we're getting 5-10% uplift, but we're losing at least that much from our old business, not to mention all the costs we've added.

Who's making money from the Big Data stuff? It must be the consultants, the vendors and the conferences. It's sure not me.

Of course, I could just patch it all up, start going to conferences, bragging about how I'm an expert, and maybe I'll get a great new job. But it would be based on a lie. I'm not that kind of person.

Conclusion

I love data. I love exploring it, analyzing it by all available means, and understanding it. Evidence-based solutions are the only ones I'm comfortable with. Everything else is just baseless faith. If I can use math optimization, machine learning or something else to do a better job than a person could do, I'm all for it. If I can get additional data and that data will help me get better results, bring it on!

But "Big Data" is not in principle better than "enough" data. Too little is not enough. More than you need to get the job done is a waste. Just like Goldilocks, you should want the amount of data that's "just right."

There are decades-long trends and cycles in software that explain a great deal of what goes on in software innovation – including repeated “innovations” that are actually more like “in-old-vations,” and repeated eruptions of re-cycled “innovations” that are actually retreating to bad old ways of doing things. Of the many examples illustrating these trends and cycles, I’ll focus on one of the most persistent and amusing: the cycle of the relationship between data storage definitions and program definitions. This particular cycle started in the 1950’s, and is still going strong in 2015!

The relationship between data in programs and data in storage

Data is found in two basic places in the world of computing: in “persistent storage” (in a database and/or on disk), where it’s mostly “at rest;” and in “memory” (generally in RAM, the same place where the program is, in labelled variables for use in program statements), where it’s “in use.” The world of programming has gone around and around about what is the best relationship between these two things.

The data-program relationship spectrum

At one end of the spectrum, there is a single definition that serves both purposes. The programming language defines the data generically, and has commands to manipulate the data, and other commands to get it from storage and put it back. At the other end of the spectrum, there is a set of software used to define and manipulate persistent storage (for example, a DBMS with its DDL and DML), and a portion of the programming language used for defining "local" variables and collections of variables (called various things, including "objects" and "structures"). At this far end of the spectrum, there are a variety of means used to define the relationship between the two types of data (in-storage and in-program) and how data is moved from one place to the other.
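Here's a minimal sketch of that second arrangement in Python with SQLite; the schema and names are hypothetical. Notice that the storage definition, the in-program definition, and the mapping between them are three separate pieces of code that must be kept in sync by hand.

```python
import sqlite3
from dataclasses import dataclass

# Storage definition: lives in the DBMS's world (DDL).
DDL = "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, balance REAL)"

# In-program definition: duplicates the same facts, by hand.
@dataclass
class Customer:
    id: int
    name: str
    balance: float

# Mapping code: a third thing to write and keep in sync with the other two.
def load_customer(conn, cid):
    row = conn.execute(
        "SELECT id, name, balance FROM customer WHERE id = ?", (cid,)
    ).fetchone()
    return Customer(*row) if row else None

def save_customer(conn, c):
    conn.execute(
        "INSERT OR REPLACE INTO customer VALUES (?, ?, ?)",
        (c.id, c.name, c.balance),
    )

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
save_customer(conn, Customer(1, "Ada", 10.0))
print(load_customer(conn, 1))
```

Add one column and you edit all three places; multiply by hundreds of tables and the appeal of the unified end of the spectrum becomes obvious.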

For convenience, I’m going to call the end of the spectrum in which persistent data and data-in-program are as far as possible from each other the “left” end of the spectrum. The “right” end of the spectrum is the one in which data for both purposes is defined in a unified way.

People at the left end of the spectrum tend to be passionate about their choice and driven by a sense of ideological purity. People at the right end of the spectrum tend to be practical and results-oriented, focused on delivering end-user results.

Recent Example: Ruby and Rails

The object-oriented movement tends, for the most part, to be at the left end of the spectrum. Object-oriented people focus on classes and objects, which are entirely in-program representations of data. The problem of "persisting" those objects is an annoying detail that the imperfections of current computing environments make necessary. They solve this problem by various means; one of the most popular is using an ORM (object-relational mapper), which makes the work of moving objects between a DBMS and where they "should" be nearly automatic. It can also automate the process of creating effective but really ugly schemas for the data in the DBMS.

A nice new object-oriented language, Ruby, appeared in 1995. It was invented to bring a level of O-O purity to scripting that alternatives like Perl lacked. About 10 years later, a guy working in Ruby for Basecamp realized that the framework he'd created was really valuable: it enabled him to build and modify things his company needed really quickly. He released it as open source; it became known as the Rails framework for Ruby, the result being Ruby on Rails. Ruby on Rails was quickly adopted by results-oriented developers, and is the tool used by more than half a million web sites. While "pure" Ruby resides firmly at the left end of the spectrum, the main characteristic of the Rails framework is that you define variables that serve for both program access and storage, putting it near the right end of the spectrum.
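As an illustration of that define-once characteristic, here's a rough Python analogue of the ActiveRecord idea -- not Rails itself, and the table and class names are hypothetical. The only definition of the data is the table; the program class discovers the columns at runtime instead of re-declaring them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, balance REAL)")

class Record:
    """Attributes come from the storage schema, not from the class body."""
    table = "customer"

    def _columns(self):
        # PRAGMA table_info rows are (cid, name, type, ...); [1] is the name.
        return [row[1] for row in conn.execute(f"PRAGMA table_info({self.table})")]

    def __init__(self, **fields):
        for col in self._columns():
            setattr(self, col, fields.get(col))

    def save(self):
        cols = self._columns()
        marks = ", ".join("?" for _ in cols)
        conn.execute(
            f"INSERT OR REPLACE INTO {self.table} VALUES ({marks})",
            [getattr(self, c) for c in cols],
        )

Record(id=1, name="Ada", balance=10.0).save()
# Add a column to the table and Record needs no edits -- one definition,
# both uses, which is the right-end productivity win Rails delivered.
```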

Rhetoric of the ends of the spectrum

Left-end believers tend to think of people at the other end as rank amateurs. Left-enders tend to claim the cloak of computer science for their style of work, and claim that they're serious professionals. They stress the importance of having separation of concerns and layers in software.

Right-end practitioners tend to think of people at the other end as purists, ivory-tower idealists who put theory ahead of practice, and who put abstract concerns ahead of practical results. They stress the importance of achieving business results with high productivity and quick turn-around.

It goes in cycles!

There have always been examples of programming at both ends of the spectrum; neither end disappears. But the pendulum tends to swing from one end to the other, and so far it has never stopped swinging.

In 1958, Algol was invented by a committee of computer scientists. Algol 60 was purely algorithmic -- it only dealt with data in memory, and didn't concern itself with getting data or putting it back. Its backers liked its "syntactic and semantic purity." It was clearly left-end oriented. But then practical business people, in part building on earlier efforts and in part inspired by the need to deliver results, agreed on COBOL 60, which fully incorporated data definitions that were used both for interacting with storage and for performing operations. It was clearly right-end oriented. All the big computer manufacturers, anxious to sell machines to business users, built COBOL compilers. Meanwhile, Algol became the choice for expressing algorithms in academic journals and text books.

There was an explosion of interest in right-end languages during the heyday of minicomputers, with languages like PICK growing in use, and niche languages like MUMPS having their day. The rise of the DBMS presented new problems and opportunities. After a period of purism, in which programmers struggled to get jobs done with left-end languages that were supposed to be "easy," like BASIC, along came Powersoft's PowerBuilder, which dominated client-server computing because of the productivity win of integrating the DBMS schema with the language. The pendulum swung back with the rise of the internet and the emergence of "pure" Java, with its arms-length relationship to persistence, putting it firmly in the left-end camp. Since Java wasn't pure enough for some people, "pure" Ruby got out there and grew -- then a Ruby user under pressure to deliver results invented Rails, which spread to similarly-minded programmers. And "amateurs" who like Java but also like productivity invented and are spreading Grails, a derivative of Java and Rails that combines advantages of each.

So is Rails an innovation? Java? I don't think so. They're just minor variants of a recurring pattern in programming language design, fueled by the intellectual and emotional habits of groups of programmers, some of whom push towards the left end of the spectrum (with disdain for the amateurs) and others of whom push towards the right end (with disdain for the purists).

Conclusion

This spectrum of the relation between data and program has persisted for more than 50 years, and shows no signs of weakening. Every person or group that finds themselves in an environment emphasizing the end of the spectrum they find distasteful tries to get to the end they like. Sometimes they write a bunch of code, and everyone thinks they're innovative. Except it would be more accurate to say they're "inno-old-ive." This is one of the many patterns to be found in software history, patterns that account for so much of the detail that is otherwise hard to explain.

In baseball, teams play against each other. Each half inning, one team does its best to attack the other and score, while the other does its best to stop them. The teams are similarly staffed, and they alternate playing offense and defense. In computer security, teams also play against each other. The "home" team always plays defense, while the "away" team comes to town and tries to score against their hosts.

Tiny, often remote "visiting teams" in cyber-war score massive victories against huge, well-funded organizations like OPM and Anthem. These are rarely quick "hit and run" attacks -- they are more often months-long penetrations, during which massive amounts of information gold are marched out of the "well-guarded walls" of the clueless behemoth. What's worse, most people don't seem to care -- imagine if a single gold bar were secreted out of Fort Knox: heads would roll! How can this happen? Why does no one seem to care?

Baseball and Cyber-war

First and foremost, baseball is visible. We can see it and understand it. Loads of fans come to stadiums to watch it. Cyberwar? It's largely invisible. It's as though the stadiums were empty.

In baseball, we can actually see the team at bat competing against the defenders. It's pretty exciting! For the vast majority of people, there is no equivalent in cyber-war.

The fans and managers understand the game; those closest to it have normally played it. They have strong opinions, for example, about the defensive shift maneuver, which is sometimes used against a pull hitter. Even if you've never heard of it, a simple diagram makes it easy to understand. In cyber-war there are also strong opinions, but the way most managers think about cyber-defense is simply inappropriate and ineffective. Not only is there no defensive shift, there is a complete lack of awareness when the enemy has been inside your walls for weeks, ransacking away. Because no one understands what's going on, including those in charge, the ineffective methods continue to be standard practice, even when there are better approaches available. Retail stores, who actually care about loss prevention, generally have better theft prevention measures.

Above all, there's this. The people who play baseball care about it. So do the people who watch baseball. Cyber-war is way more than a game, but people just don't take it seriously. They don't even give the passion to it that they give to games! The individual computer users don't know or care, and neither do the managers.

Conclusion

Nothing will change in Cyber-war until we understand it, start caring about it and apply methods that work. In a fight between the smart and motivated against the clueless and unmotivated, the outcome is preordained.

Our personal data is stored in the computers at large corporations and government organizations. We now have abundant proof that these large organizations are incapable of protecting our data. This is not a string of bad luck that will soon pass. These large organizations never had good security -- they just weren't being attacked. Unfortunately, the security flaws are a direct outcome of the dysfunctional technical and management practices that lead to large-organization IT failures across the spectrum.

Recent Security Disasters

The security disaster at the government Office of Personnel Management (OPM) has been in the news recently. Here is a summary, and here is a timeline. OPM knew all about security, and tried its darndest to be secure, spending over $4.5 billion on systems to prevent breaches, including a recent $218 million upgrade to the security system known as Einstein. All for naught.

In the private sector, there was the breach at Anthem, preceded by a string of security disasters at major banks and retailers involving tens of millions of consumer records.

The Response to the Attacks

We're seeing the usual responses to the problems.

First and foremost, try to avoid letting anyone know there's a problem.

Second, try to draw attention to all the attacks that were thwarted. The OPM is actually bragging about all the attacks they defend against! That's like a bank that's been totally cleaned out bragging about how many robbery attempts were thwarted.

No one is losing their job. No significant changes are being made. No one is running around like their hair's on fire. Ho-hum, it's business as usual.

Systemic Issues are behind the Disasters

Security in large organizations is broken. But that's just a side effect of the fact that IT in large organizations is broken. Not in detail -- in principle. When the foundation of a building is made out of jello instead of concrete, you don't fix it by adding more jello, trying a new flavor of jello, or getting everyone to walk slowly and carefully. You replace it with reinforced concrete -- pronto! When the foundations are the wrong kind of stuff, making new foundations out of jello will never help. Even if it's jello that costs billions of dollars.

The Systemic Issues

This is a subject that is long and deep. All the problems come down to two simple core thoughts: (1) computers are just like all the other things to which management techniques are applied, so standard-issue "good management" will solve any problems; and (2) computer security is just like all the other computer issues, and can be managed using the same standard techniques.

Wrong and wrong.

Computers and software in general are radically different from anything else we encounter in our normal lives, and evolve faster, by orders of magnitude, than anything else in human experience. Managing a software building project as though it were a home building project leads to results that are, at best, 10X worse than what optimal methods deliver, and at worst, complete disaster.

Computer security in particular is not just another issue to be managed using standard techniques, which in any case yield horrible results. In computer security, we're dealing with smart and motivated attackers who are at war with us, and naturally use the latest "weapons" in a rapidly evolving arsenal. While our attackers are at war with us, we plod along at a peace-time pace, scheduling security issues just like the other items in prioritized lists. When the armed gang breaks through the back door of the warehouse, we eventually discover the break-in and schedule a response for sometime in the next couple of months. By the time we've installed new alarms, the gangs are already on their third generation of tools for defeating them.

The vendors of hardware, software and services have evolved to provide incredibly expensive, ineffective products and services that are packaged to make top managers feel great.

Computer security requires war-time actions, not peace-time ones

Translating from physical security, managers insist that security is about walls, guards and kevlar vests. The bad guys are out there, our job is to keep them out. Wrong. The vast majority of security breaches result from either conscious or unknowing cooperation of insiders. Including OPM.

The bad guys are at war with us. By the time we've figured out that we've been robbed, the bad guys are long gone. By the time we're just wrapping up the requirements documents for our response, the bad guys have cleaned us out again.

Once we finally deploy our best defense, the art of war has advanced and our defenses are useless, just like the Maginot Line in World War II.

Conclusion

We all know that the definition of insanity is repeating the same actions and expecting different results. In that sense, the approach that large organizations, private and public, take to computer security is insane. All the people in charge propose is doing what they've always done, only somehow harder and better. The alternative approach, while radically different from the current one, is simple, clear and actionable. The people in charge actively resist it today. They've got to embrace it if there is to be any chance at all of improvement in cyber-security.

As a result of the decades I have spent working on, in and around computers, I have learned many things from other people, from books and talks, from studying the results of other peoples' work, from trying to accomplish many things in various ways myself, and from following the course of many projects and products over time. During this period of time, the computer industry has changed dramatically in many ways, and not much in some ways.

Most of the knowledge and insight I have gained from this effort over time matches well with that of the industry as a whole. However, there are major subject areas that, I have observed, either don't get much attention or need major innovation. Here are some of those areas.

Quality

A good deal of attention has been paid to the quality that results from software development efforts. Products have been built to automate various aspects of the quality process, and there are techniques frequently incorporated into the software development process intended to assure good quality results.

However, it is clear that there is a tremendous opportunity to enhance the quality process. There are conceptual and technical advances that can be applied to the quality process that greatly improve the results of software development and reduce the time and effort to attain those results. While it is likely that there are situations to which the optimal techniques do not apply in part or in full, it appears that they are applicable to most software development projects.

Optimal results

There are a few areas in computers where people focus on measures of goodness and generally agree on what those measures are, for example, total cost of ownership. But the concept of the best possible result in theory, comparable to Shannon’s result in communications, is rarely applied in computing. Yet, there are a number of areas where it is applicable and useful.

Similarly, in computer hardware, people frequently reach a consensus concerning the “best” way to implement a certain feature, whereas in software development tools and processes, the thoughts about the optimal way of doing things evolve slowly, but rarely reach resolution. Moving beyond advocacy and thinking about what is truly optimal and how to attain it is very fruitful.

History

Software development is a field that pays remarkably little attention to history; everything is now and the next big thing. But in fact, a study of history in this field is very rewarding, because just as in real history, you find that some things truly change, some of them extremely slowly and a few rapidly, while other things go through recurring cycles. Knowledge of this history is interesting in and of itself, just like "real" history, and also enables you to predict the future within reasonable limits by extrapolating the patterns.

Application and systems software

If you look at every line of code that is executed in order to run a program, the lines fall into various categories, including systems software, standard libraries and applications. The “line” between these has been moving “up” very slowly over the last few decades. This glacial trend has impacts on operating systems, databases, application development tools, and related subjects. Understanding and exploiting this trend is a target-rich environment for innovation.

Abstraction levels

When we notice things repeating in computing, we build a level of abstraction to encapsulate the repetition and then work at the level of abstraction. Each abstraction is something that has to be built, adopted and learned, and because of these obstacles (and in spite of the benefits), abstractions propagate slowly. Some are so hard for most people, like those involving real math, that they can only be used by hiding the complications from practically everyone. Exploiting abstractions can lead to huge advances.

Closed loop systems

The concept of running an automation system “open loop” vs. “closed loop” is widely understood. But I find that few computer systems are run closed loop. Even though this is not exactly a novel concept, most people who work on building or operating the systems seem to be unfamiliar with it.
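For readers who haven't met the terms: open loop means you set a control and hope; closed loop means you measure the output and feed the measurement back into the control. Here's a minimal sketch of running a system closed loop in Python -- the workload numbers and scaling rule are hypothetical.

```python
def desired_workers(queue_depth, current_workers,
                    target_per_worker=100, min_w=1, max_w=50):
    """Closed-loop rule: pick a worker count from the *measured* backlog,
    rather than from a number fixed in a config file (open loop)."""
    needed = max(min_w, min(max_w, round(queue_depth / target_per_worker)))
    # Damp the adjustment so the loop doesn't oscillate wildly.
    step = max(-2, min(2, needed - current_workers))
    return current_workers + step

workers = 4
for depth in (150, 900, 2400, 700, 80):  # simulated backlog measurements
    workers = desired_workers(depth, workers)
    print(f"queue={depth:>4} -> workers={workers}")
```

The feedback is the whole point: the system's own measured behavior, not a human's guess, drives the setting.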

Workflow systems

The concept of workflow has been around for many years, and many systems have been built that embody the concepts. However, most people that I encounter seem not to understand the abstraction, and no good tools have appeared to ease the path to implementation.
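The abstraction is small enough to sketch. Here's a bare-bones workflow engine in Python -- the states, actions, and the order example are all hypothetical -- showing the core idea: work items carry a state, and a transition table says which actions may move them where.

```python
# (state, action) -> next state: the entire workflow definition.
TRANSITIONS = {
    ("submitted", "approve"): "approved",
    ("submitted", "reject"):  "rejected",
    ("approved",  "ship"):    "shipped",
}

def advance(item, action):
    """Move a work item through the workflow, refusing illegal moves."""
    key = (item["state"], action)
    if key not in TRANSITIONS:
        raise ValueError(f"cannot {action!r} an item in state {item['state']!r}")
    item["state"] = TRANSITIONS[key]
    return item

order = {"id": 17, "state": "submitted"}
advance(order, "approve")  # submitted -> approved
advance(order, "ship")     # approved  -> shipped
# advance(order, "reject") would raise: shipped items can't be rejected.
```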

Application Building Methods

The biggest, fattest target of all is the project management style of organizing and managing software projects. I have written a long book dissecting the theory and practice of applying otherwise reasonable project management techniques to software, and another one outlining the alternative approach. The larger the organization, the more likely software is going to be built the bad way. When software is built quickly and well, it is most often built in smaller organizations that are working under some kind of severe constraint. And of course, there are people here and there who simply have figured out better ways of building software, and just do it.

Understanding the People

Athletes are special people -- they're people like everyone else, but the outstanding ones are different in important ways. To encourage them to do their best, you have to understand those differences and act in appropriate ways. Same thing for software. HR people, general managers and everyone else apply the same template to software people that they apply to everyone else. They emphasize the commonality and ignore the differences. This is why, in general, management of software people is inexcusably bad. I've written a book about this.

Summary

Each of the subjects mentioned here could be a book; some of them are the basis of whole innovative companies. They're not just theoretical. Exploiting some of these subject areas can lead to rapid tactical execution benefits in organizations that build or use software.

Everyone knows it's hard to build software. Even projects that are judged "successful" are often fraught with problems. The odd thing is that many of the steps people take to reduce the risk and increase the odds of success actually make things worse!

Trying to reduce the risk of software projects

At some level, everyone knows that software projects are risky and often fail. They really want to avoid failure, but the second one guy starts babbling about "object-oriented frameworks" and another guy rattles on about "Agile and a great SCRUM master," normal people get even more worried. "How can I avoid being road kill" is the fear causing the roiling of the intestines. So they insist on things that make them feel safe, all of which (perversely) are most likely to increase the time, cost and risk of failure!

These safe-feeling but risk-increasing items include (but are not limited to):

Outsourcing the project using normal procurement channels and methods

Selecting a large vendor to do the work

Requiring lots of certifications among the organizations and people doing the work

Selecting independent auditing, testing and other functions to assure the work is done well

Interviewing the people in charge of the work, and accepting only those who make you feel comfortable

Each one of these merits an essay explaining why such common-sense steps make things worse. Empirically, they do. The spate of failures among the Obamacare implementations is a recent poster child, since the implementations involved most of the above "safety-increasing" elements.

Outsourcing

Outsourcing is a favorite. Huge organizations outsource all the time, even their whole IT function. But there is no evidence that the organizations that take on the outsourced work do any better than the flailing organizations that hand it to them. There is exactly one guarantee: having the work done under a different roof means that you are largely free of the responsibility, and largely free from the stress of seeing the sausage factory in action.

Large Vendor

Choosing a large vendor is a tried and true way to make the buyer feel better and safer. You wouldn't buy a car built by someone you'd never heard of, would you? Of course not! So sensible people insist on dealing only with large, well-established vendors. Unfortunately for those sensible people, the things that work in most of our lives cause failure in software. Too bad!

Certifications

You wouldn't go to a restaurant that had failed a health inspection, would you? Or go to a doctor who had lost his license? Of course not. So a good way to feel safe is to find out what certifications are floating around the software industry and make sure your vendor has lots of them. Nice idea. Makes sense in most fields. But not in software. In software, you can be pretty sure that the more certifications they have, the worse they are at building software.

Independent checking

How do you know if they're really doing the work they say they're doing? We get our books audited by an outside firm, so doesn't it make sense to have the software audited by an outside firm of experts? Makes common sense. However, this is yet another example of how common sense makes things worse in software.

Personal interviewing

When all else fails, use your in-depth knowledge and experience with people to do your selecting. The trouble with this nice idea is that the person you're dealing with deals with yokels like you all day long, and you're not nearly as good as you think you are. Worse, the person you're interviewing either personally does the work (unlikely), in which case you have no clue at all, or they're just a sales person (most likely), in which case you're seriously outgunned. Forget it.

Conclusion

If software were easy, everyone would learn how to do it as kids, and be able to pick it up again after years of not having done it. We all know how to make risky decisions and processes less risky. The trouble is that most of those methods, which work pretty well in most of our lives, come up short in the wacky world of software, frequently making things worse.

Wonderful World

I sure hope you can win that girl or boy you're after in spite of all that not-knowing!

The Wonderful World of History

Politicians study history in general and the last election in particular. Fiction writers frequently read fiction, current and historic. Generals study old battles for their lessons; even today at West Point, they read about the Civil War. Learning physics is like going through the history of physics, from Galileo and Newton through Planck and Einstein to the present. Even the terms used in physics remind you of its history: hertz, joules and Brownian motion. Math is the same way. Whatever you're learning was first established at some point in history, and remains as valid and applicable to the present as when first discovered.

Software, by contrast, is almost completely a-historical. Not only are most people involved uninterested in what happened ten years ago, even the last project is unworthy of consideration – it’s “history.”

History isn't just for historians

How did we learn about biological evolution? By observing species and trying to figure out their history. How did we learn about genes and DNA? By trying to figure out the mechanisms that make organisms work through time. Geology? Gee, I wonder how those mountains got there? And what happened so that I'm finding fossils of creatures that lived in the ocean up there?

A good deal of science is historical in nature. We try to construct theories that explain how things got to be the way they are; and then we run tests or make lots of observations.

Software History is for the Birds

Or so it appears, from the way that the vast majority of software people act. We're about to embark on a new project. How did similar projects work out in the past? What are we doing differently? The uniform response to questions like these? Crickets.

One thing I've realized is that our determined effort to ignore history in software is a completely understandable defense mechanism. Suppose you're starting an hours-long road trip. At the end is near-certain disaster. Would you like to know that at the beginning of the trip, so that every second is miserable, building to a crescendo of terror? Or would you rather blissfully cruise along, and then be blind-sided at the end, leading to a mercifully quick death? Apparently, pretty much everyone agrees that blissful ignorance is the way to go.

A Wonderful World

Here's what I think would be a wonderful world:

They both love each other, AND

They know lots of software history together, leading not only to A's in school, but great jobs and successful projects.

We go to the symphony to hear great music. We go to the hospital when we’re injured or sick, and hope that the caregivers will heal us. When you’re sick, the only thing that matters is getting healthy. When you’re healthy, you have a huge array of activities to choose from, one of which might be going to hear great music.

Both orchestras and hospitals use computers to do their jobs. In both cases, computers play an important supporting role, while people deliver the actual services customers/patients want.

One of the great hospitals, Mount Sinai, and one of the great symphony orchestras, the New York Philharmonic, provide clear illustrations of how differently medical and cultural institutions think about the computers they use.

Computer Trouble at the Symphony

There was a computer outage at the New York Philharmonic. Along with many other subscribers and supporters, I received an e-mail on May 7th telling me about the problem. The Philharmonic is clearly embarrassed by the situation, and went out of its way to make sure their customers know about it, what the status is, and what they’re doing about it. By sending this e-mail, they clearly announced to many people who would otherwise have had no idea the computers were down that there was a problem. But to their credit, the Philharmonic’s priority was being open about the situation so that any inconvenience was minimized.

Computer Trouble at the Hospital

There was a computer outage at Mount Sinai hospital last fall. I personally experienced the problem and wrote about it here. In striking contrast to the Philharmonic, no public word was or has been issued about the situation, so far as I can tell – even though I’m a patient, and even though Mount Sinai is much more crucial to my health than the Philharmonic.

Mount Sinai may be embarrassed. I have no way of knowing; they’re keeping a pretty tight lid on the situation. In fact, as far as I can tell, the medical profession combines suppressing all information about system outages with considering the whole subject to be a joke.

Why do I think they think it's a joke?

There is a list of the top 100 hospital CIO's. There is a little blurb about each one of them. Among the 100 mini-bio's I can find only one reference to whether their computer systems are working or not. Here it is. First of all, keeping computers running is beneath mention in 99 of the 100 cases. In the one out of 100, here's what they say.

He "caused" a network-wide crash -- but that's OK, he "played a role" in "recovering it" (sic) too, ha-ha-ha.

Conclusion

There’s an attitude problem and an issue of priorities among the people who run hospitals. Comparing them to their counterparts in the world of symphony orchestras illustrates the problem vividly. The people in charge should make sure that their computers are actually up, running and available, above all else. They should track their performance. They should be open and transparent about it. They shouldn’t suppress information. Above all else, they should get it done! Sadly they’re not getting it done, in spite of their monstrous salaries and budgets, and that’s not likely to change any time soon.

When the computers go down in a hospital, patient lives are put at risk. Medical records aren't accessible, care orders can't be entered or received, and the staff runs around trying to make things work as best they can, in spite of the unavailability of the hospital's mission-critical system.

Could anything be worse?

Yes.

The outages aren't tracked. They are hidden -- literally kept secret. After all, reputations are at stake here! If it ever got out that people whose salaries run into the hundreds of thousands of dollars a year for running an operation that spends hundreds of millions of dollars a year can't even keep the computers running, who knows what might happen?

The IT Horror Show at Mount Sinai Hospital

I’ve already told the story of one of my personal experiences with horrible hospital software. Here’s another.

When I arrived at the cancer treatment center at Mount Sinai in New York last Fall, I immediately noticed that things were different than they had been on my prior visits. Patients were anxious, and staff were madly rushing about. Here's the waiting area on a calmer day.

The problem was immediately evident when I checked in: the screen was blank, and everything was being done on paper. This was Wednesday, and the computers had been down since early Monday. Some departments were back up, but since some important ones were still down, lots of things were still being done with phone calls and handwritten notes. Among other comments, I heard “This isn’t the first time this has happened.”

This multi-day outage didn’t take place in Podunk. It was at a premier medical center. Is it better at Mount Sinai than other places? Worse? I have no way of knowing.

This was outrageous. The health and life of patients, the hospital’s primary mission, was compromised, to put it mildly. Everyone was anxious and upset, but no one was shocked. Was anyone fired? Did the CIO lose his job? The CIO deserved to be frog-marched to the nearest exit, along with anyone else involved. But last I heard, the news of the outage was suppressed, as usual, and the CIO and his whole crew continue to be richly employed.

It appears to be a question of priorities. Hospitals and their CIO's issue press releases when they install a new version of the ridiculously expensive enterprise software they use, and move up another rung on the ladder of how heavily dependent your hospital is on its EMR (electronic medical record). Being more dependent on computers is considered to be a good thing in this industry! But simple things like tracking the up time of the system? Apparently it's beneath the level of the top people to pay attention to it -- it nonetheless appears to be important enough to train everyone to hide the outages.

Computer Availability

The more dependent you are on computers, the more important it is that they actually work! Yet the top people in any computer-using organization can be cavalier about system up-time. This isn't just something that happens in healthcare, as I've pointed out. The two most important things about any computer system are that it works and that its performance is reasonable. This is true times a large number for a system that is mission critical for an organization devoted to curing sick people.

Conclusion

Heads should have rolled after the outage that I personally experienced and can personally testify actually happened at Mount Sinai Hospital in New York City. Not only didn't they roll, they continue to crow about how wonderful they and their system are, while making sure to suppress all news and information about their IT malfeasance. To put it mildly: not acceptable.

Innocent people taking a train are dead. Many are injured. The government had an answer in 2008: spend billions of dollars and wait for years. There's a better answer: build a smartphone app, with some cloud software, a couple of sensors and cameras, and an engine-cab remote-control harness. It would be faster, cheaper and more effective than the existing partly implemented "solution," and lives would be saved.

The Crash

Reactions to the Crash

The basic reaction has been typical all-politics-all-the-time. Here's the Reuters story:

Later in the same story, you learn that the engineer was driving at more than twice the speed limit for that part of the track, and that the accident would not have happened except for his error. But that's a detail, I guess.

Technology Could Have Prevented the Crash!

Then it turns out, we know how to prevent things like this! But according to the experts, it just hadn't been installed.

This PTC ("positive train control") sounds like wonderful stuff. It turns out it's been around for a while. Everyone seems to agree that it would go a long way toward preventing crashes like the Philadelphia one. So what's gone wrong?

Government-Mandated Positive Train Control

Here's a good summary of the issues and problems of the wondrous PTC solution, which was mandated by Congress in 2008. It was declared by Congress that it must be completed by the end of 2015. It won't be. And the cost? The GAO estimated somewhere between $6.7 billion and $22.5 billion.

A brand-new system dreamed up by government bureaucrats in a short period of time -- of course it takes billions of dollars and many years to implement! Of course it's a completely custom system, relying on railroad-only technology that will be generations behind the general computer industry before it's even deployed! Of course everyone assumes you can spec out a never-built-before system and get it right the first time!

This is amateur-hour technology, and it is literally killing those of us unfortunate enough to be in the wrong place at the wrong time. This is a near-perfect example of bureaucratic "innovation." It is an example of the "what not how" problem of regulation: what should happen is simple declarations of goals (don't kill people) instead of gruesomely detailed directions for how to avoid killing people. The bureaucratic approach mandated by Congress has already resulted in incredible expense and multiple avoidable deaths, just as its similar approach to computer security has resulted in some of the worst security breaches in history.

The Modern Approach

There is a better way. It leverages modern computing, devices, networks and software. "Experts" will pooh-pooh the approach, saying that anyone who proposes it doesn't understand the harsh and peculiar railroad environment. That's what experts always say in situations like this, standing on their little technology island, protecting their "expertise" and their jobs, until modern, high-volume technology gets the job done. Then, without further comment, they retire.

I won't lay out the whole approach in this post; this blog has lots of the core ideas, and so do lots of modern computing technology people.

Just as mapping software on a phone can track your location and speed when you're in a car, it can do it when you're on a train. Why shouldn't lots of people have this app? Why not publish the complete map of all the train tracks? Most of it already seems to be available to consumer mapping programs -- they just need to be tweaked to allow travel on rails instead of on roads. Yes, there are areas where track maintenance is taking place where trains shouldn't go -- just like with roads! Mapping software already exists to avoid such routes -- just use it! Yes, there are switches -- how about adding them to the maps, and making whatever controls them upload their state to the cloud? Yes, there are other trains to be avoided -- how about the apps all upload their positions to the cloud, and give a view to where other trains are? Yes, there are things you should pay attention to when you're not looking at the app -- navigation apps already handle this through audible alerts or talking to you.
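The core of that app is almost embarrassingly small. Here's a sketch in Python -- the track map, mileposts, and speed limits are hypothetical, and a real version would get position and speed from the phone's GPS -- showing the essential check: compare measured speed against the limit for the current stretch of track, and alert the crew (or signal the cab harness) on overspeed.

```python
# Hypothetical speed-limit map for one stretch of track:
# (milepost_start, milepost_end, limit_mph)
SPEED_LIMITS = [
    (0.0, 12.5, 80),
    (12.5, 13.4, 50),   # e.g., a sharp curve
    (13.4, 40.0, 70),
]

def limit_at(milepost):
    for start, end, limit in SPEED_LIMITS:
        if start <= milepost < end:
            return limit
    return 0  # off the known map: treat as "stop"

def check(milepost, speed_mph, margin_mph=3):
    """Return an alert when measured speed exceeds the local limit."""
    limit = limit_at(milepost)
    if speed_mph > limit + margin_mph:
        return f"OVERSPEED: {speed_mph} mph in a {limit} mph zone -- alert crew, cut throttle"
    return "ok"

# A GPS fix like the Philadelphia pattern: 100+ mph entering a 50 mph curve.
print(check(13.0, 102))
```

Switch state and other trains' positions are the same pattern: upload a measurement to the cloud, compare it against the map, alert when the comparison fails.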

These simple steps, which could be built iteratively and deployed in weekly cycles, would go a long way toward solving the problem. There remains the problem of overriding the train controls in case something terrible happens -- but if all the conductors have the app and they have access to the engine cab, many of the potential bad things could be avoided. The potentially tricky issue of automated speed control could then be addressed -- but after all, airplanes are largely run by auto-pilot, why shouldn't trains be? If auto-pilot works for vehicles that go hundreds of miles per hour, miles in the air with no tracks, surely it can't be too hard to make a version for relatively slow vehicles without steering controls, whose only variable is speed!

While the government is mandating and regulating, billions of dollars are being wasted building systems that will be obsolete before they're installed, and meanwhile people are being killed and injured. There is a better, faster, cheaper way. Its cost to build is likely to be much less than the cost to simply maintain PTC. So let's do it!

The median annual wage of a college grad with a computer, math or statistics degree is over $70,000. This is better than the vast majority of college majors, and compares really well with the median annual wage of high school grads, which is under $40,000. The conclusions are clear:

Go to college

Major in computers, math, statistics, architecture or engineering

Otherwise, you’re screwed.

Well, all right, majoring in education or psychology leads to crappy salaries, but at least it’s better than being just a high school grad.

This is a test!

Trigger Warning! From here to the end of this post could trigger feelings of inadequacy among certain people. Others could feel anger towards the author, causing potentially dangerous heightening of the pulse rate. Others could feel that the author is hopelessly arrogant or elitist, resulting in generally uncomfortable feelings. So read on at your own risk.

This post is a test of whether you’re qualified to be a top computer programmer, or an outstanding achiever in any technical/quantitative field. The thoughts in this post up to this point summarize what the article accompanying the chart intends you to conclude, and what most people will think on looking at the chart.

The author of the article clearly failed the test.

Did you?

Understanding the data

If you haven’t already, look at the chart again. Note the big, fat explanation at the top. The endpoints of the lines represent 25th and 75th percentiles. The 75th percentile for high school grads is about $50,000. This means that a quarter of high school grads have salaries above that. The 25th percentile for computer etc. grads is roughly $50,000, perhaps a little more. Which means that a quarter of the computer etc. grads make less than $50,000. In summary: a quarter of high school grads have salaries that are greater than a quarter of college grads with degrees in computers, math or statistics. Read that sentence again. Get it? Did you figure it out before reading this?
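If you want to see the chart's logic in code, here's a synthetic illustration in Python -- the salary numbers are made up, not the chart's actual data -- of two distributions whose quartiles overlap exactly the way described: the high-school 75th percentile sits above the college 25th percentile.

```python
import statistics

# Hypothetical annual salaries in $K, NOT the chart's actual data.
hs = [28, 33, 38, 41, 46, 52, 58, 64]     # high school grads
cs = [48, 52, 61, 68, 74, 82, 95, 110]    # computer/math/stats grads

hs_q1, hs_med, hs_q3 = statistics.quantiles(hs, n=4)
cs_q1, cs_med, cs_q3 = statistics.quantiles(cs, n=4)

print(f"HS:  25th={hs_q1:.1f}K  median={hs_med:.1f}K  75th={hs_q3:.1f}K")
print(f"CS:  25th={cs_q1:.1f}K  median={cs_med:.1f}K  75th={cs_q3:.1f}K")
# Here hs_q3 > cs_q1: the top quarter of HS grads out-earns the bottom
# quarter of the CS grads, despite the much higher CS median.
```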

Implications for Hiring Computer Programmers

I hope you’ve just seen why, when I’ve hired people, I really haven’t given a %^* about their education or their degree – in fact, the higher the education and the fancier the degree, the more concerned I am to weed out the folks with bad attitudes, the ones who have been granted the knowledge and the certification to prove it, and want to spend their lives resting on and/or milking their degrees. Some of the best programmers I’ve met in decades of programming did not have college degrees. Most of the ones who are less than excellent and/or have “risen” in management are experts at glancing at things and reaching the wrong conclusions. Like most people do when looking at the salary chart above. FWIW, here are some good examples of drop-outs who did pretty well. Including the Wright Brothers -- after all, how hard can inventing the airplane be?

The people who are best in computing combine big-picture, visual/conceptual abilities with an utterly uncompromising attention to detail. Computer programs shouldn’t have even a single byte wrong, and the bytes should be selected and arranged according to a deep conceptual understanding of the problem at hand. Amateurs and pretenders don’t do well at either of these jobs, much less in combination.

Conclusion

If you care about attracting, selecting and retaining the very best software people, you would be well advised to alter your hiring practices as required to select the people who ... get ready for it ... can actually do the work! Really well! Having degrees or whatever is not nearly as correlated to that outcome as you might think.

Consider the sets "Excellence" and "Government IT." There is a great deal of evidence that these are non-overlapping sets. Put another way, the phrase "excellence in government IT" is an oxymoron. Of course, there are people who think otherwise. Mostly, these are government workers and their enablers.

Digital Government Awards

It appears there are organizations promoting and celebrating "digital government." Who knew?

Part of what these guys do is hold awards ceremonies honoring the best, the brightest and the most accomplished. There was an awards ceremony for New York in 2014.

Thirty people were individually honored for Outstanding IT Service and Support. In addition, 10 awards were given in various categories. One of the categories is related to one of my favorite subjects. The award, "Demonstrated Excellence in Project Management," is a double killer: excellence in project management, which you mostly demonstrate by chucking it over the side of the boat, and excellence in government IT, which is pretty much the null set. So "government project management"? If there ever was a candidate for something emptier than the null set, that's got to be near the head of the line.

One naturally wonders what magic project won this coveted award. This project was so good that the leader was also awarded the Best of New York Leadership Award. Here are the highlights: This is a bit hard to figure out. Mostly, it appears, he spent money and outsourced work. He put a little data center into a big central one, and by the way bought a bunch of new equipment (that's what "modernizing WCB's infrastructure" means), and he dumped thousands of cases to an outsourcer ("third-party administrator" sounds more official, doesn't it?), I guess because those poor government workers were just overworked.

But I was unsatisfied. I really wanted to know how he got the top award for project management. So I clicked to find out: And I was rewarded with this page, from the organization that leads, promotes and awards excellence in digital government:

I was truly impressed. I always wondered how all those government agencies, some of which are bound to have bright people who truly want to serve the public, managed to deliver such uniformly expensive, inefficient, labor-intensive systems that often don't work. Now we have the answer: they have an organization that leads them and shows them how it's done!

By giving awards, they in effect define excellence down. Think about this guy singled out for the leadership award: he bought a bunch of equipment (for less? more? who knows?), moved to another data center and outsourced some work. That's the best of the best! Think about what everyone else accomplished during the year!

If you're a professional software project manager, I have a suggestion. Why don't you become a consultant with Mary Kay or Avon so you can do something more worthwhile with your life?

Oh, boy, that was mean. But if you can stand it, read on.

Project Management in General, and in Software

Project management is a well-developed body of theory and practice. In most fields to which it is applied, it is the only responsible way to run things. Period.

So you'd think it would be a winner in software, which badly needs something to make it manageable. It's really hard to believe that normal project management techniques and practices wouldn't apply to software development pretty much the same way they apply to other things. But they don't.

We now have literally decades of experience showing that project management, when applied to software, simply and categorically does not work. I've covered this subject quite a bit on this blog, and devoted a whole book to exactly how and why it does not work.

It is one of the many sad results of the software industry's mad refusal to pay attention to history that this fact is not among the first things taught in school.

Project Management in Software

As it is, project management for software is a skill you can acquire. There are piles of books. There are certifications. Many of the people who go into the field are nice, well-meaning people. I like most of the ones I've met. One guy I know even teaches courses in it; from his description, it sounds like his course would be great!

But there's a problem. Not all programmers admire or even respect project managers. There are good reasons for not wanting your project to be infected with the disease of project management. But most programmers aren't particularly intellectual about it. They just want to be left alone! Some of them feel strongly about it. So I would advise project managers to watch their step.

And if you are going to get into project management and make a success out of it, do try to take a course like the one my friend teaches, not one like Dogbert's:

If you avoid the Dogbert course, your life expectancy will be considerably longer.

In a prior post, I demonstrated the close relationship between math and computer science in academia. Many posts in this blog have delved into the pervasive problems of software development. I suggest that there is a fundamental conflict between the perspectives of math and computer science on the one hand, and the needs of effective, high quality software development on the other hand. The more you have computer science, the worse your software is; the more you concentrate on building great software, the more distant you grow from computer science.

If this is true, it explains a great deal of what we observe in reality. And if true, it defines and/or confirms some clear paths of action in developing software.

A Math book helped me understand this

I've always loved math, though math (at least at the higher levels) hasn't always loved me. So I keep poking at it. Recently, I've been going through a truly enjoyable book on math by Alex Bellos.

It's well worth reading for many reasons. But this is the passage that shed light on something I've been struggling with literally for decades.

When we learn to count, we're learning math that's been around for thousands of years. It's the same stuff! Likewise when we learn to add and subtract. And multiply. When we get into geometry, which for most people is in high school, we're catching up to the Greeks of two thousand years ago.

As Alex says, "Math is the history of math." As he says, kids who are still studying math by the age of 18 have gotten all the way to the 1700's!

These are not new facts for me. But somehow when he put together the fact that "math does not age" with the observation that in applied science "theories are undergoing continual refinement," it finally clicked for me.

Computers Evolve faster than anything has ever evolved

Computers evolve at a rate unlike anything else in human experience, a fact that I've harped on. I keep going back to it because we keep applying methods developed for things that evolve at normal rates (i.e., practically everything else) to software, and are surprised when things don't turn out well. The software methods that highly skilled software engineers use are frequently shockingly out of date, and the methods used for management (like project management) are simply inapplicable. Given this, it's surprising, and a tribute to human persistence and hard work, that software ever works.

This is what I knew. It's clear, and seems inarguable to me. Even though I'm fully aware that the vast majority of computer professionals simply ignore the observation, it's still inarguable. The old "how fast do you have to run to avoid being eaten by the lion" joke applies to the situation. In the case of software development, all the developers just stroll blithely along, knowing that the lions are going to eat a fair number of them (i.e., their projects are going to fail), and so they concentrate on distracting management from reality, which usually isn't hard.

What is now clear to me is the role played by math, computer science and the academic establishment in creating and sustaining this awful state of affairs, in which outright failure and crap software is accepted as the way things are. It's not a conspiracy -- no one intends to bring about this result, so far as I know. It's just the inevitable consequence of having wrong concepts.

Computer Science and Software Development

There are some aspects of software development which are reasonably studied using methods that are math-like. The great Donald Knuth made a career out of this; it's valuable work, and I admire it. Not only do I support the approach when applicable, I take it myself in some cases, for example with Occamality.

But in general, most of software development is NOT eternal. You do NOT spend your time learning things that were first developed in the 1950's, and then if you're good get all the way up to the 1970's, leaving more advanced software development from the 1980's and on to the really smart people with advanced degrees. It's not like that!

Yes, there are things that were done in the 1950's that are still done, in principle. We still mostly use "von Neumann architecture" machines. We write code in a language and the machine executes it. There is input and output. No question. It's the stuff "above" that that evolves in order to keep up with the opportunities afforded by Moore's Law, the incredible increase of speed and power.

In math, the old stuff remains relevant and true. You march through history in your quest to get near the present in math, to work on the unsolved problems and explore unexplored worlds.

In software development, you get trapped by paradigms and systems that were invented to solve a problem that long since ceased being a problem. You think in terms and with concepts that are obsolete. In order to bring order to the chaos, you import methods that are proven in a variety of other disciplines, but which wreak havoc in software development.

People from a computer science background tend to have this disease even worse than the average software developer. Their math-computer-science background taught them the "eternal truth" way of thinking about computers, rather than the "forget the past, what is the best thing to do NOW" way of thinking about computers. Guess which group focuses most on getting results? Guess which group would rather do things the "right" way than deliver high quality software quickly, whatever it takes?

Computer Science vs. Software Development

The math view of history, which is completely valid and appropriate for math, is that you're always building on the past, standing on the shoulders of giants.

The software development view of history is that while some general things don't change (pay attention to detail, write clean code, there is code and data, inputs and outputs), many important things do change, and the best results are obtained by figuring out optimal approaches (code, technique, methods) for the current situation.

When math-CS people pay attention to software, they naturally tend to focus on things that are independent of the details of particular computers. The Turing machine is a great example. It's an abstraction that has helped us understand whether something is "computable." Computability is something that is independent (as it should be) of any one computer. It doesn't change as computers get faster and less expensive. Like the math people, the most prestigious CS people like to "prove" things. Again, Donald Knuth is the poster child. His multi-volume work solidly falls in this tradition, and exemplifies the best that CS brings to software development.
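To make the abstraction concrete, here's a toy Turing machine simulator -- a sketch of my own for illustration, not anything from Knuth. The machine below does one simple thing: it marches right along a tape of bits, flipping each one, and halts at the first blank cell. Notice that nothing about it depends on how fast or cheap this year's computers are -- which is exactly what makes it attractive to the CS mind.

    # A toy Turing machine simulator -- an illustrative sketch only.
    # A "machine" is just a transition table:
    #   (state, symbol) -> (next state, symbol to write, head direction)

    def run_turing_machine(tape, transitions, state="start", blank="_"):
        tape = list(tape)
        pos = 0
        while state != "halt":
            symbol = tape[pos] if pos < len(tape) else blank
            state, write, move = transitions[(state, symbol)]
            if pos < len(tape):
                tape[pos] = write
            else:
                tape.append(write)
            pos += 1 if move == "R" else -1
        return "".join(tape)

    # This machine flips every bit, then halts at the first blank cell.
    flip_bits = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }

    print(run_turing_machine("10110", flip_bits))  # 01001_ (trailing blank cell)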

The CS mind wants to prove stuff, wants to find things that are deeply and eternally true and teach others to apply them.

The Software Development mind wants to leverage the CS stuff when it can help, but mostly concentrates on the techniques and methods that have been made possible by recent advances in computer capabilities. By concentrating on the newly-possible approaches, the leading-edge software person can beat everyone else using older tools and methods, delivering better software more quickly at lower cost.

The CS mind tends to ignore ephemeral details like the cost of memory and how much is easily available, because things like that undergo constant change. If you do something that depends on rapidly shifting ground like that, it will soon be irrelevant. True!

In contrast, the Software Development mind jumps on the new stuff, caring only that it is becoming widespread, and tries to be among the first to leverage the newly-available power.

The CS mind sits in an ivory tower among like-minded people like math folks, sometimes reading reports from the frontiers, mostly discarding the information as not changing the fundamentals. The vast majority of Software Development people live in the comfortable cities surrounding the ivory towers doing things pretty much the way they always have ("proven techniques!"). Meanwhile, the advanced Software Development people are out there discovering new continents, gold and silver, and bringing back amazing things that are highly valued at home, though not always at first, and often at odds with establishment practices.

Qualifications

Yes, I'm exaggerating the contrast between CS and Software Development. Sometimes developers are crappy because they are clueless about simple concepts taught in CS intro classes. Sometimes great CS people are also great developers, and sometimes CS approaches are hugely helpful in understanding development. I'm guilty of this myself! For example, I think the fact that computers evolve with unprecedented speed is itself an "eternal" (at least for now) fact that needs to be understood and applied. I argue strongly that this fact, when applied, changes the way to optimally build software. In fact, that's the argument I'm making now!

Nonetheless, the contrast between CS-mind and Development-mind exists. I see it in the tendency to stick with widely used, accepted practices that are no longer optimal, given the advances in computers. I see it in the background of developers' preferences, attitudes and general approaches.

Conclusion

The problem in essence is simple:

Math people learn the history of math, get to the present, and stand on the shoulders of giants to advance it.

Good software developers master the tools they've been given, but ignore and discard the detritus of the past, and invent software that exploits today's computer capabilities to solve today's problems.

Most software developers plod ahead, trying to apply their obsolete tools and methods to problems that are new to them, ignoring the new capabilities that are available to them, all the while convinced that they're being good computer science and math wonks, standing on the shoulders of giants like you're supposed to do.

The truly outstanding people may take computer science and math courses, but when they get into software development, figure out that a whole new approach is needed. They come to the new approach, and find that it works, it's fun, and they can just blow past everyone else using it. Naturally, these folks don't join big software bureaucracies and do what everyone else does. They somehow find like-minded people and kick butt. They take from computer science in the narrow areas (typically algorithms) where it's useful, but then take an approach that is totally different for the majority of their work.

It's reported that New York City's Taxi and Limousine Commission (TLC) wants to pre-approve new software releases by ride companies like Lyft and Uber. Since the TLC is well-known to be heavily staffed with software experts, what can be bad about this idea? Other than just about everything, that is?

The proposal

Here's what they're saying:

Uber and Lyft have to buy smartphones and give them to the TLC because the Commission runs such a tight budget that there's no way it could afford the required thousands of dollars. Oh, wait ... the planned 2015 revenue of the TLC is projected to be $545.6 million, with expenses of $61,045,000. That leaves about $484 million, which is undoubtedly already committed to something or other, which is probably terribly important.

Let's assume it happens. How is it going to work? Uber gives a release to the TLC, which takes exactly how long to test it how rigorously by what means? By the time it gets around to organizing to test one release, another will have arrived. So the pressure will immediately come to have fewer, larger releases. Then will come the time when the TLC approves a release and there's a bug. There will be commissions, reviews, and a big operation will be set up to implement industry best-practices, government-style. Things will get even slower and longer, and government tentacles will start weaving their way into Uber's software development organization. In the end, New York will end up getting a small number of releases, way after the rest of the world has them, buggier than everyone else, and the costs will be passed on to the drivers and riders.

Why?

Right. Sure.

The Reality

Governments can't build software that works in any reasonable time. See this.

No matter how hard they've tried, software testing in the lab just doesn't work. See this.

They will press to have fewer releases, when more frequent releases are the key to good software quality. See this.

Finally, most important of all, we don't need to be protected, thank you very much. If it doesn't work, people will stop using it, and the company will either fix its problems or go out of business. That's the way the greatest wealth-creating and poverty-eliminating system ever invented works.

Distributed computing is a trend whose time has come ... and gone. Well, not completely. If my computers have to ask your computers a question, that's best done using something like "distributed computing." But to be used by a single software group to serve their organization's needs? Fuhgeddabouddit.

The early days of distributed computing

In earlier days, there were lots of computing problems that were too large to be solved in a reasonable period of time on a single computer. If it was important to cut the time to finish the job, you had to use more than one computer, sometimes lots of them. This was frequently the case during the first internet bubble period, for example, when the concept of “distributed computing” really got traction. The idea was simple: in order to serve lots and lots of people with your application, a single computer couldn’t possibly get the job done without making everyone wait too long. So you wrote your application so that it could use lots of computers to serve your users; you wrote a “distributed” application.

It’s always been harder to write distributed applications than non-distributed ones, and of course there’s lots of overhead in moving data from one computer to another. But if you can’t serve your users with a single-computer application, you bite the bullet and go distributed.

Distributed computing today

The most common form of distributed computing lives on today, more often called "multi-tiered architecture." This is when you have, for example, web servers front-ending application servers, which in turn front-end the computers that run a database. That's a simple, three-tier architecture. The idea is that, except for the database tier, it's easy to add computers to handle more users, and by doing much of the computing on something other than the database server, you let the database handle a higher load than it otherwise could.
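To make the tiers concrete, here's a toy sketch (names and data invented for illustration) in which plain function calls stand in for what would be network hops between separate pools of machines:

    # Toy sketch of a three-tier request flow. In a real deployment,
    # each tier runs on its own machines and each call is a network hop.

    FAKE_DATABASE = {"user:42": {"name": "Alice"}}

    def database_tier(key):
        # Tier 3: the single, hard-to-scale database.
        return FAKE_DATABASE.get(key)

    def application_tier(user_id):
        # Tier 2: business logic; easy to add more of these servers.
        record = database_tier("user:" + user_id)
        return {"greeting": "Hello, " + record["name"] + "!"} if record else None

    def web_tier(request_path):
        # Tier 1: web servers parse requests and format responses.
        user_id = request_path.rsplit("/", 1)[-1]
        return application_tier(user_id) or {"error": "not found"}

    print(web_tier("/users/42"))  # {'greeting': 'Hello, Alice!'}

The point of the arrangement is that tiers 1 and 2 can be replicated almost without limit, while the database does less work per request than it would in a one-tier design.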

There's a more elaborate form of distributed computing that also has a strong fan base, sometimes centered around a service bus. Other people call it SOA (a service-oriented architecture). These are slightly different flavors of distributed computing, often found together in the same application.

As with most ways of thinking about software, the people who love distributed computing learned to love it and think it's right. Period. Just plain better, more advanced, more scalable, more all good things than the stuff done by those amateurs who run around being amateurish.

The impact of computer speed evolution

As I've mentioned a few times, computers evolve more quickly than anything else in human experience. Do you think that the computers of today can handle more than computers could at the time distributed computing took its present form? Is it just possible that, for most applications, a simpler approach than distributed computing in any of its forms would get the job done?

Multi-core processors

We all know about Moore's Law, I hope. But people don't think so much about the impact of multi-core processors. Simply speaking, "cores" put more than one computer on the chip. Physically, you still have a single chip. But inside the chip, there are really multiple computers, one per core, each running completely independently of the others. And the way many cores are built, you actually get two threads per core -- each thread can be considered an independent execution of a program. So, in a sense, you've got "distributed computing" inside the chip!

This is incredible. In the past, you might have 3 computers on each of 3 tiers, each with a robust 16GB of RAM (who would ever need more??), for a total of 9 computers with about 150GB of RAM. Connected by dirt-slow (by comparison) ethernet. Here, you've got 2-4 times the number of threads and 10X the total RAM, all in a single machine, no bopping around on the ethernet slow lanes required. Who needs distributed computing when you've got one of these babies?!
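Here's a minimal sketch of what "use those cores" looks like in practice -- standard-library Python, a made-up CPU-bound workload, no cluster and no ethernet required:

    # Sketch: fan CPU-bound work out across local cores instead of
    # across machines. Everything here is in the standard library.
    from concurrent.futures import ProcessPoolExecutor
    import os

    def crunch(n):
        # Stand-in for a CPU-heavy task.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8
        # One worker process per core; the data never leaves the machine.
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            results = list(pool.map(crunch, jobs))
        print(len(results), "jobs done on", os.cpu_count(), "cores")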

Conclusion

Clearly, all the folks who regularly attend services at the Church of Distributed Computing didn't get the memo. This is not new news -- except to the SOA and enterprise bus enthusiasts! There's no way mere facts are going to cause them to stray from their life-enhancing faith!

But for the rest of us, it's clear. Use those cores. Use those threads. Make sure there's lots of RAM. And enjoy the numerous, multi-dimensional benefits of the simpler life.

Math and music are incredibly inter-related, as has been understood at least since Pythagoras. But they are never studied in a single academic department. Math and music are arguably more intimately bound than math and computer science. But math and music are never in the same department, while math and computer science frequently are. Hmmm....

Math and Computer Science are joined at the hip in Academia

Math and Computer Science are so intimately related in academia that they are frequently part of the same department. This is true at elite institutions like Cal Tech.

Math and Computer Science are in the same department at private liberal arts schools, too, like Wesleyan.

They're a single department at major state universities, like Rutgers.

Same thing at lesser state schools. Here's how it goes at Cal State East Bay.

I make no argument that this is universal. Don't need to. If you search like I did, you'll find that putting math and computer science in a single department is a common practice.

Why are Math and Computer Science so Academically Intimate?

Most people seem to think that math and computer science are pretty much the same thing. Consider this:

Most "normal" people who try either of them don't get very far.

The people who are way into either of them are really nerdy.

If you're good at one of them, there's a good chance you'll do well at the other.

They are incredibly detail-oriented. They're full of symbols and strange languages.

What you do doesn't seem to be physical at all. What are you doing while programming or doing math? Mostly staring into space or scribbling strange symbols, it seems.

You can write programs that do math, and math applies broadly to computing.

Meanwhile, there are other remarkably similar things that don't end up in the same department. Consider the "life sciences." They all have loads of things in common. Everything they study begins life, develops, lives for a while, maybe has offspring, and dies. DNA is intimately involved. Oxygen and carbon dioxide play crucial roles. But since when have you ever seen a department of botany and zoology? Like never, right? In the humanities it's just as extreme. Ever hear of a department of French and German? Academics already fight enough among themselves without that...

Academics clearly think that math and computer science aren't just similar or highly related. If that were all, they'd treat them the way they do languages or life sciences. A broad spectrum of academics think they're so interwoven that there are compelling reasons for studying them together. Thus a single department that has them both.

Math and Computer Science, a Marriage made in ????

It's a common practice for math and computer science to be studied together. Obviously, most people have no trouble with the concept. Of all the things to question or worry about in the world, this seems pretty low on the list.

I would like to change this. I'd like to cause trouble where there is none today -- or rather, I'd like to EXPOSE the deep-seated, far-reaching, trouble-causing consequences of the fact that everyone thinks it's quite all right that math and computer science are thought of as pretty much two sides of the same coin. In fact, I will argue that the math-computer-science marriage is just fine for math -- but the root cause of a remarkable variety of intractable problems that plague software development.

Note that I did a quick shift there. I have no problem with math and computer science being together. They kinda belong together. My problem is that everyone thinks that you study computer science in school so that you're qualified to do software development after graduating. And that software development shops require CS degrees, and pay more for advanced degrees in CS, on the theory that if some is good, more must be better.

I will flesh this out and explain why it's the case in future posts. But I thought throwing down the gauntlet was worth doing. Or at least fun!

Smart Programmers

Everyone knows that programming is hard. Everyone knows that really smart programmers can be many times more productive than average programmers. Everyone knows that your project's chances of success go way up if you have smart programmers working on it. But not everyone knows that there is a collection of flaws to which really smart programmers are particularly susceptible.

The reason is pretty simple. Programmers are people, and people have problems! But different kinds of people often have different kinds of problems, and truly exceptional people may have problems many of us are not familiar with. A person who's really tall has problems most of us don't have, like bonking his head on doorways. A person who's really famous has problems most of us don't have, like getting a quiet meal in a restaurant. And there can be a dark side to a really smart programmer's most admirable qualities.

The "Problem" of being good at solving really hard problems

My new book on Software People (here's a description, and here's the book on Amazon) has a whole section on the problems endemic to high-IQ programmers. Here is an excerpt:

[Smart programmers] are good at solving hard problems. The ability to solve hard problems distinguishes them from other people. They ignore simple problems. They disdain working on them. When a simple problem can't be avoided, they go to great lengths to turn it into a hard problem. People who are good at solving hard problems like hard problems, and can find them in places where other people see no problems at all. Sometimes this is a good thing, like when you encounter a genuinely hard problem that can't be avoided. Smart people get bored easily. Smooth, straight roads are boring. Some smart people will actively change directions and seek out a problem they suspect will be hard because it is hard. Often, smart people seeking hard problems overlook elegantly simple solutions because they are simple, or spend loads of time solving a really hard problem that actually didn't need to be solved.

If you've got a hard problem, you darned well better have people who are good at solving problems that are hard. Otherwise, you're screwed. But it happens often enough that programmers who are good at solving hard problems are really proud of that fact (why shouldn't they be?). Their self-identity is tied up in that ability.

What you really want is a programmer who is capable of solving really hard problems, but feels no need to demonstrate that ability unless it's really needed. I've definitely met people like this, but boy are they rare! You're talking about a super-nerd who is amazingly humble.

Climbing a Mountain

Suppose there's a mountain your team has to ascend. There's only one good mountain-climber in your group, and he's an amazing one -- famous for his ability to tackle near-impossible climbs. You and your team are standing at the foot of a mountain. Naturally, you turn to your expert.

Your expert, being an expert, scopes out the mountain. He sees lots of things that the normal people in your group miss. He spots a hard-to-see, tricky path that avoids the tough parts and makes the ascent a piece of cake. He also sees a route that starts out looking smooth, but has a pulse-pounding section that no one could make without his expert knowledge, experience and guidance. And then there are the other routes.

The expert has to choose between these two reactions at the end of the climb:

Boy, what an easy climb! That mountain wasn't so tough after all!

We got to the top, but we almost died on the way. If it hadn't been for X's amazing skills, we would be calling for helicopters to remove the injured and the dead at this point. Thanks, X!

Choice number 1: X's amazing skills are nearly invisible, because it "wasn't so hard after all" -- but only because X uniquely saw the hard-to-see route that avoided the difficulties.

Choice number 2: X's amazing skills are on full display, demonstrated in vivid 3-D to the team members, as he accomplishes something no normal mortal could pull off.

Hmmmm. Choice 1: make a hard thing simple, which only I could do, but in the end, everyone is left with the impression of how simple and easy it was. Choice 2: take a tough-but-possible route, in which my amazing powers are on full display. The smart person may not even be aware of how his guts and ego are pulling him to Choice #2. It's just human nature.

Conclusion

There's lots more in the Software People book, where this came from. On the one hand, outstanding software people are people. On the other hand, they have issues that are unique to their smartness. You want smart people on your team. Definitely. But you also want to help your smart people be even better than they already are by confronting and overcoming their unique problems.

We give kids sex education. We give them driver education, and require a driver test and license before driving. But we let any fool onto the internet to wreak whatever havoc they can on themselves and others without a second thought. It's time for a change!

Education for Meaningful Use

Education on the basics of how the internet and associated technologies work and how to control, respond to and interpret what you see is totally neglected. There are no significant efforts that I know of to make people educated consumers of this important, ubiquitous service that is so widely used. But there is a more important issue...

Education for Safety

By far the most important subject for internet education is safety. Maintaining internet safety has some similarities to general safety, but is different in important ways.

Internet "driving" safety

The most important aspects of safety while driving are avoiding driving while impaired in any way, and paying sharp attention to the road and other vehicles at all times. Impairment by drugs or alcohol, and distraction by texting or talking, are recognized factors.

So imagine how hazardous internet driving must be when people don't even know how to read the road signs (the URL's), and can't tell that they've wandered onto a road constructed by criminals specifically to let them steal your car, drive it to your bank and take out a big withdrawal! But that's exactly what it is! Here's an example of a more brazen attack (image from a good guy, Yoo Security), demanding that you send the money yourself. Unfortunately, there are criminals out there who have grown far beyond simple smash-and-grab operations. These sophisticated criminals with a long-term view trick you into "driving" onto their criminally-constructed "road" for the sole purpose of making your car an instrument for stealing from other people or organizations. They can make your computer into a zombie that participates in botnets. It can serve that purpose for minutes or years without your awareness. Is the problem big? You betcha. There are more computers that have been hi-jacked into botnets (maybe yours!) than most people are aware of:

Sometimes, of course, the criminals are stupid, greedy or malicious -- I guess those are the drop-outs from the "criminals should be good citizens" certification program. So your hi-jacked device could slow to a crawl, do weird things, look over your shoulder as you type until they get the information needed to drain your bank account or max out your credit card, or even (just because it's fun!) wipe out your machine while leaving some cute "it was me! Have a nice life!" message on your screen.
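By the way, "reading the road signs" isn't even hard to automate. The only part of a link that tells you whose road you're really on is the hostname, and a few lines of Python can pull it out (the lookalike URL below is invented for illustration):

    # Sketch: the hostname is the part of the "road sign" (URL) that says
    # whose road you're really on. The lookalike URL is invented.
    from urllib.parse import urlparse

    links = [
        "https://www.example-bank.com/login",
        "https://www.example-bank.com.secure-check.example.ru/login",  # lookalike
    ]

    for link in links:
        host = urlparse(link).hostname
        # Rough heuristic: the last two labels are the registered domain.
        # (Real-world rules are messier -- think .co.uk.)
        registered = ".".join(host.split(".")[-2:])
        print(link, "-> really visiting:", registered)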

Internet E-mail fraud

How often do you get a letter purporting to be from your bank asking you to send them a letter containing your account number, just so they can verify that everything's OK? If you got one, do you think you'd respond as requested? Probably not -- and apparently you're not alone, which is why such letters are rare: criminals are the supreme capitalists, and abandon unprofitable efforts before long.

But how about letters on the internet, i.e., e-mail? Along with everyone I know, I get an amazing number of criminal solicitations every day, ranging from the laughable (at least to me) to the amazingly credible. Data-driven capitalists that they are, the only explanation for the persistence of these efforts is that more than enough of them work to cover the costs and trouble of running the schemes -- certainly more than a legal job would pay. I've seen fewer solicitations from Nigeria lately, but the slack has been taken up by Libya.

Here's one of the new breed from Libya:

Here is a somewhat more plausible one from a place that really could be your bank:

Conclusion

Uneducated internet users cause billions of dollars of harm to themselves and others every year. You'd think this would result in an outcry for education from those users and the people who know them. You might think it would merit a bit of attention from the institutions who so assiduously and expensively educate, authorize, license and otherwise keep us on the straight and narrow. When I'm in Central Park in New York, there are rangers watching my every move; they set me straight when I ride my bike where I'm not supposed to, or walk in one of the ever-changing restricted areas. The conclusion is obvious: every move I make in the Park is more worthy of watchful restriction by people in uniforms than the millions of actions on the internet that seem, at least to me, far more destructive. I must be missing something.

When lots of human beings work at something for a long time, they tend to figure out how to do it. Building software appears to be a huge exception to that rule. With decades of experience under our belt, why is it that we still can't build good software?

One of the reasons software projects so often fail and improved methods aren't used appears to be that the people involved have perverse incentives.

Incentives

Everyone knows about incentives. They work. Even when we know someone is using incentives to get us to do something, we're more likely to do the thing with incentives than without them.

Perverse Incentives

Whether an incentive is perverse or not is in the eye of the beholder. From the incented person's point of view, an incentive is an incentive, and as we know, incentives work. But we normally call incentives "perverse" when they incent people to do something that most other people would agree is a bad thing.

Perverse Incentives: Mortgages

The housing boom leading up to the financial crash of 2007 was clearly driven by perverse incentives on multiple fronts. Borrowers were tempted to take what seemed to be easy money. Mortgage companies could make piles of money in fees by packaging up risky mortgages and passing them on. Rating agencies could collect loads of fees by not looking too closely. And the bankers at the top of the food chain made themselves lots of money by creating and selling fancy instruments that ignored the underlying realities and the ultimate consequences of their actions. Then it all came crashing down. Many were hurt, the big guys who made the most money least of all.

Perverse Incentives in Software

Software is so rational, so organized, the people involved are so smart and well-educated -- surely perverse incentives aren't driving behavior in software, are they?

Sorry, sweetie, perverse incentives are a human issue. Humans respond to incentives, perverse or otherwise. And as it turns out, there is a rogues' gallery of perverse incentives operating in software -- I will only scratch the surface here!

Estimates

Estimates are a GIANT BILLBOARD incenting EVERYONE involved in the process to make every estimate as long as they can possibly get away with. And since very few people (often including the programmer involved!) have any idea how long something *should* take, the estimates are typically accepted as-is -- except that managers often double them before passing them on. Why is this perverse?

The organization probably would like to get something done in the shortest reasonable time. But the programmers and project people are measured on whether they beat or miss the estimate. The longer the estimate, the better the chances of avoiding failure. It's that simple. It just makes it all the more maddening that, even with inflated estimates, things still go wrong!

Requirements

The whole modern software development process starts from requirements. Gamesmanship around requirements is therefore front and center. Estimates are based on requirements, and therefore controlling and fixing the requirements is central to the effort of creating "success." The system may fail, the users may hate it, but if it meets the "requirements," the people running the project get to declare "success." What you'd like is for the project to succeed when the needs of the business are met. The perverse incentive is for the people delivering the system to define "meeting the requirements" and then control the requirements to assure that they're met, regardless of what disasters happen to the business.

False reporting

Just like at the VA, project managers are highly incented to avoid reporting problems -- typically using big fancy reports that are chock full of meaningful-seeming stuff but are in fact just garbage. Just like in the mortgage-driven financial crisis, everyone involved is incented to declare success, take their rewards, and kick the can down the road for the next guy. Eventually, with shocking speed, it all comes crashing down, just like the financial system, and just like the mere 4 days between the laudatory article about how great Cover Oregon was going to be and the admission of total failure.

False Assessments

Here's where the rubber meets the road. Who is incented to blow the whistle on a failing software project? How, when and by whom is a software project judged to have failed? Most importantly: what are the consequences of having failed?

We all know the answer. Who has even heard of a software engineer who was fired for failure to deliver? And the people in charge? Never. It wasn't their fault! And the project didn't fail anyway! The requirements changed every month, the target kept moving, and blah, blah, blah.

Conclusion

Your kid comes up to you and asks, "can I play my video game now?" You briefly think about how your question when you were that age was "Can I go out and play now," but the kid isn't interested, and is bouncing around waiting for your "sure." Being the aspiring adult you are, you act responsibly and ask "Have you done your homework?" There's a brief pause. The kid is doing a quick risk-reward ratio calculation. If he says "yes," he probably gets to do what he wants. But you might ask to check. Hmmm.

This is the breeding ground of perverse incentives. We all learn to balance honesty, openness and getting what we want. Some of us go for honesty and openness, deciding that anything else just isn't worth it. But loads of people make an informed judgment on a case-by-case basis, much like the kid and his homework.

Whatever the morality of the case, the facts are clear: software projects fail left and right, and perverse incentives are a significant factor in making them fail. Without changing the incentives, we're unlikely to abandon the Bad Old Way of building software and achieve success.

I've just published my book on Software People -- an insider's look at what programmers are like. It's got the same tacky cover design as the three books already publicly available:

I attempt to cover material in the book that I haven't seen elsewhere. Here are some of the topics:

A description for outsiders of all the stuff you've got to know in order to be a programmer -- learning a language is just a tiny bit of it!

A statement of the programmer's dilemma -- how all-consuming mastering even a slice of software usually is, and the difficult trade-offs you're then faced with involving the other skills you need to succeed in an organization and in life.

A discussion of how there are levels and levels of software skill -- it isn't like learning to drive a car. Similarly with productivity.

An extensive discussion of the cultural divisions and wars that blaze through the software community, with mutually incompatible "religions" living in separate colonies, looking with disdain and pity at those who follow false gods.

How people who are excellent at software, far from being honored, are often diminished and marginalized.

Lots of material about hiring. Who decides, on what basis, common mistakes.

A discussion of the deep-seated cynicism that infects a large number of programmers.

Technology organizations, managers and decision making.

Typical patterns I've seen in software people.

An extensive discussion, with examples, of the flaws that are characteristic of high-IQ programmers.

Finally, a discussion of the role of the CEO in a company where software plays a key role.

I've been at work for a long time on my series of books on how to Build Better Software Better. The books in the series have circulated in draft form, and each has undergone multiple revisions over a period of years. I've already released my basic books on Software QA, Software Project Management and Wartime Software. The one on People underwent 9 major revisions. Software People is less technical and more readable by civilians than the others.

I have a couple more that are no longer undergoing revisions and are about ready for general circulation. They are:

Software Business Strategy. There are some things that are unique to running a software business that apparently are not taught in business schools, and are common errors in the software businesses I see. I spell out the problems and solutions in this book.

Software Product Design. You'd think we'd have it down by this point. But I see software product design happening all the time, and mistakes made over and over. In this book I describe the best methods for creating successful software products and avoiding the common mistakes.

Software Evolution. When you see software built over decades and decades, patterns emerge -- and it's far from just onwards and upwards! These software patterns are strong and they repeat, like the well-known Innovator's Dilemma, only much more software-specific. They have amazing predictive power.

I will publish the rest of the books as time permits. Meanwhile, I'm pleased that I've finally released the Software People book for Kindle, more than 12 years after I circulated version 1.

The problem is big. It's getting bigger. Here's one summary of what's been happening:

What's the problem here? Is it really so hard to achieve cybersecurity?

I suggest that the issue is clear and simple: the people in charge of keeping your information safe are not motivated to keep it safe. The consequences to them personally of failing to keep it safe are minimal, and so they simply don't take the trouble to do it.

Motivation and consequences

Whether we like it or not, people are motivated on the positive side by rewards, and on the negative side by punishments. If you see people acting in a certain way, you ask, what is the incentive that is encouraging that behavior? The incentive could be positive (you get something good) or negative (something bad that used to happen when you did that thing no longer happens). A great deal of human behavior can be explained by personal incentives: rewards and punishments.

Incentives in Cybersecurity

So what happens to people in the companies when one of these big data thefts happens? Are the front-line drudges punished while the executives are given a free pass? Do the people where the buck supposedly stops lose their jobs, while the worker bees who were just executing a bad plan are let off lightly? Answer: there's some bad publicity, but no one loses their job, no one's pay is docked, nothing!

If no one at the companies even went through the motions of trying to keep your data secure, the publicity might be bad. But that's what regulations are for -- CYA. The company claims it was following all the regulations that are supposed to keep data secure. So how is it their fault if, in spite of all their excellent, by-the-book efforts, the data walked out the door anyway? Case closed. The company and all its employees, from top to bottom, are off the hook!

Incentives and Motivations

When a company loses money and market share, the CEO is likely to lose his job. When a person in accounting delivers bad data, they're likely to lose their job. When a department does really well, the people in charge are frequently given bonuses or promotions. They get better jobs and make more money. In most industries, sales people are incentivized by commissions -- if they sell more, they make more money. It's everywhere. To encourage good behavior, reward it. To discourage bad behavior, punish it.

Everyone says they're concerned about protecting your data. They use as evidence the fact that they conform to all relevant regulations and spend lots of money on security. So if, in spite of all this, the data is lost, it can't possibly be their fault!

Does that mean the regulations themselves are bad or ineffective? No one is claiming that (except for me and a few other voices in the wilderness), but think about this: when has any regulator lost anything because they were doing a bad job at regulating? The very notion boggles the mind!

Bottom line: they have no incentive to protect your data! We know this because, when people are properly motivated to get a job done, they somehow find a way to get it done. The fact that they are unmotivated and have bad theories practically guarantees failure.

Conclusion

Lack of motivation.

No incentives.

Ineffective regulations.

Therefore, cyberthefts will continue unabated until this changes. Q.E.D.

Big Data is awfully important now, and it's poised to become the driving force in computing in 2015! If you don't think so, just look at this:

NOT!! I've pointed out how arithmetically-challenged most things called "big data" are. I've pointed out how it's mostly a technology fashion trend with little substance. And how when you dig into it, "big data" often isn't big. Or meaningful, or relevant.

You might think, with the emperor strutting around butt-naked and ugly, we would be embarrassed and turn away. Sadly, it appears to be getting worse. Computing is supposed to be dominated by all sorts of number-centric nerds. The Big Data trend is strong evidence that trends in computing are every bit as science-based as fascination with the Kardashians.

The methods for achieving effective cybersecurity for a large class of applications are simple and obvious, but almost never implemented. If the methods were implemented, they would prevent the kind of massive, high-profile data loss that has been increasingly in the news. The methods make common sense to most normal people – but as we all know, computer “experts” are anything but normal. The industry needs to get it together, stop spending massive amounts of money on futile efforts to secure consumer data, and start implementing common-sense measures that work!

The current approaches to CyberSecurity are fundamentally flawed

That's why they don't work! It's as if you're playing pool, missing a lot of your shots, and spending lots of effort gesturing, jumping and grunting as each shot fails to achieve its objective -- do you think your problem is not jumping vigorously enough or grunting loudly enough? That's what most enterprise responses to cyber-insecurity amount to. Increasing the money spent on things that don't work won't suddenly make them start working.

The basics

No matter what methods we use, if we continue to deploy large numbers of security guards who are nearing retirement against small, smart, fast-moving ninja bad guys, we’ll lose. If we continue fighting the last war, we’ll lose. If we continue to think that this game is all about how high and thick the walls of the castle are, we’ll lose.

New approaches, new methods

They’re not really new – like most good ideas, they’ve been thoroughly proven in other domains. We know they work. It’s a matter of adapting them so they apply to our computer systems.

A lot of smart computer people have worked on the security problem for a long time. The issue isn’t something abstruse like better encryption algorithms. It’s simple!

First, realize that anybody who walks in the door could be a bad guy.

Second, monitor and track the valuable stuff that you don’t want walking out the door.

Both of which, believe it or not, we fail to do today inside computer systems!

How retailers do it

Retailers with lots of low-value goods like grocery stores have store monitors and checkout areas. Anyone could be a thief, so people are assigned to monitor actions accordingly. Some goods may be valuable and easy to hide, like razor blades. Those are often displayed, but require a store employee with a key to let you get them.

Clothing stores frequently have security tags on every single item. The tags are removed using a special tool during the check-out process. If you try to walk out of the store with an item that is still tagged, alarms ring and security people grab you.

Stores with very high value goods, like jewelry stores, have locked cases and a heavily human approach to security. Basically, at least one person watches each customer (and sales person!) who is handling jewels, at all times. They carefully manage the number of items that are outside a locked case at any moment. While the guards appear to watch the customers (i.e., the potential thieves), what they really do is watch the jewelry. They track each item until it's been bought or safely returned to its case.

The retail approach to securing valuable items is clear: using whatever combination of automated and human means that make sense, track every valuable item, and assure that when the item goes out the door, it has been cleared to go out with the person it’s going out with.

Applying Cybersecurity methods to retail

What would retail look like if we used the kind of methods used by computer experts?

First, every store would be surrounded by thick, high walls. No display windows! There would be strictly controlled ways of getting in -- think TSA security at an airport. Further imagine that the world was awash with fake and stolen ID's, so that getting into the store is odious for legitimate customers but, for a skilled bad guy, not too hard.

Now imagine that once you’re in, there is no one watching the goods, there are no security tags on the clothes, no security cameras and no guards. You can grab a string of shopping carts, pile them high with goods, and wind slowly through the aisles. At check-out – well there is no check-out! You’ve been thoroughly vetted on the way in, after all, so you must be OK. When you’re done “shopping,” you can just leave! With your mountains of goods!

Of course, most visitors to this imaginary store are legitimate. They put up with the horrible entrance gauntlet because all stores have something like it. They get what they need and somehow arrange with the store to pay for it. But there's nothing to stop thousands of bad-guy visitors from walking out with thousands or millions of items each, or millions of visitors from walking out with normal-sized shopping carts. Whatever works.

You might think I’m exaggerating. I wish I were.

Applying Retail methods to Cybersecurity

It’s a bit more technical and less visual to see how retail methods can be applied to computer systems, but the basic concepts are clear. While current cybersecurity focuses on perimeter defense (like TSA security for stores), the retail approach would be a bit looser. After all, if the bad guys get in but can’t get away with anything valuable, they haven’t accomplished much, have they? How proud is a bank robber who’s broken into the safe but can’t leave with the dough? How fruitful is his career of crime if, every time he passes the demand note to the teller, she just smiles and says “next customer, please?”

Applying the retail method to computers requires a completely new approach to tracking what visitors do when they're inside the computer. While tracking their actions is important, what really needs to be done is to track the "goods," the valuable data items. The retail approach would differ according to the value of the items. If they're like clothing, each item would be checked on the way out to make sure it's authorized to leave. If they're like jewels (for example, personal information), each item is watched like a hawk from the moment it's "picked up" by a "customer" (program). Does the customer have a couple of jewels? That could be OK, but we're more alert. Does the customer have ten or more? Quietly circle the customer, watch the doors, and make sure there's no escape.
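As a minimal sketch -- thresholds and names invented for illustration -- the core of "watch the jewels" is nothing more exotic than counting the sensitive records each session picks up and escalating as the count climbs:

    # Sketch of "watch the jewels": count the sensitive records each
    # session accesses, and escalate as the count climbs.
    # The thresholds and names are invented for illustration.
    from collections import defaultdict

    ALERT_AT = 3    # a couple of jewels could be OK, but we get more alert
    BLOCK_AT = 10   # quietly circle the customer, watch the doors

    access_counts = defaultdict(int)

    def record_accessed(session_id, record_id):
        access_counts[session_id] += 1
        count = access_counts[session_id]
        if count >= BLOCK_AT:
            return "block"   # make sure there's no escape
        if count >= ALERT_AT:
            return "alert"   # watch this one closely
        return "ok"

    # A "customer" picking up one jewel after another:
    for i in range(12):
        print(i + 1, record_accessed("session-7", "record-%d" % i))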

The method needs to be extended to apply to the unique circumstances of the computer. Computer bad guys can easily assemble thousands of confederates to do their bidding. The bad guys can dress and act however the boss wants them to. However, they are unlikely to act just like normal shoppers. But I don’t want to take this too far in a blog post – we’re coming up to the edge of methods I’d rather not disclose.

Conclusion

Computer systems, corporate and government, will continue to be breached at an alarming rate, which is of course much higher than is publicly disclosed. More money will be spent and people hired. More standards will be set, regulations promulgated and enforced. As should be obvious by now, most of the money will be wasted, most of the people will accomplish nothing, and the regulations will increase costs while making things worse. Unless something changes.

The problem of cybersecurity can be solved. But it can only be solved if: we acknowledge we’re at war and act accordingly; we apply within the guts of our systems common-sense methods whose principles are clear, obvious and proven in other domains; and we start acting as though we actually want to solve the problem, as opposed to the current strategy of denial, cover-up and blame-shifting.

I get my health insurance through Anthem. Corporate Anthem was hacked, and the company has made a mess of their customer relations after the hacking, as I've described from receiving their "help." I now see evidence that my personal information was accessed, and Anthem has never told me.

Anthem and HIPAA

Anthem is really committed to HIPAA. Here's how they explain it on their website.

It's clear from this that Anthem is very committed to privacy and security. Both! Here's some of what they say about privacy.

And here's some of what they say about security.

Anthem clearly had all the bases covered. Except they didn't. What's mind-blowing to me is that, in spite of all the security-privacy-lah-de-dah, someone walked off with the personal information of tens of millions of customers -- and no alarm even went off! The breach was actually discovered by an alert grunt in the trenches.

Hacking David Black

Anthem has told its members that it would let each of them know, once it discovered whether that member was among those who had been hacked. I haven't heard a thing from them. But I now know that it's likely my information was stolen.

I went into the standard Anthem consumer portal a little while ago.

I poked around a little, and discovered this little bombshell:

In other words, "I" had logged in at quarter after one in the morning on Saturday, Jan 31, 2015. However, I personally wasn't logged into Anthem at that time. I was asleep.

The Good News

There's good news here! I already knew that Anthem either didn't know whether I'd been hacked or had decided to not tell me, so no change there. My opinion of Anthem was already subzero, so it didn't get noticeably lower. Furthermore, in spite of all this, Anthem executive management will continue to rake in millions, and they're pretty sure that profits won't be harmed:

What a relief!

Conclusion

Nothing new here. Big corporations comply with all the burdensome regulations, and tens of millions of private records somehow get stolen. The result: lots of face-saving talk that does no one any good, and increased competition-stifling regulation that does nothing to solve the problem. Nothing to see here, people ... move along...

I'm hoping that people will start writing songs about cyber-insecurity, and that a good one will emerge that will be acclaimed as the "Anthem of Cyber-Insecurity." It will be sung quietly by groups of computer users who hold hands as they hear the details of yet another massive computer breach. While singing, some of the much-abused users will be silently praying that their "protectors" get bombed by Facebook friend requests by identity-thieved replicas of themselves, while others will pray for the end of "help" that isn't.

The Anthem Attack

I'm one of those praying users, because I'm a member of Anthem, the company that "lost" the personal information of "tens of millions" of its members sometime in 2014; they're not sure how many, whose records were "lost," or when it happened. Here's a personalized communication I received from Anthem:

Anthem has made a priority of communicating with its customers about the attack. When you're in the glare of publicity like this, I'm sure great care has gone into each statement on the case. That's probably why I have received more than one missive with the same date that spins things in different ways. For example, the Feb 13 note above refers simply to "cyberattackers" who "tried to get" private information, raising the possibility that their efforts were foiled by the valiant workers at Anthem.

Check out the identically-dated but substantially different Feb 13 note below.

In this second attempt, Anthem tells us about "cyber attackers" (now two words instead of one) who executed a "sophisticated attack," and "obtained personal information" "relating to" their customers. I guess it was successful? But maybe not, because the behavior of these guys isn't a felony, it's merely "suspicious activity" that "may have occurred." Furthermore, they carefully state that the personal information wasn't the customer's actual personal information, but merely "related to" said personal information. Hmmm....

What "May Have Been" Lost

So what information may have been lost during this incident that may have occurred at some unknown time? A fair amount.

Again, what's clear is that Anthem isn't clear. The information "accessed" (wasn't it stolen?) "may have included names, ..." But maybe not, we are led to believe. If the information that may have been accessed may have included my Social Security number, why isn't it possible that all sorts of other information was also accessed? We are supposed to be reassured that "there is no evidence at this time" that this actually took place -- a nearly ideal way of phrasing something that is supposed to sound like reassurance, but provides full CYA.

Anthem Provides Protection

Anthem has a whole website set up to let its members know what's going on, and to let customers know how they can get protection against the possible unauthorized access of their personal information.

Here's what Anthem will do: they'll pay a third party to help you out.

If you get in trouble, you can call the service, and they'll help you out. Meanwhile, your personal information may be in the hands of people who had no authorization to access it. If they're the kind of people who do "unauthorized" things, who knows what perfidy they'll stoop to?

Anthem's Additional Protection

The basic service you get isn't protection at all, as they make clear. Nonetheless, "For additional protection..." -- on top of the non-protection they already provide -- you can sign up for more. What exactly is this "more"? Quite a bit! Here's some of it:

Wow, and all for free! Let's sign up!

So you enter your e-mail, and get a code, go to the website, enter the code, and finally get to register for protection.
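For what it's worth, that whole dance is a garden-variety one-time-code flow. Here's a hedged sketch of it in Python -- the names, the six-digit code, and the ten-minute expiry are generic assumptions, not the vendor's actual implementation:

    import secrets
    import time

    pending = {}  # email -> (code, time issued)

    def send_code(email):
        """Issue a one-time code and (pretend to) e-mail it."""
        code = f"{secrets.randbelow(10**6):06d}"  # 6-digit code
        pending[email] = (code, time.time())
        print(f"(pretend we e-mailed {code} to {email})")

    def verify(email, code, ttl=600):
        """Accept the code only if it matches and hasn't expired."""
        issued = pending.get(email)
        if not issued:
            return False
        real_code, at = issued
        return code == real_code and time.time() - at < ttl

    send_code("member@example.com")
    # the user reads the mail, returns to the site, types the code in:
    # verify("member@example.com", "123456") -> True only for the real code

Nothing wrong with the flow itself; it's the destination that worries me.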

What happens next? Here's the page:

Wow, this is amazing!

I have a chance to enter into yet another website a good fraction of the private, personal information entrusted to a giant insurance company -- information which, while under their stewardship, "may have been accessed" by "unauthorized" entities.

The security geniuses who failed to keep my information secure want me to hand it over again, this time to a company they endorse as wonderful security experts. Anthem was just terrific at protecting my data -- so it goes without saying that their endorsement of this newly picked partner is rock-solid.

These guys are bureaucrats. Read this about bureaucratic security cred. And for more, this.

Summary

Anthem's revenues are greater than $60 Billion. They can afford to keep customer data secure.

Anthem's executives are paid enough to do their jobs well. Last year, the CEO made over $16 million and the CFO over $7 million.

And yet...

It took a guy on the bottom rung of the ladder to pay attention and notice something was wrong; had he not cared, the outflow of personal data would still be going on, as it had been for an indeterminate amount of time before the alert employee's observation.

No system or procedure established by the rich, giant entity had anything to do with noticing the breach, much less preventing it.
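For contrast, here's the crudest sketch of the kind of alarm that, by all accounts, never went off: compare each account's daily record pulls to its own baseline. Every name, number, and threshold here is invented for illustration:

    # typical records pulled per day by each service account (hypothetical)
    baseline = {"db_service_5": 200}

    # what actually moved on the day in question (also hypothetical)
    daily_pulls = {"db_service_5": 1_750_000}

    def egress_alerts(pulls, baseline, factor=10):
        """Yield accounts whose daily pull volume dwarfs their baseline."""
        for account, count in pulls.items():
            normal = baseline.get(account, 0)
            if count > factor * max(normal, 1):
                yield account, count, normal

    for account, count, normal in egress_alerts(daily_pulls, baseline):
        print(f"ALERT: {account} pulled {count:,} records (baseline {normal}/day)")

This isn't rocket science; it's a for-loop. A $60 Billion company relied on one alert employee instead.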

Everything about what they've done since exhibits the same lack of attention to detail and I-don't-care attitude that made the breach possible. What they mostly seem to want is to dash off letters riddled with errors and assurances, focused above all on their public image.

Their offer of "protection" is a cruel joke, exposing the gullible who accept the offer to further dissemination of their private information.

Conclusion

I'm waiting for that anthem as I sit, holding hands in a circle with my fellow users, thinking dark thoughts. And I'm as likely to enter my personal data into the Anthem authorized "protection" service as I am to publish it on this blog.

I have pointed out Facebook's lack of desire or ability (who cares which?) to deliver software that actually works. I've pointed out that they're hardly alone in this respect. It's important to accept this observation as true, so that you can change behaviors that may have been unconsciously predicated on the supposition that Facebook delivers great software, effectively and efficiently. They don't. So don't hire their people and expect great things to happen, and don't mindlessly emulate their methods or use their tools!

The Unspoken Assumption

Facebook is a wildly successful company, worth over $200 billion. I'd like my company to be worth even 1% of Facebook. So I better find out what Facebook did, and learn from it. Facebook is a software company, so their engineers must be smart and effective. I better get some of them in so they can teach us the "Facebook way." And their tools -- wow. If Facebook uses something, what an endorsement that is. My guys had better have a real good reason to use something else; I look at what FB's worth and what we're worth -- don't we want to be like them? If a tool or method is good enough for FB, it should be plenty good enough for us.

The role played by software in FB's success

Here's the logic:

FB is wildly successful.

FB is built on software.

Therefore, FB software must be wildly excellent.

We already know by examining the quality of FB software that it's crappy. So we have reason to suspect that the virtues of FB software may NOT be a driver of FB's success. Consider this thought: What if FB is wildly successful IN SPITE OF its crappy software? If that's true, the LAST thing you'd want to do would be to infect your reasonably healthy engineers with disease vectors from FB.

Explaining FB's Success

There are lots of reasons software companies can become very successful other than having great software. In fact, by the time a company gets large, bureaucracy and mediocrity normally take over, and whatever great qualities the software once had are drained away. The most common reason a software company gets and stays successful is the network effect: the self-reinforcing notion that "everyone" is using the software, therefore I should too.

The network effect becomes even more powerful when there's a marketplace. eBay is a great example. If you're a seller, you want to sell in the place that has the most buyers. If you're a buyer, you want the greatest choice of things to buy. Similarly, if FB is where all your friends are, you'd better sign up -- which makes the network effect even stronger.
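One crude way to see why the flywheel is so hard to stop is to value a network by its possible connections -- the rough heuristic often called Metcalfe's law (my gloss, not anything FB claims). A sketch with made-up numbers:

    def metcalfe_value(users):
        """Possible pairwise connections -- a rough proxy for network value."""
        return users * (users - 1) // 2

    incumbent, challenger = 1_000_000, 100_000  # hypothetical user counts
    print(metcalfe_value(incumbent) / metcalfe_value(challenger))  # ~100

Ten times the users, roughly a hundred times the connection value. Every new user gains far more by joining the big network, which makes it bigger still. Software quality doesn't enter into the arithmetic.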

FB, by chance or plan, leveraged the network effect for growth brilliantly. Harvard already had a physical book with everyone's pictures in it, which students called the Facebook. The basic education and promotion problem was solved out of the gate: Harvard students knew what a "facebook" was; they all had a physical one, and used it, if only because their own information was there. For example, here's me in the 1968 edition:

However strait-laced those Harvard freshmen looked, a fair number of them were hackers and troublemakers. Here's the very last page of the 1968 FB. Look at the last guy listed.

There's a similar entry, with a different photo, at the start of the book.

Zuckerberg was solidly in the long-standing Harvard hacker tradition. He had already illicitly grabbed student photos for a prior application, which both got him in trouble and made him famous on campus. So when he launched "thefacebook," of course all the Harvard students would check it out. He did this in January. It was used by about half of all Harvard undergrads within a month.

His next smart move was to open it just to students at a couple more elite schools, and then Ivy League schools. Once established there, he expanded. He did NOT open the doors and let anyone join -- he moved from one natural community to the next, letting the network effect do its magic before moving on. Finally, alumni were allowed to join, but only if they had a .edu address proving their affiliation. That's when I joined. Only after a whole generation of students had made it the standard did FB allow their parents to join.

The quality of the software had nothing to do with this. If people had had to pay for it, FB would have flopped. Feature after feature came pouring out of the self-declared brilliant minds of the top people at FB -- many of them flops, mixed in with scary experiments with privacy. But it was "good enough" most of the time, it was free, and it was where your friends were. What can you do?

The conclusion is clear: FB grew to be a huge success IN SPITE OF having rotten software quality and development methods that are just horrible.

The FB environment and yours

Facebook software development methods and tools are NOT something a small, fast-moving, high-quality software shop should want to emulate. Their quality methods in particular are trashed not only by their users, but also by a fair number of ex-employees. The same goes for the computing and server environment.

If you find a talented ex-FB-er, by all means hire him or her -- but only after verifying that they're sick of how things are done at FB and want to work at a high-quality place.

Above all, don't emulate the actions of FB's leadership. It's the network-effect flywheel that continues to bring eyeballs to their applications, NOT their great software.

And think about this: if they're so brilliant and such great developers, why have they made about 50 acquisitions in their short life -- a couple of which are now important to their growth?

Facebook is an incredibly successful company, one of the most valuable on the planet. It is natural to assume that a main reason for this is that they've got a boatload of great programmers who produce code that users love. This assumption is wrong. In fact, the widespread adoption of Facebook masks deep, long-term quality issues that are not getting better.

Facebook Success

Facebook recently passed $200 billion in market value. Amazing! It has well over a billion users worldwide and no serious competition. No one can question FB's success in user count and market capitalization.

Facebook Mobile App

Mobile device use is going through the roof. We are in the middle of a massive, rapid migration from workstations and laptops to tablets and smartphones. This trend impacts FB just like everyone else. At the recent Money2020 conference, a top FB executive laid out the numbers, which are stunning; in short, FB mobile use nearly equals normal web use. If anything is important at FB, it's got to be getting the mobile app right.

Facebook Mobile App Quality

So how is FB doing, this premier, ultra-successful company with no lack of resources to do an excellent job? They've got to be doing way better than the rest of the industry, right?

Let's start by looking at user reviews:

Not too bad, 4 stars out of 5, right? But out of more than 22 million reviews, more than a quarter -- over 6,000,000 reviewers -- gave 1, 2 or 3 stars! Let's look at a few of those reviews. (I didn't scan for exceptionally bad reviews; I just picked ones that were near the top of the Play store.)
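To see how a 4-star average coexists with millions of unhappy users, here's the arithmetic with a hypothetical distribution -- my illustrative numbers, chosen only to match the totals above, not Facebook's actual breakdown:

    # hypothetical star counts summing to 22,000,000 reviews
    reviews = {5: 13_200_000, 4: 2_800_000, 3: 2_000_000,
               2: 1_000_000, 1: 3_000_000}

    total = sum(reviews.values())                               # 22,000,000
    average = sum(stars * n for stars, n in reviews.items()) / total
    low = sum(n for stars, n in reviews.items() if stars <= 3)

    print(f"average: {average:.1f} stars")                      # 4.0
    print(f"3 stars or below: {low:,} ({low / total:.0%})")     # 6,000,000 (27%)

A handsome-looking average can hide six million people telling you the product is broken.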

Here are a couple reviews. Cindy gave 1 star because the app doesn't work at all, and Johnny gave 2 because he suddenly can't avoid being buried in notifications.

Here are a couple more reviews. The third reviewer gave 3 stars even though the app is basically dysfunctional.

These are educational:

The 3 on the left describe things that worked in a prior release and no longer work -- regressions, the cardinal sin of quality testing. Look at Bratty's review awarding 4 stars, even though he/she can't use the app at all. Makes you wonder whether anything short of 5 stars counts as trouble for FB. Jeremy's review sums it up: "you're still not listening to your users." If only 5 stars represents a satisfied user, the ratings mean that about half of FB app users have a serious bone to pick. Which is quite a statement.

FB App Quality in Context

Compare the performance of the FB app to the performance of your car. Getting a new release of the app is similar to getting your car back from the repair shop, only with little trouble on your part and no expense. Most cars run pretty well -- they start in the morning, run through the day, and rarely break down. When you get your car back from the repair shop, it's even better, even less likely to break down.

Not true for FB. Even though it's "in the repair shop" pretty frequently, the FB "mechanics" all too often find a way to break things that used to work, and fail to fix things that didn't work when it went into the "shop." FB programmers and managers think they're way smarter than auto mechanics, but if the car people performed even a little bit like the FB crew, they'd be out of business. The reality is that, with all their oh-so-highly-educated-and-smart mountains of cool (mostly) dudes, the FB crowd can't come close to delivering the quality that nearly every corner-garage mechanic delivers every day.

FB quality stinks, and it stinks for their fastest-growing, flagship product. In saying so, I'm simply summarizing the expressed experiences of literally millions of their users. There are ways to achieve high-quality software. FB does not lack the resources. The fact that they don't deliver quality and aren't even embarrassed about it tells us that they just don't care.