Monday, August 29, 2016

Here's a little principle for all TO DO and NOT TO DO lists. I suspect most of us do this instinctively, but it's worth making it an explicit part of your planning process:

Break the tasks into small tasks for scheduling purposes.

If you have large tasks to do, they may keep falling to the bottom of your stack because they look too imposing or because you rarely have large blocks of empty time.

Also, smaller tasks have fewer conditions needed to start. A large task may have so many pre-conditions that even when you have the time, one of the conditions may be missing, so you don't start.

One way to break up a large task is to create smaller tasks, each one of which removes a pre-condition for the larger task. For example, if you need to call a source, but you do most of your work at a time when the source may not be available, make the call into one small task so it can be removed from the constraints on the large task.
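As a toy illustration of the precondition idea (the task names and structure here are invented, not from the post), each small task can be modeled as removing one precondition from the large one:

```python
# Toy model of precondition removal. Task names are invented examples.

large_task = {
    "name": "write interview article",
    "preconditions": {"call source", "gather background", "free afternoon"},
}

def complete_small_task(name, large):
    """Finishing a small task strikes it from the large task's preconditions."""
    large["preconditions"].discard(name)

# "Call the source" becomes its own small task; completing it leaves the
# large task with one less reason not to start.
complete_small_task("call source", large_task)
print(sorted(large_task["preconditions"]))
```

Each precondition peeled off this way makes the large task startable in smaller, more common windows of time.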

Small tasks are also motivating because you receive frequent feelings of accomplishment.

Wednesday, August 24, 2016

The questioner wrote: "I have an idea for a great book. I only have this one idea. I worry that if I write this book and it doesn’t go well, I might end up discouraged with no more great ideas to write about."

Someone once said, “There’s nothing as dangerous as an idea—especially if it’s the only idea you have.”

By themselves, great ideas for books are essentially worthless. Hardly a month goes by without some eager soul telling me they have a great idea for a book that they want me to write in partnership with them. They simply don’t understand that it’s not the idea that’s worth anything, but only the writing of it.

99+% of “great ideas” never get written. Even though I’ve published over 100 books, I have had hundreds more “great ideas” that I’ve not (yet) written. I certainly don't need somebody else's great idea in order to have something great to write about.

So, either stop asking meaningless questions and write your book, or just file this “great idea” in the wastebasket and get on with your life. Chances are you’ll have hundreds of other great ideas in your life.

Saturday, August 20, 2016

When this question came up on Quora.com, lots of good and useful answers were given, but they all seemed to be external answers. For me, with more than 60 years of programming experience, the one thing that made me a better programmer than most was my ability and willingness to examine myself critically and do something about my shortcomings. And, after 60 years, I'm still doing that.

I also examine my strengths (longcomings?) because I know that my greatest strengths can quickly become my greatest weaknesses. I've watched many programmers who examine themselves critically, but then work to improve their greatest strengths, to the exclusion of their weaknesses. That strategy takes them a certain distance, but the nature of computers is to highlight your greatest weaknesses as a programmer.

Computers are like mirrors of your mind that brightly reflect all your poorest thinking. To become a better programmer, you have to look in that mirror with clear eyes and see what it's telling you about yourself. Armed with that information, you can then select the most useful external things to work on. Those things will be different for you than for anyone else because your shortcomings and strengths will be unique to you, so advice from others will often be off target.

Wednesday, August 17, 2016

The question was posed: "How does a tester help with requirement gathering?" A number of good, but conventional, answers were given, so I decided to take my answer to a meta-level, like "what is the tester's job, generally, and how does that relate to this question?"

The tester’s job is to test, that is, to provide information about the state of a system being built or repaired. Therefore, the tester should help with requirement gathering or any other phase of development where the job of testing might be affected.

A professional tester will involve her/himself in all phases, not so much to “help” others do their job, but to assure that s/he will be able to do a professional job of testing. So, for example, in the requirements work, the tester should obviously monitor any requirements to ensure that s/he can test to see if they’re fulfilled. To take just one example, the tester should block any vague statements such as “user-friendly” or “efficient” or “fast.”

Moreover, requirements are “gathered” in many ways besides an official requirements gathering phase, so the tester must always be on the alert for any way requirements can creep into the project. For instance, developers frequently make assumptions based on their own convenience or preferences, and such assumptions are not usually documented. Or, salespeople make promises to important customers, or put promises in advertisements, so a tester must spend some time speaking (at least informally) to customers—and also reading ads.

In short, a tester must be involved right from the very first moment a project is imagined, with eyes, ears, and, yes, nose always on the alert for what must be tested, how it must be tested, and whether or not it can actually be tested. That's the way a tester "helps" the requirements gathering and all other phases of a project's life, staying alert for anything that could affect accurately documenting the state of the system.

As a consequence of this broad view of the tester's job, a tester has to resist any assertion that "you don't have to be involved in this phase," regardless of what phase it is. Obstacles to testing can arise anywhere.

"You can fool all of the people some of the time, and some of the people all of the time, but you can't fool all of the people all of the time."- Abraham Lincoln

People in the software business put great stress on removing ambiguity, and so do writers. But sometimes writers are intentionally ambiguous, as in the title of this book. "Quality Software Management" means both "the management of quality software" and "quality management in the software business," because I believe that the two are inseparable. Both meanings turn on the word "quality," so if we are to keep the ambiguity within reasonable bounds, we first need to address the meaning of that often misunderstood term.

1.1 A Tale Of Software Quality

My sister's daughter, Terra, is the only one in the family who has followed Uncle Jerry in the writer's trade. She writes fascinating books on the history of medicine, and I follow each one's progress as if it were one of my own. For that reason, I was terribly distressed when her first book, Disease in the Popular American Press, came out with a number of gross typographical errors in which whole segments of text disappeared (see Figure 1-1). I was even more distressed to discover that those errors were caused by an error in the word processing software she used—CozyWrite, published by one of my clients, the MiniCozy Software Company.

The next day, too, the Times printed a letter from "Medicus," objecting to the misleading implication in the microbe story that diphtheria could ever be inoculated against; the writer flatly asserted that there would never be a vaccine for this disease because, unlike smallpox, diphtheria re-

Because Times articles never included proof—never told how people knew what they claimed—the uninformed reader had no way to distinguish one claim from another.

Figure 1-1. Part of a sample page from Terra Ziporyn's book showing how the CozyWrite word processor lost text after "re-" in Terra's book.

Terra asked me to discuss the matter with MiniCozy on my next visit. I located the project manager for CozyWrite, and he acknowledged the existence of the error.

"It's a rare bug," he said.

"I wouldn't say so," I countered. "I found over twenty-five instances in her book."

"But it would only happen in a book-sized project. Out of over 100,000 customers, we probably didn't have 10 who undertook a project of that size as a single file."

"But my niece noticed. It was her first book, and she was devastated."

"Naturally I'm sorry for her, but it wouldn't have made any sense for us to try to fix the bug for 10 customers."

"Why not? You advertise that CozyWrite handles book-sized projects."

"We tried to do that, but the features didn't work. Eventually, we'll probably fix them, but for now, chances are we would introduce a worse bug—one that would affect hundreds or thousands of customers. I believe we did the right thing."

As I listened to this project manager, I found myself caught in an emotional trap. As software consultant to MiniCozy, I had to agree, but as uncle to an author, I was violently opposed to his line of reasoning. If someone at that moment had asked me, "Is CozyWrite a quality product?" I would have been tongue-tied.

How would you have answered?

1.2 The Relativity of Quality

The reason for my dilemma lies in the relativity of quality. As the MiniCozy story crisply illustrates, what is adequate quality to one person may be inadequate quality to another.

1.2.1. Finding the relativity

If you examine various definitions of quality, you will always find this relativity. You may have to examine with care, though, for the relativity is often hidden, or at best, implicit.

Take for example Crosby's definition:

"Quality is meeting requirements."

Unless your requirements come directly from heaven (as some developers seem to think), a more precise statement would be:

"Quality is meeting some person'srequirements."

For each different person, the same product will generally have different "quality," as in the case of my niece's word processor. My MiniCozy dilemma is resolved once I recognize that

a. To Terra, the people involved were her readers.

b. To MiniCozy's project manager, the people involved were (the majority of) his customers.

1.2.2 Who was that masked man?

In short, quality does not exist in a non-human vacuum.

Every statement about quality is a statement about some person(s).

That statement may be explicit or implicit. Most often, the "who" is implicit, and statements about quality sound like something Moses brought down from Mount Sinai on a stone tablet. That's why so many discussions of software quality are unproductive: It's my stone tablet versus your Golden Calf.

When we encompass the relativity of quality, we have a tool to make those discussions more fruitful. Each time somebody asserts a definition of software quality, we simply ask,

"Who is the person behind that statement about quality."

Using this heuristic, let's consider a few familiar but often conflicting ideas about what constitutes software quality:

a. "Zero defects is high quality."

1. to a user such as a surgeon whose work would be disturbed by those defects

2. to a manager who would be criticized for those defects

b. "Lots of features is high quality."

1. to users whose work can use those features—if they know about them

2. to marketers who believe that features sell products

c. "Elegant coding is high quality."

1. to developers who place a high value on the opinions of their peers

2. to professors of computer science who enjoy elegance

d. "High performance is high quality."

1. to users whose work taxes the capacity of their machines

2. to salespeople who have to submit their products to benchmarks

e. "Low development cost is high quality."

1. to customers who wish to buy thousands of copies of the software

2. to project managers who are on tight budgets

f. "Rapid development is high quality."

1. to users whose work is waiting for the software

2. to marketers who want to colonize a market before the competitors can get in

g. "User-friendliness is high quality."

1. to users who spend 8 hours a day sitting in front of a screen using the software

2. to users who can't remember interface details from one use to the next

Tuesday, August 02, 2016

One of the (few) advantages of growing old is gaining historical perspective, something sorely lacking in the computing business. Almost a lifetime ago, I wrote an article about the future dangers of better estimating. I wondered recently if any of my predictions came to pass.

Back then, Tom DeMarco sent me a free copy of his book, Controlling Software Projects: Management, Measurement and Estimation. Unfortunately, I packed it in my bike bag with some takeout barbecue, and I had a little accident. Tom, being a generous man, gave me a second copy to replace the barbecued one.

Because Tom was so generous, I felt obliged to read the book, which proved quite palatable even without sauce. In the book, Tom was quite careful to point out that software development was a long way from maturity, so I was surprised to see an article of his entitled "Software Development—A Coming of Age." Had something happened in less than a year to bring our industry to full growth?

As it turned out, the title was apparently a headline writer's imprecision, based on the following statement in the article:

"In order for the business of software to come of age, we shall have to make some major improvements in our quantitative skills. In the last two years, the beginnings of a coherent quantitative discipline have begun to emerge…"

The article was not about the coming of age of software development, but a survey of the state of software project estimation. After reviewing the work of Barry Boehm, Victor Basili, Capers Jones, and Lawrence Putnam, DeMarco stated that this work

"…provides a framework for analysis of the quantitative parameters of software projects. But none of the four authors addresses entirely the problem of synthesizing this framework into an acceptable answer to the practical question: How do I structure my organization and run my projects in order to maintain reasonable quantitative control?"

As I said before, Tom is a generous person. He's also smart. If he held such reservations about the progress of software development, I'd believe him, and not the headline writer. Back then, software development had a long way to go before coming of age.

Anyway, what does it mean to "come of age"? When you come of age, you stop spilling barbecue sauce on books. You also stop making extravagant claims about your abilities. In fact, if someone keeps bragging about how they've come of age, you know they haven't. We could apply that criterion to software development, which has been bragging about its impending maturity now for over forty years.

Estimates can become standards

One part of coming of age is learning to appraise your own abilities accurately—in other words, to estimate. When we learn to estimate software projects accurately, we'll certainly be a step closer to maturity—but not, by any means, the whole way. For instance, I know that I'm a klutz, and I can measure my klutziness with high reliability. To at least two decimal places, I can estimate the likelihood that I'll spill barbecue sauce on a book—but that hardly qualifies me as grown up.

The mature person can not only estimate performance, but also perform at some reasonably high level. Estimating is a means mature people use to help gain high performance, but sometimes we make the mistake of substituting means for ends. When I was in high school, my gym teacher estimated that only one out of a hundred boys in the school would be able to run a mile in under ten minutes. When he actually tested us, only 13 out of 1,200 boys were able to do this well. One percent was an accurate estimate, but was it an acceptable goal for the fitness of high school boys? (Back in those days, the majority of us boys were smokers.)

This was a problem I was beginning to see among my clients. Once they learned to estimate their software projects reasonably well, there was a tendency to set these estimating parameters as standards. They said, in effect: "As long as we do this well, we have no cause to worry about doing any better." This might be acceptable if there were a single estimating model for all organizations, but there wasn't. DeMarco found that no two of his clients came up with the same estimating model, and mine were just as diverse.

Take the quality measure of "defects per K lines of code." Capers Jones had cited organizations that ranged on this measure from 50 to 0.2. This range of 250-1 was compatible with what I found among my own clients who measured such things. What I found peculiar was that both the 50-defect clients and the 0.2-defect clients had established their own level as an acceptable standard.

Soon after noticing this pattern, I visited a company that was in the 150-defect range. I was fascinated with their manager's reaction when I told him about the 0.2-defect clients. First he simply denied that this could be true. When I documented it, he said: "Those people must be doing very simple projects, as you can see from their low defect rates."

When I showed that they were actually doing objectively harder projects, he reacted: "Well, it must cost them a lot more than we're able to pay for software."

When I pointed out that it actually costs them less to produce higher quality, he politely decided not to contract for my services, saying: "Evidently, you don't understand our problem."

Of course, I understood his problem only too well—and he was it. He believed he knew how to develop software, and he did—at an incredibly high cost to his users.

His belief closed his mind to the possibility of learning anything else about the subject. Nowadays, lots of managers know how to develop software—but they each know different things. One of the signs of immaturity is how insular we are, and how insecure we are with the idea of learning from other people.

Another sign of immaturity is the inability to transfer theoretical knowledge into practice. When I spill barbecue sauce on books, it's not because I think it will improve the books. I know perfectly well what I should do. But I can't seem to carry it out. When I was a teenage driver, I knew perfectly well I shouldn't have accidents, but on the way home from earning my driver's license, I smashed into a parked car. (I had been distracted by a teenage girl on the sidewalk.) I was immature because even though I knew better than to gawk at pretty girls while driving, I had an accident anyway.

The simple fact was that we already knew hundreds of things about software development, but we were not putting those ideas into practice. Forty years later, we're still not putting many of them into practice. Why not? The principal reason is that our managers are often not very good at what they are supposed to do—managing. In Barry Boehm's studies, the one factor that stood above all the others as a cause of costly software was "poor management." Yet neither Boehm nor any of the other writers on estimating had anything to say on how to make good managers—or get rid of bad ones.

Better estimating of software development could give us a standard for detecting the terrible managers. At the same time, however, it may give us a standard behind which the merely mediocre managers can hide.

Using estimates well

So, if you want to be a better manager, how do you use estimates effectively?

Any estimate is based on a model of what you're estimating. The model will have a number of parameters that characterize the situation. In the case of estimating a software project, parameters of the estimate might include

• number of programming languages to be used

• experience level of development staff

• use of formal code reviews

• characteristic error rate per function point

• many other factors

Suppose, for example, the project has six teams, each of which prefers to use a different programming language. Up to now, you've tolerated this mixture of languages because you don't want to offend any of the teams. Your estimating model, however, tells you that reducing the number of languages will reduce risk and speed up the project. On the other hand, if you try to eliminate one of the languages, your model tells you that a team with less experience in a new language will increase risk. By exploring different values of these parameters, you can learn whether it's worthwhile to convert some of the teams to a common language.
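That kind of exploration can be sketched with a deliberately made-up risk model. The coefficients and scenarios below are invented for illustration; a real estimating model would be calibrated against your own organization's data:

```python
def project_risk(num_languages, teams_in_new_language):
    """Invented linear risk model: each extra language adds integration
    risk; each team forced into an unfamiliar language adds learning risk."""
    integration_risk = 0.05 * num_languages
    learning_risk = 0.08 * teams_in_new_language
    return integration_risk + learning_risk

# Status quo: six teams, six languages, everyone experienced.
status_quo = project_risk(num_languages=6, teams_in_new_language=0)

# Consolidation: three languages, but three teams must learn a new one.
consolidated = project_risk(num_languages=3, teams_in_new_language=3)

# Trying different parameter values shows whether consolidation pays off
# under the model's assumptions, before committing the real teams.
print(status_quo, consolidated)
```

The value is not in the particular numbers but in being able to compare scenarios on paper before disrupting any team.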

To take another example, you've avoided using formal code reviews because you believe they will add time to the schedule. Studying your estimating tool, however, shows you that use of formal reviews will reduce the rate of errors reaching your testing team. The estimating model can then show you how the rate of errors influences the time spent testing to find and correct those errors.
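As a back-of-the-envelope sketch of that trade-off (all numbers here are invented for illustration, not measured data):

```python
def schedule_days(review_days, errors_per_kloc, kloc, fix_days_per_error):
    """Days spent on reviews plus days spent finding/fixing errors in test."""
    errors_reaching_test = errors_per_kloc * kloc
    return review_days + errors_reaching_test * fix_days_per_error

# No formal reviews: zero review time, but more errors reach testing.
without_reviews = schedule_days(0, errors_per_kloc=5.0, kloc=20,
                                fix_days_per_error=0.5)

# With formal reviews: review time up front, fewer errors reach testing.
with_reviews = schedule_days(15, errors_per_kloc=1.0, kloc=20,
                             fix_days_per_error=0.5)

print(without_reviews, with_reviews)
```

Under these invented numbers the review time more than pays for itself in reduced testing time, which is exactly the kind of result an estimating model lets you see in advance.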

Many poor managers believe an estimating model is a tool for clubbing their workers over the head to make them work faster. Instead, it's a tool for clubbing yourself over the head, as a guide to making wise, large-scale management decisions.