Thursday, December 28, 2006

I'll be on vacation for the next few days, but in the meantime, Paul Carvalho has an interesting post comparing skilled (exploratory) testing to fencing. I thought it was insightful; I hope you enjoy it.

Software Quality Engineering recently sent me a bunch of interview questions for a StickyToolLook e-newsletter. Writing a reply was hard, as I wanted to provide real information that was worth reading, not just pontificate or repeat the same old thing. My replies are below; I'm interested in what you think.

SQE: What are the benefits of a tool developed for an entire market? What about a tool designed to focus on a particular need?

That's a really tricky question, because within every market there is a smaller market. If we start with tools designed for everyone who ever touches software development, we can find tools inside of that. Say, tools for just testers – then just performance testers. Inside of that market are tools for web performance testers, Java testers, and database testers. Inside of that we have tools for each specific flavor of database.

Overall, I like tools as niche as I can get them, because the vendor can solve a specific problem and solve it well. The higher-level tools offer some level of integration, file sharing, and workflow that just can't exist in a specific tool. Also, the more general a tool, the better chance there will be to find a user group, consultant, or support when things go bad.

SQE: What are the cons to using either type of tool?

Well, for example, if you use a general tool for all databases, you might call your sales rep to ask about Oracle support, and he replies, "yeah, there's a driver for that." That's bad news. At best, you're going to muck about for a day or three before finally getting the software to mostly work. At worst, it'll go back on the shelf. General tools generally have a main market; like ordering the fish in a steakhouse, you never know quite what you're gonna get.

I have heard of a few general tools that work in lots of environments; McCabe has amazing support, and I've heard good things about Worksoft Certify. You just have to be certain to check a vendor's claims carefully – yourself.

Of course, if you use the specialized, database-specific tool and write custom code for it, then when you convert environments you might have to throw all that work away and start over.

SQE: Is it possible to develop a tool or a tool suite that can be all things to all people? Or even all things to one person?

I think it’s a matter of philosophy. UNIX, for example, is a collection of small tools that can be tied together in really interesting ways to do just about anything; I think the "Swiss Army Chainsaw" is my favorite term to describe it. Is UNIX a single tool or a collection of tools? (Grins) Who Cares?

There are certainly performance testing tools, record/playback tools, and development environments people spend all day in, and some of them have very passionate and active users. My colleague Mike Kelly is fond of the phrase "Role Players", to mean that the person likes to perform a function and do it well. If someone wants to be an excellent performance tester for websites, then sure, that person could live in one tool, align himself with that vendor, and probably make a very good living as an employee or consultant. That just isn't me, and you never know when a specialty is going to be destroyed – say, for example, when websites start to support wireless devices and the bottleneck for performance shifts.

SQE: On the other hand, can a tool be too specific?

Certainly! I don't know anyone who uses awk or sed anymore, for example. Those were great little programming languages, but Perl came along and offered all the same features plus more.

Of course, if the tool is open source, it doesn’t really matter if the tool is too specific, because it's free, and you can put it away and pull it out if and when you have the need again. If it’s for-purchase, the market will sort it out – the vendor will either expand the product or fail in the marketplace. (You have to love capitalism, after all.)

SQE: Which type of tool is more prevalent in the market right now?

I think vendor-supported tools tend to go big and open source tools tend to be small. The open source tools are often developed by a specific person to solve his own problems, and growing the tool won't make him any money. So open-source tools grow through some other guy adding features that he needs. For-purchase tools are driven by companies that want to make money; they have an economic reason to try to reach as many people as possible. I hear a lot of good things about Visual Studio Team System, which is a big, big, big general-purpose tool. I think the market is still figuring itself out, but right now most of the new tools I see are specific tools for a small market, created by small companies who have expertise in that area – and I think that is a very good thing.

SQE: Where do you see the market headed in the future?

I think software testing is an immature field, and software quality even less so. That means we need to try a bunch of different approaches, see what works, and then build up tools around that. Like I mentioned earlier, I see little companies doing just that. If they succeed, economic forces will expand those products a little bit, while hopefully the "big" products will get out of the clouds and come down to the real world a little bit more. Eventually, I predict a fight for the middle.

I posted this to the Software-Testing Yahoo Group yesterday, and I thought Creative Chaos Readers might enjoy this. The background is a post saying that context-driven thinking was universal, and there was no value in putting people into different boxes such as the "Analytic" (Academic, Telecom, Finite State Machines) school of testing or the "Factory" School (ISTQB).

In reply, Scott Barber wrote:

*If* that is the case (and I'm not saying I agree, I'm just doing a thought experiment), then who is it promoting things like "Best Practices" and flaming things like Exploratory Testing? Is it managers? Developers? Tech Magazine columnists? Because *someone* is promoting these ideas and it isn't anyone who believes in the context sensitivity of testing.

And I replied:

In my experience, when people are fighting XP and promoting best practices, they are often willing to pay lip service to the concept that software environments are different, and what works for you might not work for me, and vice versa.

Then they go back to the chalk board and continue explaining the rational, unified model thingy that applies to everyone. :-)

Seriously, it's hard to argue with context-driven principles, and few people do. It is a lot easier to ignore them - and many do. (I think that's already been said on this list, so I labelled this post ...)

Wednesday, December 27, 2006

As a military cadet, I had a few occasions to design systems - generally point systems. For example, the number of points required to graduate from a summer encampment, or a merit/demerit system.

I typically would write a page that gave guidelines and concluded with "Plus or minus (some big number) for items of exceeding excellence or discredit." That failed to meet expectations: what my superiors wanted was a complex, detailed, organized, predictable system. They wanted something comprehensive.

That always amazed me. First of all, if that could be done, someone would find a way to game it. Make Public Displays of Affection (PDA) a 1-point demerit, and some cadet would end up embracing his girlfriend during pass-in-review, collect the demerit, and reply "it was worth it." Or, for encampment graduation, a cadet could do the absolute bare minimum to graduate, reflecting a negative attitude undeserving of it, while a handicapped cadet (we had a few) might do his best but not quite make it.

It also happened on promotion forms. You would have a cadet that just didn't get it, but you'd be forced to use the CAPF 50 (leadership eval form) to do an objective evaluation. Sadly, "gets it" was not a level on the sheet, so the overall score would be too high. What do you do? Pass the cadet, systematically mark him down in everything to fail him, or spend an hour trying to figure out how to "fairly" complete the feedback form, with accurate feedback, that resulted in the outcome you desired?

This problem isn't just limited to me. One of the themes of the movie Thirteen Days, which is about the Cuban missile crisis, is the desire of the military establishment to escalate the crisis to war. To do this, they get the president to agree to a set of "rules of engagement," then try to use brinksmanship to escalate the level of conflict until US troops are in harm's way. Then the rules of engagement would require a counter-attack to "defend" our troops.

I'm all for decision support systems (DSS). There's a difference, however, between a DSS that provides information and one that makes the decision for you. As a decision maker, why would you force yourself into a system that limits you? Why develop a system that takes over, limiting your ability to use common sense and good judgment? Why would you want that?

Oftentimes, there are perfectly good reasons for this. Some ERP systems, for example, can do a better prediction of trends than a human can, thus limiting the amount of excess inventory that needs to be carried while ensuring the shelves are stocked. In other cases, like hiring, there is a real risk of being sued for picking one person over another. Objective systems that decide for you - say, for example, a point system - will limit legal risk.

Or, I dunno, say you are running a conference and you want to provide feedback to the people you rejected. Having a templated form that every member of the committee fills out that you can average looks a lot more impressive than saying "Well, we talked about it, and you didn't make the list. Try again next year."

But, there's a problem, and that is this:

Most first-pass objective systems suck. They really, really, suck.

I'll say it again: Most first-pass objective systems suck.

I've known this since age 15, when I tried to make my first merit/demerit system, but I did not find out why until much later. Wording my explanation is hard, but I'm going to try, so please bear with me:

1) Modeling system effects is hard. When you reward something, you get more of it, but you get more of exactly what is measured. You want people to get leadership training, so you give them points for taking the class, and they are going to take the class. But you really want them to learn about leadership. Did that happen? Maybe. Maybe not.

2) The more complex the system, the more variables. Modeling a two-hour-a-week-plus-some-weekends-and-encampments military environment, with the same rough goals and training schedule, is hard. Imagine the workplace, where each project is different!

3) The more variables, the more interactions, reactions, and unintended consequences.

In the 1970s and 1980s, the great solution for this was going to be Artificial Intelligence (AI) and neural networks; computers that could learn. It's actually pretty easy to build a system in LISP that can look at what books you like, find other books that other people like, and make recommendations. When you are dealing with a closed system with a few million books, that is a tractable problem.
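To make the point concrete, here's a minimal sketch of that kind of closed-system recommender, in Python rather than LISP. The book titles, user names, and scoring scheme are all illustrative, not from any real system: it just scores each unseen book by how much taste its fans share with you.

```python
# Minimal collaborative-filtering sketch: recommend books by finding
# users whose tastes overlap with yours. All data here is made up.
ratings = {
    "alice": {"Dune", "Hyperion", "Foundation"},
    "bob":   {"Dune", "Foundation", "Neuromancer"},
    "carol": {"Hyperion", "Snow Crash"},
}

def recommend(user, ratings):
    """Suggest books liked by users whose tastes overlap with `user`."""
    mine = ratings[user]
    scores = {}
    for other, books in ratings.items():
        if other == user:
            continue
        overlap = len(mine & books)      # shared books = crude similarity
        if overlap == 0:
            continue
        for book in books - mine:        # books they like that I haven't read
            scores[book] = scores.get(book, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", ratings))       # ['Neuromancer', 'Snow Crash']
```

In a closed world of books and ratings this works surprisingly well; the trouble the post describes starts when the inputs are open-ended human behavior instead of a finite catalog.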

Real life is not a closed system.

It turns out that it's easy to make a CASE system that requires a requirement for every check-in, or a requirements template to be complete before coding begins.

But the judgment that the template is filled out well? That is best done by a human. It is very hard to make an AI program capable of assessment, or any of the higher levels of Bloom's taxonomy, except in very specific applications, in which case the computer is really just parroting back what some human told it.

Why am I talking about AI?

Because most processes and systems are really, really bad AI - AI programming in a vague and ambiguous language that is much closer to BASIC than LISP.

This is a huge part of the problem that CMM(I) and ISO 9000 have. They want to be one-page descriptions that say "Do the Right Thing" or "Do Good Work," but you need to define "Good" and "Right" - and trying to do that in a crappy language like English, which is worse than BASIC, while dealing with all of the variables in software development is, well ... hard.

To my knowledge, there is only one computer that can synthesize all this information, and it is the human brain. The role of the human brain is to make sense of and integrate the world around it. If you've ever had intuition, or a gut feeling, you know that can be a lot more than emotion. It can be the left and the right sides of your brain working in concert to solve a problem on the subconscious level. And where process descriptions fail, the human brain can be surprisingly good at solving problems. (For example, to quote Michael Bolton: "If your project has dug itself a hole, your process ain't gonna pick up the shovel.")

The job of collecting, synthesizing, and making a judgment is a craft. Like art, writing, development and testing, judgment can be improved with practice. In future entries I would like to explore a few of these exercises.

For the time being, here's my $0.02: Be skeptical of systems that spit out answers about behavior and judgments. Ask questions about the weighting.
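Asking about the weighting matters because the weights, not the scores, often decide the outcome. A tiny sketch (the cadets, criteria, and numbers are all invented) shows two defensible weightings of the same scores producing two different "objective" winners:

```python
# Same scores, two defensible weightings, two different winners.
# All names and numbers are illustrative.
scores = {
    "cadet_a": {"drill": 9, "academics": 5, "leadership": 6},
    "cadet_b": {"drill": 4, "academics": 9, "leadership": 8},
}

def rank(scores, weights):
    """Pick the top scorer under a given set of criterion weights."""
    def total(s):
        return sum(s[k] * w for k, w in weights.items())
    return max(scores, key=lambda name: total(scores[name]))

print(rank(scores, {"drill": 3, "academics": 1, "leadership": 1}))  # cadet_a
print(rank(scores, {"drill": 1, "academics": 2, "leadership": 2}))  # cadet_b
```

Whoever set the weights made the decision; the point system just launders it into a number.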

Remember this: if you are a decision maker overseeing such a system, it can exist to make the decision for you - so the outcome is never your fault. To quote Richard Bach:

"If it's never your fault, you can't take responsibility for it. If you can't take responsibility for it, you'll always be its victim."

Tuesday, December 26, 2006

Here's a gift for the holidays; a software concept maturity model to lighten your mood.

Level 1 - Initial

At this point, someone has an idea. It seems to work for them, to solve the problems they have. The person actually uses the idea with some success. There are lots of ideas at level 1 right now, but we've never heard of them.

Level 2 - Defined

Someone gives the idea a name. Exploratory testing is probably at this level right now, although there are a few tools starting to emerge, and that is the leap to level 3. "Blink" testing is probably at this level now. The key thing about level 2 is that once an idea has a name, it will spread.

Example of jumping from level 1 to level 2:

Level 1: "As a final check, Joe eyeballs the data, looking for irregularities. He does this mind-meld thing where he takes a step back and looks for patterns."

Level 2: "Our shop does blink testing as part of its quality process. Does yours?"

Level 3 - Vendorified

At this point, the idea is popular enough that people start to do it and ask for tools to support it. Vendors begin to provide tools to make the idea easy. The Enterprise Service Bus is probably at this level.

Level 4 - "Gotta Have It"

By level 4, vendors have invested a significant amount of time and effort in creating tools. They now create a market for the product. Instead of telling you the ROI of the product, they tell you that "everyone else is doing it" or "in six months, companies that aren't doing X will be unable to compete." Enterprise Resource Planning (ERP) hit level 4 about three years ago. Web Services and CRM are probably at this level.

Not to be a jerk, but level 4 often involves marketers and executives who don't really understand the idea at the nuts-and-bolts level.

After level 4, the tree splits ...

Level 5A - Ubiquitous

After all the hype, it turns out that the concept doesn't solve all your problems and never will. Instead, hopefully, it has some broad but specific applicability that solves a lot of problems like X, and can be used by people when the concept makes sense. Object-Oriented Development is probably at level 5A - people who use it use it, and people who don't don't feel bad about it. Sure, there is a design patterns community and classes on Object-Relational Mapping, but no one is saying you "have" to use design patterns. (Anymore.) Agile Development is pushing its way from level 4 to 5A right now, but I don't have a good feel for what is happening to test-driven development.

Alternatively, some ideas go to level 5B ...

Level 5B - Craptacular

Some ideas do not have broad but specific applicability and do not solve problems very well, yet a small group of companies and consultants continue to insist that you "just gotta have it" or that it is the wave of the future. RUP, UML, and some aspects of MDA are probably level 5B. It is interesting to note that many level 5B ideas are re-labelled or re-named to something more "hip," then re-cycled at a lower level. Examples: the Agile Unified Process, the Enterprise Unified Process, Agile Model Driven Development, and heavyweight SOAP Web Services, which in many ways is dressed-up CORBA. I am not 100% sure about EUP and Agile UP, though, as they may be part of some elaborate joke. ISTQB testing is probably level 4, going to 5B.

Note that this is just a model, and it's incomplete. For example, several of the standards set by the Object Management Group have no reference implementation, as they were never actually used in the first place, skipping level 1 entirely.

What is my point, you ask?

Well, in the 1970s a gentleman named Mack Hanan wrote a really interesting book called "Consultative Selling." In the book, he suggested that instead of traditional selling, which he calls "vending," companies should help the customer solve its problem and increase bottom-line performance - the ROI argument. When you see big, full-color ads talking about the 500% ROI from test automation, someone in the line of inspiration probably read Hanan.

The problem comes when people stop trying to understand the problem and really solve it, and instead ask "How can I use this (solution) to make money?" Then it turns into a solution seeking a problem, and you begin to climb the ladder toward craptacular. At best, you hit 5A, which really isn't that bad; OO has helped some people solve real problems in the real world.

And, now that I think about it, that is the danger with Agile right now: the shark that could be jumped.

As for me, I intend to try to stay in the Agile Community, to make sure we go to 5A and not 5B.

Friday, December 22, 2006

First of all, Agile Shark Jumpin' got a reference last week on InfoQ.com. Nifty!

Second, I am taking a short sabbatical from Creative Chaos over the next few days. There may be a couple of posts next week, and more serious stuff after the new year.

When we come back: more from Jim Brosseau, a little discussion of enterprise architecture, and perhaps, finally, a deconstruction of Blue Man Group.

I'm no good with sappy Christmas sign-offs, so I'll just be honest. I've had an awful Advent. It's _supposed_ to be a season of preparation for the coming of Jesus Christ. I haven't prepared. I haven't prayed enough. I skipped Bible Study. Lately, I have even been short with the kids.

Here's the thing - knowing all that, he'll still come for us anyway next week - and knowing full well what he will do for us on Good Friday.

Most readers of "Creative Chaos" probably know that although I do all things software, my predominant focus is software testing. In fact, a few weeks back Elisabeth Hendrickson labelled my blog "Mostly Testing," and I was vaguely annoyed that I did not make her "Mostly Agile" list ... then again, seeing how my thoughts on Agile jumping the shark played out, I should probably be flattered.

James Bach posted this yesterday to the context-driven Yahoo discussion group; it matches my feelings relatively well, and I thought it was worth sharing:

I first attacked the CMM because they labeled my community "Level 1: Initial" and called it "heroic". The first label was clearly not descriptive of what we were doing at Borland; the second label *was* descriptive, but they seemed to think heroism was a BAD thing. This caused me to speak out in favor of my school, and to seek better ways to describe it. For a time I called it "market-driven software engineering," but that didn't stick. Then I called it the Cognitive paradigm (as opposed to the Clerical paradigm), but when Cem suggested context-driven, that captured it better, for me.

I think the strategy of some consultants, such as Rex Black, is to label their school in such a way as to encourage the belief that their school is the only school -- and thus that there is no controversy. I think that's why, in Rex's certification advertisements, he writes as if his ISTQB school of "elite" software testers represents the pinnacle of testing achievement for all of us, instead of a peculiar formulation that looks to my eye like an example of sloppy Factory school thinking.

To me, to deny controversy is terribly arrogant. I may have a different view of humility than Rikard, but for me, I express humility not by saying I don't have a strong idea of what is right and wrong, but rather by saying that I have a strong idea, that my strong idea may be mistaken, and that I strive to encourage the kind of criticism I need in order to improve it.

I'm upset with people like Rex, who fight for the ascendancy of their faction in the wider community of testers while pretending not to be part of any faction. That turns it into a propaganda war instead of a straightforward debate. It's trying to manipulate people instead of persuading them.

I want to see more use of the schools concept so that we can have a free market of ideas where consumers can make informed choices.

If you don't know much about the Context-Driven School and would like to know more, you can subscribe to the Yahoo Group. We also talk about context a fair bit on SW-IMPROVE, my email discussion list, where the volume is considerably less.

Thursday, December 21, 2006

Brian Marick once wrote something to the effect of "Methodology design is an extension of personality."

There is a large percentage of the population for whom having things be stable, predictable, and repeatable is very important. They tend to be attracted to, and succeed in, larger companies with established "ground rules." They tend to struggle in smaller, start-up, and entrepreneurial environments.

And, to turn a phrase, that just ain't me.

In the methodology discussions of the '80s and '90s, the stable, predictable voice was the loudest one in the room. Interesting exceptions that weren't quite as loud were little voices like Extreme Programming, the open source community, Jerry Weinberg, and DeMarco/Lister.

Stable, Predictable, Repeatable, Measured does yield a certain set of results. I'm just not sure that everyone wants that, and I think it should be okay to say that. I've drawn the images in "Two Roads" in a certain way to make the distinction clear.

My goal is not to insist that one is right or wrong; no, it's to help you make a conscious choice.

To help with that, I've drawn a caricature - but the loudest voice in the room has been doing that for twenty-five years now, so I hope you'll allow me the opportunity. :-)

"Deal or no deal?" "Is that your identity?" "Let's play one versus one hundred!" "Is that your final answer?"

These catch-phrases all come from simple, formulaic game shows. The shows are the same, every time, right down to the catch-phrases and witty banter. How many different ways are there to say "And we'll find out, right after this short break" anyway?

You might even say that these game shows are stable, predictable, and repeatable. Moreover, the plan generally works - find a formula that brings in viewers and stick with it until the formula is no longer profitable. Whether it's asking increasingly challenging questions for more and more money, like the latest game shows, or having two teams compete and kicking a member off the losing team, like most reality shows, you basically just do the same thing again and again, and it works.

... but imagine what it's like to manage Saturday Night Live. That show is comedy; it is new every morning. Comedy shows are funny and entertaining when they are different every time. In that world, the challenge moves from picking a formula and finding sponsors to hiring and inspiring great talent. Late-night shows like Leno and Conan are somewhere in the middle; they have a packaged format, but different guests. A predictable opening monologue, but nobody likes to hear the same joke two nights in a row. Or, twice ... ever.

Of the two, which is software development? Well, probably both. Somewhere out there, there is a database-backed website company spitting out the same templated solutions, again and again, and somewhere else there is a company doing things that are "new every morning." Most companies that develop software are stuck somewhere in the middle, on the great spectrum between the two.

The bigger question is: What kind of company do you want to work for, and would your organization benefit from moving closer to (or further from) one of the two poles?

"Deal or No Deal" is not going to have a great positive cultural impact, but it could make a lot of money for NBC and a small group of writers. Saturday Night Live, on the other hand, has helped define popular culture for twenty-odd years (for good or ill), including The Blues Brothers, Wayne's World, and a dozen more.

Saturday Night Live is a talent incubator; it brought us Chris Farley, Adam Sandler, Tina Fey, and countless other talented comedians, giving them a stage to perform on, and support and ideas to improve.

Picking one of these strategies as an individual can be even easier than doing it as a CIO. On one hand, you could pick a mainstream, know-what-you-are-going-to-get path, like education or getting a PMP, Java, or other certification. On the other, you could try to be different, which probably means making lasting and unique contributions to a field.

This season NBC has two shows (a drama and a comedy) that follow the antics of an SNL-like show, mostly off-stage. Despite the cowboy-like mentality I hint at above, the shows do have some bounds. For example, they do need to fit into a specific time slot with commercial breaks. One common theme on Studio 60 is the planning board - a big cork board with 3"x5" index cards on it that represent skits. They can add skits to the board until it's full, take them down, tear them up, develop them later, and so on.

The board is called a story board. I suppose that makes the cards ... story cards?

By now, I hope it is clear that I have chosen my path. It's nice to know that we share a similar set of tools with other people on that path.

As for what the game shows do, I don't really know. I suppose it's not interesting enough to support a profitable "life on the set" drama or comedy.

Wednesday, December 20, 2006

I used to go to conferences to get new ideas, and my emphasis was on the "big" ones like The SD Conferences or the Open Source Conference. Thanks to the internet, the public library, and wooden speakers who just stand up and read the bullet-points, I can now get most of that value from home. (No, they aren't all that bad, but you get the point.)

Over the past couple of years I have come to realize that little fact, and yet I still go to conferences. In fact, I desire to attend many more than I can actually get out to. Something changed.

Oh, I still try to pick up ideas, but instead of getting them from powerpoint slides, I get them from talking to people. In fact, it is people, relationships, and opportunities to sharpen my own saw that keep drawing me in to conferences; not fancy theoretical ideas about how to reorganize a 100-person IS shop, or new policies to implement top-down. To borrow a line from "Lessons Learned in Software Testing": Conferences are for conferring.

So here's two assertions for you:

1) It's generally better to meet people at conferences who actually live somewhere near you.

2) Having those people around, and talking to them, can often make "sharpening the saw" (your mind) a lot easier.

At this point, you might agree, but in the back of your mind you are thinking "but my company won't send me to a conference."

Well, let's connect the dots. The easy way to meet people who you have a chance of catching at lunch sometime is to go to a regional conference. Regional conferences are all over - there's GLSEC in Michigan, IQAA in Indiana, PNSQC in Portland, YAPC all over the place, the Simple Design and Test Conference in Pennsylvania, a ton of BarCamps, and Ohio and Chicago both have conferences as well. Check out QAIWorldWide or the Agile Alliance for a user's group near you, and see if they have a conference.

Most of these conferences are run by a non-profit, so the price will be cheap - usually less than $300 per day. They may be close enough that you don't need a hotel room. If your company won't spend a dime, email the organizers and find out if you can volunteer and earn your own slot. Then you go for free.

If there is no such conference in your area, you can start one. Really, if you live in any kind of medium metro area and you have a few friends who will commit to it, it's not so much that it's "hard"; it's more a commitment of time.

So, like I hinted at before, after the conference a few people who really care typically go out for a beer, and that's where the real learning starts. Amazingly, there are actual conferences that are structured this way. They limit the conference to something like 15-25 people and expect heavy involvement from all attendees; no sitting in a chair for you. My two examples are the Indianapolis Workshop on Software Testing (IWST), which I'm making a real effort to get to this year, and the Simple Design and Test Conference (SD&T).

Peer conferences are especially interesting because they go beyond non-profit to non-commercial! Everyone is a speaker, and the conference is often free for all attendees. SD&T is the largest that I have seen, and it really intrigues me.

If you look at the SD&T website, you'll see on the bottom-right that they drew some reasonably big names to be "just" participants - er, speakers, since everyone is a speaker. One of them is the guy with the hat, who just looks really familiar. It turns out that he's George Dinwiddie, a regular reader and occasional commenter on Creative Chaos, who now has his own blog. It turns out George lives in Maryland, my old stomping grounds, and the more-or-less home of the Agile Conference this year.

If you look at my blogroll at right, those people are more than just minds I respect; I consider them friends. I've met every single one of them at a conference or user group meeting (except for Jerry and John Bruce - and I have read at least 180,000 words of each of them; that's about three printed novels).

So first, seriously, let's encourage each other. If you want help getting plugged in to a development community near you, drop me a line. And second, here's to meeting George at a conference soon, so he can qualify for my blogroll. (Seriously, drop me a line.) :-)

It turns out that my colleague Jim Brosseau has been going through some of the same issues that I have. Jim and I don't totally agree on every issue (who does?), but he does say some things in a recent newsletter that I think are rather profound, and I'm going to provide a couple of excerpts.

Here's the first, which talks about his experience at Agile Vancouver:

I have no problems with an idea that fills a room and gets people talking about techniques that help them be more effective. What I do have problems with is the over-inflation of the value, or even the knowing neglect to correct some flawed assumptions about an idea, all in the name of further padding the wallet.

At one point in the final panel session, there was the suggestion that teams starting down an Agile path should enlist the support of an expert for training. By my count, there were more than two experts at the front of the room who clearly recognized this as a "cha-ching" moment. It was expressed as everything from a polite, satisfied smile and nod, all the way to the blatant Tiger Woods arm-pump for the hole in one. Who is the primary beneficiary of this Agile movement, anyways?

Consumer Tip: If any company or organization suggests a specific methodology, tool or framework as a reasonable solution for you before you can safely say that they truly understand and empathize with you, your culture, your challenges and your products, do your best to make the door hit them on the ass on the way out!

Even if that tool or methodology is what you called them about in the first place.

What is Agile, if not a fully fledged product? Perhaps an enabling technology, perhaps a series of patterns, perhaps a philosophy. Good stuff, but currently over-hyped.

What I saw at the conference was well beyond a philosophy; it was a religion, bordering on a cult. There was a common enemy that everyone could rally against, with an 'If you are not with us, you are against us' fervor. All things non-Agile were seen as bad, and generally lumped into the category of waterfall or hacking. Plenty of ad hominem discussions about evil managers.

There was a large collection of ideas that were embraced by and expressed as Agile, even though the vast majority have been around and practiced in effective 'waterfall' projects for decades. Retrospectives are Agile. Estimation based on size is Agile. Estimation Poker is Agile (hello... Wideband Delphi, anyone?). Early-stage focus on test is Agile (and has been a well-known approach for early-stage validation of scope for years).

Perhaps the current push extends some of these ideas a bit further, but there is nothing here that I would call disruptive technology. Hell, we wrote what looks a lot like xUnit over a dozen years ago on a huge ATC project. It decidedly pre-dates Agile, but we did some smart things.

What is happening is that a group of people are re-discovering things that work, assuming that they can be generally applicable, and evangelizing a bit too aggressively. I wrote an article for the Cutter IT Journal in June of 2004 titled Beyond the Hype of a New Approach, a cautionary tale expressing many of the concerns I had then and still have, written at the time in response to Jim Highsmith's Agile Project Management book and the corresponding movement. Then and now, a lot of good ideas are being thrust upon us in a manner that will cause the mainstream to cringe rather than embrace. We need to understand the market before we can sell the product.

The argument is not that the ideas in Agile are bad, but that we technologists thrive in a Boolean world while reality is analog. The best answer before we have all the data really is "it depends." Euthanize the dogma, please. Don't get me going on the suggested prospect for a Unified Agile approach. It's not going to happen.

There are a couple of things I keep noticing in the software world, a big one of which is a desire for simple, straightforward, "easy" approaches that allow practitioners to turn their brains off and follow the process.

I have a real problem with that, because software development is knowledge work, and "easy" approaches make the process responsible for the outcome, not the people. That might work fine at McDonald's, where everything is consistent (just not very good), but do you really want software like that?

Sadly, where there is a market, businesses will fill it. So not only do you get Agile Dogma (mouthing agile buzzwords and assuming they will always work without thinking), but you get Agile pushers, who are more interested in advancing the prominence of "Agile" than in any lasting contribution to the field.

My response to the "Give me a cookie-cutter solution" question is, well, long. I like to try to start a dialogue. After all, it takes a very long time to say "Life is pain, Highness. Anyone who tells you otherwise is selling something" without losing a client.

In the end, the reality is that software development is hard, and the cream will rise to the top.

In the 1960's, we had the "Software Engineering Crisis", which was essentially the problem that software supply could not be developed fast enough to meet demand. The solution, more-or-less, seemed to be TQM (at least, according to NATO), which had its pros and cons. (I think Tom DeMarco offered a more credible, economic explanation in "Why Does Software Cost So Much", but I digress.)

A more recent crisis, and, in my mind, a more credible one, is the software testing crisis that Dr. Cem Kaner has been suggesting. Basically, that twenty years ago a "big" program was about 10K lines of code and written in COBOL - even a manager could understand it. A good tester could put the whole program in his brain and evaluate behavior. Today, a big program is four million lines of code and written in a higher-level language like Java. While programmers have had orders-of-magnitude improvements in that time (think COBOL to C to C++ to Java to Perl, spaghetti to structured to OO), in many cases testers are using the same techniques they were using in 1986. The result? Testing as a profession is in danger of losing its relevance.

Interestingly enough, the world of computer architecture is going through the same thing. No, not enterprise architecture (whatever that is), but real architecture - bits and registers and adders and instruction sets. For the past thirty years, Moore’s law has promised faster chips every twelve to eighteen months, and it's slowing down.

I'm sure you've heard the Intel ads for "dual core", and that is the current big bet of hardware architects - massive parallelism. The problem is that not every problem can benefit from being split into component parts and re-assembled. Sure, computing pi or calculating the area under a curve might, but what about word processing? And even if word processing could benefit, it's doubtful that the compiler will be able to make the program faster automatically - a human being is going to have to program for it, which means you'll have to teach programmers how to write parallel programs.

There is an interesting interview this month in ACM Queue that explores these issues:

Tuesday, December 19, 2006

(I'm still on hiatus from the Agile Jumped the Shark series. More soon. While I work on it ...)

From a recent post I made to the agile-testing list ...

I see two roads diverging in a wood ...

On one side, we have Frederick W. Taylor, with his goal of a stable, predictable, repeatable process. CMM(I) and all that.

On the other, I see an IT shop that works on projects that are "new every morning", different, and unique. Stability and predictability don't make a lot of sense when every project uses different technologies and ideas; when you desire not a predictable outcome, but a valuable outcome.

The first approach is a standardized, compete-through-process approach. It attempts to decrease the variability on software projects - which means, in plain English, to eliminate the differences that people make. In that case, it becomes possible to manage projects statistically (at least in theory) and hire based on hourly cost. The first approach turns development into a commodity ... or tries to.

The second approach tries to _maximize_ the differences between people for competitive advantage. Instead of a standard quota that can be predicted, it asks questions like "What new idea can you come up with today to catapult us forward?" The second approach may not be predictable, but it is design, and companies like Apple and IDEO labs have done exceedingly well with that model. (Fair disclosure: I own stock in Apple Computer Corporation. Or, in other words: I put my money where my mouth is.)

Here's the key for the CIO: If you don't pick one of the two approaches, someone will pick it for you. Under the first, it's very hard to view IT as anything other than a cost center. Under the second, you have a chance to be viewed as an investment center - to be an investment center - to help differentiate your company from other companies.

And that is why the CIO should care; it is a chance for the CIO to move up a floor to the CEO's office.

.... I think if a few more people took the road less travelled, our IT industry might be in a very different place. We could attract and keep better talent (instead of losing it to MBAs, JDs, and medical degrees), which would increase wages, crank out better results, and begin a virtuous cycle.

Monday, December 18, 2006

... And will probably continue to be sick. There's more to come on Agile Jumping the Shark, but it'll probably be later in the week.

Still, in the interim, I wanted to share this. This is from a discussion I had with Cem Kaner a few weeks ago:

(Consultant X) is wrong. I'm wrong. Bach is wrong. You're wrong too. If we sharpen each other's thoughts, we might inspire ourselves or some new colleagues to pull together a new generation of better mistakes.

Friday, December 15, 2006

As I said before, going meta is a good thing. However, going meta requires experimental evidence. Unfortunately the industry has latched on to the word "Agile" and has begun to use it as a prefix that means "good". This is very unfortunate, and discerning software professionals should be very wary of any new concept that bears the "agile" prefix. The concept has been taken meta, but there is no experimental evidence that demonstrates that "agile", by itself, is good.

The danger is clear. The word "agile" will become meaningless. It will be hijacked by loads of marketers and consultants to mean whatever they want it to mean. It will be used in company names, and product names, and project names, and any other name in order to lend credibility to that name. In the end it will mean as much as Structured, Modular, or Object.

The Agile Alliance worked very hard to create a manifesto that would have meaning. If you want to know their definition of the word "agile" then read the manifesto at www.agilemanifesto.org. Read about the Agile Alliance at www.agilealliance.org. And use a very skeptical eye when you see the word "agile" used.

One thing I noticed is the agile community uses the term "Go Meta" in a very different way than the Weinberg community. The Agile folks mean "Take a specific practice that works in a specific context, assume it applies to everything, generalize and water it down." When used that way, "Go Meta" is generally a pejorative, in that the idea is that the person doesn't actually check to see if the idea applies in general. (Aristotle didn't check to see if larger objects actually fall faster; Galileo did.) Bob Martin certainly uses it that way in his article.

Then there is this other group, much more loosely collected than the Agile community. I'll call them the Weinberg community for lack of a better name, because we have been influenced by, recognize, respect, or work in the same space as Jerry Weinberg. Jerry is the author of "The Psychology of Computer Programming" as well as forty other books on software development that deal with people and systems issues.

In his books, Jerry talks about going meta in a different way: Metacognition, which is an entirely different thing. Cem Kaner has a recent blog entry that includes a description of Metacognition:

Metacognition refers to the executive process that is involved in such tasks as:

* planning (such as choosing which procedure or cognitive strategy to adopt for a specific task)
* estimating how long it will take (or at least, deciding to estimate and figuring out what skill / procedure / slave-labor to apply to obtain that information)
* monitoring how well you are applying the procedure or strategy
* remembering a definition or realizing that you don’t remember it and rooting through Google for an adequate substitute

Much of context-driven testing involves metacognitive questions:

* which test technique would be most useful for exposing what information that would be of what interest to who?
* what areas are most critical to test next, in the face of this information about risks, stakeholder priorities, available skills, available resources?

Questions / issues that should get you thinking about metacognition are:

* How to think about ...
* How to learn about ...
* How to talk about ...

In other words, when you are following process 1 because the methodology book says so, and you start to figure out "hmm ... if we keep doing 1, we'll get more of A, and what we really want is B and C, so we should probably do 2, but that will give us D, but that's okay because ..." - you're performing metacognition, which is really higher-level thinking.

My examples of metacognition are thinking about thinking, or learning how to learn better, planning how to plan better, or reasoning about reason. In that sense, I Go Meta every day, and it's a real good thing.

For purposes of my writing, when I say "Go Meta", I do not mean metacognition. I mean the other thing, which, ironically enough, is nearly the opposite. My concern is that Agile has done the other thing; that we are not applying enough metacognition.

For the past year or so, I've been using the term "software metaphysics" to describe metacognition on software projects. I want more metacognition about software projects; I want more questioning of the "why" of the process, more reasoning about the consequences and the system effects.

It's when I don't see that metacognition, when I see mindless acceptance, dogma, and folklore accepted as "software engineering", that I get concerned.

I made a post to the Agile-Testing list a few months back about Agile CMMI; I thought it was worth repeating here:

>In the long run should we have 'agile CMM'?

Ok. I'm going to take a stand here.

The Agile Manifesto has an explicit value system - individuals and interactions over processes and tools, customer collaboration over contract negotiation, and so on. In fact, from the research I've done, the Agile Manifesto was very much a reaction to the heavyweight processes of the 1980's and 1990's - often symbolized by the very term 'CMM'.

And, in this corner, weighing in at 711 pages, is the CMMI for Systems Engineering, Software Engineering, Integrated Product and Process Development, and so on.

The CMMI itself -implies- a value system involving comprehensive documentation, processes and tools, and contract negotiation. I would gladly debate, point-for-point, the existence of this value system - but for purposes of this email, let's just assume that, like Prego, "It's in there."

So, on first blush, Agile CMMI seems to make no sense at all. It's silly. It's like a peanut-butter and fish sandwich - why would you want that?

However, a second look shows something interesting.

Say the organization is a DoD supplier, forced to do the CMMI thing. Taking an 'agile' edge to it means asking questions like this:

- "We have to do comprehensive documentation. What does 'comprehensive' really mean? What is the minimum amount of documentation needed, and how much can we shift our focus toward working software?"

- "We have to have defined processes and tools. How can we define our processes to be as flexible as possible, so that they enable the greatest freedom to individuals to make good decisions in the moment, based on sound judgement?"

- "We have to have a defined contract negotiated up front. How can we write a contract to enable change and collaboration?"

If you *have* to do CMMI, these might be good questions to ask. In other words, while I might view Agile CMMI as a compromise, it beats the heck out of surrender. :-)

Ben Edwards commented yesterday that in TV shows, the characters age and the writers need to introduce new characters or plotlines to keep things interesting. Those things may make the show jump the shark, argues Ben, but Agile has "just been around for a while and is gaining followers and people, tired of the old ways of doing things, are looking for something that works better."

There's nothing wrong with that, and I applaud it, but I'd like to take a few moments to talk about the system effects of a mass movement.

First of all, it's now much less of a career risk to pursue agile development. Lots of companies are doing it, and "Agile RUP" or "Agile CMMI" is far less threatening than Extreme Programming. Automated Unit Tests, TDD, and xUnit frameworks are hitting the early mainstream, and vendors are adding refactoring tools to IDE's like Visual Studio.

Second, the original Agile movement was a reaction to the heavyweight, documentation-centric processes and methods of the 1980's and 1990's. More than a few people noticed that it's really hard to move a heavy boat, and that extra documentation adds mass. They also noticed that the "Crystal Ball" of a project beyond 90 days doesn't work. So a bunch of guys got together at the Snowbird conference and suggested lean artifacts, planning in short increments, and adjustment.

There are several problems that the agile manifesto just plain punts on. For example, the idea that you can be on-time, on-budget, high-quality, and feature-complete: forget about it - at the beginning of a large project, the customer doesn't even know what features he wants. Who are we kidding?

But ... there's a problem. Lots of companies want to be able to predict all that stuff, to the point that it is better to be certain and wrong than to be uncertain. I have experienced this first-hand, and DeMarco and Lister comment on it in "Waltzing With Bears." Another example: Extreme Programming doesn't have a concept of "architecture" or a role of Architect. It simply doesn't address the whole, er, problem space, that, um ... Enterprisy-Architecty-Modelling-y things address. (Whatever)

By "punting" on problems that it can't solve, the agile manifesto makes it possible to deliver great software regularly with considerably less waste.

The problem is that by saying "Embrace Change", we are also saying "Get over your fear of loss of control", and there are a whole lot of people in this world who don't want to. They want to be told that they can have their cake and eat it too. And they have titles like VP of Development, CIO, CEO, or CTO.

This means there is a market - with money - of people who want to be told how they can have all this agile stuff and also have CMMI, or Architecture, or Portfolio Management, or Long-Range Planning, or a Crystal Ball.

In fact, one of the consistent things I hear on software discussion lists is "We want to be agile, but how do we solve problem X?" - where problem X is something addressed by one of the technologies above.

For the first couple of years, the answers I saw on the lists were consistently something like "Gee, we've been doing agile for two years and never had a problem with issue tracking. We talk about it at the standup, and we fix it." Eventually, though, I started to see answers more like this:

"Yes, my company thought of the same problem before we switched to Agile, so we use AgileBugTracker, by SuperAgileSoftware. It's great!"

It shouldn't be a surprise; Adam Smith tells us that someone will start a business to exploit that opportunity.

So we get Agile RUP and Agile CMMI and Agile Portfolio Management, Agile Issue Tracking and Resolution, Agile Systems Architecture, and get slowly pulled back into the world we were trying to escape from.

The tent is too big and we've given credence to ideas and concepts that we should not have, to the point that it is very hard to tell "good Agile" from a bunch of consultants that can't ship software but know the right buzzwords.

Personally, I am a member of the American Society for Quality, and I have read Crosby, Drucker, Juran, and Deming: I went through the 600-page books and know the difference between "Getting It" and using the buzzwords. And I have real concern that Agile is in danger of becoming TQM or Six Sigma: Inherently good but misunderstood more often than not. That is what I mean by jumping the shark.

Thursday, December 14, 2006

On Happy Days, there was an episode where Fonzie jumped a shark tank on his motorcycle. Many people consider that the "high-water" mark of the show, and believe that once the show reached that pinnacle, it had nowhere to go but down. There is even a website, JumpTheShark.com, devoted to chronicling when TV shows start to go downhill.

That said, I'd like to share a few facts:

- Agile was originally a consortium of a bunch of like-minded groups: Scrum, DSDM, Crystal, and Extreme Programming. The goal was to make a bigger tent.

- The tent is getting bigger each year. You can now google for "Agile RUP" or "Agile CMMI" and get a considerable number of results. The Agile conference grew to 1,100 people last year and, I believe, is predicted at 1,500 this year. Instead of a band of rebels, it's now 'hip' to be agile. (That means it will attract an entirely different group of people than it did four years ago, when the only people who knew what agile meant were active readers of the WikiWikiWeb.)

- Yesterday, Elisabeth Hendrickson posted Inside the Secret Fears of Agilists, which stated as a top concern that the big global outsourcing companies would adopt Agile as a buzzword, just like Service-Oriented, Business Intelligence, Global Sourcing, and 'Enterprise'.

- My friends who are certified scrum masters are now using the term "use what works" to justify studying for the PMP exam.

Wednesday, December 13, 2006

I was talking to Tessa yesterday about the PMP certification, which I am not exactly a big fan of. She reminded me of the old adage 'if you hate it, do more of it.'

In other words, if you hate Windows, learn .NET. If you hate this Ruby/RailsNation thing, give it a try. If testing is annoying to you, invest some time in growing your skill. Not only will those things become less painful and less annoying, but you may pick up a few interesting, new, and different ideas to take back to your world of Linux, Perl, or development. It is even possible (not probable, but possible) that you will learn to see what you hate in an entirely different light and, if not like it, at least appreciate its strengths.

Tessa's right. After all, that's how I became a CMM(I) expert. I still find the CMMI in bad taste, but I'm an expert. :-)

Now, I've already invested a good bit of time in learning about the world of the Project Management Institute, but I am resolved to give it another look.

More importantly, though, I just got off of a treadmill, followed by a resistance workout.

Tuesday, December 12, 2006

By having defined coding standards, developers trained in the use of those standards are less likely to make certain coding errors.

The one thing coding standards guarantee is consistency and, arguably, readability. But fewer errors? I grant that, in theory, coding standards can prevent errors. For example, "Don't use global variables", "Every function should have an automated test", or "In Perl, use auto-indexing in for loops instead of C-style ++" - something like that can decrease errors.

Then again, those are often best learned through mentoring and good craftsmanship, not code standards. Most of the code standards I have seen obsess over where to place the curly braces, what to name the variables, and how many spaces to indent.
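Here's a hedged illustration of the difference (in Python, though the distinction isn't language-specific): a brace-placement rule can't prevent a defect, but a "no manual index arithmetic" rule removes a whole class of off-by-one bugs.

```python
prices = [10, 20, 30]

# C-style index arithmetic -- the pattern an error-preventing standard
# would discourage, because the boundary math is easy to get wrong:
total = 0
for i in range(len(prices)):
    total += prices[i]

# Direct iteration -- no index, so no boundary to get wrong. This is
# the same spirit as the Perl for-loop rule mentioned above.
total_direct = sum(price for price in prices)

print(total, total_direct)  # 60 60
```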

In fact, I have seen so-called Fagan-style reviews that focused entirely on that kind of slavish adherence to the standard: hours spent without finding a single defect that would actually impact a customer.

This is couched inside an editorial, not a journal paper, so I give the author a little wiggle room, but here's my suggestion: If you want to make a statement like this in a professional journal, either provide a lot of supporting evidence, or be honest. "In my experience" is a great way to be honest; failing that, give at least one tangible example. Otherwise, we run the risk of coming off disconnected and enterprisy.

No, please don't jump out of your seat and tell me that I "Have" to get more organized.

I don't buy it. I've run a successful regional software conference with my methods; they seem to be doing just fine.

I used to tell a story about how I felt bad or guilty for being so disorganized, until I saw David Parker's office at Salisbury University. Dr. Parker is one of the best problem solvers I have ever met, and he had literally two feet of paper covering his entire office. He was promoted to department chair the year after I met him.

Yet many of the organization folks still don't buy it. That was an exception; being "Organized" is "right."

Ok, I'll try again. One more time: My creative output is an order of magnitude higher than that of any 'organization evangelist' I have ever met. When I read stories of Euler, Gauss, Einstein, Gödel, Escher, Newton ... they sound a lot more like me than the organization people.

Personally, I value the Creative Chaos. Heck, it's the name of my blog.

At the same time, I recognize the consequences of that kind of thought-life. Things do get missed. Things do get forgotten. When I get an idea in my head (a hundred years ago, they would have said "when the muse strikes"), I zone out of the real world until I can get the idea down - or, worse, I lose the idea.

Ugh.

So here's the tool of the day: 3x5 Index Cards.

I use index cards for everything.

- Blog ideas
- Testing ideas
- Managing evolving requirements
- Moving from a vague and floofy requirement to something concrete
- Getting those concrete requirements in some sort of priority
- Article ideas
- Presentation ideas
- Bullet points
- Things to come back to
- Groceries to pick up on the way home
- Things to not forget

No, they aren't organized. My 'system' consists of the blank cards, a wallet-like holder, and a place to stick finished cards. I also have a box for requirements cards.

There are piles of index cards all over, and things still get lost, but fewer things get lost, and I can swap ideas out of my head with less fear of losing them.

For me, index cards aren't an organization strategy; they are a way to compensate for my lack of organization.

My next step will probably be to get a notebook, so there is a sense of order and history to the notes; this might help me recollect ideas later. The problem is that the notebook will have to be about the size of a PDA, or I won't carry it with me ...

Still, today's favorite tool is the 3x5 index card. If you want one single ridiculously cheap tool to start trying today, there's my number one suggestion.

Monday, December 11, 2006

I've blogged a bit about a presentation Chad Fowler made to XPWestMichigan two weeks back. There were several interesting points on marketing yourself and choosing a career in technology. Eventually, the talk will be on Google Video and I'll link to it, but I am getting tired of waiting to post something.

In the meantime, there is an interview with Chad on perlcast.com which covers some of the main points; you can download it here.

Imagine: You bundle up the kids and drive ten miles to the Christmas tree farm. Before you can park your car, a young man greets you, asking you to roll down your windows. He asks if you have been to this farm before; you reply "No."

Giving you a wide grin, the young man says "Welcome to Wahmhoff tree farm. We have pre-cut trees right here, but you can also cut your own. Yes, cutting your own is a few dollars cheaper. Did you bring a saw? No problem, we have them on loan in the gift shop for deposit. We have five varieties, with examples over there; prices are on the back of the brochure. The tractor can show you around the farm, drop you off and pick you up. Or, you could walk out; we've got free hand carts on loan. Or you could drive your mini-van right up to the tree you want. We also have free horse-drawn carriage rides today; you could load up your tree on that if you'd like, or just ride around the farm with the kids.

There is free warm popcorn in the gift shop, free pictures with Santa, and a free coloring book for the kids. Is there anything I can help you with?"

Wahmhoff farms is a real place in Gobles, Michigan. When you purchase the tree, they have a machine that shakes out all the dead needles, then they can drill a hole in the bottom for free. (They also sell a special stand with a big peg in the middle.) For a dollar, they have a baling machine that essentially surrounds your tree in shrink-wrap.

Why am I telling you about Wahmhoff farms? Well, think about the business model. A nice tree is a nice tree is a nice tree, but they are able to create competitive advantage anyway. They do it by giving stuff away. There was so much to do that we drove ten miles out of our way to make an afternoon of it, and we'd gladly do it again.

When it came time for me to hand over my forty-five bucks (and it was forty-five because of the stand), I hardly even noticed I was paying, because it was surrounded by so much free. Yet the stand created lock-in; next year, we'll think "We'd better go to Gobles, or else get out the drill ..." or "Better not buy a fake tree, because we invested in that expensive tree stand ..."

In the era of $79.95 looks-just-like real Christmas trees at Target, and heavy competition among real tree farms, Wahmhoff is doing something right, and the market is rewarding them for it.

Wahmhoff farms doesn't have customers. They have fans. This is a different business model for technologists, and it's one that I think is worth exploring.

Oh, by the way - the saw was wicked sharp. They must sharpen them at least every week ...

I read the article and responded to the XP E-mail list; here's a copy of the response.

Ron Jeffries wrote:

> I wonder what would happen if Chet and I were putting in four or eight hours a day on this thing. Suppose we're averaging two now, and we did eight. Would we go four times faster in elapsed time to a given feature?

This is a real problem in the freelance writing world as well. People think "Gee, I can knock out a story on a Saturday, I could knock out five a week ... I can afford to go full time!" and it doesn't work out that way.

Besides the obvious marketing and sales problem when you 5x your output, it turns out that nearly all humans have creative output go down when they try to do a lot of it at a sustained pace.

I think pair programming helps with that because if one person is blocked, the second can "bring them along", and the time spent not programming is probably going to be used for something more valuable than surfing the web.

The freelance world knows this (pick up any book on the business of writing), but in the software world we are just starting to realize that IP work doesn't scale linearly.

Then again, there are always those guys like Isaac Asimov who could just sit at a typewriter and type for days. But, I suppose that's a different post, and for every one of those, there are a thousand Arthur C. Clarkes ...

Thursday, December 07, 2006

I promised to talk about testing tools, so next up is a digital voice recorder with PC link. This particular model records and transfers to PC in WAV format, which can be converted to MP3 with a tool such as Audacity.

Here's the backstory:

When I started my career, I occasionally heard decisions that didn't seem to make sense but would require a code change. Several times I put comments in the code like this:
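The comment looked something like this - a hypothetical reconstruction, with the names, date, and business rule all invented for illustration:

```python
# Hypothetical sketch: the date, "Bob", and the copay rule below are
# invented; the point is the comment documenting the odd decision.

# 2006-06-14: Per Bob in the claims meeting, treat HMO and PPO plans
# identically for copay purposes, even though the requirements doc says
# otherwise. I questioned this; the decision stands.
def copay(plan_type, visit_cost):
    # HMO and PPO intentionally share one branch; see the note above.
    return min(visit_cost, 20)
```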

Of course, six months later, someone would ask why we were treating HMO and PPO the same, and it wouldn't be in the requirements, and I would dig out that comment.

That never helped me. Ever. Really - Bob would say "Gee, I don't remember telling you that," and nothing would change. I would still have to change the code, and the department still had "egg on its face." (Or some other analogy.)

About a year ago I got a voice recorder to record presentations and podcasts and such, but I started using it as a requirements technique about nine months ago. At the beginning of the meeting, I make it clear that the recorder is for my own notes - that I will *NOT* be using the recordings as a formal record of the conversation. At the end of the conversation, if we have changed a policy, I turn the recorder on again and make a formal recording, in which we have a short discussion about the change, the pros and cons, the final decision, and who is involved. I ask "Do I have the right decision makers in the room?" and get verbal agreement.

THAT recording gets checked into version control, right next to the requirements doc.

Those kinds of discussions are easy to capture, easy to throw away, and I have found that anyone in the room can use a voice recorder - regardless of title.

But here's the secret - even if I promise to throw away the notes, we behave differently when we know we are being recorded. We tend to think things through a little bit more and come to a better decision.

That is the real purpose of the device; not as a CYA tool when someone makes a bad off-the-cuff decision, but as a prevention technique, to make sure we make the right decision in the first place.

Earlier, Lisa Crispin said Test Estimation was hard, and asked if anyone had a perfect method, to which I replied:

> Ask the customer when they want it done, get a prioritized list of features, and deliver on the day they asked for it?

And she asked:

> ...and how will we know how many of these features we will be able to deliver in a given period of time?

We don't. Why pretend we do?

There's a slippery slope between asking for good faith estimates ("Knowing what you know now, when do you think you can deliver?") and predicting the future.

Assuming the customer will change his or her mind about what they need, if I deliver running tested features periodically (say, every 30 days), then, ultimately, it's the customer's decision if what we have now is good enough or not. Let them pick the date.

That I can do. The crystal ball thing? Not so much. I think the best I've seen is to use velocity with yesterday's weather, or traditional functional decomposition methods combined with Critical Chain Project Management. (I cover this a little bit in a talk I gave in Indianapolis last year - here - http://xndev.blogspot.com/2006/10/but-dont-take-my-word-for-it.html )
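The yesterday's-weather idea fits in a few lines: commit to roughly what you actually delivered recently, then fill the iteration from the prioritized list. This is an illustrative sketch, not anyone's official method, and all the velocities and feature names are made up:

```python
def yesterdays_weather(recent_velocities, window=3):
    """Forecast next iteration's capacity from recent actual velocities
    (story points of running, tested features delivered per iteration)."""
    recent = recent_velocities[-window:]
    return sum(recent) / len(recent)

def features_that_fit(prioritized_features, capacity):
    """Take features, in priority order, until the forecast capacity is used."""
    planned, used = [], 0
    for name, points in prioritized_features:
        if used + points > capacity:
            break  # stop at the first feature that won't fit
        planned.append(name)
        used += points
    return planned

velocities = [21, 18, 24]                 # points actually delivered (invented)
capacity = yesterdays_weather(velocities)  # 21.0
backlog = [("login", 8), ("reports", 8), ("export", 5), ("audit", 13)]
print(features_that_fit(backlog, capacity))  # ['login', 'reports', 'export']
```

Note this is still a forecast of the past, not a prediction of the future; when the customer changes the list, the answer changes with it.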

Wednesday, December 06, 2006

If you watch enough presentations, you start to see things that detract from the message. The speaker has to plug in, power up, and press control-shift-F9 a bunch of times. He has to try to make small talk in this period - small talk that he didn't expect to make. During the talk, he may have to turn around to face the screen (away from the audience) to read the bullet points. Another annoyance is when the speaker only reads the bullet points; the information tends to come out clipped and awkward. (See Peter Norvig's hypothetical "If Lincoln had PowerPoint" for an example.)

My take on that is that if all the speaker is going to do is read the bullet points, I might as well have downloaded the PowerPoint from his website and saved the conference fee.

More than annoyances, some things just cause an awkward pause in the discussion. Drinking a glass of water can be natural, but PowerPoint forces some things, like slide transitions, to be awkward. The speaker finishes his thought and has to walk over to the laptop, click the down button, and check to see if it worked; or worse yet, spend the entire talk in front of the podium to avoid that problem.

Or, for fifty bucks, you could get a wireless presenter and advance slides wherever you like.

That particular model comes with a laser pointer; mine doesn't have one, but they can be helpful, and you only save ten bucks by skipping it.

Of course, when I use these, I turn my head to verify that the slide worked, but that's about it. Occasionally, I have accidentally advanced a slide when I didn't want to, and not noticed it until later.

Still, it's a tool that helps make the presentation seamless, and it's cheap and small, and it's not a trick or manipulation.

Tuesday, December 05, 2006

I'd like to start a short series on my favorite tools. No fluff, just stuff.

Tool #1: Google Reader. This is like TiVo for blogs. You search for, find, and list your favorite blogs, and it updates itself with a Gmail-like interface when new posts come out. Because Google indexes, well, everything, you can have it notify you when any website changes.

If you've avoided using an RSS-feed reader because they can be annoying or weren't mature yet, try Google Reader.

Example: It's winter in Michigan, and it's snowing. So, on my way to work this morning, I see a truck that is plowing. The truck is doing an excellent job of plowing, shifting from forward to reverse very quickly, accelerating quickly and using heavy brakes. He sure is marching and moving.

The problem was, I was trying to get past him, and I couldn't, because his sudden shifts in momentum were so abrupt that it would be dangerous to pull forward. I had to wait. And wait.

So, the truck was doing an excellent job plowing, but overall throughput on the road suffered.

We do this in software development all the time. We optimize the job for our role, focus on handoffs, signatures, and role-based contracts, instead of trying to be helpful and collaborate. Devs refuse to proceed without documented requirements instead of having a conversation and trying to figure out what the customers and analysts desire. Architects (and I use the term loosely) refuse to begin design until "all" the requirements are elaborated. Testers refuse to begin testing until all the documentation is completed.

Each of these tactics helps the individual role be efficient (or easy), but the overall project suffers. In fact, you can make a strong argument that the Waterfall model gained its popularity not because it was good, but because it was convenient and easy for management. (After all, for the schedule, you can just "set it and forget it.")

In my own career, in general, I've protected the project at my own expense. While I have taken some slings and arrows (and been told at least twice to "know your role, be your role" when I was trying to be helpful and get things done), it's led to more successful projects and allowed me to develop expertise in software testing, requirements, scheduling, and the overall process.

A rising tide lifts all boats, so I would like to submit this to you: Forget your role, keep the project moving, and your title and position might just take care of itself.

Monday, December 04, 2006

A) I want to describe Blue Man Group, and how that might impact our communication
B) Discuss possible ways to contribute to software development, and then what I plan on doing next (my next big thing)
C) I'd like to talk about why I'm blogging, and why you might want to read it
D) I want to talk about the cult of stability in software development

My main goal for the blog is to cover some new ground - to talk about some things that aren't in the textbooks but are important. Yes, Virginia, there really is career advice beyond silly cliches and Going Into Management. Yes, writing and communicating are extremely important in software development; in fact, it's a skill, and it can be improved with effort, direction, peer norms, and advice.

So, here's the interesting thing that happened to me: Addison-Wesley asked me to review eight chapters of a book proposal. It's a book in the review process; they've essentially asked the author to write a first draft and he turned it in.

I asked AW if I could blog about the experience, and they said yes.

So - would you like to find out what it's like to review book proposals? I've placed two chapters on the web; one of the ones that I felt was strong and one of the weaker ones. They are chapter 0 (the introduction or preface) and chapter 4 ("the right stuff") - The book is on teamwork in software development and the author is Jim Brosseau.

Friday, December 01, 2006

I have one graph, which is a stacked-line graph. On the X axis I have time. On the Y axis I have deliverables.

Each deliverable has phases - need requirements, in dev, in software engineering test, in customer acceptance test, waiting for prod, and in production.

I update the chart every Monday. Of course, I am an agile guy, so I dev as I test, so spending a lot of time in SE test tells me something. (If I move to SE test on Tuesday and promote to CA test the next day, it never shows up on the spreadsheet. That's good.)

Looking at this sheet, I should see the size of the delivered features go up regularly. Now that is what I care about. It also shows the relative size of the work-in-progress inventory.

It helps the devs. The testers. The requirements people ... and it's holistic.

Now, if I see things start to stack up at a specific point (especially testing), I know something is going on, and I put more effort into eliminating the bottleneck; that's basic Theory of Constraints.

Of course, it's a first-order approximation. Some deliverables are done in a few days, others take a few weeks. I could weight them, but the unweighted version seems "good enough" for now. Of course, the graph tells a story, so I show it to people in context only.
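The bookkeeping behind the chart is just a weekly count of deliverables per phase. Here's a minimal sketch, using the phase names from above; the deliverable names and their phases are invented sample data:

```python
from collections import Counter

PHASES = ["needs requirements", "in dev", "in SE test",
          "in CA test", "waiting for prod", "in production"]

# Snapshot taken every Monday: deliverable -> current phase (invented data).
snapshot = {
    "claims-feed": "in production",
    "eob-redesign": "in SE test",
    "hipaa-audit": "in dev",
    "member-portal": "in SE test",
}

def phase_counts(snapshot):
    """Count deliverables per phase; stacking these counts week over
    week produces the stacked-line graph (time on X, deliverables on Y)."""
    counts = Counter(snapshot.values())
    return [counts.get(phase, 0) for phase in PHASES]

print(phase_counts(snapshot))  # [0, 1, 2, 0, 0, 1]
```

A pile-up in one column from one Monday to the next is the signal to go look for the bottleneck.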

That has nothing to do with the CMM(I) - it's what I actually do to make my life easier. It has pros and cons, but it gives me and my management visibility into what I am producing, and opportunities for feedback.

As for the CMM(I):

I just spent a considerable amount of time reviewing the CMM(I) Integrated Version 1.1 for Systems Engineering and Software Engineering, looking for a tie between metrics and testing.

I couldn't find it. Of course, the thing is so poorly written that it's probably in there.

The one thing I did find was that for level 4, it could be argued that you need to measure your adherence to the defined process ("Quantitative Project Management"). Now that is relatively easy, and counting test cases doesn't help you get there, unless you have a standard policy of X test cases per 100 lines of code, or something like that.

I don't know your environment, but in mine, I would want a CMM(I) assessor who believed that our environment changes so rapidly that common approaches to test metrics would be naive and premature, and that we could get all of the level 4 goals accomplished without them. (Ref: Handbook of SQA, 3rd ed., Schulmeyer/McManus)

But, to be honest, I have fundamental problems with the CMMI. I suspect that you might be better off reading "Quality is Free" for yourself.

So please take this with a grain of salt. I did a best-effort attempt at answering your question, but my head hurts now.

During my time as a cadet, I mostly sought leadership ("command") positions; I desired power and position. We used to have these summer encampments where we "beat down" the first-year cadets and then "built them back up in our image" - I never agreed with that model. Instead of summer encampment, the summer I was seventeen I went through real US Army Basic Training (in the Reserve), and realized there was a lot more to it than that. It's something between naive and just plain wrong to assume that a bunch of teenagers can do in a week what it takes professional, trained drill sergeants two months to do.

Oh, and fear is a crappy motivator.

The following summer in staff training, I stood up and said "I think we're doing this wrong. Shouldn't our goal be to inspire cheerful and willing obedience to orders?"

Yes, it was a cliche, but that was good - half the room had heard the term before. The attitude of that encampment was split about 50/50, but I'm proud of what we did that year.

At 20 I became a professional programmer, and at 22 accepted a commission as an officer in the Civil Air Patrol. As an adult, my role changed from executive to advisor, coach, and mentor to cadets. About this time I realized that command was no test of leadership; people were expected to obey orders. Instead of seeking command positions where people had to listen to me, I sought staff positions where people had the option of ignoring me. That way, I had to develop my influence skills.

Professionally, I graduated with a Math degree and a concentration in Computer Science, not a CS degree, so I felt insecure about my skills and read every book I could find on development and methodology to "catch up." Eventually, I found that I knew more than the guy next to me, but I was still "wrong" because I didn't get it; I kept going for these simple, elegant solutions instead of developing robust, reusable frameworks. I kept arguing for developing features in slices and saying that our crystal ball was wrong; every time we spent six months developing an extensible framework, the customer would request a new feature and we'd say, "Gee, we never thought of that. THAT possibility isn't in our extensible framework ..."

Clearly, I didn't get it, so I went back to school at night and earned an MS in Computer Information Systems. About this time, I was studying Eli Goldratt, Alfie Kohn, John P. Kotter, Michael Porter, Ed Yourdon, Steve McConnell, and Ron Jeffries. When I found Extreme Programming I about blew a gasket. :-)

Oh, and I read Fred Taylor and Peter Drucker, and realized that the command-and-control structures in the typical North American company are based on an outmoded, anti-intellectual-for-workers approach that Taylor developed for European immigrants in the early 1900s. The typical employee in his first study had, on average, a third-grade education - typically in German. Just as Drucker said, today's white-collar worker is better educated and has a larger scope of responsibility than the second-level supervisor of 1910.

After that, I started studying Jerry Weinberg and General Systems thinking. When I graduated, I found that I had developed a habit of spending ten hours a week on professional development, and just kept at it, turning that time into writing, speaking, and consulting.

Which takes us to today. I view software development as intellectually challenging, creative work. I am interested in two forms of innovation - upstream (ideas to improve the product) and downstream (ideas to change and improve the process.)

My current bugaboo is traditional "process improvement" models that focus on creating stable, predictable, repeatable systems, or that focus on implementing a complete spec. My software projects are all different; trying to be repeatable when you are doing different things doesn't make sense to me. And the focus on the complete spec over collaboration eliminates my ability to do upstream innovation.

After fifteen years of feeling insecure about my skills and being patted on the head, told that one day I will "get it", I'm beginning to believe that it is in fact the Cult Of Repeatability that doesn't get it. I think they read the two-page summaries of Deming, Juran and Crosby and missed the point. They should go back and read the entire book.