I am always glad to see someone pointing out that exception handling is horribly handled!

First, a disclaimer. I’m a bit old school. I think exceptions are generally bad things that have led to worse code and worse error handling. I recognize that I am in the minority and that the ship has sailed, though.

Mr. Jarvis is raising a point which goes to the heart of the problem. We write code that swallows exceptions (or logs unimportant ones, because we get yelled at for swallowing exceptions) because our libraries use exceptions poorly and because our development culture uses them even more poorly. Exceptions, no matter what academic purists and book authors claim, are used to mean any of several conditions in the real world, and they’re used with the same syntax despite the differing semantics.

In the wild (including in the .NET library), an exception could mean any combination of:

The function couldn’t perform its contract (the classic definition of an exception).

The function is informing you of a side effect (“Hi. I’ve consumed all your disk space. Have a nice day.”). This may not even be an error.

Some internal object threw an exception. I hope the library correctly listed every exception that type can throw AND every exception the functions it calls can throw. And that those, in turn, are correctly listed.

There is an informational message you should log, but you can continue.

We’re done with a loop. The exception indicates the end of some repeating condition (end of a file, a closed network connection, etc.).

And we may want to respond wildly differently:

The system state is inconsistent, exit the app ASAP. This is often the correct answer in client apps.

Try a different function.

Move on. For example, some type conversion failed and the original value is as good as it gets.

We get in this situation because we use exceptions poorly, but the response can’t be “Use exceptions better.” We have to deal with libraries that we can’t change, we have to deal with legacy code that we shouldn’t change, our teams have hired (and will continue to hire) people of varying skill, and writing exception handling isn’t the reason we’re writing software. We’re writing software to meet some business need (or some general need, if you aren’t comfortable with the word “business”). Exceptions take up too much of our time (as does anything that isn’t about making our users and customers happier and more successful).

For a fabulous example of the current situation, consider the joys of int.Parse() and the need for int.TryParse(), which has a horrible legacy syntax and obfuscates the logic of the code.
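To make the contrast concrete, here is a minimal C# sketch of the two styles (the input string and fallback value are invented for the example):

```csharp
using System;

class ParseDemo
{
    static void Main()
    {
        // Exception style: the happy path is clean, but handling the
        // failure means wrapping a single statement in a try/catch.
        int a;
        try
        {
            a = int.Parse("not a number");
        }
        catch (FormatException)
        {
            a = -1; // fall back to a default
        }

        // TryParse style: no exception, but the boolean return and out
        // parameter obscure the intent ("give me an int or a default").
        int b;
        if (!int.TryParse("not a number", out b))
        {
            b = -1;
        }

        Console.WriteLine($"{a} {b}");
    }
}
```

Neither version lets the logic ("parse this, or use a default") read as one thought; that is the obfuscation complained about above.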

So we’re left with a mix of reasonable responses to exceptions:

Catch the exceptions we can do something useful with, even if that is crashing, and let some unknown error handler above us catch the others. This is the most common suggested solution.

Catch the exceptions we have some alternate response to and swallow others with some default action.

Swallow everything because failing what we’re doing is not critical. Some comment to this effect makes code reviewers happy (and is generally good for other reasons!), but the same boilerplate comment in many places actually decreases readability. Differentiate Decision from Idiom.

Log the exception so we don’t get dinged in a code review for ignoring exceptions. Off-site code reviews by outside consultants love to make long lists of correctly swallowed exceptions and present them to managers who are less technical, or who don’t have a familiarity with the code or platform and can’t identify which swallows are well-considered and which are just lazy1.

This post calls out a good response that is often left implicit, because it also gets dinged in code reviews and it is a little scary:

Let any unknown upstream caller catch exceptions, if it can, and hope it can handle them
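A minimal C# sketch of the combined stance: catch what we can do something useful with, and let the rest escape to whatever unknown upstream caller exists. The file name, fallback, and exception choices here are invented for illustration:

```csharp
using System;
using System.IO;

static class ConfigLoader
{
    // Returns the config text, or a fallback for the failures we
    // actually have an answer for.
    public static string LoadOrDefault(string path, string fallback)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (FileNotFoundException)
        {
            // Expected condition: no config yet, defaults are fine.
            return fallback;
        }
        catch (UnauthorizedAccessException)
        {
            // Known condition we can survive: treat as "no config."
            return fallback;
        }
        // Anything else (OutOfMemoryException, a dying disk throwing
        // IOException, ...) propagates upstream; we hope someone handles it.
    }
}
```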

Here’s what I’d like to add: It’s a good response when a few conditions are met. In other conditions, other responses make sense (of course). Code should express its intent and assumptions to other programmers who will have to maintain it for years to come; it needs to be readable as to which conditions the programmer assumes are present.

All exceptions are handled here. When the libraries change, this code needs to be re-evaluated for exception handling.

Known exceptions are handled, but bailing out to upstream handlers is acceptable. Finally blocks are used correctly, for example.

Exceptions are informational, but shouldn’t stop execution.

Downstream code throws exceptions when this code doesn’t care and downstream functions use the “pass exceptions along and hope they’re handled” philosophy. We just want them to go away.
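One way to make those assumptions readable is to state them right where the handler sits. A hedged sketch (the telemetry scenario and names are invented, not from any particular library):

```csharp
using System;

static class Telemetry
{
    // ASSUMPTION: telemetry is best-effort. Exceptions from the transport
    // are informational and must not stop execution; we just want them
    // to go away. Re-evaluate this block if the transport library changes.
    public static void TrySend(string payload, Action<string> transport)
    {
        try
        {
            transport(payload);
        }
        catch (Exception)
        {
            // Deliberately swallowed: failing to send telemetry is not
            // critical, and we can't handle a transport failure any better
            // than our caller could.
        }
    }
}
```

The comment names which of the conditions above the programmer assumed, so a maintainer can tell a decision from an accident.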

Also, note that there is a significant difference between handling exceptions around one statement, such as int.Parse(), and around a statement block which may throw exceptions from any of several calls.

I augment coding standards with something less draconian: a “stop” list (https://s4sd.wordpress.com/2008/09/17/development-red-lights/). This is a list of statements, idioms, and techniques which require stopping, stepping back, and explaining to another (senior) developer. If a second set of eyes agrees that it’s reasonable in this context, you go forward (and say who agreed in the check-in comment!). Swallowing exceptions is a good Stop item.

Make a considered and deliberate choice in how to handle exceptions. Then make it clear. Code that swallows (or logs) any exception indiscriminately won’t go away. It’s impossible to handle exceptions better than your downstream calls throw them and real, reasonable conditions arise where exceptions in downstream code just don’t matter. As an obvious example, an exception from an error logging call (say, network or disk unavailability is preventing logging) shouldn’t be logged.

Just for fun, here’s a cute way to document swallowing errors. “Cute trick” is usually a code phrase for “bad for maintenance,” but maybe this one is reasonable.
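One hypothetical version of such a trick (not necessarily the one the author has in mind): wrap the swallow in a helper whose name *is* the documentation, so an empty catch block reads as a decision rather than an accident.

```csharp
using System;

static class Deliberately
{
    // The method name carries the intent to every call site, replacing
    // the same boilerplate comment pasted all over the code base.
    public static void Swallow(Action action)
    {
        try
        {
            action();
        }
        catch
        {
            // Intentionally empty: the caller declared this non-critical.
        }
    }
}

// Usage: Deliberately.Swallow(() => logFile.Flush());
```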

A lot of teams that do produce specs could benefit from adding this one: it’s a hidden piece that gets shuffled between Marketing, Development, and Support, often winding up on the floor between them. It’s often a short document, but the thinking that goes into it can be subtle and it’s often a point of disagreement. Get it on paper as early as you can and let it change as it needs to1.

The Environment piece of the No-Spec Spec describes the context where the product is intended to operate. This can be pretty short; sadly, it’s rarely simple.

Although you should only include sections relevant to your project, Context (Environment) can include:

Who is the user?

What is the user trying to accomplish?

Is the user the same as the customer? If not, who is the customer and what are they trying to accomplish?

What is the timeframe the product runs in? Does it have an expiration date? Note that this is different from the product schedule. It is about how the business/regulatory/calendar environment affects the product. A game may need to ship in November for sales reasons, but that doesn’t mean playing it is tied to the calendar. Software for students, however, may be affected by school schedules in the target area.

Is there a time constraint on execution? (Only at night, only while the store is open, must finish before opening of business, etc.)

What hardware is the product running on?

What OS does the product require?

What other software is running on that hardware?

What other business processes does the product interact with?

What other products does the product interact with?

What kinds of aftermarket extensibility does the product expect? (Note that this relates to, but is not dependent on, the Capability document.)

What regulations control the product’s behavior?

What kinds of support does the user/customer expect as part of normal use?

What continuous billing or auditing does the product require?

In many companies, these answers are kept in different departments. In many companies, these answers are never written down.

Conceptually, this is a simple doc. The hard part comes when people try to pin the answers down2.

Some sample answers (no, these are not for the same hypothetical product):

User

High-school or college student with a laptop or tablet that they take to class.
Some functions require a tablet.
(List tablet functions)

User’s goal

Have one or more chosen songs play while enjoying a social time out

Customer:

Owner of the jukebox

Customer’s goal:

Continual income from users paying to play music

Timeframe:

During school sessions (Sept-Dec and Jan-May in most places; list the trimester schedule if appropriate)

Time constraints:

Software updates and new database contents must download and install (or rollback) by start of business

Monthly metered billing. We need to connect and read the use meter by the 5th of each month.

This is a pretty simple document (although you want more detail than the example above!). You may be surprised how many arguments break out over defining the answers, though. Get through them.

Then the answers will change. When this happens, you need to re-evaluate a bunch of things. At a minimum, re-evaluate the Behavior and the Capability documents. Most of the time, very little will change, but when it winds up being a big change you’ll be glad you didn’t miss it. And if the Behavior or Capability do change, re-evaluate some non-spec documents. At least validate that technical design and test plans are still correct.

Pay special attention to the user and customer goals. These are the primary downstream consequences of the Fit, Identity, Purpose, and Assumptions. Make sure everything else in the document supports those and make sure they are re-validated if the upstream documents change.

This doc is no less important than the others, but it is usually easier. Enjoy having it: you will find that it helps in unexpected ways.

1) Make a sign. Put it up where everyone can see it. Put these words on the sign: “Don’t get it right; get it written.” Really. Do it.

2) One product I worked on could only define the user as “anyone with a credit card.” That was pretty far from accurate, but it was even farther from useful.

Go read his post. And read the comments. And if you forget what it says (since I know you went and read it, didn’t you?), here’s a short summary.

Lippert gives us two examples in which two people, Eric and Alice, meet to talk over lunch. In each, Alice starts by saying, “I assume you know what I want to talk about.”

In the first example, Eric responds with a “yes” and tries to launch into the conversation, only to discover that he was wrong; on top of that, he was telling Alice something she didn’t know and possibly telling it in an overly abrupt way.

In the second example, Eric says he doesn’t know what Alice wants to talk about and discovers that it is something completely innocuous and easily dismissed.

[If you didn’t read or don’t remember his post: Eve and Bob are two people mentioned in the first example; Alice is concerned that Eve is threatening her relationship with Bob. You’ll want to know that for later on.]

Lippert correctly points out that Eric is guilty of mind reading1. He thinks he knows what Alice wants and proceeds without checking it out. Lippert provides a great image, “Remember, there are at least two thick slabs of bone between your brain and everyone else’s brain. Those thick slabs of bone impede telepathy.”

Mind reading is one of the most common occurrences in daily communication and it causes a vast number of hidden problems. It’s also fundamental to how humans think: we make models of things, including other individuals, we “try on” other people’s experiences, and we generalize behavior, including other people’s behavior, to reduce the load of detail we have to consider. It’s a strength; the problems come when it’s misapplied and unrecognized.

Lippert gives us a great, short caution about mind reading. So let’s go a little deeper.

Mind reading is a subset of assumptions and it usually manifests as (cognitive and linguistic) presuppositions. Understanding and recognizing them is fascinating, fun, useful, and complex (for example, in Lippert’s examples, Alice mind reads—even before the conversation starts—that Eric knows what she wants to talk about).

Recognizing mind reading is great. Lippert’s recommendation to sidestep and say we don’t know what someone is thinking is a good, blunt rubric. But to be truly practicable, we need a little more. When we start to form judgments or give recommendations we have to go outside the purely-cognitive world of assumptions; we need to include relationships, goals, context, and emotional states.

It’s great stuff to think about and it matters to each of us every day we interact with people. Paying attention just to mind reading will improve your whole team’s rapport and productivity immediately. Notice and utilize assumptions and presuppositions as well and it only gets better. And if you start to recognize and adjust response potential, which is touched on below, you’ll become a miracle worker.

There is more going on than the mind reading problem, although that alone is enough to have ruined more marriages than poker night.

Mind reading is essential to community

Alice wants to talk about something and she assumes that Eric a) knows about it, b) considers it important (at least to Alice), and c) is expecting to talk about it now. She’s mind reading and making some risky assumptions. The nonverbals (voice tone, facial expression, physical proximity, choice of meeting place, etc.) could indicate if it’s a positive subject (to Alice) or not2.

In the example, Alice not only assumes she knows what Eric is thinking, she also assumes she knows what Eric wants and needs. Eric’s response is the second mind reading. They could each have prevented the telepathy problem, but they can only prevent their own assumption that their subject of interest is also the most pressing subject (or even important at all) for the other person. To clarify: Alice wants to talk about subject S1. She assumes that a) Eric knows about S1, that b) Eric knows that Alice knows about S1, that c) Eric knows that S1 is so important/urgent (to Alice) that Alice wants to talk about it.

Sometimes, Alice and Eric may guess right. Maybe even most of the time3. They each expect not only that they know what the other is thinking, but also that the other expects them to know. Answering "no" violates that expectation and might cause an abreaction4.

It’s a cultural expectation

We expect people to mind read. Expect it too much and we’re being imperious (“he never told me he wanted it done that way!”), but expect it too little and we’re being detached. If someone always assumes you don’t share their thoughts, if they explain themselves constantly, they come across either as incredibly low in self esteem or (more commonly) as incapable of connecting with the people around them.

In the example, Alice expects Eric to mind read. Her opening statement is a challenge to ensure he does. It’s a version of the “guess a number” game. Alice is not only assuming that Eric can read her mind, she is assuming that Eric knows what’s important to her and gives it enough thought to make assumptions about the meeting. If Eric doesn’t know what Alice wants to talk about, he’s not only been wrong and violated her expectations about the meeting (producing some cognitive dissonance to get over), he’s violated her belief that he was aware of her and her needs/goals/values/thoughts.

Eric’s response (in example one), where he assumes that Alice wants to talk about Bob and Eve being seen together, doesn’t have the same degree of challenge. He presents information instead of asking for it, and his mistake produces cognitive dissonance. In the example, Alice is mollified because the information he presented was in line with her values and needs. He didn’t know what she wanted to talk about, but it was something she would have wanted to talk about. It’s worth pulling this out because Eric passed the challenge (“guess what’s important to me”); the resulting problem masks the greater danger in Alice’s assumption.

Culture, Context, and Communication

Let’s go a little beyond the mind reading part. Whether someone is mind reading is measurable and simple5. Why someone is mind reading, to what degree they’re aware of it, how it affects relationships, whether it’s a help or a hindrance, what impact challenging it would have, etc. are far more complex questions. They’re also where the rubber meets the road: knowing that you don’t really know what someone is thinking, even if you often guess right, is nice, but knowing what to do with that knowledge is useful.

Response Potential

The statement "I suppose you know why we’re here" is intended to build response potential. It increases the importance of the subject and tends to increase the emotional content of the conversation. This type of challenge is an indication that emotional response is what Alice wants (for example, when it’s about an upcoming party, it’s intended to increase the excitement; when a cop uses it, it is intended to increase the emotional responses of powerlessness and culpability). Responding logically to an emotional plea is usually a recipe for mismatch, mistrust, and missed opportunity. It usually pushes one person much deeper into emotional response, forming a loop familiar to many people ("I’m so angry with you!" "But honey, look at it rationally!" “You never listen to me!” “I just told you I can fix it!”). It can also drive the person responding rationally into an emotional state or into detached (often withdrawn) emotional unavailability, often without either party realizing it6.

High response potential is another term for volatility. Eric may want to lower it (based on Alice’s nonverbals and how much they match Eric’s mood) or he may be comfortable with it. If he wants to avoid or reduce the volatility, a break state is a good response7.

Context is King

In mind reading Alice, Eric has to consider: is there a content frame around the meeting (Alice has come in to complain about Eve twice a day for a week)? Is there a context frame to the relationship (in an asymmetrical relationship like boss/employee, especially, one might reasonably assume the talk is about that context)?

Eric has a lot of options when Alice says “I assume you know what I want to talk about.” The mind reading isn’t even a factor in all of them. Depending on what he wants, Alice’s apparent state of mind, etc., Eric could:

Answer "no" and trust that Alice will be ok with that

Answer "yes" and say what he thinks Alice wants to talk about, but keep it to what Alice has mentioned as an issue/concern/fear/etc. in the past

Answer "yes" and say what he thinks Alice really wants to talk about, even if it’s been concealed or unconscious (for example, give his interpretation of Alice’s situation rather than give his observation or reflect her statements [using the "observation-interpretation-advice" model])

Answer "yes" and say what *he* wants to talk about. This might shock Alice (who is already deep into a context that might be different), which can be bad or good (it might get her to change from an unpleasant state to a better one, like curiosity or empathy; at the least, it might get Eric what he wants from the conversation!)

Dodge ("you tell me," "yes, but I want to hear it from you," "no… you start," answer "yes" and just look expectant, …)

Lay cards on the table ("I think I know, but I could be wrong. Do you want to hear what I expect or just want to tell me what you want to talk about?")

Break state (make a joke, change the subject)

In the second example, Eric leaks information. But it’s clearly information he expects Alice to have, doesn’t mind Alice having, and wants to talk about. Eric may actually be happy about how the conversation went to that point; even if Alice didn’t already know about Bob and Eve being seen at Snooty Pretentious Mistress Dive, Eric has a chance to frame it from the start. Eric’s response brought up something *he* wanted to talk about. He was wrong with his mind reading (and Alice was wrong with hers), but he’s responded to the context and subtext of the situation and he’s talking about something he wants to.

He also hasn’t hurt his relationship with Alice: she didn’t immediately denounce him for telling her (or for having not told her earlier, which seems like a good one to me) and he is still in an advisory role, allowing him to a) help his friend Alice, b) protect his scummy friend Bob or Eve, c) attack Alice and Bob’s relationship subtly so he can put the moves on Alice (or on Bob, or on Eve), d) increase Alice’s perception of his power and responsibility, e) get Alice to pick up the check, f) get an opinion on the food at Snooty Pretentious Mistress Dive, or whatever he wants.

And in the end the real points are about context, relationship, and what people need. What does Eric want and need? What does he think Alice wants and needs? How can he support/maintain his relationship (boss/friend/coworker/conspirator/lover/partner/…) with Alice? Is he there as an advisor, an ear, a person with power, a co-conspirator, a co-victim, an antagonist?

Like the rest of us, Eric and Alice need to remember that they don’t know everything, they won’t say everything right, other people understand them better than they fear, and open and honest communication both works and takes work.

1) In this case, “mind reading" means making an assumption about what someone else is thinking. The “Meta Model of Communication” (a model coming out of neuro-linguistic programming and originally designed for therapists) lists it as one of the common “violations” of clear and complete communication that get in the way of therapy. The meta model tries to give content-neutral, linguistic tools to identify its basic “violations” and describe specific (if occasionally rude) responses.

2) One limitation of examples posted in text on a blog is the loss of almost all nonverbals. “I assume you know what we’re here to talk about” has a clear denotation (that the speaker is owning up to an assumption) with layers of connotation (the speaker is offering the listener a chance to challenge the assumption, the speaker wants to control the initial subject of discussion, …), layers of presupposition (the listener knows there is a conversation imminent, the conversation is important, the conversation has a prepared subject, …), and is open to wide interpretation (is the speaker angry, scared, joking, excited? is the subject positive or negative? is the matter private or public?). The connotations are primarily cultural and the presuppositions are mostly universal. The greatest variety comes from the interpretations the listener makes, and these interpretations are formed from the context and the nonverbals. They can’t come from the verbals alone, since the words are what carries the denotation, connotation, and (most of the) presuppositions.

3) To some degree, this (how often the mind reading is both expected and correct) depends on the degree of rapport the speaker and listener have. It can also depend on to what degree the content and the expectation (what one person assumes another is thinking and whether the other person assumes the first knows) is part of a shared context (including—and often—part of a shared culture) rather than idiosyncratic and surprising.

4) One example in the comments on Lippert’s blog was Alice responding to Eric’s “no” by saying, “You mean you forgot that tomorrow is our anniversary?” If Eric did remember and did expect Alice to want to talk about that, saying “no” would have caused more communication problems than mind reading would.

5) To measure if a statement about someone else’s mental state is mind reading, ask what sensory evidence supports it: what can/did you see, hear, smell, taste, or feel that indicates that this statement about someone is true?

7) Learn to create break states. Use them to interrupt repeating cycles in teams, to nip arguments in the bud, and to separate meetings into “chapters” so people keep energy up and remember better. You can introduce a break state with humor, with a distraction, or by simply changing people’s physiology and metabolism: get them to move and breathe. I kick people out of a meeting room for 5 minutes at one point in my timeline postmortem process; this is why. Opening the door to a conference room and getting people to breathe works. My personal favorite is to suddenly sniff a few times and say, “Hey, do you smell popcorn?” [Thanks to Tom Hoobyar for that one.]

Most likely, the first time you write an Assumptions piece, it will seem easy. The second time, it will seem very hard. And the first time you have to apply it, it will take courage and make a lot of people angry.

It’s worth it. Especially the making people angry.

The Assumptions document is a place to write down what should make you rethink the project: what things outside of the project could change the requirements, change the deadlines, or even suggest cancelling the project. You want to start this list early, before you are in the day-to-day give-and-take that makes everything seem more—or less—fluid than it really is.

Assumptions come in a variety of categories, but they all represent something that has to stay true for the project to proceed as planned. For each assumption, identify what will be affected and what process or work is invalidated if the assumption proves false1.

Common areas of assumptions:

Technology

The 3rd party dependencies are available on time

The 3rd party dependencies work as planned

Internal dependencies (libraries, hardware design, etc.) are available and work

New versions of tools and libraries integrate smoothly

Team

The team can focus on the new project as planned (rather than being pulled onto other work)

Key personnel remain available

Open positions can be filled on schedule

Outside contractors perform as expected

Corporate

The corporate structure continues to support the project

Necessary resources are available to support the project (no emergency or suddenly-critical project consumes all IT support or asset creation, for example)

Business

Competitors behave as expected (with respect to the project)

Business partners remain committed (and in business)

No unexpected developments that affect the project (e.g., new entries into the project space)

When you write your assumption list, don’t be general. Don’t say "3rd party tools work as expected." List them, list when and where you expect to apply them, and list what they impact. When they’re written most usefully, assumptions are digital: they are either true or not. Don’t accept "well, it’s mostly true, so we don’t have to consider changes." Down that road lies failure.

An assumption comprises:

A statement about the project’s environment that you can test as true or not

A list of disciplines that should re-evaluate the project and what they need to re-evaluate. This can be as complete as "is this still the right project to do."

Here are some examples:

Assumption: Congress will authorize <specific regulatory change expected>2
If this assumption changes, re-evaluate:
– Legal: list of likely implications
– Sales: begin managing customer expectations and bring the results back to Business
– Business: re-evaluate whether the project is still viable
– Product: review and sign off on the entire functional requirement

Assumption: Ad sales will remain our main source of revenue
If this assumption changes, re-evaluate:
– Product: re-evaluate the design
– Business: re-evaluate whether the project still makes business sense

Assumption: Hardware development and production will occur on schedule
If this assumption changes, re-evaluate:
– Product: features dependent on the hardware
– Sales: commitments and schedules
– Product: relative priority and urgency of other projects

Assumption: Legal issues with design <whatever> will resolve by the time we ship
If this assumption changes, re-evaluate:
– Legal: advisability of shipping while in litigation
– Product: can features based on that design be removed?
– Development: other designs
– Business: schedule vs. features vs. risk

It is important to note: "re-evaluate" is not the same as "check that what we said is still valid." Looking at the old plans and seeing if they can be made to work is how projects spiral slowly into failure, always chasing one more fix to react to the world. Re-evaluate is just that. Look at the product, the project, and the business goals and decide what is needed; then compare that to existing plans. The whole point of the term "assumption" is that it indicates a surprise. Whatever you did before could not have been based on the current conditions.

An assumptions list is very much like a risk list. It is similar, but neither completely replaces the other. The risk list identifies things that might happen to affect a project and lists amelioration plans; it focuses on the things you most want to prepare for—the things you most expect and which you can tweak back under control. A risk list is an attempt to create a proactive situation; assumptions stay reactive. The assumptions list calls out things you do not expect to surprise you. These are generally things you cannot control. You put something on a risk list to make sure you pay attention to it. An assumption is something you don’t pay attention to every day, but when it calls for your attention you have to respond fully.

Risk vs. Assumption:

Changes: a risk changes often; an assumption changes occasionally, maybe never.

Respond with: small changes for a risk; a complete re-evaluation for an assumption.

Control: risks are internal; assumptions are external.

Scope: risks are usually smaller; assumptions are usually larger.

When one of the assumptions changes, you don’t expect to make a subtle change. You get people together, demonstrate what’s happened, and ask, "From the current plan, what is now wrong? Not how to fix it, but what needs to be replaced?" You can work out a new plan when you know if the project is still worth doing.

So what do you gain? Clearly, most assumptions wouldn’t go unnoticed during a project (although some do). The assumption list gives two direct benefits and one powerful indirect one.

It serves as a commitment to respond fully when circumstances change. Because many assumptions are large and pervasive, it is common to try and keep moving while reacting. The team keeps working on something they know isn’t right and the project drifts while plans are half-made and tried on top of each other. Often, no one wants to call something broken and recommit4. The assumption list provides a visible, binary test for when to withdraw and re-evaluate.

It reminds the team that the whole project is affected. It is very easy for one discipline to believe that a problem doesn’t affect them5. Often, a faulty assumption isn’t fully visible to every discipline, and it’s as easy for the discipline most directly affected to overlook it as it is for a discipline very remote from the assumption. Pulling out the assumptions list and showing someone how many groups are affected and how significant the impact is can help you get people to give up a problem to the group or to agree to devote their time to a situation they didn’t think was important.

1. This is where the dependencies between the elements of the No-Spec Spec can help you, if you’re using all the parts. Often an assumption supports one piece of the process: the behavior, the mission, etc. If that needs to change, it’s relatively easy to see what other pieces are affected.

2. I worked for E-Stamp back in ’99. Being in a business where business plans (and technical plans that follow) are dependent on congressional action is incredibly difficult. My heart goes out to those who try and my hat goes off to those who succeed3.

4. Engineering is usually the last to admit that something isn’t worth fixing. Consider a broken third-party tool or library. Most engineering groups will muddle along with it for months, causing delays and introducing quality risk, rather than announce that they won’t make it work and choose another route. Doing so turns a technical problem into one affecting every part of the project, but the problems often creep up in small increments and engineers never like to throw out something they might be able to fix.

5. Again, it’s as common for Marketing to decide to ignore a problem as being "Engineering’s responsibility" as it is for Engineering to try and convince Marketing to do so. I’m amazed at how often major problems in development groups are overlooked because non-technical people saw it first.

I’ve been told I’m too grumpy-pants lately 🙂 So here’s a quick non-rant with what is, I hope, constructive feedback for an app I think I might wind up liking in a few revs.

I asked @dariusdunlap what he was using to follow hashtag searches on Twitter, since Twitter itself is so horribly broken about them and he was so well on top of the Community Management Unconference happenings a few weeks ago. He recommended Seesmic Desktop (@askseesmic). So, I’ve been giving it a quick glance-over.

As always, I’m just hoping the developer (who seems far too nice and with-it to do this) comes back and says, indignantly, "That feature is in the product!" because I love quoting the lesson my friend Sarah taught me with an off-hand comment that changed my whole understanding of product design, especially from my end of it with the coders: "If the user can’t find a feature, it isn’t in the product." But I suspect Team Seesmic is ahead of that already.

Net-net

+ It’s worth a look if you need to follow search results regularly or if you want to separate people you follow into subsets for viewing purposes, both quite reasonable use cases most Twitter apps don’t support.

– No thought has gone into the interaction metaphor or user flow, which means you’ll be learning to work around how the designer thinks instead of how to get your job done. And the technology it’s built on (Adobe AIR) is a famously least-common-denominator platform that will leave you reaching for common, everyday conveniences Windows added in the ’90s.

= Integrates Facebook viewing into Twitter, but only for one Facebook account, and it doesn’t really "get" the difference in Facebook use. It’s really just a view of some Facebook stuff in your Twitter feed, so it’s a nice-but-not-important feature in the current implementation.

Summary

Great for watching one or two of N things at a time and switching which one or two at will

Bad for grazing content

Badly needs some help text. I think this is what I needed to know when I started, but nothing actually tells you how to use the app unless you want to spend time watching videos:

The nav bar and the column immediately to the right are the whole of the basic UI. Some columns will be pulled out and stacked to the right of the nav bar’s column; these are called "detached," but they are still attached.

The "Home" view is what all of your accounts can see, in one timeline

The @Replies view is a search on Twitter for all of your accounts (but does nothing for Facebook)

When you send a tweet, check which of the accounts it will send it through: you may be very surprised (and embarrassed!)

Make a userlist and add people to it. Then exit the app and check that you did it the way it expected. When you have it right, it will begin aggregating new status updates from those people into that list. It is not a search function: it is a new, special bin for new items to be copied to.

Some nice stuff

Love the handling of text over the length limit (graying out text after the line and changing the chrome UI)

Userlists are quite nice. Well, they would be if they worked as expected, but they’re pretty nice so far

But, some major, major problems

(These are probably showstoppers for using Seesmic in any non-toy capacity at this stage, but it’s still very much in development)

No app should change what a user writes without explicit authorization: the default install converts URLs via some URL-hiding company. This needs to be opt-in.

If you have Facebook enabled, scrolling down the timeline is unusable. As you scroll, it downloads images that resize the cells you’ve scrolled past, making things on-screen jump down, off-screen. Basically, if you have Facebook enabled, you shouldn’t try to scroll the timeline unless you scroll it all the way (by paging, not by dragging the thumb), walk away until every picture it passed has loaded, and then head back up. It’s amazing to me how unusable this is. I wouldn’t have expected it (and clearly Seesmic’s creator didn’t either, although s/he could have tested it)
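The jumping described above is classic layout shift: a cell above the viewport grows when its image arrives, pushing everything below it down. One common mitigation (my own sketch, not anything Seesmic does; the function and all numbers are invented for illustration) is to compensate the scroll offset by the same amount whenever a cell above the viewport resizes:

```python
# Sketch of scroll-offset compensation when a cell above the viewport grows.
# This is illustrative pseudocode for the idea, not any real toolkit's API.
def compensate_scroll(scroll_offset, cell_tops, resized_index, height_delta):
    """If the resized cell starts above the current scroll position,
    shift the offset by the same delta so on-screen content stays put."""
    if cell_tops[resized_index] < scroll_offset:
        scroll_offset += height_delta
    return scroll_offset

# A cell at y=100 grows by 80px while the user is scrolled to y=500:
print(compensate_scroll(500, [0, 100, 300, 700], 1, 80))  # 580: view stays stable

# A cell below the viewport growing should not move the offset at all:
print(compensate_scroll(50, [0, 100, 300, 700], 1, 80))  # 50
```

The alternative fix is even simpler: reserve each image’s final dimensions before it loads, so cells never resize at all.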

Adobe AIR is easy to code for and can be a lot of fun, but it really isn’t ready for end-user apps: AIR eschews interaction standards on the OS; instead it behaves like Adobe’s former Macromedia people think the universe should.

Even if you agree with them, having Seesmic be the one app on your system where control-right takes you to the end of the word, rather than the beginning of the next word, means you have exactly one app where all text editing takes all of your attention and breaks all of your habits. Sadly, I can’t recommend using any AIR app that hasn’t been custom-built to your needs, since it’s going to be so broken in little ways you won’t feel until after you’ve paid for it.

Some details

Bad UI

(Some of these might be AIR’s fault, but I don’t think so. I haven’t written for AIR in about a year, though)

Can’t resize columns. Hope they’re at a size you like (they aren’t for me)

Can’t stack columns. Horrible for splitting out several things to watch at once

Can’t fill people into a userlist unless you have content from them

Creating a userlist doesn’t do back-searches, even if the content is already present. I made a userlist on Friday and it doesn’t aggregate content until people start tweeting on Monday. Stupid.

Scrolling is horrible as images get pulled in, moving what you’re reading around while you’re reading it

Definitely AIR problems:

(Seesmic touts AIR’s ability to run on multiple platforms, which is kind of nice, but it means, at least in this case, that the UI implementation sucks on them all—it’s the same reason any UI in Java sucks1. Consistency with the platform is why we don’t have to spend all of our time remembering what keys do what and can instead complain about what happened when they did it. If it works exactly the same on a Mac, then those users are just as unhappy in different ways.)

Whole-word selection is inconsistent with the platform

Control-arrow behavior is inconsistent with the platform

Can’t resize window except at lower-right thumb

Context menu behavior odd and inconsistent

Mis-reports its frame size. Maximize it and it will spill over onto the next monitor

AIR’s drop-down lists don’t behave much like Windows lists (well, they behave similarly, but they try not to look like it, leading to user confusion).

Broken Bits

I cannot find a single link to the Seesmic web site anywhere in the app

Defaulted to editing my tweet text and using bit.ly without my approval. Editing my words should be opt-in! Note that I believe url hiding services are an evil that is rarely necessary.

No way to indicate that there are new items in a search (this may be a feature request at some level, but it raises an interesting question: are searches polled or are they pulled on demand? The configuration UI implies that they are refreshed on demand…)

Need context menus on items in the left nav bar

Why does clicking a hashtag launch the web site instead of searching in the app?

Need to be able to size or hide the nav bar

YouTube videos are not a replacement for documentation. Ignore your ego, be willing to spend some actual effort writing words instead of code, and pen a few lines people can read (without devoting blocks of time and without needing audio) to decide whether to use your app and to learn how.

Sending a tweet seemed to send from two accounts at once when I didn’t expect it to. That’s an advanced use of the app; don’t make it so easy to blunder into. Now that I know about it, I’m reasonably safe

As an aside, it says something that I assume the checkboxes on the account selector were a mistake and were meant to be radio buttons. The rest of the app is pretty clean, but that seemed like a likely mistake given the limitations of AIR, the stage of the product, and the general level of attention to detail in the interactions throughout the app.

Use of the X (standard meaning when on bar of icons affecting a window: close this window) to mean "delete the item that launched this window" is horrible. That it follows up with a task-interrupting modal dialog should be the clue that it’s bad design

Let’s be more specific: here are the buttons on the title bar of a search frame: After the "refresh" button, we have 1) a trash can, 2) an X, and 3) a "go back (through the door)." Which one means "reset the data", which means "delete the search", and which means "close the window"? I’m betting on 99% of people getting it wrong the first time.

And remember, every search frame will have all of those buttons forever, no matter how tired you are. Don’t ever click that X thinking it shuts the window…

Userlists remember that they are in "edit" mode far too much. Make editing a task, not a state.

Similarly, having "search for a person" be a state of an account means I can’t return to that account without cancelling the search, and I can’t search for two accounts at once. This interaction model needs some thinking.

Button to enter edit mode on a userlist is nowhere near the buttons to end it. Very confusing, especially with the current bad IA around what editing means. Very easy to think you’ve renamed a list, but be wrong.

No feedback when refreshing

Adding a search is completely unlike adding anything else, but shouldn’t be. It’s one line of text, just like a new userlist, so it should have a plus sign on the nav bar

Missing features

Multiple Facebook accounts

Drag and drop to userlists (actually, dnd is missing everywhere)

# of new items by userlist and search (in the nav… the popup notification is nice but it doesn’t work well for "what’s new over lunch?") At least bold out the search if there have been new items since it was last active (showing in the special, left-most columns or showing in a detached column that has the focus)

Context menu on an unselected hyperlink should refer to the link, not the whole text of the item

Context menu on an unselected user id should refer to the user, not the whole text of the item

Export to some local format. Given the passable search handling and the very nice userlist feature, this would be essential to use this for actual business purposes

Let me separate the detached frames to a different top-level window or to the desktop. They are fundamentally different from the one, magic frame tied to the nav bar. This UI metaphor is pretty broken, but it isn’t easy to see what would be better, so give me a stopgap. I spent 10 minutes just trying to figure out why I couldn’t rearrange and resize the detached columns, since they are still completely attached, just not "leftmost"

Minimize to Tray

View list of people I’m following (and friends on Facebook) and apply to userlists from there

Wrapping up

It’s an interesting product and fun to use. I recommended that my gf use it to track some work with a client and I may make it my preferred "search watch" interface (at least until Trillian Astra gets one). Take a gander.

And if you’re on the Seesmic team, nice job for a 0.3 app! I’m happy to expand on my gripes if you want, I’m happy to shut up (at least to you) if you want, and I promise you: I only write this much about apps I think have promise, and it’s all intended to be constructive.

1) Well, Java UI is also doomed because the client implementations leak memory, the Windows implementations crash, and new Java features aren’t free anymore, so the garbage collector fixes that would help most are only available to enterprise customers who buy the "non-free java" license. AIR’s way ahead of that. And it has scroll-wheel support, which means it’s at least caught up to 1998 user-side technology.

Great. More important software with personal information moving to remote servers I can’t control at home and my IT department can’t manage. I can’t imagine what my end-user benefit might be, but at least Opera doesn’t have to worry about handling addons or extensions, since it’s still purely a networking/HTTP-and-rendering/javascript tool instead of a productivity platform.

At least I’ll get to pay for the privilege of letting them control my updates and mine my personal and corporate click-trails, usage habits, and bookmark data, which sounds great.

And some people think Alexa, an opt-in service with no charges, a privacy mode, and clear pro-privacy policies, is evil! But Opera is the browser we geeks love to extol–as long as we don’t have to use it!

(Posted from Opera Mobile; yes, I *do* like the browser, even if I hate the hypothetical cloud-browser idea and I don’t think Opera is quite worth what they charge.)

(Disclosure: I used to work for Alexa, I trust the policies I have seen there, I still have friends there, and I am slightly bitter at having to tell every major anti-malware product that the opt-in service–with code *I* wrote–shouldn’t be removed just to artificially up their statistics.)

Renders useless some extensions, such as Flash and music and video players, which assume they are running locally and wouldn’t be reliably usable if run remotely (they’d have to push down screen-scrapes at a furious and consistent rate)

By their nature, browsers know—and often remember—everything you browse:

This user made POST requests to the purchase page of these 3 gardening sites this month, but did not make POST requests to this other one

After visiting LocalBookstore’s web site, this user went to Amazon.com and did not return to LocalBookstore

This user just started visiting job sites

Between 9 and 5 local time, this user visited 20 shopping sites, 15 comedy sites, and 3 public utilities

Of course, your ISP and IT department already have this browsing information, even if you haven’t set a proxy server. You don’t think all that traffic on port 80 is secret, do you?

Anonymized and aggregated, this is extremely valuable:

These 3 gardening sites are the most popular in terms of purchase-visits. Would YourGardeningSite like to know this? Do you have cash and a service contract2?

LocalBookstore gets hits, but users head to Amazon.com to purchase. (Would LocalBookstore like to know what pages they left the site on? Does LocalBookstore have cash and a service contract?)

Would JobAnalysisGroup like to buy a monthly report of aggregate job site visits across multiple sites? Would they also like to know how many users just started visiting job sites? How many stopped? Does JobAnalysisGroup have cash, a service contract, and a promise to report the data as "supplied by OurCompany.com"?

Would JournalistPerson like to report on business-hour site usage? Would JournalistPerson like the source broken down by home DSL/cable (telecommuting) versus corporate networks (reverse telecommuting)? Will JournalistPerson use our name prominently in the article, the statistics graphic, and any search tags?
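The examples above all share one mechanism: drop the user identifiers and count. A toy sketch of turning individual browsing trails into a sellable aggregate (every site name and record here is invented for illustration) might look like:

```python
from collections import Counter

# Hypothetical, simplified browsing-log records: (user_id, site, did_purchase).
log = [
    ("u1", "gardenA.example", True),
    ("u1", "gardenB.example", False),
    ("u2", "gardenA.example", True),
    ("u3", "gardenC.example", True),
    ("u3", "gardenA.example", False),
]

# Aggregate purchase-visits per site. The user ids are discarded entirely,
# so the resulting report contains only counts, not individual trails.
purchase_visits = Counter(site for _, site, bought in log if bought)

print(purchase_visits.most_common(1))  # gardenA.example leads with 2 purchase-visits
```

The privacy question is entirely in what happens to `log` before and after this step: the aggregate is harmless, but whoever holds the raw trails holds everything.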

Web browsers also need to record bookmarks, which is more potentially minable data

Many web browsers offer to store account and password information for users, often including corporate accounts for users at work

I’m not seeing what a "cloud browser" would add for the user, but I do see what it could do for Opera. There are some plusses and minuses that do look interesting, though:

Cloud applications are upgraded and maintained by the service company, which relieves end users of the patch/update burden but also removes IT department control of upgrades, preventing them from vetting updates and managing security configuration

Cloud applications carry your preferences to wherever you logon. Whether you want the same bookmarks, cookies, and security settings at work and home is an interesting question. If you do, Opera’s hypothetical "Cloud Browser" would compete with many services that already move some of your data (I like IE7Pro, personally).

Of course, a "cloud browser" is speculation at this point, but even if Opera doesn’t do it it does seem like a reasonable direction for Yahoo! or Google to go.

We’ll know on Tuesday.

1) Well, Opera has always been a pay product in my experience, and I know the mobile version is (I’m still evaluating if I want to pay $24 for a web browser on my phone). I can’t find a link for the price of the desktop version (although they do tout that their beta version is free, which is usually a sign that the stable version isn’t); of course, many products online try to hide their costs until after you have them installed.

2) Of course, we don’t want Our Cloud Browser to look like an evil data miner. I don’t know why so many net pundits like to say that anonymous data mining is evil, even as they quote it, and I love how so many claim the supposed openness of the web means it can adapt to user needs, without mentioning that you can’t adapt to information unless you collect the data. So some other company, owned by Our Cloud Browser or its parent company, gets to sell the data. It buys the usage trails from Our Cloud Browser as a way to launder the money.

Interesting stuff and well worth considering. I don’t know that I agree with Mr. Feux’ interpretations, but I certainly don’t take the rosy view some of the comments on his blog propose.

As I read it, Feux’ general points are

Experienced developers cost more than inexperienced developers but can’t have more experience on any new technology.

As a developer, you need to base a career plan on the assumption of age discrimination. He lists some options:

Consult, with the assumption that the experience:cost ratios are treated differently for consultants

Advance out of programming. He gives a variety of reasons he finds this distasteful; these are presented as caveats, not as definitive arguments. He also lists some elements of management he finds positive. You may find his opinions backwards at times, but the points are worth reading.

Pick some technology and stay with it as it becomes less "cool" and more rare. Feux describes this as inherently unpalatable (a clear indication of his values and his assumption that all programmers share them), but it certainly can be lucrative and stable.

As usual, I’d like to look at presuppositions and their implied values and beliefs. This is one of my favorite parts of reading anyone’s comments, and blogs are especially good places to look.

He says, regarding a 1998 study

The use of dubious sampling of only computer science graduates to support its conclusion undermines its credibility…. It completely ignores the significant number of working programmers who either earned their degree in another discipline or never finished college.

It’s a great criticism, but I don’t know that his unsupported follow-up comment, that "the Government has been very slow to grok the software engineering trade" is either true or relevant.

1. "The Government2" used CS grads as a measure because they thought it was especially relevant, rather than, say, because no other rubric came to mind

2. The "software engineering trade" comprises one thing to be grokked—or not—as a whole. I suspect, from reading Feux’ blog, that he hasn’t considered that "the Government" has a good, deep understanding of some kinds of software development, but they are the kinds of development which Feux doesn’t care about and doesn’t usually consider.

Mr. Feux makes a clear—and interesting—belief statement:

The software engineer depreciates only slightly more slowly than the machine he or she toils behind

3. Software engineering is "toil," at least in the corporate or professional world.

4. Software engineers lose value over time—they "depreciate."

5. (Included for completeness.) Development hardware depreciates at a rapid rate. Most of us would agree with this immediately; using it to support the other beliefs is less universal, however.

Feux supports this belief with a 13-year-old statement by a former President of Intel who had himself moved from the world of technology to the executive world on his path to multi-millionaire-dom. It isn’t clear whether Dr. Barrett, the executive quoted, considered himself, a materials scientist, to be governed by his own statement, which was about hardware and software engineers. At any rate, he had been out of engineering for 22 years and does not appear to have ever been an engineer at the company3.

In describing Dr. Barrett’s quote, Mr. Feux takes pains to defend the value of the statement even though it was made by "a suit," so let’s add:

6. The opinion of a manager or executive is suspect when it regards engineers or engineering.

Mr. Feux makes an interesting comparison: in conventional wisdom, "a programming career is roughly the same [length] as a professional basketball player’s."

The article then moves into the issue of age discrimination, which isn’t the only possible way to interpret the data or the quotes. So the presupposition is:

7. The small number of software engineers over 40 is the result of age discrimination

Those 10 years of C++ experience made the veteran candidate progressively more expensive as they leveraged the value of that experience in jobs requiring C++. The problem is that the marginal utility of that extra experience must exceed the marginal cost of hiring the veteran to justify paying the premium. [emphasis mine]

Note that confusion around the word "marginal" might mislead some people:

"Marginal" has two meanings here. In "marginal utility" it seems to both mean and imply a small amount of utility. In "marginal cost" it connotes small, but the word seems to mean (denote) "incremental."

This leads the reader to a confused comparison: is the utility of the experience small (as Feux seems to imply) and, if so, is the cost also small (favoring the veteran) or simply "incremental"? I read it as: the cost is incremental, but Feux would like us to assume or believe it’s small. I don’t think it was intended to mislead, but it’s a pitfall worth calling out.

Let’s take the misleading word "marginal" out of the quote and see how it reads:

…the utility of that extra experience must exceed the cost of hiring the veteran to justify paying the premium.

Clearer and less likely to introduce bias. It’s also a good statement of hiring ethics.
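Stripped of the word "marginal," the criterion is just an inequality: pay the premium only if the extra utility exceeds it. A tiny sketch of that comparison (the function name and every number here are invented purely for illustration):

```python
# Illustrative only: all figures are made up to show the shape of the comparison.
def justify_premium(extra_utility: float, salary_premium: float) -> bool:
    """The hiring criterion under discussion: the utility of the extra
    experience must exceed the cost of hiring the veteran (the premium)."""
    return extra_utility > salary_premium

# A veteran whose 10 years of C++ saves, say, $30k/yr in rework
# but costs $25k/yr more than a junior candidate:
print(justify_premium(30_000, 25_000))  # True: the premium is justified

# The same veteran against a role where that experience saves only $10k/yr:
print(justify_premium(10_000, 25_000))  # False
```

Both inputs are incremental quantities relative to the cheaper candidate; nothing in the inequality says either one is small, which is exactly the slippage the word "marginal" invites.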

Feux weighs in on whether the extra experience a veteran developer has is relevant:

Herein is the source of the problem. The more irrelevant experience a candidate has, the more lopsided the utility/value equation becomes….

8. As an engineer ages, (large?) portions of their experience becomes irrelevant. (I suspect that his belief goes on to state that this is monotonic, but it is possible Mr. Feux would disagree.)

I am sure that Mr. Feux would agree that which portions of a candidate’s experience are relevant is highly contextual. Given his bias for switching to new technology, however, I also suspect he would describe certain experience as inherently irrelevant.

The unfortunate truth is that unlike other forms of discrimination that are more arbitrary and capricious, age discrimination can often be a result of objective and sound business justifications

I have worked with many, many managers and senior engineers who considered their discriminations—whether based on age, sex, medical conditions, personality traits, clothing, or attractiveness—to be the result of objective and sound business justifications4.

In fact, I have never worked with someone who both described their prejudices as irrational and suggested using them as a basis for hiring.

In every case, the sound and objective business justifications seem good until you look at them.

I’m not trying to justify it as an acceptable practice, but just trying to describe the pickle it puts the manager in trying to make a sound business decision without compromising the ethical and legal obligations of the company.

This is one of those areas where engineers and technical managers need to call on outside expertise. A good, trained, and experienced HR person is your safety net. They are there to recognize your biases and tell you whether they are reasonable or not.5

…a little gray hair and a smattering of experience in different technologies can create a beneficial bias for companies when they are renting brains instead of buying them outright. It may have something to do with the tendency for consultants to be vetted from higher up in the management chain where the silver foxes live.

9. Like calls to like: It takes older managers to hire older engineers.

Let’s ignore the slipped-in insult. It’s beneath Mr. Feux and I doubt he was aware of it when he wrote it.

…management thinks that everyone including technologists harbors a deep longing to “graduate” into their ranks. I think it a fallacy that no one would continue to design and build software for 20 years unless they had no ambition or growth potential. However, people like me that respect such dedication to the craft are in the minority.

This one’s a straightforward belief statement, and one well worth looking at in the context of this article.

10. Staying in development is a sign of dedication and craftsmanship.

11. (All/most) managers consider engineering nothing more than a stepping-stone.

12. Management roles are a step up in prestige from engineering.

A couple of quickies from his notes about management:

[If you become a manager] Meetings, politics and dealing with unrealistic requests will pretty much become your life.

13. Engineers don’t have to deal with meetings, politics, and unreasonable requests (or not as often as managers)

14. (All?) Engineers want to avoid meetings and politics (and unreasonable requests)

[If you become a manager] You may try to avoid it, but management-speak will creep into your vocabulary

15. There is such a thing as management-speak, and it has no or little value. (All?) engineers want to avoid it and will spend energy avoiding it.

[As a manager] even when you make it succeed, your team should get the credit. [Emphasis mine]

16. Engineers take (and prefer) individual credit as opposed to passing credit on to their team.

[As a manager] you’ll have to check your ego at the door

17. Engineers are not asked to check their egos, nor should they be.

I know you love programming because you like technology, so this may go against your very nature, but no one says you’ve got to jump every time….

18. (All?) Programmers (always?) want to jump to new technologies.

And a corollary, which I suspect is very true of Mr. Feux: 19. Developers prefer wide-if-shallow experience to narrow-but-deep.

It is highly likely that you will still be able to earn some decent coin in the technology you know and love even after a few decades

20. Even though developers prefer to jump to new technologies, working with an old technology can be personally rewarding if it’s one you love. This is a nice complementary belief to #18 and #19.

There are some interesting beliefs here, some of which I agree with entirely and some I think are good descriptions of the industry’s beliefs as a whole.

My take, as an almost-40 programmer and consultant

I was fortunate enough to have the first time I was daunted by someone’s age be with a guy named Dave (I’ll add his last name if he is OK with that). He was quite a bit older than we were (the whole team was under 25 at the time and the team lead had just turned 21) and, we quickly learned once we hired him, relatively conservative and highly religious, in a conservative Protestant sort of way. We had the gung-ho, 1995, startup mentality. We worked 60-hour weeks at a minimum, we played loud heavy metal in the darkened dev pit, and we took breaks for six-player games of Descent and Doom.

When he started, we made a few changes:

We stopped playing any music over the speakers unless we were in a serious crunch mode. Even then, we avoided Nine Inch Nails, which had been our favorite. Dave got jumpy when we played Closer on a loop.

We moderated our swearing. We had been a foul-mouthed group that liked to shout obscenities at our code. He wouldn’t complain, but we realized something pretty mature for our age: making him uncomfortable just wasn’t good teamwork.

We learned that he was even more excited about new technologies than we were: he had been around enough that he could quickly explain pros and cons and apply old patterns to new problems. We learned from him and he tolerated our naivety with apparent amusement.

When he left at 6 or 7 (and came in ridiculously early, given our late nights), we started out kind of "making allowances for his age and lifestyle," but that couldn’t last. He was contributing as much as we were and he was learning as fast as we were and he didn’t have as many bugs to fix because he was careful and experienced; instead, we realized that his coming in on a Saturday—or, when the chips were really down, a Sunday—meant he was showing as much or more dedication than we were at 3 AM.

I’m still learning from Dave, even though I haven’t seen him since 1995.

I’ve given a lot of thought to the age discrimination I felt when I first met Dave. I know what it was for me and I strongly believe it was the same for my teammates. There were two issues:

We had convinced ourselves we were better than other people because we were doing what we were doing so young. When we got pulled on to special projects and advanced work ahead of older programmers, we decided it meant we were better6. Seeing someone in his 40s doing the same work we were threatened us by making him seem as "good as us," with added expertise.

None of us were sure we could be doing what we were—what he was—when we got to his age. We thought we would slow down and we would want to go home and sleep instead of coding all night and we would calcify and stop being able to learn as fast. We projected those insecurities on him and when he defied them he both reassured and scared us: what if he could do it, but we couldn’t?

All poppycock, of course. Research shows we stop learning if we stop learning, sleep is good for thinking at any age, and deep knowledge in any area of technology transfers to sudden insight and intuition when learning another.

But we were just kids. We didn’t know that yet.

On the hiring side I’ve seen age discrimination in action and I’ve seen people try to justify it several ways:

"He’s too young to hire as a senior engineer" This is the most recent one I saw. The manager saying it also liked to talk about whether female candidates were attractive enough, whether a candidate with a lazy eye was going to be too "creepy" to work around, and so forth. When challenged on this, he stopped letting HR into the debriefings and complained that it was allowed in his country7.

"He won’t fit in with the culture" This is the dodge I see most often. If it’s used about people regardless of age, it may be a valid opinion, but look around: are they describing a culture that presupposes everyone is under 35, is single, is using drugs, or meets other ridiculously narrow criteria?

"He has lots of bad habits from COBOL/C/JAVA/RPL/whatever that he can’t unlearn" Check this for ageism versus technical religion. Squash it either way, but know which one it is.

"I’m not sure he has the energy" This is practically a code phrase for "too old." If you hear it, slap the person around and then demand to know what they mean.

In every case, your response, as a responsible, ethical, law-abiding, and generally human engineer or manager, has to nip it in the bud. The response is the same as with any discrimination.

Stop the person right then, right there, even if it’s in a group debrief. This is not the time to "praise in public, punish in private." You aren’t correcting the behavior for that person’s benefit. You are showing the team as a whole that a) discrimination is not part of the culture, and b) you are aware of it and will do something about it. It is bad for a team to have one openly-discriminating member, but it is far worse to have that and the feeling that the people in charge are clueless.

When they defend their actions, don’t attack their defense. You don’t explain why fundamentally unacceptable behavior is not allowed, just like you don’t explain why they have to wear clothes and refrain from murdering coworkers.

When they claim they are making sound and objective business judgments, reduce the objections to observable behavior:

When you say "energy," what do you see or hear from your coworkers that you don’t believe he can do and why is it essential to the job?

What duties of a senior engineer at this company, performed by other seniors, will he fail at, and what experience would a different candidate have to have to succeed?

What would happen on an average day with the team that he would be excluded from and how, specifically, would it leave him or someone else unable to do their job?

When you did your in-depth technical interview, what specific habits did he exhibit and how would they compromise the project at this company, at the level of this position? When you challenged these habits, how did he react?

You may get reasonable responses to some of these questions. Take those responses seriously and do not let them off the hook for the discriminatory bias. If you were willing to accept prejudice, you wouldn’t need interviews.

Record the event and get it to HR. They need to know. Some people feel this is disloyal to their coworker. It is. It is also loyal to yourself, your company, every candidate who will ever apply to a team the coworker is on, to every junior programmer who will ever learn from that coworker, and to the industry and every programmer in it.

On the candidate side, I’m pushing 40. My hair started salting in high school. I’m still using C++ regularly, both for clients with legacy code [8] and on some of my personal projects. When asked to solve a problem, I’m as likely to use a batch file [9], grep|sed|awk, or a quick VBA macro inside Outlook/Word/Visual Studio as I am to pull out a .NET 3.5 trick or mashup some twitter feeds [10].

So, do I have to counter my age? Yes. I know some of these 21-year-old wunderkind are less excited to work with me than with some other whippersnapper youngin’. I remember when I was 24; I had the same concerns and biases and prejudices.

My response is to meet it head-on. I ask interviewers if they think my age is a detriment. This has two good effects. Most programmers don’t want to be prejudiced; if they become aware of their biases, they’re happy to adjust for them. Most managers recognize that my awareness of the issue is a) a sign that I’ll work around it, and b) a sign that overt discrimination could lead to legal trouble. Item (b) isn’t really a good thing, but I don’t want to work with teams that care overmuch about avoiding people they may abuse to the point of a lawsuit [11].

When I’m meeting a company, I want them to know a few things:

How does my age affect my energy/esprit/dedication? When I was 22, I was fine working 60 hour weeks. Now I’m not. I like to think it’s because I’m wiser, not older, but that may be self-delusion. In any case, I need them to know that a) I know what the job entails—crunch time happens, demos can’t be moved, and some bugs or last-minute features just take extra hours—and b) I’ll do what it takes. They also need to know that I won’t stay until midnight "just because" or to give "moral support" to someone else working late. Well, not usually.

Basically, they need to be reassured that I’m focused on shipping the product. If my leaving at 6:00 or 6:30 on most days is a problem then it’s really one with their culture, not with me. Not as long as I’m willing to stay late or work weekends or work from home after hours to meet deadlines when necessary.

Note that this is a different issue in a large, established software house than in a small startup. A company with a family-first culture (say, Intel) won’t ask me about this as much as the entrepreneurial startups I tend to work with.

The takeaway: Tell them upfront what I will and won’t do. Show them that I know what’s expected. Give examples and be specific.

Have I kept learning? This is pretty simple to cover. They’ll think of me as a C dinosaur if I talk about C a lot. If I talk about their technology, they know I know it. If I talk about principles, they recognize that I can apply my expertise no matter where it came from.

The takeaways: 1) Show what I can do in their world. Use their language; know the SDKs they use. This isn’t really an old-guy thing, but it’s more important to demonstrate when they worry that I’d prefer to hand them an 8" floppy with some assembly on it [12].

2) Leave the wistful reminiscences for later. Even with an engineer my age, talk about what I can do today and what I know now rather than wax poetic about the Good Old Days.

3) Talk about principles. Principles are experience reified. Mentioning common closure, dependency inversion, command patterns, etc., and how I’d use them to solve whatever problem we’re talking about goes a long way; mentioning how I’d apply the principles in their technology of choice goes further; but mentioning how I’ve applied them across languages, target environments, and APIs goes furthest.

4) Tell them about hobby work. I write programs for fun; most of us do. I write programs to solve problems that only I have; many of us do. When an older programmer shows you an actively-developed, open-source project he or she is doing, it goes a long way toward reversing the assumption that they’re "slowing down."

Am I relegated to "old person" things [13]? In other words, am I a management type more than a coder? In my case, I’m firmly in both camps and proud of it. Once I show some coding skill, though, most programmers don’t want to pigeonhole me as a manager even at my advanced age. Most programmers are good people at heart; I have to remind them that I have project management and process improvement skills to offer.

In fact, managers and executives are far more likely to treat me as having multiple skill-sets than programmers are. They see the value of the "soft" skills even when programmers don’t and they tend to assume the technical skill once they see someone else respecting them.

Do I love this? In the end, this is the only one that matters. If they can see that I love what I’m doing, petty assumptions around my age become as irrelevant as they should have been from the start.

Show them that I love what I do and show them what I love about it. If that’s what they’re doing—if I love coding and they code, if I love games and they write games, if I love databases and they use a database, whatever—they’ll want me to do it with them. Wouldn’t you?

This isn’t really a takeaway for being an older candidate. It’s a guideline for living and working. Let people know what you’re passionate about; they’ll want to help you do it and they’ll want to do it with you.

And in the end, there is one rule that only an old codger can really have learned: If they don’t want to hire me because I’m older, then they’re not good enough to work with me.

So, summing up: Age discrimination is very real in our youth-oriented industry, it requires vigilance on all of our parts to help show our coworkers when they exhibit it, the common justifications for it are both ridiculous and easy to respond to, and if you’re doing what you’re doing because you love it you’ll be fine.

The "Plan B" options

I really liked Feux’ "Plan B" options, but let me re-state them how I read them:

Take your expertise and teach it to other people. Consulting lets you see a broad selection of projects, work with a wide variety of teams, and pass along your habits, beliefs, and skills everywhere you go.

Manage teams, projects, or companies: Mentor people and apply your expertise across a large swath of the project at once. Make the hard decisions and shield the less-prepared engineers from the confusing world of departments with different values and criteria. Communicate what the engineers need to the rest of your company, translate what the company needs so your engineers can understand it, and build a culture that gives your programmers the learning, success, and chance to have a life that you may not have had.

Find things you love and keep doing them, even when they become un-faddish. You love them for a reason and people still need them. You can still learn new things, both within your older technology and elsewhere, and you can handle working on more than one project at a time. If the one that pays well and keeps your family feeling safe about the mortgage and the college fund isn’t on the bleeding edge, spend your evenings and weekends on the bleeding edge. When you’re not spending that time with your family, that is; finding that your family is a higher priority than you ever expected isn’t "growing up," it’s just "growing."

This, right here, is why I do what I do and why I’m so passionate about it. I consult with startups and small teams doing technology and dev process, but I measure my success by asking two questions: Will the client ship the next project, the one without me, successfully? And will the team’s quality of life be better for it? And I couldn’t have done this at 21.

Beliefs

Just some notes about the beliefs and presuppositions I see in Feux’ post. I grouped them arbitrarily and I’m sure he would disagree with many of my interpretations. I’d love to learn where I’m wrong; I hope he’d love to learn what a reasonably careful reader gets from his writing.

About programmers and programming

2. The "software engineering trade" comprises one thing to be grokked

I don’t think for an instant that Mr. Feux would agree with this consciously. I do suspect, however, that he unconsciously dismisses large swaths of the trade as uninteresting to him. I probably agree on what the interesting bits are, but I need to keep in mind that he may be ignoring something important to me and he is probably unaware of it.

3. Software engineering is "toil"

Again, I don’t think he would agree with this consciously and it’s quite possible he was being ironic or sarcastic [14] with his poetic turn of phrase.

But I may surprise you: it certainly can be toil. At some point, it is for most people; some people get out of the industry then, some get through it and return to loving what they do, and the rest become the sad, embittered, mediocre developers we all fear turning into.

10. Staying in development is a sign of dedication and craftsmanship. 14. (All?) Engineers want to avoid meetings and politics (and unreasonable requests)

These are projections, but largely true and useful beliefs. I’m sure they are motivating for many people. Including me 🙂

8. As an engineer ages, (large?) portions of their experience becomes irrelevant

This is really a belief about novelty: that technology is only relevant if it’s new. It’s not a business-focused or results-oriented belief; it’s a belief that learning new things is more important than producing reliable and reproducible results. It’s a good belief for an engineer to help his career. After a certain point, though, I would say that it’s a limiting belief. If I find a developer who is still focused on novelty over reliability and doesn’t find that old knowledge is always relevant, I have found a good, solid engineer, but not one ready to take on what I consider the duties of a senior software engineer.

16. Engineers take (and prefer) individual credit as opposed to passing credit on to their team. 17. Engineers are not asked to check their egos, nor should they be. 18. (All?) Programmers (always?) want to jump to new technologies.

These are core beliefs in the "Cult of the Programmer" belief system. They prize individual effort over results, cleverness over sustainability, and siloed, cowboy programming over teamwork. It’s common "Type A" stuff that makes up the backbone of our macho, testosterone-fueled culture where programmers live out our adolescent power fantasies with code instead of capes.

I went through it and I still indulge in it sometimes. I see it destroy non-aggressive programmers, good projects, and good companies on a daily basis, though.

19. Developers prefer wide-if-shallow experience to narrow-but-deep.

This is a common, but not universal, belief and a common, but not universal, preference, in my experience. The "deep guru" is also highly respected, but I suspect that on average, it is true that a wide-but-shallow developer is more respected than a deep-but-narrow one.

I know that a wide-but-shallow developer can easily appear to be more skilled, more experienced, and more generally impressive than one with a narrower skill-set, even when that breadth is irrelevant to the task at hand. To sound smarter and better prepared than we are is an easy skill to master and one most developers learn extremely early (especially the more Type A, Cult of the Programmer-oriented ones).

I should write an article about it. I’ve actually taken classes in acting more intelligent than I am 🙂

20. Working with an old technology can be personally rewarding if it’s one you love.

I love this one. I’d take the word "old" out. Actually, I’d take "an old technology" and replace it with "anything," but that’s outside the scope of Feux’ blog and mine.

About management

6. The opinion of a manager or executive is suspect when it regards engineers or engineering. 11. (All/most) managers consider engineering nothing more than a stepping-stone. 15. There is such a thing as management-speak, and it has no or little value.

This is related to the Cult of the Programmer belief set, but it’s also a variant of the beliefs that every culture has about other cultures, especially ones it believes have power over them. They’re broad generalizations that break down in practice as often as they stand up, but they are so much a part of our culture that they act as "signifiers" indicating membership in our self-appointed elite.

12. Management roles are a step up in prestige from engineering.

This is an interesting observation and a belief about what other people (people who are not in technology, even) believe. I suspect it’s true, although managers are seen as more accessible and understandable than engineers. Engineers are still viewed as a priesthood with mysteries and as unattainable to the average person, although that is finally changing.

13. Engineers don’t have to deal with meetings, politics, and unreasonable requests (or not as often as managers)

I don’t understand this one. It’s completely untrue in my experience. But it would be nice.

About age

4. Software engineers lose value over time

I know for me, when I held this belief it was a projection of insecurity. I doubt it is for everyone. I know I don’t believe it and I haven’t seen any evidence of it.

I would like to see a reformulation, though: "Software Engineers lose value if they don’t change over time."

9. It takes older managers to hire older engineers. 7. The small number of software engineers over 40 is the result of age discrimination

I need to think about these two. I’m not sure that either is a law of the universe, but they are certainly true in many people’s experience.

I believe that both are changeable, both in the small, through each person doing what they can to work around them, and across the industry as a whole… through each person doing what they can to work around them.

Personal to John Feux:

Thank you for writing such an interesting article! Great post and I’m really excited to see the active discussion it’s provoking.

I hope my comments are taken in the spirit intended: friendly, respectful, and supportive. Your blog has made it to my feed list. You write some great stuff.

1) I just want to say that I really, really thought this was an article about programmers and sex when I saw the title. But that’s probably me; I assume "Plan B" doesn’t bring up the same interpretation for everyone.

2) Gotta watch "The Government." It isn’t really one thing, it doesn’t have one set of goals, and it doesn’t have motives in the abstract.

3) To be fair, the article I read about Dr. Barrett was on Wikipedia, so all I saw was what the last anonymous internet user chose to write.

4) It’s not really a secret that engineering is a sexist, racist, ageist culture that wants to forgive almost anything in the name of intelligence. At one company I vetoed a candidate for saying that he managed two employees, but because they were women he couldn’t give them anything important to work on. He backed that up by saying that they "would just get pregnant and quit" when deadlines approached.

The sad part was that none of my coworkers thought that was a problem in any way. Some even thought he had been prudent.

5) Sadly, the average white, male, American programmer would rather take advice from a 90-year old, female, non-white, foreign programmer in a suit than from the best of HR reps.

6) It never occurred to us that a) we were the people not already assigned to important projects and b) we were the people the CEO liked to hang out with and relive his youthful techie days.

7) What country did you first think of? It was Ireland, btw.

8) "Legacy code" is a code phrase at small companies and startups. It means "anything I didn’t write or anything I wrote using a tool I’m bored of." For many programmers today, that means any C++, even if it’s brand-new and running on the cutting-edge platform.

9) Okay, I’ll name it .cmd instead. Call me avant-garde.

10) If you’re reading this more than a month after it was written, pretend that wasn’t horribly outdated. I’m on to something new, I hope.

11) In general, you don’t want to sue over this kind of thing, unless you want to switch into activism as a lifestyle or career for a few years. You don’t want to imply that you’ll sue. But you can’t prevent a good HR rep from considering it.

I have a not-so-funny story about a good HR rep, an Irish engineering manager, HIPAA, and me not suing, but this isn’t the place for it.

12) Actually, I do have some 8" floppies with some assembly code on them, but only because I don’t know how to get rid of them.

13) It’s worth thinking about this: in software, we do have roles we assume older people (i.e., over 35) will take, just like we have roles that women will take. It’s wrong and wrong-headed in both cases; it’s dangerous because it’s out of our awareness and because we don’t talk about it.

Scrum-master, Manager, MS Project, writing specs, … Are these what we think a programmer over 35 does? Does s/he do it as well as code, or instead of? If s/he moved into programming at 30, does that change anything (i.e., is it based on age or experience)?

UI design, community management, running focus groups, HR, coding the "easy" pieces with user interaction instead of the "hard" pieces with data mining: Are these what we expect a woman hired into the team to be doing?

Sure, we all know of counter-examples. The ageism Feux is talking about is a great, and more acceptable to challenge, entry to recognize other prejudices that permeate software.

14) According to the definitive linguistic analysis in the book Lamb, "Sarcastic" is what "ironic" becomes if you know you’re doing it.

Features are delivered late and QA time is sacrificed to keep the date

QA isn’t able to test whole-system issues like performance and scalability until the end

QA spends too much of the up-front time doing planning instead of testing

QA test plans don’t match the real world

QA sticks to test plans robotically

QA can’t find the bugs that customers can

Mr. Moore has some interesting answers to a really good question; it’s worth a second consideration. Let’s start with the question itself. I notice three immediate, interesting things:

The term "best bugs" is a fun one. It isn’t clear from Moore’s post what qualifies as a "best" [1]. Is it most subtle? Most careless/amusing? Easiest to fix? Most impact on the product? It could be any of these and retain the value of Moore’s post; it raises the very interesting question: What did you think of when you saw the term "best bugs"?

There is a presupposition of universality in the question: it isn’t scoped to one technology, language, problem domain or development methodology. Is it universally true that the "best bugs" are found at the end? Always? All of the "best bugs"? If this is true in your experience, or true in your experience with one team/company, do you think it speaks more to the nature of software development, to the current state of the industry, or to a specific individual’s (or team’s, or company’s) methodology? Do you think any testing or development process methodologies or tools address this (or try to)?

For that matter, do you think it is an inherent property of software? Is it a bad thing? A good thing? How good? How bad?

In the list above [2], the first two items are properties of the project itself: they’re project management issues. The other four answers are all ways of blaming QA’s methodology [3]. None consider the role that developers, project planning, and the general nature of programming play in the process.

I don’t mean to imply that John Moore blames every QA team in the world for making this a universal problem; he’s explicit that these are only some of the reasons for an observed phenomenon, although he does describe them as the causes he’s seen most often [4].

So let’s add some reasons that don’t implicate QA; everyone should get in on the fun.

Product Design, Marketing, Sales, etc:

Product managers deliver incomplete specs and don’t finish the specs in a useful order; important features aren’t fully specced until very late in the lifecycle, leaving those features to be (re)written quickly at the end.

Product management adds new features throughout the project, leading to features that don’t have the support of a full process of design, review, test planning, and inspection.

Product managers never paper tested the design and product owners change the requirements when they actually see what they asked for, leading to quickly-turned-around rework.

Sales or Marketing promise customers that they can see specific features early without the dev process being considered, restricting the ability either to front-load risk or front-load low-hanging fruit.

Development:

Developers front-load risk and believe they can do the "easy stuff" at the end, without any planning.

Developers take the easy stuff first and don’t give high-risk items attention until late in the process.

As schedules tighten, development cuts corners on code inspection and documentation, leading to less well-thought-out work.

Under the threat of a deadline, developers are likely to give less-tested code to QA, figuring they can save time on unit testing by risking the entire QA team’s time on a dead build.

Work that requires technology new to the team (new languages, new APIs, etc.) has much higher risk and can’t be delivered until later in the project, when the developers have had some time to learn.

Floundering developers are left on their own, causing their high-risk code to ship at the end or having it rewritten by more-senior developers at the end.

"Cowboy" features and fixes are slipped in at the end, when developers are buttoning up features.

Towards the end of a project, developers are focused on the next project and don’t give the current work their best effort.

Executive Oversight:

Any particular discipline in the process is likely to be pulled off onto a random fire drill without the rest of the project understanding the full impact—often because no one but the executive could know how bad it is and the executive is in denial.

Features that are pulled out for a future release become must-have features for the current cycle, often causing them to push ahead with incomplete (or no) planning.

Project schedules are interleaved with no relationship between timeframe, risk, and importance—such as the most critical project delivering last, but taking resources from higher-risk projects that deliver first, causing those projects’ inevitable failures to impact the critical project and lead to last-minute attempts to catch up.

General Life in Software:

QA find the easiest and most-visible bugs first; the subtle stuff takes time to discover, reproduce, and document.

Early in a project, QA is only likely to get code that developers have very high confidence in. As deadlines near, code has to be given to QA ASAP instead of sitting on a dev box until someone has time to go through it again.

Interesting bugs are likely to cross functional boundaries, meaning they can’t be discovered until several functional areas are delivered (and working well enough to see them pipelined).

At the end of a project, when deadlines are tight and the team is pushing hard, exhaustion overcomes the team, leading to bugs in the more complicated systems.

At the end of a project, when deadlines are tight and the team is pushing hard, exhaustion overcomes the team, causing subtle bugs to take longer to fix and generate more false-fixes, further increasing their apparent complexity.

In short, everyone is susceptible to basic, human behaviors that lead to the "best bugs finish last" situation. It is fundamental to human nature and mitigating it is the #1 goal of every development methodology ever. It’s easy—and currently chic—to say that agile methods are focused on eliminating the "big ball of failure at the end of the project," and they can do a pretty good job of it, but don’t fall into the trap of thinking older (or newer) methodologies embrace failure. They have the same goal and make some reasonable attempt to reach it.

If I had to guesstimate, I’d say that development—in the roles of coders, inspectors, and estimators—is responsible for the plurality of the last-minute "best bugs," with product management (in their role as "constant revisers of too-incomplete specs") being close behind. But really, all of these are a symptom of the fundamental problem of too many people trying to do too hard a job in too little time and with too little information. In other words, it’s all a symptom of human beings producing software in an entrepreneurial world.

We see it, we face it, we mitigate it, we plan for it, and we deal with it when it—inevitably—happens.

It’s what we do and it’s why we keep coming up with new ways to do it.

1) Moore’s post does say that these bugs are the "most interesting, and sometimes the most critical."

2) The list doesn’t match the bulleting or order in the original post; I have reordered them to group related items and I have flattened his nicely-nested bullets.

3) This could easily turn into a "developers versus QA" situation; let’s agree not to do that.

4) I certainly do believe that these are the causes Moore has seen most often. I do not believe they are necessarily the causes that actually occur most often. As a developer with a passion for SDLC and development process, I have realized that I am likely to see faults in QA’s methodology (without necessarily understanding the methodology well enough to be right) long before faults in areas I’m closer to, like dev process, coding, or design. I assume everyone, including Moore, has similar biases.

I have a friend who’s a relatively junior programmer; the other day he was asking me about copying reference objects in .NET. References, pointers, value/stack objects, and the difference between deep and shallow copy semantics are something no language has really done right, and they’re the source of bugs, performance and memory hits, and time lost writing test code on many, or maybe most, projects.

.NET makes it both easier and harder with the clear distinction between reference types and value types. It’s easier in that the behavior is clear if you know which type you have (although each can contain members of the other type), but harder in that far too many developers don’t really know the difference. Also, the notion of boxing comes up in all the wrong contexts: it isn’t really important as a performance consideration in most real-world applications, but it does hide logic errors that are unlikely to be detected by unit tests, since the developer writing the unit tests didn’t understand the distinction in the first place.

The second-worst language I’ve used for reference vs. value confusion and the need to code review shallow vs. deep copy semantics in every function is JavaScript/ECMAScript. The language works fine, but for some reason even extremely careful developers seem to get lost in the almost-scripting and almost-object-oriented [1] language when combined with multiple, almost-stateless frames in a large application.

But it’s the worst language that causes the most concern: HTML. And here we have to go into an issue of overloaded operators: what is a "reference"?

In software, a reference is a stand-in name for something. In lower-level languages it’s usually backed by a pointer to memory and in higher level languages it is likely to be an index into an object map, but in either case you can have multiple pieces of code using the reference and all are accessing the same object instance. If one function changes the object, subsequent accesses will get the latest changes.
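That shared-instance behavior can be sketched in a few lines of JavaScript (the `original` and `alias` names here are hypothetical, just for illustration):

```javascript
// Two names, one object: both variables hold a reference to the same instance.
const original = { name: "Ada", visits: 1 };
const alias = original;        // copies the reference, not the object

alias.visits += 1;             // mutate through one name...
console.log(original.visits);  // ...and see it through the other: 2

// Primitives behave as values: assignment copies the value itself.
let a = 1;
let b = a;
b += 1;
console.log(a, b);             // 1 2
```

Change the object through either name and subsequent accesses through the other see the latest state, exactly as described above.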

In HTML, a reference is an HREF: a Hypertext REFerence. It is a stand-in name for a document (or a location in a document). HTML makes no assumptions about whether a document’s content is stable. In fact, most pages on the web are assumed to change, with comments added to blogs, forum posts edited, online catalogs updated, etc.

In nonfiction or scholarly writing—including encyclopedias—a reference is a stand-in name that identifies not only data, but who takes responsibility for that data and exactly which version of the data is being referenced. This is so important in that context that scholars and journalists have defined standards for representing references consistently.

The Wikipedia Problem

Which brings us to Wikipedia. And to a way to teach references and values that might resonate with "kids these days" [2].

It’s common to link to wikis, especially to Wikipedia, when explaining or defining things. Wikis are great: they facilitate communication, collaboration, and community and they are a good way to get the most-wanted content in a reference populated first. They have their limitations, but every tool does.

The problem with linking to Wikipedia (or another wiki, although to a lesser degree with slower-changing wikis) is that HTML links are true, blind references. You get whatever is there right now, and Wikipedia is known for (often transient) spam content and inaccuracies.

In software, the solution is a deep-copy: actually copy the content to your own site and reference that (probably with a link to the current Wikipedia content). It isn’t a bad choice, but it can get tedious. Another solution is to link to the Wikipedia article history. For example:

http://en.wikipedia.org/wiki/Reference is a link to the Wikipedia article on References (of several kinds). Looking at it now, as I’m writing this, I see several sections I would love to comment on here—and several sections that have tags indicating they are likely to change soon. If I use that link, my comments are likely to be irrelevant later. I’m keeping value data (my text) that doesn’t correspond to the reference data (the content behind the link).

http://en.wikipedia.org/w/index.php?title=Reference&oldid=284625244 is the link to the version I see while writing this [3]. If I want to make comments, or if I am worried that the page will be modified in some way I don’t want (say, a page prone to political, religious, or commercial spam), Wikipedia promises that this URL will be stable. In the language of C++, it is a "const reference."
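Constructing such a "const reference" is mechanical; a minimal sketch, using the title and oldid from the example links in this post (the `permanentWikiLink` helper is hypothetical, not a real API):

```javascript
// Build a URL pinned to one specific Wikipedia revision.
// encodeURIComponent guards titles containing spaces or punctuation.
function permanentWikiLink(title, oldid) {
  return "http://en.wikipedia.org/w/index.php?title=" +
         encodeURIComponent(title) + "&oldid=" + oldid;
}

console.log(permanentWikiLink("Reference", 284625244));
// http://en.wikipedia.org/w/index.php?title=Reference&oldid=284625244
```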

Of particular interest to me, writing blog entries, is the References [4] section of the Wikipedia page. This is the section where a Wiki article contains external reference data (the rest of the page is value data and internal references to other Wikipedia pages). I’m likely to want my readers to see the references. Because that section is very likely to change, the purportedly-stable link lets me refer to these references-in-my-reference safely. But notice that I don’t know if those objects (the pages behind links in the Reference and External Links sections of a Wikipedia page, even a stable one) are stable, although in some cases I can make a good guess (links to PDFs of published papers, for example, are likely to be very stable).

Summary

"So this is all interesting, but what is the point?" There are two lessons here, one for passing on information (in blog posts, emails, twitters, etc.) and one for programmers, especially of the more-junior sort.

When you’re passing on information:

Know if you’re giving someone a reference or a value. Copy information you need stable and available, just as you would in code.

Don’t link to Wikipedia’s "current page" unless that’s what you mean.

And for coders, if you understand hyperlinks, you understand references and values:

A reference is like a hyperlink: It goes to something else and you don’t own it.

A value is like content in a page: It’s owned by the page and no one else can change it unless they can edit the page.

A reference copy is copying the URL: it’s still a reference to the same object. If someone edits the page, you will get new data the next time you access it.

Remember that a reference may contain references: Pages can contain links to other pages.

A shallow copy is copying a page: If the page had references (links) on it, your new copy has those same links.

A deep copy is a web spider: You can copy a page and every page it links to (and every page they link to, and so on). If you do that, you have a new, unchanging page and no one can even see it unless you tell them the URL.

A const reference is a link to a page that doesn’t change: But it may link to pages that do change!

1) ECMAScript is an object-oriented language, and whether JavaScript in a web page is "scripting" in the usual sense is an interesting question, but most interesting JavaScript applications are "almost" scripting and "almost" object oriented. And yes, those two are different axes; there are many excellent object-oriented scripting languages.

2) I’m feeling old. I was listening in on a discussion this week between a Senior Architect and an Architect where the Architect didn’t seem to be following some of the more fiddly details; the Senior commented to me later that the Architect may never have written a WNDPROC.

3) You can get this from the History tab, but it’s probably better to click the "Permanent Link" or "Cite this Page" item in the "Toolbox" section of the navbar.

4) At this time, the relationship between the "External Links" and "References" sections of the Wikipedia page template is unclear and apparently undocumented.

This is something I’ve been wanting to write about for a while; I might as well use a short answer I wrote on LinkedIn as a jumping-off point.

In the Scrum Practitioners group, Aqueel Khan asked about baseline implementation before starting an agile project:

I know there’s no cookie cutter template for every team in every organization. But there has to be "something" which needs to be implemented as the foundation of a Scrum/Agile/XP project.

Gunther Verheyen gave a fabulous description of the raw process requirement for Scrum1, suggesting that the process has no procedural prerequisites.

So I added some comments about beginning any process, with specifics that apply to Scrum:

Gunther’s answer is pretty much definitive, so let me add something outside of the Scrum process that you need for company and team success: know what you aren’t managing with Scrum.

It becomes very easy to take a simple and effective process like Scrum and assume it is larger than your project, larger than your product, and larger than your company; the result is to assume Scrum (or UML, or … or …) will solve problems it isn’t designed to address. Don’t let people shoehorn product planning (market research, customer interviews, experimental prototypes, …) or engineering R&D (technology evaluation, build/buy analysis, large-scale architecture and refactoring plans, …) into a project Scrum.

It is very easy to take the simplicity and success of Scrum, combine it with the increased day-to-day involvement of groups that are traditionally "above" the implementation phase, and find your organization starting to assume that the Scrum process is one never-ending chain of sprints that somehow encompasses all product work. It isn’t intended to be that, and trying to use it that way will weigh down your implementation.

Make sure the Scrummaster(s), the heads of technology, the heads of product, and the heads of sales/support/customer-interaction all know what not to try to inject into the Scrum.

If you want to keep this in a Scrum framework, you can do so by creating backlogs for groups as well as for projects/products, by keeping a firm hand on the project backlog to prevent creep-in from larger-than-project items, by ensuring that some projects run with larger sprint lengths to work on larger items (I know, many poorly-trained Scrum fans find this type of "planning" anathema, but successful teams have it whether they recognize it or not), and by keeping a sense of humor about the whole process.

Or, just remember that nothing inherent in the Scrum process would lead a company to cancel a project; clearly, something outside the process has to exist to do larger things. Keep that separate from the Scrum so neither interferes with the other.

Overall, this is a problem that worms its way into every methodology. In fact, it’s one of the measures of when a successful methodology becomes a "religion": when people insist that it solves problems it doesn’t.

Scrum is about software development at the low, development team level. It manages projects by breaking off chunks that meet certain requirements (size, testability, and completeness-to-ship) and managing the day-to-day implementation to finish that chunk on schedule. It does not address anything else.

Scrum can be used to manage other, non-coding projects and is quite successful with any pile of work to be done that can be broken down into linearly-independent units. For what it’s designed to do, Scrum is great.

Unfortunately, Scrum suffers from too much adherence by too many people with too little understanding of how it fits into their overall problem. Scrum doesn’t say anything about what to build, when to build it, or what to build next. It doesn’t address training, delivery, internal or external documentation, or requirements gathering.

Most agile practitioners eschew any kind of overall architectural design (believing it always leads to "big design up front"), yet senior developers invariably practice a form of it; it is one of the features that makes them senior. Because it isn’t part of the day-to-day development, Scrum doesn’t address larger-scale architecture, or even small-scale design2,3. Because it isn’t part of Scrum, many houses embracing Scrum try to avoid doing it at all, believing that Scrum, a deliberately short-sighted process, somehow replaces future-looking planning.

The Agile Manifesto tried to avoid some of this. It specifically calls out that the "things on the right," which include process, tools, documentation, and planning, have value; it claims only that the things on the left have more value when the two come into conflict.

This is a process of criteria ordering. It is one of the first pieces of work any team should do and one of the premier jobs of technical management, all the way up to the director or VP level. Criteria ordering is relatively simple and incredibly valuable, and the Agile Manifesto authors are to be commended for their work on it and for the way they tried to bring it to the forefront.

But given who they are, and who the initial signatories of the manifesto were, they really should have known the damage they were doing, and they should know how little they have done to ameliorate it.

Every major agile process, and especially Scrum, has tried to bring the left-side criteria into play either by a) ignoring the right-side items, or b) diminishing the value the right-side items bring. This isn’t bad with people who know what they’re doing, but it is devastating when people embrace the methodology as larger than their problem.

Let’s set Scrum aside to illustrate the situation. Back in the dark ages (1999 or so), the religion was UML, especially the Rational approach to UML. Agile development was still a guerrilla phenomenon, and the disdain for "big design" was partly a reaction to UML and its more narrow-minded adherents. To make matters worse, UML was usually tied to one of a few big, multidimensional tools that did a very good job at some things and a very poor job at others, but integrated them well enough to force some team members to use a poor tool to make other members happy.

Like other methodologies—waterfall planning, OOAD, BRD-as-product design, even the relatively benign CRC—people heavily invested in the process came to believe that the model in the process was somehow able to model the world outside of software development (a kind of reversal of the Conant-Ashby theorem).

When this happens, you find several fallacies and errors creeping into daily life:

Items that are not explicit in the process are disregarded (design, buy-in from other groups, redesign in the face of change, real measurement instead of believing the schedule, etc.)

Work that isn’t part of the process gets shoehorned into the process somewhere ("Well, considering those user requests is really part of bug tracking, so we’ll just have the bug database manage it.")

Some data or functionality in a tool is partitioned off and used in a way that doesn’t make sense. Sometimes fields are used to mean something entirely different just because they hold the same kind of data.

Business processes that have nothing to do with software are driven through the development process even when it doesn’t work4.

I am as guilty as anyone, both of doing these things and of letting other people do them on my watch. They are seductive, and there are times to do the wrong thing in favor of the greater project.

I’ve done them. But you don’t have to.

When things seem amiss, ask yourself some questions:

Is this thing we’re doing something inside the model of development that the process envisions?

Is this process, as we’re implementing it, designed to support this business process?

Is this working for us? Have we reduced work, shortened schedules, increased participation, eliminated political maneuvering, or increased quality?

If we’re responding to something outside the process’ model (a change in the real world, usually, as opposed to something in the virtual world of our plans and hopes), what are the parts that are not about this process and have we made sure they’re being addressed in their own way?

Once you see the problem, it isn’t usually hard to address (although the die-hard adherents of any belief system will require some convincing). Just remember the formula:

The Real World > The Project > Your Process > Your Tools

Items on the right cannot contain items on the left. Models on the right are unlikely to be good models of anything on the left.

2) Also, any enshrinement of design into Scrum would have made Scrum less applicable to non-coding projects and would have left some people confused over the agile principles to value "working software over comprehensive documentation" and to value "responding to change over following a plan."

3) Unfortunately, the stigma associated with design is felt more strongly by more senior developers, who have more invested in appearing to be closely allied with the current development religion. The people most hurt are the junior developers, who are deprived of the seniors’ intuitive expectations as to how a project will "look" when it is done.

4) When a process works for something, just keep doing it. Change the names of things if you have to.