A colleague and I were looking at an issue we call "coherency design" – a focus on ensuring that different stakeholders maintain a common vision of a product's value proposition, and that the different stakeholders' design solutions function together in a cohesive way. I shared the essay with a group of colleagues. The original essay and the unedited feedback are presented here.

Original Essay:

Who sits at a product design table? The product manager is usually there, along with software architects, system architects, maybe sales, maybe QA. Together, these designers anticipate the issues that may come up during development, implicitly reducing risk.

Why is the software architect there? That’s clear: the software architect ensures that the product can be built. The systems architect? Also clear -- the new software has to work across the entire network. We wouldn’t want to build something and then have it fail because of unanticipated network complexity.

An important function of the product design table is to expose unanticipated issues that could impede the value proposition. Why aren’t the shipping personnel there? Huh? Shipping problems don’t impede the value proposition. Unless the object being produced weighs 20 tons. Then shipping probably does need to be considered.

It is inherently risky not to have the proper people at the product design table. Why? Because fundamental choices in one aspect of product design affect other aspects, often without intention or recognition.

One of the most under-represented functions at the product design table is the work-modeler. This person understands how people will use the tool being built, and the context in which they will use it. Take a drill, for instance. A drill used in construction, one used by a jeweler, one used by an astronaut, and one used in orthopedic surgery all need different operational characteristics because of the context in which they are used, and because of how the operators think about their problem.

The construction worker, jeweler, astronaut and doctor all have different thoughts regarding how pressure is applied, where the residue goes, the impact of sparks from the motor, and the presence of gravity. Each work model shapes designers differently. The designer of the shell of the orthopedic drill may choose to completely encase the motor so that sparks don’t impinge on the sterile operating field. This decision impacts the motor designer, who will likely worry about heat build-up. If the two don’t vet their designs with each other, the value proposition will be at risk.

This story is important because in software product design the work model, data model, and software architecture are strongly mutually dependent. One cannot be designed without the others, and each has impacts on what the other can be. However, the work model is rarely, if ever, planned for, and its validity is usually ascertained through feedback from the market after the product is released.

Some might argue this isn’t a problem – we’ll just rework the software. But if the problem is fundamental, and the basic software architecture and data models are incompatible with the needed work model, then change is costly – no one wants to go re-architect their software. At this point, the band-aids and patches are applied, and the growth of the product becomes quirky.

Our solution is to not only be purposeful in designing the work model for software, but also to analyze the system in question to ensure that all germane design areas are represented at the product design table, and to ensure that all present understand the work model and all component design solutions, the interactions between the different design solutions, and the likely resultant behavior of the system as a whole before any investment in construction is made.

Thanks for the comment! Actually, human factors, UX, cog. engineering is where we started. We found a lot of resistance when we mentioned it in those terms because most people were predisposed to put UX as a "phase" much later in the process. In some regards, our whole effort is a way to get UX at the design table without saying it that way...

You're trying to zoom out farther, which is a good thing. Work flow is good. Workflow and outcomes is better, IMHO. Outcome-driven design can help you put the effort where the value is. Check out http://hbswk.hbs.edu/archive/2815.html.

Thanks for taking the time to read and comment -- I appreciate it, and I'll have a look at the article you mention. (By the way, I was in charge of the current re-design of HBS Working Knowledge -- do you like it?)

One of the things we're testing is to see how to communicate these ideas, and how to convince reluctant bean-counters that this is actually a valuable exercise. I recall one conversation in which I pointed out how Google completely messed up on personal medical records by designing them around Billing Codes, and using the billing codes to infer disease states -- and that can't be done. Docs use billing codes completely differently than diagnoses, and each diagnosis requires a lot more than just a code to understand what's going on.

As a design engineer (analog, the kind where I don't have to worry much about software...) of hybrid analog/digital products, I entirely agree with your essay. I find what you call the "work model" generally called a "use case." Understanding what people are going to use the product for is essential to successful product design, and your example of the electric drill is a good one. My employer has done pretty well at this, having in marketing traditionally some field service people who had worked in the field using our stuff.

I know some people who work for a company that writes software to control very large cargo transportation facilities. One of my friends was writing manuals for them, and they were asking about use cases *after* the software was in alpha or perhaps beta test. She quit.

Apple seems to do well at this kind of thing. Their software generally does about what I expect, with little (but not none, hey this *is* software we're talking about here!) fuss from me.

What strikes me as interesting is the basic idea: you need end user representatives -- ones who really understand the users, not marketing theorists -- involved in the design from its inception.

Thanks very much for reading and commenting -- I appreciate it. Thanks for bringing up the relationship to use cases. I guess we think about a use case as a specific instance, or a single response that comes from a work model. We're thinking of work model more like the "way the person thinks about how they should use the tool" in general. Interesting...

One thing we're really curious about is how this idea might sell to "bean counters" -- will someone be willing to change the way they do things (like, in your example, where you look at use cases) if they get what we're saying? Another way of saying it is: how can we get the bean counters to include user stuff up front -- can they be convinced?

Ideally, someone on the product team should be representing the end user. This person needs to understand what makes a product elegant or clumsy for the end user, etc. This person needs to be able to anticipate the evaluation and purchase consideration of the end user.

This is not to say that the team will get this exactly right, but it has to be close. Too often this is expected to be addressed as part of the user feedback. The problem is that if only concepts are presented to the end users before the product is built, the users will feel the team does not understand the business, and few users will put in sufficient effort to ensure that their input correctly guides the initial design. If user input is sought with an early product model, then it is a very expensive process, and there may not be enough time or resources to make significant changes to incorporate the users' input.

Another direction that could use a coherent process is establishing a foundation that can support an elegant implementation across the product road map. Too often the timeline for the first delivery requires only a subset of the full feature vision. Two examples of common bad decisions are (a) designing the architecture around only the initial feature set, and (b) setting the architecture for the complete feature set but shipping whichever features happen to be complete by the ship schedule. Organizations sometimes do not acknowledge that the first product shipment must contain a feature set that can be successfully used (sold) on its own merit. Failure to recognize this forces the sales organization to gear up sales efforts that essentially demo products and desperately attempt to keep customers from buying other (more appropriate) products, delaying customers' decisions until the next release. This is expensive and a negative experience for both vendor and customer.

Symptoms of problems of this sort:

- Shipping a product where the customer "loves" the features but will not go to deployment (and thus not purchase) until the next release includes necessary diagnostics, reliability, or maintenance features.

- Product shipped early with great kudos to the development team, only to have the product require a rewrite 18 months after release.

Sometimes this is never acknowledged as a rewrite but called a size reduction, a speed up, a modularization for new features... anything but a re-do.

Thank you very much for taking the time to respond, and thanks for the feedback.

I've been a UX designer for a long time, and the fellow I'm working with does the same kind of thinking, but in a different domain. We've been scratching our heads for a long time trying to understand why bad UX design happens, and we're exploring the idea that there's something just fundamentally wrong going on.

What we've come up with, in a way, is a different and more nuanced view of UX design. One piece of our work has to do with thinking of design as a method for accomplishing an intention given a set of situated facts at a specific level of abstraction. For instance, an architect starts with a rendering of a house, which was created taking into account the site, the homeowner's taste, how houses are used, and a number of other goals. Then he goes to another level of detail, say rough floor plans; then another, and another. Ultimately, there are electrical designers, plumbing and heating designers, structural designers, and the like involved. Each level of design is based on an understanding of -- or coherency with -- the previous level, and with a common root design represented by the sketch.

No one would imagine changing one design without considering impacts on the other -- imagine someone just moving the windows without considering the plumbing, or just deciding to use 2x8 lumber instead of 2x10 lumber.

But in software, no one seems to force coherence across all of the design disciplines. To be sure, part of the reason is that it's really hard to craft a design document that captures all the nuances and subtleties inherent in an interaction design. And so, people draw incorrect conclusions, perpetuating incoherence in design. Or, when the confusion is noticed, people will trace back through levels of design to understand a decision better.

But if the different aspects of design don't have a common root, the traceback won't work.

What we're pondering as the "special sauce" of a consulting practice is working with the leads of many different functions to produce a coherent "root design" or vision for a product; articulating that design in a set of coherent design documents (which need not be text, BTW, and may be custom designed for the project); and drilling each design lead to ensure that their interpretation of the design is coherent with everyone else's. Those "root" design documents would then act as a common point for the rest of the traditional planning practice to happen.

I have seen this kind of failure even with the design of the extension to our home. Specifically, the space usage for the HVAC was not designed, and it ended up taking up space I was hoping to have for some other use. A surprise.

Let me offer one clear example of a pothole most product teams fall into. In fact, I consistently expect my competitors to make this mistake when I am crafting my competitive tactics.

- The first release of a product (especially software-heavy products) focuses on the core functionality and the core data flow for those functions.

- Maintenance and diagnostic features are usually part of v1.2 or later.

- This is what I referred to in terms of products that fail to go to production. It could be diagnostics. It could be some feature needed for soft fail or high reliability.

I bet you have seen these behaviors many times.

I don't know if you can address this kind of problem in a consulting practice.

I would guess you can focus on the infrastructure.

Example: some tract homes are designed for extensions (they are not centered on the lot, and face in a direction that allows it), or have foundations designed to allow the addition of a second floor.

>A colleague and I are looking at an issue we call "coherency design" – a focus on how solutions to different types of design problems in a single product constructively or antagonistically interact to either further, or to hamper the initial value proposition.

Just a "top of my head" response. I'm in the middle of too many projects, so I've only skimmed your piece, but you might want to look at what's called "feature interaction" in the telephone industry (e.g., how does call forwarding interact with call waiting).

It seems interesting, but how does the notion of "work modeler" differ from the usual concept of usability? Not that a usability specialist is generally at the table either. But what does this new concept add to our understanding of system design?

Thanks for taking the time to read the essay. You've hit on one of the main issues that underlies this essay, which is: why is it that user experience designers are not at the table? One confounding issue is that different people mean different things when they talk about usability, user experience, HCI, etc. The "work modeling" name is intended to do two things: first, to keep readers of the essay from categorizing usability where they normally put it in the process -- we want people to keep open minds about that. Second, to focus attention on a part of the process that we think is close to the "root" of user experience design -- the initial vision from which all other aspects of usability design stem.

A great deal stems from the work models. When someone sits down to use a piece of software, he typically has a goal in mind, and will quite often have in his head a picture of what needs to be done to get to that goal. I call that the "user work model". The software itself embodies a work flow, and requires a bunch of steps to be taken one after the other. I call that the "software work model". Our user has to maintain 2 "instruction pointers" in his head -- one points to where he is in his work model, the other points to where he thinks he is in the software's work model.

The closer these 2 are, the easier the job of keeping track of where one is. The more divergent these are, the more difficult.
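The "two instruction pointers" idea can be made concrete with a toy sketch. Here each work model is just an ordered list of steps (the step names below are invented for illustration, not taken from any real product), and edit distance serves as a crude measure of how far the software's work model diverges from the user's:

```python
# Toy sketch: each work model is an ordered list of steps; edit distance
# counts the insertions, deletions, and substitutions needed to align them.
# Step names are hypothetical examples.

def divergence(user_model, software_model):
    """Levenshtein distance between two step sequences."""
    m, n = len(user_model), len(software_model)
    # dp[i][j] = cost of aligning the first i user steps
    # with the first j software steps
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if user_model[i - 1] == software_model[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # a step the user expected but the software lacks
                dp[i][j - 1] + 1,        # a step the software demands but the user didn't expect
                dp[i - 1][j - 1] + cost  # matching (or renamed) step
            )
    return dp[m][n]

user = ["open file", "edit", "save"]
software = ["choose workspace", "open file", "edit", "commit", "save"]
print(divergence(user, software))  # 2: two software steps the user did not anticipate
```

A divergence of zero means the user's mental pointer and the software's pointer advance in lockstep; every extra unit is a place where the user must stop and re-orient.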

In a building metaphor, the equivalent of the work modeler would be the architect who puts rooms and spaces together with a thought to how individuals need to use the space. This is different from designing how tall the doorways need to be (a human factors / usability task), and it is different from deciding what kind of faucet spigots to use in a doctor's office (paddles, so they can hit them with elbows and not wreck the sterility of their hands).

Does this help? Hopefully I'm not rambling too much... Please forward on any more thoughts.

Your explanation of the difference between the "software work model" and the "user work model" is novel - to me, anyway; I'm not a usability expert, I just try to read up on it and run small-scale tests - and the insight that a primary aspect of usability is the difference between the two models seems to me to be elegant and useful. In fact it's obvious to me as a software practitioner that this particular gap is one that is very likely to be impossible or impractical to address once a system's architecture is fixed, and very much helps to make the case that the work modeler needs to be "at the table".

I don't know whether "System Coherence Design" is the best name for what you're trying to do - it seems vaguer than the contents of your essay and the additional explanation you gave me. You are talking about a particular aspect - arguably the most important aspect - of the user's experience with the product. There are many possible meanings of "coherence" - it could refer to the system architecture, for example, but having a coherent architecture is no guarantee of an appropriate software work model. And although design plays a role, it's also about analysis and understanding. Seems it's more like "System Fitness" or something like that.

A suggestion. You're making what I consider to be a common mistake. One of the key people that should be at the table is your closest ally to the customer: your customer/product support team. It is these people who know best what the customer wants, can use, can *understand*, and will accept.

These people are also the people who will be tasked with taking the brilliant ideas of these others at the table and present/represent them to the customer/end-user.

Too often the support process is an afterthought instead of a key part of the product design process.

My favorite example comes from my time in BBN Hark:

The new "Voice Connector" service had taken root and was starting to grow into a product for customer consumption. We had just put our system into The Boston Globe's office, and a meeting to plan some aspect of how it was going to be sold was being held in one of the conference rooms.

By accident, I ran into someone from the group that was invited to this meeting. Since I was the support guy for BBN Hark, this person decided, on a whim, to invite me into the meeting, which was about to start.

I sat in the meeting, listening to some of the most brilliant people I've ever met and feeling a bit out of place, but I began to see that they were missing an overall view of the product as a service. I timidly raised my hand and asked a question -- I think it was something along the lines of "who's going to answer the phone for problems?" -- and the room went silent. It was obvious that no one had thought of the problem, nor did anyone want to be the guy carrying the pager (aka DOPE = Designated On Page Engineer).

So I offered some solutions that they had not considered. One was a tiered support model: calls during the normal business day (Cambridge time) were handled one way, and outside those hours two more tiers existed, each of which cost more. The second tier covered hours outside normal business hours (Cambridge time) at an additional cost, and the last tier -- calls outside the other two, such as weekends or after 8pm Cambridge (5pm California) time -- was billed at a higher rate still.
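As a rough illustration of that tiering -- the rates and hour cutoffs below are invented, since the story gives none, and all times are Cambridge local:

```python
# Hypothetical sketch of a three-tier support pricing model.
# Rate multipliers and hour boundaries are illustrative assumptions.
from datetime import datetime

def support_rate(call_time: datetime) -> float:
    """Return a rate multiplier based on when support was requested."""
    weekday = call_time.weekday() < 5        # Monday..Friday
    hour = call_time.hour
    if weekday and 9 <= hour < 17:
        return 1.0    # tier 1: normal business day, base rate
    if weekday and hour < 20:
        return 1.5    # tier 2: weekday hours outside the business day
    return 2.5        # tier 3: weekends, or after 8pm Cambridge time

print(support_rate(datetime(2024, 3, 5, 10, 0)))  # a Tuesday at 10am -> 1.0
```

The point of the structure is the one the room grabbed onto: off-hours coverage stops being an unfunded burden on whoever carries the pager and becomes a priced service.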

It was at this point that the sales person in the room began to imagine her new car. You could see it on her face, she'd never thought of this revenue stream!

I also suggested a number of protocols to be implemented to provide the customer with the smoothest support channel: giving them a single number to call, providing the customer with a realistic SLA that we deliver in writing, and an internal protocol for the transfer of support responsibility from person to person which provides a real, honest-to-god paper trail, useful for the audits and post-mortems which *will* happen no matter how carefully we try to prevent them. One of these things was a single-page "check list" document kept in a notebook; it was to follow the transfer of the pager, so that during an audit it could be shown that the person receiving it checked the battery, had a new battery available, called the main number that the customers called, made sure the call was forwarded to the appropriate place, that the answering service answered it correctly, that the pager beeped when triggered by the helpdesk, and that the phone number given in the page was the correct support number. There were other things checked too, but this was the key item people grabbed onto. Each of these one-page documents had a place for the DOPE to date and sign it, so that there was no question.

Some of the other things that were done were creating a schedule for escalation, and contact sheets should they be needed. Suddenly people were telling me that they were glad that I'd shown up. Since then, I've seen many cases where my view as a customer support professional (at the time) provided value, in that I could see the entire elephant, not just a part of it.

See, this is what happens when we don't read the entire document......

You're actually arguing the same things I am/was in my letter to you!

My apologies for firing without aiming more carefully.

However, with that said, I can see that your statements are *almost* the same as mine, but you are not considering that the customer support person(s) are your best interface here: they "represent" the customer but are *YOUR* people, and are not trying to "give away the ship" as the end customer would more likely be.

Thank you for your time and you can use my comments in your development if they help provide a positive value. :)

A couple of points. The conclusion I'd like to draw is that a product entails many streams of design; none should be left to chance, and each stream of design should be checked for interactions with the other streams.

I agree with your observation that customer support is a stream of design that should not be neglected. What's necessary is to identify all of these streams up front, and get them all to talk to each other.

I also agree that support teams would likely get a much different picture from customers than sales or others would. Where I'd amend your observation, however, is this: to understand our customer, we've got to see the entire system of relationships and transactions involved -- and no one really has the "best view" of the customer. Also, the "customer" isn't represented exclusively by the user, and the best value of the overall system includes factors that go beyond the user.