OK, well not really “home”, but since the East Coast was the first place I lived in the U.S. and for the longest time (13 years), I identify with it more than any other place.

Anyway … in June I left USAA (and my awesome team there) after 4 years and moved to New York to head the design team for Capital One’s commercial bank.

It’s an exciting time to join C1 and the commercial team because there is a huge groundswell of interest in, and support for, design-centric approaches to problem solving across the entire company. But even more important to me is the “do the right thing” attitude that permeates this organization.

Last week in Dallas I was privileged to see Rich Fairbank, our founder and CEO, speak for about 8 hours on the strategy for the company. His most powerful moment (and there were many) came with this oft-repeated line: “Doing the right thing cost, and it cost, and it cost … until one day it didn’t”. He backed this up with many examples of how C1 has done the right thing for clients (reducing fees, working to increase rewards redeemed, etc.) which cost the company money and reduced profits in the short term, but which ultimately contributed to increased client loyalty, usage, and profit in the long term.

It’s a great example of a win-win scenario: a for-profit company and its clients existing in a harmonious, symbiotic relationship. And it’s a fascinating study of how an intensely analytical company can also make huge business decisions based on “how would you want your mother to be treated?” (another Rich quote) and have faith that the economics will work out.

An often-held viewpoint is that we need to “remove all constraints from designers – let them be creative!” On the surface, this sounds great – but in actual fact, the opposite is true.

Now that I’ve made that provocative statement, let me explain.

Design is not Art, and Art is not Design. Artists create things to ask questions; they follow no rules and no set process. Designers, on the other hand, create things to answer questions – they solve problems, they follow rules, they have constraints. (See here for more).

Some of those constraints designers impose upon themselves. They understand and constrain themselves to the mission of their client and what they’re trying to achieve; they understand and constrain themselves to what is technologically feasible; they understand and constrain themselves to their users’ mental models – how they think about the world; they understand and constrain themselves to human abilities related to ergonomics and usability; and they understand and constrain themselves to aesthetics that humans find pleasing and simple.

Using these self-imposed constraints, designers can create beautiful stand-alone experiences (e.g. the Uber app).

There is, however, another set of constraints – those imposed upon designers when they choose to work within an existing, large ecosystem. Those designers understand and constrain themselves to only add the features and functions that are complementary to the ones already in the ecosystem; they understand and constrain themselves to connecting their designs to the rest of the ecosystem; they understand and constrain themselves to creating predictable patterns of interaction to make the ecosystem easier to use; and they understand and constrain themselves to the brand of the ecosystem, to present a unified, cohesive whole.

Many large organizations have recognized this and have created design systems to encourage their designers to direct their creative energies in productive directions and not reinvent the wheel. (US Government, NASA, Google)

Constraints are not bad; in fact, they’re necessary. If you don’t believe me, how about Charles Eames?

“Here is one of the few effective keys to the design problem — the ability of the designer to recognize as many of the constraints as possible — his willingness and enthusiasm for working within these constraints. Constraints of price, of size, of strength, of balance, of surface, of time and so forth.”

It’s the most wonderful time of the year … the IA Summit chair is making his list and checking it twice – the acceptance and rejection emails for presentations have been sent out, sparking a flurry of conversation on the twitters about the selection process and (mainly) the value of blind reviews.

Although I haven’t been heavily involved in the organization of the conference since 2008, when I was the conference chair, I thought I’d give some transparency into how we did things then – I don’t think it’s changed significantly since.

Back in 2008 we received about 150 regular session proposals and had 45 available session slots. The first step in the selection process was a blind review – 50 volunteers reviewed about 20 proposals each against a set of standard criteria, giving each proposal about 6-7 reviews and scores.

These reviews were done “blind” (without knowing the author) because the final selection committee wanted to gather research on what topics and presentations potential attendees might find attractive without that research being polluted by speaker name recognition.

Every proposal was then reviewed by a final selection committee of three, which I led. Taking into account the blind reviews, scores, and speaker identities, as well as our own experience and opinions, we narrowed the list down to the final 45.

We found that the blind reviews and scores were an excellent starting point and a way to get different perspectives, but they were just that – a starting point. We ended up picking 26 of the final lineup (57%) from the top 45 as scored by the blind reviews and 19 (42%) from outside it. Three of the sessions in the final lineup were the 137th, 127th and 111th as ranked by the blind scores – so you can see that the research needed interpretation!

Bonus Information about Scheduling: Once we had the final 45, we didn’t create the schedule straight away – we published the list on the website and asked attendees to pick their favorites. This enabled us to do 3 things:

1. Put popular sessions in large rooms.

2. Pre-schedule the 6 most popular sessions into the flex-track as repeats.

3. Through a pairs-analysis, avoid pitting sessions that attendees wanted to see both of against one another.
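For anyone curious what that pairs-analysis looks like in practice, here is a minimal sketch (with made-up session IDs and attendee picks – not our actual 2008 tooling): count how often each pair of sessions was picked by the same attendee, then score candidate time slots by how many attendee wishes they would break.

```python
from collections import Counter
from itertools import combinations

def conflict_counts(favorites):
    """Count how many attendees picked each pair of sessions together."""
    pairs = Counter()
    for picks in favorites:
        for a, b in combinations(sorted(set(picks)), 2):
            pairs[(a, b)] += 1
    return pairs

def slot_conflict_score(slot_sessions, pairs):
    """Total attendee wishes broken if these sessions run concurrently."""
    return sum(pairs[(a, b)] for a, b in combinations(sorted(slot_sessions), 2))

# Hypothetical attendee picks: each inner list is one attendee's favorites.
favorites = [
    ["A", "B", "C"],
    ["A", "C"],
    ["B", "D"],
    ["A", "C", "D"],
]
pairs = conflict_counts(favorites)
# ("A", "C") was co-picked three times, so A and C should not share a slot.
print(slot_conflict_score(["A", "C"], pairs))  # 3
print(slot_conflict_score(["A", "B"], pairs))  # 1
```

Scheduling then becomes a matter of preferring slot assignments with low conflict scores.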

So there you are: a little window into our process. Hopefully it sheds some light on the role of blind reviews.

To be crystal clear, here is my position on the recent discrimination in UX. Ian, thank you for encouraging me to say this in 458 characters as well as 140.

Discriminators: Stop it, now.

Victims: Draw strength from those in similar situations, talk to them. (Sarah, Relly, Leslie, Whitney, Amy, Jessica). Speak up publicly to raise awareness so the rest of us can help make this treatment culturally unacceptable.

Everyone else (incl. conference organizers): Listen. Believe. Don’t suppress discussion, even if it makes you uncomfortable or you think it has a negative short-term impact. Don’t stand by and do nothing.

Amidst so much bashing of the term User Experience, I was delighted to read Robert Hoekman Jr’s post on Boxes and Arrows recognizing the weakness in the term but acknowledging its momentum and calling for it to be defended.

Why are we so readily abandoning the term just when the going gets tough? I am reminded of a quote from Denethor, Steward of Gondor in The Lord of the Rings:

‘Yet,’ said Denethor, ‘we should not lightly abandon the outer defences the Rammas made with so great a labour.’

‘Much must be risked in war,’ said Denethor…. ‘I will not yield the River and the Pelennor unfought — not if there is a captain here who has still the courage to do his lord’s will.’

Well Lord Hoekman, here’s one captain still up for the fight!

(We should, of course, ignore Denethor’s delusional belief that he could win that fight – every metaphor breaks down somewhere!)

The last time I checked the dictionary definition of “indicator” it said:

in·di·ca·tor – noun

1. a person or thing that indicates.

2. a pointing or directing device, as a pointer on the dial of an instrument to show pressure, temperature, speed, volume, or the like.

3. an instrument that indicates the condition of a machine or the like.

4. an instrument for measuring and recording variations of pressure in the cylinder of an engine.

No mention of predicting the future there. In fact, Key Performance Indicators come in two forms: “leading indicators,” which, as Jared states, are used to predict the future, and “lagging indicators,” which measure past performance.

Now I happen to agree with Jared that leading indicators, if you can discover them (it’s hard!), give you the most value – but that doesn’t mean you shouldn’t give any thought to lagging indicators. A powerful dashboard or scorecard technique is to define complementary leading and lagging measures that show cause and effect; this can help you validate your indicator selection and give stakeholders confidence in your measurement framework.
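One simple way to sanity-check such a pairing – my illustration, not a prescribed method – is to see whether the leading measure correlates with the lagging one after a time lag. The metric names and numbers below are entirely hypothetical.

```python
def lagged_correlation(leading, lagging, lag):
    """Pearson correlation of leading[t] against lagging[t + lag]."""
    xs = leading[:len(leading) - lag]
    ys = lagging[lag:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

signups = [100, 120, 150, 170, 200, 240]  # hypothetical leading indicator
revenue = [10, 11, 12, 14, 17, 20]        # hypothetical lagging indicator

# A strong positive correlation at a two-month lag supports the pairing.
print(lagged_correlation(signups, revenue, lag=2))
```

If the correlation is weak at every plausible lag, the "leading" indicator probably isn't leading anything, and the pairing needs rethinking.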