The difficulty lies not so much in developing new ideas as in escaping from old ones. – John Maynard Keynes

User experience architects are constantly being asked to “think outside the box” in an effort to create revolutionary ideas for their companies. Some companies have even created separate innovation labs to this end. These lab groups are encouraged to be nimble, fail quickly, and maintain a startup mentality. This article explores some of the concepts discussed in The Commonwealth Club of California’s broadcast, “R&D, Innovation Labs, and Channeling Your Inner Startup.” Will Young, Director of Zappos Labs; Brian Luerssen, GM of OKCupid Labs; and Mark Randall, Chief Strategist and VP of Creativity at Adobe, discussed the place and importance of innovation labs within large companies.

What is an Innovation Lab and Why Would a Company Want One?

What made a company large and successful at serving its customers can eventually make it unsuccessful at being innovative. The goal of an innovation lab is to build brand new businesses and experiences – not just product extensions – to keep companies from stagnating.

Will Young commented that innovation labs aren’t about direct revenue. If that were the case, Zappos’ innovation lab would be sitting around every day creating different ways to issue coupons. The point of an innovation lab is to try to keep larger companies nimble. If you’re doing it right, it will work, but it won’t work every time. Failure is a fact of life. Successful labs have to balance their success and failure rates.

Mark Randall was asked how the people at Adobe’s lab came up with new ideas. He suggested thinking about the perfect experience, using an imaginary helmet as an example. You can put this helmet on and imagine a drawing. As you imagine your drawing, it appears on the screen in front of you. Do we have the technology to do this today? No. But ask yourself, “How close can we get?” This kind of thinking drives Adobe.

Anyone in the company who would like to test out an idea is given a Red Box. The Red Box contains everything a person would need to launch a new idea and test it. Recipients don’t have to get permission or sign-off to receive a Red Box – Adobe finds that too much paperwork kills innovation – they just have to have an idea.

Each box contains a credit card with a pre-paid $1,000.00 limit and six levels that must be completed. There are instructions about how to run through the levels, and at the end of each level there is a specific action to take and a task to complete. When the idea has made it through all six levels and “beats the Red Box,” the owner gets the “Blue Box.” The Blue Box has six more levels, and upon completion the idea is given executive exposure. Employees from administrative assistants to engineers have been given Red Boxes. Since their inception, there have been 900 Red Boxes issued, and 15 Blue Boxes.

Randall said that yes, in the beginning, their financial team looked at them in terror when they suggested giving recipients the pre-paid card, but the idea was to alleviate the need to turn in credit card receipts and waste time completing paperwork. And they found that the Red Box recipients were even more careful with the company’s money, and in some cases added their own money to continue funding their projects.

While proposing the concept of the Red Box, Adobe leadership suggested that the boxes should only be given to the company’s top performers. Randall suggested an innovation contest: leadership with the company’s top performers on one side, and Randall with the company’s lowest performers on the other. Randall insisted that he would win. He said that it only takes one person, someone who is frustrated with their current project but has something they’re passionate about or excited by. Ideas come from people at the extremes, and happen at a moment in their lives when the company’s needs, the people around them, and their own passion create a perfect storm.

Fail-Con

How do you know you’ve failed? The panelists agreed that the hardest part is figuring out whether you’ve failed or whether you just haven’t tried hard enough – whether the idea was a good one but failed because of poor execution. But failure is both important and inevitable if you’re thinking outside the box. Breathtakingly spectacular ideas are rare. An innovation team will likely never hit a home run its first time out. The labs exist to try crazy things that will probably fail, and to learn lessons from those failures. Leadership needs to understand that and not disband the lab at the first string of failures.

The panelists spoke about the difficulty of motivating innovation team members after they had experienced so many failures. The team members often became depressed, despondent, or afraid they might be fired. Mark Randall suggested having a yearly Fail-Con to celebrate the company’s failures, and the lessons those failures produced. He proposed the idea of making a tiny memorial plaque for each failed project, and that these plaques could be scattered around Adobe’s campus lawn as a reminder to employees that the people who thought of these ideas are still here and still successful. He wished for an enormous graveyard of lessons learned.

Innovation at Sears

I do not think Sears will be starting a separate innovation lab any time soon, but in the meantime, how can we apply some of these concepts to our own work groups? What might a Sears Red Box contain? What might our Fail-Con look like? And how close can we get to a Kim Kardashian imaginary helmet?

Knowing how something originated often is the best clue to how it works. – Terrence Deacon

Patternality is the ability to detect and make new patterns in our environment. It represents a deep instinctual drive to impose order on the world to make it useful, understandable, and survivable. All animals must recognize patterns in their environment if they are going to survive.

Humans encountered several developmental turning points that required our brains to change to reconcile the demands placed on us. As luck would have it, our brains are genetically at the ready for breakthroughs. If we look at how our species’ cognitive abilities have changed and adapted over a roughly six-million-year period, we may discover something about how we might change and adapt to today’s digital technology.

Our Ancestors

To survive, like all animals, our early quadrupedal ancestors had to go through a period of assembling things into patterns and categories: things edible, things inedible, things drinkable, things undrinkable, things poisonous, things nonpoisonous.

One of the earliest turning points came when our ancestors first stood upright and gained the ability to walk bipedally. Hearing, seeing, and sensing what was above ground created new survival opportunities. Memories of what led to succeeding or failing as quadrupeds would have to fade; our ancestors were required to construct new ways of identifying and creating patterns that allowed for success or failure as bipeds. Hand-eye coordination needed to be developed. Things fell into new patterns and categories: things liftable, things unliftable, things transportable, things untransportable.

With our newfound hand-eye coordination we were able to explore the affordances of nature’s objects. Each natural object was a tool-in-waiting. Our ancestors’ first tools aren’t found in any museum. Long before the man-made handaxes of the late Pliocene, a stick might have frightened dinner out from beneath a felled tree, and a rock, close at hand, may have been used to bash its skull. Some of these objects became useful, then popular, then pattern enablers.

These ancestors continued to leave evidence of their social activities. Objects fell into new patterns and categories: things fashionable, things giftable, things symbolic of faith or prestige, things presented after accomplishments or rites of passage, things distributed, things traded, things bought and sold.

Other significant turning points include the invention of speaking, writing, and reading. Each of these inventions expanded our brains’ capabilities to think in ways that altered the cognitive evolution of our species.

The Digital Turning Point

If we are, with the invention of our present digital technology, at another evolutionary turning point, where would we find indications of new patternality? Consider babies who have tried to “open” a photo in a book with the finger motions that work on their tablet computers. Is there a new pattern category emerging: things openable, things unopenable, things pinchable, things unpinchable?

Patternality and User Experience

Humans rely on pattern recognition and mental models to help us navigate any new and unfamiliar environments we encounter. Influences in our past drive our ability to detect patterns and create new ones. We feel more comfortable encountering situations we are familiar with. When users discover familiar patterns in interfaces, their brains release comforting dopamine. We love this comfort so much that we try to detect patterns where there are none. And when a pattern doesn’t behave as expected, users feel anxiety and panic. Imagine yourself using a browser with no “back” button. What if you were unable to close a pop-up window?

From then to now and into the future

The transformation of our environment brought on by the invention of digital technology has created another developmental turning point. New patternality is being formed, at times without our realizing it. We must design with people’s mental interaction models in mind. When we can’t leverage an existing interaction model or pattern, we must create an interaction experience that draws from common mental models as much as possible (e.g., removing documents by moving them to the “trash”). At the same time, we can’t force a design just because it leverages the familiar.

It’s better to have people learn a new model than to try to shoehorn a familiar pattern into a place it doesn’t fit. The first instinct of almost all of the usability test subjects who encountered the first computer “mouse” was to hold it upside down in their non-dominant hand and manipulate the track ball with the fingers of the opposite hand. But after test subjects watched the test conductor place the mouse on the table and give it a jiggle, its affordance was almost universally, immediately understood. A new pattern was born.

Will our memories of what led to succeeding or failing in the days before digital technology slowly fade with the passing of each millennium? Changes in our environment, natural, physical, or man-made, provide constant grist for the patternality mill, forever producing new avenues for pattern sensing and pattern making in an effort to make sense of our world. As our species encounters new climatic, biologic, and technological changes in our environment, what new patterns will we create?

Picture book / when you were just a baby / those days when you were happy / a long time ago. – The Kinks

Lens flare: the magical mistake. It happens when shooting film through a lens pointed into a light source, causing an optical defect – rings or circles of light – that obscures part of the image. If you can’t come by it organically, most photo editing software can artificially produce the effect. And that software is being used a lot these days in television shows (Saving Hope, Doctor Who), television commercials (Honda Insight), movies (Transformers: Revenge of the Fallen, J.J. Abrams’ Star Trek), video games (Mass Effect 3, Syndicate, Battlefield 3), and most recently in our new “This is Sears” advertising campaign. Apparently, the present’s so bright, we gotta wear shades – but why?

In the late 1960s and early ’70s, the first truly cheap cameras were manufactured and made available to the great unwashed – like your fat old uncle Charlie, for example. Until then, professional photographers had worked hard to ensure that lens flare didn’t happen. Flare on a photo meant the photographer was an amateur who didn’t know what he was doing, so for the most part, the only photos with lens flare were made by fat old uncle Charlie with his new Kodak Pocket Instamatic. Suddenly there were heaps of amateurs, and heaps of lens-flared photos. For example, the Lankin family photo album contains several photos of what look to be our friends and relatives disembarking a spaceship at Devils Tower to embrace Richard Dreyfuss.* So lens manufacturers started making coated lenses that would prevent lens flare, and soon fat old uncle Charlie was the Midwest’s answer to Ansel Adams. Time marched on; we grew older.

Enter the reminiscence bump.

Psychologists have studied a phenomenon they call the “reminiscence bump.” Starting around the age of 30, people begin to remember their youth with great fondness. Remembering that period generates strong, random images and emotions, as well as clear and specific memories. Your ability to retrieve memories from that time literally gets a little boost, or “bump,” and you begin to experience the phenomenon of nostalgia.

In the 17th century, nostalgia was considered a physical disease, a sort of leprosy of the brain. And the physicians of the day couldn’t decide if it was caused by very tiny demons squeezing bits of the ol’ gray matter, or brain damage caused by listening to cowbells for too long.

But nostalgia’s not what it used to be (cough), and today studies show that nostalgia has psychological benefits; it makes us happy and improves our state of mind. Researchers from Loyola University report that thinking about good memories for 20 minutes a day can make people happier than they were the week before, and that thinking about the past makes people feel better than thinking about their present lives.

Even people with lousy childhoods may look back on those years with at least some affection. Studies show that our brains work to clarify happy events and dim sad ones in our memory. And when the brain can’t dim the sad events, it helps us view them as little stories of triumph and redemption, turning our personal negatives into personal positives. For example, a beleaguered future UXA from an impoverished background conquered the back-breaking newspaper distribution industry and made enough money for a pair of green, grapefruit-sized puff balls – each with its own jingle bell – for her white leather roller-skates, and went on to become “The Queen of Roller Bootin’.”

If you love some artifact from your past – a book, movie, song, or toy – chances are good that you loved that period of your life. You’re not watching Breakin’ 2: Electric Boogaloo for the eighteenth time because it’s a great movie, you’re watching it because it reminds you of a time when you were relatively happy. You can’t easily separate the quality of the movie from the time you first saw it. Even the crappiest artifacts from your past are made great simply because they were around when you were a kid. When we attach importance to things from our past, we’re telling ourselves that our own lives are important.

As it turns out, anything from the past can bring on nostalgia. A song you hated growing up, one that triggers no specific memories for you, can induce feelings of nostalgia, because the song you hated was part of an ongoing “life soundtrack” that included other songs you loved. You actually wasted your time listening to the hated song. But no one wants to think they’ve wasted their time, so our brains turn “wasted time” into “invested time,” and the hated song becomes meaningful in its own way.

Most people like recalling things that happened during their lifetime. It makes them happy. So fat old uncle Charlie’s lens flared photographs – and photographs made to look like them – can trigger happiness if you’re over a certain age even if they’re not photographs of things specifically from your childhood. The majority of our customers have reached their reminiscence bump, so using this retro photo technique in our advertising should trigger memories of positive events that shaped who they are now. Will they connect that happiness with Sears? We don’t know what the future holds; we’ve only got the past – and the time and effort we invested in simply existing.

*Do yourself a favor and watch Close Encounters of the Third Kind. It’s full of flare.

Having trouble selling your house? In my neighborhood, a popular home remedy is burying a small, plastic statue of St. Joseph upside down in your flowerbed. With enough prayer, the neighbors assure me, a successful closing will be right around the corner.

But those who lack faith might try an old realtor’s trick: put a tray of cookies in the oven before potential buyers come over. The theory is that the comforting smells of fresh-baked goods will evoke positive memories, and those memories will get people to fall in love with and eventually purchase your house. But why might this work?

The olfactory nerve is located close to the areas of the brain connected to the experience of emotion and memory. The ability to smell is so intertwined with the brain’s memory center that when areas of the brain connected with memory are damaged, the ability to identify scents is impaired. This may be why odor-induced memories are so emotionally potent.

Marketing wars are won or lost in the consumer’s imagination and emotions, and a business that understands how to tickle a customer’s psychology will be rewarded with higher profits. When a brand experience is connected to our sense of smell, something interesting starts to happen.

About ten years ago, oil tycoons, celebrities, and sovereigns around the globe began complaining that success didn’t smell as sweet as it used to – at least when it came to their Rolls-Royces. The newer models rolling off the line had much of their wooden components replaced by plastic, and the synthetic smell of it was overpowering the once dominant leather and wood scents of the old models. So Rolls-Royce – using the 1965 Rolls-Royce Silver Cloud as its model – came up with a fragrance that smelled like the classic. When the fragrance was sprayed on the underside of the car’s seats, the complaints stopped. Rolls-Royce continues this practice.

Scent branding doesn’t stop at objects. The movie, Twilight, and television show, Desperate Housewives, each have their own perfume, and so do celebrities, who are themselves brands. After all, Jay-Z’s not a businessman, he’s a business, Man, and according to his fragrance’s promotional material Young Hov’ smells like lime and goji berries. Meanwhile, David Beckham runs around the pitch in a cloud of grapefruit, cedar, and vanilla. But why stop at objects and people? Why not scent-brand a space?

In 1990, after researching the effect scent has on consumers, the Mirage resort and casino started pumping a “Polynesian” scent through its Las Vegas air vents. Consumers spent significantly more than they had before. In fact, it was such a success that the company behind it, AromaSys, now scent-brands the Venetian, Caesars Palace, Mandalay Bay, Luxor, Bellagio, Wynn, and all MGM-Mirage Resorts and Casinos.

It seems our hearts are connected to our wallets by smell. One study found that a pleasant smell can alter a person’s perception of time, which may lead to more time lingering in the physical location examining the merchandise, or patiently waiting for help. Consumers tend to return to the scented store more often as well.

Whether a scent is successful at its job depends on how closely it matches the space. If I had to develop a scent for our stores, I’d use the space in the photograph at the top of this post as a guide.

Look at the photograph. In your mind’s “nose”, can you smell that space? Do you smell the sweet top notes of freshly cut lumber and sawdust mixed with the warm odor of oily machine parts and the sharp, metallic scent of nuts and bolts, mixed with nail dust in the corners of old toolboxes?

Group we belong to, group we belong to, group we belong to, lend me your ears.

Picture it. Sicily. 200,000 BC. There’s no il primo course; pasta hasn’t been invented yet, so everyone is hungry and there’s not enough food to go around. Your clan meets mine for the first time, just outside of Palermo. Do we invite you into the cave for antipasti? No. You’re filthy outsiders, here to trash the joint and take all our mastodon spada alla ghiotta. So we fight, in groups we’ve instinctively formed to help us better survive this very situation: competition for insufficient resources. And those who survive enjoy life, liberty, and the pursuit of procreation. The concept of “Us” versus “Them” may not be politically correct, but it was crucial to our survival. And our brains are still hard-wired for this type of division.

A member of the Sears accounts team is hardly likely to bash in the skull of a member of the marketing team while competing for the last ballpoint in the stockroom. Instead, we use that in-group instinct to determine our self-image. The groups we belong to shape our identities. We belong to groups of people we know personally, like our family, our book club, and our drinking buddies. We also belong to groups so vast that we can’t know everyone personally – males, Egyptians, graphic designers. Whatever group we’re in, we feel like that group’s fate is our own, so it makes sense that we have warm feelings of connectedness for our groups, even though we may dislike some individuals within those groups. For example, I take great pride in my Canadian heritage but can’t seem to stop apologizing for Celine Dion, Justin Bieber, and of course, Bryan Adams.

It’s not surprising that studies have shown people will make enormous financial and personal sacrifices to join an in-group they wish to be part of. People may endure the pain and humiliation of hazing to pledge a fraternity, or pay to become members of everything from NPR to the Emperor’s Club.

Some of the most successful companies concentrate on creating an in-group for their customers. For example, we know that the Apple in-group, thanks to their famous marketing campaign, is composed of the smart, hip, youthful, and fit, as opposed to the Windows in-group, who are a bunch of uptight, suit-wearing, unimaginative losers. People will pay a lot of money to belong to that Apple in-group. The new iPad, with its “stunning Retina display, 5MP iSight camera, and ultrafast 4G LTE” starts at $499.00. And it comes with a license to complain about the Genius Bar with as many doe-eyed, skinny-jeaned, Zooey Deschanel look-alikes as you can cope with.

We may have made great strides – especially in the OBU – in creating our Sears employee in-groups. The majority of my co-workers take pride in working for such an established and historically groundbreaking company, in such a great location. We clearly have the know-how to manipulate this universal, primitive wish to be part of an in-group, so why aren’t we doing more to make our customers feel like they belong to “Team Sears”? While we give wonderful rewards to those joining the Craftsman Club or becoming Shop Your Way Rewards members, inclusion in these groups doesn’t exactly make the Average Joe want to strut down the street like John Travolta during the opening credits of Saturday Night Fever. The tagline, “This is Sears,” is not inclusive. The marketing campaign is beautiful, and startlingly different from our past campaigns, so it may work well in spite of its tagline. But I’d love to see the effect a more inclusive message would have on our bottom line. Until then, This is Your In-Group. This is Sears.

"Y'all take a chill. You got to cool that sh*t off. And that's the double-truth, Ruth." - Samuel L. Jackson, Do The Right Thing

My cat, Rudy, used to go wild when a show with birds came on the television.

He’d run to my Zenith and bat the screen with his paw. Occasionally he’d turn to look at me on the couch and meow as if to say, “You sure you don’t want in on this?”

“They’re not real birds, Sweetheart,” I’d tell him. “They’re bird simulations.” But he kept on behaving like they were the real thing. Silly boy. Of course, humans are more tech savvy than my cat, right? Maybe not.

Students were tutored for an upcoming exam by a voice-equipped computer. After the tutoring session, the students were asked to evaluate the computer’s performance, specifically, how well they thought it had taught them. Half the students were asked to evaluate the computer’s performance by the same computer that had just tutored them, the other half used a different computer across the room – equipped with a different voice – to complete their evaluations. The students consistently gave significantly more positive responses to the computer that asked about itself than they did to the computer across the room. Students felt far more comfortable complaining about the tutoring computer behind its plastic “back” than to its LCD “face.”

For all the years that Homo sapiens have been evolving, the percentage of time we’ve spent interacting with computers is infinitesimally small. And because playing nicely with others helped our ancestors’ genes survive, we’re driven to keep doing it, even if the “other” isn’t an actual member of our genus and species. Yes, we may have figured out how to set up email accounts, but that information is being stored in our pathetic, lizard-y, stone age brains next to all the ancient How to Win Friends and Influence People information. So we tend to treat technology – especially technology that can appear to help us – like other humans. And we expect it to behave the same way when it interacts with us. If it doesn’t, if it’s rude, impersonal, and unhelpful, we’re going to feel like accidentally on purpose running it through with a spear during the annual mastodon hunt. Those of us old enough to remember Microsoft’s assistant, Clippy, know what I’m talking about.

iPhone’s voice-activated virtual assistant, Siri, is the first personal experience with artificial intelligence many people have ever had. Siri helps us schedule appointments, order soup on a rainy day while we dance around in our pajamas, and solve our hotspacho* problems. We can even teach Siri our name. So, by design, our stone age brains have gone mad for Siri, to the point where Apple engineers predicted and programmed sassy responses to anticipated questions such as “Will you marry me?”, “Do you love me?”, and “What are you wearing?”

Experiments have shown that one can take almost any social experimental finding and apply it to computer-human interaction. When it conforms to social rules, technology seems more likeable, smarter, more helpful, and harder working. Who knows? In the future, Sears’ device presence may have a voice of its own, but until then we can keep this information in mind when improving our site in lower-tech ways. If our web presence is slow to react or slow to respond, if it shows no regard for what a customer’s wants and needs might be, if it asks for unnecessary information, or can’t remember our customers’ names, then it’s no longer our customers’ ally. And they’re probably going to complain about it behind its electronic “back” at a competitor’s website.

There it is, pinned to an office partition near you, under the Dilbert cartoon: a list of user-interface design guidelines. There’s been at least one copy stuck to a cubicle wall in every office I’ve worked at since 1996. Sometimes it’s Shneiderman’s Eight Golden Rules of Interface Design. Sometimes it’s Jakob Nielsen’s Ten Usability Heuristics, or maybe it’s one from Norman or Brown. If you hold these lists side by side, you’ll find they’re strikingly similar. Two separate rules on one list may be combined into one rule on another. The phrasing may be different, but the ideas are the same. Why is this?

The top five user experience superstars – Norman, Nielsen, Molich, Shneiderman, and Brown – helped create these guidelines years ago, right around the time your alphanumeric command line interface switched to a graphic one. Goodbye, “rmdir c:\finances /s /q.” Hello, “drag to trash.”

What made these guys our industry’s MVPs? Unlike most people working in our field today, they had backgrounds in cognitive psychology, and they applied their knowledge of cognitive science to try to improve the design and usability of interactive computer systems. They weren’t interested in pretty or profitable; they were scientists, doing independent research on how our brains solve problems.

Have you ever listened to Jakob Nielsen take 45 minutes to explain the mathematical formulas – N(1-(1-L)^n) and others – that went into his study of the number of usability problems found in a usability test with n users, where N is the total number of usability problems in the design and L is the proportion of usability problems discovered while testing a single user? I have. See, the typical value of L is 31%, averaged across the large number of projects he studied. You’re getting sleepy. Plotting the curve for L=31% shows that we only need to run a qualitative test with 5 users.
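Nielsen’s curve is simple enough to sketch in a few lines of Python. This is a hedged illustration, not his code: the function name is mine, and L = 0.31 is just the cross-project average he cites, not a constant of nature.

```python
# Nielsen's formula for the number of usability problems found when
# testing with n users:
#
#     found(n) = N * (1 - (1 - L)^n)
#
# N is the total number of problems in the design, and L is the
# proportion a single user uncovers (0.31 on average, per Nielsen).

def fraction_found(n_users: int, L: float = 0.31) -> float:
    """Expected fraction of all usability problems found with n_users."""
    return 1 - (1 - L) ** n_users

if __name__ == "__main__":
    for n in (1, 3, 5, 10):
        print(f"{n:2d} users: {fraction_found(n):.0%} of problems found")
```

The curve flattens fast, which is the whole argument for small qualitative tests: five users already uncover roughly 85% of the problems, and each additional user mostly re-finds what the first few already saw.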

Zzzz….

Not even his “I’m feeling lucky” Google boxer shorts joke could wake most of us from the stupor he created in that auditorium. And that’s because he’s not a poet, or an artist, or even a particularly good storyteller. But Jakob is hardcore when it comes to his research analysis.

So when he came up with his top ten, he wasn’t out to differentiate himself from those who had gone before. He wasn’t trying to impose his own likes and dislikes upon the HCI community. His rules call for things like clarity, consistency, error prevention, recognition, and recovery, just like the rules of those who had gone before him, because those rules’ origins come from a common source. His guidelines, like his colleagues’, are based on human psychology: how we learn, reason, remember, and absorb information.

And this is useful information to those in our field because the human brain is often terribly inadequate when it comes to helping us understand things. Our color vision is minimal; mantis shrimp have 12 types of color receptors in their eyes – humans have three – and can also see ultraviolet, infrared, and polarized light. Our peripheral vision is limited; the average German Shepherd has a visual field of 250 degrees, while a human’s visual field is only 190 degrees. Our focused attention span is around eight seconds, only slightly above goldfish, whose attention spans last three seconds. And truckloads of neuropsychology test results show us that our own memories cannot be trusted, that our brains are constantly messing with our heads, and that we need to address these issues if we’re going to help users successfully perform tasks using a computer interface.

Recently I’ve heard talk from those in our industry who don’t have a scientific background of changing these rules to suit different business plans, marketing concerns, or design choices. “They’re old,” they argue, implying that old is bad. And while it is important to keep current with new technology in our field, please let’s keep in mind that the original and best user-interface design guidelines were created by exploring the limits of the human brain and discovering what errors we are likely to make. As technology advances, and neuropsychologists learn more about how our brains work, we may find these lists of guidelines being augmented or updated. But for now, instead of trying to differentiate ourselves by making up our own personal lists of interface guidelines based on our personal wishes, let’s focus on designing something to help our users cope with their brains’ limits and the corresponding errors that arise because of them.

Chesterfields: to sit or to smoke?

A typical Canadian family enjoying a hot summer day.

Many things influenced my choice of career. There were my anxious parents and my best friends, my tendency to doodle in homeroom and the teacher who encouraged me to keep doodling. But the biggest influence on my career was something a bit loonier. Blame Canada – it made me a UXA.

I am a Canadian American. Yes, it is a real thing. I was raised by Canadians who moved to a small, Midwestern American town where we spent most of our time. But my summers and vacations were spent with my family in Canada. Like Persephone, I felt I was living a double life, with the small, Midwestern American town playing the part of the underworld. This was because the town was populated with small, Midwestern American children, who made me miserable most of the time.

My parents spoke Canadian English – yes, it is a real thing – so that’s what I spoke too. And children, when they’re not composing adorable commentaries about kittens for the Internet, can sometimes be horrible, mean-spirited little shits. My new school chums wasted no time telling me I was an idiot, a moron, a retard, a dummy, and a loser because I “talked stupid.”

There were so many words for me to re-learn. Why were the Rockets in my Halloween bag called Smarties, and why were the Smarties called M&M’s? Why was Canadian cheese called American cheese? How on earth would you smoke a chesterfield? This matronly, brown woman, the one you call “Mrs. Butterworth,” why do you refer to the substance that oozes out of her as “maple syrup”?

As if the harassment from the children wasn’t enough, I received plenty more from my grade school teachers, who constantly gave me angry lectures and poor grades for misspelling words like labour, honour, colour, theatre, and cheque. And because my parents spoke and wrote this way, my friends concluded that they must also be stupid, so it didn’t help either one of us when my mother frequently ordered me to “put on rubbers and run the corner for some homo” in front of my new “pals.”

Technology comes to us with its own language, and everyone in our field understands it, but the majority of our customers do not. What seems obvious to someone in our field might confuse a potential customer. Here are some examples of messages you may find confusing.

“You spilled Schooner on the chesterfield? Mop it up with this flannel.”

“FieldT12empty.”

“If the expiry date has gone, chuck it in the garburator.”

“PC Load Letter”

I didn’t mind learning how to speak American English, but I would have appreciated a little help and understanding along the way. Consumers, as well as Canadians, shouldn’t be made to feel stupid because they don’t know the lingo. If a consumer is confronted with a confusing label, term, or message, he may feel stupid, embarrassed, or inadequate. It’s likely he’ll move to another website that speaks his language. And please don’t give a customer error messages that tell him he did something wrong. It’s not nice to rub his nose in it. Instead, guide him through the process of how to get it right. Or better yet, design a process he can’t get wrong. You’ll have a customer – and maybe a friend – for life.
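Cryptic messages like the ones above are easy to ship and hard to forgive. As a minimal sketch (the field names and copy here are invented for illustration, not from any real system), here is the same empty-field check speaking developer versus speaking customer:

```python
# Hypothetical example: the same validation, two very different messages.

def cryptic_message(field_id, value):
    # Machine-centric: accurate for the developer, meaningless to the shopper.
    if not value.strip():
        return f"{field_id}empty"
    return None

def helpful_message(field_label, value, example):
    # Human-centric: names the field in the customer's language, skips the
    # blame, and shows what a valid entry looks like.
    if not value.strip():
        return f"Please enter your {field_label}. Example: {example}"
    return None

print(cryptic_message("FieldT12", ""))           # FieldT12empty
print(helpful_message("ZIP code", "", "60602"))  # Please enter your ZIP code. Example: 60602
```

Better still is the last option from above: design the form so the empty state can’t be submitted in the first place, and the message never has to appear.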

The Knee is the Achilles Heel of the Leg: Thoughts on Analogy
Wed, 02 May 2012

Ogg tries unsuccessfully to communicate with the Maria-tron 6000.

My friend and I were at a Golden Nugget Pancake House a few weeks ago. After ordering the Denver omelet, she started talking about moving into her new, much smaller apartment. She was particularly unhappy with the living room. She folded her paper placemat in half and set it in front of her. She placed the mustard bottle in the middle of it to represent her couch; sugar packets placed around the bottle represented chairs. She put fork and spoon “bookshelves” along two of the placemat’s edges, and finished by trying to force her spoon in the lower right corner where it overlapped the edges of the placemat. “See?” she lamented, “There’s no room for the media cabinet.”

It was an effective analogy. I clearly understood her problem, mapping my idea of “living room” and its standard contents effortlessly onto the contents of a restaurant tabletop. This kind of mental mapping is a tool we’ve all used, at some point or another, to communicate concepts and ideas through a computer interface. Our livelihoods depend on our ability to create and understand analogies, and on the ability of other people to possess this skill as well, but never fear – humans make analogies so often and so easily that they are unexceptional to most of us. Understanding analogies is a sort of sound barrier the artificial intelligence crowd – specifically the branch devoted to creating a computer that can be made to “think” like a human – can’t break. Computers have a terrible time with analogy, which is why we’re all still eating eggs and complaining about our sitting rooms with our human friends instead of our Maria-tron 6000 droids. But how did we develop this ability?

Many cognitive scientists propose that our brains are able to understand analogies because understanding analogies helped the genes of our ancestors survive. The first caveperson who had the idea to make a plan to ambush wild game by drawing a map in the dirt with a stick must have been some sort of “cave-Einstein.”

Consider the following:

(Setting: Four Homo sapiens crouching in the Patch of Very Long Grass, looking at a map OGG has drawn in the dirt with a stick)

OGG: (pointing with a stick to one of the scribbles he has drawn) Bunga, you and Thak hide behind Very Large Rock, here, behind (OGG points to a second scribble with a long nasal protuberance) Wooly Mammoth. Then come out from behind Very Large Rock, jump, wave spear in air, and make big sound – frighten Wooly Mammoth. Gred and I stand here (OGG points to a third scribble) by Three Big Trees, kill Wooly Mammoth as it run by.

OGG: (moves into position behind Three Big Trees. Yells.) Gred? Why you still in Patch of Very Long Grass?

GRED: (yells) You say stand behind third scribble. You say third scribble is Three Big Trees.

OGG: (yells) No, third scribble only representation of parallel relationship between similarities of two disparate items to facilitate comprehension.

GRED: (trampled by a Wooly Mammoth) Augh!

Clearly, none of our consumers are related to Gred, as he was unable to produce offspring after “the mammoth incident,” all because he couldn’t understand Ogg’s dirt scribble analogy. But asking consumers to mentally map the physical process of purchasing an item from a store to manipulating pixels of light on a screen is asking a lot, even of Ogg’s descendants.

Finding the right analogy isn’t always easy. Not all analogies work well. Some are good (helpful) and some are bad (confusing). Locating and purchasing a humidifier replacement filter with a Sears mobile app is probably not enough like photosynthesis for that analogy to work. And although our creation of a bad analogy does not cause our supervisors to “release the mammoths,” it may spell the difference between profit and loss, or customer satisfaction and dissatisfaction.

As technology and everything else advances, new analogies will be created every day. Our consumers’ minds are full of random concepts just begging to be stitched together for them, by folks like us, to make their lives better. But how do we know when we’ve created a good analogy? I’m pretty sure the role I played having breakfast in a greasy spoon will not map to the role G. Gordon Liddy played in the Watergate burglaries, but until that breakfast, I wouldn’t have thought a spoon would map so easily to a media cabinet. What’s a Homo sapiens to do?

Of course, an excellent way to decipher what’s being understood in the consumer’s mind is through user testing. But that’s another blog post.