
Archive for August 2015

As L&D moves from being a training department to a learning organization, LMSs will need to support the new types of learning required. That means we need a new kind of learning system.

In my view, a 21st century learning system needs the following characteristics.

Social learning must be at its heart. The learning system should have the functionality required to effectively support informal learning; to integrate social activities into learning paths; and to integrate comments and peer questions and answers into knowledge bases.

The learner must be at the center. The user experience is key. I agree with an observation Elliott Masie and Cushing Anderson made a few years back when they advocated for changing the term we use from Learning Management System to Learning System. Personally, I think if we started thinking of it as a learning delivery system it might refocus many of our efforts.

It should support formal learning, including training, reinforcement, and process-based learning paths.

It should support informal learning, with communities, informal learning activity calendars, and social media.

It should support performance on the job with knowledge bases.

It should support a full range of learning activities, including courses, coaching, communities, knowledge sharing, performance support, and action learning.

IT owning knowledge management is like the stadium builder coaching the basketball team.

I was working with a disability and life insurance company a few years ago, and was talking to their director of knowledge management. She was livid. “You’re not going to believe what the IT director just told me,” she said.

“What’s that?” I asked.

“He came into my office with a big smile on his face. Then he said, ‘We did something for you last week. We heard that there were complaints that the knowledge base search function wasn’t really useable, so we fixed it! It used to be if you searched for life plus VT plus individual you would get 700 hits. Now you get over 2,000!’”

“I can’t believe it,” she added. “The reason we got complaints was the wild number of false positives. Who can work with 700 hits? Now he tripled it? And, of course, being IT they didn’t ask us what the problem was – they simply assumed in their infinite wisdom that they knew better than anyone else and went ahead and fixed it.”
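The IT director’s “fix” is a recognizable one: broadening search from all-terms matching (AND) to any-term matching (OR) multiplies hits rather than reducing false positives. A minimal sketch in Python (the documents and function names are hypothetical, not the insurer’s actual system) shows why the hit count balloons:

```python
# Each document is modeled as a set of index terms.
DOCS = [
    {"life", "vt", "individual"},
    {"life", "vt", "group"},
    {"life", "individual"},
    {"disability", "vt"},
]

def search_all(query, docs):
    """Narrow search: a document must contain every query term (AND)."""
    terms = set(query.split())
    return [d for d in docs if terms <= d]

def search_any(query, docs):
    """Broad search: a document sharing any single term counts as a hit (OR)."""
    terms = set(query.split())
    return [d for d in docs if terms & d]

# AND-matching returns only the one truly relevant document;
# OR-matching returns every document that shares even one term.
print(len(search_all("life vt individual", DOCS)))  # 1
print(len(search_any("life vt individual", DOCS)))  # 4
```

Loosening the match raises recall but destroys precision, which was exactly the users’ complaint in the first place.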

Imagine that a builder gets the contract to build your city’s new stadium and does a fantastic job! And your city council immediately turns to him and says, “You know, you built us one fine stadium. The basketball facilities inside it are great. In fact, they are so good, we’d like you to coach the basketball team.”

That would seem pretty silly, wouldn’t it? Yet this exact thing happens in knowledge management with distressing frequency. If you believe, as I do, that knowledge management and learning are integrally connected, then the people who organize the organization’s knowledge, provide it to workers in the form of performance support, explicate tacit knowledge, and facilitate knowledge sharing need to be the same folks who are accountable for employees learning the skills they need to do their jobs.

This is why, in several organizations with which we work, L&D is called the Knowledge and Learning department.

Not every performance problem is a training problem. We need to know which is which, and stick to the problems we can solve. Throwing training at every performance problem doesn’t work, and guess which department gets blamed when it doesn’t?

Some years back, I was called by a person who introduced herself as the director of a large botanical park. “We need motivational training,” she said.

“That’s great, I’m in the training business,” said I. “How many people do you need trained?”

“200,” she said.

“In what size groups?” I asked.

“What do you mean?” she replied. “All 200.”

“Oh,” said I, not quite understanding. “And how long do you have for the training?”

“An hour,” she said.

“I see,” I said. “So what you really want is a presentation.”

“No,” she said. “I need them trained.”

“OK,” said I. “Help me understand your problem a little better. Why do they need to be trained?”

“They aren’t motivated.”

“Is that something new? When did this start?”

“Three weeks ago,” she said. “When they were told that everyone would be fired next month.”

Obviously, this person did not have a training problem.

If someone held a gun to their head, the employees could have done their job. They had a motivational problem, one that would not be solved by training.

In their seminal 1970 book Analyzing Performance Problems; or “You Really Oughta Wanna”, Robert Mager and Peter Pipe suggested that “When faced with a discrepancy between the actual and the desired performance of a student, employee, or acquaintance, the usual course of action is to ‘train, transfer, or terminate’ the individual.”

As learning professionals, we need to determine whether performance problems are learning problems, or motivational or structural problems. This is a primary function of the analysis step in ADDIE: to analyze the source of the performance discrepancy. If people don’t have the skills, it’s a training problem. If they have them but don’t want to use them, it’s a motivational problem. And if they have the skills and want to do the job but have insufficient authority, resources, or time, it’s a structural problem.

In the situation I mentioned, one solution would have been to take every Friday and help the employees write resumes, build interviewing skills, and use online resources to find jobs, in exchange for good-hearted effort the other four days. Insofar as part of our charter in L&D is to be performance consultants, this would be a great approach. Other performance problems may be amenable to recognition, incentives, resourcing, or other management interventions that are simply not in the purview of L&D.

But the bottom line is that we can’t throw training at every performance problem. It simply won’t work.

Rapid eLearning is basically fancy content presentation. And if content equaled learning, universities could be replaced by libraries.

A few years ago, I was working with a company that needed to revamp their project management training. The old training department had very little power, and was basically tasked with feeding whatever slide decks subject matter experts (SMEs) gave them through Articulate, exporting the result into SCORM, and calling it an eLearning module. The result was 160 slides, each having an average of 50 words. Unsurprisingly, few people learned effectively from this monstrosity.

Lessons Learned

The lesson that it should not surprise us to learn is that SMEs are not instructional designers. There’s a big part of me that is surprised that this even needs to be said. However, there is a popular notion in the learning world that if you give a SME an authoring tool that is simple to use, useful learning objects will be pooted out the back end. We call this rapid eLearning, and honestly, all too often I think that what is pooted out the back end is what one would expect is usually pooted out the back end.

How does the instructional designer add value?

The instructional designer (ID) is charged with designing learning activities that will teach a person with certain defined entry-level abilities to manage a project. Thus, the ID needs to:

Identify the knowledge and skills required to do the job. (This is almost always a small subset of the knowledge in the noodle of the SMEs.)

Express these as learning objectives that distinguish what type of behavior is required, e.g., create a project’s work breakdown structure v. analyze project risks v. use provided reference materials to find the answers to obscure questions that come up from time to time.

Identify which elements need to be taught during the course, are assumed as prerequisites, need to be available as references, or are skills developed during post-training reinforcement.

Determine the best sequence for teaching the knowledge and skills.

Decide on the best modalities for instruction.

Determine the most appropriate way to assess how well learners can apply the skills on the job.

Call me old fashioned, but I think these things are important if we want people to actually be able to take new concepts and skills and apply them on the job.

Two very intelligent colleagues of mine, Christy Keener and Tom Hilgart, introduced me to this simple yet extremely useful model for thinking about what it takes to enable a person to be ready for their job.

Awareness is “knowing about.” It’s the ability to define terms, know where resources are located, explain a business process or guideline. For example, employees need to understand the policies related to personal time off; failing that, they need to know where to obtain this information. Awareness is often associated with some of the lower order verbs in Bloom’s taxonomy: the ability to state how many days off I will accrue this year, and to list the times when PTO is not required (for instance, bereavement leave). For some things, awareness suffices.

For other things, skillfulness is needed – the ability to apply defined processes and procedures in standard situations. This is often true when a person is supposed to refer complex situations to someone else. For instance, a Level I Customer Service Rep (CSR) should be able to quickly, confidently, and accurately use job aids and past knowledge to answer a defined set of questions related to the product she is supporting. In addition, she should recognize calls that she is not qualified to answer, and be able to escalate those calls to a Level II CSR.

For our purposes, we can think of proficiency as the ability to do a complex task independently in novel situations. Another way of thinking about it is that proficiency comes when we shift from asking for assistance to providing it to others. It is when a person is proficient at the various tasks comprising her job that readiness has been achieved.

So if we are in the readiness business (https://goo.gl/BSz2rY), we really need to start by understanding what readiness entails for each job that our audience does. We need to base our planning for becoming a learning organization on an understanding of what skills and knowledge are optional, and which ones are vital. We need to understand which areas a given person needs passing familiarity with, and where proficiency is required.

Achieving proficiency takes time and effort – on the part of the organization and on the part of the learner. Make no mistake, for those critical skills, it won’t just be about training. It will be a continuous learning process that may involve formal and informal learning, social learning, performance support, and coaching.

If you agree that the fundamental purpose of the learning organization is to promote readiness, then the primary goal should be to develop speed to proficiency in our interventions, recognizing that proficiency may take weeks or months to achieve.

Abstracted from a forthcoming book on learning effectiveness (c) Bill Bruck, Ph.D., 2015

We need to develop learning objectives that relate to the type of action learners need to perform.

Learning Objectives

They say that good results without good planning come from good luck, not good management. I would suggest further that without good performance objectives, managers don’t even know when their reports have achieved the results they need.

Similarly, as we prepare people to be ready to do their jobs, we might say that good results without good learning objectives come from good luck, not good planning. I would also suggest that without good objectives, it is almost impossible to assess whether we are doing the job we told our customers we would do.

Of course, there is a discipline to creating SMART learning objectives. We have taxonomies to help us create good terminal objectives. Bloom’s and Gagne’s are two that have stood the test of time.

Work-related behaviors

But as we design our learning interventions, I think this misses an important point.

We need to categorize learning objectives by action type. At the end of the day, what type of action is required on the job? Do we want learners to perform some physical action? To speak about something effectively, or to listen actively and accurately? Or do we want them to be able to write something?

For instance, claim adjusters must provide written justification for their decisions that is (a) complete, (b) behavioral, (c) logical, and (d) consonant with contractual obligations and organizational guidelines. All that is great, but there’s one word that’s at the heart of it – “written.” They must be able to write.

And as someone who taught college for 20 years, the bottom line is that the only way to teach someone to write is to have them write, give them feedback and have them correct their work, and repeat the process.

While this seems simple, this fundamental approach of defining the type of behavior and then ensuring your training or other learning intervention produces that behavior is violated right and left!

We provide eLearning courses that tell people how to write, then test them on whether they remember the rules we told them. This doesn’t produce a writer, but a person who is aware of writing rules.

We give them writing samples and ask them to identify the writing errors. This doesn’t produce a writer, but a copy editor.

This suggests that the learning intervention must include producing and correcting writing samples, or some equally focused intervention.

Similarly, we might ask:

What type of learning intervention is required if we want learners to be able to accurately and empathically listen?

What type of learning intervention should we use to help learners to verbally present information accurately and persuasively?

What do we need to do to facilitate learners demonstrating a computer skill or a physical action?

It might be an interesting exercise to look at the various courses and other activities available to learners, and audit them with the question: Does what the learner does and learns in this course connect with the behavior they need to perform on the job?

Remember: Without objectives we don’t know what we’re teaching. Without action-related verbs, we can’t connect what we’re teaching to what they need to do. It’s that simple.

Abstracted from a forthcoming book on learning effectiveness (c) Bill Bruck, Ph.D., 2015

“Let’s create compelling content and hope learners are attracted enough to take it.” News flash: Hope is not a method.

In hierarchically organized companies, agencies, and not-for-profits, managers are responsible for achieving results. And, by definition, they are responsible for achieving them through the efforts of individual contributors. In my mind, this gives the manager the right and the responsibility to ensure that the individual contributor is prepared to make that contribution.

One of our customers uses a proprietary four-step methodology for planning multi-million dollar yearly sales meetings when major accounts come up for renewal. The VP Sales is responsible for ensuring that a certain percentage of these accounts renew, and that a certain level of revenue is reached. His bonus, and in fact his job, depends on this. In my mind, this gives him the right and responsibility to determine whether account managers will follow blindly, use judiciously, or feel free to ignore this methodology. By extension, it also gives him the right and responsibility to require account managers to demonstrate their competency with this methodology.

Let’s say that, for example, the learning organization (a) develops a blended learning course designed to teach this methodology, (b) provides a learner-centered set of resources for account managers to use on their own, or (c) creates a coaching toolkit that managers can use to teach the methodology to their direct reports. I would suggest that, once again, the VP Sales has the right and possibly the responsibility to mandate that account managers successfully complete the learning program.

The difference I have with many industry luminaries is that the ideas of hierarchy, “command and control,” and mandated learning often seem to be anathema to them. Instead, there’s a lot of discussion of Enterprise 2.0, a wonderful world where there will be no hierarchy, but rather everyone will be self-directed, lifelong learners who love learning and (presumably) their work. As a former academic and ultra-humanistic psychologist I can appreciate this sentiment, but as a small business owner and consultant to large businesses, I no longer see this point of view as useful. In fact, here’s another news flash: Luminaries by and large presumably love their work. The average worker bee may not.

The difference I see in usual and customary practice is that among my corporate customers, (a) most learning interventions are training events, and (b) a relatively small share of these are mandated.

It’s time to “man up” – or perhaps “manager up.” There’s nothing wrong with requiring training. In fact, in the claims training described on page 12, in the middle of a very busy season, claim adjusters had to respond to two training “files,” and redo their work if their justifications were not behavioral. They hated it. The 12 senior adjusters who had to review two files each from 600 adjusters detested it. The only person who loved it was the CFO, who could report a $300M savings.

We must stop measuring learning for one simple reason. The only two people who care about your learning metrics are you and your mom. And honestly, she doesn’t care. She’s just pretending to be interested.

An executive team meeting at ABC Inc.

Imagine being a fly on the wall at the end-of-year wrap up meeting of the executive committee at ABC Inc. The CEO says, “This is the time. This is the place. We need to know where we stand so we can plan our strategy for next year. I need you all to report out for your operational function.”

The VP Marketing goes first. “We increased our budget and we have purchased space in 50% more trade journals this year! In addition, we upped our Google Ad budget by 200% and have placed ads throughout the Internet.”

The VP Sales is next. “I’m happy to report to you all that an audit of our sales calls shows that 93% of sales calls used the SPIN selling model. Not only that, but our inside sales representatives handled, on average, 12% more calls than they did last year!”

The CFO then reported that accountants’ sick leave was down 15% resulting in a departmental cost savings of 5%, and that the new accounting system was in place and being used very effectively.

It would then come as no surprise that the CLO reported that 100% of employees took all required compliance courses; that there was an average 90% satisfaction with training, and that of the 600 courses in the catalog, 75% had been utilized by 15% or more staff, who viewed them for an average of 15 minutes each.

It should also come as no surprise that either all these executives were fired, or the company went out of business, because no one had their eye on the business results.

Of course, activity-based reporting by a CFO or VPs of sales and marketing such as that outlined above is ludicrous and doesn’t really happen in well-functioning businesses. The CEO wants to know about sales figures, not how many people use the SPIN selling model.

Unfortunately, while activity-based reporting by the CLO is equally ludicrous, it’s all too often the type of reporting that the metrics generated by the learning organization support. The fundamental failure to talk the language of business is why – as I’ve argued above – training is all too often the first to go.

We must do better. We can do better. We simply have to put our minds to it.

Kirkpatrick revisited

It’s somewhat fashionable to pooh-pooh Donald Kirkpatrick’s four levels of learning evaluation. Okay, okay, I guess that there are newer, shinier models of learning assessment. But Kirkpatrick’s is an extremely workable model.

For those of you who are not familiar with it, Kirkpatrick suggests there is a hierarchy of evaluation methods.

Level 1 evaluations ask for learners’ reactions to a training course. Did they like it? This is the familiar “smile sheet” we often get after a training course.

Level 2 assesses whether, in fact, the person learned something. Did they retain the knowledge? This is often assessed using a post test to see how much of the content was retained.

Level 3 looks at whether they can do their job better as a result of training. Did they transfer the learning from the classroom into the work site?

Level 4 asks a different question: even if they can do their jobs better, does that lead to a positive result for the business?

We won the battle but lost the war

Many learning professionals argue that Level 4 (or even a fifth level, ROI) is the ultimate goal for learning. I disagree. My point of view about this came from an experience I had a number of years ago.

We were training insurance underwriters, who were, in effect, on the income-producing side of the business. We had had a real success – by all measures, the underwriters were able to underwrite more business with less effort than they had in the past. When we did a debrief with the VP of underwriter training, however, he was not happy. Apparently profitability was down. The folks who wrote the underwriting policies and guidelines had made them too “soft,” and loans were being approved that had a high default rate.

We trained the underwriters to use the guidelines they were given extremely effectively, and the more effectively they used the faulty guidelines, the worse it was for the business.

No one blamed us – thank God – but at the end of the day, profits were down, headcount had to be cut, and management was unhappy.

Level 3: The gold standard

What I took away from this, and what I continue to believe, is that for learning professionals, Level 3 is the gold standard. We “contract” with managers to help people do their jobs better, according to the policies, procedures, and guidelines they are given. We do not contract to produce business results. It’s a fine distinction, but an important one: there are simply too many other variables related to the market, the product, customer service, etc.

That’s why I believe that our job is to demonstrate a Level 3 result – can they apply their skills and knowledge to do their jobs better? This is where the sweet spot for learning professionals is. This is what – if we are thinking about results – we promise our customers. This manager will be able to give more effective presentations. That project manager will be able to manage risks and issues effectively. This underwriter will be able to use the guidelines she’s given and underwrite business effectively.

We must measure productivity

At the meeting of ABC Inc.’s executive board, the reports should look like this:

The VP Marketing says that market research shows that brand awareness is up 35% in the critical 18-25 male market.

The VP Sales says that sales are up 10%.

The CFO lets folks know that the company made a profit.

The CLO lets everyone know that the VP Sales indicated that 100% of sales trainees were able to handle simple to moderately complex accounts without supervision, and the executive team has endorsed the readiness of five high potential candidates to move into AVP slots should the need arise.

The CLO is now talking the language of business, and reporting results that everyone – not just his mother – is interested in.

Abstracted from a forthcoming book on learning effectiveness (c) Bill Bruck, Ph.D., 2015