Meta

There are a lot of jobs where people need to write accurately, behaviorally, concisely, and completely. As a professor, I was responsible for ensuring that psychology majors acquired writing skills. I did so by having them write short papers. I would review them with specific feedback, and require that the papers be rewritten. As an author, my copy editors would do the same for me. This isn’t rocket science. The way to teach writing is to have learners write.

Unfortunately, this is not possible in an eLearning course. It’s almost impossible in a virtual instructor-led course, and it’s difficult at best in a face-to-face instructor-led course that doesn’t meet repeatedly over time.

If, however, we are muddy enough with our learning objectives, we can pretend that we are teaching people to write when actually we are familiarizing them with the rules of good writing or teaching them to recognize problems in writing samples. Neither of these teaches people to write.

Within the context of organizational learning, this is a problem. In my view, it’s a big problem.

We can’t keep pretending that if we throw out enough YouTube content, MOOCs, and narrated PowerPoint, supplemented with content library and VILT courses, we are preparing people to be ready to do their jobs.

Teaching writing involves having people write. This can be done with a blended learning program or learning path that includes activities over time, but I’m not sure how else one can do it.

Bolting on social media or content management to an LMS is vastly different than integrating these features.

If an LMS says it supports informal learning, social learning, content or knowledge management, or performance support, these features need to be integrated, not bolted on. Why – and how can you tell the difference?

An integrated feature has the same user experience. The screen has the same standard features in the same standard places, enabling the user to use the feature without learning a new interface.

When a feature is integrated, it works across the platform. There is one help system that addresses all features, and the search function looks through everything on the system: courses, discussions, and content.

Features interact with one another. Courses may have attached forums. Content can be attached to communities or courses. Users can start discussions with other users about how to best use information found in a performance support or knowledge base document.

There is a single point of user management. When adding a new user to a group, or identifying a user’s role, the user’s permissions to communities and content should all be granted in the same way that their permissions to courses are.

There is a single point of reporting. Completion of courses, participation in communities, and resource utilization should all be tracked and displayed on integrated dashboards.

If an LMS simply bolts these features on with simple APIs, the user experience is disjointed, and the administrator’s experience can resemble Dante’s seventh circle of hell. The end result is that the features go unused.
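The single-point-of-user-management criterion can be sketched in code. The model below is purely illustrative (the class, resource types, and role names are my assumptions, not any vendor’s API): the point is that one grant operation covers courses, communities, and content alike, rather than three bolted-on permission systems.

```python
# Illustrative sketch of a single point of user management:
# one grant call covers courses, communities, and content alike.
# All names here are hypothetical, not taken from any real LMS API.

RESOURCE_TYPES = {"course", "community", "content"}

class AccessControl:
    def __init__(self):
        # (user, resource_type, resource_id) -> role
        self._grants = {}

    def grant(self, user, resource_type, resource_id, role):
        if resource_type not in RESOURCE_TYPES:
            raise ValueError(f"unknown resource type: {resource_type}")
        self._grants[(user, resource_type, resource_id)] = role

    def role_for(self, user, resource_type, resource_id):
        return self._grants.get((user, resource_type, resource_id))

acl = AccessControl()
# The same call grants access to a course, a community, and a document.
acl.grant("pat", "course", "writing-101", "learner")
acl.grant("pat", "community", "writers-circle", "member")
acl.grant("pat", "content", "style-guide.pdf", "reader")

print(acl.role_for("pat", "community", "writers-circle"))  # member
```

In a bolted-on system, each of those three grants would go through a different interface with different semantics; the disjointed administrator experience follows directly from that.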

(If you’re interested in more information on this, sign up for the free webinar: Beyond the LMS: 21st Century Learning Systems, October 6 at 2:00 eastern.)

We need to match the medium to the massage, to paraphrase Marshall McLuhan. So when is instructor-led training appropriate, versus using eLearning modules, informal learning, or simple performance support tools?

When content is in development. If you must teach users how to use a new computer system that is still in development, there’s nothing worse than developing the same eLearning module two or three times.

When developing the skill requires it. If you are teaching a person to give an effective presentation, and part of the skill is their body language and voice tone, it’s pretty hard to have them practice the skill and get your feedback if you aren’t face to face.

When small group interaction is important. When teaching a mid-level leadership class (perhaps on the best way to create a work breakdown structure for project management), it’s often useful to have a case study, or other small group activities that allow learners to learn from other learners. Unless your LMS and corporate culture allow blended social learning activities together with content, you may want to facilitate this with ILT (instructor-led training).

When expert Q&A is important. Some subjects, like working with troubled employees, have lots of tricky situations where there may be alternative right answers, or least bad solutions. In law school, the Socratic case study method is often employed. In cases where much of the learning comes from Q&A with the instructor, it’s handy to have – well – a live instructor.

When building relationships is a goal. In cross-functional high-performing leadership training, one goal is sometimes for future leaders in different disciplines to get to know each other and build relationships that will help them partner in the future. Again, in such situations, it’s handy if they can have lunch together.

This is not to say that many (or perhaps even most) face-to-face and web-based classes can’t be converted to eLearning, or even done away with altogether (in favor of performance support tools and access to colleagues who can show people best practices on the job). But we cannot simply throw away instructional and social interaction in a rush to take all our training online.

There is a distressing trend to leave social learning out of training – especially when that training is delivered online.

Sometimes we pretend to include social learning in eLearning with “ask the coach” buttons. Personally, I find these worse than useless. They take a half dozen softball questions and have a learner-avatar ask them of a coach-avatar, when the answers could simply be embedded in the training itself. That’s not Q&A. A question is a learner asking for a concept they didn’t understand to be explained using different words or examples. A question is a learner asking how a concept applies in their ‘unique’ situation. An answer should be a live human being providing an adaptive response to the question.

If we go back to how the best face-to-face training programs are conducted, it’s not 100% lecture. In fact, I’d suggest it isn’t 50% lecture. Social activities include breakouts, Q&As, case studies, and all the “break times” where learners chat with each other about their work.

If we want to provide true instruction rather than simple content presentation – the poorest form of “training” – we must use a blended approach and use synchronous or asynchronous technologies such as web meetings, discussion forums, or even the venerable conference call to allow learners to learn from the instructor and each other.

I don’t know about you, but I used to learn 90% of what I retained from the social activities in training, not the lectures. Let’s not throw out the social baby with the bathwater as we move classes to eLearning modules.

We know smile sheets aren’t terribly useful, and we need to focus evaluation efforts on whether learners can do their jobs better after training. Here’s a simple way to do it.

For those of you who are not familiar with it, Donald Kirkpatrick suggests there is a hierarchy of evaluation methods.

Level 1 evaluations ask for learners’ reactions to a training course. Did they like it? This is the familiar “smile sheet” we often get after a training course.

Level 2 assesses whether, in fact, the person learned something. Did they retain the knowledge? This is often assessed using a post-test to see how much of the content was retained.

Level 3 looks at whether they can do their job better as a result of training. Did they transfer the learning from the classroom into the work site?

I believe that Level 3 is the gold standard. We “contract” with managers to help people do their jobs better, according to the policies, procedures, and guidelines they are given.

In designing scores of blended learning programs in a number of industries, I’ve found a very simple approach to doing Level 3 evaluations: Ask the managers.

It’s not the most sophisticated, but it’s easy and it works. Thirty days after completing a program of study, send the learners’ managers a survey asking if the learner is meeting expectations for putting skills into practice on the job. A more complex version of this asks the manager to rate specific skills; a less complex version simply asks the manager to state in their own words any areas where the learner is falling short.
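The tally behind this approach really is that simple. As a sketch (the field names and response format below are my assumptions, not a prescribed survey design), the manager responses reduce to a single transfer rate per program:

```python
# Hypothetical tally of a 30-day manager survey for Level 3 evaluation.
# Each response records whether the manager says the learner is meeting
# expectations for putting the trained skills into practice on the job.

responses = [
    {"learner": "A", "meets_expectations": True},
    {"learner": "B", "meets_expectations": True},
    {"learner": "C", "meets_expectations": False},
    {"learner": "D", "meets_expectations": True},
]

def transfer_rate(responses):
    """Share of learners whose managers report on-the-job skill transfer."""
    if not responses:
        return 0.0
    meeting = sum(1 for r in responses if r["meets_expectations"])
    return meeting / len(responses)

print(f"{transfer_rate(responses):.0%}")  # 75%
```

The more complex variant (rating specific skills) just replaces the boolean with a per-skill score, but the reporting idea is the same: one number a sponsor can act on.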

What we have found in practice is that when this type of information is provided to initiative owners and executive sponsors, they are satisfied that L&D has met the “contract terms” of ensuring that employees are ready to do their jobs.

The vast majority of learners are not knowledge workers. The notion that they are is a convenient fiction that leads to wrong-headed approaches to training and learning. They might better be termed “information workers.” When we lose sight of this difference, our approach to learning and training gets skewed.

For our purposes in creating effective organizational learning programs, the distinction between the nature of work in what I’m calling information workers and knowledge workers is key. (This distinction relies on the DIKW hierarchy: data > information > knowledge > wisdom.)

I would suggest that the nature of work needs to guide the nature of learning – that more structured and formal approaches to learning will be better suited to enabling information workers to be proficient at their jobs.

People like to learn from people. We’ve been doing it that way for 20,000 years or so, and we’ve sort of gotten used to it. You might even say our brains are wired for it. Paul Zak, in the Harvard Business Review (October 28, 2014), points out that well-constructed narratives can be compelling, memorable, and motivating. At a deeper level, however, his research suggests that in positive social interactions, our brains produce a neurochemical called oxytocin. He says, “oxytocin [produces] a key ‘it’s safe to approach others’ signal in the brain.”

Other research, done by the National Training Laboratories and NetG, bears this out. The amount of material retained and ready for use is dismal when the learner is passive (lecture, reading) or interacting with the software (CD-ROM). The three tallest bars on their well-known retention chart come from discussions, experiential learning, and teaching others – two of which are clearly social learning, and the third – experiential learning – often is as well.

I’m pleased to announce that my new book, Speed to Proficiency: Creating a Sustainable Competitive Advantage is now available on Amazon in paperback and Kindle.

Learn how to change from providing “so we did it” training to creating learning initiatives that produce capability change. Everything is covered: Aligning initiatives with the business, understanding the roles and responsibilities of stakeholders, weaving together training with reinforcement and coaching, integrating informal learning and performance support, and selecting the right learning technologies.

Proficiency isn’t attained in a class; it takes a systematic combination of training, reinforcement, and informal learning.

Being in the readiness business is really being in the business of building capability: helping learners move from novice to expert in as short a time as possible – what we call speed to proficiency.

To do so, we need to use a full range of learning interventions, including training, informal learning, performance support, coaching, and mentoring.

As learning professionals, we need to have a thinking framework that helps us understand when and where to use each of these interventions, and how to best weave them together in a systematic way to produce speed to proficiency.

In this regard, I’d like to share the continuous learning model that we at Q2 Learning have used for over 10 years with Fortune 500 customers.

We see three phases of learning, in the order in which we most often use them.

Training includes event-based formal instruction, such as face-to-face and online classes, self-paced eLearning, MOOCs, and other event-based instruction. Training is great for building awareness and a certain level of skillfulness – the ability to apply defined processes and procedures in standard situations.

Reinforcement includes planned post-training activities such as graduated assignments, coaching, mentoring, and other forms of on-the-job training. Reinforcement builds on the gains made from training. In our work with customers, we find that the key to achieving proficiency in critical job skills is a reinforcement cycle. That’s something that happens on the job, not in the classroom.

Informal learning includes learner-initiated “over the cubicle” knowledge sharing, communities of practice, experiential learning, and gaining skills and knowledge from performance support systems and other reference materials. Our customers have leveraged informal learning to maintain and enhance skills over time.

Build skillfulness with training, build proficiency through reinforcement, and maintain and improve skills through informal learning – and notice that it’s the reinforcement and informal learning that drive learners to proficiency.

Learning Technology or Training Technology

Most learning management systems (LMS’s) are great at helping instructors replicate the worst practices of education electronically: lecture and multiple choice tests. When LMS’s go beyond this, there are still factors that cause us to think within a very short and narrow box called “learning equals content-based courses.” In other words, LMS’s support training (narrowly defined), not learning more broadly defined. Why?

Focus on content. While the Experience API holds the promise of getting us out of the SCORM trap someday, the vast majority of courses contained within today’s LMS’s are SCORM 1.2 or SCORM 2004. These standards keep learning professionals busy creating content objects, not learning objects.
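The contrast is visible in the data itself. An Experience API (xAPI) statement records an experience rather than a content object: at minimum an actor, a verb, and an object. A minimal sketch (the name, e-mail, and activity ID below are placeholders; the verb URI is from the ADL vocabulary published with the xAPI specification):

```python
import json

# Minimal xAPI statement: actor / verb / object.
# The name, e-mail, and activity ID are placeholders; the verb URI is
# from the ADL vocabulary published with the Experience API spec.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Pat Learner",
        "mbox": "mailto:pat@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/coaching-session-101",
        "definition": {"name": {"en-US": "Coaching session"}},
    },
}

print(json.dumps(statement, indent=2))
```

Because a statement can describe any experience (a coaching conversation, a forum post, an on-the-job observation), xAPI can track learning that never lived inside a course at all – exactly what SCORM cannot do.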

Focus on events. While the primary object created by authoring tools is a SCO, the primary object maintained in the LMS is a course. Courses are almost always time-bound training events related to the mastery of content. However, most on-the-job proficiency is not created in an event, whether it’s a 30-minute eLearning module or a one-week face-to-face sales training class.

The sage on the stage. When you think about it, the courses in the LMS are all about the sage on the stage. The sage creates the eLearning modules. The sage teaches the classroom-based classes listed in the LMS. In web meetings, there is a presenter (sage) and audience.

Assessing the unimportant. The bad news is that LMS’s make it easy to assess that which is pretty much trivial and unimportant – i.e. Level 2 evaluations using “objective tests,” where there’s a right and wrong answer. Critical thinking? Complex decision making? Ability to write effectively? Ignored.

MIA: Informal learning. If you’re lucky, your LMS will have a rudimentary comment system or bolted-on discussion forums. However, software that supports informal learning and is integrated with other learning activities at the user-experience and administration level is simply missing in action. That’s 75% of the learning that our technology doesn’t really address very well.

MIA: Coaching and mentoring. I suspect that most learning professionals would agree that we are not done when the class ends; we should also be in the business of supporting the reinforcement of training on the job. Most learning technology simply doesn’t do this. And that’s a shame.

MIA: Knowledge management. Over the past several years, there have been many articles written on the convergence of learning and knowledge management (KM). I’m a great believer in this. It seems that if we are in the business of ensuring that people are ready to do their jobs, our learning systems should also be knowledge management systems.

I believe that LMS’s need to support the learning process, not simply eLearning and classes. Unfortunately, most LMS’s started life as content management systems. Content is in their bones and in their DNA. Later additions – such as those to support social learning – often feel bolted on and not an integral part of the system. As we start thinking about effective approaches to learning, we also need to start thinking about the functional requirements of learning technologies that can support them.