
This Week’s L&D Buzz… Pilot Testing Learning Content

Note: not every chunk of content needs to be pilot tested.

As always – it depends.

It Depends On…

The Rewards at stake … and … the Risks at stake.

And – as one chunk of instruction may be part of a greater whole – spanning the spectrum from most formal to most informal – the Pilot Test data need to be examined within that larger context, and against the data representing it. The ultimate Performance Competence goal may be larger than this one chunk, or even a string of chunks, of Instruction/Learning.

Anyway … My general ADDIE-like model:

Your model may differ.

Note my Phase 5 – above – and the key outputs.

Pilot Testing

One phase of my approach to ISD is the Pilot Test Phase.

Instructors, facilitators, and administrators prepare materials and themselves to conduct the pilot test. My version of ISD via an ADDIE-like process is shown in the above diagram.

Pilot-Testing is conducted in Phase 5 in both Modular Curriculum Development and Instructional Activity Development, levels 2 and 3 in my 3-level approach to ISD.

As part of the Pilot-Testing preparation, the learning experience is described to managers of pilot participants—especially if they have a role to play before, during, and/or afterwards.

The managers are provided information and/or instruction for any activities that must be completed to ensure successful transfer of knowledge and skills to the actual job, thus ensuring PERFORMANCE COMPETENCE.

Before the Pilot-Test Phase Back in Development/Acquisition

Alpha and Beta Testing – Most products, training or not, are tested during their development.

The first round of formal testing is generally called alpha testing. The second round of organized testing, on the more finished product, is called beta testing. Beta testing is what ISD professionals sometimes call Pilot Testing. But – as there is little consistency in how ISD is conducted in actual practice across enterprises – check local practice, and whether it is formal or informal, before deciding what to adopt or adapt.

I feel that training developers should also consider whether to perform some Alpha Testing – internal and informal developmental tests, or more formal alpha tests – during this phase, as they see fit. See Risk & Reward above.

For example, it’s usually worthwhile to try out exercises to ensure that instructions are complete, that learners have enough information to answer questions, that exercises are neither too difficult nor too simple, and so forth.

However, some of the time the structure of the content—and the way it’s expressed—is rather arbitrary; one approach will work just as well as another.

Be aware that if you ask for opinions on content and expression during a developmental test, you will surely get those opinions, along with the consequent rework (and potential schedule slippage).

Unless you feel there are substantive issues on which you would like interim feedback, it may be better to let the pilot test in Phase 5 give you the feedback you want and need.

I suggest that, back in Phase 4, you subscribe to the realistic notion that during the Pilot-Test you will deploy imperfection and then continuously improve – rather than deferring deployment in pursuit of perfection.

That continuous improvement is what Phase 5 is all about.

But still, don’t use Pilot-Testing to deploy junk (a technical term).

I always expected my developers to produce Pilot-Test ready instruction that should require less than a 10% fix post-Pilot-Test in my Phase 6.
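As a back-of-the-envelope illustration of that 10% rule of thumb – a sketch of my own, not the author's tooling, with hypothetical counts and function names – the post-pilot fix rate can be tallied as the share of instructional chunks flagged for revision:

```python
# Hypothetical tally of the "less than 10% fix" rule of thumb:
# count the instructional chunks flagged for post-pilot revision
# and compare that share against the total delivered in the pilot.

def fix_percentage(flagged_chunks: int, total_chunks: int) -> float:
    """Share (in percent) of content flagged for post-pilot revision."""
    if total_chunks <= 0:
        raise ValueError("total_chunks must be positive")
    return 100.0 * flagged_chunks / total_chunks

def pilot_ready(flagged_chunks: int, total_chunks: int,
                threshold: float = 10.0) -> bool:
    """True if the pilot build met the sub-threshold revision target."""
    return fix_percentage(flagged_chunks, total_chunks) < threshold

if __name__ == "__main__":
    # 4 of 60 lesson chunks flagged for rework -> about 6.7%, within target
    print(pilot_ready(4, 60))
    # 9 of 60 flagged -> 15%, misses the target
    print(pilot_ready(9, 60))
```

The point is less the arithmetic than the discipline: agree on the threshold before the pilot, so "Pilot-Test ready" is a measurable claim rather than a feeling.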

Overview of Pilot Testing

Description – In Phase 5 of Modular Curriculum Development, the Learning/Training content is delivered during a pilot test, and extensive evaluations are conducted – as warranted.

Key Activities/Tasks – Project activities in this phase include preparing for the pilot delivery (conducting train-the-trainer sessions, as appropriate); conducting the pilot test; evaluating the results of the pilot test; documenting the evaluations; and developing revision recommendations for the Project Steering Team.

The Project Steering Team turns the recommendations into revision specifications.

Key Outputs – The outputs of this phase include

► The Pilot-Test Report

► A Project Steering Team Gate Review Meeting presentation

Tasks for MCD Phase 5 – Pilot Test

The tasks of Phase 5 for Modular Curriculum Development are organized into five subphases.

MCD Subphase 5.1 – Prepilot

In this subphase, the project team prepares for the pilot test. Preparations include coordinating logistics, producing materials, coordinating the personnel required for the pilot, setting up the pilot-test location, and doing final readiness checks.

MCD Subphase 5.2 – Pilot Deployment

During Subphase 5.2, pilot testing is conducted in circumstances that replicate how the T&D will be deployed once it’s ready for general release to the marketplace. Basically, the project team conducts the pilot test, coordinates the resolution of issues that arise, and conducts evaluations on the T&D being pilot-tested. Tasks in this subphase will vary depending on the chosen deployment platform and media and need to be adjusted accordingly by the project manager.

MCD Subphase 5.3 – Postpilot Revision Recommendations

From this subphase comes a draft of the revision recommendations of the project’s ISD professionals, based on a thorough review of the data collected during the pilot test. These revision recommendations are later reviewed and processed by the Project Steering Team.

MCD Subphase 5.4 – Pilot Phase Gate Review

In this subphase, the last formal meeting is held with the Project Steering Team for the Modular Curriculum Development effort. The Project Steering Team receives an overview of the phase along with the ISD Team’s revision recommendations.

The Project Steering Team’s decisions and reactions to the recommendations evolve into a set of revision specifications for use in MCD’s sixth phase, Revision & Release.

MCD Subphase 5.5 – Post-Gate Review

In this subphase, the project manager distributes the revision specifications and obtains sign-offs on the progress completed during the phase.

The Pilot-Test Deployment Team

The purpose of the Pilot-Test Deployment Team is to conduct a pilot test following the initial development of the T&D. The Pilot-Test Deployment Team includes instructors, facilitators, and/or administrators who conduct the pilot session. The types of roles depend on the type of deployment and media being used.

The Pilot-Test Deployment Team is used in both levels 2 and 3 of my 3-level approach to ISD: level 2 MCD—Modular Curriculum Development/Acquisition, and level 3 IAD—Instructional Activity Development/Acquisition.

ISD Team members coordinate all logistics for facilities, equipment, media, food and beverages, invitations, and confirmations for the attendees. They also deliver the instruction or oversee the instructional delivery/participation for the purposes of pilot testing. Finally, they conduct written and verbal evaluations and debriefings to gather feedback for revision purposes.

Along with the roles of facilitators and instructors, another role is crucial for the conduct of a pilot test: the role of the pilot-test participants. Participants attend and evaluate the initial delivery of the T&D for the purpose of generating evaluations and revision recommendations.

The Project Steering Team considers these evaluations and recommendations. They own all final decisions.

The Pilot-Test Team

Pilot-test participants are ideally handpicked by the Project Steering Team to create a balance between

► Target audience representatives

► Management representatives

Target audience representatives are from the pool of eventual learners who will participate in the T&D after the pilot. They are used to measure the amount of learning that occurs.

Management representatives (a.k.a. management spies) are handpicked by the Project Steering Team to participate in the test. They are used to determine whether the right “learnings” are being taught – and learned to an acceptable level.

Right as in: appropriate and authentic – and not “mickey mouse,” “close but no cigar,” or “sounds like ….”

In combination, the two perspectives give the ISD Team the right data to determine what happened well and what did not. And what to improve.

And whether those improvements indict some earlier phase in the process.

Lessons Learned and then doing something about them.

The mark of a mature L&D outfit IMO.

Conducting the Pilot-Test

During the pilot-test session, written and verbal evaluations are collected and debriefings are conducted, hopefully with minimal impact to the “flow” of the instruction.

After the Pilot-Test session or sessions, the ISDers assess the evaluation feedback and create “revision recommendations” for consideration by the Project Steering Team.

The Project Steering Team in their final “Gate Review Meeting” may accept, modify, or reject the revision recommendations. See the graphic below.

The final results constitute the “revision specifications” used in the final phase of an ADDIE-level ISD project:

Revision & Release.

After Phase 5 the PST is out of the project and the Training/Learning professionals finish up the project … unless members of the PST are involved in the Release (Roll Out) activities. Sometimes the Release is a big deal – other times not.

Evaluations

The types of evaluations I use in Pilot-Testing come from this family of evaluations:

► End-of-Lesson Written Evaluations and Performance Tests … used after every one, two, or three lessons

► End-of-Day Debriefings … done at the end of each day or Pilot-Test period

► End-of-Event Written Evaluations and Performance Tests … used after every T&D Event

► End-of-Event Debriefings … done after every T&D Event

Note that if there are Coaches or Facilitators/Instructors involved – they have their own versions of the above.
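One way to operationalize that family of evaluations – again a hypothetical sketch, with touchpoint and role names of my own choosing – is a simple lookup from pilot-test touchpoint to the instruments due at that point, with coach/instructor variants modeled as role-specific versions:

```python
# Hypothetical mapping from pilot-test touchpoints to the evaluation
# instruments listed above. Coach/facilitator/instructor variants are
# modeled as role-specific versions of the same instruments.

EVALUATION_PLAN = {
    "end_of_lesson": ["Written Evaluation", "Performance Test"],
    "end_of_day":    ["Debriefing"],
    "end_of_event":  ["Written Evaluation", "Performance Test", "Debriefing"],
}

def instruments_due(touchpoint: str, role: str = "participant") -> list:
    """Instruments due at a touchpoint; non-participants get their own versions."""
    base = EVALUATION_PLAN.get(touchpoint, [])
    if role == "participant":
        return base
    return [f"{name} ({role} version)" for name in base]
```

However your delivery platform implements them, having the plan written down before the pilot keeps the data collection consistent across sessions.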

Of course how you implement the above evaluations will depend on the delivery platforms you must use in your Pilot-Test.

The best evaluations are centered on specific performance competence—the ability to perform specific tasks to produce specific outputs to specific stakeholder requirements. That may require more than self-assessment. Or not (for now).

They – the tests of Performance Competence – should be/have “high fidelity and reliability.”

In other words – get real.

Welding isn’t best tested via a paper and pencil test or a multiple choice test on a tablet.

Authenticity is key.

Of the Tasks, the Outputs and the Requirements.

What else could be important?

Typically – that’s not simple for high Risk and high Reward areas of Performance Competence.

Summary & Close

The purpose of the Pilot-Test is to conduct a “full destructive test” of the Learning Content – following the initial development of that T&D content … when that makes sense from a Risk and Reward perspective.

The full destructive test occurs before the Learning Experience/Content is finally updated and then released – “pushed to” and/or made available to be “pulled by” the various target audiences … Push and Pull.

Your models and language may vary.

Of course, ongoing evaluations may occur after every delivery once the content is released – or much less often, immediately or over time – again depending on the Risks and Rewards associated with the Performance Competence being addressed by the Instruction and Information content.

Risks and Rewards should almost always be considered in most decision making in L&D – IMO. Even after your version of ADDIE, SAT, or SAM, MCD, etc.

Knowing how to do that – the local Risk & Reward thing – and how to communicate both the status and what to do about it, is a personal best practice – IMO.

It’s Not All About Learning

It's All About Performance Competence - at the Individual level, the Team level, the Process level, the Organization level, the Value Chain level and at the Societal level ... or Worker, Work, Workplace and World.
