As the curmudgeon on the PSLC Executive Committee who has never quite jumped on the bandwagon of our theoretical statement, I offer this wiki entry as a prod toward refining our theoretical statements. In particular, in this brief document I explore the conceptual bases of "event spaces" and "robust learning". (David Klahr)

I. Event spaces

At present, here is what we say about Learning Events:
From the PSLC wiki: (4/18/2007 – I have added the numbering)

1. Learning events: A mental event involving the construction or application of a purported knowledge component. The event may be directly driven by instruction as in reading a definition of the knowledge component or applying it in a practice problem. While the instruction has a particular correct knowledge component as a target, the student may construct or apply a different correct or incorrect knowledge component.

2. Learning event space: The set of paths that students have available for a particular learning event

3. Learning event scheduling: It has been known since at least Ebbinghaus (1885) that the schedule of learning events influences long-term retention. Learning event scheduling is therefore an independent variable that can be manipulated. However, because of interactions with task domain (declarative or procedural), task type (study or test), and repetition spacing, learning event scheduling is a complex topic.

I do not fully understand these definitions. For one thing, it is not clear how a "purported" knowledge component differs from a real one, or, for that matter, how one would determine the existence of a knowledge component at all. For another, the ambiguity (or equivalence) between "construction" and "application" is very confusing. More generally, these three definitions appear to conflate "instruction", "assessment", and "learning". Thus, in the following musings, I attempt to clarify what I see as important differences between instructional events, assessment events, and learning events.
Here is how I see it: Most of our studies aim to provide some instruction (which is usually clearly and unambiguously described), and to measure the effect of that instruction on learning (which is never directly observed) via some assessment procedure (which can be clearly defined … although in some cases it isn’t) that is designed to demonstrate that the intended learning actually occurred.

I.1 Events

An event is an occurrence at a particular time, with a particular duration from T to T+d (where d is the duration of the event). In instructional research, d can vary from seconds to semesters, depending on the grain size of the analysis. There are three distinct classes of events (instruction, learning, and assessment), each with its own space.

Each space can be characterized in terms of increasing aggregates. The smallest unit is the event. A series of events is a path. The set of all possible paths is the space. Different levels of aggregation, instrumentation precision, temporal duration, and theoretical language may compress what is a space from one perspective into an event in another. Conversely, an event may be expanded into a space as one drills down to finer grain sizes. Successful cross-mapping, comparison, and contrast among PSLC projects depend on explicit recognition of these different grain sizes.
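The event/path/space hierarchy above can be made concrete with a minimal sketch. This is purely illustrative: the class and function names, the example events, and the assumption that a space is the Cartesian product of the alternatives available at each step are all mine, not PSLC terminology.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Event:
    """An occurrence at time T with duration d (hypothetical model)."""
    kind: str        # "instructional", "learning", or "assessment"
    label: str
    start: float     # T
    duration: float  # d

# A path is an ordered sequence of Events (a tuple).
def space(alternatives):
    """All paths obtainable by choosing one event at each step.

    `alternatives` is a list of lists: the options available at each
    successive step. Under this (assumed) model, the space is the
    Cartesian product of those options.
    """
    return set(product(*alternatives))

# Example: two options at step 1, two at step 2 -> a space of 4 paths.
step1 = [Event("instructional", "read definition", 0, 5),
         Event("instructional", "worked example", 0, 8)]
step2 = [Event("assessment", "practice problem", 10, 4),
         Event("assessment", "quiz item", 10, 2)]
print(len(space([step1, step2])))  # -> 4
```

Drilling down to a finer grain size would simply replace one Event with a nested space of its own, which is the compression/expansion point made above.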

I.1.a Learning Event Space: Learning takes place in people's minds. A learning event is a change in the learner's internal cognitive and/or motivational states. It is a process with a temporal location and duration, but it is not directly observable. A learning event path consists of a sequence of learning events leading toward increased procedural and conceptual knowledge, as well as changes in motivational states. Thus, learning event spaces are hypothetical entities: induced by instructional events and measured by assessment events.

I.1.b Instructional Event Space: Instruction is activity in the Learner's external environment that causes a learning event. In most of our work, instruction is intentional, goal directed, and highly specific with respect to the learning events it is designed to cause. The intentions and goals in instructional event paths are typically, but not exclusively, created by agents other than the learner.

An Instructional Event, like a Learning Event, has a time and a duration. Unlike learning events, instructional events are directly observable.

The activity comprising an instructional event can be generated by the learner or by other human or non-human agents. Some of these agents can also be learners having their own set of event spaces.

Usually, instructional events and instructional paths are planned and intentional, but they can be unplanned, inadvertent, and unanticipated by the learner or the instructor.

In addition, instructional events can be classified as "other-generated" (i.e., instruction controlled and presented by an agent external to the learner) or "self-generated" (instruction controlled by the learner, such as self-paced problem solving, self-explanation, rehearsal, etc.).

Steps, Lessons, Courses, Curricula, etc. are types of instructional event paths: sequences of Instructional events of various grain sizes and complexity, perhaps with contingencies based on interspersed Assessment Events.

I.1.c Assessment Event Space: Assessments are actions designed to yield information about the learner's knowledge state. Assessment events can be initiated, and observed, by either the learner or an external agent or both. Some assessments, in addition to producing information about learners' internal states, may serve as further instructional events.

At present, the wiki is surprisingly silent on this issue: there are no entries for “assessment”, “measurement”, or “testing”. Test items are alluded to in some definitions, but testing and assessment are not treated at the top level. This is a serious weakness: no science can advance without clear operational definitions of its measurement procedures. Moreover, several of our projects already have some of the best knowledge assessment procedures ever devised (e.g., in the cognitive tutors). But this needs to be made explicit in our theory.

Ideally, but rarely, instructional event paths are perfectly correlated with learning event paths. That is, for every instructional event there is a corresponding, and desired, learning event. But an Instructional Event is neither necessary nor sufficient for a Learning Event: learning may occur in the absence of instructional events, and it may fail to occur in their presence. Correspondingly, assessment events may vary widely in the extent to which they correspond to instructional events and learning events.

II. "Robust Learning"

The wiki definition is as follows:

Robust learning: (paraphrased slightly from wiki)
Learning is robust if the acquired knowledge or skill:

(a) is retained for long periods of time,

OR

(b) can be used in situations that differ significantly from the situations present during instruction.

OR

(c) allows students to learn more quickly and/or more effectively.

This definition seems indistinguishable from the many senses in which the term "far transfer" has been used for over 100 years. Moreover, "robust learning" has been used in PSLC-speak in two different senses: as both an event and an assessment.

• Robust Learning Events: A learning event that causes knowledge changes in the learner that are sufficiently important, broad, and stable that their occurrence can be revealed by Robust Learning Assessments.

• Robust Learning Assessments: Assessment events that occur at a substantial temporal remove from, and in substantially different contexts than, the immediate context in which the Learning Event occurred. (Note that, by definition, it is impossible to know whether a robust learning event occurred until long after it did (or didn't) occur, because a robust learning event can only be revealed by a robust learning assessment.)

At present, I do not see a clear conceptual distinction between the PSLC's preferred term "robust learning" and the venerable term "far transfer", because the distance metaphor in "far transfer" is itself very ill defined. The dimensions along which the assessment context differs from the learning context are many and varied. They include such things as: length of temporal interval, overlap in knowledge contexts, depth of underlying knowledge structure, social context, and modality (written, spoken, visual, etc.) (cf. Barnett and Ceci's (2002) steps toward remedying this conceptual problem). Given that we make robust learning one of the central aims of both our instruction and our theory, I believe that we should build upon what efforts have already been made to understand far transfer, rather than ignore, or at best grudgingly acknowledge, the existence of that literature.

PSLC-speak makes a distinction between far transfer and what is termed "accelerated future learning" (AFL). The fluency and refinement cluster wiki defines AFL as:
Learning that proceeds more effectively and more rapidly because of prior learning. It differs from transfer in its putative generality, not dependent on encounters with similar materials that require similar procedures (transfer). It may include what are called “learning to learn” skills.
That same section of the wiki says "by hypothesis the robust learning produces accelerated learning through component competencies or through gains in efficiency that arise from procedures (e.g. chunking) that can apply to new learning." But elsewhere robust learning is defined as producing accelerated learning. It can't be both a hypothesized process AND a definition! How can the hypothesis be tested if the construct is defined this way?
I see no need to isolate AFL from the broad class of types of transfer. Here is a simple example: If I master one web browser (Netscape) and that knowledge enables me to master another browser (Safari) much more rapidly than (a) I learned Netscape or (b) a novice learns Safari, then that would seem to imply that my learning of Netscape was "robust" because it accelerated my "future learning" of Safari. But isn't that just the same as saying that there was a lot of transfer from Netscape to Safari – including not just the specifics of each system, but also knowledge about what kinds of questions to ask about a browser? What is the new language buying us? And what is it costing us in terms of clarity and credibility?

III. Conclusion

It might be an interesting exercise to take several of our projects and see whether they can be usefully described and compared using this terminology. My hope is that such an endeavor would have less of the feel of a Procrustean bed than our efforts to date.