This edited and peer-reviewed volume presents selected papers from the conference "Beyond Knowledge: The Legacy of Competence," organized by the EARLI SIG Learning and Instruction with Computers in cooperation with the SIG Instructional Design. It reflects the current state-of-the-art work of scholars worldwide in the area of learning and instruction with computers. The contributions mainly address computer-based learning environments supporting competence-focused knowledge acquisition, but also foundational scientific work. More specifically, the contents cover cognitive processes in hypermedia and multimedia learning, social issues in computer-supported collaborative learning, and motivation and emotion in blended learning and e-learning.


Beyond Knowledge: The Legacy of Competence
Jörg Zumbach · Neil Schwartz · Tina Seufert ·
Liesbeth Kester
Editors
Beyond Knowledge:
The Legacy of Competence
Meaningful Computer-based
Learning Environments
Editors
Prof. Jörg Zumbach
University of Salzburg
Department of Science Education
and Teacher Training
Hellbrunnerstr. 34
5020 Salzburg
Austria
joerg.zumbach@sbg.ac.at
Prof. Neil Schwartz
California State University
College of Behavioral & Social Science
Department of Psychology
400 West First St.
Chico CA 95929-0234
USA
neil8860@gmail.com
Dr. Tina Seufert
University of Ulm
Institute of Educational Sciences
89069 Ulm
Germany
Dr. Liesbeth Kester
Open University of the
Netherlands
6401 DL Heerlen
Netherlands
Gedruckt mit Unterstützung des Bundesministeriums für Wissenschaft und Forschung in Wien/
Printed with grant support of the Federal Ministry for Science and Research in Vienna, Austria.
ISBN: 978-1-4020-8826-1
e-ISBN: 978-1-4020-8827-8
Library of Congress Control Number: 2008930470
© 2008 Springer Science+Business Media B.V.
No part of this work may be reproduced, stored in a retrieval system, or transmitted
in any form or by any means, electronic, mechanical, photocopying, microfilming, recording
or otherwise, without written permission from the Publisher, with the exception
of any material supplied specifically for the purpose of being entered
and executed on a computer system, for exclusive use by the purchaser of the work.
Printed on acid-free paper
9 8 7 6 5 4 3 2 1
springer.com
Contents
Beyond Knowledge ................................................................................................1
Joerg Zumbach, Neil Schwartz, Tina Seufert, and Liesbeth Kester
Part I-I: Collaborative Learning with ICT and Knowledge Sharing ...............5
Interdisciplinary Perspectives on Cognitive Load Research as a Key
to Tackle Challenges of Contemporary Education.............................................7
Fred Paas, Tamara van Gog, Femke Kirschner, Nadine Marcus,
Paul Ayres, and John Sweller
Interpersonal Knowledge in Virtual Seminars .................................................11
Oliver Diekamp, Birgitta Kopp, and Heinz Mandl
Individual Versus Group Learning as a Function of Task Complexity:
An Exploration into the Measurement of Group Cognitive Load...................21
Femke Kirschner, Fred Paas, and Paul A. Kirschner
Mentored Innovation in Teacher Training Using Two Virtual
Collaborative Learning Environments ..............................................................29
Andrea Kárpáti and Helga Dorner
Instructor’s Scaffolding in Support of Students’ Metacognition
Through an Online Course .................................................................................43
Rikki Rimor, Roni Reingold, and Tali Heiman
Part I-II: Research Notes on Collaborative Learning with ICT
and Knowledge Sharing ......................................................................................55
Fostering Collaborators’ Ability to Draw Inferences from Distributed
Information: A Training Approach...................................................................57
Anne Meier and Hans Spada
Collaborative Problem Solving with Cases in a Virtual
Professional Training ..........................................................................................59
Melanie Hasenbein, Birgitta Kopp, and Heinz Mandl
Students’ Perceptions of a Competency Assessment Program
in an Online Course.............................................................................................63
Luis Tinoca, Isolina Oliveira, and Alda Pereira
Part II-I: E-Learning and Mobile Learning ..................................................... 65
Mobile Phones to Enhance Reflection Upon Collaborative
Problem-Solving .................................................................................................. 67
M. Beatrice Ligorio
The Use of iPods in Education: The Case of Multi-Tasking............................ 75
Geraldine Clarebout, Joke Coens, and Jan Elen
Which Design Principles Influence Acceptance and Motivation
in Professional E-Learning? ............................................................................... 83
Birgitta Kopp, Elvira Schulze, and Heinz Mandl
Preparing Pre-Service Teachers for Professional Education Within
a Metacognitive Computer-Based Learning Environment.............................. 93
Bracha Kramarski and Tova Michalsky
Designing a Well-Formed Activity System for an ICTs-Supported
Classroom........................................................................................................... 101
Jonghwi Park and Robert Bracewell
Part II-II: Research Notes on E-Learning and Mobile Learning.................. 111
A Comparison of Group and Individualized Motivational Messages
Sent by SMS and E-Mail to Improve Student Achievement ......................... 113
Damith Wickramanayake, Charles Schlosser, and Markus Deimann
Computer Assisted Learning and its Impact on Educational Programs
Within the Past Decade: A Bibliometric Overview of Research ................... 115
Gabriel Schui and Günter Krampen
Fostering the Translation Between External Representations: Does
it Enhance Learning with an Intelligent Tutoring Program? ....................... 117
Rolf Schwonke, Anna Ertelt, and Alexander Renkl
Oversold – Underused Revisited: Factors Influencing Computer
Use in Swiss Classrooms ................................................................................... 121
Dominik Petko
Part III-I: Competence-Based Instruction in Mathematics and Science...... 123
Analyzing Computer-Based Fraction Tasks on the Basis
of a Two-Dimensional View of Mathematics Competences .......................... 125
Anja Eichelmann, Susanne Narciss, Arndt Faulhaber, and Erica Melis
How does it Swim? ............................................................................................135
Halszka Jarodzka, Katharina Scheiter, and Peter Gerjets
Accuracy of Self-Evaluation of Competence: How is it Affected Through
Feedback in a Computer-Based Arithmetic Training?..................................143
Susanne Narciss, Hermann Körndle, and Markus Dresel
Inquiry Web-Based Learning to Enhance Information Problem Solving
Competence in Science ......................................................................................153
Manoli Pifarré and Esther Argelagós
Virtual Versus Physical Materials in Early Science Instruction:
Transitioning to an Autonomous Tutor for Experimental Design ...............163
David Klahr, Lara Triona, Mari Strand-Cary, and Stephanie Siler
A Design-Based Approach to Professional Development: The Need to See
Teachers as Learners to Achieve Excellence in Inquiry-Based
Science Education..............................................................................................173
Eleni A. Kyza and Constantinos P. Constantinou
The Effect of Intervening Tests on Text Retention.........................................183
Liesbeth Kester and Huib Tabbers
Guiding Students’ Attention During Example Study by Showing
the Model’s Eye Movements.............................................................................189
Tamara van Gog, Halszka Jarodzka, Katharina Scheiter, Peter Gerjets,
and Fred Paas
Part III-II: Research Notes on Competence-Based Instruction
in Mathematics and Science..............................................................................197
What Makes a Problem Complex? ..................................................................199
Samuel Greiff and Joachim Funke
Activation of Learning Strategies When Writing Learning Protocols:
The Specificity of Prompts Matters .................................................................201
Inga Glogger, Rolf Schwonke, Lars Holzäpfel, Matthias Nückles, and
Alexander Renkl
Part IV-I: Multimedia Learning ......................................................................205
One More Expertise Reversal Effect in an Instructional Design
to Foster Coherence Formation........................................................................207
Babette Koch, Tina Seufert, and Roland Brünken
The Influence of Spatial Text Information on the Multimedia Effect .......... 217
Florian Schmidt-Weigand, and Katharina Scheiter
Arguing a Position from Text: The Influence of Graphic Themes
on Schema Activation........................................................................................ 227
Neil H. Schwartz and Charles Collins
The Role of Attribution, Modality, and Supplantation
in Multimedia Learning.................................................................................... 237
Joerg Zumbach, Birgit Reisenhofer, Stefan Czermak, Peter Emberger,
Claudio Landerer, and Gerhard Schrangl
Part V-I: Tool Support...................................................................................... 247
To Embed or Not to Embed, That’s the Question .......................................... 249
Geraldine Clarebout, Holger Horz, Wolfgang Schnotz, and Jan Elen
Learner Variables, Tool-Usage Behaviour and Performance in an Open
Learning Environment...................................................................................... 257
Lai Jiang, Jan Elen, and Geraldine Clarebout
Fostering Hypermedia Learning with Different Argumentation Tools:
The Role of Argument Visualisation................................................................ 267
Joerg Zumbach, Martina Ramsauer, Neil H. Schwartz, and Sabine C. Koch
Supporting Prewriting Activities in Academic Writing
by Computer-Based Scaffolds .......................................................................... 275
Antje Proske and Susanne Narciss
Part V-II: Research Notes on Tool Support.................................................... 285
Presentation Manager and Web2.0: Understanding Online Presentations..... 287
Gisella Paoletti, Sara Rigutti, and Anna Guglielmelli
Hypertext Didactics and its Incidence in the Quality of Academic Literacy
During the Initial Training of Primary School Teachers............................... 291
Beatriz Figueroa, L. Domínguez, L. Ajagán, V. Yáñez, and M. Aillon
Learner-Controlled Use of Interactive Videos in the Context of Classroom
Education: Learning Strategies and Knowledge Acquisition ........................... 295
Sonja Weigand and Stephan Schwan
Appendix: Reviewers and Editorial Assistants............................................... 299
Author Index .....................................................................................................301
Subject Index ....................................................................................309
Beyond Knowledge
The Legacy of Competence in Meaningful
Computer-Based Learning Environments
Joerg Zumbach1, Neil Schwartz2, Tina Seufert3, and Liesbeth Kester4
1University of Salzburg (Austria), 2California State University (Chico, USA), 3University of Ulm (Germany), 4Open University of the Netherlands
Beyond Knowledge: About the Content of this Book
Learning and instruction with computers is intrinsically tied to current educational
practice in schools, universities, the corporate world and informal settings of
learning. However, integration of technology in the practice of education is a sensitive task that has to be well planned in order to meet the needs of learners and
teachers. Current changes in European education stress the role of competencies
and educational standards; thus, fostering both within the practice of education is
eminently important. Meaningful computer-based learning environments contribute to the achievement of learners’ acquisition of competence and directly address
the superordinate standards of education. They stimulate active learning by providing students with control over learning environments and offer realistic problems with which to practice – environments that can simulate conditions impossible to mimic in the real world, and environments that can embed learning
scenarios within the structure of interactive and highly motivating games (Merrill,
2002; Reigeluth, 1999; Van Merriënboer & Kirschner, 2001). Furthermore, the
environments also provide the capability of leveraging vast information resources
within a myriad of modality-specific deployments – for example, texts, auditory
fragments, and animations.
This book presents a highly select compilation of research dedicated to these
environments – empirical research (both basic and applied) aimed at the analysis,
understanding, and promotion of learning through computer-based and other state-of-the-art instructional approaches.
Section one of this book is dedicated to approaches of competence-based instruction in mathematics and science. By definition, competence is characterized
by integrativity, specificity, and durability. Integrativity refers to the combination of knowledge, skills, and attitudes as well as aptitudes of students; specificity refers to the idea that competence is always bound to a context that is either highly
specific (e.g., a profession) or more general (e.g., a career); durability relates to
the notion that competence does not rely exclusively on tools, working methods or
technologies per se (Van Merriënboer, Van der Klink, & Hendriks, 2002). Thus,
competence-based instruction requires a holistic approach, consisting of whole
tasks that address the coordination and integration of knowledge, skills and attitudes (Van Merriënboer & Kester, 2008). From this perspective, the chapters in
this section address the question of how scientific thinking, and epistemological
beliefs, in the context of a science classroom, can be extended and enriched by
digital learning environments as well as innovative approaches of instructional design.
Part two of this book explores current approaches aimed at analyzing and fostering collaborative learning. As such, these approaches consider collaborative
learning under the auspices of information and communication technologies (ICT)
in addition to issues of knowledge sharing. The social and communicative aspects
of learning are addressed in addition to suggestions for enhancing collaborative
transactions of learners in group-based instruction.
The third section is dedicated to issues of e-Learning and mobile learning in
general. There is little doubt that when using mobile devices, the opportunity to
learn alone or in groups comes with unique and special requirements. These requirements refer to the issue that technical devices, like mobile phones, iPods and
other mobile appliances need to be handled as learning tools – tools that must be
able to negotiate the ubiquity of open learning environments that are, by comparison to traditional environments, significantly more amorphous. That means the
environments in which students learn need to be prepared by the learners themselves so that the learning processes are appropriately initiated and properly controlled by continuous metacognitive processes. The research in this section addresses these issues directly, especially with regard to the innovative applied
approaches to support and design meaningful and competence oriented learning
environments as they are exploited by the use of mobile tools.
Computers as learning tools, and tool support for computer-based learning, are
the focus of the fourth section of this volume. Rich computer-based learning environments enable a qualitatively different way of learning compared to traditional
learning environments. By comparison to typical school classrooms, computer-based learning environments allow for non-linear learning by giving students control over the instructional material they are intending to learn. Thus, students are
allowed to select information, tasks, instructional formats (e.g., video, audio,
graphics, or text), interface properties, and content (e.g., analogies) in their preferred order and at their own pace (Merrill & David, 1994). Although learner control can be highly motivating (Gray, 1987; Lawless & Brown, 1997; Lou, Abrami,
& d’Apollonia, 2001), its effect on learning outcomes is not unequivocally supported (Fry, 1972). Thus, the use of support tools in computer-based learning
might be an important means to enhance the learning outcomes of students in control over their own learning; however, the complexity of these environments currently renders them vulnerable to debate over their outcome efficacy. Answers to
questions about the nature and surplus value of learning support devices, as well
as outcome oriented instructional design approaches are major themes that guide
the contributions in this section.
The topic of the final section of this book is multimedia learning. There are
three perspectives on multimedia learning presented by the research in this section. First, the psychological perspective describes memory systems and cognitive
processes that explain how people process different types of information and how
they learn with different senses. For example, Paivio’s dual coding theory (1986;
Clark & Paivio, 1991) and Baddeley’s working memory model (1992, 1997) form
the bases for this perspective. Second, the design of instructional messages identifies multimedia principles and provides guidelines for devising multimedia messages consisting of, for instance, written text and pictures, spoken text and animations, or explanatory video with a mix of moving images with spoken and written
text (e.g., Mayer’s generative theory of multimedia learning (2001) and Sweller’s
Cognitive Load Theory (2004; Sweller, van Merriënboer, & Paas, 1998)). Finally,
models for course and curriculum design prescribe how to develop educational
programs, which contain a mix of educational media including texts, images,
speech, manipulative materials, and networked systems. In short, the research in
this section explores the three perspectives underlying multimedia in varied and
important experimental work.
The chapters in this book present excellent research that has undergone double-blind peer review, comprising newly completed investigations in the field. As
such, they reflect new data, fresh thinking and new findings in the field. In addition, we decided to include research notes that represent work-in-progress – innovative approaches that might affect future research. These selected
research notes also underwent a double-blind process of peer review in order to
emphasize the role of current and future processes of instructional design and
learning and instruction with computers. We hope to present a valuable resource
for the field and thank all contributors for their excellent and outstanding work.
References
Baddeley, A. D. (1992). Working memory. Science, 255, 556–559.
Baddeley, A. D. (1997). Human memory: Theory and practice (Rev. Ed.). Hove, UK:
Psychology Press.
Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology
Review, 3, 149–210.
Fry, J. P. (1972). Interactive relationship between inquisitiveness and student control of
instruction. Journal of Educational Psychology, 63, 459–465.
Gray, S. J. (1987). The effects of sequence control on computer learning. Journal of Computer-Based Instruction, 14(2), 54–56.
Lawless, K.A., & Brown, S.W. (1997). Multimedia learning environments: Issues of learner
control and navigation. Instructional Science, 25, 117–131.
Lou, Y., Abrami, P. C., & d’Apollonia, S. (2001). Small group and individual learning with
technology: A meta-analysis. Review of Educational Research, 71, 449–521.
Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
Merrill, M. D. (2002). First principles of instruction. Educational Technology, Research and
Development, 50(3), 43–59.
Merrill, M. D., & David, G. T. (1994). Instructional design theory. Englewood Cliffs, NJ:
Educational Technology Publications.
Paivio, A. (1986). Mental representation: A dual coding approach. New York: Oxford
University Press.
Reigeluth, C. M. (Ed.). (1999). Instructional design theories and models: A new paradigm of
instructional theory. Mahwah, NJ: Lawrence Erlbaum.
Sweller, J. (2004). Instructional design consequences of an analogy between evolution by natural
selection and human cognitive architecture. Instructional Science, 32, 9–31.
Sweller, J., van Merriënboer, J. J. G., & Paas, F. (1998). Cognitive architecture and instructional
design. Educational Psychology Review, 10, 251–296.
Van Merriënboer, J. J. G., & Kester, L. (2008). Whole task models in education. In
J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of
research on educational communications and technology (pp. 441–456). New York:
Routledge, Taylor & Francis Group.
Van Merriënboer, J. J. G., & Kirschner, P. A. (2001). Three worlds of instructional design: State
of the art and future directions. Instructional Science, 29, 429–441.
Van Merriënboer, J. J. G., Van der Klink, M. R., & Hendriks, M. (2002). Competenties: Van
complicaties tot compromis - Een studie in opdracht van de onderwijsraad [Competencies:
from complications towards a compromise – A study for the National Educational Council].
The Hague, The Netherlands: Onderwijsraad.
Part I-I: Collaborative Learning with ICT
and Knowledge Sharing
Interdisciplinary Perspectives on Cognitive
Load Research as a Key to Tackle Challenges
of Contemporary Education
Fred Paas1, Tamara van Gog1, Femke Kirschner1, Nadine Marcus2,
Paul Ayres2, and John Sweller2
1Educational Technology Expertise Center, Open University of the Netherlands, Fred.Paas@ou.nl; 2University of New South Wales, Sydney, Australia
Abstract In this contribution we argue that challenges of contemporary education
require new forms of collaboration and communication across disciplines.
Interdisciplinary perspectives are needed to enable us to make truly original and
useful contributions to cognitive load theory and practice. Using cognitive load
theory as an example, we show that the cutting edge of cognitive load research
lies across the boundaries of disciplines. Four examples will be presented to
illustrate how the transfer of methods and findings from exercise physiology,
neuroscience, and cognitive aging research have advanced or may advance
cognitive load theory: (1) Ratings of perceived exertion from the discipline of
exercise physiology have been adapted and successfully used in cognitive load
research to measure cognitive load. (2) Findings from recent neuroscience
research may further the explanation for why dynamic visualizations are
particularly effective when learning tasks involve human movement, and largely
ineffective when depicting mechanical, non-human movement. (3) Research on
interhemispheric cooperation is used as a model for cognitive load research into
the effectiveness of group learning. (4) Cognitive aging research is used to show
that age-related reductions in attentional control over information that was not
initially relevant can actually lead to superior performance for older adults when
this information serves as a solution to subsequent problems.
In this contribution we argue that challenges of contemporary education require
new forms of collaboration and communication across disciplines. Research is interdisciplinary when we build on theories and previous research from more than
one discipline and use methods for data collection and analysis from more than
one research tradition. Using cognitive load theory as an example, it is shown that
the cutting edge of cognitive load research lies across the boundaries of disciplines. Interdisciplinary perspectives are needed to enable us to make truly original and useful contributions to cognitive load theory and practice. Four examples
will be presented to illustrate how the transfer of methods and findings from exercise physiology, neuroscience and cognitive aging research has advanced or may
advance cognitive load theory with regard to the measurement of cognitive load,
the effectiveness of learning from dynamic visualizations, the effectiveness of
collaborative learning, and a new perspective on age-related distractibility as an
opportunity for learning.
(1) Back in the 1950s Gunnar Borg, a Swedish exercise physiologist, introduced the field of perceived exertion. His rating scales of perceived exertion
(RPE) are used worldwide in medicine and exercise physiology to produce estimates of exertion that are comparable across people and across tasks. Ratings of
perceived exertion are based on the assumption that people are able to introspect
on their physical processes. This method of data collection and analysis from the
discipline of exercise physiology was first used in cognitive load research in the
early 1990s, and assumed that people would also be able to introspect on their
cognitive processes. Using a similar RPE scale it was shown that people were capable of giving a numerical indication of their perceived cognitive load. Since
then, the cognitive load rating scale has been successfully used in many studies to
differentiate between the cognitive load effects of instructional conditions.
(2) Dynamic visualizations such as video or animation have become a popular
means of providing instruction on natural processes such as how lightning develops or how the tides work, on technical systems such as the functioning of a bicycle pump or a chemical distillation process, or on abstract processes such as probability calculation. It has been suggested that dynamic visualizations should
enhance learning by assisting students to perceive the temporal changes or movement in a system, whereas learning from static visualizations requires students to
mentally infer these temporal changes, and inference requires more cognitive effort than simple perception. However, despite their popularity and the fact that dynamic visualizations seem an intuitively superior instructional format for representing change over time than static graphics, research has failed to establish the
superiority of dynamic over static visualizations. In this presentation I will report
on findings from neuroscience research that can be used to further the explanation
for why dynamic visualizations are particularly effective when learning tasks involve human movement, and largely ineffective when depicting mechanical, nonhuman movement. In the context of cognitive load theory, learning by observing a
dynamic visualization of human movement may be less problematic for working
memory, since an important part of processing human movement information occurs automatically via a circuit of neurons that deal with the perception and imitation of human movement, i.e. the mirror-neuron system. Several recent studies
have provided support for the hypothesis that animations of tasks involving a human-movement component (e.g., paper folding) would lead to better learning than
static pictures.
(3) In contemporary learning paradigms, collaboration is emerging as one of
the promising learning approaches in education. However, while all levels of
education are making use of collaborative learning techniques in both traditional
and electronic learning environments, either synchronously or asynchronously,
either distributed or non distributed, the effectiveness of these types of education/learning has still not been proven. A recent literature review of research comparing the effectiveness of individual learning environments with group-based
learning environments has revealed mixed results. It is still not clear under what
circumstances collaborative learning is more effective than individual learning. I
will show that neuroscientific research on interhemispheric cooperation can be
used as a model for cognitive load research into the effectiveness of collaborative
learning. This research has shown that dividing processing across the hemispheres
(cf., group learning) is useful when cognitive load is high because it allows information to be dispersed across a larger expanse of neural space. In contrast, when
the load is low, a single hemisphere (cf., individual learning) can adequately handle the processing requirements. In this presentation I will show how these results
can be used to generate new hypotheses about the conditions under which collaborative learning is more effective than individual learning.
(4) There is an abundance of research evidence in the cognitive literature of
age-related declines in various functions including speed of processing, selective
attention, working memory, long term memory and problem solving. Recent evidence suggests that one source of slowing of processing speed is an age-related
increase in distractibility. In this presentation, cognitive aging research of Kim,
Hasher, and Zacks (2007) is used to show that age-related reductions in attentional
control over information that was not initially relevant can actually lead to superior performance for older adults when this information serves as a solution to
subsequent problems. They presented young and old participants first with a reading task in which older adults are differentially disrupted by concurrent distraction
and then with a problem solving task, in which some of the items could be solved
by words that had served as distractors in the reading task. Their hypothesis that
older adults would show greater facilitation or priming from the reading task to
the problem solving task was confirmed. I will show how this new way of thinking by Kim et al. may help cognitive load researchers generate new research questions and find new instructional techniques.
Interpersonal Knowledge in Virtual Seminars
Oliver Diekamp, Birgitta Kopp, and Heinz Mandl
Ludwig-Maximilians-University, Munich - Department of Psychology, Leopoldstr. 13,
D-80802 Munich, diekamp@psy.lmu.de, birgitta.kopp@psy.lmu.de, heinz.mandl@psy.lmu.de
Abstract Interpersonal knowledge of learning partners plays an important role in
collaborative learning. Because of the special characteristics of computer-mediated communication, it is necessary to investigate the formation and the effects of interpersonal knowledge in virtual learning scenarios. This field study
evaluates the formation and the effects of interpersonal knowledge in a virtual
seminar. The seminar involved 33 participants who worked together in groups of
3–5 members. At the beginning and end of the virtual seminar, participants were
asked about their skill-related interpersonal knowledge and their emotional interpersonal knowledge of other learning partners. Results showed that interpersonal
knowledge generally increased during the seminar. While skill-related interpersonal knowledge did not lead to more efficient interaction, socio-emotional interpersonal knowledge was positively related to conflict-oriented consensus building,
posing task-related questions and the contribution of ideas. Both skill-related and
socio-emotional interpersonal knowledge were positively correlated to participants’ satisfaction with, and acceptance of, the seminar.
Objectives and Purpose
Communication between learning-partners plays a crucial role in collaborative
learning. Learners not only discuss individual perspectives on tasks and subject
matter, but also must coordinate their learning activities in the group. In typical
computer-supported learning environments such as virtual seminars, communication becomes even more complex as learners are restricted to asynchronous text-based communication. As a consequence, it is increasingly important to understand the characteristics of interpersonal communication. One important aspect of
communication is the interpersonal knowledge that learners develop about other
learning partners. Empirical evidence indicates that interpersonal knowledge influences specific learning activities by reducing the amount of coordination needed, because members know the strengths and weaknesses of other
learning partners (Adams, Roch, & Ayman, 2005). In addition, interpersonal
knowledge should also reduce conformity and the suppression of alternative perspectives and judgments (Gruenfeld, Mannix, Williams & Neale, 1996; Shah &
Jehn, 1993) which should result in a better performance of collaborative learning
groups. This paper addresses two main research questions: First, how does interpersonal knowledge develop in a virtual seminar under the specific conditions of
computer-mediated-communication? And second, what effects does interpersonal
knowledge have on collaborative learning activities like coordination and critical
evaluations as well as on satisfaction with, and acceptance of, a virtual seminar?
The context of the study was the virtual seminar “Introduction into Knowledge Management”, offered by the Virtual University of Bavaria.
Theoretical Framework
According to theories on impression formation and impression management, individuals actively try to manage what they know about their interaction partners and
what their interaction partners (should) know about themselves (Fiske, Lin &
Neuberg, 1999). In traditional face-to-face-situations, individuals are generally
familiar with strategies for managing the formation and development of interpersonal knowledge (e.g. social information seeking, selective self-presentation).
However, asynchronous and text-based communication produces a completely different scenario. According to Walther’s social-information-processing approach
(1996), and his research on impression formation and relationship development in
computer-mediated-communication, communication partners can reach the same
level of interpersonal knowledge and relationship as in face-to-face settings; they
only need more time to adapt to the restrictions of the medium. As interaction is
easier with a certain degree of interpersonal knowledge, it is expected that participants develop such knowledge about their learning partners in the context of a virtual seminar.
With regard to the effects of interpersonal knowledge, research on transactive memory systems has shown that skill-related interpersonal knowledge (e.g. knowledge about strengths and expertise in certain domains) decreases decision time and coordination effort in groups (Wegner, 1986). With regard to socio-emotional interpersonal knowledge (e.g. knowledge about emotions and feelings),
uncertainty reduction theory assumes that individuals who do not possess interpersonal knowledge about a communication partner feel uncertain during interaction (Berger & Calabrese, 1975). As a consequence, individuals try to overcome
uncertainty by acquiring interpersonal knowledge about their communication
partner, typically by using social-information-seeking techniques like observation
or asking questions (Berger & Kellermann, 1994). As empirical studies show, increased interpersonal knowledge about group members particularly affects the social modes of group interaction. It is reported that increased interpersonal knowledge makes individuals more comfortable expressing disagreement (Gruenfeld
et al., 1996). Socio-emotional interpersonal knowledge can therefore help reduce
the tendency of participants to discuss information which has already been shared
amongst the group (Wittenbaum, Hollingshead & Botero, 2004). In another study,
Shah and Jehn (1993) found that group members with more interpersonal knowledge asked more questions and were more critical of their decisions regarding
candidates. As these are crucial factors for successful collaborative learning, this
should result in higher degrees of acceptance and satisfaction with a virtual
seminar.
Research Questions
1. How does interpersonal knowledge develop in a virtual seminar?
Based on theories of computer-mediated communication such as the social-information-processing approach (Walther, 1996), it is expected that interpersonal
knowledge will generally increase during the seminar.
2. Is there a relationship between interpersonal knowledge and collaborative
learning activities?
Studies in the context of transactive-memory-systems (Wegner, 1986) lead to the
expectation that possessing skill-related interpersonal knowledge reduces the need
for explicit coordination. In addition, it is expected that individuals with more
socio-emotional interpersonal knowledge feel less insecure (Berger & Calabrese,
1975) and therefore will: (a) evaluate their learning partners’ statements more
critically, (b) generate more task-related questions, and (c) contribute more proposals and ideas. Moreover, it is expected that socio-emotional interpersonal
knowledge is positively correlated with the amount of social-talk during a group
discussion.
3. Is there a relationship between interpersonal knowledge and satisfaction
and acceptance with the virtual seminar?
As the effects of interpersonal knowledge are expected to lead to better interaction
patterns, it is expected that interpersonal knowledge is generally associated with
higher degrees of acceptance and satisfaction with the virtual seminar.
Method
Context and Design
The context of this study was a virtual seminar entitled “Introduction into Knowledge Management”. Thirty-three previously unacquainted undergraduate students
from seven universities in Bavaria participated in this online seminar in the winter
term 2004/2005; most (72%) of the participants were female. Based on the concepts of problem-oriented learning (Mandl, Ertl, & Kopp, 2006), the learning material was structured into six modules. In each module, an anchor case introduced
a problematic aspect of knowledge management, e.g. knowledge representation.
For each of the six modules, groups had to solve different tasks.
Learning Environment
The learning environment consisted of a user interface of HTML pages and threaded discussion boards with the ability for users to upload and download files. Access to the learning environment was provided via the World Wide Web and secured by personal login data. The home page described the basic structure of the seminar with a timetable and news ticker. In addition, communication with the tutor was made possible via a question board.
Data Sources
Interpersonal Knowledge
Interpersonal knowledge was measured via an online-questionnaire after the introduction-phase (t1) and before the beginning of the last learning-module (t2). Because interpersonal knowledge is always two-directional, participants assessed
their interpersonal knowledge for each learning partner in two ways. First, participants assessed their own knowledge of their learning partner; then, participants assessed the interpersonal knowledge that the learning partner had about them.
Rating of interpersonal knowledge was divided into skill-related and socio-emotional interpersonal knowledge. In order to measure skill-related interpersonal
knowledge, participants were first asked to assess their learning partners’ competences in the domains of knowledge management, new media technologies and
cooperation. Then, participants were asked to rate their confidence in their assessments (e.g. “How competent is Tim in the domain of knowledge management?
How sure are you with this assessment?” and “How would Tim rate your
competence in the domain of knowledge management? How sure are you with this
assessment?”). All questions were based on a 5-point Likert scale with three items
(“very competent” and “not competent” / “very sure” and “very unsure”). The degree of skill-related knowledge was expressed by the ratings on the confidence
scale.
In order to measure socio-emotional interpersonal knowledge, participants assessed their own interpersonal knowledge and their learning partners’ knowledge
about them on a five-point Likert scale, including items on knowledge of emotions, living-conditions, character and attitudes of the learning partners (“How
well do you know Tim with regard to his emotions and feelings?” and “How well
do you think Tim knows you with regard to your own emotions and feelings?”),
anchored with “not well” and “very well”.
Satisfaction and Acceptance
Satisfaction and acceptance were measured with an online-questionnaire shortly
after the virtual seminar was finished. All items were also based on a 5-point
Likert scale, anchored with “Strongly agree” and “Strongly disagree”. Satisfaction
included six items on efficiency (e.g. “The group discussions were useful and
helpful”), four items on satisfaction with knowledge transfer (e.g. “Knowledge of
other group members helped me a lot.”) and ten items on satisfaction with group
cohesion (e.g. “We had a good group climate”). Acceptance was measured on a
general level (e.g. “I enjoyed participating in a virtual seminar”). All scales used
in this study showed sufficient internal reliability (Cronbach’s alpha ranging from
0.78 to 0.88).
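The internal-consistency figures reported here can be reproduced from raw item scores with Cronbach's alpha, α = k/(k−1) · (1 − Σσᵢ²/σ_T²), where k is the number of items, σᵢ² the variance of item i, and σ_T² the variance of the summed scores. The following is a minimal sketch, not the authors' actual analysis code; the ratings in the example are hypothetical.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one list per item, each holding the scores of all respondents
    (same respondent order in every list).
    """
    k = len(items)            # number of items in the scale
    n = len(items[0])         # number of respondents

    def var(xs):              # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # total (sum) score of each respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 5-point ratings on a three-item scale, five respondents:
alpha = cronbach_alpha([[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 4, 2, 4, 3]])
```

By the common ≥ 0.70 rule of thumb, values in the 0.78–0.88 range reported for these scales would count as sufficient internal consistency.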
Specific Learning Activities
In order to correlate interpersonal knowledge with collaborative learning activities, an interaction analysis was conducted for the last learning-module. Two
evaluators counted the statements of coordination, elaboration of perspectives and
social-talk. With regard to the social modes, statements were counted that indicated new proposals (those not previously mentioned), task-related questions, and
critical evaluations of learning partners’ contributions. The inter-rater reliability
(ICC) was sufficient (r>0.78).
Results
Research question 1: Formation of interpersonal knowledge
Results showed (see Fig. 1) that participants felt more confident in assessing the
skills of their learning partners at the end of the seminar as compared to the beginning (mt1=3.43; mt2=3.71; p<0.05; t=-2.38). This effect was not observed with regard to the assessment of learning partners’ knowledge about one’s own skills.
Confidence increased; however, the difference was not significant (mt1=2.84;
mt2=3.08, ns). Similar results occurred regarding socio-emotional interpersonal
knowledge. While the assessment of the socio-emotional interpersonal knowledge
of other learning partners increased significantly (mt1=1.9; mt2=2.2; p<0.05; t=2.23), no significant increase was measured with regard to the learning partners’ assumed socio-emotional knowledge of oneself.
[Figure omitted: bar chart of mean ratings (5-point scale) at t1 and t2 — skill-related interpersonal knowledge: LP 3.43 → 3.71, oneself 2.84 → 3.08; socio-emotional interpersonal knowledge: LP 1.90 → 2.20, oneself 1.86 → 2.03.]
Fig. 1. Formation of interpersonal knowledge.
Notes: LP = Assessment of one’s own knowledge of the learning partner; oneself = Assessment of learning partners’ knowledge of oneself.
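The t1-versus-t2 comparisons reported for research question 1 are paired-samples t-tests on the same participants' ratings at the two measurement points. A minimal sketch of that statistic (hypothetical data; not the authors' analysis code):

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic and degrees of freedom.

    t = mean(d) / (sd(d) / sqrt(n)), where d are the per-participant
    differences (after - before) and sd uses n - 1 in the denominator.
    """
    d = [a - b for a, b in zip(after, before)]
    n = len(d)
    md = sum(d) / n
    sd = math.sqrt(sum((x - md) ** 2 for x in d) / (n - 1))  # sample SD
    return md / (sd / math.sqrt(n)), n - 1                   # (t, df)

# Hypothetical confidence ratings of four participants at t1 and t2:
t, df = paired_t([3, 3, 4, 4], [4, 3, 5, 5])
```

The resulting t is then compared against the critical value of the t distribution with the returned degrees of freedom.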
Research question 2: Relationship between interpersonal knowledge and collaborative learning activities
On the basis of research on transactive memory-systems, it was expected that
higher degrees of skill-related interpersonal knowledge would correspond with
more “implicit” coordination. As shown in Table 1, this expectation was not confirmed. The correlation of skill-related interpersonal knowledge and coordination
activity was not significant.
Table 1. Correlations between interpersonal knowledge and collaborative learning activities.

                       Skill-related             Socio-emotional
                       interpersonal knowledge   interpersonal knowledge
                       LP        oneself         LP        oneself
Coordination           0.06      0.08            0.27      0.32
Elaboration           –0.02      0.22            0.56**    0.56**
Proposals              0.04      0.23            0.38*     0.47*
Asking Questions       0.02      0.18            0.33*     0.39*
Critical Evaluation    0.22      0.26            0.32*     0.36*
Social-Talk            0.33*     0.12            0.43**    0.40*

Notes: LP = Assessment of one’s own knowledge of the learning partner; oneself = Assessment of learning partners’ knowledge of oneself. n=33; * p<0.05; ** p<0.01, 1-tailed.
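The coefficients in Table 1 are Pearson correlations, whose one-tailed significance is judged via the equivalent t statistic with n − 2 degrees of freedom. A minimal sketch of both computations (hypothetical data; not the authors' code):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def r_to_t(r, n):
    """Convert r to the t statistic (df = n - 2) used for its significance test."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# e.g. a correlation of 0.56 with n = 33, as for elaboration in Table 1:
t = r_to_t(0.56, 33)   # compared against a one-tailed t criterion with df = 31
```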
As expected, socio-emotional interpersonal knowledge was positively associated
with the willingness to contribute proposals and ideas to group discussion. Results
also showed significant correlations with critical evaluations of learning partners’
statements, posing questions and social-talk.
Research question 3: Relationship between interpersonal knowledge and satisfaction and acceptance
As Table 2 shows, expectations regarding satisfaction and acceptance were confirmed. Socio-emotional interpersonal knowledge was strongly correlated with
satisfaction pertaining to efficiency, knowledge-transfer, and group cohesion. The
more socio-emotional interpersonal knowledge participants possessed about their
learning partners, the more satisfied the participants were. Skill-related interpersonal knowledge was positively associated with satisfaction with regard to efficiency and group cohesion.
Table 2. Correlation between interpersonal knowledge and satisfaction / acceptance.

                                       Skill-related             Socio-emotional
                                       interpersonal knowledge   interpersonal knowledge
                                       LP        oneself         LP        oneself
Satisfaction with efficiency           0.40*     0.16            0.53**    0.45**
Satisfaction with knowledge-transfer   0.27      0.04            0.63*     0.58**
Satisfaction with group cohesion       0.31*     0.37*           0.80**    0.75**
Acceptance of virtual seminar         –0.02      0.08            0.40*     0.36*

Notes: LP = Assessment of one’s own knowledge of the learning partner; oneself = Assessment of learning partners’ knowledge of oneself. n=33; * p<0.05; ** p<0.01, 1-tailed.
Discussion
In general, results confirmed previous research conducted in contexts that were
not directly related to computer-supported collaborative learning. Firstly, interpersonal knowledge developed during computer-supported learning. As results indicate, participants showed increasing levels of confidence in assessing the skills of
learning partners. In addition, participants were able to better assess the socio-emotional interpersonal knowledge of their learning partners. Consistent with
Walther’s social-information-processing approach (Walther, 1996), participants
seemed to adapt to the restrictions of computer-mediated communication and developed a deeper interpersonal insight, despite the inherent limitations of communication channels.
Consistent with the results of Shah and Jehn (1993), participants with more
socio-emotional interpersonal knowledge showed better social modes of interaction with respect to critically evaluating contributions of learning partners, asking
task-related questions and making proposals. These are all crucial patterns of successful knowledge construction in collaborative learning. Students with a higher
degree of interpersonal knowledge seemed to overcome uncertainty and anxiety as
they interacted in a more open manner. However, findings also indicate that interpersonal knowledge does not automatically lead to more efficient interaction.
Group members did not automatically develop “implicit” coordination through the
awareness of each other’s skills and talents. These findings are consistent with experiences in other virtual seminars where coordination efforts did not decrease by
the end of the course (Schnurer, 2005). It is likely that students would have needed
more time to form a transactive-memory-system with highly efficient coordination
processes.
Scientific and Educational Importance
There are several general inferences that can be drawn from the results, which indicate that interpersonal knowledge can develop under conditions of asynchronous
text-based communication and that this interpersonal knowledge affects interaction in a positive way. More specifically, results clearly indicate that interpersonal
knowledge is an important factor for successful interaction. This lends support to
the idea of implementing strategies that foster the development of interpersonal
knowledge, especially on a socio-emotional level.
References
Adams, S. J., Roch, S. G., & Ayman, R. (2005). Communication medium and member
familiarity: The effects on decision time, accuracy, and satisfaction. Small Group Research,
36(3), 321–353.
Berger, C. R., & Calabrese, R. (1975). Some explorations in initial interaction and beyond:
Toward a developmental theory of interpersonal communication. Human Communication
Research, 1, 99–112.
Berger, C. R., & Kellermann, K. (1994). Acquiring social information. In J. A. Daly &
J. M. Wiemann (Eds.), Strategic interpersonal communication (pp. 1–31). Hillsdale, NJ:
Lawrence Erlbaum.
Fiske, S. T., Lin, M., & Neuberg, S. L. (1999). The continuum model: Ten years later. In
S. Chaiken, & Y. Trope (Eds.), Dual process theories in social psychology (pp. 231–254).
New York: Guilford.
Gruenfeld, D. H., Mannix, E. A., Williams, K. Y., & Neale, M. A. (1996). Group composition
and decision making: How member familiarity and information distribution affect process
and performance. Organizational Behavior and Human Decision Processes, 67(1), 1–15.
Mandl, H., Ertl, B., & Kopp, B. (2006). Cooperative learning in computer supported learning
environments. In L. Verschaffel, F. Dochy, M. Boekaerts, & S. Vosniadou (Eds.),
Instructional psychology: Past, present and future trends. Sixteen essays in honor of Erik De
Corte (pp. 223–237). Oxford: Elsevier.
Schnurer, K. (2005). Kooperatives Lernen in virtuell-asynchronen Hochschulseminaren. Eine
Prozess-Produkt-Analyse des virtuellen Seminars “Einführung in das Wissensmanagement”
auf der Basis von Felddaten [Cooperative learning in virtual asynchronous university seminars. A process-product analysis of the virtual seminar “Introduction to Knowledge Management” on the basis of field data]. Berlin: Logos.
Shah, P. P., & Jehn, K. A. (1993). Do friends perform better than acquaintances? The interaction
of friendship, conflict and task. Group Decision and Negotiation, 2, 149–165.
Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal and
hyperpersonal interaction. Communication Research, 23(1), 3–43.
Wegner, D. M. (1986). Transactive memory: A contemporary analysis of group mind. In
B. Mullen & G. Goethals (Eds.), Theories of group behavior (pp. 185–208). New York:
Springer.
Wittenbaum, G. M., Hollingshead, A. B., & Botero, I. C. (2004). From cooperative to motivated
information sharing in groups: Moving beyond the hidden profile paradigm. Communication
Monographs, 71(3), 286–310.
Individual Versus Group Learning
as a Function of Task Complexity:
An Exploration into the Measurement
of Group Cognitive Load
Femke Kirschner1, Fred Paas1, and Paul A. Kirschner2
1 Educational Technology Expertise Center, Open University of the Netherlands, P.O. Box 2960, 6401 DL Heerlen, the Netherlands, femke.kirschner@ou.nl / fred.paas@ou.nl; 2 Research Centre Learning in Interaction, Utrecht University
Abstract The aim of this study is twofold: on the one hand, it is an empirical
study into the learning effectiveness of group versus individual learning as a function of task complexity; on the other hand, it is an exploration into the measurement of group cognitive load as a function of task complexity. The effects of individual versus group learning on retention and transfer test performance and mental
effort were investigated among 52 high school students performing mathematical
tasks. Applying cognitive load theory, groups were considered as information
processing systems in which group members, by communication and coordination
of information (i.e., transaction costs), can make use of each other’s working memory (WM) capacity.
It was hypothesized that, with low complexity tasks, group members would
achieve the same test performance, but with higher learning effort than individuals
because of the transaction costs. With high complexity tasks, group members were
expected to achieve a higher test performance with lower learning effort than individuals, because the transaction costs are minimal compared to the gain afforded
by a division of cognitive load. On an exploratory basis, it was investigated how
individual-level models can be used as a basis to understand group-level load.
Introduction
The aim of this study is twofold: on the one hand, it is aimed at studying group
versus individual learning as a function of task complexity; on the other hand, it is
aimed at taking a closer look at the development of a method to calculate the cognitive load of a group of collaborative learners by examining the amount of mental
effort invested by the individual group members and by the group as a whole.
This study considers groups as information processing systems consisting of
multiple working memories (WM). Consequently, it can be argued that groups
have effectively more processing capacity available than single individuals with
one WM. In a group, the cognitive load can be shared among group members, enabling them to deal with more complex problems than individuals. However, the cognitive load caused by communication and coordination within a group, the so-called transaction costs, has to be taken into account. In complex cognitive
tasks, these costs are minimal compared to the advantage of being able to share
the high cognitive load among group members. This distribution advantage was
found in a previous experiment comparing the effects of group and individual
learning of complex cognitive tasks on transfer efficiency (Kirschner, Paas, &
Kirschner, in press). By making use of each other’s processing capacity through
sharing of cognitive load imposed by a task, it was possible for group members to
more deeply process information elements, and construct higher quality schemata
in their long-term memory than learners working individually. Another situation
occurs with low complexity tasks in which a learner has sufficient capacity to
solve a problem individually. That is, solving the problem in collaboration, in
terms of experiencing cognitive load, does not have an advantage for an individual
group member and can even be disadvantageous. This is so because of the relatively high load caused by the transaction costs within the group. Indeed, research
comparing groups to individuals when performing relatively simple recall tasks
shows that working in a group can be detrimental (Weldon & Bellinger, 1997).
Although groups in all cases outperform individuals in the total number of items recalled, comparing the number of items recalled per group member with the number recalled by an individual working alone shows that working in a group hampers the performance of the individual group member.
It was therefore hypothesized that with low complexity tasks, group members
would have to invest more mental effort in learning to achieve the same test performance as individual learners, because of the relative transaction costs. With
high complexity tasks, it was hypothesized that group members could achieve a
higher test performance with lower learning mental effort investment than individuals, because the transaction costs are minimal compared to the gain afforded
by a division of cognitive load.
Whereas valid and reliable instruments have been developed in the context of
individual learning (Paas, Tuovinen, Tabbers, & Van Gerven, 2003), there are no
standard methods to determine the cognitive load experienced by groups of collaborating learners. It is not clear if and how these individual measurements can be
used to get a reliable estimate of the group’s cognitive load – in other words,
whether an individual-level model can be used as a basis to understand group-level load. Individual cognitive load measurements represent the load that a specific instructional method imposes on the limited cognitive system of a learner.
This load can be anywhere between very low and very high, depending on the characteristics of the learner (e.g., age and expertise) and the characteristics of the instructional method (e.g., task format and task complexity). Determining individual
cognitive load can be done using a variety of psychological, task- and performance-based, and subjective measurements – all of which have been tested for reliability and validity (see Paas et al., 2003). For measuring group cognitive load,
however, such instruments are not available. It is therefore unclear as to how individually based measurements can be used to determine the cognitive load of a
group of collaborative learners. The individual subjective rating scale developed
by Paas (1992) is based on the assumption that students are able to introspect on
their cognitive processes and can report how much effort it took them to solve a
problem. This rating scale has been shown to be valid, reliable, and non-intrusive, and
has been used in many studies dealing with cognitive load, providing the opportunity to compare results between studies. In the present study, this rating scale was
used to obtain an indication of: (a) group cognitive load by looking at the average
of individual group member effort scores, (b) individual group member scores of
the effort it took the group as a whole, and (c) a single effort score that was judged
collectively by the group. The goal of this part of the study was to explore the impact of task complexity on the amount of cognitive load people experience in a
group and how different measurements can be used to measure group cognitive
load.
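The three group-load indicators described above can be stated concretely. The following is a minimal sketch under the stated assumptions; the function name and the example ratings are illustrative, not from the study.

```python
def group_load_indicators(own_effort, group_effort_ratings, joint_score):
    """Three indications of group cognitive load from Paas-scale (1-9) ratings.

    own_effort:           each member's rating of his or her own invested effort
    group_effort_ratings: each member's rating of the effort the group as a
                          whole invested
    joint_score:          the single score the group agreed on collectively
    """
    mean_individual = sum(own_effort) / len(own_effort)  # indicator (a)
    mean_group_rating = sum(group_effort_ratings) / len(group_effort_ratings)  # (b)
    return mean_individual, mean_group_rating, joint_score  # (c) passed through

# Hypothetical ratings of a three-member group:
a, b, c = group_load_indicators([6, 7, 8], [5, 5, 5], 6)
```

Comparing the three values across task-complexity levels is one way to explore how far the individual-level model carries over to the group level.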
Method
Participants
Participants were 52 second-year Dutch high school students with an average age
of 14 years. They participated in the experiment as part of their math curriculum
and did not receive any academic or financial compensation. Prior knowledge on
math-related subjects was assumed to be the same for all participants, for they all
had followed exactly the same math courses during the last 2 years. The students
were assumed to be novices on the topic of surface calculation for they were only
instructed on how to calculate rectangle surface areas but did not have any prior
knowledge concerning the calculation of surface areas of triangles and circles.
Materials
All materials were in the domain of mathematics and concerned the calculation of
geometrical surface areas; namely that of the triangle and the circle. Materials
were designed for this investigation, consisting of: (a) an introduction on how to
calculate geometrical surface areas, (b) learning tasks in which solving geometrical
surface calculation problems was the goal, and (c) retention and transfer test tasks
on geometrical surface calculation. All materials were paper based.
Introduction
The introduction was based on three subjects or geometrical figures: rectangles,
circles and triangles. For every geometrical figure, the theory behind calculating
the surface area, as well as a worked-out example of how to use this theory when solving a surface calculation problem, were the core of the introduction. The theory, in all three instructions, consisted of an insight into the relevant formulas and
shapes of the geometrical figures. The three geometrical figures were treated separately in the order of rectangle, triangle, and circle. In this way, students started
with known information to activate their prior knowledge, subsequently extending
their knowledge by studying unknown information. The introduction was paper
based but also discussed in class by the math teacher.
Learning Tasks
Learning tasks were of low, medium and high complexity. For each of these three
levels of task complexity, two tasks in the domain of mathematics were developed. In this way, three tasks focused on the calculation of surface areas of triangles, and three tasks on the calculation of surface areas of circles. Task complexity
or intrinsic cognitive load was determined by using Sweller and Chandler’s (1994)
method based on the number of interactive elements in a task and the insight necessary for solving the problem. The tasks were structured in such a way that transaction costs of communication and coordination were kept to a minimum and the
information elements could be divided among the members of the group.
Test Tasks
Eight test tasks were designed to determine how much students had learned. Half
of these tasks were based on surface calculation of circles and half on triangles.
A distinction was made between retention and transfer test tasks, such that four of the
tasks (two circle and two triangle) were identical in structure to the ones performed in the learning phase; these were the retention tasks. Four of the tasks (two
circle and two triangle) were structurally different from the ones performed in the
learning phase; but to solve these problems, the same underlying theory on surface
calculation had to be used.
Cognitive-Load Measurement
To measure the participants’ cognitive load after each task in the learning and test
phase, the subjective 9-point cognitive-load rating scale developed by Paas (1992)
was used.
Performance Measurement
Solving learning and test tasks meant correctly calculating the surface area of a
geometrical figure. One point was awarded for a correct answer and zero points
for an incorrect answer. In the learning phase, this meant that a minimum score of
zero and a maximum score of three points could be earned; in the test phase, the
minimum score was again zero and the maximum eight points. For the statistical
analysis, the performance scores on retention and transfer were transformed into
proportions.
Design and Procedure
All students received written instruction on how to calculate the surface areas of
rectangles, circles and triangles two days prior to the learning tasks. During this
instruction phase, participants had seven minutes to study each geometrical figure
by themselves. Then, the teacher had seven minutes to discuss the theory and a
worked-out example in class and to answer clarification questions asked by
the students. The total instruction took 50 minutes after which the participants had
to hand in the written instructions to the teacher. In the learning phase, because of
the within subject design of this study, every participant, at one point, worked on
the learning tasks individually as well as in a group. For each participant, the order
of individual and group work was counterbalanced, as was the task subject a participant started with (i.e., circles or triangles). At the beginning of the learning phase,
participants were randomly assigned to the individual or group condition, which
meant that twenty-one participants started to work individually on three tasks of
three different complexity levels and then worked in triads on three other tasks at
these three complexity levels. Twenty-one other participants started to work in triads on these problems and then worked individually. If a participant first, individually or in a group, worked on the calculation of the surface area of a triangle,
the second time, being in the individual or group condition, the geometrical figure
was a circle. If a participant, individually or in a group, worked on the calculation
of the surface area of a circle, the second time, being in the individual or group
condition, the geometrical figure was a triangle. The participants had to study and
solve each problem and rate their cognitive load on the mental effort rating scale:
the individual scale (Paas, 1992). On the same scale, group members additionally
had to rate the amount of mental effort they had invested to arrive at the solution together: the group member scale. In addition, each group had to give a single score for the joint mental effort needed to reach the solution: the group scale. In the test phase, all participants had to work individually on four retention
and four transfer tasks; this phase was held one day after the learning phase and
took 50 minutes in total. Again, after each test task, the participants had to rate
their mental effort on the mental effort rating scale.
Results
Because data analysis is still in progress, the results reported here are preliminary.
Learning Phase
A 2 (learning condition: individual vs. group) × 3 (task complexity: low, medium,
high) ANOVA with repeated measures on both factors was used to analyze the
data obtained during the learning phase. With regard to performance, the ANOVA revealed main effects of learning condition, F(1, 48) = 4.811, MSE = 1.253, p < .05, and task complexity, F(2, 48) = 18.606, MSE = 3.055, p < .001, as well as a significant interaction between learning condition and task complexity, F(2, 48) = 3.792, MSE = 0.610, p < .05. The interaction indicated that groups performed better than individuals particularly on the medium-complexity tasks. With regard to mental effort, the ANOVA revealed main effects of learning condition, F(1, 49) = 12.810, MSE = 40.412, p < .001, and task complexity, F(2, 49) = 63.384, MSE = 175.847, p < .001, but no significant interaction between learning condition and task complexity, F(2, 49) = 6.790, ns. These results indicate that at all three complexity levels, group members reported a lower mean mental effort than individuals.
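As an illustration only, the condition × complexity interaction can be inspected by comparing cell means. The proportion scores below are invented, not the study's data; they merely mimic the reported pattern of a larger group advantage at medium complexity:

```python
# Inspecting a 2 (condition) x 3 (complexity) interaction via cell means.
from statistics import mean

# performance proportions keyed by (condition, complexity); invented data
scores = {
    ("individual", "low"): [0.9, 0.8, 1.0],
    ("individual", "medium"): [0.4, 0.5, 0.3],
    ("individual", "high"): [0.1, 0.0, 0.2],
    ("group", "low"): [0.9, 0.9, 0.8],
    ("group", "medium"): [0.7, 0.8, 0.6],
    ("group", "high"): [0.2, 0.1, 0.1],
}
cell_means = {cell: mean(vals) for cell, vals in scores.items()}

# The group advantage at each complexity level; in this invented data the
# difference is largest at "medium", mirroring the interaction described above.
advantage = {c: cell_means[("group", c)] - cell_means[("individual", c)]
             for c in ("low", "medium", "high")}
```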
Test Phase
No significant effects were found in the test phase with regard to performance and
mental effort. Performance efficiency was calculated for the transfer tests using Paas and van Merriënboer’s (1993; see Van Gog & Paas, 2008) computational approach, by standardizing each participant’s test performance score and the mental effort invested in the learning phase. For this purpose, the grand mean was subtracted from each score and the result was divided by the overall standard deviation, which yielded z-scores for effort (R) and performance (P). Finally, a performance efficiency score, E, was computed for each participant using the formula E = (P − R)/√2. High efficiency was indicated by a relatively high test performance in combination with a relatively low mental-effort rating. In contrast, low efficiency was indicated by a relatively low test performance in combination with a relatively high mental-effort rating. With regard to efficiency, the analysis revealed a significant effect of condition, F(1, 44) = 11.106, MSE = 6.744, p < .005. Group learning was more efficient than individual learning, as indicated by a more favorable relationship between learning mental effort and test performance.
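The standardization and efficiency computation can be sketched as follows. The data are invented, and the use of the population standard deviation is an assumption; the original computation may standardize slightly differently:

```python
# Sketch of the Paas and van Merrienboer (1993) efficiency measure:
# z-standardize performance (P) and effort (R), then E = (P - R) / sqrt(2).
import math
from statistics import mean, pstdev

def z_scores(values):
    """Standardize values against their own mean and (population) SD."""
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

def efficiency(performance, effort):
    zp, zr = z_scores(performance), z_scores(effort)
    return [(p - r) / math.sqrt(2) for p, r in zip(zp, zr)]

perf = [0.75, 0.50, 0.875, 0.25]  # test performance (proportions), invented
eff = [6.0, 7.0, 4.0, 8.0]        # learning-phase mental effort ratings, invented
E = efficiency(perf, eff)
# Positive E: relatively high performance achieved with relatively low effort.
```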
Measurements of group cognitive load as a function of task complexity in the learning phase were analyzed, using only the group-condition data, in a 3 (type of mental effort scale: individual scale, group member scale, group scale) × 3 (task complexity: low, medium, high) ANOVA with repeated measures on both factors. With regard to the mental effort score, the ANOVA revealed a main effect of task complexity, F(2, 48) = 56.132, MSE = 438.177, p < .001, but not of type of mental effort scale, F(2, 48) = 2.513, ns. It also did not reveal a significant interaction between task complexity and type of mental effort scale, F(2, 49) = 1.432, ns. These results indicate that the higher the complexity level, the higher the mental-effort score on all three mental effort measurements. Comparing the group member scale with the group scale in a 2 (type of mental effort scale: group member scale, group scale) × 3 (task complexity: low, medium, high) ANOVA with repeated measures on both factors revealed only a main effect of task complexity, F(2, 49) = 58.313, MSE = 314.678, p < .001. Comparing the individual scale with the group member scale in a 2 (type of mental effort scale: individual scale, group member scale) × 3 (task complexity: low, medium, high) ANOVA with repeated measures on both factors revealed a main effect of task complexity, F(2, 48) = 44.930, MSE = 253.503, p < .001, as well as of type of mental effort scale, F(1, 48) = 4.047, MSE = 2.920, p = .05. This indicates that the higher the complexity level, the higher the mental effort score, and that the mental effort score is significantly higher on the individual scale than on the group member scale. The task complexity × mental effort scale interaction was not significant but showed a trend, F(2, 48) = 2.692, MSE = 1.299, p = .075.
Discussion
The goal of this study was twofold. First, it was hypothesized that on low-complexity tasks, group members would achieve the same test performance as individuals but with higher learning effort, because of the transaction costs. On high-complexity tasks, group members were expected to achieve a higher test performance with lower learning effort than individuals, because the transaction costs are minimal compared to the gain afforded by a division of cognitive load. Secondly, it was investigated how individual-level models of measuring cognitive load can be used as a basis for understanding group-level load.
The hypothesis that group members would achieve the same test performance as individuals on low-complexity tasks, and a higher test performance on high-complexity tasks, was confirmed by a significant interaction between condition and task complexity. Performance on tasks of medium complexity was particularly enhanced. That the highest-complexity tasks did not enhance performance as much as expected could be explained by a floor effect: these tasks were simply too difficult to complete, for groups and individuals alike. The hypothesis that group members would rate a higher mental effort than individuals when working on low-complexity tasks and a lower mental effort when working on high-complexity tasks was not confirmed. The results show that, independent of task complexity, group members rated a lower mental effort.
The exploratory part of the study, which was concerned with a way to measure group cognitive load, shows that the mental effort rating scale used at the individual level can be transformed in such a way that it measures group cognitive load. All three scales, the individual scale, the group member scale, and the group scale, are sensitive to task complexity: the higher the complexity of the task, the higher the rated cognitive load. When comparing the individual scale to the group member scale, the results show that the mental effort scores on the individual scale are higher at all complexity levels.
References
Kirschner, F. C., Paas, F., & Kirschner, P. A. (in press). Individual and group-based learning
from complex cognitive tasks: Effects on retention and transfer efficiency. Computers in
Human Behavior.
Paas, F. (1992). Training strategies for attaining transfer of problem-solving skill in statistics: A
cognitive load approach. Journal of Educational Psychology, 84, 429–434.
Paas, F. G., & van Merriënboer, J. J. (1993). The efficiency of instructional conditions: An approach to combine mental effort and performance measures. Human Factors, 35(4),
737–743.
Paas, F., Tuovinen, J. E., Tabbers, H., & Van Gerven, P. W. M. (2003). Cognitive load
measurement as a means to advance cognitive load theory. Educational Psychologist, 38,
63–71.
Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition &
Instruction, 12, 185–233.
Van Gog, T., & Paas, F. (2008). Instructional efficiency: Revisiting the original construct in
educational research. Educational Psychologist, 43, 1–11.
Weldon, M. S., & Bellinger, K. D. (1997). Collective memory: Collaborative and individual
processes in remembering. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 23, 1160–1175.
Mentored Innovation in Teacher Training
Using Two Virtual Collaborative
Learning Environments
Andrea Kárpáti¹ and Helga Dorner²
¹Eötvös Loránd University, karpatian@t-online.hu
²Central European University, Hungary, dornerh@ceu.hu
Abstract A classic and a new CSCLE were tested in a mentored innovation setting. Mentoring procedure and learner satisfaction, as well as ICT competence and
educational strategies, were tested to identify the role of the VLE and the mentor
in the success of the training process. Both VLEs were found satisfactory by users.
However, the level of acquisition of ICT-supported educational methods, as revealed in lesson plans and self-developed digital teaching resources produced by learners individually and in pairs or groups, differed in quality. Mentoring
was identified as a key factor of success in the in-service training process.
The Calibrate Project – Developing an Educational Content
Repository Through Cooperation of Knowledge-Building
Communities
The major aim of the Calibrate Learning Resources For Schools Project (October
2005–March 2008, www.calibrate.eun.org), supported by the Information Society
Technologies (IST) program of the European Union and co-ordinated by the
European Schoolnet (EUN), was to support the collaborative use and exchange of
learning resources in schools. In the piloting phase coordinated by Eötvös University, Budapest (ELTE), 2401 teachers from 7 countries accessed resources in a
federation of learning repositories filled with open source digital learning materials by the Ministries of Education of the participating countries. The pilot was also
strategically important as it provided user comments on functionalities of the
European Learning Resource Exchange (LRE) launched by EUN in 2006. Thus,
teachers participating in the project were able to search for international learning
resources furnished with English tags and made available through a network of
linked repositories via the Calibrate portal. Also included in the project was
the new LeMill community platform for teachers (www.lemill.net) developed
in co-operation between the universities of Helsinki, Finland, and Tallinn, Estonia. This collaborative learning environment allows teachers and pupils to develop community-driven learning content collections and to carry out collaborative evaluation, adaptation and learning activities, using both content developed by the schools themselves and resources found through the Calibrate system.
The Mentored Innovation Model in In-Service
Teacher Education
CERI, the educational research institute of the Organisation for Economic Co-operation and Development (OECD), commissioned an extensive study to investigate whether and how information and communication technologies (ICT) resulted in
changes in the quality of teaching and learning in public education. “ICT and the
Quality of Learning” (1999–2001) involved researchers from 23 member and allied countries. As part of the project, school-based case studies were carried out that
evaluated the functioning of schools incorporating ICT in education, internal and
external communication and management.
It turned out that the majority of high-level ICT users were well-equipped, innovative schools with high-SES students. Altogether, 91 cases were documented in 23 countries using an anthropological, qualitative approach, including
structured interviews, observation of teaching and extracurricular activities and
analysis of students’ ICT-related work. Schools were revisited after 6, 12 and 18
months to see how educational change due to the introduction of ICT culture prevailed (cf. Venezky & Kárpáti, 2004). Hungary, the only Eastern European
participant in the project, produced substantial improvement in teachers’ ICT
competence and student learning through the mentored innovation model that
involved coaching of teachers for innovative ICT use in discipline-based groups.
As a result, a series of manuals and DVDs was published (Kárpáti, 2001–2005)
to document innovative teaching practices that were used in many in-service
courses around the country. However, results of ICT inclusion in education by
course participants differed considerably as the efficiency of mentoring seemed
to vary beyond all expectations.
Therefore, in 2004, Hungary joined the European Pedagogical ICT Licence
(EPICT) Project supported by the IST call of the EU to standardise and assess an
in-service teacher training course based on the collaboration of teams guided by mentors experienced both in ICT use and in contemporary, discipline-based teaching and
learning methods. Results of the participant competence development study suggested that one of the key factors in competence enhancement is the quality of
mentoring. Neither previous ICT experiences, nor the age or sex of participants influenced results as much as their mentor’s work. This study focused on personality
traits of successful, average and reluctant ICT using teachers (Kárpáti, Török, &
Szirmai, to appear) but mentor types were also identified and their groups’ success
and failure contrasted with methods of facilitation (Tartsay & Kárpáti, 2007). A more in-depth study of mentoring discourse – documented in the e-learning environment Moodle used for EPICT courses – became necessary to improve the quality of training provided and to enhance the retention of ICT-supported methods acquired by teachers.
The study reported here is a follow-up of previous research on the mentored innovation model, in which innovation occurs through scaffolding in discipline-based groups. Instead of only evaluating digital learning materials, national teacher groups also elaborated and evaluated the educational methods best suited to the resources located. The best lesson plans (LPs) – first showcased on national project web sites and in LeMill communities – were uploaded at the end of Phase 1 as examples of ICT use. Teachers downloaded these LPs and used them as inspiration for similar pedagogical programs, followed their links to resources and made use of them within an entirely different educational paradigm, or read them through and opted not to try them at all when the learning process described or the tools showcased seemed too complex or obscure.
Thus, collaborative learning was related to learning theory as well as to pedagogical theory, or even pedagogical strategy. Teachers learnt from each other because they tried and tested, criticised, adapted or adopted resources retrieved from the LRE, which triggered specific learning mechanisms. When mentoring is successful, individual cognition is not suppressed by peer interaction; on the contrary, interaction among educators of different disciplines in the national groups generated extra activities (explanation, disagreement, mutual regulation) which trigger extra cognitive mechanisms (knowledge elicitation, internalisation, reduced cognitive load) (Dillenbourg, 1999a, 1999b).
Mentored Guidance Through Innovation: Evaluation
of Discourse of In-Service Teacher Groups and Their
Facilitators During E-Training Courses
In Calibrate, members of national validation teams from Austria, Belgium – Flanders, the Czech Republic, Estonia, Hungary, Lithuania and Poland interacted with
each other and also with LRE and LeMill developers (who were also involved in
developing the FLE3) in order to optimise tool functionality. Evaluation and validation processes in Calibrate were interrelated to provide a school-based, authentic assessment of the introduction, acceptance and use of a new set of digital tools
for teaching and learning. Both evaluation and validation in Calibrate are based on the assumption that the most important agents of change in education are teachers. International use and further development of the Calibrate tools depend on teacher motivation, readiness and preparedness for use much more than on any other variables (such as infrastructure, quality of student performance, educational policy, etc.). We also believed that the newly developed collaborative
platform LeMill, together with the first-generation VLE FLE3, would be an excellent setting for mentored innovation, as it offers a variety of tools for scaffolding – a process by which a more experienced person supplies supporting structures or simplifies a situation or task in a way that allows a less experienced one to solve complex problems that would otherwise be beyond the latter’s capability. Scaffolding may also be created by computer tools that structure inquirers’ activities in a way that facilitates complex problem solving.
Collaborative knowledge building in Calibrate is related to learning that occurs
in informal settings in a Virtual Learning Environment (VLE) that supports situated cognition and situated learning in knowledge building communities. In its
ideal form, the collaboration involves the mutual engagement of learners in a coordinated effort to solve a problem together or to acquire new knowledge together
(Lehtinen, Hakkarainen, Lipponen, Rahikainen, & Muukkonen, 1999). LeMill and
FLE3 activities were also related to views of educational communication in terms
of conceptual learning conversations (Pea, 1993), cooperative learning, cognitive
apprenticeship (Collins, Brown, & Newman, 1989), communities of learning, and
knowledge-building communities (Scardamalia, 2002). In these collaborative
learning models, mature communities of practitioners participate in inquiries at the
frontiers of knowledge. Their activities with their National Validation Moderators
or trainers/mentors during the process can be characterised as transformative communication for learning.
Sample – A Case from Hungary
In the first phase of the implementation and evaluation of international learning resources (March–May 2007), 20 Hungarian in-service teachers worked in collaboration with their colleagues, pupils, facilitators and educational researchers within the framework of introducing the European Learning Resource Exchange. The community of in-service teachers searched and evaluated this repository and identified learning objects (LOs, simple elements to be used flexibly in different cultural contexts) and learning assets (complex, curriculum-related learning materials that may contain cultural characteristics to be adapted or accepted) useful for teaching practice. They collaborated in small domain- and subject-specific groups (Mathematics, Science, Humanities and Foreign Languages), creating knowledge-building communities in a technology-supported environment.
The design work and the implementation of learning resources within the
small-group collaborations required highly reflective behavior related to the
teachers’ traditional pedagogical practices, which included providing constructive
feedback and intensive engagement in the long-term co-development of learning resources. The collaborative processes were facilitated and moderated by
e-moderators (Salmon, 2000) – one in each of the domain-specific groups.
They provided professional mentoring, i.e., scaffolding the knowledge creation
of teachers by peers, e-moderators or facilitators in an e-learning environment to
support innovative practices. The mentored innovation model relied on the three
basic constituents of the online community of inquiry: cognitive, social and teaching presence (Garrison, Anderson, & Archer, 2000).
The presented scenario was hosted in the virtual learning environment (VLE) FLE3, which was used for sharing knowledge, pedagogical practices, and adapted or self-developed contents. The VLE played a crucial role both in the collaborative processes and in the mentoring events, since it served as an appropriate platform
of learning and sharing ideas, materials and practices within the community that
consisted of in-service teachers located in different parts of the country. Thus,
communication and exchange of information was carried out exclusively in this
platform in the form of collaborative knowledge-building discourse (Scardamalia
& Bereiter, 1994). Moreover, besides flexibly mediating the collaborative work, the VLE proved to be an effective tool for supporting the research and data analysis processes.
Survey Instruments and Hypothesis
The mentored innovation model provided professional scaffolding in the process of developing community-driven learning content repositories and in carrying out collaborative learning activities using both content developed by the schools (teachers) themselves and resources found through the Calibrate system. By applying the Participant Satisfaction and Communication Questionnaire (Dorner, 2007), we intended to investigate the participating in-service teachers’ satisfaction with the mentored innovation model, and to identify the role of the VLE and the facilitator in the success of the training process and in the participants’ self-perceived development. According to our hypothesis, mentoring is a key factor of success in the in-service training process, and mentoring failure (lack of scaffolding, lack of effective online communication within the community, etc.) can lead to a lack of collaboration and a high number of drop-outs from virtual courses.
Prior to developing the questionnaire, a survey of the literature on the evaluation of online mentoring models and participant satisfaction was carried out. We adapted those items that were considered relevant to the presented pedagogical scenario: items eliciting respondents’ ratings of their satisfaction with the facilitators’ performance, their perceptions of the facilitators’ and their group members’ social presence, and their perceptions of the interactions in the VLE during the collaboration and the mentoring events.
Based on the survey of the relevant literature, the questionnaire concentrated on the following constituents of the mentoring model: participants’ global satisfaction, the facilitators’ role, online communication in the VLE and the participants’ perceived social presence. Respondents were asked to consider their ratings in the context of the online mentoring model and rate their agreement (on a 4-point Likert scale) with statements concerning the above-mentioned variables.
Results and Discussion
Satisfaction regarding the mentored innovation model was explored by relying on
the perceived (subjective) values provided by the participating respondents. The
questionnaire surveyed the participants’ satisfaction with the above-named four constituents of the mentoring model. However, the sub-components of these constituents do not influence participant satisfaction to the same extent. Thus, instead of relying on statistical means and normal distributions for further analyses, we used multiple regression analysis to depict the perceived importance of the constituents and of their sub-components that have an assumed impact on participant satisfaction. During the analyses, both dependent variables that quantify the respondents’ perception of the constituents of the mentoring model and independent variables were created. In the first phase of the regression analysis we focused on investigating the extent to which the independent variables affect the dependent variable.
The following procedure was carried out for all four constituents of the model. The 4-point ratings were converted to a 0–100 scale in order to yield single scores for each variable. Regression analyses were computed and significant items were identified, together with their respective importance values. On the basis of the importance values, global indexes were calculated for the four constituents; in the second phase of the regression analyses we employed these indexes.
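The two preprocessing steps can be sketched as follows. The linear mapping 1 → 0, 4 → 100 and the data are assumptions for illustration, and a single-predictor regression stands in for the multiple regression used in the study:

```python
# (1) Rescale 4-point Likert ratings to 0-100; (2) fit a least-squares
# regression of a satisfaction score on the rescaled item ratings.
from statistics import mean

def rescale(rating):
    """Map a 1-4 Likert rating onto a 0-100 scale (assumed linear mapping)."""
    return (rating - 1) / 3 * 100

def slope_and_intercept(x, y):
    """Simple least-squares regression of y on x (one predictor)."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return b, my - b * mx

item = [rescale(r) for r in [1, 2, 4, 3, 2]]   # one questionnaire item, invented
satisfaction = [40.0, 55.0, 90.0, 70.0, 60.0]  # global satisfaction, invented
b, a = slope_and_intercept(item, satisfaction)
# The slope b plays the role of an "importance value" for the item.
```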
In-Service Teachers’ Global Satisfaction
We found three variables to have a significant impact on the in-service teachers’ global satisfaction with the whole mentoring process and the collaborative online work in the VLE (Fig. 1): the benefits gained (affective rather than cognitive in nature; imp. 0.30), the usefulness of the experience gained by participating in the mentored innovation model (imp. 0.22), and the quality of learning (imp. 0.19) (residual 29%). The participants were most satisfied with the quality of learning (the pedagogical innovation transmitted by the project) that took place in the VLE. The two variables describing the affective dimension, focusing on the experience and benefits gained in the collaboration and mentoring events, were considered important almost to the same extent, and the participants rated their satisfaction with these two constituents with the same values.
Fig. 1. Global satisfaction with the mentoring process and the online collaborative work (usefulness of the experience: 69; benefits gained: 70; quality of learning: 72; course global index: 70).
The Facilitator’s Role
Regarding the evaluation of the facilitators’ role, two variables showed a significant impact: the feedback provided by the facilitator, which contributed to the self-perceived knowledge advancement (imp. 0.25), and the help offered by the facilitator (imp. 0.40) (residual: 35%). As Narciss (1999) argues, feedback has a complex influence on students (course participants), not just with regard to their information processing in the learning process but also with regard to their motivation. In our project, in the case of the in-service teachers, the strong impact of giving feedback was supported; however, the aspect of professional scaffolding (help provided by the facilitator) was also added. Thus, the feedback provided by the facilitator on the participants’ activity in the VLE within the mentoring process proved to be just as important as the constant help (i.e., the professional scaffolding) she offered. As regards the participants’ satisfaction with the role of the facilitator, both variables were rated with the same values, i.e., the participants were equally satisfied with the feedback provided by the facilitator and with the professional scaffolding she offered (Fig. 2).
Fig. 2. Satisfaction with the activity of the facilitator (scaffolding provided by the facilitator: 78; feedback from the facilitator: 78; facilitator global index: 73).
Social Presence
With respect to perceived social presence – the so-called “illusion of nonmediation” (Lombard & Ditton, 1997) – two variables proved to be significant: the participants’ point of view was acknowledged by the facilitator (imp. 0.16), and distinct impressions of the group members were created (imp. 0.09). As Fig. 3 shows, the residual percentage in the case of social presence is high indeed: 75%. However, this can be considered normal from a research methodological point of view, since little is known about the characteristics of the form, the content and the effects of social presence, as articulated by Lombard and Ditton (1997).
The “illusion of nonmediation” occurs when a person fails to realise the existence of a medium in her communication and interacts as if it were not there. What is highly important in this context is that in a VLE the concept of presence manifests itself through the interactions among the participants and the instructor, and is thus a social phenomenon. Accordingly, this variable needs to be further elaborated, since the participants’ and the facilitators’ perceived social presence (which can be grasped and made visible in the form of online interactions within the mentoring process in the VLE) is crucial to the success of online pedagogical scenarios. As Picciano (2002) puts it, “students who feel that they are part of a group or ‘present’ in a community will wish to participate actively in group and community activities” (p. 24).
Fig. 3. Satisfaction with social presence (distinct impressions of course participants: 73; point of view acknowledged: 81; social presence global index: 27).
Online Communication in the Mentored Innovation Model
In contrast to social presence, the residual part of the online communication constituent of the mentored innovation model showed a relatively low value (22%). The following three variables proved to have a significant effect on the participants’ satisfaction with the online communication: feeling comfortable with participating in the online knowledge-building discussions (imp. 0.34), individual opinions being acknowledged by group members (imp. 0.27), and feeling comfortable conversing with the facilitator through the online surface (imp. 0.17). As shown in Fig. 4, all the variables were rated with high scores by the in-service teachers; out of the three, however, the acknowledgement of their opinions proved to be a very important one. This again should be linked to the activity of the facilitator in the mentored innovation model: feedback and scaffolding are complemented by the mentor’s openness towards the participants’ knowledge and teaching practices, and she is expected to collaborate with the in-service teachers as a community of professionals rather than treat them simply as subjects, with mainly receptive skills, in the process of pedagogical innovation. Out of the four constituents of the mentored innovation model that were identified as the indispensable ingredients of an effective mentoring model, the online communication constituent proved to be the strongest one (Fig. 5). The second strongest element within the model in the presented sample was the activity of the facilitator. Accordingly, the community of the in-service teachers was most satisfied with the vibrancy of discussions and interactions in the VLE, as facilitated by the e-moderators.
Fig. 4. Satisfaction with the online communication (participating in online discussions: 75; opinions acknowledged: 78; communication through the VLE: 74; online communication global index: 83).
Fig. 5. Global indexes of the participants’ satisfaction with the constituents of the mentored innovation model (course global index: 64; role of the facilitator: 73; social presence: 27; online communication: 74).
Conclusion
The results of this study support our hypothesis about the role of effective mentoring. Teachers’ competence development is significantly influenced by the activities of the facilitator and shaped by the online communication in the VLE, which are the key factors of success in the training process.
With the survey reported in this paper, we revealed the constituents of the participating in-service teachers’ satisfaction with the mentored innovation model, focusing specifically on four major issues: global satisfaction, the activity of the facilitator, social presence and online communication. With statistical analyses (identifying statistically significant values), we identified sub-components of these four basic constituents of the mentoring model. We found that in the in-service training case presented here, online communication facilitated and moderated by a mentor in the online collaborations was of crucial importance, and the online interactions (equally moderated) served as indicators of its success and as a basis for the participating in-service teachers’ satisfaction.
The results of the survey revealed that the pedagogical role (Berge, 1995) or instructor role (Hootstein, 2002) of the facilitator (as a consultant, guide and resource provider) was highly relevant for this population. However, the social director role (Berge, 1995; Hootstein, 2002), which involved the establishment and facilitation of personal relationships within the collaborating community, was also identified; indicators of it were pinpointed in the online communication and social presence constituents through the complex data analysis.
Similar to the participants’ satisfaction with the activity and roles of the facilitator, the online communication in the VLE was also considered highly valuable and significant. Through the variable “individual opinions acknowledged by group members”, part of the online communication constituent, the social presence constituent became tangible to a limited extent (further, more specific sub-components or variables of the social presence constituent still need to be investigated and identified, though). Although the participants managed to form distinct impressions of each other and felt that their opinions were acknowledged both by the facilitator and their peers, a fundamental part of the social presence element has not yet been made visible. What does point in this direction is the significant sub-component of the online communication constituent, namely that the in-service teachers felt comfortable conversing through the medium (in our case, the VLE). Thus, the “illusion of non-mediation”, as Lombard and Ditton (1997) characterised the concept of social presence, i.e. interacting as if we were not using a digital medium to transmit information, was successfully maintained.
The role of the mentor in the mentored innovation process during the Calibrate project was also clearly identified (Kárpáti, Hunya, Lakatosné, & Török, 2008). In the Calibrate project as a whole, national teacher groups that received longer and more substantial training in the use of innovative tools and international content in the LRE, and in the VLEs FLE3 and LeMill, were not necessarily more successful in technical solutions, but manifested more frequent use of, and stronger motivation for, collaboration with peers to improve and enrich the European Learning Resource Exchange.
References
Berge, Z. L. (1995). Facilitating computer conferencing: Recommendations from the field.
Educational Technology, 35(1), 22–30.
Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the craft of reading, writing and mathematics. In L. B. Resnick (Ed.), Knowing, learning and instruction: Essays in honor of Robert Glaser (pp. 453–494). Hillsdale, NJ: Erlbaum.
Dillenbourg, P. (1999). Introduction: What do you mean by “collaborative learning”? In P. Dillenbourg (Ed.), Collaborative learning: Cognitive and computational approaches (pp. 1–19). Amsterdam: Pergamon, Elsevier Science. http://tecfa.unige.ch/tecfa/publicat/dil-papers-2/Dil.7.1.14.pdf
Dorner, H. (2007). The role of e-mail communication in fostering knowledge creation in a teacher training course designed in a collaborative learning environment. Paper presented at the 12th Biennial Conference for Research on Learning and Instruction (EARLI), Aug 28 – Sept 1, Budapest, Hungary.
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2–3), 87–105.
Hootstein, E. (2002). Wearing four pairs of shoes: The roles of e-learning facilitators. In
G. Richards (Ed.), Proceedings of world conference on e-learning in corporate, government,
healthcare, and higher education 2002 (pp. 457–462). Chesapeake, VA: AACE.
Kárpáti, A. (Ed.) (2001–2005). Information technologies in education. A series of manuals and
educational CD-ROMs. Budapest: Nemzeti Tankönyvkiadó. (In Hungarian)
Kárpáti, A., Török, B., & Szirmai, A. (to appear). The effects of personality traits and ICT skills
on changes in teaching style of experienced educators. In P. Nicholson & A. McDougall
(Eds.), Current and future issues in research into ICT and education. Berlin: Springer.
Kárpáti, A., Hunya, M., Lakatosné, T. E., & Török, B. (2008). Evaluation results of the Calibrate portal for the European Learning Resource Exchange. Paper presented at the 9th E-Learning Forum, 13–14 June 2008.
Lehtinen, E., Hakkarainen, K., Lipponen, L., Rahikainen, M., & Muukkonen, H. (1999). Computer supported collaborative learning: A review. The J.H.G.I. Giesbers Reports on Education, Number 10. Department of Educational Sciences, University of Nijmegen. Digital version: http://etu.utu.fi/papers/clnet/clnetreport.html
Lombard, M., & Ditton, T. (1997). At the heart of it all: The concept of presence. Journal of Computer-Mediated Communication, 3(2). Retrieved January 2006 from http://www.ascusc.org/jcmc/vol3/issue2/lombard.html
Narciss, S. (1999). Motivational effects of the informativeness of feedback. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Quebec, Canada.
Pea, R. D. (1993). Seeing what we build together: Distributed multimedia learning environments for transformative communications. Journal of the Learning Sciences, 3(3), 285–299.
Picciano, A. G. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1). Retrieved January 6, 2006 from http://www.sloan-c.org/publications/jaln/v6n1/pdf/v6n1_picciano.pdf
Salmon, G. (2000). E-moderating: The key to teaching and learning online. London: Kogan Page.
Scardamalia, M., & Bereiter, C. (1994). Computer support for knowledge building communities.
The Journal of the Learning Sciences, 3(3), 265–283.
Scardamalia, M. (2002). Collective cognitive responsibility. In B. Jones (Ed.), Liberal education in the knowledge age. Chicago: Open Court.
Tartsay, N. N., & Kárpáti, A. (2007). The role of the facilitator in the development of teachers’ ICT competence. In Proceedings of NEW LEARNING 2.0? Emerging digital territories, developing continuities, new divides (p. 60). EDEN (European Distance Education Network).