Reflection has been found to be important and valuable in learning, especially in cognitively demanding tasks, such as learning programming. This thesis presents an online learning system, Reflect, which supports learning to solve small problems in this demanding area and emphasizes encouraging learner reflection in the learning process.

Reflect is essentially a learning system that supports problem solving based on synthesis as well as learning from examples. It incorporates a knowledge layer to support several novel reflective elements to promote different types of learner reflection. These elements are embedded in each major part of the system. The knowledge layer enables Reflect to build learner models, which are based on the main learning objectives defined in an ontology by teachers. Student interaction with Reflect provides the evidence to build a model of the learner's knowledge of each of the learning goals. This information, when visualized, provides students with informative feedback on their learning progress. The knowledge layer also enables the creation of reflective elements as part of the Reflect task specification and the process of submitting solutions, and provides linkage between the reflective elements and automated grading of students' solutions.

We conducted both quantitative and qualitative analyses to evaluate the system and our approach to promoting learner reflection in the Reflect system. An investigation of general system usage reveals that, within the context of an authentic learning environment, students used the system as expected and sixty percent of all students found it helpful for preparing for exams. A study of students' attitudes to each of the key reflective elements of Reflect was also conducted. The results indicate that most students had favourable opinions of all these elements. Notably, both the higher-achieving group and the lower-achieving group saw greatest value in the automatic testing aspects. However, the lower-achieving group placed greater value on Reflect's support for learning from examples.

In addition, we conducted a detailed qualitative study of the ways students used several of the reflective elements of Reflect, based on a selected group of students with diverse levels of achievement. This points to differential benefits, both in student attitudes and in behaviour, for higher- versus lower-achieving students.

Figure 3: Visualization of the learner model of the SQL-tutor.

Figure 4: Interface of e-KERMIT displaying the student model. The screen is divided into three sections: the top section is the problem statement, the middle section is the working space where students can design entity relationship data models, and the bottom section provides feedback to students, including the four skillometers.

Question: Did you find that the sample solutions help you learn? (2)

This thesis is concerned with promoting learner reflection in an online learning system that supports learning to solve small problems in a demanding area such as computer programming.

Learning is a process of gaining understanding through the acquisition of knowledge and skills through study and experience. It is a very complex cognitive process, and many factors contribute to a successful learning experience. It is common knowledge that, in both formal and informal education, some learners learn more efficiently than others. This is, of course, affected by the learners' prior knowledge of the subject. However, a good learning strategy also contributes to efficient learning. Knowing how to learn is a valuable skill that differentiates expert learners from novice learners (Ertmer & Newby, 1996).

Here is an example to illustrate the difference between a more effective learner and a less effective learner. It illustrates some of the huge body of work on what differentiates more effective learners from others, for example (Boud, Keogh, & Walker, 1985; Ertmer & Newby, 1996; Weinstein & Stone, 1993). Two students are preparing for an end-of-semester history examination. One student starts revision by reading the textbook from chapter one to the end and doing every exercise in the book. A few days before the exam, the student finds himself/herself running out of time. Then he/she starts to read much faster, skipping random content and exercises. The day before the exam, the student finds himself/herself barely remembering anything he/she has read. The other student, however, starts revision by thinking through the following questions:



- What is my goal?
- What do I already know about the subject?
- How much time will it take me to study?
- What strategies work best for me to study for the exam (will it be more efficient to read more example solutions or to solve more problems myself)?

Introduction

This student refers to these questions repeatedly as they begin each study session and plan their next stage of study. They also make a strategic, possibly effective, revision plan at the beginning, and they are always aware of the learning goals, what they have learned, how much time is left and the current learning strategy. Although both students work hard for the exam and both are dedicated to their study, the second student has greater control over the process and is more strategic, with greater potential to find the learning experience much more efficient and enjoyable, and seems more likely to achieve a better result in the exam.

In summary, more efficient learners tend to take conscious control of their learning: planning and selecting strategies, monitoring the progress of learning, correcting errors, analyzing the effectiveness of learning strategies and changing learning strategies and behaviours when needed. On the other hand, less efficient learners often fail to stop to evaluate their comprehension of the material. They generally do not examine the quality of their work or stop to make revisions as they go along. Furthermore, better learners are more aware of when they need to check for errors, why they fail to comprehend and how they need to redirect their efforts (Ertmer & Newby, 1996).

The type of activities that expert learners engage in is often described as metacognitive. Metacognition (Flavell, 1987) consists of two basic simultaneous processes: monitoring the learning progress, and making changes and adapting learning strategies if one is not doing well. It is about self-reflection, self-responsibility, initiative, goal setting and time management. The capacity to use such metacognitive skills is developed to different degrees in different people. Not all learners apply these activities in their learning; in fact, many learners are not aware of them. The benefit of developing metacognitive activities is that, as learners become more skilled at applying metacognitive strategies, they gain confidence as learners, leading them to pursue their own intellectual needs without the persuasion of a teacher. The role of the teacher is therefore to acknowledge, develop and enhance the metacognitive capabilities of all learners.

There is a wide range of metacognitive activities. In this thesis, we concentrate on helping learners to become aware of the value of reflection and to develop their ability to self-reflect as a means to enhance their learning of a cognitively demanding synthesis skill, like programming. Self-reflection is an important element of metacognition. It allows learners to be aware of their knowledge state, so they can make adaptations to their learning strategy. We believe that if learners are guided to use the right methods to self-reflect in learning, they will become more self-reflective, thereby enhancing their learning efficiency.

1.1 Motivation

In both the literature and our experience in teaching programming, it is clear that some students are unable to comprehend topics that are considered easy by other students, even though both groups try hard and dedicate an appropriate amount of time to studying these topics. This suggests that the low achievers may have used unproductive learning strategies. It seems promising to explore the possibility that learning effectiveness can be enhanced by using more efficient learning strategies. Developing the ability to self-reflect is a key factor in becoming an efficient learner.

There is a large body of research evidence suggesting that learning effectiveness can be enhanced when students pay attention to their own learning by reflecting on the state of their knowledge and the learning process (Schön, 1983; Boud, Keogh, & Walker, 1985; Schön, 1987; Pirolli & Recker, 1993). As described by Boud, Keogh, and Walker (1985), learner reflection is a generic term for those intellectual and affective activities in which individuals engage to explore their experiences in order to lead to a new understanding and appreciation. This ability to self-reflect is developed to different degrees in different people. Some learners are able to apply their relevant previous experiences at the right moment, while others cannot. However, if a teacher can actively prompt learners to correctly apply their existing knowledge at the right moment, this may be beneficial for a student who lacks the ability to do so. Moreover, the student will have the opportunity to learn how to self-reflect, thereby becoming more capable and proficient in learning.

Therefore, the main motivation for this research project is to help students enhance their learning efficiency by using a number of methods to promote learner reflection in an online learning system.

1.2 Context of the Research

The research described in this thesis is conducted in the context of programming education. This is mainly because learner reflection is particularly valuable for cognitively demanding subjects. Learning programming, even at the most introductory level, is cognitively challenging (Bonar & Soloway, 1983). Moreover, recent research has found that a majority of students in introductory programming courses do not learn to program at the expected skill level (Lister, et al., 2004; McCracken, et al., 2001).

For this research project, a system called Reflect has been developed as a prototype designed to teach students programming techniques and to promote learner reflection in the teaching/learning process. There are several earlier implementations of this system (Kay, Li, & Fekete, 2007; Li & Kay, 2005b); these versions were more advanced forms of a student self-assessment tool that enabled students to self-assess their knowledge, monitor their learning progress and judge whether they had achieved the required learning outcomes. These were self-assessing and self-reflective in nature.

Earlier versions of Reflect were called Assess, which began as a tool for supporting the reading of example answers to design tasks. By the time of this thesis, the changes in functionality and software engineering pragmatics meant that the new Reflect system needed a new design and was a re-implementation of the earlier systems. While retaining the earlier systems' goals and some of the earlier ideas and features, it introduces many new functions and a set of new user interfaces. The most important improvement in the new system is the introduction of a consistent focus on applying reflection throughout use of the system. The system design is centered on the notion of learner reflection. It employs various techniques such as user/learner modeling, student self-evaluation and computer-aided assessment. These will be discussed in more detail in later chapters.

Reflect has been used in a second-year C programming subject as a learning aid. We gathered valuable feedback from students and conducted several qualitative studies to evaluate our system.

1.3 A Definition of Terms

Descriptions of some terms commonly used in this thesis that may be unfamiliar to the reader areprovided here.


User Modeling

A user model is a representation of a set of beliefs about the user, particularly their characteristics, knowledge, beliefs and/or preferences. Early user modelling was often performed by the application system, rather than by a distinct logical entity (Wahlster & Kobsa, 1989). There was often no clear distinction between system components that served user modelling purposes and components that performed other tasks. Information about a user was extracted from normal interaction between the user and the system. This information was later used to adapt the system to the user's knowledge, characteristics, beliefs and/or preferences. For example, a program can ask whether a user knows about X and, at a later stage, decide that if the user knows about X, he/she would probably want to learn about Y. Such implicit user models are not visible for inspection at any level.
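As a minimal C sketch of this implicit approach (the topics and function names here are invented for illustration, not taken from any particular system), the "model" is just ad-hoc state inside the application, set during normal interaction and consulted later to adapt:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical implicit user model: knowledge is recorded as ad-hoc
 * state inside the application, not in a distinct, inspectable
 * component. */
static bool knows_pointers = false;   /* set during normal interaction */

/* Record an observation made while the user works with the system. */
void observe_answer(const char *topic, bool correct) {
    if (strcmp(topic, "pointers") == 0 && correct)
        knows_pointers = true;
}

/* Later, the application adapts: if the user knows X (pointers),
 * it assumes he/she would want to learn Y (linked lists) next. */
const char *next_topic(void) {
    return knows_pointers ? "linked lists" : "pointers";
}
```

Because the flag is buried in application state, nothing in this design lets the user inspect or correct what the system believes about them, which is precisely the limitation of implicit models noted above.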

Explicit user models, on the other hand, can either be standalone applications dedicated to the task of user modeling, such as a user modeling shell system (Kobsa, 2000), or they can be distinct modules within applications. There are two main types of explicit user models, cognitive and pragmatic (Kay, 1999). Cognitive user modeling attempts to match the way that people actually think and know. It is of much interest to psychologists and educators. An important early example is the knowledge tracing module (Corbett & Anderson, 1995) in the LISP Tutor (Anderson & Reiser, 1985), which is based on the Adaptive Control of Thought (ACT) theory of knowledge acquisition (Anderson, 1983). This type of user model is a well-defined artificial construction of the psychologist-programmer. However, cognitive user modeling is computationally expensive and the model is difficult to construct. Therefore, in many cases, a more pragmatic user modeling approach is simpler and sufficient. The user model in this thesis is a pragmatic one. An explicit user model allows a system to store relevant information about a user and use this accumulated information to adapt to the user's needs. Without it, decisions about adaptation of the system can be made only on the basis of observed snapshots of learner behaviour. Another major advantage of an explicit user model over an implicit one is that it can be scrutinized, thereby offering potential educational value. This will be discussed in Chapter 2.

Learner Reflection

There are many different types of self-knowledge that can facilitate learning. The student could profitably seek to answer questions like these (Kay, 1997):



- What do I know?
- How well do I know a particular aspect, X?
- What do I want to know? Or do I want to know a particular aspect, Y?
- How can I best learn X?

A good teacher would prompt students to think about these questions, as educational theory suggests that learning efficiency can be enhanced when students pay attention to their own learning by reflecting on the state of their knowledge and the learning process. Learner reflection is a type of metacognitive activity (Flavell, 1971). It is a cognitive mode of consciously thinking, analyzing and learning, and a form of response to the learner's experience. There is a large body of evidence suggesting that learning effectiveness can be enhanced when learners pay attention to their own learning experiences by reflecting on the state of their knowledge and the learning process (Boud, Keogh, & Walker, 1985; Ertmer & Newby, 1996; Schön, 1983; Schön, 1987).

Schön identified two main types of learner reflection, namely "reflection-in-action" and "reflection-on-action". Reflection-in-action is the type of reflection that is triggered by breakdowns (i.e. unique situations where learners cannot apply any known theories or techniques) that occur in any learning activity. For example, suppose a novice C programming student is making a first attempt to write code that traverses a linked list. Suppose the student does not know how to determine that the traversal is complete (i.e. when it reaches the end of the linked list): this could cause the student to stop, analyze the situation and think about a solution. It is likely that the student will repeat this process several times when trying different approaches and eventually reach a correct solution. Reflection-on-action refers to the type of thinking and analyzing that takes place after a learning experience has occurred, gaining new knowledge from the process. For instance, continuing from the previous example, after the student reaches a correct solution for the linked list traversal task, he/she can reflect on this particular problem solving experience and learn the best way to solve this type of problem. It has been found that such metacognitive activities can facilitate problem solving, see for example (Davidson, Deuser, & Sternberg, 1994), and researchers argue that students can enhance their learning by becoming aware of their own thinking as they read, write and solve problems (Paris & Winograd, 1990).


1.4 Thesis Aims and Research Contribution

The overall objective of this research is to study how to develop learners' ability to self-reflect, using the Reflect system. In accomplishing this goal, we have made the following research contributions:

1. A new learning environment, which we call Reflect, designed to support learning to solve small problems in a demanding area such as computer programming;
2. Creation of reflective elements as part of the Reflect task specification;
3. Creation of reflective elements as part of the Reflect process of submitting solutions;
4. Linkage between the reflective elements and automated grading;
5. Exploration of the use of an open learner model;
6. A comprehensive set of resources for tutors and teachers to use in reviewing overall class progress as well as the work of individual students;
7. Support for web-based authoring and updating of tasks in Reflect;
8. Within the context of an authentic learning environment, a study of students' attitudes to each of the key reflective elements of Reflect; and
9. A detailed qualitative study of the ways students use the elements of Reflect, based on forty students with diverse ability.

1.5 Thesis Content

The remainder of the thesis describes the Reflect system, its design and its evaluation. It contains the following chapters:

Chapter 2 provides a background of existing research studies that are most relevant to the thesis.It addresses previous work in the areas of student self-assessment, learner reflection, intelligentteaching systems, learner modelling for reflection, automatic assessment of students’ solutions inprogramming and programming education.

Chapter 3 provides a walkthrough of the student's view of the Reflect system, illustrating how the system can be used by students to learn and, especially, how the reflective features of the system can support their learning.

Chapter 4 presents the teacher/tutor view of the system. It shows how teachers/tutors can use thesystem to monitor students’ learning progress.


Chapter 5 describes Reflect from a task author's viewpoint. It presents the set of interfaces that are used to create the teaching materials. In particular, it explains how the reflective elements seen in Chapter 3 are created for each problem.

Chapter 6 describes how the Reflect system was used by students over a period of four months, and the various qualitative studies and surveys conducted to gather students' feedback to evaluate the system. It discusses the experimental procedures and presents observations and results. Most importantly, this chapter describes how each of the research contributions was evaluated.

This thesis describes the Reflect system, which aims to promote learner reflection. Reflect incorporates five main aspects:



- Student self-assessment;
- User/learner modeling in intelligent tutoring systems (ITS);
- Promoting learner reflection with scrutable learner models in ITS;
- Diagnosis of students' programming solutions; and
- Learning from examples.

This chapter provides background on each of these, both to put this research work into contextand as a basis for describing the design of Reflect in subsequent chapters.

2.1 Student Self-assessment

Student self-assessment is a critical part of this thesis work; therefore, a brief overview of the topic is important. It has been widely acknowledged that one of the characteristics of efficient learners is that they tend to have a realistic sense of their own strengths and weaknesses, and they use this knowledge to direct their efforts in productive directions (Boud, 1986; Schön, 1983; Schön, 1987). More able students tend to be effective self-assessors. However, in order to develop this skill more widely among learners, explicit attempts need to be made to develop this ability. The term student self-assessment refers to the involvement of students in identifying standards and/or criteria to apply to their work, and making judgments about the extent to which they have met these criteria and standards (Boud, 1991). Self-assessment can be formative, in that it contributes to the learning process and assists learners to direct their energies to areas for improvement. It may also be summative, either in the sense of learners deciding that they have learned as much as they wish in a given area, or in that it may contribute to the grades awarded to learners (Boud & Falchikov, 1989). Researchers argue that by teaching students self-assessment, they can become better self-assessors and better learners (Boud, 1986).

Background

Student self-assessment has been introduced and used in many different situations, and its procedures and effects have been examined. For example, it has been applied in foreign language learning (Oskarsson, 1980), writing (Butler, 1982) and engineering education (Cowan, 1988). Studies in these areas have found that students can assess themselves in a way which is more or less identical to the way in which they would be assessed by their teachers, if they are asked to rate themselves on a suitable marking scale. Boud and Falchikov (Boud & Falchikov, 1989) summarised several studies which indicate that students may improve their ability to rate themselves over time or with practice. Thus, the ability to self-evaluate can be developed, even for weaker students. Another study (Boud & McDonald, 2003) found that the introduction of self-assessment practices was well accepted by teachers and by students. Of particular interest and importance is that they found self-assessment training had a significant impact on the performance of those who had been exposed to it. On average, students with self-assessment training outperformed their peers who had been taught without such training in all curriculum areas.

This research provides an important foundation for Reflect: it means that there is real value in providing a learning environment that helps learners to self-assess. It establishes that there are substantial potential benefits in explicitly developing students' ability to self-assess. Our system, Reflect, provides one promising approach to giving learners opportunities for self-assessment, so these skills can be developed and practiced in order to make them better self-assessors and, in turn, more efficient learners. Reflect is based on an earlier system, Assess (Li & Kay, 2005a), which was initially built for this purpose, and this remains one of the principal goals of Reflect. We now move to the literature which provides the basis for the knowledge layer that distinguishes Reflect from its predecessor, the Assess system.

2.2 User Modeling in Intelligent Tutoring Systems

It has long been acknowledged that one-to-one tutoring has the potential to be a highly effective, though costly, instruction method (Bloom, 1984). The research aim of intelligent tutoring systems (ITS) is to build machine tutors that are as efficient as a human teacher in one-to-one tutoring conditions, but more affordable and accessible. An early field survey (du Boulay & Sothcott, 1987) identified the following difficulties in designing and implementing a machine tutor that is as efficient as a human tutor:



- Representing teaching goals and constructing a plan to achieve these goals;
- Monitoring the student's actions and deciding what to do next;
- Recognizing a student's solution and errors in the solution, and providing appropriate help;
- Determining a teaching strategy and executing that strategy as a sequence of short-term teaching tactics;
- Solving the student's problems and answering unanticipated questions; and
- Building and maintaining a knowledge model for each student.

These are still challenging and unsolved research goals today.

As the Reflect system builds and maintains a user model to monitor each individual user's knowledge state and makes it scrutable to promote learner reflection, this section and Section 2.3 review the literature in the areas of user/learner modeling and learner modeling for reflection in the context of ITS. As the Reflect system also evaluates students' solutions, Section 2.4 describes the different approaches to diagnosing student solutions in various ITSs.

User Modelling in ITS

Human tutors can often tailor their teaching style and strategy to an individual student based on what they know about the student. For a machine tutor that tries to imitate human actions, such adaptive instruction requires dynamic modeling of the student's knowledge state. This is often achieved by the use of explicit user models (see User Modeling in Section 1.3). User models, or learner models as they are often called in ITS, emphasize modeling learners' knowledge, rather than their characteristics, preferences or goals. They are often at the core of an intelligent tutoring system, as they enable the ITS to provide individualized instruction and a personalised learning experience. An architecture proposed in (Self, 1974) put the learner model, domain knowledge and tutoring strategy together to form a tripartite architecture, which is now part of the encyclopedia definition of what an ITS is (Self, 1999).

There are many types of learner models in different ITSs. In an authoritative review and overview (Holt, Dubs, Jones, & Greer, 1994), learner models were classified into these broad categories:



- Overlay models (Carr & Goldstein, 1977), in which the learner's knowledge is modeled strictly as a subset of an expert's knowledge model;
- Differential models (Burton & Brown, 1979), which are a modification of the overlay model and focus on the difference between the student's knowledge and the expert's knowledge; and
- Perturbation and bug models (Kass, 1989), which normally combine the standard overlay model with a representation of the learner's misconceptions.
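As an illustration of the first of these categories, an overlay model can be as simple as a mastery flag for each item in the expert model; the knowledge items below are invented for the sketch and do not come from any particular system:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative overlay model: the expert model is a fixed list of
 * knowledge items, and the learner's knowledge is represented as a
 * mastery flag per item -- a strict subset of the expert's knowledge. */
#define NUM_ITEMS 3

const char *expert_items[NUM_ITEMS] = { "variables", "loops", "pointers" };
bool mastered[NUM_ITEMS] = { false, false, false };

/* Fraction of the expert model that the learner has covered. */
double coverage(void) {
    int known = 0;
    for (int i = 0; i < NUM_ITEMS; i++)
        if (mastered[i])
            known++;
    return (double)known / NUM_ITEMS;
}
```

A differential model would instead focus on the items where the flags differ from what the expert expects at a given stage, and a perturbation model would add entries representing misconceptions that lie outside the expert's knowledge altogether.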

It is important to realize that there is no standard version of a learner model. They are always context-determined and content-specific. There are potential benefits in different ITSs sharing information. This is reflected in the IMS LIP (IMS, 2001) specifications, which have the potential to improve the interoperability of learner models across Internet-based teaching systems. In this section, the learner models of two well-known systems are described. Both demonstrate the classical role of the user model in ITS, although one system was developed in the mid-1980s and the other was developed ten years later.

The Lisp Tutor and Knowledge Tracing

The Lisp tutor (Anderson & Reiser, 1985; Corbett & Anderson, 1992b) is an ITS that teaches programming in the LISP programming language. It is an implementation of the ACT theory of knowledge acquisition (Anderson, 1983). The system provides an environment, similar to a text editor, in which the student can do exercises. It is intelligent in that it can recognize student actions and, if the student makes a mistake, it can immediately interrupt and provide intelligent feedback and explanations. It attempts to model each student's knowledge state, and sometimes misconceptions, using knowledge tracing (Corbett & Anderson, 1995). The term describes the process of monitoring and remediating the student's knowledge as they use the system. The tutor maintains a knowledge model for each student. It monitors each student's performance in completing exercises and arranges the curriculum to suit the student's rate of progress based on the information held in the model. It is essentially an overlay model that consists of a list of production rules from the expert model. For each rule, the tutor keeps an estimate of the probability that the student has learnt the rule. An initial estimate of the probability that a student will learn the rule just from reading the text is assigned to each rule when an overlay model is created for each student. This probability is updated every time the student has an opportunity to apply the rule in exercises. The new estimate of whether the student has learnt the rule depends on whether the student responds correctly to exercise questions. A probability value of 95% has been adopted for concluding that the student knows a rule. Only after the student has brought all rules above the 95% criterion does the tutor allow the student to proceed to the next section. This ensures mastery learning, a fundamental concept underlying the philosophy of the LISP tutor. Recall that each rule in the learner model is a piece of procedural knowledge; therefore this ensures, with high probability, that the student learns all the required knowledge. The expert model in this system has another important role: it allows the system to solve exercises along with students and provide assistance as necessary. This will be elaborated in Section 2.4.

is an intelligent, interactive, web-based educational system that teaches LISP programming. It is a combination of an electronic textbook and an ITS. The system has two types of learner model: an overlay model and an episodic learner model (Weber, 1996). The two main features of ELM-ART II, curriculum sequencing and interactive problem solving support, are based on these two models. Curriculum sequencing describes the order in which new knowledge units and concepts, and the corresponding teaching operations (i.e. exercises and tests), are presented. This can maximize learning efficiency, as the learner only reads and practises what he/she needs to. ELM-ART II represents knowledge about the units to be learned as a conceptual network. Units are organized hierarchically into lessons, sections, subsections, and terminal pages. Each unit is an object containing slots for the text to be presented and for information that can be used to relate units and concepts to each other. For each unit, there are static slots with information on prerequisite concepts, related concepts, and outcomes (the concepts that the system assumes to be known once the user has worked through that unit successfully). Units for terminal pages have a tests slot that may contain the description of a group of test items the learner has to perform. When the test items have been solved successfully, the system can infer that the user possesses knowledge of the concepts explained in that unit. Problem pages have a slot for storing a description of a programming problem.

Dynamic slots are stored with the individual learner model that is built up for each user. This user model is updated automatically during each interaction with ELM-ART II. For each page visited during the course, the corresponding unit is marked as visited in the user model. Moreover, when a programming problem is solved correctly, the outcome concepts of its unit are marked as known and an inference process is started. In this process, all concepts that are prerequisites of that unit (and, recursively, all prerequisites of those units) are marked as inferred. Information from the dynamic slots in the user model is used to annotate links individually and to guide the learner optimally through the course. On the interface, the system provides a NEXT button that allows the student to ask the system for the most suitable next step. The design aims to ensure that an optimal learning path is followed if the student always uses the NEXT button to navigate through the course. ELM-ART II also provides interactive problem solving support, which employs another type of user model, as described in Section 2.4.
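The inference process just described amounts to a recursive walk over the prerequisite relation. The following sketch illustrates the idea; the data structures, function names and the toy concept network are invented for illustration and are not ELM-ART II's actual representation:

```python
# Sketch of ELM-ART II-style prerequisite inference: when a unit's problem
# is solved, its outcome concepts are marked "known" and all (transitive)
# prerequisite concepts are marked "inferred".

def infer_prerequisites(unit, prerequisites, model):
    """Recursively mark every prerequisite of `unit` as inferred."""
    for prereq in prerequisites.get(unit, []):
        if model.get(prereq) not in ("known", "inferred"):
            model[prereq] = "inferred"
            infer_prerequisites(prereq, prerequisites, model)

def solve_problem(unit, outcomes, prerequisites, model):
    """Called when the programming problem of `unit` is solved correctly."""
    for concept in outcomes.get(unit, []):
        model[concept] = "known"
    infer_prerequisites(unit, prerequisites, model)

# An illustrative fragment of a concept network:
prerequisites = {"recursion": ["functions"], "functions": ["expressions"]}
outcomes = {"recursion": ["recursion"]}
model = {}
solve_problem("recursion", outcomes, prerequisites, model)
# model: recursion -> known; functions, expressions -> inferred
```

This is the information the system can then use to annotate links: a concept marked known or inferred need not be revisited, which is what makes the individually adapted NEXT recommendation possible.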

In summary, the examples above illustrate how learner models enable different ITSs to present individualized sequences of learning materials to learners. This is the traditional functionality of the learner model. The mid 1990s saw the emergence of a new research area, named learner modelling for reflection (LeMoRe). Researchers in this field argue that the role of the learner model in an ITS is not restricted to providing a personalized learning experience: learner models can contribute to learning directly by being available for the learner to scrutinize.

2.3 Learner Reflection, Scrutable Learner Models and Promoting Learner Reflection in ITS

The term Learner Reflection is defined in Section 1.3. There are currently two main methods of improving learning by promoting learner reflection in an ITS. One is to make the learner model scrutable. The other widespread method is to support collaboration between learners and the system. Some systems support one of these techniques, while others support both.

Typically, a learner model in an ITS gathers and maintains relevant information regarding the knowledge state of a student. By reflecting on this information, students can compare the system's beliefs about their knowledge with their own beliefs. In this way, reflection is encouraged, especially if the system's beliefs and the student's own beliefs differ. Therefore, making learner models inspectable to students can promote reflection and contribute directly to learning (Bull & Pain, 1995). There are many possible ways to scrutinize the learner model to support reflection: simply exposing learners to a textual description of the content of the model, presenting a 2D or 3D graphical representation of the model, or even providing a virtual space in which students can interact with abstract representations of their cognitive states (Zapata-Rivera & Greer, 2001). There is no clear evidence that one particular form of visualization yields more learning value than another, although students do have different preferences (Mabbott & Bull, 2004). Many existing ITSs have implemented, to different levels, scrutable learner models. To demonstrate the diversity of these approaches, several systems are presented here.

Skill Meter of the LISP Tutor

As mentioned in Section 2.2, the LISP tutor keeps, in the student's model, an estimate of the probability that the student has learnt each concept of the domain knowledge. This estimate is shown in a Skill Meter (Figure 1) to promote learner reflection. As shown in the figure, each concept has a progress bar next to it. The shaded area in each bar reveals how much the student understands that particular concept; if the whole meter is shaded, the student has mastered the concept. This representation is very straightforward for learners to understand.

Figure 1  Skill meter of the LISP tutor

Course Progress Chart of ELM-ART II

ELM-ART II also visualizes learners' progress, in a similar way to the LISP tutor's skill meter. It adopts a traffic light metaphor to annotate hyperlinks pointing to different topics. A screenshot of the overview of the LISP course is displayed in Figure 2. There are three kinds of colored balls next to the links. A green ball means the page is ready to be visited and the concepts taught on it are ready to be learned. A red ball means the page is not ready to be visited, since one or more of its prerequisite concepts have not yet been learned.

SQL-Tutor

The SQL-Tutor uses Constraint-Based Modeling to model the knowledge of its students. The system contains definitions of several databases, and a set of problems with ideal solutions to them. SQL-Tutor contains no problem solver. To check the correctness of the student's solution, SQL-Tutor compares it to the correct solution, using domain knowledge represented in the form of more than 500 constraints. The student model in SQL-Tutor is implemented as an overlay on the constraint base. As there are more than 500 constraints, it would be difficult to present information about each constraint individually. Instead, the student model is compressed into a simple structure that resembles the structure of the SELECT statement of SQL. The student is shown six skillometers, which represent the student model in terms of the six clauses of the SELECT statement. For each clause, the tutor finds all the relevant constraints and computes the percentage of constraints the student has correctly understood, the percentage the student is currently learning (that is, where understanding is still incorrect), and the percentage yet to be covered. These three percentages are visualized as shown in Figure 3.

Figure 3  Visualization of the learner model of the SQL-Tutor
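The per-clause compression described above can be sketched as follows. This is an illustration in the spirit of SQL-Tutor's skillometers, with invented constraint names and states rather than the system's actual bookkeeping:

```python
# Sketch of compressing an overlay-on-constraints student model into
# per-clause percentages, in the style of SQL-Tutor's skillometers.
# Constraint states: "understood", "learning" (used incorrectly), or unseen.

def skillometer(constraint_states, relevant_constraints):
    """Return (understood %, learning %, not covered %) for one clause."""
    total = len(relevant_constraints)
    understood = sum(constraint_states.get(c) == "understood" for c in relevant_constraints)
    learning = sum(constraint_states.get(c) == "learning" for c in relevant_constraints)
    not_covered = total - understood - learning
    return (100 * understood / total, 100 * learning / total, 100 * not_covered / total)

# Four hypothetical constraints relevant to one clause of SELECT:
states = {"c1": "understood", "c2": "learning", "c3": "understood"}
print(skillometer(states, ["c1", "c2", "c3", "c4"]))  # (50.0, 25.0, 25.0)
```

One such triple would be computed for each of the six clauses, turning more than 500 individual constraints into six easily read bars.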

A study (Mitrovic & Martin, 2002) evaluated the impact of open learner modelling on learning with the SQL-Tutor. It found that the open model may have improved the performance of the less able students, and that it may have boosted the self-confidence of the more able students. The results also suggest that the more able students considered the learner model to be beneficial. These results are encouraging, given that the visualization is very simple.

e-KERMIT

e-KERMIT is an ITS that teaches students conceptual database design and has an open learner model. It provides a problem solving environment in which students can practise database design using the entity relationship data model. Like the SQL-Tutor, it employs Constraint-Based Modeling to model the knowledge of its students. Due to the nature of the constraint-based modeling technique, there are a large number of constraints in the system, and it would therefore be difficult to provide meaningful information about each one of them. The open student model in e-KERMIT is a summary of the real student model, presented as a hierarchy. The constraints are grouped according to pedagogically important domain categories. A screenshot of the e-KERMIT system is shown in Figure 4. Summary statistics of the student model are shown at the bottom left of the interface as four skillometers.

Figure 4  Interface of e-KERMIT displaying the student model (the screen is divided into three sections: the top section is the problem statement, the middle section is the working space where students design entity relationship data models, and the bottom section provides feedback to students, including the four skillometers)

An experiment assessed whether students learn more with an open model, whether they inspect the models, and whether they feel that the open model contributed to their learning. Test subjects were divided into two groups: one group used the system with the open student models and the other used it without them. Each test subject spent about 110 minutes in the evaluation session. Results showed that the majority of students who examined their models considered them useful, but there was no statistically significant difference between the two groups in terms of knowledge gained after using the system. It was also found that the system interaction benefits the less able students more than the more able students.

ViSMod

ViSMod is a visualization of a Bayesian student model, based on Bayesian Belief Networks (BBNs) (Pearl, 1988), that helps students understand how they are modelled in a teaching system and promotes learner reflection. It provides a flexible architecture in which students and teachers can create their own views by choosing nodes from the Bayesian model. Using ViSMod, students can understand, explore, inspect, and modify Bayesian student models. ViSMod builds upon an earlier tool, VisNet (Visualizing Bayesian Belief Networks), for explaining BBNs in an intuitive manner. In VisNet, the BBN is displayed with the size, proximity and colour of nodes representing strength of relationship, marginal probability and probability propagation. ViSMod improves on this by allowing learners to select nodes of interest in the Bayesian model to visualize. In the tool, each node is a concept with a score representing the system's belief about the student's knowledge of that concept. Different sizes and colours are used to represent different scores. An example is shown in Figure 5. In the figure, each node is a biology concept. Nodes are connected based on the relationships between the concepts in the BBN. Each node is also attached to two "opinion nodes": one is the system's opinion of the student's understanding of the concept and the other is the student's own opinion. ViSMod allows the student to interact with the learner model (i.e. the BBN) through interfaces that let him/her explicitly set their knowledge levels if they disagree with the system (i.e. the widgets available in the control panel in the lower part of the window). The student is also encouraged to explain why they think the concept value is wrong or different.

Figure 5  A screenshot of ViSMod showing a fragment of a Bayesian student model

An exploratory study assessed the interaction between users and the system, focusing especially on the number and quality of the explanations provided by students and on the self-assessment information obtained. It indicated that the interface supported students' engagement, motivation, and knowledge reflection.

There is a considerable body of other work on learner modelling for reflection, including (Brusilovsky, Sosnovsky, & Shcherbinina, 2004), (Dimitrova, 2003) and (Mazza & Dimitrova, 2004). These systems utilize inspectable learner models to promote learner reflection. Another way to promote learner reflection in an intelligent learning system is to provide various reflective elements in guided collaboration between the learner and the system. Researchers have argued that engaging in reflective activities during interaction, such as explaining, justifying and evaluating problem solutions and the problem solving process, can be productive for learning (Boud, Keogh, & Walker, 1985; Ertmer & Newby, 1996; Schön, 1983; Schön, 1987). Several systems that encourage reflection through system-user dialogue and interaction are presented here.

Mr. Collins

Mr. Collins is a system that encourages collaboration between the user and the system, and whose learner model contents may be jointly negotiated by the student and the system. It improves learning through promoting reflection by having the student defend his/her views to the system, discussing and arguing against the system's assessment of his knowledge and beliefs. The system allows both the student and the system to influence the other through valid argument and to defend their own standpoints. Learner reflection is promoted through this process. To support this negotiation, Mr. Collins maintains two separate confidence measures for a student. The first reflects the student's own confidence in his performance, and is provided by the student. The second is the system's evaluation of the student, based on the learner's actual performance. This information is made available to students by opening the learner model. Mr. Collins provides students with statistical information about each concept they have attempted, in a textual format, as shown in Figure 6. In the figure, the display indicates that the system is very sure that the student knows the rule for pronoun placement in negative clauses, but the student himself is unsure. The system is unsure that the student knows the rule for the positioning of pronouns in affirmative main clauses, but the student is more confident. The system's confidence levels are based on a learner's five most recent attempts to use a concept. Furthermore, if the student disagrees with the system, he/she can perform tests to affect the system's confidence.
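The system-side measure, based on the five most recent attempts, could be computed along these lines. This is a rough illustration only; the scoring scheme, threshold and names are invented and are not taken from Mr. Collins:

```python
# Rough sketch: a system-side confidence measure derived from the learner's
# five most recent attempts at a concept, held alongside a separate
# student-supplied confidence, in the spirit of Mr. Collins's two measures.

def system_confidence(attempts):
    """Fraction of the last five attempts (booleans) that were correct."""
    recent = attempts[-5:]
    return sum(recent) / len(recent) if recent else 0.0

def needs_negotiation(student_confidence, attempts, threshold=0.3):
    """Negotiation is worthwhile when the two measures disagree noticeably."""
    return abs(student_confidence - system_confidence(attempts)) > threshold

attempts = [True, True, False, True, True, True]  # last five: T, F, T, T, T
print(system_confidence(attempts))          # 0.8
print(needs_negotiation(0.3, attempts))     # True: student much less confident
```

A disagreement like the one in the last line is exactly the situation the negotiation process targets: the student can then argue, or take a test to move the system's measure.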

A preliminary study was conducted to determine whether learners would actually benefit from this user-system collaboration. The results suggested that learners accepted the approach: they did inspect the system's knowledge about them, and did suggest changes and argue when they disagreed with the system. The majority of learners in the study also liked the system to challenge them when it disagreed with their actions in the student model.

Figure 6  A textual display of the learner model of Mr. Collins and its negotiation process

LuCy

LuCy is best described as a learning companion. It is intended to provide a broader and more collaborative interaction with the student than a human tutor typically provides. Its purpose is not to coach the student by reporting mistakes or revealing solutions to the exercises, but to provide a more supportive environment for the student by encouraging reflection and articulation and by discussing future intentions and their consequences.

Human tutors often provide expert advice to a student, but the interaction is primarily in one direction: information flows from the tutor to the student in response to a student's need. In PROPA, the student can request a hint or ask a specific question using an inquiry component. In either case, LuCy promotes a dialogue with the student that requires communication in both directions. LuCy's dialogues encourage the student to reflect on thinking and to evaluate past actions. LuCy does this by prompting the student to explain the reasoning behind actions and to justify the decisions leading to them. For example, when the student asks, "LuCy, do you think my last action was appropriate?", LuCy asks the student to reflect on why the action was chosen and helps the student understand why it might have been an appropriate or inappropriate action to take at the time. When the student takes an action in PROPA, LuCy sometimes prompts the student to reflect on the reasoning behind this activity by asking questions such as, "Why do you think that is a valid hypothesis?" Although no evaluation of LuCy has been reported, it contributes some interesting ideas to the design of an ITS.

Intelligent learning systems that promote learner reflection, whether through open learner models, through guided collaborative interaction between the system and the user, or through both, have been shown to be effective. In Reflect, students' interactions with the system are recorded in their learner models, based on an accretion representation (Kay, 2000). These models always hold the system's beliefs about how well the students have been performing. Reflect makes this information available to students, who can scrutinize their user models with the Scrutable Inference Viewer (SIV) (Kay & Lum, 2005), thereby promoting reflection-on-action. Reflect has many other features designed to promote reflection; they will be described in detail in the next few chapters.

2.4 Diagnosis of Students' Solutions in ITS for Programming

As listed at the beginning of Section 2.2, the challenges for an ITS include recognizing any correct student solution, identifying errors in a solution and providing appropriate help. Of particular difficulty is identifying errors and their underlying misconceptions, helping students recognize them in terms of the domain knowledge, and helping students correct them. There have been several different approaches to evaluating students' programming solutions; some are reviewed here.

The Model Tracing Approach in the LISP Tutor

Performance modelling, also known as Model Tracing (Corbett & Anderson, 1992a), in the LISP tutor aims to provide individualized problem solving support. The LISP tutor provides continuous, interactive problem solving support as the student works on exercises. It interprets each of the student's actions and follows the student's step-by-step path through the problem solving practice. When monitoring student actions, it is necessary to have, at all times, a pattern against which the student's actions can be measured. That is, as the student generates a LISP program step by step, the tutor must be able to assess whether each step is on the path to a successful solution. To achieve this, the LISP tutor is supplied with over one thousand production rules, each representing one piece of procedural knowledge. The complete set of correct rules for writing a particular piece of code is referred to as the ideal student model or the expert model. The model also includes a set of incorrect rules that reflect misconceptions. These allow the tutor to generate solutions. While the student is working, the tutor simultaneously simulates the steps that a correct solution requires by executing the appropriate production rules. In addition, it models errors that students might make at each step on the basis of the incorrect rules. By comparing the student's step with the possible correct or erroneous steps it generates, the tutor can recognize whether the student is on a path to a correct solution. If not, it can immediately interrupt and correct the student. While this approach provides immediate and meaningful feedback, it has a significant limitation: since it always has predetermined paths to correct solutions, it cannot accept a good solution that it does not recognize, thereby restricting creativity in students' solutions.
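At each step, the tutor's comparison of the student's action against rule-generated candidates can be sketched as follows. This is a heavily simplified illustration; the LISP tutor's production system is far richer, and every name and rule here is invented:

```python
# Simplified sketch of one model-tracing step: expand the current state with
# the correct production rules and with the buggy (misconception) rules, then
# classify the student's step against both candidate sets.

def trace_step(state, student_step, correct_rules, buggy_rules):
    """Classify a student action as 'correct', a named misconception,
    or 'unrecognized' (the tutor cannot interpret it)."""
    correct_steps = {rule(state) for rule in correct_rules}
    if student_step in correct_steps:
        return "correct"
    for name, rule in buggy_rules.items():
        if student_step == rule(state):
            return name  # a known misconception
    return "unrecognized"

# Invented toy rules for a step of writing (car lst):
correct_rules = [lambda s: s + "(car "]
buggy_rules = {"confuses-car-and-cdr": lambda s: s + "(cdr "}

print(trace_step("", "(car ", correct_rules, buggy_rules))  # correct
print(trace_step("", "(cdr ", correct_rules, buggy_rules))  # confuses-car-and-cdr
```

The limitation noted above is visible in the last branch: any step outside the pre-authored correct and buggy rule sets falls into "unrecognized", even if it leads to a valid solution.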

Interactive Problem Solving Support in ELM-ART II

ELM-ART II (Weber & Brusilovsky, 2001; Weber & Specht, 1997) supports example-based programming. That is, it encourages students to re-use the code of previously analyzed examples when solving a new problem. As an important feature, ELM-ART II can predict the student's way of solving a particular problem and find the most relevant example from the student's individual learning history. When a student requests help, ELM-ART II selects the most helpful examples the student has already seen, sorts them by relevance, and presents them to the student as an ordered list of hypertext links, with the most relevant example first. Students can also ask the system to diagnose the code of their solution in its current state. The system gives feedback through a sequence of help messages with increasingly detailed explanations of an error, starting with a very vague hint about what is wrong and ending with a code-level suggestion of how to correct the error or complete the solution. The student can use this kind of help as many times as required to solve the problem correctly, which ensures that all students will ultimately solve the problem.

Both the individual presentation of examples and the diagnosis of program code are based on the episodic learner model (ELM) (Weber, 1996). ELM is a type of user or learner model that stores knowledge about the user (learner) as a collection of episodes. To construct the learner model, the code produced by a learner is analyzed in terms of the domain knowledge on the one hand and a task description on the other. This cognitive diagnosis results in a derivation tree of the concepts and rules the learner might have used to solve the problem. These concepts and rules are instantiations of units from the knowledge base; the episodic learner model is made up of these instantiations. In ELM, only examples from the course materials are pre-analyzed, and the resulting explanation structures are stored in the individual case-based learner model. Elements of the explanation structures are stored with respect to their corresponding concepts in the domain knowledge base, so cases are distributed in terms of instances of concepts. These individual cases, or parts of them, can be used for two different purposes. On the one hand, episodic instances can be used during further analyses as shortcuts, if the actual code and plan match corresponding patterns in episodic instances. On the other hand, cases can be used by the analogical component to bring up similar examples and problems for reminding purposes.

Ludwig

The previous systems try to match students' solutions against previously set up rules or cases. This can potentially limit students' creativity, which is widely recognized as an important aspect of programming. A more obvious method of evaluating students' programming solutions, which avoids this problem, is simply to run them. We now describe this approach with an example. Ludwig (Shaffer, 2005) is a web-based assessment system which allows students to edit their programming code in a controlled text editor, offer the code for analysis, and then submit it for grading. In the system, students submit programs to be run against a sample data set, and the output is compared to that of a program developed by the instructor. The sample data set may be prepared by the instructor or created by the student. The instructor has to create a correct solution program that is not accessible to students. When the student's program is run, its output is compared to the instructor's program's output, and the differences are displayed to the student. The student then has the option of changing the program or submitting it for grading. Once the student submits the program, the system creates its own new input data set, re-runs the student's program against the instructor's program, and displays the results to the student. This helps to prevent students from submitting programs which are little more than print statements. The Ludwig system is designed on the assumption that programming is a task-based discipline rather than a knowledge-based one, in contrast to the approach of the ITS systems described earlier. Consequently, the system cannot interpret the results of running students' code in terms of the domain knowledge. However, like Reflect, it offers small programming tasks, which makes it possible to design test data that evaluates particular elements of the solution.

Reflect, as will be shown in the next chapter, runs students' code as Ludwig does, but can also link the results to components of the domain knowledge. This allows Reflect to tell students not only about their errors but also about the possible underlying misconceptions. Furthermore, students' learner models can be updated accordingly.

2.5 Learning from Examples

Research has established that learning from examples is of major importance for the initial acquisition of cognitive skills in well-structured domains. For decades, most mathematics and mathematics-related curricula, including programming, have placed a heavy emphasis on conventional problem solving as a learning device. However, there seems to be no clear evidence that conventional problem solving is more effective than other forms of learning. On the contrary, it normally leads to a problem solution, not to schema acquisition (i.e. learning). This is because the cognitive load (i.e. the total amount of mental activity imposed on working memory at an instant in time) required during problem solving may be excessive, occupying most of the already very limited processing capacity of working memory and thus leaving very little for schema acquisition, even if the problem is solved (Sweller, 1988). This points to the benefits of using more examples in teaching and learning. It has also been found that students prefer learning from examples (LeFevre & Dixon, 1986; Pirolli & Anderson, 1985). However, in order to achieve the desired effect of schema acquisition from studying examples, another cognitive process, namely self-explanation, is required. Generating explanations to oneself (self-explaining) has been shown to improve the acquisition of problem solving skills. More competent students generate many self-explanations regarding the novel parts of example solutions and relate these to principles; by contrast, less competent students do not generate sufficient self-explanations, monitor their learning inaccurately, and subsequently rely heavily on examples (Chi, Bassok, Lewis, Reimann, & Glaser, 1989). Systems that place emphasis on this aspect of cognition have been created, with positive results, for example (Aleven, Popescu, & Koedinger, 2001). Another study has shown that when students are explicitly encouraged to self-explain when learning from examples, they will do so and increase their learning (Chi, Leeuw, Chiu, & LaVancher, 1994).

Learning from examples is an important element of the Reflect system. Students are always asked to view example solutions and to rate examples against supplied criteria that emphasize the novel parts of the example. In this way, they are encouraged to generate self-explanations of the example and consequently learn from it.


In conclusion, this chapter has presented the background to the main elements of the Reflect system. The next few chapters present the design of Reflect with system walkthroughs.


Chapter 3

System Overview: Learner Experience

This thesis describes and evaluates Reflect, a method to promote learner reflection using learner modelling in the context of student self-assessment. This chapter and the next two present Reflect: first with a walkthrough of the system from the learner's perspective in this chapter, and then from the teachers' and authors' perspectives in the next two chapters. This chapter illustrates how students can use the system to learn and, especially, how the reflective features of the system support their learning.

The system is called Reflect to emphasize its reflective features. According to