Monthly Archives: April 2010

The following is an attempt to describe early attempts at developing a framework for evaluating existing interventions, and developing new ones, aimed at improving learning and teaching within universities. It builds on some ideas from an earlier post.

Sorry, but the first couple of sections seem necessary to convince myself that this is connected with some of my earlier thinking.

Rationale and assumptions

As mentioned many times on this blog, I think most attempts to improve learning and teaching at Universities really don’t work as they don’t change what the majority of academics do. At best they get a bit of compliance. The aim here is to come up with a framework/guidelines that help evaluate and design such interventions.

Empathy driven innovation

While the following uses a framework from psychology, the basic message is the same as a range of other work. That message is, focus on the academics, understand what they are currently doing and experiencing, and design interventions that connect with that experience. Most recently this was discussed in a post on empathy-driven innovation.

Much of my thinking around improving learning and teaching is based on this sort of approach; the following tries to provide some additional guidance on how to do this. Underpinning it all is the actual teaching experience of the academic staff (and the learning experience of the students) as the primary focus.

The aim is to understand what they are experiencing and develop ideas about how that experience can be improved, as perceived by the staff and students and not as perceived by management, the government or some research ideal. The focus is on what the teacher does, not what management does.

Improving L&T is an attempt to change behaviour

An assumption underpinning this thinking is that when you are trying to improve learning and teaching, what you are actually trying to do is to change the behaviour of the teaching academics. If you don’t change the behaviour of the academics, then there is no way you can expect the quality of learning and teaching to change.

This is perhaps the main problem that I have with existing methods being employed to improve learning and teaching at Universities. Existing methods don’t actually consistently result in behaviour change amongst a significant percentage of university teaching staff. I’ve argued before that they don’t work and that there is evidence to support this.

The question is, what’s the solution?

Conceptions and teaching

An example of this mismatch can be seen in the on-going focus on teacher qualifications. For example, the Australian government has introduced KPIs for university learning and teaching that include the percentage of university teaching staff with teaching qualifications. I've argued that this attempt will encourage compliance behaviour on the part of institutions and academics.

The institutions are already at it. My current institution is introducing a new "desirable criterion" for people in charge of courses – having a formal teaching qualification. The assumption is that this will increase the quality of teaching. It won't.

At best, a formal teaching qualification will change the conceptions of teaching held by an academic. The “teacher’s thinking” layer in the following diagram. There is significant research (e.g. Trigwell, 2001; Richardson, 2005) that suggests that unless all of the layers in the diagram are in alignment, you won’t get improvement in teaching.

That is, even with a formal teaching qualification, if the context, disciplinary expectations and many other factors aren’t right, then there will be no behaviour change. The teacher will continue to employ the same strategies as always.

How to encourage and enable behaviour change

How to change behaviour is something that the psychologists study. This current line of work is a collaboration between myself and a psychologist colleague. The current aim is to try and distill what is known about behaviour change in psychology into a form that helps explain how someone – with a deep knowledge of the current experience of academics – can encourage and enable behaviour change that results in improved learning and teaching.

In earlier work I've drawn on insights from Rogers' (1995) diffusion theory as a guide. There are definite connections between diffusion theory and the following.

The initial spark for this thinking is a paper by Michie et al (2008) which suggests that:

There are determinants of behaviour change; someone is unlikely to change behaviour when these determinants have the wrong value.

There are known techniques for encouraging behaviour change.

These techniques are each seen as likely to influence a certain subset of the determinants.

The process is then to draw on empathy-driven innovation to develop specific interventions, based on the information gathered in the previous two steps.
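Those steps reduce to a simple selection problem: match techniques to the determinants that are currently weak. The following Python sketch is entirely hypothetical – the technique-to-determinant mapping is invented for illustration, not taken from Michie et al (2008); only the technique and determinant names echo those used elsewhere in this post.

```python
# Hypothetical sketch: select behaviour-change techniques that target
# the determinants identified as weak. The technique-to-determinant
# mapping below is invented for illustration, not from Michie et al.
TECHNIQUE_TARGETS = {
    "self-monitoring": {"skills", "anticipated outcomes"},
    "modelling/demonstration by others": {"skills", "norms"},
    "social processes of encouragement": {"norms", "environmental constraints"},
    "prompts, triggers, cues": {"environmental constraints"},
}

def candidate_techniques(weak_determinants):
    """Return techniques that influence at least one weak determinant."""
    weak = set(weak_determinants)
    return sorted(t for t, targets in TECHNIQUE_TARGETS.items()
                  if targets & weak)

techniques = candidate_techniques(["norms", "anticipated outcomes"])
```

In a real application the mapping would come from the behaviour-change literature and the list of weak determinants from a careful investigation of the local context, not from guesses like these.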

An example

In a previous post that seeks to summarise the Michie et al (2008) paper, I use the following example of the level of interaction or participation of staff in a course site. In the following, I compare/contrast the more traditional approach and the type of approach that I think might work better.

The traditional approach

The more traditional approach is what I suggest is "what management does". Management will see that there is a problem, i.e. staff aren't participating enough. They then take it upon themselves to perform some action, which might include:

Require staff to get a teaching qualification. The assumption being that with a teaching qualification they know more about the importance of staff interaction and hence will increase their interaction.

Create a policy that requires a certain level of participation. At the very least they might say that every course site must have a discussion forum. Over time they might add requirements like: every student query must be answered within two days, or every staff member must make 5 posts a week to the course discussion forum.

Run a staff development session. Organise an external expert on the benefits of staff participation to run a session at the institution and invite teaching staff to come along to listen.

Reward staff who interact at appropriate levels. Give them extra money, recognition as good teachers, etc.

Most of the above interventions would require a fair bit of work. Most, if not all, of the senior management at the institution would need to be involved, a working party might need to be formed, appropriate consultation undertaken, and if they're really smart they'll make sure that someone important within the institution is publicly seen encouraging this goal.

I’ve seen the above approach used again and again. I don’t think I’ve ever seen it work to any great extent.

The behavioural change/empathy-driven approach

Using the process above:

Identify the good practice. Done, we wish to increase the level of staff participation in a course site/forum.

Use the determinants to identify areas of weakness in the current context. As an example, let's assume that after an appropriate process we have identified the following:

Skills: many staff don’t have the skills or insight necessary to increase their interaction with their course in effective ways.

Environmental constraints: there’s nothing in the existing context that helps them perform this task. In fact, the rewards system tends to suggest that research is more important than teaching.

Norms: there is little visible evidence of what the norms are in terms of appropriate levels of staff participation. Staff don’t know how much other staff are participating, they don’t know what is acceptable and what is not.

Anticipated outcomes: many staff don’t see the connection between increasing their participation in a course site and benefits to students. They do see how increasing their level of participation increases the time they spend on teaching and decreases the time they spend on research.

Use empathy-driven innovation to design specific interventions.
Embed into the LMS a graph that shows the academic staff member's level of participation in their course site (self-monitoring) and compares it against the levels of participation in a group of related courses (social processes of encouragement). Have that graph show up whenever a staff member logs into the LMS. Include in the graph some easily visible representation of the connection between student failure rate and staff participation. Include a "how to increase participation" link to a page that outlines various techniques for increasing participation (prompts, triggers, cues) and includes comments or suggestions from other staff about how they did it (modelling/demonstration of behaviour by others).

Once this intervention is implemented, spend a lot of time observing what happens with its use and make on-going changes.
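To make the self-monitoring part of that intervention concrete, here is a minimal, hypothetical sketch of the comparison the graph would be built on. The function name and all of the data are invented; a real version would pull post counts from the LMS database.

```python
from statistics import median

def participation_summary(staff_posts, related_course_posts):
    """Compare a staff member's forum post count against the median
    post count for a group of related courses (the visible "norm")."""
    group_median = median(related_course_posts)
    return {
        "posts": staff_posts,
        "group_median": group_median,
        "above_median": staff_posts >= group_median,
    }

# Hypothetical data: this staff member made 4 posts; staff in four
# related courses made 2, 7, 9 and 12 posts respectively.
summary = participation_summary(4, [2, 7, 9, 12])
```

The point of the sketch is the framing, not the arithmetic: the staff member sees their own behaviour relative to peers, which is where the social encouragement comes from.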

Differences

Apart from the obvious, there is one major difference between these two approaches which I must make explicit. The traditional method spends a lot more time on the design of idealistic solutions that are disconnected from the everyday experience (e.g. the setting up and encouraging of people to enrol in a graduate certificate in higher education). Often, such approaches all but ignore the reality of the lived experience (e.g. policies that state a requirement but for which there’s been no work to support implementation). The traditional approach tends to be design heavy.

The empathy-driven approach focuses on the lived experience, on what happens day-to-day. There is some design work, but much of the time is spent understanding what is going on every day and trying to respond in an informed way.

The questions

It is still very early days for this work. There are lots of questions; here are just some.

How do you evaluate the level of determinants?

In the above example, I outlined some ideas about what values the behaviour change determinants around increasing staff participation in a course site might take. Basing the design on my thoughts is not a good start. It has to be based on a deep understanding of the experience/beliefs of the teaching staff.

How can you develop this understanding?

Are there survey instruments or other approaches from psychology that allow you to get some understanding of the current “level” of these determinants across an institution?

Which are the most important determinants?

Michie et al (2008) identify seven sets of determinants, and there are certain to be complexities within each one. In any large population you are likely to get widely differing results.

How do you figure out which of the determinants is most important or difficult?

Answers to this could help you guide the selection of which determinants to address and/or which of the techniques to adopt.

What’s the relationship between the determinants?

It’s likely that there would be relationships between the determinants. For example, if you increase skills in an appropriate way, I would imagine that this would change beliefs about self-efficacy, at least to some level.

What are the known relationships between the determinants?

What are the factors within determinants?

Fishbein et al (2001) talk about norms as being one of the determinants. Michie et al (2008) expand this out to include social influences, emotion and action planning. This is just one example; it is likely that each of the determinants embodies a collection of factors. For example, self-efficacy might be a combination of characteristics of the person, the type of task and the previous experience of those involved.

What are the factors or sub-components of each of the determinants?

What types of behaviour are there?

Are all behaviours the same? Is a particular teaching behaviour the same sort of thing as changing your diet or how you manage your children?

Are there different categorisations of behaviours with different characteristics?

What are the specifics of change techniques?

Michie et al (2008) just give a short title to each of these techniques. I imagine each one has its own literature that seeks to define, describe and evaluate interventions designed on the basis of that technique.

What are the specifics of each change technique? What are the success factors? How do you choose which technique might be most appropriate?

I should be writing other things, but there’s a wave amongst some of the “innovation bloggers” at the moment that I wanted to ride for the purposes of – once again – trying to get those driving university e-learning (and learning and teaching more generally) to realise they have something fundamentally wrong. They are using the wrong type of process.

I level this criticism at most of management, most of the support staff (information technology, instructional designers, staff developers etc) and especially at most of the researchers around e-learning etc.

For those of you at CQU who still don't get what Webfuse was about: it wasn't primarily about the technology, it was about this type of process. The technology was only important in so far as it enabled the process to be more responsive.

Empathy-driven innovation and a pull strategy

Over the weekend, Tim Kastelle wrote a post in which he proposes that a pull strategy is a key for empathy-driven innovation.

What is empathy-driven innovation? Tim provides the following bullet points:

It requires a deep understanding of what the people who will use your innovation need and want. Most organisational e-learning assumes that steering committees and reference groups are sufficient and appropriate for understanding what is needed. This is just plain silly. The people who reside on such things are generally very different in terms of experience and outlook from the majority of academics involved with learning and teaching. If they aren't different at the start, the very act of being a member of such groups will, over time, make them very different. These groups are not representative.

What's worse is that the support structures, processes, and roles that are created to sit under these committees and implement e-learning are more likely to prevent "deep understanding" than help it. For example, different aspects of e-learning are divided along the lines of institutional structures. Anything technology related is the responsibility of the information technology folk, anything pedagogical is the responsibility of the instructional design folk, and never shall the twain meet. As these folk generally report to different managers within different organisational units, they rarely talk and share insights.

E-learning is more than the sum of its parts. Currently, there is generally a large gulf between the academics and students “experiencing” e-learning, the technology people keeping the systems going, the instructional design folk helping academics design courses, and the management staff trying to keep the whole thing going. This gulf actively works against developing deep understanding and limits the quality of e-learning within universities.

Using empathy for the users of our innovations is the best way to create thick value. A deep, contextualised understanding and appreciation of the context of the academic staff and students helps develop truly unique and high quality e-learning applications and environments. Without it you are left with copying what everyone else does, which is typically pretty limited.

We are creating ideas that entice people. Almost all of university-based e-learning is based on push strategies, i.e. some "smart" person or group identifies the right solution and then pushes it out onto the academics and students. They have to do this because their understanding of the context and needs of the academics and students is small to non-existent. Their decisions are based more on their own personal observations and preferences, or even worse, on the latest fad (e.g. e-portfolios, open source LMS etc.).

They aren’t creating ideas that entice people, they are creating ideas that people have to use.

Researchers are particularly bad at this.

Innovations that pull are inherently network-based. The idea is that to engage in empathy-driven innovation, you have to have connections to the people using the innovations.

As argued above, it’s my suggestion that the structures and processes around e-learning within universities are such that they actively work against the formation of these connections. To have empathy-driven innovation you have to connect the folk involved in teaching, learning, technology and instructional design in ways that are meaningful and that actively strengthen the connections.

At the moment, at least in my institution, there is no easy way for an academic to talk to a technical person that actually knows anything about the system, i.e. someone who can actively modify the system. The technology person can’t easily talk with someone with educational knowledge to better inform technological change. Instead each group retreats to talking amongst themselves. The necessary connections are generally only there in spite of the system, not because of it.

The Webfuse Thesis

I’m somewhat engaged in this discussion because I have seen, for a small period of time, this type of empathy-driven innovation work in the context of e-learning within a University. This is the topic of my PhD Thesis, an attempt to describe an information systems design theory for e-learning that encapsulates this.

At its simplest, the Webfuse thesis embodies two aspects:

Process. There are two broad types of process: teleological and ateleological. I describe the two types here. Empathy-driven innovation is an example of an ateleological process. The table in the blog post describing teleological and ateleological processes mentions Seely Brown and Hagel's push and pull distinction.

University e-learning is always too teleological, it needs to be more ateleological.

Product. Everyone focuses too much on the actual product or system when we talk about e-learning. With Webfuse the product was only important in terms of how flexible it could be. How much diversity could it support, and how easy was it to support that diversity? Because the one thing I know about learning and teaching within universities is that it is diverse. In addition, if you are to engage in ateleological (empathy-driven) design, you need to be able to respond to local needs.

Most importantly, the question of how flexible the product is, is not limited to just the characteristics of the product. Yes, Moodle, as an open source LMS implemented with technology (PHP and MySQL) that has low entry barriers, can be very flexible. However, if the organisation implements it with technology with high entry barriers and inflexibility (e.g. Oracle), or if it adopts a process that is more teleological than ateleological, it robs Moodle of its flexibility.

Two weekly PhD updates in a row; it will be interesting to see how long this keeps going.

What I did last week

The aim for this week was to get chapters 2 and 3 of the thesis into first draft stage and sent to the supervisor for comment. If possible, get chapter 4 into the same stage.

Chapters 2 and 3 were sent on Tuesday. Chapter 4 is about 3/4 of the way into first draft stage. Hope to have that done over the weekend.

The week has been slower than normal due to school holidays and other work.

The aim for next week

The time factor is going to raise its ugly head. Found out today that a deadline for a grant application is now a month earlier than I thought – it’s now May 4. On the downside, that means that much of the time from now until May 4 will probably be spent working on that application. On the plus side, I’ll have some more free time after May 4 for the thesis.

For the next week, the plan is to:

get chapter 4 into first draft stage;

get a rough version of chapter 6 completed; and

if possible, get started on chapter 5.

My ultimate goal is for submission by the end of June, or as close to that as I can get.

In a couple of weeks I’m off to Canberra to talk PhD, potential ALTC grants and promote the use of BIM. As part of the latter task, I’m giving a quick talk at the University of Canberra as part of their Stuff the works lunches. The title of the talk is “Reducing the aggravation of student blogging: The story of BIM”.

I have to send off a short abstract for the talk today, so thought I'd share it here. I'll use this post as the home for all the resources associated with the talk. The slides should be up sometime just before the talk and I hope some audio/video will follow not long after.

The basic aim of the talk is to share the why, what and how of using individual student blogs in teaching. The premise is that there is value in using individual blogs, but that it can require a bit of work and that BIM can help.

Abstract

There are many good reasons (reflection and meta-cognition, reducing transactional distance, increasing sense of ownership and community, ICT literacy etc) to encourage or require students to use individual blogs as part of their learning. However, the use of individual student blogs is not without its problems, which include: limited quality of LMS blogging tools, difficulties of managing externally hosted blogs, the question of how to mark and comment on student posts, the novelty of blogging for many students and staff, increased workload etc.

This session will tell the story of BIM (BAM into Moodle), a Moodle module designed to reduce the aggravations of supporting individual student blogging. Since 2006 BIM, and its predecessor BAM, have been used to support 2800+ students in 26+ course offerings creating 20,000+ blog posts. The session will show how BIM works and describe why and how it was used in one course. It will include discussion of the challenges and benefits of using BIM.

My current institution has adopted Moodle as its institutional LMS as of 2010. Due to my role, I haven’t really had to think about how you best go about designing a Moodle course. Now, however, due to the curriculum mapping project it is likely that I am going to have to engage with this. Hence the question, what are the different principles, guidelines or approaches for designing a Moodle course site?

Why do you ask?

A part of the aim of the curriculum mapping project is to map the alignment of course activities, resources and assessment against graduate attributes, learning outcomes etc. At the moment, the project is at the stage of experimenting with existing Moodle courses – course sites that are live now – and seeing how well (or not) Moodle’s existing outcomes support can be used for this mapping purpose.

I'm only looking at a very small number of courses; however, these are courses put together by academics who care about their teaching, who want to make an effort. From this small sample it appears that, as they stand, the design of these course sites will not easily enable the clear mapping of the activities, resources and assessments against outcomes. It's clear that the design of these courses is very different, and that's in spite of the institution paying some lip service to consistency of experience for the students (which is misguided, I think, as in the end it only results in superficial consistency and, more importantly, fails to engage with a key characteristic of learning and teaching – its diversity).

It appears that, in order for the curriculum mapping project to fully enable the mapping of activities, resources and assessments against outcomes etc, it will need to make recommendations for better course design.

Which raises the question, what is good course design in Moodle?

What are some of the possibilities?

From one perspective, it is important that this be specifically about Moodle, and not e-learning or LMS course site design in general. Moodle, like any technology, provides a set of affordances, a set of strengths and weaknesses. To get the most out of Moodle, like any technology, the design needs to be aware of the Moodle sweet spot, and its weak spots.

We learn well when the learning environment is flexible and adaptable to suit our needs.

Aside: would be interesting to map the content of courses with these 5 principles and find out how many follow them in some way. I think there would be surprisingly few. Following this evolution over time might be interesting as well. Do people become more informed about Moodle course design over time? Or, do they simply follow the same path they established for their first course?

Course structure or organisation

It’s my perception that the design of Moodle course sites is intended to be a sequence of sections which contain activities for students to complete. It’s interesting that one of the major “innovations” at my current institution is a “course design” that, to at least some extent, breaks this structure.

Rather than a long vertical collection of sections for each week, there is one section which breaks the course site up into a course synopsis and 5 horizontal sections – The course, resources, discussions, assessment and enquiries (which probably change depending on the course).

The courses not taking this approach tend to share a common structure. One section as "About the course" – usually with a banner and general administration stuff – followed by the weekly sections. The content of those weekly sections is wildly different.

In part, the difference here seems to be between having lots of scrolling or not. A more typical Moodle design ends up with a couple of pages of scrolling. I’m hearing some positive responses from staff/students about the scrolling.

Is there research to see how it is received? Does the scrolling thing cause problems or benefits?

Look and feel

We're superficial; something that looks good will often generate more immediate positive feelings, even if it's a pain to use. A fair bit of the Moodle promotion stuff seems focused on showing that Moodle can be good looking, even though most institutional Moodles appear to focus on consistency rather than quality.

Learning design

More abstractly, a good course design should obviously – following the theory of alignment – be driven by the learning outcomes, with activities, resources and assessments chosen and presented in a way that best achieves those outcomes.

So, where are the good examples of good, constructively aligned Moodle course sites? What were the problems in achieving those designs?

The blended kitchen sink

One important question for the curriculum mapping project is whether or not the course site captures everything that all students experience. Where face-to-face teaching is possible, it seems obvious that there may well be some experiences students have which are not captured in the course site, and so won't be captured in the mapping.

Should/can a course site contain everything, or just the online stuff?

Suggestions?

So, what say you? What are the other principles? What is out there that can inform answers to this question?

What makes e-learning effective is, of course, typically in the eye of the beholder. One person’s toast and jam may be another person’s steak and kidney pie. This is what makes the drafting of a set of guidelines for effective e-learning so difficult.

Which is just one reason why I think "one ring to rule them all" corporate approaches to web course design are a big mistake.

It’s also the reason why input from many is needed.

Suggestions from Google

Moodle course design – a Word document with good coverage of the topic. Interesting; it makes the point that a consistent theme "gives it robustness". I like the Dave Snowden distinction between robustness and resilience. Robustness tries to prevent mistakes/failure – which with people is itself destined to fail. Whereas resilience makes it cheap to respond to/solve mistakes/failure. I know which I prefer.

For similar reasons the document advises against messing with the standard Moodle design – which is what the local “innovation” does.

The last PhD update I posted here was in early November last year. It’s time to get back into the discipline of posting these updates, especially now I’m in the downhill stretch.

The rationale/excuse for not having posted in the last 3 or 4 months has been work and Christmas. For most of November, I was traveling to or working on conference presentations. I also visited Canberra in that time to work a bit on the thesis and received feedback from my supervisor – reduce content! December was holidays, and then the great plan to spend January on holiday working on the thesis went pear-shaped. For various silly contextual reasons I spent most of January and February working on BIM so folk could use it starting around March. This will hopefully bring some benefits, but it didn't help thesis completion.

What I did last week

Early last week I had completed rough first drafts of chapters 2 (lit review) and 3 (research method). Since then, when time allows, I’ve been re-reading these drafts and making the slight modifications that are needed. A bit of space – it’s been almost 3 weeks since I finished the draft of chapter 2 – does wonders for perspective.

I've re-read all of chapter 2 and annotated it with suggested changes. I'm almost halfway through making those changes in the soft copy.

What I’ll do in the next week

The aim for the next week is to have completed first drafts of chapters 2 and 3 sent off to my supervisor.

Time willing, the plan is then to get chapter 4 into the same state. I have until April 6 to get chapters in this state and sent to my supervisor.

This will be a brief extension of previous work around this project. The main aim is to start identifying some of the methods used by Moodle with its current outcomes approach and how those might be harnessed and modified to support curriculum mapping. In particular, some specific questions include: What’s necessary to

allow the outcomes to be grouped and displayed as such when showing an activity/resource? IDENTIFIED

include a “help” link for each outcome or other means to explain? IDENTIFIED

allow the outcome scale to be used on the activity/resource to indicate how well the activity/resource meets the outcome etc? IDENTIFIED

display the curriculum map for a course? IDENTIFIED

add “outcome mapping” to those elements that currently don’t have it? IDENTIFIED

prevent curriculum mapping outcomes showing up in the gradebook? IDENTIFIED

This is a work in progress and will be updated over the next couple of days.

The association – where in code and the database

Moodle tracks which outcomes apply to activities and resources, the question is where in the code does this happen and where in the database is this information stored?

The code

The association appears as part of the edit screen for an activity or resource. This is implemented by moodle/course/modedit.php. This script:

Is given various params, including section, course, and the module being used to "edit" the activity/resource.

Is fairly typical PHP spaghetti code with few or no comments.

Acts as a harness/factory, getting the module code to generate part of the form.

Has a section of code that retrieves and displays the outcomes, all embedded in this enormous file – ugly.

The outcomes code seems to consist of the following (this is actually the handling of the submission of the form, not the display of the form – more on this below):

Get all the outcomes for the course (whether or not to display them is left till later):
if ($outcomes = grade_outcome::fetch_all_available($COURSE->id))

fetch_all_available is implemented in moodle/lib/grade/grade_outcome.php, which basically defines a class that represents a grade outcome. fetch_all_available gets all course-related outcomes listed in grade_outcomes (the detail of the outcomes) and grade_outcomes_courses (which outcomes are being used in the course).
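As a rough sketch of the kind of join that implies – in Python/SQLite rather than Moodle's PHP/MySQL, with both tables heavily simplified and the column names partly guessed – "fetch all available outcomes for a course" looks something like:

```python
import sqlite3

# Heavily simplified, partly guessed stand-ins for Moodle's
# grade_outcomes (the detail of each outcome) and
# grade_outcomes_courses (which outcomes a course uses).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE grade_outcomes (id INTEGER PRIMARY KEY, shortname TEXT);
    CREATE TABLE grade_outcomes_courses (outcomeid INTEGER, courseid INTEGER);
    INSERT INTO grade_outcomes VALUES (1, 'communication'), (2, 'teamwork');
    INSERT INTO grade_outcomes_courses VALUES (1, 42), (2, 42), (1, 99);
""")

# "All available outcomes for course 42": join the two tables.
rows = conn.execute("""
    SELECT o.id, o.shortname
    FROM grade_outcomes o
    JOIN grade_outcomes_courses oc ON oc.outcomeid = o.id
    WHERE oc.courseid = ?
    ORDER BY o.id
""", (42,)).fetchall()
```

Site-wide outcomes (available to every course) would presumably also be included in the real query; the sketch only covers the course-specific case.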

Build an array of grade_items. It then loops through each outcome from above and uses moodle/lib/grade/grade_item.php to create a grade_item object for each outcome. This uses the grade_items table to store information. I’m not 100% sure where this fits in.

The actual display is done using a “form” display…more on this below.

So the display is done using the form class defined by the module, which is an extension of moodleform_mod. As the specific modules won’t know about outcomes, the outcomes display would theoretically be done in course/moodleform_mod.php. Yep.

Get grade outcomes for the course, again. There seems to be some duplication here, as it fetches the grade outcomes for the course all over again.

Gets all grade items for the course; if any of them have an outcome set, then set this in the form?

A couple of other steps here, not immediately clear.

The above only seems to be preparatory. There’s a later section of code that adds the form elements for the outcomes. Again, there’s a call to fetch all available outcomes. This seems more directly related, as it simply adds the elements.
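As a rough sketch of the submission handling described above – in Python with invented names, not Moodle’s actual API – the logic amounts to: fetch every outcome available to the course, and for each one ticked in the form, build a grade_items-style record.

```python
# Hypothetical sketch of modedit.php's outcome handling; all names invented.

def fetch_all_available(course_outcomes, site_outcomes):
    """Stands in for grade_outcome::fetch_all_available - assumed to be
    the course-specific outcomes plus the site-wide ones."""
    return course_outcomes + site_outcomes

def handle_submission(form_data, course_id, course_outcomes, site_outcomes):
    """Build a grade_items-style record for each outcome ticked in the form."""
    items = []
    for outcome in fetch_all_available(course_outcomes, site_outcomes):
        if form_data.get(outcome["shortname"]):   # checkbox ticked in the form?
            items.append({
                "courseid": course_id,
                "itemname": outcome["fullname"],  # the outcome's name
                "itemtype": "mod",
                "itemnumber": 1000 + len(items),  # outcomes appear to start at 1000
                "outcomeid": outcome["id"],
            })
    return items
```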

Where does it store the “mapping”?

The next question is where does it store the fact that a particular activity/resource is using/assigned a particular set of outcomes?

This should be set in the code that processes the submission of the form. Which should be moodle/course/modedit.php. Ahh, this is done with grade_item as described above.

i.e. when you map an activity/resource in a course to an outcome or three, that mapping gets stored in the grade_items table. The fields in that table are (the descriptions are tentative):

id – the unique id for the mapping of activity/resource to a single outcome.

courseid – the id for the course that “owns” this mapping.

itemname – this is the actual name of the outcome assigned

itemtype – I believe this describes the type of object you’ve mapped the outcome to. Possible known values are currently mod, course.

itemmodule – the name of the specific module that implements the object. Possible values include: forum, bim (i.e. name of any module), assignment, resource.

iteminstance – I believe this is the id for this particular instance of the module. i.e. the id for the table course_modules. The pathway to more information about this instance.

itemnumber – for outcomes, this seems to start at 1000. It is used to give the sequence in which outcomes are assigned to the item, i.e. the first outcome assigned is 1000, the second 1001, the 3rd 1002…. It appears that a value of 0 might indicate something important.

iteminfo – currently set to NULL for all the entries I’ve seen so far. So, not currently sure what it is used for.

idnumber and calculation – also set to NULL or empty for the contents of my database – which doesn’t include a lot of real courses.

gradetype – integer, currently with value of 1 or 2. With outcomes I’ve set being 2.

grademax and grademin – fairly obvious. Seems to be set by scale and/or other stuff.

scaleid – the scale being used.

outcomeid – the id of the outcome

gradepass, multfactor, plusfactor, aggregationcoef – various factors used for grade calculation, I believe.

timecreated, timemodified – time stamps. Could be useful for identifying outcomes that need to be re-checked.

It appears that grade items and outcome items are treated the same, hence their use of the same table. The full view of categories and items gives a good overview of this table.
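Pulling the tentative field descriptions together, an outcome-mapping row in grade_items might be modelled like this. This is a sketch based on my interpretation above, not Moodle’s schema definition:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GradeItem:
    """Sketch of a grade_items row as used for an outcome mapping.
    Field meanings are the tentative interpretations above, not
    official Moodle documentation."""
    id: int
    courseid: int              # course that "owns" this mapping
    itemname: str              # name of the outcome assigned
    itemtype: str              # known values: "mod", "course"
    itemmodule: str            # e.g. "forum", "assignment", "resource"
    iteminstance: int          # id of the course_modules row for this instance
    itemnumber: int            # sequence, starting at 1000 for outcomes
    outcomeid: int             # id of the outcome
    scaleid: int               # the scale being used
    gradetype: int = 2         # 2 observed for the outcomes set so far
    iteminfo: Optional[str] = None   # NULL in every entry seen so far
```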

There is the concept of categories of grades/items, this might be one avenue. i.e. a category for curriculum mapping.

What is the implication of this?

The next question is what the implications are for the rest of Moodle. If I map all the activities/resources within a course against a complex set of outcomes, does it have an effect on the gradebook? Anywhere else?

So, I’ve set outcomes for a number of activities/resources in a course. Does this show up anywhere else? Two ways of looking:

Check the gradebook from web interface.

Look for use of grade_item class/object.

Mmmm, not good. It appears that every time you add an outcome, it gets added to the gradebook. In terms of curriculum mapping, this is not what is desired. This is perhaps the first obvious example that curriculum mapping and tracking student performance, while to some extent similar, serve different purposes.

The column in the gradebook for each outcome that is added has a header that is a link. The link is to a script that shows some detail of the resource or activity that the outcome is associated with.

Need to turn this off.

Now, you can hide an element in the gradebook. But that just greys it out; it doesn’t remove it entirely from consideration, which is what is wanted here.
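If mapping outcomes carried a distinguishing flag – a hypothetical addition, since stock Moodle has no such field – excluding them from the gradebook would reduce to a simple filter:

```python
def gradebook_items(items):
    """Return only the items the gradebook should display, skipping
    anything flagged as a curriculum-mapping outcome.
    is_mapping_outcome is a hypothetical flag, not a real Moodle field."""
    return [i for i in items if not i.get("is_mapping_outcome", False)]
```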

Adding a description/help

Problem

At least initially, the outcomes etc. shown are not going to make much sense to a teacher. Moodle currently only displays the name of the outcome. The teacher would have to go somewhere else to read up on the outcome before they could determine if it applies. It would be helpful if additional assistance was provided there and then.

Some options in terms of what could be displayed, include:

The description of the outcome. As it stands, Moodle allows each outcome to have a textual description. Displaying this as a roll-over or in a new window could provide a minimal level of assistance.

Link to an institutional area for discussion and description of outcomes. The assumption is that most institutions would have a website in which institutional outcomes etc. are discussed or described. Providing a link to this area, especially to the context specific to a particular outcome, might be useful.

Link to other examples. Many of the forms of outcomes etc. are likely to be used in other courses, e.g. institutional graduate attributes. It might be useful to give the option to see other examples of how these attributes are used in other courses, even to the extent of linking directly to those courses and/or reflections/discussion from other teachers using this outcome.

These ideas range from the simple and static through to something that would need some curation.

Possible solutions

The outcomes are displayed around line 220 in moodle/course/moodleform_mod.php. This is where the change would have to happen. Some possibilities include:

Using a Moodle helpbutton. Moodle forms have a function – setHelpButton – which associates a help button with an element. It would be very easy to make this modification. However, the problem is that the helpbutton is typically a call to open a new, small browser window to display an HTML file.

This is problematic as the outcomes are added via the Moodle interface, which doesn’t provide support for adding a help file. So, outcome-specific help would be difficult. However, an institutional area/approach could be possible. It would require the institution to create HTML files for each outcome.

Let’s do a simple test: put the Moodle code under git so I can manage this, and add a help button for each outcome. As expected, this works easily. There is the question of how to create the filename for the HTML file. Most outcomes will have spaces and other characters that don’t necessarily play nicely with a filename. The language translation side of Moodle could help there: convert the complete outcome name into something more file-system friendly.
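The sort of conversion needed might look like the following sketch. The exact rules would have to match however Moodle resolves its help-file names, so treat this purely as an illustration:

```python
import re

def outcome_help_filename(outcome_name):
    """Convert an outcome name into a file-system friendly help filename:
    lowercase it, collapse runs of non-alphanumerics into underscores."""
    slug = re.sub(r'[^a-z0-9]+', '_', outcome_name.lower()).strip('_')
    return slug + '.html'
```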

More complicated HTML. Another approach would be to add roll-overs, additional links etc. to the outcomes. This would require a more radical modification of the Moodle core, but not much more than the above. Not to mention the desire to separate attributes into groups.

Groupings of outcomes

Problem

It is likely that a course may have multiple different types of outcomes etc. to map against, e.g. institutional graduate attributes, discipline graduate attributes, course learning outcomes, program learning outcomes etc. There are two possible solutions (possibly complementary):

Show outcomes grouped by category. To allow the mapping of an activity/resource against all these different groupings, it would be useful to separate out the different outcomes by category. So you could have a visible separation.

Have a separate cross mapping. Mapping against all of these different outcomes might be somewhat tiresome, especially given a large amount of overlap between them. An approach that has been used is to produce a mapping between the different outcomes and a single point, and then only map activities/resources against that single point. Which of the different outcomes applies, can then be derived from the single point.
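The cross-mapping option is essentially a composition of two mappings. A sketch (all names invented) of deriving the full set of outcomes for an activity from activity-to-point and point-to-outcome mappings:

```python
def derive_outcomes(activity_points, point_outcomes):
    """Given activity -> points and point -> outcomes mappings, derive
    which outcomes each activity addresses. Illustrative only; the
    single "points" are the intermediate mapping targets mooted above."""
    derived = {}
    for activity, points in activity_points.items():
        outcomes = set()
        for p in points:
            outcomes.update(point_outcomes.get(p, []))
        derived[activity] = sorted(outcomes)
    return derived
```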

Possible solutions

Showing outcomes by category is going to need:

Some way of specifying categories/groups of outcomes. This probably implies an additional database table and an additional interface, or a modification to an existing interface (e.g. the import outcome process), to specify which category an outcome belongs to.

A separate interface minimises changes to core Moodle code; modifying existing interfaces is probably a more user-friendly approach, depending on how widespread this need is.

Modification to the form display to recognise the categories. This should/would be a fairly simple thing to do, given the information above and Moodle’s form library.

Let’s try a simple test: create two boxes of outcomes that contain a copy of the same outcomes, mainly to test if nested headers/boxes work. No, they don’t. You’d have to use a separate header label and then have separate boxes for each, perhaps a table? Though Moodle dislikes tables for layout…
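Independent of how the form ends up rendering the groups, the grouping itself is trivial once outcomes carry a category field (hypothetical, since stock outcomes have none):

```python
def group_by_category(outcomes):
    """Group outcome dicts by their (hypothetical) 'category' field,
    preserving the order in which categories first appear."""
    groups = {}
    for o in outcomes:
        groups.setdefault(o.get("category", "Uncategorised"), []).append(o)
    return groups
```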

Display a curriculum map for a course

Problem

One of the basic functions for curriculum mapping is to get a report that shows how widely (or not) the outcomes are represented within the course in terms of resources, activities and assessments. i.e. you want a visual representation of the outcome mappings.

Possible solutions

Well, it’s basically a report, but you might want it more interactive than that.

The existing outcomes report can do this to some extent. So an extension of this, or the addition of a mapping report, might fill the bill.

To a large extent this is a fairly standard web application. Get the data from the database and display it in an appropriate form.

You’ll be needing data from the following tables:

grade_items – given a course id, this will give you all the outcomes for the course that have been mapped to activities/resources and the ids of those activities and resources.

grade_outcomes – will give you details about the outcomes – name, description, and scale id.

scale – details about the scales

course_modules – more information about the module/activity, most importantly perhaps the section of the course in which it appears.

resource – for the same reason as modules
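Pulling those tables together, the report is essentially a join and a pivot. A sketch with plain dicts standing in for database rows – table and field names are my interpretations from above, so treat them as assumptions:

```python
def curriculum_map(grade_items, grade_outcomes):
    """Build {outcome name: [activities]} for a course from the tables
    listed above. Activities are labelled module:instance; a real report
    would resolve these via course_modules and resource."""
    names = {o["id"]: o["fullname"] for o in grade_outcomes}
    cmap = {}
    for item in grade_items:
        outcome = names.get(item["outcomeid"], "unknown")
        activity = f'{item["itemmodule"]}:{item["iteminstance"]}'
        cmap.setdefault(outcome, []).append(activity)
    return cmap
```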

Show outcome scale on activity/resource

Problem

Rather than simply “mapping” an outcome to a particular activity/resource, it may be useful to indicate how well, or to what level, the activity/resource maps to the outcome, i.e. use a scale rather than a simple check box.

This is a fairly major distinction between outcomes for curriculum mapping and outcomes for student progress.

Possible solutions

It’s looking like a separate set of “Mapping outcomes” might be the way to go. This would also get around the problem with the gradebook from above. This might mean duplicating the items table, or at least adding a flag to differentiate between mapping and progress outcomes.

Similarly, it could probably still use the standard outcomes “creation/import” process for both purposes.

Adding separate support would also help make it a bit easier to add curriculum mapping to an instance of Moodle by minimising disruption to the Moodle core.

Elements that don’t have outcomes

Problem

As outlined earlier there are some elements of a Moodle course site to which you can’t map outcomes. The outcomes don’t appear on the “edit” page. Those identified so far are labels and sections.

Sections might be useful if you wanted to map a course by weeks rather than by item. But perhaps not – you can generate such a map by aggregating the mappings of the contents.

Labels are a way of inserting HTML into sections. Currently they don’t have support for outcomes. I’ve already seen in one course how such labels can be used to specify tasks, such as reading.

Possible solutions

Well, labels are the only real problem. The form for labels is generated using moodle/course/modedit.php, the same as for anything else, and that is the place where outcomes are shown. So perhaps it’s just a switch that needs setting. Perhaps outcomes aren’t here because it isn’t expected that labels will be used in grades – i.e. student progress tracking.

Nope. The mod_form.php file for label actively turns off outcomes in a setting. Yep, set that to true and outcomes are there.

In light of the above, you’d probably have a separate set of outcomes for mapping, have this defined as a feature that modules can turn on/off and go down that route.