
Friday, July 29, 2016

Forget facts - that's the learning of the past. The learning of the future is soft skills. But how do you assess them? Peter Williams argues that we can approach the problem through learning analytics. Along the way he also provides the best theoretical introduction to authentic assessment that I've read yet. He also flags Lombardi's definition of authentic learning:

Real-world relevance: the need for authentic activities within a realistic context.

Ill-defined problem: confronting challenges that may be open to multiple interpretations.

Sustained investigation: undertaking complex tasks over a realistic period of time.

Multiple sources and perspectives: employing a variety of perspectives to locate relevant and useful resources.

Collaboration: achieving success through division of labour and teamworking.

Reflection (metacognition): reflection upon individual and team decisions.

Interdisciplinary perspective: encouraging the adoption of diverse roles and thinking.

So this paper is worth reading for the above reasons. But am I convinced by his pitch for learning analytics as the way forward? No - it's completely fanciful and unsupported by any evidence. Which makes me feel better - we agree on the problem, and it's not just me being thick because I can't quite figure out the solution.

Assessing collaborative learning: big data, analytics and university futures. Assessment & Evaluation in Higher Education 28 Jul 2016 doi: 10.1080/02602938.2016.1216084
Assessment in higher education has focused on the performance of individual students. This focus has been a practical as well as an epistemic one: methods of assessment are constrained by the technology of the day, and in the past they required the completion by individuals under controlled conditions of set-piece academic exercises. Recent advances in learning analytics, drawing upon vast sets of digitally stored student activity data, open new practical and epistemic possibilities for assessment, and carry the potential to transform higher education. It is becoming practicable to assess the individual and collective performance of team members working on complex projects that closely simulate the professional contexts that graduates will encounter. In addition to academic knowledge, this authentic assessment can include a diverse range of personal qualities and dispositions that are key to the computer-supported cooperative working of professionals in the knowledge economy. This paper explores the implications of such opportunities for the purpose and practices of assessment in higher education, as universities adapt their institutional missions to address twenty-first century needs. The paper concludes with a strong recommendation for university leaders to deploy analytics to support and evaluate the collaborative learning of students working in realistic contexts.

Tuesday, July 26, 2016

Fostering oral presentation performance: does the quality of feedback differ when provided by the teacher, peers or peers guided by tutor? Assessment & Evaluation in Higher Education 21 Jul 2016 doi: 10.1080/02602938.2016.1212984
Previous research revealed significant differences in the effectiveness of various feedback sources for encouraging students’ oral presentation performance. While former studies emphasised the superiority of teacher feedback, it remains unclear whether the quality of feedback actually differs between commonly used sources in higher education. Therefore, this study examines feedback processes conducted directly after 95 undergraduate students’ presentations in the following conditions: teacher feedback, peer feedback and peer feedback guided by tutor. All processes were videotaped and analysed using a coding scheme that included seven feedback quality criteria deduced from the literature. Results demonstrate that teacher feedback corresponds to the highest extent with the majority of the seven identified feedback quality criteria. For four criteria, peer feedback guided by tutor scores higher than peer feedback. Skills courses should incorporate strategies focused on discussing perceptions of feedback and practising providing feedback to increase the effectiveness of peer feedback.

"The push towards anonymous, online marking can mean that personal feedback sessions are an incompatible part of the assessment and feedback loop. Anonymous marking is disruptive to the process because it prevents the tutor from giving connected guidance to students on their progress..."

Worth a read then.

Thanks, but no-thanks for the feedback. Assessment & Evaluation in Higher Education, 05 Jul 2016 DOI: 10.1080/02602938.2016.1202190
Feedback is an emotional business in which personal disposition influences what is attended to, encoded, consolidated and eventually retrieved. Here, we examine the extent to which students’ perceptions of feedback and their personal dispositions can be used to predict whether they appreciate, engage with and act on the feedback that they receive. The study is framed in psychological theories of mindset, defensive behaviours and new psychometric measures of the psychological integration of assessment feedback. Results suggest that, in this university population, growth mindset students were in the minority. Generally, students are fostering self-defensive behaviours that fail to nurture remediation following feedback. Recommendations explore the implications for students who engage in self-deception, and the ways in which psychologists and academics may intercede to help students progress academically by increasing their self-awareness.

Tuesday, July 05, 2016

We over-assess students because it is difficult to motivate them to engage without frequent deadlines. But what are the true effects of frequent assessment? This new paper describes a well-conducted study of frequent assessment on Dutch engineering students (n=219). Using principal component analysis, the authors identified and analysed four elements of assessment:

Value - how much value students attribute to frequent assessments: assessment is popular with students (= "value for money"?)

Formative function - no evidence that frequent testing had any formative value!

Positive effects and Negative effects - no strong cohort-wide evidence for either of these (although they may affect individuals).

Summary: Assessment is popular with students but has no demonstrable educational value!
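For readers unfamiliar with the technique, the paper's four factors come from the kind of principal component analysis sketched below. This is an illustration on synthetic Likert-style data, not the authors' instrument: the item count, loadings and simulated responses are all invented, and only the cohort size (n=219) comes from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_students = 219   # cohort size reported in the study
n_items = 12       # assumed questionnaire length (hypothetical)

# Simulate correlated responses: items cluster into 4 latent factors,
# 3 items per factor, plus item-level noise.
latent = rng.normal(size=(n_students, 4))
loadings = np.repeat(np.eye(4), 3, axis=0)           # shape (12, 4)
responses = latent @ loadings.T + rng.normal(scale=0.5,
                                             size=(n_students, n_items))

# PCA via eigendecomposition of the item correlation matrix.
corr = np.corrcoef(responses, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]             # descending order

# Kaiser criterion: retain components with eigenvalue > 1.
n_retained = int((eigvals > 1).sum())
print(n_retained)
```

With this synthetic four-factor structure, the Kaiser criterion recovers four components - the same number of factors (value, formative function, positive effects, negative effects) that the authors extracted from their questionnaire.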

Students’ perception of frequent assessments and its relation to motivation and grades in a statistics course: a pilot study. Assessment & Evaluation in Higher Education 03 Jul 2016 doi: 10.1080/02602938.2016.1204532
This pilot study measures university students’ perceptions of graded frequent assessments in an obligatory statistics course using a novel questionnaire. Relations between perceptions of frequent assessments, intrinsic motivation and grades were also investigated. A factor analysis of the questionnaire revealed four factors, which were labelled value, formative function, positive effects and negative effects. The results showed that most students valued graded frequent assessments as a study motivator. A modest number of students experienced positive or negative effects from assessments and grades received. Less than half of the students used the results of frequent assessments in their learning process. The perception of negative effects (lower self-confidence and more stress) negatively mediated the relation between grades and intrinsic motivation. It is argued that communication with students regarding the purpose and benefits of frequent assessments could mitigate these negative effects.

Monday, July 04, 2016

It's always a pleasure when a student working on a final year project produces a piece of work which is worthy of a wider audience beyond examiners. I was in this fortunate position this year. Unfortunately, I was not in the position of being able to spend months of my time on the tortuous process of negotiating a paper into a traditional journal, so we decided to go down the Open Access route. My first thought was to try the relatively new bioRxiv, but the paper was rejected by them because they do not publish theses (so only partly open access then). After that it was back to the trusty figshare: