Workshops


Sunday 9th September 2012, 9.30 – 17.00

The BABELnot Project

Facilitator: Raymond Lister

No cost, Morning and afternoon teas and lunch supplied

On Sunday 9th September, the day preceding the ICER 2012 conference proper, there will be a workshop on the specification of academic standards for programming courses. A system for specifying such courses would be useful to Computing Education Research as it would allow for better comparisons of courses across universities.

It is common for computing degrees to incorporate a sequence of two or three courses to teach programming. (In some parts of the world, these one-semester courses are known as subjects, units, or papers.) The learning objectives of these programming courses are often poorly defined. For example, consider the following learning objective, taken from the description of an actual introductory programming subject at a university:

“On successful completion of this subject, the student will be able to … Demonstrate a working knowledge of the basic constructs in the object-oriented language Java.”

Which constructs are the “basic” constructs? What does it mean to have a “working knowledge”, and how does a student “demonstrate” it?

The BABELnot project was started in Australia in 2011. The project's aim is to develop a framework for describing the learning outcomes of programming courses with greater precision, and to develop an approach for mapping between those learning outcomes and exam questions. Intended project outputs are:

A document serving as an archive of exam questions, with meta-tags describing each question, which map the question on to the framework

Exam performance data from real students for a subset of the archived exam questions

The purpose of this workshop is to introduce the BABELnot project to members of the International Computing Education Research community and to invite their active participation in BABELnot.

Several BABELnot papers have already been published. Below is a brief summary of each of those papers, with a URL for each full paper.

Abstract: The ICT degrees in most Australian universities have a sequence of up to three programming subjects, or units. BABELnot is an ALTC-funded project that will document the academic standards associated with those three subjects in the six participating universities and, if possible, at other universities. This will necessitate the development of a rich framework for describing the learning goals associated with programming. It will also be necessary to benchmark exam questions that are mapped onto this framework. As part of the project, workshops are planned for ACE 2012, ICER 2012 and ACE 2013, to elicit feedback from the broader Australasian computing education community, and to disseminate the project’s findings. The purpose of this paper is to introduce the project to that broader Australasian computing education community and to invite their active participation.

Abstract: This paper reports the results of a study investigating the complexity of exam questions using a purpose built classification scheme. The scheme, devised for exams in introductory programming courses, assesses the overall difficulty of each question along with other measures of complexity such as the linguistic complexity of the question, the length of code involved in the question and/or answer, and the complexity of the programming concepts required to answer the question. We apply the scheme to 20 introductory programming exam papers from five countries, and find a great deal of variation among the exams. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. Most of the other complexity measures correlate with overall difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We conclude by discussing what we can learn from these findings.

Abstract: In an earlier paper, Corney, Lister and Teague (2011) presented research results showing relationships between code writing, code tracing and code explaining, from as early as week 3 of semester. We concluded that the problems some students face in learning to program start very early in the semester. In this paper we report on our replication of that experiment at two institutions, one of which is the institution from the original study. In some cases, we did not find the same relationship between explaining code and writing code, but we believe this was because our teachers discussed the code in lectures between the two tests. Apart from that exception, our replication results at both institutions are consistent with our original study.

Abstract: Recent research on novice programmers has suggested that they pass through neo-Piagetian stages: sensorimotor, preoperational, and concrete operational stages, before eventually reaching programming competence at the formal operational stage. This paper presents empirical results in support of this neo-Piagetian perspective. The major novel contributions of this paper are empirical results for some exam questions aimed at testing novices for the concrete operational abilities to reason with quantities that are conserved, processes that are reversible, and properties that hold under transitive inference. While the questions we used had been proposed earlier by Lister, he did not present any data on how students performed on these questions. Our empirical results demonstrate that many students struggle to answer these problems, despite their apparent simplicity. We then compare student performance on these questions with their performance on six "explain in plain English" questions.

Abstract: A typical Computer Science degree is three to five years long, with four to six subjects per semester and two semesters per year. A student enrolled in such a degree is expected to learn both discipline-specific skills and transferable generic skills. These skills are to be taught in a progressive sequence through the duration of the degree. As the student progresses through the subjects and semesters of a degree, their skill portfolio and competence level for each skill are expected to grow. Effectively modeling these curriculum skills, mapping them to assessment tasks across subjects of a degree, and measuring the progression in learner competence level is, largely, still an unsolved problem. Previous work at this scale is limited. This systematic tracking of skills and competence is crucial for effective quality control and optimization of degree structures. Our main contribution is an architecture for a curriculum information management system to facilitate this systematic tracking of skill and competence level progression in a Computer Science context.