Teach Talk

Online Subject Evaluation:
One Step Toward More Effective Teaching

How can we improve the effectiveness of our undergraduate teaching? The Task Force on the Undergraduate Educational Commons spent time and energy addressing this essential topic, benefiting from the wisdom of successful colleagues and from the large body of research on pedagogy in higher education developed over the past quarter century. As a result, it recommended “Enhancing MIT’s capacity to improve the teaching skills of faculty members and graduate students” (see Report of the Task Force on the Undergraduate Educational Commons, p. 125).

Certainly the Task Force faculty heard chastening reminders about the low levels of information retained from even the best classes, and realized the intense effort it takes for us to adapt our expertise in ways that are comprehensible and vivid for undergraduates (and other non-specialists). But we also heard numerous examples of ways in which we could collectively enrich our teaching and the already strong culture of educational innovation at MIT. One such way forward is to provide ourselves with more informative, reliable data from our students.

Subject Evaluation Now

At present, MIT does not have one uniform subject evaluation system. Some departments and the Sloan School run their own online subject evaluations, while most other departments use the paper forms provided and processed through the Office of the Dean for Undergraduate Education (DUE) (led by one heroic staff person in the Office of Faculty Support). This latter “system” is actually a patchwork of processes, and does not provide a searchable archive for instructors to compare or discern patterns in student responses over the semesters.

The process of getting accurate information about instructors’ teaching assignments is onerous for departmental administrators and the DUE alike, and the paper form prohibits listing more than three instructors.

Student responses come only from the subset of students able to attend class on one particular day, and if students write with felt-tipped pens or in green ink, or make comments outside the boxes, the forms cannot be processed at all.

These are only the most obvious among a number of flaws in the current system that limit its usefulness.

Of course, any type of student evaluation form should be seen as only one indicator among many: a helpful but not sufficient feedback loop. Quantitative measures are perhaps less useful – especially for seasoned teachers – than are specific comments, and again the format of our current paper-based system limits these to the swiftly scrawled spaces on the back of the single sheet (in some cases not copied or returned to the particular teachers by departments or lead instructors, and often left blank by students rushing to another class or event). The complexity and time-consuming nature of the process – from information gathering at the start to Internet posting at the end – precludes offering the service at any time other than semester’s end. Thus we cannot, in the paper-based process, enact the Task Force’s recommendation of “Improving the breadth of coverage and the usefulness of end-of-term class evaluations” (Report, p. 126).

For these reasons and more, during the next few years MIT will be moving its central subject evaluation system online and away from paper-based forms. In parallel, efforts will be made to improve the quality of teaching data and the ease with which it is collected.

This is a multi-year joint project of the DUE and Information Services and Technology: in addition to those managing and administering the process within the Office of Faculty Support, additional staff from the Office of Educational Innovation and Technology (within DUE) and Student and Administrative Information Services (within IS&T) are contributing expertise and leadership.

The project team has been examining policies and practices at MIT and other institutions, including our peers (who are also in the process of moving their subject evaluations online); collecting comments and wish lists from offices, departments, committees, and individuals (among them the Office of Institutional Research, the Committee on the Undergraduate Program, undergraduate officers and administrators); and researching technical and infrastructure issues with members of Information Services and Technology and potential software vendors. This spring, a pilot will begin.

Schedule

Four departments – Physics, Chemical Engineering, Literature, and Philosophy – will be testing selected subjects in the online subject evaluation (OSE) and Who’s Teaching What (WTW) beta pilots this coming spring. All other departments using the DUE’s centralized process, as well as non-pilot subjects within the pilot departments, will continue to use the paper system this spring. (The pilot subjects will also have the paper option as a backup.) Interested departments will be able to join the production pilots in FY09 (academic year 2008-9). The paper forms will be phased out beginning in FY10 (academic year 2009-10).

Phase 1 (FY07)

OSE and WTW Improvements
• Discovery phase

Policy and Process Issues
• Identified issues
• Made recommendations

Phase 2 (FY08)

OSE and WTW Improvements
• OSE beta pilot with four departments
• WTW improvements and beta pilot

Policy and Process Issues
• Create governance
• Prioritize requirements

Phase 3 (FY09)

OSE and WTW Improvements
• OSE/WTW production pilot
• Make available to interested departments
• Paper forms still available
• More features

Policy and Process Issues
• Implement new policies

Phase 4 (FY10)

OSE and WTW Improvements
• Implement integrated production system and get all departments on board
• System enhancements
• Integrate SIS Vision project
• Phase out paper forms

Policy and Process Issues
• Revise and maintain policies

Expected Benefits

More thorough data collection. All students will have the opportunity to participate in subject evaluation, not just the ones who show up on a specific day during the last two weeks of class. The online survey instrument will allow for the addition of department- and instructor-specific questions. Qualitative comments can be matched with a particular instructor and will most likely be richer; certainly students will have the opportunity to provide more detailed, thoughtful comments.

Simplified administration. Many departments will be able to eliminate activities such as typing comments, copying and distributing completed forms, distributing and processing their own paper evaluations for TAs, and creating longitudinal and comparative reports. Even departments that run their own online systems have expressed interest in moving the administrative and technical burden to a centralized system. The improved WTW functionality will make it easier to enter, search for, and maintain teaching data. Future integration with Stellar, scheduling, and other systems will further simplify the evaluation process.

Better reporting capabilities. Faculty will be able to receive individual electronic reports, including open-ended comments, quickly. It will be possible to report accurately who is teaching what to whom, and to perform longitudinal analysis. The variety and depth of the reports will be useful in many ways, such as curriculum planning and syllabus modification, and the data will be more accurate and uniform when used (as they currently are) in tenure and promotion processes, department accreditation reviews, and institutional research.

Policy Issues

In pursuing its research, the project team has found that questions of policy are even more critical than the technical challenges of developing a subject evaluation solution.

Some of the questions the team has raised include:

How will students’ qualitative comments be distributed, and to whom?

What incentives will be effective and appropriate to encourage students to complete evaluations?

Should students who have dropped subjects or registered as Listeners be asked to complete evaluations?

Should the evaluations be anonymous or confidential? Do we wish to keep the results confidential but available for purposes of institutional research?

What are the ramifications of having students complete their evaluations outside of class? Will ratings be affected?

To answer these questions, the Office of Faculty Support has formed a Subject Evaluation Policy Advisory Committee composed of faculty, students, and staff that will begin deliberations in spring 2008. This spring’s subject evaluation beta pilot will be limited to online versions of the current paper forms and reports, in order to minimize variables when we assess its effectiveness and to provide the time required to articulate and prioritize the multiple policy and process issues that must be decided before expansion of the system.

This project’s ultimate goal is to improve teaching and learning at MIT. It can only benefit us to develop the means to present timely and accurate feedback from our students. How we value and learn from this information is up to the faculty, and requires our thoughtful attention and communal effort.