Paper Authors

Valentin Razmov
University of Washington

Valentin Razmov is an avid teacher, interested in methods to assess and improve the effectiveness of teaching and learning. He is a Ph.D. candidate in Computer Science and Engineering at the University of Washington (Seattle), expected to graduate in 2007. Valentin received his M.Sc. in Computer Science from UW in 2001 and, prior to that, a B.Sc. with honors in Computer Science from Sofia University (Bulgaria) in 1998.


Improving student learning has been a long-standing goal of educators across all disciplines. To improve effectively and methodically, one needs to know what works well (and should be sustained) and what does not work well (and may benefit from change). In a classroom environment, the two direct stakeholders, instructors and students, can both provide valuable perspectives on how things are going.

This paper presents an analysis of an extensive set of feedback data provided by students across
8 academic terms for an undergraduate introductory course in software engineering, taught at a
large US public university. The feedback was gathered via end-of-term course-specific
questionnaires, separate from and much more detailed than the typical university-sponsored
course evaluations. In total, 162 students gave feedback, and 5 different instructors were involved with the course; one of them, the author of this paper, was actively engaged in all 8 offerings.

To give the reader a sense of scale, the end-of-term student questionnaires featured 60-150 questions, mostly multiple-choice, along with some free-form short-answer questions. The questions covered the course structure, the instructors’ teaching approach, class sessions, readings, writing assignments, project experiences, tools, and the feedback that students received from instructors and peers; further questions aimed to capture student perceptions of what had worked well and what had not.

Among the encouraging results, students almost unanimously report feeling better prepared for industry careers after taking the course. They also increasingly come away with a heightened appreciation for the value of incremental project development and for many of the “softer” (non-technical, human) issues in engineering. In contrast, the main aspects that our analysis identifies as needing further improvement are the choice of course readings and the emphasis on quality assurance practices and techniques for dealing with ambiguity, both of which students tend to find unfamiliar and unnatural. We also share a few surprises found in the data.

Our main contributions are an analysis of the rich body of collected data and a distillation of the groups of questions that have yielded particularly useful results, categorized by target outcome: questions for evolving the course, for “reading” students’ moods, and for getting students to reflect on their experiences. Many of these questions may be broadly applicable.

The remainder of the paper is structured as follows. Section 2 elaborates on relevant aspects of
the course structure and describes our mechanism for collecting feedback data. Section 3
discusses what we have learned from our data analysis, first about the course and then about the process of conducting student surveys. We conclude in Section 4. To give the reader a concrete
view into the nature of our questionnaires, the Appendix contains the full list of questions from
the most recent end-of-term student questionnaire.