Dean of the Faculty Roger Brooks speaks with John Nugent (left). Photo by Brandon Mosley.

Dean of the Faculty Roger Brooks speaks with John Nugent, senior research analyst and special assistant to the president, on how Connecticut College assesses student learning and outcomes.

Edited by Lisa Brownell. Illustration by Brad Yeo.

Nugent: How would you say that teaching and learning have changed in the time you have been at the College?

Brooks: I think there's been a real sea change in pedagogy nationwide, and it's reflected at Connecticut College. The change has been from an assumption that teachers teach to the assumption that teachers make it possible for students to learn. And if you adopt a learning-centered model for what happens in the classroom, all sorts of things change. For example, people used to be very concerned with “getting through the material.” Now I think that kind of coverage has shifted to “Have I gotten the students to understand the material?” It often is a tradeoff, perhaps covering less material to get deeper understanding. The other big change is that our students have a different expectation of what happens in the classroom, and that has made us more active teachers. It's not nearly as common for faculty members to simply stand and give a lecture; instead they tend to give mini-lectures and have students break into discussion groups. You have a mixture of different pedagogies in the classroom.

Nugent: “Assessment” is kind of a buzzword in higher education. What does that word mean to you?

Brooks: In the simplest form, “assessment” is how you improve teaching and learning at the College. It's a feedback loop. You start by identifying what students are going to learn from particular courses and programs, a major or certificate program. Then you gather information from the students about what they've learned in their academic programs. This can be done by direct interviews or looking at the students' output; then you take that information back to the design of the program, major or course. So it's a simple loop of establishing goals, gathering information and data, and then using that data to do a better job the next time you offer the same course or program.

Nugent: What would you tell people who ask, “Isn't assessment what professors do with the practice of grading and giving feedback?”

Brooks: Yes, that is assessment, but it is only on one level. As a student is getting grades, that data can help them measure how well they've performed on that particular assignment. But typically, when we think about assessment at the College, it's in broader terms. It isn't just about what happens in an individual course but in an entire major. Do the required courses and other requirements as a combined program produce the kind of learning outcomes that we hold for each major? … We have to think about our general education requirements, which are College-wide. Every time that we have examined general education requirements in the 20 years I have been at the College, we have renovated them and added something new on the basis of how students are learning and how faculty are teaching. First-year seminars are a good example.

Nugent: So it sounds like it's important that those goals be available for all to read in the catalog and on the Web site, for example.

Brooks: Yes, and this year we've asked every major to specify their own learning goals. The purpose is to let people know, even before a student begins a course of study, what kind of outcomes they are likely to have.

Nugent: What forms does assessment take at Connecticut College? How do you start to learn about the higher levels of general education outcomes?

Brooks: Well, grades and comments are certainly some of the most important ways we have of giving students feedback. One of the things that distinguishes us as a liberal arts college from all other types of schools is the close working relationship between faculty and students. For example, professors will offer to review a first draft of a paper and are happy to review the second draft. Those kinds of comments are the fundamentals of how we let students know what our academic standards are and how to constantly improve. Even our best writers, for example, ought to be going to the Writing Center on campus and getting advice on how to write better. Course evaluations provide the same kind of feedback to instructors, and we use peer evaluations as well.

At this point I'd actually like to turn the question back to you, John, because one of the most critical ways we use assessment at the College is through institutional research, which is your area. You and I have worked together on several national surveys.

Nugent: Since 2000 we've been participating as a College in the National Survey of Student Engagement, or NSSE, as it is called. That study emerged from the Department of Education at Indiana University as an alternative to rankings such as the U.S. News and World Report rankings, which largely measure inputs, such as money and student test scores, for example. But those rankings don't tell a lot about what students are learning in the classroom or what kind of outcomes they are experiencing. So the NSSE asks students what they have been doing in their coursework, what they are writing, what they've been doing outside the classroom, what kinds of interactions they've been having with their faculty and with other students.

Brooks: We've also worked very closely on the Wabash National Study of Liberal Arts Education, a survey that is broad enough to give us good comparative data from peer institutions. Both NSSE and Wabash give us that comparative national data. Added to that and our work with individual departments, we have a host of other kinds of assessment. We have seniors who do capstone projects, honors theses, art exhibits, recitals, presentations and student teaching portfolios. All of those are opportunities, at the end of a student's four years, to plan a major intellectual project and get feedback all the way along the route to completion. We do foreign language competency exams as well. And we do some alumni surveys periodically, and we look at national databases like the National Student Clearinghouse and the National Science Foundation to track how many of our graduates earn advanced degrees.

Nugent: And from what you described as a feedback loop — establishing goals and then gathering data to use to improve the academic program in various ways — it sounds like it's critical to provide the information back to faculty members so that they can make adjustments as necessary. How does that happen?

Brooks: It happens through course evaluations, of course. After the students write up the evaluations and then hand them in, the professors have a chance to review them and often pick up suggestions in areas that need to be improved or ideas that they'd like to try. … The Joy Shechtman Mankoff Center for Teaching and Learning here on campus runs workshops on best practices, but what's really been interesting in the last several years is that those workshops have been strongly influenced by the data that we gathered right here.

Nugent: I wanted to return a little bit to the Wabash study. We've started receiving our results back, and I know you and I have been interested to see what can be learned from that kind of broad-based, multi-year study. Could you explain how the study works?

Brooks: The Wabash study has really become the gold standard for looking at liberal arts education, and if you think about it, it measures two things and then tries to correlate them. On the one hand, it asks what kinds of outcomes we hope students will get out of a liberal arts education: critical thinking, moral reasoning, attitudes towards diversity, lifelong learning, all of those kinds of things that we commonly say a liberal arts education gives you. So it measures outcomes like those, and it also records the kinds of programs and educational experiences students have at the College. There are now about 50 institutions involved in the Wabash study; we were one of the original 18. It attempts to make the correlations between educational experiences and those outcomes. It's been a very successful study. It surveys students at the beginning of their first year and again at the end of their first year. We're in Year Four so we'll be looking at seniors, and then a year from now, we'll be getting the data that does a four-year comparison.

Nugent: And what have we learned so far from the Year One data?

Brooks: Well, one thing that we notice when we go to the Wabash meetings is that many of the other schools in the study would love to have our data, because we're doing a great job and our retention numbers are very, very good. Other schools are telling us that they wish they were in the situation we are in. All that said, I think that the data show that we could probably challenge our students a bit more and signal very high expectations to them. Our students would like to have more outside-the-classroom faculty interactions. That's why we instituted this year a faculty-student lunch program where faculty members can eat once a week with students in the College dining halls.

Nugent: So what about measuring outcomes? What do our students do after they graduate?

Brooks: About a quarter of our students go on to grad school immediately, some full-time and some part-time. About 75 percent go to work. But within a decade or so, half of our students have earned an advanced degree. That means our students have picked up on the idea of lifelong learning, and they understand that it's really important to continue your education.