Creating an Academy of Learning

Spring 2010

Most agree that schools have a special obligation to study the effectiveness of their educational programs and seek ways to improve student learning. Derek Bok, the former president of Harvard University, has persuasively argued that schools must envision themselves as “learning organizations.” Like hospitals and businesses, Bok writes, schools should “engage in an ongoing process of improvement by constantly evaluating their performance, identifying problems, trying various remedies, measuring their success, discarding those that do not work, and incorporating those that do.” Yet, as Bok goes on to note, few schools are willing to “reexamine familiar practices and search for new methods that could serve the purpose better,” and fewer still provide faculty the incentives or the means to conduct such studies.

Nowhere is the gap between best and actual practice greater than in the area of classroom assessment. Despite the pioneering research and forceful advocacy of educators like Ted Sizer, Grant Wiggins, and Howard Gardner (among others) and the widely accepted belief that authentic assessment is a necessary precondition for student learning, many schools ignore this research and continue to assess in ways that fail to fully engage their students.

At St. Andrew’s School (Delaware), we have taken up Bok’s challenge and sought to become what our head of school, Tad Roach, calls an “academy of learning.” Our efforts to assess in more authentic and nourishing ways have been strengthened and given focus not only by our recently revised (2006) mission statement, which speaks of our commitment to help students “do the work of scholars, artists, and scientists,” but also by two other recent initiatives: our schoolwide use of “teaching portfolios,” in which teachers share their assessment practices with one another in a process that resembles scholarly “peer review,” and our four-year involvement with a new assessment tool, the College and Work Readiness Assessment (CWRA). The CWRA, already in use at a handful of secondary schools and hundreds of colleges and universities throughout the country, measures students’ ability to think critically, reason analytically, solve problems, and write — skills widely acknowledged across the educational community to be essential for work and citizenship.

Focus on Assessment

Not long ago, a parent shared with me a short assignment her daughter had been given in elementary school. It pictured a little boy fishing in a tank in which four fish, two striped and two plain, were swimming. “What,” the prompt asked, “are the chances that he will catch a striped fish?”

“Bad,” my friend’s daughter wrote, ignoring the obvious answer. “After all, the striped fish are moving away from the hook and the boy doesn’t have any bait on it.”

The teacher had marked this response wrong.

Of course, the only mistake my friend’s daughter had made was to actually take this assignment seriously, look carefully at the picture — the striped fish were indeed swimming away from the unbaited hook — and think about it. Instead of providing the “correct” answer and moving quickly to the next problem, she offered a short essay with a wonderfully nuanced and apt hypothesis.

The problem with school, as this anecdote might suggest, is not that we assign too much work to students, though we sometimes do. Nor is it that we ask too little of them, as some proponents of “standards-based” education would have it. It’s that we too often assign our students the wrong kind of work — work that is unworthy of their imagination, intelligence, and ambition and that bears little resemblance to the intellectual tasks they will face in the future. We ask them, quite simply, to count fish.

Such assignments — essays that have no relation to the kinds of writing we regularly encounter in print, laboratory assignments that read like cookbook recipes, math problems that ask for the mechanical application of formulae rather than problem solving — are, unfortunately, all too common, even in the best schools. As Denise Pope has argued in her book Doing School: How We Are Creating a Generation of Stressed Out, Materialistic, and Miseducated Students, we often “fail to challenge [students] with tasks that nurture and sustain their desire to learn” — with the result that many students are, as Pope documents, more susceptible to depression, anxiety, and sleeplessness; more likely to compromise their beliefs and values; and more likely to take a short-term, strategic approach to their learning (as measured by such tactics as cheating, plagiarizing, and contesting grades).

The solution to what Pope memorably calls the “predicament of doing school” is clear. What students want, Pope reports, “are more opportunities to do real work as opposed to game-playing.” When students are challenged with authentic questions that demand reflection and thought, they are happier, healthier, and better prepared for the rigors of college, work, and citizenship.

Our goal at St. Andrew’s, therefore, has been a simple one: to ensure that the work we ask of students is engaging, nourishing, and consistent with the best practices of other teachers and educators throughout the country, and to initiate a process where individual teachers — and the faculty as a whole — can study the effectiveness of their teaching.

Portfolios and Peer Review

The purpose of the teaching portfolio is to encourage faculty members to shape their courses around meaningful student performances and to shift faculty deliberation from the question of what students should know to the more important question of what students should be able to do. The charge is a simple one: we ask teachers, working individually or in teams, to gather evidence that they are assessing in dynamic and creative ways. The portfolio, as we envision it, is much more than a compilation of course materials that is filed away never to be seen again; it is a careful selection of what Howard Gardner calls “signature” assignments, which, taken together and arranged chronologically, make a compelling case for the value, shape, and purpose of the course. In this way, the portfolio becomes, as Ken Bain, author of What the Best College Teachers Do, puts it, the “pedagogical equivalent of a scholarly argument” that answers the following questions:

• What do I assess? What enduring understandings do I hope to impart?

• How do I assess? What do I expect students to be able to do? What kinds of assessments do I use to collect evidence of student understanding?

• How frequently do I assess?

The purpose is not to mandate a single form of assessment — quite the opposite. We want to encourage innovation and honor the diversity of practice. Yet, we also seek to identify and institutionalize best practices. These may appear to be conflicting goals, but they are not. Successful teachers are creative, but also reflective; they innovate, but also, recalling Bok, discard practices that don’t work and incorporate those that do.

We ask faculty to keep three essential audiences in mind as they compile their portfolios: (1) colleagues, including department chairs; (2) leading educators and teachers in their fields at both the secondary and collegiate levels; and (3) students.

Colleagues

“The growth of any craft,” writes Parker Palmer, “depends on shared practice and honest dialogue among the people who do it.” Yet, too often, teachers, even in the same department, work in isolation from each other. Indeed, Palmer has described teaching as “the most privatized of all professions.” To discourage teachers from walling themselves inside their own classrooms, we ask them to share their portfolios with colleagues, department heads, and, whenever possible, the broader faculty. In this way, we are able to formalize dialogue among teachers, encourage transparency, and provide a means by which individual teachers can collect feedback about their assessment practices. These kinds of exchanges enrich and deepen conversation within the school. They also provide department chairs with an important perspective on the work of their departments. In reviewing portfolios, department chairs have a detailed map of student performances that allows them to ensure that the work we ask of students is horizontally coherent across sections and vertically integrated up and down grade levels.

Educators and Professors

We ask teachers to study their own practices against those of college and university professors — and in two ways. First, we ask teachers to research assessment practices at other schools and colleges. Because so many course materials are now available online — think, for example, of MIT’s OpenCourseWare, one of many online resources available to schools — faculty members can now, with a few keystrokes, explore how colleagues at the college and university level teach, assess, and evaluate student work. Second, we ask departments to arrange biennial consultations with leading educators, teachers, and professors from within their disciplines. These educators visit our school, observe classes, and review our assessment practices. As we no longer offer Advanced Placement courses, this kind of review has been crucial for us, since it ensures that our most advanced courses are comparable in depth and rigor to similar courses at the college level.

Students

Finally, we ask teachers to investigate the effectiveness of their teaching and assessment through the use of twice-yearly student course evaluations. These evaluations offer students the opportunity to reflect on the progress of their own learning and provide faculty the information they need to assess the effectiveness of their teaching. Some object to the use of student evaluations on the grounds that students don’t know the subject and, hence, are unqualified to evaluate how well it is taught. There is some truth to this. But as a number of recent books have made clear — most notably Richard Light’s Making the Most of College: Students Speak Their Minds and Kathleen Cushman’s Fires in the Bathroom: Advice for Teachers from High School Students — students are very knowledgeable about their own learning. As Jennifer O’Neil, a member of our arts department, once commented to me, “Students understand their own learning and can provide faculty with valuable feedback — if they are asked the right questions.”

The College and Work Readiness Assessment

In early September 2009, soon after our ninth-grade students arrived on campus, we asked them to sit for the College and Work Readiness Assessment. Given our schoolwide emphasis on “authentic” forms of assessment, that might seem like an odd choice. Why, given the tremendous skepticism surrounding the use of “standardized” testing and the increasing, sometimes unreasonable pressures the tests exert on high schools, would we embrace such an exam? It is true that many forms of standardized testing are crude, reductive, and, from the point of view of teachers and students, dispiriting — the educational equivalent of factory work. Yet, not all standardized assessments ask for standardized thinking, and some, like the College and Work Readiness Assessment, support, rather than undermine, the desire of teachers to assess in ways that are creative, dynamic, and transformative.

Unlike other forms of tests that rely on multiple-choice formats and focus on the lower end of Bloom’s taxonomy, the College and Work Readiness Assessment draws on the best available research on student learning and assessment and confronts students with authentic questions — the kinds of problems they will encounter as citizens and professionals. In one prompt, for example, students are asked to advise a candidate for mayor on whether increased policing or increased funding for drug rehabilitation is the best approach to reducing crime in a small city. In constructing their written responses, students must draw upon a small library of documents, representing a range of material: newspaper articles, summaries of studies, tables, charts, and research briefs.

Students are evaluated not on whether they have offered a “correct” or “incorrect” answer — credible arguments can be advanced in multiple directions — but on the persuasiveness of their written response and their ability to evaluate and marshal conflicting, often ambiguous, and sometimes unreliable evidence in cogent and convincing ways. And in responding, students must draw on skills they have learned in a range of classes — the ability, for example, to think skeptically about numbers, statistics, and other forms of quantitative evidence (math); to shape persuasive arguments (English); to adjudicate between competing hypotheses (science); and to evaluate, interpret, and analyze source material (history), among others. In this way, the tasks ask for what educational researchers call “transference” — the ability to apply what students have learned to a new and unfamiliar problem.

By focusing on a broad range of disciplinary skills, the performance task also imparts to the work of faculty a sense of common purpose, giving shape and coherence to the educational program. The history of independent schools, we should remember, is largely one of programmatic expansion, as schools, modeling themselves upon colleges and universities, have dramatically expanded academic departments and the offerings within them. Consider, for example, the expansion of the AP program. At its inception in 1952, there were 11 subject exams; there are now 38 in 22 subject areas. One might argue that such expansion has been healthy, as schools incorporate into their curricula new fields of study, but it has come at a cost, particularly for students who sometimes fail to see how their work in one course connects to their work in another. As Gerald Graff has persuasively argued, students often experience school as a “disconnected series of courses that convey wildly mixed messages.” Forms of assessment like the College and Work Readiness Assessment allow students to see the underlying similarities between different disciplines, and allow teachers to emphasize outcomes that we all agree are essential.

Most importantly, the College and Work Readiness Assessment allows schools to study whether or not they are teaching effectively toward those outcomes. Like the Assessment of Inclusivity and Multiculturalism (AIM), sponsored by the National Association of Independent Schools (NAIS), and the High School Survey of Student Engagement (HSSSE), developed by the Indiana University School of Education, the College and Work Readiness Assessment works on the institutional level. In our case, for example, we test our students in each of their four years. By the spring of 2010, we will have — thanks to the institutional reports provided by the Council for Aid to Education — a rich, longitudinal portrait of student learning at the school. This will allow us to see, by grade and in aggregate, how much our seniors have learned over the course of their careers in the important areas of critical thinking, analytical reasoning, problem solving, and writing; whether our students are performing “above,” “at,” or “below” where they would be expected to perform given their incoming ability; how they compare to students at other private and public schools; and how they compare to college freshmen throughout the country.

That the results allow for comparisons across schools will no doubt make some uneasy. But the purpose of the College and Work Readiness Assessment is not, like other forms of high-stakes testing, to sort and rank. Rather, its purpose is to provide schools with information about how much students learn — or fail to learn — over the course of their careers. Schools need this kind of information. Given rising tuitions — a trend that is likely to continue — it is reasonable for parents to expect schools to study their work and, whenever possible, collect information about the effectiveness of their educational program. Of course, many things we value as educators — resiliency, courage, integrity, empathy — are difficult, if not impossible, to measure with precision. The real test of a school is not how its students perform while they are at school, but the kinds of lives they lead after graduation. Are they active and involved in their communities? Have they put their own educations to work in the service of others? Are they doing what Howard Gardner and his team at Harvard call “good work” — work that is excellent in quality, socially responsible, and meaningful to its practitioners?

The difficulty of measuring these kinds of outcomes should not, however, stop schools from seeking to assess those things they can measure. The challenge schools face is whether they can provide comprehensive and credible evidence of rigorous student learning and also maintain their commitment to forms of assessment that are engaging, transformative, and consistent with the research on best practices. Our use of teaching portfolios and our work with the College and Work Readiness Assessment have convinced us that this is a challenge independent schools can meet. Together these initiatives have allowed us to deepen the conversation within the school about the art of assessment, provided us with the information we need to study the effectiveness of our own teaching, and institutionalized professional learning at the school, helping us become “an academy of learning.”

References

Ken Bain, What the Best College Teachers Do (Harvard University Press, 2004).

Derek Bok, Our Underachieving Colleges: A Candid Look at How Much Students Learn and Why They Should Be Learning More (Princeton University Press, 2006).

Kathleen Cushman, Fires in the Bathroom: Advice for Teachers from High School Students (The New Press, 2003).

Howard Gardner, Five Minds for the Future (Harvard Business School Press, 2006).