Tag Archive for assessment

A new article from EdSurge describes an MIT effort to design assessments for next generation learning. “Playful assessment” captures curiosity, creativity and critical thinking within the natural context of student learning activities. “It emphasizes recognizing and reflecting on what works and what doesn’t, and in response, identifying skills to improve on moving forward.”

While such habits of mind are recognized as essential for today’s learners and are frequently embedded in curriculum and lesson design, they are also difficult to systematically and accurately assess. Instruments such as the Mission Skills Assessment and SSAT Character Skills Snapshot have emerged in recent years but are disconnected from classroom curricula. Effective teacher assessment is needed to both measure and deepen lasting next generation learning for students.

Academic Technology Director Jeff Tillinghast and I have co-authored an article for Curriculum In Context, the journal of the Washington State Association for Supervision and Curriculum Development, an ASCD affiliate. We wrote a practitioner’s view of how our teachers use contemporary computing technologies to provide specific, rapid, and varied feedback to students and then accordingly adjust individual student instruction. Read the article (PDF) or access the full issue. Many thanks to Seattle Pacific University professor David Denton for inviting us to contribute to the journal.

On reviewing last winter’s issue of Independent School Magazine, I was struck by stories of schools conducting rigorous studies of their own practice, particularly quantitative studies. Granted, the issue theme was “Assessing What We Value,” but turning the lens of assessment inward onto school practice represented a significant additional step in my mind.

In the article, “The Role of Noncognitive Assessment in Admissions,” the author described several schools that are collecting new kinds of information about students: traits that might help predict school success. One school (Choate Rosemary Hall) found statistically significant correlations between self-efficacy, locus of control, and intrinsic motivation (as reported by students) and GPA.
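The kind of correlation the Choate study reported is straightforward to compute once a school has the data in hand. Here is a minimal sketch using entirely made-up numbers (the study's actual data and instruments are not reproduced here): Pearson's r between self-reported self-efficacy scores and GPA, plus the t-statistic used to judge significance.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical self-efficacy scores (1-5 scale) paired with GPAs
efficacy = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.8, 3.0]
gpa      = [2.9, 3.6, 2.7, 3.8, 3.4, 2.5, 3.9, 3.1]

r = pearson_r(efficacy, gpa)
# t-statistic for testing r against zero, with n - 2 degrees of freedom
t = r * math.sqrt((len(gpa) - 2) / (1 - r ** 2))
print(f"r = {r:.2f}, t = {t:.2f}")
```

A statistics package would add the p-value, but even this much is enough to start asking whether a noncognitive trait tracks academic outcomes at one's own school.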

The 2013 E. E. Ford grant awards included one to Castilleja School to support the development of “meaningful and valid assessments of experiential learning, to apply these tools to improve the effectiveness of innovative experiential programs, and to share these best practices with other educators.” The effort is backed by $1 million, three-quarters of which was raised by the school.

I am following a similar path here at U Prep. Whether the question is the predictive power of standardized assessments or the meeting agendas of our instructional leadership team, I find myself quantifying behavioral data, seeking patterns, and sharing the information with people. Is this just coincidence?

While I have not rigorously studied and confirmed the possible existence of a trend toward quantitative program analysis (irony intended), it seems to me that several contributing factors might exist. Quantitative data is more easily collected, processed and shared than before. The setup of a Google Form is trivial, compared to the “old days” (actually just 10 years ago) when we used to write online forms in Perl on our school web server. Data visualization has grown as a field, to the point where major news corporations prominently feature beautiful, illustrative graphic representations of data, and programming libraries make the process easier. Publication and presentation tools easily incorporate such graphics. Use of data to support conclusions has remained a respectable practice, notwithstanding occasional misuse.
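As a small illustration of how little friction remains, here is a sketch (with made-up responses standing in for a Google Form export) that tallies survey results and prints a quick text bar chart — the sort of summary that once required a hand-written Perl CGI script just to collect.

```python
from collections import Counter

# Hypothetical responses, e.g. one column exported from a Google Form
responses = ["Agree", "Strongly agree", "Agree", "Neutral",
             "Agree", "Strongly agree", "Disagree", "Agree"]

counts = Counter(responses)

# A quick-and-dirty text bar chart of the distribution
for answer, n in counts.most_common():
    print(f"{answer:15s} {'#' * n}  ({n})")
```

Swapping the print loop for a plotting library yields publication-ready graphics with only a few more lines.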

In years past, schools would rarely conduct quantitative study of their own work without substantial external help or an internal reassignment. This lent a measure of respectability to the work, as one would expect valid work from a consultant or internal member of the faculty or staff. Now, with people like me studying school practice within the scope of our full-time jobs, the risk exists that we will reach conclusions that are not well supported by the data or not well compared against results from other institutions. We have to be careful, as well as thorough.

Here is a pretty interesting comment on the use of 360° reviews for performance assessment.

A study on the patterns of rater accuracy shows that length of time that a rater has known the person being rated has the most significant effect on the accuracy of a 360-degree review. The study shows that subjects in the group “known for one to three years” are the most accurate, followed by “known for less than one year,” followed by “known for three to five years” and the least accurate being “known for more than five years.” The study concludes that the most accurate ratings come from knowing the person long enough to get past first impressions, but not so long as to begin to generalize favorably (Eichinger, 2004).

I missed this when it happened in June. Hot Potatoes, the Moodle-compatible quiz-making application, is now free to everyone. It used to be free to public schools only. The application allows one to create a quiz using a desktop application and then upload it to a web application such as Moodle. Desktop application interfaces have traditionally been easier to use and more powerful than web application interfaces.

The Hot Potatoes suite includes six applications, enabling you to create interactive multiple-choice, short-answer, jumbled-sentence, crossword, matching/ordering and gap-fill exercises for the World Wide Web. Hot Potatoes is freeware, and you may use it for any purpose or project you like. It is not open-source.

Students created an eight-page newsletter entirely on their own, learning new tech skills as needed, and working exclusively during homeroom periods.

Last year, I co-taught fourth and fifth grade technology classes with the homeroom teachers. The first time through, I focused primarily on designing effective learning environments for elementary students. The best activities provided open-ended project opportunities for the kids, taught some basic skills, and tied tightly to homeroom activities.

This year, I plan to emphasize assessment design in my planning. Wiggins and McTighe remind me that assessment design ought to precede lesson design. Identify the learning goals and objectives and then construct assessments to determine student mastery. Paul Black suggests that I vary assessments in form and provide students with feedback useful for further improvement. Bill Fitzgerald makes a push for portfolio-based assessment.

I will certainly tap into the common forms of assessment used at Catlin Gabel. I only teach one 40-minute period per week (the homeroom teachers cover the other 40-minute period). The teachers have designed effective assessments and put a lot of energy into building students’ familiarity with them. Rubric-based assessment is common, which fits the project-based tech curriculum nicely. It also suggests that I could have the kids self-assess, which would build their self-knowledge, provide them with formative feedback, and assess their skill and content mastery.

Students also build summative portfolios in homeroom, which they finalize and share at the end of the year. I could tap into that, but since nearly all tech activities are already grounded in a homeroom project, students may already build portfolio artifacts around them. It seems counterproductive to insist on a technology portfolio piece, when we go to great effort to teach technology as a tool that helps the students get homeroom work done.

Also worth remembering: I have 88 students and a full-time job back in the IT office! I am unlikely to find hours to pore over long assessments. However, if students post assessment products to our course website or their network folder, that will help me review these items quickly and write feedback and notes for future reports.

What assessment techniques do you use with elementary-age students? How do you record them in a way that is useful for future reference?