My students are bright, engaged and well-behaved, but there is something missing: they cannot think.

The Secret Teacher goes on to blame a focus on exams, and I agree for the most part. But tests are not the only thing to blame for students who do not know how to think independently.

Teachers who spoon-feed, stifle thought, or fail to stay relevant are just as culpable.

For instance, the teacher said:

Last week I caught another of my A-grade students using his phone in the lesson. As a starter exercise, I told them to think of as many advantages as they could of being on the UN security council. “What are you doing?” I asked. “I’m googling the list of advantages,” came his wary reply. I was flabbergasted. I tried to explain that there is no list of advantages, but that I wanted his own views.

I am confident that the Secret Teacher is also a Good Teacher. But she also sounds like a traditional one in that she is averse to students searching for Googleable answers. Perhaps she did not know how to take advantage of a now-natural behaviour to show her students how to think, act, and write critically after Googling.

Most people would eventually realize that the most important factor in a schooling or educational system is the quality of its teachers. Those who join the profession are self-selected by choice and pre-selected by institutes of teacher education.

But only the exceptional step up to deal with the problems with assessment or learn how to skilfully promote critical and creative thinking in a conservative system. The rest need professional development and the mindset of lead learners to do this.

The layperson’s likely view of assessment is summative tests and exams, typically of the high-stakes variety, because that is what they have experienced. As its name implies, summative assessment is perceived and practiced as a terminal or downstream activity.

Informed educators might point out that formative assessment (ongoing feedback) is more important for learning. Educated instructional designers will tell you that assessment or evaluation should be developed before content. Wise educational consultants and leaders will tell you that assessment is a key leverage point in systemic change.

Assessment is actually an upstream component. Change that and you affect processes downstream like teaching, learning support, learning environment design, and policy making.

Imagine for a moment that exams were removed and replaced with learner portfolios. Now imagine how teaching, teacher expectations, teaching philosophies, teacher professional development, and teacher evaluation might change.

I would like to answer a question directed at me:

@ashley what should assessment for systemic change look like? how can the data be used for leverage? #asiaED

I cannot say for sure how assessment should change, and I do not think that data collected from such assessment serve only as leverage.

Consider an example of a change-in-progress and my suggestions on how to implement change and avoid pitfalls in the process.

There are at least two significant assessment-related changes in Singapore now. One is an emphasis on values-based education (instead of a focus on just grades) and the other is a re-evaluation of the importance of a degree.

These changes have been driven by factors such as:

parental feedback on the unnecessary stress of high-stakes testing (particularly the Primary School Leaving Examination, or PSLE)

the recognition of grade inflation (particularly at the GCE A Levels)

the mismatch between what employers need and what universities produce

new and visionary leadership at the Ministry of Education (MOE), Singapore

All these have placed pressure on what we understand and value as traditional, summative assessment.

That said, MOE is not going to sacrifice the sacred cows of tests and exams. But it has started emphasizing other processes and measures.

Values-based lessons are being integrated into previously content-only lessons [news article after its announcement in 2011]. Primary school students can get into Secondary schools of their choice based on non-academic talents with the Direct School Admissions (DSA) scheme.

Experts in systemic change might label these efforts as piecemeal change. They do not profoundly disrupt existing processes and are instead implemented periodically and strategically in an attempt to create overall change.

However, critical observers might also note that significant and sustained change tends to happen with disruptive interventions. Examples might include:

the impact of antibiotics and anaesthesia on medical practice

the effect of the printing press on schooling and the spread of information

I predict that e-portfolios will rise in importance as a means of recording and evaluating (not just assessing) both the processes and products of learning.

e-Portfolios are a systemic and disruptive change in that they:

start and end with the learner

belong to the learner

emphasize processes and not just products of learning

showcase holistic or other attributes (not just academic ability)

promote lifelong, career-wide learning

The battle to create acceptance, buy-in, and hopefully ownership of what we now label alternative assessment will probably last a decade or more. During this time, it might be tempting to collect evidence of the effectiveness of e-portfolios during a trial or a full-blown implementation to convince stakeholders that the change is making a difference.

However, this is not a wise move. Efforts to do this would repeat the mistakes of the slew of early educational and action research comparing the effects of intervention A (for example, traditional instruction) and intervention B (technology-assisted instruction). There are far too many factors that influence learning outcomes, attitudes, values, etc.

If data on newer forms of assessment need to be collected, analyzed, and presented, I suggest that they be part of a much larger plan. Such a plan could include:

In summary, assessment is an important leverage point and an upstream component for changing educational systems. Data on disruptive changes like the adoption of e-portfolios for assessment and evaluation can be leveraged to convince stakeholders. However, such data should only be part of a larger and sustainable plan.

The headline asked a speculative question, but did not deliver a clear answer. It hinted at mammoth change, but revealed that dinosaurs still rule.

Here is the short version.

This is what 13,000 4th grade students in the USA had to do in an online test that was part of the National Assessment of Educational Progress. They had to respond to test prompts to:

Persuade: Write a letter to your principal, giving reasons and examples why a particular school mascot should be chosen.

Explain: Write in a way that will help the reader understand what lunchtime is like during the school day.

Convey: While you were asleep, you were somehow transported to a sidewalk underneath the Eiffel Tower. Write what happens when you wake up there.

This pilot online assessment was scored by human beings. The results: 40% of students struggled to respond to the prompts, having been rated a 2 (marginal) or 1 (little or no skill) on a 6-point scale.

This was one critique of the online test:

One downside to the NCES pilot study: It doesn’t compare student answers with similar questions answered in a traditional written exam setting.

I disagree that this is necessary. Why should the benchmark be the paper test? Why is a comparison even necessary?

While the intention is to compare answers to the same questions, a paper versus computer-based test actually compares media. After all, the questions are essentially the same, or at least very similar.

Cornelia Orr, executive director of the National Assessment Governing Board, stated at a webinar on the results that:

When students are interested in what they’re writing about, they’re better able to sustain their level of effort, and they perform better.

So the quality and type of questions are the greater issues. The medium and strategy of choice (going online and using what is afforded there) also influence the design of questions.

Look at it another way: Imagine that the task was to create a YouTube video that could persuade, explain, or convey. It would not make sense to ask students to write about the video. They would have to design and create it.

If the argument is that the technical, literacy, and thinking skills behind a YouTube video are not in the curriculum, I would ask why that curriculum has excluded these relevant and important skills.

The news article mentioned some desired outcomes:

The central goal of the Common Core is deeper knowledge, where students are able to draw conclusions and craft analysis, rather than simply memorize rote fact.

An online test should not be a copy of the paper version. It should have unGoogleable questions so that students can still Google, but they must be tested on their ability to “draw conclusions and craft analysis, rather than simply memorize rote fact”.

An online test should be about collaborating in real-time, responding to real-world issues, and creating what is real to the learners now and in their future.

An online test should not be mired in the past. It might save on paper-related costs and perhaps make some grading more efficient. But that focuses on what administrators and teachers want. It fails to provide what learners need.

Teachers, examiners, and administrators disallow and fear technology because doing what has always been done is simply more comfortable and easier.

Students are forced to travel back in time and set aside today’s technologies in order to take tests that measure a small aspect of their worth. They bear this burden because their parents and teachers tell them they must get good grades. To some extent that is true, as they attempt to move from one level or institution to another.

But employers and even universities are not just looking for grades. When students interact with their peers and the world around them, they learn that character, reputation, and other fuzzy traits not measured in exams are just as important, if not more so.

Tests are losing relevance in more ways than one. They are not in sync with the times and they do not measure what we really need.

In an assessment and evaluation Ice Age, there is cold comfort in the slowness of change. There is also money to be made from everything that leads up to testing, the testing itself, and the certification that follows.

Like a glacier, assessment systems change so slowly that most of us cannot perceive any movement. But move they do. Some glaciers might even be melting in the heat of performance evaluations, e-portfolios, and exams where students are allowed to Google.

We can either wait the Ice Age out or warm up to the process of change.

By reading what thought leaders share every day and by blogging, I bring my magnifying glass to examine issues and create hotspots. By facilitating courses in teacher education I hope to bring fuel, heat, and oxygen to light little fires where I can.

This Washington Post blog entry provided a blow-by-blow account of some terrible test questions and an editorial on the effects of such testing. Here are the questions the article raised:

What is the purpose of these tests?

Are they culturally biased?

Are they useful for teaching and learning?

How has the frequency and quantity of testing increased?

Does testing reduce learning opportunities?

How can testing harm students?

How can testing harm teachers?

Do we have to?

The article was a thought-provoking piece that asked several good questions. Whether or not you agree with the answers is beside the point. The point is to question questionable testing practices.

I thought this might be a perfect case study of what a poorly designed test looks like and what its short-term impact on learning, learners, and educators might be.

The long-term impact of bad testing (and even just testing) is clear in a society like Singapore. We have teach-to-the-test teachers, test-smart students, and grade-oriented parents. We have tuition not for those who need it but for those chasing perfect grades. And meaningful learning takes a back seat or is pushed out of the speeding car of academic achievement.

Some people were vehemently against the act of burning revision papers and explored the historical connotations of book burning. Others said that the burning was simply cathartic.

I think that many of the responses were emotions disguised as logic. It is perfectly acceptable to be passionate, but that does not mean losing your head about what you are passionate about.

To liken the revision paper burning to the way the Nazis burnt books is ridiculous. The former was at worst an ill-judged cathartic release. The latter was based on terrible ideology.

No doubt both types of burning look the same, but they have different origins and purposes. We regularly incinerate old books, newspapers, and other paper-based material (along with other rubbish) instead of reusing or recycling. Where are the voices and arms raised then?

To paint both with the same black-or-white brush is like saying all killing is bad. We kill for food, greed, defence, revenge, etc. Depending on the context, some killings are easier to justify than others.

I agree that the parents who organized or facilitated the burning might have inadvertently sent the wrong message that burning is the thing to do. There are certainly other, perhaps less emphatic or dramatic, ways to express relief than burning.

But to judge without first understanding and attempting to educate all parties is just as harmful. It is a way to burn the bridge that links both sides.

Like it or not, we live in a world with more shades of grey than ever before. That is why it is less important to transmit values than to teach the next generation how to think critically and recreate the values that matter.


One of the announcements at this year’s National Day Rally was a wider spectrum of entry criteria for the Direct School Admission programme.

Some might say the DSA makes a mockery of standardized exams because it allows Primary school students to get into the Secondary school of their choice. While Primary School Leaving Examination (PSLE) results are still used as criteria once they are released, the student with entry via DSA already has a foothold that non-DSA students do not.

Anyone who says we don't have a sustained alternative evaluation system need only examine DSA. Not perfect, neither is PSLE #edsg

A few might wonder if the PSLE is even necessary if such an alternative form of evaluation exists. Others might argue that the DSA criteria are not enough.

That brings us back to increasing the selection criteria for DSA. What traits might students be evaluated on? Leadership? Character?

When those traits were bandied about in popular media, people asked if things like character and leadership could be measured among 12-year-olds.

You can measure just about anything, even fuzzy, hard-to-quantify things like happiness [happiness index]. But let us not kid ourselves into thinking that these measures are absolute, objective, or universal.

A trait like creativity is due to many things, and an instrument, no matter how elaborate, cannot measure all aspects of creativity. Most fuzzy concepts, like beauty, are subjective no matter how much you quantify them. Ask anyone to define creativity or beauty and you will get different answers; there is no single understanding.

Whenever you measure anything, there are margins of error that originate from the measurer and the measuring instrument. Sometimes the object or subject being measured introduces error. Consider what happens when person A measures a fidgety person B’s height with a tape measure.

Let us say that you could measure leadership or character precisely. Just because you can does not mean you should. How different is a person at 6, 12, 18, 24, or 36? What if a value judgement at 12 puts a child on a trajectory that s/he is not suited to?

That said, we would be foolish to think that we do not already gauge people on fuzzy traits like character. It happens in the hiring and firing of employees. Some might argue that we are just bringing that process up the line of development.

There are many ways to measure fuzzy traits. At a recent #edsg conversation, I tweeted:

Whether or not these alternative evaluation measures are implemented, we will read rhetorical statements in forum letters, blog entries, and Facebook posts like “parents must change their mindsets.”

Of course they must. But they are not going to do so automatically.

Folks who highlight mindsets sometimes fail to realize that behaviour modification has to start somewhere. In systemic change, you start with one or more leverage points. In our case, that is the way people are evaluated.