PISA Results: Which Countries Improved Most?

The headlines started to stream as soon as the PISA results were in: “Asian countries top OECD's latest PISA survey.” “Poor academic standards.” “Students score below international averages.” It depends on the country, of course. A time to celebrate for some, a time to lament for others.

Today, the Organisation for Economic Co-operation and Development (OECD) released results from the Programme for International Student Assessment (PISA) 2012, the fifth round of the survey. PISA tests 15-year-olds in reading, math and science. This year, 65 countries and economies took part; in total, more than 70 countries have taken part since 2000, when PISA began.

Newcomer Vietnam performed very well, with scores of 511 in math, 508 in reading and 528 in science, all significantly above the OECD average.

Since the first round in 2000, no country has improved more than Peru: 76 points in math, 57 in reading and 40 in science, albeit from a relatively low base (from 292 to 368 in math between 2000 and 2012).
Between 2000 and 2012, the following countries recorded the highest increases in math, reading and science scores:

[Charts of the top improvers in math, in reading and in science are not reproduced here.]

These impressive gains are inspiring, but there is still a long road from this level to the level of high performers. Few countries have gone from below to above the OECD average (exceptions include Poland, Germany, Latvia, Luxembourg and Liechtenstein). No doubt, these achievement trends and the wealth of information provided by the OECD's PISA tests will help researchers and policymakers glean lessons for improving academic performance.

Comments

Scotland: fading fast, but clinging to the 18th Century Enlightenment self-delusion. Now embracing and enforcing a Curriculum for Excellence (sic) which shifts the focus to the whole person and away from essential learning in Literacy, Numeracy and Science. Please save this proud country from half-baked, second-rate politicians, for whom parochialism would be an horizon!

Good comment, I agree. We need more information about what works to improve learning outcomes. This can only come from rigorous impact evaluations. While we still have a long way to go, there are some things that we know. For instance, for rapid improvers:

Chile: controversial, but the 1980 reforms, which expanded learning opportunities and choice, might be paying off: see for example: http://ddp-ext.worldbank.org/EdStats/CHLprwp08.pdf; http://www.economia.puc.cl/politicas-publicas?docid=5525

Mexico: Government intentionally targeted PISA scores and enacted policies to expand opportunities (eg, through the conditional cash transfer program, see for example: http://ideas.repec.org/p/got/iaidps/122.html) and improve school quality (see for example: http://www.sciencedirect.com/science/article/pii/S0304387811000927).

Interesting discussion! A couple of thoughts here. First, Shanghai had about 24 million people in 2013, according to the World Population Review. The total population of the Netherlands is less than 17 million. So I don't see anything statistically wrong with including Shanghai-China in the comparison. Second, some people argue that Asian kids achieve better academically but are less happy. Well, it really depends on how "happiness" is defined, which is itself very culturally contextualized. It may be true that many Asian kids feel pressure from peers, parents and school; they study more hours and therefore have less play time than their Western peers. So, if you ask them what they want, they would probably say they want more play time. However, playing more doesn't mean being happier.

At a very general level, the testing issue has to be called into question. What is the value of comparing countries that differ markedly? Performance on tests relates to class. Children in a sparsely populated, rich country will outperform those in a country with a large population where poverty is a challenge. Cultural and social capital ultimately shape results. In developing countries, there is enormous political pressure to improve performance. My view is that countries, especially low- and middle-income countries, should develop global strategies to improve performance based on using schools to compensate for poverty. Only then could international tests be used more usefully.

Thank you for your comment. I couldn't agree more. Such international tests are benchmarks. They are useful along with other indicators. Policy frameworks are also useful, and are also benchmarks (http://siteresources.worldbank.org/EDUCATION/Resources/278200-1221666119663/saber.html). How to improve performance is very country specific as you say, and schools are fundamentally important to helping compensate. But programs to help will be better informed and perform better if they are anchored in information about what works, what works in a particular context, and who needs what kind of assistance. The PISA tests come with a wealth of information on family and school characteristics. They need to be analyzed carefully and then used to help inform the national dialogue, along with other information, of course.

I have one doubt. Box I.2.2 in the PISA 2012 report, Vol. I (pp 52-43), explains that trend information can be computed using, as the reference scale, the first year in which a given subject area was the main focus of the test. Thus, trends can be computed for the period 2000-2012 in Reading, 2003-2012 in Math, and 2006-2012 in Science. If I understand correctly, this basically means that for Math scores from 2000, and Science scores from 2000 and 2003, to be considered comparable to the rest of the series, they would need to be re-scaled; that is, we should not use the original data directly.

I got the impression your post relies on the original 2000 (and 2003 for Science) scores without any re-scaling. Is that impression correct?

If this is the case, would you be so kind as to explain how reliable the progress levels described in your text are? As you can imagine, this issue is particularly sensitive for a citizen of the country (Peru) that your post identifies as the one that improved most between 2000 and 2012 in Reading and Math, and that also showed important progress in Science. While this is clearly correct for Reading, since Peru did not participate in PISA 2003 and 2006, I wonder how reliable that assertion is regarding Math and Science.

Dear Cesar,
Thank you for your comment, and careful reading. You are correct that I did not rescale the scores; I took them as raw. I wanted to see who improved among the countries that participated in both 2000 and 2012, and I did the same simple calculation for all three subjects, despite the warnings from the OECD. Given the many comparability issues involved in international assessments -- see for example http://www.economist.com/blogs/economist-explains/2013/12/economist-explains-4?fsrc=scn/tw_ec/how_accurate_are_school_league_tables -- I am not sure my transgression is all that serious. In any case, the increase in scores for Peru, even if one takes only one subject, is nonetheless impressive. In the coming months and years, we, along with many others, will take a careful look at PISA and try to explain the improvements.
Thank you,
Harry
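[Editor's note: the "simple calculation" described in the reply above can be sketched as follows. Peru's math scores (292 in 2000, 368 in 2012) and Vietnam's 2012 math score are taken from the post; the dictionary layout and variable names are purely illustrative.]

```python
# Raw-score comparison: for each country that took part in both PISA 2000
# and PISA 2012, the gain is simply the unscaled 2012 score minus the
# unscaled 2000 score, with no OECD rescaling applied.
scores_2000 = {"Peru": 292}
scores_2012 = {"Peru": 368, "Vietnam": 511}  # Vietnam first took part in 2012

# Keep only countries present in both rounds, then compute the change.
gains = {
    country: scores_2012[country] - scores_2000[country]
    for country in scores_2000
    if country in scores_2012
}

print(gains)  # Peru's math gain: 368 - 292 = 76 points
```

Countries that joined PISA after 2000, such as Vietnam, drop out of the comparison automatically because they have no 2000 score to subtract from.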

Thank you so much for clarifying.
Certainly, the improvement shown by Peru in Reading is really important. As you said, it is related to the poor starting point, but it goes beyond that. You might find the following paper of interest (it shows that a significant part of the progress is explained by overall socio-economic progress; available in Spanish only): http://www.up.edu.pe/revista_apuntes/SitePages/ver_articulos_web.aspx?idsec=470&idnum=72
Best regards.
Cesar