*Shortened version of a paper delivered at a QIER lunch-time presentation in Education House, 23 April 1990.
(Malcolm Rosier is a Senior Research Fellow of the Australian Council for Educational Research.)

Evaluation is now clearly recognised to be an integral part of the process of education. Individual systems, such as countries or State Departments of Education, are increasingly keen to develop appropriate 'education indicators', which may be defined as:

single or composite statistics that reflect important aspects of the education system ... They are expected to tell a great deal about the entire system by reporting the condition of particularly significant features of it. (Shavelson et al, 1987:8)

In the past, the indicators of changes in systems have been restricted to information derived from school census activities, typically the numbers of students enrolled at each stage of the system, and the numbers of teachers and schools. More recently, the focus has shifted to performance indicators measuring the educational outcomes of the system.

For science education, three basic groups of outcomes cover: (a) science achievement; (b) attitudes; and (c) participation in science. The latter term refers to the extent to which young people remain at school and continue the study of science.

It is difficult to measure and interpret these outcomes. One option is to establish a set of criteria, and measure performance against these criteria. More commonly, outcomes are measured and interpreted in a comparative context. For example, a set of outcomes is interpreted relative to corresponding measures gathered at prior stages.

In addition, comparisons may be made with similar groups of students in other educational systems, to provide an external perspective on performance on the outcomes, and indeed on the kinds of outcomes to be measured. It is not likely that comparisons with other countries or other State systems would be used to determine activities or changes within the system, but the comparisons may provide useful insights into understanding the system, and assist in refining policy.

This paper presents an overview of characteristics of the science students in Queensland relative to those in the other Australian States. The characteristics and achievement of the Queensland students may therefore be examined in the wider Australian context. The results, for students at the 10-year-old and 14-year-old levels, have been taken from Rosier and Banks (1990), one of the reports of the IEA Second International Science Study (SISS) in Australia.

Student Background and Ability

Table 1 summarises the background and ability of the students. The mean value for Australia in this table (and the other tables in the paper) is a weighted value that takes account of the differing sizes of the State populations of students. The standard deviation (unweighted) provides a measure of the variability of the State scores.
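The weighting procedure described above can be sketched as follows: each State mean is weighted by that State's share of the national student population, while the standard deviation is taken across the unweighted State scores. All figures here are hypothetical illustrations, not values from the report.

```python
import statistics

# Hypothetical State mean scores and student population sizes
state_means = {"Qld": 62.0, "NSW": 60.0, "Vic": 59.0}
state_sizes = {"Qld": 40000, "NSW": 80000, "Vic": 60000}

# Weighted Australian mean: each State mean weighted by population size
total = sum(state_sizes.values())
weighted_mean = sum(state_means[s] * state_sizes[s] for s in state_means) / total

# Unweighted standard deviation: variability of the State scores themselves
sd = statistics.stdev(state_means.values())
```

Note that the weighted mean (about 60.1 in this sketch) sits closer to the larger States' scores than the simple average of the three State means would.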

The Socio-Educational level (SEL) Index is a parsimonious measure reflecting the occupation and education of the parents of the students, and the literacy levels of the home. It was designed as an indicator of the intellectual-cultural level of the home, not as a social status or prestige measure.

At both 10-year-old and 14-year-old levels, the values for Queensland are lower than the Australian mean. This was largely due to the relatively lower level of post-secondary education of the parents. The SEL index values (and many of the other factors included in this paper) for the Australian Capital Territory probably reflect the special demographic characteristics of its residents.

Although the SEL index was calculated in the same way at both levels, in each State the value was higher for the 10-year-old students than for the 14-year-old students. The parents of the 10-year-old students would, on average, be four years younger than the parents of the 14-year-old students. This suggests an increasing level of education (and increasing SEL index values) among the parents of students entering primary schools. This is encouraging for educational programs that involve the cooperation of the parents in the learning of their children.

The results for the language of the home are summarised as the percentage of students indicating that English was the language used most frequently at home. Table 1 shows the extent to which Queensland had one of the higher percentages of English-speaking students. This implies that Queensland has not had to deal with the learning problems of students with little English competency to the same extent as, say, Victoria.

The Word Knowledge Test required the students to state whether pairs of words were the same or opposite in meaning. This is a simple and rather traditional measure of verbal ability, although in the First International Science Study (Thorndike, 1973) it was found to be highly correlated with more substantial tests of reading comprehension. The Mathematics Test contained 20 multiple choice items, designed to measure basic quantitative skills.

It can be seen that the Mathematics Test scores for Queensland were higher than for the other States (except the Australian Capital Territory). The Word Knowledge score for the lower secondary group was also higher than the Australian mean value. The relatively high level of performance on these measures of verbal ability and mathematics provides a sound basis for the learning of science.

Time for Learning Science

States can do little to alter the background characteristics of the student population. Rather, attention should be addressed to malleable factors. One of the most crucial of the malleable factors is the time allocated to learning. Table 2 sets out several results for variables concerned with time.

The 10-year-old sample was drawn from Years 4-6, and the 14-year-old sample was drawn from Years 8-10. The mean value of the variable Year level shows that the Queensland students in this study tended to be slightly older on average than those in the other States.

Table 2 shows that the amount of time spent in class on learning science by Queensland students was close to the Australian mean, but that the time spent on homework (measured both as all homework and as science homework) was higher. In general, the time in class is determined at the system or school level, but the time spent on homework is more closely related to the personal efforts of the student and the encouragement received from the school or the home.

A summary measure of the total time spent on learning science was calculated by adding the time in class to the time spent on science homework. The value for Queensland was close to the Australian mean.

Description of Science Learning

The SISS also measured aspects of activities in science classrooms that were related to the learning of science. Students were asked to rate the frequency with which the activities took place in their classrooms. The types of activities were grouped into three scales, based on the items in Figure 1. The scores for the items and the mean scores for the scales were coded to have a maximum range from 0 to 100, similar to a percentage. A score of 0 would indicate that the activity never took place, while a score of 100 would indicate that it always took place. The scale score was calculated as the mean of the scores for the constituent items.
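The scale scoring described above can be sketched in a few lines: item responses are coded onto a 0-100 range (0 = activity never took place, 100 = always), and the scale score is simply the mean of its constituent item scores. The item values below are hypothetical.

```python
def scale_score(item_scores):
    """Mean of item scores, each coded on a 0-100 scale
    (0 = never, 100 = always)."""
    return sum(item_scores) / len(item_scores)

# Hypothetical coded responses to four Practical Work Scale items
practical_work_items = [75, 50, 100, 25]
score = scale_score(practical_work_items)  # mean of the four items
```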

At the upper primary level, the Practical Work Scale score showed that the students in Queensland perceived that more practical work was done in their science lessons than did students in the other States. The score for the lower secondary students was slightly below the Australian mean.

The Teacher Initiated Activities Scale measures the students' perceptions of the extent to which the science lessons are directed by the teacher, in contrast to the Student Initiated Activities Scale which reflects the extent to which students considered that they had influenced the content of the science lessons.

For the Teacher Initiated Activities Scale, the mean score for Queensland was higher than the Australian mean at the upper primary level, but lower at the lower secondary level. For the Student Initiated Activities Scale, the Queensland score was lower than the Australian mean.

Science Test Scores

Table 4 shows the State mean scores on the Science Test. The Queensland score is above the Australian mean at both levels. The test results may also be examined in terms of percentages of students with low scores and with high scores. Students were defined to have low scores if they gave correct answers to fewer than 30 per cent of the items, and to have high scores if they gave correct answers to more than 70 per cent of the items. The percentage of students with low scores is a measure of the extent to which an education system is satisfying the learning needs of the lower ability students. The percentage of students with high scores is a measure of the extent to which an education system is producing students with sound foundations in science as a basis for further studies.
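The classification above can be sketched as follows, with "low" meaning fewer than 30 per cent of items correct and "high" meaning more than 70 per cent. The student scores are hypothetical.

```python
def classify(pct_correct):
    """Classify a student by percentage of test items answered correctly."""
    if pct_correct < 30:
        return "low"
    if pct_correct > 70:
        return "high"
    return "middle"

# Hypothetical per-student percentages of items correct
scores = [25, 45, 72, 90, 28, 55]

pct_low = 100 * sum(1 for s in scores if classify(s) == "low") / len(scores)
pct_high = 100 * sum(1 for s in scores if classify(s) == "high") / len(scores)
```

In this sketch, two of the six students fall in each of the low and high groups, giving about 33 per cent for both indicators.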

The percentage of Queensland students with low scores is above the Australian mean for the 10-year-old students, but slightly below the Australian mean for the 14-year-old students. The percentage of high scoring students is above the Australian mean at both levels. These results imply that science education in Queensland is producing a relatively large cohort of strong science students, but that, at the primary level, there is still a relatively large group whose scores on the science test were only marginally higher than if they had made random guesses at the correct answers.

Curriculum Opportunity

Obviously, the extent to which students can answer science tests depends on their coverage of the material contained in the tests. Table 5 sets out results for two measures. The Curriculum Content Score measures the matching between the topics in the science curriculum and the content of the test items. The Opportunity Score is based on teachers' ratings of the percentage of the students exposed to the concepts reflected in the test items.

The results for the Curriculum Content Scores suggest that there was less matching of the curriculum topics to the content of the tests in Queensland than for Australia overall. The students' opportunity to cover the concepts in the test items was above the mean at the primary level, but below the mean at the lower secondary level. However, these mis-matches should be reviewed in terms of the tests themselves. Efforts were made to construct test items dealing with circumstances familiar to the students. Although the formal science curriculum may not have covered the content area, the students may have learnt from their own experiences about the science needed to answer the test items.

Correlations Between States

To what extent do the factors considered here explain the differences between the State mean scores on the science test? Table 6 sets out correlations between variables measuring a range of characteristics and the Science Test Score. Student data have been aggregated to the State level for these correlations. Differences between the States are much lower than differences between the students within the States. There are problems in conducting explanatory analyses of factors to explain State differences in achievement because there are only eight data points. The eight States represent the total population of States. For this reason, no significance levels are given.

In general, the characteristics which were more highly related to science achievement dealt with characteristics of the students themselves: their home background, and their verbal and quantitative ability. At the lower secondary level, States with higher ratings for the time spent on teaching science, the time spent on homework, and the amount of practical work carried out tended to have higher science scores. Three of the four measures of curriculum coverage were linked to science achievement, suggesting that the extent to which the curriculum was covered provided at least part of the explanation for differences in achievement.

References

Rosier, M.J. & Banks, D.K. (1990) The scientific literacy of Australian students, Hawthorn: Australian Council for Educational Research.