Since the Sputnik era, governments worldwide have been working to develop a more scientifically literate society. Although large and diverse groups of scientists, educators, and policy makers have emphasized that scientific literacy involves knowledge of both science content and ways of thinking in science (Culliton 1989; NRC 1996; Maienschein 1998; Michaels, Shouse, and Schweingruber 2007; OECD 2006), the science teaching community has primarily focused its efforts on teaching and assessing students' factual scientific knowledge (Alberts 2009). Although scientific ways of reasoning, such as critical thinking, are highly valued by both the academic and industrial communities (Association of American Colleges and Universities 2005), their teaching and assessment have been neglected (Pithers and Soden 2000; Alberts 2009).

Critical thinking has been defined by experts in the field as "purposeful, self-regulatory judgment that results in interpretation, analysis, evaluation, and inference as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based" (Facione 1990a, p. 2). This judgment is an essential part of the process of scientific investigation, especially the analysis and evaluation of scientific evidence. Although it is required when drawing conclusions from any one particular study, it is essential when evaluating multiple studies--especially when these support different conclusions. Such situations occur frequently in science, law, and matters of public policy. For example, evidence for the impact of human action on global climate includes data from historical sources, current measurements, and computer models, and the conclusions of individual studies are not always consistent with one another. Additional recent examples include the safety of silicone breast implants (McLaughlin et al. 2007) and the existence of infectious protein particles, or prions (Prusiner 1998). In such cases, it becomes necessary to assess the credibility of each study by looking for weaknesses in the study and/or searching for alternative interpretations of its results. Here, the appropriate response to data may be more complex than simply accepting or rejecting one's hypothesis. Chinn and Brewer (1993) have cataloged seven different responses to data that do not support a given hypothesis; in addition to revising or rejecting the hypothesis, these include rejecting the data, holding them in abeyance, or reinterpreting them. Choosing among these responses requires several key components of critical thinking as defined in the Delphi report (Facione 1990a).
Although many existing surveys measure other facets of critical thinking (Watson and Glaser 1952; Facione 1990b), they typically present studies one at a time and thus do not oblige subjects to directly confront issues of quality, credibility, and interpretation across conflicting lines of evidence.

[FIGURE 1 OMITTED]

We have developed the Assessment of Critical Thinking Ability (ACTA) instrument, a short open-ended survey (25 minutes) that can be easily implemented online or in the classroom. ACTA evaluates three critical thinking abilities necessary for the evaluation of multiple conflicting studies and provides a detailed description of the set of skills associated with each. This article describes the ACTA survey and a preliminary assessment of its construct validity. Our findings show that ACTA provides information about students' levels in these abilities, information that could be used to help students develop or enhance their proficiency.

Assessment of Critical Thinking Ability (ACTA) Survey

The ACTA survey assesses students on three main critical thinking abilities essential to the evaluation of multiple lines of evidence, as follows:
