The Secretary's Conference on Educational Technology - 1999

Convergent Analysis: A Method for Extracting the Value from Research Studies on Technology in Education

Cathleen Norris
College of Education, Department of Cognition and Technology
University of North Texas
Denton, TX 75063

Jennifer Smolka
College of Education, Department of Cognition and Technology
University of North Texas
Denton, TX 75063

Elliot Soloway
College of Engineering, School of Information, School of Education
University of Michigan
Ann Arbor, MI 48109

The costs for implementing technology projects in K-12 classrooms - from the use of word processors in writing classes to visualization software in science classes - are significant. While the price of hardware has plummeted, the companion costs of administrator, teacher, and student time remain stubbornly high. Help is potentially available, however: literally hundreds of research reports on the use of technology in education have been published over the past 30 years. In those research reports are the successes and failures that we can learn from; the reports provide a window on others' trials in bringing technology to the classroom.

But frankly, the research community's findings have had little impact on the practitioner community of the classroom. Here are several reasons for this breakdown:

Research articles are written, by and large, for other researchers; their style of reporting does not address "what can a teacher learn from this study that is applicable to their classroom, today."

There are many controlled studies that simply report, in a horserace format, who won and who lost, e.g., students using a word processor wrote significantly higher-quality reports than those not using one. These studies do not go on to provide an analysis of why - why did that outcome occur?

Literature review articles tend to follow the model of the controlled studies and similarly summarize the research in terms of who won and who lost, e.g., "these n studies found that there was an impact of word processors on writing quality, while these m studies found no such impact." Or they make the following sort of report: "based on a review of the literature, these n factors are involved in the successful implementation of technology in the classroom." The list is a common-sense list of all the issues involved in making technology a success; again, the underlying "why" is not addressed.

There are a number of published articles that summarize the literature and attempt to tell the practitioner "what works." However, these articles tend to be at a very high level, e.g., use simulations, use multiple representations, etc. While good advice, these observations are quite general; the conditionality that should temper the application of these pieces of wisdom is typically not provided.

The objective of this short paper, then, is to present a method for extracting value from the research literature that should benefit educational practitioners. We call this method Convergent Analysis (CA). CA comprises a number of steps.

First, we need to pose a question whose answer can benefit educational practitioners. Thus, rather than going to the literature to ask a broad question such as "does technology lead to increased student achievement," we ask a more focused, practitioner-oriented question:

under what conditions do computers lead to increased student achievement?

That is, what are the issues that need to be addressed, what are the key factors and their values that are involved in leading to a positive learning outcome with technology? For example, from reading the research literature, "time-on-task" can be seen as a key factor; for computers to have a positive impact on writing, children need to spend an "adequate" amount of time actually writing on computers. In looking at the range of research papers, one can come to an understanding of the different tradeoffs that could result in providing children with "adequate time."

In phrasing the question of computer impact in terms of "conditions under which..." an important opportunity has been created. After identifying those conditions for success the next step is identifying concrete actions that teachers, students, administrators, and parents can take towards realizing those conditions. For example, administrators can work towards getting funds to buy computers so that students can have adequate time writing on the computer. Curriculum coordinators can work towards organizing a curriculum unit to enable teachers to create lesson plans for the unit that enable students to have adequate writing time on computers.

Second, we need to review the empirical studies in the literature and put them into a standardized format. And third, we need to look across all the studies in that standardized format, so as to compare and contrast the issues in each study. Clearly, CA is a time-consuming, detail oriented process!

In what follows, then, we highlight the various steps of the CA process and provide examples of the nuggets of wisdom that we have extracted from the literature on writing education and technology using the CA method.

Two major problems confronted us when we started reading research articles about the use of technology in writing education.

Comparison across studies was not obvious: The research literature we found was exceedingly diverse in its reporting form and content. How did the issues and findings in one article relate to those in others?

Findings were not focused on practitioner issues: The agendas of the researchers carrying out the studies were not necessarily the same as the agendas of the classroom practitioners. For example, we saw researchers structuring their reports to highlight one issue: did the technology lead to a positive impact or not. In contrast, a teacher is more interested in the conditions that led to the outcome, so that they know how to implement and adapt the technology in their own classroom.

In what follows, then, we describe how the Convergent Analysis method attempts to address these two problems.

While some fields have de facto standards for research reporting (e.g., studies presented in major psychology or medical journals), the "field" of education and technology is not nearly as organized, and thus the format for reporting an empirical study varies widely. This diversity - a euphemism if ever there was one - makes accumulating the findings across studies exceedingly difficult. The meta-analytic method steers one course through this maze: only studies that admit of specific statistical characteristics are usable in the comparison. Unfortunately, many if not most of the research studies can't meet the stiff meta-analytic demands and are thus excluded from a literature review. But, just because a study isn't tightly quantitative doesn't mean that it is a bad study. We thus wanted to develop a method to analyze the literature that was more inclusive, and viewed the breadth in the research base as a feature, not a bug.

Still further, we observed in reading paper after paper that researchers wrote for fellow researchers. The style of the research reports was clearly academically-oriented, and the content focused on issues of concern to researchers. For example, while a paper might contain an extended discussion of the theoretical framework for the study, it would say precious little about the details of actually running the study in a classroom setting. It is no wonder, then, that the research literature is not consulted by practitioners - researchers don't consider them their audience.

To address both issues we developed a "Research Profile," a template which now has about 75 categories (e.g., enabling conditions such as teacher experience and technology availability, and enactment conditions such as time on task, the nature of the task, etc.). The Profile identified the issues that practitioners were concerned with. We consulted with education professionals in order to hone in on the categories of information relevant to classroom teachers as well as school administrators. Over 6 months of reading, rereading, and rerereading the research literature, we went through four major iterations of the profile. And, we still continue to tweak it!

Now, filling out a profile for a research article is no mean feat! It takes hours and multiple readings of the paper in order to accurately fill in the cells of a profile. Interestingly, we reread papers we had originally reviewed before the profile was developed and we oftentimes changed our opinion and our understanding of the research study. The profile helped us focus on the truly salient issues in the research study. In effect, the labeled cells in the profile served as prompts to the reviewer; the profile scaffolded reviewers in getting at all the issues of a study.
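To make the shape of a profile concrete, here is a minimal sketch of how such a template might be represented in code. This is purely illustrative: the actual Research Profile has about 75 categories, and the category names below are hypothetical stand-ins, not the real template. Unfilled cells are kept explicitly (as `None`), mirroring the observation that published articles often omit details a reviewer needs.

```python
# Hypothetical stand-ins for a handful of the ~75 profile categories.
PROFILE_CATEGORIES = [
    "teacher_experience",       # enabling condition (hypothetical name)
    "technology_availability",  # enabling condition (hypothetical name)
    "time_on_task",             # enactment condition (hypothetical name)
    "nature_of_task",           # enactment condition (hypothetical name)
    "outcome",                  # e.g., "benefit" / "no_benefit"
]

def new_profile(**entries):
    """Create a profile with every category present; unknown cells stay None."""
    profile = {category: None for category in PROFILE_CATEGORIES}
    for category, value in entries.items():
        if category not in profile:
            raise KeyError(f"not a profile category: {category}")
        profile[category] = value
    return profile

def unfilled(profile):
    """Categories the reviewer could not fill in from the published article."""
    return [c for c, v in profile.items() if v is None]
```

For example, `new_profile(outcome="benefit")` leaves every other cell `None`; the `unfilled` list then documents exactly which practitioner-relevant details the article failed to report.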

Inasmuch as the articles in the literature that we profiled were not directed towards practitioners, it is not surprising that even after multiple readings we were not able to fill in many of the cells in a profile. Researchers did not include in their published articles information that was important for teachers who would want to either replicate a study or adapt the study to the particulars of their classrooms.

Profiling 60 empirical research studies on writing education and technology is a major undertaking. We were fortunate, therefore, to enlist the aid of a graduate class of students at the University of North Texas in the College of Education. Over a 2 month period, 23 students working in teams created the online database of profiles available for public perusal. The findings described in the remainder of this article are based on our readings of these profiles.

Once the literature has been put in a standardized format, it is possible to systematically examine the studies to identify patterns. We have called this focusing-in, triangulating process "convergent analysis" (CA). For example, in looking across all the studies on writing education and technology, we first put those that showed children gaining benefit from using a word processor in one pile, and those that did not show benefit in another. Then we asked: can we explain, on the basis of other findings, why some studies did not show benefit? In comparing across cells such as task, we saw the following:

Using a word processor changed the writing task; the first draft was no longer this major stepping stone, since the children modified their documents continuously.

Now, one study that showed no benefit of the technology made that claim on the basis of the "first drafts" of the children not showing much improvement. Using convergent analysis then, we felt we could now explain that negative result. That is, the evaluation used in that study measured the wrong thing; first drafts are not the key marker when children use word processors.
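The sort-and-compare step just described can be sketched in code. This is a toy illustration only: it assumes profiles are simple records with hypothetical "outcome" and "evaluation_measure" cells, and the sample data below is invented for the sketch, not drawn from the actual profile database.

```python
from collections import defaultdict

def sort_into_piles(profiles, cell="outcome"):
    """First CA step: put benefit and no-benefit studies in separate piles."""
    piles = defaultdict(list)
    for profile in profiles:
        piles[profile.get(cell)].append(profile)
    return piles

def compare_cell(piles, cell):
    """Second CA step: look at one cell (e.g., the task, or the evaluation
    measure) across the piles to see what distinguishes them."""
    return {outcome: sorted({p.get(cell) for p in pile})
            for outcome, pile in piles.items()}

# Invented sample data for illustration.
studies = [
    {"outcome": "benefit", "evaluation_measure": "final draft quality"},
    {"outcome": "no_benefit", "evaluation_measure": "first draft change"},
]
piles = sort_into_piles(studies)
print(compare_cell(piles, "evaluation_measure"))
```

Comparing the "evaluation_measure" cell across the piles surfaces the pattern discussed above: the no-benefit pile measured first drafts, which are not the key marker when children write on word processors.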

Only by comparing across the literature were we able to ferret out that important observation. And only by having the literature in a standardized format were we able to look across the literature. Thus, we feel the database of profiles provides a valuable resource for educators and researchers who wish to extract value from the research base. To assist in that process, we are now developing computer-based visualization tools that will make it easier yet to compare/contrast across studies.

To further help focus our review of the literature, we developed a "practitioner-oriented literature review." That is, in typical literature reviews, authors are still speaking to other researchers; hence the issues they tend to focus on are not necessarily the issues that would assist teachers or administrators. For example, the quote below, taken from one literature review of the writing education and technology field, is most illuminating of the problem: the author does not provide suggestions for how teachers who don't have advanced students, who don't have pervasive technology, etc., can get around these problematic -- and typical -- situations!

"Fairly consistently, results favored older, more able students, especially those exposed to relatively lengthy treatments that were well-grounded in appropriate theoretical frameworks."

Reed, 1996

In our literature review, then, we first posed a set of 13 teacher-oriented questions. See Table 1, below. These questions were suggested to us by practicing teachers. We then searched the online profiles for empirical studies that were relevant to the questions. The resulting "literature review" is available online.

In what follows we present two examples of nuggets that we have extracted from the research literature on writing education and technology using our convergent analysis method. A caveat: one might well want a list of specific pieces of wisdom to serve as prescriptions. If only it were that simple and straightforward! The tension is this: to put forward such a list, one must abstract away all the nitty-gritty details of the situation. That process produces very general statements, of which Reed's is a shining example: "lengthy treatments" are effective.

Our nuggets of wisdom, then, tend to be more of a process, a way to examine a teacher question and address it based on the literature. This issue will become clearer after we present the two sample nuggets.

"How much time should I spend preparing my students to use a word processor?"

By looking across all the studies at how each study addressed this issue, the "answer" that can be constructed to this question is: "it depends." While on the surface that answer is not particularly satisfying, providing a description of "what it depends on" may well be useful to teachers:

It does not seem to be the case that there is some hard and fixed minimum amount of time on the computer that is needed in order to ensure a successful writing experience. For example, we did not see that students must have X hours of keyboarding before they start writing. Rather, success in writing on the computer could be had from a broad range of preparedness activities, from computer literacy training (e.g., Eastman, 1989, 324; Beichner, 1994, 82) to keyboarding (e.g., Kurth, 1997, 190; Dalton & Watson, 1986, 207), from a few hours (e.g., Borgh & Dickson, 1992, 141) on the computer to long-term exposure (e.g., Diaute, 1986, 325/163; Parr, 1994-95, 135; Snyder, 1994, 148; Beichner, 1994, 82; Fais & Wanderman, 1987, 275). The research does show that in those projects where students tended to have less preparation, they had a greater likelihood of obtaining a negative outcome (e.g., Lohr, et al., 1996, 307). In contrast, in those projects where students had even a moderate proficiency coming into the writing activity, there was a good likelihood of achieving a positive outcome (e.g., McAllister & Louth, 1988, 161; Snyder, 1994, 148; Lehrer, et al., 1994, 80).

In effect, the amount of time preparing students does not seem to be the determining factor for computer-writing success! Moreover, formal keyboard training does not seem to be a necessary ingredient. For example, if the treatment is lengthy, then students with less background will catch up through the extended term of the writing experience (e.g., Beichner, 1994, 82; Parr, 1994-95, 135). If word processing is available to children after school or at home, then again, they will catch up if the activity is an extended one. If word processing is used in other subject areas in the curriculum, then again, this activity will enable those less prepared to catch up with those more prepared.

In other words, by looking more globally at the research studies, we were able to see how different studies used different strategies to accomplish the same goal. Thus there was no one strategy for achieving the condition that students come to a writing assignment with prior experience using a word processor. The research literature depicts a plethora of strategies. Thus, in any given situation, a teacher will have to decide how to manage the tradeoffs, e.g., extend the writing assignment if the children are less prepared, take advantage of homework periods after school for the less prepared children to become comfortable with word processors, etc.

The fact that there is no simple, straightforward answer to questions such as the above one is not a bug, but a feature! That is, in effect, the literature sanctions teachers to be inventive and to take into consideration the local needs, resources, and even idiosyncrasies of their classroom. From our reading, the research acknowledges the importance and relevance of the local context; prescriptions that ignore that local context implicitly devalue the contributions of classroom teachers towards creating effective learning environments.

"How do I evaluate the quality of the children's writing when they use a word processor?"

The literature is most interesting on this point. In writing with pencil-and-paper, children create a first draft, receive feedback, and then revise it. (Time permitting, there may be additional rounds of feedback and revision.) In writing education, one important evaluation measure of a child's work has been the amount of change from the first draft to the final draft.

Now, in studies where children used a word processor, it turned out that when that metric was used, it showed that there was not much change between the first draft and the final draft. On the basis of that finding, the researchers concluded that word processors were not helping children write more effectively (e.g., Owston, et al., 1992, 164; Owston & Wideman, 1997, 238; Snyder, 1994, 148; Diaute, 1986, 325/163).

Again, a more global perspective on the research literature provides a clearer picture of the situation. Using convergent analysis and looking over all the research studies, what we saw was that studies reported that children were constantly modifying their work; in effect, there wasn't a first draft! A word processor does make changing a document relatively simple (e.g., in comparison to changing a penned document or a typewriter-produced document). The studies observed that the writing process children employed using a word processor was different from the writing process children employed with pencil-and-paper technology. While teacher feedback did cause the children to revise their word-processed documents, they were also revising as they wrote, in response to their own thoughts and as a result of conversations with other children who read the documents over their shoulders (e.g., Owston & Wideman, 1997, 238).

This example illustrates how technology changes the nature of what goes on in the classroom: word processing technology engenders a different writing process when compared to the writing process using pencil-and-paper technology. Clearly, that change in process had implications for evaluating the children's written documents.

What other ripple effects does this change in the nature of the activity have on the classroom? For example, "time on task" becomes more problematic, e.g., are there more comfortable intermediate stopping points when using a word processor in comparison to using pencil-and-paper? When should the teacher give feedback to children on their word-processed documents? Again, there are no single answers that are right for all classrooms.

Unfortunately, even a careful reading of the literature will not inform all aspects of classroom practice. Research has not typically been driven by the needs of classroom teachers, and thus there are major lapses and gaps in the research literature.

For example, there is precious little research on how to support children in transferring their writing skills to other contexts (e.g., from a writing class that uses computers to a social studies test where writing is not done on computers, or even to a social studies class where writing is done on computers!). While there were a few studies that demonstrated that transfer from one writing task to another was achievable, the number of those studies was small and more importantly, there was little discussion on the conditions for achieving that transfer.

In situations such as the above one, there aren't enough studies to which we can apply convergent analysis, and thus we weren't able to tease out the relevant conditions and potential compromises that are needed to inform practice.

What specific topics need to be explored by researchers? For starters, in looking over our practitioner-oriented literature review, we see a number of questions that have only a handful of references back to the literature. If indeed those questions are important to teachers -- and we think they are -- then these questions suggest areas for further exploration: transfer, collaborative writing, and using multimedia for self-expression are all topics that have little research behind them.

In addition to more research, new tools to access and analyze the literature are needed.

Currently, filling out a Profile is a labor-intensive exercise. But, individuals who have actually done profiling, e.g., classroom teachers, report that they came away from reading the literature with a much deeper appreciation and understanding of the research when they used the Profile to organize their reading. Tools to scaffold that profiling activity therefore would be most useful.

Still further, tools that support teachers in quickly doing convergent analysis would also be useful. Any list of questions will be incomplete; specific teachers and administrators will have particular issues that they need input on, and thus they need to be able to quickly and effectively do a convergent analysis of the literature. Currently, doing convergent analysis is definitely "an art"; what tools will support end-users in making this analysis technique routine?

Technology is fast becoming more universal, more pervasive in classrooms. While there are negative arguments and naysayers, the trajectory is clear: as technology continues to pervade our everyday lives, it will do the same for schools and classrooms. The need, the demand for effective ways to use this technology in the classroom will only increase. Research can play an important role in providing educational practitioners with concrete suggestions on why and how to use technology with their students. However, there are real barriers for teachers and administrators in gaining access to the wisdom in that research. Currently, research is written with other researchers as an audience; currently, there are precious few tools for practitioners to use in accessing research; and there are significant gaps in the research since practice has not been a major driver of the research. Towards addressing these challenges and extracting value from the research literature, then, we put forward the Convergent Analysis method.