
Technology and education

Scott McLeod’s MindDump blog featured a set of pie charts reflecting professors’ use of technology. The charts are reproduced from a piece in the Chronicle of Higher Education and are based on a survey of about 4,600 professors from 50 universities, collected in the spring of 2009. The piece cites, but does not link to, the actual study results. Some poking around turned up the FSSE site, but I was unable to find the cited data there. The closest I found was a page reporting on the use of communication technologies, which seemed to reflect different numbers of respondents.

Nonetheless, assuming that the data are not bogus, we can ask some questions about what this means.

The technologies in question may be divided into some broad categories: tools that help the professor manage the process of teaching (course management and plagiarism detection), tools that help with out-of-class interaction with students (collaborative editing software, blogs, video conferencing), and in-class tools such as student response systems and virtual world tools. (There may be some overlap in the use of some of the tools, but that shouldn’t detract from my argument.) With the exception of course management software, which is used by 72% of the responding faculty, the technologies saw little use.

Unfortunately, neither Scott’s post nor the article it cites provides much context for the charts. Were there differences in response rate by university? by school or department? by faculty member? Were there significant correlations among the responses? Is someone who uses one sort of technology more likely to use more than one, or are these unrelated? Are there significant correlations within each category?
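To make the correlation question concrete: if respondent-level data were available (they are not public, so the responses below are fabricated purely for illustration), one could compute a phi coefficient between any two yes/no usage items to see whether using one tool predicts using another. A minimal sketch:

```python
# Hypothetical illustration only: the survey's respondent-level data
# are not public, so the example responses below are made up.

def phi_coefficient(xs, ys):
    """Phi (Pearson) correlation for two binary 0/1 sequences."""
    n = len(xs)
    n11 = sum(1 for x, y in zip(xs, ys) if x == 1 and y == 1)
    n10 = sum(1 for x, y in zip(xs, ys) if x == 1 and y == 0)
    n01 = sum(1 for x, y in zip(xs, ys) if x == 0 and y == 1)
    n00 = n - n11 - n10 - n01
    # Phi = (n11*n00 - n10*n01) / sqrt of the product of the margins.
    denom = ((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)) ** 0.5
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

# Fabricated responses: 1 = uses the tool, 0 = does not.
uses_blogs = [1, 0, 1, 1, 0, 0, 1, 0]
uses_wikis = [1, 0, 1, 0, 0, 0, 1, 1]
print(phi_coefficient(uses_blogs, uses_wikis))  # 0.5
```

A value near zero would suggest the adoption decisions are unrelated; a strong positive value would suggest a cluster of technology-friendly faculty.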

And of course one piece of technology that is conspicuous by its absence is PowerPoint. Why wasn’t that part of the survey?

Another oddity in the charts reported in The Chronicle of Higher Education was the low response rate for the use of blogs (13% of respondents used them at least some of the time) compared to the 31-37% rate of use of discussion board postings reported on the FSSE page. Surely discussion boards and blogs fall into a similar technological category. Why the discrepancy?

Finally, there remain two important questions: why? and, does it matter?

What accounts for this behavior? Did the people who conducted the survey ask open-ended questions about the professors’ decisions? It isn’t lack of awareness: most faculty knew about the technologies in the survey. Was it lack of money? Lack of interest? Was the technology hard to use? Did it not solve the exact problem the instructors had? What were some other reasons?

And finally, does it matter? It would be great to know whether the use of any of these technologies had a material impact on the outcomes of the educational process. It would also be great to be able to set up some contrasts in the survey data to see whether the use of technology in the classroom matters, or whether other factors, such as class size or individual differences among instructors, account for the variability.
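One simple way to set up such a contrast, assuming one had outcome data per course (which the survey does not provide, so the records below are invented for illustration), is to compare outcomes for technology users and non-users within class-size strata, so class size does not confound the comparison:

```python
# Hypothetical sketch: the survey has no outcome data, so these
# (stratum, uses_tech, outcome_score) records are fabricated.
from collections import defaultdict
from statistics import mean

records = [
    ("small", True, 3.4), ("small", False, 3.3),
    ("small", True, 3.6), ("small", False, 3.5),
    ("large", True, 2.9), ("large", False, 2.8),
    ("large", True, 3.0), ("large", False, 2.7),
]

# Group outcome scores by (class-size stratum, technology use).
groups = defaultdict(list)
for stratum, uses_tech, score in records:
    groups[(stratum, uses_tech)].append(score)

# Within each stratum, compare mean outcomes for users vs. non-users.
for stratum in ("small", "large"):
    diff = mean(groups[(stratum, True)]) - mean(groups[(stratum, False)])
    print(f"{stratum}: tech-user advantage = {diff:+.2f}")
```

If the within-stratum differences were consistently near zero, that would support the view that class size, not technology, drives the variability.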

Lament about the lack of adoption of technology is misplaced unless technology can be shown to improve outcomes. After all, the goal of the educational system is to educate, not to provide cannon fodder for technologists with half-baked ideas.

Those numbers aren’t on the FSSE website because we ran them for the Chronicle when they asked us. But maybe we can/should put them up on the website since we’ve already “published” them. I’ll look into it.

From the “give them an inch” department: Do you think there’s a way to make the entire dataset public (and discoverable)? There seems to be a lot of interest on the web in analyzing datasets these days.

Definitely not. Institutions and respondents participate in FSSE with the understanding that responses are anonymous and the data are not publicly available except in aggregate (reports, presentations, etc.).

Kevin, you’re right about preserving anonymity. I was hoping that some higher-level aggregates might be made available, as is typically done with census data, for example. Of course it’s not clear what the basis for aggregation might be, but that seems like an interesting topic to investigate. Thoughts?

This is a great discussion, and it has been a debate within education circles for decades. The famous argument about technology and learning was publicly carried out by R. E. Clark and R. B. Kozma. Clark contended that media are mere vehicles and have no effect on learning outcomes; in his famous analogy, media influence learning no more than the truck that delivers our groceries causes changes in our nutrition. Technology, in this view, is simply a delivery vehicle for content. Kozma countered that media and instructional methods are intertwined and must be considered together. You can find mountains of opinion on this debate, and many books have been written about it.

Technology is a tool that should be used to improve learning outcomes. If learning outcomes are not improved, then it’s a pointless exercise to use technology. Too many educators, and marketeers, push technology well beyond its potential and practical use. I agree with Clark that technology is a vehicle, and similar learning results are possible without technology. Whether more efficient learning is made possible through technology is a good point of discussion, and it goes beyond the Clark debate. Clearly, anyone can learn anything without technology, but technology does offer the opportunity for different types of learning experiences.

Earlier blog posts here mentioned the use of the iPad at Stanford. Without any empirical evidence of benefits, Stanford adopted the iPad for its medical students. Sometimes educators are among the first to adopt technology as a solution looking for a problem. I cringe when I see valuable dollars thrown at an unspecified problem. Gene made a great point when he said, “the goal of the educational system is to educate, not to provide cannon fodder for technologists with half-baked ideas.” Caveat emptor to all stakeholders in education who seek to use technology to solve learning issues.

That sounds like a question better directed to Tom Nelson Laird, the FSSE Project Manager, than to me. It’s an interesting idea but I don’t think we’ve ever done anything like that and I doubt our current IRB approval would allow for it. Typically we work with interested researchers individually to give them access to data as necessary and appropriate for their research (quite a few dissertations have been written using NSSE data, for example).

Mark: I, too, find Clark’s arguments convincing. However, I don’t think we’ve heard the final word in the debate. First, I don’t think we can dismiss technology’s impact on teaching and learning even if the impact really is a second- or third-order effect. Second, we’re beginning to see and use technologies that do things that cannot be replicated without those technologies (real-time collaborative editing a la Google Docs is the best example I can readily name), so it’s possible that we’re starting to see effects that simply can’t be disentangled from the technology.

I, too, am also often dismayed by the fetish many have for new technologies and the resources expended on them. I try to reassure myself, however, that it’s good that they’re willing to experiment and explore even if it could have been done more effectively. Better to (let someone else) light a candle (even though they meant to start a bonfire) than curse the darkness. :)