February 23, 2006

Reflections for 2/16-2/22

The ideas and concepts around design-based research are interesting. Of the two articles, I found the one by the Design-Based Research Collective (DBRC) a bit easier to read. I think they did a better job of tying some of the theory to an example, which helped clear up the questions I had as I was reading. I read the DBRC article second, and after reading it I reread the Cobb article. The Cobb article made more sense the second time around.

One really large weakness of this method is that it cannot answer questions of causality (p. 7 of DBRC). I thought both articles were trying to hint that it can, and I was very skeptical. I actually was glad to see the admission in the one article that it cannot say anything about causality.

It seems to me that this method actually sits very much in the qualitative methods camp, with its emphasis on thick descriptions and the researcher embedded in the process. At first, I thought, "How is this different from a case study?" However, upon further reflection, I understand that case studies usually do not deal with an iterative process that has adjustments along the life of the studied process. In fact, it seems that this proposed methodology shares roots with grounded theory in sociology and with case studies, with the added dimension of a process or product life cycle.

Two weaknesses that I see with this method are its inability to answer causal questions and its inability to be largely generalizable. However, like grounded theory, it is useful for theory development, and like a case study, it can be useful for homing in on desired effects or best practices within a bounded context.

I had "issues" with the two articles that Kelly had us read (sorry, Kelly!). In the Krentler article, I'm not sure the authors were asking the right question or interpreting their data and results accurately. The question, in my opinion, is not whether technology contributed to students' grades, but whether participating in class discussion (that just happened to be online) contributed to their grades. Apparently their results say yes to that. They also found that students who used the Internet more but didn't participate in the discussions earned better grades as well. I think it is quite a stretch to say that being a more frequent Internet user means technology use "caused" better scores. First, the sample was not randomly selected. Second, the sample was not randomly assigned to groups. They can claim that there appears to be a correlation, but not a causal relationship. One last nitpick: their two plots (Figures 2 and 3) are interaction plots that did not have the main effects subtracted out of the data. Thus the authors were not describing the interactions with those plots, but rather the large main effects and the smaller interaction effects together.

In the van der Spa article, I kept thinking, "Why did the author choose a general community discussion board to try to examine her theory?" Common sense could tell you that a general community board would be for social interactions and entertainment. You would have to look at specialized groups and communities to answer some of her questions. Also, her questionnaire drew from a convenience sample, and it had a low response rate. And I thought her qualitative questions were pretty weak; they didn't probe. So how did her convenience-sample respondents differ from the general population? We can never know.