User:Alexander L. Davis/Notebook/In the Problem Pit/2013/03/23



Second round of testing.

Comments

I'm generally unsure what procedures could improve participants' question generation, or help them come up with better critiques of the project. Their critiques of the recruitment document have been helpful, although that is not what we are testing.

Again, the questions seem less than helpful. They appear to be based on the intuitive theories one would expect.

Again, the questions were based on what would concern the participants themselves, consistent with the self-projection theory.

It is going to be important to draw the citizen science sample from the same population as those who are offered the program. If people rely on self-projection, then their projections will be most valid when drawn from the actual target sample, rather than from MTurk Masters participants, who may be idiosyncratic.

Again, what methods can we use to help them generate effective questions from their own knowledge? Are we already tapping that knowledge? This reminds me of the DiSessa work with student A on learning and epistemology.