App developer YourTeacher teamed up with KIPP Academy to test whether grade 8 students' scores would improve after using an iPad. Students were given an iPad and the Algebra 1 iBook, available on the iBookstore, to replace the traditional textbook.

The program is referred to as a flipped classroom — 80% of the iPad usage was outside the classroom, allowing teachers to focus on more advanced training and one-on-one help in the classroom.

The students were then tested using the KIPP Spring Common Assessment Test. Their scores were compared with those of students who didn't have access to an iPad, and the results speak for themselves.

“Overall, the percentage of students who rated either proficient or advanced (the ‘passing’ rate) was 49% higher in the ‘flipped classrooms’ using the iPads than in the traditional classrooms with no iPads,” according to the report. “The difference was most pronounced in the percentage of students rated as ‘advanced,’ which was 150% higher in the ‘flipped classrooms.’”

Jaryd Madlena

Amazing results for sure, but I’m having trouble locating your source for this post. You have a link to a past literacy improvement, but what is your source for the math study? The other links just go to the involved organizations’ websites. Where can I find the source of the study and quotes?

2) Different teachers are probably not randomly assigned to the two groups.

3) Small sample size?

4) Highly unlikely that students are randomly assigned to the two groups.

5) KIPP has notoriously high transfer rates. That makes it incredibly difficult to do research there — particularly because they do not give enough information about it.

6) iPads.

7) Proficiency rates are NOT raw scores. They are not scale scores. This is NOT an increase of 49% in math scores. This is just an increase in proficiency rates. It could be a TINY increase in scores, if students were near the line. (see point #4).
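Point #7 is easy to demonstrate numerically. The sketch below uses entirely invented score distributions (the study does not publish raw scores), but it shows how a tiny average gain near the cutoff can produce a dramatic jump in the proficiency rate:

```python
# Hypothetical illustration of point #7: a tiny raw-score gain near
# the cutoff can produce a huge jump in the proficiency *rate*.
# All numbers here are invented; the study's real distribution is unknown.

CUTOFF = 75  # invented proficiency threshold

# Invented "no iPad" class: 20 students just below the line, 10 above.
control = [74] * 20 + [80] * 10

# Invented "iPad" class: identical, except 5 students gained one point.
treated = [75] * 5 + [74] * 15 + [80] * 10

def proficiency_rate(scores):
    """Fraction of students scoring at or above the cutoff."""
    return sum(s >= CUTOFF for s in scores) / len(scores)

mean_gain = (sum(treated) - sum(control)) / len(control)
rate_gain = proficiency_rate(treated) / proficiency_rate(control) - 1

print(f"average score gain: {mean_gain:.2f} points")  # about 0.17 points
print(f"proficiency rate gain: {rate_gain:.0%}")      # 50%
```

An average gain of less than a fifth of a point translates into a 50% relative improvement in the proficiency rate, which is exactly why proficiency rates cannot be read as score increases.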

@ceolaf is just saying that if most research results in no change or no difference, then one can argue the “condition” group is the one that is worse off. For example, if a similar study showed that kids with ‘slide rules’ did worse than kids without, then the ones being left behind would be the ones wasting their time with ‘slide rules’. If the opposite were found, the ones being left behind would be the ones without ‘slide rules’.

Basically, the purpose of this kind of group-comparison research is to figure out whether there is a benefit, and to do that you need two similar groups to compare. There is no getting around this.

Yeah, this just seems like one of those studies that will be used to justify school boards buying iPads. How did we ever educate students in the past so that they could use a slide rule and master the math and technology necessary for manned spaceflight? Put that up against your studies and tell me the quality of teachers has gone up over time.

Passing grade starts at 75 points out of 100. Before, 10 of 30 students scored 75 or above. Let’s go to the extreme and say the rest all scored 74.

If just 5 of those students improve their scores by a single point, they pass: the passing count goes from 10 to 15, a 50% increase, which is roughly the jump the text describes.

The students didn’t improve by 49%. Depending on the base number of students who passed the test, this headline could mean anything from “before, 2 students passed; now 3 pass – that’s a 50% improvement” to “one student scored one point more, raising the number of students passing by 50%”.
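That base-rate sensitivity is easy to check: the same relative change can come from very different absolute numbers. The counts below are invented, matching the hypotheticals in the comment above:

```python
# Hypothetical illustration: a "50% improvement" in passing can mean
# almost anything depending on the base count. All counts are invented.

def relative_change(before, after):
    """Relative change in the number of students passing."""
    return (after - before) / before

# Two students passed before, three pass now: one extra student.
print(relative_change(2, 3))      # 0.5, i.e. a "50% improvement"

# Two hundred passed before, three hundred now: same headline number.
print(relative_change(200, 300))  # also 0.5
```

Without the raw counts, a relative percentage alone tells you almost nothing about how many students actually improved.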

I know math is hard Jim. Maybe you should get an iPad to improve.

Brad168

Conflict of interest much? The fact that this was done by “YourTeacher” and “KIPP Academy” pretty much makes any result null and void. You can’t have a reliable result when the people running the “research” are the ones who directly benefit from it.

I love the fact that this discussion is taking place but I have to take issue with the first sentence. If you read the report Bebell, Dorris and Muir put out from Auburn (the one referred to in the first sentence here), you would see that it falls far short of the academic rigour required of a peer reviewed study. In only one of the ten assessments the researchers considered did the experimental group score statistically significantly higher than the control group, and no other variables were considered in the one-and-a-half-page report. I would have been interested to see which apps were used to target the Hearing and Recording Sounds in Words subtest skills these kindergarteners were measured in, for example, as this is an area in which iPad apps particularly excel. I would submit, therefore, that to begin by saying “We’ve all heard how using an iPad in the classroom improves a child’s literacy scores” is misleading at best. We haven’t. We’ve simply heard a claim from the kindergarteners’ headmaster, supported off camera by two associates. I’d love to see if it was true, but it remains untested.

What about Wong et al.’s 1994 research showing that a teacher’s classroom management is the number one factor in ensuring that student learning takes place? That research has yet to be disproved (unless I missed something). The technology used in the classroom was way down the rank order. iPads don’t improve literacy scores – or math scores. Teachers do. iPads can help a little, but in my humble opinion there are far too many variables out there to make definitive statements like the one that opens this post.