The data team leads had a very productive Elluminate session on 10/21. Thank you all for joining in. Our focus was on the SAI database, with Ann Paulson as the guest presenter sharing her knowledge of it. The benefit of the database for RPM is that it offers a relatively standardized set of data across the RPM local projects. It will not only help us collect and report data to our funders but can also provide each project site with data to measure improvement as each site moves toward its individual project's aims.

We started the session with a specific charge: what sort of base measure could we collectively use to show improvement and substantial progress in our math departments?

By the end of the session, we landed on the following measures, based on the call's notes. Feel free to clarify or correct if that's not your understanding:

#1: What percentage of students earn a pre-college math point in the year they attempt pre-college math? (not limited to a fall cohort)

#2: For students who start in the fall and begin math in levels 1-3 that first year, how many make a substantial gain (two or more points) by the end of the year?

#3: For students who start in the fall and begin math in level 4 that first year, how many earn their quant point by the end of the year? (The benefit of using a fall cohort is that you have one complete year.)
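To make the three measures concrete, here is a minimal sketch of how they could be computed from per-student records. This is illustrative only: the field names (`start_term`, `start_level`, `points_earned`, `earned_quant`) and the data shape are assumptions, not the actual SAI schema.

```python
# Hypothetical student-year records; field names are assumptions, not SAI's schema.
students = [
    {"start_term": "fall",   "start_level": 2, "points_earned": 2, "earned_quant": False},
    {"start_term": "fall",   "start_level": 4, "points_earned": 1, "earned_quant": True},
    {"start_term": "winter", "start_level": 1, "points_earned": 0, "earned_quant": False},
    {"start_term": "fall",   "start_level": 3, "points_earned": 1, "earned_quant": False},
]

# Measure 1: share of all attempters (any start term) earning at least one point.
measure1 = sum(s["points_earned"] >= 1 for s in students) / len(students)

# Measure 2: fall starters in levels 1-3 making a substantial gain (two or more points).
fall_low = [s for s in students
            if s["start_term"] == "fall" and 1 <= s["start_level"] <= 3]
measure2 = sum(s["points_earned"] >= 2 for s in fall_low) / len(fall_low)

# Measure 3: fall starters at level 4 earning their quant point.
fall_l4 = [s for s in students
           if s["start_term"] == "fall" and s["start_level"] == 4]
measure3 = sum(s["earned_quant"] for s in fall_l4) / len(fall_l4)

print(round(measure1, 2), round(measure2, 2), round(measure3, 2))  # → 0.75 0.5 1.0
```

Note that each measure uses a different denominator (all attempters vs. fall starters at particular levels), which is why the fall-cohort restriction in measures 2 and 3 matters.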

It also sounded as if we decided that splitting this out demographically would reduce the 'n' too much, so Ann agreed to work with the projects on this by request.

Some other topics raised and issues to explore further:

Mickey raised the idea of value-added measures. RPM could get Accuplacer/COMPASS scores (converted to standard scores). This may help identify which students are showing the most progress. However, there's potentially a lot of variation in placement from school to school. IR representatives can help RPM get placement score data.

Some in the session also discussed how demographic variables can serve as proxies for math preparation. There is a zip code variable in SAI, for example, which may give a good indication of where students are coming from; race and other demographic variables may serve similarly. Are these variables stable across years? Where students enter the curriculum may also be an indication of a broader trend.

Issues of number of attempts: Carmen's data looks at 'number of attempts' (included below). Carmen shows students who earned points and how many attempts it took them to earn each point. Next step: address the risk of underestimating the problem by merging Carmen's worksheets by level to capture both who earned a point and who didn't. There is an attempts variable. We need to know about those students who don't earn points.
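The merge step described above can be sketched as follows. This is a minimal illustration only; the worksheet layout, field names, and values are assumptions, not Carmen's actual files.

```python
# Hypothetical rows: (student_id, level, attempts) for everyone who attempted
# a level, plus the set of (student_id, level) pairs that earned the point.
attempts = [
    ("s1", 1, 1), ("s2", 1, 3), ("s3", 1, 2),
    ("s4", 2, 1), ("s5", 2, 2),
]
earners = {("s1", 1), ("s2", 1), ("s4", 2)}

# Merge: keep every attempter and flag whether they earned the point, so
# students who never earned a point are counted rather than dropped.
merged = [
    {"id": sid, "level": lvl, "attempts": n, "earned": (sid, lvl) in earners}
    for sid, lvl, n in attempts
]

# Per-level summary that includes non-earners in the denominator.
summary = {}
for row in merged:
    lvl = summary.setdefault(row["level"], {"n": 0, "earned": 0})
    lvl["n"] += 1
    lvl["earned"] += row["earned"]

print(summary)  # → {1: {'n': 3, 'earned': 2}, 2: {'n': 2, 'earned': 1}}
```

The key design point is that the merge starts from the full attempts list rather than the earners list, which is exactly how the underestimation problem is avoided.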

Taking specific population data into account: for example, programs and certificates under 45 credits may not have a quantitative/math course requirement. This may be worth taking into account when running your college's data.

On our behalf, Ann has been looking at substantial progress, but is there something in between as well? A common way of measuring substantial progress is passing a course on the first attempt (1:1), but could we also consider looking at the percent of students who start at the lowest level and make gains to 1:2, for example?

Jeff Lucas shared an interest in measuring student withdrawals and others agreed this was a good idea. We can explore this more with Jeff and share his team's work on this.

For the data group, I'd like to generate some discussion around your insights and growing understanding of the SAI database: What additional ideas does your team have about how this data can be useful to your project? Are there data you would like disaggregated? Are there target groups within the data, such as level 1 students or a particular ethnic group, that you'd like to know more about? How has IR used and shared this data on your campus? What other questions do you have about the data?

One idea that came up frequently at the Institute is the notion of identifying and tracking a gross measure across the teams, such as 'completion', which Ann feels would be easy to do. For example, we could look at and compare 09-10, 10-11, and 11-12 completion data and hopefully see an upward trend. Generally, the SAI data for completion has been steady from year to year (69%, 70%, … etc.), so year-to-year change in completion data within the RPM project site colleges could be useful information for one aspect of our broader evaluation efforts.

You all also asked about the gross measure running across all RPM projects that is included in the SBCTC's proposal to the Gates Foundation. Here's what I've found in the proposal:

To increase the overall pre-college math achievement gain at participating colleges by 15% and the substantial gain rate by 10% over the 3-year period of the grant.

Baseline
In 2007-08, across the system 21,640 transfer-intent students (70% of those attempting) made a pre-college math achievement gain; 12,366 (44%) made a substantial gain (2 levels or college math).

Comment from Helen Burn posted Sept 24, 2010:
I'm having a hard time recreating the numbers for "Year 2 Anticipated Progress." Specifically, the attachment from Annie Paulsen includes 2007-08 precollege math attempts (Study Question 2A, p. 6), but I don't see a table showing the percentage of 2007-08 students who attained at least one precollege math course. Study Questions 3A and 3B (pages 15 and 16) show attainment for 2006-07 and 2008-09 but not 2007-08. Since the percentage of students attaining precollege math points was stable in the two years shown (roughly 70%), my calculation below shows that a 10% achievement gain is closer to 515 students.