Success in Community College: Do Institutions Differ?

NATIONAL CENTER FOR ANALYSIS OF LONGITUDINAL DATA IN EDUCATION RESEARCH
Tracking Every Student's Learning Every Year
A program of research by the American Institutes for Research with Duke University, Northwestern University, Stanford University, University of Missouri-Columbia, University of Texas at Dallas, and University of Washington

Success in Community College: Do Institutions Differ?
Charles T. Clotfelter, Helen F. Ladd, Clara G. Muschkin, and Jacob L. Vigdor
WORKING PAPER 74
APRIL 2012

Acknowledgements

This paper was presented at the Association for Public Policy Analysis and Management meeting in Washington, D.C., on November 3. We are grateful to the Smith Richardson Foundation for supporting this research, to the North Carolina Education Research Data Center and the North Carolina Community College System for providing access to administrative records, to Jeff Smith for helpful comments, and to D.J. Cratty, Katherine Duch, Megan Reynolds, and Eugene Wang for statistical and research assistance.

CALDER working papers have not gone through final formal review and should be cited as working papers. They are intended to encourage discussion and suggestions for revision before final publication. The views expressed are those of the authors and should not be attributed to the American Institutes for Research, its trustees, or any of the funders or supporting organizations mentioned herein. Any errors are attributable to the authors.

CALDER, American Institutes for Research
1000 Thomas Jefferson Street N.W., Washington, D.C.

Success in Community College: Do Institutions Differ?
Charles T. Clotfelter, Helen F. Ladd, Clara G. Muschkin, and Jacob L. Vigdor
CALDER Working Paper No. 74
April 2012

Abstract

Community colleges are complex organizations, and assessing their performance, though important, is difficult. Compared to four-year colleges and universities, community colleges serve a more diverse population and provide a wider variety of educational programs, including continuing education and technical training for adults as well as diplomas, associate's degrees, and transfer credits for recent high school graduates. Focusing solely on the latter programs of North Carolina's community colleges, we measure the success of each college along two dimensions: attainment of an applied diploma or degree, and completion of the coursework required to transfer to a four-year college or university. We address three questions. First, how much variation is there across institutions in these measures of student success? Second, how do these measures of success differ across institutions after we adjust for the characteristics of the enrolled students? Third, how do our measures compare to the measures of success used by the North Carolina Community College System? We find that most of the system's colleges cannot be statistically distinguished from one another along either dimension.

1. Background

Beginning in the middle of the 20th century, with post-WWII industry requiring workers with enhanced technical skills, community colleges assumed an increasingly important role in the nation's postsecondary education. More recently, community colleges have enhanced this role with increased emphasis on the mission of facilitating educational opportunities for degree completion (Dougherty and Townsend, 2006). The mission of community colleges has expanded to include short-term training programs designed to serve the interests of local business, courses to enhance the skills of adults, programs to allow high school dropouts to obtain a high school equivalency degree (a GED), and programs intended for recent high school graduates interested in gaining skills for a job or preparing for further education. As a proportion of total college enrollment, public two-year institutions have grown steadily, reaching 20 percent of all enrollees in 1967 and 35 percent in more recent years. Among the states, North Carolina, which provides the context for the current study, has been a leader in the development of and reliance on these community colleges. Compared to that 35 percent share nationally, community colleges in North Carolina accounted for 40 percent of all postsecondary enrollments. In 2009 the Obama Administration highlighted the central role of community colleges when it announced what it termed the American Graduation Initiative, an effort to reform and strengthen community colleges (U.S. White House, 2009). Community colleges are also the focus of reform efforts funded by private foundations, such as Achieving the Dream, a nonprofit organization dedicated to helping more community colleges succeed.
Efforts to strengthen the potential for community colleges to contribute to more effective education for the nation's workforce also raise the central question of how to evaluate their

1 NCES, Digest of Education Statistics 2009.
2 Digest of Education Statistics 2009, Table 214. In full-time-equivalent units, the public two-year share of all postsecondary enrollments in the U.S. was 27%, compared to 31% in North Carolina (Table 219).
3 Website for Achieving the Dream, accessed 8/15/11.

performance. In its 2006 report, a commission put together by Secretary of Education Margaret Spellings castigated higher education for ignoring technological advances in teaching and urged the development of new performance benchmarks designed to measure and improve productivity and efficiency (U.S. Department of Education, 2006, pp. 14, 19). Merely adopting the approach now accepted at the K-12 level (assessment regimes based on standardized tests) is widely viewed as impractical for postsecondary institutions. Yet the Spellings Commission spoke for many observers in calling for the development and use of measures that would make it possible to weigh and rank comparative institutional performance (U.S. Department of Education, 2006, p. 20). For example, an institution's graduation rate could be used as a measure of its ability to produce successfully trained graduates, ready to enter the labor market or continue with further training. Congress in effect endorsed this indicator of quality when, in the 1990 Student Right to Know Act, it mandated that postsecondary institutions report graduation rates.4 The Act required that institutions calculate and disclose a precisely and uniformly defined graduation rate: the percentage of first-time, full-time students who graduate within 150 percent of the normal completion time for a degree at the institution where they first enrolled. In the case of four-year institutions, for example, this is the percentage of first-time students who graduate within six years of first enrolling. When Congress was debating the bill, in 1990, Senator Edward Kennedy argued in favor of such quantitative outcome measures, stating that transparency would drive improvement: "Sunlight is the best disinfectant.
Once colleges begin disclosing this vital information, those with the poorest records will be under the greatest pressure to improve."5

4 The Student Right to Know Act, also known as the "Student Right-to-Know and Campus Security Act" (P.L. ), was passed by Congress on November 9, 1990. Title I, Section 103, requires institutions eligible for Title IV funding to calculate completion or graduation rates of certificate- or degree-seeking, full-time students entering that institution, and to disclose these rates to all students and prospective students.
5 Irvin Molotsky, "Congress Pressing Colleges to Give Figures on Crimes," New York Times, October 8.

One nagging worry about using a measure like this is that some institutions enroll students with much stronger educational backgrounds than others, giving those institutions a built-in advantage in achieving high graduation rates that may have little to do with their own effectiveness in educating students. This worry is especially acute for community colleges. Compared to four-year colleges and universities, community colleges serve a more diverse population, with many students attending part time and trying to balance school, family, and work obligations. Community colleges also provide a wider variety of educational programs than do most four-year institutions. Not only do they offer two-year associate's degrees, they provide course work for students hoping to transfer to four-year institutions. In addition, they offer a smorgasbord of courses ranging from specialized certificates and other vocational training to general-interest courses emphasizing avocation more than vocation. Their role as a stepping stone to four-year college has been debated vigorously (Brint and Karabel, 1991). What is clear, however, is that their students have high dropout rates, low graduation rates, and long periods for completing degrees. In this context of anxiety and ambition surrounding educational attainment through community colleges, policy makers and researchers are increasingly struggling to develop appropriate measures of success. Some of the prior research on community colleges has focused on student trajectories and the personal challenges students face in completing degrees. In contrast, our attention in this paper is on success at the institutional level. We ask whether there are important differences across community colleges in North Carolina that make personal success for students more likely at some colleges than at others. Leaders in the North Carolina Community College System (NCCCS) have been addressing this question for over a decade.
Under pressure from the state legislature, the community college system in 1999 adopted a dozen explicit performance measures. In 2007 the list was shortened to eight core indicators of student success. They are: a) progress of basic skills students; b) passing rates on licensure and certification exams; c) performance of college transfer students; d) passing rates of students in

developmental courses; e) success rates of developmental students in subsequent college-level courses; f) satisfaction of program completers and non-completers; g) curriculum student retention, graduation, and transfer; and h) client satisfaction with customized training.6 As articulated in the annual Critical Success Factors report, the NCCCS has developed explicit indicators of success, and it backs them up by specifying quantitative performance measures. Among these core indicators is the graduation rate as required and defined by the Student Right to Know Act. Although well-intentioned, these legislative efforts to monitor community colleges provoke the worry noted above: the colleges that appear most successful along any of these dimensions may be the ones that enroll the best-prepared students, rather than those that educate students most effectively. Our purpose in this paper is to extend the comparison of institutional success that is contained in assessment efforts such as the federal Student Right to Know Act and North Carolina's Critical Success Factors. We focus only on the curriculum programs that are intended primarily for recent high school graduates, and we use measures of individual student success to assess the institutional success of each North Carolina community college. Specifically, we define two measures of individual success within a community college: a) attainment of an applied diploma or degree and b) completion of the coursework required to transfer to a four-year college or university. We address three questions. First, how much variation is there across institutions in our measures of institutional success? Second, how much of this variation is attributable to the characteristics of the students enrolled? Third, how do our measures compare to those used by NCCCS?
While we do find evidence of potentially important variation in success rates across colleges, particularly once we adjust for observed student characteristics, our measures of success are statistically imprecise. Although conventional F-tests permit us to reject the hypothesis that all observed variation across colleges is attributable to sampling error, our test statistics are driven by a relatively small

6 Critical Success Factors, 2009, p. 5.

number of outlier colleges. We further find that our measures of success are poorly correlated with metrics used in official NCCCS publications. The results illustrate how difficult it is to estimate educational effectiveness in community colleges. Section 2 of the paper discusses the concept of institutional success with attention to the challenges of measuring it, section 3 describes our data, and section 4 quantifies institutional success for most of the state's 58 community colleges. In section 5 we ask whether any easily observed characteristics of the institutions themselves can account for the variation in success, and in section 6 we examine correlations among four measures of success. Section 7 concludes the paper.

2. Conceptual Issues in Measuring Institutional Success of Community Colleges

In this section we briefly explore the concept of institutional success for a community college and discuss how measures of success may be operationally defined.7 Like all forms of education, community college represents an investment by the student and by the taxpayers who subsidize that education in return for future benefits that will accrue to the student or to the broader society. Some of these benefits are decidedly private: they go to the student in the form of access to higher-paying, more rewarding jobs as well as the opportunity to pursue further education. They also include a host of non-pecuniary benefits, from better health to happier marriages (Oreopoulos and Salvanes, 2011). The wider community gains as well, enjoying the benefits of the stronger and more flexible local economy that comes with a well-trained local labor force, and potential savings in the form of lower expenditures on public services such as health care or prisons. Ideally, one might compare the success of one community college to another based on the magnitude of the benefits each generates.
Measuring these benefits is difficult, if not impossible, for a

7 The conceptual framework for this section is based on the more general discussion of measuring education quality in Ladd and Loeb (in press).

number of reasons. First, the very definition of success is contentious because of the varied roles that community colleges play. In particular, there is dissatisfaction with the recent policy of evaluating colleges on degree attainment, because many students enter not with the goal of getting a degree but rather of obtaining the course credits needed to transfer to four-year institutions. Another confounding issue is that students may complete all or most of the requirements for a diploma or certificate but not actually apply for the credential if it is not required for a job. A second challenge in measuring success is the absence of good data on many of the outcomes of interest. Success in the labor market, for example, can in principle be measured through careful analysis of earnings data, but data of the required detail and quality are often not available. Third, measuring success is complicated by the problem of attribution. Even if some of the outcomes, such as higher wages, could be measured, it would be difficult to determine how much of the additional wages is attributable to the education provided by the community college, how much to the background characteristics of the student, and how much to the vitality, or lack thereof, of the local labor market. For these reasons, analysts and policy makers have little alternative but to rely on one or more imperfect proxy measures of success, each of which has strengths and weaknesses. These proxies may be of three types: direct market outcomes; measures of student progress, in the form of graduation rates or credits received; and input measures. In its Critical Success Factors report (NCCCS, 2009), the NCCCS uses the first of these approaches: direct measures of the employment success of community college completers. For example, the NCCCS uses as one measure the percentage of community college completers who are employed within one year of last attendance.
Another measure is the percentage of surveyed businesses employing individuals trained or educated by a community college that report being satisfied with the quality of those employees, insofar as that quality reflects the training or education the college provided.

The advantage of such measures is that they directly reflect the types of benefits the community colleges are trying to produce. The disadvantages include the fact that they represent only a portion of the total benefits generated; the satisfaction measure is much better suited to programs providing specific training to a well-identified group of workers than to the general education programs of the community colleges; the evaluation data may be expensive to compile; and these measures suffer from the attribution problem mentioned above. The second approach to measuring institutional success is to look at students' progress through their required courses of study, with a particular focus on graduation rates, as promoted by the federal government.8 The main advantages of such progress or graduation measures are their apparent simplicity and their parallel to the graduation rates for four-year institutions. Using the 150 percent accounting period alluded to earlier, the time frame would be six years for four-year institutions and three years for two-year associate's degrees at community colleges. But graduation rates are flawed as a measure of community college success in a number of ways, some of which have been highlighted by an effort of six states to pilot a better approach (Achieving the Dream, n.d.). Among the flaws of the graduation rate measure are that it does not track the many part-time students enrolled in community colleges; it does not take into account that a major mission of many community colleges is to give students an opportunity to transfer to a four-year college; and it allows too little time for graduation given the challenges that many community college students face in balancing school, family, and possibly work obligations. Although the attribution problem also arises for this approach, it is easier to address than is the case for the market outcome approach, provided data are available on the background characteristics of the students.
Specifically, one would want to adjust any measures of graduation rates or persistence through college for student background characteristics that are predictive of student success. If one did

8 Besides being included in the Education Department's College Navigator by virtue of the Student Right to Know Law, they are also used, for example, by Achieving the Dream and Complete College America.

not do so, community colleges that served large proportions of economically disadvantaged students, students with low academic ability, or students who attended weak high schools would typically look less successful than colleges that served more advantaged students. The third approach is simply to measure an institution's quality by the quantity and quality of its inputs relative to the number of students served. By this proxy, community colleges with similar enrollments that have more resources, more highly qualified faculty, or more student support services would be judged higher quality than those with fewer resources. But even this apparently straightforward measure would be difficult to implement. One problem is that community colleges offer differing combinations of programs with differing resource requirements, leading to inappropriate comparisons of apples and oranges. Another problem is that the focus on inputs provides no information on how effectively those inputs are deployed toward the desired goals. Finally, any measure of this type would need to be adjusted for the types of students enrolled. Students who require remedial courses, for example, may put greater demands on the community college than other students. The one advantage of this input approach is that it avoids the attribution problem associated with a measure based on outcomes. In section 4, we use a variant of the second approach, evaluating student progress through required courses of study, to measure the relative success of the community colleges in North Carolina. In using this approach, we take into account the two important and distinct educational functions pursued by most community colleges with regard to recent high school graduates. The first is the preparation of students directly for the workplace, through applied training that leads to diplomas and certificates as well as two-year associate's degrees.
The second, and rather different, function is preparing students for further education by way of transfer to a four-year college or university. We devise measures of success based on each of these two functions: a measure of success in applied training, calculated in terms of applied degrees or diplomas, and a measure of success in terms of

readiness for transfer, calculated in terms of associate's degrees or transferable credits earned. We use these gauges of progress for measuring community college success in part because we do not have access to the labor market outcome data that would be required for the first approach. Furthermore, even if we had such data, addressing the attribution problem would be a challenge. Fortunately, our ability to link community college students with their school records, as we describe in the next section, means that we have good information on the student background characteristics needed to address the attribution challenge that arises with success measures based on progress through college.

3. Data and Methodology

The data for this study refer exclusively to the community college system in North Carolina. As was the case across the United States, community colleges in North Carolina began springing up shortly after World War II. By 1957 the state had established two publicly funded postsecondary systems: one composed of industrial education centers and one made up of two-year junior colleges emphasizing arts and sciences. Six years later, the two systems were consolidated into a unified Community College System that by 1979 had grown to encompass the 58 institutions in existence today. The 58 community colleges in the North Carolina Community College System (NCCCS) offer a wide range of programs, organized under broad categories: continuing education, composed primarily of non-credit courses; specialized programs, targeting economic opportunities in the community; and curriculum programs, involving courses taken for credit toward an associate's degree, diploma, certificate, or college transfer. The current study focuses exclusively on the success of the institutions' curriculum programs, which in 2009 accounted for approximately 37 percent of community college student enrollment statewide.

Between 1998 and 2009, enrollments in the community college system increased by 47 percent, as shown in Figure 1. By comparison, enrollment at the 16 four-year colleges and universities in the University of North Carolina system was lower but grew at about the same rate. The analyses in this study require data that allow us to follow individual students over time as they progress from high school to community college. For that purpose, we use data from both the North Carolina public school system and the North Carolina Community College System that were merged and prepared by the North Carolina Education Research Data Center (NCERDC) at Duke University, under an agreement between Duke University and the NC Community College System. The NCERDC linked student-level records from its archive with student records provided by the NCCCS Data Warehouse. Information on institutions was also drawn from these administrative sources as well as from data files maintained by the National Center for Education Statistics. Thus, information on the public school experiences of students, including test scores, was integrated with information on the experiences of students enrolled in the curriculum programs of NC community colleges from fall 2001 onward. The analyses presented in the current paper are based on information for one cohort of students: those who were enrolled in a North Carolina public school and took the state's 8th grade End of Grade test in math in 1999, and who first enrolled in a North Carolina community college curriculum program any time from fall 2003 on. If these students made normal progress in the years after eighth grade, they would have been in 12th grade in 2002/03 and would have graduated from high school in 2003. Since some students are retained in grade, we include in the analyses those students who graduated from high school in either the spring of 2003 or the spring of 2004. There were 89,201 students in this cohort.
We restrict ourselves to a single cohort because we wish to take advantage of the linked student records while ensuring that we have sufficient data to construct measures of successful community college outcomes. We had to further limit the study cohort by

excluding students who attended high school in Wake or Mecklenburg counties, or who first registered in a community college in Wake or Mecklenburg counties, because of incomplete data for those two institutions. We also excluded students enrolled in programs not leading to a formal college-level award, such as those completing a high school degree through college coursework. The student-level public school and college variables included in our statistical analyses are described in Appendix 1. While the public school information is complete, and we are reasonably certain that we also have complete community college course information for our students, we are still missing some degree information related to program completion. That is because our degree completion totals do not include diplomas for which students, even those who had completed all the requirements, did not officially apply. Also, while students may move between colleges, we do not take this movement into account; that is, we assume students remain in the community college where they first registered. Appendix 2 contains descriptions of the community college institution-level variables that are included in the analyses described in sections 4 and 5.

4. Quantifying Variation in Success Rates across Colleges

We adopt a two-dimensional measure of institutional success: applied success and transfer success. The first measure includes students who obtained, within four years of enrolling in their first course, a diploma or applied associate's degree in any one of the vocational areas offered in the state's community colleges. Examples include the diplomas in culinary arts and hospitality management as well as the Associate's Degree in Applied Science in such fields as criminal justice technology or early childhood education.
The other measure of success includes students who attained an associate's degree (in arts, science, fine arts, or general education) or successfully completed 10 transferable courses, or about 30 transferable credits, also within four years of their initial enrollment. Because students need not complete a degree in order to transfer to a four-year college or university, it is important to count as successful a student whose progress in courses completed made transferring a reasonable option, even if no associate's degree was obtained. Some students, in fact, achieve success by both measures. Figure 2 shows, for each of the state's community colleges, the percentages of the entering cohorts who were successful according to each of the two measures. The figure shows that the success rate for applied outcomes in the state's community colleges ranged roughly from 5 percent to 30 percent. For the measure of transfer success, the rates ranged from about 8 percent to 35 percent. If these measures of success are taken at face value, the graph suggests both that institutions specialize and that they differ in their effectiveness. Specialization is suggested by the loosely negative slope of the points, which indicates that some community colleges (those near the top left, such as college 894) had high transfer success rates but low success in applied degrees, while those toward the bottom right were high in applied success but low in transfers. One college that stands out on the applied dimension, but is only middling on the transfer measure, is college 846. To the extent that colleges are arrayed southwest to northeast on the graph, however, there is a strong presumption of differences in effectiveness, with those to the northeast dominating those to the southwest on both criteria. Yet these comparisons are surely biased as measures of institutional effectiveness in at least one regard. To the extent that the entering students at some colleges are academically stronger or financially better off, those colleges would appear to have a natural advantage in achieving higher rates of success than colleges with less prepared or economically disadvantaged students.
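Before turning to those adjustments, the two raw success indicators defined above can be sketched in code. This is a minimal illustration only: the function names, field names, and award labels below are hypothetical, not the identifiers used in the actual NCCCS records.

```python
# Hypothetical sketch of the paper's two success indicators.
APPLIED_AWARDS = {"diploma", "applied_associate"}   # e.g., AAS degrees
TRANSFER_DEGREES = {"AA", "AS", "AFA", "AGE"}       # arts, science, fine arts, general ed

def applied_success(awards, years_to_award):
    """Diploma or applied associate's degree within four years of the first course."""
    return any(a in APPLIED_AWARDS and y <= 4 for a, y in zip(awards, years_to_award))

def transfer_success(awards, years_to_award, transferable_courses, years_elapsed):
    """Associate's degree, or >= 10 transferable courses (~30 credits), within four years."""
    degree = any(a in TRANSFER_DEGREES and y <= 4 for a, y in zip(awards, years_to_award))
    credits = transferable_courses >= 10 and years_elapsed <= 4
    return degree or credits
```

Note that both indicators can be true for the same student, consistent with the observation that some students achieve success by both measures.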
To remove this bias from our comparisons, we control for differences in the inputs at each institution, using information from the detailed administrative records on the middle and high school characteristics of students that are likely to be predictive of their subsequent success in community college. Table 1 reports estimates based on ordinary least squares regressions estimated for individual students, in which the dependent variable is either success at applied outcomes or success at transfer

outcomes. The regressions show consistent and predictable associations between student success and a number of personal characteristics. First, a student's 8th grade math end-of-grade test score is predictive of higher success along both dimensions. Second, females are more likely than males to be successful by either metric. Third, students whose parents graduated from college are less likely to be successful at applied outcomes and more likely to be successful at transfer outcomes than students whose parents ended their education after graduating from high school (the omitted category for this categorical variable). Fourth, students who were ever eligible to receive free or reduced-price lunch, which is commonly used as a measure of low family income within the K-12 system, were less likely to be successful than those with higher family incomes. In the analysis below, we control for these four personal characteristics that are predictive of college success to produce measures of institutional success that neutralize their influence. Thus, our adjusted institutional success measures are not influenced by the relative affluence or academic preparedness of a college's student population. In addition to these four variables, the regressions also accounted for differences by race and ethnic group. Compared to white students (the omitted category), black students were less likely to be successful by either measure, and Asian students were less successful in applied outcomes. Using these estimated equations, we statistically control for some of the most important differences in the students who attend the various colleges by calculating the predicted probability that each student will achieve individual success in the applied or in the transfer realm.
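As a rough sketch of this estimation step, the following simulates student-level data and fits a linear probability model by ordinary least squares. All variable names and coefficient values here are invented for illustration; the paper's actual regressors come from the linked NCERDC records, and Table 1 reports the real estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # simulated students (the actual cohort has 89,201)

# Simulated stand-ins for the predictors discussed above.
math8 = rng.normal(0.0, 1.0, n)                     # standardized 8th-grade math score
female = rng.integers(0, 2, n).astype(float)
parent_college = rng.integers(0, 2, n).astype(float)
free_lunch = rng.integers(0, 2, n).astype(float)

X = np.column_stack([np.ones(n), math8, female, parent_college, free_lunch])
beta_true = np.array([0.20, 0.08, 0.05, 0.06, -0.07])  # illustrative, not from the paper
p_success = np.clip(X @ beta_true, 0.0, 1.0)
success = (rng.random(n) < p_success).astype(float)     # binary success outcome

# Linear probability model: OLS of the 0/1 success indicator on characteristics.
beta_hat, *_ = np.linalg.lstsq(X, success, rcond=None)
predicted = X @ beta_hat          # predicted probability of success
residual = success - predicted    # student-level residual used below
```

Because the regression includes an intercept, the student-level residuals average to zero overall; the interest lies in how their averages differ across colleges.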
The difference between those predicted probabilities and the actual outcomes (measured as 0 or 1 on each scale) for each student serves as the basis for our adjusted measures of institutional success, measures that control statistically for the measured characteristics of the students who attend each college. Specifically, we calculate these residuals for every student in our cohort and then average them by community college.[9] The resulting average can be thought of as an input-adjusted index of institutional effectiveness. A college whose students over-achieve (by being successful more often than would have been predicted based on their characteristics alone) will have a positive mean residual. A college whose students succeed less often than would be predicted will have a negative mean residual. We think of these mean residual scores as adjusted college effects on student success rates, and we believe they represent one reasonable indicator of institutional effectiveness.

We have calculated such adjusted college effects for all colleges and for both categories of success and have arrayed them graphically in Figure 3. In comparison to the pattern shown by the raw success rates in Figure 2, there is less in the adjusted college effects to suggest specialization. An exception is college 846, which is at the mean in transfer outcomes but 0.2 above the mean for applied outcomes. For the most part, though, colleges simply differ in effectiveness on both fronts: some colleges appear to excel at both kinds of outcomes while other colleges look like under-achievers along both dimensions. Differences in adjusted college effects are quite large, implying ranges of up to 25 percentage points on both measures. Among the best on both measures are colleges 880, 806, and 847. On the other side of the coin are colleges whose adjusted effects on success rates are below average on both scales, such as colleges 843, 870, and 844.

Although the evidence points to differences across community colleges in the adjusted effects on student success, two qualifications are worth noting. The first is that some of the measured variation across colleges is likely due to chance rather than actual differences in effectiveness. This fact is captured in the standard errors of the estimated residuals, which are on the order of 0.05 for applied success and 0.06 for transfer success.

[9] These mean residuals are equivalent to institution fixed effects.
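The residual-averaging step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the college labels, predicted probabilities, and outcomes below are invented, and in the actual analysis the predicted probabilities come from the student-level regressions on test scores, gender, parental education, and lunch eligibility.

```python
from collections import defaultdict

# Hypothetical student records: (college, predicted probability of success,
# actual outcome coded 0/1). In the paper, the predictions come from
# student-level regressions on measured characteristics.
students = [
    ("A", 0.40, 1), ("A", 0.60, 1), ("A", 0.50, 0),
    ("B", 0.55, 0), ("B", 0.45, 0), ("B", 0.50, 1),
]

# Residual = actual outcome minus predicted probability; averaging the
# residuals within each college yields the adjusted college effect
# (equivalent to an institution fixed effect).
totals = defaultdict(lambda: [0.0, 0])
for college, predicted, actual in students:
    totals[college][0] += actual - predicted
    totals[college][1] += 1

college_effects = {c: s / n for c, (s, n) in totals.items()}
print(college_effects)  # college "A" over-achieves, "B" under-achieves
```

A positive mean residual marks a college whose students succeed more often than their measured characteristics predict; a negative one marks the reverse.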
These values imply that adjusted college effects that differ by less than 0.10 or 0.12 (two standard errors) cannot be statistically distinguished from each other. An F-test permits us to test the hypothesis that the observed variation in adjusted transfer or applied effectiveness across institutions can be attributed entirely to random variation, and that hypothesis can be rejected. The F-statistic is driven primarily by a small number of colleges with adjusted college effects at some distance from zero. The majority of colleges cannot be statistically distinguished from one another along either dimension.[10]

The second qualification is that additional unobserved student characteristics might provide an alternative explanation for the patterns we observe, though we think this is unlikely. A more likely explanation is that the programs offered by community colleges differ in ways we have not captured in our analysis. For example, it could be that some community colleges prepare a higher proportion of their students for jobs in which the actual diploma or certificate is required than is the case at other community colleges. These possibilities notwithstanding, we turn in the next section to ask whether successful community colleges, identified in this two-dimensional way, have any particular characteristics in common.

5. Explaining Variation in Institution-Level Success Rates

Our goal in this section is to examine the extent to which characteristics of the community colleges themselves might be statistically associated with our estimated adjusted college effects. To do so, we regressed both of our unadjusted success measures and both of our measures of adjusted college effects on a number of college-level covariates (defined in Appendix 2). These efforts produced little in the way of statistical association. The regressions in Table 2 have as dependent variables the two raw measures of success. Those in Table 3 are based on adjusted college effects, that is, college effects on student success holding constant the characteristics of the students.

The first of the institutional characteristics considered is the college's proximity to a campus of the University of North Carolina. This dichotomous variable takes the value 1 if a UNC campus is in a county served by the community college and 0 otherwise.
Since many students may well see four-year

[10] For applied success, this test produces an F-statistic of 3.92, which is significant at the 1 percent level. Dropping the 20 outliers with the largest deviations from the mean adjusted college effect produced an F-statistic of 1.56, which is not significant at the 1 percent level. For transfer success, the comparable test produces an F-statistic of 3.07, but dropping just 11 outliers in this case makes it impossible to reject the hypothesis at the 1 percent level.
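The test behind these F-statistics can be illustrated as a one-way analysis of variance on the student residuals, asking whether the college means differ by more than chance alone would produce. The sketch below uses three hypothetical colleges with made-up residuals; it is not the authors' specification, only the standard between/within decomposition underlying such a test.

```python
# Made-up residuals for three hypothetical colleges.
groups = {
    "A": [0.30, 0.20, 0.25, 0.35],
    "B": [-0.10, 0.00, -0.05, 0.05],
    "C": [-0.20, -0.30, -0.25, -0.15],
}

all_vals = [v for g in groups.values() for v in g]
grand_mean = sum(all_vals) / len(all_vals)
k, n = len(groups), len(all_vals)

# Between-college and within-college sums of squares.
ss_between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values()
)
ss_within = sum(
    (v - sum(g) / len(g)) ** 2 for g in groups.values() for v in g
)

# F = mean square between / mean square within, with (k-1, n-k) df.
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 1))  # a large F rejects "all colleges equally effective"
```

Dropping the colleges farthest from zero shrinks the between-college sum of squares far more than the within-college sum, which is consistent with the footnote's finding that the F-statistics fall below significance once outliers are removed.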
