A Blog from New America's Higher Education Initiative


Today the education policy program at the New America Foundation is releasing "Improving Gainful Employment." This policy brief lays out recommendations for strengthening the Obama Administration's proposal to ensure that career training programs provide students high-quality options that lead to good-paying jobs at a reasonable price, rather than leaving them over-indebted with minimal job prospects.

The brief is in response to a proposal released this August and subsequent regulatory work by the U.S. Department of Education to define what it means for programs at institutions of higher education to prepare students for "gainful employment in a recognized occupation." This is a statutory requirement within the Higher Education Act that applies to nearly all programs at for-profit colleges as well as some non-degree programs at public and private nonprofit institutions. The Administration proposes to create a stronger definition for what this phrase means due to concerns about excessive debt levels, low economic returns, and high levels of non-completion and student loan defaults among these career-oriented programs.

The proposed definition laid out by the Department is the second attempt to define what gainful employment means after a judge's ruling in 2012 invalidated the prior regulation hours before it was going to take effect. The new proposal from the Department is a simpler and stronger version of what was created the last time.

But the Department's proposal contains potential loopholes, which if left unaddressed could undermine the rule's effectiveness, allowing programs that are not serving students well to keep operating. This brief addresses those challenges by laying out recommendations for the Department and others involved in the regulatory process to adopt in order to strengthen the rule. Among the highlights:

Programs should be subject to only one measure of the average debt levels of graduates compared to their annual earnings, unlike the Department's proposal, which included a second measure with income adjusted for basic living expenses.

Instead of the second measure of debt compared to earnings, programs would have to meet three minimum performance tests:

A minimum withdrawal rate test, which shows that no more than 33 percent of students in a program withdrew between the start and end of an academic year--a requirement based on an existing regulatory provision that dates back nearly 40 years.

A student loan repayment rate test that avoids some of the legal challenges this measure faced in the past by replacing a percentage threshold with a test of whether the total amount of outstanding student loan principal owed by students who attended a program is at least $1 lower than it was at the time those loans entered repayment.

A minimum income test that shows the average earnings of graduates from any program where graduates have student loan debt are at least equal to those of a full-time minimum wage employee--a standard that protects against programs that avoid penalties due to low debt levels while still leaving students in or close to poverty.

A program that fails all of these minimum standards and/or has graduates with too much debt on average compared to their annual income would risk losing the ability to receive federal student aid.

Programs would "pass," "struggle," or "fail," based upon how they did on either the annual debt-to-earnings measure or the three minimum performance standards. But a program can be no better than the worst level achieved on either the minimum performance tests or the annual debt-to-earnings rate--so failing either means a program fails.

Most struggling and failing programs would be given opportunities to improve. However, programs whose student loan payments relative to the income of graduates are more than 300 percent above expert-recommended levels should lose access to aid immediately. This immediate eligibility loss idea is borrowed from the existing measure of student loan default and reflects the idea that there is some level of performance so low that the likelihood of great harm to students outweighs the chances of improvement.
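The combined rating logic described above can be sketched in code. To be clear, this is an illustrative sketch, not the Department's or the brief's actual methodology: the function names and the simplified two-sided structure are my own assumptions, and since the brief could be read as requiring programs to meet all three minimum tests or only some, the sketch assumes all three must be met, per the "would have to meet three minimum performance tests" language.

```python
# Illustrative sketch of the brief's combined rating logic.
# The thresholds mirror the text above (33 percent withdrawal rate,
# at least $1 of principal reduction, full-time minimum wage
# earnings); everything else is a simplifying assumption.

LEVELS = {"fail": 0, "struggle": 1, "pass": 2}

def meets_minimum_tests(withdrawal_rate, outstanding_principal,
                        principal_at_repayment, avg_graduate_earnings,
                        full_time_min_wage_earnings):
    """Assume a program must meet all three minimum performance tests."""
    withdrawal_ok = withdrawal_rate <= 0.33
    repayment_ok = outstanding_principal <= principal_at_repayment - 1
    earnings_ok = avg_graduate_earnings >= full_time_min_wage_earnings
    return withdrawal_ok and repayment_ok and earnings_ok

def overall_rating(debt_to_earnings_rating, passes_minimum_tests):
    """A program can do no better than its worst side: failing either
    the minimum tests or the debt-to-earnings measure fails it."""
    tests_rating = "pass" if passes_minimum_tests else "fail"
    return min(debt_to_earnings_rating, tests_rating, key=LEVELS.get)
```

Under this sketch, a program rated "struggle" on debt-to-earnings that meets the minimum tests ends up "struggle" overall, while a program that fails the minimum tests is "fail" regardless of its debt-to-earnings result.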

Through this proposal, career-oriented programs would need to focus on all the students they serve, not just graduates but potential dropouts too. It also acknowledges that programs should not pass if they have low levels of student debt and lack sufficient earnings. After all, student debt is just one type of investment in a program--students are also spending billions in federal grant aid and arguably an even more precious resource, their time. They should expect better than living in or near poverty after completing a postsecondary program.

Since 2008, the federal government has spent nearly $200 billion on the Pell Grant program. We know that this sizeable investment has bought a 50 percent increase in the number of people receiving these awards. But how many graduates did these funds produce? What percentage of these individuals graduate? And which schools are doing the best with the lowest-income students?

Congress wanted to know the answer to all these questions. That’s why the 2008 reauthorization of the Higher Education Act (HEA) required colleges to disclose the graduation rates of Pell Grant recipients, of students who did not receive Pell but got a Subsidized Stafford Loan, and of individuals who got neither type of aid. But it only asked institutions to disclose this information, either on their websites or potentially only if asked for it, not to proactively report it to the Department of Education. The results have gone over about as well as a voluntary broccoli-eating contest with toddlers. A 2011 survey of 100 schools by Kevin Carey and Andrew Kelly found that only 38 percent even complied with the requirement to provide these completion rates, in many cases only after repeated phone calls and messages.

Absent institutional rates, the only information of any sort we have about Pell success comes as often as the Olympics, when the National Center for Education Statistics (NCES) within the Department updates its large national surveys. These data are great for broad sweeping statements, but cannot report the results for individual institutions, something that’s especially important given the variety of outcomes different schools achieve. Instead, these surveys can only provide information about results by either the sector or Carnegie type of institution. And the surveys are too costly to operate more frequently.

Fortunately, there’s a chance to fix this problem and get colleges to report this completion data. The Department is currently accepting comments on its plans for data collection under the Integrated Postsecondary Education Data System (IPEDS) for the next several years (see here to submit a comment, here for the notice announcing the comment request, and here for the backup documentation of what the Department wants to do). This means there’s an opportunity for the public to suggest, before the comment period closes on November 14, what additional information IPEDS should include.

To be clear, a lot of what the Department is already proposing to add into IPEDS through this collection will help us get a significantly better understanding of student outcomes in postsecondary education. First, it would implement some recommendations from the Committee on Measures of Two-Year Student Success, which Congress called for in the 2008 HEA reauthorization to capture students who are missed by the federal graduation rate because they are not full-time students attending college for the first time. The committee’s recommendations, which are being implemented here, would require colleges to report on the success rates of three additional groups: (1) students who are enrolled part-time and attending for the first time, (2) those who are enrolled full-time and have attended college elsewhere, and (3) those who are enrolled part-time and have attended college elsewhere. Colleges would then report how many of these students either received an award, are still enrolled, transferred, or dropped out after six and eight years. And the reporting would start retroactively, so the public won’t have to wait until 2023 to find out the first results.

Other proposed changes to IPEDS are smaller-scale but also important. Colleges would be asked to provide information on the use of veterans benefits on their campuses. And the way for-profit colleges report their finance data would be better aligned with the way public and private nonprofit colleges provide this information.

But these changes still leave us without one obvious set of completion information—rates disaggregated by socioeconomic status. Sure, attending full-time can be a proxy for a student’s financial circumstances, but not as definitively as receiving a Pell Grant.

The Institute for College Access and Success and others have already argued that the Department should add these data into IPEDS. In response, NCES has noted that improvements to the federal student aid database may make it possible to calculate completion rates for Pell students. But that’s an incomplete solution. That database is legally prohibited from collecting information on students that don’t get federal student aid, so there’s no way to produce the HEA-mandated graduation rate for students who received neither Pell Grants nor subsidized Stafford loans.

Of course, you can’t bring up any discussion of data reporting without running into the “B” word: burden. But remember, this isn’t a new burden—colleges are already legally required by an act of Congress to provide these graduation rates. Whatever encumbrance these rates represent (and I’d argue it’s probably not much, since you would just be taking a subset of an existing cohort with easy-to-identify characteristics based on student aid receipt) has already been incurred. In fact, U.S. News and World Report is already getting some schools to provide this information, but it won't share the raw data.

In an ideal world, we would not have to beg and plead with colleges to tell us whether they are successfully using the more than $30 billion they receive each year to educate low-income students. Instead, we would have a student unit record system capable of processing all this information without adding burden to colleges or forcing them to rely on costly alternatives like the National Student Clearinghouse. But thanks to Virginia Foxx (R-N.C.) and the college lobby (primarily the private institutions), we don’t live in that world. Instead, we’re left with IPEDS where these data should be.

Over the last two weeks, George Washington University has been all over the news for lying to its students about its admissions policies. For years, GW has said that it is “need blind” when in fact it isn’t. Every year the university chooses not to admit a certain percentage of students not because of grades or test scores or what admissions officers see as being a “good fit.” Rather they don’t admit these students simply because their families are low-income.

Most of the news coverage has been critical of the school for doing financially needy students a disservice. But, in fact, the opposite is true. GW is actually doing these individuals a tremendous favor since the school does such a lousy job supporting the small share of low-income students that it does enroll.

GW does not come close to meeting the full financial need of the low-income students it admits. Instead, it leaves these students with substantial funding gaps – forcing them to take on hefty debt loads. In 2011-12, GW students from families making $30,000 or less faced a daunting average net price – the amount students pay after all grant aid has been exhausted – of nearly $21,000 per year. That means low-income families have to pony up the equivalent of 70 percent or more of their annual income for their children to attend GW.

Now it’s true that GW has a relatively small endowment for its size. But this isn’t just a question of money. It’s also one of priorities. The university is a very active participant in the “merit-aid” wars. According to data the school provided the College Board, 19 percent of freshmen had no financial need yet received “merit” scholarships from the university in 2011-12, with an average award of over $17,000. Meanwhile, only 12 percent of GW freshmen received Pell Grants, which go to the most financially needy students.

GW is clearly more interested in recruiting, enrolling, and funding wealthy students than financially needy ones. For that reason, the low-income students that GW passes over should know that they dodged a bullet.

Last week, The George Washington University was forced to admit that it has been lying for years about its admissions policies. While the school has long claimed to be “need blind,” it turns out that a student’s ability to pay is factored into its admissions decisions. The best way to get off a wait list at GWU (and other colleges and universities that follow the high-tuition/high-aid model) isn’t to list your latest achievement or write another essay, but to say you don’t need to be considered for financial aid. This is enrollment management at its darkest—the university enrolls rich students to maximize its revenue, while leaving students from low- and moderate-income families out of luck simply because they lack the resources to pay full freight.

That’s bad enough. But today, we learned about another trick that enrollment managers have up their sleeves. According to Inside Higher Ed, “Some colleges are denying admissions and perhaps reducing financial aid to students based on a single, non-financial, non-academic question that students submit to the federal government on their [FAFSA].” The FAFSA asks students to identify the colleges they wish to attend. Colleges then get that information and can see the order in which they were listed by the student.

The problem is that enrollment managers and management firms like Noel Levitz have discovered that students tend to list colleges in preferential order. In an example from Inside Higher Ed, Augustana College found that 60 percent of the students who list the school first on the FAFSA end up enrolling, as do only 30 percent of those who list it second and 10 percent of those who list it third. In a world of maximizing revenues and yield, why admit, or offer a generous financial aid package to, someone who lists your institution third? Don’t forget that the FAFSA also contains a family’s financial information and Expected Family Contribution—data that allow a college to gauge just how needy a student is. So if a Pell-eligible student lists Augustana third, honestly, tough luck for that student.

Apparently, this behavior has been going on for a while. But this type of policy should never be the industry standard. It makes the admissions and financial aid process even more opaque to students, especially first-generation college-goers who have no idea that this policy even exists. Such a policy takes choice away from students. It takes away their ability to freely list the colleges they’d like to attend, without fear of repercussion. It assumes that students only care about their first choice school.

When I worked with students at The College Planning Center in Boston, I saw firsthand that low-income, first-generation students did list their first-choice college first on the FAFSA. But oftentimes the difference between their first, second, and third choices was negligible. They were excited to be going to college, period. For them, the financial aid package was more important than whether they got into their first-choice school. This policy prevents students from receiving financial aid offers that would help them choose a college that meets their needs both academically and financially.

It’s hard to know how many low- and moderate-income students have fallen victim to this policy, but there is an easy solution. The FAFSA should either not allow institutions to see where students have applied or it should list the institutions in alphabetical order. The College Board and ACT should follow suit with the score reports they send to institutions, which also list institutions in the order chosen by students. The admissions process is already opaque enough, putting low- and moderate-income students at a disadvantage.

It’s becoming increasingly obvious that “need-blind” and “need-aware” policies rarely exist in their truest form. Instead, they allow institutions to hide behind a policy that sounds welcoming to low- and moderate-income families, when really all they’re doing is trying to maximize their revenues and yield rates.

Two years ago, a for-profit college industry group unveiled a voluntary code of conduct for its members. The organization, known as the Foundation for Educational Success, said that the code would “provide strong new student protections; guidelines for training, enrollment, and financial aid; and include an enforcement mechanism to ensure that participating schools adhere to the principles of the new standards.”

More than a dozen for-profit college companies, including Career Education Corporation and Kaplan Higher Education, pledged to abide by the code. Meanwhile, the industry’s stalwart supporters in Congress held up the code as evidence that the sector could police itself.

At the time, I wrote a post on Higher Ed Watch questioning whether this was “a serious effort to improve industry standards or simply a public relations gambit that the group hopes to use to stave off any further government attempts to rein in the industry?”

Well now we have our answer. According to a report in The Chronicle of Higher Education this week:

Today hardly any trace of the effort can be found. The Foundation for Educational Success, which was coordinating the effort, no longer exists…In addition, the foundation’s Web site was dormant as of Friday, displaying only a notice from GoDaddy.com stating that the domain name expired on September 7 and was pending renewal or deletion. As of Monday, the domain had apparently been bought and the Web site converted to a health blog unrelated to for-profit higher education.

Today, the College Board released its annual trends reports, one on college pricing and one on student aid. Dense, chart-filled works, the documents tell a story of what today’s postsecondary students are facing. But each report typically carries a message with it, one that often tries to dampen the sense of unabated cost escalation.

This year’s desired headline is 2.9 percent. That’s the change in published tuition and fees at four-year public institutions from last academic year to this one in current dollars. Though an increase, it’s described as the smallest percentage increase in the last 30 years.

But herein lies the difficulty with percentage increases and college costs. One of the benefits of decades' worth of uninterrupted price increases is that eventually the same-size price hike leads to a smaller percentage change. And sure enough, that 30-year low in percentage terms is actually a $247 increase in published tuition—the 19th lowest in the past three decades (or 12th highest, if you want to look at it in a more pessimistic light). In fact, it's larger in real terms than any single-year increase that families at public four-year colleges felt from 1971-72 to 2000-01.

In fairness, that $247 increase is the lowest that families have faced in current dollars since the 2000-2001 year. But following on the heels of over a decade of stark increases, it means the base price families are paying is $5,400 more in current dollars than it was at the turn of the century. In that regard, the $247 feels like relief from charitable schools only when compared to some theoretical higher price they could have been charged.

Private nonprofit 4-year colleges provide an even better illustration of the wonders of the percentage-increase bait and switch. From 2011-12 to 2012-13, published prices in current dollars went up 4.0 percent. But this year, they went up only 3.8 percent. A victory for families, right? Hardly. Published prices went up exactly $1 less than they did the year before--$1,105 this year versus $1,106 last year. But thanks to prior jumps, that 3.8 percent increase was the third lowest in 30 years, even though the dollar change was the sixth highest in 30 years.
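A quick back-of-the-envelope calculation shows the mechanics. The two dollar increases come from the College Board figures above; the starting price is a hypothetical round number chosen for illustration, not the actual published price.

```python
# Same-size dollar hike, shrinking percentage. The $1,106 and $1,105
# increases are from the text; the starting price is hypothetical.

base = 27_650                 # hypothetical 2011-12 published price
year1 = base + 1_106          # after last year's increase
year2 = year1 + 1_105         # after this year's $1-smaller increase

pct_last_year = 1_106 / base * 100    # 4.0 percent on the smaller base
pct_this_year = 1_105 / year1 * 100   # 3.8 percent on the larger base
```

The percentage falls even though families are paying almost exactly as many extra dollars, simply because the denominator has grown.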

Understanding the dollar-versus-percentage dynamic is especially important for interpreting charts like the one below. What it shows is the average annual change in tuition and fees over a ten-year period, adjusted for inflation. So from 1983-84 to 1993-94, the average real increase in tuition and fees at public four-year colleges was 4.3 percent. By contrast, in the past decade, which we tend to think of as a time of excessively high cost increases, the average annual change at public four-year colleges was just 4.2 percent. If it’s about the same as historical trends, then we’re not seeing bad behavior. It’s just how things go—death, taxes, college costs, as the cliché goes.

But again, smaller absolute changes on a lower base lead to higher percentage increases than they would on a larger amount. And sure enough, this chart essentially lets colleges off the hook through their own increases. Here’s the same chart recreated below, only instead of percentage changes, it shows how much tuition and fees changed when measured relative to the base year of 1983-84. In other words, if the base year is 100 and the following year is 103, then the change is 3. And each type of college has its own base year. So a change of 6 points for a community college is still going to be less of a dollar change than 6 points for a nonprofit college.
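The index arithmetic the recreated chart relies on can be sketched as follows; the price series here is made up purely for illustration.

```python
# Convert a price series into index points relative to a base year,
# where the base year equals 100 -- as in the recreated chart.
# The dollar figures are hypothetical.

def to_index(prices):
    """Index each price against the first year (base = 100)."""
    base = prices[0]
    return [round(p / base * 100, 1) for p in prices]

tuition = [1_200, 1_236, 1_297]   # hypothetical tuition over three years
index = to_index(tuition)         # [100.0, 103.0, 108.1]
change = index[1] - index[0]      # 3.0 index points in year two
```

Because each type of college has its own base-year dollar amount, the same point change represents a different dollar change across sectors, which is exactly why a 6-point move at a community college is smaller in dollars than a 6-point move at a nonprofit college.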

Suddenly that last decade does not look quite so rosy. Rather, it rightly shows that the amount costs have been going up at public 4-year schools actually exceeds older decades by a good bit. The chart below makes the same point framed a different way by showing the change in the cost of tuition and fees from the start to end of each decade. These figures also are measured in comparison to the base year of 100 for 1983-84, which represents a different dollar amount for each type of school.

The last decade has not been a good time for families. Incomes are down and have not really recovered except for those at the top of the income spectrum. Meanwhile, state budget struggles, unabated spending at private nonprofit colleges, and a host of other factors have combined to keep college tuition on a steady upward path. While this year's figures show that the dollar change in the public sector is lower than it has been in the past couple of years, the price itself is still greater than it was 12 months ago and the increase is still above the rate of inflation. That's not good news. That's just less bad news than usual. And we should not be so desensitized by price increases that this becomes acceptable.

[This post is largely adapted from a previous post that ran on Higher Ed Watch in October 2011.]

Last week I argued that the U.S. Department of Education needs to develop a single, national standard that for-profit colleges would be required to use when calculating job placement rates. Department officials could go a long way in achieving this by revisiting a proposal they offered in the summer of 2010 that would have established a standard methodology to use when determining these rates.

Currently, the federal government leaves it up to accrediting agencies and states to set the standards that for-profit schools must use to calculate the rates, and to monitor them. The only exception is for extremely short-term job training programs, which must have employment rates of at least 70 percent to remain eligible to participate in the federal student loan program.

In June 2010, as part of a package of draft regulations aimed at improving the integrity of the federal student aid programs, the administration proposed extending the standards that short-term programs are required to use to all for-profit college and vocational programs that are subject to the Gainful Employment rules. The proposal was met with a firestorm of protest from for-profit college officials, as the federal methodology is much more strict than that used by accreditors and state agencies.

For example, under the Education Department’s requirements, students are only considered to be successfully placed if they have been employed in their field or a related one for at least 13 weeks within the first six months after graduating. In comparison, some accreditors and state agencies apparently allow schools to consider a graduate to be successfully placed if they work in their field for as little as a day.

Meanwhile, the Education Department has established a strict regulatory regime to make sure the rates are not rigged (the extent to which the agency actually holds short-term programs to these standards is unclear). Institutions are required to provide documentation proving that each of the graduates included in their rates is employed in the field in which he or she trained. According to the Department’s rules, acceptable documents “include, but are not limited to, (i) a written statement from the student’s employer; (ii) signed copies of State or Federal income tax forms; and (iii) written evidence of payments of Social Security taxes.”

To be fair, for-profit colleges were not the only institutions that objected to the proposal. Community colleges and state universities that have training programs that fall under the Gainful Employment requirements also complained that the plan was too stringent. These institutions may have found these requirements to be especially daunting since they generally have not had to track job placements before.

A Recipe for Failure

How did the Education Department’s political leaders respond to this criticism? They punted. Instead of sticking to their guns or devising an alternative proposal, they kicked the issue to the National Center for Education Statistics (NCES). Under the final program integrity regulations, which were released in October 2010, the Department directed the NCES to convene a Technical Review Panel “to develop a placement rate methodology and the processes necessary for determining and documenting student placement” that schools would be required to use to fulfill this mandate.

But putting NCES in charge of developing a federal standard for calculating these rates turned out to be a major blunder. First, this was not an assignment that the NCES had sought out or is typically asked to perform. After all, the Department was not just asking the center to provide technical assistance in devising a new methodology but to take the reins in setting federal policy in this highly contentious and controversial area. Second, the Technical Review Panel that the Department chose to carry out this assignment included a number of representatives from schools that were opposed to the effort.

All of this was a recipe for failure. So it was hardly a surprise that, after two days of discussions on this topic in March, the review committee was not able to reach an agreement. The panel suggested in a final report on its deliberations that "the topic be explored in greater detail by the Department of Education.” Translation: This is a job for the Department, and not NCES.

The Education Department's hands have been tied ever since, because the final regulations explicitly require schools to use "a methodology developed by the National Center for Education Statistics, when that rate is available." In the meantime, the job placement rates that for-profit colleges are required to disclose under the new rules are the same ones they report to accreditors and state regulatory agencies. As I've written previously, the methodologies that for-profit schools use to calculate these rates vary state by state and accreditor by accreditor, making them impossible to compare. And because neither accreditors nor state regulators have historically put much effort into verifying these rates, the schools don't seem to have any qualms about gaming them.

As Department officials rewrite the Gainful Employment rules, they need to revisit this issue. Otherwise, prospective students will have to continue relying on faulty information when choosing whether to attend a for-profit college.