Trump or No Trump, Can For-Profit Higher Education Reinvent Itself?

By Richard Garrett, Chief Research Officer

The demise of Corinthian Colleges and ITT might be regarded as inevitable churn in a dynamic industry, but it also raises existential questions. The for-profit pitch is that many non-traditional students are better served by teaching-only, career-oriented institutions that emphasize the student as customer. Yet, the problem for the sector is twofold:

Large numbers of nonprofit colleges and universities have emulated the for-profit approach, often at a lower price, making the typical for-profit school less distinctive. For-profits peaked at about 12% market share in 2010 and are now in retreat.

The formula for commercial success in other industries—standardization, scale, and consolidation—has not played out in higher education. For-profit schools have yet to make a convincing case to students or government.

Too often, for-profit higher education is a means to an end rather than a transformative student experience.

Even if a Trump presidency is predisposed toward for-profits, the value proposition question needs an answer. Simply rolling back regulation misses this bigger challenge. So, where might for-profit higher education go from here?

In recent years, much private capital has funded the alternative credentials movement or the companies that sell to traditional schools, leaving most major for-profit universities and colleges to pursue reinvention at the margins. They’ve made international acquisitions (DeVry), purchased coding bootcamps (Capella, Phoenix), or focused on selling off less attractive assets (Career Education, Kaplan). Grand Canyon’s innovation is to build a traditional campus for younger students to balance out its online business.

The best example of core innovation may be the for-profit embrace of competency-based education with Capella University in the lead. Even this has yet to generate game-changing momentum, in part because nonprofits are already very active in this space.

Strayer University, however, presents an interesting case study on the direction for-profit higher education might take. Strayer, a publicly-traded, single-brand school with over 40,000 students online and on-site, is attempting reinvention on a number of fronts. Yes, Strayer has acquired a bootcamp, and, like many other schools, is trying to tempt corporations with big discounts on its degrees, but the story doesn’t end there.

Two other original Strayer initiatives, the Jack Welch Management Institute and the “Readdress Success” brand campaign, hint at how the for-profit sector might evolve, but also raise some notes of caution.

Let’s start with the Jack Welch Management Institute (JWMI). JWMI started out at Chancellor University, a now-closed, for-profit university, and became part of Strayer in 2011. The institute is the brainchild of Jack Welch, longtime CEO of General Electric (GE) known for dramatically increasing GE’s market valuation and culling under-performing managers and business units. Welch saw a need for a more hands-on business school, and he only considered a for-profit home for his institute, thinking it essential that JWMI practices what it preaches. While JWMI offers conventional master’s degrees and certificates, it is in other ways a departure among for-profit schools.

Although naming is common among the higher echelons of nonprofit b-schools, normally to honor a major donor, it is rare among for-profits. A name adds cachet to a brand, so alignment with one of the best-known contemporary leadership thinkers is a coup. In this case, the JWMI MBA is priced higher than Strayer's standard program.

Taken at face value, JWMI is doing well. Princeton Review ranked the institute #22 on its list of the best online MBAs, and LinkedIn named it the "most influential education brand" in 2015. Poets & Quants said the institute was one of the top 10 business schools to watch in 2016. JWMI claims it is one of the fastest-growing business schools in the world, with over 1,400 students.

Jack Welch is said to be involved in all aspects of the institute, a departure from the norm at nonprofits where donors fund big ideas but have little say over the details. Branding aside, what is the substance of Mr. Welch’s contribution? In a video, Mr. Welch talks about teaching students how to build “winning teams” and to learn things on Monday they can apply on Tuesday, and then return to class on Friday to discuss what worked.

These claims allude to a distinct approach to teaching and learning that is often missing among for-profits. While this matches the headlines of its guru's best-known books, it can be tough to distinguish differentiated value from marketing. Students study 100% online, but the specifics are not articulated, which may suggest the experience actually resembles that at other online schools. The value to the student of a for-profit business school is not spelled out.

JWMI represents an intriguing direction for Strayer and will not be the last branded for-profit business school. Ashford University’s Forbes School of Business is a more recent example. JWMI has the potential to shake up a commoditized b-school market among for-profit and nonprofit schools alike, but must guard against being different in name only.

A second novel move by Strayer is the “Readdress Success” initiative, a clever marketing campaign to get the Merriam-Webster dictionary to change its definition of the word “success.” According to Strayer, the dictionary definition of success in terms of money, fame, and recognition is too narrow. It built a website featuring prospective students, various celebrities, and other public figures, reflecting on happiness, satisfaction, and relationships.

Strayer has been quite creative, setting up a giant megaphone on a city street and inviting passersby to shout into it their life’s goal. It has also persuaded people to phone a loved one—live on camera—to thank them for their support.

This approach is subtly different from the typical “get a degree, get a career” for-profit campaign. The effect is more emotional than transactional, and prompts a less cynical reaction. There is no more than an implicit link to the enrollment funnel.

While the best brands embody such intangibles, they also must deliver something tangible. "Readdress Success" is a great complement to the conventional Strayer brand, but the school is still selling degrees. How to clearly and compellingly define, foster, and report student outcomes remains an open question for for-profit schools, and for higher education generally. Strayer is no exception.

Our Take

Does Strayer have the makings of a winning formula? There is early evidence that fresh approaches like JWMI and “Readdress Success” may be working. The university recently posted its best quarterly results in many years, showing healthy growth in new student numbers—something that continues to elude many of its peers.

For-profit higher education needs a reset if it is to successfully reach beyond conventional branding and carve out a growing niche. In Eduventures’ view, efforts like Strayer’s represent green shoots but are not yet sufficient. Higher education has a surplus of branded business schools and emotionally-charged ad campaigns. For-profit innovation must be more than nonprofit imitation. Deliberate pedagogies and compelling outcomes evidence remain in short supply across higher education as a whole. Strayer should be commended for thinking differently, and its efforts highlight the need among for-profits to think bigger and bolder.

[Editor’s note (12/6/16): APUS was removed from the list of purchased coding bootcamps.]

This Thursday, we're teaming up with IBM for a webinar to discuss cognitive solutions. Cognitive systems in higher education have the potential to transform learning, student engagement, interactions in and outside of the classroom, and much more. This webinar will give participants a vision of the "art of the possible" for cognitive use cases in higher education. Register today!

Putting "Learning" Back in Analytics

By James Wiley, Principal Analyst (@wileyjames)

Without question, the momentum behind analytics in higher education has been building. Institutions have expressed a hunger to detect patterns in their vast quantities of data, and the marketplace has answered by providing more and more tools for analysis. Unfortunately, this momentum has collapsed different types of analytics under one overarching and seemingly singular term, resulting in the mistaken understanding that, to paraphrase Gertrude Stein, "analytics is analytics is analytics is analytics."

One example of this is the emergence of learning analytics, a recent approach focused on the analysis of student learning data such as academic grades and student interactions with online course materials. Some organizations, such as EDUCAUSE and counterparts in the United Kingdom and Australia, have sought to define it, yet many discussions about learning analytics still lump it together with other forms of analytics. Most tend to equate it with "operational analytics," an approach that examines institutional data for patterns of enrollment, retention, and other areas not directly tied to student learning.

This conflation obscures the unique considerations that institutions seeking to deploy a learning analytics solution should weigh. It also means an institution can devote costly effort and resources to tools bearing the name "analytics" without gaining any understanding of whether its students are succeeding, which students need support, or which courses are delivering instruction efficiently.

Understanding Analytics

In a recent webinar featuring Intellify, we sought to build a framework with which institutions can unpack the meaning of learning analytics and develop a strategy. Building on the excellent work already done in this area by the Open University of the Netherlands, we defined learning analytics and described how it differs from operational analytics based on four simple questions:

Who is the audience that will review the output of the analysis?

Why do you need to analyze data (rationale)?

What are the data and systems required for the analysis?

How is the data being crunched for analysis (methods and models)?

As shown in the table below, applying these questions to learning and operational analytics reveals significant overlap between the two, but also some clear areas of differentiation.

Comparison of Learning and Operational Analytics

Dimension | Learning Analytics | Operational Analytics
Who? (Audience) | Mainly aimed at instructional staff and instructional designers. | Primarily aimed at institutional leaders, such as provosts and enrollment managers.

The two approaches overlap in some of the models and systems used—each leverages the more standard analytic models (e.g., predictive and prescriptive) and relies on data from learning management systems and student information systems.

On the other hand, learning and operational analytics demonstrate the greatest differences regarding their audiences and rationales. Learning analytics targets instructional staff, while the audience for operational analytics is more likely to be staff focused on enrollment or the overall health of the institution. Likewise, the focus of learning analytics is more on student and curricular improvement, while operational analytics focuses on discerning patterns that concern the financial, enrollment, and other more general aspects of an institution.

Selecting a Learning Analytics Solution

Understanding the meaning and boundaries of learning analytics as part of the broader analytics landscape is important, but how should an institution go about choosing a solution? We have seen two solutions that show promise:

Intellify: While impressive in all components, Intellify shows strength in its focus on the audience and the source systems for learning analytics. It helps institutions develop and capture metrics to support their questions, and it captures data across a wide range of source systems, including courseware providers, educational apps, and publishers.

Explorance: Like Intellify, Explorance makes a strong showing across our "who, why, what, how" framework, but is especially strong in understanding and adapting to the needs of the learning analytics audience. Specifically, Explorance grasps that, because the learning analytics process is iterative, its solution has to be responsive to any new questions users may discover as a result. Its product—called "Blue"—has a robust dashboard, which can also display course evaluation data standardized using its proprietary text analysis engine.

Stay tuned for our upcoming report that will explore the field of analytics, the technology vendors, and trends to watch. Additionally, Eduventures will provide updates and developments in the field of learning analytics. Our goal is to keep our readers and members informed of the latest advances in this area as they are embarking on analytics projects and vendor selection efforts in the coming year.

College football rankings and student success: Who's performing well on the field and off?

By Kim Reid, Principal Analyst

View our previous ratings from 2015 and 2014.

We’re all grateful that Thanksgiving is here, if only because we can turn from one heated discussion that divides America to another equally controversial but seemingly less consequential dinner table topic. Let’s talk NCAA College Football Playoff Rankings.

At your Thanksgiving table, someone—brother or sister, aunt or uncle, grandma or grandpa—will have a vehement opinion about how well the playoff rankings represent, or do not represent, the true competitive position of each team. You know how the argument goes: some conferences are stronger than others, some team schedules are tougher, and some teams are untested or underrated.

If your team isn’t performing well on the field this year, we’d like to offer you this consolation. What matters most for all of these schools is not their football performance but their performance in preparing students for success. That is the consequential discussion.

Fortunately, when it comes to academics, we've developed a method at Eduventures that takes into consideration "strength of conference," "tough schedules," and "underrated teams." Our Student Success Ratings assess institutions on their actual performance over time, given their institutional characteristics. In other words, we don't look for top performers in absolute terms, but for institutions that are doing well given their circumstances. We believe this provides better guidance for the task of improving institutions from the inside out.
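Eduventures has not published the rating model itself, but the core idea of scoring each institution on how far its actual outcomes exceed what its circumstances would predict can be sketched roughly as below. The file name, column names, and the simple linear regression are illustrative assumptions, not the actual methodology.

```python
# Rough sketch of an "actual vs. expected" rating: regress an outcome
# (here, graduation rate) on institutional characteristics, then score
# each school by how far it outperforms its own prediction.
# All file and column names are hypothetical, not Eduventures' real inputs.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("institutions.csv")  # hypothetical IPEDS-style extract

features = ["pct_pell", "avg_net_price", "admit_rate", "enrollment"]
X, y = df[features], df["grad_rate"]

model = LinearRegression().fit(X, y)
df["expected"] = model.predict(X)
df["residual"] = df["grad_rate"] - df["expected"]

# Rescale residuals to a 0-100 style score so "doing well given your
# circumstances" reads like the ratings quoted in this post.
lo, hi = df["residual"].min(), df["residual"].max()
df["rating"] = 100 * (df["residual"] - lo) / (hi - lo)

print(df.sort_values("rating", ascending=False)[["name", "rating"]].head(10))
```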

Now let’s get to the scores. In the spirit of the NCAA Playoff Rankings, here is how top-ranked public research doctoral universities representing five powerful football conferences perform on student success. Scores vary between 35 and 70 out of a possible 100. The “hash marks” on the chart below represent each institution’s student success score. (For more on the Student Success Ratings Methodology, click here.)

Top 25 Football Schools Ranked by Student Success

Note: Private institutions Stanford, USC, Boise State, and Western Michigan are excluded from the chart.

Four playoff contenders tied for the top spot in Eduventures Student Success Ratings, scoring 70 out of a possible 100 points and exceeding the national average of 55 in their category. That ties them all for 10th position among all 159 public research/doctoral institutions we rated—more than respectable. Ohio State takes the top spot combining its solid student success score with a #2 NCAA Playoff Ranking. University of Utah, Florida State, and University of Florida are right on the Buckeyes’ heels.

ACC – Top Conference on Student Success

Among the powerhouse football conferences, the ACC takes the lead in student success with Florida State (70) as the conference leader. On average, the public research/doctoral institutions in the ACC rate a 64, outpacing the average for all publics in this category by nearly ten points. Notably, all ACC public institutions surpass the national average of 55, with even the lowest scorer, NC State (59), cruising past by four points.

Big 10 – Solid Performance, with Room for Improvement

In the Big 10, only University of Minnesota boasts a higher student success score (74) than Ohio State. Public institutions in the Big 10 as a whole (59) best the national average by five points in our Student Success Ratings, but no Big 10 institution truly falls by the wayside. Michigan (52), Illinois (53), Indiana (54), and Iowa (56) are all within striking distance of the overall national average.

Big 12 – Not Making the Grade

The Big 12, as a whole (45), scores well below the national average of 55. With a score of 57, Oklahoma is the overall Big 12 conference leader, although it barely exceeds the national average. While Oklahoma sets a low bar atop the conference, West Virginia sets a far lower one, scoring just 37 at the bottom of the conference. This is a conference with a mismatch in football performance and student success performance.

PAC 12 – Where All the Teams are Above Average

Okay, not quite, but close. Colorado may blow the conference curve with a dismal student success score of 38, but every other team in the PAC 12 is near or better than average. Overall, PAC 12 public institutions (58) beat the national average, while Washington (64), UCLA (63), and Oregon (63) turn in solid performances. Utah leads the conference in student success with a score of 70, while also holding the #12 spot in the NCAA Playoff Rankings.

SEC – Great Disparity

Finally, the SEC story is … complicated. The SEC is home to Florida (70), one of the four highest scorers, but it is also home to Alabama, which trails all playoff contenders with a score of 35. While the SEC conference average of 52 sits just below the national average, this outcome belies the fact that the distribution of scores for the SEC is all over the place. Some teams, like Georgia (66) and Tennessee (65), are right behind Florida. Others, like Ole Miss (56), Auburn (55), and South Carolina (55), are just mediocre, and some, Texas A&M (45), Mississippi State (45), and Kentucky (43), are struggling to come near the national average.

What does it all mean?

While it’s all well and good to have fun with rankings and ratings, we also believe that when it comes to improving student success, you should focus less on the external competition and more on the internal fight to improve. Any good football coach would tell you the same. External rankings and ratings might spur you to action, but you can only truly improve by working within your team. Having said this, every institution needs support and advice from their larger community.

Athletic conferences are unifying entities not only in sport but in the total self-concept of their member institutions. The way these universities share student success practices and interact off the field can influence a conference identity that has consequences beyond what happens on the football field.

At Eduventures we are here to help spark and inform those conversations and collaborations within your institutions or within your conference cohorts. In the meantime, if you want to decompress this Thanksgiving, sit down with your turkey leftovers this weekend, hide from the Black Friday shoppers, then read our recently published report on student success top performers (Improving Student Success: Is Your Institution Really Ready?). But first, enjoy your family and friends this Thanksgiving.

Rethinking the Student Lifecycle: A View from Summit 2016

By Eduventures Research Staff

On October 21-23, Eduventures hosted higher education leaders from across the country for a three-day discussion on improving the student lifecycle. Our line-up of expert keynotes represented a diverse mix of perspectives shaped by data, research, and unique experiences serving both the traditional and adult student markets—all infused with a healthy dose of the latest technologies.

To provide you a flavor of this meeting, we've included five key insights that were presented at the conference by our featured Eduventures analysts. They address a broad range of topics, including prospective student mindsets, social media engagement, designing online programs, the recipe for student success, and leveraging constituent relationship management (CRM) for retention.

1. Re-Writing Student Segmentation

Source: Eduventures Prospective Student Survey (2016).

Recruiting and serving students from a position of true empathy means understanding your students as more than the average of their behaviors and demography. While all students are individuals, there are common, defined behavioral and attitudinal mindsets they hold regarding their upcoming educational experience. It’s up to you to know, understand, and engage with these different types of students in your institution.

In our Summit session, “Insights and Trends from the Eduventures Prospective Student Survey,” we shared six student mindsets based on the expected outcomes, desired experiences, and decision criteria of traditional students (Figure 1):

Social Focus: College is primarily a social experience with a good job, a foundation for career, and lasting friendships at the end.

Experiential Interests: College is a time to get hands on with internships, study abroad, and employment as you work toward your career.

Grad School Bound: College is all about developing the academic and technical foundation for your future in graduate or professional school.

Career through Academics: College is about finding your way to a solid career through a balance of academic and career activities.

Career Pragmatists: College is about finding your way to a career at an affordable cost through campus community.

Exploration and Meaning: College is all about finding meaning in your life and sharing it with others around the world.

Segmenting your market in this way allows you to develop meaningful communications and recruiting experiences, and to consider the opportunity for each student to find a resonant pathway to the outcome they desire at your institution.

2. Moving from Social Media Management to Engagement

Technology used to foster social engagement is now commonplace in industries outside of higher education. This means that millennials—and the up-and-coming Generation Z—have come to expect a real conversation with those who represent their favorite brands. These students wonder: if my favorite shoe company can listen to my feedback on Twitter—and then respond with a coupon or discount—why can't colleges have a real conversation with me on social media that is remembered throughout the enrollment process?

Eduventures research has found that prospective students engage most with social media content when it is shared by currently enrolled students and gives honest perspectives on the student experience. In fact, students engage with this type of content nearly twice as often as other types of content shared by colleges.

This finding shows that merely posting university website content on Facebook and Twitter is insufficient. Our Summit panel “The Potential of Social Media” noted that colleges and universities are only now starting to maximize their use of social media management tools by sharing authentic stories and experiences through the channels that prospects, students, and alumni view the most. These tools see better adoption rates when they are fully integrated with CRM platforms.

3. The Ingredients for Achieving Student Success

Three effective student success leaders took to the stage in our session, "Changing Demographics and Strategies for Student Success," to share how their institutions have made it to the top of Eduventures' 2016 Student Success Ratings. While each school—University of South Florida (USF), the SUNY System, and Saint Joseph's College of Maine—has very different institutional circumstances, all outperformed their projected student success metrics according to our predictive model.

What makes them successful? Each incorporated all six elements (see Figure 3) of an effective student success culture, and the panelists especially highlighted the importance of the first two elements, “leadership and focus” and “collaborative control.”

Leadership and Focus: Simply put, before an institution can act, it needs the imprimatur of leadership. For example, at USF the Provost put a stake in the ground about high failure rates: tolerating them in gateway courses would no longer be acceptable. As a result, the faculty and administration came on board to make critical changes to improve the pedagogy and support in those courses. That takes real leadership skills and institutional focus.

Collaborative Control: Even with leadership and institutional focus, though, many institutions can flounder without creating the necessary collaborative environment to articulate goals and make progress. While many colleges and universities would say that they collaborate on student success, far fewer have actual “collaborative control”—the structures, guiding documents, information sharing, and accountability necessary to make consistent, deliberative progress.

4. Understanding the Demand for “Convenience” in Online Programs

Source: Eduventures Adult Learner Survey (2016).

When evaluating their graduate program portfolios, clients frequently ask us if they should move a current on-campus program online. As we discussed during our session “Strategic Recruiting and Programming for the Next Generation of Educators,” institutions should consider many factors, including their audience, when making this determination.

Convenience does play a role in the choices many adult students (age 25+) make about where to study. Research from our 2016 Adult Learner Survey charts this population's interest in fully online programs (see Figure 4). More specifically, it shows the difference between the general population of adult learners and those interested in studying education fields, with instructive revelations:

The definition of “convenience.” Notice that for both the full sample and those interested in studying education, no more than half of students are interested in a fully online program. Generally, less than 40% want to study in this format. Convenience, then, does not necessarily mean 100% online offerings. Instead, institutions should design hybrid experiences to enable student success in a given program.

Age matters. 54% of 25- to 34-year-olds want to complete a fully online program, while only 32% of all adults are interested in a fully online degree. One might logically conjecture that early career educators who need credits and degrees to advance in their professions or maintain licensure while balancing work and family responsibilities are primarily looking for convenience in a degree program. This example demonstrates that institutions looking to move online should consider their audience carefully.

Eduventures’ research further reveals that the need for online programming varies by area of study. Ultimately, students want to be successful, and it will be up to institutions to meet student needs while facilitating their success.

5. Leveraging CRM for Retention

One of the principal reasons that institutions pursue an enterprise CRM is to tie together academic outcomes data from multiple departments. These systems promise to collect all the actions of faculty, advisors, staff, and administrators to better serve the needs of students and gain a comprehensive view of all students and alumni.

According to our panel, "CRM Across the Student Lifecycle: Enterprise vs. Best of Breed," this comprehensive view of data is certainly a required step in enterprise CRM adoption, but it is woefully insufficient to meet the needs of students for personalized learning and support services, such as academic advising. Eduventures research (see Figure 5) shows that once an institution has tied together its data silos, the data is accessed mostly by senior administrators, more than by advisors working directly with students for academic and support purposes. What's more, students themselves rarely get access to this data.

While institutions are certainly getting better at aggregating data on students and putting it in the hands of faculty and staff, they are not truly meeting their own goals of improving student outcomes. Like the adoption of social media management tools to improve student engagement, enterprise CRM begins to shine when it is used to foster two-way communication between students, faculty, and advisors.

For those who attended Summit, session slide presentations are available and posted in our Research Library. Videos of the presentations will be available soon. Eduventures clients who want to discuss any of the topics covered here in greater depth are encouraged to schedule an advising session with one of our analysts.

Higher Ed Mixology: Tasting the Eclectic EQUIP Cocktail

Select several established and successful public and private universities

Add a dollop of coding bootcamps and alternative education partners

Mix in a dash of the 11th-ranked company on the Fortune 500 list

Blend in several widely recognized quality assurance (QA) entities

Stir in a ranking service for software coders, an alternative education funder, a certified public accounting firm, and a few consulting groups

Shake well and serve chilled over close to $17 million of grants and Federal Title IV student aid

As the U.S. Department of Education's (ED) latest experiment in higher education innovation, this recipe has tremendous potential to foster advances in both program design and QA practices. While EQUIP may lead to some valuable insights in these areas, several questions remain.

For example, what can this eclectic mixture reveal about how traditional institutions can successfully partner with new, innovative program providers? What unforeseen insights will emerge about the ability of vastly different types of entities to provide quality assurance? Ultimately will EQUIP whet our thirst for innovation, or leave us with a bitter aftertaste?

Given the recent excitement around EQUIP, we thought it would be timely and relevant to dive a bit deeper into this new cocktail.

Not Just Another Higher Ed Innovation Lab

Launched in late 2015 as part of ED’s Experimental Sites Initiative, EQUIP is designed to “promote and measure college access, affordability, and student outcomes.” To achieve this goal, EQUIP is testing the efficacy of new partnerships between established institutions and non-traditional programs as well as alternative methods of providing outcomes-driven QA. A carefully controlled allotment of Title IV funds will incentivize these partnerships and expand access to innovative workforce readiness programs such as coding bootcamps for underserved students.

The rise of coding bootcamps exemplifies a workforce readiness effort that falls outside conventional higher education. Well-resourced students with existing degrees and access to capital have driven much of that growth. Before EQUIP, these programs were unable to utilize federal financial aid, restricting access for low-income candidates.

Federal regulations also prevented accredited institutions from collaborating with non-accredited providers for more than 50% of the content and instruction within a program. ED has waived this rule for EQUIP, enabling accredited institutions to build entirely new programs with these alternative providers.

EQUIP’s scope, however, suggests a healthy dose of caution from ED, highlighting the intent of its experimental design: to measure the ability of partnerships between conventional and non-traditional education providers to reach under-served student populations. The first round of the experiment will impact no more than 1,500 students across eight partnerships, and leverage only $17 million in Title IV Federal aid, a tiny portion of available monies.

Current EQUIP Partnerships

School | Partner | Quality Assurance Entity
Marylhurst University | Epicodus | Climb
Colorado State University – Global Campus | Guild Education | Tyton Partners
University of Texas, Austin | MakerSquare | Entangled Solutions
Dallas County Community College | StraighterLine | Council for Higher Education Accreditation (CHEA) Quality Platform
Wilmington University | ZipCode | HackerRank
Thomas Edison University | Study.com | Quality Matters
Northeastern University | General Electric | American Council on Education
SUNY Empire State | Flatiron School | American National Standards Institute

The eight selected programs prioritize upskilling students in software coding and business domains. Four of the eight focus on coding and software development, and three on business administration or criminal justice. An outlier is the partnership between Northeastern University and General Electric (GE), through which up to 50 GE employees will complete an accelerated Bachelor of Science in Advanced Manufacturing.

While these workforce sectors have strong forecasts for employment and median income, we wonder whether future iterations of EQUIP can expand to other domains. Could future rounds of EQUIP funding be devoted to partnerships in other high growth areas, such as continuing medical education or cyber-security?

Wide-Ranging Quality Assurance

Given EQUIP's unorthodox mix of established institutions and alternative providers, quality assurance is important. Adding to the breadth of the experiment, ED required the inclusion of a third-party QA entity within each partnership. ED describes this component of the EQUIP experiment as an attempt to assess the viability of "outcomes-based" QA processes. More surprising is the range and variety of these quality assurance entities.

At one end of the quality assurance spectrum we note the participation of the American Council on Education (ACE), Quality Matters, the American National Standards Institute (ANSI), and the Council for Higher Education Accreditation (CHEA). These represent established arbiters of higher education best practices and accountability. Quality Matters, for example, brings considerable experience to EQUIP, having previously certified more than 6,000 courses, from associate to doctoral degree levels.

At the other end of the spectrum, we see a collection of potentially capable, but untested organizations and companies tasked with significant quality assurance requirements. HackerRank, a ranking platform for software engineers and coders, will be the QA entity for the partnership between Wilmington University and ZipCode in which students will complete a 12-week software development bootcamp and then enter into local apprenticeships or entry-level jobs.

This was by design. The initial EQUIP request for proposal in October 2015 noted that a core objective of this experiment is to measure how QA entities would “determine the quality of a program of study through a set of largely outcome-based questions, rigorous and timely monitoring, and accountability processes.” It will be revealing to see, at the very least, how for-profit companies such as HackerRank, Climb (non-traditional student financing) or Entangled Solutions (higher education consultants) fulfill this QA objective.

A Great Taste or the Ingredients for Another Hangover?

Since the August 2016 announcement of the EQUIP awards, ED has stated that each institution's existing national or regional accreditor will still have ultimate oversight over each partnership. In this context, we see EQUIP as having the potential to provide a safe and contained space for program and QA innovation. This will still require vigilance and monitoring. At a minimum, we would hope for significant transparency from ED on the partnerships' QA entity performance metrics.

While there are ample reasons to applaud the EQUIP experiment, we hope several questions will come into greater focus as the projects move forward:

Can ED effectively evaluate new models of QA, while also assessing the efficacy of partnerships between traditional schools and innovative program providers?

Will non-traditional QA entities, embedded in several of these partnerships, be able to design and deliver accurate assessments and measurements?

Will the sample size of only 1,500 students across eight projects provide enough evidence for further rounds of EQUIP funding and access to federal monies?

All eyes will be on these outcomes and others as this experiment unfolds. Chances are, a few sips of the EQUIP experiment may whet your thirst for more of this eclectic cocktail. As is the case with most mixed drinks, however, it will be advisable to sip slowly, consider the ingredients, and assess the impact before ordering another round.

EDUCAUSE 2016 Trends: Cloud and Intelligence for Everyone

By Jeffrey Alderson, Principal Analyst (@eduventuresjeff)

The Fall edtech conference season was certainly in full swing this month. Eduventures attended Dreamforce 2016 and PESC EDiNTEROP and even hosted thought leaders and innovators from higher education at our own Eduventures Summit. Our research team then capped off a busy October with a trip to EDUCAUSE 2016 last week where over 8,000 attendees packed the Anaheim Convention Center to get up to speed on the latest education technology trends.

At EDUCAUSE we met with more than 30 vendors to learn more about a multitude of product announcements and initiatives. Emerging from these conversations was a renewed focus on improving the student experience across the education lifecycle. While each company demonstrated a vastly different way of achieving that goal, two trends were clear: a willingness of vendors to outsource cloud operations and security management, and the integration of the latest advances in analytics and artificial intelligence within the user experience of all types of platforms.

Cloud Operations No Longer Seen as a Core Competency

Two case studies illustrate the growing view among vendors that data center management is no longer a core competency. For example, D2L announced that it would be standardizing on the Amazon Web Services cloud platform. Its aim is to consolidate on a single cloud infrastructure, leveraging the continuous innovations in storage, security, and analytics that Amazon provides, while redirecting critical development resources to focus on the student learning experience.

This announcement follows a similar one from Blackboard at its BBWorld16 conference over the summer. It announced an initiative to transition management of its cloud operations to IBM.

Higher education institutions, of course, have been following this trend for some time. One company that serves higher education with data center and infrastructure operations is Dell Technologies. We spoke with Jon Phillips, Director of the Dell EMC Center of Excellence for Education, to get his take on why institutions—and now vendors—are ready to outsource data center operations.

He confirmed that, apart from the obvious benefit of reducing overall costs, the primary driver for outsourcing among institutions is security. It is still common for administrators and information technology specialists to believe that institutions should run computing infrastructure themselves because student data is too important to trust to third parties. Unfortunately, with thousands of attacks on campus computing infrastructure daily, institutions can no longer take a reactive stance to security.

Even a small data breach, network intrusion, or overloaded instructional service can result in lost class time, negative brand reputation, and real costs in the form of lawsuits or identity protection fees. So now, many institutions trust their critical systems—and student data—to vendors that specialize in secure network operations. These include vendors like Dell, IBM, Akamai, Amazon, and Microsoft.

Vendors seeking assistance in managing their cloud operations are doing it for wholly different reasons. Some of the younger edtech companies utilize the latest platform-as-a-service technologies from Amazon, Microsoft, and Google Cloud. The surge in the number of successful edtech launches is due in no small part to cloud platforms giving small development teams the tools they need to build scalable education solutions almost overnight. The older education technology vendors have had a more difficult time standardizing their cloud operations on a single platform.

When Blackboard decided to outsource management of its data center operations to IBM and lead the transition to cloud infrastructure across all product lines, it was in part a response to customer and market feedback that the company was getting the cloud wrong. As more and more institutions moved their heavily customized Blackboard solutions to managed services and then on to Blackboard’s latest cloud offerings, the company found itself in the position of needing to invest heavily in data center operations. This development detracted from its ability to develop new user experiences and products for student success.

It is also important to note that some vendors, such as Ellucian, still believe that managing cloud infrastructure on behalf of their institutions is core to their value propositions and will continue to offer this service for years to come. Ellucian used EDUCAUSE to hone its value proposition as the only vendor institutions need for all administrative systems and related services for outsourcing data center operations.

A Changing Analytics Landscape

Also at EDUCAUSE, another primary aspect of Blackboard’s partnership with IBM became clearer. The company announced that it will leverage Watson Education Cloud services (Watson)—IBM’s cognitive computing platform—in both current and forthcoming products. A panel moderated by Michael King, Vice President and General Manager for Global Education Industry at IBM, positioned Blackboard and Pearson as partners in developing solutions for higher education that will leverage Watson.

IBM demonstrated similar projects that it has done with Apple and Sesame Street. These collaborations show what is possible when you merge cognitive intelligence and analytics into tools meant for everyday use in the classroom. A post-panel discussion with Doug Hunt, Global Business Leader for Education at IBM, clarified which types of edtech companies IBM finds most interesting to work with.

IBM is seeking out key relationships with leaders in each phase of the learner lifecycle to build out exemplar applications and show the world what is possible. When these are married with IBM's Kenexa learning management system for corporate learning, IBM hopes to show that a single data and analytics platform spanning the entire learner lifecycle is not only possible, but a reality supported by the biggest names in education technology.

Stay tuned for future Tech Alerts that will highlight additional technology vendors and trends to watch as we enter 2017. Additionally, look for an Eduventures update to our Higher Education Technology Landscape. Our goal is to keep our readers and members informed of the latest advances in technology as you are embarking on technology projects and vendor selection efforts in the coming year.

November 2, 2016, 2pm

Join us as we partner with Software Secure to discuss what you can do to maximize the online proctoring placebo effect, and how to collaborate with your vendor to suppress cheating over the long term. Register today to reserve your seat!

Hitting an Unclear Target: The Impact of Ambiguous "Student Outcomes" on Technology

By James Wiley, Principal Analyst (@wileyjames)

As reported by Inside Higher Education (IHE) last month, Eduventures' survey of more than 200 higher education leaders from across the country identified the challenges institutions face when trying to improve student outcomes. Too many organizational roadblocks and a lack of clear ownership and accountability quickly rose to the top. Also at the top of the list: different definitions of "student outcomes," a problem particularly relevant when it comes to selecting and deploying technology intended to support these outcomes.

Consider the figure below. While this figure illuminates the different ways in which institutions prioritize "student outcomes," it is more notable that there is such a broad range of definitions.

Likewise, in the insightful comments that followed the IHE article, some commentators equated “student outcomes” with “student learning outcomes,” while others identified student satisfaction and the overall learning experience as key outcomes.

These findings and the attention they’ve received are significant because they indicate that the first step in developing a strategy—setting a clear target—may be missing at many institutions because of the ambiguous way in which we use the term “student outcomes.” From a technology perspective, this means that there may be a costly misalignment of technology and student outcomes, where the implemented technology, though powerful and much-liked by stakeholders, does not support the institution’s view of what “student outcomes” means and how it plans to take action to improve them.

Without question, there has been a lot of discussion about student outcomes, most recently on measuring institutional quality and establishing accreditation. Within this context, the definition of “student outcomes” seems to fall into one or more of the following categories:

Student Persistence: Students entering college persist to completion and attainment of their degree, program, or educational goal.

Student Advancement: Students achieve success in their career, specifically in areas for which their institutions prepared them.

Holistic Development: Students progress through their college experience as “whole persons” (i.e., improving their intellectual or social development).

At first glance these categories of student outcomes seem pretty straightforward. When leaders decide the best approaches for improving them, however, certain complexities become apparent:

Improving different student outcomes may require different strategies: Improving academic achievement, for example, may involve an increase in efforts to promote tutoring, while promoting holistic development may involve an increased focus on student engagement strategies.

Improving student outcomes may require an understanding of their interrelationships: If the primary institutional driver for improving academic achievement is increasing student advancement, then deploying academic achievement strategies on their own may not result in the desired improvement.

Measuring student outcomes may involve more nuanced metrics: While the measurement of academic achievement, for example, may seem straightforward, establishing the performance of other student outcomes such as holistic development or student advancement may require conducting surveys and other “softer” forms of data collection.

What does this mean from a technology point of view? First, each approach could require different technology to support it. For example, if student persistence is the primary student outcome focus at your institution, then your key technology could well be a student information system (SIS) to track enrollment. If, however, your primary outcome focus is academic achievement, then a learning management system (LMS), where students can access a variety of instructional resources, might be a better fit.

Second, the consideration of the interrelationships among these various outcomes and the metrics selected are also important for technology deployment. Let’s assume that you have implemented a SIS to improve student persistence. You then discover that holistic development actually drives student persistence, and the best way to measure it is by tracking student engagement through social media—a functionality your newly-implemented SIS lacks. In this case, you may find that your initial definition of student outcomes, while necessary, was insufficient, and now you have to supplement your SIS with a social engagement solution—costing additional resources.

To ensure that you have aligned your technology selection and implementation with any initiatives around student outcomes, we strongly suggest taking the following steps:

Understand your institution’s definition of student outcomes: As discussed above, without a clear understanding of the precise meaning of “student outcomes” at your institution, you may risk implementing a technology solution that misses the mark.

Understand the interrelationships and metrics for student outcomes: Technology solutions offer different features (e.g., analytics, online interventions), and you will need to map these features to both the actions and metrics stakeholders use to improve student outcomes (a rough mapping sketch follows this list).

Make outcomes, interrelationships, and metrics visible to the vendor market: Based on their market review and client feedback, vendors gain a sense of what features they should develop. When soliciting vendor responses—either through a request for proposal or a request for information—you should make it very clear what "student outcomes" means at your institution, as well as any interrelationships and metrics you use to measure them.
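As a concrete way to prepare that information before approaching vendors, it can help to write the mapping down in a structured form. The sketch below is purely illustrative: the outcome definitions, metrics, and feature names are assumptions, not a standard taxonomy. It simply shows how an explicit outcome-to-feature map makes gaps, like the SIS scenario above, visible early.

```python
# Illustrative mapping from an institution's "student outcomes" definitions
# to the metrics used to measure them and the system features needed to act
# on them. All entries are hypothetical examples, not recommendations.
OUTCOME_MAP = {
    "student_persistence": {
        "metrics": ["term-to-term retention", "credit accumulation"],
        "required_features": ["enrollment tracking", "early-alert flags"],
    },
    "student_advancement": {
        "metrics": ["job placement rate", "licensure pass rate"],
        "required_features": ["alumni outcomes surveys"],
    },
    "holistic_development": {
        "metrics": ["engagement survey scores", "co-curricular participation"],
        "required_features": ["social engagement tracking", "advising notes"],
    },
}

def missing_features(outcome: str, vendor_features: set) -> list:
    """Return the required features a candidate solution does not cover."""
    required = OUTCOME_MAP[outcome]["required_features"]
    return [f for f in required if f not in vendor_features]

# Example: an SIS-centered solution evaluated against a holistic-development
# focus surfaces exactly the gap described earlier in this post.
print(missing_features("holistic_development", {"enrollment tracking"}))
```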

For more information, please read our Student Success and Outcomes report or contact James Wiley, Principal Analyst, at jwiley@eduventures.com

[Editor’s Note: The Wake-Up Call will be on hiatus next week for Eduventures Summit 2016 and will return November 1.]

Getting to know you. Getting to know all about you.

By Kim Reid, Principal Analyst

It’s that time of year again. With new classes safely on campus, enrollment managers now turn to deconstructing the prior year’s recruitment cycle. They review their efforts and fine tune their outreach, communications, and financial aid strategy for the next cycle. They pore over data in an effort to understand the mercurial 18-year-old mind and the equally mercurial undergraduate college enrollment decision.

They ask themselves: what, if anything, is the perception gap between those who chose to enroll in my school and those who were close to enrolling but chose to enroll elsewhere? This is the critical gap to monitor, since small changes in recruiting practices and improved communications might just make the difference in converting fence-sitting students.

Consider the following brand perception map based on data from one public university's 2016 Survey of Admitted Students. It provides a visual representation of the brand attributes we see in many public institutions. In the example below, attributes that are commonly shared among public institutions appear in the space between enrollment categories (e.g., Enrolling, Close-to-Enrolling, and Not-Close-to-Enrolling), while attributes that are held more strongly by one category appear along the edges of the map. The closer an attribute is to an enrollment category, the more strongly that perception is held relative to the other enrollment categories.

Example University: Brand Perception Map of a Public Institution by Enrollment Status

What's striking is the constellation of associations that Enrolling students have in comparison to those held by students who are either Close-to-Enrolling or Not-Close-to-Enrolling. Not only are Enrolling students' attribute associations dense—Enrolling students on average make 11 attribute associations to the admitting institution, compared with nine for those Close-to-Enrolling and seven for those Not-Close-to-Enrolling—but the nature of these associations is also quite different.
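One rough way to tabulate the association patterns behind a map like this from raw admitted-student survey data is sketched below. The assumed file layout (one row per respondent, 0/1 attribute columns, and an enrollment_status column) is for illustration only; it is not Eduventures' actual mapping method.

```python
# Sketch: compute how often each brand attribute is associated with the
# institution within each enrollment category, plus the average number of
# associations per respondent (the "density" discussed above).
# The CSV layout and column names are hypothetical.
import pandas as pd

df = pd.read_csv("admitted_student_survey.csv")
attributes = [c for c in df.columns if c != "enrollment_status"]

# Mean of a 0/1 column = share of respondents selecting that attribute.
rates = df.groupby("enrollment_status")[attributes].mean()

# Average number of attribute associations per respondent, by category.
density = df[attributes].sum(axis=1).groupby(df["enrollment_status"]).mean()

# For each attribute, the category that selects it most often relative to
# the overall rate: a crude stand-in for proximity on the perception map.
skew = (rates / rates.mean()).idxmax()

print(density)
print(skew.sort_values())
```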

Enrolling students at this large public institution have warm, personal, and engaging attribute associations, while Close-to-Enrolling students have positive, but sterile, associations. For example, even though both Enrolling and Close-to-Enrolling students think of the institution as “intelligent/intellectual” and “high quality,” Enrolling students think of it as “challenging” while Close-to-Enrolling students think of the institution as “rigorous” or “prestigious.”

Students who were Not-Close-to-Enrolling picked up on surface perceptions, or stereotypes, that many large publics share, like “athletics” and “spirit school.” They associate the institution with being “well-known,” but without perceptions of quality, warmth, or energy, much the way a kindly grandmother might describe her disagreeable neighbor as “nice.”

While the pattern for private institutions is similar, there are notable differences. In the example of a medium-sized private institution, we see the same density of energetic associations that Enrolling students make—a combination of challenge, career-mindedness, energy, intellect, innovation, community, and fun.

Close-to-Enrolling students certainly think the institution is prestigious, but retain some level of distance in their knowledge of the institution. While Not-Close-to-Enrolling students admitted to public institutions saw the stereotypical aspects of publics, Not-Close-to-Enrolling students admitted to private institutions see the simple stereotype of many privates—“friendly/inclusive” and “well-rounded/balanced.”

While each of the institutions that participated in Eduventures’ 2016 Survey of Admitted Students has its own distinctive map, the similarities in perception patterns within publics and privates are instructive. These perceptual maps reveal that it is not enough for a student to have a good impression of your institution; they must have a warm, energetic relationship with it. Being well-regarded is better than being well-known. Being well-loved is even better.

It’s possible that the density of associations enrolling students have is caught up in a “chicken and egg” cycle of indeterminate cause and effect. Are enrolling students predisposed to greater knowledge of the institution, or did these institutions create a greater depth of knowledge through recruiting and communications? It’s hard to say exactly, but it’s likely that both are true to varying degrees.

Across all participating institutions, both public and private, 43% of Non-Enrolling students were “very or extremely close” to enrolling at the institution. In other words, the school was one of their top choices, yet they decided to attend elsewhere. Certainly, affordability is a relevant reason for this choice: more than a quarter of non-enrolling students (29%) say that affordability was the top driver. That, however, still leaves a large majority choosing to attend elsewhere for other reasons, reasons that better recruiting and communications might influence.

These perceptual maps provide evidence that your yield campaign should address several issues:

Know who is on the fence – Yield efforts will be most effective if you devote more of your resources toward moving Close-to-Enrolling students off the fence. In order to do this, you have to know who they are. Consider using predictive enrollment scoring, developed in-house or with an external partner, in order to identify fence-sitters (a minimal scoring sketch follows this list).

Build and deepen the relationship beyond the cold hard facts – Knowing the cold hard facts about an institution is certainly important to an admitted student, but this is foundational information upon which you need to build a deeper relationship. As you move through the yield phase, carefully consider the type of information and engagement that will help students understand the full institutional context.

Engender excitement – In your yield planning, pay attention not just to content but also to the voice and tone of your messaging. Does it feel authentic? Is it warm and welcoming? Is it distinctive to your institution? Does it meet the expectations of the kinds of students your institution seeks to attract?

Create personal meaning – Eduventures’ Prospective Student Mindsets demonstrated that students imagine different pathways, experiences, and outcomes from their college education. Do your yield events and communications speak to the mindsets that you care about the most? Can you deliver differentiated but authentic information that illuminates specific pathways to key student mindsets?
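As a companion to the “Know who is on the fence” item above, here is a minimal, hypothetical sketch of predictive enrollment scoring. It trains a logistic regression on made-up admitted-student features and flags students whose predicted probability of enrolling sits in the middle of the range. In practice you would train on prior cycles’ records and score the current admitted pool; the features, data, and thresholds shown are assumptions, not a prescribed model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for admitted-student signals (campus visit, FAFSA filed,
# event attendance, distance, net-price gap, etc.); real features would come
# from your CRM, SIS, and financial aid systems.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
weights = np.array([0.8, 0.6, 0.5, -0.4, -0.7])
enrolled = (X @ weights + rng.normal(size=500) > 0).astype(int)  # 1 = enrolled

model = LogisticRegression().fit(X, enrolled)   # train on a prior cycle
scores = model.predict_proba(X)[:, 1]           # predicted probability of enrolling

# "Fence-sitters": students whose probability is neither high nor low,
# where additional outreach is most likely to change the outcome.
fence_sitters = np.where((scores > 0.35) & (scores < 0.65))[0]
print(f"{len(fence_sitters)} of {len(scores)} admitted students flagged for targeted yield outreach")
```

The probability band used to define a fence-sitter is a policy choice, not a statistical one; set it based on how much outreach capacity your admissions team actually has.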

In the end, admitted students make decisions to attend institutions where they can “see themselves.”

The Placebo Effect of Online Proctoring
http://www.eduventures.com/2016/10/placebo-effect-online-proctoring/
Tue, 04 Oct 2016
By Jeff Alderson, Principal Analyst @eduventuresjeff

As the number of students enrolled in online courses continues to increase—28% of all students take at least one online class—so, too, does the need to protect the integrity of those courses. Either recognizing that cheating is a real issue on their campuses or feeling pressure from outside accreditors to demonstrate that it is a high priority, a growing number of institutions are seeking out technology to help. Given that a majority of cheating in online courses typically occurs at the bookends of a student’s academic career, either during large general education courses or during high-stakes exams required for graduation, online proctoring is seen as the obvious technology solution to deter it.

In a previous Wake-Up Call, Eduventures initially predicted that the number of institutions using online proctoring would grow from 1,000 to 2,000 by the end of 2016. Based on our conversations with institutions evaluating these products, we believe the market is well on its way to exceeding our initial prediction. The most common questions we get from our clients as they evaluate online proctoring solutions are “Does it really work?” and “How much can we expect our cheating problem to be reduced if we use [a particular vendor’s] products and services?”

Having dug deeply into this topic over the last year, we’ve found limited data to definitively answer these questions. Most institutions do not collect data on cheating, and if they do, it is based on self-reported student surveys that are highly unreliable. Likewise, while online proctoring vendors themselves may have data on the incidence of cheating among their current clients, they do not have access to pre-implementation data to make a comparison.

We do know that, regardless of the ability of any tool to catch an instance of cheating, actual cheating will decline in the short term due to students’ belief that they are being watched or that the technology will catch them if they try to cheat. This initial decline in cheating is known as the placebo effect. Similar to the placebo effect seen in medical science, if students taking online exams are informed that a sophisticated technology is watching their every action during an exam, the likelihood that those students will cheat drops precipitously.

Many proctoring vendors market the overall efficacy of their products when in reality they are including (and greatly benefiting from) this placebo effect. Other vendors leverage the placebo effect to reduce the costs or complexity of their platform. Some vendors only analyze a portion of recorded exam sessions, leaving students to wonder if they are being monitored at all. Other vendors, such as Software Secure, ensure that 100% of their recorded sessions are reviewed by a human proctor.

Students might be tempted to roll the dice with fully automated proctoring software, thinking that they can fool the algorithm and get away with certain types of cheating. This isn’t wise, as vendors such as ProctorFree are still using human proctors to audit the results of their automated solutions. In fact, ProctorFree is so committed to improving the quality of its algorithm that it is now auditing 100% of its automated exam sessions using real people. This has the effect of proctoring every session twice – not an environment where students should press their luck by testing the system.

In some cases, online programs have reported a drop in cheating from nearly 20% of all students in a general education math class down to less than 2% after the first academic term when using a new proctoring solution. They find, however, that this placebo effect doesn’t last forever. After the initial term, students might begin to “test” the system by searching for ways to cheat the new technology.

If the institution is using a solution that only samples a portion of its exams, this cheating may go undetected. Word then travels fast within students’ social networks and cheating begins to rise again. When cheating is finally detected by the system, the actual rate of cheating could be considerably higher. Institutions using a 100%-reviewed solution should conceivably be catching every instance of cheating if the service performs as advertised.

First, Establish a Baseline

In order to measure the effectiveness of any given online proctoring solution, institutions must first establish a baseline of cheating before implementation. This baseline gives institutions something to measure the implemented proctoring solution against and can be a critical tool for holding vendors accountable for the efficacy of their products. Yet institutions rarely collect longitudinal data on the rate at which cheating occurs in their classes or programs. Often, any record keeping of cheating incidents reflects only those who have been caught, and the actual numbers are usually considerably higher.
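As a rough illustration of why the baseline matters, and of why the share of sessions actually reviewed matters when judging a vendor’s numbers, here is a minimal sketch. It assumes reviewed sessions are a random sample and that review reliably catches cheating when it occurs; both are simplifications, and all figures are invented rather than drawn from any vendor or institution.

```python
def estimated_cheating_rate(detected_incidents: int, sessions: int, review_fraction: float) -> float:
    """Estimate the true incident rate when only a fraction of recorded sessions
    is reviewed, assuming reviewed sessions are a random sample and that review
    catches cheating whenever it occurs (both simplifications)."""
    reviewed = sessions * review_fraction
    return detected_incidents / reviewed if reviewed else 0.0

# Illustrative numbers only:
baseline_rate = 0.18   # pre-implementation baseline from the institution's own records
post_rate = estimated_cheating_rate(detected_incidents=12, sessions=1000, review_fraction=0.25)
reduction = 1 - post_rate / baseline_rate
print(f"Estimated post-implementation rate: {post_rate:.1%}; reduction vs. baseline: {reduction:.1%}")
```

Without the baseline_rate figure, the calculation above has nothing to compare against, which is precisely the position most institutions find themselves in today.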

Prior to implementing a proctoring solution, there are several steps institutions can take to collect baseline information:

Within the context of an honor code, create a policy for the reporting of incidents of cheating to a database managed by a centralized office such as institutional research.

Ensure that additional information is captured for each incident of cheating, such as the course, instructor, student, modality of instruction, and type of incident that occurred (a minimal record structure is sketched after this list).

Share information on which courses and programs suffer from cheating the most with deans, faculty, and staff.
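As a sketch of what such a centralized record might capture, here is a minimal, hypothetical structure. The field names are illustrative only, and any real implementation would also need FERPA-appropriate access controls and retention policies.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CheatingIncident:
    reported_on: date
    course_id: str        # e.g., "MATH-101"
    instructor_id: str
    student_id: str
    modality: str         # "online", "hybrid", or "on-campus"
    incident_type: str    # e.g., "unauthorized aid", "impersonation"
    proctored: bool       # was the assessment proctored when the incident occurred?

# Example record as it might be logged by a centralized office such as institutional research:
incident = CheatingIncident(date(2016, 10, 3), "MATH-101", "F-204", "S-88731",
                            "online", "unauthorized aid", proctored=True)
```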

In order to capitalize on this placebo effect and to ensure the long-term viability of their online proctoring implementations, institutions should be vocal with their student body and faculty to show that the online proctoring solution is doing its job. Follow these guidelines when sharing information about integrity solutions with your students:

Communicate the technical capabilities of the solution to students during its launch. Emphasize that all students are subject to monitoring when using the service.

Post notices that include institutional policy and quotes from the honor code when students log into the learning management system and when they launch proctored exams.

As the new system detects incidents of cheating, share this information with faculty, staff, and students in accordance with the honor code and related policies.

Make integrity protection part of the culture of online learning at your institution.

[Editor’s note: This post’s content was updated on 10/5/16 to correct the reporting on ProctorFree’s auditing of exam sessions.]

Who Will Guard the Guardians? Accountability in the New Age of Higher Ed Accreditation
http://www.eduventures.com/2016/09/will-guard-guardians-accountability-new-age-higher-ed-accreditation/
Tue, 27 Sep 2016
By Howard Lurie, Principal Analyst @EVhowardlurie

While we’ve seen dashboards measuring everything from student completion rates to library circulation patterns, we’ve yet to come across a tool designed to monitor the organizations empowered to watch over the entire higher education system: institutional accreditors. That has changed.

The Department of Education’s (ED) National Advisory Committee on Institutional Quality and Integrity (NACIQI) is piloting a set of dashboards designed to make sense of higher education accreditation. Released in June, NACIQI’s pilot is part of ED’s broader attempt to cast a spotlight on both accreditors and the schools within their accreditation portfolios. While it’s too early to gauge the long-term impact of these dashboards, it is now easier to watch the proverbial watchdogs, and evaluate how well accreditors monitor and improve underperforming schools.

The Regulatory Context

As we’ve noted before, ED has intensified its focus on underperforming schools, often in the for-profit sector. The dramatic closure of ITT Educational Services in early September comes on the heels of ED’s order preventing ITT from enrolling new students with Federal Title IV aid. At the beginning of August, ED rejected an attempt by a for-profit chain, the Center for Excellence in Higher Education, to convert to non-profit status and thereby reduce its exposure to penalties such as the Gainful Employment regulations.

NACIQI’s accreditor dashboard pilot is both timely and relevant. In the context of a heightened regulatory environment, this pilot may expose weaker accreditors by aggregating data on key performance indicators such as institutional graduation rates, median earnings, and loan default rates.

The pilot coincides with ED’s new vigilance over the accreditors in its purview. On September 22, ED formally revoked the recognition of the Accrediting Council for Independent Colleges and Schools (ACICS), a national accrediting agency. ACICS was ITT’s accreditor. NACIQI’s dashboard reports that ITT had access to more than $700 million of Title IV aid, but reported only a 32% graduation rate and a loan default rate of 22%.

In NACIQI’s view, ACICS failed to regulate its institutions effectively, thereby calling into question its legitimacy as an accrediting agency. De-recognition of an accreditor is extremely rare: no accreditor with the size and membership of ACICS has ever suffered this penalty. De-recognition of ACICS will have a domino effect on its member institutions. Those institutions seeking to maintain eligibility for Title IV aid for the 400,000 students among them will have to affiliate with a new accreditor, a time-consuming and expensive process.

The Pilot in Depth

A legacy of inconsistent enforcement, governance, and terminology has plagued national and regional accreditors alike. Although recognized accreditors must adhere to common principles, each has evolved somewhat distinct formulas for assessing quality and implementing sanctions. As a result, it’s possible for two institutions that have been reviewed by different accreditors to receive different penalties for similar infractions.

Before NACIQI’s pilot, it was necessary to wade through disparate repositories of financial aid data and school performance to assess the effectiveness of an accreditor. According to NACIQI, it designed the pilot to create a “more systematic approach to considering student achievement and other outcome and performance metrics.”

The NACIQI pilot dashboards collate data from the Postsecondary Education Participants System (PEPS), the Federal Student Aid Database, and the College Scorecard. The dashboards present the data in the following five sections:

First Glance: Member institutions, enrollments, Title IV aid, and institutions by sector

Underrepresented Populations: Pell recipients and students of color

Graduation and Earnings: Percentages of institutions by graduation rate and median earnings

Loan Performance: Percentages of institutions by repayment rate and Title IV volume

NACIQI also provides a dashboard showing aggregated data for all Title IV institutions, which together manage more than $132 billion of aid. There are views combining data for all regional and national accreditors, responsible for institutions enrolling 15.2 million and 840,000 students, respectively.

Finally, there are reports for institutions with more than $200 million in Title IV funding volume.
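To make the mechanics of such a dashboard concrete, here is a minimal sketch of the kind of roll-up the pilot performs: institution-level records grouped by accreditor, with aid volume summed and outcomes weighted by enrollment. The accreditor names, column names, and figures below are illustrative stand-ins, not actual PEPS, Federal Student Aid, or College Scorecard fields or data.

```python
import pandas as pd

# Hypothetical institution-level extract (values are placeholders, not real data).
institutions = pd.DataFrame({
    "accreditor":      ["National A", "National A", "Regional B", "Regional B"],
    "enrollment":      [45000, 30000, 28000, 16000],
    "title_iv_volume": [700e6, 250e6, 180e6, 90e6],
    "grad_rate":       [0.32, 0.41, 0.58, 0.63],
    "repayment_rate":  [0.45, 0.52, 0.71, 0.77],
})

def rollup(group: pd.DataFrame) -> pd.Series:
    """Dashboard-style summary for one accreditor's portfolio."""
    w = group["enrollment"]
    return pd.Series({
        "institutions": len(group),
        "students": w.sum(),
        "title_iv_volume": group["title_iv_volume"].sum(),
        "grad_rate_weighted": (group["grad_rate"] * w).sum() / w.sum(),
        "repayment_rate_weighted": (group["repayment_rate"] * w).sum() / w.sum(),
    })

summary = institutions.groupby("accreditor").apply(rollup)
print(summary)
```

A roll-up like this makes it easy to compare accreditor portfolios side by side, which is exactly the visibility that was hard to achieve before the pilot.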

Sound and Fury, Signifying Nothing?

Can NACIQI’s dashboards drive improved institutional and accreditor performance? Is the case of ACICS a sign of things to come?

At present, NACIQI offers no commentary on the dashboards. There is no ranking of accreditors based on this data and no threshold standards to orient performance. Accreditation, particularly among regional accreditors, is a private, qualitative matter. There are no bright lines that clearly define acceptable performance.

One path NACIQI might take is to make accreditation more public and quantitative. This path may be tempting for some, but it will be fiendishly difficult to agree on performance thresholds for, say, graduation rates or median earnings once student demographics are factored in. Artificial minimums risk schools gaming the system, as has been alleged in connection with loan default rates. Legal challenges and often obscure school reporting have watered down ED’s gainful employment metrics, an earlier attempt to rein in under-performing schools. Another problem is that the available data on school and accreditor performance largely pertains to first-time, full-time undergraduates, which at many schools excludes the majority of students.

These challenges aside, it’s clear that we are entering a new era of resurgent federal oversight of higher education. ED’s first Educational Quality Through Innovation Partnerships (EQUIP) grants are designed to test the viability of offering federal student aid for alternative credentials offered by new partnerships between for-profit innovators and public/private institutions. Each partnership includes a quality assurance organization, none of which is a conventional accreditor. NACIQI’s dashboards may prove an important step toward a new era of higher education accreditation.