A new dataset promises to give college leaders, company officials and others involved in the online learning landscape much more information about who offers what programs, how they manage them and where the money is flowing, among other factors.

And the company behind the new data, Holon IQ, published a report today that gives a new name to the large and diversifying category of providers that are working with colleges to take their programs online: OPX, instead of OPM, for online program management companies. (More on that later.)

Also see:

Multi-Faculty Collaboration to Design Online General Studies Courses — from facultyfocus.com by B. Jean Mandernach
Excerpt:
While this type of autonomy in course design makes sense for the face-to-face classroom, it may be less practical — and less effective — in the context of online education. Simply put, development of a high-quality online course takes considerable time and advanced knowledge of online pedagogy. If multiple faculty members are teaching the same course online (as is often the case with general studies or other high-demand courses), it is not an efficient use of departmental time, resources, or budget to have multiple faculty developing their own online classroom for different sections of the same course.

After the United Nations declared 2019 the International Year of Indigenous Languages, Google decided to help draw attention to the indigenous languages spoken around the globe and perhaps help preserve some of the endangered ones too. To that end, the company recently launched its first audio-driven collection, a new Google Earth tour complete with audio recordings from more than 50 indigenous language speakers from around the world.

A New Way Forward: CAEL Association Update (August 2019) — from evolllution.com by Marie Cini | President, CAEL
As the labor market continues to evolve, CAEL will play a critical role in establishing a collaborative ecosystem linking learners, employers and postsecondary institutions.

Excerpt:

I’m delighted to announce a new partnership between CAEL and The EvoLLLution to deliver timely information on the latest advances related to serving adult working learners. When you consider the rapidly changing nature of the work our members face, it’s hard to imagine a more aptly named organization to collaborate with!

This partnership will provide CAEL members with fresh thinking twice a month in the form of a brief digital newsletter. The focus will be on lifelong learning and transforming traditional structures to better meet the needs of today’s working learners in communities, across industries, inside all postsecondary institutions.

Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.

Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.

Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.

Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts especially given the enormous lack of diversity within the large tech companies.)

Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.

Portability: We have the right to easily transfer all our data from one provider to another.

Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.

Algorithmic literacy: We have the right to free educational resources about algorithmic systems.

Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.

Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

Learning in the flow of life. The number-one trend for 2019 is the need for organizations to change the way people learn; 86 percent of respondents cited this as an important or very important issue. It’s not hard to understand why. Evolving work demands and skills requirements are creating an enormous demand for new skills and capabilities, while a tight labor market is making it challenging for organizations to hire people from outside. Within this context, we see three broader trends in how learning is evolving: It is becoming more integrated with work; it is becoming more personal; and it is shifting—slowly—toward lifelong models. Effective reinvention along these lines requires a culture that supports continuous learning, incentives that motivate people to take advantage of learning opportunities, and a focus on helping individuals identify and develop new, needed skills.

“I’m not worried about the moral issue here,” said Gordon Caplan, the co-chair of AmLaw 100 law firm Willkie Farr, according to transcripts of wiretaps in the college admissions scandal that you’re already starting to forget about. Mr. Caplan was concerned that if his daughter “was caught … she’d be finished,” and that her faked ACT score should not be set “too high,” lest it not be credible. Beyond that, all we know from the transcripts about Mr. Caplan’s ethical qualms is that “to be honest, it feels a little weird. But.”

That’s the line that stays with me, right through the “But” at the end. I want to tell you why, and I especially want to tell you if you’re a law student or a new lawyer, because it is extraordinarily important that you understand what’s going on here.

…

So why does any of this matter to lawyers, especially to young lawyers? Because of that one line I quoted.

“I mean this is, to be honest, it feels a little weird. But.”

Do you recognize that sound? That’s the sound of a person’s conscience, a lawyer’s conscience, struggling to make its voice heard.

This one apparently can’t muster much more than a twinge of doubt, a feeling of discomfort, a nagging sense of this isn’t right and I shouldn’t be doing it. It lasts for only a second, though, because the next word fatally undermines it. But. Yeah, I know, at some fundamental level, this is wrong. But.

It doesn’t matter what rationalization or justification follows the But, because at this point, it’s all over. The battle has been abandoned. If the next word out of his mouth had been So or Therefore, Mr. Caplan’s life would have gone in a very different direction.

From DSC: Sorry, but while the video/robot is incredible, a feeling in the pit of my stomach makes me reflect upon what’s likely happening along these lines in militaries throughout the globe… I don’t mean to be a fearmonger, but rather a realist.

As the Fourth Industrial Revolution impacts skills, tasks and jobs, there is growing concern that both job displacement and talent shortages will impact business dynamism and societal cohesion. A proactive and strategic effort is needed on the part of all relevant stakeholders to manage reskilling and upskilling to mitigate against both job losses and talent shortages.

Through the Preparing for the Future of Work project, the World Economic Forum provides a platform for designing and implementing intra-industry collaboration on the future of work, working closely with the public sector, unions and educators. The output of the project’s first phase of work, Towards a Reskilling Revolution: A Future of Jobs for All, highlighted an innovative method to identify viable and desirable job transition pathways for disrupted workers. This second report, Towards a Reskilling Revolution: Industry-Led Action for the Future of Work extends our previous research to assess the business case for reskilling and establish its magnitude for different stakeholders. It also outlines a roadmap for selected industries to address specific challenges and opportunities related to the transformation of their workforce.

What is 5G?
5G networks are the next generation of mobile internet connectivity, offering faster speeds and more reliable connections on smartphones and other devices than ever before.

Combining cutting-edge network technology and the very latest research, 5G should offer connections many times faster than current ones, with average download speeds of around 1Gbps expected to soon be the norm.

The networks will help power a huge rise in Internet of Things technology, providing the infrastructure needed to carry huge amounts of data, allowing for a smarter and more connected world.

With development well underway, 5G networks are expected to launch across the world by 2020, working alongside existing 3G and 4G technology to provide speedier connections that stay online no matter where you are.

So with only a matter of months to go until 5G networks are set to go live, here’s our run-down of all the latest news and updates.
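To put those speeds in perspective, here is a back-of-the-envelope sketch. The ~1Gbps figure is the 5G average cited above; the ~20 Mbps figure for a typical 4G connection is an illustrative assumption, not a number from the article:

```python
# Rough download-time comparison for a 5 GB file (roughly an HD movie).
# The 4G speed (~20 Mbps) is an assumed ballpark figure for illustration;
# the ~1 Gbps speed is the 5G average mentioned in the excerpt above.

def download_seconds(file_gigabytes: float, speed_mbps: float) -> float:
    """Seconds to transfer file_gigabytes at speed_mbps (megabits per second)."""
    file_megabits = file_gigabytes * 8 * 1000  # 1 GB = 8000 megabits
    return file_megabits / speed_mbps

FILE_GB = 5.0
for label, mbps in [("4G (~20 Mbps)", 20), ("5G (~1 Gbps)", 1000)]:
    print(f"{label}: {download_seconds(FILE_GB, mbps):.0f} seconds")
```

Under those assumptions, the same 5 GB download drops from roughly half an hour on 4G to well under a minute on 5G, which is the scale of change behind the IoT and XR questions that follow.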

From DSC: I wonder…

What will Human-Computer Interaction (HCI) look like when ~1Gbps average download speeds are the norm?

What will the Internet of Things (IoT) turn into (for better or for worse)?

How will Machine-to-Machine (M2M) Communications be impacted?

What will that kind of bandwidth mean for XR-related technologies (AR, VR, MR)?

Although we have only seen the beginning, one thing is already clear: the Fourth Industrial Revolution is the greatest transformation human civilization has ever known. As far-reaching as the previous industrial revolutions were, they never set free such enormous transformative power.

The Fourth Industrial Revolution is transforming practically every human activity… its scope, speed and reach are unprecedented. … Enormous power (Insert from DSC: What I was trying to get at here) entails enormous risk. Yes, the stakes are high.

“And make no mistake about it: we are now writing the code that will shape our collective future.” — Joe Kaeser, CEO of Siemens AG

…

Contrary to Milton Friedman’s maxim, the business of business should not just be business. Shareholder value alone should not be the yardstick. Instead, we should make stakeholder value, or better yet, social value, the benchmark for a company’s performance.

Today, stakeholders…rightfully expect companies to assume greater social responsibility, for example, by protecting the climate, fighting for social justice, aiding refugees, and training and educating workers. The business of business should be to create value for society.

…

This seamless integration of the virtual and the physical worlds in so-called cyber-physical systems — that is the giant leap we see today. It eclipses everything that has happened in industry so far. As in previous industrial revolutions, but on a much larger scale, the Fourth Industrial Revolution will eliminate millions of jobs and create millions of new jobs.

“…because the Fourth Industrial Revolution runs on knowledge, we need a concurrent revolution in training and education.
…
If the workforce doesn’t keep up with advances in knowledge throughout their lives, how will the millions of new jobs be filled?”

Joe Kaeser, President and Chief Executive Officer, Siemens AG

From DSC: At least three critically important things jump out at me here:

We are quickly approaching a time when people will need to be able to reinvent themselves quickly and cost-effectively — especially those with families who are working in their (still existing) jobs. (Or have we already entered this period of time…?)

There is a need to help people identify which jobs are safe to reinvent themselves into — at least for the next 5-10 years.

Citizens across the globe — and their relevant legislatures, governments, and law schools — need to help close the gap between emerging technologies and whether those technologies should even be rolled out, and if so, how and with which features.

Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.

Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.

EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem and addresses the following key issues:

The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected

The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression

Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures

Unregulated and unmonitored forms of AI experimentation on human populations

The limits of technological solutions to problems of fairness, bias, and discrimination

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.

From DSC: As I said in this posting, we need to be aware of the emerging technologies around us. Just because we can doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that’s beginning to alarm me these days is how the United States has turned over the keys to the Maserati — i.e., think an expensive, powerful thing — to youth who lack the life experiences to know how to handle such power and, often, the proper respect for such power. Many of these youthful members of our society don’t own the responsibility for the positive and negative influences and impacts that such powerful technologies can have (and the more senior execs have not taken enough responsibility either)!

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16-25 year old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

The corporate world continues to discard the hard-earned experience that age brings… as it shoves older people out of the workforce. (I hesitate to use the word wisdom… but in some cases, that’s also relevant/involved here.) Then we, as a society, sit back and wonder: how did we get to this place?

Even technologists and programmers in their 20s and 30s are beginning to step back and ask… WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, etc., and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — a perspective that is often completely incorrect. Go check out LinkedIn for items re: age discrimination… it’s a very real thing. And many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a job that involves technology.

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.
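Those figures imply a simple false-match rate. A quick sketch — assuming the test set was the full 535-member Congress (House plus Senate), a number the excerpt above doesn’t state explicitly:

```python
# False-match rate implied by the ACLU's Rekognition test: 28 members of
# Congress mistakenly matched against 25,000 arrest mugshots.
# The 535 figure (435 House + 100 Senate) is an assumption about the test
# population; the excerpt only gives the 28 mistaken matches.
MEMBERS_SCANNED = 535
FALSE_MATCHES = 28

false_match_rate = FALSE_MATCHES / MEMBERS_SCANNED
print(f"False-match rate: {false_match_rate:.1%}")
```

Under that assumption, roughly one in twenty members of Congress was wrongly flagged as an arrest-photo match — the kind of error rate that animates the calls for regulation quoted below from Microsoft’s Brad Smith.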

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”