The global market for Internet of Things (IoT) technology, which consists of software, services, connectivity, and devices, reached $130 billion in 2018, and is projected to reach $318 billion by 2023, at a compound annual growth rate (CAGR) of 20 per cent, according to GlobalData.
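The forecast figures above can be sanity-checked with a quick bit of arithmetic. A minimal Python sketch, using only the numbers quoted from GlobalData ($130 billion in 2018, $318 billion in 2023, roughly 20 per cent CAGR):

```python
# Check whether the quoted endpoints are consistent with a ~20% CAGR.
def cagr(start, end, years):
    """Compound annual growth rate implied by a start value, end value, and span."""
    return (end / start) ** (1 / years) - 1

# 5 years of compound 20% growth from the 2018 figure ($ billions)
projected = 130 * 1.20 ** 5

# Growth rate implied by the two quoted endpoints
implied = cagr(130, 318, 5)

print(round(projected, 1))       # ~323.5, close to the stated $318B
print(round(implied * 100, 1))   # ~19.6%, consistent with "20 per cent"
```

The small gap between $323.5B and $318B simply reflects GlobalData rounding the growth rate to a whole number.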

GlobalData forecasts show that solutions for government, utilities, and manufacturing dominate the market, accounting for 58 per cent of the opportunity in 2018 and a slightly smaller 55 per cent in 2023, as other verticals such as travel and leisure and retail grow their respective shares. Energy and transportation are the other major verticals, with a combined 15 per cent of the market in both 2018 and 2023.

Also see:

As digital technology pervades the utility industry, so too does the risk of cyber attacks — from which-50.com by Joseph Brookes. Excerpt:
Smart meters and IoT have the potential to optimise performance and maintenance of the billions of dollars worth of infrastructure in Australian utilities. But each new device creates a potential access point to systems that are not designed with cyber security in mind and, in some cases, are already exposed.

A collective eyebrow was raised by the AI and robotics community when the robot Sophia was granted Saudi citizenship in 2017. The AI sharks were already circling as Sophia’s fame spread with worldwide media attention. Were they just jealous buzz-kills, or is something deeper going on? Sophia has gripped the public imagination with its interesting and fun appearances on TV and on high-profile conference platforms.

Sophia is not the first show robot to attain celebrity status. Yet accusations of hype and deception have proliferated about the misrepresentation of AI to public and policymakers alike. In an AI-hungry world where decisions about the application of the technologies will impact significantly on our lives, Sophia’s creators may have crossed a line. What might the negative consequences be? To get answers, we need to place Sophia in the context of earlier show robots.

A dangerous path for our rights and security

For me, the biggest problem with the hype surrounding Sophia is that we have entered a critical moment in the history of AI where informed decisions need to be made. AI is sweeping through the business world and being delegated decisions that impact significantly on people’s lives: from mortgage and loan applications, to job interviews, to prison sentences and bail guidance, to transport and delivery services, to medicine and care.

It is vitally important that our governments and policymakers are strongly grounded in the reality of AI at this time and are not misled by hype, speculation, and fantasy. It is not clear how much the Hanson Robotics team are aware of the dangers that they are creating by appearing on international platforms with government ministers and policymakers in the audience.

The FCC this week unanimously approved SpaceX’s ambitious plan to launch 7,518 satellites into low-Earth orbit. These satellites, along with 4,425 previously approved satellites, will serve as the backbone for the company’s proposed Starlink broadband network. As it does with most of its projects, SpaceX is thinking big with its global broadband network. The company is expected to spend more than $10 billion to build and launch a constellation of satellites that will provide high-speed internet coverage to just about every corner of the planet.

To put this deployment in perspective, there are only 1,886 active satellites in orbit today. These new SpaceX satellites will increase the number of active satellites six-fold in less than a decade.
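The "six-fold" claim checks out against the figures quoted above. A back-of-envelope sketch in Python, using only the numbers from the article:

```python
# Satellite counts quoted in the article
newly_approved = 7_518        # approved by the FCC this week
previously_approved = 4_425   # approved earlier for Starlink
active_today = 1_886          # satellites currently active in orbit

new_constellation = newly_approved + previously_approved

print(new_constellation)                            # 11943 new satellites
print(round(new_constellation / active_today, 1))   # ~6.3x the current fleet
```

So the full approved constellation alone is roughly 6.3 times today's entire population of active satellites, which is where the six-fold figure comes from.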

From Tesla to Hyperloop to plans to colonize Mars, it’s fair to say that Elon Musk thinks big. Among his many visionary ideas is the dream of building a space internet. Called Starlink, the network would convey a significant portion of internet traffic via the thousands of satellites Musk hopes to have in orbit by the mid-2020s. But just how feasible is such a plan? And how do you keep the satellites from crashing into one another?

From DSC: Is this even the FCC’s call to make?

On one hand, such a network could be globally helpful, positive, and full of pros. But on the other hand, I wonder…what are the potential drawbacks of this proposal? Will nations across the globe launch their own networks, each consisting of thousands of satellites?

While I love Elon’s big thinking, the nations of the world need to weigh in on this one.

The rise of crypto in higher education — from blog.coinbase.com

Coinbase regularly engages with students and universities across the country as part of recruiting efforts. We partnered with Qriously to ask students directly about their thoughts on crypto and blockchain — and in this report, we outline findings on the growing roster of crypto and blockchain courses amid a steady rise in student interest.

Key Findings

42 percent of the world’s top 50 universities now offer at least one course on crypto or blockchain

Students from a range of majors are interested in crypto and blockchain courses — and universities are adding courses across a variety of departments

Original Coinbase research includes a Qriously survey of 675 U.S. students, a comprehensive review of courses at 50 international universities, and interviews with professors and students

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

In a world where machines may have an IQ of 50,000 and the Internet of Things may encompass 500 billion devices, what will happen with those important social contracts, values and ethics that underpin crucial issues such as privacy, anonymity and free will?

My book identifies what I call the “Megashifts”. They are changing society at warp speed, and your organisations are in the eye of the storm: digitisation, mobilisation and screenification, automation, intelligisation, disintermediation, virtualisation and robotisation, to name the most prominent. Megashifts are not simply trends or paradigm shifts; they are complete game changers transforming multiple domains simultaneously.

If the question is no longer about if technology can do something, but why…who decides this?

Gerd Leonhard

From DSC: Though this letter was written two years ago, back in October 2016, the messages, reflections, and questions that Gerd puts on the table are very much still relevant today. The leaders of these powerful companies have enormous power: power to do good, or to do evil. Power to help, or power to hurt. Power to be a positive force for societies throughout the globe and to help create dreams, or power to create dystopian societies while developing a future filled with nightmares. The state of the human heart is extremely key here, though many will hate me for saying that. But it’s true. At the end of the day, we need to care very much about — and be extremely aware of — the characters and values of the leaders of these powerful companies.

Foresight Tools
IFTF has pioneered tools and methods for building foresight ever since its founding days. Co-founder Olaf Helmer was the inventor of the Delphi Method, and early projects developed cross-impact analysis and scenario tools. Today, IFTF is methodologically agnostic, with a brimming toolkit that includes the following favorites…

From DSC: How might higher education use this foresight workflow? How might we better develop a future-oriented mindset?

From my perspective, I think that we need to be pulse-checking a variety of landscapes, looking for those early signals. We need to be thinking about what should be on our radars. Then we need to develop some potential scenarios, along with strategies to deal with those scenarios if they occur. Graphically speaking, here’s an excerpted slide from my introductory piece for an NGLS 2017 panel that we did.

This resource regarding their foresight workflow was mentioned in a recent e-newsletter from the FTI, which also flagged this important item:

Climate change: a megatrend that impacts us all

Excerpt:
Earlier this week, the United Nations’ scientific panel on climate change issued a dire report [PDF]. To say the report is concerning would be a dramatic understatement. Models built by the scientists show that at our current rate, the atmosphere will warm as much as 1.5 degrees Celsius, leading to a dystopian future of food shortages, wildfires, extreme winters, a mass die-off of coral reefs and more –– as soon as 2040. That’s just over 20 years from now.

But China also decided to ban the import of foreign plastic waste –– which includes trash from around the U.S. and Europe. The U.S. alone could wind up with an extra 37 million metric tons of plastic waste, and we don’t have a plan for what to do with it all.

Immediate Futures Scenarios: Year 2019

Optimistic: Climate change is depoliticized. Leaders in the U.S., Brazil and elsewhere decide to be the heroes, and invest resources into developing solutions to our climate problem. We understand that fixing our futures isn’t only about forgoing plastic straws, but about systemic change. Not all solutions require regulation. Businesses and everyday people are incentivized to shift behavior. Smart people spend the next two decades collaborating on plausible solutions.

Pragmatic: Climate change continues to be debated, while extreme weather events cause damage to our power grid, wreak havoc on travel, cause school cancellations, and devastate our farms. The U.S. fails to work on realistic scenarios and strategies to combat the growing problem of climate change. More countries elect far-right leaders, who shun global environmental accords and agreements. By 2029, it’s clear that we’ve waited too long, and that we’re running out of time to adapt.

Catastrophic: A chorus of voices calling climate change a hoax grows ever louder in some of the world’s largest economies, whose leaders choose immediate political gain over longer-term consequences. China builds an environmental coalition of 100 countries within the decade, developing green infrastructure while accumulating debt service. Beijing sets global emissions standards –– and it locks the U.S. out of trading with coalition members. Trash piles up in the U.S., which didn’t plan ahead for waste management. By 2040, our population centers have moved inland and further north, our farms are decimated, and our lives are miserable.

More and more people are learning for themselves – in whatever way suits them best – whether that is finding resources or online courses on the Web, or interacting with their professional network. And they do all this for a variety of reasons: to solve problems, to self-improve, and to prepare themselves for the future.

Learning at work is becoming more personal and continuous, in that it is a key part of many professionals’ working days. What’s more, people are not only organising their own learning activities; they are also managing their own development – either with (informal) digital notebooks, or with (formal) personal learning platforms.

But it is in team collaboration that most of their daily learning takes place, and many now recognise and value the social collaboration platforms that underpin their daily interactions with colleagues as part of their work.

In other words, many people now see workplace learning as not just something that happens irregularly in corporate training, but as a continuous and on demand activity.

From DSC: Reminds me of tapping into — and contributing towards — streams of content. All the time. Continuous, lifelong learning.

[On 9/24/18], I released the Top Tools for Learning 2018, which I compiled from the results of the 12th Annual Digital Learning Tools Survey.

I have also categorised the tools into 30 different areas, and produced 3 sub-lists that provide some context to how the tools are being used:

Top 100 Tools for Personal & Professional Learning 2018 (PPL100): the digital tools used by individuals for their own self-improvement, learning and development – both inside and outside the workplace.

Top 100 Tools for Workplace Learning (WPL100): the digital tools used to design, deliver, enable and/or support learning in the workplace.

Top 100 Tools for Education (EDU100): the digital tools used by educators and students in schools, colleges, universities, adult education etc.

3 – Web courses are increasing in popularity. Although Coursera is still the most popular web course platform, there are, in fact, now 12 web course platforms on the list. New additions this year include Udacity and Highbrow (the latter provides daily micro-lessons). It is clear that people like these platforms because they can choose what they want to study as well as how they want to study, i.e. they can dip in and out if they want to and no one is going to tell them off – unlike most corporate online courses, which have a prescribed path through them and whose use is heavily monitored.

5 – Learning at work is becoming personal and continuous. The most significant feature of the list this year is the huge leap that Degreed has made – up 86 places to 47th, the biggest increase by any tool this year. Degreed is a lifelong learning platform that gives individuals the opportunity to own their expertise and development through a continuous learning approach. And, interestingly, Degreed appears on both the PPL100 (at 30) and the WPL100 (at 52). This suggests that some organisations are beginning to see the importance of personal, continuous learning at work. Indeed, another platform that underpins this has also moved up the list significantly this year: Anders Pink, a smart curation platform available for both individuals and teams, which delivers daily curated resources on specified topics. Non-traditional learning platforms are therefore coming to the forefront, as the next point further shows.

From DSC: Perhaps some foreshadowing of the presence of a powerful, online-based, next generation learning platform…?

Fei Fang has saved lives. But she isn’t a lifeguard, medical doctor, or superhero. She’s an assistant professor at Carnegie Mellon University, specializing in artificial intelligence for societal challenges.

At MIT Technology Review’s EmTech conference on Wednesday, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.

Invisibility and Influence
AI supports services, platforms, and devices that are ubiquitous and used on a daily basis. In 2017, the International Federation of Robotics suggested that by 2020, more than 1.7 million new AI-powered robots would be installed in factories worldwide. In the same year, the company Juniper Networks issued a report estimating that, by 2022, 55% of households worldwide will have a voice assistant like Amazon Alexa.

As it matures and disseminates, AI blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way. While creating new opportunities, this invisible integration of AI into our environments poses further ethical issues. Some are domain-dependent. For example, trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace. But the integration of AI also poses another fundamental risk: the erosion of human self-determination due to the invisibility and influencing power of AI.

…

To deal with the risks posed by AI, it is imperative to identify the right set of fundamental ethical principles to inform the design, regulation, and use of AI and leverage it to benefit as well as respect individuals and societies. It is not an easy task, as ethical principles may vary depending on cultural contexts and the domain of analysis. This is a problem that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems tackles with the aim of advancing public debate on the values and principles that should underpin ethical uses of AI.

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.

“We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said. “I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do. This is now practical.”

…

Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.

“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee. “Googlers want less alignment with the military-industrial complex, not more. This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”

Last year Amazon announced a new feature for its Amazon Web Services called Amazon Sumerian, a platform that allows anyone to create full-featured virtual reality experiences. The Royal Melbourne Institute of Technology (RMIT) has now announced that it will be offering short courses in artificial intelligence (AI), augmented reality (AR), and VR thanks to a partnership with Amazon Web Services.

The new partnership between RMIT and Amazon Web Services was announced at the AWS Public Sector Summit in Canberra, Australia on Wednesday. The new courses will use the Amazon Sumerian platform.

The newly launched courses include Developing AI Strategy, Developing AR and VR Strategy, and Developing AR and VR Applications. All of these have been adapted from the AWS Educate program, which was created in response to the changing nature of the workplace and the growing relevance of immersive technology.

My team’s mission is to build a community of lifelong learners, successfully navigating the world of work … yes, sometimes your degree is the right solution, education-wise, for a person, but throughout our lives — certainly I know in my digital career — we constantly need to be updating our skills, understanding new and emerging technology, and talking with experts.”