The well-known adage “people are our most important asset” holds more weight than ever today. Not only do companies and higher education institutions (HEIs) need to train people for jobs that currently exist, they also need to prepare them for jobs that don’t exist yet.

Cognizant has released its Future of Learning report, based on a global survey of 601 senior business executives at leading companies and 262 higher education institutions, to uncover detailed insights into the changes these organizations are making to their training and educational programs and the challenges they face in preparing tomorrow’s workforce.

The research reveals trends which will soon ripple throughout businesses and across the higher education industry.

Preparing the workforce for future jobs is a matter of survival for both businesses and HEIs

The majority of businesses (80%) and HEIs (72%) globally agree it’s extremely important to prepare workers and students to work alongside emerging digital technologies. They have a mammoth task ahead, though: businesses and HEIs in Singapore estimate that 60% of their staff and 55% of their students, respectively, will need to be prepared to handle new types of work driven by emerging digital technologies in the next five years. However, a whopping 82% of Singaporean HEIs and 63% of businesses are presently unable to deliver that preparation.

Businesses are beginning to bear the burden of learning

Skills have become like mobile apps that need frequent upgrades. Yet while 45% of businesses in Singapore currently update their learning content annually or biannually, 67% of HEIs update their curriculum only every two to six years.

Globally, businesses intend to accelerate the pace of curriculum updates, with 75% planning to move to a one- to five-month or even continuous refresh schedule in the next five years. In contrast, only 30% of HEIs plan to increase update frequency, from today’s two- to six-year cycle to an annual one by 2023.

The work ahead means working together

Preparing the current and future workforce for the work ahead cannot take place in a vacuum. Three-quarters of both businesses and HEIs globally view collaboration as critical to successfully managing the transformative and disruptive impact of the new machine age.

Singapore businesses appear less keen on such collaboration than educators are: only 55% of businesses see collaboration as critical, compared with 75% of HEIs.

Emerging technologies such as AR/VR and AI will supercharge learning by focusing on “how to learn” over “what to learn”

New modes of education delivery will emerge, with Netflix-style, on-demand digital assets allowing for anytime, anywhere self-learning. AI-driven learning platforms will personalize learning, and augmented/virtual reality (AR/VR) systems will become mainstream, with a 220% increase in take-up of the technology by HEIs and businesses globally in the next five years.

Based on these insights, Cognizant has developed an industry solution for businesses and higher education institutions, which it calls a ‘Future of Learning equation’. The equation requires the following elements of change:

Identifying skills more accurately, to align with actual workplace needs.

Overhauling curriculum and training to be more immersive and personalised.

Providing an environment supportive of self-learning, with access to multiple content sources such as open educational resources.

Ultimately, the speed at which these elements are executed will determine how effective they are in preparing an aptly skilled workforce. Focusing on these areas will enable business leaders and educators to better navigate the rocky path of digitalisation and manage change successfully. In the face of an unknown future, businesses and HEIs will need to engage in more flexible partnerships, quicker responses, different modes of delivery and new combined-skill programs to reliably prepare people for what comes next – and to remain competitive amidst the transformations and disruptions of our new machine age.

The researchers make use of various enzyme properties to predict kinetic parameters through machine learning. This allows the improvement of metabolic models and a better understanding of metabolism. Credit: David Heckmann

Bioinformatics researchers at Heinrich Heine University Düsseldorf (HHU) and the University of California at San Diego (UCSD) are using machine learning techniques to better understand enzyme kinetics and thus also complex metabolic processes. The team led by first author Dr. David Heckmann has described its results in the current issue of the journal Nature Communications.

The synthetic life sciences rely on a detailed and quantitative understanding of the complex systems in biological cells; only when such systems are understood can they be manipulated in a targeted way. A system that is already well known is biological metabolism, in which many hundreds of enzymes are involved. However, a key aspect in this area, namely the individual activity of each enzyme, is still only insufficiently understood in quantitative terms.

Together with his colleagues in California and Düsseldorf, Dr. David Heckmann, now in San Diego and formerly a doctoral researcher under Professor Martin Lercher at HHU’s Institute for Computational Cell Biology, chose a bioinformatics approach to get to the bottom of the properties of enzymes. For this, the researchers are using machine learning, a subfield of artificial intelligence (AI) that is already used successfully in other areas such as traffic management and automated translation. Machine learning algorithms have in recent years defeated their human opponents at chess, Go, and even poker.

Their approach allowed the researchers to identify important enzyme properties that are the deciding factors in enzyme activity. With these results, they can describe the kinetics of a large number of enzymes far more accurately than was possible with previous methods.
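To make the approach concrete, here is a minimal sketch of the kind of supervised pipeline the article describes: predicting an enzyme’s turnover number from a handful of enzyme properties. The feature names, file path, and model choice are illustrative assumptions, not the study’s actual dataset or method.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per enzyme, with its measured turnover number.
data = pd.read_csv("enzyme_features.csv")  # placeholder path
features = ["molecular_weight", "active_site_depth", "substrate_logP",
            "pathway_flux", "expression_level"]  # assumed feature set
X = data[features].to_numpy()
y = np.log10(data["kcat"].to_numpy())  # kcat spans orders of magnitude

model = RandomForestRegressor(n_estimators=500, random_state=0)
# Cross-validated R^2 indicates how much of the variation in enzyme activity
# the chosen properties explain.
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())

model.fit(X, y)
# Feature importances point to the enzyme properties that act as the
# deciding factors for activity.
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```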

Just two years ago, language-learning startup Squline launched an Indonesian-language course targeting expats in Indonesia and overseas markets.

It works by connecting local students with professional teachers based in China, Japan, the Philippines, and Indonesia to learn and improve foreign-language skills via live video calls and text conversations, anywhere and anytime.

This year, Squline nabbed the title of Next Dev Evangelist 2018 in a programme run by local mobile operator Telkomsel, which allowed the startup to represent Indonesia at the Future Makers event in Sydney the same year.

“The product innovation that we have for 2019 will focus more on an affordable solution and an effective way to learn language online. This will also drive market expansion to Indonesia’s B and C level of audience and upgrade their competitiveness level,” said Squline co-founder and CEO Tomy Yunus.

With very little explicit supervision and feedback, humans are able to learn a wide range of motor skills simply by interacting with and observing the world through their senses. While there has been significant progress towards building machines that can learn complex skills from raw sensory information such as image pixels, acquiring large and diverse repertoires of general skills remains an open challenge. Our goal is to build a generalist: a robot that can perform many different tasks, like arranging objects, picking up toys, and folding towels, and can do so with many different objects in the real world without re-learning for each object or task.

While these basic motor skills are much simpler and less impressive than mastering Chess or even using a spatula, we think that being able to achieve such generality with a single model is a fundamental aspect of intelligence.

The key to acquiring generality is diversity. If you deploy a learning algorithm in a narrow, closed-world environment, the agent will recover skills that are successful only in a narrow range of settings. That’s why an algorithm trained to play Breakout will struggle when anything about the images or the game changes. Indeed, the success of image classifiers relies on large, diverse datasets like ImageNet. However, having a robot autonomously learn from large and diverse datasets is quite challenging. While collecting diverse sensory data is relatively straightforward, it is simply not practical for a person to annotate all of the robot’s experiences. It is more scalable to collect completely unlabeled experiences. Then, given only sensory data, akin to what humans have, what can you learn? With raw sensory data there is no notion of progress, reward, or success. Unlike games like Breakout, the real world doesn’t give us a score or extra lives.

We have developed an algorithm that can learn a general-purpose predictive model using unlabeled sensory experiences, and then use this single model to perform a wide range of tasks.

With a single model, our approach can perform a wide range of tasks, including lifting objects, folding shorts, placing an apple onto a plate, rearranging objects, and covering a fork with a towel.

In this post, we will describe how this works. We will discuss how we can learn based on only raw sensory interaction data (i.e. image pixels, without requiring object detectors or hand-engineered perception components). We will show how we can use what was learned to accomplish many different user-specified tasks. And, we will demonstrate how this approach can control a real robot from raw pixels, performing tasks and interacting with objects that the robot has never seen before.

Learning to Predict from Unsupervised Interaction

We first need a means to collect diverse data. If we train the robot to perform a single skill with a single object instance, e.g. using a particular hammer to hit a particular nail, then it will only learn about that narrow setting; that particular hammer and nail is its entire universe. How can we build robots that learn more general skills? Instead of learning a single task in a narrow environment, we can have robots learn on their own, in diverse environments, akin to a child playing and exploring.

If a robot can collect data on its own and learn from that experience completely autonomously, then it doesn’t require a person to supervise and can hence collect experience and learn about the world at any time of day, even overnight! Further, multiple robots can collect data simultaneously and share their experiences – data collection is scalable, making it practical to collect diverse data with many objects and motions. To implement this, we had two robots collect data in parallel by taking random actions with a wide range of objects, both rigid objects like toys and cups, and deformable objects like cloth and towels:

Two robots interact with the world, collecting data autonomously with many objects and many motions.

In the data collection process, we observe what the robot’s sensors measure: the image pixels (vision), the position of the arm (proprioception), and the motor commands sent to the robot (action). We cannot directly measure the positions of the objects, how they react to being pushed, their speed, etc. Further, in this data, there is no notion of progress or success. Unlike a game of Breakout or hammering a nail, we don’t get a score or an objective. All we have to learn from, when interacting in the real world, is what is provided by our senses, or in this case, the robot’s sensors.

So, what can we learn, when only given our senses? We can learn to predict — what will the world look like, or feel like, if the robot moves its arm in one way versus another?

The robot learns to predict what the future will look like if it moves its arm in different ways, learning about physics, objects, and itself.

Prediction allows us to learn general things about the world, things like objects and physics. And such general-purpose knowledge is exactly what the Breakout-playing agent is missing. Prediction also allows us to learn from all of the data that we have: a stream of actions and images contains a lot of implicit supervision. This is critical because we don’t have a score or reward function. Model-free reinforcement learning systems typically learn only from the supervision provided by the reward function, whereas model-based RL agents utilize the rich information available in the pixels they observe. Now, how do we actually use these predictions? We will discuss this next.
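To make the prediction step itself concrete, below is a minimal sketch of an action-conditioned next-frame predictor. The toy architecture, the 64x64 image size, and the `loader` of logged (frame, action, next frame) triples are assumptions for illustration, not the model actually used in this work; the point is that the only training signal is the next observed image.

```python
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    """Toy model: current 3x64x64 frame + action -> predicted next frame."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(  # 3x64x64 -> 64x16x16
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.action_fc = nn.Linear(action_dim, 64 * 16 * 16)
        self.decoder = nn.Sequential(  # 64x16x16 -> 3x64x64
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, frame, action):
        h = self.encoder(frame)
        a = self.action_fc(action).view(-1, 64, 16, 16)
        return self.decoder(h + a)  # fuse the action into the image features

model = NextFramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# The only supervision is the next observed image: no rewards, no labels.
for frame, action, next_frame in loader:  # `loader` assumed: logged robot data
    loss = nn.functional.mse_loss(model(frame, action), next_frame)
    opt.zero_grad()
    loss.backward()
    opt.step()
```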

Planning to Perform Human-Specified Tasks

If we have a predictive model of the world, then we can use it to plan to achieve goals. That is, if we understand the consequences of our actions, then we can use that understanding to choose actions that lead to the desired outcome. We use a sampling-based procedure to plan. In particular, we sample many different candidate action sequences, then select the top plans—the actions that are most likely to lead to the desired outcome—and refine our plan iteratively, by resampling from a distribution of actions fitted to the top candidate action sequences. Once we come up with a plan that we like, we then execute the first step of our plan in the real world, observe the next image, and then replan in case something unexpected happened.
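In code, that sampling-based loop can be sketched roughly as follows, in the style of the cross-entropy method. The `predict` model rollout and the `cost` function are assumed to exist, and the hyperparameters are placeholders rather than the values used in this work.

```python
import numpy as np

def plan(predict, cost, horizon=10, action_dim=4,
         n_samples=200, n_elite=20, n_iters=3):
    # Start from a broad distribution over action sequences.
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(n_iters):
        # Sample many candidate action sequences.
        samples = mean + std * np.random.randn(n_samples, horizon, action_dim)
        # Score each candidate by rolling it through the learned model.
        costs = np.array([cost(predict(seq)) for seq in samples])
        # Keep the top plans and refit the sampling distribution to them.
        elite = samples[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # execute only the first action, observe, then replan
```

Executing only the first action and then replanning, as described above, is what keeps the controller robust when the model’s predictions are imperfect.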

A natural question now is—how can a user specify a goal or desired outcome to the robot? We have experimented with a number of different ways to do so. One of the easiest mechanisms that we have found is to simply click on a pixel in the initial image and specify where the object corresponding to that pixel should be moved, by clicking another pixel position. We can also give more than one pair of pixels to specify other desired object motions. While there are types of goals that cannot be expressed in this way (and we have explored more versatile goal specifications, such as goal classifiers), we have found that specifying pixel positions can describe a wide variety of tasks and is remarkably easy to provide. To be clear, these user-provided goal specifications are not used during data collection, when the robot is interacting with the world—they are only used at test time, when we want the robot to use its predictive model to accomplish a certain goal.
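Assuming the predictive model can estimate where a user-designated pixel will move under a candidate plan, the cost used to score plans can be sketched like this. The time-weighting below is an illustrative choice, not the exact objective used in the paper.

```python
import numpy as np

def pixel_goal_cost(predicted_positions, goal_pixel):
    """predicted_positions: (horizon, 2) array of the designated pixel's
    expected (row, col) at each future step; goal_pixel: (2,) target."""
    dists = np.linalg.norm(predicted_positions - np.asarray(goal_pixel), axis=1)
    # Weight later steps most heavily so the object finishes at the goal.
    weights = np.linspace(0.1, 1.0, num=len(dists))
    return float((weights * dists).sum())

# Several (source, goal) pixel pairs can simply be summed:
# total = sum(pixel_goal_cost(pred[i], goals[i]) for i in range(n_pairs))
```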

Experiments

We experiment with this overall approach on a Sawyer robot, collecting 2 weeks of unsupervised experience. Critically, the only human involvement during training is providing a diverse range of objects for the robot to interact with (swapping out objects periodically) and coding the random robot motions that are used to collect data. This allows us to collect data on multiple robots nearly 24 hours a day, with very little effort. We train a single action-conditioned video prediction model on all of this data, including two camera viewpoints, and use the iterative planning procedure described previously to plan and execute on user-specified tasks.

Since we set out to achieve generality, we evaluate the same predictive model on a wide range of tasks involving objects that the robot has never seen before and goals the robot has not encountered previously.

For example, we ask the robot to fold shorts:

Left: The goal is to fold the left side of the shorts. Middle: the robot’s prediction corresponding to its plan. Right: the robot performs its plan.

Or put an apple on a plate:

Left: The goal is to put the apple on the plate. Middle: the robot’s prediction corresponding to its plan. Right: the robot performs its plan.

Finally, we can also ask the robot to cover a spoon with a towel:

Left: The goal is to cover the spoon with the towel. Middle: the robot’s prediction corresponding to its plan. Right: the robot performs its plan.

Interestingly, we find that, even though the model’s predictions are far from perfect, it can still use them to effectively accomplish the specified goal.

There have been many prior works on model-based reinforcement learning (RL), i.e. learning a predictive model and then using it to act or to learn a policy. Many of these prior works have focused on settings where the positions of objects or other task-relevant information can be accessed directly—rather than through images or other raw sensor observations. Having this low-dimensional state representation is a strong assumption that is often impossible to fulfill in the real world. Model-based RL methods that operate directly on raw image frames have not been studied as extensively. Several algorithms have been proposed for simple, synthetic images and video game environments, focusing on a fixed set of objects and tasks. Other work has studied model-based RL in the real world, again focusing on individual skills.

A number of recent works have studied self-supervised robotic learning, where large-scale unattended data collection is used to learn individual skills such as grasping (e.g. see these works), push-grasp synergies, or obstacle avoidance. Our approach is also fully self-supervised; in contrast with these approaches, we learn a predictive model that is goal-agnostic and can be used to perform a variety of manipulation skills.

Discussion

Generalization to many distinct tasks in visually diverse settings is arguably one of the biggest challenges in reinforcement learning and robotics research today. Deep learning has greatly reduced the amount of task-specific engineering needed to deploy an algorithm; however, prior methods typically require extensive amounts of supervised experience or focus on mastery of individual tasks. Our results suggest that our approach can generalize to a wide range of tasks and objects, including those never seen previously. The generality of the model is the result of large-scale self-supervised learning from interaction. We believe the results represent a significant step forward in terms of generality of tasks achieved by a single robotic reinforcement learning system.

A team of researchers at Yale University has recently developed a robotic system capable of representing, learning and inferring ownership relations and norms. Their study, pre-published on arXiv, addresses some of the complex challenges associated with teaching robots social norms and how to conform with them.

As robots become more prevalent, it is important for them to be able to communicate with humans both effectively and appropriately. A key aspect of human interactions is understanding and behaving according to social and moral norms, as this promotes positive co-existence with others.

Ownership norms are a set of social norms that help people navigate shared environments in ways that are more considerate towards others. Teaching these norms to robots could enhance their interactions with humans, allowing them to distinguish between un-owned tools and owned tools that are temporarily shared with them.

“My research lab focuses on building robots that are easy for people to interact with,” Brian Scassellati, one of the researchers who carried out the study, told TechXplore. “Part of that work is looking at how we can teach machines about common social concepts, things that are essential to us as humans but that are not always the topics that attract the most attention. Understanding about object ownerships, permissions, and customs is one of these topics that hasn’t really received much attention but will be critical to the way that machines operate in our homes, schools, and offices.”

In the approach devised by Scassellati, Xuan Tan and Jake Brawer, ownership is represented as a graph of probabilistic relations between objects and their owners. This is combined with a database of predicate-based norms, which constrain the actions the robot is allowed to perform on owned objects.

“One of the challenges in this work is that some of the ways that we learn about ownership are through being told explicit rules (e.g., ‘don’t take my tools’) and others are learned through experience,” Scassellati said. “Combining these two types of learning may be easy for people, but is much more challenging for robots.”

The system devised by the researchers combines a new incremental norm-learning algorithm that is capable of both one-shot learning and induction from examples, with Bayesian inference of ownership relations in response to apparent rule violations and percept-based prediction of an object’s likely owners. Together, these components allow the system to learn ownership norms and relations applicable in a variety of situations.
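A drastically simplified sketch of such a representation is below: ownership held as probabilistic object-owner relations, a predicate-based norm that forbids certain actions, and a Bayesian-style update when an apparent violation (here, a protest) is observed. The rules, numbers, and names are illustrative assumptions, not the Yale system’s code.

```python
# Probabilistic ownership relations: (object, owner) -> belief.
ownership = {("hammer", "alice"): 0.9, ("hammer", "nobody"): 0.1}

def forbidden(action, obj, actor):
    # Predicate-based norm: "don't take objects that probably belong to
    # someone else."
    if action == "take":
        return any(p > 0.5 for (o, owner), p in ownership.items()
                   if o == obj and owner not in (actor, "nobody"))
    return False

def observe_protest(obj, claimant):
    # Bayesian-style update after an apparent rule violation: a protest is
    # evidence that `claimant` owns `obj`. Likelihoods are assumed values.
    prior = ownership.get((obj, claimant), 0.5)
    like_owner, like_not = 0.9, 0.1
    posterior = like_owner * prior / (
        like_owner * prior + like_not * (1 - prior))
    ownership[(obj, claimant)] = posterior

print(forbidden("take", "hammer", "robot"))  # True: the hammer is likely Alice's
```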

“The key to the work that Xuan and Jake did was to combine two different kinds of machine learning representation, one that learns from these explicit, symbolic rules and one that learns from small bits of experience,” Scassellati explained. “Making these two systems work together is both what makes this challenging, and in the end, what made this successful.”

The researchers evaluated the performance of their robotic system in a series of simulated and real-world experiments. They found that it could effectively complete object manipulation tasks that required a variety of ownership norms to be followed, with remarkable competence and flexibility.

The study carried out by Scassellati and his colleagues offers a notable example of how robots can be trained to infer and respect social norms. Further research could apply similar constructs to other norm-related capabilities and address complex situations in which different norms or goals are in conflict with one another.

“We’re continuing to look at how to build robots that interact more naturally with people, and this study merely focuses on one aspect of this work,” Scassellati said.

Knowing which Americans have installed solar panels on their roofs, and why they did so, would be enormously useful for managing the changing U.S. electricity system and for understanding the barriers to greater use of renewable resources. But until now, essentially all that has been available are estimates.

To get accurate numbers, Stanford University scientists analyzed more than a billion high-resolution satellite images with a machine learning algorithm and identified nearly every solar power installation in the contiguous 48 states. The results are described in a paper published in the Dec. 19 issue of Joule. The data are publicly available on the project’s website.

The analysis found 1.47 million installations, which is a much higher figure than either of the two widely recognized estimates. The scientists also integrated U.S. Census and other data with their solar catalog to identify factors leading to solar power adoption.

“We can use recent advances in machine learning to know where all these assets are, which has been a huge question, and generate insights about where the grid is going and how we can help get it to a more beneficial place,” said Ram Rajagopal, associate professor of civil and environmental engineering, who supervised the project with Arun Majumdar, professor of mechanical engineering.

Who goes solar

The group’s data could be useful to utilities, regulators, solar panel marketers and others. Knowing how many solar panels are in a neighborhood can help a local electric utility balance supply and demand, the key to reliability. The inventory highlights activators and impediments to solar deployment. For example, the researchers found that household income is very important, but only to a point. Above $150,000 a year, income quickly ceases to play much of a role in people’s decisions.

On the other hand, low- and medium-income households often do not install solar systems even when they live in areas where doing so would be profitable in the long term. For example, in areas with a lot of sunshine and relatively high electricity rates, utility bill savings would exceed the monthly cost of the equipment. The impediment for low- and medium-income households, the authors suspect, is the upfront cost. This finding suggests that solar installers could develop new financial models to satisfy unmet demand.

To overlay socioeconomic factors, the team members used publicly available data for U.S. Census tracts. These tracts on average cover about 1,700 households each, about half the size of a ZIP code and about 4 percent of a typical U.S. county. They unearthed other nuggets. For example, once solar penetration reaches a certain level in a neighborhood, adoption takes off, which is not surprising. But if a given neighborhood has a lot of income inequality, that activator often does not switch on. Using geographic data, the team also discovered a significant threshold for how much sunlight a given area needs to trigger adoption.
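As a rough illustration of this kind of overlay, the sketch below joins per-tract installation counts to census attributes and compares adoption across income bands. The file names and column schema are assumptions, not the DeepSolar team’s actual data layout.

```python
import pandas as pd

solar = pd.read_csv("deepsolar_tracts.csv")   # tract_id, installations, households
census = pd.read_csv("census_tracts.csv")     # tract_id, median_income, gini

df = solar.merge(census, on="tract_id")
df["adoption_rate"] = df["installations"] / df["households"]

# Bucket tracts by income and compare adoption: the article's finding is that
# adoption rises with income but flattens above roughly $150,000 a year.
df["income_band"] = pd.cut(df["median_income"],
                           bins=[0, 50_000, 100_000, 150_000, float("inf")],
                           labels=["<50k", "50-100k", "100-150k", ">150k"])
print(df.groupby("income_band", observed=True)["adoption_rate"].mean())
```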

“We found some insights, but it’s just the tip of the iceberg of what we think other researchers, utilities, solar developers and policymakers can further uncover,” Majumdar said. “We are making this public so that others find solar deployment patterns, and build economic and behavioral models.”

Finding the panels

The team trained the machine learning program, named DeepSolar, to identify solar panels by providing it with about 370,000 images, each covering about 100 feet by 100 feet. Each image was labeled as either having or not having a solar panel present. From that, DeepSolar learned to identify features associated with solar panels—for example, color, texture and size.

“We don’t actually tell the machine which visual feature is important,” said Jiafan Yu, a doctoral candidate in electrical engineering who built the system with Zhecheng Wang, a doctoral candidate in civil and environmental engineering. “All of these need to be learned by the machine.”
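As a rough sketch of this kind of supervised setup, the snippet below fine-tunes a standard ImageNet-pretrained classifier on tiles labeled for the presence of solar panels. The backbone, folder layout, and hyperparameters are assumptions for illustration; the actual DeepSolar model is more sophisticated.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Folder layout assumed: tiles/solar/*.png and tiles/no_solar/*.png
train_set = datasets.ImageFolder("tiles", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # binary head: solar / no solar
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
# Evaluation then tracks the two scores the article quotes: precision (how
# often a "solar" call is correct) and recall (how few real installations
# are missed).
```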

Eventually, DeepSolar could correctly identify an image as containing solar panels 93 percent of the time, and it missed about 10 percent of images that did have solar installations. On both scores, DeepSolar is more accurate than previous models, the authors say in the report.

The group then had DeepSolar analyze the billion satellite images to find solar installations—work that would have taken existing technology years to complete. With some novel efficiencies, DeepSolar got the job done in a month.

The resulting database contains not only residential solar installations, but those on the roofs of businesses, as well as many large, utility-owned solar power plants. The scientists, however, had DeepSolar skip the most sparsely populated areas, because it is very likely that buildings in these rural areas either do not have solar panels, or they do but are not attached to the grid. The scientists estimated based on their data that 5 percent of residential and commercial solar installations exist in the areas not covered.

“Advances in machine learning technology have been amazing,” Wang said. “But off-the-shelf systems often need to be adapted to the specific project and that requires expertise in the project’s topic. Jiafan and I both focus on using the technology to enable renewable energy.”

Moving forward, the researchers plan to expand the DeepSolar database to include solar installations in rural areas and in other countries with high-resolution satellite imagery. They also intend to add features that calculate a solar installation’s angle and orientation, which would allow accurate estimates of its power generation; for now, DeepSolar’s measure of size is only a proxy for potential output.

The group expects to update the U.S. database annually with new satellite images. The information could ultimately feed into efforts to optimize regional U.S. electricity systems, including Rajagopal and Yu’s project to help utilities visualize and analyze distributed energy resources.

Meeshkan, a Finnish startup that made quite a splash at the recent Slush conference, has quietly raised €370,000 in pre-seed funding to continue developing its “ChatOps” product for machine learning developers.

Deployed on Slack, the bot allows developers to “rapidly stop, restart, fork, tweak, monitor, deploy and test machine learning models” without interrupting the collaborative workflows they are accustomed to or being forced to go back and forth between disparate developer tools.

Under the hood, Meeshkan says it uses patent-pending tech for speedily partitioning data flows across distributed infrastructure. Relatedly, the burgeoning company is currently partnering with Northeastern University and CUDA to develop “zero-downtime” checkpointing of ML models in TensorFlow and PyTorch.

In an email exchange, Meeshkan founder Mike Solomon explained that training ML models is currently done through command-line interfaces and web dashboards, which is not optimal for collaboration. Teams typically need to communicate about ML model training, make decisions about models, act on those decisions instantly, and react to push notifications about a job’s status, none of which happens conveniently through the command line or a web dashboard.

“My generation writes less and less code, but we are iterating on it faster and faster with incremental changes,” he says. “In machine learning, this could be a small tweak in the learning rate of a model. In unit testing, this could be covering the corner case of an API that returns null values in certain circumstances. What unites these scenarios is that developers are dealing with externalities, like data or a third-party API, and trying to build fast on top of them. A world-class IDE, while it helps with lots of problems, does not provide much value for these small tweaks. We’ve found that what developers need is a frictionless environment to make the tweak/test/learn loop turn as fast as possible”.

To begin fixing this, Solomon tells me that Meeshkan set out to create a bot on Slack that helps teams monitor and tweak the training of their ML models in real time. “For ML engineers, we found that they spent hours on Slack discussing what to do with their models but had to resort to arcane and byzantine hacks to apply, document and archive these changes,” he says.

“We made a simple bot where teams can turn their discussions on Slack about things like changing a learning rate or a batch size into action, right from Slack. From this simple idea, the floodgates opened. Developers really quickly let us know what they wanted to control from Slack, some of which is trivial to implement, some of which is profoundly difficult and leads us to uncharted engineering territory”.
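The core mechanic, turning a chat message into an immediate change to a running training job, can be sketched in a few lines. The command syntax and the shared training-state interface below are invented for illustration; the article does not describe Meeshkan’s actual bot or API.

```python
import re

# Shared state that a running training loop reads between steps.
training_state = {"learning_rate": 1e-3, "batch_size": 32, "paused": False}

def handle_chat_command(text):
    """Parse messages like 'set learning_rate 3e-4' or 'pause'.
    Values are parsed as floats for simplicity in this sketch."""
    if text.strip() == "pause":
        training_state["paused"] = True
        return "Training paused."
    match = re.match(r"set (\w+) ([\d.e+-]+)", text.strip())
    if match and match.group(1) in training_state:
        key, value = match.group(1), float(match.group(2))
        training_state[key] = value
        return f"{key} set to {value}; applied at the next training step."
    return "Unrecognized command."

# In the training loop, the job reads the shared state between steps, so a
# tweak made in chat takes effect without stopping the run:
#   lr = training_state["learning_rate"]
print(handle_chat_command("set learning_rate 3e-4"))
```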

Meeshkan has several patent-pending algorithms from the resulting work. Solomon also explained that the same underlying problem exists in continuous integration and “data wrangling” as well, and that the team is developing a suite of products that address this concern.

This includes a second product called unmock.io, which brings the same idea to testing and continuous integration and has seen traction at AWS re:Invent. “We look to be releasing more tools along this line during Q1 of 2019,” he adds.

The AI Index 2018 report is out, and if you’re interested in AI enough to read this newsletter, you really should read the report through for yourself.

Maybe it’s the nerdy thing you do when lounging with family this holiday season, or something you take in during a long walk or a trip, but it’s worth a look, since it’s one of the very few attempts to compile a comprehensive picture of the amalgamation that is the AI industry. See last year’s newsletter on the annual report for a recap.

It doesn’t hurt that leaders from the most advanced organizations in this space, including OpenAI, MIT, and SRI International, played a role in putting it together.

Some major takeaways worth considering:

– Performance continues to improve on benchmarks like GLUE for natural language understanding, as well as on the AI2 Reasoning Challenge, which tests the ability to answer multiple-choice questions like a grade-school child.

– Growth in published AI papers in China has been driven in part by government-affiliated authors, whose output saw a 400 percent increase in 2017; corporate AI papers saw a 73 percent increase. By contrast, the United States saw its biggest increase in published AI papers from corporate tech giants like Google, Nvidia, and Microsoft.

As the Index reports, Europe leads the world in total number of research papers produced, followed closely by China. Within less than five years, China could lead the world in total number of papers published, according to an Elsevier report released this week.

– AI is a global industry, with 83 percent of papers on Scopus published outside the United States.

– The U.S. continues to lead in AI-related patents, and AI startup funding is up 4.5 times, compared with 2 times for other sectors receiving venture capital investment.

– More than half of Partnership on AI members are nonprofits now, like the ACLU and the United Nations Development Programme.

– TensorFlow is still far and away the most popular machine learning framework.

One of my favorite stats by far in this year’s report, however, is the total number of mentions of AI and machine learning in earnings calls by companies listed on the New York Stock Exchange. It’s a metric that points to how businesses are changing the way they talk about artificial intelligence.

It’s true there are still companies selling magic beans and snake oil out there, but empty claims aren’t enough anymore.

Earlier this week, ahead of the release of the AI Transformation Playbook, I spoke with Andrew Ng. The former Baidu AI chief scientist and Google Brain cofounder said he was encouraged, and a bit surprised, that the irrational AI hype around AGI and killer robots did not seem as prevalent as it has been in the past. Understanding of what AI can and cannot do could help reduce these fears.

There may still be a fair number of startups and businesses that want to call themselves AI companies and sprinkle the term all over the place to justify their value. But increasingly, it’s not enough to call yourself an AI company – you’ve got to prove it, and demonstrate how that AI creates a virtuous cycle that gives your company a competitive advantage.

It’s not entirely surprising that there are more mentions of AI in earnings calls, as more companies are in fact looking to use AI. Tata Consulting reported this week that 46 percent of organizations have implemented some form of AI, but implementing is not the same as implementing successfully, and smart companies aren’t just talking about AI, they’re looking for ways to spread it successfully throughout their organizations.

As time goes on and the luster of the first round of AI hype wears off, calling yourself an AI-first business doesn’t seem to be enough anymore.

The smartest businesses seem to keep trust, rapidly shifting consumer sentiment, and the value of diverse employees and perspectives in mind when building systems for intelligent machines.

Today marks the kickoff for Microsoft Connect(); 2018, Microsoft’s annual cloud- and data-focused developer conference, and the Seattle company wasted no time getting down to business. It announced the general availability of Azure Machine Learning service, a cloud platform that enables developers to build, train, and deploy AI models, and updates to Azure Cognitive Services, a collection of natural language processing, speech recognition, and computer vision APIs. And it launched a more affordable Azure Cosmos DB tier, a turnkey solution for distributed cloud-based workloads.

But that’s not all. Microsoft also took the wraps off upgrades to Azure Stream Analytics on Azure IoT Edge, which processes data from IoT solutions locally; a new and improved Azure IoT Device Simulation Solution Accelerator; improvements to Azure IoT Remote Monitoring solution accelerator and Azure Time Series Insights; and Azure Maps enhancements.

AI and Data

After a relatively lengthy preview period, Azure Machine Learning is now available to all customers, along with a new feature in preview: model explainability. Starting this week, customers will be able to identify which input features weighed most heavily on an AI system’s predictions.
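As a generic illustration of what such explainability features compute, the sketch below uses scikit-learn’s permutation importance to rank input features by their influence on predictions. This is a stand-in for the concept only, not Azure Machine Learning’s API.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
# Shuffle each feature in turn; the drop in score shows how heavily the
# model relied on that feature for its predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```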

Launching in general availability are Azure Machine Learning’s core features, including support for AI frameworks such as PyTorch, TensorFlow, and scikit-learn; automated hyperparameter tuning; and the capability to deploy to both cloud and edge environments.

“We’ve received a lot of positive feedback from customers who’ve been using Azure Machine Learning,” Eric Boyd, corporate vice president at Microsoft, told VentureBeat in a phone interview. “It’s helping them to get their work done more quickly and efficiently than before, whether in the cloud or on-premises … [because] it doesn’t require you to be a data scientist to use it. [The] automated machine learning [features] help select the appropriate algorithms to use.”

New pricing is set to take effect on February 1, 2019.

Azure Cognitive Services, meanwhile, has gained two key features: (1) container support for Language Understanding and (2) custom translation. The former, which is available in early access starting today, allows Azure developers to deploy apps with object detection, vision recognition, and speech recognition on the edge, and to more easily maintain architectures across the cloud and edge. Custom translation, meanwhile, which is now generally available, lets users tap human-translated content to build a custom translation system that can better handle specific vocabulary (think jargony terms like “contingent workforce” and “deliverables”) and distinct writing styles.

IoT

On the internet of things (IoT) side of the equation, Microsoft today made publicly available Azure Stream Analytics (ASA) on Azure IoT Edge, which simplifies the process of moving analytics between the cloud and edge devices with limited bandwidth and connectivity. Niftily, it runs within the IoT Edge framework, meaning jobs created in it can be deployed and managed using the IoT Hub.

ASA on IoT Edge launches today, following a preview that began in November 2017.

Microsoft also revealed updates to the Azure IoT Device Simulation Accelerator. Now, it’s easier to script complex device behavior (including multiple devices in a single simulation), and to run simulations that emulate real-world environments.

Those aren’t the only IoT platform updates coming down the pipeline.

Previously, IoT solution accelerators, a service that creates customized solutions for common IoT scenarios, allowed developers to manage devices, modules, and actions only within the Azure portal. But thanks to an enhanced Azure IoT Remote Monitoring user interface rolling out this week, they can now more easily trigger actions (such as email notifications) in response to device alerts, manage device updates using Automatic Device Management, and visualize device data using Azure Time Series Insights.

On the Azure Maps front, Microsoft debuted a new S1 pricing tier. It’s available alongside the Standard S0 offering and provides an enhanced service level for “production-scale” deployments of apps using Azure Maps, without a Query Per Second limitation.

Last, but not least, Microsoft launched new Time Series Insights features in public preview. Azure Time Series Insights — a full-stack analytics, storage, and visualization service for time-series data from IoT deployments — now lets customers more effectively store and analyze both modeled and ad hoc data. They can add rich contextualization to telemetry data, store IoT data in layers, and tap machine learning and analytics tools for insights.

Additionally, Microsoft is introducing a new usage-based pricing model for Time Series Insights, which is available today.

Video games have become a proving ground for AIs, and Uber has shown how its new type of reinforcement learning has succeeded where others have failed.

Even some of humankind’s most complex games, like Go, have ultimately failed to withstand AIs from the likes of DeepMind. Reinforcement learning trains algorithms by running scenarios repeatedly, with a ‘reward’ given for successes – often a score increase.
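In its simplest tabular form (classic Q-learning, one standard variant), that loop looks roughly like the sketch below, which nudges the estimated value of each state-action pair toward the observed reward. The `env` object is a placeholder assumed to follow a Gym-style reset/step interface.

```python
import random
from collections import defaultdict

q = defaultdict(float)          # estimated value of each (state, action) pair
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(1000):     # run scenarios repeatedly
    state = env.reset()         # `env`: assumed Gym-style interface
    done = False
    while not done:
        actions = list(range(env.action_space.n))
        # Mostly exploit what we know; occasionally explore at random.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state, reward, done, _ = env.step(action)
        # Nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state = next_state
```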

Two classic games from the ’80s – Montezuma’s Revenge and Pitfall! – have thus far been immune to the traditional reinforcement learning approach, because they offer little in the way of notable rewards until late in the game.

Applying traditional reinforcement learning typically results in a failure to progress out of the first room in Montezuma’s Revenge, while in Pitfall! it fails completely.

One way researchers have attempted to provide the necessary incentives is to add rewards for exploration, an approach called ‘intrinsic motivation’. However, this approach has shortcomings.

“We hypothesize that a major weakness of current intrinsic motivation algorithms is detachment,” wrote Uber’s researchers, “wherein the algorithms forget about promising areas they have visited, meaning they do not return to them to see if they lead to new states.”

Uber’s AI research team in San Francisco developed a new type of reinforcement learning to overcome the challenge.

The researchers call their approach ‘Go-Explore’: the AI returns to a previously visited state or area to check whether exploring from there yields a better result. Supplementing the algorithm with human domain knowledge to guide it towards notable areas sped up its progress dramatically.
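Stripped to its essentials, the Go-Explore loop can be sketched as follows. The `downsample` cell representation and the emulator’s `save_state`/`restore_state` hooks are assumptions standing in for details the article doesn’t cover.

```python
import random

archive = {}  # cell -> (best score reaching it, saved emulator state)

def cell_of(observation):
    # A coarse representation that groups visually similar states together;
    # `downsample` is an assumed helper.
    return downsample(observation)

def go_explore(env, iterations=10_000, explore_steps=100):
    obs = env.reset()
    archive[cell_of(obs)] = (0, env.save_state())
    for _ in range(iterations):
        # Go: pick a cell from the archive (promising areas are never
        # forgotten, which addresses the "detachment" problem) and restore it.
        score, saved = archive[random.choice(list(archive))]
        env.restore_state(saved)
        # Explore: take random actions from that state.
        for _ in range(explore_steps):
            obs, reward, done, _ = env.step(env.action_space.sample())
            score += reward
            c = cell_of(obs)
            # Remember any new cell, or a better route to a known one.
            if c not in archive or score > archive[c][0]:
                archive[c] = (score, env.save_state())
            if done:
                break
```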

If nothing else, the research provides some comfort that we feeble humans are not yet fully redundant, and that the best results will be attained by working hand-in-binary with our virtual overlords.