1. Autonomous and connected electric vehicles

AI-guided autonomous vehicles (AVs) will enable a transition to mobility on demand over the coming years and decades. Substantial greenhouse-gas reductions for urban transport can be unlocked through route and traffic optimisation, eco-driving algorithms, programmed “platooning” of cars in traffic, and autonomous ride-sharing services. Electric AV fleets will be critical to delivering real gains.

2. Distributed energy grids

AI can enhance the predictability of demand and supply for renewables across a distributed grid, improve energy storage, efficiency and load management, assist in the integration and reliability of renewables and enable dynamic pricing and trading, creating market incentives.
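To make the demand-prediction idea concrete, here is a minimal sketch of short-term load forecasting from lagged smart-meter readings. The data below is synthetic, and the model choice is an assumption for illustration, not a description of any production grid system.

```python
# Minimal sketch: short-term load forecasting from lagged demand.
# The hourly series is synthetic stand-in data, not real meter readings.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = pd.date_range("2024-01-01", periods=24 * 60, freq="h")
demand = pd.Series(
    100 + 20 * np.sin(2 * np.pi * rng.hour / 24) + np.random.randn(len(rng)) * 5,
    index=rng,
)

df = pd.DataFrame({"demand": demand})
for lag in (1, 2, 24):                # demand 1h, 2h and 24h earlier
    df[f"lag_{lag}"] = df["demand"].shift(lag)
df["hour"] = df.index.hour            # time-of-day feature
df = df.dropna()

X, y = df.drop(columns="demand"), df["demand"]
model = GradientBoostingRegressor().fit(X[:-24], y[:-24])  # hold out last day
print("next-day MAE:", np.abs(model.predict(X[-24:]) - y[-24:]).mean())
```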

3. Smart agriculture and food systems

AI-augmented agriculture involves automated data collection, decision-making and corrective actions via robotics to allow early detection of crop diseases and issues, to provide timed nutrition to livestock, and generally to optimise agricultural inputs and returns based on supply and demand. This promises to increase the resource efficiency of the agriculture industry, lowering the use of water, fertilisers and pesticides that damage important ecosystems, and increasing resilience to climate extremes.

4. Next generation weather and climate prediction

A new field of “Climate Informatics” is blossoming that uses AI to fundamentally transform weather forecasting and improve our understanding of the effects of climate change. This field has traditionally required high-performance, energy-intensive computing, but deep-learning networks can run much faster and incorporate more of the complexity of the ‘real-world’ system into the calculations.

In just over a decade, growing computational power and advances in AI could give home computers as much power as today’s supercomputers, lowering the cost of research, boosting scientific productivity and accelerating discoveries. AI techniques may also help correct biases in models, extract the most relevant data to avoid data degradation, predict extreme events and support impacts modelling.

5. Smart disaster response

AI can analyse simulations and real-time data (including social media data) of weather events and disasters in a region to seek out vulnerabilities and enhance disaster preparation, provide early warning, and prioritise response through coordination of emergency information capabilities. Deep reinforcement learning may one day be integrated into disaster simulations to determine optimal response strategies, similar to the way systems such as AlphaGo identify the best move in games like Go.

AI for the Earth game-changers: indicative timeline – Image: PwC

6. AI-designed intelligent, connected and livable cities

AI could be used to simulate and automate the generation of zoning laws, building ordinances and floodplains, combined with augmented and virtual reality (AR and VR). Real-time city-wide data on energy, water consumption and availability, traffic flows, people flows, and weather could create an “urban dashboard” to optimise urban sustainability.

7. A transparent digital Earth

A real-time, open API, AI-infused, digital geospatial dashboard for the planet would enable the monitoring, modelling and management of environmental systems at a scale and speed never before possible – from tackling illegal deforestation, water extraction, fishing and poaching, to air pollution, natural disaster response and smart agriculture.

8. Reinforcement learning for Earth sciences breakthroughs

This nascent AI technique – which requires no training data and substantially less computing power, because the system learns by playing against itself – could soon be applied to real-world problems in the natural sciences. Collaboration with Earth scientists is vital to identify the systems – from climate science, materials science, biology and other areas – that can be codified so reinforcement learning can drive scientific progress and discovery. For example, DeepMind co-founder Demis Hassabis has suggested that, in materials science, a descendant of AlphaGo Zero could be used to search for a room-temperature superconductor – a hypothetical substance that would allow for incredibly efficient energy systems.


Portrait Mode on the Pixel smartphones lets you take professional-looking images that draw attention to a subject by blurring the background behind it. Last year, we described, among other things, how we compute depth with a single camera using its Phase-Detection Autofocus (PDAF) pixels (also known as dual-pixel autofocus) using a traditional non-learned stereo algorithm. This year, on the Pixel 3, we turn to machine learning to improve depth estimation to produce even better Portrait Mode results.

Left: The original HDR+ image. Right: A comparison of Portrait Mode results using depth from traditional stereo and depth from machine learning. The learned depth result has fewer errors. Notably, in the traditional stereo result, many of the horizontal lines behind the man are incorrectly estimated to be at the same depth as the man and are kept sharp. (Mike Milne)

A Short Recap
As described in last year’s blog post, Portrait Mode uses a neural network to determine what pixels correspond to people versus the background, and augments this two-layer person segmentation mask with depth information derived from the PDAF pixels. This enables a depth-dependent blur, which is closer to what a professional camera does.

PDAF pixels work by capturing two slightly different views of a scene, shown below. Flipping between the two views, we see that the person is stationary, while the background moves horizontally, an effect referred to as parallax. Because parallax is a function of the point’s distance from the camera and the distance between the two viewpoints, we can estimate depth by matching each point in one view with its corresponding point in the other view.

The two PDAF images on the left and center look very similar, but in the crop on the right you can see the parallax between them. It is most noticeable on the circular structure in the middle of the crop.
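To make the parallax-to-depth relationship concrete, here is a tiny sketch of the standard pinhole stereo relation that underlies this matching. The focal length and baseline numbers are illustrative assumptions, not Pixel hardware specifications.

```python
# Minimal sketch of the stereo relation behind PDAF depth: disparity (the
# horizontal shift of a point between the two views) is inversely
# proportional to depth. The numbers below are illustrative only.
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Pinhole stereo model: depth = f * B / d."""
    return focal_length_px * baseline_m / disparity_px

# A 2-pixel shift with a 3000-pixel focal length and a 1 mm baseline
# implies the point is about 1.5 m from the camera.
print(depth_from_disparity(2.0, 3000.0, 0.001))  # 1.5
```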

However, finding these correspondences in PDAF images (a method called depth from stereo) is extremely challenging because scene points barely move between the views. Furthermore, all stereo techniques suffer from the aperture problem. That is, if you look at the scene through a small aperture, it is impossible to find correspondence for lines parallel to the stereo baseline, i.e., the line connecting the two cameras. In other words, when looking at the horizontal lines in the figure above (or vertical lines in portrait orientation shots), any proposed shift of these lines in one view with respect to the other view looks about the same. In last year’s Portrait Mode, all these factors could result in errors in depth estimation and cause unpleasant artifacts.

Improving Depth Estimation
With Portrait Mode on the Pixel 3, we fix these errors by utilizing the fact that the parallax used by depth from stereo algorithms is only one of many depth cues present in images. For example, points that are far away from the in-focus plane appear less sharp than ones that are closer, giving us a defocus depth cue. In addition, even when viewing an image on a flat screen, we can accurately tell how far things are because we know the rough size of everyday objects (e.g. one can use the number of pixels in a photograph of a person’s face to estimate how far away it is). This is called a semantic cue.
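The face-size example can be made concrete with the same pinhole model. The average face width and focal length below are assumptions chosen purely for illustration.

```python
# Hedged illustration of the semantic cue: if we assume an average face is
# ~0.16 m wide and we know the camera's focal length in pixels, the number
# of pixels a face spans implies its distance.
FACE_WIDTH_M = 0.16       # assumed average face width
FOCAL_LENGTH_PX = 3000.0  # assumed focal length in pixels

def distance_from_face_width(face_width_px):
    # object distance Z = f * real_width / image_width
    return FOCAL_LENGTH_PX * FACE_WIDTH_M / face_width_px

print(distance_from_face_width(400))  # a 400-px-wide face is ~1.2 m away
```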

Designing a hand-crafted algorithm to combine these different cues is extremely difficult, but by using machine learning, we can do so while also better exploiting the PDAF parallax cue. Specifically, we train a convolutional neural network, written in TensorFlow, that takes as input the PDAF pixels and learns to predict depth. This new and improved ML-based method of depth estimation is what powers Portrait Mode on the Pixel 3.
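The exact network is not spelled out in the post; as a hedged illustration, a minimal TensorFlow/Keras encoder-decoder of the kind described might look like the sketch below. The input shape and layer sizes are assumptions, not the production Pixel 3 model.

```python
# Hypothetical sketch: a small convolutional encoder-decoder that maps a
# two-view PDAF input to a per-pixel depth map. Sizes are illustrative.
import tensorflow as tf

inputs = tf.keras.Input(shape=(128, 128, 2))  # two PDAF views stacked as channels
x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
x = tf.keras.layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
depth = tf.keras.layers.Conv2D(1, 3, padding="same")(x)  # one depth value per pixel

model = tf.keras.Model(inputs, depth)
model.compile(optimizer="adam", loss="mae")  # trained against ground-truth depth
model.summary()
```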

Training the Neural Network
In order to train the network, we need lots of PDAF images and corresponding high-quality depth maps. And since we want our predicted depth to be useful for Portrait Mode, we also need the training data to be similar to pictures that users take with their smartphones.

To accomplish this, we built our own custom “Frankenphone” rig that contains five Pixel 3 phones, along with a Wi-Fi-based solution that allowed us to simultaneously capture pictures from all of the phones (within a tolerance of ~2 milliseconds). With this rig, we computed high-quality depth from photos by using structure from motion and multi-view stereo….
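As a hedged aside, the two-view stereo matching at the heart of such a multi-view pipeline can be sketched with OpenCV's semi-global matcher. This is a simplification: the ground-truth pipeline described above combines many views with structure from motion, and the file names here are placeholders.

```python
# Sketch of classical two-view stereo matching (a building block of
# multi-view stereo). Input file names are placeholders.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # fixed-point to px
out = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", out)
```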

Things to come in a very near future [Infographic]

Artificial intelligence, among many other technologies, is changing how our lives will look in the coming years.


But they also have the potential to be the easiest to use. For the launch of our V4 software, we set ourselves the challenge of designing an application where someone who has never even touched a VR headset could figure out what to do with no instructions whatsoever. That application became Cat Explorer, which you can download now for Oculus Rift and HTC Vive.

Perhaps the most important issue to consider when designing hand interactions with virtual objects is the lack of tactile perception and material support. In the physical world, we rely on tangible things to dissipate tremors and inaccuracies, offer mechanical constraints that restrict erratic motion, provide high fidelity feedback at the points of contact, and serve as a rest, allowing some of the arm muscles to relax while staying productive over longer periods of time.


In a landmark study, US lawyers with decades of experience in corporate law and contract review were pitted against the LawGeex AI algorithm to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.

Twenty US-trained lawyers, with decades of legal experience ranging from law firms to corporations, were asked to spot legal issues in five standard NDAs. They competed against a LawGeex AI system that had been developed over three years and trained on tens of thousands of contracts. The research was conducted with input from academics, data scientists, and legal and machine-learning experts, and was overseen by an independent consultant and lawyer.

Artificial Intelligence, Machine Learning, Virtual Reality, Augmented Reality, RPA and IoT are among the reasons several new IT jobs will grow in the coming years, while others will decline and disappear, or transform into other, most likely automated, roles.



Artificial intelligence has been a focus of discussions at the World Economic Forum’s annual meeting in Davos, Switzerland, over the past few years, so the organization decided to partner with Deloitte Consulting on a study that sought to “cut through the sensationalism surrounding AI” and offer helpful insights for business leaders and policymakers.

“Financial institutions around the world are making large-scale investments in AI, while governments and regulators seek to grapple with the significant uncertainties and growing public trepidation as AI becomes central to the fabric of institutions and markets,” according to a new report the forum published with Deloitte, “The New Physics of Financial Services.”

The two organizations surveyed financial services executives about AI and held half-day workshops around the world, including one at Davos, on the topic over the past year. They came to several conclusions about how AI is reshaping the financial industry, including the five noted here.

Banks will need to use AI to create competitive products

“As products and services become more easily comparable and therefore commoditized, it’s not sufficient any more to compete on delivering credit quickly and at a good price, which have been the historic competitive levers” for banks, said Rob Galaski, Deloitte Global Banking and Capital Markets Consulting leader and one of the authors of the report.

For example, to keep its auto loan business relevant, Royal Bank of Canada is piloting a forecasting tool for car dealers to predict demand for vehicle purchases based on customer data.

Such information could be more valuable to the dealers than any banking product, Galaski said.

“We think that is an exemplar of how we see the industry changing overall,” he said. “Much of the AI debate coming into our work was around replacing humans and doing existing things better or faster. But that take on AI dramatically underestimates the impact. The very way we go about conducting business can be redesigned using AI.”

Companies that don’t have scale or AI-based customization will get squeezed out

The report hypothesizes that midtier financial services providers will struggle in this AI-based competition.

“If you’re the scale player in the offering of a financial product or service and you’re able to offer it at lowest cost, that will continue to be a sustainable position,” said R. Jesse McWaters, financial innovation lead at the World Economic Forum and another author of the report. “Otherwise, you’re going to need to offer some level of customization. Simply having a fairly low-cost but relatively undifferentiated product will no longer be a sustainable competitive strategy.”

Banks that try to play a jack-of-all-trades role are likely to suffer, he said. Those that become more specialized in products or customers served will succeed.

Adaptability will be all-important in this AI-fueled competitive landscape

“The proper delineation between who wins and who loses comes down to the degree of adaptability rather than the size of the company,” Galaski said. “The natural advantage does go to scale players. But if you’re scaled but have low adaptability, you will lose. If you don’t have scale but have a high degree of adaptability, there are a number of modular service providers you can plug into your infrastructure to gain the perception or appearance of scale.”

Several technologies are reaching maturity that banks will need to adapt to, he said: blockchain, AI, quantum computing and cloud computing.

“They are expensive to implement, they require massive scales of data, they require expertise to operate them, so naturally speaking, larger companies should be able to have an advantage” in implementing them, Galaski said. “But the larger companies have shown themselves to be less adaptable in many cases than smaller companies.”

Large-scale players that develop an adaptable mindset will be the winners in the future, he said….


An example that gets cited frequently to show how difficult this can be is the moral decision an autonomous car might have to make to avoid a collision: Suppose there’s a bus coming toward a driver who has to swerve to avoid being hit and seriously injured; however, the car will hit a baby if it swerves left and an elderly person if it swerves right—what should the autonomous car do?

“Without proper care in programming AI systems, you could potentially have the bias of the programmer play a part in determining outcomes. We have to develop frameworks for thinking about these types of issues. It is a very, very complicated topic, one that we’re starting to address in partnership with other technology organizations,” says Arvind Krishna, Senior Vice President of Hybrid Cloud and Director of IBM Research, referring to the Partnership on AI formed by IBM and several other tech giants.

There have already been several high-profile instances of machines demonstrating bias. AI technicians have experienced first-hand how this can erode trust in AI systems, and they’re making some progress toward identifying and mitigating the origins of bias.

“Machines get biased because the training data they’re fed may not be fully representative of what you’re trying to teach them,” says IBM Chief Science Officer for Cognitive Computing Guru Banavar. “And it could be not only unintentional bias due to a lack of care in picking the right training dataset, but also an intentional one caused by a malicious attacker who hacks into the training dataset that somebody’s building just to make it biased.”

As Gabi Zijderveld, Affectiva’s head of product strategy and marketing, explains, preventing bias in datasets is largely a manual effort. In her organization, which uses facial recognition to measure consumer responses to marketing materials, they select a culturally diverse set of images from more than 75 countries to train their AI system to recognize emotion in faces. While emotional expressions are largely universal, they do sometimes vary across cultures. For example, a smile that appears less pronounced in one culture might actually convey the same level of happiness as a smile in another culture. Her organization also labels all the images with their corresponding emotion by hand and tests every single AI algorithm to verify its accuracy.

To further complicate efforts to instill morality in AI systems, there is no universally accepted ethical system for AI. “It begs the question, ‘whose values do we use?’” says IBM Chief Watson Scientist Grady Booch. “I think today, the AI community at large has a self-selecting bias simply because the people who are building such systems are still largely white, young and male. I think there is a recognition that we need to get beyond it, but the reality is that we haven’t necessarily done so yet.”

And perhaps the value system for a computer should actually be altogether different than that of humans, posits IBM Research Manager in affective computing David Konopnicki. “When we interact with people, the ethics of interaction are usually clear. For example, when you go to a store you often have a salesman that is trying to convince you to buy something by playing on your emotions. We often accept that from a social point of view—it’s been happening for thousands of years. The question is, what happens when the salesman is a computer? What people find appropriate or not from a computer might be different than what people are going to accept from a human.”…


There may be potential, they say, but it is far too hard to implement AI and see results, and many have just given up.

This is a shame, because AI has the potential to change marketers’ lives in a very real way. Some tools, of course, are best used by large companies, but there are a number that can be used by companies of all sizes, and which will make a big difference to how you contact new and potential customers, drive a first purchase, convert single-purchase customers into repeat customers, and engage long-term customers more fully.

Making contact

Making contact with potential customers is very much about content in marketing terms. Content is what pulls in potential customers, and then keeps them looking. Many marketers would say that AI does not have much of a role in a content strategy, and this is certainly true if your content is focused on opinion-type articles.

There are, however, technologies that can help with particular types of content. AI report writers can provide a reasonable summary of regular financial reports or routine data in a fairly human-sounding way. They can, therefore, supplement human content-writers by doing some of the more basic work. AI can also help to point readers toward other articles and content they might like, using very similar technology to the recommendation engines at Amazon or Netflix.
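As a hedged sketch of that recommendation idea, one simple approach ranks articles by TF-IDF cosine similarity to the one currently being read. The article texts below are placeholders, and real recommendation engines blend many more signals.

```python
# Minimal "readers also liked" sketch: rank articles by TF-IDF cosine
# similarity to the article being read. Texts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "quarterly earnings report summary",
    "guide to email marketing automation",
    "annual financial results overview",
]
tfidf = TfidfVectorizer().fit_transform(articles)
sims = cosine_similarity(tfidf[0], tfidf).ravel()  # similarity to article 0
ranked = sims.argsort()[::-1][1:]                  # skip the article itself
print([articles[i] for i in ranked])               # most similar first
```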

These examples are ways in which marketers are using AI to make their lives easier. There are, however, also ways in which AI is changing how marketers need to work. These include both search tools and online advertising. These are probably the main ways in which people find information online, but the way that these tools are being used is changing. The arrival of AI-based intermediaries, such as personal assistants like Siri and Alexa, may alter how search is used. Marketers need to be aware of what is happening and ensure that their practice reflects this.


Ad placement is also becoming more scientific and may change in the wake of scandals about inappropriate websites. These are early days, but marketers need to keep an eye on what works and be prepared to change tactics relatively rapidly if necessary.

Drawing customers in and converting them to repeat customers

There are a number of ways in which AI will change the way that marketers work with customers pre-sales and to encourage repeat purchasing. Many of these are based around predictive modelling. There is more and more data available about customer behaviour, and models can predict future actions with increasing accuracy. This, for example, allows marketers to identify the most likely prospects, both for initial sales and future sales. Advertisements can also be targeted with increased accuracy…
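As a minimal sketch of such predictive modelling, a propensity model can score prospects by their predicted probability of purchasing. The features and data below are hypothetical, chosen only to illustrate the shape of the approach.

```python
# Hypothetical propensity-modelling sketch: score prospects by predicted
# purchase probability from behavioural features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: site visits, email clicks, days since last visit (made-up data)
X = np.array([[5, 2, 1], [1, 0, 30], [8, 4, 2], [2, 1, 14]])
y = np.array([1, 0, 1, 0])  # 1 = made a purchase in the past

model = LogisticRegression().fit(X, y)
prospects = np.array([[6, 3, 3], [1, 0, 60]])
print(model.predict_proba(prospects)[:, 1])  # purchase propensity scores
```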