Please give an overview of the past research into machine learning and artificial intelligence in medical imaging. What are we currently able to do with this research?

The two major tasks in medical imaging that appear naturally predestined to be solved with AI algorithms are segmentation and classification. Historically, most techniques used in medical imaging came from conventional image processing or, more broadly, computer vision.

One can find many early works with artificial neural networks, the backbone of deep learning. However, most research focused, and to a large extent still focuses, on conventional computer vision with “handcrafted” features: techniques designed manually to extract useful and discriminating information from medical images.

Some progress was visible in the late 90s and early 2000s (for instance, the SIFT method in 1999, or visual dictionaries in the early 2000s) but there were no breakthroughs. However, techniques like clustering and classification were in use with moderate success.

K-means (an old clustering method), support vector machines (SVM), probabilistic schemes, and decision trees and their extension, ‘random forests’, were among the successful approaches. But artificial neural networks continued to fall short of expectations, not just in medical imaging but in computer vision in general.
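To make the flavour of these classical methods concrete, here is a minimal, self-contained k-means sketch (illustrative toy data, not a clinical pipeline) that separates pixel intensities into two groups, the way simple clustering was once used to crudely separate tissue from background by brightness:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Tiny k-means on 1-D pixel intensities."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # assign every value to its nearest center ...
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # ... then move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# two well-separated intensity populations ("background" vs "tissue")
pixels = np.array([10, 12, 11, 200, 198, 205, 13, 202], dtype=float)
labels, centers = kmeans_1d(pixels, k=2)
# dark pixels end up in one cluster, bright pixels in the other
```

Methods like SVMs and random forests then took such cluster assignments or handcrafted features as their inputs.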

Shallow networks (consisting of a few layers of artificial neurons) could not solve difficult problems, and deep networks (consisting of many layers of artificial neurons) could not be reliably trained because they were too big. By the mid-2000s there was theoretical progress in this field, with the first major success stories arriving in the early 2010s on large datasets like ImageNet.

Now, suddenly, it was possible to recognise cats and cars in an image, perform facial recognition, and automatically label images with captions describing their content. The investigation of applications of these powerful AI methods in medical imaging started in the past 3-4 years and is still in its infancy, but promising results have been reported here and there.

What applications are there for machine learning and artificial intelligence in medical imaging?

Based on recent publications, it seems that the focus of many researchers is on diagnosis, mainly cancer diagnosis, where the output of the AI software is often a “yes/no” decision for malignant/benign, respectively.

The other stream is working on segmenting (marking) specific parts of the images, again with the main attention of many works being on cancer diagnosis and analysis, but also for treatment planning and monitoring.

However, there is much more that AI can offer to medical imaging. Looking at its potentials for radiogenomics, auto-captioning of medical images, recognition of highly non-linear patterns in large datasets, and quantification and visualization of extremely complex image content, are just some examples. We are at the very beginning of an exciting path with many bifurcations.

What are the current limitations in the characterization of tissues and their attributes with artificial intelligence? What needs to be done to overcome this?

AI is a large field with a multitude of techniques based on different ideas. Deep learning is just one of them, but it is the one with the most success in recognizing image content in recent years. However, deep learning faces multiple challenges in digital pathology.

First and foremost, it requires a large number of marked (labelled) images (images in which the region of interest has been manually delineated by a pathologist), but the general workflow of digital pathology does not provide labelled images. This has led researchers to work on specific cases, e.g., breast cancer, for which a small number of labelled images can be provided to demonstrate the feasibility of deep learning.

Another major challenge for deep learning in digital pathology is the dimensionality of the problem. Pathology images are extremely large, often larger than 50,000 by 50,000 pixels. Deep networks, however, can only handle small input images, e.g., not larger than 300 by 300 pixels. Down-sampling images (making them smaller) would result in a loss of information.
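A common workaround, sketched below with the slide and patch dimensions mentioned above, is to tile the gigantic slide into network-sized patches rather than down-sampling it; the numbers show how quickly one slide becomes tens of thousands of inputs:

```python
def patch_grid(width, height, patch=300, stride=300):
    """Top-left coordinates of fixed-size tiles covering a whole-slide image."""
    return [(x, y)
            for y in range(0, height - patch + 1, stride)
            for x in range(0, width - patch + 1, stride)]

# a 50,000 x 50,000 slide cut into 300 x 300 network-sized patches
tiles = patch_grid(50_000, 50_000)
# 166 tiles per row and per column -> 27,556 patches from a single slide
```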

A further obstacle in training deep networks is that they generally perform well only if they are fed with “balanced” data, meaning almost the same number of images for every category the network needs to recognize. Imbalanced data impedes generalization, which means the network may make grave mistakes after training.
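One standard mitigation, shown here as a small illustrative sketch, is to weight each class inversely to its frequency so that the loss function does not let the network ignore the rare (and usually clinically important) class:

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: the rarer a class, the larger its weight."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# a typical imbalance: 90 benign patches for every 10 malignant ones
w = class_weights(["benign"] * 90 + ["malignant"] * 10)
# w["malignant"] comes out 9x larger than w["benign"]
```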

A final problem worth mentioning is the so-called “adversarial attack”, in which someone with knowledge of the system, or exploiting the presence of artefacts and noise, could fool a deep network into a wrong decision. This is extraordinarily important in medical imaging; we cannot allow algorithms to be fooled when we are dealing with people’s lives.

Intensive research is being conducted at many fronts to find solutions for these and other challenges. Among others, one potential solution being worked on is “transfer learning”, to learn in a different domain and transfer the knowledge into the medical domain.

Can we teach the AI with millions of labelled natural photos (e.g., cars, faces, animals, buildings) and then use the acquired knowledge on histopathology images? Other potential remedies are to inject domain knowledge into deep networks, training “generative” models that do not directly deal with classification, and combining deep solutions with conventional algorithms and handcrafted features.
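The transfer-learning idea can be sketched numerically: keep a pretrained feature extractor frozen (here only caricatured by a fixed random projection, since real pretrained weights cannot fit in a few lines) and train a small new head on the target-domain data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a network pretrained on natural photos: a frozen feature
# extractor whose weights we never update (a caricature of real layers).
W_frozen = rng.normal(size=(64, 8))

def features(x):
    return np.tanh(x @ W_frozen)

# A small, trainable head for the new medical task: plain logistic
# regression fitted by gradient descent on the frozen features.
def train_head(X, y, lr=0.5, steps=300):
    f = features(X)
    w, b = np.zeros(f.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(f @ w + b)))  # sigmoid
        g = p - y                               # log-loss gradient
        w -= lr * f.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy stand-in for labelled medical images: two separable classes.
X = np.vstack([rng.normal(-1.0, 0.3, size=(50, 64)),
               rng.normal(+1.0, 0.3, size=(50, 64))])
y = np.array([0] * 50 + [1] * 50)

w, b = train_head(X, y)
pred = (1.0 / (1.0 + np.exp(-(features(X) @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()  # close to 1.0 on this easy toy data
```

The appeal in pathology is that only the small head needs labelled medical images; the frozen extractor has already learned generic visual features elsewhere.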

How would the use of medical imaging interplay with other histopathological tests? Could they be replaced with a simple image search?

Definitely not. Image searches would be a new facilitator that will assist the pathologist and provide new insights. Presently, we may not have an accurate understanding of where the image search would fit most usefully, but we know for sure that the pathologist must remain in the center of all processing.

The tasks that we assign to the AI and computer vision will be widely specialized and customized; they naturally cannot render other existing (non-AI) technologies and other modes of tests useless. It’s all about complementing existing procedures with new insights, and not replacing them; well, at least this should be the guiding attitude.

Please give an overview of your recent research to advance this field and the techniques that you have used.

At Kimia Lab, we have been working on a multitude of techniques, from deep networks to support vector machines, from local binary patterns to Radon transform, and from deep autoencoders to dimensionality reduction.

Our research philosophy is unconditionally pathologist-centric; we are there to design AI techniques that serve the pathology community. We are convinced that this is the right way of deploying AI, namely as a smart assistant to the pathologist and not a competitor.

We introduced a fundamental shift in our research, refraining from yes/no classification; instead, we are conducting many experiments to understand the polymorphic nature of tissue recognition before we attempt to design a final chain for the clinical workflow.

In addition, we have not lost our focus on non-AI computer vision: there are many conventional methods that exhibited mediocre performance back in the day but can now be rediscovered as partners to powerful AI, thanks to the faster computational platforms available.

What advantages are there to the Radon transform that you used in your research?

This is one example of our efforts to not lose sight of well-established technologies. The Radon transform is an old technique that has enabled us, among other things, to do computed tomography.

Projections in small and large parts of the image can provide compressed information about tissue characteristics and where significant changes occur. They can serve as inputs to AI algorithms to provide additional information in a setting where multiple technologies work together.

The Radon transform is not only a mathematically sound technology but, in contrast to deep networks, an interpretable one. Why a specific image is selected can be understood relatively easily when we acquire Radon projections, whereas the millions of multiplications and additions inside a network offer no plausible way of understanding why a specific decision has been made.
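The interpretability claim is easy to illustrate: at 0 and 90 degrees, Radon projections are simply sums along the image columns and rows, so a peak in a projection points directly at where the image mass sits (toy image below; a full Radon transform would sample many angles):

```python
import numpy as np

def projections(img):
    """Radon projections at 0 and 90 degrees: sums along columns and rows."""
    return img.sum(axis=0), img.sum(axis=1)

# a tiny "image" with a bright structure in its top-left corner
img = np.zeros((4, 4))
img[0:2, 0:2] = 1.0

cols, rows = projections(img)
# cols -> [2., 2., 0., 0.]: the structure sits in the left columns
# rows -> [2., 2., 0., 0.]: ... and in the top rows
```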

However, we need deep architectures to learn. Hence, combining the old and the new is something we are heavily investing in.

How can artificially intelligent search and categorization of medical images accelerate disease research and improve patient care?

If we abandon the classification-oriented AI (making yes/no decisions), which aims at eliminating the diagnostic role of the pathologist, then we are left with mining-oriented AI that identifies and extracts similar patterns from large archives of medical images.

Showing similar images to the pathologist when s/he is examining a new case is not, in itself, extraordinary. It becomes extraordinary when the retrieved cases are annotated with the information of evidently diagnosed patients from the past.

Then we have something that has never been done before: we are tapping into the collective wisdom of the physicians themselves to provide them with computational consultation. Consulting other pathologists for difficult cases is a common practice.

However, the image search will give us access to “computationally” consult hundreds of pathologists across the country (and the globe) through digital records. This will expedite the process, reduce error rates, save lives, release valuable pathologist time for other tasks (e.g. research and education), and finally save costs.
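At its core, such a computational consultation is a nearest-neighbour search over feature vectors of past cases; a deliberately tiny sketch with hypothetical two-dimensional features:

```python
import numpy as np

def search(archive, query, top_k=3):
    """Indices of archived cases closest to the query (cosine similarity)."""
    a = archive / np.linalg.norm(archive, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return np.argsort(-(a @ q))[:top_k]

# hypothetical feature vectors of past cases, each tagged with the
# diagnosis of an evidently diagnosed patient
archive = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
diagnoses = ["benign", "benign", "malignant", "malignant"]

hits = search(archive, query=np.array([0.95, 0.05]), top_k=2)
consultation = [diagnoses[i] for i in hits]  # ["benign", "benign"]
```

In practice the features would come from a network or from projections such as Radon barcodes, and the archive would span millions of cases, but the retrieval principle is the same.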

Where do you see the future of machine learning with regards to medical imaging?

Perhaps many of us are hoping that radiogenomics will bring a revolutionary change in disease diagnosis that, among other things, may make the biopsy superfluous, as some researchers audaciously envision.

However, for the foreseeable future, we should rather look at “consensus building”. The manifestation of medical diagnosis difficulty is clearly visible in the so-called “inter-observer variability”; doctors cannot agree on a diagnosis or measurement when given the same case.

For some cases like breast and lung cancer the disagreement can approach and even exceed 50% when the exact location of the malignancy is involved. Using AI for identifying and retrieving similar abnormalities and malignancies will open the horizon for building consensus.

If we can find several thousand cases of the past patients that can be confidently matched with the data of the current patient, then a “computational consensus” is not far away. The beauty of it is, again, that the AI will not be making any diagnostic decision but just making the existing medical wisdom accessible, the wisdom that is currently fallow under terabytes of digital dust.

As the technology advances, will there be a need for pathologists in the future?

The tasks and workload of pathologists will certainly go through some transformation, but the sensitive nature of what they do on the one hand, and the breadth and depth of knowledge they hold on the other, make them indispensable as the ultimate recognition entities.

It is imaginable that in the near future, by employing high-level visual programming languages, pathologists will design and teach their own AI agents for very specific tasks. Not engineers, not computer scientists: it will be the pathologists who have the medical knowledge to be in charge of exploiting AI's capabilities.

Mechanical engineering researchers are using AI and machine learning technologies to enhance the products we use in everyday life.


“Who is Bram Stoker?” Those three words demonstrated the amazing potential of artificial intelligence. It was the answer to a final question in a particularly memorable 2011 episode of Jeopardy!. The three competitors were former champions Brad Rutter and Ken Jennings, and Watson, a super computer developed by IBM. By answering the final question correctly, Watson became the first computer to beat a human on the famous quiz show.

“In a way, Watson winning Jeopardy! seemed unfair to people,” says Jeehwan Kim, the Class ‘47 Career Development Professor and a faculty member of the MIT departments of Mechanical Engineering and Materials Science and Engineering. “At the time, Watson was connected to a super computer the size of a room while the human brain is just a few pounds. But the ability to replicate a human brain’s ability to learn is incredibly difficult.”

Kim specializes in machine learning, which relies on algorithms to teach computers how to learn like a human brain. “Machine learning is cognitive computing,” he explains. “Your computer recognizes things without you telling the computer what it’s looking at.”

Machine learning is one example of artificial intelligence in practice. While the phrase “machine learning” often conjures up science fiction typified in shows like "Westworld" or "Battlestar Galactica," smart systems and devices are already pervasive in the fabric of our daily lives. Computers and phones use face recognition to unlock. Systems sense and adjust the temperature in our homes. Devices answer questions or play our favorite music on demand. Nearly every major car company has entered the race to develop a safe self-driving car.

For any of these products to work, the software and hardware both have to work in perfect synchrony. Cameras, tactile sensors, radar, and light detection all need to function properly to feed information back to computers. Algorithms need to be designed so these machines can process these sensory data and make decisions based on the highest probability of success.

Kim and much of the faculty at MIT’s Department of Mechanical Engineering are creating new software that connects with hardware to create intelligent devices. Rather than building the sentient robots romanticized in popular culture, these researchers are working on projects that improve everyday life and make humans safer, more efficient, and better informed.

Making portable devices smarter

Jeehwan Kim holds up a sheet of paper. If he and his team are successful, one day the power of a super computer like IBM’s Watson will be shrunk down to the size of one sheet of paper. “We are trying to build an actual physical neural network on a letter paper size,” explains Kim.

To date, most neural networks have been software-based and made using the conventional Von Neumann computing method. Kim, however, has been using neuromorphic computing methods.

“Neuromorphic computer means portable AI,” says Kim. “So, you build artificial neurons and synapses on a small-scale wafer.” The result is a so-called ‘brain-on-a-chip.’

Rather than compute information from binary signaling, Kim’s neural network processes information like an analog device. Signals act like artificial neurons and move across thousands of arrays to particular cross points, which function like synapses. With thousands of arrays connected, vast amounts of information could be processed at once. For the first time, a portable piece of equipment could mimic the processing power of the brain.
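The physics can be mimicked in a few lines: in a crossbar, input voltages on the rows and synaptic conductances at the cross points produce, via Ohm's and Kirchhoff's laws, output currents on the columns, so one physical step performs a full matrix-vector multiplication (all numbers below are illustrative):

```python
import numpy as np

# Synaptic conductances at the cross points (rows: inputs, cols: outputs).
conductance = np.array([[1.0, 0.5],
                        [0.2, 0.8],
                        [0.0, 1.0]])

# Input voltages applied to the rows ("neuron" activations).
voltages = np.array([0.3, 1.0, 0.5])

# Output currents summed on the columns: I_j = sum_i V_i * G[i, j].
currents = voltages @ conductance
# currents -> [0.5, 1.45]
```

A digital computer performs these multiply-accumulates one after another; the crossbar does them all at once in the analog domain, which is where the speed and power advantage comes from.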

“The key with this method is you really need to control the artificial synapses well. When you’re talking about thousands of cross points, this poses challenges,” says Kim.

According to Kim, the design and materials that have been used to make these artificial synapses thus far have been less than ideal. The amorphous materials used in neuromorphic chips make it incredibly difficult to control the ions once voltage is applied.

In a Nature Materials study published earlier this year, Kim found that when his team made a chip out of silicon germanium they were able to control the current flowing out of the synapse and reduce variability to 1 percent. With control over how the synapses react to stimuli, it was time to put their chip to the test.

“We envision that if we build up the actual neural network with material we can actually do handwriting recognition,” says Kim. In a computer simulation of their new artificial neural network design, they provided thousands of handwriting samples. Their neural network was able to accurately recognize 95 percent of the samples.

“If you have a camera and an algorithm for the handwriting data set connected to our neural network, you can achieve handwriting recognition,” explains Kim.

While building the physical neural network for handwriting recognition is the next step for Kim’s team, the potential of this new technology goes beyond handwriting recognition. “Shrinking the power of a super computer down to a portable size could revolutionize the products we use,” says Kim. “The potential is limitless – we can integrate this technology in our phones, computers, and robots to make them substantially smarter.”

Making homes smarter

While Kim is working on making our portable products more intelligent, Professor Sanjay Sarma and Research Scientist Josh Siegel hope to integrate smart devices within the biggest product we own: our homes.

One evening, Sarma was in his home when one of his circuit breakers kept going off. This circuit breaker — known as an arc-fault circuit interrupter (AFCI) — was designed to shut off power when an electric arc is detected to prevent fires. While AFCIs are great at preventing fires, in Sarma’s case there didn’t seem to be an issue. “There was no discernible reason for it to keep going off,” recalls Sarma. “It was incredibly distracting.”

AFCIs are notorious for such ‘nuisance trips,’ which disconnect safe objects unnecessarily. Sarma, who also serves as MIT's vice president for open learning, turned his frustration into opportunity. If he could embed the AFCI with smart technologies and connect it to the ‘internet of things,’ he could teach the circuit breaker to learn when a product is safe or when a product actually poses a fire risk.

“Think of it like a virus scanner,” explains Siegel. “Virus scanners are connected to a system that updates them with new virus definitions over time.” If Sarma and Siegel could embed similar technology into AFCIs, the circuit breakers could detect exactly what product is being plugged in and learn new object definitions over time.

If, for example, a new vacuum cleaner is plugged into the circuit breaker and the power shuts off without reason, the smart AFCI can learn that it’s safe and add it to a list of known safe objects. The AFCI learns these definitions with the aid of a neural network. But, unlike Jeehwan Kim’s physical neural network, this network is software-based.

The neural network is built by gathering thousands of data points during simulations of arcing. Algorithms are then written to help the network assess its environment, recognize patterns, and make decisions based on the probability of achieving the desired outcome. With the help of a $35 microcomputer and a sound card, the team can cheaply integrate this technology into circuit breakers.
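A toy sketch of the "known safe objects" idea described above (signatures, names, and thresholds are entirely made up): each device leaves a spectral signature in the line current; the breaker trips only when a new signature matches nothing in its definitions, and nuisance trips are fixed by adding a new definition:

```python
import numpy as np

# Known-safe spectral "signatures" (entirely invented numbers).
known_safe = {
    "drill":  np.array([0.9, 0.1, 0.4]),
    "vacuum": np.array([0.2, 0.8, 0.5]),
}

def classify(signature, threshold=0.3):
    """Nearest known-safe signature, or a trip if nothing is close enough."""
    name, dist = min(((n, float(np.linalg.norm(signature - s)))
                      for n, s in known_safe.items()), key=lambda t: t[1])
    return name if dist < threshold else "unknown -> trip"

first = classify(np.array([0.85, 0.15, 0.45]))  # matches the drill
second = classify(np.array([0.0, 0.0, 0.0]))    # matches nothing -> trip

# after a nuisance trip, the new device's signature joins the definitions
known_safe["new vacuum"] = np.array([0.0, 0.05, 0.0])
third = classify(np.array([0.0, 0.05, 0.0]))    # now recognised as safe
```

The real system replaces this nearest-signature lookup with a trained neural network, but the learn-and-share loop over the internet of things is the same.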

As the smart AFCI learns about the devices it encounters, it can simultaneously distribute its knowledge and definitions to every other home using the internet of things.

“Internet of things could just as well be called ‘intelligence of things’,” says Sarma. “Smart, local technologies with the aid of the cloud can make our environments adaptive and the user experience seamless.”

Circuit breakers are just one of many ways neural networks can be used to make homes smarter. This kind of technology can control the temperature of your house, detect when there’s an anomaly such as an intrusion or burst pipe, and run diagnostics to see when things are in need of repair.

“We’re developing software for monitoring mechanical systems that’s self-learned,” explains Siegel. “You don’t teach these devices all the rules, you teach them how to learn the rules.”

Making manufacturing and design smarter

Artificial intelligence can not only help improve how users interact with products, devices, and environments. It can also improve the efficiency with which objects are made by optimizing the manufacturing and design process.

“Growth in automation along with complementary technologies including 3-D printing, AI, and machine learning compels us to, in the long run, rethink how we design factories and supply chains,” says Associate Professor A. John Hart.

Hart, who has done extensive research in 3-D printing, sees AI as a way to improve quality assurance in manufacturing. 3-D printers incorporating high-performance sensors that are capable of analyzing data on the fly will help accelerate the adoption of 3-D printing for mass production.

“Having 3-D printers that learn how to create parts with fewer defects and inspect parts as they make them will be a really big deal — especially when the products you’re making have critical properties such as medical devices or parts for aircraft engines,” Hart explains.

The very process of designing the structure of these parts can also benefit from intelligent software. Associate Professor Maria Yang has been looking at how designers can use automation tools to design more efficiently. “We call it hybrid intelligence for design,” says Yang. “The goal is to enable effective collaboration between intelligent tools and human designers.”

In a recent study, Yang and graduate student Edward Burnell tested a design tool with varying levels of automation. Participants used the software to pick nodes for a 2-D truss of either a stop sign or a bridge. The tool would then automatically come up with optimized solutions based on intelligent algorithms for where to connect nodes and the width of each part.

Making robots smarter

If there is anything on MIT’s campus that most closely resembles the futuristic robots of science fiction, it would be Professor Sangbae Kim’s robotic cheetah. The four-legged creature senses its surrounding environment using LIDAR technologies and moves in response to this information. Much like its namesake, it can run and leap over obstacles.

Kim’s primary focus is on navigation. “We are building a very unique system specially designed for dynamic movement of the robot,” explains Kim. “I believe it is going to reshape the interactive robots in the world. You can think of all kinds of applications — medical, health care, factories.”

Kim sees opportunity to eventually connect his research with the physical neural network his colleague Jeehwan Kim is working on. “If you want the cheetah to recognize people, voice, or gestures, you need a lot of learning and processing,” he says. “Jeehwan’s neural network hardware could possibly enable that someday.”

Combining the power of a portable neural network with a robot capable of skillfully navigating its surroundings could open up a new world of possibilities for human and AI interaction. This is just one example of how researchers in mechanical engineering may one day collaborate to bring AI research to the next level.

While we may be decades away from interacting with intelligent robots, artificial intelligence and machine learning have already found their way into our routines. Whether it’s using face and handwriting recognition to protect our information, tapping into the internet of things to keep our homes safe, or helping engineers build and design more efficiently, the benefits of AI technologies are pervasive.

The science fiction fantasy of a world overtaken by robots is far from the truth. “There’s this romantic notion that everything is going to be automatic,” adds Maria Yang. “But I think the reality is you’re going to have tools that will work with people and help make their daily life a bit easier.”

Sony’s new chief executive has positioned data and artificial intelligence at the centre of its survival strategy, warning that the likes of Amazon and Google pose an existential threat to the Japanese technology and entertainment group.

“The data mega players [such as Google, Amazon and Facebook] are so powerful they are capable of doing all kinds of things,” Kenichiro Yoshida said in his first media session since taking the helm of Sony in April. “The big challenge for our survival lies in the extent to which we can take control of data and AI. I personally feel a strong sense of crisis.”

The comments by Mr Yoshida come as a revitalised Sony is looking to step up investment in entertainment content and technology. A day earlier, the group struck a $2.3bn deal to buy outright control of EMI Music Publishing, taking advantage of a recovery in the music industry driven by streaming services.

Following a decade of deep losses driven by its ailing consumer electronics division, Sony has increased its focus on subscription revenue from online gaming and streaming of videos and music.

As part of that strategy, Mr Yoshida said the company would take a more strategic approach to collecting data from its users across a range of devices and platforms, spanning PlayStation games, financial services and mobile phones.

Sony does not intend to compete directly with the huge data platforms operated by Apple and other technology giants. But Mr Yoshida said Sony could do better in utilising its own data trove — such as the 80m monthly active users on Sony’s cloud gaming service PlayStation Network — to create content that matched users’ preferences.

“We want to remain close to our users and I think that’s how we can survive,” Mr Yoshida said.

For this reason, Sony will continue to offer its PlayStation Vue internet streaming TV service despite calls by some analysts to give up the effort in a clearly crowded market led by Netflix.

“It is clear as crystal that Sony has no competitive advantage in this business. It has not reached even a 1m user base in the last three years,” said Jefferies analyst Atul Goyal. But Mr Yoshida said PS Vue offered valuable real-time data on viewers’ preferences.

In February, Sony announced plans to launch a ride-hailing service in partnership with several Japanese taxi companies to obtain data on vehicles. The company is looking to expand the sale of image sensors, installed in Apple’s iPhones and other mobile devices, for use in self-driving cars.

“We want to contribute to the safety of mobility,” Mr Yoshida said, adding that Sony had no plans to make its own vehicle.

Nothing short of a concerted effort by the government, and the public and private sectors, will be enough if the UK is to be a world leader of artificial intelligence, argues Mike Rebeiro, head of digital and innovation at law firm Macfarlanes.

As part of its Industrial Strategy unveiled last November, the government identified artificial intelligence (AI) as one of the four 'Grand Challenges' facing the UK.

As such, the Department for Business, Energy & Industrial Strategy's (BEIS) stated ambition is "to put the UK at the forefront of the AI and data revolution", predicting that UK GDP will be 10% higher (or an additional £232bn per year) by 2030 as a direct result of AI.

The BEIS recently announced the UK Artificial Intelligence Sector Deal between the government and the private sector, outlining a package of £603m in new private and public sector funding for AI, and up to £342m from existing government funding.

The sector deal focuses on five areas:

• Infrastructure - in addition to the £1bn+ being invested in digital infrastructure, creating new data sharing frameworks to address the barriers of sharing publicly and privately held data to allow for the "fair and equitable data sharing between organisations in the private sector and between the private and public sectors"

• Ideas - boosting research and development spending to 2.4% of GDP by 2027, rising to 3% in the longer term

• People - growing digital skills in the workforce and creating at least 1,000 government-supported AI PhD places by 2025

• Business environment - the creation of a new AI Council, bringing together respected leaders from academia and industry, and the creation of a new government delivery body, the Office for Artificial Intelligence, as well as a new centre for data ethics and innovation

• Places - ensuring that businesses around the UK grow by using AI.

If the government and businesses can achieve these goals, there will be a growing investment and acquisition market in AI technologies and companies within the UK.

The week before the publication of the Sector Deal, the House of Lords Select Committee on Artificial Intelligence report was also published.

The report, AI in the UK: ready, willing and able?, concludes that the "UK is in a strong position to be among the world leaders in the development of artificial intelligence during the 21st century".

Nevertheless, the report also stated that the development of the UK as an AI hub will require not only the governance of existing legislation, but also new legal frameworks to be put in place.

Unlike other disruptive technologies, many forms of AI have the capacity to learn, make decisions independently, and decide the basis upon which they are going to make decisions, without human involvement or intervention.

Just a few short years ago, having “conversations” in human languages with machines was pretty much universally a frustratingly comedic process.


Today that has changed. While natural language processing (NLP) and recognition is far from perfect, thanks to machine learning algorithms it’s getting increasingly closer to a point where it will be harder to tell whether we are talking to a human or a computer.

Business has capitalized on this, with increasing numbers of chatbots deployed, usually in customer service functions but increasingly in internal processes and to assist in training.

Richard Socher, chief scientist at Salesforce, told me: “NLP is going to be incredibly important for business – it is going to fundamentally change how we provide services, how we understand sales processes and how we do marketing.

“Particularly on social media, you need NLP to understand the sentiment around your marketing messages and how people perceive your brand.”

Of course, this raises some issues, and one of the most glaring is, do people really want to talk to machines? From a business point of view it makes sense – it’s incalculably cheaper to carry on 1,000 simultaneous customer service conversations with a machine than with the giant human call center which would be needed to do the same job.

But from a customer point of view, are they gaining anything? Unless the service they receive is faster, more efficient and more useful, then they probably aren’t.

“I can’t speak for all chatbot deployments in the world – there are some that aren’t done very well,” says Socher.

“But in our case we’ve heard very positive feedback because when a bot correctly answers questions or fills your requirements it does it very, very fast.”

“In the end, users just want a quick answer. Originally, people thought they wanted to talk to a person because the alternative was to go through a ten-minute menu, or to listen to ten options and then have to press a button – that’s not fun, and it’s not fast or efficient.”

Key to achieving this efficient use of NLP technology are the concepts of aggregation and augmentation. Rather than thinking of a conversation exclusively taking place between one human and one machine, AI and chatbots can be used to monitor and draw insights from every conversation, and learn from them how to perform better in the next one.

Augmentation means that the machine doesn’t have to conduct the entire conversation. Chatbots can “step in” for routine tasks such as answering straightforward questions from an organization’s knowledge base, or taking payment details.

In other situations, the speed of real-time analytics available today means that bots can raise an alert when they detect, for example, a customer becoming irate – thanks to sentiment analytics - prompting a human operator to take over the chat or call.
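A deliberately crude sketch of such sentiment-triggered escalation (the word lists and threshold are invented; real systems use trained sentiment models):

```python
# Invented sentiment lexicon; production systems learn this from data.
NEGATIVE = {"useless", "angry", "terrible", "refund"}
POSITIVE = {"thanks", "great", "perfect"}

def sentiment(message):
    """Count positive words minus negative words in one message."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def route(conversation, threshold=-2):
    """Hand the chat to a human once cumulative sentiment drops too low."""
    score = sum(sentiment(m) for m in conversation)
    return "escalate to human" if score <= threshold else "bot continues"

happy = route(["great thanks"])                                 # bot continues
irate = route(["this is useless", "I am angry", "refund now"])  # escalate
```

The point is the routing logic, not the lexicon: the bot handles routine exchanges and surfaces only the conversations that need a person.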

Summarization is another highly useful function of NLP, and one which is likely to be increasingly rolled out to chatbots. Internally, bots will be able to quickly digest, process and report business data when it is needed, and new recruits can quickly bring themselves up to speed. For customer-facing functions, customers can receive summarized answers to questions involving product and service lines, or technical support issues.
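One common family of summarization techniques is extractive: score each sentence and keep the highest-scoring ones. The toy frequency heuristic below is only a sketch of that idea, not the method any of the products mentioned here actually use.

```python
# Illustrative extractive summarization: score each sentence by the
# frequency of its non-stopword tokens across the document, then keep
# the top-scoring sentences in their original order. A toy heuristic.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "it"}

def summarize(text: str, n: int = 1) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    tokens = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(tokens)

    def score(sentence: str) -> int:
        # A sentence scores the sum of its content words' document frequencies.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                   if w not in STOPWORDS)

    top = sorted(sentences, key=score, reverse=True)[:n]
    # Preserve the original ordering of the selected sentences.
    return " ".join(s for s in sentences if s in top)

doc = ("Quarterly revenue grew 12 percent. The growth was driven by chatbot sales. "
       "Chatbot sales doubled in the enterprise segment.")
print(summarize(doc, n=1))
```

Sentences that repeat the document's most frequent content words ("chatbot", "sales") score highest, which is why this heuristic tends to surface the central claim.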

Chatbots are a form of the ‘intelligent assistant’ technology which powers Siri or Google Assistant on your phone, or Cortana on your desktop. Generally though they are focused on one specific task within an organization.

One study found that 40% of large businesses have implemented this technology in some form, or will have done so by the end of 2019.

Among those, 46% said that NLP is used for voice to text dictation, 14% for customer services and 10% for other data analytics work.

Chatbots are also increasingly ubiquitous in collaborative working environments such as Slack, where they can monitor conversations between teams and provide relevant facts or statistics at pertinent points in the conversation.

In the future, chatbots will probably be able to take things even further and propose strategy and tactics for overcoming business problems.

Socher tells me “They will probably be able to help us craft marketing messages, based on understanding of the language of all the things that have been successful in the past.”

Another example could be customer service bots which can allocate resources to dealing with customer cases based on the classification and sentiment analysis of the conversations they are having.

As with all AI, development of NLP is far from a finished process and level of conversation we are able to have today will undoubtedly seem archaically stilted and unnatural in just a couple of years’ time.

But today, organizations are clearly becoming more comfortable with the idea of integrating chatbots and intelligent assistants into their processes, and confident that it will lead to improvements in efficiency and customer satisfaction.

Artificial Intelligence, AKA the shiny new toy in the marketer's toolkit.

Artificial Intelligence (AI) has become part of the business landscape, now accepted as a technology for many applications and platforms. Marketing, however, is one of the areas where AI is transforming how the work gets done. As such, it's also solving some marketing challenges across industries.

However, like other technology slowly making its way into all aspects of work and life, such as the Internet of Things (IoT) and autonomous vehicles, the transformation process of AI in marketing may not quite be there yet. And, that may be for the best. Here's the current state of AI's disruption of marketing.

AI's Impact on Marketing Science

Specific changes from AI's influence on marketing are already being felt, according to Charles (Chuck) Davis, co-founder and CTO of Element Data, a company behind an AI tool called Decision Cloud. “AI has enabled the evolution of search engines, recommendation engines, chatbots and voice data analysis and other technologies employed by marketers every day."

And, companies across industries are starting to understand how to incorporate AI and machine learning into their marketing efforts. Companies like Amazon and Netflix were early adopters. They used this technology to provide personalized recommendations to their customers. Although this marketing tactic is still used successfully, the marketing applications have progressed into many other areas.

Better Decisions Arrive Faster

Being able to make better decisions related to your marketing strategy means money well spent and better return on what you do use from the budget. If you could see the future to make informed predictions and execute on targeted actions, then you'd be making the best decisions and garnering the best results for doing so.

Catalant’s Pedro Pereira explains, “In sales and marketing, AI measures customer sentiment and tracks buying habits. Brands and advertisers use the information to make ecommerce more intuitive and for targeted promotions...AI creates efficiencies that wouldn’t be possible without sifting through piles of data.”

As you know, making the right decisions with the data you receive is challenging at best. That's where AI has made the difference. Companies like Element Data, Selligent Marketing Cloud, and SetSchedule are helping marketers take the massive volume of data that comes from all these channels and platforms and group it in a structured way to see what decisions need to be made. Questions about what motivates customers and why they act a certain way can be answered. And, those insights come more quickly than any human could ever figure out.

By speeding up more accurate decisions, business intelligence rapidly grows. As a result, the return increases further. That means more time and money for creating the right campaigns and spending more time interacting with each customer. AI then becomes truly worth its weight in gold.

Personalization Gets Help

Tailoring each and every experience for what could be thousands of customers seems like an impossible task. However, that is what today's customers want. Amazon and others have proved that it's possible, and they have AI to thank. And, so many other companies are seeing the potential.

According to Emme Yllesca, CEO of real estate investment platform Asset Column, “AI provides deep insights, allowing our brand to use that data in order to bridge the gap, resulting in a marketing message that hits the right pain points.” That means matching audience segments with specific problems and solutions, which can deliver a huge uplift in your response and success rate.

Aman Naimat, senior vice president of technology & engineering at Demandbase says personalization is at the crux of why marketing has to and will adopt AI. "Ultimately, marketing is all about how a brand communicates to its prospects and customers, and personalized, relevant customer experiences are the most effective way to reach their target audiences," says Naimat. "Think about how easy it is to filter out spam with the glance of an eye."

Naimat cautions that 1:1 conversations are difficult to have at scale. He believes the only way to achieve personalization at scale is to leverage AI and machine learning applications. "The knowledge you get from AI technology is akin to the knowledge most sales reps have when they research every single buyer in-depth. Today, many companies are already enabling this hyper-personalization at scale, creating context-rich conversations that help businesses understand, connect and relate to their audiences."

Content Marketing Is Efficient

With content in such demand, it's easy to focus on mass production. However, while quantity is important to a certain degree, it shouldn't put quality at risk. What you create must be relevant for numerous audiences but also be adjustable to each segment. As you know, that leads to a considerable amount of content to manage, organize, and put to work.

Creating these content assets most likely also consumed a large number of resources. Therefore, you want to be able to tap them, repurpose them, and leverage them again at will. That's again when AI becomes the marketing superhero. According to Jim Vernon, CEO of RockHer, “The majority of our content management uses artificial intelligence to some degree, allowing us to catalog, search and find any piece of content related to a specific search query.”

Go Deeper Into the Data

To beat out the competition means knowing more about the intended customer and existing base. It's in the data, but it's a race to find it first and understand what to do with it. “Consumer data is a very touchy subject,” says Saro Der Ohanesian, CEO of Vanguard Tax Relief, “and what and how data is collected is a completely different discussion.” As one simple example, we just need to look at how much Facebook has been in the news recently over its data collection.

Real estate is an ideal place to put AI's power to work in marketing to generate more effective results. For example, SetSchedule is a real estate marketing firm that has leveraged AI technology to create connections between realtors and local homeowners, home buyers, and investors to complete more property deals. The company uses AI to identify properties through predictive data. Then, it uses automated marketing to understand timing, seller and buyer intent, market conditions and more to develop leads that close more often than those produced by marketing processes without machine-learning capability.

Marcos Meneguzzi, EVP and Head of Cards and Unsecured Lending for HSBC, also sees firsthand how AI is impacting the customer experience in his organization. “Customers want companies to treat them like individuals who matter - not interchangeable sources of revenue. The greatest promise for AI is about optimization of data and the valuable insights they can provide leading to greater personalization. This allows companies like HSBC to enhance and tailor our customer experiences.”

HSBC uses AI to predict the redemption of loyalty program rewards associated with its new suite of credit cards. AI is also leveraged within fraud management, in both models and rule building, to detect anomalous behavior for the protection of the bank's customers and the firm. Launching soon, HSBC’s new chatbot will augment the expertise of its bankers by providing fast and accurate responses to a wide range of questions, reducing friction in getting answers and ultimately eliminating wait time.

In looking at future applications, Meneguzzi, states, “We’re actively evaluating and exploring additional innovative AI use cases across our businesses to deliver superior customer experiences. A number of projects look to improve the customer experience. This includes reducing fraud and card compromises. Others enable more personalized and relevant customer contacts within the personal banking space.”

Share the AI Love

Now, working with sales, customer service, and other areas of the business means sharing information and insights. And, it’s the CMO who can take the lead in pushing these efficiencies throughout the company by working with others on the executive team.

For example, this includes things like contract management. Although companies have typically relied on large sales platforms to cover this task, these platforms haven't been able to optimize the process the way AI could do. The technology can do a lot of the heavy lifting for the legal and sales teams while also protecting the contracts better than any other tools available.

More to Come

These are huge strides AI has made in moving the science of marketing forward. Other opportunities include Decision Intelligence. Not only will it change how CMOs make decisions, but it will also influence consumer decisions related to how, when, and where they spend their money. AI tools will learn what consumers have previously done, mimic that decision-making process, and then understand what to deliver to consumers to influence that decision.

First, there are other challenges. These include knowing where to start making changes internally with marketing tools to integrate AI with the various types of data, data sources, and channels. However, AI itself could help determine how you achieve that.

Second, companies have to think about becoming too dependent on AI. Jesse Wolfersberger, Senior Director of Decision Sciences for Maritz Motivation Solutions, recommends when integrating AI into your business, you need to have experienced professionals run the show. "Even after that, we recommend substantial testing and gradual roll-outs," he adds. "You don't want to be in the situation where you are taking actions based on an AI's recommendations and have it turn out that an analyst accidentally swapped the revenue column with the cost column.”

Naimat believes there is no risk in marketers becoming too dependent on AI. "Marketers will still need to drive AI tools that will help them do their jobs better and at scale," he said. "In fact, I believe that as AI advances, there will be a new class of marketers whose sole responsibility will be to drive this AI machinery, understand and take advantage of AI algorithms, and strategically point to the right data and goals which in turn will spark the integration between data and marketing, and ultimately, bring them closer together."

The real risk, as Naimat explained, is in the non-adoption of AI, with a loss of competitive advantage that data and insights can provide.

A word of warning to those who infiltrate the content pipeline with information that’s not factual: there’s heightened demand for new methods to distill the mountains of information we are presented with daily down to the unadulterated facts. People crave a way to cut through the opinions, marketing speak and propaganda to get to the truth. And technology just might be the solution we need to become data-driven decision-makers and objectively understand the information.


There are reasons why we struggle under the weight of fake or worthless content. Every 60 seconds, 160 million emails are sent, 98,000 tweets are shared on Twitter, 600 videos are uploaded to YouTube and 1,500 blog entries are created. Nobody but a machine could keep up with it all.

Not only do we struggle to determine whether politicians are telling us the truth, but marketers try to hook us with all kinds of products that are supposedly just what we need: better than the competition, the safest, the only ones that will get you your desired results. The hyperbole can be exhausting.

We have never experienced such a time when we have so much information and so many opinions thrown at us from so many angles. In response to our struggles, fact-checking organizations dedicated to dissecting and analyzing statements made by politicians and public figures now exist and are becoming increasingly visible.

As data continues to explode, the ability to rummage through it to find the truth required in a situation is essential. Consumers won’t be patient either. They want to find out anything they seek to know and they want to know it now. Brands will have to respond with truth and transparency if they hope to remain competitive.

Businesses are beginning to respond to their customers’ demands for facts. The big data-driven, machine-learning tech that is rolling out gives customers the raw material needed to measure and quantify absolute, objective facts and then act based on those findings, rather than rely on opinions and gut instincts so common today.

Checking Our Ads

AdVerif.ai offers a solution to verify ads so advertisers can keep an eye on where the content is displayed and publishers can check that content meets their policy. The tool augments the job of editorial staff with deep learning and Natural Language Processing capabilities to detect patterns that indicate spam, malware or inappropriate content. It also checks the content of ads and uses AI tools that leverage online knowledge repositories to either confirm facts or highlight potentially fake ones.

Facebook Fact Checking

Especially after the recent backlash against Facebook, the company is on a mission to regain user trust. Facebook has been working with four independent fact-checking organizations—Snopes, Politifact, ABC News and FactCheck.org—to verify the truthfulness of viral stories. New tools designed to avert the spread of misinformation will notify Facebook users when they try to share a story that has been flagged as false by these ‘independent fact-checkers.’ Facebook has also recently announced its plan to open two new AI labs that will work on creating an AI safety net for its users, tackling fake news and political propaganda as well as bullying on its platform.

Transparency of Reds and Whites

Alit Wine is leading the industry to “shine a light on the places that the wine industry doesn’t talk about,” founder Mark Tarlov says. One of those things that’s typically hush-hush in the industry is how much each element of the winemaking process costs. But, not Alit Wine. The company sells wine directly to consumers and details exactly how much each step of production costs for the wines it sells.

Big Brother in Reverse

Usually we’re concerned about the scrutiny of the government into our own affairs. But, Contratobook helps citizens scrutinize the work of government and public officials. Launched in Mexico in 2016 by a group of anonymous hackers, the company is an open-source platform that allows people to search, filter and comment on more than 1.6 million government bids and contracts dating back to 2002. For those citizens with a desire to do so, they can look at each entry’s details, including contract values, involved parties and start date, to detect irregular or inaccurate expenses.

Those brands, platforms and companies who build trust with their customer base via transparency and factual information that can be verified with data are expected to have the competitive edge in a world that has grown weary of the widespread dishonesty and misinformation that permeates our culture. Thanks to big data and machine learning, any company can now create more transparent and trustworthy systems we will all benefit from.


Humans have emerged as the “superior” species on earth, having adapted to our surroundings and significantly altered a wide variety of regions across the world. We have surpassed the intelligence of other species on planet earth, and our exploration has resulted in remarkable historical impacts. A quest to find other dominant species has led us to explore other planets and celestial bodies, and so far, we haven’t found one!

Human Imagination

Subsequently, we started dreaming about building machines that could match the ingenuity of humans. We started off by creating all types of machines to help us overcome our limitations. We went on to invent computers to extend our brain power in analyzing and comprehending large amounts of data for insights. With constant persistence, we continued our expedition in building intelligent machines, and now we have reached a tipping point where machines are reflecting the intellectual prowess of human beings. Thanks to Artificial Intelligence! We are currently in an era where the AI revolution is rewriting the power of technology. With such fast-paced progression, it’s only a matter of time before we reach the state of singularity (when machines surpass human thinking). At present, it’s challenging to anticipate when machines will reach this state; some predict it could happen anywhere between 40 and 70 years from now.

Modern Artificial Intelligence

Today, AI-powered machines have an intellect like that of an infant and are limited to mimicking routine and rudimentary tasks. Just as juveniles learn from the environment as they grow, these machines also learn from the environment, developing over time by acquiring numerous skills. Today’s machines are invented and supervised by humans. They are finely designed & maintained with utmost care and taught to think like humans as they advance. The hope is that, in due course of time, these machines would take over all the human tasks that are neither creative nor desirable, thus liberating us from drudgery. Soon, these machines are expected to be as intelligent as humans, at which point we can witness true collaboration between man and machine.

These collaborative bots should help us amplify human imagination, problem-solving and deep-thinking capabilities by several notches. Many mysteries of the universe could be unravelled by the unification of man and machine. This unification helps us overcome our limitations in correlating events, perceiving the deep linkages of causation and finding answers to complex problems. With intelligent machines by our side, we should be able to solve critical issues such as chronic diseases, global warming and disabilities. Many of our current science fiction scenarios could turn into reality – like inter-planetary travel, powering the earth completely with solar energy, controlling weather patterns, regenerating human organs with flesh & blood, and even defining a new future for creating babies!

The Bright Future

The final frontier in human evolution would be embedding AI in our bodies by implanting smart sensors, chips and electronic prosthetics. These embedded devices should help us see things we cannot see with our eyes, hear sounds we cannot hear today, touch objects (like fire) we cannot touch today, fly like birds and swim in deep oceans, making us superhumans on earth. In this final act, we would move from the unification of man and machine to a higher level – integration.

In the far future, machines may surpass human intelligence and take command & control of earth, and we humans could be working under the supervision of machines. But, knowing human psychology, it’s doubtful whether we would allow ourselves to be in such a situation – surrendering our supremacy to the machines. Since we are in control of the current machine evolution, we would certainly ensure we imbue these machines with certain characteristics so that they always treat humans as their masters and never cause us any harm.

The next few decades will surely be exciting as we can experience our science fiction fantasies playing out in our everyday lives.

Starting autumn 2018, the programme will be accepting a total of only 100 students a year

The Carnegie Mellon School of Computer Science (SCS) has launched the first undergraduate degree in artificial intelligence (AI) in the United States.

Starting autumn 2018, the programme appears fittingly rigorous, accepting a total of only 100 students a year. First-years can declare themselves AI majors only in the spring, after completing core mathematics and computer science classes in the SCS. The 100 second-, third- and fourth-year students who make it onto the programme will take additional courses in statistics and probability, computational modeling, machine learning and symbolic computation.

The degree will also involve an emphasis on ethics and social responsibility, as part of the SCS' desire to use AI to improve social conditions.

"Carnegie Mellon has an unmatched depth of expertise in AI, making us uniquely qualified to address this need for graduates who understand how the power of AI can be leveraged to help people," said Andrew Moore, dean of the School of Computer Science.

This degree programme continues the university's tradition of leading innovations in computer science and AI.

The SCS was one of the first schools in the US dedicated entirely to computer science, and in 1975, Allen Newell and Herbert A. Simon, researchers at Carnegie Mellon, received the A.M. Turing Award for contributions to AI. A total of twelve alumni and faculty have received Turing Awards, and a recent study by US News and World Report ranked the SCS at Carnegie Mellon the best computer science college in the US for AI.

In the UK, many universities already offer bachelor's degrees in computer science with an emphasis or modules in AI, which appears comparable to what Carnegie Mellon has planned.

While many fear that AI will replace humans in a large array of jobs, and this is arguably inevitable, initiatives such as this seem determined to use AI to fix social issues and improve quality of life rather than let the technology overtake humanity. If the creation of this programme proves anything, it's that the rise of AI will also create new jobs as it replaces people in old ones.

"It's an opportunity for us to shape what it means to be a degree program in AI as opposed to offering courses related to AI," said Reid Simmons, director of the new programme, according to the Carnegie website. "We want to be the first to offer an AI undergraduate degree. I'm sure we won't be the last. AI is here to stay."

India has ambitions to fire up its artificial intelligence capabilities — but experts say that it's unlikely to catch up with the U.S. and China, which are fiercely competing to be the world leader in the field.

An Indian government-appointed task force has released a comprehensive plan with recommendations to boost the AI sector in the country for at least the next five years — from developing AI technologies and infrastructure, to data usage and research.

The task force, appointed by India's Ministry of Commerce and Industry, proposes that the government work with the private sector to develop technologies, with a focus on smart cities and the country's power and water infrastructure.

It recommends a network of infrastructure — a testing facility, and six centers focusing on research in generating AI technologies, such as robotics, autonomous trucks and advanced financial technology.

A data center could be set up to "develop an autonomous AI machine that can work on multiple data streams in real time," the plan said. Calling data the "fuel that powers AI," the report said data marketplaces and exchanges could allow the "free flow of data."

Yet despite those aspirations, experts said that insufficient research support, poor data quality, and the lack of expertise in the field will be stumbling blocks for India.

Rishi Sharma, an associate research manager for enterprise infrastructure at research firm IDC, said: "India is lagging the global dominance presently in the AI space ... It will take time before (it) positions itself at a global standing."

India's Ministry of Commerce and Industry did not respond to a request for comment from CNBC.

India's plans to deploy A.I.

From crop management to fighting terrorism, there's a plan to deploy AI in 10 sectors in Asia's third-largest economy. Those include manufacturing, health care, agriculture, education and public utilities.

Here are a few areas proposed by the task force:

National defense: Secure public and critical infrastructure by predicting terror attacks, robots for counter terrorism operations.

Crop management: Using AI for crop prediction, health management and selection based on historical data and current factors. Crop monitoring and collection of data can be done by using drones and robots.

Environment: To automate and control — at the source — the levels of smoke and waste being released into the air, soil and water.

Can it succeed?

India’s efforts come as the AI competition between China and the U.S. intensifies, with China aiming to be the world leader in the space by 2030.

India, meanwhile, is late to the game, and will probably not dominate in the field except in a few areas, experts said.

IDC's Sharma said the country needs to resolve some issues first: "India stands a chance to compete at a global level, provided the hurdles are overcome." Challenges, she said, include poor data quality and integrity, as well as a lack of expertise.

Those critiques would not be news to New Delhi.

"The most important challenge in India is to collect, validate ... distribute AI-relevant data and making it accessible to organizations, people and systems without compromising privacy and ethics. Data is the bedrock of AI systems and reliability of AI systems depends primarily on quality and quantity of the data," the government report said.

Milan Sheth, a partner at EY covering intelligent automation, added: "There is a need to reskill a large number of people in a short span of time. It will take a couple of years, but tech developments will also take that same amount of time. To keep pace with adoption, that is the challenge."

While India is unlikely to be able to fully compete anytime soon, it can still aim to be a leader in a few areas such as industrial electronics, Sheth said.

"It will make a bid for dominating in a few areas but can't compete with the U.S. or China on academic investment," he said, adding that very few companies in India are getting sufficient funding for research.

India's GDP could reach $6 trillion in 2027 because of its digitization drive, according to a previous forecast by Morgan Stanley. That would make India the third-largest economy in the world — behind the U.S. and China, which recorded $18.5 trillion and $11.2 trillion in 2016 GDP, respectively.

From high speed internet to connected devices, innovation is transforming almost every aspect of our lives. The field of medicine is no different. U.K.-based health care business Babylon Health, for instance, is combining digital technology with human doctors.

The company has grand ambitions. "If we can make health care accessible, affordable, put it in the hands of every human being on earth, if we can do with health what Google did with information, that's a phenomenal thing to have achieved," Ali Parsa, Babylon Health's founder and CEO, told CNBC's Nadine Dereza.

Parsa went on to stress just how much things were changing in the field of medicine.

"Everything we know about intervention in medicine is being reinvented, whether it is electro biology or synthetic biology, whether it is laser manipulation or audiology intervention, whether it is organ reconstruction or DNA reengineering," he said. "We are reinventing the way we can intervene in your body in a way that we could never imagine before."

Babylon Health is not the only organisation looking to use technology to transform the way patients are treated.

"We want to work hard to more quickly diagnose our patients so we can begin treatment, and more quickly diagnosing requires artificial intelligence," Kevin Mahoney, senior vice president and chief administrative officer for the University of Pennsylvania Health System, said.

"It's going to require using big data," he added. "It's going to require looking for those patterns that we don't quite see, but always following it back through the physician who's been trained how to interpret that data."

The issue of whether we will eventually be treated by computers rather than humans is an intriguing one, but Mahoney sought to paint a more collaborative future.

"I'm not advocating that we're ever going to get to the point where the computer treats you," he said.

"But the amount of information that doctors are being told on a daily basis about new treatments, new evidence that's out there, artificial intelligence is going to be required to help condense that down and bring it directly to the patient's room so the doctor can intervene as effectively as possible."

Onstage at I/O 2018, Google showed off a jaw-dropping new capability of Google Assistant: in the not too distant future, it’s going to make phone calls on your behalf. CEO Sundar Pichai played back a phone call recording that he said was placed by the Assistant to a hair salon. The voice sounded incredibly natural; the person on the other end had no idea they were talking to a digital AI helper. Google Assistant even dropped in a super casual “mmhmmm” early in the conversation.

Pichai reiterated that this was a real call using Assistant and not some staged demo. “The amazing thing is that Assistant can actually understand the nuances of conversation,” he said. “We’ve been working on this technology for many years. It’s called Google Duplex.”

Duplex really feels like next-level AI stuff, but Google’s chief executive said it’s still very much under development. Google plans to conduct early testing of Duplex inside Assistant this summer “to help users make restaurant reservations, schedule hair salon appointments, and get holiday hours over the phone.”

Pichai says the Assistant can react intelligently even when a conversation “doesn’t go as expected” and veers off course a bit from the given objective. “We’re still developing this technology, and we want to work hard to get this right,” he said. “We really want it to work in cases, say, if you’re a busy parent in the morning and your kid is sick and you want to call for a doctor’s appointment.” Google has published a blog post with more details and soundbites of Duplex in action.

“The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.” Google envisions other use cases like having Assistant call businesses and inquire about their hours to help keep Maps listings up to date. The company says it wants to be transparent about where and when Duplex is being used, as a voice that sounds this realistic and convincing is certain to raise some questions.

In current testing, Google notes that Duplex successfully completes most conversations and tasks on its own without any intervention from a person on Google’s end. But there are cases where it gets overwhelmed and hands off to a human operator. This section on the ins and outs of Duplex is very interesting:

The Google Duplex system is capable of carrying out sophisticated conversations and it completes the majority of its tasks fully autonomously, without human involvement. The system has a self-monitoring capability, which allows it to recognize the tasks it cannot complete autonomously (e.g., scheduling an unusually complex appointment). In these cases, it signals to a human operator, who can complete the task.

To train the system in a new domain, we use real-time supervised training. This is comparable to the training practices of many disciplines, where an instructor supervises a student as they are doing their job, providing guidance as needed, and making sure that the task is performed at the instructor’s level of quality. In the Duplex system, experienced operators act as the instructors. By monitoring the system as it makes phone calls in a new domain, they can affect the behavior of the system in real time as needed. This continues until the system performs at the desired quality level, at which point the supervision stops and the system can make calls autonomously.
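The self-monitoring handoff described above can be sketched as a simple confidence check. This is purely illustrative: the threshold, function name, and confidence scores are invented for the sketch and are not part of Google's actual Duplex system.

```python
# Hypothetical sketch of a self-monitoring handoff: below a confidence
# threshold, the system signals a human operator instead of proceeding.
HANDOFF_THRESHOLD = 0.8  # illustrative value, not Google's

def next_step(task_confidence):
    """Continue autonomously, or hand the task to a human operator."""
    if task_confidence >= HANDOFF_THRESHOLD:
        return "continue autonomously"
    return "hand off to human operator"

print(next_step(0.95))  # a routine booking the system handles itself
print(next_step(0.40))  # an unusually complex appointment
```

The point of the sketch is the design choice Google describes: the system does not need to handle every call, only to know reliably when it cannot.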

Every big tech company is an AI company these days, but none more so than Google. To underline the point ahead of its I/O developers conference, the company has rebranded its Google Research division as Google AI, reflecting the centrality of artificial intelligence to the company’s future.

In a blog post announcing the news, the company said the rebrand was to “better reflect [its] commitment” to integrating AI into various services. It follows an organizational reshuffle last month which saw AI product development split from Google’s search efforts, and veteran Googler Jeff Dean taking the helm of the new division. A newly revamped homepage for Google AI also emphasizes more than just the company’s consumer products, highlighting recently published research in topics like health and astronomy and open-source tools used by the AI community worldwide, like the machine learning framework TensorFlow. (Important to note also: non-AI research will still be done in the new “Google AI” division.)

The homepage for Google AI.

This focus on research and community contrasts slightly with Microsoft, which has also been pushing its AI credentials this week at its Build conference. But for Microsoft the message has been more about AI ethics and morality, with the company launching a new $25 million AI for Accessibility fund to develop the tech for people with disabilities. Google does plenty of work in the field of AI ethics too, but it’s interesting to see these two titans of the tech world trying to differentiate their message on the same subject.

Last month in a letter to investors, Google’s co-founder Sergey Brin warned of the threats posed by AI, like job destruction, biased algorithms, and misinformation. He also called AI “the most significant development in computing in my lifetime.” Google’s rebranding of its research division drives that point home.


Supersmart algorithms won't take all the jobs, but they are learning faster than ever, doing everything from medical diagnostics to serving up ads.

Artificial intelligence is overhyped—there, we said it. It’s also incredibly important.

Superintelligent algorithms aren’t about to take all the jobs or wipe out humanity. But software has gotten significantly smarter of late. It’s why you can talk to your friends as an animated poop on the iPhone X using Apple’s Animoji, or ask your smart speaker to order more paper towels.

Tech companies’ heavy investments in AI are already changing our lives and gadgets, and laying the groundwork for a more AI-centric future.

The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples, rather than by relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
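"Training by example rather than by explicit programming" can be made concrete with a toy classifier. The sketch below uses a one-nearest-neighbour rule, one of the simplest example-driven learners; the data and labels are invented for illustration.

```python
# A minimal example-driven classifier: label a new point with the label
# of the closest training example, rather than with hand-written rules.
def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`."""
    closest = min(train,
                  key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], query)))
    return closest[1]

# (feature vector, label) pairs - e.g. invented [size, fluffiness] scores
examples = [((1.0, 9.0), "cat"), ((1.2, 8.5), "cat"),
            ((9.0, 1.0), "car"), ((8.5, 1.5), "car")]

print(nearest_neighbour(examples, (1.1, 8.8)))  # -> cat
print(nearest_neighbour(examples, (9.2, 0.9)))  # -> car
```

No rule for "cat" was ever written down; the behaviour comes entirely from the examples, which is the essence of the machine learning approach described above.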

For most of us, the most obvious results of the improved powers of AI are neat new gadgets and experiences such as smart speakers, or being able to unlock your iPhone with your face. But AI is also poised to reinvent other areas of life. One is health care. Hospitals in India are testing software that checks images of a person’s retina for signs of diabetic retinopathy, a condition frequently diagnosed too late to prevent vision loss. Machine learning is vital to projects in autonomous driving, where it allows a vehicle to make sense of its surroundings.

There’s evidence that AI can make us happier and healthier. But there’s also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.

The Beginnings of Artificial Intelligence

Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. He had high hopes of a breakthrough toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”

Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a proper academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s Arthur Samuel created programs that learned to play checkers. In 1962 one scored a win over a master at the game. In 1967 a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for tasks like understanding language. Others were inspired by the importance of learning to human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone, as computers mastered more tasks that could previously be done only by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by how brain cells work, known as artificial neural networks. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
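The "connections adjust as the network processes training data" idea can be shown with a single artificial neuron. This is a deliberately tiny sketch (one neuron learning an AND-like rule via gradient descent), with made-up learning rate and iteration counts; real deep networks stack many layers of such units.

```python
# A toy one-neuron "network": weights are nudged after each example
# until the unit's output matches the training data (logical AND).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w1, w2, bias = 0.0, 0.0, 0.0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(5000):                 # repeated passes over the data
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + bias)
        err = target - out            # how wrong was the prediction?
        w1 += 0.5 * err * x1          # adjust each connection a little
        w2 += 0.5 * err * x2          # in the direction that reduces
        bias += 0.5 * err             # the error

print(round(sigmoid(w1 + w2 + bias)))  # input (1, 1) -> 1
print(round(sigmoid(bias)))            # input (0, 0) -> 0
```

The update rule here is the gradient step for logistic regression; deep learning applies the same "adjust connections to reduce error" principle across millions of connections and many layers.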

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes, and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book co-authored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.

Not everyone was convinced, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data and powerful computer chips could give machines new powers of perception.

In one notable result, researchers at the University of Toronto trounced rivals in an annual competition where software is tasked with categorizing images. In another, researchers from IBM, Microsoft, and Google teamed up to publish results showing deep learning could also deliver a significant jump in the accuracy of speech recognition. Tech companies began frantically hiring all the deep-learning experts they could find.

The Future of Artificial Intelligence

Even if progress on making artificial intelligence smarter stops tomorrow, don’t expect to stop hearing about how it’s changing the world.

Big tech companies such as Google, Microsoft, and Amazon have amassed strong rosters of AI talent and impressive arrays of computers to bolster their core businesses of targeting ads or anticipating your next purchase.

They’ve also begun trying to make money by inviting others to run AI projects on their networks, which will help propel advances in areas such as health care or national security. Improvements to AI hardware, growth in training courses in machine learning, and open source machine-learning projects will also accelerate the spread of AI into other industries.

Meanwhile, consumers can expect to be pitched more gadgets and services with AI-powered features. Google and Amazon in particular are betting that improvements in machine learning will make their virtual assistants and smart speakers more powerful. Amazon, for example, has devices with cameras to look at their owners and the world around them.

The commercial possibilities make this a great time to be an AI researcher. Labs investigating how to make smarter machines are more numerous and better-funded than ever. And there’s plenty to work on: Despite the flurry of recent progress in AI and wild prognostications about its near future, there are still many things that machines can’t do, such as understanding the nuances of language, common-sense reasoning, and learning a new skill from just one or two examples. AI software will need to master tasks like these if it is to get close to the multifaceted, adaptable, and creative intelligence of humans. One deep-learning pioneer, Google’s Geoff Hinton, argues that making progress on that grand challenge will require rethinking some of the foundations of the field.

As AI systems grow more powerful, they will rightly invite more scrutiny. Government use of software in areas such as criminal justice is often flawed or secretive, and corporations like Facebook have begun confronting the downsides of their own life-shaping algorithms. More powerful AI has the potential to create worse problems, for example by perpetuating historical biases and stereotypes against women or black people. Civil-society groups and even the tech industry itself are now exploring rules and guidelines on the safety and ethics of AI. For us to truly reap the benefits of machines getting smarter, we’ll need to get smarter about machines.

NASA spacecraft typically rely on human-controlled radio systems to communicate with Earth. As collection of space data increases, NASA looks to cognitive radio, the infusion of artificial intelligence into space communications networks, to meet demand and increase efficiency.

“Modern space communications systems use complex software to support science and exploration missions,” said Janette C. Briones, principal investigator in the cognitive communication project at NASA’s Glenn Research Center in Cleveland, Ohio. “By applying artificial intelligence and machine learning, satellites control these systems seamlessly, making real-time decisions without awaiting instruction.”

To understand cognitive radio, it’s easiest to start with ground-based applications. In the U.S., the Federal Communications Commission (FCC) allocates portions of the electromagnetic spectrum used for communications to various users. For example, the FCC allocates spectrum to cell service, satellite radio, Bluetooth, Wi-Fi, etc. Imagine the spectrum divided into a limited number of taps connected to a water main.

What happens when no faucets are left? How could a device access the electromagnetic spectrum when all the taps are taken?

Software-defined radios like cognitive radio use artificial intelligence to employ underutilized portions of the electromagnetic spectrum without human intervention. These “white spaces” are currently unused, but already licensed, segments of the spectrum. The FCC permits a cognitive radio to use the frequency while unused by its primary user until the user becomes active again.

In terms of our metaphorical watering hole, cognitive radio draws on water that would otherwise be wasted. The cognitive radio can use many “faucets,” no matter the frequency of that “faucet.” When a licensed device stops using its frequency, cognitive radio draws from that customer’s “faucet” until the primary user needs it again. Cognitive radio switches from one white space to another, using electromagnetic spigots as they become available.
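The switching behaviour described above can be sketched as a simple scan-and-yield loop. Everything here is illustrative: real cognitive radios sense the spectrum physically and learn usage patterns, while this toy just reads invented channel states.

```python
# Illustrative white-space selection: use the first channel whose
# licensed (primary) user is idle, and vacate it when that user returns.
def pick_white_space(channels):
    """Return the first channel name whose primary user is idle."""
    for name, primary_active in channels:
        if not primary_active:
            return name
    return None  # no faucet free - wait and rescan

spectrum = [("ch_1", True), ("ch_2", False), ("ch_3", True)]
print(pick_white_space(spectrum))  # -> ch_2

# The primary user on ch_2 comes back; the radio hops to another gap.
spectrum = [("ch_1", True), ("ch_2", True), ("ch_3", False)]
print(pick_white_space(spectrum))  # -> ch_3
```

The intelligence in a real cognitive radio lies in predicting which "faucets" will be free and for how long, rather than simply scanning as above.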

“The recent development of cognitive technologies is a new thrust in the architecture of communications systems,” said Briones. “We envision these technologies will make our communications networks more efficient and resilient for missions exploring the depths of space. By integrating artificial intelligence and cognitive radios into our networks, we will increase the efficiency, autonomy and reliability of space communications systems.”

For NASA, the space environment presents unique challenges that cognitive radio could mitigate. Space weather, electromagnetic radiation emitted by the sun and other celestial bodies, fills space with noise that can interrupt certain frequencies.

“Glenn Research Center is experimenting in creating cognitive radio applications capable of identifying and adapting to space weather,” said Rigoberto Roche, a NASA cognitive engine development lead at Glenn. “They would transmit outside the range of the interference or cancel distortions within the range using machine learning.”

In the future, a NASA cognitive radio could even learn to shut itself down temporarily to mitigate radiation damage during severe space weather events. Adaptive radio software could circumvent the harmful effects of space weather, increasing science and exploration data returns.

A cognitive radio network could also suggest alternate data paths to the ground. These processes could prioritize and route data through multiple paths simultaneously to avoid interference. The cognitive radio’s artificial intelligence could also allocate ground station downlinks just hours in advance, as opposed to weeks, leading to more efficient scheduling.

Additionally, cognitive radio may make communications network operations more efficient by decreasing the need for human intervention. An intelligent radio could adapt to new electromagnetic landscapes without human help and predict common operational settings for different environments, automating time-consuming processes previously handled by humans.

The Space Communications and Navigation (SCaN) Testbed aboard the International Space Station provides engineers and researchers with tools to test cognitive radio in the space environment. The testbed houses three software-defined radios in addition to a variety of antennas and apparatus that can be configured from the ground or other spacecraft.

“The testbed keeps us honest about the environment in orbit,” said Dave Chelmins, project manager for the SCaN Testbed and cognitive communications at Glenn. “While it can be simulated on the ground, there is an element of unpredictability to space. The testbed provides this environment, a setting that requires the resiliency of technology advancements like cognitive radio.”

Chelmins, Roche and Briones are just a few of many NASA engineers adapting cognitive radio technologies to space. As with most terrestrial technologies, cognitive techniques can be more challenging to implement in space due to orbital mechanics, the electromagnetic environment and interactions with legacy instruments. In spite of these challenges, integrating machine learning into existing space communications infrastructure will increase the efficiency, autonomy and reliability of these systems.

The SCaN program office at NASA Headquarters in Washington provides strategic and programmatic oversight for communications infrastructure and development. Its research provides critical improvements in connectivity from spacecraft to ground.

From gender-neutral AI to coding

At a time when diversity remains a front-burner issue within the tech industry, this year’s Consumer Electronics Show—the tech world’s largest conference—is surprisingly lacking in, well, diversity. While, in the past, the agenda-setting conference has showcased powerhouse solo women keynoters such as IBM CEO Ginni Rometty, General Motors CEO Mary Barra and former Yahoo CEO Marissa Mayer, this year, CES has chosen, for instance, to present a trio of women executives from A+E Networks, MediaLink and 605, sharing the stage alongside five male execs in a keynote panel.

Not surprisingly, CES’ male-dominated lineup has been widely slammed, with a number of CMOs and other marketing executives publicly criticizing the organization.

CES’ gender imbalance is emblematic of the broader gender inequity issues currently roiling tech. According to Girls Who Code, last year, 30,000 men graduated with computer science degrees compared to 7,000 women. Once they graduate, the statistics are grim. According to Crunchbase, the number of companies with at least one female founder increased to 9 percent between 2009 and 2012—but that number hasn’t budged in five years. The funding picture isn’t much better. According to the Harvard Business Review, among venture capital bankrolled tech startups, just 9 percent of the entrepreneurs are women.

Not content with the status quo, a number of women in tech are taking the lead to tip the gender scales, creating opportunities for women while at the same time making systemic changes when it comes to culture and thinking about diversity.

Here, Adweek highlights five women working to change the tech industry’s game.

1. Kriti Sharma, vp of artificial intelligence at Sage

What she’s doing: Making AI inclusive

Artificial intelligence may be the buzziest new word in tech circles, but it has a significant gender problem, according to Sharma. For starters, AI assistants like Apple’s Siri and Amazon’s Alexa, which have female voices and personas as their default option, reinforce gender stereotypes. While these female-branded assistants are often used as “helpers,” fielding passive and anodyne questions (e.g., Siri, what’s the temperature?) or conducting household tasks like dimming lights, their male-branded counterparts such as IBM’s Watson, Salesforce’s Einstein and Samsung’s Bixby are touted as muscular, complex problem solvers deployed to such tasks as plugging into a brand’s CRM system and using AI to determine which sales leads are most promising based on past behavior.

Sharma aims to create a more gender-neutral AI industry. At Sage’s two-day “BotCamp” workshops, students get hands-on opportunities learning to build their own chatbots. And Sharma recently hired Sage’s first conversation designer, a role designed specifically to analyze the voice tones and personalities used to create virtual assistants.

Further, Sage’s code of ethics requires developers to follow five guidelines when creating AI. It covers everything from how to name virtual assistants to building diverse data sets that help companies make hiring decisions when gender is taken out of the equation.

“Women are going to lose twice as many jobs as men due to AI,” Sharma explains, citing research from the Institute for Spatial Economic Analysis. “What we don’t talk about is how [AI] is going to impact different parts of society in different ways. I do a lot of work in that area.”

2. Allison Jones, director of marketing and communications at Code2040

What she’s doing: Getting tech students in the door

Code2040’s mission is to make sure that black and Latinx men and women are well represented in tech. To that end, the 30-person organization provides computer science college students with internships at major companies like Squarespace, Spotify, The New York Times and Goldman Sachs.

The organization also works directly with companies to shake up and realign their internal hiring processes. When Code2040 helped blogging platform Medium hire its technical talent, instead of focusing on the usual factors such as college GPAs, it worked with Medium to create face-to-face events with engineering interns in order to get to know each candidate personally.

While just 20 percent of computer science bachelor’s degrees and 5 percent of the technical workforce are black and Latinx, by 2040 they will comprise 40 percent of the U.S. population. “It’s not enough to just connect folks to talent—you have to make sure that your company has the culture that helps them drive, succeed and grow,” says Jones. “The opportunities provide a way to generate wealth. We are building products that need to reflect the communities that are going to be the majority by 2040.”

3. Reshma Saujani, founder and CEO of Girls Who Code

What she’s doing: Teaching thousands of young women to code

In the six years since Saujani, a former attorney, launched Girls Who Code, 53,000 young women have graduated from the program. By the end of 2018, her goal is to nearly double that number, hitting 100,000.

The way Saujani sees it, although the demand for technical roles continues to rise, the percentage of women who actually hold computing roles is falling. The organization’s own research finds that 24 percent of computer scientists in 2017 were women, down from 37 percent in 1995. By 2027, the percentage is expected to slip further to 22 percent.

“I think parity has to be intentional about gender and race,” Saujani says. “We talk a lot about access to computer science education. We should be focused on participation.”

At the same time, she says, simply getting more tech companies to hire women is just the first part of the equation; the second is retention. “What causes women to leave the workforce and college is the lack of community,” Saujani adds.

4. Neha Murarka, co-founder and CEO of Smoogs.io

What she’s doing: Making bitcoin easy to understand

If the technology industry is dominated by men, think of bitcoin as an even more exclusive boys club.

“It’s a niche within a niche,” says Murarka. As co-founder of the five-person startup Smoogs.io, she’s trying to help more women understand the nascent technology. Smoogs.io powers a media player that digital creators, including publishers and authors, embed into their websites, asking consumers to make small payments in exchange for accessing content. Instead of using a credit card to make individual payments, bitcoin stores users’ information, safely allowing them to pay for every second that they watch a video or read an article. Currently, the Nigerian news network BattaBox and author Akul Tripathi are testing Smoogs.io’s micro-payments to access and read a series of articles and books.

In her spare time, Murarka co-hosts London Women in Bitcoin, a meetup event aimed at attracting more women into the cryptocurrency space. Here, women network while learning about such topics as the ethics behind building bitcoin technology.

“Most of the people who come to us are everyday people from different industries, not just technical industries,” says Murarka, who believes that to get more women into tech, they need tech educations.

“In my undergrad and post-grad, I was the only girl in the whole department,” she says. “Even when I was working in my second job in London, we were 22 developers and I was the only girl.”

5. Katharine Zaleski, co-founder and president, PowerToFly

What she’s doing: Helping big brands find talent

In 2014, Zaleski—who had spent years working in media at The Huffington Post, The Washington Post and NowThis News—realized society needed to change the way it talked about women and work.

So, she started PowerToFly with Milena Berry, connecting women with companies. Think of it as an all-women version of LinkedIn: Women create profiles and then outfits like American Express, Casper and Hearst get lists of qualified, tech-heavy, female candidates. For example, Casper recently posted 10 job openings on the site, including positions for a data engineer, an IT manager and a data and engineering director.

In three years, PowerToFly has created 1 million profiles. In addition to career matchmaking, PowerToFly also runs social and mobile campaigns that advertise companies’ roles through user-acquisition tactics, reaching another 12 million women. It sent out 30,000 diverse candidates in 2017.

“Companies can no longer say that they have a 'pipeline' problem,” Zaleski says. “When it comes time to interview for a role, not only are we giving them the women that they need to look at immediately, but we’re giving them a lead list and they’re able to say that they’re really interviewing 50/50 male-female.”

Nvidia will partner with Uber and Volkswagen as the graphics chipmaker’s artificial intelligence platforms make further gains in the autonomous vehicle industry.

The company, which already has partnerships in the industry with companies such as carmaker Tesla and China’s Baidu, makes computer graphics chips and has also been expanding into technology for self-driving cars.

CEO Jensen Huang told an audience at the CES technology conference in Las Vegas that Uber’s self-driving car fleet was using Nvidia technology to help its autonomous cars perceive the world and make split-second decisions.

Uber has been using Nvidia’s GPU computing technology since its first test fleet of Volvo XC90 SUVs was deployed in 2016 in Pittsburgh and Phoenix.

Uber’s autonomous driving programme has been shaken this year by a lawsuit filed in San Francisco by rival Waymo alleging trade secret theft.

Nevertheless, Nvidia said development of the Uber self-driving programme had gained steam, with one million autonomous miles being driven in just the past 100 days.

With Volkswagen, Nvidia said it was infusing its artificial intelligence technology into the German carmaker’s future lineup, using Nvidia’s new Drive IX platform. The technology will enable so-called “intelligent co-pilot” capabilities based on processing sensor data inside and outside the car.

So far, 320 companies involved in self-driving cars - whether software developers, carmakers and their suppliers, or sensor and mapping companies - are using Nvidia Drive, formerly branded as the Drive PX2, the company said.

Nvidia also said its first Xavier processors would be delivered to customers this quarter. The system on a chip delivers 30 trillion operations per second using 30 watts of power.

Bets that Nvidia will become a leader in chips for driverless cars, data centres and artificial intelligence have more than doubled its stock price in the past 12 months, making the Silicon Valley company the third-strongest performer in the S&P 500 during that time.

Chinese education start-up Liulishuo has developed what it calls the world's first artificial intelligence English teacher.

After years spent gathering data on Chinese people speaking English, the firm employed deep learning to create personalized English courses powered by AI. Available on the firm's mobile app, the courses were launched in 2016 and boast around 50 million registered users.

Schools have long suffered from a short supply of highly qualified teachers, said Liulishuo’s founder and CEO, Wang, but now "technology, especially AI and mobile internet, has enabled us to extract the best out of the best teachers."

"We're seeing a tidal shift here," he added.

Wang, a former Google product manager, says Liulishuo will eventually move on to other languages as it looks to build "the most intelligent and efficient AI language teacher."

During 2017 it was hard to escape predictions that artificial intelligence is about to change the world. In 2018, this is unlikely to change. However, an increased focus on repeatable and quantifiable results is likely to ground some of the “big picture” thinking in reality.

Don’t get me wrong - in 2018 AI and machine learning will still be making headlines, and there are likely to be more sensationalized claims about robots wanting to take our jobs or even destroy us. However, stories about real innovation and progress should start to receive more prominence as the promise of smart, learning machines increasingly begins to bear fruit.

Here are my predictions for what we will see in 2018:

There will be less hype and hot air about AI – but a lot more action

With any breakthrough technology comes hype. As the arrival of functional and useful AI is something that has been predicted for centuries, it’s hardly surprising people want to talk about it, now it’s here.

It also means that there’s inevitably a lot of hot air – for starters, take a look at my rundown of the most common AI myths. This hype eventually dies down as the media moves on to the “next big thing”. In its place during 2018, I expect we will start to see real progress toward achieving some of the dreams and ambitions that have been talked up over the past few years.

All the indicators show that investment in the development and integration of AI, and machine learning in particular, continues to increase in scale. And importantly, results are starting to appear beyond computers learning to beat humans at board games and TV game shows. I expect 2018 to provide a continuous stream of small but sure steps forward, as machine learning and neural network technology takes on more routine tasks.

Despite the big hype, small, medium-sized and sometimes even larger businesses are often unsure where to begin: “How can we use artificial intelligence in our organization, and what value can it bring?” This is the question many company directors and managers have asked themselves. Organizations are often unaware of the vast opportunities they are already sitting on in terms of what is possible with their data, but they do know they need to get started with AI so as not to be left behind by the competition.

While everyone talks about AI vastly transforming each industry in the near future, many businesses are not sure what exactly this could mean for their own organisation: What business processes could be automated? What processes could be made more efficient with AI, and where could a machine learning algorithm bring the most value?

So why have some businesses not yet started using AI? Innovating with AI and machine learning requires access to highly skilled people: data scientists who have mastered not only statistics and data visualisation but also complex machine learning and AI methods. Machine learning engineers and AI architects are rare; locating someone excellent is a lengthy process, and hiring them is costly. AI experts often have PhDs in an artificial intelligence field, and many are still doing research in academia, because AI is not a field you become an expert in overnight.

Before we can close the talent gap, we need to fill the knowledge gap. Companies such as Brainpool AI provide the experts, but they also help organisations understand how to get started with AI, from data structuring and engineering to identifying machine learning opportunities within the business. Working closely with a company's in-house teams, Brainpool consultants perform analytics audits: they establish what data is available and what analytics has already been done, determine how the data should be structured and merged, and help the business understand what kinds of questions machine learning can answer and where it can bring the most value.

Say you are a retailer and want to know whether you are offering the right stock: product ranges that keep your customers happy while letting the business run efficiently and profitably. You may be wondering, for example, whether the set of mayonnaise brands you offer satisfies your customers while remaining cost-efficient.
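
To make that concrete, here is a minimal sketch of how such a range review might begin. The brand names, sales figures and 5% revenue-share cut-off are all hypothetical, and a real analysis would also weigh margins, substitution effects and customer loyalty – this only shows the first, simplest step: seeing which brands actually earn their shelf space.

```python
# Hypothetical weekly sales data for a mayonnaise range: brand -> (units sold, unit price)
sales = {
    "BrandA": (1200, 2.10),
    "BrandB": (450, 3.40),
    "BrandC": (60, 2.80),
    "BrandD": (30, 5.00),
}

def range_review(sales, min_share=0.05):
    """Compute each brand's revenue share and flag low contributors for review."""
    revenue = {brand: units * price for brand, (units, price) in sales.items()}
    total = sum(revenue.values())
    shares = {brand: rev / total for brand, rev in revenue.items()}
    # Brands below the (hypothetical) 5% revenue-share threshold are delisting candidates
    flagged = [brand for brand, share in shares.items() if share < min_share]
    return shares, flagged

shares, flagged = range_review(sales)
```

With these made-up numbers, the two niche brands fall below the threshold and would be handed to a human buyer for a closer look, not dropped automatically.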

Here are some examples of how AI can help us:

AI-powered product selection – ensuring the consumer receives the most relevant choice of products based on their online behaviour. Amazon is getting quite good at this.

AI-powered stock management – using AI to maximise customer satisfaction while at the same time optimising stock management so the business runs efficiently

Personal health virtual assistants/healthcare bots – AI-powered technology can help patients by suggesting what medication or attention is needed based on their described symptoms

Medical diagnostics – hospitals today carry out millions of tests for illnesses that are hard to detect; AI can improve both the speed and the accuracy of these tests

Fraud detection – AI can help companies in industries such as telecoms and banking detect and prevent fraud with higher accuracy
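
As a taste of what the fraud-detection item involves, here is a deliberately simple anomaly-detection sketch: it flags transactions whose amount lies more than two standard deviations from the mean. The amounts are made up, and production systems use far richer features and models, but the underlying principle – learn what "normal" looks like and flag deviations – is the same.

```python
from statistics import mean, stdev

# Hypothetical card-transaction amounts; the last one is deliberately anomalous
transactions = [24.0, 31.5, 19.9, 27.3, 22.8, 30.1, 25.4, 980.0]

def flag_anomalies(amounts, threshold=2.0):
    """Flag amounts whose z-score (distance from the mean, in standard
    deviations) exceeds the threshold as potentially fraudulent."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

suspicious = flag_anomalies(transactions)
```

Even this toy rule isolates the outlying transaction; a real system would combine many such signals (merchant, location, timing) inside a trained model.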

The range of applications is huge; it would be hard to list them all. Whatever application or industry you're in, when getting started with AI it is important to select tools suited to your data and the problems you are tackling. TensorFlow, H2O, Caffe and PowerAI are some of the available AI frameworks. You will also need advice on the languages and environments your organisation should use, such as R, MATLAB or Python. Artificial intelligence and machine learning experts can help you select the right tools and deliver a portfolio of powerful machine learning solutions to choose from, with a roadmap of how to get started.

The goal is to become self-sufficient and to learn exactly what steps you need to take to be ready to use AI within your business. If you are already doing data science, have experts evaluate whether the algorithms your company uses are really state-of-the-art and the best you could be doing.

Don’t wait around, or you’ll be left standing on the platform while your competitors pull away on a speeding train. Get expert advice from a company like Brainpool and get started with AI today.