“AI could mean the end of human history” – Henry Kissinger (Atlantic Article – June 2018)

Three years ago, at a conference on transatlantic issues, the subject of artificial intelligence appeared on the agenda. I was on the verge of skipping that session—it lay outside my usual concerns—but the beginning of the presentation held me in my seat.

The speaker described the workings of a computer program that would soon challenge international champions in the game Go. I was amazed that a computer could master Go, which is more complex than chess. In it, each player deploys 180 or 181 pieces (depending on which color he or she chooses), placed alternately on an initially empty board; victory goes to the side that, by making better strategic decisions, immobilizes his or her opponent by more effectively controlling territory.

The speaker insisted that this ability could not be preprogrammed. His machine, he said, learned to master Go by training itself through practice. Given Go’s basic rules, the computer played innumerable games against itself, learning from its mistakes and refining its algorithms accordingly. In the process, it exceeded the skills of its human mentors. And indeed, in the months following the speech, an AI program named AlphaGo would decisively defeat the world’s greatest Go players.

As I listened to the speaker celebrate this technical progress, my experience as a historian and occasional practicing statesman gave me pause. What would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?

CloudQuant Thoughts: AI can beat us at Chess, Go, and Call of Duty. Kissinger makes some interesting points. “AlphaZero, on its own, in just a few hours of self-play, achieved a level of skill that took human beings 1,500 years to attain. Only the basic rules of the game were provided to AlphaZero. Neither human beings nor human-generated data were part of its process of self-learning.” If the process of learning involves errors, and these are an accepted part of that process, what happens when those learning errors become costly to humankind? Pot, Kettle, Henry.

19 Data Science Tools for people who aren’t so good at programming

Programming is an integral part of data science. Among other things, it is widely acknowledged that a person who understands programming logic, loops, and functions has a higher chance of becoming a successful data scientist. But what about those folks who never studied programming in their school or college days? Is there no way for them to become a data scientist?

With the recent boom in data science, a lot of people are interested in getting into this domain but don’t have the slightest idea about coding. In fact, I too was a member of your non-programming league until I joined my first job. Therefore, I understand how terrible it feels when something you have never learned haunts you at every step.

The good news is that there is a way for you to become a data scientist, regardless of your programming skills! There are tools that largely obviate the need for programming and provide a user-friendly GUI (graphical user interface), so that anyone with minimal knowledge of algorithms can use them to build high-quality machine learning models.

CloudQuant Thoughts: Everyone has to start somewhere. I have found Microsoft Azure demos on YouTube to be a great introduction, particularly this 15-minute Titanic demo.

Google Champions NLP by using Neural Networks to Help you Write Emails

Google debuted its latest NLP development – Smart Compose – at last week’s Google I/O conference. It’s a Gmail feature that uses machine learning to predict the next words you are going to write and offers sentence-completion suggestions accordingly. The aim is to help users write emails faster so they can focus on their daily work, rather than be stuck in the black hole of their inbox.

Google has revealed the technology behind its Smart Compose feature for Gmail – a combination of a bag-of-words model and an RNN

The final model was trained on billions of text examples

The developers used TPUs to increase the computational power and consequently increase the speed of predictions
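The general idea of next-word suggestion can be illustrated with a toy bigram model. This is a deliberately simplified stand-in for Google’s bag-of-words-plus-RNN hybrid, not their actual approach; the corpus and function names below are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word tends to follow which in a list of sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def suggest_next(follows, prev_word, k=3):
    """Return the k most frequent words seen after prev_word."""
    return [w for w, _ in follows[prev_word.lower()].most_common(k)]

corpus = [
    "thanks for the update",
    "thanks for your help",
    "thanks for the quick reply",
]
model = train_bigrams(corpus)
print(suggest_next(model, "for"))  # ['the', 'your']
```

A production system replaces the frequency table with a neural language model scoring whole continuations, but the interface – context in, ranked suggestions out – is the same.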

CloudQuant Thoughts: Clippy is back, but better and stronger thanks to AI!

This Artificial Intelligence Model Trains Itself Based on Its Own Dreams

A tennis player, on the receiving end of a booming 150 km/h serve, has milliseconds to decide which way the ball is coming, how high it’ll bounce, and how he/she wants to swing the racket so as to make it go where he/she wants. The player predicts all these things subconsciously, based on the images the brain generates.

Researchers have developed an AI agent that dreams up scenarios and learns from them by itself (unsupervised learning)

The structure of the model is divided into three units: vision, memory (RNN model) and controller

Across 100 randomly selected tracks, the average score of the model was almost three times higher than that of DeepMind’s initial Deep Q-Learning algorithm!

We have a tendency to create a mental image of the world around us, based on events perceived by our limited senses. The decisions we make and the actions we take are built around these mental “models”. There is a VAST amount of information that we take in every single day; we observe something and proceed to remember an abstract version of it. Think about this for a minute – it is true for all of us.

Two researchers, David Ha and Jurgen Schmidhuber, have developed an AI model that not only plays video games with awesome accuracy, but also has the ability to conjure up new scenarios (or dreams), learn from them, and then apply them on the game itself. The model can be trained in an unsupervised manner to learn the “spatial and temporal representation of the environment”.
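The three-unit structure described above (vision, memory, controller) can be sketched structurally. This is not the authors’ code: the classes below are toy placeholders for their VAE, RNN, and linear policy, showing only how the pieces connect:

```python
class Vision:
    """Stand-in for the VAE: compress an observation to a tiny latent vector."""
    def encode(self, observation):
        return [sum(observation) / len(observation)]

class Memory:
    """Stand-in for the RNN: blend each new latent into a running hidden state."""
    def __init__(self):
        self.hidden = [0.0]
    def step(self, latent):
        self.hidden = [0.9 * h + 0.1 * z for h, z in zip(self.hidden, latent)]
        return self.hidden

class Controller:
    """A simple policy over the latent and hidden state."""
    def act(self, latent, hidden):
        score = latent[0] + hidden[0]
        return "left" if score < 0 else "right"

def run_episode(observations):
    v, m, c = Vision(), Memory(), Controller()
    actions = []
    for obs in observations:
        z = v.encode(obs)      # what the agent sees, compressed
        h = m.step(z)          # what it remembers / predicts
        actions.append(c.act(z, h))  # what it does
    return actions

print(run_episode([[1, 2, 3], [-4, -4, -4]]))  # ['right', 'left']
```

Because the memory unit is a predictive model of the environment, it can be run on its own latents instead of real observations – that self-generated rollout is the “dream” the controller trains inside.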

CloudQuant Thoughts: Researchers are now looking hard at how to speed up ML: from the traditional route of hardware improvements (Google’s TPUs), to pushing the hard work out to the edges of the network onto users’ machines (Microsoft’s Windows ML, which lets developers use pre-trained machine learning models in their apps), and now to dopamine and dream simulation.

OpenAI : Computing power is shaping the future of AI

We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.
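The quoted figures are easy to sanity-check: a 300,000x increase at a 3.5-month doubling time implies roughly 64 months of growth, a window over which an 18-month (Moore’s Law) doubling time would yield only about 12x:

```python
import math

def growth(months, doubling_months):
    """Total growth factor over a span, given a doubling period."""
    return 2 ** (months / doubling_months)

# How many months does a 300,000x increase take at a 3.5-month doubling time?
months = 3.5 * math.log2(300_000)
print(round(months, 1))            # 63.7 months, i.e. roughly 5.3 years

# Over that same window, an 18-month doubling period gives only ~12x
print(round(growth(months, 18)))   # 12
```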

NVIDIA has developed a deep learning system that enables robots to learn and teach themselves, simply by observing human actions. In the initial demonstration, we were shown how robots detected objects (coloured boxes and a toy car in this case), picked them up and moved them.

Researchers at NVIDIA have developed a deep learning system that enables robots to learn from humans

The algorithm is powered by several neural networks that perceive objects, understand and train themselves, and then execute the actions they saw the human performing

AEye’s iDar sensor combines camera and lidar data into a 3D point cloud

A key component of many autonomous driving systems is lidar (a portmanteau of light and radar), which bounces light — usually in the form of ultraviolet, visible, or near-infrared — off of objects to map them digitally in three dimensions. But while lidar systems are great for identifying potential obstacles, they don’t always spot those obstacles quickly. At a speed of 70 miles per hour, for instance, targeting an object 60 meters away doesn’t do much good if it takes the car 100 meters to come to a stop. Post-processing introduces another delay.

That is why the new sensor from startup AEye — the iDar — is built for speed first and foremost.
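The 70 mph example above checks out with a quick back-of-the-envelope calculation. The 7 m/s² deceleration and 1-second processing latency below are illustrative assumptions, not AEye’s figures:

```python
def stopping_distance(speed_ms, decel_ms2=7.0, latency_s=1.0):
    """Distance covered while the system reacts, plus distance to brake to a stop."""
    reaction = speed_ms * latency_s              # travel during sensing/processing
    braking = speed_ms ** 2 / (2 * decel_ms2)    # kinematics: v^2 / (2a)
    return reaction + braking

speed = 70 * 0.44704                 # 70 mph ~= 31.3 m/s
print(round(stopping_distance(speed), 1))  # 101.2 m -- well past a 60 m detection range
```

Under these assumptions the car needs over 100 m to stop, so an obstacle first resolved at 60 m is already unavoidable; shaving latency out of the sensing pipeline matters as much as raw range.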

Emergence Capital today announced it has raised a $435 million fund to invest in companies helping people increase productivity through the use of machine learning.

The fund will focus especially on companies that provide coaching powered by data and conversational AI to help people perform their jobs better. Emergence has previously made a number of similar investments, including in call center analysis company Chorus.ai; recruiter chatbot Mya; and Textio, which is using conversational AI to make better recruitment messages for companies that are hiring.
…
2018-05-21 00:00:00 Read the full story.

App To Use AI To ID Guests At Prince Harry & Meghan Markle’s Wedding

As millions gather to watch the Royal Wedding of Prince Harry and Meghan Markle on Saturday, May 19, Sky News – Europe’s leading entertainment company – will introduce a new interactive experience for the historic event. In addition to enjoying live coverage of the wedding, viewers around the world will be able to access the Sky News “Royal Wedding: Who’s Who” app to follow real-time updates of wedding guests as they enter St. George’s Chapel.

Artificial Intelligence is a wonderful thing, if applied correctly. It has diverse applications and it is well and truly transforming our lives in a positive way (like healthcare). But there can be certain applications, like the one you will read about below, that are a mix between genius and scary. They have the potential to be game-changing, and only time will tell whether they’ll be a good or bad thing. These researchers are the first to have successfully transferred the full three-dimensional head pose, expressions, eye motions, etc. of a face onto the face of a different actor. The results are simply mind-blowing.

A group of researchers have developed an algorithm that takes 1 input video and reconstructs the facial expressions, head pose and eye motions on another person’s face

At the core of the approach is a generative neural network

The results are truly mind blowing. Previous efforts in this field pale in comparison to what this approach has done

DeepMind’s Recurrent Neural Network Explores the role of Dopamine for Machine Learning

Machines have already started outperforming humans in some tasks, like classifying images, reading lips, forecasting sales, and curating content, among other things. But there is a caveat attached: they require tons and tons of data to learn and train the model. Some of the best algorithms, like DeepMind’s AlphaGo, take a lot of data and hundreds of hours to understand the rules of a game and master it. Humans can usually do this in one sitting.

DeepMind’s latest research aims to figure out how it can get machines to learn something in a few hours, replicating human behavior. The researchers behind this study believe that it might have something to do with dopamine, the brain’s pleasure signal. Dopamine has been associated with the reward prediction error signal used in AI reinforcement learning algorithms. These systems learn to act by trial and error, guided by the reward.
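The reward prediction error that dopamine is thought to encode is the same quantity that drives temporal-difference (TD) learning in reinforcement learning. A minimal sketch of one value update, with illustrative learning-rate and discount parameters:

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One TD step: the prediction error plays the role of the dopamine signal."""
    prediction_error = reward + gamma * next_value - value  # reward prediction error
    return value + alpha * prediction_error

v = 0.0
# Repeatedly receiving reward 1.0 from a terminal state (next_value = 0):
for _ in range(50):
    v = td_update(v, reward=1.0, next_value=0.0)
print(round(v, 2))  # 0.99 -- the value estimate converges toward the true reward
```

Once the prediction matches reality, the error (and the "dopamine" signal) goes to zero and learning stops, which is exactly the behavior observed in dopamine neurons.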

Tech Platforms and the Knowledge Problem

Jeffersonians and Hamiltonians express very different views on what an optimal economy looks like. In the long run, their visions are probably irreconcilable. In the short run, however, both sets of reformers offer important lessons for policymakers grappling with the power of massive tech, finance, and health-care firms. This essay explores these lessons, specifying where each vision has comparative advantage.

Clashes amongst centralizers and decentralists can be particularly illuminating…
2018-05-20 00:02:27-04:00 Read the full story.

Baidu COO Qi Lu steps down, AI chief now reports directly to CEO

Baidu today announced that COO Qi Lu will step down in July for personal and family reasons. Lu had been brought aboard to help the Chinese search giant become more centrally focused on AI services.

“With Baidu’s strategy to transform into an AI-first company firmly in place, we are well positioned to continue the momentum that we have built in the past year,” CEO Robin Li said in a statement shared with VentureBeat.

Since he joined Baidu, the company has launched a smart speaker, its Duer virtual assistant, and a $1.5 billion fund to grow its Apollo autonomous driving division.

Lu will continue to serve as vice chairman of the Baidu board of directors.
2018-05-18 00:00:00 Read the full story.

How To Convert Data Science And Machine Learning Internships Into Jobs

Can popular massive open online courses turn into job offers? This is a common dilemma faced in data science forums about internships and certificate courses converting into job offers. Now that you snagged an internship, built a portfolio of work, networked with the mid and senior management team and are ready to pursue a career in data science, you are waiting for the high-paying job offer to land in your inbox. According to UC Berkeley data scientist Karsten Walker, recruiters look for specific traits such as applying scientific methodology to a business problem and look for candidates who have a demonstrated history of applying analytical concepts.
2018-05-18 11:38:31+00:00 Read the full story.

Google Starts Beta Tests of High-Memory Cloud VMs for Demanding Apps

Enterprises now have a new option for running memory-intensive applications on Google’s cloud platform. The company this week announced beta availability of n1-ultramem, a new family of memory-optimized virtual machine (VM) instances that Google says is well suited for applications such as data analytics, enterprise resource planning and genomics.

The new machine types offer more memory and computing resources than any other virtual machine type that Google currently offers and are ideal for resource-hungry high-performance computing apps as well, according to the company. “These VMs are a cost-effective option for memory-intensive workloads, and provide you with the lowest $/GB of any Compute Engine machine type,” Google product manager Hanan Youssef wrote in a blog post on May 15.

AI decides: Is it Laurel or Yanny?

Unless you’ve been living under a rock, you’ve probably run across the Vocabulary.com audio clip that kicked off a social media “Laurel” or “Yanny” firestorm this week. Perhaps you even weighed in, offering your two cents on the elocution of the opera singer (a member of the original Broadway cast of Cats, as it turns out) in the recording. But you probably didn’t consult artificial intelligence for a second opinion.

Finnish university’s online AI course is open to everyone

Helsinki University in Finland has launched a beginners’ course on artificial intelligence — one that’s completely free and open to everyone around the world. As Janina Fagerlund from the university’s project partner (tech strategy firm Reaktor) said, many people might not know that their lives are already affected by AI every day.

Fagerlund mentioned the use of AI in the food industry to sort produce and other items at facilities as an example. And while you know that Facebook uses AI for facial recognition, a lot of people might not.

Microsoft buys a start-up that wants A.I. to make conversation with humans

Microsoft has bought Semantic Machines, an artificial intelligence start-up, as it looks to boost its efforts in developing conversational AI.

Berkeley, California-based Semantic’s approach to AI is using machine learning to add context to conversations with chatbots. This means taking information received by AI and applying it to future dialogue.

The firm’s speech recognition team previously led automatic speech recognition development for Apple’s personal assistant Siri.

Digital Privacy In The Era Of Artificial Intelligence – GDPR

Not a day goes by without news articles talking about artificial intelligence capabilities and their effects on our lives. The implications for the economy and the workforce are very profound. Companies use AI today for numerous tasks: marketers use it to persuade consumers to buy their products, the financial industry uses it to process credit applications, and newer applications include medical diagnosis.

Use of personal data has obvious privacy implications, especially since personally identifiable information (PII) can be used to identify individuals and may be sensitive. A commonplace, prevalent fear is that individuals must be careful about what personal information makes its way online, because it will be there forever or because it can’t be rectified. Policymakers began to act as early as 1995, with the European Union spearheading such efforts by proposing digital rights for citizens.

This week, the new EU General Data Protection Regulation (GDPR) is due to enter into force, strengthening the role of consent for personal data processing, adding digital rights for citizens, and focusing on how organisations should design their data privacy and protection processes.

This news clip post is produced algorithmically based upon CloudQuant’s list of sites and focus items we find interesting. If you would like to add your blog or website to our search crawler, please email customer_success@cloudquant.com. We welcome all contributors.

This news clip and any CloudQuant comment is for information and illustrative purposes only. It is not, and should not be regarded as “investment advice” or as a “recommendation” regarding a course of action. This information is provided with the understanding that CloudQuant is not acting in a fiduciary or advisory capacity under any contract with you, or any applicable law or regulation. You are responsible to make your own independent decision with respect to any course of action based on the content of this post.

The thoughts and opinions on this site do not represent investment recommendations by CloudQuant or Kershner Trading Group. Securities, charts, illustrations and other information contained herein are provided to assist crowd researchers in their efforts to develop algorithmic trading strategies for backtesting on CloudQuant.