
Stream

Small groups of researchers, from Pennsylvania State University to Google to the U.S. military, are devising, and defending against, potential attacks on artificially intelligent systems. In scenarios posed in the research, an attacker could change what a driverless car sees, activate voice recognition on a phone and make it visit a malware-hosting website using audio that sounds like nothing but white noise to humans, or let a virus travel through a firewall into a network. Instead of taking the controls of a driverless car, this method shows it a kind of hallucination: images that aren't really there. These attacks use adversarial examples: images, sounds, or potentially text that seem normal to human observers but are perceived as something else entirely by machines. Small changes made by attackers can force a deep neural network to draw incorrect conclusions about what it's being shown.
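A minimal sketch of how such a perturbation can be built is the fast gradient sign method: nudge each input dimension a tiny step in whichever direction increases the model's loss. The toy logistic-regression "network" below, its weights, and the epsilon budget are all illustrative assumptions, not details from the research described above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Fast gradient sign method on a logistic-regression model:
    move x by epsilon in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)               # model's predicted probability
    grad_x = (p - y_true) * w            # d(cross-entropy)/dx in closed form
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=100)                 # toy model weights
b = 0.0
x = w / np.linalg.norm(w) * 2.0          # an input the model labels positive

print(sigmoid(w @ x + b) > 0.5)          # confidently classified as 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, epsilon=0.5)
print(sigmoid(w @ x_adv + b) > 0.5)      # prediction flips for this epsilon
```

Each coordinate of `x_adv` differs from `x` by at most epsilon, which is why the change can stay imperceptible to a human while still flipping the model's decision.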


DeepDream was a famous experiment in inducing hallucinations in neural networks. A Russian engineer has adapted the technique (known as "Inceptionism") into a software tool that combines stylistic elements from multiple images to form a new work. (At the moment it is only available in Russia due to CPU limitations; he's trying to raise money for enough capacity to make it available worldwide.) The link below shows some of the rather fascinating things it has created, from the rather beautiful cat of trees below to some simply strange things involving fish developing pasta-like tentacles.

It's fascinating to see technology turn not only into tools, but into new artistic media.

Communities

LOL! The absolute best TED talk I ever watched. It has so much stuff in it, in a beautiful charade! All about manipulation, about how to hit the hidden buttons that control emotions and maximize your chances of turning the audience into supporters. He's not just talking about mind tricks, he's doing them in your face so you clearly understand. As a result, you'll never see a talk, any talk, the same way again. You'll unconsciously reject the tricks and the real content will stand out, naked. And you'll be more careful with your precious time.

Here is some progress with our toy, Aliens by Daria: Woogie. «Woogie is a voice-enabled AI device being conceived and developed by a team of Romanian engineers and programmers passionate about technology and education. It is intended for kids aged between 6 and 12 years. Here you can watch Woogie's progress.» http://www.aliensbydaria.com/

Training a Convolutional Neural Network to recognize a spoken word. Steps:
1. Record a few hundred samples of the word (positives) and an equal number of negatives.
2. Compute the Mel-frequency cepstral coefficients (similar to a spectrogram) of the samples.
3. Train the CNN on the MFCCs.
4. Measure accuracy on never-seen test samples.
With only about 300 samples, the accuracy seems to be above 80%. Looks promising. #MFCC #machinelearning #google #tensorflow #ai
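Step 2 above can be sketched in plain NumPy: frame the signal, take the power spectrum, apply a triangular mel filterbank, then a DCT. The frame length, hop, filter count, and coefficient count below are common defaults I've assumed, not values from the post:

```python
import numpy as np

def mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_mels=26, n_ceps=13):
    # Slice into overlapping, Hamming-windowed frames.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank, linearly spaced on the mel scale.
    m = np.linspace(mel(0), mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_inv(m) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates the filterbank energies -> cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T

# One second of noise stands in for a recorded word sample.
feats = mfcc(np.random.default_rng(1).normal(size=16000))
print(feats.shape)  # (98, 13): 98 frames x 13 coefficients
```

The resulting frames-by-coefficients matrix is the image-like input the CNN in step 3 would train on.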