"Automated text generation uses language models, derived from statistical analysis of vast corpora of human-generated text, to produce machine-generated text that can be very hard for a human to distinguish from text written by another human. These models could help malicious actors in many ways, including generating convincing spam, reviews, and comments -- so it's really important to develop tools that can help us distinguish between human-generated and machine-generated texts."
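One crude version of such a tool can be sketched in a few lines: score a text by how probable its words are under a simple language model, on the intuition that machine-generated text tends to stick to consistently high-probability words. Everything below -- the tiny corpus, the unigram model, the two sample texts -- is an illustrative toy, not a real detector:

```python
import math
from collections import Counter

def train_unigram(corpus_words):
    """Estimate Laplace-smoothed unigram log-probabilities from a word list."""
    counts = Counter(corpus_words)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 reserves mass for unseen words
    def logprob(word):
        return math.log((counts.get(word, 0) + 1) / (total + vocab))
    return logprob

def avg_logprob(text, logprob):
    """Mean per-word log-probability of a text under the model."""
    words = text.lower().split()
    return sum(logprob(w) for w in words) / len(words)

# Toy "human" corpus: mostly common words, a few rare ones.
corpus = ("the cat sat on the mat " * 50 +
          "a quixotic zephyr vexed the daft jumbler ").split()
lp = train_unigram(corpus)

# A text built only from very common words scores high (suspiciously "safe"
# word choices); a text using rarer words scores lower. A detector could
# threshold on this kind of statistic.
score_common = avg_logprob("the cat sat on the mat", lp)
score_rare = avg_logprob("quixotic zephyr vexed the jumbler", lp)
```

Real detectors replace the unigram model with the suspected generator's own neural language model, but the scoring idea is the same.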

"The underlying theory is that art evolves "through small alterations to a known style that produce a new one," which, as Ian Bogost points out, is "a convenient take, given that any machine-learning technique has to base its work on a specific training set.""

""My PhD students seem to have to spend three years just getting to the point where they understand what's being asked of them," he says. Once again, he looks pained. "We seem to be hitting problems that will require so many strands that one mind isn't going to be able to pull them together.""

"But researchers have approximately zero clues as to why it's so good at it, and it doesn't help that the AI essentially taught itself to make these predictions. According to one researcher involved in the project, "We can build these models, but we don't know how they work.""

"Westfield's Smartscreen network was developed by the French software firm Quividi back in 2015. Their discreet cameras capture blurry images of shoppers and apply statistical analysis to identify audience demographics. And once the billboards have your attention, they hit record, sharing your reaction with advertisers. Quividi says their billboards can distinguish shoppers' gender with 90% precision, classify mood into five categories from "very happy" to "very unhappy," and estimate customers' age to within a five-year bracket."

"At their core, GANs consist of two networks: a generator and a discriminator. These programs compete against each other millions upon millions of times, refining their image-generating skills until they're good enough to create full-fledged pictures."
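The adversarial loop can be sketched in pure Python on a deliberately tiny problem: a two-parameter "generator" learns to match the mean of a Gaussian "dataset" while a logistic "discriminator" tries to tell real samples from fakes. Real image GANs use deep convolutional networks; the hyperparameters here, and the small weight decay added to keep the duel stable, are illustrative assumptions:

```python
import math, random

random.seed(0)

def sigmoid(t):
    """Numerically stable logistic function."""
    return 1 / (1 + math.exp(-t)) if t >= 0 else math.exp(t) / (1 + math.exp(t))

# Discriminator D(x) = sigmoid(w*x + c); generator G(z) = a*z + b.
w, c = 0.0, 0.0
a, b = 1.0, 0.0
lr, decay, batch = 0.05, 0.1, 16

for step in range(4000):
    reals = [random.gauss(2.0, 0.5) for _ in range(batch)]  # "dataset": mean 2.0
    zs = [random.uniform(-1, 1) for _ in range(batch)]      # generator's noise input
    fakes = [a * z + b for z in zs]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    gw = gc = 0.0
    for xr, xf in zip(reals, fakes):
        dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
        gw += -(1 - dr) * xr + df * xf
        gc += -(1 - dr) + df
    w -= lr * (gw / batch + decay * w)  # weight decay damps the oscillation
    c -= lr * (gc / batch + decay * c)

    # Generator step: move fakes toward where the discriminator scores high.
    ga = gb = 0.0
    for z in zs:
        xf = a * z + b
        g = -(1 - sigmoid(w * xf + c)) * w  # d(-log D(xf)) / dxf
        ga += g * z
        gb += g
    a -= lr * ga / batch
    b -= lr * gb / batch
```

After training, the generator's offset `b` has drifted toward the real data's mean of 2.0 -- the scalar analogue of fakes becoming indistinguishable from the training set.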

""The ethics commission has done pioneering work and has developed the world's first guidelines for automated driving. We are now implementing these guidelines."
The ethics rules address a classic thought experiment: the "trolley problem.""

"His team trained a machine-learning algorithm to spot words and phrases associated with bullying on the social media site AskFM, which allows users to ask and answer questions. It managed to detect and block almost two-thirds of insults within almost 114,000 English posts, and it was more accurate than a simple keyword search. Still, it struggled with sarcastic remarks."
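The gap between a learned classifier and a keyword filter is easy to demonstrate. The AskFM system itself is not public, so the tiny training set, keyword list, and naive Bayes model below are invented for illustration:

```python
import math
from collections import Counter

# Invented toy data -- not the real AskFM corpus.
INSULTS = ["you are such an idiot", "what a loser",
           "nobody likes you at all", "you are so dumb"]
BENIGN = ["great answer thanks", "i like your profile",
          "see you at school", "that was a smart answer"]
KEYWORDS = {"idiot", "loser", "dumb"}

def keyword_flag(text):
    """Baseline: flag a post only if it contains a known insult word."""
    return any(w in KEYWORDS for w in text.split())

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

ins_counts, ins_total = train(INSULTS)
ben_counts, ben_total = train(BENIGN)
vocab = len(set(ins_counts) | set(ben_counts))

def predict(text):
    """Laplace-smoothed naive Bayes over unigrams (equal class priors)."""
    ins = sum(math.log((ins_counts[w] + 1) / (ins_total + vocab)) for w in text.split())
    ben = sum(math.log((ben_counts[w] + 1) / (ben_total + vocab)) for w in text.split())
    return "insult" if ins > ben else "benign"
```

"nobody likes you" contains no keyword, so the baseline misses it, while the trained model flags it from phrasing it saw in labelled insults -- the same reason the learned approach beat keyword search. (Sarcasm defeats both, since neither looks past surface word choice.)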

"The creators of a revolutionary AI system that can write news stories and works of fiction - dubbed "deepfakes for text" - have taken the unusual step of not releasing their research publicly, for fear of potential misuse."

"Robots, artificial intelligence and smart speakers will ease the burden on doctors and give them more time with patients, according to an NHS report on the pending technological "revolution" in healthcare.
Developments in the ability to sequence individuals' genomes - the entirety of their genetic data - will also spur on advances, according to the review published on Monday."

"Everything around us will be altered by autonomous vehicles -- our roads, our warehouses and even our definition of what a car can be. Say goodbye to four wheels and a running board; the cars of the future will barely resemble the vehicles choking our cities today."

""There's no sense in which this is a technology that will lead the science fiction walking, talking, intelligent robot", says Furber, "because they'd need a head the size of an aircraft hangar and a nuclear power station attached to it.""

How long before this is used without our knowledge in facial recognition systems? "Using the data from this lab study and a formula called "fractal dimension," Kim and Yang discovered a negative relationship between the fractal dimension of pupil dilation and a person's workload, showing that pupil dilation could be used to indicate a person's mental workload in a multitasking environment."
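The article does not give Kim and Yang's exact formula, but a standard way to estimate the fractal dimension of a one-dimensional signal such as a pupil-diameter trace is Higuchi's method. A sketch, assuming that estimator:

```python
import math, random

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D series via Higuchi's method."""
    n = len(x)
    logs = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):  # one coarse-grained curve per starting offset m
            count = (n - 1 - m) // k
            if count < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, count + 1))
            # Normalised curve length at scale k
            lengths.append(dist * (n - 1) / (count * k * k))
        logs.append((math.log(1.0 / k), math.log(sum(lengths) / len(lengths))))
    # Least-squares slope of log L(k) against log(1/k) is the dimension estimate.
    mx = sum(p for p, _ in logs) / len(logs)
    my = sum(q for _, q in logs) / len(logs)
    num = sum((p - mx) * (q - my) for p, q in logs)
    den = sum((p - mx) ** 2 for p, _ in logs)
    return num / den

random.seed(1)
smooth = [i / 100 for i in range(300)]              # steady ramp: dimension ~ 1
jittery = [random.gauss(0, 1) for _ in range(300)]  # white noise: dimension ~ 2
fd_smooth = higuchi_fd(smooth)
fd_noise = higuchi_fd(jittery)
```

A smooth trace comes out near dimension 1 and an erratic one near 2; on the finding quoted above, a lower dimension in the pupil signal would indicate a higher mental workload.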

"The Wheelie 7 kit equips a wheelchair with artificial intelligence that detects the user's expressions and processes the data in real time to direct the movement of the chair.
Smiling, raising the eyebrows, wrinkling the nose or puckering the lips as if for a kiss are among the repertoire of 10 gestures recognised by the prototype Wheelie 7."
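Once the expressions are recognised, the control problem reduces to mapping each gesture to a drive command. The article names the gestures but not which command each one triggers, so the mapping and the confidence threshold below are hypothetical:

```python
# Hypothetical gesture-to-command table -- the real Wheelie 7 assignments
# are not described in the article.
COMMANDS = {
    "smile": "forward",
    "raise_eyebrows": "stop",
    "wrinkle_nose": "reverse",
    "pucker_lips": "turn_left",
}

def dispatch(gesture, confidence, threshold=0.8):
    """Return a chair command only for a confident, known gesture."""
    if confidence < threshold or gesture not in COMMANDS:
        return "hold"  # fail safe: do nothing on uncertain or unknown input
    return COMMANDS[gesture]
```

The fail-safe default matters in this setting: a misread expression should leave the chair stationary rather than move it.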

"The researchers, Tero Karras, Samuli Laine, and Timo Aila, came up with a new way of constructing a generative adversarial network, or GAN.
GANs employ two dueling neural networks to train a computer to learn the nature of a data set well enough to generate convincing fakes. When applied to images, this provides a way to generate often highly realistic fakery. The same Nvidia researchers have previously used the technique to create artificial celebrities (read our profile of the inventor of GANs, Ian Goodfellow)."

"Hassabis was a child chess prodigy, who learned the game aged four and was able to beat his dad three weeks later - indeed, when he started playing competitively he was so small he had to bring a pillow with him to reach the board - and became a strong player. Yet in AlphaZero's case there was no human input, other than telling it the rules of each game. "In a matter of a few hours it was superhuman," Hassabis says proudly."

"At the heart of the problem that troubles Ming is the training that computer engineers receive and their uncritical faith in AI. Too often, she says, their approach to a problem is to train a neural network on a mass of data and expect the result to work fine. She berates companies for failing to engage with the problem first -- drawing on what is already known about good employees and successful students, for example -- before applying the AI."

"The founders of Predictim want to be clear with me: Their product -- an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents -- is not racist. It is not biased.
"We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""
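The step Parsa describes -- removing protected attributes before training -- amounts to filtering fields out of each training example. A minimal sketch with invented feature names (note that dropping the columns alone does not guarantee the remaining features carry no correlated information):

```python
# Attributes named in the quote; the feature names below are invented.
PROTECTED = {"sex", "gender", "race"}

def strip_protected(example):
    """Drop protected attributes from one training example's feature dict."""
    return {k: v for k, v in example.items() if k not in PROTECTED}

rows = [
    {"gender": "f", "race": "x", "posts_per_day": 3, "profanity_score": 0.1},
    {"gender": "m", "race": "y", "posts_per_day": 7, "profanity_score": 0.4},
]
cleaned = [strip_protected(r) for r in rows]
```

The auditing and human review Parsa also mentions would sit downstream of this step; none of that process is public.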