Could a Pen Change How We Diagnose Brain Function?
MIT News (08/13/15) Adam Conner-Simons

Researchers from the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab (CSAIL) have used artificial intelligence (AI) and a digital pen to diagnose dementia earlier than ever before. Dementia screening via the Clock Drawing Test (CDT) has been a standard practice for some time, but its limitations and subjectivity spurred the researchers to use the camera-equipped Anoto Live Pen, which measures its position on the paper more than 80 times a second, enabling the system to analyze every single movement and pause by a subject. CSAIL and Lahey Hospital and Medical Center collaborated on software for analyzing this version of the test, with the end result being the digital Clock Drawing Test (dCDT). The testers discovered their machine-learning computational models were much more accurate than standard models in diagnosing the presence and the precise nature of a cognitive impairment. "We've improved the analysis so that it is automated and objective," notes MIT professor Cynthia Rudin. "With the right equipment, you can get results wherever you want, quickly, and with higher accuracy." The researchers' next step is development of an interface to ease use of the dCDT technology by neurologists and non-specialists in hospitals.

Bitcoin's Dark Side Could Get Darker
Technology Review (08/13/15) Tom Simonite

Smart contracts, computer programs that can confirm data and hold or use funds using cryptographic methods similar to those underpinning Bitcoin, could enable a new, insidious dimension of cryptocurrency crime, according to research by Cornell University professors Ari Juels and Elaine Shi and University of Maryland researcher Ahmed Kosba. "In some ways this is the perfect vehicle for criminal acts, because it's meant to create trust in situations where otherwise it's difficult to achieve," Juels says. As one example, the researchers envision a contract, hosted on the Ethereum smart-contract platform, that offers a virtual currency bounty for hacking a website. The platform's programming language would enable the contract to control the funds and issue them only to someone who proves they have performed the task, in the form of a verifiable string added to the defaced site. Although Ethereum chief technology officer Gavin Wood says legitimate companies are planning to use the platform for honest business, he concedes it could be used in illegal ways as well. One scenario he speculates on is Ethereum's software being used to establish a decentralized version of Uber or a similar service, in which payments are handled without the need for an intermediary company.
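The bounty scheme described above boils down to conditional payment against a machine-checkable predicate. Below is a minimal sketch of that escrow logic, in Python rather than a real contract language; the `BountyContract` class and its hash-commitment predicate are invented illustrations, not code from the paper or from Ethereum.

```python
import hashlib

# Generic sketch of conditional payment: an escrow releases funds only to
# a claimant whose proof satisfies a predicate the contract can check on
# its own.  A real Ethereum contract would run this check on-chain, in the
# platform's own contract language.
class BountyContract:
    def __init__(self, amount, expected_digest):
        self.amount = amount                    # escrowed funds
        self.expected_digest = expected_digest  # commitment to the proof
        self.paid_to = None

    def claim(self, claimant, proof: bytes):
        """Pay out once, and only if the proof hashes to the commitment."""
        if self.paid_to is None and \
                hashlib.sha256(proof).hexdigest() == self.expected_digest:
            self.paid_to = claimant
            return self.amount
        return 0

secret = b"verifiable-string"
contract = BountyContract(10, hashlib.sha256(secret).hexdigest())
print(contract.claim("mallory", b"wrong"))  # 0: predicate fails
print(contract.claim("alice", secret))      # 10: funds released
```

Because the predicate is checked mechanically, neither party has to trust the other's judgment about whether the task was performed, which is exactly the property Juels highlights.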

Researchers from University College London have developed an algorithm that can predict people's locations based on the photos they upload to the photo-sharing website Flickr. Using photos shared by 16,000 people in the U.K., the team created a database of 8 million images. The algorithm accessed the photos' global-positioning system (GPS) and time-stamp data to note all the locations where pictures had been taken by a single camera, and to predict where people would take photos in the future based on their past movements. The team tested the algorithm by comparing its results with a government survey conducted to better understand national travel patterns; the researchers report the algorithm and the survey agreed 92 percent of the time. They say they also could focus on individuals, or at least their cameras, and anticipate where a person might be at any given moment. The research could give the government a new way to track people's movements, which could benefit road-building plans and other transportation projects, but it also raises privacy concerns about shared photographs.
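The article does not detail the prediction model; as an illustrative baseline for forecasting a camera's location from its past GPS and time-stamp metadata, a simple per-hour frequency model might look like the following (the records are invented):

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical records: (camera_id, latitude, longitude, ISO timestamp),
# standing in for the GPS and time-stamp metadata extracted from photos.
records = [
    ("cam1", 51.50, -0.12, "2015-06-01T09:15:00"),
    ("cam1", 51.50, -0.12, "2015-06-02T09:30:00"),
    ("cam1", 51.53, -0.08, "2015-06-01T19:00:00"),
    ("cam1", 51.50, -0.12, "2015-06-03T09:05:00"),
]

def build_profile(records):
    """Count, per camera and hour of day, where photos were taken."""
    profile = defaultdict(Counter)
    for cam, lat, lon, ts in records:
        hour = datetime.fromisoformat(ts).hour
        profile[(cam, hour)][(lat, lon)] += 1
    return profile

def predict_location(profile, cam, hour):
    """Predict the most frequent past location for this camera at this hour."""
    counts = profile.get((cam, hour))
    return counts.most_common(1)[0][0] if counts else None

profile = build_profile(records)
print(predict_location(profile, "cam1", 9))  # the camera's usual 9 a.m. spot
```

A real model would generalize across people and places rather than replay each camera's history, but even this baseline shows why timestamped, geotagged uploads are enough to anticipate movement.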

A new Carnegie Mellon University (CMU) study suggests major law enforcement takedowns of online drug markets have largely failed to dent the traffic in drugs on the Dark Web. From 2013 to early 2015, CMU researchers Nicolas Christin and Kyle Soska used automated software to scrape the visible contents of 35 Dark Web markets, providing a comprehensive, if not complete, view of how illegal drug sales fluctuated during that period. They found the Dark Web drug market has largely stabilized following the explosive growth it experienced during the heyday of the Silk Road, and currently generates $100 million to $180 million in annual sales. The volume of sales has remained stable even in the face of major thefts, scams, takedowns, and arrests. The study found Operation Onymous, a joint Europol/U.S. Federal Bureau of Investigation effort that took down six major Dark Web sites, barely dented the market, and neither did the Silk Road 2 market's loss of millions of dollars' worth of users' bitcoins. Today's Dark Web drug market stands in stark contrast to the market's earlier days, when it was significantly disrupted by the shuttering of the original Silk Road in 2013 and the closure of one of its successors, Sheep Marketplace, two months later.

University of Texas at Austin (UT) researchers and Japanese government officials are collaborating on a $13-million project aimed at making data centers more energy efficient. The effort will be hosted by the Texas Advanced Computing Center (TACC), which will receive about $4 million in additional computing capability thanks to the project. The project also involves installing a 250-kilowatt solar farm to power the new computers on sunny days. Although UT will get more computing power, the Japanese government will study the technology to trim costs and energy use elsewhere. "Through this project, we hope to verify the energy efficiency of the new technology and to disseminate it in the U.S.," says Japanese official Fumio Ueda. The partnership will test a high-voltage direct-current power system for computers, which typically run on alternating current. The technology is expected to boost efficiency by avoiding costly conversions of the current at the solar panels, a battery backup system, and computing racks. "Small changes in efficiency there have massive consequences in savings," notes TACC executive director Dan Stanzione. "If we're going to build large-scale computers, we're going to need more and more energy to do it. We have to find sustainable ways to do that."
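The efficiency gain Stanzione alludes to comes from multiplied conversion losses: each AC/DC conversion stage wastes a few percent, and the stages chain. The arithmetic can be sketched with purely illustrative numbers (the 95-percent stage efficiency and the stage counts below are assumptions, not project figures):

```python
# Illustrative arithmetic only: per-stage efficiency and stage counts are
# assumed, not figures from the TACC project.
load_kw = 250.0        # matches the solar farm capacity mentioned above
stage_eff = 0.95       # assumed efficiency of one power-conversion stage

def delivered(load_kw, stages, eff=stage_eff):
    """Power reaching the racks after a chain of conversion stages."""
    return load_kw * eff ** stages

# Conventional path: solar DC -> AC -> battery backup -> rack power supply
conventional = delivered(load_kw, 3)
# High-voltage DC path: most of those conversions are skipped
hvdc = delivered(load_kw, 1)
print(round(conventional, 1), round(hvdc, 1))  # 214.3 237.5
```

At a 250-kilowatt scale, skipping two conversion stages in this toy model recovers tens of kilowatts, illustrating why small per-stage changes have "massive consequences in savings."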

A new optimization strategy developed by University of Washington (UW) scientists won the top prize at the 24th International Conference on Artificial Intelligence, and in experiments the approach beat standard optimization methods. The new RDIS algorithm breaks very complex problems down into more manageable segments by identifying numeric variables that, once set to specific values, reduce a larger problem into independent subproblems. UW professor Pedro Domingos reports the approach can "solve problems exponentially faster." The researchers assessed RDIS by using it to determine the shape of folded proteins and to accurately convert two-dimensional images into three-dimensional (3D) objects and scenes. In the first case, they found RDIS produced much lower-energy protein shapes than alternative optimization methods. In the second case, the algorithm constructed 3D objects and scenes with 100,000 to 10 billion times more precision than previous techniques, on average. RDIS partly owes its success to the researchers' application of decomposition to continuous optimization problems, and they say their next step is to test RDIS's performance on new and different applications. "This can be applied to pretty much any machine-learning problem, but that's not to say it's going to be good for every machine-learning problem," Domingos notes.
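RDIS itself is not reproduced here, but its core idea, fix one variable and the remainder splits into independent subproblems, can be seen in a toy grid search over an invented objective:

```python
# Toy objective: f(x, y, z) = (x - y)**2 + (x + z)**2.  Once x is fixed,
# the y-term and z-term share no variables, so each can be minimized
# independently -- the decomposition idea behind RDIS.  This brute-force
# grid search is an invented illustration, not the published algorithm.
grid = [i / 10 for i in range(-20, 21)]  # coarse grid over [-2, 2]

def minimize_decomposed(grid):
    best = None
    for x in grid:
        # Two independent subproblems once x is fixed:
        y = min(grid, key=lambda y: (x - y) ** 2)
        z = min(grid, key=lambda z: (x + z) ** 2)
        val = (x - y) ** 2 + (x + z) ** 2
        if best is None or val < best[0]:
            best = (val, x, y, z)
    return best  # ~|grid| * 2|grid| evaluations instead of |grid|**3

result = minimize_decomposed(grid)
print(result)  # (0.0, -2.0, -2.0, 2.0)
```

Decomposing turns a joint search over all three variables into a far smaller sequence of one-variable searches, which is the saving RDIS exploits, in miniature, on continuous problems.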

In an interview, Baidu engineer Awni Hannun discusses a new model for handling Mandarin voice queries that tests found is accurate 94 percent of the time. He says the model employs Deep Speech, a deep-learning system that differs from other deep learning-based systems such as Microsoft's Skype Translate. Hannun says in the latter case there are usually three modules in the pipeline--a speech-transcription module, a machine-translation module, and a speech-synthesis module. "Our system is different than that system in that it's more what we call end-to-end," he notes. "Rather than having a lot of human-engineered components that have been developed over decades of speech research--by looking at the system and saying what features are important or which phonemes the model should predict--we just have some input data, which is an audio .WAV file on which we do very little pre-processing. And then we have a big, deep neural network that outputs directly to characters." Hannun says the network is fed enough data so it can learn what is relevant from the input to correctly transcribe the output, with a minimum of human intervention. He says Baidu plans to build a speech system that can interface with any smart device, and compressing existing models may be of help in this regard.
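As a toy illustration of the "outputs directly to characters" idea (not Baidu's actual model), imagine the network emitting per-frame character probabilities; a CTC-style greedy decoder then collapses repeated characters and drops a blank symbol. The alphabet and probabilities below are invented:

```python
import numpy as np

# Invented stand-in for a trained network's output: one row of character
# probabilities per audio frame.  "-" is the CTC blank symbol.
alphabet = ["-", "c", "a", "t"]
frame_probs = np.array([
    [0.1, 0.7, 0.1, 0.1],   # "c"
    [0.1, 0.6, 0.2, 0.1],   # "c" again (a repeat, collapsed below)
    [0.6, 0.1, 0.2, 0.1],   # blank
    [0.1, 0.1, 0.7, 0.1],   # "a"
    [0.1, 0.1, 0.1, 0.7],   # "t"
])

def greedy_ctc_decode(probs, alphabet, blank="-"):
    """Pick the best character per frame, collapse repeats, drop blanks."""
    best = [alphabet[i] for i in probs.argmax(axis=1)]
    out, prev = [], None
    for ch in best:
        if ch != prev and ch != blank:
            out.append(ch)
        prev = ch
    return "".join(out)

print(greedy_ctc_decode(frame_probs, alphabet))  # -> "cat"
```

The point of the end-to-end design is that everything upstream of this decoding step, features, phonemes, pronunciation models, is learned by the network rather than hand-engineered.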

A new tool will enable computer animators to create more realistic dressing scenes for animated characters. Developed by researchers from the Georgia Institute of Technology (Georgia Tech), the algorithm enables virtual characters to intelligently manipulate simulated cloth in order to put on clothes and get dressed. Computer animated films often lack dressing scenes because it is difficult to manipulate simulated cloth. The new technique will enable animators to create scenes similar to live-action movies with iconic clothing, such as Spider-Man pulling his mask over his head. The team ultimately wants to develop assistive technologies that will enable robots to help disabled or elderly adults with getting dressed and other aspects of self-care. "The challenge of learning to dress at a young age or for some older adults and those with disabilities is mainly due to the combined difficulty in coordinating different body parts and manipulating soft and deformable objects," says Georgia Tech School of Interactive Computing professor Karen Liu. The research, "Animating Human Dressing," was a technical paper at the SIGGRAPH 2015 conference in Los Angeles last week.

Microsoft researchers Ashish Kapoor and Eric Horvitz are using machine learning to make more accurate weather predictions over a 24-hour period. Their system tries to "learn" from massive data sets of past weather events, using the same kind of deep neural networks the pair had previously applied to artificial sight and speech. The researchers note their new models have much lower error rates when predicting wind, dew point, pressure-zone locations such as geopotential height, and temperature up to 24 hours in advance. "The deep neural network strives to model the dependencies across variables without making explicit assumptions," Kapoor says. "You don't even need to encode that relationship; these models learn that relationship automatically." Kapoor previously used machine learning to examine higher-altitude wind patterns by drawing on public U.S. Federal Aviation Administration data from tens of thousands of commercial airplane flights per day. "Using that function, you try to extrapolate what it's going to look like in the future," he notes. Kapoor now is investigating how the system's period of accuracy can be extended beyond 24 hours. The researchers believe the work can help scientists better understand the effects of climate change on weather patterns.
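A minimal stand-in for "learning dependencies across variables without explicit assumptions" is shown below, using a linear least-squares fit (a one-layer substitute for the deep networks described) on synthetic data; every number here is invented:

```python
import numpy as np

# Synthetic "weather" readings: three features per day, with tomorrow's
# temperature depending on them through hidden weights the model must
# discover rather than being hand-encoded.
rng = np.random.default_rng(0)
n = 200
today = rng.normal(size=(n, 3))        # e.g. temp, pressure, dew point
true_w = np.array([0.8, -0.3, 0.5])    # hidden dependency structure
tomorrow = today @ true_w + rng.normal(scale=0.1, size=n)

# Least-squares fit recovers the cross-variable dependencies from data.
w, *_ = np.linalg.lstsq(today, tomorrow, rcond=None)
print(np.round(w, 2))  # close to the hidden weights
```

A deep network plays the same role on a vastly larger scale, capturing nonlinear dependencies across many more variables, but the principle, dependencies inferred from past data rather than encoded by hand, is the same.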

University of Wisconsin-Madison professors in the departments of computer sciences, psychology, and educational psychology are collaborating to explore what computer scientist Jerry Zhu calls "machine teaching." Under the approach, instead of dealing with large amounts of data and not knowing what patterns might be revealed through analysis, the researcher already knows what knowledge they want to emphasize to the learner. Machine teaching also uses mathematics to enable researchers to model actual human students and create the best possible lessons for teaching them. Although the definition of "best" would be determined by the teacher, one example could be identifying the smallest number of exercises needed for a particular student to grasp a concept. "Can five really good questions teach the material, rather than 20?" Zhu asks. UW-Madison professor Timothy T. Rogers, one of Zhu's collaborators, says machine teaching can work if there is a good model that is "able to make concrete, quantitative predictions about the learner's behavior." Zhu presented some of his research earlier this year in Austin, TX, at the 29th annual Conference on Artificial Intelligence, organized by the Association for the Advancement of Artificial Intelligence. A two-year seed grant from the UW-Madison Graduate School currently supports the research.

Researchers at Northeastern University have published a new paper in Nature Physics on measuring the effort needed to control self-organizing networks. "We provide a metric—called 'control energy'—to characterize the amount of effort needed to control real-world complex systems," says first author Gang Yan. Self-organized networks include cellular networks, social networks, and mobile-sensor networks. Potential applications of Yan's metric range from identifying the key points in bacterial cells' metabolic pathways targeted by drugs to determining the most critical areas to monitor and protect in an online security system. "Estimating the control energy, or effort, is key in executing most control applications, from controlling digital devices to understanding the control principles of the cell," says Northeastern professor Albert-Laszlo Barabasi, the paper's corresponding author. A network consists of points of connection, or "nodes"—individual units such as a metabolite, gene, person, or gas pedal—and the links, or interactions, connecting those nodes to one another. "Driver nodes" are particular nodes that network administrators zap with external signals to control the system. The condition of a driver node, such as a gene coding a protein or a person expressing his opinion about a political candidate, evolves over time as a result of both the node's internal dynamics and how it connects with its neighbors.
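For linear dynamics, minimum control energy has a standard closed form via the controllability Gramian; the discrete-time sketch below uses an invented three-node chain with one driver node (the paper itself treats continuous-time dynamics on real networks):

```python
import numpy as np

# For linear network dynamics x[k+1] = A @ x[k] + B @ u[k], the minimum
# input energy to drive the state from 0 to x_f in N steps is
# x_f^T W^{-1} x_f, where W is the N-step controllability Gramian.
A = np.array([[0.5, 0.2, 0.0],       # node 1 is influenced by node 2
              [0.0, 0.5, 0.2],       # node 2 is influenced by node 3
              [0.0, 0.0, 0.5]])
B = np.array([[0.0], [0.0], [1.0]])  # node 3 is the single driver node

def control_energy(A, B, x_f, N=20):
    """Gramian-based minimum control energy to reach x_f in N steps."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(N):
        W += Ak @ B @ B.T @ Ak.T    # accumulate the controllability Gramian
        Ak = A @ Ak
    return float(x_f @ np.linalg.solve(W, x_f))

e_near = control_energy(A, B, np.array([0.0, 0.0, 1.0]))  # the driver itself
e_far = control_energy(A, B, np.array([1.0, 0.0, 0.0]))   # two hops upstream
print(e_near < e_far)
```

Steering the network toward a state far "upstream" of the driver node costs far more energy than steering the driver node itself, which is exactly the kind of effort the control-energy metric quantifies.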

Microsoft researchers aim to teach artificial intelligence (AI) software how humor works by training it on an archive of New Yorker cartoons and entries into the magazine's cartoon caption contest. Researcher Dafna Shahaf fed the cartoons and contest entries to the software and taught it to select the funniest choices among captions that make similar jokes, relying partly on crowdsourced input from contract workers via Amazon.com's Mechanical Turk. Ranking jokes was the next step, requiring the researchers to manually describe what was happening in each cartoon, and to categorize its context and anomalies. The AI system is capable of weeding out poor caption-contest submissions and narrowing the list to the funnier ones. New Yorker cartoon editor Bob Mankoff thinks AI could become a useful aid for humorists, once the Microsoft system can select captions with greater accuracy. The Microsoft researchers also want to train computers to invent their own situational jokes. In a wider context, understanding what people find funny and how they come up with jokes is an important area in the field of brain dynamics, which also is essential to AI research.
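The article does not specify how the crowdsourced judgments are aggregated; one standard way to turn pairwise "which caption is funnier" votes into a ranking is a Bradley-Terry fit, sketched here on invented captions and votes:

```python
from collections import defaultdict

# Invented data: (winner, loser) pairs from crowd workers comparing three
# hypothetical captions "A", "B", and "C".
comparisons = [
    ("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B"),
]

def bradley_terry(comparisons, iters=50):
    """Fit strengths s so that P(i beats j) ~ s[i] / (s[i] + s[j])."""
    items = sorted({c for pair in comparisons for c in pair})
    wins, games = defaultdict(int), defaultdict(int)
    for w, l in comparisons:
        wins[w] += 1
        games[frozenset((w, l))] += 1
    s = {c: 1.0 for c in items}
    for _ in range(iters):  # standard minorization-maximization update
        s = {i: wins[i] / sum(games[frozenset((i, j))] / (s[i] + s[j])
                              for j in items if j != i)
             for i in items}
    return sorted(items, key=lambda c: s[c], reverse=True)

ranking = bradley_terry(comparisons)
print(ranking)  # "A" wins most of its matchups and ranks first
```

A model like this only orders captions people have already voted on; the harder step the researchers describe, predicting funniness from the caption and cartoon themselves, is what the manual annotation of context and anomalies feeds.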