The Next Hot Job in Silicon Valley Is for Poets
The Washington Post (04/07/16) Elizabeth Dwoskin

Demand for conversational virtual assistants and other artificial intelligence (AI) products is creating favorable job prospects for writers, poets, comedians, and other people of artistic persuasion in Silicon Valley. The industry is tapping them to engineer the personalities of AI tools so the tools can interact seamlessly with people. AI writers are tasked with imbuing the AIs with natural-seeming conversational capabilities. Writers for virtual assistants typically must concoct a backstory for these assistants and inject personality quirks into even the most mundane operations. For example, Microsoft Cortana's writing team regularly brainstorms how Cortana should respond to emerging issues, such as political questions. A major challenge for AI writers and designers is defining the virtual assistant/human relationship via the bot's personality and approach, especially since personalization is such a highly valued quality. Sense.ly's Cathy Pearl says users can be more accepting of a virtual assistant's mistakes and limitations if it does not project human aspirations and has a sense of humor about them. Conversational AIs are expected to become essential to the workforce; more than 33 percent of workers are forecast to work alongside such technologies by 2019, according to Forrester Data.

These Are the Cities Where Tech Workers Live Largest
USA Today (04/07/16) John Shinal

Annual data released by the U.S. Bureau of Labor Statistics demonstrates the value of an education in the science, technology, engineering, or math fields. Workers employed in computer and math occupations in the cities with the most technology employees earned yearly salaries about 50 percent to 75 percent higher than the overall workforce. Seattle tech workers, for example, had a mean salary of $108,350, or 78 percent more than the $61,000 earned by all workers there. That was the highest tech-worker premium in the 10 largest hubs, followed by Dallas-Fort Worth, Houston, and Austin. The same is true in the burgeoning tech hub of Oakland, CA, where workers in computer and math occupations were paid 70 percent more. Workers in computer and math occupations in Los Angeles, Philadelphia, San Jose, and San Francisco all earned more than 60 percent more than their non-tech counterparts. Although Washington, D.C., is among the largest tech-employing regions, its tech workers had the smallest salary differential, at 54 percent, likely due to the large number of federal government workers. Among tech occupations, software developers and systems analysts were the most numerous in nearly all of the largest tech hubs, outnumbering computer programmers, network and database administrators, computer research scientists, and computer-support specialists.
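The salary "premium" the article cites is the standard percentage-difference calculation. A minimal sketch, using the Seattle figures given above (the function name is ours, not the Bureau's):

```python
def premium(tech_salary: float, overall_salary: float) -> float:
    """Percent by which mean tech pay exceeds the overall mean wage."""
    return (tech_salary / overall_salary - 1) * 100

# Seattle figures from the article: $108,350 mean tech salary vs. $61,000 overall.
print(f"Seattle tech premium: {premium(108_350, 61_000):.0f}%")  # prints "Seattle tech premium: 78%"
```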

Three teams of neuroscientists and computer scientists have been set up under a five-year U.S. Intelligence Advanced Research Projects Activity (IARPA) program called Machine Intelligence from Cortical Networks (MICrONS) to determine how the brain performs visual identification and how to mimic this process in machines. "We want to revolutionize machine learning by reverse-engineering the algorithms and computations of the brain," says IARPA's Jacob Vogelstein. Each team is simulating a piece of the cerebral cortex that processes vision while also developing algorithms based on the knowledge they gain. By this summer, each algorithm will be provided with an example of an unfamiliar item and then must pick out instances of it from among thousands of images in an unlabeled database. Each team is investigating a different analysis-by-synthesis method based on a theoretical concept of the brain's mechanism for vision. All three teams will track neuronal activity in a cube of brain tissue and plot out a neuron map using different techniques; from that they will try to extract basic rules governing the circuit, feed those rules into a model, and quantify the model's similarity to an actual brain. The goal of the MICrONS project is a map of a cubic millimeter of cortex that will yield new insights about the brain. A key challenge will be developing machine-learning tools to analyze the vast quantity of data generated by the cortex sample.

A portrait titled "The Next Rembrandt," unveiled on Tuesday, is not the work of the renowned Dutch painter, but the end product of an 18-month project that brought together data scientists, developers, engineers, and art historians. The project team, which included researchers from Microsoft, the Delft University of Technology, the Mauritshuis in The Hague, and the Rembrandt House Museum in Amsterdam, was tasked with creating a three-dimensional printed painting in the style of Rembrandt but generated by software. The new painting represents a distillation of 168,263 fragments from Rembrandt paintings into more than 148 million pixels. The development team first designed a software system capable of understanding the famed artist according to his use of geometry, composition, and painting materials. Their next step was to utilize a facial-recognition algorithm to identify and classify the most typical geometric patterns used to paint human features. Project originator Bas Korsten emphasizes the initiative is not an attempt to create a new Rembrandt. "We are creating something new from his work," he says. "Only Rembrandt could create a Rembrandt." Korsten hopes the project leads to a conversation about art and algorithms. "While no one will claim that Rembrandt can be reduced to an algorithm, this technique offers an opportunity to test your own ideas about his paintings in concrete, visual form," says art historian Gary Schwartz.

How Data Mining Reveals the Hidden Evolution of American Automobiles
Technology Review (04/06/16)

Erik Gjesfjeld at the University of California, Los Angeles, and colleagues have analyzed the evolution of U.S. automobiles from their invention in the 19th century to the present, and their results provide unprecedented insight into the forces at work in automobile evolution. Scientists and anthropologists have yet to come to grips with the role of evolution in technological development because nobody agrees on how to measure change in systems that have no obvious analogy to familiar ideas of genetics and sexual reproduction. The team's approach was to treat the birth and death of car models as a kind of fossil record. Using eBay Motors' extensive database of the make, model, and production years of U.S. cars and trucks from 1896 to 2014, the team plotted the data and analyzed it using a birth-death Markov chain Monte Carlo algorithm. The algorithm simulates the process of evolution among individuals to generate a curve showing the rate of origin and extinction of species. The team ran the model through 10 million generations of individuals to produce a curve that matches the history of automobile production, and they say the approach can be applied to other technologies.
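The forward process behind such an analysis can be sketched in a few lines. This is a minimal, illustrative birth-death simulation in which car models "originate" and are "discontinued" at per-lineage rates; the team's MCMC machinery infers such rates from the eBay Motors data, whereas this sketch only generates the process, and the rates below are assumptions, not the study's estimates:

```python
import random

def simulate_birth_death(birth_rate=0.3, death_rate=0.1,
                         n0=10, t_end=50.0, seed=42):
    """Gillespie-style simulation of a birth-death process.

    Returns a list of (time, count) pairs: the number of extant
    "model lineages" over time. All parameters are illustrative.
    """
    rng = random.Random(seed)
    t, n = 0.0, n0
    history = [(t, n)]
    while t < t_end and n > 0:
        total_rate = (birth_rate + death_rate) * n  # total event rate scales with lineages
        t += rng.expovariate(total_rate)            # exponential waiting time to next event
        if rng.random() < birth_rate / (birth_rate + death_rate):
            n += 1                                  # a new model originates
        else:
            n -= 1                                  # an existing model goes extinct
        history.append((t, n))
    return history

trajectory = simulate_birth_death()
print(f"{len(trajectory)} events; final count {trajectory[-1][1]}")
```

With birth rate above death rate, the lineage count tends to grow, mirroring the expansion phase of a technology; inference would run this logic in reverse to estimate the rates that best explain the observed production record.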

Massachusetts Institute of Technology (MIT) researchers have developed a new strategy for preserving superposition in quantum devices built from synthetic diamonds. MIT professor Paola Cappellaro and former postdoctoral student Masashi Hirose describe a feedback-control system for maintaining quantum superposition without requiring measurement. A nitrogen-vacancy (NV) center in a diamond can represent a quantum bit (qubit) while dispensing with ion- or atom-trapping hardware required by other approaches, and is a natural light emitter as well. The NV center's electronic spin is controlled using the spin state of the nitrogen nucleus, via a combination of microwave and radio-frequency radiation bursts. Entanglement of the nitrogen nucleus and the NV center enabled by a second microwave dose ensures any computational errors will be reflected in the nucleus' spin, and once the computation is conducted, a third microwave dose disentangles the nucleus and the NV center. A final series of calibrated microwave exposures corrects any error that creeps in during computation. The researchers demonstrated that an NV-center qubit using the feedback-control system would remain in superposition about 1,000 times as long as it would without it. "Applications of this technique will appear soon, as demonstrations of new protocols applied to quantum metrology and quantum computing," predicts Ulm University professor Fedor Jelezko.

Lawrence Livermore National Laboratory researchers will determine how IBM Research's low-power, brain-inspired TrueNorth chips fare in extensive experiments. Devised in 2010 and unveiled in 2014, the IBM/Cornell University-developed chips can each consume as little as 70 milliwatts while bundling together 5.4 billion transistors. IBM has built a 16-chip array with new processors that can run on as little as 2.5 watts to demonstrate the system can scale up the approach to bigger and bigger systems, says IBM's Dharmendra Modha, recipient of the 2009 ACM Gordon Bell Prize. Under a $1-million contract with IBM, Livermore will put the TrueNorth array through its paces in areas such as cybersecurity and large-scale physical simulation. Modha stresses the neuromorphic chip architecture is designed to complement, not replace, traditional computers, and TrueNorth's speed and efficiency are only apparent in specific applications, such as pattern recognition. Brian Van Essen with Livermore's Center for Applied Scientific Computing says incorporating neuromorphic chips in a heterogeneous computing scheme "is definitely one potential path" to exascale computing. He notes a low-power technology such as TrueNorth would be a good candidate for powering a system designed to monitor the progress of a simulation.

Researchers from across the U.S. are working with the U.S. National Science Foundation to develop a range of new sensors and other tools to gauge the emotional responses and behavior of children with developmental disorders. For example, Massachusetts Institute of Technology researcher Rosalind Picard is developing wearable sensors to measure the subtle changes that naturally occur in the body during social interactions. The research is focused on children with autism and other nonverbal learning disorders that make it difficult for them to understand and communicate their emotions. For example, Picard's team has developed StoryScape, an open and customizable creative learning platform that creates animated storybooks that can interact with the physical world. Meanwhile, Georgia Institute of Technology researcher Jim Rehg is developing methods to monitor subtle behaviors, such as eye movements, using wearable cameras. Rehg's research group is testing a range of sensors on children, some of whom are on the autism spectrum, with the goal of designing new tools for autism research and more effective treatment.

Lifelogging and Fiction Can Teach Computers to See How We See
New Scientist (04/06/16) Aviva Rutkin

Researchers are attempting to teach artificial intelligences (AIs) to perceive the world as people do, giving them a better understanding of humans. One effort in this vein is the University of Pennsylvania's EgoNet, a project to build a neural network system that tries to predict what objects would interest people. The project involves volunteers who wore GoPro video cameras and annotated the first-person footage they shot day by day and frame by frame. The researchers fed this footage into a computer and repeatedly asked EgoNet to describe the activities it depicted, training it to anticipate what the person was likely to touch or study more intimately. One network seeks objects likely to stand out due to their hue, position in the scene, or other attributes, while another network estimates how each object might relate to that person. Meanwhile, a Stanford University project called Augur is trying to give AIs a similar ability by training them on 1.8 billion words of fiction derived from the Wattpad online writing community. When Augur recognizes an object in a scene, it sifts through what it has read in Wattpad to guess what a person might do with it. Potential applications for systems such as EgoNet and Augur could include behavioral health diagnosis, screening calls for busy people, and other assistive uses.
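The two-network design described above, one stream scoring visual salience and another scoring person-relevance, can be illustrated with a toy ranking. This is a hypothetical sketch, not EgoNet's actual architecture: the real system uses learned convolutional networks, while the names, weights, and scores here are invented for illustration:

```python
def rank_objects(objects, w_salience=0.5, w_relevance=0.5):
    """Rank objects in a frame by a weighted combination of two scores.

    objects: list of dicts with 'name', 'salience', 'relevance' in [0, 1].
    In EgoNet these scores would come from two trained networks; here
    they are hand-assigned placeholders.
    """
    scored = [(w_salience * o["salience"] + w_relevance * o["relevance"],
               o["name"]) for o in objects]
    return [name for _, name in sorted(scored, reverse=True)]

frame = [
    {"name": "coffee mug", "salience": 0.8, "relevance": 0.9},
    {"name": "wall clock", "salience": 0.6, "relevance": 0.2},
    {"name": "keyboard",   "salience": 0.4, "relevance": 0.7},
]
print(rank_objects(frame))  # the mug ranks first: both salient and likely to be touched
```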

Improving Robot-Human Collaboration With the Help of IBM
ASU News (04/06/16) Monique Clement

Arizona State University (ASU) graduate student Tathagata Chakraborti's automated planning research could help address some of the problems that may arise when humans and autonomous systems interact. Chakraborti has investigated how autonomous agents sharing human workspaces can modify their behavior and respect human intentions, and he has tested planning algorithms for human-robot collaboration on robots in ASU's Yochan Lab. Chakraborti's research efforts have earned him an IBM Ph.D. Fellowship Award, and he will participate in an internship in May at IBM's Cognitive Algorithms Department, where he will join the Artificial Intelligence and Optimization Group and work on symbiotic human-AI systems. The next stage of his research at ASU is to work on the human side of the human-AI team by validating theoretical results on biological data to establish psychological or neuroscientific connections to how humans respond to robotic teammates. "This is vital in order for AI algorithms to move from the drawing board to actual integration with human workflows," Chakraborti says. "I believe that my research will contribute significantly to the progress of standalone automated planners toward addressing the requirements of the human component, and provide much-needed guidance for principled and well-informed design of intelligent symbiotic systems of the future."

Imec is expanding into new territory with its latest computer chip technology research projects. The Belgian research organization focuses on improving microprocessors, but it has increasingly partnered with companies outside the traditional computing industry. Imec is now working to speed the process of detecting cancer with an image sensor by using chips that work in parallel. Researchers also have developed a "hyperspectral" image sensor that captures as many as 150 precisely chosen colors, which could be used to judge the precise freshness of fruit being sent to grocery stores. Imec also wants to give contact lenses the ability to autofocus by enabling them to detect where the wearer is trying to focus and letting in light through only the appropriate part of the lens. In addition, the research center plans to add a thin, transparent film on top of solar panels, which would enable them to convert sunlight more efficiently into electrical power. Imec also is working on solid-state batteries that last longer and hold more energy; phones, laptops, electric cars, and solar power would benefit from the improvement in energy storage.

University of Cambridge researchers have found a way to overcome one of the main obstacles to implementing measurement-device-independent quantum key distribution (MDI-QKD), a quantum cryptography protocol in which the sender and receiver transmit their photons to a central node, which passes them through a beam splitter and measures them. The researchers say the advance could lead to a usable "unbreakable" method for sending sensitive information hidden inside light particles. MDI-QKD has been experimentally demonstrated before, but its information transfer rates were too slow for real-world application because of the difficulty of creating indistinguishable photons from different lasers. The new method overcomes this problem with pulsed laser seeding, a technique in which one laser beam injects photons into another. Seeding reduces the amount of "time jitter" in the pulses, so much shorter pulses can be used, making them more visible to the central node. The researchers say pulsed laser seeding in an MDI-QKD system would enable key distribution rates as high as 1 Mbps, an improvement of two to six orders of magnitude over previous efforts.

The Robot Will See You Now: U of T Experts on the Revolution of Artificial Intelligence in Medicine
U of T News (04/04/16) Nina Haikara

The University of Toronto (U of T) hosted a panel discussion Tuesday on the ethical use of artificial intelligence (AI) in medicine. Experts in computer science and medicine explored the issues of privacy, accuracy, and accountability during the session. Integrating AI successfully into the nuanced setting of patient and doctor interaction and communication creates intriguing challenges for researchers. Natural language expert Graeme Hirst says a medical AI would have to talk to patients in language used in the real world and deal with all issues of complex conversation and health communication. Hirst has developed methods for detecting cognitive decline, including Alzheimer's disease, by examining linguistic changes in a person's writing over time. U of T professor and panelist Michael Brudno oversees a research group that spans both computational biology and an emerging subfield of computational medicine. "Computational biology applies to a much broader set of disciplines, from how to raise better cattle, to forests that are more heat-resistant, because of global warming," he notes. "Computational medicine is about the application to patients, and to human health." Brudno predicts AI will help to greatly streamline doctors' workflows, but will not replace doctors.