A new report from the White House discusses both the risks and opportunities inherent in emerging big data technology. For example, "properly harnessed, big data can be a tool for overcoming long-standing bias and rooting out discrimination," write U.S. chief technology officer (CTO) Megan Smith, deputy CTO DJ Patil, and Domestic Policy Council director Cecilia Munoz. The report's recommendations include additional research on mitigating algorithmic discrimination, building fairness-promoting systems, and developing strong data ethics frameworks. The Networking and Information Technology Research and Development Program and the U.S. National Science Foundation are investigating how to encourage researchers to explore these issues. The report also advises designers to give algorithmic systems built-in transparency and accountability mechanisms so people can correct inaccurate data and appeal data-based decisions. More research and development into algorithmic auditing and testing is recommended as well. Although data itself is essentially neutral, programmers must decide how much importance to ascribe to data inputs in algorithmic systems, and those decisions can lead to biased outcomes, the report warns. Nevertheless, the White House acknowledges "big data is here to stay," and the question now is whether it will be used "to advance civil rights and opportunity, or to undermine them."

ACM on Wednesday announced Oxford e-Research Center visiting professor and ACM Fellow Ron Perrott has received the ACM Distinguished Service Award for "providing vision and leadership in high-performance computing and e-science, championing new initiatives, and advocating collaboration among interested groups at both national and international levels." Perrott's efforts to further parallel processing include chairing Britain's High Performance Computing Strategy Committee and co-founding both Euro-Par, the premier European conference on parallel computing, and the ACM Special Interest Group on High Performance Computing (SIGHPC). "Perrott has been consistently at the forefront of the promotion of high-performance computing since he worked at the [U.S. National Aeronautics and Space Administration's] Ames Research Center in the 1970s on the ILLIAC IV," ACM notes. Perrott recalls working on the machine's languages and compilers for 18 months; the ILLIAC IV's design featured a high degree of parallelism, with up to 256 processors enabling the system to work on large datasets in what would later be called vector processing. ACM lauds Perrott as an "effective advocate for high-performance and grid computing in Europe since the 1970s, working tirelessly and successfully with academic, governmental, and industrial groups to convince them of the importance of developing shared resources for high-performance computing at both national and regional levels."

There is little emphasis on the philosophical ramifications of artificial intelligence (AI) research and development at AI conferences and other scientific forums, with most researchers preferring to focus on technical achievement, writes Duke University professor Vincent Conitzer. He says this tendency can be partly traced to AI scientists' push to have their work respected by peers. Bringing attention to philosophical issues in AI are experts such as Nick Bostrom, director of Oxford University's Future of Humanity Institute. Bostrom is concerned about an "intelligence explosion," in which humans build machines that exceed human intelligence, which in turn build something even more intelligent, leading to ever-escalating generations of smarter systems. Another factor creating a disconnect between mainstream AI researchers and those worried about the future has been inaccurate predictions of how progress in the field would unfold, even in the short term. Concerns about AI also are being raised outside the discipline, with the American Association for the Advancement of Science calling for 10 percent of the AI research budget to be channeled into examining its societal effects. Conitzer says it is in the AI community's interest to get involved in this debate, lest the discussion be less well informed. Currently absent is a way to engage with the more opaque long-term philosophical issues, but AI's ability to make ethical decisions is one subject in which immediate momentum appears possible.

The Future Interfaces Group at Carnegie Mellon University's Human-Computer Interaction Institute (HCII) has developed SkinTrack, a wearable technology that enables continuous touch tracking on the user's hands and arms. Whereas earlier "skin to screen" designs have used flexible overlays, interactive textiles, and projector/camera combinations, SkinTrack requires users to wear a ring that transmits a low-energy, high-frequency signal through the skin when the finger touches or nears the skin surface. "SkinTrack makes it possible to move interactions from the screen onto the arm, providing a much larger interface," says HCII professor Chris Harrison. The source of the electromagnetic waves can be localized via electrodes integrated into a smartwatch strap, which pick up the waves' varying phases. The researchers found they could determine when the finger was touching the skin with 99-percent accuracy. They also could resolve the location of the touches with an average error of 7.6 millimeters. Demonstrated uses of SkinTrack include serving as a game controller, a drawing implement, and a tool for scrolling through lists on a smartwatch. The researchers will present the technology next week at the ACM CHI 2016 conference in San Jose, CA.
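The localization step described above, recovering the finger's position from the differing signal phases seen at each strap electrode, can be illustrated with a toy model. Everything below (electrode layout, effective wavelength, grid search) is invented for the sketch and is not the published SkinTrack algorithm:

```python
import math

WAVELENGTH_MM = 400.0  # hypothetical effective wavelength of the ring's signal

# Hypothetical electrode positions (mm) on a smartwatch strap.
ELECTRODES = [(0.0, 0.0), (40.0, 0.0), (0.0, 15.0), (40.0, 15.0)]

def phase_at(electrode, source):
    """Phase (radians) of the signal arriving at an electrode from a source."""
    return (2 * math.pi * math.dist(electrode, source) / WAVELENGTH_MM) % (2 * math.pi)

def phase_diffs(phases):
    """Phases relative to electrode 0, removing the unknown transmit phase."""
    return [(p - phases[0]) % (2 * math.pi) for p in phases[1:]]

def locate(measured_phases):
    """Grid-search the skin surface for the point whose predicted phase
    differences best match the measured ones."""
    target = phase_diffs(measured_phases)
    best, best_err = None, float("inf")
    for xi in range(-20, 121):          # 1-mm grid over a patch of forearm
        for yi in range(-20, 61):
            pred = phase_diffs([phase_at(e, (float(xi), float(yi)))
                                for e in ELECTRODES])
            # Wrap-aware absolute phase error summed over electrode pairs.
            err = sum(min(abs(a - b), 2 * math.pi - abs(a - b))
                      for a, b in zip(pred, target))
            if err < best_err:
                best, best_err = (float(xi), float(yi)), err
    return best

# Simulate a touch at (70, 25) mm and recover it from the four phases.
true_pos = (70.0, 25.0)
estimate = locate([phase_at(e, true_pos) for e in ELECTRODES])
```

On this noise-free synthetic data the search recovers the touch point to the grid's 1-mm resolution; the real system must additionally cope with measurement noise, tissue variation, and continuous tracking.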

The White House released a report this week examining the problems associated with poorly designed systems that increasingly are being used in automated decision-making. The report warns algorithms may have so much power in day-to-day life that it may be important to develop ethical frameworks for designing automated computer systems. In addition, the report says automated computer systems may need to be transparent for testing and auditing. Meanwhile, a second effort has been studying the future of algorithms through a series of four workshops held across the U.S. to examine artificial intelligence's (AI) impact on society. "We're increasingly relying on AI to advise decisions and operate physical and virtual machinery--adding to the challenge of predicting and controlling how complex technologies will behave," says the U.S. Federal Trade Commission's Ed Felten. The federal government will produce an AI report following the workshops, which begin in Seattle and continue in Washington, D.C., Pittsburgh, and New York City through July. The most pressing concern is that poorly designed algorithmic systems may inadvertently discriminate. The report notes a system also could rely on a poorly designed matching process or could inadvertently restrict the flow of information.

Siri's Creators Say They've Made Something Better That Will Take Care of Everything for You
The Washington Post (05/04/16) Elizabeth Dwoskin

Viv, a virtual assistant from the creators of Siri that promises smoother, more natural interaction with users and online services, will be publicly demonstrated for the first time on Monday. Apple, Google, Facebook, Amazon.com, and Microsoft are all competing in next-generation virtual-assistant software development to see whose technology will become the ultimate mediator between customers and businesses. "It's about taking the way that humans have naturally interacted with each other for thousands of years and applying that to the way they interact with services," says Viv Labs CEO Dag Kittlaus. Most modern virtual assistants are constrained by their comprehension of a limited query vocabulary, and Viv differs in its aim to emulate a human's spontaneity and knowledge base, says Allen Institute for Artificial Intelligence CEO Oren Etzioni. Viv can tap into data held by dozens of partner businesses, including Grubhub, Uber, and SeatGuru, to understand customer habits, preferences, and other factors to order movie tickets, food, and transportation using a more seamless conversational interface. Viv also may have an advantage in opening the assistant technology to third parties, bypassing primary app gatekeepers such as Apple and Google. Kittlaus says a major challenge will be finding a distribution model to obtain as many Viv users as possible, noting "our goal is ubiquity."

A new draft publication from the U.S. National Institute of Standards and Technology (NIST) proposes incorporating proven security design principles and concepts into cyber-physical systems at every step, from conception to deployment. NIST Special Publication 800-160, based on the international ISO/IEC/IEEE Standard 15288 for Systems and Software Engineering, recommends a comprehensive, ground-up approach to baking in security. NIST fellow Ron Ross says current procedures for organizations--purchasing commercial components and then tacking on security measures--"do not go far enough in reducing and managing complexity, developing sound security architectures, and applying fundamental security design principles." The draft publication applies security precepts to all of the ISO/IEC/IEEE standard's listed technical processes, as well as to crucial non-engineering processes, such as management and support services. The recommended strategy begins with mission or business owners "valuing" their assets and then applies security design principles and systems engineering processes to develop suitable security requirements, architecture, and design. "The systems security engineering considerations...give organizations the capability to strengthen their systems against cyberattacks, limit the damage from those attacks if they occur, and make their systems survivable," Ross says. Consultant Robert Bigman predicts the recommendations "will become the de facto standard for integrating 'trustability' into the design, development, deployment, and operation of systems used both within government and commercial critical infrastructure industries."

A just-issued report commissioned by the U.S. National Science Foundation (NSF) and conducted by the National Academies of Sciences, Engineering, and Medicine studies priorities and associated trade-offs for advanced computing investments and strategy. The Future Directions for NSF Advanced Computing Infrastructure to Support U.S. Science and Engineering in 2017-2020 report drew on community input from more than 60 individuals, research groups, and organizations. NSF's 2013 request for the study prompted the Computer Science and Telecommunications Board to organize a committee to make recommendations on establishing a framework to position the U.S. for continued science and engineering leadership, guarantee resources fulfill community needs, help the scientific community keep pace with the revolution in computing, and sustain the advanced computing infrastructure. NSF has funded the enablement of a cyberinfrastructure ecosystem combining superfast and secure networks, cutting-edge parallel computing, efficient software, state-of-the-art scientific instruments, and massive datasets with expert staff across the country. The agency requested $227 million for its 2016 advanced cyberinfrastructure budget, up from $211 million in 2014. "[The report's] timing and content give substance and urgency to NSF's role and plans in the National Strategic Computing Initiative," says NSF's Irene Qualters.

A Secret Tool to Catch the Next VW-Style Emissions Cheat
Technology Review (05/03/16) David Talbot

Researchers are developing software that could quickly detect when a car's emissions controls have been disabled. Armin Wasicek, a postdoctoral researcher at the University of California, Berkeley, is leading the development of the software analysis tool. He says the software can infer problems with emissions controls in just hours by monitoring vehicle data, with no need to sample the exhaust or directly detect that emissions controls were not engaged. The team used the approach to detect when an engine control chip was altered to boost horsepower, but the researchers say the method also could pick up disabled emissions controls. The software captures data on what a given car's operating parameters look like during normal operation, and then compares it to what is actually happening. In tests, the technology detected a hacked engine chip in a real car, and the researchers believe the concept could be broadly applicable. "We could put this inside the car's computer system and automate the whole thing," Wasicek notes. He says reports on anomalies and potential causes could be streamed from cars or stored for later downloading, and the software would be able to detect anomalies in modified or hacked cars.
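The core idea, profiling what normal operating data looks like and flagging departures from it, can be sketched in a few lines. The parameter names, numbers, and z-score rule below are illustrative assumptions, not the Berkeley team's actual method:

```python
import math

def fit_profile(normal_logs):
    """Learn mean and standard deviation for each parameter from logs of
    normal operation. Each log entry is a dict of sensor readings."""
    profile = {}
    for k in normal_logs[0]:
        vals = [entry[k] for entry in normal_logs]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        profile[k] = (mean, math.sqrt(var))
    return profile

def anomalies(entry, profile, threshold=3.0):
    """Return the parameters lying more than `threshold` std-devs from normal."""
    flagged = []
    for k, (mean, std) in profile.items():
        if std > 0 and abs(entry[k] - mean) / std > threshold:
            flagged.append(k)
    return flagged

# Hypothetical logs from normal cruising (rpm, boost pressure, EGR valve %).
normal = [{"rpm": 2000 + 10 * i, "boost_kpa": 120 + i, "egr_pct": 12 + 0.1 * i}
          for i in range(20)]
profile = fit_profile(normal)

# A later entry where exhaust-gas recirculation has been silently disabled:
suspect = {"rpm": 2100, "boost_kpa": 130, "egr_pct": 0.0}
```

Here `anomalies(suspect, profile)` flags only `egr_pct`, since the other readings stay within their normal ranges; a production tool would use far richer models of parameter correlations over time.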

The U.S. National Aeronautics and Space Administration last week delivered a version of its six-foot-tall, 300-pound humanoid Valkyrie robot to the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL). The Valkyrie robot had a disappointing showing at the 2013 U.S. Defense Advanced Research Projects Agency's Robotics Challenge trials and did not qualify for the finals. The CSAIL team will work to improve the robot's capabilities and help get it ready to assist with space missions. The researchers will program the robot to autonomously perform a variety of tasks that would enable it to help or even replace astronauts on missions. The robot collects an enormous amount of data, which makes controlling it in real time very challenging. CSAIL investigators are hoping artificial intelligence and deep-learning algorithms will teach the robot to better navigate the world as it gains experience. "If we can integrate the autonomy work with our planning and control algorithms, it could result in an unprecedented level of autonomous capabilities for a humanoid robot," says CSAIL principal investigator Russ Tedrake.

Researchers at Google and the University of Michigan's (U-M) Flint and Ann Arbor campuses have teamed up to develop a smartphone app and other digital tools that can help Flint, MI, residents and officials manage the water crisis. The app and other tools will help predict where lead levels will be highest in the city's water, and will pull together information and resources designed to make the crisis easier to navigate for those affected. A student team at U-M Flint already has developed a prototype smartphone app, and Google and U-M Ann Arbor's Michigan Data Science Team will add mapping features that use predictive analytics. The Android app is slated for rollout this summer. Initial work by the research team has shown some success at predicting which homes and neighborhoods have a high risk of lead contamination. In the coming months, they will apply predictive algorithms and machine-learning techniques to data from a wide variety of sources including Google, the state of Michigan, and the city of Flint. "Finding the best way to put resources close to where high lead levels are is a big part of managing this crisis, and it's the kind of problem that analytics can solve," says U-M Flint professor Mark Allison.

Not So Safe: Security Software Can Put Computers at Risk
Concordia University (05/04/16) Clea Desjardins

Concordia University researchers have found security software might actually make online computing less safe, according to a study examining 14 commonly used software programs that claim to make computers safer by protecting data, blocking out viruses, or shielding users from questionable content on the Internet. "Out of the products we analyzed, we found that all of them lower the level of security normally provided by current browsers, and often bring serious security vulnerabilities," says Concordia Ph.D. student Xavier de Carne de Carnavalet. The problem stems from how security applications act as gatekeepers, filtering dangerous or unwanted elements by inspecting secure websites before they reach the browser. Normally, the browser must check the certificate delivered by a website and verify it has been issued by a proper Certification Authority (CA). However, security products make the computer "think" they are a fully entitled CA, enabling them to trick browsers into trusting any certificate the products issue. The research is important for everyday computer users, as well as for the companies purchasing the software programs themselves, says de Carne de Carnavalet. "We also hope that our work will bring more awareness among users when choosing a security suite or software to protect their children's online activities," he says.
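The interception mechanism the study describes can be modeled in a few lines: browsers trust any certificate chaining to a root in their trust store, and these products work by installing their own root and re-signing every site they inspect. The toy "certificates" below are a deliberate simplification (real validation also checks signatures, hostnames, and expiry), and all names are invented:

```python
# Toy trust model: a "certificate" is just a subject/issuer pair, and a
# browser accepts a site certificate only if its issuer is a trusted root.
BROWSER_ROOTS = {"GoodPublicCA"}

def browser_trusts(cert, roots):
    return cert["issuer"] in roots

# Normally the browser itself verifies the website's certificate chain.
site_cert = {"subject": "bank.example", "issuer": "GoodPublicCA"}

# A TLS-inspecting security product installs its own root certificate...
roots_after_install = BROWSER_ROOTS | {"FilterProxyRoot"}

# ...and re-signs every site it inspects, so the browser sees this instead:
proxied_cert = {"subject": "bank.example", "issuer": "FilterProxyRoot"}

# The browser now accepts anything the proxy signs. All real validation is
# delegated to the proxy, so if the proxy's checks are weaker than the
# browser's, forged upstream certificates can slip through unnoticed.
```

This is the structural weakness the researchers measured: once the proxy's root is installed, overall security is capped by the quality of the proxy's own certificate validation, which the study found was often worse than the browser's.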

The U.S. National Science Foundation (NSF), Infosys Foundation USA, and DonorsChoose.org have partnered to launch the Computer Science (CS) for All Community Giving program, which will provide up to 2,000 teachers with professional development in CS education. The program will enable public school teachers in grades 6-12 to create project requests to attend CS professional development programs. Local communities can sponsor the requests, and Infosys Foundation USA will match the community-funded donations. Teachers who create project requests can select one of the professional development programs associated with the initiative. The programs include Exploring Computer Science, CS Principles, and Bootstrap, all of which were developed with support from NSF. NSF and Infosys Foundation USA have committed a combined $6 million to define an end-to-end approach to CS education, develop new evidence-based curricula, and devise sustainable funding mechanisms to ensure teachers are trained effectively in CS instruction. The program has the potential to affect up to 60,000 students in the first academic year, especially those in districts with significant funding challenges and limited or no access to CS education, according to Infosys Foundation USA. "Investing in CS professional development and training the teachers creates a multiplier effect that expands the learning opportunities for our students, especially in under-served communities," says Infosys Foundation USA chairperson Vandana Sikka.