Lifelong Learning in the Real World, Drew Bagnell and Sidd Srinivasa
We will enable embedded systems to continuously learn and improve their performance. We focus specifically on systems that interact physically with the world, where lifelong learning techniques have rarely been applied due to the lack of robust platforms for continuous learning and the need for new algorithmic development.
We propose to build upon our recent advances in imitation learning, reinforcement learning, and planning/optimization in high-dimensional spaces while exploiting two key features of our domains of interest: they are populated by people who provide expert examples of behavior, and we have a platform capable of lifelong improvement.
Our research will enable the continuous improvement of embedded systems, allowing robots to approach human dexterity and capability by leveraging human expertise and refining behavior through lifelong experimentation.

Perception Techniques for Behavior and Environment Understanding Using First-Person Sensing, Takeo Kanade, Martial Hebert
The goal of this project is to develop perception functions for understanding the behavior of people and their interactions with the environment in support of ISTC application domains. We propose to organize the perception functions around the novel concept of first person sensing. We propose to adapt and develop algorithms for recognition in images and video in support of this concept.

Personal Navigation for New Shopping Experiences, Al Kelly, Jeyanandh Paramesh, Tamal Mukherjee, Gary Fedder
Personal navigation is the pedestrian equivalent of your car's GPS. To make it work indoors, GPS must be replaced by something that registers data from wearable sensors with a map. Cameras are small and cheap, require no infrastructure, and can be registered to image databases served over the internet. The project will demonstrate how existing human inertial navigation can be augmented with camera-based place recognition and inter-shoe ranging, in order to provide the basic technology needed to help people navigate large indoor spaces such as shopping malls and airports.
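As an illustration of how a drifting inertial estimate might be corrected by a camera fix, the sketch below (with invented step lengths, landmark position, and blending weight; not the project's actual filter) fuses simple dead reckoning with a map-registered place-recognition fix:

```python
import math

def dead_reckon(pose, step_length, heading):
    """Advance an (x, y) pose by one detected step along the current heading."""
    x, y = pose
    return (x + step_length * math.cos(heading),
            y + step_length * math.sin(heading))

def fuse_place_fix(pose, landmark_xy, weight=0.5):
    """Blend the drifting inertial estimate toward a map-registered camera
    fix (e.g. a recognized storefront), reducing accumulated error."""
    x, y = pose
    lx, ly = landmark_xy
    return (x + weight * (lx - x), y + weight * (ly - y))

# Walk four steps east, then correct against a recognized landmark at (3.8, 0.2).
pose = (0.0, 0.0)
for _ in range(4):
    pose = dead_reckon(pose, step_length=1.0, heading=0.0)
pose = fuse_place_fix(pose, (3.8, 0.2))
print(pose)  # the estimate is pulled toward the camera fix
```

In a real system the blending weight would come from the relative uncertainty of the inertial and visual estimates (as in a Kalman filter) rather than a fixed constant.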

Never-Ending Web-Scale Massively Parallel Machine Learning, Tom Mitchell, Geoff Gordon, Carlos Guestrin
We propose research to scale advanced machine learning algorithms to web-scale data using new parallel implementations. This research builds on our ongoing work on never-ending language learning (NELL) and on a parallelizable abstraction for machine learning algorithms (GraphLab). Specifically, we propose new research that will integrate these ongoing efforts to (1) develop new web-scale machine learning algorithms for learning latent data abstractions and for semi-supervised, coupled learning of thousands of different functions, (2) develop parallel implementations of these machine learning algorithms based on our GraphLab parallelizable programming abstraction, and (3) demonstrate the use of these parallelized algorithms for web-scale machine learning on text and image data in NELL.
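The GraphLab abstraction expresses computation as per-vertex update functions that read only a vertex's neighborhood and write its own value. The toy loop below is a synchronous, single-threaded stand-in for that idea (a made-up three-node graph and a PageRank-style update, not GraphLab's actual API):

```python
# Toy "vertex program" loop in the spirit of the GraphLab abstraction:
# each update reads only the vertex's in-neighborhood and writes its own value.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # out-edges
in_edges = {v: [u for u in graph if v in graph[u]] for v in graph}

def pagerank_update(v, rank):
    """One vertex update: gather rank from in-neighbors, apply locally."""
    total = sum(rank[u] / len(graph[u]) for u in in_edges[v])
    return 0.15 + 0.85 * total

rank = {v: 1.0 for v in graph}
for _ in range(30):  # a GraphLab scheduler would run these updates in parallel
    rank = {v: pagerank_update(v, rank) for v in graph}

print(sorted(rank, key=rank.get, reverse=True))
```

Because each update touches only a local scope, a runtime like GraphLab can schedule many such updates concurrently while preserving consistency.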

Common Configurable Accelerator and Memory System Designs to Enable Effective Deeply Embedded SoCs in Electric Automotive Systems and Embedded Systems for Retail, Onur Mutlu, Illah Nourbakhsh
Our goal in this project is to design specialized computational accelerators/cores and specialized memory/storage subsystems that can drastically improve the efficiency of key embedded algorithms, enabling their realization on embedded SoCs. To this end, our first step is to devise specific accelerators for bottleneck inference, machine learning, filtering, analysis, and perception tasks in heterogeneous battery-based automotive systems and embedded retail systems, as part of two projects: the ChargeCar project (performing on-line machine learning and optimization of electric vehicle energy management to model battery state of health, accurately forecast battery state of charge, enhance system efficiency, and improve the driving experience) and smart retail systems that enhance the shopping experience by adapting to user behavior.

Realtime 3D Reconstruction of Realworld Scenes, Yaser Sheikh, James C. Hoe
Camera-enabled moving platforms, like vehicles, domestic robots, wearable systems, and cellular phones, are being introduced in staggering numbers into our social environment. These cameras observe highly cluttered and dynamic scenes: pedestrians, cyclists, and vehicles from automobile platforms; gestures and activity from domestic robotic platforms like Zia; and interacting crowds of people from robots in the mall. To safely and usefully co-habit real environments with people, each system will require the ability to understand its time-varying 3D environment in realtime. State-of-the-art structure from motion and visual SLAM algorithms cannot reconstruct these types of scenes. The overarching goal of this research program is to develop the theory and practice required to robustly reconstruct a dynamic real-world scene from moving platforms and to investigate the design of realtime algorithms amenable to the Intel Stellarton platform.

Embedded System for All-weather Automobile Headlights, Srinivasa Narasimhan, Takeo Kanade, Anthony Rowe
We will design and implement a vehicle headlight system that increases visibility for drivers during inclement weather such as rain and snow. An integrated embedded implementation of imaging and reactive illumination will improve safety and help reduce the tens of thousands of weather-related accidents each year. The ISTC mentions the scenario of driving in bad weather in its first line. To our knowledge, this is the first work to propose intelligent all-weather lighting for vehicles.
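One way to picture reactive illumination: the camera detects raindrop positions in the projector's image plane, and the system blacks out just those pixels so light is not scattered back at the driver. The sketch below is purely illustrative; the grid size, drop positions, and masking radius are invented:

```python
def illumination_mask(width, height, drops, radius=1):
    """Return a binary light mask: 1 = emit light, 0 = keep dark near a drop."""
    mask = [[1] * width for _ in range(height)]
    for (dx, dy) in drops:
        # Darken a small neighborhood around each detected drop.
        for y in range(max(0, dy - radius), min(height, dy + radius + 1)):
            for x in range(max(0, dx - radius), min(width, dx + radius + 1)):
                mask[y][x] = 0
    return mask

# Two drops detected this frame on a tiny 8x4 projector grid.
mask = illumination_mask(8, 4, drops=[(2, 1), (6, 3)])
print(sum(row.count(0) for row in mask))  # pixels kept dark this frame
```

The real challenge, which motivates the embedded implementation, is running this detect-predict-mask loop fast enough that drops falling at several meters per second are still inside the darkened region when the light would reach them.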

Crowdsourced Embedded Telematics Platforms for Improved Driver Experience, Priya Narasimhan
The project builds on the successful and highly visible deployed mobile 311 efforts (iBurgh being the nation's first mobile 311 app) and the crowdsourced snow-removal platform HowsMyStreet. By employing cloud-based crowdsourcing of automated reports from the embedded platforms in the vehicles under test, and then correlating this data with crowdsourced human input where available, we can provide a more accurate, actionable picture of the drivability of roads under snow-storm and pothole-ridden conditions.
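A minimal sketch of the correlation step, with invented segment IDs, condition scores, and source weights (not the deployed system's actual schema):

```python
from collections import defaultdict

def drivability(reports, human_weight=2.0):
    """Average condition scores (0 = impassable .. 1 = clear) per road segment,
    weighting deliberate human reports more heavily than automated ones."""
    totals = defaultdict(float)
    weights = defaultdict(float)
    for segment, score, source in reports:
        w = human_weight if source == "human" else 1.0
        totals[segment] += w * score
        weights[segment] += w
    return {seg: totals[seg] / weights[seg] for seg in totals}

reports = [
    ("forbes_ave", 0.9, "vehicle"),
    ("forbes_ave", 0.5, "human"),   # plowed but icy, says a resident
    ("murray_ave", 0.2, "vehicle"),
]
print(drivability(reports))
```

A production pipeline would also weight reports by recency and sensor confidence, but the core idea is the same: fuse many noisy per-segment observations into one actionable score.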

Automated Real-Time Construction of Planograms in Retail Environments, Priya Narasimhan, Rajeev Gandhi
This project aims at the automated, real-time construction of planograms in retail environments using a retail-centric robot equipped with the appropriate sensors, along with product-identification and in-store localization capabilities. We will deploy this in the Carnegie Mellon University Store, with which the PIs already have a working relationship through developing the CMU Store's official mobile app. By providing this real-time planogram information in a queryable, visual format to store clerks and shoppers alike through the store's mobile app, we will demonstrate the impact of our research and the utility of our planograms for accurate, real-time location of products in the store.
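A planogram in this setting can be thought of as a queryable map from products to shelf locations, kept current by the robot's sensing passes. The sketch below uses invented product names and positions:

```python
# Toy planogram as the robot's sensors might record it (invented data).
planogram = {
    "notebook":  {"aisle": 2, "shelf": 3},
    "cmu_mug":   {"aisle": 1, "shelf": 1},
    "usb_drive": {"aisle": 2, "shelf": 1},
}

def locate(product):
    """Answer a shopper's 'where is X?' query from the live planogram."""
    loc = planogram.get(product)
    return f"aisle {loc['aisle']}, shelf {loc['shelf']}" if loc else "not found"

print(locate("cmu_mug"))   # aisle 1, shelf 1
print(locate("stapler"))   # not found
```

Each robot pass would overwrite stale entries, so the mobile app always answers from the most recently observed shelf state.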