Projects @ NESL

Sensor and Actuator Networks

This is a collaborative project funded by King Abdullah University of Science and Technology under its Sensor Initiative: CASSE project. The mission of the CASSE project is to design and build new purpose-built sensors and integrate them with artificial platforms for ocean observation. We aim to create a new form of energy-efficient tag to track animal movement, and plan to deploy the sensor tags on marine animals in the Red Sea, Saudi Arabia. The project will also pursue advances in localization. By optimizing the duty cycle of power-hungry GPS and other sensors, we will try to overcome the usual limitations of size and operating time.

Due to the presence of heterogeneous sensing modalities (e.g., vision, audio, and inertial) and different deployment scenarios, edge devices cannot use the same machine learning model. However, training separate models is limited by the availability of labeled data. To address this challenge, inspired by biological sensory substitution such as touch to sight, we explore the idea of sensory substitution in edge devices. Our approach is to learn a shared representation using unlabeled data across modalities, exploiting the fact that edge devices in the same environment capture the same event. Our evaluation, using human activity recognition as a use case, shows that sensory substitution can reduce the required labeled data by up to 90% and can speed up the training process by up to 50 times compared to training edge devices from scratch.
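The shared-representation idea can be sketched with classical canonical correlation analysis (CCA) over paired, unlabeled samples. Everything below is illustrative, not the project's actual pipeline: the data are synthetic, and the dimensions and modality names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired, unlabeled recordings: two devices in the same
# environment observe the same events, so time-aligned samples share a
# latent cause. Dimensions and names are illustrative.
n, d_shared = 500, 3
events = rng.normal(size=(n, d_shared))
x_imu = events @ rng.normal(size=(d_shared, 6)) + 0.05 * rng.normal(size=(n, 6))
x_mic = events @ rng.normal(size=(d_shared, 10)) + 0.05 * rng.normal(size=(n, 10))

def cca(a, b, k):
    """Classical CCA: per-view projections whose outputs are maximally correlated."""
    a = a - a.mean(0)
    b = b - b.mean(0)
    caa, cbb, cab = a.T @ a / n, b.T @ b / n, a.T @ b / n
    wa = np.linalg.inv(np.linalg.cholesky(caa)).T   # whitens view a
    wb = np.linalg.inv(np.linalg.cholesky(cbb)).T   # whitens view b
    u, s, vt = np.linalg.svd(wa.T @ cab @ wb)       # cross-covariance of whitened views
    return wa @ u[:, :k], wb @ vt[:k].T, s[:k]

W_imu, W_mic, corr = cca(x_imu, x_mic, k=d_shared)
```

Once the two views are projected into a common space, a classifier labeled on one modality can, in principle, be reused on the other, which is the substitution step.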

A major problem with sensor nodes today is their limited memory, storage capacity, and energy budget. This project explores the current options available for storing data on the motes and proposes a way to increase storage capacity by using the StarGate as a base station.

Sensor networks are deployed to sense the environment. The data sensed (collected) at the various nodes is relayed to a base station over the wireless multihop network formed by the sensor nodes. The nodes nearest the base station, acting as relays, risk running out of battery first; once they die, the network becomes disconnected. I am looking at using controlled mobility to gather data from the sensor nodes. If delay in the data can be tolerated, a mobile element can visit these nodes periodically and collect their data, reducing relaying and increasing network lifetime. Recent research has focused on deciding how and where the mobile element should go.
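As a toy illustration of the path-planning flavor of this problem (not any specific published scheme), a nearest-neighbor tour over hypothetical node coordinates:

```python
import math

# Hypothetical node positions; "base" is the base station.
nodes = {"base": (0, 0), "s1": (4, 0), "s2": (4, 3), "s3": (0, 3)}

def nearest_neighbor_tour(start):
    """Greedy tour: always visit the closest unvisited node next."""
    unvisited = set(nodes) - {start}
    tour, here = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda n: math.dist(nodes[here], nodes[n]))
        unvisited.remove(nxt)
        tour.append(nxt)
        here = nxt
    return tour

tour = nearest_neighbor_tour("base")   # visits s3, then s2, then s1
```

Real schemes must also weigh node buffer sizes and data deadlines, not just travel distance.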

Sensor networks are in widespread use, and with deployments reaching larger numbers, the need for a scalable simulation framework is becoming apparent. sQualnet provides this framework. sQualnet, a scalable, extensible sensor network simulation framework, is built on top of the commercially available state-of-the-art network simulator Qualnet. sQualnet introduces a sensor stack parallel to the networking stack and provides accurate simulation models for the various layers in the sensor and networking stacks. It also includes a detailed and accurate power model. The sQualnet framework allows for hybrid testbeds by combining simulated nodes with real nodes.

There has been a dramatic shift in sensor networks towards the study of motile and mobile systems. Consider the classic target application of such a network: monitoring. Whether it's a forest fire, a platoon of soldiers, or a cosmic phenomenon, the prevailing approach involves cheap nodes deployed throughout the area of interest. However, static sensor nodes suffer from numerous drawbacks. The pragmatic issues of network deployment, coverage holes, and sub-optimal density lend great credence to the addition of mobility.

Implementing these modern sensor networks requires a different design philosophy from traditional robotics. By convention, that field emphasizes the capability of a single robot; in contrast, mobile sensor networks leverage teams of coordinated entities. Estrin et al. conclude that two key requirements emerge: "support for very large numbers of unattended autonomous nodes and adaptivity to environment and task dynamics." In designing mobile sensor network devices, we discover that the former is actually an implicit benefit while the latter imposes significant design challenges.

Motion may be used in sensor networks to achieve desirable network configurations that improve sensing performance. Mobility itself may have a high resource overhead; hence we exploit a constrained form of mobility that has very low overhead but provides significant reconfiguration potential. The objective is to maximize spatial coverage and resolution while simultaneously minimizing sensing resources. The project focuses on distributed methods to control the motion of the sensor nodes in response to environmental dynamics and sensing demands.

The emergence of pervasive networking technologies such as ZigBee and other low-power radios has opened up opportunities to apply wireless communication to several new applications. Control systems, earlier limited to wired star or bus topologies, may now use ad hoc wireless topologies where these facilitate deployment and maintenance. In this research we focus on a class of systems where a control operation is exercised over spatially distributed sensors and actuators that communicate through a wireless ad hoc network. Such systems are emerging everywhere: a smart workspace that changes the lighting by sensing user actions; a smart building that continuously reduces structural vibrations due to external disturbances; an intelligent irrigation system that micro-controls soil conditions for precision agriculture. These are a few examples of a growing class of control applications.

A challenging issue for control over ad-hoc networks is the latency between sensor inputs and actuator outputs. Latency concerns can be addressed by operating the system in a distributed manner. However, an additional challenge then is to ensure coherent operation throughout the system without explicit central coordination. We design distributed ad hoc network algorithms to address these challenges.

A barrier to enabling end-to-end control applications is the difficulty in writing complex distributed embedded software for these systems. We provide a middleware service and a set of reusable components that abstract the low-level timing and coordination details and provide an easy programming model for developers to quickly deploy their control applications.

One of the challenges facing system design is scalability. Centralized approaches fail due to latency (particularly for control-based applications) and energy (which determines the lifetime of the system). This forces us into a space where processing is co-located with sensing and the individual nodes are resource-constrained. Such systems would have low latency and be able to adapt to the environment. In this scenario, the GALORE (Globally Ad-hoc, LOcally REgular) project seeks to explore the hybrid space combining engineered subsystems and ad hoc global system structure. The focus is on localized approaches that do not rely on global knowledge or coordination. The motivating application for such a system is the unsupervised detection of events of interest.

SensorSim is a simulation framework for modeling sensor networks. Our work builds on the ns-2 simulator and provides additional features for modeling sensor networks.

SensorWare is a distributed middleware for Linux and eCos based wirelessly networked sensor nodes. Unlike node-level programming and database-oriented network querying, in SensorWare the queries take the form of mobile scripts that may be injected into the network at a node and then propagate and replicate depending on the state of the physical world as sensed by the sensor nodes.

AHLoS represents our current research effort in the development of a fine-grained location discovery system in ad-hoc networks. The main goal of this project is to provide mechanisms and technologies that enable location awareness in wireless nodes. Our primary focus is to provide location awareness in networks of wireless micro sensors both in indoor and outdoor environments without requiring the use of GPS on each sensor node.
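A minimal sketch of the range-based localization at the heart of such systems. The beacon positions and ranges below are synthetic, and this is the textbook linearized multilateration, not AHLoS's exact algorithm:

```python
import numpy as np

# Synthetic setup: beacons at known positions, one node at an unknown position.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
node = np.array([3.0, 7.0])                      # ground truth (illustrative)
ranges = np.linalg.norm(beacons - node, axis=1)  # ideal (noise-free) ranging

# Subtracting the first beacon's circle equation from the others linearizes
# the system: 2*(b_i - b_0) . p = |b_i|^2 - |b_0|^2 - r_i^2 + r_0^2
b0, r0 = beacons[0], ranges[0]
A = 2 * (beacons[1:] - b0)
b = (r0**2 - ranges[1:]**2
     + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)      # least-squares position fix
```

With noisy ranges the same least-squares fix still applies; the residual then gives a rough confidence measure.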

The aim of the project is to build a system that supports software reconfiguration in embedded sensor networks at multiple levels.
The system architecture is based on SOS, an operating system that consists of (i) a fixed tiny static kernel and (ii) binary modules that can be dynamically inserted, updated or removed unobtrusively.
On top of SOS, we have implemented a dynamically extensible virtual machine (DVM) that interprets high-level scripts.
Any binary module that is dynamically inserted into the operating system can register custom extensions to the virtual machine.
Therefore, the high-level scripts executed by the virtual machine can efficiently access services exported by a module and tune module parameters.
Together these system mechanisms permit the flexibility of selecting the most appropriate level of reconfiguration.

Hotline is a distributed programming framework that provides a shared memory abstraction and data and process synchronization services to support iterative optimization algorithms on resource-constrained wireless sensor-actuator networks. These algorithms are typically used to solve distributed control problems in large-scale networks of actuators in applications like personalized light control and cooperative visual surveillance. Hotline automatically improves their performance by exploiting spatial locality inherent in actuation. It has been implemented as a run-time library in TinyOS-2.x.

In this MURI project we are working with a team of researchers from Virginia Tech, UT Dallas, and others to create underwater robotic sensing platforms that mimic jellyfish. Our work at UCLA is focused on underwater sensing and communication modalities for the artificial jellyfish, with a particular focus on electric field based sensing and communication.

SOS is an operating system for mote-class wireless sensor networks developed by the Networked and Embedded Systems Lab (NESL) at UCLA. SOS uses a common kernel that implements messaging, dynamic memory, module loading and unloading, and other services. SOS uses dynamically loaded software modules to create a system supporting dynamic addition, modification, and removal of network services.

Time synchronization is critical to sensor networks at many layers of their design; it enables better duty-cycling of the radio, accurate localization, beamforming, and other collaborative signal processing. While there has been significant work on sensor network synchronization, measurement-based studies have been restricted to very short-term (a few minutes) datasets and have focused on obtaining accurate instantaneous synchronization. Long-term synchronization has typically been handled by periodic re-synchronization schemes with beacon intervals of a few minutes, based on the assumption that long-term drift is too hard to model and predict. Thus, none of this work exploits the temporally correlated behavior of the clock drift. Yet there are significant energy gains to be achieved from better modeling and prediction of long-term drift, which can provide bounds on long-term synchronization error across a sensor network. Better synchronization can lead to significantly lower radio duty cycles, simplify signal processing, and enable an order of magnitude greater lifetime than current techniques. In this work, we measure, evaluate, and analyze in depth the long-term behavior of synchronization skew and drift on typical Mica sensor nodes and develop an efficient long-term time synchronization protocol. We use four real datasets gathered over periods of 12-30 hours in different environmental conditions to study the interplay between three key parameters that influence long-term synchronization: the synchronization rate, the history of past synchronization beacons, and the estimation scheme. We use this measurement-based study to design an online adaptive time-synchronization algorithm that can adapt to changing clock drift and environmental conditions while achieving application-specified precision with very high probability.
We find that our algorithm achieves between one and two orders of magnitude improvement in energy efficiency over currently available time-synchronization approaches.
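The flavor of drift prediction can be illustrated with a least-squares skew fit over past beacons. The numbers below are synthetic, and the actual protocol described above is adaptive and more sophisticated than this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed scenario: a crystal with a constant 40 ppm skew, observed through
# noisy offset measurements at one synchronization beacon per minute.
true_skew = 40e-6
beacon_times = np.arange(0, 600, 60.0)            # seconds, 10 beacons
offsets = true_skew * beacon_times + rng.normal(scale=5e-6, size=beacon_times.size)

# Least-squares fit over the beacon history: offset ~= skew * t + bias
A = np.vstack([beacon_times, np.ones_like(beacon_times)]).T
(skew_hat, bias_hat), *_ = np.linalg.lstsq(A, offsets, rcond=None)

# Predict the offset 10 minutes past the last beacon instead of re-syncing
t_future = 1200.0
predicted = skew_hat * t_future + bias_hat
actual = true_skew * t_future
```

If the predicted error stays within the application-specified precision bound, the node can skip beacons entirely, which is where the energy savings come from.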

In Cyber-Physical Systems, the physical world influences the state of the various computation, communication, and storage processes via the sensing interface, at which measurements of physical-world variables are acquired and digitized for subsequent use in application-specific detection, estimation, inference, and control processes. Although its role is fundamental to both the cyber and the physical halves, this interface receives little attention in system-level design, analysis, optimization, and verification. Typically, the interface between the continuous and the discrete halves is abstracted and idealized.

The ability to abstract the physical-world sampling interface stems from the perfect reconstruction assured by Shannon Sampling and enables designers to separate its concerns from the rest of the system. But, this relies on several implicit assumptions: that the act of sampling the physical world is cheap relative to the rest of the system, that Nyquist sampling is the best one could do, and that the performance and correctness of the rest of the system is decoupled from how the sampling interface is designed. These assumptions are often not true in practice.

Profligate sampling leads to a variety of energy, processing, communication, and even security bottlenecks at the system level. Our research investigates how these bottlenecks are affected by mechanisms that optimize sampling and processing using adaptation and context-awareness while exploiting recent theoretical and embedded-platform technology advances. Our goal in studying this is two-fold. First, we seek to systematically and jointly optimize and manage the various phases of the entire physical-to-cyber information-acquisition-and-processing pipeline for specific application objectives and system resource constraints. Second, we seek to develop methods to predict and validate the performance, resource requirements, and correctness of systems that make use of sophisticated sampling strategies with optimized analog, computation, communication, and storage processes.

Power-aware Computing and Communications

As semiconductor manufacturers build ever smaller components, circuits and chips at the nano scale become less reliable and more expensive to produce – no longer behaving like precisely chiseled machines with tight tolerances. Modern computing tends to ignore the variability in behavior of underlying system components from device to device, their wear-out over time, or the environment in which the computing system is placed. This makes them expensive, fragile and vulnerable to even the smallest changes in the environment or component failures. The Variability Expedition envisions a computing world where system components -- led by proactive software -- routinely monitor, predict and adapt to the variability of manufactured systems. The Variability Expedition proposes a new class of computing machines that are adaptive but highly energy efficient. They will continue working while using components that vary in performance or grow less reliable over time and across technology generations. A fluid software-hardware interface will mitigate the variability of manufactured systems and make machines robust, reliable and responsive to changing operating conditions -- offering the best hope for perpetuating the fundamental gains in computing performance at lower cost of the past 40 years.

State-of-the-art convolutional neural networks are enormously costly in both compute and memory, demanding massively parallel GPUs for execution. Such networks strain the computational capabilities and energy available to embedded and mobile processing platforms, restricting their use in many important applications.
In this project, we push the boundaries of hardware-efficient CNN design by proposing binarized CNNs with Separable Filters (BCNNw/SF), which apply Singular Value Decomposition (SVD) to BCNN kernels to further reduce computational and storage complexity.
To enable its implementation, we provide a closed form of the gradient over SVD to calculate the exact gradient with respect to every binarized weight in backward propagation. We verify BCNNw/SF on the MNIST, CIFAR-10, and SVHN datasets, and implement an accelerator for CIFAR-10 on FPGA hardware. Our BCNNw/SF accelerator realizes memory savings of 17% and an execution time reduction of 31.3% compared to BCNN, with only minor accuracy sacrifices.
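The core separable-filter trick can be shown in a few lines: if a k×k kernel has (approximate) rank 1, the top singular pair factors it into two 1-D filters, cutting per-pixel multiplies from k² to 2k. The kernel below is a toy example, not one learned by BCNNw/SF:

```python
import numpy as np

# A binarized 3x3 kernel that happens to be exactly rank 1.
k = np.array([[ 1., -1.,  1.],
              [ 1., -1.,  1.],
              [ 1., -1.,  1.]])

u, s, vt = np.linalg.svd(k)
col = u[:, 0] * s[0]          # vertical 1-D filter (applied down columns)
row = vt[0]                   # horizontal 1-D filter (applied across rows)
rank1 = np.outer(col, row)    # reconstruction from the top singular pair

# Convolving with `col` then `row` equals convolving with `rank1`:
# 6 multiplies per pixel instead of 9 for this 3x3 example.
```

For kernels that are not exactly rank 1, the remaining singular values measure the approximation error that training must absorb.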

Sensor networks differ from traditional wireless networks in several respects. Unlike handheld wireless devices, which can be recharged at reasonably frequent intervals, sensor nodes must operate autonomously for much longer durations. Energy supply thus remains an open challenge in sensor networks, because unfettered deployment rules out traditional wall-socket supplies, and batteries with acceptable form factor and cost do not yield the lifetimes desired by most applications.

One method to improve the battery lifetime of such systems is to supplement the battery supply with environmental energy. Several technologies exist to extract energy from the environment, such as solar, thermal, kinetic, and vibration energy. However, we lack system-level methods to exploit these resources efficiently for optimal performance. Sensor networks are expected to be deployed for several mission-critical tasks and to operate unattended for extended durations. The autonomous nature of operation makes it imperative that the system learn its own energy environment and adapt its power consumption accordingly. In distributed systems, the energy source varies not only in time but also across locations, so the energy available at different nodes of the sensor network differs. In this situation, performance can be improved by scheduling tasks according to the spatio-temporal characteristics of energy availability. The problem, then, is to find scheduling mechanisms that can adapt performance to the available energy profile.
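A toy sketch of the scheduling idea (all numbers invented): scale each slot's duty cycle so that consumption never exceeds stored plus predicted harvested energy, i.e., energy-neutral operation.

```python
# Illustrative per-slot harvest predictions and costs, in mJ.
predicted_harvest = [5.0, 8.0, 2.0, 0.0, 6.0]   # predicted mJ harvested per slot
active_cost = 10.0                               # mJ to run a slot at 100% duty cycle
battery = 4.0                                    # mJ stored initially

schedule = []
for harvest in predicted_harvest:
    budget = battery + harvest                   # energy available this slot
    duty = min(1.0, budget / active_cost)        # fraction of the slot spent active
    battery = budget - duty * active_cost        # leftover carries over to the next slot
    schedule.append(round(duty, 2))
```

A real scheduler would also hedge against prediction error and keep a reserve, but the invariant is the same: the battery level never goes negative.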

The Power Aware Sensing Tracking and Analysis (PASTA) project is a DARPA-funded project investigating power-efficient systems for unattended ground sensor (UGS) applications. One of the applications the project studies is the tracking of moving vehicles using acoustic sensors. The NESL lab is involved in studying the effect of hierarchical networks on tracking cost and performance. The low tier of the hierarchical network comprises small, resource-constrained nodes, such as the Berkeley motes, acting as tripwires. Higher-tier nodes (trackers) perform sophisticated signal processing, such as beamforming, to estimate the location of a moving vehicle using acoustic sensors.

Tripwire nodes consume little power and operate continuously to detect interesting events, such as vehicles entering the field of interest. Upon detection, tripwires wake up the more capable tracker nodes, which perform the actual tracking of the interesting event. For more information about the project, please visit the official project web page at USC/ISI East.

This project studies how components with complementary power-speed characteristics may be combined in a staged hierarchy to provide overall superior system-level power-speed characteristics. The project explores this concept for processors, radios, and sensors.

The aim of the PADS project is to investigate and demonstrate embedded system architecture approaches, algorithms, middleware, simulation, and protocol technologies. The main project web site is at http://www.east.isi.edu/projects/PADS/. This site describes the activities on power-aware RTOS scheduling undertaken by our group jointly with Prof. Rajesh Gupta's group at U.C. Irvine as part of the overall PADS project.

Wireless embedded networks have matured beyond academic research as industry now considers the advantages of using wireless sensors. With this growth, reliability and real-time demands increase, and timing becomes more and more relevant.
This project explores the further development of highly stable, low-power clocks for wireless embedded systems. Wireless embedded networks, due to their wire-free nature, present one of the most extreme power-budget design challenges in the field of electronics. Improvements in timing can reduce the energy required to operate an embedded network. However, the more accurate a time source is, the more power it consumes. To comprehensively address the time and power problems in wireless embedded systems, we studied the exploitation of dual-crystal clock architectures to combat the effects of temperature-induced frequency error and the high power consumption of high-frequency clocks. Combining these architectures with the inherent communication capabilities of wireless embedded systems, this project proposes two new technologies:
(1) a new time synchronization service that automatically calibrates a local clock to changes in temperature;
(2) a high-low frequency timer that allows a duty-cycled embedded system to achieve ultra low-power sleep, while keeping fine granularity time resolution offered only by high power, high frequency clocks.
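The high-low frequency pairing can be sketched numerically. All rates and errors below are invented for illustration: calibrate the slow clock against the fast one while awake, then reconstruct the sleep duration from the calibrated rate.

```python
# All numbers are illustrative, not measurements from the project.
fast_hz = 8_000_000                     # accurate but power-hungry clock
slow_nominal_hz = 32_768                # low-power 32.768 kHz crystal
slow_actual_hz = 32_768 * (1 + 50e-6)   # assume a 50 ppm frequency error

# Calibration while awake: count both clocks over the same window.
fast_ticks = 8_000_000                  # fast-clock ticks observed
window_s = fast_ticks / fast_hz         # window length per the fast clock (1 s)
slow_ticks = slow_actual_hz * window_s  # slow-clock ticks in the same window
calibrated_hz = slow_ticks / window_s   # measured slow-clock rate

# Sleep: only the slow crystal runs; reconstruct 10 minutes of elapsed time.
sleep_ticks = slow_actual_hz * 600.0
naive_s = sleep_ticks / slow_nominal_hz    # nominal rate: ~30 ms of error
calibrated_s = sleep_ticks / calibrated_hz # calibrated rate: error vanishes
```

In practice the calibration must be repeated as temperature changes, which is what the temperature-aware synchronization service in (1) addresses.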

This project investigates how to leverage the complementary features of a large variety of processors, radios, and sensors (low-power vs. power-efficient) to design sensor nodes that can operate over a high dynamic range. With such functionality, nodes can dynamically assume roles inside a sensor network, switching from cluster head to leaf node in a power-efficient manner.

Considerable research has gone into creating low-power, long-lasting wireless sensor nodes (WSNs). The past decade has been marked by a variety of designs capable of operating for more than a year on a single battery, and recent improvements in solar and other energy-harvesting technologies have inspired the creation of ‘autonomous’ sensors: sensors that are self-powered and self-sufficient. Towards this goal, WSNs have employed solar, piezoelectric, RF, and thermal harvesting techniques. Such systems have met with varying success, owing to unreliable energy sources and mechanical challenges. This work focuses on an implementation of thermal energy harvesting specifically targeting waste heat from hot water pipes. We demonstrate both a low-power wireless node capable of more than 20 minutes of operation after a mere 10-second burst of hot water and an appropriately asymmetric network infrastructure to support such energy-starved ‘leaf’ nodes. This work is a blend of energy-aware computing, ultra-low-power circuit design, wireless sensor network design, and thermodynamic optimization.

This NSF funded collaborative project focuses on understanding the principles and methods for the design of green networks at the edge of the Internet. The total power consumption of edge networks is estimated to be quite significant, so even moderate improvements in energy-usage in an individual device can result in non-trivial savings overall. Obtaining these moderate improvements in the energy-efficiency of edge networks is challenging for two reasons: the diversity of edge networks and the dynamics in their workload.
Intellectual Merit. Leveraging the researchers' combined expertise in low-power electronics, link-layer technologies, and energy-efficient network subsystem design and architecture, the project will: a) devise a deep energy-inspection architecture that encompasses a broad range of edge devices and networking technologies, and incorporates innovative hardware designs for subsystem-level monitoring and control of energy usage; b) explore run-time energy adaptation at various levels of the network, enabled by this inspection architecture; c) examine coordination mechanisms for controlling edge network energy usage which will allow coordinated energy management across components and devices, enabling more aggressive energy savings.
Broad Impacts. The project can have significant societal benefit, targeted as it is on sustainable technologies. Moreover, the techniques it develops for energy efficiency can be broadly applied to other areas of computing: large server systems, mobile devices, and consumer appliances. Beyond its impact on technology, the project will also contribute to workforce development by training EE and CS students in sustainability.

Wireless Communications

It is envisioned that the next-generation high-throughput wireless LAN standard (IEEE 802.11n), currently under development, will use MIMO technology to achieve high data rates. An important design consideration is maintaining backward compatibility with the IEEE 802.11a/g standards. In this context, we present a rate-adaptive MAC protocol for wireless networks with MIMO links. We adopt a joint MAC and physical-layer strategy for channel access, based on the instantaneous channel conditions at the receiver. Our contributions include a transmit antenna and data rate selection scheme based on the optimal tradeoff between spatial multiplexing and diversity. The goal is to maximize the achievable data rate, given a MIMO channel instance and a target bit error rate. We also provide a feedback mechanism for the transmitter to obtain the rate selection settings from the receiver. Moreover, we maintain compatibility with legacy 802.11a/g devices, and our protocol supports communication between devices with different numbers of antennas. The overall contribution is a MIMO physical-layer-aware, rate-adaptive MAC protocol that is compatible with 802.11a/g and can also be readily integrated with the 802.11n proposals.

Biomedical applications of sensor networks attract many researchers, especially in the field of real-time health status monitoring. However, the inconvenience of interconnecting sensors through wires not only incurs high maintenance cost but also limits freedom of movement. By attaching various types of bio-sensors to wireless sensor nodes, the unnatural wire constraints are removed: the sampled data are transmitted over a wireless interface, and the system can respond in time to different vital signs. Unlike wired connections, wireless connections are unstable and vulnerable to the environment. Our goal is to study how wireless link quality is affected by the human body; we attach sensor nodes to different parts of the body and monitor the packet reception rate to learn how the channels behave. We expect these experimental results to help establish reliable and efficient connections for related biomedical research.

Applications

The mission of the MetroInsight project is to build an end-to-end system for knowledge discovery using high-dimensional sensor time-series and real-time data streams to support metropolitan infrastructure through effective analytics, workforce development, and policy support. Working with a strategically chosen set of city governments in the Southwest, utilities, and companies, we have unique access to specialized metropolitan data providers and national initiatives that have not previously been accessible to pipelines such as those proposed here. The project aims to overcome the data deluge caused by noisy multimodal urban sensory data. It will pursue advances in models and methods to transform multimodal urban data into lower-dimensional, population-level data suitable for dynamic processing, real-time monitoring, and visualization.

Our work has centered on leveraging all (old and new) water quality data, putting it on cutting-edge middleware for holistic analysis regardless of source, finding insights that people can use, and validating them in a live environment.

Pervasive Computing

This is a collaborative project funded by the National Science Foundation under its Cyber-Physical Systems program, and involves researchers from CMU, UCLA, UCSB, UCSD, and the University of Utah.
Accurate and reliable knowledge of time is fundamental to cyber-physical systems for sensing, control, performance, and energy-efficient integration of computing and communications. This simple statement underlies the RoseLine project. Emerging CPS applications depend on precise knowledge of time to infer location and control communication. There is a diversity of semantics used to describe time, and the quality of time varies as we move up and down the system stack. System designs tend to overcompensate for these uncertainties, and the result is systems that may be overdesigned, inefficient, and fragile. The intellectual merit derives from the new and fundamental concept of time and the holistic measure of quality of time (QoT) that captures metrics including resolution, accuracy, and stability.
The project has built a system stack that enables new ways for clock hardware, OS, network services, and applications to learn, maintain and exchange information about time, influence component behavior, and robustly adapt to dynamic QoT requirements, as well as to benign and adversarial changes in operating conditions. Robust and tunable quality of time has applicability across a broad spectrum of applications that pervade modern life. Example application areas that benefit from Quality of Time include: smart grid, networked and coordinated control of aerospace systems, underwater sensing, and industrial automation.
The project is also providing valuable opportunities to integrate research and education in graduate, undergraduate, and K-12 via publications, open sourcing of software, and participation in activities such as the Los Angeles Computing Circle for pre-college students.

There are many mobile applications that wish to collect information from users, such as ground-truth labeling, experience sampling, and crowdsourcing. A typical and convenient way to grab user attention is to pop up mobile phone notifications instructing users to provide input. However, ill-timed notifications cause annoyance and distraction, decreasing users’ willingness to respond. Common approaches to interruptibility are to postpone notifications to future activity transitions, or to filter out undesired notifications based on content, user preference, or user context. However, skipping or deferring notifications can bias the inputs: for example, if a user enables ‘don’t disturb’ mode during meetings, we can never receive user responses during meetings. This project aims to model user engagement, i.e., to learn human perception of interruption. To this end, we aim to answer the following questions: When is a reasonable time for a mobile device to interact with users while minimizing disturbance? How do we infer such a timing from mobile sensors or IoT devices? How do we keep track of the user's cognitive status?

Smart Table is a table that can simultaneously track and identify multiple objects placed on its surface. The table has been designed to support a smart problem-solving environment for early childhood education in a project called Smart Kindergarten. The incorporation of the location information and identification provided by Smart Table into context-aware computing applications is presented here.

The Illuminator is a sensor-network-based intelligent light control system for entertainment and media production. Unlike most sensor network applications, which focus on sensing alone, a distinctive aspect of the Illuminator is that it closes the loop from light sensing to control of lights. To satisfy the high-performance light-sensing requirements of entertainment and media production applications, the system uses the Illumimote, a multi-modal, high-fidelity light sensor module well suited to wireless sensor networks. The Illuminator system provides a toolset to characterize lights, to generate desired lighting effects for user constraints expressed in a formal language, and to help set up lights. Given a light configuration, the Illuminator computes optimal light settings at run-time using an optimization framework based on a genetic algorithm.
Source code can be obtained from the CVS tree: http://cvs.nesl.ucla.edu/cvs/viewcvs.cgi/Illuminator/java
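For a sense of how such an optimization might look, here is a minimal genetic-algorithm sketch that searches for dimmer settings matching target intensities under an assumed linear light-transport model; the gains, targets, and GA parameters are illustrative, not Illuminator's actual formulation.

```python
import random

random.seed(0)  # deterministic run for illustration

TARGET = [0.8, 0.2, 0.5]          # desired intensity at three sensing points (assumed)
# Assumed linear contribution of each of three lights to each sensing point.
GAIN = [[0.9, 0.1, 0.3],
        [0.1, 0.8, 0.2],
        [0.3, 0.2, 0.7]]

def render(settings):
    """Predicted intensity at each point for given dimmer settings in [0, 1]."""
    return [sum(g * s for g, s in zip(row, settings)) for row in GAIN]

def fitness(settings):
    """Negative squared error against the target lighting effect."""
    return -sum((a - t) ** 2 for a, t in zip(render(settings), TARGET))

def evolve(pop_size=40, generations=60, mutation=0.1):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)      # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:    # clipped Gaussian mutation
                i = random.randrange(3)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The real system additionally has to handle light characterization and the formal constraint language; this sketch only shows the search step.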

In Smart Kindergarten, we target the early childhood education environment as a testbed, where we aim to provide parents and teachers with the ability to comprehensively investigate students' learning processes. The questions we hope to answer range from evaluations of students' progress, such as “How well is student A reading the story book B?” and “Is student C spending too much time on one learning area?”, to evaluations of students' social behavior, such as “Does student A tend to confront other students?” and “Is student B usually isolated?”. The infrastructure of SmartKG was designed to collect, manage, and fuse sensor information, and to interpret and present it in a logical and user-friendly manner.

The iBadge is one of the crucial components of the Smart Kindergarten project. It is equipped with sensors such as a microphone, a localization sensor, and a temperature sensor, and is worn by both the children and the teachers. The iBadge is not only a collection of sensors; it possesses enough computational power to process the sensed data. Two processors, an Atmel ATmega and a TI DSP C5416, share the workload according to their computational ability: the DSP processes the computationally more expensive speech and magnetic sensor data, whereas the ATmega interfaces to sensors with simpler data structures and to the Bluetooth module. The Bluetooth module provides the wireless connection to the rest of the infrastructure, to which all the collected and processed data is sent.

Smartphones and body-worn sensors enable continuous inference of various kinds of user context, such as activity, location, psychosocial status, and environment. As more applications exploit such context information, we argue that it should be provided by middleware, both for better software engineering and for better management of the smartphone's constrained resources. We propose a flow-based programming architecture, FlowEngine, for efficient and flexible computation of context inferences. FlowEngine is efficient because duplicate computations are shared among different context inferences. It is also flexible because new computations can be added dynamically at runtime, and off-loading computations to microcontrollers or cloud services is well supported. Using FlowEngine, applications can simply subscribe to relevant services to receive updates on context changes, or they can poll the system for the most recent updates. As many applications subscribe to similar contexts, the system optimizes data processing so that no duplicate computation occurs. In addition, as the phone's battery level changes, the system can gracefully degrade the quality of context inferences to meet the user's phone lifetime goals.
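A minimal sketch of the sharing idea: if each inference is a node in a dataflow graph and node outputs are memoized per update cycle, two subscribers that depend on the same upstream feature trigger only one computation of it. The node names and pipeline below are assumptions for illustration, not FlowEngine's API.

```python
class Node:
    """A dataflow node; compute runs at most once per tick, so inferences
    that share upstream nodes also share the computation."""
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)
        self._cache_tick, self._value = None, None
        self.runs = 0  # how many times this node actually computed

    def value(self, tick):
        if self._cache_tick != tick:          # memoize per update cycle
            args = [n.value(tick) for n in self.inputs]
            self._value = self.fn(*args)
            self._cache_tick = tick
            self.runs += 1
        return self._value

# Assumed pipeline: raw accelerometer -> magnitude -> two context inferences.
accel = Node("accel", lambda: (3.0, 4.0, 0.0))
mag = Node("magnitude", lambda v: (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5, [accel])
activity = Node("activity", lambda m: "moving" if m > 1.5 else "still", [mag])
step = Node("step", lambda m: m > 2.0, [mag])

# Both subscribers pull the same tick; `mag` computes only once.
tick = 1
print(activity.value(tick), step.value(tick), mag.runs)
```

Degrading quality under a battery budget could then be done by lowering the tick rate or swapping a node's function for a cheaper approximation.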

Mobile and Wireless Health

In this paper, we present the implementation of a remote monitoring and interaction system for medical applications. Recent advances in medical platforms have focused mainly on continuous, constant remote monitoring of a patient. Compared to existing solutions, our system places its emphasis on remote interaction between a patient and a physician, providing the capability to perform diagnosis and treatment over the air. Such a system is valuable for instant response to emergency health alerts. Today, this emergency process of monitoring and feedback actuation is handled by a self-contained device (such as an implantable cardioverter-defibrillator) that provides treatment automatically when it detects abnormalities. We believe it is very important to involve the physician in this process of actuated treatment.

We accomplish this goal by virtually connecting a patient and a physician anytime, anywhere. We propose a mixed two- and three-tier infrastructure that extends the current three-tier architecture with a GSM/GPRS peer-to-peer channel, and we present a working system built on top of commercial cellular phones and wireless sensor nodes. In our system, a physician can create and subscribe to “interests” provided by a body sensor network deployed on a patient. An interest is defined as the useful information acquired by applying a series of computations to collected vital signals. Profiles of interests can be delivered periodically in the form of health status reports, or they can trigger emergency health alerts immediately when abnormalities are detected. The advanced interactive capabilities of our system allow a remote physician to query for detailed information, to create and subscribe to new interests, to set sensor parameters, and to trigger actuators for over-the-air treatment. Additionally, we introduce a concept of multi-resolution to help a physician identify useful information in the large volume of sensor data collected by the body sensor network on a patient, and hence to reduce communication costs. This paper describes why the proposed infrastructure suits novel medical scenarios and outlines the design and implementation of our system.
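The interest abstraction can be sketched roughly as follows: an interest couples a computation over vital-sign samples with an abnormality predicate that fires an immediate alert, while every evaluation also feeds the periodic report stream. The class names, heart-rate interest, and 120 bpm threshold below are illustrative assumptions, not the system's actual interface.

```python
class Interest:
    """An 'interest': a computation over vital-sign samples plus an
    abnormality predicate that triggers an immediate alert."""
    def __init__(self, name, compute, is_abnormal):
        self.name, self.compute, self.is_abnormal = name, compute, is_abnormal

class BodySensorNetwork:
    def __init__(self):
        self.subs = []      # (interest, report callback) pairs
        self.alerts = []    # emergency alerts raised so far

    def subscribe(self, interest, callback):
        self.subs.append((interest, callback))

    def push_samples(self, samples):
        """Evaluate every subscribed interest over a new sample window."""
        for interest, callback in self.subs:
            value = interest.compute(samples)
            if interest.is_abnormal(value):
                self.alerts.append((interest.name, value))
            callback(interest.name, value)   # periodic status report

# Hypothetical interest: mean heart rate over a window, alert above 120 bpm.
reports = []
hr = Interest("mean_hr", lambda s: sum(s) / len(s), lambda v: v > 120)
bsn = BodySensorNetwork()
bsn.subscribe(hr, lambda name, v: reports.append((name, v)))
bsn.push_samples([70, 72, 68])      # normal -> periodic report only
bsn.push_samples([130, 140, 135])   # abnormal -> emergency alert as well
```

In the actual system the subscription and alert paths would travel over the two/three-tier GSM/GPRS infrastructure rather than in-process callbacks.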

The project aims to help a person achieve a healthy life by using body sensor networks together with ambient sensors. Future individual telehealth applications will operate in a complex environment shared with the public, so a reliable, long-lasting, and yet convenient system is demanded. We focus on context-aware solutions that use different dynamic contexts to capture a usage session and reduce communication overhead. The project represents an effort at NESL to investigate the challenges and advances in integrating the following domains: health care, pervasive computing, sensor networks, human-machine interfaces, and security.

FieldStream (NetSE: Large: Collaborative Research: FieldStream: Network Data Services for Exposure Biology Studies in Natural Environments) is a collaborative project funded by the National Science Foundation and involving researchers from Carnegie Mellon University, Georgia Institute of Technology, University of California at Los Angeles, University of Massachusetts at Amherst, and University of Memphis. The project is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).
Obtaining physiological/behavioral data from human subjects in their natural environments is essential to conducting ecologically valid social and behavioral research. While several body area wireless sensor network (BAWSN) systems exist today for physiological data collection, their use has been restricted to controlled settings (laboratories, driving/flying scenarios, etc.); significant noise, motion artifacts, and other uncontrollable confounding factors are the often-cited reasons for not using physiological measurements from natural environments. In order to provide scientifically valid data from natural environments, a BAWSN system must meet several unique requirements: (1) stringent data quality without sensing redundancy, (2) personalization to account for wide between-person differences in physiological measurements, and (3) real-time inferencing to allow for subject confirmation and timely intervention.
In this project, which started in September 2009, a multidisciplinary team of researchers spanning various computing disciplines and behavioral sciences is developing a general purpose framework called FieldStream that will make it possible for BAWSN systems to provide long term unattended collection of objective, continuous, and reliable physiological/behavioral data from natural environments that can be used for conducting population based scientific studies. To help validate the assumptions, establish the feasibility of developed solutions, and to uncover new requirements, FieldStream technology will be incorporated in studies being conducted by the NIH sponsored AutoSense effort at Memphis and the NSF sponsored Urban Sensing effort at UCLA. By making it possible to obtain scientifically valid objective data from the field, FieldStream promises to help solve several behavioral problems of critical importance to human society that have remained unanswered for lack of such data.

Energy and Water Sustainability

The impending energy and natural resource crisis forces us to research innovative ways of optimizing our resource consumption. Recent studies have shown that a better understanding of an individual’s energy consumption helps people to lower their energy footprint significantly. We propose SPOTLIGHT, a system that profiles an individual's natural resource consumption pattern in real time using wireless sensor network technology.

Although the HVAC (heating, ventilation, and air conditioning) system dominates the total power consumption of modern buildings, IT equipment, from PCs to network and server hardware, is becoming an increasing contributor to energy bills and carbon emissions. Recent studies also show that much of this energy consumption is due to the IT infrastructure. In a typical office where desktop computers make up a substantial share of the working area, the potential energy savings are remarkable if users can be made aware of their energy use and waste, and an effective strategy is employed to intelligently control these computers.

In this project, we propose a monitoring and actuation platform to address this issue. Our platform incorporates several functional modules, including data sampling, data visualization, notification, analysis, and control. Its contribution is twofold. First, we organize meter/sensor data streams efficiently and present them in a way that clients can easily access. Second, we analyze the meter/sensor data to build an effective profile model for each user that predicts the subject's activities, and we control the subject's computer accordingly to reduce unnecessary energy waste. We demonstrate that our approach achieves a 12% energy saving on average, and that the prediction accuracy for occupant activities using our model reaches approximately 90%.

This is a collaborative project between the University of California, Los Angeles and the Indian Institute of Information Technology, New Delhi, India. Buildings are among the largest consumers of energy, both directly in the form of electricity, gas, etc., as well as indirectly through consumption of water as part of the critical energy-water nexus that one must consider for true sustainability. The objective of this project is to develop the foundations of low-cost and easy-to-deploy sensing methods that provide observability into patterns and causes of energy and water consumption in a building, and run-time methods that use the sensory information for intelligent control of various building systems to minimize direct and indirect energy use. The project team is collaborating with international academic and industrial partners who offer access to complementary experimental opportunities and unique opportunities to develop low-cost technologies that scale across different climatic and socioeconomic contexts. Key elements of the research include (i) low-cost self-calibrating sensors that infer energy and water usage indirectly from side-band signals, (ii) methods to reduce the overall energy and water footprint through better management of building subsystems, timely identification and repair of energy- and water-wasting physical degradations, and information feedback and incentives that influence occupant behaviors, and (iii) study of the impact of human, cultural, and societal factors on privacy, safety, and user interaction mechanisms. The project has the potential for significant socioeconomic benefits by facilitating assessment of the efficacy of conservation measures, targeting of incentives, auditing of compliance with regulations, and facilities maintenance. The project also contributes to workforce development and the training of students on energy challenges in a global socioeconomic context.
This project is a part of NSF's US-India pervasive communications and computing collaboration (PC3) initiative.

The HVAC (heating, ventilating, and air conditioning) system is one of the biggest environment-maintaining systems inside a building, and also the most power-consuming one [1]. Controlling HVAC usage is crucial to reducing a building's overall power consumption.

In this project, we design an airflow sensor node capable of measuring air velocity, temperature, and humidity. The node can log data and transmit it over a wireless link.

The airflow sensors proposed in this project will greatly help the control of HVAC systems by:

- measuring the injection of heat through the HVAC system into a room,

- detecting hot spots in HVAC usage, providing guidance for building or rebuilding the system, and

- detecting abnormal usage of the HVAC system.

To make the node suitable for deployment in existing HVAC systems, we need to make it as small and as long-lasting as possible, without losing accuracy.
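As a sketch of the heat-injection measurement mentioned above, the injected power can be estimated from the duct cross-section, the measured air velocity, and the supply/room temperature difference, using nominal properties of air; the constants and example readings below are assumptions, not calibrated sensor values.

```python
# Back-of-envelope heat injection from duct airflow measurements:
#   Q = rho * A * v * cp * (T_supply - T_room)
# Constants are assumed nominal values for air at room conditions.
RHO_AIR = 1.2      # air density, kg/m^3
CP_AIR = 1005.0    # specific heat of air, J/(kg*K)

def heat_injection_w(duct_area_m2, velocity_mps, t_supply_c, t_room_c):
    mass_flow = RHO_AIR * duct_area_m2 * velocity_mps    # kg/s
    return mass_flow * CP_AIR * (t_supply_c - t_room_c)  # watts

# Example: 0.1 m^2 duct, 3 m/s airflow, 35 C supply air into a 22 C room.
q = heat_injection_w(0.1, 3.0, 35.0, 22.0)
```

The same sign convention reports cooling as negative power, so one formula covers both heating and cooling modes of the HVAC system.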

Embedded Software

The log instrumentation specification (LIS) language and runtime provide an extremely low-overhead, portable, and easy-to-use framework for gathering runtime logs. The system is designed for wireless sensor networks, where resource constraints stymie other attempts to expose runtime state.

The current implementation of LIS includes an instrumentation engine for instrumenting C code, full integration into the TinyOS build system, and plug-in analyses that perform common tasks, built using LIS as an intermediary language.

For more information or for installation instructions go to the LIS website at:

This web site contains information about the project "Design and Run-time Techniques for Physically Coupled Software" funded by the NSF Software for Real-world Systems program. This is a collaboration of NESL with research groups of Ramesh Govindan at USC, Rajesh Gupta at UCSD, and Paulo Tabuada at UCLA.

The research in this project seeks to establish the scientific principles governing software for real-world systems that are deeply embedded in the physical world, and whose operational behavior is determined in large part by a tight coupling between the system components and the physical environment. This objective is being accomplished by focusing on four challenges in the context of distributed sensing and control applications: 1) Support for physical context in the form of programming structures that enable application software to explicitly capture the state of the physical world as an observable in an embedded computation; 2) Formal methods for composing software modules that indirectly interact with each other through the physical world, and a run-time safety supervisor that provably enforces correctness of composition; 3) Programming structures to enable design and verification of applications with resource provisioning that is driven by and adapts to physical-world dynamics; 4) System software support for sharing physically-coupled sensor and actuator resources in distributed settings.

This material is based upon work supported by the NSF under awards # CCF-0820061, CCF-0820034, and CCF-0820230. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.

Privacy, Security, and Integrity

Networks of wirelessly interconnected embedded sensors and actuators promise an unprecedented ability to observe and manipulate our physical world. Indeed, recent years have seen much research on understanding the fundamental properties of such networks, and on developing algorithms and hardware-software building blocks for cheap and energy-efficient implementation. However, as with almost every disruptive technology that has impacted human society, the benefits of Embedded Networked Sensing are accompanied by significant risk factors and potential for abuse. If wireless sensor networks are really going to be the eyes and ears of our society, as envisioned by many, then one needs to answer the following question: How can a user trust the information provided by the sensor network?

Building sensor networks poses challenges of secure routing, node authentication, data integrity, data confidentiality, and access control that are also faced in conventional wireless and wired networks. In the realm of sensor networks these problems are even more challenging due to the resource-constrained nature and the scale of these networks. Only recently have researchers started developing customized cryptographic solutions for sensor networks. However, current security mechanisms for sensor networks focus on external attacks; they fail to protect against internal attacks in which a subset of sensor nodes is compromised. Due to the lack of physical security and tamper resistance, adversaries can recover the embedded cryptographic material from these nodes and subsequently pose as authorized nodes in the network. A wide variety of sensor network applications, such as forest fire monitoring, anti-terrorism, and bio/chemical agent monitoring, fall into the broad class of sense-response applications, in which the system objective is to collaboratively detect events and report the detections back to the base station. The detection of an event is followed by some physical response, such as sending special personnel or vehicles to the location of the event. Compromised nodes can inject false data about non-existent events and authenticate it correctly to the user using their keys (false-positive attacks), or stall the reporting of real events (false-negative attacks). Thus, there is a need for a secure event reporting protocol.

Cryptographic keys form the backbone of any security protocol; the scale and ad-hoc deployment of nodes, coupled with the ability of adversaries to easily recover cryptographic material, make key establishment a challenging problem. In general, the efficacy of any key establishment strategy must be gauged both on security metrics, such as resiliency against node capture, node replication, and access control, and on complexity aspects, such as scalability and storage. Existing key establishment techniques rely on deterministic or probabilistic pre-distribution of keys in the network, trading off performance on one metric against the other. We believe a more apt approach in the realm of sensor networks is to derive keys deterministically at runtime, based on a single master key and unique physical attributes of the nodes.

Although cryptography and authentication help, they alone are not sufficient for the unique characteristics and novel misbehaviors encountered in sensor networks. We believe that, in general, tools from different domains such as economic theory, statistics, and data analysis will have to be combined with cryptography for the development of trustworthy sensor networks. Fundamental to this is the observation that sensor network applications are based on collective interaction among a large number of nodes, which perform collaborative data gathering, collective data/information processing, and multi-hop data delivery. This decentralized in-network decision-making, which relies on the inherent trust among sensor nodes, can be abused by internal adversaries to carry out security breaches while information is being generated. An adversary can potentially insert bogus data to mislead the whole network. Clearly, cryptographic mechanisms alone cannot solve this problem, as adversarial nodes can use valid cryptographic keys to authenticate bogus data. Besides malicious attacks, two other system characteristics hinder the development of high-integrity sensor networks: system faults and sensing channel inconsistencies. Sensor nodes are currently made of cheap hardware components that are highly vulnerable to malfunction. Non-malicious behavior, such as radios or sensors going bust, can also result in the generation of bogus data, with equally detrimental effects on the functioning of the whole network. Another distinguishing trait of sensor networks is their strong coupling with the physical world. This gives rise to a unique opportunity for adversaries: instead of abusing the network, they can insert bogus data into it by abusing the physical world. The very nature of these attacks places them completely outside the realm of cryptography.

Many sensor network systems expose general interfaces to system developers for dynamically creating and/or manipulating resources of various kinds. While these interfaces allow programmers to accomplish common system tasks simply and efficiently, they also admit the potential for programmers to mismanage resources, for example through leaked resources or improper resource sharing. These kinds of errors are particularly problematic for sensor networks, given the resource constraints and lack of memory protection on current sensor platforms.

Lighthouse is a static analysis technique that brings the safety of static resource management to systems that manage resources dynamically. Our analysis is based on the observation that sensor network applications often manipulate resources in a producer-consumer pattern. In this style, each resource has a unique owner component at any given point in time, which has both the sole capability to manipulate the resource and the responsibility to properly dispose of it or transfer ownership to another component. Our analysis enforces this ownership discipline on components at compile time.
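The ownership discipline can be illustrated with a toy checker over a trace of resource events; Lighthouse itself analyzes source code at compile time, so the event encoding below is only a simplified stand-in for what the static analysis tracks.

```python
def check_ownership(events):
    """Flag violations of a unique-ownership discipline over a trace of
    resource events: ('alloc', owner, res), ('transfer', src, dst, res),
    ('free', owner, res). Illustrative only, not the Lighthouse analysis."""
    owner = {}      # resource -> current unique owner
    errors = []
    for ev in events:
        if ev[0] == "alloc":
            _, who, res = ev
            owner[res] = who                  # producer becomes the owner
        elif ev[0] == "transfer":
            _, src, dst, res = ev
            if owner.get(res) != src:
                errors.append(f"{src} transfers {res} without owning it")
            else:
                owner[res] = dst              # ownership moves to consumer
        elif ev[0] == "free":
            _, who, res = ev
            if owner.get(res) != who:
                errors.append(f"{who} frees {res} without owning it")
            else:
                del owner[res]                # owner disposes of the resource
    # Anything still owned at the end of the trace was leaked.
    errors.extend(f"{res} leaked by {who}" for res, who in owner.items())
    return errors

ok = check_ownership([("alloc", "A", "buf"), ("transfer", "A", "B", "buf"),
                      ("free", "B", "buf")])
bad = check_ownership([("alloc", "A", "buf"), ("free", "B", "buf")])
```

The two traces show the point of the discipline: a clean alloc/transfer/free chain passes, while freeing without ownership is caught and the orphaned buffer is reported as a leak.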

The goal of this project is to develop formal and rigorous approaches to reason about, and to optimize under resource constraints, the quality and value of sensor-based information flows. In large-scale distributed sensing, data from diverse sensor sources flow through increasingly higher-level inferences in a layered information fusion architecture to yield timely, actionable, trusted, and relevant intelligence information for decision makers. How well a particular information flow at a given level of abstraction conveys the true state of the world is formalized as Quality of Information (QoI), and is affected by a multitude of factors, such as: the integrity of the sensor sources; the characteristics of the network services; the nature of sensor information processing; the transformations that occur when information flows cross organizational, human-machine, and cultural boundaries; and, in the case of coalition operations, the policies governing information dissemination across coalition boundaries. How effective the information flow is for a particular use is formalized as Value of Information (VoI), and is a function of the information flow content, its QoI, and the decision making process (human or automated) that uses the information.

With the widespread use of mobile smartphones and body-worn sensors, continuous collection of sensor data about individuals has become feasible, and many useful applications such as medical behavioral studies, personal health care, and participatory sensing have emerged. Such applications have important privacy implications because of their sharing of personal sensor data. Moreover, what is shared is not only the raw sensor data but also the information that can be inferred from it, which raises further privacy concerns. This paper proposes SensorSafe, an architecture for managing such personal sensory information in a privacy-preserving way. The architecture consists of multiple remote data stores and a broker, so users retain ownership of their data and the management of multiple users is well supported. SensorSafe also provides a fine-grained access control mechanism by which users can define their own sharing rules based on various conditions, including context and behavioral status. Users define their privacy preferences and review their data using our web-based user interface. We discuss our implementation of the SensorSafe architecture and provide application examples to show how the system supports user privacy. Our performance evaluation demonstrates that building applications on SensorSafe is feasible, so user privacy can be better protected.
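A highly simplified sketch of context-conditioned sharing rules follows; the rule tuples, consumer names, and context labels are invented for illustration, and SensorSafe's actual rule language and broker mediation are considerably richer.

```python
# Illustrative sharing rule: (consumer, data_type, allowed_contexts).
# A request is granted only when some rule covers the consumer, the data
# type, AND the user's current behavioral context.
rules = [
    ("research_study", "heart_rate", {"exercising", "resting"}),
    ("family", "location", {"commuting"}),
]

def allowed(consumer, data_type, context):
    return any(c == consumer and d == data_type and context in ctxs
               for c, d, ctxs in rules)

print(allowed("research_study", "heart_rate", "exercising"))  # covered by rule
print(allowed("research_study", "location", "exercising"))    # no matching rule
```

Because the default is deny, adding a consumer or data type requires an explicit rule from the user, which is the property a fine-grained sharing policy needs.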

Hardware Platforms

UCLA NESL and the UCLA Hypermedia Studio present the ‘Ping-Pong’ mote, a new light sensing module for the Mica mote platform. The Ping-Pong mote achieves performance comparable to a commercial light intensity meter, while conforming to the size and energy constraints imposed by its application in wireless sensor networks. It was developed to replace the Mica sensor board (MTS310), whose slow response time and narrow dynamic range in light intensity capture make it unsuitable for many applications, including media production. The Ping-Pong mote features significantly improved SNR due to its high-end photo sensors, amplification and conversion circuits coupled with active noise suppression, application-tuned filter networks, and a noise-attentive manual layout. Unlike the MTS310, the Ping-Pong mote can capture RGB color intensity (for color temperature calculation) and incident light angle (discerning the angle of ray arrival from the strongest source). Our prototype demonstrated significantly faster response time (>6x) and a much wider dynamic range (>10x) in light intensity measurement compared with the MTS310. The light-angle estimation results were well correlated, with an average error of just 2.63°.
Technical data, publications, and results coming soon. E-mail the project participants directly if you need immediate access.

Mobile Phone based Sensing

In traditional sensor systems, one of the fundamental problems is sensor placement. The analogous problem in participatory sensing is choosing users to perform a particular data collection task. This talk details PICK, a framework designed to help with this process. Specifically, when choosing data collectors the framework considers the sensing capabilities available to a particular user, the user's availability to participate in terms of spatial and temporal context, the user's reputation as a data collector, and the incentive cost associated with the user's participation.
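One way to sketch the selection criteria: filter candidates by sensing capability and spatial availability, then rank the remainder by reputation minus weighted incentive cost. The weights, record fields, and users below are hypothetical, not PICK's actual model.

```python
def pick_score(user, task, w_rep=1.0, w_cost=0.5):
    """Score a candidate data collector for a task; weights are illustrative."""
    if not task["sensors"] <= user["sensors"]:
        return None                       # lacks required sensing capability
    if task["region"] not in user["regions"]:
        return None                       # not available where data is needed
    return w_rep * user["reputation"] - w_cost * user["cost"]

# Hypothetical task and candidate pool.
task = {"sensors": {"gps", "camera"}, "region": "campus"}
users = [
    {"name": "alice", "sensors": {"gps", "camera", "mic"},
     "regions": {"campus"}, "reputation": 0.9, "cost": 0.4},
    {"name": "bob", "sensors": {"gps"},                 # no camera: filtered out
     "regions": {"campus"}, "reputation": 0.95, "cost": 0.1},
    {"name": "carol", "sensors": {"gps", "camera"},
     "regions": {"campus"}, "reputation": 0.6, "cost": 0.2},
]
ranked = sorted((u for u in users if pick_score(u, task) is not None),
                key=lambda u: pick_score(u, task), reverse=True)
best = ranked[0]["name"]
```

Note how the hard constraints (capability, availability) act as filters while reputation and cost trade off in the score, mirroring the four elements the framework considers.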