Wireless Sensor Networks
Platforms, Tools and Simulators
Jaydip Sen
Innovation Labs, Tata Consultancy Services
Bangalore, India
1. Introduction
A real-world sensor network application is likely to incorporate all of these
functionalities: sensing and estimation, networking, infrastructure services, sensor
tasking, and data storage and query. This makes sensor network application
development quite different from
traditional distributed system development or database programming. With ad hoc
deployment and frequently changing network topology, a sensor network application can
hardly assume an always-on infrastructure that provides reliable services such as optimal
routing, global directories, or service discovery.
There are two types of programming for sensor networks, those carried out by end users
and those performed by application developers. An end user may view a sensor network
as a pool of data and interact with the network via queries. Just as with query languages
for database systems like SQL, a good sensor network programming language should be
expressive enough to encode application logic at a high level of abstraction, and at the
same time be structured enough to allow efficient execution on the distributed platform.
On the other hand, an application developer must provide end users a sensor network
with the capabilities of data acquisition, processing, and storage. Unlike general
distributed or database systems, collaborative signal and information processing (CSIP)
software comprises reactive, concurrent, distributed programs running on ad hoc,
resource-constrained, unreliable computation and communication platforms. For
example, signals are noisy, events can happen at the same time, communication and
computation take time, communication may be unreliable, battery life is limited, and so on.
2. Sensor Node Hardware
Sensor node hardware can be grouped into three categories, each of which entails
different trade-offs in the design choices.
• Augmented general-purpose computers: Examples include low-power PCs, embedded
PCs (e.g. PC104), custom-designed PCs (e.g. Sensoria WINS NG nodes), and various
personal digital assistants (PDAs). These nodes typically run off-the-shelf operating
systems such as WinCE, Linux, or real-time operating systems and use standard
wireless communication protocols such as IEEE 802.11, Bluetooth, and ZigBee.
Because of their relatively high processing capability, they can accommodate a wide
variety of sensors, ranging from simple microphones to more sophisticated video
cameras.
• Dedicated embedded sensor nodes: Examples include the Berkeley mote family [1],
the UCLA Medusa family [2], Ember nodes and MIT µAMP [3]. These platforms
typically use commercial off-the-shelf (COTS) chip sets with emphasis on small form
factor, low power processing and communication, and simple sensor interfaces.
Because of their COTS CPU, these platforms typically support at least one
programming language, such as C. However, in order to keep the program footprint
small to accommodate their small memory size, programmers of these platforms are
given full access to hardware but rarely any operating system support. A classical
example is the TinyOS platform and its companion programming language, nesC.
• System on-chip (SoC) nodes: Examples of SoC hardware include smart dust [4], the
BWRC picoradio node [5], and the PASTA node [6]. Designers of these platforms try
to push the hardware limits by fundamentally rethinking the hardware architecture
trade-offs for a sensor node at the chip design level. The goal is to find new ways of
integrating CMOS, MEMS, and RF technologies to build extremely low power and
small-footprint sensor nodes that still provide certain sensing, computation, and
communication capabilities.
Among these hardware platforms, the Berkeley motes, due to their small form factor,
open-source software development, and commercial availability, have gained wide
popularity in sensor network research.
3. Sensor Network Programming Challenges
Traditional programming technologies rely on operating systems to provide abstraction
for processing, I/O, networking, and user interaction hardware. When applying such a
model to programming networked embedded systems, such as sensor networks, the
application programmers need to explicitly deal with message passing, event
synchronization, interrupt handling, and sensor reading. As a result, an application is
typically implemented as a finite state machine (FSM) that covers all extreme cases:
unreliable communication channels, long delays, irregular arrival of messages,
simultaneous events etc.
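The FSM style described above can be sketched as follows. This is a minimal illustration of our own, not code from any particular platform; the states, events, and transitions are invented for the example.

```c
/* A hedged sketch of the finite-state-machine style described above:
 * one transition function reacting to whatever event arrives next --
 * a timer tick, a completed sample, a radio acknowledgement, or a
 * timeout on an unreliable channel. */
typedef enum { IDLE, SENSING, SENDING } state_t;
typedef enum { EV_TIMER, EV_SAMPLE_READY, EV_SEND_DONE, EV_TIMEOUT } event_t;

static state_t step(state_t s, event_t e) {
    switch (s) {
    case IDLE:
        if (e == EV_TIMER) return SENSING;        /* periodic wake-up */
        break;
    case SENSING:
        if (e == EV_SAMPLE_READY) return SENDING; /* ADC finished */
        break;
    case SENDING:
        if (e == EV_SEND_DONE) return IDLE;       /* radio acknowledged */
        if (e == EV_TIMEOUT)   return IDLE;       /* unreliable channel: give up */
        break;
    }
    return s; /* irregular or unexpected events are ignored */
}
```

Each extreme case the text lists (delays, irregular message arrival, simultaneous events) becomes another transition or a deliberate "ignore" in the default path, which is exactly why such state machines grow hard to maintain.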
For resource-constrained embedded systems with real-time requirements, several
mechanisms are used in embedded operating systems to reduce code size, improve
response time, and reduce energy consumption. Microkernel technologies [7] modularize
the operating system so that only the necessary parts are deployed with the application.
Real-time scheduling [8] allocates resources to more urgent tasks so that they can be
finished early. Event-driven execution allows the system to fall into low-power sleep
mode when no interesting events need to be processed. At the extreme, embedded
operating systems tend to expose more hardware controls to the programmers, who now
have to directly face device drivers and scheduling algorithms, and optimize code at the
assembly level. Although these techniques may work well for small, stand-alone
embedded systems, they do not scale up for the programming of sensor networks for two
reasons:

• Sensor networks are large-scale distributed systems, where global properties emerge
from program execution on a massive number of distributed nodes.
Distributed algorithms themselves are hard to implement, especially when
infrastructure support is limited due to the ad hoc formation of the system and
constrained power, memory, and bandwidth resources.
• Because sensor nodes are deeply embedded in the physical world, a sensor network
should be able to respond to multiple concurrent stimuli at the speed of change of the
physical phenomena of interest.
There is no single universal design methodology for all applications. Depending on the
specific tasks of a sensor network and the way the sensor nodes are organized, certain
methodologies and platforms may be better choices than others. For example, if the
network is used for monitoring a small set of phenomena and the sensor nodes are
organized in a simple star topology, then a client-server software model would be
sufficient. If the network is used for monitoring a large area from a single access point
(i.e., the base station), and if user queries can be decoupled into aggregations of sensor
readings from a subset of nodes, then a tree structure that is rooted at the base station is a
better choice. However, if the phenomena to be monitored are moving targets, as in
target tracking, then neither the simple client-server model nor the tree organization is
optimal. More sophisticated design methodologies and platforms are required.
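The tree-rooted aggregation mentioned above can be sketched in a few lines. This is an illustration under assumed data, not a protocol implementation: the tree shape and readings are made up, and the aggregate is a simple maximum.

```c
/* Hedged sketch of in-network aggregation over a tree rooted at the base
 * station (node 0): each node combines its own reading with its children's
 * partial aggregates, so the root receives one value instead of every raw
 * sample. Tree shape and readings below are invented for illustration. */
#define MAX_NODES 8

static int parent[MAX_NODES]  = { -1, 0, 0, 1, 1, 2, 2, 3 }; /* -1 = base station */
static int reading[MAX_NODES] = { 10, 42, 7, 99, 3, 55, 21, 60 };

/* Aggregate (here: max) flowing from the leaves toward the root.
 * Parents appear before their children in the arrays, so one reverse
 * sweep pushes each partial aggregate one hop upward. */
static int aggregate_max(void) {
    int agg[MAX_NODES];
    for (int i = 0; i < MAX_NODES; i++) agg[i] = reading[i];
    for (int i = MAX_NODES - 1; i > 0; i--)
        if (agg[i] > agg[parent[i]]) agg[parent[i]] = agg[i];
    return agg[0]; /* value seen at the base station */
}
```

The design point is that each link carries one partial aggregate rather than a subtree's worth of raw readings, which is what makes the tree organization attractive for single-access-point monitoring.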
4. Node-Level Software Platforms
Most design methodologies for sensor network software are node-centric, where
programmers think in terms of how a node should behave in the environment. A
node-level platform can be a node-centric operating system, which provides hardware
and networking abstractions of a sensor node to programmers, or it can be a language
platform, which provides a library of components to programmers.
A typical operating system abstracts the hardware platform by providing a set of services
for applications, including file management, memory allocation, task scheduling,
peripheral device drivers, and networking. For embedded systems, due to their highly
specialized applications and limited resources, their operating systems make different
trade-offs when providing these services. For example, if there is no file management
requirement, then a file system is obviously not needed. If there is no dynamic memory
allocation, then memory management can be simplified. If prioritization among tasks is
critical, then a more elaborate priority scheduling mechanism may be added.
5. Operating System: TinyOS
TinyOS aims at supporting sensor network applications on resource-constrained
hardware platforms, such as the Berkeley motes.

To ensure that application code has an extremely small footprint, TinyOS chooses to
have no file system, supports only static memory allocation, implements a simple task
model, and provides minimal device and networking abstractions. Furthermore, it takes a
language-based application development approach so that only the necessary parts of the
operating system are compiled with the application. To a certain extent, each TinyOS
application is built into the operating system.
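The "static memory allocation only" choice can be illustrated with a fixed buffer pool. This is our own sketch in C, not TinyOS code; the pool and payload sizes are assumptions.

```c
#include <stddef.h>

/* Sketch of the static-allocation style the text describes: every buffer a
 * component will ever need is declared at compile time, so the binary's RAM
 * footprint is known exactly and no malloc/free machinery is linked in.
 * Pool size and payload size are invented for the example. */
#define MSG_POOL_SIZE 4
#define MSG_BYTES     29

typedef struct { unsigned char data[MSG_BYTES]; int in_use; } msg_t;
static msg_t pool[MSG_POOL_SIZE];   /* fixed pool, sized at compile time */

/* Claim a free buffer, or NULL if the pool is exhausted. */
static msg_t *msg_alloc(void) {
    for (int i = 0; i < MSG_POOL_SIZE; i++)
        if (!pool[i].in_use) { pool[i].in_use = 1; return &pool[i]; }
    return NULL;
}

/* Return a buffer to the pool. */
static void msg_free(msg_t *m) { m->in_use = 0; }
```

Exhaustion surfaces as a NULL return the caller must handle explicitly, mirroring the explicit-rejection style discussed later for resource contention.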
Like many operating systems, TinyOS organizes components into layers. Intuitively, the
lower a layer is, the ‘closer’ it is to the hardware; the higher a layer is, the closer it is to
the application. In addition to the layers, TinyOS has a unique component architecture
and provides a set of system software components as a library. A component's
specification is independent of its implementation. Although most
components encapsulate software functionalities, some are just thin wrappers around
hardware. An application, typically developed in the nesC language, wires these
components together with other application-specific components.
A program executed in TinyOS has two contexts, tasks and events, which provide two
sources of concurrency. Tasks are created (also called posted) by components to a task
scheduler. The default implementation of the TinyOS scheduler maintains a task queue
and invokes tasks according to the order in which they were posted. Thus tasks are
deferred computation mechanisms. Tasks always run to completion without preempting
or being preempted by other tasks. Thus tasks are non-preemptive. The scheduler invokes
a new task from the task queue only when the current task has completed. When no tasks
are available in the task queue, the scheduler puts the CPU into sleep mode to save
energy.
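The FIFO, run-to-completion scheduling just described can be sketched as a small task queue. This is an illustration of the idea, not the actual TinyOS scheduler; the queue length and task names are assumptions.

```c
#include <stddef.h>

/* Hedged sketch of a FIFO, run-to-completion task scheduler in the style
 * described above. Tasks are plain function pointers; post() enqueues one,
 * run_next() dequeues the oldest and runs it to completion. */
#define QUEUE_LEN 8

typedef void (*task_t)(void);
static task_t queue[QUEUE_LEN];
static int head, count;

/* Post a task; returns 0 and drops the task if the queue is full. */
static int post(task_t t) {
    if (count == QUEUE_LEN) return 0;
    queue[(head + count++) % QUEUE_LEN] = t;
    return 1;
}

/* Run the oldest pending task to completion; returns 1 if a task ran.
 * With an empty queue, a real scheduler would put the CPU to sleep here. */
static int run_next(void) {
    if (count == 0) return 0;   /* sleep point */
    task_t t = queue[head];
    head = (head + 1) % QUEUE_LEN;
    count--;
    t();                        /* never preempted by another task */
    return 1;
}

/* A trivial demo task for the sketch. */
static int ran;
static void hello(void) { ran++; }
```

Because `run_next()` only returns after the task finishes, non-preemption among tasks falls directly out of the structure: there is simply no point at which a second task could start early.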
The ultimate sources of triggered execution are events from hardware: clock, digital
inputs, or other kinds of interrupts. The execution of an interrupt handler is called an
event context. The processing of events also runs to completion, but it preempts tasks and
can be preempted by other events. Because there is no preemption mechanism among
tasks and because events always preempt tasks, programmers are required to chop their
code, especially the code in the event contexts, into small execution pieces, so that it will
not block other tasks for too long.
TinyOS also balances non-preemptive task execution against program reactiveness
through the design of split-phase operations. Similar to the notion of asynchronous
method calls in distributed computing, a split-phase operation separates the initiation of a
method call from the return of the call. A call to a split-phase operation returns
immediately, without actually performing the body of the operation. The true execution
of the operation is scheduled later; when the execution of the body finishes, the operation
notifies the original caller through a separate method call.
In TinyOS, resource contention is typically handled through explicit rejection of
concurrent requests. All split-phase operations return Boolean values indicating whether
a request to perform the operation is accepted.
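A split-phase send with this accept/reject convention can be sketched as follows. The names here are invented; the real TinyOS interfaces differ in detail, and this is only a shape-of-the-idea illustration.

```c
#include <stdbool.h>

/* Hedged sketch of a split-phase send: phase 1 returns immediately with
 * an accept/reject flag; the work completes later through a separate
 * callback. Function and type names are invented for the example. */
typedef void (*send_done_t)(bool ok);

static bool busy;                 /* single outstanding request at a time */
static send_done_t pending_done;

/* Phase 1: initiate. Returns false immediately if a send is in flight --
 * the explicit rejection of concurrent requests described above. */
static bool send(const char *msg, send_done_t done) {
    (void)msg;                    /* radio work would be scheduled here */
    if (busy) return false;       /* contention: reject, don't block */
    busy = true;
    pending_done = done;
    return true;
}

/* Phase 2: the lower layer signals completion to the original caller. */
static void radio_finished(bool ok) {
    busy = false;
    if (pending_done) pending_done(ok);
}

/* A trivial demo callback for the sketch. */
static bool done_ok;
static void on_done(bool ok) { done_ok = ok; }
```

Note that the rejected caller gets its answer synchronously, in the Boolean return, while the accepted caller gets its answer asynchronously, in the completion callback; the two phases never block the task scheduler in between.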