Definition of Terms

This chapter gives a short overview of the terms and definitions used in this thesis to explain the area of context and context awareness.

Context and Context Awareness

The term context has various meanings in different research areas. In this work it denotes the information that could possibly be relevant for an object that performs a certain task. Most of the time, a task depends on context information, which has to be collected from other objects. In smart environments this means that this information has to be passed between different embedded devices over a network. The term context-aware software was first used in the Xerox PARC research project PARCTAB in 1994 [Schilit94]. There the term was defined and used for software that is able to adapt according to its current location, the collection of nearby people, hosts, and accessible devices. The possibility to track changes of context information over time, in other words to store historic context information, was also mentioned. Over the years, different research groups enriched this basic definition of context and context-aware software. Brown et al. [Brown], for example, widened the scope of context information to temperature, time, season, and many other factors. Because the number of possible context information factors is nearly unlimited, the definition of context by Anind K. Dey is one of the most commonly used:

“Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves.” [Dey]

This definition specifies that context contains any kind of information about an entity that helps to understand its situation. Context information is therefore not limited to location information, but could also mean information about the social situation of a person or the person’s mood. Usually, this sort of context information is hard to collect, but a reasonable number of research projects try to collect even this kind of information. An interesting aspect of the above definition is that Dey identifies three base classes with which all objects can be classified: person, place, and object. This classification has practical reasons but is also fixed to a location-dependent view of context information. For simple scenarios it is easy to implement and performant, but complex scenarios cannot be modeled. What if an object that can be located in a certain place is itself a place, such as a bag or a car? Are animals identified as objects? For this work we can reduce Dey’s definition to the following part:

“Context is any information that can be used to characterize the situation of an entity.”

Additionally, context information should be viewed in a completely application-independent way. It should be the application’s responsibility to select relevant context information and to interpret it according to the task that the application has to perform. The concrete classification of objects should also be the application’s responsibility, because different applications could have different views on the same object. Context-sensitive applications differ from traditional applications in their life cycle: Schilit identified a context computing cycle that consists of three basic steps.

Discovery: This involves the identification of entities that are relevant for the application’s tasks. In the first step, a context-aware application has to discover and explore its environment in order to get information to work with. From a human viewpoint, discovery is mostly focused on the local environment, because information around the current physical location is considered more important than information from anywhere else.

Selection: A context-aware application has to filter the discovered information according to its specific needs. The selection process is the most important and most problematic part of the context computing cycle. Receiving a multitude of different sensor data is no problem, but identifying the specific information that has the required semantic value is only solvable within specific constraints.

Use: Once an application has identified a relevant entity and selected a specific piece of information, it can use this information to change its configuration.
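The three steps above can be sketched as follows. This is a minimal illustration, not part of any concrete framework; the entity attributes and the selection predicate are assumptions chosen for the example.

```python
# A minimal sketch of the context computing cycle (discovery,
# selection, use). Entity names and attributes are illustrative.

def discover(environment):
    """Step 1: explore the environment, collecting reachable entities."""
    return [e for e in environment if e.get("reachable", False)]

def select(entities, predicate):
    """Step 2: filter discovered entities down to relevant context."""
    return [e for e in entities if predicate(e)]

def use(application_state, context):
    """Step 3: adapt the application's configuration to the context."""
    application_state = dict(application_state)
    application_state["nearby_printers"] = [e["name"] for e in context]
    return application_state

environment = [
    {"name": "printer-3f", "type": "printer", "reachable": True},
    {"name": "display-1", "type": "display", "reachable": True},
    {"name": "printer-old", "type": "printer", "reachable": False},
]

state = use({}, select(discover(environment), lambda e: e["type"] == "printer"))
print(state["nearby_printers"])  # only reachable printers remain
```

The essential point is that selection (the problematic middle step) is expressed as an application-supplied predicate, keeping the cycle itself application-independent.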

In this thesis all three steps of the context computing cycle are described in detail and a system architecture is proposed which supports the discovery, selection, and use of context information.

Representation Models for Context Information

This section shows how the representation model of context information influences the design of the middleware framework. In order to find general terms describing different world models, it is necessary to highlight existing notations and their affinities to the approach taken in this work. The representation model of context information is called the world model, because it describes entities and their interaction. The definition of a world model is:

“A world model defines how the description of entities and their relations can be represented in machine-readable and changeable form.”

World models are defined to provide machine-readable information about whole application environments in order to support context-aware, self-configuring applications. Fig. 1 describes the role of a world model within a software framework that supports the development of context-aware applications. The real world contains a collection of objects (objects 1-n). A small spectrum of aspects of these real-world objects can be gathered with the use of sensors. Information about these objects has to be stored in machine-readable form. Therefore, the world model defines structures to create a representation of those objects which can be sensed or about which we have some sort of description. The context framework middleware is responsible for retrieving the sensed information about the objects and for storing it in a data structure the world model defines. A good representation of a collection of objects also contains information about the relations between the different objects. A relation means that a state transition of one object influences the state of another object. Such relations are shown between object 1 and object 2 in Fig. 1. An example of a relation between two objects could be a person that changes its location: when the location of the person changes, all related objects have to be informed about the state change. The context framework middleware uses the machine-readable description of objects in order to trigger actions through actuators or to deliver parts of the representation to a multitude of different context-aware applications. The framework middleware, generally, is responsible for the gathering, mapping, representation, and transport of the sensed data into its world model. A context middleware tries to remove the complexity of writing context-sensitive applications by wrapping and hiding the context information life cycle.
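A world model of this kind, entities with state plus relations that propagate state changes, can be sketched as follows. The class and method names are assumptions for illustration; they do not describe the project's actual data structures.

```python
# A sketch of a world model: entities with state, and relations that
# propagate a state transition of one object to related objects
# (e.g. a person changing location notifies a related display).

class Entity:
    def __init__(self, name, **state):
        self.name = name
        self.state = state
        self.related = []           # entities to notify on state changes

    def relate(self, other):
        self.related.append(other)

    def set_state(self, key, value):
        self.state[key] = value
        for other in self.related:  # the relation: our transition
            other.on_related_change(self, key, value)

class TrackingDisplay(Entity):
    def on_related_change(self, source, key, value):
        if key == "location":
            self.state["shows"] = f"{source.name} is in {value}"

person = Entity("alice", location="office")
display = TrackingDisplay("lobby-display")
person.relate(display)

person.set_state("location", "meeting room")
print(display.state["shows"])  # the related object was informed
```

The observer-style notification mirrors the Fig. 1 scenario: the middleware keeps the representation consistent by informing all related objects when a state transition occurs.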

The world model contains all information a context-sensitive application is able to retrieve. Different research groups, like the Semantic Web group or various ontology research projects, try to find general structures and types of objects in order to create representations of real-world objects [Onto][SeWeb]. Such general structures are not only necessary to create a consistent representation of a real-world situation but also enable the interoperability of different context frameworks. Structured, machine-readable representations of digital and real-world scenarios are the first step towards intelligent web search algorithms and context-aware applications and services.

The implemented middleware is able to manipulate a scenario representation at runtime. This basic relation is shown in Fig. 2. Sensors gather information from the real world, which is mapped into the world model through the use of the middleware framework. Conversely, the middleware framework is able to change the state of real-world objects with actuators. Context rules are able to express the relations between the different objects. The context world model always represents only a more or less small view of the real world. Fig. 2 shows the context information life cycle that represents the information flow inside the project’s middleware. The context information life cycle was defined in [Fer03].

Due to the different requirements of context-related research projects, different world models have resulted. While the HP Cooltown [HP01] project developed a world model that is based on three kinds of objects (Person, Thing, and Place), the Context Toolkit [Dey01] does not classify its information within such a basic hierarchy. One of the reasons why so many different world models have been devised is that the projects focus on specific areas of the context information life cycle. The Cooltown project focuses on the web representation of context information. In contrast, the Context Toolkit tries to support programmers in the context transformation process.

Smart Environments

Smart environments, also called intelligent spaces, appear to be the next step towards a natural interaction between users and digital devices [Oxy]. Places or environments invisibly filled with a multitude of different sensors and actuators, such as cameras, location trackers, identification transponders, surround sound systems, displays, or public media walls [WebWall], try to help the user to solve certain tasks. The goal of this technology is to make the interaction between the user and his digital devices implicit and unconscious, so that the user does not realize how complex the interaction scenarios in the background are. The human-computer interaction could be performed by natural input technologies like gesture recognition, speech recognition, human emotion recognition, human action sequence recognition, or even through tangible interfaces and artifacts [Ishii97]. The vision is that an optimal interaction between an embedded system and a human user would demand no extra information displayed on or typed into a static interface, which would require the user’s full concentration. A human user could interact with his digital environment as he would with a human being. Unobtrusively, the user should be able to control a collection of embedded devices such as sensors and actuators.

Applications of smart environments are not limited to office rooms but unfold their immense possibilities also in areas like transportation, automotive, logistics, industry, and the home, and even in personal area networks, which could be embedded in smart clothing. One of the reasons why pervasive technology has emerged to improve personal computing is the incredible progress in chip design. Today, single chips can host entire digital systems that are easy to embed because of their tiny size and low cost. The era of Pervasive Computing was born. IBM’s definition of Pervasive Computing is:

‘Convenient access, through a new class of appliances, to relevant information with the ability to easily take action on it when and where you need it’

Pervasive Computing is based on four fundamental characteristics, which are [Per01]:

Decentralization: The change from a centralized digital system, like a mainframe, to a strongly decentralized computing environment. Today, the development moves from the Personal Computer to a multitude of tiny or embedded devices which interact to solve the user’s problems (like a headset, a cell phone, and a PDA).

Diversification: Diversification means the shift from a single universal device, which is able to perform a large variety of tasks from entertainment to professional computing, to many specialized devices.

Connectivity: Pervasive and Ubiquitous Computing environments have a strong demand for connectivity and communication. Connectivity between traditional computing systems requires a wired network between static digital nodes. Connectivity in pervasive environments means to connect a multitude of heterogeneous mobile devices, operating systems and system architectures with a wireless network.

Simplicity: Decentralization, diversification, and connectivity provide many more possibilities than traditional paradigms, but on the other hand these characteristics also complicate the use of such devices for human users. The need for simplicity is a direct consequence of the increasing complexity of pervasive systems. Convenience, ubiquity, and intuitiveness are the requirements modern environments have to fulfil.

The area of grid computing is often mentioned in the same context as pervasive computing. Grid computing tries to answer the question of how an application can be distributed over a grid of processors. Grid computing aspects are considered important for pervasive computing, because a distributed sensor and actuator network poses similar distribution and synchronisation problems. In smart environments, different embedded devices could solve specific parts of a problem; in fact, every embedded device has its own specialized task. A GPRS mobile phone could solve the global communication task, while an embedded location tracker with six degrees of freedom could provide exact location information. Other examples of grid computing problems are the transfer of performance-critical calculations from PDAs to more powerful devices or the distribution of calculations among different embedded devices.

Pervasive Computing

The term Pervasive Computing stands for the philosophy of embedding limited intelligence into the objects that surround us [Ar99]. Ubiquitous Computing, on the other hand, means that digital services and applications are mobile and can be consumed everywhere. Often it is not really clear in which aspects the research areas of Pervasive Computing, Ubiquitous Computing, and Mobile Computing differ. Pervasive means that digital technology diffuses through every part of an environment, which implies a high degree of embeddedness. Mobile Computing describes environments in which the user is able to use mobile devices and wireless networks, but does not imply any use of embedded devices. Most modern applications that run in smart environments include aspects of all three computing philosophies. Therefore, it is hard to identify which part of the hardware or software is associated with which philosophy. Fig. 3 shows how these computing philosophies can be distinguished according to embeddedness and mobility.

Pervasive Computing implies that everyday objects can get the possibility to communicate and to discover their environment. The idea is that the technology should not visibly change an environment but should improve common objects below the visible surface. Examples of such changes can already be found in our daily life. One of the first applications of wireless object identification was implemented for supermarkets to prevent customers from taking products without paying for them. A primitive mechanism changes the state of the product from not paid to paid when the customer passes the checkout. For the customer it seems as if the system did not change at all.

Another example targets the automotive industry, where smart components should improve the safety of vehicle drivers. Limited intelligence is put into some parts of vehicles in order to warn the driver when certain measurements are not within specified limits. One example is the use of embedded chips inside the tires to measure the pressure and other critical values. All tires communicate their measurements to the car, which decides how and when to contact the driver or the mechanic. Different automotive companies are already using these smart tires [SmarTire]. Pervasive systems differ from traditional systems in that the user does not have to be in front of an input/output interface and does not have to focus his attention on it. Pervasive systems are meant to function with a minimum of supervision by the user, which means that these systems demand less concentration from their users. To solve problems without the user’s attention, such systems need information about their environment (context information) and the possibility to communicate and to share this information. Many of these ideas were already mentioned by Mark Weiser in his article ‘The Computer for the 21st Century’ [Wei91].

One of the most important aspects of pervasive computing has not been mentioned so far. The fact that many objects of our daily life get some sort of specific intelligence to perform operational tasks leads to the question of who controls an environment, or more generally, how the need for security and safety should be addressed in pervasive environments. While security targets the issue of not sharing critical information with the wrong people or devices, safety asks how such systems may change the user’s situation. How smart devices and environments will change human safety is hard to foresee, because these systems are steadily growing and are already taking control over some areas of our daily life. Most people already depend on smart devices embedded in their cars that take control when the car is in a critical situation (ABS, ESP, air bags). Other components support the user in controlling the vehicle (drive by wire). So it can be seen that already today smart components have taken over control in everyday life and that embedded intelligence is already a reality. The question of how security issues could be handled in such environments is treated in Chapter 7.3.

Wireless Communication

As mentioned before, communication technology and network protocols, as well as information-encoding mechanisms, are the basic principles on which context-aware computing is built. In order to understand the problems of working in heterogeneous network environments, this section gives a short overview of the latest developments in wireless network technology as well as in wireless identification technologies.

Traditional applications were based on wired networks connecting static base stations with fixed network addresses. Due to the shift from static workstations to mobile computing, wireless connections became more and more important. Wireless networks range from infrared connections, also known as IrDA (Infrared Data Association) connections, which operate in direct line of sight at around 1,5 meters, to globally available radio-based networks like the GSM (Groupe Spécial Mobile), GPRS (General Packet Radio Service) and UMTS (Universal Mobile Telecommunications System) standards. The most popular wireless radio-based data communication technologies today are the IEEE 802.11 standard family (802.11a-g) for radio-based local area networks and the IEEE 802.15.1 standard, also known as Bluetooth, for personal area networks. Other radio-based local area network standards are IEEE 802.15.4, called ZigBee, for extremely simple hardware environments, IEEE 802.15.3a, and DECT (Digital Enhanced Cordless Telecommunications). All these radio-based communication standards differ according to their application, communication range, energy consumption, network topology, bandwidth and latency, connection setup time, and scalability. Another important aspect is that wide area radio networks like GSM, GPRS, and UMTS communicate in frequency bands that are licensed by the government. Local and personal area radio networks like the 802.11 family and Bluetooth use the ISM (Industrial, Scientific and Medical) frequency band (2,4 GHz), which can be used without any restriction. Bluetooth was originally designed not as a networking technology but as a cable replacement technology. At the moment Bluetooth is used for nearly every purpose, including personal area networking. The fact that the ISM band can be used without having to obtain a license led to a massive use of ISM band devices in recent years.

A Frost and Sullivan marketing study [Frost] showed that in 2003 over 70 million digital devices with integrated Bluetooth support existed; this number was expected to grow to about 120 million devices in 2004. The following subsections describe the most important radio-based wireless network technologies in more detail.

IEEE 802.11 (WLAN)

The IEEE 802.11 standards, often called wireless Ethernet or WLAN, have evolved into a wireless replacement of the typical wired Ethernet scenario. The first specification in 1997 defined a maximum data rate of 2 MBit/s. Today, the 802.11a standard achieves a data rate of 54 MBit/s and the 802.11b standard about 11 MBit/s. The IEEE 802.11 standard family is one of many IEEE 802 network specifications that share the same layered architecture [MoCo]. The network layer of the ISO/OSI seven-layer architecture can therefore always use the same interface irrespective of which underlying protocol is used (e.g. Ethernet, WLAN, Token Ring). Fig. 4 shows the protocol architecture layers of the WLAN standard.

Fig. 4 shows that the physical layer of 802.11 is divided into two sublayers: PMD (Physical Medium Dependent) and PLCP (Physical Layer Convergence Protocol). The PMD layer offers physical medium-dependent access for infrared, FHSS (Frequency Hopping Spread Spectrum), and DSSS (Direct Sequence Spread Spectrum) communication, while PLCP provides a medium-independent interface for the MAC (Medium Access Control) layer, which manages the packet transport from one network interface to another through a shared transmission channel.

FHSS uses a frequency hopping mechanism to avoid collisions with other WLAN devices. The baseband is divided into 79 channels, which are changed in a pseudo-random order.

DSSS uses the CDMA (Code Division Multiple Access) mechanism, which enables multiple simultaneous transmissions on the same frequency channel by more than one transmitting device. The different signals are multiplexed with the help of device-unique codes and are demultiplexed at the receiver’s side. The DSSS mechanism is more robust against collisions than the FHSS method, and it allows more than one transmission per frequency channel. In modern WLAN devices the DSSS method has superseded the FHSS method.
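The CDMA idea behind DSSS can be illustrated with a toy example: two senders share one channel using orthogonal chip codes, and the receiver recovers each sender's bit by correlating the summed signal with that sender's code. The codes and signal model here are deliberate simplifications (no noise, perfect synchronization, short Walsh codes), not the actual 802.11 chipping sequence.

```python
# Toy CDMA illustration: orthogonal codes let two simultaneous
# transmissions share one frequency channel.

code_a = [1,  1, 1,  1]   # orthogonal Walsh codes (assumed, length 4)
code_b = [1, -1, 1, -1]

def spread(bit, code):
    """Multiplex: map a bit to +/-1 and multiply by the chip code."""
    s = 1 if bit else -1
    return [s * c for c in code]

def despread(signal, code):
    """Demultiplex: correlate the received signal with one code."""
    corr = sum(s * c for s, c in zip(signal, code))
    return corr > 0

# Both senders transmit at once; their signals add on the channel.
channel = [a + b for a, b in zip(spread(1, code_a), spread(0, code_b))]

print(despread(channel, code_a))  # True  -> sender A sent 1
print(despread(channel, code_b))  # False -> sender B sent 0
```

Because the codes are orthogonal, each correlation cancels the other sender's contribution, which is why DSSS tolerates more than one transmission per channel.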

Another important aspect of WLAN devices is the different operation modes that are defined by the IEEE 802.11 standards:

Infrastructure mode: The infrastructure mode allows the association of WLAN client devices to a central base station (called an Access Point). Access Points are wireless routers that connect the wireless client devices to a wired network. The communication between two wireless client devices which are located in the same wireless area (hot spot) is also managed via the Access Point. The wired network between the Access Points is also used to deliver roaming information about mobile WLAN clients.

Ad-hoc mode: The ad-hoc mode of WLAN devices allows the direct connection of devices which are in communication range. If no higher-level packet routing protocol is used, a station can only communicate with other stations that are in communication range.

IEEE 802.15.1 (Bluetooth)

The IEEE 802.15.1 standard was originally designed as a cable replacement between digital devices. The main application scenarios for Bluetooth connectivity are:

Bluetooth is used to build spontaneous ad-hoc networks, called Piconets, to communicate without central control.

Bluetooth is used as a cable replacement between digital devices.

Bluetooth operates in the same ISM frequency band as IEEE 802.11 devices and microwave ovens; therefore these devices interfere with each other. To mitigate this problem, Bluetooth uses frequency hopping to avoid transmission collisions. The available frequency band (83,5 MHz) is divided into 79 channels, each having 1 MHz bandwidth. The frequency hopping procedure pseudo-randomly changes the transmission channel 1600 times per second (fast frequency hopping). With the Gaussian Frequency Shift Keying (GFSK) mechanism, Bluetooth offers a maximum data rate of 1 Mbit/s [Br01].

Because Bluetooth was designed as a cable replacement technology, the connection range was originally defined to be less than 10 meters. Today, many manufacturers offer Bluetooth devices with higher transmission power in order to reach distances of around 50 to 100 meters. Bluetooth distinguishes between two kinds of connections: Synchronous Connection-Oriented links (SCO) and Asynchronous Connection-Less links (ACL). SCO connections are primarily used for audio connections, which need a full-duplex connection with fixed-size data packets that are transmitted synchronously. SCO links are limited to a maximum of three full-duplex voice links per Bluetooth device. ACL connections are used for data transmissions with variable-length data packets that are sent asynchronously. Fig. 5 shows the Bluetooth protocol stack. It contains the TCS layer (Telephony Control Protocol Specification) for telephone-related services. The SDP layer (Service Discovery Protocol) enables the discovery of services offered by other Bluetooth devices. The RFCOMM layer offers standard serial communication emulation for higher-level protocols. The layer that accesses the functionality of the baseband layer directly is called Audio; it manages the SCO connections for direct audio transmissions.

Bluetooth devices can operate in two modes: master and slave. The master sets the frequency hopping sequence and the slaves follow this sequence. Every Bluetooth device has a unique Bluetooth device address and a clock value. When a slave connects to a master, it receives the master’s address and clock value, from which it can calculate the frequency hopping sequence. The number of active slave devices that are managed by a master is limited to seven. A network that consists of one master device with a maximum of seven slave devices is called a piconet. Inside a piconet, all transmissions are managed by the master without any direct connections between the slaves (see Fig. 6).

When more than one piconet is connected, the resulting network is called a scatternet. In a scatternet, one device is either a member of two piconets or acts both as a master and as a slave, as shown in Fig. 7. Scatternets allow the ad-hoc connection of more than seven Bluetooth devices.
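The piconet constraints described above can be sketched in code: a master manages at most seven active slaves, and a joining slave derives the same hopping sequence from the master's address and clock. The hop derivation below is a stand-in (a seeded shuffle over the 79 channels), not the real Bluetooth hop selection algorithm, and all names are illustrative.

```python
import random

# Sketch of a piconet: one master, at most seven active slaves, and a
# hopping sequence that both sides compute from the master's address
# and clock. The seeded shuffle stands in for the real hop algorithm.

class Piconet:
    MAX_SLAVES = 7

    def __init__(self, master_address, master_clock):
        self.master_address = master_address
        self.master_clock = master_clock
        self.slaves = []

    def join(self, slave_address):
        if len(self.slaves) >= self.MAX_SLAVES:
            raise RuntimeError("piconet full: at most 7 active slaves")
        self.slaves.append(slave_address)
        # The slave receives the master's address and clock, so it can
        # compute the same hopping sequence as the master.
        return self.hop_sequence()

    def hop_sequence(self, length=10):
        # Deterministic seed combining address and clock (assumption).
        rng = random.Random((self.master_address << 28) ^ self.master_clock)
        channels = list(range(79))       # the 79 one-MHz channels
        rng.shuffle(channels)
        return channels[:length]

net = Piconet(master_address=0xA1B2C3, master_clock=42)
seq_for_slave = net.join(0x000001)
assert seq_for_slave == net.hop_sequence()  # master and slave agree
```

The key property is that the sequence is a pure function of the master's address and clock, so no per-hop coordination traffic is needed.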

IrDA

In 1993 the Infrared Data Association (IrDA) was founded to establish a common standard for infrared data communication. In 1994 the IrDA 1.0 standard was published, which allowed a maximum data communication rate of 115 kBit/s. Because of this low data rate, the IrDA group announced IrDA 1.1 (Fast Infrared) in 1995 and VFIR (Very Fast Infrared) in 1999. IrDA 1.1 offers a data rate of 4 MBit/s and VFIR even 16 MBit/s. Infrared (IR) communication is a popular and cheap way to transmit data without cables and wires. However, there is quite a difference between IR communication and radio-based communication. IR communication is based on infrared light, which needs a direct line of sight between the sender and the receiver. Because daylight contains parts of the infrared spectrum, IR communication can be interrupted or blocked. While radio-based transmissions can permeate objects like walls, doors, or clothes, IR transmissions are entirely blocked by such objects. The IR communication range is limited to a few meters, whereas radio-based communication ranges are generally higher (e.g. radio-based WLAN with 100 mW transmission power is limited to about 100 meters). The limited communication range and the need for a direct line of sight between sender and receiver offer more privacy than radio-based networks: IR-based communication that is performed within a few meters is hard to intercept from outside. All modern operating systems support the IrDA standard and many mobile devices offer infrared ports. The IrDA standard is based on two substandards:

IrDA Data: This substandard is responsible for data transmissions over infrared connections.

IrDA Control: This substandard defines how input devices like keyboards, mice or joysticks can send control information over an infrared connection.

Fig. 8 shows the IrDA protocol stack. At the bottom of the stack there is the infrared bit transport layer, which manages the encoding of data bits in infrared signals. The IrLAP layer (Infrared Link Access Protocol) is responsible for a reliable connection between sender and receiver. While the IrLAP layer supports only a single reliable channel, the IrLMP layer (Infrared Link Management Protocol) can manage multiple logical channels on a single physical connection. The IAS layer (Information Access Service) allows the discovery of services that are offered by other devices.

The other protocol layers are optional and not necessarily implemented in every IrDA device. The Tiny TP layer (Tiny Transport Protocol) provides the possibility to transmit larger messages through segmentation. The IrLAN layer (Infrared Local Area Network) offers a bridge for connecting to a LAN. IrOBEX (Infrared Object Exchange Protocol) enables the exchange of complex objects such as vCards, a format for the exchange of business cards. IrCOMM emulates a standard serial communication, which enables applications to communicate through a serial port.
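The segmentation idea behind the Tiny TP layer can be sketched as follows: a message larger than the negotiated segment size is split into numbered segments and reassembled at the receiver, regardless of arrival order. The segment format (an offset plus a chunk) is an assumption for illustration, not the Tiny TP wire format.

```python
# Sketch of Tiny-TP-style segmentation: split a large message into
# numbered segments; sequence numbers let the receiver reassemble it.

def segment(message, size):
    """Split a message into (offset, chunk) segments of at most `size` bytes."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(segments):
    """Restore the original message, tolerating out-of-order arrival."""
    return b"".join(chunk for _, chunk in sorted(segments))

msg = b"business card payload exchanged via IrOBEX"
parts = segment(msg, 8)

# Even if segments arrive in reverse order, the offsets restore them.
assert reassemble(reversed(parts)) == msg
```

The same pattern underlies most transport-layer segmentation: the payload carries enough ordering information that the channel itself need not preserve order.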

Wireless Object Identification

One of the most important aspects of context-aware computing is the ability to identify objects. The identification of unknown objects, no matter if they are digital or non-digital, allows devices to discover their environment and to reason about the current context. A multitude of identification technologies are already common in our daily life: barcode scanners in supermarkets identify products and their prices, chip cards or ID cards identify their owners, and RFID transponders identify customers in skiing areas or wellness resorts. For digital systems it is hard to identify unknown objects. To simplify the identification process, various identification technologies have been developed, each with its advantages and disadvantages. Since the identification of objects is an essential aspect of context-aware computing, this chapter gives a detailed overview of some of the most popular identification methods.

Radio Frequency Identification (RFID)

For the work in this thesis, Radio Frequency Identification (RFID) was one of the most important identification mechanisms. The massive use of RFID technology in our project is explained by the advantages that RFID offers over alternative wireless identification systems. One of the most important advantages is that RFID identification tags, also called transponders, are passive devices. RFID transponders are designed to receive energy from an active reader device without any physical contact and to communicate with the reader wirelessly. The transponder does not need its own energy supply, so it is possible to tag nearly every object with an RFID tag without worrying about energy supplies. RFID tags can be produced in tiny sizes because they are composed only of an integrated chip and an antenna. They can be quite expensive, however, depending on their form and the kind of system in which they are used. Optical tags are much cheaper, but RFID transponders allow the identification of objects that are behind solid obstacles; it is not necessary to have a direct line of sight. RFID systems also provide information about the proximity of an RFID-tagged object relative to the position of the reader [Fer02]. In recent years many different RFID systems have appeared, which differ according to their operating range, their frequency, and the kind of communication the transponders support. Generally, it is possible to distinguish between three basic classes of RFID systems [Fi00]. Some of them are already defined in standards of the ISO (International Organization for Standardization) and the IEC (International Electrotechnical Commission), while some standards are still in progress:

close coupling: An RFID system is called closely coupled (ISO/IEC standard 10536) when its communication range is below 1 cm. This means that the transponder has to be placed directly on top of a reading device. Due to the small reading distance, the inductive energy transfer is better than in remotely coupled systems, and the RFID chip is able to transfer complex data to the reader. It is even possible for the RFID chip on the transponder to encrypt the transferred data and to allow write operations on RAM, EEPROM or FERAM memory.

remote coupling: Remotely coupled RFID systems provide a reading and writing range of up to 1 meter. Around 90% of all RFID systems used in industrial, medical, and commercial settings are remotely coupled. Remotely coupled systems are classified into proximity coupling (ISO/IEC 14443) and vicinity coupling (ISO/IEC 15693). Proximity-coupled systems are used for high-speed data transfer over a small distance. Most remotely coupled systems use frequencies below 135 kHz, but there are also solutions that use 6.75 MHz, 13.56 MHz or 27.125 MHz.

long range: Long-range RFID systems use active transponder devices to achieve sending ranges between 1 and 10 meters. Because inductive energy transmission is not possible over such large distances, active transponders include their own energy supply. Long-range RFID systems operate in the microwave frequency band around 2.4 GHz, as shown in Fig. 9.

To distinguish between different RFID systems, it is necessary to know which functionality those systems offer. The functional range of RFID devices starts with low-end systems, which provide read-only transponders, and extends to high-end systems, which can even run an operating system on the transponder. RFID systems can be categorized into the following functional classes:

read-only: Read-only RFID systems continuously transmit a small amount of data (e.g. the transponder ID) whenever the electromagnetic field of an active reader is close enough. Since collision detection is not supported, it is not possible to read more than one transponder ID at a time. The reader cannot write data to the transponder. Low-end read-only RFID systems are often used to replace optical barcode systems.

anti-collision: Anti-collision detection enables the identification of more than one RFID transponder within the reading range. It can be implemented with a Time Division Multiple Access (TDMA) ALOHA protocol. Anti-collision RFID systems are becoming increasingly important due to the growing use of such systems in commercial environments (e.g. product tags in supermarkets).

read-write: Read-write RFID systems offer the possibility to store small amounts of data on the passive transponder devices (between 16 Bytes and 16 kBytes on EEPROM or SRAM).

authentication and cryptography: RFID transponders which are based on a microcontroller chip can offer authentication and cryptography mechanisms. Such transponders are not based on a static state machine but can even host an operating system that provides complex functionality. Such high-end RFID systems are similar to microprocessor chip cards.
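The slotted-ALOHA idea behind anti-collision detection can be illustrated with a small simulation. The frame size, the tag count, and the reader's behavior of muting identified tags are illustrative assumptions for this sketch, not a specific ISO protocol:

```java
import java.util.*;

// Sketch of slotted-ALOHA anti-collision: each unidentified tag answers in a
// randomly chosen slot of a frame; the reader identifies tags that answered
// alone and mutes them for subsequent frames.
public class AlohaAntiCollision {

    // Returns the number of frames needed until every tag is identified.
    public static int identifyAll(int tagCount, int slotsPerFrame, long seed) {
        Random rnd = new Random(seed);
        Set<Integer> pending = new HashSet<>();
        for (int id = 0; id < tagCount; id++) pending.add(id);

        int frames = 0;
        while (!pending.isEmpty()) {
            frames++;
            // slot -> tags that transmitted in this slot during the frame
            Map<Integer, List<Integer>> slots = new HashMap<>();
            for (int id : pending) {
                slots.computeIfAbsent(rnd.nextInt(slotsPerFrame),
                                      s -> new ArrayList<>()).add(id);
            }
            for (List<Integer> answers : slots.values()) {
                if (answers.size() == 1) {           // no collision in this slot
                    pending.remove(answers.get(0));  // tag identified and muted
                }
            }
        }
        return frames;
    }

    public static void main(String[] args) {
        System.out.println("frames needed: " + identifyAll(8, 4, 42L));
    }
}
```

With eight tags and only four slots, at least one collision is unavoidable in the first frame, so several frames are always required.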

Basic architecture of inductively coupled RFID systems. As already mentioned above, most RFID systems use inductive coupling to access the passive RFID transponders. Inductively coupled passive RFID transponders consist of an integrated chip and a coil that serves as the antenna of the passive transponder. To read data from an inductively coupled RFID transponder, the active reading device generates an electromagnetic field that penetrates the transponder's coil and induces a voltage in the passive transponder. Fig. 10 shows the transmission of energy in inductively coupled RFID systems.

Form factors of RFID transponders. Today there exists a wide range of form factors for RFID transponders. Depending on the application area in which the transponders are used to identify objects or people, the form factors range from smart labels to disk transponders [Tex]. Fig. 11 shows some popular inductively coupled RFID transponder forms. Smart labels are already very popular in logistics and warehousing applications, where it is important to track single objects through their whole production cycle. RFID transponders in credit-card size are often used for personal identification, as shown in Fig. 11, where a student card with an RFID transponder is displayed. Transponders of this sort are often used for contactless access control. Glass transponders are extremely small and can be combined with biological material; therefore they are used for animal identification and tracking. Often the glass transponders are delivered in combination with injection devices to place the transponder under the skin of an animal. Recent years have shown that RFID is one of the most promising wireless identification technologies.

Ultrasonic Identification

The identification of objects equipped with active ultrasonic senders is used in many research [Sen] and industrial projects. The active ultrasonic sending device, often called a bat, emits a short pulse of ultrasound that is received by statically installed ultrasound receivers. The receivers are able to identify an object by specific ultrasound pulse times and lengths. One of the most important aspects of ultrasonic identification is that the receivers can calculate an object's fine-grained position by trilateration: the system calculates the three-dimensional position of an object from the times the signal needs to travel from the active bat to the receivers that are in range.
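The trilateration step can be illustrated with a small sketch. For readability it solves the two-dimensional case with three receivers (the 3D case works analogously with a fourth receiver); the receiver layout and distances are made-up values:

```java
// Minimal 2D trilateration sketch: recover a transmitter position from known
// receiver positions and the distances derived from ultrasonic travel times
// (distance = speed of sound * travel time).
public class Trilateration {

    // Solves for (x, y) given receivers r1..r3 and distances d1..d3.
    // Subtracting the circle equations pairwise yields a 2x2 linear system.
    public static double[] locate(double[] r1, double[] r2, double[] r3,
                                  double d1, double d2, double d3) {
        double a11 = 2 * (r2[0] - r1[0]), a12 = 2 * (r2[1] - r1[1]);
        double a21 = 2 * (r3[0] - r1[0]), a22 = 2 * (r3[1] - r1[1]);
        double b1 = d1 * d1 - d2 * d2 + r2[0] * r2[0] - r1[0] * r1[0]
                  + r2[1] * r2[1] - r1[1] * r1[1];
        double b2 = d1 * d1 - d3 * d3 + r3[0] * r3[0] - r1[0] * r1[0]
                  + r3[1] * r3[1] - r1[1] * r1[1];
        double det = a11 * a22 - a12 * a21;  // receivers must not be collinear
        return new double[] { (b1 * a22 - b2 * a12) / det,
                              (a11 * b2 - a21 * b1) / det };
    }

    public static void main(String[] args) {
        // Receivers at (0,0), (10,0), (0,10); true transmitter position (3,4).
        double[] p = locate(new double[]{0, 0}, new double[]{10, 0},
                            new double[]{0, 10},
                            5.0, Math.sqrt(65), Math.sqrt(45));
        System.out.printf("estimated position: (%.2f, %.2f)%n", p[0], p[1]);
    }
}
```

In a real system the distances carry measurement noise, so more than the minimum number of receivers is used and the system is solved in a least-squares sense.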

ORL system (Olivetti and Oracle Research Laboratory). The ORL identification system [ORL] is based on active ultrasonic sending devices that are equipped with a 418 MHz radio transceiver for network communication. Each device has a 16-bit unique ID. To enable an environment to identify the positions of the bats, a matrix of receivers has to be mounted and connected to a controlling PC. The PC periodically broadcasts one of the unique IDs in the 418 MHz band. All bats receive the message, but only the one with this ID is allowed to respond with an ultrasonic pulse. The PC is thus able to identify the specific device and to determine its position in the environment by measuring the signal travel times. It is even possible to obtain information about the object's orientation. An ORL system with 16 ultrasonic receivers mounted on the ceiling can cover an environment of about 75 m³ and offers a location accuracy of about 14 cm around the real position of a device. Fig. 12 shows a prototype of an active ultrasonic device.

The ultrasonic location identification mechanism is one of the most useful and cheapest options for locating objects within buildings. Radio positioning systems are successful for outdoor location tracking; within buildings, however, radio-based location tracking is vulnerable to signal reflections and therefore not useful.

Infrared Identification

Infrared object and location identification is a widely used technology. It uses cheap hardware, and most modern mobile devices are already equipped with infrared interfaces. Another advantage of this technology is that it can be used for both communication and object identification: whereas ultrasound is only an identification and not a communication technology, infrared interfaces serve both purposes. However, infrared light is blocked by solid obstacles and suffers interference from bright sunlight. The location granularity therefore depends on the environment (e.g. rooms), and infrared communication and identification is usually an indoor technology. The first ubiquitous computing projects that used infrared object and location identification were Xerox PARC's PARCTAB environment (described in Chapter 3.1) and Olivetti's Active Badge system [ABa]. The Active Badge environment was one of the pioneering projects that established active infrared identification badges. People in the Olivetti Laboratory use infrared-emitting badges to identify themselves and to determine their location. In order to save energy the badges transmit their unique ID only every 15 seconds. Some badges also offer limited user input facilities through a set of buttons. The main advantage of Active Badges compared to other identification technologies is that the devices are cheap and simple to build. The energy consumption of Active Badges is much lower than that of ultrasonic bats; an Active Badge can operate for more than a year. Fig. 13 shows an Active Badge as it is used in the Olivetti Laboratory.

Vision-based Identification

Vision-based systems use visual input from digital cameras to identify objects and locations in an environment. In order to identify specific visual characteristics, it is necessary to have profound knowledge of the visual representation of an environment. In general, there are two methods for vision-based object identification [Ipi]:

Untagged vision-based systems try to recognize an object by its visual appearance alone. In a simple environment these systems work fine, but as the complexity of objects and their environment increases, the identification process becomes extremely complex. Untagged vision-based systems require a great deal of CPU power, and for complex identification tasks, such as human face recognition, they often fail completely.

Tagged vision-based systems simplify the identification of an object by attaching a unique visual tag to it. The best-known examples of this approach are barcode systems, which are used in every supermarket. Visual tags are one of the cheapest identification mechanisms, because the tags can be printed with a standard printer. Due to their low cost, visual tags are the most popular identification technology to date.

The main disadvantage of visual tags is that the scanner has to directly face the tag in order to identify it; any obstacle between the scanner and the tag prevents identification. Nevertheless, the use of cheap visual tags such as barcodes has in many ways revolutionized contactless identification in large-scale logistics systems.

TRIP. The TRIP (Target Recognition using Image Processing) system was developed by the Laboratory for Communications Engineering at the University of Cambridge. It uses circular two-dimensional barcode tags, also called ringcodes, to identify objects by image processing. The TRIP system uses simple CCD or CCTV cameras to identify the ringcode tags within the camera’s field of view.

A TRIPtag consists of two concentric black rings in the middle, also called the bull's eye. A typical TRIPtag is shown on the left side of Fig. 14. Around the bull's eye, two concentric rings, which are divided into 16 sectors, encode the TRIPtag's information. The first sector is always colored black in order to provide a synchronization sector; it is the only place where all ring sectors are black. The TRIP system uses a ternary encoding, which is shown on the right side of Fig. 14. The next two sectors specify an even-parity check, and the following four sectors (sectors 4-7) encode the radius of the bull's eye. The remaining 9 sectors encode the unique ID of the tag, so TRIPtags can represent IDs between 1 and 19,682 (3^9 - 1). Concentric rings were chosen for identifying objects and their locations because round shapes are not as common in man-made environments as square and rectangular shapes; furthermore, the identification of round shapes is easier and less CPU-intensive than the identification of rectangular shapes. To extract the 3D position of a tagged object and its orientation, it is necessary to use the known size of the tag and to calculate its perspective projection. Since the size of a TRIPtag is encoded within sectors 4-7, it is possible to use variable-sized tags. Visual marker systems such as TRIP are flexible and inexpensive, since any webcam can be used to identify tagged objects.
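Once the image-processing stage has read the sector values, decoding the 9 ID sectors of a TRIPtag amounts to reading a base-3 number. The following sketch assumes the sectors arrive most-significant digit first, which is a simplification of the real decoding pipeline:

```java
// Sketch of decoding the 9 ternary ID sectors of a TRIPtag into its numeric
// ID. Sector ordering is an assumption for illustration, not the exact TRIP
// specification.
public class TripDecoder {

    public static int decode(int[] ternarySectors) {
        int id = 0;
        for (int digit : ternarySectors) {
            if (digit < 0 || digit > 2)
                throw new IllegalArgumentException("ternary digit expected");
            id = id * 3 + digit;   // base-3 positional encoding
        }
        return id;                 // 9 sectors give IDs in 0 .. 3^9 - 1
    }

    public static void main(String[] args) {
        // Sectors 0..6 empty, then digits 1 and 2: 1*3 + 2 = 5
        System.out.println(decode(new int[]{0, 0, 0, 0, 0, 0, 0, 1, 2}));
    }
}
```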

Ad-Hoc Networks

Mobile devices, in combination with wireless networks, require network protocols that allow the dynamic creation of network topologies. Ad-hoc networks can be established between a group of devices that are able to communicate with each other via an automatically created routing table. Ad-hoc networks operate without a connection to a central server that has to be available globally; they do not rely on any infrastructure or pre-established central administration [Toh]. Networks that do not rely on any infrastructure, such as ad-hoc networks, are called infrastructureless. Infrastructureless networks offer great advantages, but also have some disadvantages compared to static networks with fixed topologies and a central administration. Ad-hoc networks can be used in any situation between any group of nodes in order to access services, exchange data, or forward requests to other nodes. In environments where no network infrastructure is available (e.g. outdoors), or where a static network topology is not useful (e.g. PANs between an MP3 player, mobile phone and headset), ad-hoc networks provide a convenient communication solution. The main disadvantages of ad-hoc networks are their lack of a central security administration and the additional network administration overhead for the individual nodes. Every node in an infrastructureless network has to perform additional tasks to establish network communication with other nodes. A node has to serve as a router in order to forward packets to the next hop on the route to a packet's destination node. Because ad-hoc networks are highly dynamic (e.g. nodes can disappear without notification), the routing information changes often. In static networks, where millions of devices are connected under a central administration, the network address of a device is often mapped directly to its location in the network topology.
Therefore it is easier to calculate network routes in static networks. In ad-hoc networks the network address is completely independent of the device's location in the network. Most of the time, ad-hoc network nodes are mobile devices that change their location; the network topology therefore changes dynamically, and the calculation of a route is much more complex. Routing algorithms can be classified into adaptive and non-adaptive mechanisms, where adaptive mechanisms are able to react automatically to changes in the network topology. Due to the highly dynamic nature of ad-hoc network topologies, only adaptive routing mechanisms can be used for them. Adaptive routing mechanisms can be categorized into the following groups:

Table-driven routing mechanisms (proactive algorithms) use routing tables to find a route to a destination. Such mechanisms update their tables periodically and also gather routing information about hosts that have not been requested before. Examples of table-driven routing mechanisms are WRP [WRP] and DSDV [DSDV].

On-demand mechanisms (reactive algorithms) collect routing information to a destination address when a node has to forward a packet to this address, but they do not store this information for later use as table-driven mechanisms do. Examples of such mechanisms are DSV [DSV] and ABR (Associativity-Based Routing) [ABR].
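The table-driven idea can be sketched with a simplified distance-vector update in the spirit of DSDV: a node merges a neighbor's advertised table into its own, taking a route through that neighbor whenever it is shorter. Sequence numbers and route invalidation, which the real protocol uses to avoid routing loops, are omitted in this sketch:

```java
import java.util.*;

// Simplified proactive (table-driven) routing node: routing table entries map
// a destination to the next hop and the hop count.
public class DistanceVectorNode {

    static class Route {
        final String nextHop;
        final int hops;
        Route(String nextHop, int hops) { this.nextHop = nextHop; this.hops = hops; }
    }

    final Map<String, Route> table = new HashMap<>();

    // Called when a direct neighbor periodically advertises its hop counts.
    public void mergeAdvertisement(String neighbor, Map<String, Integer> advertised) {
        table.put(neighbor, new Route(neighbor, 1));          // neighbor is one hop away
        for (Map.Entry<String, Integer> e : advertised.entrySet()) {
            int viaNeighbor = e.getValue() + 1;               // one extra hop through the neighbor
            Route current = table.get(e.getKey());
            if (current == null || viaNeighbor < current.hops)
                table.put(e.getKey(), new Route(neighbor, viaNeighbor));
        }
    }

    public static void main(String[] args) {
        DistanceVectorNode a = new DistanceVectorNode();
        a.mergeAdvertisement("B", Map.of("C", 1, "D", 2));    // D reachable via B in 3 hops
        a.mergeAdvertisement("E", Map.of("D", 1));            // shorter: via E in 2 hops
        Route d = a.table.get("D");
        System.out.println("route to D via " + d.nextHop + " in " + d.hops + " hops");
    }
}
```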

An important characteristic of routing protocols in mobile computing environments is the amount of energy necessary to transmit a packet along a calculated route. Changing the power settings of a mobile device (e.g. switching to power-saving mode) can therefore change the network topology and the routes. Ad-hoc networks usually consist of a multitude of heterogeneous devices with different hardware capabilities. A device with a permanent power supply, a powerful CPU and a large amount of memory is a better routing candidate than a mobile device with very limited resources. Bandwidth constraints are also a critical aspect of ad-hoc routing protocols, because wireless networks often offer significantly lower bandwidth than wired networks, and wireless connections are likely to vary in bandwidth with signal strength and latency. For connecting ad-hoc networks to traditional infrastructure networks, such as the global Internet, it is necessary to specify how ad-hoc connections can be routed into a static wired network. The resulting mobile Internet can be divided into two layers: the mobile host layer and the mobile router layer [CO99], as shown in Fig. 15. The mobile host layer consists of several mobile hosts that are temporarily connected to fixed routers, which are directly connected to a wired network. The mobile host layer is supported by the MobileIP [RFC2290] and DHCP [RFC2131] standards. In this layer, communication between mobile hosts is only possible through the infrastructure.

The mobile router layer consists of mobile hosts and mobile routers. Each mobile host in the mobile router layer is associated with a mobile router through a wireless ad-hoc connection. A mobile router routes between other mobile hosts or into a traditional static network through a wireless infrastructure connection. The mobile router layer does not need any infrastructural support from the traditional static network; it forms a parallel network to the static one. In recent years, research on ad-hoc networks has focused on military scenarios where many heterogeneous devices have to communicate in unknown environments without any infrastructure. With the emergence of Peer-To-Peer file- and resource-sharing frameworks, as well as personal area networks, ad-hoc networks are gaining more and more importance.

Peer-To-Peer Computing

The term Peer-To-Peer Computing (P2P) refers to a research area that has gained a lot of popularity in recent years. P2P describes systems and applications that share resources, such as files or services, without the use of any central authority like a server. P2P systems are comparable to ad-hoc networks, where every node is able to communicate and to forward service requests without any central infrastructure. While ad-hoc networks provide basic networking protocols to enable communication between devices without a central router, P2P computing provides higher-level services on top of arbitrary networking protocols. The difference between ad-hoc network protocols and P2P systems is that ad-hoc network protocols are located in the network and transport layers of the ISO/OSI seven-layer architecture [OSI], whereas P2P systems operate in the presentation and application layers. P2P computing is a controversial topic, because many of the technologies and mechanisms used by P2P frameworks are already known from other research areas (e.g. grid computing, parallel computing, network communication). P2P systems often rely on an arbitrary network structure (ad-hoc or managed, wireless or wired) and realize a higher-level decentralized organization of resources. In fact, P2P applications are typically designed to run on ad-hoc networks, even if most current P2P applications run on traditional networks. One of the most popular showcases for a P2P application is SETI@home (Search for Extraterrestrial Intelligence), which distributes small amounts of signal recognition calculations among millions of private PCs. The SETI@home scenario shows that P2P technology can offer great advantages compared to traditional mechanisms. Some companies have already started to establish software frameworks to support the development of P2P applications, such as Sun's Java-based JXTA framework [JXTA] or MIT's IRIS framework [IRIS].
There is even a framework for testing P2P protocol implementations, called p2psim [P2PSIM]. Other well-known examples are file-sharing P2P applications like Napster [NAP] and Gnutella [GNUT], which gained questionable fame in recent years by sharing copyrighted resources. A typical P2P node, also called a peer, is designed according to a hybrid client-server model: a peer may act as a server for some peers and as a client for others. The high autonomy of peers leads to the same problems as in mobile host scenarios. One negative aspect is the lack of trustworthiness, because there is no possibility to contact a trusted central server. Other negative aspects are the high redundancy of information that may travel through a P2P network, as well as the limited scalability of such networks, because they require higher management effort. The design of P2P applications is an alternative to the classic client-server model, but many P2P frameworks use a hybrid approach with some central peers in order to reduce the security and redundancy problems. Fig. 16 shows the taxonomy of computing systems, including P2P, taken from [P2PHP]. As the right taxonomy in Fig. 16 shows, P2P systems can be used in four different contexts: distributed computing, file or resource sharing, collaboration, and platform infrastructure. SETI@home represents a typical instance of distributed computing. Napster, Gnutella and KaZaA are classified as file or resource sharing systems. The messenger tool Jabber is a collaboration system; JXTA and IRIS form P2P platform infrastructures. The SiLiCon framework presented in this thesis is a combination of all four of these aspects.

P2P Discovery Algorithms

One of the most important parts of a P2P system is the discovery of other peers and the lookup of services. P2P architectures try to work as decentralized as possible. Therefore, it is necessary to provide a powerful discovery mechanism for finding other peers. There are several such algorithms ranging from completely decentralized to centralized discovery.

Centralized discovery. Centralized discovery algorithms use a central repository to store the contact information of all peers. When a peer wants to find another peer, it simply obtains that peer's contact information from the repository in order to communicate with it. Compared to broadcast discovery mechanisms, this offers at least a small amount of security and privacy, because the mechanism depends on a central trusted repository. The central peer also gives the discovery process good scalability. The problem with this approach is that the central peer is a single point of failure and a potential bottleneck. Furthermore, there has to be a connection between every peer and the central repository, so a centralized discovery mechanism is completely useless within an ad-hoc infrastructure.

Broadcast or flooding discovery. Broadcast discovery mechanisms are pure P2P solutions, in which the peers have no shared information. They are based on completely decentralized algorithms where all peers announce their appearance with a broadcast or multicast message. When two peers meet, they exchange their contact information as well as the information they have collected from other peers before. Broadcast mechanisms provide a good solution for peer discovery in local environments. In large-scale environments with a high number of peers, however, they produce too much network traffic and do not scale. Due to these scalability problems, broadcast discovery algorithms often limit the number of hops a discovery packet can travel. Search requests may therefore return without a result, even if the desired peer is running and able to communicate. Broadcast discovery mechanisms have non-deterministic behavior, and a search request without a limited number of hops might take an indefinite amount of time.

Hybrid discovery solutions. Modern P2P frameworks use hybrid discovery mechanisms in order to reduce the scalability problems in large-scale environments. One possible solution is the definition of superpeers, which are peers with higher reliability and more CPU power than an average node. A superpeer collects information about a dedicated group of standard peers in order to speed up the discovery process; every discovery request is first sent to the nearest known superpeer. The file-sharing application KaZaA uses such a hybrid mechanism based on superpeers. It is expected that P2P applications and platforms will also appear in areas other than file sharing. Decentralized systems can operate in centralized networks (such as the Internet) as well as in ad-hoc networks. There are even P2P systems designed to run on mobile devices such as smart cell phones or PDAs that run a higher-level operating system (e.g. Symbian OS, Windows CE or Linux).
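The hop-limited flooding described above can be sketched as follows. The topology here is a hypothetical adjacency map; real peers would forward the query asynchronously over the network rather than through a shared queue:

```java
import java.util.*;

// Sketch of hop-limited flooding discovery: a query is forwarded to all
// neighbors until its hop budget (TTL) is exhausted.
public class FloodingDiscovery {

    record Query(String peer, int ttl) {}

    public static boolean discover(Map<String, List<String>> topology,
                                   String origin, String target, int ttl) {
        Deque<Query> queue = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();   // peers drop queries already seen
        queue.add(new Query(origin, ttl));
        visited.add(origin);
        while (!queue.isEmpty()) {
            Query q = queue.poll();
            if (q.peer().equals(target)) return true;
            if (q.ttl() == 0) continue;          // hop budget exhausted
            for (String neighbor : topology.getOrDefault(q.peer(), List.of())) {
                if (visited.add(neighbor))       // forward only to unseen peers
                    queue.add(new Query(neighbor, q.ttl() - 1));
            }
        }
        return false;  // the peer may exist but lie outside the hop limit
    }

    public static void main(String[] args) {
        // Chain topology A - B - C - D: D is three hops away from A.
        Map<String, List<String>> net = Map.of(
                "A", List.of("B"), "B", List.of("A", "C"),
                "C", List.of("B", "D"), "D", List.of("C"));
        System.out.println(discover(net, "A", "D", 2)); // hop limit too small
        System.out.println(discover(net, "A", "D", 3));
    }
}
```

The first call illustrates the non-determinism noted above: the target peer exists and is reachable, but the hop limit makes the search return empty.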

Jini

Jini (Java Intelligent Network Infrastructure) [Jini] was introduced by Sun Microsystems to provide a reliable software infrastructure for ad-hoc connectivity between heterogeneous digital devices. It was presented to the public in 1999 and is based on the well-established Java environment. Jini was intended to widen the spectrum of Java-enabled devices to mobile and embedded systems, making it possible, for example, to spontaneously connect even smart household appliances such as refrigerators, microwaves or dishwashers. Traditional Java applications run on top of a virtual machine, the Java Virtual Machine (JVM). Jini is designed to build a spontaneous grid of JVMs in order to distribute services between heterogeneous devices. Service platforms like Jini facilitate the rapid creation and deployment of services, as well as the provision of dynamic service discovery mechanisms. Jini service communication is based on RMI (Remote Method Invocation) and requires clients to be implemented in Java. RMI is an extension of the traditional Remote Procedure Call (RPC) mechanism that allows methods to be called over a network in a completely transparent way: in Java source code an RMI call cannot be distinguished from a local method call. The use of underlying Java technologies offers great advantages for developers, who can rely on already established Java technologies and libraries. Java Object Serialization enables the transport of complex RMI parameters over the network and even allows entire objects, including their code, to be moved. To access low-level device capabilities it is often necessary to implement a bridge between Java and C++ using JNI (Java Native Interface). Fig. 17 shows the layered architecture of a Jini service provider and a Jini client application.
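The transparency of RMI can be demonstrated with a minimal round trip in a single JVM, using only the standard java.rmi API. The Greeter interface, the registry name and the port are illustrative choices for this sketch:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Minimal RMI round trip: export a remote object, register it, look it up,
// and call it. The call at the bottom reads exactly like a local method call
// even though it travels through a TCP connection.
public class RmiSketch {

    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    public static String roundTrip() {
        try {
            Registry registry = LocateRegistry.createRegistry(1099); // in-process registry
            Greeter service = name -> "Hello, " + name;              // server-side implementation
            Greeter stub = (Greeter) UnicastRemoteObject.exportObject(service, 0);
            registry.rebind("greeter", stub);

            Greeter remote = (Greeter) registry.lookup("greeter");
            String result = remote.greet("Jini");   // indistinguishable from a local call
            UnicastRemoteObject.unexportObject(service, true);
            return result;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip());
    }
}
```

A Jini service object plays the role of the stub here: the client receives it from the lookup service and calls it without knowing where the implementation runs.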

At the lowest level of the Jini protocol stack, the network layer is responsible for the transport of byte arrays. The Java network package allows Jini to use TCP/IP sockets, which provide a reliable connection between two network endpoints, as well as UDP (User Datagram Protocol), an unreliable packet-oriented network protocol. IP Multicast is used to send packets to IP multicast groups in order to discover a Jini lookup service provider.

Jini discovery and lookup

The kernel of the Jini service platform is based on three protocols: discovery, join and lookup. When a Jini device appears, it tries to find the nearest lookup service provider in the current environment; this is called discovery. Once a lookup service provider is found, the device registers its own services there; this is called join. A service provider that joins a lookup service provider has to register a service object. A service object is a proxy object that is able to call the service at the service provider's host. How the service is called at the remote host is up to the service object, but most of the time RMI (Remote Method Invocation) is used. The service object is registered at the lookup service provider through the Java serialization mechanism, which allows the transmission of Java objects. The discovery process is performed using IP multicast packets, which means that the Jini device has to know the IP multicast group address on which a possible lookup service provider listens. Services are described by Java interfaces; therefore, a Jini client asks a lookup service provider for a specific interface. If a service with this interface has been registered there, the client receives the service object, through which it can call the service. Fig. 18 shows how two Jini devices, a digital camera and a photo printer, send IP multicast discovery packets in order to find a lookup service provider. When a lookup service provider receives a discovery packet from a device, it responds with a multicast response packet containing its address. The requesting devices, the camera and the printer, receive the response packet from the lookup service provider and register their services there. Jini services are specified through a Java interface definition. For registration, the Jini service provider sends a 128-bit UUID (Universally Unique Identifier), the service interface and a collection of attributes to the lookup service provider. The attributes provide optional meta-information about the service.

In a TCP/IP environment, all IP multicast packets are sent via UDP, which is an unreliable packet-oriented protocol. With the Java serialization mechanism it is possible to encode the request and response into a byte array, which can be transmitted in a UDP datagram. Because a UDP datagram used for discovery is limited to 512 bytes, the discovery requests are also limited to this size. The Java serialization mechanism also guarantees platform independence. Fig. 19 shows how the digital camera performs a lookup for a service with a specific Java interface, which offers the method printPhoto(Photo p). Jini devices use the Java type system to determine whether an interface matches a service lookup; Java interfaces that offer the same signature but implement different types do not match. After the camera has received the address and the service object of the service provider, it is able to call the service through the printer's service object.
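The join and lookup steps can be modeled with a simplified in-memory lookup service. This is not the real net.jini API; it only illustrates registering a service object under its Java interface type and retrieving it by that type, with a hypothetical PrintService that takes a file name instead of a Photo object:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified, in-memory model of Jini's join and lookup protocols: providers
// register a proxy object under its service interface, and clients look the
// service up by asking for that interface type.
public class MiniLookupService {

    public interface PrintService {
        String printPhoto(String photo);
    }

    private final Map<Class<?>, Object> services = new HashMap<>();

    // "join": a provider registers its service object under the interface type.
    public <T> void join(Class<T> serviceInterface, T serviceObject) {
        services.put(serviceInterface, serviceObject);
    }

    // "lookup": a client asks for a service by its interface type.
    public <T> T lookup(Class<T> serviceInterface) {
        return serviceInterface.cast(services.get(serviceInterface));
    }

    public static void main(String[] args) {
        MiniLookupService lookup = new MiniLookupService();
        // The printer joins by registering its service object...
        lookup.join(PrintService.class, photo -> "printed " + photo);
        // ...and the camera retrieves the service object and calls it.
        PrintService printer = lookup.lookup(PrintService.class);
        System.out.println(printer.printPhoto("holiday.jpg"));
    }
}
```

In real Jini the service object travels to the client via serialization and typically forwards the call over RMI, but the type-based matching is the same idea.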

Leases

In Jini environments, as in all distributed systems, a client has to know how long a service will run and how reliable it is. These characteristics may differ from device to device: while a printing service may run for months without interruption, a mobile device could power off unexpectedly. Jini handles this problem with leases. A lease is a period of time for which the service provider guarantees that the holder of the lease is able to access a Jini service. Jini works with lease durations rather than with absolute times because of synchronization problems. A client can request a lease for a certain period of time, renew an existing lease, or cancel it if the client is no longer interested in the service. If the lease is not renewed before its duration elapses, it expires.
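The duration-based lease semantics can be sketched as follows. The time values are illustrative, and a real Jini lease is granted and renewed in negotiation with the service provider:

```java
// Sketch of a Jini-style lease based on durations rather than absolute time:
// the holder must renew before the granted duration elapses, otherwise the
// lease expires and the provider may reclaim the resource.
public class Lease {

    private long expiresAtMillis;

    public Lease(long now, long durationMillis) {
        this.expiresAtMillis = now + durationMillis;
    }

    public boolean isValid(long now) {
        return now < expiresAtMillis;
    }

    // Renewal extends the lease relative to the current time, not the old expiry.
    public void renew(long now, long durationMillis) {
        if (!isValid(now)) throw new IllegalStateException("lease already expired");
        expiresAtMillis = now + durationMillis;
    }

    public void cancel() {
        expiresAtMillis = Long.MIN_VALUE;  // holder gives the lease up early
    }

    public static void main(String[] args) {
        Lease lease = new Lease(0, 1000);
        lease.renew(900, 1000);                   // renewed just before expiry
        System.out.println(lease.isValid(1500));  // still valid until t = 1900
        System.out.println(lease.isValid(2000));  // expired: no further renewal
    }
}
```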

Jini Summary

Jini was one of the first service platforms that supported the spontaneous lookup and interaction of distributed services. It is based on the Java environment, which allows Jini developers to use many well-established technologies such as Java RMI, Java Object Serialization, JavaBeans, Enterprise JavaBeans, and JavaSpaces. One of the major problems of Jini is that it cannot run on the Java 2 Micro Edition (J2ME), the Java edition for embedded and mobile systems. Jini needs the full Java Runtime Environment, which means that a Jini device cannot be realized on most embedded and mobile devices. Another drawback of Jini is that the service interface has to be specified as a Java interface type, which means that a Jini service has to be implemented in Java. Modern service platforms use language- and platform-independent interface definition languages such as WSDL (Web Services Description Language).