Outline

Introduction

Chronic ailments such as cardiovascular diseases, hypertension, and diabetes affect a significant part of the Western population [Ref. 1]. Telemonitoring (TM) allows healthcare institutions to care for their patients while they are out of hospital, which is especially useful for managing various chronic diseases. A prognosis for the year 2013 [Ref. 2] expects the use of TM to increase significantly. New trends in sensor technologies, ubiquitous computing, and home automation enable an unobtrusive, non-invasive TM that is applicable to an increasing number of patients and diseases. Fig. 1 illustrates the information flow between patients' body sensors and the caregivers in a typical TM scenario. Caregivers can be specialists in hospitals or general practitioners. Mobile and stationary devices form a networked system for the acquisition, processing, storage, and forwarding of sensory data. The TM system receives its input from sensory data sources such as skin electrodes, non-invasive blood-pressure monitors, and accelerometers, which deliver their results continuously. As a consequence, the system must process incoming data on the fly, because storage capacity is limited and, in most cases, communication facilities are too unreliable and too limited for simple data-forwarding strategies. The need for local online processing is also driven by the prevalent demand for rapid detection of problematic health conditions based on incoming data and by thresholds on sensory data processing results.
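The on-the-fly, threshold-driven processing described above can be illustrated with a minimal Python sketch. It is not part of the described infrastructure; the window size, threshold, and blood-pressure example values are purely illustrative assumptions.

```python
from collections import deque

def sliding_average(stream, window=5):
    """Yield a running average over the last `window` samples,
    processing the stream online without storing it in full."""
    buf = deque(maxlen=window)
    for sample in stream:
        buf.append(sample)
        yield sum(buf) / len(buf)

def detect_alerts(stream, threshold=140.0, window=5):
    """Flag smoothed values that exceed a per-patient threshold."""
    return [avg for avg in sliding_average(stream, window) if avg > threshold]

# e.g., a stream of systolic blood-pressure readings (mmHg)
readings = [120, 125, 150, 160, 158, 130, 128]
alerts = detect_alerts(readings, threshold=140.0, window=3)
```

The point of the sketch is that each incoming sample is consumed immediately and only a bounded buffer is kept, matching the limited storage and communication constraints of the TM setting.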

Data Stream Management for Telemonitoring

In order to fully capitalize on these new developments for TM applications, we argue that new concepts of information management are needed. In this paper, we concentrate on two main aspects: (1) data stream management (DSM) and (2) reliable process management (RPM).

DSM is particularly suitable for TM [Ref. 3], because it targets near real-time processing, large volumes of continually produced data, and timely delivery of processing results. Various groups are currently researching new paradigms and techniques to handle data streams the way a database system handles static data [Ref. 4], [Ref. 5], [Ref. 6]. In addition to those rather generic approaches investigated by these groups, we have identified a set of basic operators needed for most TM applications (e.g., basic ECG analysis such as measurement of the RR interval). Additionally, we put much emphasis on extensibility: our infrastructure can be extended by disease- or scenario-specific operators. While DSM is quite promising, it is not enough. The increased number of components, devices, and platforms leads to an increased failure probability. Reliability and provable correctness are new challenges [Ref. 7] that are of utmost importance in TM applications. In order to meet these challenges, we have started to add DSM to the hyperdatabase (HDB) project [Ref. 8], which is designed for reliable and correct transactional process management.
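To give an impression of what such a basic TM operator might look like, the following sketch computes RR intervals (the time between successive R peaks) from an ECG sample stream. The local-maximum peak detector, the amplitude threshold, and the sampling rate are deliberately simplistic illustrative assumptions; clinical ECG analysis is considerably more involved.

```python
def rr_intervals(samples, fs=250, threshold=0.6):
    """Toy RR-interval operator: treat a sample as an R peak if it
    exceeds `threshold` and is a local maximum, then return the
    distances between consecutive peaks in seconds (fs = sampling rate)."""
    peaks = [
        i for i in range(1, len(samples) - 1)
        if samples[i] > threshold
        and samples[i] >= samples[i - 1]
        and samples[i] > samples[i + 1]
    ]
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
```

An operator of this kind would be one of the reusable building blocks mentioned above, parameterisable per patient (e.g., via the threshold) and replaceable by a more sophisticated, disease-specific implementation.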

Combining the Hyperdatabase Middleware with Data Stream Management

The introduction of relational database systems in the 1980s led to a new infrastructure and main platform for the development of data-intensive applications. By now, however, the prerequisites for application development have changed dramatically. Usually, specialized application systems or components are already well in place. Applications are no longer built from scratch but rather by combining existing components into processes or workflows. The hyperdatabase middleware [Ref. 8] is designed as a platform for the reliable execution of such processes. Applications are easily developed by designing user-defined processes based on existing components. Additionally, new components may be integrated to provide new functionality. This offers high flexibility to the application designer. Furthermore, HDB systems allow for reliable process execution by offering sophisticated failure handling. OSIRIS [Ref. 9] is a prototype of a hyperdatabase. It has been developed at ETH Zurich and is the starting point of our infrastructure for TM. We extend OSIRIS in order to combine the advantages of transactional process management with the virtues of managing streaming data. To this end, our infrastructure additionally supports DSM as an extended form of processes, called stream processes. Our infrastructure makes it possible to build applications from combinations of basic components (both streaming and non-streaming).
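The idea of building stream processes by combining basic components can be sketched as a small composition example. The `compose` helper and the operator names below are hypothetical illustrations, not part of the OSIRIS API.

```python
from collections import deque

def compose(*operators):
    """Chain stream operators into one stream process: the output
    stream of each operator feeds the next."""
    def process(stream):
        for op in operators:
            stream = op(stream)
        return stream
    return process

def moving_average(window):
    """Basic smoothing component over the last `window` samples."""
    def op(stream):
        buf = deque(maxlen=window)
        for x in stream:
            buf.append(x)
            yield sum(buf) / len(buf)
    return op

def above(threshold):
    """Basic filter component passing only values above a threshold."""
    def op(stream):
        return (x for x in stream if x > threshold)
    return op

# a stream process assembled from two reusable components
pipeline = compose(moving_average(3), above(100))
result = list(pipeline([90, 95, 120, 130, 125]))
```

Because each component only consumes and produces a stream, new disease- or scenario-specific operators can be slotted into such a pipeline without changing the others, which is the flexibility the text attributes to the HDB approach.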

The sophisticated failure management is also applied to stream processes and offers high reliability. In case of a failure in process execution, other suitable components can be used to fulfil the failed task. If this is not possible, an alternative process branch is executed. Finally, if the alternative branch also fails, a user-defined process can be invoked to handle this failure scenario (e.g., technical support is called to fix the problem at the patient's home). Our proposed infrastructure allows for high flexibility by offering an extensible set of basic components for the creation of patient- and disease-specific monitoring applications. The processing of sensory data is specified by combining basic components into traditional and stream processes. Thresholds and parameters of components can be set individually for each patient. Adaptability is achieved by user-defined quality-of-service definitions, which prioritise important processing in case of failures or high system load. Specialized communication operators adapt processes to intermittent connectivity between components (e.g., between a wireless sensor and a home-based PC). In case of long periods of disconnection, special user-defined processes are invoked (e.g., reminding the patient to return to connection range by sending an SMS to his/her mobile phone). As the previous example shows, the integration of DSM and process management into a single hyperdatabase infrastructure allows for interaction between both paradigms. For example, stream process results can invoke traditional processes (e.g., calling the ambulance in case of a critical health state) and vice versa. In general, this combination allows for the creation of complex TM applications in which process and data stream management work seamlessly together. Finally, usability is achieved by user-friendly graphical interfaces following a "boxes-and-arrows" approach: boxes represent the operators which process the data streams, and arrows represent the data streams between the operators. This approach gives non-programmers (e.g., medical personnel) high flexibility in defining and customizing their sensor data processing.
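The layered failure handling described above (try alternative components first, then fall back to a user-defined failure process) can be sketched as follows. This is a simplified illustration under assumed function names; the actual HDB failure handling is transactional and considerably richer.

```python
def execute_with_fallback(primary, alternatives, failure_process):
    """Try the primary component, then each alternative in turn;
    if all fail, invoke a user-defined failure process
    (e.g., notifying technical support)."""
    for component in [primary, *alternatives]:
        try:
            return component()
        except Exception:
            continue  # this component failed; try the next one
    return failure_process()

# hypothetical usage: two failing components, then a working one
def broken():
    raise RuntimeError("component down")

outcome = execute_with_fallback(
    broken,
    [broken, lambda: "task completed"],
    lambda: "technical support notified",
)
```

The failure process itself is just another user-defined process, so the same mechanism that runs monitoring tasks also handles their failures, which is what makes the escalation chain uniform.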

Conclusion

We propose a TM infrastructure that meets these new challenges by combining an HDB middleware with DSM. Flexibility allows us to provide an all-in-one healthcare telemonitoring solution for various chronic diseases, usable by medical personnel without extensive training. Adaptability enables the infrastructure to adjust to different devices, components, network connections, and load situations with respect to user-defined quality-of-service constraints. Reliability is achieved by sophisticated failure handling strategies, an essential requirement for TM applications in healthcare, where systems must be dependable.