Background: The standard imaging procedure for a patient presenting with renal colic is unenhanced computed tomography (CT). The CT-measured size correlates closely with the estimated prognosis for spontaneous passage of a ureteral calculus. Size estimations of urinary calculi in CT images are still based on two-dimensional (2D) reformats. Purpose: To develop and validate a calculus-oriented three-dimensional (3D) method for measuring the length and width of urinary calculi, and to compare the calculus-oriented measurements of the length and width with corresponding 2D measurements obtained in axial and coronal reformats. Material and Methods: Fifty unenhanced CT examinations demonstrating urinary calculi were included. A 3D symmetric segmentation algorithm was validated against reader size estimations. The calculus-oriented size from the segmentation was then compared to the estimated size in axial and coronal 2D reformats. Results: The validation showed 0.1 ± 0.7 mm agreement with the reference measure. There was a 0.4 mm median bias for 3D-estimated calculus length compared to 2D (P < 0.001), but no significant bias for 3D width compared to 2D. Conclusion: The length of a calculus in axial and coronal reformats is underestimated compared to 3D if its orientation is not aligned with the image planes. Future studies aiming to correlate calculus size with patient outcome should use a calculus-oriented size estimation.
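The conclusion above rests on simple projection geometry: an oblique line segment appears shortened in any single image plane. A minimal sketch, using a hypothetical stone long axis (the vector and plane conventions below are invented for illustration, with the axial plane dropping z and the coronal plane dropping y):

```python
import math

def projected_length(vec, drop_axis):
    """Length of a 3D vector after projecting out one axis
    (drop_axis: 0 = x, 1 = y, 2 = z)."""
    comps = [c for i, c in enumerate(vec) if i != drop_axis]
    return math.hypot(*comps)

# Hypothetical calculus long axis, oblique to all image planes (mm).
long_axis = (4.0, 3.0, 5.0)

length_3d = math.sqrt(sum(c * c for c in long_axis))  # true 3D length
length_axial = projected_length(long_axis, 2)         # axial plane: drop z
length_coronal = projected_length(long_axis, 1)       # coronal plane: drop y

print(round(length_3d, 2))       # 7.07
print(round(length_axial, 2))    # 5.0
print(round(length_coronal, 2))  # 6.4
```

Both in-plane measurements underestimate the true length; only a calculus-oriented (3D) measure recovers it regardless of stone orientation.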

The concept of Ecology of Physically Embedded Intelligent Systems, or PEIS-Ecology, combines insights from the fields of ubiquitous robotics and ambient intelligence to provide a new approach to building intelligent robots in the service of people. While this concept offers great potential, it also presents a number of new scientific challenges. The PEIS-Ecology project is an ongoing collaborative project between Swedish and Korean researchers which addresses these challenges. In this paper we introduce the concept of PEIS-Ecology, discuss its potential and its challenges, and present our current steps toward its realization. We also point to experimental results that show the viability of this concept.

Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real world applications. The project RUBICON develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a proof-of-concept smart home scenario. The resulting system is able to provide useful services and pro-actively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.

The most common use of wireless sensor networks (WSNs) is to collect environmental data from a specific area, and to channel it to a central processing node for on-line or off-line analysis. The WSN technology, however, can be used for much more ambitious goals. We claim that merging the concepts and technology of WSN with the concepts and technology of distributed robotics and multi-agent systems can open new ways to design systems able to provide intelligent services in our homes and working places. We also claim that endowing these systems with learning capabilities can greatly increase their viability and acceptability, by simplifying design, customization and adaptation to changing user needs. To support these claims, we illustrate our architecture for an adaptive robotic ecology, named RUBICON, consisting of a network of sensors, effectors and mobile robots.

The fields of autonomous robotics and ambient intelligence are converging toward the vision of smart robotic environments, in which tasks are performed via the cooperation of many networked robotic devices. To enable this vision, we need a common communication and cooperation model that can be shared between robotic devices at different scales, ranging from standard mobile robots to tiny embedded devices. Unfortunately, today's robot middlewares are too heavy to run on tiny devices, and middlewares for embedded devices are too simple to support the cooperation models needed by an autonomous smart environment. In this paper, we propose a middleware model which allows the seamless integration of standard robots and simple off-the-shelf embedded devices. Our middleware is suitable for building truly ubiquitous robotics applications, in which devices of very different scales and capabilities can cooperate in a uniform way. We discuss the principles and implementation of our middleware, and show an experiment in which a mobile robot, a commercial mote, and a custom-built mote cooperate in a home service scenario.
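The cooperation model described above, in which devices of very different scales exchange information in a uniform way, can be illustrated with a toy shared data space. The sketch below is only illustrative, loosely in the spirit of tuple-space-style middleware; the class, keys, and device names are invented for the example and do not reflect the actual middleware API:

```python
from collections import defaultdict

class TupleSpace:
    """Toy shared data space: devices cooperate by reading/writing
    keyed tuples and subscribing to changes, regardless of whether
    they are full robots or tiny motes. Illustrative sketch only."""

    def __init__(self):
        self._tuples = {}
        self._subscribers = defaultdict(list)

    def write(self, owner, key, value):
        """Publish a value and notify all subscribers of (owner, key)."""
        self._tuples[(owner, key)] = value
        for callback in self._subscribers[(owner, key)]:
            callback(value)

    def read(self, owner, key):
        return self._tuples.get((owner, key))

    def subscribe(self, owner, key, callback):
        self._subscribers[(owner, key)].append(callback)

# A mote publishes a sensor reading; a robot reacts to the update.
space = TupleSpace()
events = []
space.subscribe("mote-1", "temperature", lambda v: events.append(v))
space.write("mote-1", "temperature", 21.5)
print(space.read("mote-1", "temperature"))  # 21.5
print(events)                               # [21.5]
```

The uniformity comes from the data space itself: a commercial mote only needs to write small tuples, while a mobile robot can subscribe to them and plan on top, without either side knowing the other's internals.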

The fields of autonomous robotics and ambient intelligence are converging toward the vision of smart robotic environments, or ubiquitous robotics, in which tasks are performed via the cooperation of many simple networked robotic devices. The concept of Ecology of Physically Embedded Intelligent Systems, or PEIS-Ecology, combines insights from these fields to provide a new approach to building intelligent robots in the service of people. To enable this vision, we need a common communication and cooperation model that supports dynamically assembled ad-hoc networks of robotic devices, together with a flexible introspection and configuration model that allows automatic (re)configuration and can be shared between robotic devices at different scales, ranging from standard mobile robots to tiny networked embedded devices. In this paper we discuss the development of a middleware suitable for ubiquitous robotics in general and PEIS-Ecologies in particular. Our middleware is suitable for building truly ubiquitous robotics applications, in which devices of very different scales and capabilities can cooperate in a uniform way. We discuss the principles and implementation of our middleware, and also point to experimental results that show the viability of this concept.

We present a new approach for odour detection and recognition based on a so-called PEIS-Ecology: a network of gas sensors and a mobile robot are integrated in an intelligent environment. The environment can provide information regarding the location of potential odour sources, which is then relayed to a mobile robot equipped with an electronic nose. The robot can then perform a more thorough analysis of the odour character. This is a novel approach which alleviates some of the challenges faced in mobile olfaction by single, stand-alone mobile robots. The environment also provides contextual information which can be used to constrain the learning of odours, which is shown to improve classification performance.
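The idea that contextual information constrains recognition can be sketched with a toy nearest-mean classifier whose candidate set is narrowed by location. All signatures, feature values, and location sets below are invented for illustration and are not the features or classes used in the actual system:

```python
# Hypothetical odour signatures (toy e-nose feature vectors) and a
# location context that narrows the plausible odour classes, mirroring
# the idea that the environment knows which sources are likely where.
SIGNATURES = {
    "coffee":  [0.8, 0.2, 0.1],
    "ethanol": [0.3, 0.9, 0.4],
    "solvent": [0.2, 0.8, 0.6],
}
CONTEXT = {
    "kitchen":  {"coffee", "ethanol"},
    "workshop": {"ethanol", "solvent"},
}

def classify(sample, location=None):
    """Nearest-mean classification, optionally restricted to the
    classes plausible at the given location."""
    candidates = CONTEXT.get(location, set(SIGNATURES))

    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(sample, SIGNATURES[name]))

    return min(candidates, key=dist)

sample = [0.22, 0.82, 0.56]           # close to both ethanol and solvent
print(classify(sample))               # unconstrained: "solvent"
print(classify(sample, "kitchen"))    # context rules out solvent: "ethanol"
```

Restricting the candidate set cannot hurt when the context is correct, and it resolves exactly the ambiguous cases where nearby classes would otherwise be confused.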

GPUs have recently emerged as a significantly more powerful computing platform, capable of computations several orders of magnitude faster than CPU-based approaches. However, they require significant changes in algorithmic design compared to traditional programming paradigms. In this chapter we introduce the reader to an overview of GPGPU development tools and the potential algorithmic pitfalls and bottlenecks when developing medical imaging algorithms for the GPU. We present a few general methodologies and building blocks for implementing fast image processing on GPUs. More specifically, they include: methods for performing fast image convolutions and filtering; line detection; and bandwidth and memory considerations when processing volumetric datasets. Finally we conclude with a discourse on numerical precision, as well as on mixing single- and double-precision floating-point code.
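One standard building block behind fast GPU convolution is separability: a 2D filter that factors into two 1D kernels is applied as a horizontal pass followed by a vertical pass, cutting the work per pixel from k² taps to 2k. A minimal CPU-side sketch of the decomposition (pure Python, no GPU API; the 3-tap smoothing kernel is just an example):

```python
def convolve_1d(row, kernel):
    """1D convolution with zero padding (same-size output)."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(row):
                acc += w * row[j]
        out.append(acc)
    return out

def separable_filter(image, kernel):
    """Apply a separable 2D filter as a horizontal then a vertical
    1D pass -- the decomposition that makes GPU convolution cheap."""
    rows = [convolve_1d(row, kernel) for row in image]          # pass 1
    cols = [convolve_1d(list(c), kernel) for c in zip(*rows)]   # pass 2
    return [list(r) for r in zip(*cols)]                        # transpose back

smooth = [0.25, 0.5, 0.25]   # separable smoothing kernel
image = [[0, 0, 0],
         [0, 4, 0],
         [0, 0, 0]]
result = separable_filter(image, smooth)
print(result)  # [[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]]
```

On a GPU the same two passes would typically be written as two kernel launches, with the intermediate image staged in fast on-chip memory; the arithmetic is identical to this sketch.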

Time-resolved three-dimensional (3D) echocardiography generates four-dimensional (3D + time) data sets that bring new possibilities in clinical practice. The image quality of four-dimensional (4D) echocardiography is, however, regarded as poorer than that of conventional echocardiography, where time-resolved 2D imaging is used. Advanced image filtering methods can be used to achieve image improvements, but at the cost of heavy data processing. The recent development of graphics processing units (GPUs) enables highly parallel general-purpose computations, which considerably reduces the computational time of advanced image filtering methods. In this study, multidimensional adaptive filtering of 4D echocardiography was performed using GPUs. Filtering was done using multiple kernels implemented in OpenCL (Open Computing Language) working on multiple subsets of the data. Our results show a substantial speed increase of up to 74 times, resulting in a total filtering time of less than 30 s on a common desktop computer. This implies that advanced adaptive image processing can be accomplished in conjunction with a clinical examination. Since the presented GPU processing method scales linearly with the number of processing elements, we expect it to continue scaling with the expected future increases in the number of processing elements. This should be contrasted with the increases in data set sizes expected in the near future, following further improvements in ultrasound probes and measuring devices. It is concluded that GPUs facilitate the use of demanding adaptive image filtering techniques that in turn enhance 4D echocardiographic data sets. The presented general methodology of implementing parallelism using GPUs is also applicable to other medical modalities that generate multidimensional data.
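The "multiple kernels working on multiple subsets" scheme can be mimicked on the CPU by dispatching each time frame of the 4D data set as an independent work item. The filter below is a trivial 3-tap average standing in for the study's adaptive filter, and the thread pool stands in for parallel OpenCL kernel launches; both are illustrative assumptions, not the actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def toy_filter(volume):
    """Stand-in for one kernel launch: smooth one 'volume'
    (here a flat list) with a 3-tap moving average."""
    n = len(volume)
    out = []
    for i in range(n):
        window = volume[max(0, i - 1):i + 2]  # clamp at the borders
        out.append(sum(window) / len(window))
    return out

def filter_4d(frames, workers=4):
    """Each time frame of the 4D data set is an independent subset,
    so the frames can be farmed out to parallel workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(toy_filter, frames))

frames = [[0.0, 4.0, 0.0],   # two "time frames" of a tiny data set
          [2.0, 2.0, 2.0]]
filtered = filter_4d(frames)
print(filtered)
```

Because the subsets are independent, throughput grows with the number of workers, which is the same property that lets the GPU method scale linearly with the number of processing elements.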