A common way to determine tissue acceptance of biomaterials is to perform histomorphometrical analysis on histologically stained sections from retrieved samples with surrounding tissue. The methods and techniques used are often time- and money-consuming in-house standards. We address light-microscopic investigation of bone tissue reactions on undecalcified cut and ground sections of threaded implants. In order to screen sections and generate results faster, the aim of this pilot project was to compare results generated with the in-house standard visual image analysis tool (i.e., quantifications and judgements made by the naked eye) with a custom-made automatic image analysis program. The histomorphometrical bone area measurements revealed no significant differences between the methods, but the bony contact results differed significantly. The raw results were in relative agreement, i.e., the values from the two methods were proportional to each other: low bony contact values in the visual method corresponded to low values in the automatic method. With similar-resolution images and further improvements to the automatic method, this difference should become insignificant. A great advantage of the new automatic image analysis method is that analysis time can be significantly reduced.

Fully automatic co-registration of functional to anatomical brain images using information intrinsic to the scans has been validated in a clinical setting for positron emission tomography (PET), but not for single-photon emission tomography (SPET).

We present a technique for obtaining a series of connected minimal bending trigonometric splines that intersect any number of predefined points in space. The minimal bending property is obtained by a least-squares minimization of the acceleration. Each curve segment between two consecutive points is a trigonometric Hermite spline built from the first four terms of a Fourier series. The proposed method can also be used with predefined tangents at the points; the tangent lengths are then optimized to yield a minimal bending curve. We also show how both the tangent direction and length can be optimized to give curves that are as smooth as possible, and how a closed loop of minimal bending curves can be obtained. Curves of this type are useful tools for 3D modelling and related applications.

This gem shows how a curve with minimal acceleration can be obtained using Hermite splines [Hearn04]. Since acceleration is highest in the bends, minimizing it yields a minimal bending curve. Such curves are useful for subdivision surfaces when the surface is required to be as smooth as possible; a similar approach for Bézier curves and subdivision can be found in [Overveld97]. They are also very useful for camera movements [Vlachos01], since both the position and the direction of the camera can be set along the curve. Moreover, we show how several such curves can be connected in order to achieve continuity between the curve segments.
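To illustrate the tangent-length optimization, the sketch below uses a cubic Hermite basis as a stand-in for the trigonometric Hermite basis described above; the function names and the simple grid search over a common tangent length are my own illustrative assumptions, not the paper's method.

```python
import math

def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite point at parameter t (illustrative stand-in for the
    trigonometric Hermite basis in the text)."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return tuple(h00*a + h10*c + h01*b + h11*d
                 for a, b, c, d in zip(p0, p1, m0, m1))

def bending_energy(p0, p1, m0, m1, n=100):
    """Approximate the integral of |P''(t)|^2 by midpoint sampling of the
    second derivative -- the quantity minimized for a minimal bending curve."""
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        # second derivatives of the cubic Hermite basis functions
        g00, g10 = 12*t - 6, 6*t - 4
        g01, g11 = -12*t + 6, 6*t - 2
        acc = [g00*a + g10*c + g01*b + g11*d
               for a, b, c, d in zip(p0, p1, m0, m1)]
        total += sum(v*v for v in acc) / n
    return total

def best_tangent_length(p0, p1, d0, d1, lengths):
    """Fix the tangent directions d0, d1 and grid-search a common length
    that minimizes the bending energy of the segment."""
    scaled = lambda d, s: tuple(s*v for v in d)
    return min(lengths,
               key=lambda s: bending_energy(p0, p1, scaled(d0, s), scaled(d1, s)))
```

For a straight configuration (both tangents along the chord), the search correctly selects the length that makes the segment a straight line with zero bending energy.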

Our world is three-dimensional. With our eyes we mainly see the surfaces of 3D objects, and in conventional imaging we see projections of parts of the 3D world down to 2D. Over the last decades, however, new imaging techniques such as tomography and confocal microscopy have evolved that make true 3D volume images available. These images can reveal information about the inner properties and conditions of objects, e.g. our bodies, that can be of immense value to science and medicine. But to really explore the information in these images we need computer support.

At the Centre for Image Analysis in Uppsala we are developing methods for the analysis and visualisation of volume images. A nice aspect of image processing methods is that in most cases they are independent of the scale of the images. In this presentation we will give examples of how images of widely different scales can be analysed and visualised.

- At the highest resolution we have images of protein molecules created by cryo-electron tomography with voxels of a few nanometers.

- Using confocal microscopy we can also image single molecules, but then only as bright spots that need to be localized at micrometre scale within the cells.

- The cells build up tissue and using conventional pathology stains or micro CT we can image the tissue in 2D and 3D. We are using such images to develop methods for studying tissue integration of implants.

- Finally conventional X-ray tomography and magnetic resonance tomography provide images on the organ level with voxels in the millimetre range. We are developing methods for liver segmentation in CT data and visualising the contrast uptake over time in MR angiography images of breasts.

The visual interpretation of images is at the core of most medical diagnostic procedures, and the final decision for many diseases, including cancer, is based on microscopic examination of cells and tissues. Through screening of cell samples, the incidence and mortality of cervical cancer have been reduced significantly. The visual interpretation is, however, tedious and in many cases error-prone. Therefore, many attempts have been made to supplement or replace human visual inspection with computer analysis and to automate some of the more tedious visual screening tasks. Computers and computer networks have also been used to manage, store, transmit and display images of cells and tissues, making it possible to visually analyze cells from remote locations. In this presentation these developments are traced from their very beginning, through the present situation and into the future.

Almost all cancers are diagnosed through visual examination of microscopic tissue samples. Visual screening of cell samples, so-called Pap smears, has drastically reduced the incidence of cervical cancer in countries that have implemented population-wide screening programs. But the visual examination is tedious, subjective and expensive. There has therefore been much research aimed at computer-assisted or automated cell image analysis systems for cancer detection and diagnosis. Progress has been made, but most of cytology and pathology is still done visually. In this presentation I will discuss some of the major issues involved, examine some of the proposed solutions and comment on the state of the art.

Biomedical cell image analysis is one of the main application fields of computerized image analysis. This paper outlines the field and the different analysis steps related to it. Relative advantages of different approaches to the crucial step of image segmentation are discussed. Cell image segmentation can be seen as a modeling problem where different approaches are more or less explicitly based on cell models. For example, thresholding methods can be seen as being based on a model stating that cells have an intensity that is different from the surroundings. More robust segmentation can be obtained if a combination of features, such as intensity, edge gradients, and cellular shape, is used. The seeded watershed transform is proposed as the most useful tool for incorporating such features into the cell model. These concepts are illustrated by three real-world problems.
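The seeded watershed idea described above can be sketched as a priority flood: seed pixels are pushed onto a priority queue and regions grow outward in order of increasing intensity, so region boundaries settle on the brightest ridges. This is a minimal pure-Python illustration on a toy image, not the implementation used in the paper; real applications would use an optimized library routine.

```python
import heapq

def seeded_watershed(image, seeds):
    """Seeded watershed by priority flooding.
    image: 2D list of intensities (lower values are flooded first).
    seeds: 2D list, 0 = unlabeled, >0 = seed label.
    Returns a label image where every pixel has inherited a seed label."""
    h, w = len(image), len(image[0])
    labels = [row[:] for row in seeds]
    heap = []
    for y in range(h):
        for x in range(w):
            if seeds[y][x]:
                heapq.heappush(heap, (image[y][x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)
        lab = labels[y][x]
        # claim unlabeled 4-neighbours; lowest-intensity pixels are claimed first
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = lab
                heapq.heappush(heap, (image[ny][nx], ny, nx))
    return labels
```

On a 1-D intensity profile with a ridge in the middle and one seed at each end, the two regions meet exactly at the ridge.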

For a PET agent to be successful as a biomarker in early clinical trials of new anticancer agents, some conditions need to be fulfilled: the selected tracer should show a response that is related to the antitumoral effects, the quantitative value of this response should be interpretable in terms of the antitumoral action, and the timing of the PET scan should be optimized to the action of the drug. These conditions are not necessarily known at the start of a drug-development program and need to be explored. We proposed a translational imaging activity in which experiments in spheroids and later in xenografts are coupled to modeling of growth inhibition and of the related changes in the kinetics of PET tracers and other biomarkers. In addition, we demonstrated how this information can be used for planning clinical trials. Methods: The first part of this concept is illustrated in a spheroid model with BT474 breast cancer cells treated with the heat shock protein 90 (Hsp90) inhibitor NVP-AUY922. The growth-inhibitory effect after a pulse treatment with the drug was measured with digital image analysis to determine effects on volume with high accuracy. The growth-inhibitory effect was described mathematically by a combined E-max and time-course model fitted to the data. The model was then used to simulate a once-per-week treatment. In these experiments, the uptake of the PET tracers F-18-FDG and 3'-deoxy-3'-F-18-fluorothymidine (F-18-FLT) was determined at different doses and time points. Results: A drug exposure of 2 h followed by washout of the drug from the culture medium generated growth inhibition that was maximal at the earliest time point of 1 d and decreased exponentially over 10-12 d.
Conclusion: The study suggests a prolonged action of the Hsp90 inhibitor that supports a once-per-week schedule. F-18-FLT is a suitable tracer for the monitoring of effect, and the F-18-FLT PET study might be performed within 3 d after dosing.
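A combined E-max and time-course model of the kind described above can be sketched as follows. Every functional form and parameter value here (Emax, ED50, the growth and effect-decay rates) is an illustrative assumption for demonstration, not the fitted model from the study.

```python
import math

def grow(volume0, dose, days, emax=0.9, ed50=1.0, kg=0.3, ke=0.25, dt=0.1):
    """Illustrative pulse-treatment model (assumed form): a dose produces an
    inhibitory effect E(t) = Emax*D/(D+ED50) * exp(-ke*t) that decays after
    washout and scales the exponential growth rate kg. Euler integration."""
    v, t = volume0, 0.0
    while t < days:
        effect = emax * dose / (dose + ed50) * math.exp(-ke * t)
        v += kg * v * (1.0 - effect) * dt  # dV/dt = kg*V*(1-E(t))
        t += dt
    return v
```

With these assumed parameters, a treated spheroid grows less than an untreated one over the same period, and the effect fades with time after the pulse, consistent with the exponentially decreasing inhibition reported above.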

An autostereoscopic display based on a Holographic Optical Element (HOE) presents new opportunities for faithful 3D display, but also potential new problems concerning the accuracy of 3D objects, interactivity and user perception. In this evaluation, the first of its kind for this type of display, I have explored and tested methods and tools for evaluating these potential problems. I have found that the visual quality is comparable to that of more common display types, but with a significant visual delay due to the parallel rendering of graphics and the projectors' significant input lag. From this I have concluded that the display system is not yet ready for its intended purpose, cranio-maxillofacial surgery planning: we need projectors with less input lag and preferably better optics, and the software needs to be optimized for multi-monitor rendering.

This paper describes a new evolutionary algorithm for image segmentation. The evolution involves the colonization of a two-dimensional world by a number of populations. The individuals, belonging to different populations, compete to occupy all the available space and adapt to the local environmental characteristics of the world. We present experiments with synthetic images, where we show the efficiency of the proposed method and compare it to other segmentation algorithms, as well as an application to medical images. The reported results indicate that the segmentation of noisy images is effectively improved. Moreover, the proposed method can be applied to a wide variety of images.

In this paper, we describe two methods for computerized analysis of cryo-electron tomography reconstructions of biomolecules. Both methods aim at quantifying the degree of structural flexibility of macromolecules and eventually resolving the inner dynamics through automated protocols. The first method performs a Brownian dynamics evolution of a simplified molecular model in a fictitious force field generated by the tomograms. This procedure enables us to dock the simplified model into the experimental profiles. The second method uses a fuzzy framework to delineate the subparts of the proteins and subsequently determine their interdomain relations. The two methods are discussed and their complementarities highlighted with reference to the case of the immunoglobulin antibody. Both artificial maps, constructed from immunoglobulin G entries in the Protein Data Bank, and real tomograms are analyzed. Robustness issues and agreement with previously reported measurements are discussed.

Starting from a binary digital image, a multi-valued pyramid is built and suitably treated, so that shape and topology properties of the pattern are preserved satisfactorily at all resolution levels. The multi-valued pyramid can then be used as input data.

Digital distance transforms have been used in image processing and analysis since the 1960s. Distance transforms are excellent tools for all applications regarding shape. They are, in fact, extensively used, especially in industrial and medical applications. At the same time, from the mid 1980s until today, there has been a rich literature that investigates distance transforms theoretically, constructs new ones, and improves computation algorithms. Despite this, distance transforms have not really been incorporated into the general image analysis toolbox.

They are usually not mentioned at all -- or the oldest ones (e.g., City block and Chessboard) are mentioned very briefly -- in the basic books on image analysis used in education. One reason for the under-use of distance transforms could be that the oldest distance transforms are very rotation dependent, giving quite different results depending on the orientation of an object. The Euclidean distance transform is rotation independent up to digitisation effects, but often leads to complex algorithms where it is used. The compromise is the integer weighted distance transforms, which combine the simplicity of the old distance transforms with reasonable rotation independence. Here, a large number of distance transforms will be described, with some of their properties and the simplest computation algorithms.
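As a concrete example, the classic two-pass chamfer algorithm computes an integer weighted distance transform using local steps of weight 3 (edge neighbours) and 4 (diagonal neighbours), a common compromise between the simple old transforms and the Euclidean one. The sketch below is a straightforward pure-Python version on a small binary image.

```python
def chamfer_distance(binary, a=3, b=4):
    """Two-pass chamfer (3,4) weighted distance transform.
    binary: 2D list, 1 = object pixel, 0 = background.
    Returns, for each object pixel, the weighted distance to the
    nearest background pixel (background pixels get 0)."""
    INF = 10**9
    h, w = len(binary), len(binary[0])
    d = [[0 if binary[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # forward pass: propagate from upper-left neighbours
    for y in range(h):
        for x in range(w):
            if d[y][x]:
                for dy, dx, wgt in ((-1, -1, b), (-1, 0, a), (-1, 1, b), (0, -1, a)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        d[y][x] = min(d[y][x], d[ny][nx] + wgt)
    # backward pass: propagate from lower-right neighbours
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if d[y][x]:
                for dy, dx, wgt in ((1, 1, b), (1, 0, a), (1, -1, b), (0, 1, a)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        d[y][x] = min(d[y][x], d[ny][nx] + wgt)
    return d
```

Each horizontal or vertical step adds 3 and each diagonal step adds 4, so dividing the result by 3 gives a reasonable approximation of Euclidean distance at integer cost.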