The design of a panoramic viewing system and its application to cavity inspection and measurement are discussed, along with the general properties of a special panoramic annular lens (PAL). Various examples are described, showing how the PAL can be used for simple inspection or for precision contouring of the interior walls of cavities using techniques such as moiré, holographic interferometry, and electronic speckle pattern interferometry (ESPI).

This is a description of the SAAB Missiles image processing laboratory. At present, four persons can work simultaneously with advanced image processing, and the system is easy to expand. Since it consists of 21 image processing boards, many functions can be realized in real time. It is the combination of extremely fast image processing and multi-user operation that makes this system unique.

The interactive digital image processing package (IDIPP) is a general-purpose image processing package built upon two requirements: user friendliness and easy integration of new processing modules. To satisfy these requirements, a special graphical user interface (GUI), composed of elements easily manipulated through a set of high-level tools, was designed. The availability of these tools allows the addition of new processing modules with very little effort. Around this interface, several image processing modules have been developed. This paper describes the user interface structure and the image processing modules developed.

A highly functional and versatile camera platform for a multi-purpose robotic vehicle utilizing a novel modular approach is presented. The platform uses only three CCD cameras, four computer-controlled rotation stages, and three modular optical imaging systems to provide both a stereo vision mode at any desired viewing angle and a panoramic vision mode with a 180-degree non-overlapping field of view. The panoramic view from the imaging system of each module is collected by one of three mirrors and transmitted toward a corresponding CCD camera. Stereo vision mode is accessed by aligning any two of the three modules in parallel using their respective rotation stages. When the entire assembly is rotated by another rotation stage, any desired viewing angle is obtained.

Digital image processing, with its ever-growing application base, is placing ever higher demands on processing power, transmission bandwidth, and large online storage capacity. The recent availability of algorithm-specific, chip-level devices certainly increases raw throughput, but does not change the eternal compromise between high-performance, application-specific systems and lower-performance, open, versatile solutions. If a wide range of applications is to be handled by a single machine, it must be fully programmable. As even the fastest mono-processor today still falls well short of the required performance, the only viable solution seems to lie in massive parallelization of easy-to-program processors. This paper presents a fully programmable, complete image processing machine designed to handle 2-D and 3-D applications. Two important aspects of this machine are covered: its internal data transfer and processor architecture, and its open, nonspecific modality interface.

This paper describes the current development of a novel 4-D scanner, which can fully sample three-dimensional motion at 12.5 Hz. The scanner, a variant on structured light, produces a series of position lines every 40 ms, from each video frame. The reconstruction of the range information uses a color-coded pattern, which is analyzed by a flexible and robust pattern-matching algorithm.

In this research, we first show that single-image shape from shading (SFS) algorithms share an inherent limitation in the accuracy of reconstructed surfaces due to a property of the reflectance map: surface orientations can be accurately recovered if they lie along the gradient direction of the reflectance map function, but not if they lie along the tangential direction. We then consider two methods that incorporate stereo information with shading to improve performance. One uses multiple images taken under different lighting conditions, known as photometric stereo; the other incorporates height information obtained from images taken from different viewing angles, known as geometric stereo. With photometric stereo, we compensate for the weakness of each reflectance map by combining several reflectance maps in a proper way in the gradient space, and hence improve the accuracy of the results. With geometric stereo, absolute heights at sparse feature points are obtained and used as constraints on the resulting surface so that the ambiguity can be resolved. Simulation results for several test images are given to show the performance of our new robust algorithms.
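As a rough illustration of the photometric-stereo ingredient described above (not the authors' implementation), the Lambertian model I_k = albedo * (n . l_k) can be inverted per pixel by least squares given three or more known light directions. The function name and synthetic data below are our own:

```python
import numpy as np

def photometric_stereo(images, lights):
    """images: (K, H, W) intensities; lights: (K, 3) unit light directions.
    Solves lights @ g = I per pixel, where g = albedo * normal.
    Returns unit normals (H, W, 3) and albedo (H, W)."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                          # (K, N)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)     # (3, N)
    albedo = np.linalg.norm(G, axis=0)
    n = G / np.maximum(albedo, 1e-12)                  # unit normals
    return n.T.reshape(H, W, 3), albedo.reshape(H, W)

# Synthetic check: a flat Lambertian patch tilted toward +x, albedo 1.
n_true = np.array([1.0, 0.0, 2.0]); n_true /= np.linalg.norm(n_true)
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])                        # three unit lightings
imgs = (L @ n_true).reshape(3, 1, 1) * np.ones((3, 4, 4))
n_est, rho = photometric_stereo(imgs, L)
```

With three non-coplanar lightings the 3x3 system is invertible, so the normals and albedo are recovered exactly in this noise-free sketch; shadowed pixels (negative dot products) would need masking in practice.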

A great deal of research has been carried out by biologists, pathologists, and biomedical physicists on the metamorphosis of various types of cells showing symptoms of cancerous disease. The color and texture of a cell and their interrelationships are important features for analyzing cells and have proven successful in differentiating abnormal cells from normal ones. However, to categorize an abnormal (or suspicious) cell as cancerous or non-cancerous, more information is needed than can be obtained from observation of microscopic images of the smear alone, and therefore further steps, such as biopsy, have to be taken. In this paper, a color image processing technique is introduced as a means to enhance the visualization and diagnostic capability of a human expert, and we hope that it will prove an effective tool for distinguishing non-cancerous cells from cancerous ones even when they look alike under the microscope. A real microscopic image is first resolved into several spectral-component images. More useful features are extracted, respectively, from the 0.4-0.5 micrometer, 0.5-0.6 micrometer, and 0.6-0.7 micrometer spectral bands. Combining these spectral-band images after separate processing yields a color image that sharpens the distinguishing features between cancerous and non-cancerous cells, even when they originally look alike both morphologically and chromatically under the microscope. Some recent promising results obtained with real color image processing in our laboratory are given. To improve the resolution, 512 x 512 pixel images, rather than 256 x 256, were employed for processing.

Image processing is progressively becoming a science, but it remains closer to techniques than to theory. Hence, one needs to define a common frame in which to pose sufficiently general problems and to build vision systems for various purposes: a systematic method for tackling applications in computer vision. The tentative method presented here stems from three main principles: (1) taking the operational framework into account to obtain constraints, (2) introducing specific knowledge related to the application as early as possible, and (3) extracting local and global image properties at both the segmentation and matching steps. As shown in the paper, all three principles call for explicit expertise on classical image processing techniques: their application bounds and limits. They lead to less classical image procedures, to be specially developed, as in the present case: (1) cooperative segmentation and (2) use of planarity constraints. Two applications have been selected that differ in both their operational framework and the image processing problems they pose: (1) target tracking in IR imagery and (2) 3-D scene reconstruction of classical mobile robot environments, i.e., indoor or outdoor urban scenes. Both systems have actually been designed and built. It is impossible to prove the generality of a method on the basis of only two applications, but these have been selected to be generic enough, and sufficiently different from each other, that our systematic method of application system design can likely be used with success in many other image processing applications. After outlining the systematic method through the three basic principles, each principle is illustrated by examples drawn from the two applications mentioned above.

Image statistics of some computed images and the statistical history of the corresponding quadtree are analyzed. A new multiple-resolution segmentation (MRS) approach using the quadtree is presented for these computed images. The results obtained using this approach demonstrate the correctness of the derived statistical properties and the efficacy of the MRS scheme.

The objective of this paper is to describe an approach to separating text from a mixed text/graphic document and describing the graphic as overlapping meaningful shapes. Accuracy in reconstructing the mixed text/graphic document from the description file is also reported. This paper is a continuation of our previous work, which dealt mainly with engineering drawings containing polygonal shapes; here we focus on documents consisting of arbitrarily curved shape components with text. Algorithms are designed to automate the generation of loops with minimum redundancy from the bit map of the image, and to break interwoven complex loops into simpler, interpretable shapes of curved segments. Finally, a succinct description file can be established for the whole image, achieving drastic savings in memory when archiving the document images. The effectiveness of the algorithms has been evaluated through experiments on a large number of mixed text/graphic documents. Results show that the algorithms developed are computationally efficient. Once the text is separated from the graphic, the graphic image is decomposed into its meaningful component parts; the data reduction achieved through this succinct description is extremely high. For silhouettes of curved shapes, an approach called concatenated-arc representation is developed for their description. With this concatenated-arc approach, far fewer arc segments are needed than with line-segment approximation. Shapes reconstructed from these description files match the originals closely, even for very complex graphics.

We describe in this paper several geometry problems in photogrammetry and machine vision; the geometric methods of projective invariants which we apply to these problems; and some new results and current areas of investigation involving geometric invariants for object structures and non-pinhole-camera imaging systems.

Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.

Model-based target recognition is an active area of research. However, little attention has been given to the problem of target model generation for a model-based automatic target recognition (ATR) system. This paper describes novel algorithms which automatically generate a 3-D object-oriented spatial database that is used to represent and manipulate 3-D target models.

Moment invariants have been widely used as features for shape recognition. Traditionally, the moments are computed using all the information of the shape boundary together with the interior region. This paper presents theoretically improved moments computed from the shape boundary alone. Invariants of the improved moments under translation, rotation, and scaling are derived. The computation of the improved moment invariants, based on a chain-code representation of the shape boundary, can be done in real time. Experiments discriminating country maps, industrial tools, and printed numerals using the improved moment invariants as features, assessed via graphical plots, suggest that the improved moment invariants are good shape features close to human visual processing.
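For orientation, the classical region-based moment invariants that the boundary-only moments above improve upon can be sketched in a few lines; this is the standard Hu construction (first two invariants only), not the paper's boundary variant, and the rectangle test image is our own:

```python
import numpy as np

def hu_invariants(img):
    """First two Hu moment invariants of a binary image
    (standard region-based form; the paper's boundary-only
    improved moments differ)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                      # central moments
        return ((x - xc) ** p * (y - yc) ** q * img).sum()
    def eta(p, q):                     # scale-normalized moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

shape = np.zeros((32, 32)); shape[8:20, 10:26] = 1.0   # a rectangle
p1, p2 = hu_invariants(shape)
q1, q2 = hu_invariants(np.rot90(shape))                # rotated copy
```

A 90-degree rotation swaps mu20 and mu02 and negates mu11, so both invariants are preserved exactly, illustrating the rotation invariance the abstract refers to.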

Registration of synthetic aperture radar (SAR) images is a non-trivial task because of the significant speckle noise associated with them. We have performed the registration using a 2-D cepstrum technique, which has been verified to be more noise-tolerant and computationally more efficient than conventional correlation methods. The cepstral peaks accurately revealed linear translations between SAR image pairs. Further work is in progress to isolate the registration peaks from spurious peaks in a more reliable way than the present heuristic approach. Removal of speckle noise from the SAR images is also addressed. Spatial averaging is a standard technique used on SAR images to reduce speckle; however, it causes a loss of resolution. We have employed mathematical morphology techniques to remove more speckle than spatial averaging can, with little loss of resolution. Long, one-dimensional structuring elements in different orientations are used to filter speckle while maintaining the sharpness of region boundaries. Afterward, a small, two-dimensional structuring element is used to remove thin line elements. The targets, appearing as small bright spots, are separated from the original images by a thresholding operation and superimposed on the filtered images. The computational time required on a sequential machine is comparable to that of spatial averaging. In addition, like other morphological filters, this technique could be implemented on a real-time parallel architecture. The improvement in resolution and noise reduction over spatial averaging is demonstrated for images acquired at different wavelengths.
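The 2-D power-cepstrum idea can be sketched as follows: the spectrum of the sum of an image and a translated copy carries a cosine ripple whose log has a cepstral peak at the translation lag. This is a bare sketch on synthetic data (our own function name and test images), without the windowing, despeckling, and spurious-peak rejection the paper discusses:

```python
import numpy as np

def cepstral_shift(a, b):
    """Estimate the translation between images a and b from the 2-D
    power cepstrum of their sum; peaks appear at +/- the shift."""
    g = a + b
    power = np.abs(np.fft.fft2(g)) ** 2
    ceps = np.real(np.fft.ifft2(np.log(power + 1e-12)))
    ceps[0, 0] = 0.0                       # suppress the zero-lag term
    return np.unravel_index(np.argmax(ceps), ceps.shape)

rng = np.random.default_rng(0)
f = rng.random((64, 64))
g2 = np.roll(f, (5, 9), axis=(0, 1)) + 0.05 * rng.random((64, 64))
peak = cepstral_shift(f, g2)               # near (5, 9) or its mirror lag
```

Because the cosine ripple is even, the peak can appear at the shift or at its negative (mod the image size); a real system must disambiguate the two and, as the abstract notes, separate them from spurious peaks.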

The present paper uses a fractal model for differentiating and quantifying image texture. Applying the fractal model to texture classification involves evaluating the fractal dimension of the images concerned. A parametric representation of image texture in terms of fractal dimension is achieved by extending fractional Brownian motion to the discrete case and using a maximum likelihood estimator (MLE) to estimate the fractal parameter H. The algorithm developed for this model is applied successfully to texture classification of synthetic polymeric membranes. Such texture classification provides a quantitative descriptor of polymeric membrane morphology for establishing a correlation between the morphology and the chemical transport phenomena in membranes for various industrial applications.
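To make the role of H concrete, here is a simple variance-scaling (variogram) estimator of the fractional-Brownian-motion parameter, exploiting E|x(t+d) - x(t)|^2 ~ d^(2H); this is a cruder alternative to the paper's maximum-likelihood estimator, shown on a 1-D synthetic signal of our own:

```python
import numpy as np

def hurst_variogram(x, max_lag=16):
    """Estimate H from the log-log slope of increment variances:
    E|x(t+d) - x(t)|^2 ~ d^(2H)  (variogram method, not the MLE)."""
    lags = np.arange(1, max_lag + 1)
    v = np.array([np.mean((x[d:] - x[:-d]) ** 2) for d in lags])
    slope = np.polyfit(np.log(lags), np.log(v), 1)[0]
    return slope / 2.0

rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(8192))   # ordinary Brownian motion: H = 0.5
H = hurst_variogram(bm)
```

For a 1-D fractal profile the fractal dimension follows as D = 2 - H (D = 3 - H for an image surface), which is the texture descriptor the abstract refers to.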

The problem of image restoration has an extensive literature and can be expressed as the solution of an integral equation of the first kind. Conventional linear restoration methods reconstruct spatial frequencies below the diffraction-limited cutoff of the optical aperture. Nonlinear methods, such as maximum entropy, have the potential to reconstruct frequencies above the diffraction limit. We refer to the reconstruction of information above the diffraction limit as super-resolution. Specific algorithms developed for super-resolution are the iterative algorithms of Gerchberg and Papoulis, the maximum likelihood method of Holmes, and the Poisson maximum-a-posteriori algorithm of Hunt. The experimental results published with these algorithms show the potential of super-resolution, but are not as satisfactory as an analytical treatment. In this paper we present a model to quantify the capability of super-resolution, and discuss the model in the context of the well-known CLEAN algorithm.
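The Gerchberg-Papoulis iteration mentioned above alternates between enforcing the known (finite) object support in the signal domain and re-imposing the measured in-band spectrum in the frequency domain, thereby extrapolating spectrum beyond the diffraction limit. A minimal 1-D sketch on synthetic data of our own:

```python
import numpy as np

def gerchberg_papoulis(Y, band, support, iters=300):
    """Gerchberg-Papoulis super-resolution sketch (1-D): extrapolate
    the spectrum of a finite-support signal beyond the measured band
    by alternating projections."""
    X = Y.copy()
    for _ in range(iters):
        x = np.fft.ifft(X) * support           # enforce known support
        X = np.fft.fft(x)
        X[band] = Y[band]                      # re-impose measured band
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(2)
sig = np.zeros(64); sig[:8] = rng.random(8)    # object confined to 8 samples
band = np.abs(np.fft.fftfreq(64)) < 0.125      # "diffraction-limited" band
Y = np.fft.fft(sig) * band                     # measured low-pass spectrum
support = np.zeros(64); support[:8] = 1.0
rec = gerchberg_papoulis(Y, band, support)
err_before = np.linalg.norm(np.real(np.fft.ifft(Y)) - sig)
err_after = np.linalg.norm(rec - sig)
```

Both constraint sets contain the true signal, so each projection can only shrink the error (Fejér monotonicity); convergence speed, however, degrades rapidly with the support-bandwidth product, which is one face of the limits the paper's model quantifies.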

This work addresses the problem of restoring blurred and noise-corrupted images when typical deterministic methods (least squares, maximum entropy, etc.) are not known to be optimal. The proposed approach is to adapt, based on observed image data only, the optimization criterion used in the restoration to the one most suited to the statistical properties of the observed image. This is done without prior knowledge of, or restrictive assumptions about, the data. Maximum likelihood (ML) image restoration is considered where the noise distribution is not known a priori, but is modeled by a general family of parametric distributions whose widely varying shapes are controlled by a small set of parameters. It is shown that the generalized p-Gaussian (gpG) distribution family can match a surprisingly wide range of typical noise distributions (uniform, Gaussian, exponential, Cauchy, etc.) by varying a single shape parameter p. Restoration is accomplished by adapting the noise model through adjusting p as part of the estimation problem. Once p is found, the ML estimate is simply the associated lp norm minimization solution. The optimization criterion is thus adapted to suit the observation. Examples of improved reconstruction using this method, as compared with least squares and maximum entropy, are presented. The extension of model-adaptive restoration to maximum a posteriori (MAP) estimation is discussed. The potential applicability of another, more general parametric distribution, the generalized beta of the second kind (GB2), is discussed.
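Once p is chosen, the ML estimate reduces to an lp norm minimization, which is commonly solved by iteratively reweighted least squares (IRLS). The sketch below is a standard IRLS location-estimation example of our own, not the paper's restoration code; it shows how the criterion shifts from the mean (p = 2) toward the median (p = 1) for heavy-tailed noise:

```python
import numpy as np

def irls_lp(A, b, p, iters=100, eps=1e-8):
    """Minimize ||Ax - b||_p by iteratively reweighted least squares:
    each step solves a weighted normal equation with lp weights."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        w = (np.abs(r) + eps) ** (p - 2.0)     # lp residual weights
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x

# Location estimation with one gross outlier.
A = np.ones((5, 1))
b = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
x2 = irls_lp(A, b, 2.0)                        # l2 fit: the mean (22)
x1 = irls_lp(A, b, 1.0)                        # l1 fit: near the median (3)
```

Adapting p from the data, as the abstract describes, amounts to choosing which of these criteria the restoration actually optimizes.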

Cellular arrays are very important and well suited to image processing. Local neighborhood operations can be implemented in a cellular array to increase their speed. Recently, morphological filters and stack filters have received much attention, and they are very appropriate for implementation in a cellular array. In this paper, we use a very powerful method called threshold decomposition to implement these filters in the cellular array. Threshold decomposition transforms gray-level filtering into binary filtering. The filters include median, order-statistic (OS), and morphological filters. Filtering an M-level image by threshold decomposition requires three stages: (1) thresholding, (2) binary filtering, and (3) reconstruction. To increase the performance of the processing elements in the array, a fast reconstructor with (2M - 2 - log2 M) half adders was developed.
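The three stages above can be sketched for a window-3 median of a 1-D M-level signal: threshold into M-1 binary signals, apply the binary median (a majority vote) to each, and reconstruct by summing. The example signal and function name are our own:

```python
import numpy as np

def median3_by_threshold_decomposition(x, M):
    """Window-3 median of an M-level 1-D signal via threshold
    decomposition: (1) threshold at t = 1..M-1, (2) binary median
    each level (majority of 3), (3) reconstruct by summing."""
    out = np.zeros_like(x)
    for t in range(1, M):
        b = np.pad((x >= t).astype(int), 1, mode='edge')
        out += ((b[:-2] + b[1:-1] + b[2:]) >= 2).astype(int)
    return out

x = np.array([3, 0, 3, 1, 7, 2, 2, 5])
y = median3_by_threshold_decomposition(x, M=8)
```

The stacking property of the median guarantees the summed binary outputs equal the direct gray-level median; in a cellular array, each binary level is a trivially parallel majority vote, which is exactly why the reconstruction stage dominates the hardware cost.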

This paper presents a new method for model-based object recognition that uses a single, comprehensive analytic object model representing the entirety of a suite of gray-scale views of the object. In this way, object orientation and identity can be established directly from arbitrary views, even though these views are not related by any geometric image transformation. The approach is also applicable to other real and complex sensed data, such as radar and thermal signatures. The unprocessed object model comprises a set of basis images with complex exponential harmonic terms as coefficients. A new model is then formed from the reciprocal set of the object basis set. The projection of an acquired image onto the reciprocal basis thus produces samples of a complex exponential, the phase of which reveals the pose parameters. Estimating this phase for several degrees of freedom corresponds to the plane-wave direction of arrival (DOA) problem; thus the pose parameters can be found using DOA solution techniques. Results are given that illustrate the performance of a simplified, preliminary implementation of this method using real-world images.

This paper presents an overview of the methods of nonlinear dynamics being applied to signal processing, together with a new nonlinear coding method based on strange attractors. In particular, these nonlinear dynamic methods are being used for noise reduction; discrimination of signals from a noisy background, for example when nonlinear media are involved; signal identification, classification, and prediction; filtering out multipath propagation; speech modeling; nonlinear codes; medical applications; and the monitoring and control of vibrations.

We describe the implementation of the Ott-Grebogi-Yorke (OGY) method of controlling chaos in a physical system. This method requires only small time-dependent perturbations of one system parameter and does not demand the use of model equations to describe the dynamics of the system. One advantage of the OGY method is that, between these perturbations, the system remains on chaotic trajectories. One can thus use the sensitivity of the chaotic system to switch between different orbits at will.
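The flavor of OGY control can be sketched on the logistic map x -> r x (1 - x): wait until the chaotic orbit wanders near the unstable fixed point, then apply a small parameter perturbation dr each step that cancels the linearized deviation. This toy uses the map's known derivatives; the experimental method the abstract describes estimates the local dynamics from measured data instead:

```python
import numpy as np

def ogy_logistic(r=3.9, eps=0.005, steps=5000, seed=0):
    """OGY-style stabilization of the logistic map's unstable fixed
    point by tiny perturbations dr of the parameter r (toy sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.random()
    x_star = 1.0 - 1.0 / r                 # unstable fixed point
    fx = 2.0 - r                           # df/dx at x_star
    fr = x_star * (1.0 - x_star)           # df/dr at x_star
    xs = np.empty(steps)
    for n in range(steps):
        dr = 0.0
        if abs(x - x_star) < eps:          # control only near the orbit
            dr = np.clip(-fx * (x - x_star) / fr, -0.05, 0.05)
        x = (r + dr) * x * (1.0 - x)
        xs[n] = x
    return xs, x_star

xs, x_star = ogy_logistic()
```

Ergodicity brings the free-running chaotic orbit into the small control window, after which the perturbations lock it onto the fixed point; switching the targeted orbit, as the abstract notes, needs only a different control rule, not a different system.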

We present a method for noise reduction that does not depend on detailed prior knowledge of system dynamics. The method has performed reasonably well for known maps and flows. Also, we present an empirically based technique to estimate the initial signal-to-noise ratio for time series whose dynamical origin may be unknown.

A newly developed picture quality scale (PQS) provides a numerical measure of image quality for monochrome images that correlates well with the mean opinion score. In this paper, we report results on the evaluation and comparison of image coding methods using this objective quality measure. Emphasis is given to evaluating the quality of the JPEG coding standard. We also review and discuss the important new areas of research that an objective distortion measure now makes possible.

The objective of this article is to present a discussion of the future of image data compression over the next two decades. It is virtually impossible to predict with any certainty the breakthroughs in theory and development, the milestones in the advancement of technology, and the success of upcoming commercial products in the marketplace, which will be the main factors in setting the future stage for image coding. What we propose to do, instead, is look back at the progress in image coding over the last two decades and assess the state of the art in image coding today. Then, by observing the trends in the development of theory, software, and hardware, coupled with the future needs for the use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, we predict the future state of image coding. What seems certain today is the growing need for bandwidth compression. Television uses a technology that is half a century old and is ready to be replaced by high-definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activity in the development of theory, software, special-purpose chips, and hardware for image bandwidth compression over the next two decades. The following sections summarize the future trends in these areas.

The Q-coder, reported in the literature, is a renorm-driven binary adaptive arithmetic coder. A similar renorm-driven coder, the QM coder, uses the same approach with an initial attack to estimate the statistics more rapidly at the beginning, and with a different state table. The QM coder is the adaptive binary arithmetic coder employed in the JBIG and JPEG image compression algorithms. The QM-AYA arithmetic coder is similar to the QM coder but has a different state table that offers balanced improvements to the QM probability estimation for less skewed distributions. The QM-AYA performs better when the probability estimate is near 0.5 for each binary symbol. An approach for constructing effective index-change tables for Q-coder-type adaptation is discussed.

Motion-compensated interpolation has several applications in digital image processing, such as field-frequency conversion between different television standards and image coding with the frame-skipping method. Unlike a motion-compensated (MC) predictive algorithm, which aims to minimize the prediction errors that must be coded and transmitted, the motion estimator for MC interpolation must provide reliable motion vector fields that closely approximate the actual motion in the scene in order to reconstruct the skipped images at the receiver end. Generally speaking, motion vector fields obtained by conventional motion estimation algorithms are not good enough for MC interpolation. In this paper, a new motion estimation algorithm incorporating smoothness constraints is proposed, and its application to MC interpolation of skipped images in the environment of a low bit-rate codec is investigated. The simulation results show that motion vector fields obtained by the proposed algorithm are more homogeneous and more reliable than those obtained by conventional algorithms, and that the quality of the interpolated images is substantially improved for the examined sequence.

A subband coding scheme for the prediction error in a hybrid coder at a rate of 8 kbit/s is presented. In order to achieve a small address overhead, we propose intraband encoding of rectangular regions. An iterative relaxation-type procedure that jointly considers all subbands determines the number, size, and position of the updated rectangles, driven by optimization of the coding-gain-per-bit ratio. The prediction error amplitudes within the selected rectangles compose vectors of different dimensions, which are quantized by a multi-codebook vector quantizer containing one codebook per vector dimension. The problem of the large storage requirement of multi-codebook vector quantizers is circumvented by a codebook-sharing approach. A training procedure for the constrained-storage VQ is presented. In terms of SNR, it performs less than 0.5 dB worse than a common multi-codebook VQ while saving 90% of the storage.

New techniques for redundancy removal from the quantized coefficients of a wavelet transform are discussed. Several strategies are developed to improve the redundancy reduction stage in subband/wavelet-based image compression. The influence of the scanning path when coding the subband coefficients after transformation is pointed out, and solutions are proposed for exploiting the correlation between the coefficients more efficiently. New methods are also proposed to encode the addresses of nonzero coefficients using blocks, in both the lossy and lossless approaches. Simulations show better performance of the proposed techniques compared to classical methods, while maintaining an efficient implementation complexity.

This article presents a coding scheme using variable block-size transform coding together with vector quantization (VQ), called variable block-size transform vector quantization (VBSTVQ). The VBSTVQ achieves satisfactory picture quality at bit rates of about 0.3 - 0.6 bit/pel (coding of the luminance signal only). The coding scheme is well suited for multimedia, computer, and distribution applications due to its asymmetry in complexity and its inherent hierarchical structure. The picture is segmented into rectangles of different sizes. These rectangles are transformed by a two-dimensional DCT and coded by VQ based on analysis in the spatial and transform domains. A decomposition scheme of the rectangles into vectors, adapted to nonstationary signals such as edges, is introduced. Computer simulations compare the results of constant and variable block-size TVQ.

This paper presents a method for interlaced image sequence coding for digital TV. The interlaced nature of the CCIR 601 format, which is the current standard for digital TV, is a serious drawback in most digital video codecs. In order to obtain more efficient compression, we propose to process only the fields of one parity instead of processing the frames resulting from an interlaced-to-progressive format change. The fields of the other parity are predicted using spatial interpolation based on the corresponding decoded fields, and the prediction error is also coded and transmitted. In this way, the decoder can reconstruct the odd- and even-parity fields at a reduced transmission cost. Experimental results, in which the proposed interlaced coding method is applied to Gabor-like wavelet transform coding of MPEG2 image sequences, show very good performance.

This paper describes a predictive vector quantizer (PVQ) for coding grayscale images. The method can be regarded as an extension of an existing one-dimensional speech coding algorithm to two-dimensional images. It applies vector quantization (VQ) to the innovations generated by the well-known scalar differential pulse code modulation (DPCM) method, exploiting both the simplicity of DPCM and the high compressibility of VQ. Two types of codebooks, random and deterministic, are used in the implementation. Performance results with both types of codebooks are presented for industrial radiographic images. The results are also compared with reconstructions obtained using the discrete cosine transform (DCT) method.
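The DPCM building block above can be sketched in 1-D with a previous-sample predictor and a uniform scalar quantizer standing in for the paper's vector quantizer of the innovations (the signal and step size below are our own):

```python
import numpy as np

def dpcm_encode_decode(x, q_step):
    """Scalar DPCM sketch: previous-sample prediction, uniform
    quantization of the innovation, closed-loop reconstruction
    (the paper vector-quantizes these innovations instead)."""
    pred = 0.0
    rec = np.empty_like(x, dtype=float)
    for i, s in enumerate(x):
        e = s - pred                         # innovation
        eq = q_step * np.round(e / q_step)   # quantized innovation
        rec[i] = pred + eq                   # decoder reconstruction
        pred = rec[i]                        # predict from reconstruction
    return rec

x = np.array([10.0, 12.0, 15.0, 15.0, 9.0])
rec = dpcm_encode_decode(x, q_step=2.0)
```

Because the predictor runs on the reconstructed samples (closed loop), quantization errors do not accumulate: each sample's error stays bounded by half the step size, which is what makes the simple DPCM front end safe to combine with VQ.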

Transform encryption coding (TEC) is a universal technique for improving the performance of conventional transform coding (TC) techniques. It not only increases the compression ratio, quality, and security level of the coded image but also decreases the coded image's sensitivity to channel noise. In TEC, the TC technique is applied to an encrypted image instead of the natural scene image; each sample of the encrypted image is a weighted sum of the original image samples. TEC is therefore compatible with all TC techniques. The JPEG system is applicable to continuous-tone gray-scale or color digital still image data, and since a TC technique is employed in the JPEG baseline system, TEC can be used to improve its performance. In this paper, the quantization tables in the JPEG system are redesigned to match the statistical characteristics of encrypted images, and the parameters required by the encryption process are chosen accordingly, improving the performance of the JPEG baseline system on encrypted images. According to the simulation results, the combined JPEG-TEC technique yields about a 0.9 dB luminance and a 0.7 dB chrominance SNR increase on the JPEG demonstration images.

The present work reports a three-stage matching algorithm for latent fingerprints. The algorithm includes preprocessing by a transform-domain filter, computation of moments as invariant features, and finally the use of nearest-neighbor cluster analysis for fingerprint matching. The transform-domain filter selectively amplifies the spectral band containing the highest energy, followed by a band-pass filter. The enhanced image is almost noise-free and shows prominent features in the fingerprint that cannot be extracted by other conventional enhancement techniques. The moments of the enhanced fingerprints provide invariant features. Classification of fingerprints is performed by nearest-neighbor clustering of the moment features characterizing a specific fingerprint.

Oxide residues on rolled aluminum sheets appear as blemishes that degrade aesthetic surface quality. Consequently, the relative severities of surface oxides must be evaluated for product quality control, especially in packaging applications. In current practice, an experienced QA person visually inspects the sheet surface for oxide residues and assigns a grade based on the apparent severity. This procedure is limited by the operator's ability to resolve varying levels of oxides, and inspection results vary from operator to operator. This paper presents an imaging technique developed to measure undesirable oxides on metal surfaces. The technique quantifies oxide severity with potentially finer resolution and better repeatability than is currently possible. Preliminary results from the evaluation of oxides on cold-rolled aluminum samples show that oxide severities quantified using this technique correlate well with the discrete grades assessed by QA personnel.

This paper describes a method of indexing the color distribution of an image, which makes it easier to search for and access regions by their colors. The index has a quadtree structure with color indexes for successively divided image quadrants in its nodes. Thus the colors in each image quadrant are represented by a color index defined to take account of human visual perception. To perform a search, we descend through the index from the root, checking whether the color index satisfies a condition. The nodes satisfying that condition correspond to image quadrants that contain the color being queried. Experimental results show that this method is useful for detecting regions by their colors in the early stages of the search process.
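The index and search described above can be sketched as follows. Coarse RGB quantization stands in for the paper's perception-based color index, and each node simply stores the set of quantized colors occurring in its quadrant; the descent visits only quadrants whose index contains the queried color.

```python
import numpy as np

def quantize(img, step=64):
    # Coarse RGB quantization; a stand-in for the paper's
    # perception-based color index.
    return img // step

def build_index(q, y0, x0, size, min_size=2):
    # Quadtree node: (set of quantized colors in quadrant, children or None).
    colors = {tuple(c) for c in q[y0:y0+size, x0:x0+size].reshape(-1, 3)}
    if size <= min_size or len(colors) == 1:
        return (colors, None)
    h = size // 2
    kids = [build_index(q, y0 + dy, x0 + dx, h)
            for dy in (0, h) for dx in (0, h)]
    return (colors, kids)

def search(node, y0, x0, size, color, hits):
    # Descend from the root, pruning quadrants whose color index
    # does not contain the queried color.
    cols, kids = node
    if color not in cols:
        return
    if kids is None:
        hits.append((y0, x0, size))
        return
    h = size // 2
    for (dy, dx), kid in zip([(0, 0), (0, h), (h, 0), (h, h)], kids):
        search(kid, y0 + dy, x0 + dx, h, color, hits)
```

Because whole subtrees are pruned as soon as a color is absent from a quadrant's index, candidate regions emerge early in the descent, matching the paper's observation about the early stages of the search.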

Ohmic contact technology is a key problem in the development of GaAs MESFET circuits. Contacts are usually achieved through a multilayer (Au, Ge, Ni) interdiffusion operation under controlled annealing. The electrical quality of the contact derives from the textures of complex alloy islands, or micro-dots, induced by the process. There is presently no nondestructive means of observing the physical nature of the interface between the contact and the bulk material. We propose using laser scanning tomography to explore the interfacial microprecipitates noninvasively and nondestructively. A new micro-scanning method and corresponding data processing yield a three-dimensional view of the internal region underlying the contact; the resolution is at the micron scale in the lateral direction but largely sub-micron in the z direction perpendicular to the surface, allowing a precise analysis of the region critical to electronic transfer in the transistor. Experimental results are presented for standard circuits that have undergone thermal aging.

The development of noninvasive techniques for determining biomechanical body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers, etc.) receives increasing attention from the medical sciences (e.g., orthopaedic gait analysis), bioengineering, sport biomechanics, and the various space programs. In the present paper, a novel method is presented for determining body segment parameters rapidly and accurately. It is based on video-image processing of four different body configurations and a finite mass-element human body model. The four video images of the subject are recorded against a black background, permitting the application of shape-recognition procedures incorporating edge-detection and calibration algorithms. In this way, a total of 181 object-space dimensions of the subject's body segments can be reconstructed and used as anthropometric input data for the mathematical finite mass-element body model. The latter comprises 17 segments (abdomino-thoracic, head-neck, shoulders, upper arms, forearms, hands, abdomino-pelvic, thighs, lower legs, feet) and enables the user to compute all the required parameters for each of the 17 segments by means of the associated computer program. The hardware requirements are an IBM-compatible PC (1 MB memory) running MS-DOS or PC-DOS (version 3.1 onwards) with a VGA board whose feature connector links it to a super video windows framegrabber board, for which a free 16-bit slot must be available. In addition, a VGA monitor (50 - 70 Hz, horizontal scan rate at least 31.5 kHz), a common video camera and recorder, and a simple rectangular calibration frame are required. The advantages of the new method lie in its ease of application, its comparatively high accuracy, and the rapid availability of the body segment parameters, which is particularly useful in clinical practice. An example of its practical application illustrates the technique.
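The finite mass-element idea can be illustrated with a toy computation: a segment is approximated as a stack of elliptical disk elements whose semi-axes come from the reconstructed widths and depths. This is a hedged sketch of the general approach, not the paper's 17-segment model; the uniform-density assumption and the disk geometry are illustrative.

```python
import numpy as np

def segment_parameters(widths, depths, dz, density):
    # Finite mass-element sketch: the segment is a stack of elliptical
    # disk elements of thickness dz, semi-axes from the reconstructed
    # widths and depths (units: metres, kg/m^3).
    a = np.asarray(widths, float) / 2.0
    b = np.asarray(depths, float) / 2.0
    v_el = np.pi * a * b * dz                  # element volumes
    m_el = density * v_el                      # element masses
    z = (np.arange(len(a)) + 0.5) * dz         # element centres on the axis
    mass = m_el.sum()
    z_cm = (m_el * z).sum() / mass             # longitudinal mass centre
    # Transverse moment of inertia about the mass centre: thin elliptical
    # disk about a diameter (m*b^2/4) plus the parallel-axis term.
    i_t = (m_el * (b ** 2 / 4 + (z - z_cm) ** 2)).sum()
    return mass, z_cm, v_el.sum(), i_t
```

Summing element contributions in this way yields the segment volume, mass, mass-center location, and moment of inertia directly from the reconstructed dimensions.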

Ultrasonic B-scan images present a particular texture known as `speckle', which may reveal information about the structure of the investigated tissue. The present work is devoted to discriminating various prostatic tissues (normal tissue, benign prostatic hypertrophy, and cancer) on ultrasonic scans by means of texture analysis. Three methods have been implemented: the autocorrelation function and the co-occurrence matrices, both measuring second-order statistics, and the gray-level run-length matrices. Parameters derived from the co-occurrence matrices provide a fairly good tissue signature: the processing of 37 images gives a 78% rate of correctly classified samples, even though the images cannot be discriminated visually. However, these results are obtained when wide regions of interest are investigated (64 X 64 pixels); they are less significant when the sample size decreases, that is, when the pathological area is very small.
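The co-occurrence approach can be sketched as follows: a gray-level co-occurrence matrix for one displacement, and a few classic second-order parameters derived from it. The paper's full parameter set is not specified here, so the three features below are representative examples only.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # Gray-level co-occurrence matrix for displacement (dx, dy):
    # p[i, j] is the normalized frequency of gray level i occurring
    # at distance (dx, dy) from gray level j.
    p = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            p[img[y, x], img[y + dy, x + dx]] += 1
    return p / p.sum()

def texture_parameters(p):
    # A few classic second-order texture parameters (Haralick-style).
    i, j = np.indices(p.shape)
    return {
        "energy": (p ** 2).sum(),
        "contrast": ((i - j) ** 2 * p).sum(),
        "homogeneity": (p / (1 + np.abs(i - j))).sum(),
    }
```

In practice such parameters would be computed over each region of interest (e.g., 64 x 64 pixels) and fed to a classifier, which is where the dependence on region size noted above arises.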

A device for the 3-D representation of the prostate has been developed. It operates with either sagittal or transverse images. On each selected image, an operator outlines the prostate (and/or any pathological area) by means of a digitizing tablet. These contours are then described by a limited number of points. From these points, which belong to the object envelope, two 3-D representation techniques have been implemented: the B-spline parametric surface and the triangulation method. The main advantage of the triangulation algorithm is its speed, in contrast to the parametric-surface approach; moreover, it yields satisfactory representations of simple anatomic shapes such as the prostate. The understanding of the object's geometry is improved by fast rotation of the whole object.
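The triangulation method can be sketched for the simple case in which successive contours have been resampled to the same number of corresponding points; each pair of adjacent contour slices is then joined by a strip of triangles. The equal-point-count resampling is an assumption made here for brevity.

```python
def triangulate_slices(c1, c2):
    # Triangle strip between two closed contours (lists of 3-D points),
    # assuming both have been resampled to n corresponding points.
    # Each quad between neighboring points is split into two triangles.
    n = len(c1)
    tris = []
    for i in range(n):
        j = (i + 1) % n                  # wrap around the closed contour
        tris.append((c1[i], c1[j], c2[i]))
        tris.append((c2[i], c2[j], c1[j]))
    return tris
```

Two closed contours of n points each yield 2n triangles, and the whole surface is built slice by slice, which is why the method is fast compared with fitting a parametric surface.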

The problems of nonscanning three-dimensional all-around data acquisition are discussed using the concept of the `sphere of vision.' Based on the recently developed panoramic annular lens (PAL optics), which abandons the `see-through-a-window' (STW) concept of data acquisition, a new signal-collecting module has been developed using the visual strategy of birds. The PAL imaging module for all-around (spherical) data acquisition and recording (PALIMADAR) uses two PAL optics juxtaposed on the same optical axis, with their flat surfaces facing each other. The unique feature of PALIMADAR is that its visual field consists of three regions, as does that of birds: a binocular or stereoscopic region, an anterior visual field, and a lateral visual field. As a consequence, this module -- for the first time in imaging history -- covers a spherical visual field without the need for any scanning technique.