This paper presents an overview of past and current activities in the field of computer vision at Joanneum Research. Joanneum Research is a non-profit research organization that covers most of its revenue through contract research with industrial partners and with international agencies in the area of space research (e.g., the European Space Agency). The focus on industrial real-time image processing started with the development of a 2D inspection system for wooden surfaces and has expanded to the development of a 3D vision system for spacecraft navigation and elevation modeling. In this presentation the major projects in this area of research are outlined.

This paper presents an application of digital image processing in the historical sciences. It deals with the processing of x-ray recordings of watermark images taken from Middle Ages codices. A sequence of processing steps for image enhancement, geometrical transformations, watermark extraction, and binarization is suggested. The watermarks are stored together with alphanumeric information in a database, allowing the historian to retrieve and compare watermarks and to measure parameters of watermarks such as height, length, distance between special points, and radii.

This work deals with the application of polynomial bases to digital image processing using a sliding window. An algorithm is built for the parallel-recursive calculation of local moment characteristics. A parametric set of polynomial bases is introduced that yields the fastest realization of the algorithm. We consider methods for calculating the polynomial approximation parameters of the convolution kernel. Examples are given of the use of polynomial bases for 2-D signal filtering and for the detection and recognition of objects in an image.
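
The parallel-recursive idea behind sliding-window moment computation can be sketched as follows; this is a generic running-sum recursion for the zeroth-order local moment (the window mean), not the authors' specific polynomial basis, and the names are illustrative:

```python
import numpy as np

def sliding_mean(signal, w):
    """Recursively updated local mean over a sliding window of width w:
    each step adds the entering sample and drops the leaving one, so the
    cost per output sample is O(1) instead of O(w)."""
    n = len(signal)
    out = np.empty(n - w + 1)
    s = signal[:w].sum()          # initial window sum
    out[0] = s / w
    for i in range(1, n - w + 1):
        s += signal[i + w - 1] - signal[i - 1]   # recursion step
        out[i] = s / w
    return out
```

Higher-order local moments admit the same kind of recursion, which is what makes the sliding-window scheme fast.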

The problem of processing two gray-level images forming a stereo pair is considered by treating the sought-for disparity map and the original image pair as, respectively, the hidden and the observable constituents of a two-component random field with known probabilistic properties. The stereo matching algorithm resulting from this approach is in fact a version of the classical random field interpolation procedure, with some alterations aimed at attaining a higher computation speed. What distinguishes the stereo matching problem among random field interpolation problems of a general kind is a special a priori model that takes into account all the usual constraints imposed on the disparity map by the laws of geometric projection.

In the present paper we tackle the problem of the identification of optical distorting systems. The identification is performed on informative fragments, which are determined using conditionality estimates of the matrices involved. We construct simple formal procedures for evaluating the degree of conditionality, with the objective of determining the informative image fragments that permit the identification to be performed with the desired accuracy.

Methods for parametric and nonparametric statistical estimation are developed to identify the pixels belonging to the object and to the image background when the prior probability density functions of background and object locations are unknown.

The problem of the design of unerring neural networks with block structure is considered. The bottleneck is building faultless blocks without spurious states. All the commonly used learning rules for recurrent neural networks fail to exclude the appearance of spurious states in the neighborhoods of the stable states. In recognition problems these spurious states cause false solutions that degrade the network and destroy its robustness. The problem of finding the number of stable states, and of localizing them, in neural networks with polynomial capacity and without oscillations is considered. A method is proposed to detect spurious states that are close to the stable states of a given network. The method is based on random sampling. It is fast and almost always finds the dangerous spurious states. The algorithm is of Las Vegas type and can find the centers and approximate sizes of all the basins of attraction that are large enough. Estimates of the test time and of the probability of test correctness are given. The number of operations is polynomial unless the capacity is exponential. Some approaches to spurious state suppression are proposed. The results of experiments with some neural network models are briefly described.
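
The random-sampling idea can be illustrated on a classical Hebbian Hopfield network; the sketch below (with illustrative names, and without the paper's basin-size and runtime estimates) relaxes random initial states to fixed points and flags those that are neither stored patterns nor their negations:

```python
import numpy as np

def hopfield_fixed_point(W, s, max_iter=100):
    """Iterate synchronous sign updates until a fixed point is reached."""
    for _ in range(max_iter):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0
        if np.array_equal(s_new, s):
            return s
        s = s_new
    return s

def find_spurious_states(patterns, n_samples=200, seed=0):
    """Las Vegas-style sampling search for spurious states of a Hebbian
    Hopfield network: random initial states are relaxed to fixed points,
    and any fixed point that is neither a stored pattern nor its
    negation is reported as spurious."""
    rng = np.random.default_rng(seed)
    P = np.array(patterns, dtype=float)
    n = P.shape[1]
    W = (P.T @ P) / n                      # Hebbian learning rule
    np.fill_diagonal(W, 0.0)               # no self-connections
    stored = {tuple(p) for p in P} | {tuple(-p) for p in P}
    spurious = set()
    for _ in range(n_samples):
        s0 = rng.choice([-1.0, 1.0], size=n)
        fp = tuple(hopfield_fixed_point(W, s0))
        if fp not in stored:
            spurious.add(fp)
    return spurious
```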

Automatic gray scale correction of captured video data (both still and moving images) is one of the least researched questions in image processing, although the topic is touched upon in almost every book on the subject. Classically it is treated as image enhancement and is frequently classified among histogram modification techniques. The traditionally used algorithms, based on analysis of the image histogram, cannot solve the problem properly. The difficulty lies in the absence of a formal quantitative estimate of image quality -- to date the most often used criteria are human visual perception and experience. Hence, the problem of finding measurable properties of real images that could serve as the basis for automatically constructing a gray scale correction function (sometimes also called a gamma-correction function) remains unsolved. In this paper we try to identify some common properties of real images that could help us to evaluate gray scale distortion and, finally, to construct an appropriate correction function to enhance an image. Such a method could be used effectively in automatic image processing procedures, such as enhancement of medical images, reproduction of pictures in the publishing industry, correction of remote sensing images, preprocessing of captured data in computer vision, and many other applications. The complexity of the analysis procedure becomes important when the algorithm must run in real time (for example, in video input devices such as video cameras).
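
For concreteness, the classical histogram-equalization construction of a correction function (a lookup table) is sketched below; this is exactly the kind of histogram-only method the paper regards as insufficient on its own, shown here only to fix terminology:

```python
import numpy as np

def gray_correction_lut(img):
    """Histogram equalization as a gray-scale correction function:
    returns a 256-entry lookup table mapping input to output levels."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[cdf > 0].min()
    lut = 255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min)
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)
```

Applying the correction is then a single table lookup, `lut[img]`, which is cheap enough for real-time use.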

This paper presents regularized least squares algorithms for the restoration and reconstruction of images. Whitening filters of short length are derived formally as optimal regularization operators. Adaptive versions of the algorithms are developed by matching a weighting function to the particular regularization function. The adaptive regularization leads to proper noise suppression as well as to enhanced resolution of discontinuities. The application focuses on the restoration of images recorded by the Hubble Space Telescope (HST).
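
A minimal sketch of regularized least-squares restoration, assuming a 1-D circular blur and a discrete Laplacian as the regularization operator (the paper's short whitening filters and adaptive weighting are not reproduced):

```python
import numpy as np

def regularized_restore(y, h, lam):
    """Tikhonov-regularized deconvolution in the Fourier domain:
    minimize ||h*x - y||^2 + lam*||lap*x||^2 over x, where lap is a
    periodic discrete Laplacian acting as the regularization operator."""
    n = len(y)
    H = np.fft.fft(h, n)
    lap = np.zeros(n)
    lap[0], lap[1], lap[-1] = -2.0, 1.0, 1.0      # periodic Laplacian kernel
    L = np.fft.fft(lap)
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
    return np.real(np.fft.ifft(X))
```

Larger `lam` suppresses noise at the cost of resolution; the adaptive schemes of the paper vary this trade-off locally.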

The image restoration problem is considered as an underdetermined inverse problem. Its solution is reduced to the solution of an extremal problem of one of two types -- unconstrained or constrained. The choice of problem type depends on the character of the a priori information about the ideal image. Under certain conditions the Fourier transform can be used to obtain the solution of these problems. However, for underdetermined problems the Fourier transform cannot be applied without a preliminary extension of the output image and of the distorting operator. Such an extension procedure is proposed in this paper.

The paper is devoted to image features based on the fractal dimension of a set. An algorithm for calculating the grid empirical fractal dimension (GEFD) of a gray-level image is introduced. The properties of GEFD are investigated using the representation of the original gray-level image by a pyramid of 3-D binary images.
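
The grid idea behind such fractal-dimension features can be illustrated with plain box counting on a binary image (a stand-in for GEFD, not its exact definition; names are illustrative):

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8)):
    """Grid (box-counting) estimate of fractal dimension: count occupied
    boxes at several grid scales and fit the slope of
    log N(s) versus log(1/s)."""
    h, w = binary.shape
    counts = []
    for s in sizes:
        trimmed = binary[:h - h % s, :w - w % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                 trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```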

A new data processing technology is proposed, based on the adaptive analytical description of digital arrays by truncated orthogonal series. Further data processing is performed in the space of the expansion coefficients. The approach combines moderate computing times with the full use of analytical methods in problems of data compression, description, and analysis.
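
The principle -- describe the data by a truncated orthogonal expansion and then work in the space of coefficients -- can be sketched with a cosine basis; the authors' adaptive choice of basis is not reproduced, and the names are illustrative:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix; rows are the basis functions."""
    j = np.arange(n)
    C = np.cos(np.pi * np.arange(n)[:, None] * (2 * j + 1) / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def truncated_expansion(x, m):
    """Expand x in the cosine basis, keep the m leading coefficients,
    and reconstruct; further processing would operate on the kept
    coefficients instead of the raw samples."""
    C = dct_matrix(len(x))
    coeffs = C @ x
    coeffs[m:] = 0.0              # truncate the orthogonal series
    return C.T @ coeffs
```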

An application of the generalized spectral-analytic method, being developed by the authors, to complex problems of signal processing and image analysis is described. Some particular problems are considered. Approaches to data compression, to the solution of initial- and boundary-value problems, and to dimension identification of planar curves are presented.

In the present paper a procedure for estimating small probabilities of false recognition of objects in an image is proposed and experimentally investigated. It is based on a modified scheme of the importance sampling method. The procedure permits us to estimate, in a single simulation cycle, the entire dependence of the probabilities on a varying parameter of the input signal distribution.

We propose a method of constructing the filters for reliable pattern recognition in an optical setup. The filters based on a phase-only filter are designed by applying a quantization technique to the optimal filter. High light efficiency and discrimination capability close to that of the optimal filter can be obtained by the proposed method. Computer simulation results are shown and discussed.
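
A generic construction of a quantized phase-only filter from a reference image is sketched below; this is a textbook POF with quantized phase, not the authors' optimal-filter-based design:

```python
import numpy as np

def quantized_pof(ref, n_levels=4):
    """Phase-only correlation filter with its phase quantized to
    n_levels values, as required by multilevel optical implementations."""
    F = np.fft.fft2(ref)
    phase = np.angle(np.conj(F))           # phase of the matched filter
    step = 2 * np.pi / n_levels
    return np.exp(1j * np.round(phase / step) * step)
```

Correlation with the scene is then `ifft2(fft2(scene) * H)`, and the filter's unit modulus is what gives the high light efficiency.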

In the present paper we discuss methods for the synthesis of fast algorithms for discrete orthogonal transforms which are based on embeddings of the rational number field containing the input data into various algebraic structures: cyclotomic fields, alternative algebras, etc.

In the present work different feasible implementations of the cosine transform are compared in terms of theoretical estimates of their complexity and experimental execution time. Recommendations are given concerning application of the discussed algorithms to block methods of coding.

Implementation of real-time digital image processing, which is needed in a variety of modern scientific and technical applications, involves considerable difficulties. These difficulties arise from the rigid requirements on the speed of the hardware used. An integrated design concept for high-speed digital image processing (DIP) systems gives us tools for solving some of these problems. The new direction of research and development for solving them consists in the introduction of pipeline LSI and VLSI table-type structure devices. This article suggests an approach based on computer arithmetic using a new modification of the classical modular number system (MNS) -- the minimally redundant modular number system.
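
The principle of a modular (residue) number system can be sketched as follows; the moduli below are a hypothetical choice, and the minimally redundant modification of the paper is not reproduced:

```python
from math import prod

MODULI = (251, 253, 255)   # pairwise coprime moduli (a hypothetical choice)

def to_rns(x):
    """Represent x by its residues; per-modulus channels can then add
    and multiply independently, which suits pipelined table lookup."""
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    """Chinese remainder reconstruction of the ordinary integer."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # three-argument pow: modular inverse
    return x % M
```

The expensive step is the final reconstruction; inside the pipeline, all arithmetic stays in the small residue channels.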

Spline and sinc interpolation methods for image geometrical transforms and their efficient computational implementations are described. Experimental comparison of the methods in terms of the root mean squared error of the reconstructed image after rotation shows that they significantly outperform the conventional methods of nearest neighbor and bilinear interpolation.
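
Discrete sinc interpolation of a fractional shift, for instance, can be implemented exactly for band-limited periodic signals as a phase ramp in the DFT domain; a minimal 1-D sketch with illustrative names:

```python
import numpy as np

def sinc_shift(x, delta):
    """Shift a 1-D signal by delta samples via discrete sinc
    interpolation: multiply the DFT by a linear phase ramp (exact for
    band-limited periodic signals)."""
    n = len(x)
    k = np.fft.fftfreq(n) * n              # signed integer frequencies
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(X * np.exp(-2j * np.pi * k * delta / n)))
```

Rotation is typically decomposed into such 1-D shear/shift passes, which is where the rms-error advantage over nearest-neighbor and bilinear interpolation comes from.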

Two computer-generated display macro holograms (CGDMH) have been synthesized to demonstrate the possibility of holographic display of 3D objects given by their mathematical descriptions only. Three-dimensional models of the objects and shaded 2D projections in varying viewing directions were generated using the methods of computer graphics. For each projection, a Fourier hologram was synthesized and encoded by the kinoform method. The recording of the obtained digital kinoforms on a commercially available photographic film was done by a computer-controlled laser device. This process produces, after film development and bleaching, a facet CGDMH. The complete CGDMHs have a size of 672 x 672 mm^2 and consist of 900 elementary holograms of 256 x 256 samples each, calculated for different directions within a solid angle of +/- 90 degrees. They allow the visual representation of 3D objects with good quality.

A new method of calculating computer-generated true-color rainbow holograms (CGTRRH), which are reconstructed by a white light point source, is proposed. The presented CGTRRH are the analog of the well-known rainbow holograms, but the proposed technique allows us to obtain true-color object reconstruction. White light and monochromatic light reconstruction results are discussed.

The possibility of fabricating high-resolution diffractive lenses of sufficiently large diameter for x-ray applications by the spatial-frequency multiplication method is shown. Ways of determining the optimal spatial frequency of the parent zone plate are discussed.

Kinoforms are phase elements serving to transform incident laser light into a pregiven intensity distribution (image), with high efficiency, in a desired spatial plane. The iterative Gerchberg-Saxton (GS) algorithm, which has been proven to converge, is a variant of the conditional gradient method and minimizes the functional of the rms deviation of the reconstructed image from the pregiven one. In the present paper we consider a parametric extension of the GS algorithm for calculating kinoforms. It is shown that by fitting the parameter one can raise the rate of convergence of the algorithm. The developed algorithm is called `weight-based' and is shown to have a convergence rate twice that of the GS algorithm. The problem of calculating kinoforms is related to the method of successive approximations used in solving a non-linear integral equation.
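
For reference, the plain GS iteration for a phase-only element can be sketched as below; the parametric `weight-based' extension of the paper is not reproduced, and shapes and parameters are illustrative:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Plain Gerchberg-Saxton iteration for a kinoform: alternately
    impose unit amplitude in the element plane and the target amplitude
    in the image plane; returns the element's phase function."""
    rng = np.random.default_rng(seed)
    u = np.exp(2j * np.pi * rng.random(target_amp.shape))  # random phase start
    for _ in range(n_iter):
        field = np.fft.fft2(u)                              # element -> image plane
        field = target_amp * np.exp(1j * np.angle(field))   # impose target amplitude
        back = np.fft.ifft2(field)                          # back to element plane
        u = np.exp(1j * np.angle(back))                     # keep phase only
    return np.angle(u)
```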

The iterative algorithm developed allows the calculation of phase optical elements serving to transform incident coherent light into nondiffracting beams characterized by an unchanging transverse intensity distribution. Such nondiffracting light beams represent Bessel modes and are described as a superposition of a small number of Bessel functions with equal arguments. The results of numerical calculations are discussed.

We discuss polarizing kinoforms -- diffractive optical elements that modulate only the polarization of incident light and form a desired intensity distribution at a pregiven distance. The surface microrelief of such optical elements has the form of a binary curvilinear diffraction grating with a constant period smaller than the wavelength. The energy efficiency of the elements is equivalent to that of conventional multilevel phase kinoforms.

A review of multichannel correlation methods applied to discriminate color objects with the logical operation AND is presented. The effects of the color CCD camera used as the acquisition system are studied. A preprocessing method is proposed to increase the discrimination. The generation of the filters is improved by applying lithographic techniques. Finally, the optimal-filter approach proposed by Yaroslavsky is applied to each channel in the correlation process. Computer and experimental results are given.

Implementations of rank order and morphological filtering in optical-digital processors are reviewed. In these processors, all of the convolutions are performed in inherently parallel optical correlators, and the subsequent arithmetic and logic operations are carried out digitally. Gray scale images are treated sequentially, slice by slice, according to the threshold decomposition concept. The optical-digital method of local histogram calculation within both binary and weighted neighborhoods of arbitrary size and shape is recalled. Several configurations of optical correlators are discussed. Finally, some examples of efficient use of hybrid processors are presented.
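
The threshold decomposition principle can be sketched digitally: slice the gray-scale image into binary slices, filter each slice with a binary majority operation (a convolution followed by a threshold, which is what the optical correlator computes), and stack the results. For the 3x3 median this reproduces the direct gray-scale filter; names below are illustrative:

```python
import numpy as np

def median3x3_by_slices(img, levels=4):
    """3x3 median filtering via threshold decomposition: each binary
    slice (img >= t) is filtered by a 9-pixel majority (3x3 box
    convolution thresholded at 5), and the slices are summed back
    into a gray-scale result."""
    h, w = img.shape
    out = np.zeros_like(img)
    for t in range(1, levels):
        p = np.pad((img >= t).astype(int), 1, mode='edge')
        s = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
        out += (s >= 5).astype(img.dtype)   # majority over the 9 pixels
    return out
```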

In this work we present a method, and an interactive program based on it, for visualizing and manipulating different three-dimensional data sets in a consistent way. After an introductory discussion of the motivation that initiated the ongoing research, a short description of the program follows. A section devoted to different uses of the program in a few chosen domains illustrates its applicability to the varied issues encountered in visualization today.

At present, most medical institutions in Russia use manual methods for counting the formed elements of blood. Usually, a laboratory assistant performs the count under a monocular microscope with a superimposed Goryaev graticule. The Goryaev graticule consists of 256 large squares, each divided into 16 small ones. Which squares are involved in the count depends on the particle type: for example, 100 large squares are used when counting leukocytes and 80 small squares when counting erythrocytes. Manual counting is slow and laborious, so the task of developing automated counting methods arises. The methods elaborated here are based on the same principle as the manual ones: the image in some number of squares of the Goryaev grid is input into a computer and then processed. In this paper, the counting of rosette-like formed elements is described.
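
The counting step itself can be sketched as connected-component counting on a thresholded square of the grid; this is a minimal illustration, not the paper's rosette-specific method:

```python
import numpy as np
from collections import deque

def count_particles(binary):
    """Count connected blobs (4-connectivity) in a thresholded image
    by flood-filling each unvisited foreground pixel."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                count += 1                      # new blob found
                queue = deque([(y, x)])
                seen[y, x] = True
                while queue:                    # flood-fill the blob
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return count
```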

Vascular eye fundus diseases, including diabetic retinopathy, have become the most important cause of vision loss among people of working age. The latest achievements in laser and surgical treatment of these diseases are based on early diagnostics and accurate evaluation of the dynamics of eye fundus lesions and the state of the retinal vessels. Moreover, a number of newly developed classification schemes contain many semiquantitative clinical descriptors of the status of the retinal vascular network. These data are used to prognosticate the course of the disease and to choose a treatment modality. These facts stimulate ophthalmologists to apply computerized techniques of processing, interpretation, and storage of clinical information in their routine work. Although they are the basis of routine diagnostics, ophthalmoscopically visible eye fundus lesions remain the most difficult to evaluate and analyze quantitatively. Hence computerized eye fundus analysis seems to be very important for the improvement of routine clinical practice. This paper presents our package of programs for the quantitative evaluation of the retinal vascular network (RVN). RVN parameters can be divided into two large subgroups: static and dynamic. Static parameters are obtained from ordinary eye fundus images and reflect the pattern of the RVN. Dynamic parameters are derived from retinal angiography analysis and indicate the state of blood flow and vessel permeability. The aim of our investigations was to create a tool for quantitative analysis of RVN static parameters, as the most suitable for screening and early diagnostics of vascular diseases of the retina.

Hardware and software designed for the analysis of speckle dynamics caused by displacement, deformation, and damage of a diffuse surface are described in this paper. For processing of the optical information two system variants are used: a CCD camera and photodiode array together with an IBM PC/AT computer, and a photodiode array together with an autonomous microcomputer. The first group of programs is intended for processing data previously input into the computer. The second group allows one to debug algorithms for the analysis of speckle dynamics in real time. Facilities for data monitoring, selection and averaging, correlation analysis, and analysis of speckle `twinklings' are available. Experimental results for the determination of surface displacement, damage, and hardening are presented.

This paper proposes an automated method for detecting microcrystal structure distortions on a tear crystallogram using the Karhunen-Loeve expansion (KLE), with preliminary construction of the field of crystallogram directions. The method admits an optical implementation: we propose a scheme for optically constructing the image directions field using a rotating slot. A computer simulation has been conducted using crystallogram samples selected by an ophthalmologist.

In the present work the application of a generalized projection strategy to the synthesis of phase diffractive optical elements (DOEs) is considered. The passage from the complex transmission function to a phase-only function, based on the introduction of a carrier into the phase function of the element under synthesis, is shown to be equivalent to the choice of the projection operator onto a set of functions with constant amplitude distribution in a space with a specially introduced norm. The method combines iterative calculation of the hologram phase function with coding of the amplitude-phase characteristics of the complex transmission function into phase. The results of computational experiments are presented.

Considerable recent attention has been focused on the investigation of `nondiffracting beams.' This term was introduced to denote the zero-order Bessel beam propagating in a homogeneous medium. Like an unbounded plane wave, the `nondiffracting beam' does not spread during propagation, but it has its intensity maximum on the optical axis. This article discusses possibilities for forming `nondiffracting beams' with the help of diffractive optical elements. The results of experimental investigations of a modulated axicon are presented.

The `Quick-DOE' software for an IBM PC-compatible computer is designed for calculating the masks of diffractive optical elements (DOEs) and computer-generated holograms, for computer simulation of DOEs, and for executing a number of auxiliary functions. The auxiliary functions include file format conversions, on-screen visualization of a mask from a file, implementation of fast Fourier transforms, and the arrangement and preparation of composite images for output on a photoplotter. The software is intended for use by opticians, DOE designers, and programmers developing software for DOE computation.

The present work deals with the application of pseudogeometric optics techniques to calculate a light field generated by a focusator of laser radiation into a rectangular domain. We have derived an analytical expression for the principal term of the asymptotic expansion in the focal plane including the geometric-optics shadow domain.

A method of color image compression is proposed that can restore images exactly after 2- to 3-fold compression. It is based on the so-called component transformation with pixel interpolation (CTPI). Combinations of CTPI components are used to achieve additional compression. Two possibilities for calculating optimal combinations are described. Use of pairwise differences of the CTPI differential components seems to be the most attractive way to realize adaptive versions of color combiners. Experiments have shown that the programs written to carry out the proposed method are very effective.
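
The CTPI transform itself is not reproduced here, but the idea of an exactly invertible component transformation can be illustrated with the well-known reversible color transform used in lossless coding (integer arithmetic throughout, so the inverse is exact):

```python
import numpy as np

def rct_forward(r, g, b):
    """Reversible (integer) component transform: decorrelates R, G, B
    losslessly; works elementwise on integers or integer arrays."""
    y = (r + 2 * g + b) // 4
    u = r - g
    v = b - g
    return y, u, v

def rct_inverse(y, u, v):
    """Exact inverse of rct_forward thanks to floor-division identities."""
    g = y - (u + v) // 4
    r = u + g
    b = v + g
    return r, g, b
```

The differential components `u` and `v` are what the subsequent lossless stage compresses well, which is the same motivation as for the pairwise differences mentioned above.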

Variable quantization (VQ) has become a common lossy operation in almost every modern image compression method. It is precisely this procedure, followed by lossless compression, that determines the final decompressed image quality. While the lossless stage of compression is well investigated (Huffman and arithmetic codes), the lossy stage remains, up to the present, at the level of an art based on the experience of the investigator. A rather general multilayer source model for optimizing variable quantization (reconciling it with the properties of the source and of human vision) is proposed. This model allows us to analyze quantization from a new, approximation-based point of view, to formalize the optimization task, and to propose various simple and effective VQ schemes that can be used as the lossy procedure in an arbitrary image compression method.
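
At its simplest, the quantization stage is a uniform scalar quantizer; the sketch below shows the basic lossy round-trip whose error the proposed source model is meant to shape (the multilayer model itself is not reproduced):

```python
import numpy as np

def quantize(coeffs, step):
    """Uniform scalar quantization: map coefficients to integer indices;
    this is the lossy operation, with error bounded by step/2."""
    return np.round(coeffs / step).astype(int)

def dequantize(indices, step):
    """Reconstruct approximate coefficients from the indices."""
    return indices * step
```

A variable-quantization scheme replaces the single `step` by a step size that varies with the coefficient's perceptual importance.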

Methods of characterizing images are welcome in pattern recognition. Characteristic features are usually designed using intuition or abstract theories, so the designed characteristics have an immediately comprehensible meaning. We, however, intend to find a great number of features, taking only their usefulness into consideration rather than assigning them a meaning beforehand. Most of the calculations are combined with scanning procedures, so a special scanning device can be built for image recognition and decision making; the image processing time is then about the same as the scanning time. In this paper we introduce the trace transformation, give examples of it, and investigate its reaction to distortions of an image. We also show how to obtain many useful features using the trace transformation.
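
A minimal digital sketch of the trace transformation is given below (illustrative names and nearest-neighbor sampling); with `functional=np.sum` it reduces to a coarse Radon transform, while other functionals such as max or median yield different feature families:

```python
import numpy as np

def trace_transform(img, functional=np.sum, n_angles=8):
    """Scan the image with straight lines at n_angles orientations and a
    range of offsets, applying `functional` to the pixels sampled along
    each trace."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    diag = int(np.ceil(np.hypot(h, w)))
    ts = np.linspace(-diag / 2.0, diag / 2.0, diag)   # positions along the line
    out = np.zeros((n_angles, diag))
    for a, theta in enumerate(np.linspace(0.0, np.pi, n_angles, endpoint=False)):
        d = np.array([np.cos(theta), np.sin(theta)])      # along the line
        nvec = np.array([-np.sin(theta), np.cos(theta)])  # normal to the line
        for p in range(diag):
            off = p - diag / 2.0
            xs = cx + off * nvec[0] + ts * d[0]
            ys = cy + off * nvec[1] + ts * d[1]
            ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
            samples = img[ys[ok].astype(int), xs[ok].astype(int)]
            out[a, p] = functional(samples) if samples.size else 0.0
    return out
```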