This research line studies techniques for room response equalization and for nonlinear active noise control.

Room response equalization has been applied to improve the objective and subjective quality of sound reproduction systems in cinema theaters, home theaters, and car HiFi systems. Room response equalization systems act by shaping the room transfer function (RTF) from the sound reproduction system to the listener with a suitably designed equalizer. Both minimum-phase and mixed-phase room equalizers have been proposed in the literature. Minimum-phase room equalizers, acting on the minimum-phase part of the RTF, can be used to shape the RTF magnitude response. In contrast, mixed-phase room equalizers can also correct the non-minimum-phase part of the RTF phase response. In principle, mixed-phase equalizers can remove some of the room reverberation, but particular care must be taken to avoid "pre-echoes" caused by errors in the non-causal part of the equalizer. Room equalizers can also be divided into single-position and multiple-position equalizers. In the first case, the equalization filter is designed on the basis of a measurement of the room impulse response in a single location. These equalizers can achieve equalization only in a reduced zone around the measurement point, of the size of a fraction of the acoustic wavelength: indeed, the room impulse response varies significantly with the position in the room. Moreover, the room impulse response also varies with time, so the room should be considered a weakly nonstationary system. Multiple-position room equalizers enlarge the equalized zone by measuring the room impulse response in multiple locations. Different minimum-phase and mixed-phase multipoint room equalizers that avoid pre-echoes have been studied within this research line.
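As a sketch of the underlying signal processing, the minimum-phase part of a measured impulse response can be extracted with the standard homomorphic (real-cepstrum) method; this is a generic textbook construction, not one of the specific equalizer designs studied in this research line:

```python
import numpy as np

def minimum_phase_part(h, n_fft=4096):
    """Extract the minimum-phase component of an impulse response h
    using the standard homomorphic (real-cepstrum) method."""
    H = np.fft.fft(h, n_fft)
    # real cepstrum of the magnitude response
    cep = np.fft.ifft(np.log(np.maximum(np.abs(H), 1e-12))).real
    # fold the cepstrum: keep c[0] and c[N/2], double the causal part,
    # zero the anti-causal part
    w = np.zeros(n_fft)
    w[0] = 1.0
    w[1:n_fft // 2] = 2.0
    w[n_fft // 2] = 1.0
    h_min = np.fft.ifft(np.exp(np.fft.fft(cep * w))).real
    return h_min[:len(h)]

# Mixed-phase toy response: zeros at z = -0.5 (inside the unit circle)
# and z = -2 (outside); the minimum-phase equivalent reflects the
# outer zero inside, preserving the magnitude response.
h = np.array([0.5, 1.25, 0.5])        # (1 + 0.5 z^-1)(0.5 + z^-1)
h_min = minimum_phase_part(h)         # ~ (1 + 0.5 z^-1)^2 = [1, 1, 0.25]
```

The minimum-phase part has the same magnitude response as the original but concentrates its energy at the earliest samples, which is why it can be inverted without introducing pre-echoes.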

The principle of active noise control is the cancellation of an acoustic disturbance through destructive interference with a secondary noise, produced by the controller, with the same amplitude but opposite phase. While the literature is mainly concerned with linear controllers, there is evidence that nonlinear effects may influence the behavior of active noise control systems. On this basis, novel nonlinear filter structures and adaptive algorithms suitable for active noise control and nonlinear system identification have been researched.
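A minimal single-channel simulation of the classical filtered-x LMS (FxLMS) algorithm, the standard linear baseline in active noise control, may clarify the principle; the primary and secondary acoustic paths below are hypothetical toy responses, and the secondary path is assumed to be known exactly:

```python
import numpy as np

# Toy single-channel FxLMS active noise control simulation.
rng = np.random.default_rng(1)
N, L = 20000, 16
x = rng.standard_normal(N)              # reference noise picked up upstream
p = np.array([0.0, 0.9, 0.4, 0.2])      # primary path (noise -> error mic), assumed
s = np.array([0.0, 0.7, 0.3])           # secondary path (speaker -> error mic), assumed
d = np.convolve(x, p)[:N]               # disturbance at the error microphone

w = np.zeros(L)                         # adaptive control filter
xbuf = np.zeros(L)                      # reference sample buffer
fxbuf = np.zeros(L)                     # filtered-reference buffer
ybuf = np.zeros(len(s))                 # recent anti-noise samples
fx = np.convolve(x, s)[:N]              # reference filtered by the secondary path
mu = 0.0005                             # adaptation step size
e = np.zeros(N)
for n in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx[n]
    y = w @ xbuf                        # anti-noise emitted by the speaker
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[n] = d[n] - s @ ybuf              # residual after destructive interference
    w += mu * e[n] * fxbuf              # FxLMS weight update
```

After convergence the residual power at the error microphone drops by orders of magnitude; the nonlinear structures studied in this research line replace the linear filter w with nonlinear ones while keeping a similar adaptation scheme.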

ADVANCED DISPLAYS FOR MEDICAL APPLICATIONS

G. Ramponi, S. Marsi

Within the EU_Artemis CHIRON project, the University of Trieste collaborates with FIMI and Barco on the design of an innovative display, based on the “Dual Layer LCD” technology, which offers a contrast ratio of over 50,000:1. One prototype is currently available at UNITS and is used for testing. This display requires appropriate “dual layer display processing” algorithms in order to prepare and generate the two images which drive the panels, starting from a grayscale image, typically in DICOM format. This processing consists of two main components: a mapping, from gray levels to luminance, and a splitting, from luminance to digital driving levels. Quality assurance tests are also addressed.

The image splitting algorithms that generate the two images reproduced by each panel were designed based on the characteristics of the current prototypes, which use two identical liquid crystal panels, and therefore generate two images with identical resolution. UNITS has also set up and partially performed a set of psychophysical experiments aimed at developing a novel image mapping technique, suitable for the display of medical images on the Dual Layer LCD.
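As an illustration of the splitting step (not the actual algorithm designed for the prototypes), the simplest split for two identical stacked panels assigns the square root of the target normalized luminance to each panel, since the overall transmittance of the stack is the product of the two panel transmittances:

```python
import numpy as np

def square_root_split(target, eps=1e-4):
    """Split a target normalized luminance image (values in (0, 1]) into
    two panel driving images.  With two identical stacked LCD panels the
    overall transmittance is the product of the two, so displaying the
    square root of the target on each panel reproduces it while letting
    each panel cover only half of the required dynamic range (in log
    terms)."""
    t = np.clip(target, eps, 1.0)
    half = np.sqrt(t)
    return half, half                  # front image, rear image

# contrast multiplies: two stacked panels of contrast C each
# yield a combined contrast of C * C
target = np.linspace(0.001, 1.0, 256)
front, rear = square_root_split(target)
```

This baseline explains the very high contrast ratio of the dual-layer stack; practical splitting algorithms depart from it to handle parallax, inter-panel diffusion and quantization of the driving levels.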

The first experiments showed that the technique currently in use (the DICOM Grayscale Standard Display Function) causes low visibility of details in the dark portions of the image; therefore, a replacement is required in order to fully exploit the capabilities of the device. To this end, it is necessary to measure experimentally the visual threshold at low luminance levels, possibly using the same definition as the DICOM standard in order to remain consistent with the behavior of existing devices. For this purpose, UNITS designed a software application that generates a sequence of test images, displays them to an observer, reads the responses and performs a statistical analysis of the data. A "staircase" method was selected for the generation of the test sequence, and different techniques were implemented for the statistical analysis of the data. Simulations are also being performed in order to verify the reliability of the data analysis techniques.
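A minimal simulation of a transformed staircase procedure may illustrate the approach; the logistic psychometric function below is a hypothetical observer model used only to exercise the procedure, not measured data:

```python
import numpy as np

def two_down_one_up(observer, start=1.0, step=0.05, n_trials=400, seed=0):
    """Transformed 2-down/1-up staircase: the stimulus level decreases
    after two consecutive correct responses and increases after every
    error, converging to the 70.7%-correct point of the psychometric
    function."""
    rng = np.random.default_rng(seed)
    level, run = start, 0
    reversals, last_dir = [], 0
    for _ in range(n_trials):
        correct = rng.random() < observer(level)
        if correct:
            run += 1
            if run == 2:                 # two in a row -> make it harder
                run = 0
                level -= step
                if last_dir == +1:
                    reversals.append(level)
                last_dir = -1
        else:                            # any error -> make it easier
            run = 0
            level += step
            if last_dir == -1:
                reversals.append(level)
            last_dir = +1
        level = max(level, step)
    # threshold estimate: mean of the last few reversal levels
    return np.mean(reversals[-8:])

# hypothetical observer: logistic psychometric function with
# threshold near 0.5 and slope parameter 0.05
def observer(c):
    return 1.0 / (1.0 + np.exp(-(c - 0.5) / 0.05))

est = two_down_one_up(observer)   # converges near the 70.7% point (~0.54 here)
```

Running such simulations against a known observer model is exactly how the reliability of the data analysis can be verified before real psychophysical sessions.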

Advanced instrumentation for synchrotron radiation light sources

S. Carrato

New generation synchrotron radiation sources and free electron lasers require novel concepts of beam diagnostics to keep photon beams under surveillance, and need simultaneous position and intensity monitoring. Diamond is a promising material for the production of semitransparent in-situ photon beam monitors which can withstand the high dose rates occurring in such radiation facilities. We report on the development of freestanding, single crystal CVD (chemical vapor deposited) diamond detectors with segmented electrodes.

Performance in both low- and radio-frequency beam monitoring is presently being studied. By using charge integration techniques at a frame rate of 6.5 kHz in combination with a needle synchrotron radiation beam and mesh scans, the inhomogeneity of the sensor was found to be of the order of 2%, with a measured electronics noise of 2 pA/√Hz; a 0.05% relative precision in the intensity measurements (at 1 µA) and a 0.1 µm resolution in the position encoding have been estimated. The high electron and hole mobility of diamond, compared with those of other active materials, enables a charge collection characterized by rise times below 1 ns; this allowed us to use single pulse integration to simultaneously detect the intensity and the position of each synchrotron radiation photon bunch generated by a bending magnet in the X band (10-40 keV) and by an undulator in the EUV band (19-400 eV).
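As a generic illustration of position encoding with a segmented sensor (not the actual read-out chain used here), the standard difference-over-sum formulas recover the beam position and intensity from the charges collected by four quadrant electrodes:

```python
import numpy as np

def beam_position(qa, qb, qc, qd, k=1.0):
    """Difference-over-sum position estimate for a four-quadrant sensor.
    Quadrant layout assumed: a = top-right, b = top-left,
    c = bottom-left, d = bottom-right; k is a hypothetical calibration
    factor (displacement per unit asymmetry)."""
    s = qa + qb + qc + qd                 # total charge, proportional to intensity
    x = k * ((qa + qd) - (qb + qc)) / s   # right minus left
    y = k * ((qa + qb) - (qc + qd)) / s   # top minus bottom
    return x, y, s

# beam shifted toward the right-hand quadrants
x, y, total = beam_position(2.0, 1.0, 1.0, 2.0)
```

Normalizing by the sum makes the position estimate insensitive to intensity fluctuations, which is why intensity and position can be extracted simultaneously from each pulse.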

Preliminary measurements at the Fermi FEL have been performed with these detectors, extracting quantitative intensity and position information for 100 fs wide FEL pulses, with a long term spatial precision of about 85 μm.

Development of electronic instrumentation for biomedical applications

S. Carrato

Diabetic patients have great difficulty in safely enjoying indoor or outdoor physical activities, since they have lost the physiological mechanisms which allow the body to maintain an appropriate blood glucose level despite the increased carbohydrate consumption during exercise. At the same time, the positive effects of adequate physical activity for insulin-dependent diabetic subjects are well recognized, but the risk of an excessive lowering of glycemia tends to discourage most of them; in the worst case, a condition of hypoglycemia can lead the subject to coma.

We developed a method for calculating the amount of carbohydrates needed by a patient in order to practice aerobic physical activity without risk. This value is calculated on the basis of the subject's characteristics and therapy and of some details about the kind of sport that the subject is going to practice. The algorithm is being implemented in software running on a server accessible via SMS; in this way, access to the system is granted to the subscribed user via a mobile phone, without the need for a PC. Alternative implementations presently under study are an app for smartphones and a portable wristwatch-like device.

First tests on 89 patients show that the suggested amount of carbohydrates allows patients to conclude the exercises with an optimal glycemia in a high percentage of cases for exercises of short/medium duration; in any case, further tests are needed to better validate the method.

Hardware architecture for real time image processing

S. Marsi, G. Ramponi, S. Carrato

Digital cameras, new generation phones, commercial TV sets and, in general, all modern devices for image acquisition and visualization can benefit from image enhancement algorithms able to work in real time and preferably with limited power consumption. Unfortunately, most systems described in the scientific literature either require a very high computational effort or provide rather poor performance.

The objective of this activity is to bridge the gap between theory and practice, i.e. to consider the detailed steps, methodology, and trade-off analysis required to achieve real-time performance and, eventually, to identify a suitable hardware architecture. This is a rather interdisciplinary research activity, since real-time image processing involves many different aspects, from algorithm development to design and implementation. There are many phases one should go through to take an image processing algorithm developed in a research environment to an actual working product.

A common misunderstanding regarding real time is that, since hardware gets faster and more powerful each year, real-time constraints can be met simply by using the latest, fastest, most powerful hardware, thus rendering real time a non-issue. The problem with this argument is that such a solution is often not practical, especially for embedded systems with constraints on cost, size, power consumption and response time. The complete design of a real-time system must therefore address many challenging issues, and the solution frequently comes from a suitable compromise among all of them. One important aspect to consider in the design of algorithms to be implemented in real time on a dedicated hardware platform is the limited computational resources together with the required power consumption: the system must be developed so as to reduce, from the algorithm design phase onward, the power consumption and the bandwidth needed to exchange data or to access the memory.

Thus, we first analyze possible simplifications of existing methods, making them more suitable for real-time implementation and at the same time less demanding in terms of power. Novel solutions are also devised, combining good objective performance with simplicity of realization, so that they can be implemented on a limited-performance architecture. The realization aspects are taken into account both in terms of possible simplification of the algorithms and of suitable optimization approaches, such as fast architectures for specific operators or parallel processing for the most critical components. The algorithms are analyzed step by step, starting from a high-level behavioral model, then moving to the architecture description and to the actual implementation.

HIGH DYNAMIC RANGE IMAGES

S. Marsi, G. Ramponi, S. Carrato

Humans are able to distinguish without difficulty the details of an image both when the object is placed in direct sunlight and when it is in shadow or, even worse, in a dark zone. By contrast, when the acquisition and visualization systems are artificial (video or still camera, printer or monitor), problems due to the limited dynamic range of the entire system appear. The dynamic range of an image is defined as the ratio between the highest and the lowest luminance level. In a high dynamic range (HDR) image this value exceeds the capabilities of conventional display devices; as a consequence, dedicated visualization techniques are required, otherwise the result may be an image showing over- or under-exposed areas, especially if it has been acquired in critical environmental conditions. In particular, it is possible to process an HDR image in order to reduce its dynamic range without producing a significant change in the visual sensation experienced by the observer. An effective approach is to treat the image as the superposition of two components: the illumination of the scene, which must be suitably optimized, brightening the dark zones and exploiting the available system dynamics; and the image details, which must be enhanced when they are not well defined.

We have proposed a dynamic range reduction algorithm that produces high-quality results with a low computational cost and a limited number of parameters.

The algorithm belongs to the category of methods based upon the Retinex theory of vision, and the main novel contribution consists in a filter for the estimation of the illumination component that was specifically designed in order to prevent the formation of common artifacts, such as halos around the sharp edges and clipping of the highlights, that often affect methods of this kind. The advantage of the proposed method is threefold.

• Thanks to the absence of halo and clipping artifacts, the processed images exhibit great naturalness and, in most cases, the observer does not realize that the image has been processed, even when the dynamic range has been reduced significantly.

• The implementation we propose has a very low computational cost and is therefore suitable for interactive applications. In particular, the specific numerical method we chose permits parallel processing even on a standard PC, and an even better performance is possible if dedicated hardware is used.

• The low number of parameters makes the method attractive for consumer applications: the default settings provide satisfactory results for most images, and each parameter is easy to tune individually because it influences a different aspect of the processed image.
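For context, a generic single-scale Retinex-style tone mapping can be sketched as follows; note that the plain Gaussian illumination estimate used here is exactly the kind of filter that produces halos at sharp edges, which the filter proposed above is designed to avoid:

```python
import numpy as np

def retinex_tone_map(lum, sigma=15, gamma=0.5, eps=1e-6):
    """Generic single-scale Retinex-style dynamic range reduction
    (illustrative only, not the proposed algorithm): estimate the
    illumination with a Gaussian low-pass filter, compress it with a
    power law, and recombine it with the untouched detail layer."""
    # separable Gaussian blur, numpy only
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    pad = np.pad(lum, r, mode='edge')
    blur = np.apply_along_axis(lambda v: np.convolve(v, g, 'valid'), 0, pad)
    blur = np.apply_along_axis(lambda v: np.convolve(v, g, 'valid'), 1, blur)
    illum = np.maximum(blur, eps)          # estimated illumination
    detail = lum / illum                   # reflectance / detail layer
    return np.clip(illum ** gamma * detail, 0.0, None)

# synthetic HDR luminance with a 10^4:1 range
lum = np.tile(np.logspace(-2, 2, 200), (200, 1))
out = retinex_tone_map(lum)
```

Compressing only the illumination while leaving the detail layer intact is what reduces the overall dynamic range without flattening local contrast.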

Image and video processing for forensic applications

S. Carrato

Forensic science is experiencing increasing interest in all research fields: every path that allows investigators to obtain more information about the crime scene dynamics and about the culprit is pursued to solve serious crimes such as homicides, and major crimes related to national security such as terrorist attacks. We are presently considering three different applications in this area of research: analysis of latent fingerprints using synchrotron radiation, automatic shoe mark retrieval, and reduction of hot air turbulence artifacts.

The aim of the first research activity is to adopt a multi-technique approach, based on conventional and Synchrotron Radiation (SR) techniques, to study latent fingerprints from the morphological and chemical points of view, offering forensic science a comprehensive tool to be exploited in particularly complex criminal cases. In particular, we address fingerprint analysis, performing a study on latent fingerprint visualization with an SR source. The project thus involves expertise in digital signal processing, in condensed matter physics and in the forensic analysis of fingerprints, with the ultimate goal of developing image reconstruction methods that merge all the information coming from the different SR techniques in order to produce comprehensive and easily understandable evidence. We are presently analysing clean fingerprints using Fourier-transform infrared microspectroscopy (FT-IRMS), and contaminated ones using both X-ray phase contrast (XPC) and X-ray absorption fine structure analysis (XAFS). While still at an early stage, this work seems promising in helping investigators visualize and analyze latent fingerprints, thanks to the different and complementary types of information provided by these techniques: the chemical nature of the deposits for FT-IRMS, the chemical state for XAFS, and density and morphological variations for XPC.

Shoe marks found at the crime scene are invaluable for the identification of the culprit when no other piece of evidence is available. Semi-automatic and automatic systems have recently been proposed to find the make and model of the footwear that left the shoe marks. The systems proposed so far have two main drawbacks: they (i) are generally not based on rotation and translation invariant descriptors, and (ii) are tested on synthetic shoe marks, i.e. on shoeprints with added synthetic noise. We have developed a translation and rotation invariant descriptor based on the properties of the Fourier transform, and we are presently testing it using both synthetic shoe marks and shoe marks coming from real crime scenes.
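The invariance idea can be sketched as follows (an illustrative radial-profile descriptor, not necessarily the one under test): the Fourier magnitude is blind to translation, and averaging it over angles at fixed frequency radius removes the dependence on orientation:

```python
import numpy as np

def ft_rt_descriptor(img, n_r=32):
    """Translation- and rotation-invariant descriptor (illustrative):
    radial profile of the 2-D Fourier magnitude.  The magnitude discards
    (circular) translation; averaging it over angles at each frequency
    radius discards rotation."""
    mag = np.abs(np.fft.fft2(img))
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.hypot(fy, fx)                          # frequency-domain radius
    bins = np.minimum((r / 0.5 * (n_r - 1)).astype(int), n_r - 1)
    prof = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_r)
    cnt = np.bincount(bins.ravel(), minlength=n_r)
    prof = prof / np.maximum(cnt, 1)              # average magnitude per ring
    return prof / np.linalg.norm(prof)            # energy-normalized

rng = np.random.default_rng(2)
img = rng.random((64, 64))
d0 = ft_rt_descriptor(img)
d_shift = ft_rt_descriptor(np.roll(img, (7, 13), axis=(0, 1)))  # translated
d_rot = ft_rt_descriptor(np.rot90(img))                         # rotated 90 deg
```

The descriptor of the translated and of the rotated copies matches that of the original, so matching against a shoeprint database does not require pre-registering the mark.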

The third application is focussed on the reduction of hot air turbulence artifacts which may be present in video sequences shot during intelligence operations, when the subject is far (some kilometers) from the camera; in some cases the turbulence may significantly reduce the quality of the images, possibly making them completely useless. The task resembles that encountered in astronomical imaging, even if some differences apply, mostly due to the different path (typically quasi-horizontal, in this case) followed by the light rays, which implies a slightly different modelling of the turbulence. We are investigating techniques based both on the bispectrum and on the extraction of the optical flow with subsequent re-alignment of the parts of the image.

MEMS-based instrumentation

S. Carrato

Resonators based on MEMS technology are widespread in biological and medical analysis, and are also interesting for their potential integration on electronic boards (e.g., as a replacement for quartz resonators); their strengths are high sensitivity and versatility. To overcome the limitations of conventional cantilever resonators (i.e., complex fabrication techniques and poor control of the sample position), we are developing a columnar silicon μ-resonator, together with an alternative actuation of the sensors based on a non-uniform electric field.

We introduced an original device geometry which enables the application to vertical μ-pillars of a dielectric gradient driving force. The device consists of an isolated pillar flanked by three electrically and mechanically independent electrodes. To obtain electrical isolation we started from SOI (silicon on insulator) wafers, in which a 5 μm thick layer of oxide separates two conductive crystalline silicon layers.

We measured the resonance frequency by means of the optical deflection method in a system which includes a vacuum chamber (<10⁻⁶ mbar), a pulsed green laser, and a high-speed (>80 MHz) four-quadrant photodetector. The typical resonance frequency of our devices is ~7 MHz and the Q factor is above 10,000.

Further research activity will include the complete characterization of the resonators and the related actuation electronics, and experiments on the detection of μ-masses.

PERCEPTUAL QUALITY OF IMAGES

G. Ramponi

The studies in this group aim at devising techniques for the automatic, no-reference estimation of the quality of video frames. Specifically, blocking artifacts due to coding and blurred details are detected.

A method to automatically quantify blocking artefacts in video frames was presented, addressing two problems peculiar to video blockiness: the shift in position of the blocking discontinuities in predicted frames, due to encoding with motion compensation, and the degradation of the edges of moving objects. Original solutions are proposed to detect both aspects and quantify their severity, avoiding erroneous detections caused by active areas, aliasing and ringing. Experiments show that the proposed indices respond coherently to the increase in video compression as well as to the subjective perception of blockiness.
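A toy version of a no-reference blockiness measure (much simpler than the published index, and ignoring the motion-compensation issues discussed above) compares the gradients found across 8x8 block boundaries with those found elsewhere:

```python
import numpy as np

def blockiness_index(img, bs=8):
    """Toy no-reference blockiness index (illustrative, not the
    published method): mean horizontal gradient across block boundaries
    divided by the mean gradient everywhere else.  Close to 1 for
    smooth images, large when coding blocks are visible."""
    d = np.abs(np.diff(img.astype(float), axis=1))
    boundary = d[:, bs - 1::bs]            # gradients across block edges
    mask = np.ones(d.shape[1], dtype=bool)
    mask[bs - 1::bs] = False
    inside = d[:, mask]                    # gradients inside the blocks
    return boundary.mean() / (inside.mean() + 1e-12)

# smooth ramp vs. its block-quantized version (each 8x8 block
# replaced by its mean, mimicking heavy DCT compression)
ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
blocky = ramp.copy()
for j in range(0, 64, 8):
    blocky[:, j:j + 8] = ramp[:, j:j + 8].mean()
```

The ratio form makes the measure insensitive to the overall activity of the image, which is one ingredient in avoiding false detections on textured areas.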

The problem of estimating the amount of blurriness in video frames was also addressed, and a new method to assess the presence and strength of the blurring artefact is presented. The estimation is performed first through a simple global measure over the whole picture, then through a finer, block-level analysis of the sharpness of object borders. The subjective relevance of blurriness in different parts of the scene is estimated using an existing visual attention model that evaluates the perceptual relevance of each pixel. Then the scene activity, or clutter, is measured by counting the number of distinct picture regions: in an active scene, indeed, a blurred object is deemed to be less apparent. Finally, a method is devised to find human faces, the image parts at which a human viewer looks most of the time. The parameters obtained through a combination of objective measurements and subjective relevance respond coherently to changes in image quality due to different video encodings, as experimental results show. The indices enable the automatic quantification of the strength of blurriness and give some hints about its origin. In particular, new results have been achieved in the ability to automatically distinguish natural blurriness, present in the image content, from undesired blurriness introduced during encoding and processing.
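The global stage can be illustrated with a very simple sharpness measure (not the published method): the variance of the discrete Laplacian, which drops when high frequencies are removed by blurring:

```python
import numpy as np

def sharpness(img):
    """Simple global sharpness measure (illustrative): variance of the
    discrete 5-point Laplacian.  Blurring removes high frequencies and
    lowers the value."""
    im = img.astype(float)
    lap = (-4.0 * im[1:-1, 1:-1] + im[:-2, 1:-1] + im[2:, 1:-1]
           + im[1:-1, :-2] + im[1:-1, 2:])
    return lap.var()

def box_blur(img, k=5):
    """k x k box blur via 2-D cumulative sums (valid region only)."""
    c = np.cumsum(np.cumsum(img.astype(float), axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

rng = np.random.default_rng(3)
sharp = rng.random((128, 128))
blurred = box_blur(sharp)
```

A single global number of this kind cannot tell natural from coding-induced blur, which is precisely why the published method adds the block-level, attention-weighted analysis.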

RESTORATION OF ANTIQUE PHOTOGRAPHS AND BOOKS

G. Ramponi

Antique photographs and paper documents constitute an immense patrimony that is distributed among thousands of private and public libraries, museums and archives all over the world. Despite care in their conservation, they are based on fragile materials and hence are easily affected by environmental agents. Frequently the paper support presents cracks, scratches, holes and added stamps or text; moreover, chemical reactions between the paper and some microorganisms produce visible stains, and humidity and water cause blotches that change the aspect of the picture or document. By digitization and subsequent virtual restoration, the usability of the object is improved. This procedure also allows for the elaboration of an excellent virtual copy, for instance of a single extant copy of an edition, especially when it is affected by structural or chromatic pathologies. Two more advantages derive from a restoration operation of this virtual type: it allows the work to be read and otherwise observed by vast numbers of users without affecting the original document traumatically or irreversibly, and it allows the greater part of the restoration operations to be simulated on a computer, supplying instruments and materials that will help the 'official restorer' in planning future work and appraising the final result.

We report the most important algorithms to digitally detect and restore the typical damage that photographs suffer, such as foxing, water blotches, fading and glass cracks, as well as defects that books suffer, such as yellowing and foxing. We also report on the state of the art of quality evaluation methods.

In fact, many automatic restoration techniques make use of quality metrics to guide the choice of several parameters. Quality assessments based on measures of the contrast content of the image are quite common.
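As a minimal example of such a measure, the RMS contrast divides the standard deviation of the luminance by its mean:

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: standard deviation of the luminance divided by its
    mean -- a common building block of contrast-based quality metrics."""
    im = img.astype(float)
    return im.std() / (im.mean() + 1e-12)

# hypothetical test images: a low-contrast field and a
# contrast-stretched version of it (same mean)
rng = np.random.default_rng(4)
low = 0.5 + 0.05 * rng.standard_normal((64, 64))
high = 0.5 + 4.0 * (low - 0.5)
```

Inside a restoration loop, a parameter setting that raises such a measure on the cleaned image can be preferred automatically, without human inspection.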