Advanced search

Advanced search is divided into two main parts, each containing one or more groups. The main parts are the "Search for" (including) part and the "Remove from search" (excluding) part. (The excluding part may not be visible until you hit "NOT" for the first time.) You can add new groups to the including and excluding parts with the "OR" and "NOT" buttons respectively, and you can add more search options to any group through the drop-down menu on its last row.

For a result to be included in the search result, it must fit all of the parameters in at least one including group, and must not fit all of the parameters in any excluding group. This system of two main parts and their groups makes it possible to combine two (or more) distinct searches into one search result, while remaining flexible in removing results from the final list.
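The matching rule described above can be sketched in code (a minimal sketch; the group structure as lists of predicate functions is an assumption for illustration):

```python
def matches(result, including_groups, excluding_groups):
    """A result is kept if it satisfies every condition in at least one
    including group, and does not satisfy every condition in any
    excluding group. Each group is a list of predicate functions."""
    included = any(all(cond(result) for cond in group) for group in including_groups)
    excluded = any(all(cond(result) for cond in group) for group in excluding_groups)
    return included and not excluded

# Example: search for items tagged "radar" OR "iris", NOT (year before 2010)
including = [[lambda r: "radar" in r["tags"]], [lambda r: "iris" in r["tags"]]]
excluding = [[lambda r: r["year"] < 2010]]
print(matches({"tags": ["radar"], "year": 2015}, including, excluding))  # True
```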

The vehicle-to-vehicle (V2V) propagation channel has significant implications for the design and performance of novel communication protocols for vehicular ad hoc networks (VANETs). Extensive research efforts have been made to develop V2V channel models to be implemented in advanced VANET system simulators for performance evaluation. The impact of shadowing caused by other vehicles has, however, largely been neglected in most of the models, as well as in the system simulations. In this paper we present a shadow fading model targeting system simulations, based on real measurements performed in urban and highway scenarios. The measurement data is separated into three categories, line-of-sight (LOS), obstructed line-of-sight (OLOS) by vehicles, and non-line-of-sight due to buildings, with the help of video information recorded during the measurements. It is observed that vehicles obstructing the LOS induce an additional average attenuation of about 10 dB in the received signal power. An approach to incorporate the LOS/OLOS model into existing VANET simulators is also provided. Finally, system-level VANET simulation results are presented, showing the difference between the LOS/OLOS model and a channel model based on Nakagami-m fading.
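The effect of the extra OLOS attenuation can be illustrated with a simple log-distance path-loss sketch (the path-loss parameter values here are assumptions for illustration, not the paper's fitted values; only the ~10 dB OLOS offset comes from the abstract):

```python
import math

def received_power_dbm(tx_power_dbm, distance_m, state,
                       pl0_db=63.3, exponent=1.77, d0=10.0, olos_loss_db=10.0):
    """Log-distance path loss with an extra average attenuation when a
    vehicle obstructs the line of sight (OLOS). Parameter values are
    illustrative assumptions."""
    path_loss = pl0_db + 10 * exponent * math.log10(distance_m / d0)
    if state == "OLOS":
        path_loss += olos_loss_db  # ~10 dB extra loss per the abstract
    return tx_power_dbm - path_loss

p_los = received_power_dbm(23.0, 100.0, "LOS")
p_olos = received_power_dbm(23.0, 100.0, "OLOS")
print(round(p_los - p_olos, 1))  # 10.0
```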

The aim of this study was to evaluate tracking performance when an extra reference block is added to a basic block-matching method, where the two reference blocks originate from two consecutive ultrasound frames. The use of an extra reference block was evaluated for two putative benefits: (i) an increase in tracking performance while maintaining the size of the reference blocks, evaluated using in silico and phantom cine loops; (ii) a reduction in the size of the reference blocks while maintaining the tracking performance, evaluated using in vivo cine loops of the common carotid artery where the longitudinal movement of the wall was estimated. The results indicated that tracking accuracy improved (mean - 48%, p<0.005 [in silico]; mean - 43%, p<0.01 [phantom]), and there was a reduction in size of the reference blocks while maintaining tracking performance (mean - 19%, p<0.01 [in vivo]). This novel method will facilitate further exploration of the longitudinal movement of the arterial wall. (C) 2014 World Federation for Ultrasound in Medicine & Biology.
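The idea of matching against two reference blocks from consecutive frames can be sketched as follows (a simplified 1-D analogue of the 2-D ultrasound method; the sum-of-absolute-differences cost and the equal-weight combination of the two references are assumptions for illustration):

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def track(signal, ref_prev, ref_curr, search_start, search_end, block):
    """Find the offset in `signal` whose block best matches BOTH
    reference blocks (taken from two consecutive frames) by summing
    their SAD costs."""
    best_offset, best_cost = None, float("inf")
    for off in range(search_start, search_end):
        cand = signal[off:off + block]
        cost = sad(cand, ref_prev) + sad(cand, ref_curr)
        if cost < best_cost:
            best_offset, best_cost = off, cost
    return best_offset

frame = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0]
print(track(frame, [3, 7, 3], [3, 7, 3], 0, 7, 3))  # 3
```

Using two references averages out frame-to-frame noise in either reference alone, which is the putative source of the accuracy gain.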

Periocular biometrics specifically refers to the externally visible skin region of the face that surrounds the eye socket. Its utility is especially pronounced when the iris or the face cannot be properly acquired, as it is the ocular modality with the least constrained acquisition process. It is available over a wide range of distances, even under partial face occlusion (at close distance) or with low-resolution iris images (at long distance), making it very suitable for unconstrained or uncooperative scenarios. It also avoids the need for iris segmentation, an issue in difficult images. In such situations, identifying a suspect when only the periocular region is visible is one of the toughest real-world challenges in biometrics. The periocular region is so rich in identity information that the whole face can even be reconstructed from images of the periocular region alone. The technological shift to mobile devices has also resulted in many identity-sensitive applications becoming prevalent on these devices.

We present a new system for biometric recognition using periocular images, based on retinotopic sampling grids and Gabor analysis of the local power spectrum at different frequencies and orientations. A number of aspects are studied, including: 1) grid adaptation to the dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, and 4) rotation compensation between query and test images. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, allowing this step to be removed for computational efficiency. The performance is also not substantially affected if we use a grid of fixed dimensions (in certain situations it is even better), avoiding the need for accurate detection of the iris region.

We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST). We compare this approach with traditional iris segmentation systems based on the Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database with respect to a segmentation made manually by a human expert. The proposed algorithm outperforms the baseline approaches, pointing out the validity of the GST as an alternative to classic iris segmentation systems. We also detect the crossing positions between the eyelids and the outer iris boundary. Verification results using a publicly available iris recognition system based on 1D Log-Gabor wavelets are also given, showing the benefits of the eyelid removal step.

This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this task by analyzing the whole face. The periocular region is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data, including spatio-temporal information that is often not available.

Current research in iris recognition is moving towards enabling more relaxed acquisition conditions. This has effects on the quality of acquired images, with low resolution being a predominant issue. Here, we evaluate a super-resolution algorithm used to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. Contrast enhancement is used to improve the reconstruction quality, while matcher fusion has been adopted to improve iris recognition performance. We validate the system using a database of 1,872 near-infrared iris images. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions, and the fusion of the two systems pushes the EER below 5% for down-sampling factors down to an image size of only 13×13.

One of the biggest challenges in person recognition using biometric systems is the variability in the acquired data. In this paper, we evaluate the effects of an increasing time lapse between reference and test biometric data consisting of static images of handwritten signatures and texts. We use for our experiments two recognition approaches exploiting information at the global and local levels, and the BiosecurID database, containing 3,724 signature images and 532 texts of 133 individuals acquired in four acquisition sessions distributed over a 4-month time span. We report results of the recognition systems working both in verification (one-to-one) and identification (one-to-many) mode. The results show the extent of the impact that the time separation between samples under comparison has on the recognition rates, with the local approach being more robust to the time lapse than the global one. We also observe in our experiments that recognition based on handwritten texts provides higher accuracy than recognition based on signatures.

An important step in fingerprint recognition is the segmentation of the region of interest. In this paper, we present an enhanced approach for fingerprint segmentation based on the responses of eight oriented Gabor filters. The performance of the algorithm has been evaluated in terms of decision error trade-off curves of an overall verification system. Experimental results demonstrate the robustness of the proposed method.
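An oriented Gabor filter bank like the one mentioned above can be sketched as follows (kernel size and the sigma/wavelength parameters are assumptions for illustration; the actual segmentation classifies image blocks by their filter responses):

```python
import math

def gabor_kernel(theta, ksize=9, sigma=3.0, lam=6.0):
    """Real part of a 2-D Gabor filter at orientation theta (radians),
    returned as a ksize x ksize list of lists."""
    half = ksize // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
            row.append(g * math.cos(2 * math.pi * xr / lam))
        kernel.append(row)
    return kernel

# Eight equally spaced orientations, as in the approach described above
thetas = [i * math.pi / 8 for i in range(8)]
bank = [gabor_kernel(t) for t in thetas]
print(len(bank), len(bank[0]), len(bank[0][0]))  # 8 9 9
```

Foreground (ridge) blocks respond strongly to the filter aligned with the local ridge orientation, while background responds weakly to all eight.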

Multimodal biometric systems make it possible to overcome some of the problems present in unimodal systems, such as non-universality, lack of distinctiveness of the unimodal trait, noise in the acquired data, etc. Integration at the matching score level is the most common approach, due to the ease of combining the scores generated by different unimodal systems. Unfortunately, scores usually lie in application-dependent domains. In this work, we use linear logistic regression fusion, in which fused scores tend to be calibrated log-likelihood ratios and are thus independent of the application. We use for our experiments the development set of scores of the DS2 Evaluation (Access Control Scenario) of the BioSecure Multimodal Evaluation Campaign, whose objective is to compare the performance of fusion algorithms when query biometric signals originate from heterogeneous biometric devices. We compare a fusion scheme that uses linear logistic regression with a set of simple fusion rules. It is observed that the proposed fusion scheme outperforms all the simple fusion rules, with the additional advantage of the application-independent nature of the resulting fused scores.
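Linear logistic regression fusion can be sketched on toy data as follows (the synthetic scores, learning rate, and plain gradient descent are assumptions for illustration; production systems typically fit this model with a dedicated solver):

```python
import math

def train_llr_fusion(scores, labels, lr=0.1, epochs=2000):
    """Learn weights w and bias b so that fused = b + sum(w_i * s_i)
    approximates a calibrated log-likelihood ratio. Toy stochastic
    gradient descent on the logistic loss."""
    n_sys = len(scores[0])
    w, b = [0.0] * n_sys, 0.0
    for _ in range(epochs):
        for s, y in zip(scores, labels):
            z = b + sum(wi * si for wi, si in zip(w, s))
            p = 1.0 / (1.0 + math.exp(-z))   # posterior of "genuine"
            err = y - p
            b += lr * err
            w = [wi + lr * err * si for wi, si in zip(w, s)]
    return w, b

# Synthetic scores from two unimodal matchers: genuine (1) vs impostor (0)
scores = [(2.0, 1.5), (1.8, 2.2), (-1.0, -0.5), (-1.5, -2.0)]
labels = [1, 1, 0, 0]
w, b = train_llr_fusion(scores, labels)
fused = b + sum(wi * si for wi, si in zip(w, (2.0, 1.5)))
print(fused > 0)  # True: positive fused log-likelihood ratio -> genuine
```

Because the fused score behaves like a log-likelihood ratio, a threshold of 0 has a fixed probabilistic meaning regardless of the application's score ranges.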

Fingerprint databases are structured collections of fingerprint data mainly used for either evaluation or operational recognition purposes.

Fingerprint data in databases for evaluation are usually detached from the identity of corresponding individuals. These databases are publicly available for research purposes, and they usually consist of raw fingerprint images acquired with live-scan sensors or digitized from inked fingerprint impressions on paper. Databases for evaluation are the basis for research in automatic fingerprint recognition, and together with specific experimental protocols, they are the basis for a number of technology evaluations and benchmarks. This is the type of fingerprint databases further covered here.

On the other hand, fingerprint databases for operational recognition are typically proprietary; they usually incorporate personal information about the enrolled people together with the fingerprint data, and they can contain either raw fingerprint image data or some form of distinctive fingerprint descriptors such as minutiae templates. These fingerprint databases represent one of the modules in operational automated fingerprint recognition systems, and they will not be addressed here.

Since the establishment of biometrics as a specific research area in the late 1990s, the biometric community has focused its efforts in the development of accurate recognition algorithms [1]. Nowadays, biometric recognition is a mature technology that is used in many applications, offering greater security and convenience than traditional methods of personal recognition [2].

During the past few years, biometric quality measurement has become an important concern after a number of studies and technology benchmarks demonstrated how heavily the performance of biometric systems is affected by the quality of biometric signals [3]. This operationally important step has nevertheless been under-researched compared to the primary feature extraction and pattern recognition tasks [4]. One of the main challenges facing biometric technologies is performance degradation in less controlled situations, and the problem of biometric quality measurement has become even more pressing with the proliferation of portable handheld devices with at-a-distance and on-the-move acquisition capabilities. These will require robust algorithms capable of handling a range of changing characteristics [2]. Another important example is forensics, in which intrinsic operational factors further degrade recognition performance.

There are a number of factors that can affect the quality of biometric signals, and a quality measure can play numerous roles in the context of biometric systems. This section summarizes the state of the art on the biometric quality problem, giving an overall framework of the different challenges involved.

On-line signature verification for Tablet PC devices is studied. The on-line signature verification algorithm presented by the authors at the First International Signature Verification Competition (SVC 2004) is adapted to work in Tablet PC environments. An example prototype application for securing access and securing documents using this Tablet PC system is also reported. Two different commercial Tablet PCs are evaluated, including information of interest for signature verification systems such as sampling and pressure statistics. Authentication performance experiments are reported, considering both random and skilled forgeries, using a new database with over 3,000 signatures.

Fingerprint image quality heavily affects the performance of fingerprint recognition systems. This paper reviews existing approaches for fingerprint image quality computation. We also implement, test and compare a selection of them using the MCYT database, which includes 9,000 fingerprint images. Experimental results show that most of the algorithms behave similarly.

Biometric methods based on iris images are believed to allow very high accuracy, and there has been an explosion of interest in iris biometrics in recent years. In this paper, we use the Scale Invariant Feature Transform (SIFT) for recognition using iris images. In contrast to traditional iris recognition systems, the SIFT approach does not rely on the transformation of the iris pattern to polar coordinates or on highly accurate segmentation, allowing less constrained image acquisition conditions. We extract characteristic SIFT feature points in scale space and perform matching based on the texture information around the feature points using the SIFT operator. Experiments are done using the BioSec multimodal database, which includes 3,200 iris images from 200 individuals acquired in two different sessions. We contribute with the analysis of the influence of different SIFT parameters on the recognition performance. We also show the complementarity between the SIFT approach and a popular matching approach based on transformation to polar coordinates and Log-Gabor wavelets. The combination of the two approaches achieves significantly better performance than either of the individual schemes, with a performance improvement of 24% in the Equal Error Rate.
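The descriptor-matching step of a SIFT-style pipeline is commonly done with Lowe's ratio test, which can be sketched as follows (toy 2-D descriptors and the 0.8 ratio threshold are assumptions for illustration; real SIFT descriptors are 128-D and the abstract does not specify the authors' matching criterion):

```python
import math

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in
    desc_b, keeping the match only if the nearest neighbour is clearly
    closer than the second nearest (Lowe's ratio test)."""
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted((dist(d, e), j) for j, e in enumerate(desc_b))
        if len(ranked) > 1 and ranked[0][0] < ratio * ranked[1][0]:
            matches.append((i, ranked[0][1]))
    return matches

a = [(0.0, 1.0), (5.0, 5.0)]
b = [(0.1, 1.0), (9.0, 9.0), (5.1, 5.0)]
print(ratio_match(a, b))  # [(0, 0), (1, 2)]
```

The ratio test discards ambiguous matches, which matters for iris texture where many local patches look alike.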

Automotive radar is an emerging field of research and development. Technological advancements in this field will improve safety for vehicles, pedestrians and bicyclists, and enable the development of autonomous vehicles. The use of automotive radar on cars and roads is expanding in order to reduce collisions and accidents. Automotive radar developers face a problem when testing their radar sensors in the street, since there are many interfering signals, noise sources and unpredictable situations. This thesis provides part of a solution to this problem by designing a device that can emulate targets at different speeds. The device helps developers test their radar sensors inside an anechoic chamber, which provides accurate control of the environmental conditions. This report shows, step by step, how to build the measurement setup, which emulates pedestrian and vehicle speeds as seen in the street by acting as a Doppler emulator based on a rotating wheel, for millimetre-wave FMCW radar. A linear-speed system needs a large space for testing, but the rotating wheel allows the developer to test the radar sensor in a small area. The report begins with the wheel design specifications and the relation between the rotational speed (RPM) of the wheel and the Doppler frequency; the Doppler frequency is changed by varying the speed of the wheel. The control and power circuitry was carefully designed to control the wheel speed accurately, and all parts of the measurement setup were assembled in one box. Signal processing was done in MATLAB to measure the Doppler frequency using a millimetre-wave FMCW radar sensor. The measurement setup was tested in the anechoic chamber at different speeds; both the manual and automatic tests show that the different wheel speeds can be measured with high accuracy.
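The RPM-to-Doppler relation underlying the emulator can be sketched numerically (the 77 GHz carrier frequency and 0.1 m wheel radius are assumptions for illustration; the thesis only specifies millimetre-wave FMCW radar):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def doppler_from_rpm(rpm, wheel_radius_m, radar_freq_hz=77e9):
    """Doppler shift produced by a point on the wheel rim moving at its
    tangential speed: f_d = 2 * v / lambda for a monostatic radar."""
    v = 2 * math.pi * wheel_radius_m * rpm / 60.0  # rim speed, m/s
    lam = C / radar_freq_hz                        # carrier wavelength, m
    return 2 * v / lam

# RPM giving a pedestrian-like rim speed of 1.5 m/s on a 0.1 m radius wheel
rpm = 1.5 * 60 / (2 * math.pi * 0.1)
print(round(doppler_from_rpm(rpm, 0.1)))  # 771
```

Because the Doppler shift scales linearly with rim speed, sweeping the motor RPM sweeps the emulated target speed, which is what allows a small rotating wheel to stand in for a long linear track.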