Carlos Fernández

PhD Student

University of Alcalá

Bio

After obtaining my BSc in Computer Science in 2008, I started working for the Robesafe Research Group at the University of Alcalá in the field of vision-based ADAS (Advanced Driver Assistance Systems), under the supervision of Dr. Miguel Ángel Sotelo.

In 2010 I obtained my MSc in Advanced Electronics Systems and Intelligent Systems, and I also started working for the ISIS Research Group at the University of Alcalá, again under the supervision of Dr. Miguel Ángel Sotelo.

Since 2011 I have been a reviewer for several international conferences (IEEE Intelligent Vehicles Symposium, IEEE Conference on Intelligent Transportation Systems, European Conference on Computer Vision) and for international journals indexed in the JCR (IEEE Transactions on Intelligent Transportation Systems, IEEE Intelligent Transportation Systems Magazine).

In 2012 I worked for the AUTOPIA Research Group at CSIC in the field of autonomous vehicles.
I was involved in a demonstration of an autonomous mini-bus with capacity for 14 passengers during the IEEE Intelligent Vehicles Symposium.

In December 2013 I was awarded a four-year grant (two years as a grant plus two as a contract) funded by the Spanish Ministry of Economy and Competitiveness to pursue a PhD, so I started my doctoral studies under the supervision of Dr. Miguel Ángel Sotelo and Dr. David Fernández Llorca.
My thesis focuses on environment perception using computer vision and LIDAR for autonomous vehicles in urban scenarios.

In September 2016 I obtained my PhD with the highest distinction, "Cum Laude".

Academic Positions

2016 – Present

Postdoctoral Researcher

INVETT
Research Group (Intelligent Vehicles and Traffic Technologies), University of Alcalá

2013 – 2016

PhD Grant

ISIS
Research Group (Innovative Sensors and Intelligent Systems), University of Alcalá

A recent trend in the car industry is the installation of ADAS and different types of sensors in today's cars.
I think this is a promising field of research, but I am also interested in augmented reality and computer vision applications for sports.

Interests

Computer Vision

Machine Learning

3D Point Clouds

ADAS

Autonomous Vehicles

Intelligent Transportation Systems

Research Projects

DRIVERTIVE

Cooperative Autonomous Vehicles.

DRIVERTIVE is a driverless cooperative vehicle developed at the University of Alcalá, intended
for autonomous operation in urban areas and on highways. The DRIVERTIVE team won the Prize for the
Best Team with Full Automation in the Grand Cooperative Driving Challenge (GCDC 2016), held in Helmond, The Netherlands, on 28-29 May 2016.
(2015-2017).

Assistive Pedestrian Crossing

Adaptive response depending on users' disabilities.

We define an Assistive Pedestrian Crossing as a pedestrian crossing able to interact
with users with disabilities and provide an adaptive response to increase, maintain or improve
their functional capabilities while crossing. Thus, the infrastructure should be able to locate
the pedestrians with special needs as well as to identify their specific disability.
User location is obtained by means of a stereo-based pedestrian detection system.
Disability identification is proposed by means of an RFID-based anonymous procedure in which
pedestrians are only required to wear a portable, passive RFID tag. (2014-2016).

Automatic vehicle model recognition

Develop an automatic vehicle model recognition system to improve current traffic surveillance applications.

Automatic vehicle model detection is a still-unresolved
task, and the need for a full vehicle identification approach
is becoming more relevant due to the increased demand
for effectiveness and security. Current traffic surveillance
applications, speed and access control platforms, automatic
tollgate systems, etc., rely on the use of License Plate
Recognition (LPR) systems that provide a unique but weak
identifier for each detected vehicle: the license plate. A
more detailed description of the different parameters of the
vehicle would enhance current vehicle identification systems.
Besides the license plate, the vehicle colour, plate colour,
car make and, finally, car model
are representative variables of the vehicle. (2013-2015).

SDK for FLIR Camera IRXCAM

Development of a Software Development Kit for a GigE Vision camera.

Company: Orbital Aerospace. Description: Development of an API in C++ to connect, configure and capture images
from a FLIR camera using GigE Vision protocol. (2013).

ON Demand Autonomous Fleet in dedicated areas (ONDA-F)

Develop fully-autonomous vehicles able to drive safely in dedicated areas to move persons or
valuable items under real conditions.

The main goal of this project is to design, implement and test an intelligent transportation system able to
manage a fleet of autonomous vehicles in a dedicated area to solve transportation needs on demand.
This goal will lead us to solve important issues in the global coordination of a fleet of vehicles and in
the area of autonomous vehicles, such as joining and leaving traffic in roundabouts, sensor fusion for
improved positioning, and positioning backup systems for greater reliability. It will also be necessary to develop
other functionalities to detect the system users: pedestrians who want to be transported, accident victims
who need an ambulance, or even expensive items that need to be moved.
The specific goals are:

1. Develop a supervisor control system capable of integrating both infrastructure and vehicle
information in order to manage the traffic in a dedicated area using wireless communications.

2. Develop fully-autonomous vehicles able to drive safely in dedicated areas to move persons or
valuable items under real conditions.

3. Develop an infrastructure sensor and actuation (traffic lights) network able to give the needed
data and operative control over the manually driven cars to the supervisor control system.
In summary, the project works at the frontier of knowledge in the intelligent transportation systems field;
thanks to the available facilities, we aim to run an open, public demonstration event to show the results
of the project and the capacity of Spanish research institutions to go beyond the state of the art in
Intelligent Transportation Systems. (2012-2016).

2D/3D railway geometry monitoring.

2D/3D railway geometry monitoring for imperfection detection.

Company: Euroconsult. Description: 2D/3D railway geometry monitoring for imperfection detection.
For this project, a high-resolution LIDAR, cameras and an IMU are installed on a train, and the geometry
of the tunnel and rails is processed using the Point Cloud Library and OpenCV. (2012-2013).

Obstacle detection for autonomous navigation

The goal is to develop an obstacle detection algorithm for autonomous navigation.

The driverless public transportation
systems currently operating in some airports
and train stations are restricted to dedicated roads and
have serious difficulty dynamically avoiding obstacles
in their trajectory. In this project, an electric autonomous
mini-bus was used during the
demonstration event of the 2012 IEEE Intelligent Vehicles
Symposium, which took place in Alcalá de Henares (Spain).
The demonstration consisted of a route 725 metres long
defined by a list of latitude-longitude points (waypoints).
The mini-bus was capable of driving autonomously from
one waypoint to another using a GPS sensor. Furthermore,
the vehicle was equipped with a multi-beam Laser Imaging
Detection and Ranging (LIDAR) sensor for surrounding
reconstruction and obstacle detection. When an obstacle
was detected in the planned path, the route was
modified in order to avoid the obstacle and continue to
the end of the mission. On the demonstration day,
a total of 196 attendees had the opportunity to get a ride in
the vehicles. A total of 28 laps were successfully completed
in full autonomous mode on a private circuit at the
National Institute for Aerospace Research (INTA), Spain.
In other words, the system completed 20.3 km of driverless
navigation and obstacle avoidance. Funded by CSIC. (2012).

PROPINA

Robotic research platform to develop high-level applications.

In this project we have built a robotic research platform to develop high-level applications using the
robotics development platform Robot Operating System (ROS).
It is a differential traction platform equipped with odometry and distance sensors (ultrasound and infrared).
It is designed to work indoors. The embedded cards run ROS modules to control the motors and to perceive the information from sensors.
In this way the perception is completely transparent to the remote control station.
The modular design was chosen to increase functionality and autonomy.
In addition, we designed a 3D model for the Gazebo simulator that can be used as a preliminary testbed before building the actual application. (2012-2013).
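The odometry on such a differential-traction platform boils down to integrating wheel-encoder ticks into a pose estimate. As a rough sketch (the wheel radius, encoder resolution and wheel base below are invented values, not the platform's real parameters):

```python
import math

# Hypothetical platform parameters (illustrative, not the real PROPINA values).
WHEEL_RADIUS = 0.05    # m
TICKS_PER_REV = 1024   # encoder ticks per wheel revolution
WHEEL_BASE = 0.30      # m, distance between the two driven wheels

def update_pose(x, y, theta, ticks_left, ticks_right):
    """Dead-reckoning pose update for a differential-drive robot from
    one interval of wheel-encoder ticks."""
    dist_per_tick = 2.0 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = ticks_left * dist_per_tick
    d_right = ticks_right * dist_per_tick
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    # Integrate along the arc, using the mid-interval heading.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Equal tick counts on both wheels -> straight-line motion of exactly
# one wheel circumference (2 * pi * 0.05 m).
px, py, pth = update_pose(0.0, 0.0, 0.0, 1024, 1024)
```

In a real ROS setup this computation would live in the motor-control node and be published as an odometry message for the remote control station.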

VISETRAF

Visual system for location and recognition of traffic signs using radio frequency.

One of the challenges still open in traffic sign recognition
is to discard detected signs that do not pertain to the host
road. The position of each detected traffic sign is obtained
from a stereo pair of cameras, and signs whose positions are far
from the vehicle's lane are discarded. However, there are
some scenarios where the 3D relative position is not enough
for discarding signs that do not apply to the host road; in
those cases, information from a vehicle-to-infrastructure (V2I)
communication system based on wireless technology is proposed
as a solution, supporting the traffic sign recognition system.
Financed by the Regional Government of Madrid. (2011).

RoboCity2030 and RoboCity2030II-CM

The objective of Robocity2030-II is to develop an innovative integration of Service Robot applications, in an effort to increase the quality of life of citizens in metropolitan areas.

The objective of Robocity2030-II is to develop an innovative integration of Service Robot applications,
in an effort to increase the quality of life of citizens in metropolitan areas.
This means that the human is now the centre of things and the Service Robots are developed from,
for and to the benefit of humans. To do so, the project brings together and coordinates the research of six
leading Service Robot groups in the Community of Madrid, with around 70 R&D projects in robotics in the past five years,
nearly a third of which are European. (2010-2013).

Blind Spot Warning System (BSW)

Real-time vision based blind spot warning system for daytime and nighttime conditions.

This project produced a real-time vision-based blind spot warning system specially designed for
motorcycle detection in both daytime and nighttime conditions. Motorcycles are fast-moving, small vehicles that
frequently remain unseen to other drivers, mainly in the blind-spot area. In fact, although in recent years the number of fatal
accidents has decreased overall, motorcycle accidents have increased by 20%. The risks are primarily linked to the inner
characteristics of this mode of travel: motorcycles are fast moving vehicles, light, unstable and fragile. These features make
the motorcycle detection problem a difficult but challenging task to be solved from the computer vision point of view. In this
project, we developed a daytime and nighttime vision-based motorcycle and car detection system in the blind spot area using a
single camera installed on the side mirror. On the one hand, daytime vehicle detection is carried out using optical flow features
and Support Vector Machine-based (SVM) classification. On the other hand, nighttime vehicle detection is based on head
lights detection. The proposed system warns the driver about the presence of vehicles in the blind area, including information
about the position and the type of vehicle. Extensive experiments have been carried out in 172 minutes of sequences recorded
in real traffic scenarios in both daytime and nighttime conditions, in the context of the Valencia MotoGP Grand Prix 2009. (2010).

GUIADE

Automatic guidance of public transit vehicles by multimodal perception for improved traffic efficiency.

GUIADE's aim is the development of an autonomous public transport fleet based on a multi-modal perception of the environment,
using information collected by the vehicles from the environment as well as from the infrastructure.
Financed by the Spanish Ministry of Science and Innovation (MICINN). (2008-2011).

RFID tags for traffic sign monitoring

Company: 3M. Description: This project is a complement of VISUALISE.
The goal is to install a RFID antenna in the VISUALISE vehicle and read the traffic sign RFID tag.
Then, a matching process is applied to add the retroreflection measurement from VISUALISE to the traffic sign's history.
The information for every traffic sign (position, installation date, type of reflective sheeting, etc.)
is stored in a database to improve the quality of service of the road maintenance company. (2008-2010).

VISUALISE

Automatic inspection system for traffic and overhead signs.

Company: Euroconsult. Description: VISUALISE is a high-performance unit for the dynamic auscultation of traffic signs on roads.
It allows the condition of traffic signs to be determined automatically with regard to night visibility.
With this equipment, sign retroreflection measurements can be carried out at regular traffic speed.
The main technological innovation of this equipment is that test data acquisition is performed dynamically:
the system is installed in a vehicle that circulates at regular traffic speed.
Valid data can be obtained while travelling at speeds of up to 110 km/h. (2008-2010).

Publications

Disclaimer: This material is presented to ensure timely dissemination of scholarly and technical work.
Copyright and all rights therein are retained by authors or by other copyright holders.
All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.
In most cases, these works may not be reposted without the explicit permission of the copyright holder.

IEEE material: Personal use of this material is permitted.
However, permission to reprint/republish this material for advertising or promotional purposes or for
creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Abstract

Stereo-based object detection systems can be
greatly enhanced thanks to the use of passive UHF RFID
technology. By combining tag localization with its identification
capability, new features can be associated with each detected
object, extending the set of potential applications. The main
problem consists in the association between RFID tags and
objects due to the intrinsic limitations of RSSI-based localization
approaches. In this paper, a new directional RSSI-distance
model is proposed, taking into account the angle
between the object and the antenna. The parameters of the
model are automatically obtained by means of a stereo-RSSI
automatic calibration process. A robust data association method
is presented to deal with complex outdoor scenarios in medium
sized areas with a measurement range up to 15m. The proposed
approach is validated in crosswalks with pedestrians wearing
portable RFID passive tags.
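Although the model in the paper is more involved, the idea of a directional RSSI-distance model can be illustrated with a log-distance path-loss law plus an angular gain term, fitted by least squares on synthetic calibration data (all parameter values here are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration set: (distance m, angle rad) -> RSSI dBm, following a
# hypothetical directional model: rssi = p0 - 10*n*log10(d) + g*cos(angle).
p0_true, n_true, g_true = -40.0, 2.0, 3.0
d = rng.uniform(1.0, 15.0, 200)
a = rng.uniform(-np.pi / 3, np.pi / 3, 200)
rssi = p0_true - 10 * n_true * np.log10(d) + g_true * np.cos(a)
rssi += rng.normal(0.0, 0.5, 200)                    # measurement noise

# The model is linear in (p0, n, g), so calibration is ordinary least squares.
A = np.column_stack([np.ones_like(d), -10 * np.log10(d), np.cos(a)])
(p0, n, g), *_ = np.linalg.lstsq(A, rssi, rcond=None)

def predict_distance(rssi_meas, angle):
    """Invert the fitted model to estimate range from RSSI and angle."""
    return 10.0 ** ((p0 + g * np.cos(angle) - rssi_meas) / (10.0 * n))
```

In the paper the model parameters come from a stereo-RSSI automatic calibration; here the synthetic generator simply plays that role.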

Comparison between UHF RFID and BLE for Stereo-Based Tag Association in Outdoor Scenarios.

Abstract

Stereo-based object detection systems can be greatly enhanced thanks to the use of wireless identification technology. By combining tag localization with its identification capability, new features can be associated with each detected object, extending the set of potential applications. The main problem consists in the association between wireless tags and objects due to the intrinsic limitations of Received Signal Strength Indicator-based localization approaches. In this paper, an experimental comparison between two specific technologies is presented: passive UHF Radio Frequency IDentification (RFID) and Bluetooth Low Energy (BLE). An automatic calibration process is used to model the relationship between RSSI and distance values. A robust data association method is presented to deal with complex outdoor scenarios in medium sized areas with a measurement range up to 15m. The proposed approach is validated in crosswalks with pedestrians wearing portable RFID passive tags and active BLE beacons.

Abstract

Assistive technology usually refers to systems used to increase, maintain, or improve functional capabilities of individuals with disabilities. This idea is here extended to transportation infrastructures, using pedestrian crossings as a specific case study. We define an Assistive Pedestrian Crossing as a pedestrian crossing able to interact with users with disabilities and provide an adaptive response to increase, maintain or improve their functional capabilities while crossing. Thus, the infrastructure should be able to locate the pedestrians with special needs as well as to identify their specific disability. In this paper, user location is obtained by means of a stereo-based pedestrian detection system. Disability identification is proposed by means of an RFID-based anonymous procedure in which pedestrians are only required to wear a portable and passive RFID tag. Global nearest neighbor is applied to solve the data association between stereo targets and RFID measurements. The proposed assistive technology is validated in a real crosswalk, including different complex scenarios with multiple RFID tags.
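The global-nearest-neighbor association step can be sketched as a minimal-total-cost assignment with gating; for the handful of tags seen at a crosswalk, brute force over permutations is enough (the coordinates below are invented):

```python
from itertools import permutations
import math

def gnn_associate(targets, tags, gate=5.0):
    """Global nearest neighbor: the assignment of tags to targets that
    minimises total distance, with a gating threshold in metres. Brute
    force is fine for the handful of tags seen at a crosswalk."""
    best, best_cost = None, math.inf
    for perm in permutations(range(len(targets)), len(tags)):
        cost, feasible = 0.0, True
        for tag_idx, tgt_idx in enumerate(perm):
            dx = targets[tgt_idx][0] - tags[tag_idx][0]
            dy = targets[tgt_idx][1] - tags[tag_idx][1]
            pair_cost = math.hypot(dx, dy)
            if pair_cost > gate:               # gated out: pairing infeasible
                feasible = False
                break
            cost += pair_cost
        if feasible and cost < best_cost:
            best, best_cost = perm, cost
    return best   # tag index -> target index, or None if nothing feasible

# Two stereo pedestrian positions and two RFID position estimates (metres).
targets = [(1.0, 4.0), (3.0, 9.0)]
tags = [(3.2, 8.5), (0.8, 4.3)]
assignment = gnn_associate(targets, tags)
```

A production system would use the Hungarian algorithm for larger problems, but the optimisation objective is the same.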

A Comparative Analysis of Decision Trees Based Classifiers for Road Detection in Urban Environments.

Abstract

In this paper, a comparative analysis of decision tree-based classifiers is presented. Two different approaches are compared: the first is a specific classifier for each type of scene; the second is a general classifier for every type of scene. Both approaches are trained with a set of features covering texture, color, shadows, vegetation and other 2D cues. In addition to the 2D features, 3D features are taken into account, such as normals, curvatures and heights with respect to the ground plane. Several tests are run on five different classifiers to find the best parameter configuration and to obtain the importance of each feature in the final classification. In order to compare the results of this paper with the state of the art, the system has been tested on the public KITTI Benchmark dataset.
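The building block of all the compared tree-based classifiers is the single-feature split. A toy sketch with invented 2D/3D features (colour saturation and height above the ground plane) shows how even a depth-1 tree can separate road from non-road on such data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy per-point feature vectors: [colour saturation, height above the
# ground plane (m)] -- invented stand-ins for the paper's 2D/3D features.
road = np.column_stack([rng.normal(0.3, 0.1, 100), rng.normal(0.02, 0.02, 100)])
other = np.column_stack([rng.normal(0.5, 0.2, 100), rng.normal(0.8, 0.2, 100)])
X = np.vstack([road, other])
y = np.array([0] * 100 + [1] * 100)              # 0 = road, 1 = not road

def fit_stump(X, y):
    """Depth-1 decision tree: exhaustively pick the (feature, threshold)
    pair with the lowest training error -- the building block that the
    compared tree-based classifiers stack and combine."""
    best = (0, 0.0, 1.0)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            err = float(np.mean((X[:, f] > t).astype(int) != y))
            if err < best[2]:
                best = (f, t, err)
    return best

feat, thresh, err = fit_stump(X, y)
# The 3D height feature should dominate: one threshold separates the road
# surface from raised structure almost perfectly.
```

This is also the sense in which feature importance is read off a trained tree: the features chosen for the most discriminative splits matter most.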

Abstract

This paper addresses the problem of curb detection for ADAS or autonomous navigation in urban scenarios. The algorithm is based on clouds of 3D points. It is evaluated using 3D information from a pair of stereo cameras and a LIDAR. Curbs are detected based on road surface curvature. The curvature estimation requires a dense point cloud, so the density of the LIDAR cloud is augmented using Iterative Closest Point (ICP) over the previous scans. The proposed algorithm can deal with curbs of different curvatures and heights, from as low as 3 cm, in a range up to 20 m (whenever the curbs are connected in the curvature image). The curb parameters are modeled using straight lines and compared to the ground truth using the lateral error as the key performance indicator. The ground-truth sequences were manually labeled on urban images from the KITTI dataset and made publicly available to the scientific community.
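The curvature-based detection idea can be illustrated on a synthetic lateral road profile: smooth the height profile, take the magnitude of its gradient as a curvature proxy, and flag the sharp bend at a 3 cm step. This is a deliberately simplified 1D schematic, not the paper's algorithm:

```python
import numpy as np

# Synthetic lateral road profile: flat road, then a 3 cm curb step at y = 2 m.
y = np.linspace(0.0, 4.0, 400)                       # lateral distance (m)
z = np.where(y < 2.0, 0.0, 0.03)                     # height (m): 3 cm curb
z = z + np.random.default_rng(2).normal(0.0, 0.002, y.size)  # sensor noise

# Smooth, then use the absolute height gradient as a curvature proxy:
# curb candidates are where the road surface bends sharply.
kernel = np.ones(15) / 15.0
z_smooth = np.convolve(z, kernel, mode="same")
slope = np.abs(np.gradient(z_smooth, y))
# Ignore the convolution edge effects at both ends of the profile.
curb_idx = 20 + int(np.argmax(slope[20:-20]))
curb_position = float(y[curb_idx])                   # expected near 2.0 m
```

Note how the smoothing step mirrors the paper's need for a dense cloud: with sparse or noisy height samples, a 3 cm step is invisible without aggregation.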

Abstract

In this paper, a stereo- and infrastructure-based pedestrian detection system is presented to support
infrastructure-based pedestrian safety measures as well as to assess pedestrian behaviour modelling methods.
Pedestrian detection is performed by region growing over temporal 3D density maps, which are obtained
by means of stereo reconstruction and background modelling. 3D tracking makes it possible to correlate the pedestrian
position with the different pedestrian crossing regions (waiting and crossing areas). As an example of an
infrastructure safety system, a blinking luminous traffic sign is switched on to warn drivers about the presence
of pedestrians in the waiting and crossing regions. The detection system provides accurate results
even in nighttime conditions: an overall detection rate of 97.43% with one false alarm every 10 minutes.
In addition, the proposed approach is validated for use in pedestrian behaviour modelling, applying
logistic regression to model the probability that a pedestrian will cross or wait. Some of the predictor variables are
automatically obtained by the pedestrian detection system. Other variables still need to be labelled
using manual supervision. A sequential feature selection method showed that time-to-collision and pedestrian
waiting time (both variables automatically collected) are the most significant parameters when predicting the
pedestrian intent. An overall predictive accuracy of 93.10% is obtained, which clearly validates the proposed
methodology.
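The logistic-regression step can be sketched on synthetic data with the two predictors the study found most significant, time-to-collision and waiting time. All coefficients and data below are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic predictors per pedestrian: time-to-collision (s) and waiting
# time (s). The "true" behaviour model below is invented: pedestrians tend
# to cross when the gap is large and they have not been waiting long.
n = 400
ttc = rng.uniform(0.5, 10.0, n)
wait = rng.uniform(0.0, 20.0, n)
true_logit = 0.9 * ttc - 0.4 * wait - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Fit logistic regression by plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), ttc, wait])
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.01 * X.T @ (y - p) / n

p = 1.0 / (1.0 + np.exp(-X @ w))
accuracy = float(np.mean((p > 0.5) == (y == 1.0)))
```

The fitted coefficient signs are what a behaviour model reads off: a positive weight on time-to-collision and a negative weight on waiting time, in this invented setup.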

Abstract

This paper addresses a framework for road curb
and lanes detection in the context of urban autonomous driving,
with particular emphasis on unmarked roads. Based on a 3D
point cloud, the 3D parameters of several curb models are
computed using curvature features and Conditional Random
Fields (CRF). Information regarding obstacles is also computed
based on the 3D point cloud, including vehicles and urban
elements such as lampposts, fences, walls, etc. In addition,
a gray-scale image provides the input for computing lane
markings whenever they are present and visible in the scene. A
high level decision-making system yields accurate information
regarding the number and location of drivable lanes, based
on curbs, lane markings, and obstacles. Our algorithm can
deal with curbs of different curvature and heights, from as
low as 3 cm, in a range up to 20 m. The system has been
successfully tested on images from the KITTI dataset in real
traffic conditions, containing different numbers of lanes, marked
and unmarked roads, as well as curbs of quite different heights.
Although preliminary results are promising, further research
is needed in order to deal with intersection scenes where no
curbs are present and lane markings are absent or misleading.

Abstract

This paper describes a real-time vision-based blind spot warning system that has been specially designed for
motorcycle detection in both daytime and nighttime conditions. Motorcycles are fast-moving, small vehicles that
frequently remain unseen to other drivers, mainly in the blind-spot area. In fact, although in recent years the number of fatal
accidents has decreased overall, motorcycle accidents have increased by 20%. The risks are primarily linked to the inner
characteristics of this mode of travel: motorcycles are fast moving vehicles, light, unstable and fragile. These features make
the motorcycle detection problem a difficult but challenging task to be solved from the computer vision point of view. In this
paper we present a daytime and nighttime vision-based motorcycle and car detection system in the blind spot area using a
single camera installed on the side mirror. On the one hand, daytime vehicle detection is carried out using optical flow features
and Support Vector Machine-based (SVM) classification. On the other hand, nighttime vehicle detection is based on head
lights detection. The proposed system warns the driver about the presence of vehicles in the blind area, including information
about the position and the type of vehicle. Extensive experiments have been carried out in 172 minutes of sequences recorded
in real traffic scenarios in both daytime and nighttime conditions, in the context of the Valencia MotoGP Grand Prix 2009.
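The daytime pipeline (optical-flow features fed to an SVM) can be sketched schematically. The features below are invented per-patch summaries, and the classifier is trained with a Pegasos-style sub-gradient method rather than the original implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Schematic optical-flow features per image patch in the blind-spot area:
# [mean flow magnitude (px/frame), fraction of flow vectors pointing forward].
# Overtaking vehicles produce strong, coherent flow; background does not.
vehicle = np.column_stack([rng.normal(6.0, 1.5, 150), rng.normal(0.8, 0.1, 150)])
background = np.column_stack([rng.normal(1.0, 0.8, 150), rng.normal(0.4, 0.2, 150)])
X = np.column_stack([np.vstack([vehicle, background]), np.ones(300)])  # bias
y = np.array([1.0] * 150 + [-1.0] * 150)

# Linear SVM trained with a Pegasos-style stochastic sub-gradient method.
w, lam = np.zeros(3), 0.01
for t in range(1, 20001):
    i = rng.integers(300)
    eta = 1.0 / (lam * t)              # decaying step size
    margin = y[i] * (X[i] @ w)
    w *= 1.0 - eta * lam               # shrinkage from the L2 regulariser
    if margin < 1.0:                   # hinge loss active: move toward margin
        w += eta * y[i] * X[i]

train_acc = float(np.mean(np.sign(X @ w) == y))
```

The nighttime branch replaces these flow features with headlight blob detection, but the decision stage is analogous.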

Autonomous Navigation and Obstacle Avoidance of a micro-bus

Abstract

At present, the topic of automated vehicles
is one of the most promising research areas in the
field of Intelligent Transportation Systems (ITS). The
use of automated vehicles for public transportation
also contributes to reductions in congestion levels and
to improvements in traffic flow. Moreover, electrical
public autonomous vehicles are environmentally friendly,
provide better air quality and contribute to energy
conservation. The driverless public transportation
systems, which are at present operating in some airports
and train stations, are restricted to dedicated roads and
have serious difficulty dynamically avoiding obstacles
in the trajectory. In this paper, an electric autonomous
mini-bus is presented. All datasets used in this article
were collected during the experiments carried out in the
demonstration event of the 2012 IEEE Intelligent Vehicles
Symposium that took place in Alcalá de Henares (Spain).
The demonstration consisted of a route 725 metres long
containing a list of latitude-longitude points (waypoints).
The mini-bus was capable of driving autonomously from
one waypoint to another using a GPS sensor. Furthermore,
the vehicle is provided with a multi-beam Laser Imaging
Detection and Ranging (LIDAR) sensor for surrounding
reconstruction and obstacle detection. When an obstacle
is detected in the planned path, the planned route is
modified in order to avoid the obstacle and continue its
way to the end of the mission. On the demonstration day,
a total of 196 attendees had the opportunity to get a ride on
the vehicles. A total of 28 laps were successfully completed
in full autonomous mode in a private circuit located in the
National Institute for Aerospace Research (INTA), Spain.
In other words, the system completed 20.3 km of driverless
navigation and obstacle avoidance.
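The waypoint-following behaviour amounts to converting latitude-longitude pairs into local metric coordinates and steering toward the bearing of the next waypoint. A minimal sketch (the coordinates are hypothetical, and the 2 m switching radius is an assumption, not the mini-bus's actual parameter):

```python
import math

EARTH_R = 6371000.0  # mean Earth radius (m)

def to_local_xy(lat, lon, lat0, lon0):
    """Equirectangular projection around a reference point; plenty
    accurate over a 725 m route."""
    x = math.radians(lon - lon0) * EARTH_R * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_R
    return x, y

def distance_and_bearing(vehicle, waypoint):
    """Straight-line distance (m) and bearing (deg, clockwise from north)
    from the vehicle's GPS fix to the next waypoint."""
    x, y = to_local_xy(waypoint[0], waypoint[1], vehicle[0], vehicle[1])
    return math.hypot(x, y), math.degrees(math.atan2(x, y)) % 360.0

# Hypothetical fix near Alcalá de Henares; the waypoint is ~111 m due north.
vehicle = (40.4820, -3.3635)
waypoint = (40.4830, -3.3635)
dist, bearing = distance_and_bearing(vehicle, waypoint)
# A simple mission executor steers toward `bearing` and switches to the
# next waypoint once `dist` drops below a threshold (say 2 m).
```

Obstacle avoidance then inserts intermediate waypoints around anything the LIDAR flags on the planned path.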

Abstract

This paper describes an automatic system that assesses the thermal
insulation properties of the different components of the building envelope by combining laser data with thermal images. Sensor data is
by combining laser data with thermal images. Sensor data is
obtained from a moving vehicle equipped with a GPS sensor. Range
data is integrated to obtain the 3D structure of the building facade, and
combined with thermal images to separate components such as walls,
window frames and glass. Thermal leakage is detected by finding
irregularities in the thermal measurements of each building component
separately (window glass, window frames and walls).

Intelligent Automatic Overtaking System using Vision for Vehicle Detection

Abstract

There is clear evidence that investment in intelligent transportation system technologies brings major
social and economic benefits. Technological advances in the area of automatic systems in particular
are becoming vital for the reduction of road deaths. We here describe our approach to the automation of
one of the riskiest autonomous manœuvres involving vehicles – overtaking. The approach is based on a stereo
vision system responsible for detecting any preceding vehicle and triggering the autonomous overtaking
manœuvre. To this end, a fuzzy-logic based controller was developed to emulate how humans
overtake. Its input is information from the vision system and from a positioning-based system consisting
of a differential global positioning system (DGPS) and an inertial measurement unit (IMU). Its output is
the generation of action on the vehicle’s actuators, i.e., the steering wheel and throttle and brake pedals.
The system has been incorporated into a commercial Citroën car and tested on the private driving circuit
at the facilities of our research center, CAR, with different preceding vehicles – a motorbike, car, and truck
– with encouraging results.

Abstract

In this paper, a real-time free space detection
system is presented using a medium-cost lidar sensor and a
low-cost camera. The extrinsic relationship between the two sensors is
obtained after an off-line calibration process. The lidar provides
measurements corresponding to 4 horizontal layers with a
vertical resolution of 3.2 degrees. These measurements are
integrated in time according to the relative motion of the vehicle
between consecutive laser scans. A special case is considered
here for Spanish speed humps, since these are usually detected
as an obstacle. In Spain, speed humps are directly associated with
raised zebra crossings, so they should have white stripes painted
on them. Accordingly, the conditions required to detect a speed
hump are: detecting a slope shape on the road and detecting a zebra
crossing at the same time. The first condition is evaluated using the
lidar sensor and the second using the camera.
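The AND-fusion of the two cues can be sketched directly; both detectors below are deliberately simplistic stand-ins for the real lidar and image processing:

```python
import numpy as np

def has_slope(heights, min_rise=0.05):
    """Lidar cue: does the longitudinal road profile rise like a hump?"""
    return float(np.max(heights) - heights[0]) > min_rise

def has_zebra(row_intensity, min_transitions=6):
    """Camera cue: painted stripes produce a regular dark/bright
    alternation across an image row."""
    binary = row_intensity > row_intensity.mean()
    transitions = int(np.sum(binary[1:] != binary[:-1]))
    return transitions >= min_transitions

def is_speed_hump(heights, row_intensity):
    # Both cues must hold at the same time (a raised zebra crossing).
    return has_slope(heights) and has_zebra(row_intensity)

# Synthetic hump (10 cm rise) with painted stripes, and a flat painted road.
x = np.linspace(0.0, 1.0, 100)
hump = 0.10 * np.sin(np.pi * x)
stripes = np.tile([50.0] * 10 + [200.0] * 10, 5)
flat = np.zeros(100)

result_hump = is_speed_hump(hump, stripes)   # True: both cues present
result_flat = is_speed_hump(flat, stripes)   # False: paint without slope
```

Requiring both cues is what keeps ordinary zebra crossings (paint without slope) and speed bumps without paint from triggering false positives.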

Abstract

This paper presents the results of a set of extensive
experiments carried out in daytime and nighttime conditions in
real traffic using an enhanced or extended Floating Car Data
system (xFCD) that includes a stereo vision sensor for detecting
the local traffic ahead. The detection component builds on
monocular approaches previously developed by our group
in combination with new stereo vision algorithms that add
robustness to the detection and increase the accuracy of the
measurements corresponding to relative distance and speed.
Besides the stereo pair of cameras, the vehicle is equipped with a
low-cost GPS and an electronic device for CAN Bus interfacing.
The xFCD system has been tested on a 198-minute sequence
recorded in real traffic scenarios with different weather and
illumination conditions, which represents the main contribution
of this paper. The results are promising and demonstrate that
the system is ready for being used as a source of traffic state
information.

Abstract

This paper describes a new approach for improving
the estimation of the global position of a vehicle in complex urban
environments by means of visual odometry and map fusion.
The visual odometry system is based on the compensation of
the heteroscedasticity in the 3D input data using a weighted non-linear
least squares based system. RANdom SAmple Consensus
(RANSAC) based on Mahalanobis distance is used for outlier
removal. The motion trajectory information is used to keep track
of the vehicle position in a digital map during GPS outages.
The final goal is the autonomous vehicle outdoor navigation
in large-scale environments and the improvement of current
vehicle navigation systems based only on standard GPS. This
research is oriented to the development of collective traffic
systems aimed at vehicle-infrastructure cooperation to improve
dynamic traffic management. We provide examples of estimated
vehicle trajectories and map fusion using the proposed method
and discuss the key issues for further improvement.
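The RANSAC-with-Mahalanobis-gating idea can be illustrated on a 2D toy version of the problem (a pure translation between two point sets with an invented anisotropic noise covariance), rather than the full weighted 3D motion estimation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 2D stand-in for the motion estimation: two frames of feature points
# related by a pure translation, with anisotropic noise and 20% mismatches.
t_true = np.array([2.0, -1.0])
pts0 = rng.uniform(-10.0, 10.0, (60, 2))
cov = np.array([[0.04, 0.0], [0.0, 0.25]])          # noisier vertically
pts1 = pts0 + t_true + rng.multivariate_normal([0.0, 0.0], cov, 60)
pts1[:12] += rng.uniform(-8.0, 8.0, (12, 2))        # gross outliers

# Residuals involve two noisy quantities (match and hypothesis), hence 2*cov.
cov_inv = np.linalg.inv(2.0 * cov)
thresh = 9.21                                       # chi-square 99%, 2 dof

best_inliers = np.zeros(60, dtype=bool)
for _ in range(200):
    i = rng.integers(60)
    t_hyp = pts1[i] - pts0[i]                       # minimal sample: one match
    r = pts1 - pts0 - t_hyp
    maha = np.einsum("ij,jk,ik->i", r, cov_inv, r)  # squared Mahalanobis dist.
    inliers = maha < thresh
    if inliers.sum() > best_inliers.sum():
        best_inliers = inliers

# Refit the translation on the consensus set.
t_est = (pts1[best_inliers] - pts0[best_inliers]).mean(axis=0)
```

The Mahalanobis gate is what lets the vertical axis tolerate larger residuals than the horizontal one, instead of a single isotropic distance threshold.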

Abstract

This paper presents a vision-based road surface classification system
in the context of infrastructure inspection and maintenance, proposed as
a stage for improving the performance of a distress detection system.
High-resolution road images are processed to distinguish among surfaces,
arranged according to the different materials used to build roads and
their degree of granulation and striation. A multi-class Support Vector
Machine (SVM) classification system using mainly Local Binary Pattern
(LBP), Gray-Level Co-occurrence Matrix (GLCM) and Maximally Stable
Extremal Regions (MSER) derived features is described. The different
texture analysis methods are compared in terms of accuracy and computational
load. Experiments with real application images show a significant
improvement in the distress detection system's performance when several
feature extraction methods are combined.
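Of the features mentioned, the Local Binary Pattern is the simplest to sketch. A basic 8-neighbour LBP histogram (not the exact variant used in the paper) looks like this:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern: each pixel is encoded by
    which of its neighbours are >= it, giving a 256-bin texture histogram."""
    c = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left; each bit of the
    # code records the comparison against one neighbour.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int64)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int64) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()                 # normalised 256-bin descriptor

# A striated texture and a flat one give very different descriptors.
stripes = np.tile(np.array([[0, 255]], dtype=np.uint8), (16, 8))
flat = np.full((16, 16), 128, dtype=np.uint8)
h_stripes, h_flat = lbp_histogram(stripes), lbp_histogram(flat)
```

Histograms like these, concatenated with GLCM and MSER-derived statistics, are what the multi-class SVM consumes per image patch.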

Abstract

This paper describes a new approach for improving the estimation of a vehicle motion
trajectory in complex urban environments by means of visual odometry. A new strategy for
compensating the heteroscedasticity in the 3D input data using a weighted non-linear least
squares based system is presented. A Matlab simulator is used in order to analyze the error in
the estimation and validate the new solution. The obtained results are discussed and compared
to the previous system. The final goal is the autonomous vehicle outdoor navigation in
large-scale environments and the improvement of current vehicle navigation systems based
only on standard GPS. This research is oriented towards the development of collective
traffic systems aimed at vehicle-infrastructure cooperation to improve dynamic traffic management.
We provide examples of estimated vehicle trajectories using the proposed method and discuss
the key issues for further improvement.

Abstract

This paper presents a complete vision-based vehicle
detection system for Floating Car Data (FCD) enhancement
in the context of Vehicular Ad hoc NETworks (VANETs).
Three cameras (side, forward and rear looking cameras) are
installed onboard a vehicle in a fleet of public buses. Thus, a
more representative local description of the traffic conditions
(extended FCD) can be obtained. Specifically, the vision modules
detect the number of vehicles contained in the local area
of the host vehicle (traffic load) and their relative velocities.
Absolute velocities (average road speed) and global positioning
are obtained after combining the outputs provided by the
vision modules with the data supplied by the CAN Bus and
the GPS sensor. This information is transmitted by means
of a GPRS/UMTS data connection to a central unit which
merges the extended FCD in order to maintain an updated
map of the traffic conditions (traffic load and average road
speed). The presented experiments are promising in terms
of detection performance and computational costs. However,
significant further effort is necessary before deploying such a
system in large-scale real applications.

Studying of WiFi range-only sensor and its application to localization and mapping systems

Abstract

The goal of this paper is to study a noisy WiFi
range-only sensor and its application in the development of
localization and mapping systems. Moreover, the paper shows
several localization and mapping techniques to be compared.
These techniques have been applied successfully with other
technologies, such as ultra-wide band (UWB), but we demonstrate
that even using a much noisier sensor these systems can
be applied correctly. We use two trilateration techniques and a
particle filter to develop the localization and mapping systems
based on the range-only sensor. Some experimental results and
conclusions are presented.

Automatic Information Extraction of Traffic Panels based on Computer Vision

Abstract

Computer vision systems used in road maintenance,
either related to signs or to the road itself, are playing a
major role in many countries because of the growing investment
in public works of this kind. These systems are able to collect a
wide range of information automatically and quickly, with the
aim of improving road safety. In this context, the suitability of
the information contained on the road signs located above the
road, typically known as traffic panels, is vital for correct and
safe use by the road user. This paper describes an approach to
the first steps of a system under development which will be able
to make an inventory and to check the reliability of the information
contained on the traffic panels, and whose final aim is to form
part of an automatic visual inspection system of signs and
panels.

Teaching

I taught the practical part of several subjects in engineering degrees for 5 years.
During the period 2012-2016, I taught C/C++ programming in the Telecommunications Engineering degree.
In addition, I taught computer vision for 3 years in the Electronics and Industrial Automation Engineering degree at the University of Alcalá.

Current Teaching

2013-2015

Computer Vision

The objective of the course is the study of computer vision and image acquisition systems for industrial applications. The programme of the course includes camera configuration and image acquisition, camera calibration, motion detection, object detection, segmentation algorithms, image filtering and pattern recognition.

2012-2016

C/C++ Programming

The objective of the course is the in-depth study of structured programming using the C programming language. The programme of the course covers: a review of basic concepts about pointers, advanced use of pointers, advanced management of functions, creation and manipulation of files, and dynamic data structures and algorithms.

Dataset

All datasets and code on this page are copyright by us and published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors, you may not use this work for commercial purposes and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license. If you are interested in commercial usage you can contact us for further options.

Dataset For Blind Spot Vehicle Detection

This dataset was recorded using a Fire-i camera with a resolution of 640 x 480 @ 30 fps. Two code examples are provided for using the videos: the first plays a video using mplayer, and the second reads the video and displays each frame using the OpenCV library. Chessboard pattern images are provided for camera calibration.

Download dataset

Days 1 and 2 include daytime sequences in urban environments, highways and roundabouts. The day 3 folder contains highway sequences recorded after the Valencia MotoGP Grand Prix 2009. Finally, day 4 includes daytime and nighttime sequences in urban environments, highways and roundabouts.

Ground Truth For Curb Detection System Using KITTI Dataset

City

The sequences currently labeled are: 2011_09_26_drive_0002

Software For Labeling New Sequences

Camera

This software is written in C/C++ and its GUI is built with Qt. In addition, the labelling application requires the OpenCV library to run.

Velodyne

This software is written in C/C++ and its GUI is built with Qt. In addition, the labelling application requires the Point Cloud Library (PCL) to run.