Special Issue Information

Dear Colleagues,

Frequently, smart sensors operate in a well-defined hierarchical structure. Sensor intelligence in the lower layer improves selectivity, suppressing the influence of undesired variables. Intelligence in the middle layer generates intermediate outputs by combining the outputs of the lower layer. These intermediate outputs are sent to the upper-layer intelligence, which recognizes the situation. The lower layer tends to be implemented on low-capability microcontrollers, often under energy constraints, and hence it is in the higher layers where computationally heavy tasks usually take place. There is a trade-off between the amount of information passed to the higher layers and the amount of processing at the lowest levels. Cooperative processing, data-driven feature extraction, and model-based and indirect measurements, among other techniques, are leveraged to balance computational power, energy consumption, and information flow at the lowest layer. Data fusion, parameter tuning, and model-based synthesis of variables are performed at the middle layer. Intelligent data analysis and certain data-driven decision systems are deployed at the higher layer. In this respect, Computational Intelligence (CI) and Soft Computing-based sensors build on fuzzy logic, artificial neural networks, evolutionary computing, learning theory, and probabilistic methods to solve these tasks at each level of the architecture. The application of CI to sensor systems is a hot topic, as shown by the following (non-exhaustive) list of problems, which comprises different applications of CI to sensor systems reported during the first half of 2018:

Social sensing (humans as “sensors” to report observations about the physical world)

The Special Issue will publish original research, reviews and applications in the field of Computational Intelligence techniques (fuzzy logic, artificial neural networks, evolutionary computing, learning theory and probabilistic methods) applied to sensor systems.

Prof. Dr. Luciano Sánchez
Dr. David Anseán
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Sensors are becoming more and more ubiquitous as their price and availability continue to improve, and they are the source of information for many important tasks. However, the use of sensors has to contend with noise and failures. The lack of reliability in individual sensors has led to many forms of redundancy, but simple solutions are not always the best, and the precise way in which several sensors are combined has a large impact on the overall result. In this paper, we discuss how to combine information coming from different sensors, which thus act as “virtual sensors”, in the context of human activity recognition, in a systematic way, aiming for optimality. To achieve this goal, we construct meta-datasets containing the “signatures” of individual datasets, and apply machine-learning methods to distinguish when each possible combination method is actually the best. We present specific results based on experimentation, supporting our claims of optimality.
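The meta-learning idea can be sketched as follows. Everything below is illustrative, not the paper's implementation: the signature features, the candidate combination methods, and the rule standing in for the trained meta-classifier are all assumptions.

```python
import statistics

def signature(readings):
    """Summarize the per-sensor streams into meta-features (names illustrative)."""
    flat = [x for stream in readings for x in stream]
    return {
        "mean": statistics.mean(flat),
        "spread": statistics.pstdev(flat),
        "n_sensors": len(readings),
    }

# Candidate combination methods that define the "virtual sensor".
COMBINERS = {
    "mean": lambda col: statistics.mean(col),
    "median": lambda col: statistics.median(col),
    "max": max,
}

def choose_combiner(sig):
    """Stand-in for the learned meta-classifier: high spread favors the median,
    which is robust to outlier spikes."""
    return "median" if sig["spread"] > 1.0 else "mean"

def virtual_sensor(readings):
    name = choose_combiner(signature(readings))
    return [COMBINERS[name](sample) for sample in zip(*readings)]

# Three redundant sensors, one with an outlier spike in its last sample:
streams = [[1.0, 1.1, 9.0], [1.0, 1.0, 1.2], [0.9, 1.0, 1.1]]
fused = virtual_sensor(streams)  # the outlier is suppressed by the median
```

In a full system, `choose_combiner` would be replaced by a classifier trained on the meta-dataset of signatures labeled with the empirically best combination method.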

AutomationML (AML) can be seen as a partial knowledge-based solution for the manufacturing and automation domains, since it permits integrating different engineering data formats and also contains information about the physical and logical structures of production systems, using basic concepts such as resources, processes, and products in semantic structures. However, it is not a complete knowledge-based solution because it lacks mechanisms for querying and reasoning, which are basic functions for semantic inference. Additionally, AutomationML does not naturally deal with aspects of sensor fusion. In this sense, we propose an ontology to describe these sensor-fusion elements, including procedures for runtime processing, as well as elements that can turn AutomationML into a complete knowledge-based solution. The approach was applied in a case study with two different industrial processes with several sensors under fusion. The results obtained demonstrate that the ontology allows describing sensors that are under fusion and dealing with the occurrence of data divergence. In a broader view, the results show how to apply an AutomationML description for runtime processing of data generated from different sensors of a manufacturing system, using an ontology to complement the AML description: AutomationML concentrates knowledge about a specific production system, while the ontology describes general, reusable knowledge about sensor fusion.
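One runtime function the ontology is meant to support, divergence detection among sensors declared to be under fusion, can be illustrated with a minimal sketch. The group names, sensor identifiers, and threshold below are hypothetical, not taken from the paper or from AutomationML.

```python
# Declarative description of which sensors are fused for which quantity,
# standing in for what the ontology would encode (all names illustrative).
FUSION_GROUPS = {
    "boiler_temp": {"members": ["pt100_a", "pt100_b"], "max_divergence": 2.0},
}

def check_divergence(group, readings):
    """Return True when fused sensors disagree beyond the declared tolerance."""
    spec = FUSION_GROUPS[group]
    values = [readings[s] for s in spec["members"]]
    return (max(values) - min(values)) > spec["max_divergence"]

diverged = check_divergence("boiler_temp", {"pt100_a": 80.1, "pt100_b": 85.0})
ok = check_divergence("boiler_temp", {"pt100_a": 80.1, "pt100_b": 80.5})
```

In the paper's setting, the fusion-group description would live in the ontology, queried and reasoned over at runtime rather than hard-coded in a dictionary.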

This paper introduces an adaptive image-rendering method using a parametric nonlinear mapping function based on the retinex model for low-light sources. In this study, only the luminance channel was used to estimate the reflectance component of an observed low-light image, thereby reducing the halo artifacts that arise from using multiple center/surround Gaussian filters. A new nonlinear mapping function is proposed that incorporates the statistics of the luminance and the estimated reflectance into the reconstruction process. In addition, a new method to determine the gain and offset of the mapping function is introduced to adaptively control the contrast ratio. Finally, the relationship between the estimated luminance and the reconstructed luminance is used to reconstruct the chrominance channels. The experimental results demonstrate that the proposed method yields subjective and objective improvements over state-of-the-art, scale-based retinex methods.
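The general shape of such a mapping can be sketched as follows. This is not the authors' formulation: the log-domain mapping and the statistics-driven choice of gain and offset below are illustrative assumptions.

```python
import math

def enhance_luminance(lum, eps=1e-6):
    """Retinex-style log mapping of a low-light luminance channel.
    Gain and offset are derived from the channel's own statistics
    (an illustrative choice) to stretch contrast into [0, 1]."""
    logs = [math.log(v + eps) for v in lum]
    lo, hi = min(logs), max(logs)
    gain = 1.0 / (hi - lo) if hi > lo else 1.0  # adaptive contrast gain
    offset = -lo * gain                          # shift the minimum to 0
    return [gain * x + offset for x in logs]

dark = [0.02, 0.05, 0.10, 0.40]   # low-light luminance samples in [0, 1]
out = enhance_luminance(dark)      # dark values are lifted, order preserved
```

The log transform compresses the bright end and expands the dark end, which is why low-light detail becomes visible; the paper's parametric function additionally folds in reflectance statistics.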

Augmented Reality (AR) is a class of “mediated reality” that artificially modifies human perception by superimposing virtual objects on the real world, and is expected to supplement reality. In vision-based augmentation, text and graphics (i.e., labels) are often associated with a physical object or place to describe it. View management in AR maintains the visibility of this associated information and plays an important role in communicating it. Various view management techniques have been investigated so far; however, most have been designed for two-dimensional see-through displays, and few have been investigated for projector-based AR, called spatial AR. In this article, we propose a view management method for spatial AR, VisLP, that places labels and linkage lines based on an estimation of their visibility. Since the information is projected directly onto objects, the nature of optics, such as reflection and refraction, constrains the visibility, in addition to the spatial relationships between the information, the objects, and the user. VisLP employs machine-learning techniques to estimate visibility in a way that reflects users’ subjective mental workload in reading information, as well as objective measures of reading correctness, under various projection conditions. Four visibility classes are defined for labels, while linkage-line visibility has three classes. After designing 88 and 28 classification features for the label and linkage-line visibility estimators, respectively, subsets of 15 and 14 features were chosen, improving the speed of feature calculation by up to 170% with only a slight degradation of classification performance. An online experiment with new users and objects showed that 76.0% of the system’s label-visibility judgments matched the users’ evaluations, while 73% of the linkage-line visibility estimations matched.
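A toy stand-in for the label-visibility estimator illustrates the idea of mapping projection features to discrete visibility classes. The feature names, thresholds, and class labels below are all hypothetical; the actual system uses a learned classifier over 15 selected features.

```python
CLASSES = ["unreadable", "hard", "readable", "clear"]  # illustrative class names

def label_visibility(contrast, specular_ratio):
    """Rule-based stand-in for the learned four-class visibility estimator.
    High contrast helps; specular reflection off the surface hurts."""
    score = contrast * (1.0 - specular_ratio)
    if score < 0.2:
        return "unreadable"
    if score < 0.4:
        return "hard"
    if score < 0.7:
        return "readable"
    return "clear"

cls = label_visibility(contrast=0.9, specular_ratio=0.1)
```

The real estimator replaces this hand-tuned rule with a trained model, and a separate three-class estimator handles linkage lines.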

Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for the recognition of egocentric activities of daily living (ADLs) in a wearable hybrid sensor system comprising motion sensors and cameras. A long short-term memory (LSTM) network and a convolutional neural network are used in different layers to perform egocentric ADL recognition based on motion-sensor data and the photo stream, respectively. The motion-sensor data are used solely for activity classification according to motion state, while the photo stream is used for further specific activity recognition within the motion-state groups. Thus, both the motion-sensor data and the photo stream work in their most suitable classification mode, significantly reducing the negative influence of sensor differences on the fusion results. Experimental results show that the proposed method not only is more accurate than the existing direct fusion method (by up to 6%) but also avoids the time-consuming computation of optical flow required by the existing method, which makes the proposed algorithm less complex and more suitable for practical application.
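The hierarchical structure can be sketched with simple stand-ins: the real system uses an LSTM for the coarse stage and a CNN for the fine stage, whereas the rules, thresholds, activity groups, and labels below are purely illustrative.

```python
def motion_state(accel_energy):
    """Coarse stage: stand-in for the LSTM over motion-sensor data."""
    return "ambulatory" if accel_energy > 0.5 else "stationary"

def refine_with_photo(state, photo_label):
    """Fine stage: stand-in for the CNN over the photo stream. It only
    chooses among activities consistent with the coarse motion state."""
    groups = {
        "stationary": {"screen": "watching TV", "book": "reading"},
        "ambulatory": {"sink": "washing dishes", "street": "walking outside"},
    }
    return groups[state].get(photo_label, "unknown")

def recognize(accel_energy, photo_label):
    # Hierarchical fusion: motion data narrows the candidate set first,
    # so the photo classifier never confuses activities across motion states.
    return refine_with_photo(motion_state(accel_energy), photo_label)

activity = recognize(0.1, "book")
```

Restricting the fine classifier to one motion-state group is what lets each modality operate in its most suitable mode, which is the source of the reported accuracy gain.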

Wearable health monitoring has emerged as a promising solution to the growing need for remote health assessment and the rising demand for personalized preventative care and wellness management. Vital signs can be monitored and alerts raised when anomalies are detected, potentially improving patient outcomes. One major challenge for wearable health devices is their energy efficiency and battery lifetime, which motivates recent efforts towards the development of self-powered wearable devices. This article proposes a method for context-aware dynamic sensor selection for power-optimized physiological prediction using multi-modal wearable data streams. We first cluster the data by physical activity using the accelerometer data, and then fit a group lasso model to each activity cluster. We find the optimal reduced set of sensor-feature groups, reducing power usage by duty-cycling the excluded sensors while optimizing prediction accuracy. We show that using activity-state-based contextual information increases accuracy while decreasing power usage. We also show that the reduced feature set can be used in other regression models, increasing accuracy and decreasing the energy burden. We demonstrate the potential reduction in power usage using a custom-designed multi-modal wearable system prototype.
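The sensor-selection mechanism rests on the group lasso's ability to zero out whole groups of coefficients, one group per sensor. A minimal sketch of the group soft-thresholding operator used in proximal group-lasso solvers is shown below; the sensor names and coefficient values are illustrative.

```python
import math

def group_soft_threshold(groups, lam):
    """Shrink each group's weight vector by lam in L2 norm; groups whose norm
    falls below lam are driven exactly to zero and dropped. Sensors whose
    groups vanish can then be duty-cycled off to save power."""
    kept = {}
    for sensor, w in groups.items():
        norm = math.sqrt(sum(v * v for v in w))
        if norm > lam:
            scale = (norm - lam) / norm
            kept[sensor] = [scale * v for v in w]
    return kept

# Hypothetical per-sensor coefficient groups from a fitted model:
coeffs = {"ppg": [0.8, 0.6], "accel": [0.05, 0.02], "temp": [0.3, 0.0]}
active = group_soft_threshold(coeffs, lam=0.2)  # "accel" is dropped entirely
```

In the paper's pipeline this selection is performed per activity cluster, so the set of powered sensors adapts to the wearer's current context.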

This paper presents a smart “e-nose” device to monitor indoor hazardous air. Indoor hazardous odors are a threat to seniors, infants, children, pregnant women, disabled residents, and patients. To overcome the limitations of existing non-intelligent, slow-responding, deficient gas sensors, we propose a novel artificial-intelligence-based multiple hazard gas detector (MHGD) system mounted on a remotely controlled, motorized robot. First, we optimized the sensor array for the classification of three hazardous gases: cigarette smoke, inflammable ethanol, and off-flavors from spoiled food, using an e-nose with a mixing chamber. The mixing chamber prevents the impact of environmental changes. We compared the classification results of all sensor combinations and selected the one with the highest accuracy (98.88%) as the optimal sensor array for the MHGD. The optimal sensor array was then mounted on the MHGD to detect and classify the target gases without a mixing chamber, but in a controlled environment. Finally, we tested the MHGD under these conditions and achieved an acceptable accuracy (70.00%).
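The sensor-array optimization amounts to an exhaustive search over sensor subsets, keeping the subset whose classifier scores highest. In the sketch below the sensor names and the accuracy table are illustrative stand-ins, not the paper's hardware or measurements.

```python
from itertools import combinations

SENSORS = ["MQ-2", "MQ-3", "MQ-135", "TGS-2600"]  # example gas-sensor names

def evaluate(subset):
    """Stand-in for training and testing the gas classifier on one subset;
    the accuracies here are made up for illustration."""
    table = {
        frozenset({"MQ-2", "MQ-3"}): 0.91,
        frozenset({"MQ-2", "MQ-3", "MQ-135"}): 0.97,
        frozenset(SENSORS): 0.93,
    }
    return table.get(frozenset(subset), 0.80)

def best_array(sensors):
    """Enumerate every non-empty subset and keep the most accurate one."""
    candidates = (c for r in range(1, len(sensors) + 1)
                  for c in combinations(sensors, r))
    return max(candidates, key=evaluate)

best = best_array(SENSORS)
```

Note that the full array is not necessarily optimal: redundant or cross-sensitive sensors can hurt classification, which is why the paper's best array (98.88%) is a proper subset.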

The tensile force on the hanger cables of a suspension bridge is an important indicator of the structural health of the bridge. Tensile-force estimation methods based on the measured frequencies of the hanger cables have been widely used. These methods empirically pre-determine the model order corresponding to each measured frequency. However, because of the uncertain flexural rigidity, this empirical order determination not only plays a limited role at high-order frequencies but also hinders online cable-force estimation. Therefore, we propose a new method, based on a Markov chain Monte Carlo (MCMC)-based Bayesian approach, to automatically identify the model order corresponding to each measured frequency. It overcomes the limitation of empirical determination in the case of large flexural rigidity. The tensile force and the flexural rigidity of the cables can be calculated simultaneously using the proposed method. The feasibility of the proposed method is validated via a numerical study involving a finite element model that considers flexural rigidity, and via field application to a suspension bridge.
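The core difficulty, assigning the right model order to each measured frequency, can be illustrated with the taut-string model f_n = (n / 2L) · sqrt(T / m), ignoring flexural rigidity. The paper's method is an MCMC-based Bayesian approach; the simple grid search below is only a stand-in that picks the order assignment making the per-frequency tension estimates most consistent.

```python
import statistics

def tension(f, n, L, m):
    """Invert the taut-string formula f_n = (n / 2L) * sqrt(T / m) for T."""
    return m * (2.0 * L * f / n) ** 2

def identify_orders(freqs, L, m, max_shift=5):
    """Try assigning consecutive orders (s, s+1, ...) to the measured
    frequencies; keep the shift whose tension estimates agree best."""
    best = None
    for s in range(1, max_shift + 1):
        Ts = [tension(f, s + i, L, m) for i, f in enumerate(freqs)]
        spread = statistics.pstdev(Ts) / statistics.mean(Ts)
        if best is None or spread < best[0]:
            best = (spread, s, statistics.mean(Ts))
    return best[1], best[2]  # first order, tension estimate

# Synthetic cable: L = 10 m, m = 50 kg/m, T = 2.0e6 N; modes 2, 3, 4 measured.
L_c, m_c, T_true = 10.0, 50.0, 2.0e6
freqs = [n / (2 * L_c) * (T_true / m_c) ** 0.5 for n in (2, 3, 4)]
first_order, T_est = identify_orders(freqs, L_c, m_c)
```

The Bayesian formulation replaces this consistency heuristic with a posterior over order, tension, and flexural rigidity, which is what allows all three to be inferred simultaneously.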

Due to the rapid installation of massive numbers of fixed and mobile sensors, monitoring machines are intentionally or unintentionally involved in producing large amounts of geospatial data. Environmental sensors and related software applications are rapidly altering human lifestyles and even impacting ecological and human health. However, there are few geospatial sensor web (GSW) applications that address specific ecological public-health questions. In this paper, we propose an ontology-driven approach for integrating intelligence to manage human and ecological health risks in the GSW. We design a Human and Ecological health Risks Ontology (HERO) based on a semantic sensor network ontology template. We also illustrate a web-based prototype, the Human and Ecological Health Risk Management System (HaEHMS), which helps health experts and decision makers estimate human and ecological health risks. We demonstrate this intelligent system through a case study of automatic prediction of air quality and related health risk.
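The final step of the case study, turning a predicted air-quality index into a health-risk level, can be sketched as a simple mapping. The thresholds follow the common 0–50 / 51–100 AQI convention, but the levels and cut-offs are illustrative, not the HERO ontology's actual rules.

```python
def risk_level(aqi):
    """Map a predicted air-quality index to a coarse health-risk level
    (thresholds illustrative)."""
    if aqi <= 50:
        return "low"
    if aqi <= 100:
        return "moderate"
    return "high"

level = risk_level(75)
```

In the proposed system this mapping would be encoded as ontology axioms over sensor observations, so that the same reasoning machinery serves both human and ecological risk estimation.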