This paper is about the formal specification of the requirements of a rail communication protocol called Saturn, proposed by ClearSy systems engineering, a French company specialised in safety-critical systems. The protocol was developed and implemented within a widely used rail product without its requirements ever being modeled, verified, or even documented. This paper outlines the formal specification, verification and validation of Saturn's requirements in order to guarantee its correct behavior and to allow the definition of slightly different product lines. The specification is performed according to SysML/KAOS, a formal requirements engineering method developed in the ANR FORMOSE project for critical and complex systems. System requirements, captured with a goal modeling language, give rise to the behavioral part of a B System specification. In addition, an ontology modeling language allows the specification of domain entities and properties. The domain models thus obtained are used to derive the structural part of the B System specification obtained from system requirements. The B System model, once completed with the body of events, can then be verified and validated using the whole range of tools that support the B method. Five refinement levels of the rail communication protocol were constructed. The method has proven useful; however, several missing features were identified. This paper also provides a formally defined extension of the modeling languages to address these shortcomings.

Rainfall forecasting is a major issue for anticipating severe meteorological events and for agriculture management. Weather radar imaging has been identified in the literature as the best way to measure rainfall over a large domain with fine spatial and temporal resolution. This paper describes two methods that improve rain nowcasting from radar images at two different scales. These methods are compared to an operational chain relying on only one type of radar observation, using both regional and local criteria. For both methods, significant improvements over the original chain are quantified.

Human voice and environmental sound recognition have been well studied during the past decades. Nowadays, modeling auditory awareness has received more and more attention. Its basic concept is to imitate the human auditory system in order to give artificial intelligence the ability of auditory perception. To successfully mimic the human auditory mechanism, several models have been proposed over the past decades. Since deep learning (DL) algorithms offer better classification performance than conventional approaches (such as GMM and HMM), the latest research works mainly focus on building auditory awareness models based on deep architectures. In this survey, we offer a concise overview of recent auditory awareness models and development trends. The article comprises three parts: i) classical auditory saliency detection (ASD) methods and their developments during the past decades, ii) the application of machine learning to ASD, and iii) summarizing comments and development trends in this field.

One major limitation of the motion estimation methods available in the literature is that they do not provide the uncertainty on their result. Such uncertainty is, however, assessed by a number of filtering methods, such as the ensemble Kalman filter (EnKF). This paper consequently discusses the use of a description of the displayed structures within an ensemble Kalman filter applied to motion estimation on image acquisitions. An example of such a structure is a cloud in meteorological satellite acquisitions. Compared to the Kalman filter, the EnKF does not require propagating in time the error covariance matrix associated with the estimation, resulting in reduced computational requirements. However, the EnKF is also known to exhibit a shrinking effect when taking the observations of the studied system into account at the analysis step. Methods are available in the literature for correcting this shrinking effect, but they do not involve the spatial content of images, and more specifically the structures displayed in them. Two solutions are described and compared in the paper: first, a dedicated localization function and, second, an adaptive domain decomposition. Both methods proved well suited for fluid flow images, but only the domain decomposition is suitable for an operational setting. In the paper, the two methods are applied to synthetic data and to satellite images of the atmosphere, and the results are displayed and evaluated.
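As a hedged illustration of the localization idea (not the paper's structure-dependent function, which is derived from image content), covariance localization in an EnKF is commonly implemented as an elementwise taper of the ensemble sample covariance, suppressing spurious long-range correlations. A minimal 1-D sketch, with hypothetical names and a simple Gaussian taper:

```python
import numpy as np

def localized_covariance(ensemble, length_scale, coords):
    """Sample covariance of an ensemble, tapered elementwise by a
    Gaussian localization function of pixel separation.

    ensemble: (n_members, n_state) array of state vectors.
    coords:   (n_state,) positions used to measure separation.
    """
    anomalies = ensemble - ensemble.mean(axis=0)
    cov = anomalies.T @ anomalies / (ensemble.shape[0] - 1)
    # Distance-based taper: correlations decay with spatial separation.
    dist = np.abs(coords[:, None] - coords[None, :])
    taper = np.exp(-0.5 * (dist / length_scale) ** 2)
    return cov * taper
```

Distant state components are thus decorrelated regardless of the (noisy) sample estimate, which is what mitigates the spurious correlations a small ensemble produces.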

Opinion and trend mining on micro-blogs like Twitter has recently attracted research interest in several fields,
including Information Retrieval (IR) and Natural Language Processing (NLP). However, the performance of
existing approaches is limited by the quality of available training material. Moreover, explaining automatic
systems' suggestions for decision support is a difficult task due to this lack of data. One promising
solution to this issue is the enrichment of textual content using large micro-blog archives or external document
collections, e.g. Wikipedia. Despite some advances in the Reputation Dimension Classification (RDC)
task promoted by RepLab, it remains a research challenge. In this paper we introduce a supervised classification
method for RDC based on a threshold intersection graph. We analyze the impact of various micro-blog
extension methods on RDC performance and demonstrate that simple statistical NLP methods that do not
require any external resources can be easily optimized to outperform state-of-the-art approaches on the RDC
task. The conducted experiments further show that micro-blog enrichment by effective expansion techniques
can improve classification quality.

The main goal of this paper is to manage the switching on/off of servers in a data center over time, adapting
the system to incoming traffic changes in order to ensure good performance and reasonable energy consumption.
In this work, the system is modeled by a queue, and an optimization algorithm is designed to manage energy
consumption and quality of service in the data center. For several systems, the algorithm is tested by numerical
analysis under various types of job arrivals: arrivals with constant rate, arrivals defined by a constant discrete
distribution, arrivals specified by a discrete distribution that varies over time, and arrivals modeled by discrete
distributions obtained from real traffic traces. The proposed optimization algorithm dynamically adjusts
the number of operational servers according to traffic variation, workload, the cost of keeping a job
in the buffer, the cost of losing a job, and the energy cost of serving a job.
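As an illustration only (the paper's algorithm also accounts for job losses, buffer limits, and real traffic traces), the core trade-off can be sketched as a one-step cost minimization over the number of operational servers: the energy cost of running more servers against the holding cost of the backlog they leave behind. All names and the simplified cost model below are our own assumptions:

```python
def choose_servers(arrival_rate, service_rate, queue_length,
                   c_hold, c_energy, s_max):
    """Pick the server count s minimizing an approximate per-step cost:
    holding cost of the leftover backlog plus energy cost of s servers."""
    best_s, best_cost = 1, float("inf")
    for s in range(1, s_max + 1):
        # Jobs still waiting after one step with s servers running.
        backlog = max(0.0, queue_length + arrival_rate - s * service_rate)
        cost = c_hold * backlog + c_energy * s
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s
```

When energy is cheap relative to holding jobs, the controller provisions enough servers to absorb the arrival rate; when energy is expensive, it keeps only a minimal pool and lets the queue build up.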

Motion estimation from image data has been widely studied in the literature. Due to the aperture problem (one equation with two unknowns), a Tikhonov regularization is usually applied to constrain the estimated
motion field. This paper demonstrates that the use of regularization functions is equivalent to defining correlations between pixels, and the formulation of the corresponding correlation matrices is given. This equivalence makes it possible to better understand the impact of the regularization, by displaying the correlation values as images. It is of major interest in the context of image assimilation, as these methods are
based on the minimization of errors that are correlated over the space-time domain, and it also allows characterizing the role of the errors during the assimilation process.
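The stated equivalence can be illustrated in one dimension: penalizing the gradient of the estimated field acts as a Gaussian prior whose covariance, hence correlation, couples neighboring pixels. A minimal sketch (our own illustration, with an identity data term and a first-order penalty, not the paper's exact formulation):

```python
import numpy as np

def implied_correlation(n, alpha):
    """Correlation matrix implied by a first-order Tikhonov regularizer.

    Minimizing ||x - y||^2 + alpha * ||D x||^2 (D = finite differences)
    is the MAP estimate under a Gaussian prior of precision alpha * D^T D,
    so the regularizer implicitly correlates neighboring pixels.
    """
    D = (np.eye(n) - np.eye(n, k=1))[:-1]        # forward differences
    cov = np.linalg.inv(np.eye(n) + alpha * (D.T @ D))
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)                  # normalize to correlations
```

Displaying such a matrix as an image, as the paper proposes, shows correlations that are strong between adjacent pixels and decay with distance, with the decay length controlled by the regularization weight.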

This paper designs an Image-based Ensemble Kalman Filter (IEnKF), whose components are defined only
from image properties, to estimate motion on image sequences. The key elements of this filter are, first,
the construction of the initial ensemble, and second, the propagation in time of this ensemble on the studied
temporal interval. Both are analyzed in the paper and their impact on results is discussed with synthetic and real
data experiments. The initial ensemble is obtained by adding a Gaussian vector field to an estimate of motion
on the first two frames. The standard deviation of this normal law is computed from the motion results given by
a set of optical flow methods from the literature; it describes the uncertainty on the motion value at the initial date.
The propagation in time of the ensemble members relies on the following evolution laws: transport by velocity
of the image brightness function and Euler equations for the motion function. Shrinking of the ensemble is
avoided thanks to a localization method and the use of observation ensembles, both techniques being defined
from image characteristics. This Image-based Ensemble Kalman Filter is quantitatively evaluated on synthetic experiments
and applied to traffic and meteorological images.
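A hedged sketch of the initial-ensemble construction described above, with hypothetical names: here the baseline estimate is taken, for illustration, as the mean of the available optical-flow results, and the per-pixel spread of those results serves as the standard deviation of the Gaussian perturbation:

```python
import numpy as np

def initial_ensemble(flow_estimates, n_members, rng=None):
    """Build an initial motion ensemble from several optical-flow results.

    flow_estimates: list of (H, W, 2) motion fields from different methods.
    Returns an (n_members, H, W, 2) ensemble: a baseline estimate perturbed
    by Gaussian noise whose per-pixel std is the spread of the methods,
    i.e. the uncertainty on the motion value at the initial date.
    """
    rng = np.random.default_rng(rng)
    flows = np.stack(flow_estimates)      # (n_methods, H, W, 2)
    baseline = flows.mean(axis=0)
    std = flows.std(axis=0)               # disagreement between methods
    noise = rng.normal(size=(n_members,) + baseline.shape)
    return baseline + std * noise
```

Where the optical-flow methods agree, the ensemble spread is small; where they disagree, the members diverge, encoding the initial uncertainty the filter then propagates.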

This paper deals with visual evaluation of object distances using Soft-Computing-based approaches and
a standard low-cost pseudo-3D sensor, namely the Kinect. The investigated technique targets robot
vision and visual metrology of the robot's surrounding environment. The objective is to provide the robot
with the ability to evaluate distances between objects in its surroundings. In fact, although
it presents appealing advantages, the Kinect was not designed for metrological purposes. The investigated
approach makes it possible to use this low-cost pseudo-3D sensor for distance evaluation while avoiding 3D
feature extraction, thus exploiting the simplicity of purely 2D image processing. Experimental results
show the viability of the proposed approach and provide a comparison between different machine learning
techniques such as Adaptive-Network-based Fuzzy Inference (ANFIS), Multi-Layer Perceptron (MLP), Support
Vector Regression (SVR), and bilinear interpolation.

Artificial awareness is an interesting way of realizing artificial intelligent perception for machines. Since the foreground object provides more useful information for perception and for an informative description of the environment than background regions, the informative saliency characteristics of the foreground object can be treated as an important cue of the objectness property. A sparse-reconstruction-error-based detection approach is therefore proposed in this paper. Specifically, an overcomplete dictionary is trained using image features derived from randomly selected background images, and the reconstruction error is computed at several scales to obtain better detection performance. Experiments on a popular image dataset are conducted with the proposed approach, along with comparison tests against a state-of-the-art visual saliency detection method. The experimental results show that the proposed approach is able to detect foreground objects that are distinctive for awareness, and performs better at detecting informative, salient foreground objects for artificial awareness than the state-of-the-art visual saliency method.
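A minimal sketch of the reconstruction-error principle (single scale, a greedy matching-pursuit solver of our own choosing, hypothetical names; the paper's multi-scale pipeline and trained dictionary are not reproduced): a feature that is well explained by a background dictionary reconstructs with low error, while a foreground feature does not.

```python
import numpy as np

def reconstruction_error(x, D, n_nonzero=5):
    """Sparse reconstruction error of feature vector x against a
    background dictionary D (columns = atoms), via orthogonal
    matching pursuit. Large error suggests a foreground feature."""
    D = D / np.linalg.norm(D, axis=0)          # unit-norm atoms
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        # Greedily pick the atom most correlated with the residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef    # re-project on support
    return float(np.linalg.norm(residual))
```

Thresholding this error per patch (and, in the paper, combining it across scales) yields a saliency map that highlights the foreground object.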