3D cameras for face capture are quite common today thanks to their ease of use and affordable cost. The depth information they provide is mainly used to enhance face pose estimation, tracking, and face-background segmentation, while applications that require finer face details are usually not possible due to the low-resolution data acquired by such devices. In this paper, we propose a framework for deriving high-quality 3D models of the face from low-resolution depth sequences acquired with a depth camera. To this end, we first define a solution that exploits temporal redundancy in a short sequence of adjacent depth frames to remove most of the acquisition noise and produce an aggregated point cloud with intermediate-level details. Then, using a 3DMM specifically designed to support local and expression-related deformations of the face, we propose a two-step 3DMM fitting solution: initially, the model is deformed under the effect of landmark correspondences; subsequently, it is iteratively refined by point-closeness updates guided by a mean-square optimization. Preliminary results show that the proposed solution can derive 3D face models with high visual quality; quantitative results also show the superiority of our approach with respect to methods that use one-step fitting based on landmarks.
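As a rough illustration of the temporal-aggregation step, the sketch below fuses a short sequence of registered depth frames by a per-pixel robust average around the temporal median; the function name, tolerance, and the specific statistic are assumptions, not the paper's exact formulation.

```python
import numpy as np

def aggregate_depth_frames(frames, max_dev=0.01):
    """Fuse a short sequence of registered depth frames (H x W arrays)
    into a single denoised depth map (hypothetical helper, illustrative only).

    Per pixel: take the temporal median, keep the samples that agree with
    it within max_dev (in metres), and average those inliers. This rejects
    flying pixels and most acquisition noise.
    """
    stack = np.stack(frames, axis=0)              # (T, H, W)
    med = np.median(stack, axis=0)                # temporal median per pixel
    inlier = np.abs(stack - med) <= max_dev       # samples consistent with it
    sums = np.where(inlier, stack, 0.0).sum(axis=0)
    counts = inlier.sum(axis=0)
    return sums / np.maximum(counts, 1)           # robust per-pixel average
```

For example, with five frames of a flat surface at 1 m and a single outlier sample at 2 m, the fused map recovers 1 m at every pixel because the outlier falls outside the inlier band.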

This paper presents a methodology for the formal modeling of security attacks on cyber-physical systems and the analysis of their effects on the system using logic theories. We consider only attacks on sensors and actuators. A simulated attack can be triggered internally by the simulation algorithm or interactively by the user, and its effect is a set of assignments to the system's variables. The effects of attacks are studied by injecting them into the system model and simulating it. The overall system, including the attacks, the system dynamics, and the control part, is co-simulated. The INTO-CPS framework has been used for co-simulation, and the methodology is applied to the Line follower robot case study of the INTO-CPS project.
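The attack-injection idea can be sketched on a toy control loop: as described above, the attack's effect is an assignment to a variable (here a sensor reading), triggered at a chosen simulation step. The plant, controller, and gains below are illustrative placeholders, not the Line follower robot model or the INTO-CPS co-simulation machinery.

```python
def simulate(steps=50, attack_at=None, attack_bias=0.5):
    """Toy co-simulated loop: plant x' = u, proportional control to a
    setpoint of 1.0. A sensor attack is injected at step `attack_at` as
    an assignment that biases the measured value (all names/values are
    illustrative assumptions)."""
    x, dt, setpoint = 0.0, 0.1, 1.0
    for t in range(steps):
        measured = x
        if attack_at is not None and t >= attack_at:
            measured = x + attack_bias      # attack effect: variable assignment
        u = 2.0 * (setpoint - measured)     # controller sees the attacked reading
        x += dt * u                         # plant dynamics
    return x
```

Without the attack the state converges to the setpoint; with a constant sensor bias of 0.5 it settles 0.5 below it, making the attack's effect directly observable in the co-simulated trace.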

This paper describes NoisyArt, a dataset designed to support research on webly-supervised recognition of artworks. The dataset consists of more than 90,000 images spanning more than 3,000 webly-supervised classes, plus a subset of 200 classes with verified test images. Candidate artworks are identified using publicly available metadata repositories, and images are automatically acquired via Google Images and Flickr search. Document embeddings are also provided for short descriptions of all artworks. NoisyArt is designed to support research on webly-supervised artwork instance recognition, zero-shot learning, and other approaches to visual recognition of cultural heritage objects. Baseline experimental results are given using pretrained Convolutional Neural Network (CNN) features and a shallow classifier architecture. Experiments are also performed using a variety of techniques for identifying and mitigating label noise in webly-supervised training data.

Tracking the structural evolution of a site has important applications, ranging from documenting excavation progress during an archaeological campaign to hydro-geological monitoring. In this paper, we propose a simple yet effective method that exploits vision-based reconstructed 3D models of a time-changing environment to automatically detect geometric changes in it. Changes are localized by direct comparison of time-separated 3D point clouds according to a majority voting scheme based on three criteria that compare the density, shape, and distribution of 3D points. As a by-product, a 4D (space + time) map of the scene can also be generated and visualized. Experimental results obtained in two distinct scenarios (object removal and object displacement) provide both qualitative and quantitative insight into the method's accuracy.
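A minimal sketch of such a majority vote, assuming the two reconstructions are already aligned and the points falling in one spatial cell have been gathered from each epoch; the three cues (point count for density, covariance eigenvalues for shape, centroid shift for distribution) and their thresholds are illustrative stand-ins for the paper's exact criteria.

```python
import numpy as np

def cell_changed(a, b, density_tol=0.5, shape_tol=0.5, dist_tol=0.2):
    """Vote over three cues comparing two non-empty (N, 3) point sets that
    two time-separated reconstructions place in the same spatial cell.
    Declares a change when at least 2 of 3 cues fire (illustrative sketch)."""
    votes = 0
    # 1) density: relative change in the number of points
    if abs(len(a) - len(b)) / max(len(a), len(b)) > density_tol:
        votes += 1
    # 2) shape: principal extents via eigenvalues of the 3x3 covariance
    ev_a = np.sort(np.linalg.eigvalsh(np.cov(a.T)))
    ev_b = np.sort(np.linalg.eigvalsh(np.cov(b.T)))
    if np.abs(ev_a - ev_b).max() > shape_tol:
        votes += 1
    # 3) distribution: displacement of the cell centroid
    if np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)) > dist_tol:
        votes += 1
    return votes >= 2
```

Identical clouds trigger no cue, while a shifted and thinned copy of the same cloud fires the density and distribution cues and is therefore flagged as changed.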

In the experience of a railway signaling manufacturer, schedulability analysis takes an important portion of the time dedicated to configuring a complex, generic, real-time application into a specifically customized signaling embedded application. We report on an approach aimed at substituting possibly unreliable and costly empirical measures with rigorous analysis. The analysis is performed by modeling the scheduling algorithms with Petri Nets. We have compared two types of Petri Nets, Timed Petri Nets (TPN) and Coloured Petri Nets (CPN), respectively supported by the open-source tools TINA and CPN Tools 4.0, concluding that the latter are better suited to the problem at hand.

This paper proposes a novel strategy to find the best reference homography in mosaics built from video sequences. The reference homography globally minimizes the distortions induced on each image frame by the mosaic homography itself. The method is designed for planar mosaics, in which a bad choice of the first reference frame can lead to severe distortions after concatenating several successive homographies. This often happens in underwater mosaics with a non-flat seabed and no georeferencing information available. Given a video sequence of an almost planar surface, sub-mosaics with low distortion are computed from temporally close frames and successively merged according to a hierarchical clustering procedure. A robust and effective feature tracker using an approximate global position map between frames allows the mosaic to also be built from frames that are spatially close but not temporally consecutive. Sub-mosaics are merged by concatenating their relative homographies with a further reference homography that minimizes the distortion on each frame of the fused mosaic. Experimental results on challenging real underwater videos show the validity of the proposed method.
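The reference-selection idea can be sketched as follows: given successive frame-to-frame homographies, map every frame into each candidate reference, score the distortion that the concatenated homography induces on it, and keep the reference with the lowest total. The distortion measure below (log singular values of the linear part) is an assumption standing in for the paper's actual cost.

```python
import numpy as np

def frame_to_ref(H_seq, i, r):
    """Concatenate H_seq[k] (frame k -> frame k+1) to map frame i into
    candidate reference frame r (illustrative helper)."""
    H = np.eye(3)
    if i < r:
        for k in range(i, r):                 # forward chain H_{r-1} ... H_i
            H = H_seq[k] @ H
    else:
        for k in range(r, i):                 # inverse chain back to r
            H = H @ np.linalg.inv(H_seq[k])
    return H

def distortion(H):
    """Scale distortion of the linear 2x2 part: |log| of its singular
    values, so identity scores 0 and blow-up/shrink scores grow."""
    s = np.linalg.svd(H[:2, :2], compute_uv=False)
    return abs(np.log(s[0])) + abs(np.log(s[1]))

def best_reference(H_seq):
    """Pick the reference frame minimizing the summed distortion over all
    frames (n frames for n-1 successive homographies)."""
    n = len(H_seq) + 1
    costs = [sum(distortion(frame_to_ref(H_seq, i, r)) for i in range(n))
             for r in range(n)]
    return int(np.argmin(costs))
```

For a chain where every step uniformly magnifies by 20%, the cost grows with the distance from the reference, so the middle frame wins, which matches the intuition that a poorly chosen end frame accumulates distortion.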

This paper presents a new online preprocessing strategy to detect and discard bad frames in video sequences as they arrive. These include frames where accurate localization of corresponding points is difficult, such as blurred frames, or frames that do not provide relevant information with respect to the previous ones in terms of texture, image contrast, and non-flat areas. Unlike keyframe selectors and deblurring methods, the proposed approach is a fast preprocessing step working on a simple gradient statistic: it does not require complex, time-consuming image processing, such as the computation of image feature keypoints, previous poses, or 3D structure, nor a priori knowledge of the input sequence. The presented method provides a fast and useful frame pre-analysis which can be used to improve further image analysis tasks, including keyframe selection and blur detection, or to directly filter the video sequence as shown in the paper, improving the final 3D reconstruction by discarding noisy frames and decreasing computation time by removing redundant frames. The scheme is adaptive, fast, and works at runtime by exploiting the image gradient statistics of the last few frames of the video sequence. Experimental results show that the proposed frame selection strategy is robust and improves the final 3D reconstruction both in the number of obtained 3D points and in reprojection error, while also reducing computation time.
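A minimal sketch of an online gradient-statistic filter in the spirit of the strategy above; the specific statistic (mean gradient magnitude), window size, and rejection ratio are assumptions rather than the paper's exact scheme.

```python
from collections import deque

import numpy as np

def grad_score(frame):
    """Mean gradient magnitude of a grayscale frame (2D array): a cheap
    sharpness/texture statistic (illustrative choice)."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy).mean()

class FrameSelector:
    """Online filter: discard a frame whose gradient statistic falls well
    below the running level of the last few accepted frames. Window size
    and ratio are illustrative assumptions."""

    def __init__(self, window=5, ratio=0.6):
        self.history = deque(maxlen=window)
        self.ratio = ratio

    def keep(self, frame):
        s = grad_score(frame)
        ok = not self.history or s >= self.ratio * np.mean(self.history)
        if ok:
            self.history.append(s)   # only accepted frames update the statistic
        return ok
```

Feeding a few textured frames establishes the running level; a subsequent flat (or heavily blurred) frame scores near zero and is rejected, without any keypoint extraction or pose information.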

In this paper, we introduce a new Eclipse-based IDE for teaching Java following the objects-later approach. In particular, this IDE allows the programmer to write code in Java--, a smaller version of the Java language that does not include object-oriented features. For the implementation of this language we used Xtext, an Eclipse framework for implementing Domain Specific Languages; besides the compiler mechanisms, Xtext also makes it easy to implement all the IDE tooling mechanisms in Eclipse. By using Xtext we were able to provide an implementation of Java-- with all the powerful features available in an IDE like Eclipse, including debugging, automatic building, and project wizards. With our implementation, it is also straightforward to create self-assessment exercises for students, integrated with Eclipse and JUnit.

Soccer is a team sport characterized by discontinuous physical effort, with a regular season lasting 10 months. Hydration status and water consumption are aspects of human performance debated in recent years, and it is well demonstrated that a reduction in total body water impairs endurance. Bioimpedance is a useful method to assess total body water; in addition, recent studies report a new approach to evaluating hydration status independently of body weight. The aim of this study was to determine changes in bioelectrical impedance throughout a soccer season. Bioelectrical parameters of an Italian professional football team were recorded eight times during a regular season. Measurements were carried out following the standard tetrapolar method. Twenty-five male soccer players underwent BIA measurement, but only eleven athletes took part in all eight measurement sessions. The data recorded by conventional BIA processing did not show any statistical differences in weight, hydration, or cellular masses. Bioelectrical Impedance Vector Analysis (BIVA) showed high significance in the ANOVA test for the values of Xc (p<0.01) and PA (p<0.001), with no difference in Rz among the eight measurements. Body composition and hydration status in footballers are generally good, and the variations in conventional BIA are minimal. Therefore, BIVA may give specific information on physiological changes due to training in this population. A regular bioimpedance assessment in athletes is desirable to follow adaptations to training loads.
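For reference, the BIVA quantities mentioned above (Rz, Xc, PA) relate through standard definitions: the vector components are resistance and reactance normalized by stature, and the phase angle is arctan(Xc/R). A small sketch with purely illustrative values:

```python
import math

def biva_point(r_ohm, xc_ohm, height_m):
    """Return the height-normalized BIVA coordinates (R/H, Xc/H in ohm/m)
    and the phase angle PA = arctan(Xc/R) in degrees. Input values below
    are illustrative, not data from the study."""
    pa_deg = math.degrees(math.atan2(xc_ohm, r_ohm))
    return r_ohm / height_m, xc_ohm / height_m, pa_deg

# e.g. R = 450 ohm, Xc = 55 ohm, stature 1.80 m (hypothetical athlete)
rh, xch, pa = biva_point(450.0, 55.0, 1.80)
```

Plotting (R/H, Xc/H) on the RXc graph is what lets BIVA track hydration shifts independently of body weight, since both coordinates depend only on impedance and stature.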

In cancer patients, visceral and subcutaneous fat is strongly related to an increase in comorbidities. Correction of dietary habits and Physical Exercise (PE) are means used to reduce metabolic risk factors, especially in these patients. Aerobic exercise has been well studied, but few data are available on its combination with resistance exercise. The aim of this study is to assess the effects of combined resistance and aerobic exercise, associated with correction of dietary habits, in reducing the major risk factors.