Cloud computing opens new possibilities for service provisioning in sensor networks, which is necessary as they become more pervasive and distributed. This paper introduces a Model-as-a-Service approach and a modeling language specifically defined for representing sensor network architectures, based on a four-phase method. It describes the metamodel and the corresponding graphical modeling environment, with examples of sensor network models for road traffic monitoring. The sensor modeling environment was integrated into a private cloud platform, within a virtual machine template, to provide sensor network modeling as a service, which is currently available to our university students. This serves as a foundation for delivering new services based on the interpretation of the resulting sensor network architecture models.

The management of natural and human-caused hazards brings together a large variety of stakeholders, heterogeneous collections of data, and systems that may not have been conceived for interoperability. The interdependency between hazards and the need for a coordinated response also make multi-hazard solutions necessary, resulting in highly complex systems. This paper presents a metamodeling approach for hazard management systems, and a specific modeling environment that considers the hazard, emergency, and geospatial views. The use of the model editor is exemplified on a system for early warning in case of accidental water pollution.

In the last decades, digital communications and network technologies have been growing rapidly, which
makes secure speech communication an important issue. Regardless of the purpose of the communication,
whether military, business, or personal, people want a high level of security during their conversations. In this
context, many voice encryption methods have been developed, based on cryptographic algorithms. One of the
major issues regarding these algorithms is identifying those that can ensure high throughput when dealing with
the reduced bandwidth of the communication channel. A solution is to use resource-constrained embedded
systems, because they are designed to consume few system resources while providing
very good performance. To fulfil all the strict requirements, hardware and software optimizations should be
performed, taking into consideration the complexity of the chosen algorithm, the mapping between the
selected architecture and the cryptographic algorithm, the selected arithmetic unit (floating point or fixed
point), and so on. The purpose of this paper is to compare and evaluate, based on several criteria, real-time
Digital Signal Processor (DSP) implementations of three voice encryption algorithms. The algorithms
fall into two categories: asymmetric ciphers (NTRU and RSA) and symmetric ciphers (AES). The
parameters considered for the comparison are encryption time, decryption time,
delay time, complexity, packet loss, and security level. All the previously mentioned algorithms were
implemented on Blackfin and TMS320C6x processors. Through hardware- and software-level optimizations,
we were able to reduce encryption, decryption, and delay times, as well as the energy consumed. The
goal is to determine which system hardware (DSP platform) and which encryption
algorithm is feasible, safe, and best suited for real-time voice encryption.
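The kind of per-frame timing comparison described above can be sketched in a few lines. The snippet below is purely illustrative: it times a toy XOR "cipher" standing in for the paper's NTRU/RSA/AES DSP implementations, and the frame size, key, and run count are arbitrary assumptions, not values from the paper.

```python
import time

def xor_cipher(frame: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' used only to illustrate latency measurement.
    NOT secure and NOT one of the algorithms evaluated in the paper."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(frame))

def measure_latency(cipher, frame: bytes, key: bytes, runs: int = 100) -> float:
    """Average time per frame, in milliseconds, over `runs` repetitions."""
    start = time.perf_counter()
    for _ in range(runs):
        cipher(frame, key)
    return (time.perf_counter() - start) * 1000.0 / runs

frame = bytes(range(256)) * 4          # one hypothetical 1024-byte voice frame
key = b"secret-key-material"
ciphertext = xor_cipher(frame, key)
recovered = xor_cipher(ciphertext, key)  # XOR encryption is its own inverse
latency_ms = measure_latency(xor_cipher, frame, key)
```

On a real DSP target, the same structure applies, but `time.perf_counter` would be replaced by a cycle counter and the frame size by the codec's actual frame length.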

This paper presents the design of an intelligent haptic robotic glove (IHRG) model for the rehabilitation of patients diagnosed with a cerebrovascular accident (CVA). Total or partial loss of range of motion, decreased reaction times, and disordered movement organization create deficits in motor control that affect the patient's independent living. The control system for a rehabilitation hand exoskeleton is discussed. One contribution is the use of a velocity observer and a force observer for performance evaluation. Disturbance effects are eliminated by a cascaded closed-loop control with velocity and force observers. The performance of the control system is demonstrated by simulation. The second proposed control implementation has a notable advantage: it accepts vocal commands, which help patients perform many of the medical exercises by themselves.

Several neighborhood strategies for QPSO algorithms are proposed and analyzed in order to improve the
performance of the original methods. The proposed strategies are applied to some of the best-known
QPSO algorithms, such as the QPSO with random mean, the QPSO with Gaussian attractor, and the
basic QPSO. To prevent premature convergence and to avoid being trapped in local minima, the
neighborhoods are dynamically changed during the optimization process. The efficiency of the
neighborhood techniques is tested on two benchmark optimization problems from electromagnetic field
computation: Loney's solenoid and TEAM22.
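One plausible way to realize a dynamically changing neighborhood is a ring topology whose radius grows over the iterations, so that particles follow a local best early on (exploration) and an increasingly global best later (exploitation). The sketch below illustrates that idea under stated assumptions; the ring topology, the linear radius schedule, and the sphere-like test fitness are hypothetical choices, not the schemes from the paper.

```python
def ring_neighbors(i: int, n: int, radius: int) -> list:
    """Indices in particle i's ring neighborhood of the given radius."""
    return [(i + d) % n for d in range(-radius, radius + 1)]

def local_best(fitness, positions, i: int, radius: int):
    """Best position among particle i's neighbors -- used in place of
    the swarm-wide global best when forming the QPSO attractor."""
    j = min(ring_neighbors(i, len(positions), radius),
            key=lambda k: fitness(positions[k]))
    return positions[j]

def radius_schedule(it: int, max_it: int, n: int) -> int:
    """Linearly grow the radius from 1 (local) to n//2 (whole ring),
    one hypothetical way to delay premature convergence."""
    return 1 + (n // 2 - 1) * it // max(1, max_it - 1)

# Minimal usage: 10 one-dimensional particles, optimum at x = 7.
positions = [[float(i)] for i in range(10)]
fitness = lambda x: (x[0] - 7.0) ** 2
attractor = local_best(fitness, positions, 0,
                       radius_schedule(99, 100, len(positions)))
```

At the final iteration the radius reaches `n // 2`, so the ring covers the whole swarm and the local best coincides with the global best.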

A large part of the latest research in speech coding and speech encryption algorithms is motivated by the
need for secure military communications that allow effective operation in a hostile environment.
Since the bandwidth of the communication channel is a sensitive problem in military applications, low-bitrate
speech compression methods and high-throughput encryption algorithms are mostly used. Several
speech encryption methods are characterized by very strict requirements on power consumption, size, and
supply voltage. These requirements are difficult to fulfill, given the complexity and number of functions to
be implemented, together with the real-time requirement and the large dynamic range of the input signals. To
meet these constraints, careful optimization should be done at all levels, ranging from the algorithmic level,
through system and circuit architecture, to layout and design of the cell library. The key points of this
optimization are, among others, the choice of the algorithms, the modification of the algorithms to reduce
computational complexity, the choice of a fixed-point arithmetic unit, the minimization of the number of
bits required at every node of the algorithm, and a careful match between algorithms and architecture. This
paper describes the performance analysis, on a Digital Signal Processor (DSP) platform, of some
recently proposed voice encryption algorithms, as well as the performance of stream ciphers such as Grain
v1, Trivium, and Mickey 2.0 (which are suited for real-time voice encryption). The algorithms were ported
onto a fixed-point DSP, the Blackfin 537, and stage-by-stage optimization was performed to meet the real-time
requirements. Memory optimization techniques such as data placement and caching were also used to
reduce the processing time. The goal was to determine which of the evaluated encryption algorithms is best
suited for real-time secure communications.
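Stream ciphers such as Grain v1, Trivium, and Mickey 2.0 are built around shift registers that generate a keystream which is XORed with the plaintext. The sketch below shows that general structure with a plain 16-bit Fibonacci LFSR; it is deliberately insecure and is not any of the ciphers named above, and the seed and tap positions are just the textbook maximal-length example.

```python
def lfsr_keystream(state: int, taps: tuple, nbits: int, length: int) -> list:
    """Generate `length` keystream bits from an `nbits`-wide Fibonacci
    LFSR. Illustrates the shift-register structure behind stream
    ciphers; a bare LFSR is NOT cryptographically secure."""
    out = []
    for _ in range(length):
        out.append(state & 1)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (nbits - 1))
    return out

# Classic 16-bit maximal-length taps (polynomial x^16 + x^14 + x^13 + x^11 + 1).
plain = [1, 0, 1, 1, 0, 0, 1, 0]               # a few plaintext bits
ks = lfsr_keystream(0xACE1, (0, 2, 3, 5), 16, len(plain))
cipher = [p ^ k for p, k in zip(plain, ks)]     # encrypt: XOR with keystream
recovered = [c ^ k for c, k in zip(cipher, ks)] # decrypt: XOR again
```

Real stream ciphers combine several such registers through nonlinear feedback and filtering functions, which is what makes them both secure and cheap enough for fixed-point DSPs.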

This paper studies two parallelization techniques, GPGPU and Pthreads for multiprocessor architectures, for
the implementation of an SPSO algorithm applied to the optimization of electromagnetic field devices. The GPGPU
and Pthreads implementations are compared in terms of solution quality and speedup. The electromagnetic
optimization problems chosen for testing the efficiency of the parallelization techniques are the TEAM22
benchmark problem and Loney's solenoid problem. As we show, there is no single best parallel
implementation strategy, since performance depends on the optimization function.
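The step that both parallelization strategies distribute is the fitness evaluation of the whole swarm. A minimal Python sketch of that idea is shown below; it uses a thread pool rather than Pthreads or CUDA, and a simple sphere function stands in for the TEAM22 and Loney's solenoid field computations, which are not reproduced here.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def sphere(x: list) -> float:
    """Stand-in objective; the paper's benchmarks require electromagnetic
    field computations far heavier than this, which is precisely why
    parallel evaluation pays off there."""
    return sum(xi * xi for xi in x)

def evaluate_swarm(fitness, swarm: list, workers: int = 4) -> list:
    """Evaluate every particle's fitness concurrently -- the portion of
    each PSO iteration that GPGPU and thread-based variants distribute."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, swarm))

random.seed(0)
swarm = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(16)]
scores = evaluate_swarm(sphere, swarm)
best = swarm[scores.index(min(scores))]
```

For CPU-bound objectives written in pure Python, a `ProcessPoolExecutor` (or a GPU kernel, as in the paper) would be needed for real speedup; the thread-pool version only illustrates the structure.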

In this Doctoral Consortium paper, general-purpose MUVEs are re-discussed in the context of mixing a virtual online 3D campus with real-time activities within communities of practice. In our research we study and implement gamification and visual learning analytics to support a more creative and adaptive instructional design, stimulate the intrinsic and extrinsic motivation of both students and teachers, and measure performance and usage indicators. The paper presents the research problem and details the general objectives, state-of-the-art approaches, methodology, and expected results, according to the initial research hypothesis.

New advances in apertureless Scanning Near-field Optical Microscopy (a-SNOM) are presented in fields such as materials science and biology. Together with experimental data, the oscillating point-dipole model (OPDM) is used for signal analysis, for analyzing the influence of operating parameters, and for quantitative electric permittivity measurements with nanoscale resolution.

Autocorrelation is often used in signal processing as a tool for finding repeating patterns in a signal. In
image processing, various image analysis techniques use the autocorrelation of an image for a
broad range of applications, from texture analysis to grain density estimation. In this paper, a novel approach
to capturing the autocorrelation of an image is proposed. More precisely, the autocorrelation is recorded in
a set of features obtained by comparing pairs of patches from an image. Each feature stores the Euclidean
distance between a particular pair of patches. Although patches contain contextual information and have
advantages in terms of generalization, most patch-based techniques used in image processing are
computationally expensive on current machines. Therefore, patches are selected using a dense grid over the image to reduce
the number of features. This approach is termed Patch Autocorrelation Features (PAF). The proposed approach
is evaluated in a series of handwritten digit recognition experiments using the popular MNIST data set. The
Patch Autocorrelation Features are compared with the Euclidean distance using two classification systems,
namely k-Nearest Neighbors and Support Vector Machines. The empirical results show that the feature
map proposed in this work is always more accurate than a feature representation based on raw pixel values.
Furthermore, the results obtained with PAF are comparable to other state-of-the-art methods.
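The feature construction described above can be sketched as follows: extract patches on a dense grid and store the Euclidean distance for every pair. This is only an illustration of the idea on a tiny binary image; the patch size, stride, and exhaustive pair enumeration are assumptions, not necessarily the paper's exact grid and pair selection.

```python
import math

def extract_patches(img: list, patch: int, stride: int) -> list:
    """Flattened patch-by-patch pixel vectors on a dense grid (row-major)."""
    h, w = len(img), len(img[0])
    patches = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            patches.append([img[r + i][c + j]
                            for i in range(patch) for j in range(patch)])
    return patches

def paf(img: list, patch: int = 2, stride: int = 2) -> list:
    """Patch Autocorrelation Features sketch: the Euclidean distance
    between every pair of grid patches becomes one feature."""
    ps = extract_patches(img, patch, stride)
    return [math.dist(ps[a], ps[b])
            for a in range(len(ps)) for b in range(a + 1, len(ps))]

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
features = paf(img)   # 4 grid patches -> C(4,2) = 6 pairwise features
```

Identical patches yield a distance of 0, so repeating structure in the image shows up directly as small feature values, which is what lets the feature map capture autocorrelation.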