Neuromorphic computing aims to build digital or analog computer systems that emulate or simulate the biological brain, in order to achieve high performance and low power consumption for intelligent information processing applications. This article reviews neuromorphic computing based on Spiking neural networks (SNNs), covering its history of development, common neuron models, major research projects, neuromorphic sensors, and applications in brain-computer interfaces.

An interval-valued information system is a kind of knowledge representation model for uncertain information. An interval-valued attribute has an expectation derived from experience or background knowledge, called a cognitive expectation. Few studies have addressed interval-valued attributes with cognitive expectations. We propose the concept of an Interval-valued decision system with expectations (IDSE). A new dominance relation based on the distances between expectations and interval values is constructed. Based on this dominance relation, a rough set model for IDSE is investigated. Attribute reduction in IDSE is also examined by using discernibility matrices and discernibility functions.
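The distance-based dominance relation can be illustrated with a small sketch. The abstract does not specify the exact distance metric, so the root-mean-square distance over interval endpoints used below, and the "closer on every attribute" form of dominance, are illustrative assumptions rather than the paper's definitions.

```python
import math

def interval_expectation_distance(interval, expectation):
    """Distance between an interval value [a, b] and a scalar cognitive
    expectation e. The paper's exact metric is not given here; this
    root-mean-square form over the endpoints is an illustrative choice."""
    a, b = interval
    return math.sqrt(((a - expectation) ** 2 + (b - expectation) ** 2) / 2)

def dominates(x, y, expectations):
    """Object x dominates y if, on every attribute, x's interval value is at
    least as close to that attribute's expectation as y's (assumed form)."""
    return all(
        interval_expectation_distance(xi, e) <= interval_expectation_distance(yi, e)
        for xi, yi, e in zip(x, y, expectations)
    )

# Two objects described by two interval-valued attributes, with expectations.
x = [(4.0, 6.0), (1.0, 3.0)]
y = [(2.0, 9.0), (0.0, 5.0)]
expectations = [5.0, 2.0]
```

Here `x` dominates `y` because its intervals are closer to both expectations; a rough set model would then build approximations from the dominance classes induced by this relation.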

Research on cyborg intelligent insects requires an experiment platform that can collect data from distributed sensing modules and process the multi-modal signals in real time. To meet these requirements, we present a novel data model, named the Hybrid-synchronized-realtime (HSR) data model, which synchronizes the hybrid raw data channels by a metadata integration method and processes them with a fixed-priority scheduling algorithm. Real animal experiments show that the insect-machine experiment platform based on the HSR data model fully satisfies the requirements of cyborg intelligent insect research, especially in dealing with the technical challenges of data synchronization, real-time processing, and hybrid data integration. It provides an efficient approach to implementing the experiment platform and thus aids frontier research on cyborg insects.

A rat cyborg is a rat that can receive cerebral commands and act as a human operator intends. The navigation task is a widely used scenario in which the rat cyborg is required to walk along a specified path according to commands. The commands are usually given by human operators, but there is a need for automatic control, which is quite challenging since several modules need to be elaborately combined to work as a whole; the monitoring system and the command model are two major components. Previously, few attempts were made at behavior monitoring of rat cyborgs. Existing works often implement an over-simple rat information extractor capable of giving only a few parameters of the rat cyborg, which greatly limits the accuracy of automatic control. In response to this requirement, we develop a monitoring system that gives detailed motion parameters and accurate body postures of the rat cyborg. We explore the possibility of recognizing rat postures using shape information, described by the powerful Zernike moments. We also propose several simple shape descriptors that are fast to compute and achieve acceptable performance.

Most algorithms for training restricted Boltzmann machines (RBM) are based on Gibbs sampling. When a sampling algorithm is used to calculate the gradient, the sampled gradient is only an approximation of the true gradient, and the error between them can be large, which seriously affects the training of the network. Aiming at this problem, this paper analyses the numerical error and the orientation error between the approximate gradient and the true gradient, and examines their influence on network training. A gradient fixing model is established, designed to adjust the numerical value and orientation of the approximate gradient and reduce the error. We also design a gradient fixing based Gibbs sampling training algorithm (GFGS) and a gradient fixing based parallel tempering algorithm (GFPT), and compare the novel algorithms with existing ones. It is demonstrated that the new algorithms can effectively tackle the issue of gradient error and achieve higher training accuracy at a reasonable expense of computational runtime.
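The gradient error being analyzed can be made concrete on a toy RBM, where the model expectation in the gradient can be computed exactly by enumeration and compared with its one-step Gibbs (CD-1) estimate. This sketch only illustrates the sampling error; it is not the paper's gradient fixing model, and the tiny bias-free RBM is an assumption made for brevity.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
nv, nh = 3, 2
W = rng.normal(scale=0.1, size=(nv, nh))  # toy RBM weights, biases omitted

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def exact_model_expectation(W):
    """Exact E_model[v h^T] by enumerating all 2^(nv+nh) states (feasible only
    for a toy RBM); this is the intractable term that Gibbs sampling approximates."""
    probs, exps = [], []
    for s in itertools.product([0, 1], repeat=nv + nh):
        v, h = np.array(s[:nv], float), np.array(s[nv:], float)
        probs.append(np.exp(v @ W @ h))
        exps.append(np.outer(v, h))
    probs = np.array(probs) / np.sum(probs)
    return sum(p * e for p, e in zip(probs, exps))

def cd1_model_expectation(W, v0, rng):
    """One-step Gibbs (CD-1) estimate of the same expectation."""
    h0 = (rng.random(nh) < sigmoid(v0 @ W)).astype(float)
    v1 = (rng.random(nv) < sigmoid(W @ h0)).astype(float)
    h1 = sigmoid(v1 @ W)
    return np.outer(v1, h1)

v0 = np.array([1.0, 0.0, 1.0])
approx = np.mean([cd1_model_expectation(W, v0, rng) for _ in range(2000)], axis=0)
exact = exact_model_expectation(W)
error = np.abs(approx - exact).max()  # the kind of gradient error the paper targets
```

Even averaged over many chains, the CD-1 estimate is biased toward the data, which is exactly the numerical/orientation error that a gradient fixing step would correct.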

Network function virtualization (NFV) and Service function chaining (SFC) can fulfill traditional network functions by simply running specialized software on general-purpose computer servers and switches. This not only provides significantly more agility and flexibility in network service deployment, but can also greatly reduce the capital and operating costs of networks. In this paper, a comprehensive survey of the motivations and state-of-the-art efforts towards implementing NFV and SFC is provided. In particular, the paper first presents the main concepts of these emerging technologies, then discusses in detail the various stages of SFC, including the description, composition, placement, and scheduling of service chains. Afterwards, existing approaches to SFC are reviewed according to their application environments, the parameters used, and their solution strategies. Finally, the paper points out a number of future research directions.

Quantum image processing lies at the intersection of quantum computation and image processing. As a newly emerging field, it presents researchers not only with great opportunities but also with many challenges in developing more efficient and practicable services. We provide a comprehensive survey of quantum image processing that gathers the current mainstream work and discusses the advances made in the area, including quantum image representations, processing algorithms, and image measurement. Moreover, some open challenges and future directions are pointed out to attract continued research efforts.

Impossible differential cryptanalysis is one of the most powerful attacks against modern block ciphers. In most cases, the resistance of a block cipher against impossible differential cryptanalysis is measured by the length of its longest impossible differentials. By taking a closer look at the round function, we present a new method to find longer impossible differentials of word-oriented generalized Feistel structures. We establish the existence of impossible differentials from the nonzero points of the XOR-ed masked differences in the middle round. This method uses the differential style and its nonzero points to find impossible differentials, which is much easier than the classical impossible differential search. By applying our method, we find several longest impossible differentials of some famous block cipher structures with SP (Substitution-permutation) round functions. If extra conditions on the round function are taken into consideration (e.g., the permutation layer is designed as a binary matrix or some sparse matrix), even longer impossible differentials can be obtained by our method.

We study trace codes with defining set L, a subgroup of the multiplicative group of an extension of degree m of a certain ring of order 27. These codes are abelian, and their ternary images are quasi-cyclic of coindex three (a.k.a. cubic codes). Their Lee weight distributions are computed by using Gauss sums. These codes have three nonzero weights when m is singly-even. When m is odd, under some hypotheses on the size of L, we obtain two new infinite families of two-weight codes which are optimal. Applications of the image codes to secret sharing schemes are also given.

In this paper, we construct the twists of twisted Edwards curves in Weierstrass form. We then define a new twisted Ate pairing on twisted Weierstrass curves, named the Tx-Ate pairing. Following Miller's algorithm, we give a computation of the Tx-Ate pairing on high-degree twisted Weierstrass curves, where the point operations are carried out in Edwards form and the Miller function is computed in Weierstrass form. Although, in one doubling loop, our method of computing the Tx-Ate pairing is a little slower than the previously fastest method, by using twists the Tx-Ate pairing can be calculated on more twisted Weierstrass curves with a short loop length. The Tx-Ate pairing is even competitive with the optimal Ate pairing when they have the same short loop length.

The disclosure of sensitive contents hidden in trajectories may jeopardize individuals' privacy. Privacy preserving technologies for trajectories face the challenge of taking the spatiotemporal structure of mobile data into consideration. Traditional trajectory privacy preserving techniques focus chiefly on partial structure while ignoring the global features of trajectories, which tends to cause excessive distortion of the spatiotemporal structure and may give rise to low data utility. Moving behavior has an innate sparseness feature. We therefore propose a trajectory privacy preserving method based on feature maintenance, introducing the low-rank and sparse decomposition technique for large matrices. The proposed method iteratively decomposes the feature matrix until the rank stabilizes, so as to extract the primary features and eliminate the private parts. The released trajectories are reconstructed by perturbing the final refined feature matrix with a series of low-rank components generated in the decomposition procedure. Experimental results on a real-world dataset verify that the proposed method has low information loss on large-scale data.
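The core low-rank plus sparse split can be sketched as follows. This is an illustrative stand-in for the paper's iterative decomposition, using simple alternating singular-value and soft thresholding; the threshold choice and stopping rule are assumptions, and a production solver would use a robust PCA method such as inexact ALM.

```python
import numpy as np

def low_rank_sparse_decompose(M, lam=0.1, n_iter=50):
    """Split M into a low-rank feature part L and a sparse part S by
    alternating shrinkage (illustrative sketch, not the paper's exact scheme)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    tau = lam * np.linalg.norm(M)  # assumed threshold, proportional to ||M||_F
    for _ in range(n_iter):
        # singular value thresholding -> low-rank component
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - tau, 0)) @ Vt
        # entrywise soft thresholding -> sparse component
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - tau, 0)
    return L, S

rng = np.random.default_rng(1)
# synthetic "trajectory feature matrix": rank-2 structure plus sparse outliers
base = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))
outliers = np.zeros((40, 30))
outliers[rng.random((40, 30)) < 0.05] = 5.0
L, S = low_rank_sparse_decompose(base + outliers, lam=0.01)
```

In the privacy setting, the low-rank part `L` plays the role of the primary movement feature to be kept, while the sparse residual `S` captures the individual, potentially private deviations.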

Segmentation, named entity recognition, and parsing are standalone techniques in the natural language processing community, and their annotations are inconsistent. However, joint output is needed in some practical uses, and the tasks rely on each other's results to produce more consistent output. A unified model is learned to resolve these three tasks simultaneously. At the training stage, the joint annotations of the three tasks are employed to learn a unified model. At the decoding stage, the three tasks are carried out on a given text to provide a consistent output. Experimental results demonstrate higher performance on each task and verify the benefits of the unified framework.

A literature recommendation algorithm based on a heterogeneous information network is proposed. The proposed algorithm can process different types of semantic information and implicit feedback. The experimental results show that the proposed algorithm provides more effective recommendations than algorithms that do not employ such semantic information and implicit feedback.

In the process of software testing, designing test cases requires a great deal of manpower and time. Test cases of high similarity can be modified to reduce the workload of test case design and improve testing efficiency. A test case reuse method based on function calling paths is proposed. According to the changed functions and their correlated functions, it determines the changed paths to be tested. It then selects, from the original function calling path set, the function calling paths that have a high degree of similarity with the changed paths. The collection of reusable test cases is determined according to the mapping relationship between function calling paths and test cases, and the reusable test cases are modified to cover the changed paths. Experimental results validate the effectiveness of the proposed test case reuse method, which further reduces the workload of designing test cases and the cost of regression testing.
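The path-similarity selection step can be sketched as follows. The paper's actual similarity measure is not specified in the abstract; the longest-common-subsequence style ratio from `difflib` and the threshold value below are illustrative assumptions.

```python
from difflib import SequenceMatcher

def path_similarity(path_a, path_b):
    """Similarity between two function calling paths, measured here with a
    longest-common-subsequence style ratio (assumed metric, not the paper's)."""
    return SequenceMatcher(None, path_a, path_b).ratio()

def select_reusable_paths(changed_path, original_paths, threshold=0.6):
    """Pick original paths similar enough to the changed path; the test cases
    mapped to them are candidates for reuse after modification."""
    return [p for p in original_paths if path_similarity(changed_path, p) >= threshold]

changed = ["main", "parse", "validate", "save"]
originals = [
    ["main", "parse", "validate", "log", "save"],   # similar: mostly shared calls
    ["main", "init", "shutdown"],                   # dissimilar path
]
candidates = select_reusable_paths(changed, originals)
```

Only the first original path clears the threshold, so only the test cases mapped to it would be modified to cover the changed path.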

Large-scale programs usually imply many programming rules that are missing from the specification documents. If programmers violate these rules during programming, software defects may be introduced. Previous works on mining function call correlation patterns only use the structural information of the program, while control flow, data flow, and other semantic information are not exploited. As a result, the defect detecting ability is restricted and a high false positive rate is caused. This paper proposes a defect detection method based on mining function call association rules from program paths, which can be obtained by simple static analysis. The programs are then automatically checked against the function call association rules to detect suspicious defects. Based on this approach, experiments are carried out on a group of open source projects. The experimental results show that this approach can improve the capability of detecting defects and find more bugs related to program execution paths. In addition, the number of false positive function call patterns and the overhead of manually validating suspicious defects are reduced.
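The mine-then-check idea can be sketched with pairwise rules over toy paths. This is a simplified illustration with assumed support and confidence thresholds; the paper's method mines rules from statically derived program paths and may consider larger patterns than pairs.

```python
from collections import Counter
from itertools import combinations

def mine_call_rules(paths, min_support=2, min_conf=0.7):
    """Mine pairwise rules f -> g from program paths: when f appears on a
    path, g usually appears too (simplified sketch of path-based mining)."""
    single, pair = Counter(), Counter()
    for path in paths:
        funcs = set(path)
        single.update(funcs)
        pair.update(frozenset(p) for p in combinations(sorted(funcs), 2))
    rules = []
    for fg, n in pair.items():
        if n < min_support:
            continue
        f, g = sorted(fg)
        if n / single[f] >= min_conf:
            rules.append((f, g, n / single[f]))
        if n / single[g] >= min_conf:
            rules.append((g, f, n / single[g]))
    return rules

def find_violations(paths, rules):
    """Paths containing an antecedent but not its consequent are suspicious."""
    return [(i, f, g) for i, p in enumerate(paths)
            for f, g, _ in rules if f in p and g not in p]

paths = [
    ["lock", "read", "unlock"],
    ["lock", "write", "unlock"],
    ["lock", "unlock"],
    ["lock", "read"],        # suspicious: lock without unlock
    ["open", "close"],
]
rules = mine_call_rules(paths)
violations = find_violations(paths, rules)
```

The mined rule lock -> unlock flags the fourth path, which acquires the lock but never releases it, as a candidate defect.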

Taking the technical requirements for the dynamicity and flexibility of software systems in open network environments as its starting point, this paper puts forward a novel dynamic component model named the SoftMan component (SMC), which is well-formed and evolvable. A distributed multi-task collaboration mechanism based on game theory in the SoftMan component dynamic evolution system is then put forward, in order to maximize the utilization of computing resources and resolve the multi-task collaboration problem. Three kinds of experiments are designed to test the feasibility, robustness, and performance of the distributed multi-task collaboration mechanism. Results show that the mechanism achieves a higher success rate and accuracy with good algorithmic stability and reliability.

The performance of Electronic fuel injection (EFI) systems has been significantly improved owing to enhancements in materials and components. As a result, it is very difficult to obtain enough time-to-failure data within a limited time. This poses a big challenge to traditional reliability evaluations of EFI systems, since such evaluations require abundant failure data. To resolve the problem, this paper proposes a new method. By analyzing the working principle and failure mechanism of an EFI system, the extraction method and degradation analysis of the system parameters are first carried out. Then, a temperature stress degradation model based on linear regression of the degradation data distribution is presented. Experimental results show that EFI system performance reliability can be effectively evaluated. This can provide a theoretical basis for overall performance estimation and proactive maintenance of automotive ECUs.
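The degradation-data idea, estimating reliability without observed failures, can be sketched minimally: fit a linear degradation path and extrapolate a pseudo-failure time. The numbers and the failure threshold below are synthetic assumptions; the paper's model additionally conditions the regression on temperature stress.

```python
import numpy as np

def pseudo_failure_time(times, degradation, threshold):
    """Fit a linear degradation model d(t) = a + b*t and extrapolate the time
    at which the monitored parameter crosses the failure threshold. A minimal
    sketch of degradation-based reliability analysis."""
    b, a = np.polyfit(times, degradation, 1)   # slope b, intercept a
    return (threshold - a) / b

# synthetic injector-parameter drift measured at increasing hours of operation
t = np.array([0, 100, 200, 300, 400], float)
d = np.array([1.0, 1.21, 1.39, 1.62, 1.80])
tf = pseudo_failure_time(t, d, threshold=3.0)  # extrapolated pseudo-failure hour
```

Collecting such pseudo-failure times across units and stress levels then yields a lifetime distribution even when no unit has actually failed during the test.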

Currently, almost all color image encryption/decryption algorithms are designed for classical computers, in which the key space is relatively small and the huge gains from quantum parallelism are not obtained. To address this problem, we propose a novel color image encryption/decryption method based on random rotation of qubits and the Quantum Fourier transform (QFT). First, the color image is represented in a quantum superposition state |Image〉, in which the color information of each pixel is described by only one qubit |c〉. Then, each |c〉 is randomly rotated on the Bloch sphere about the three coordinate axes, and the QFT is performed on |Image〉. Each |c〉 is then randomly rotated on the Bloch sphere once more and the inverse QFT is performed on |Image〉, which completes the encryption process. The keys are the rotation angles of the two above-mentioned rotations. Decryption is the inverse process of encryption. Our method may run on a quantum computer in the future. Simulation results on a classical computer show that our approach offers better security.
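The rotate-transform-invert structure can be simulated classically on a toy scale. This sketch simplifies the scheme deliberately: one rotation round instead of two, rotations about only two axes, and a unitary DFT matrix standing in for the QFT on the pixel register; the point is just that keyed unitary steps are exactly reversible.

```python
import numpy as np

rng = np.random.default_rng(42)

def rx(theta):
    """Single-qubit rotation about the Bloch-sphere x-axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def rz(theta):
    """Single-qubit rotation about the Bloch-sphere z-axis."""
    return np.array([[np.exp(-1j * theta / 2), 0], [0, np.exp(1j * theta / 2)]])

# Each pixel's color qubit |c> = cos(a)|0> + sin(a)|1>; toy 4-pixel image.
angles = np.array([0.1, 0.7, 1.2, 0.4])
qubits = [np.array([np.cos(a), np.sin(a)], complex) for a in angles]

# Encryption: keyed random rotations per qubit, then a unitary DFT across
# the pixel register (classical stand-in for the QFT).
keys = rng.uniform(0, 2 * np.pi, size=(len(qubits), 2))
rotated = [rz(k2) @ rx(k1) @ q for q, (k1, k2) in zip(qubits, keys)]
F = np.fft.fft(np.eye(len(qubits)), norm="ortho")  # unitary DFT matrix
cipher = F @ np.array(rotated)

# Decryption reverses each unitary step with the same keys.
derotated = F.conj().T @ cipher
recovered = [rx(-k1) @ rz(-k2) @ q for q, (k1, k2) in zip(derotated, keys)]
err = max(np.abs(r - q).max() for r, q in zip(recovered, qubits))
```

Because every step is unitary, applying the inverses in reverse order with the correct rotation-angle keys recovers the original qubit states to machine precision.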

Moving planets occupy only a few pixels, which leads to a lack of sufficient image features when tracking them. We therefore propose a moving planet detection and tracking algorithm based on the Gestalt principle. We construct a Gaussian mixture model to detect the motion area from the visual cognition perspective, and use the graphical characteristics of astronomical images to confirm the planet positions. We then propose a space-time fusion model for tracking planets. All experiments use 1000 16-bit frame images from a wide-view CCD camera, containing 5684 moving planets in total. The results show that our algorithm reaches an accuracy of 94% and is robust.

This paper addresses the problem of generating a high-resolution image from a low-resolution image. Many dictionary-based methods have been proposed and have achieved great success in super resolution applications. Most of these methods use small patches as dictionary atoms and utilize a unified dictionary pair to reconstruct each patch, which may limit super resolution performance. We use large patches instead of small ones to compose a dictionary and to conduct patch reconstruction. Since a large patch contains more meaningful information than a small one, the reconstruction result may have more high-frequency details. To guarantee the completeness of a dictionary with large patches, the scale of the dictionary should be large as well. To handle the storage and calculation problems with large dictionaries, we adopt a binary encoding method that preserves the local information of patches. For each patch in the low-resolution image, we search for its similar patches in the low-resolution dictionary to obtain a sub-dictionary, and compute its sparse representation to get the corresponding high-resolution version. A global reconstruction constraint is enforced to eliminate the discrepancy between the SR result and the ground truth. Experimental results demonstrate that our method outperforms other super resolution methods, especially when the magnification factor is large or the image is corrupted by white Gaussian noise.
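The per-patch pipeline, search a sub-dictionary, solve for coefficients, map to the coupled high-resolution atoms, can be sketched as follows. The Euclidean nearest-atom search and least-squares coefficients below are simplifying assumptions: the paper performs the search with binary codes and adds a global reconstruction constraint.

```python
import numpy as np

rng = np.random.default_rng(7)

def reconstruct_patch(lr_patch, D_lr, D_hr, k=5):
    """Sparse-style reconstruction of one high-resolution patch: pick the k
    most similar atoms from the low-resolution dictionary (a sub-dictionary),
    fit coefficients by least squares, and apply them to the coupled
    high-resolution atoms (illustrative sketch)."""
    dists = np.linalg.norm(D_lr - lr_patch, axis=1)
    idx = np.argsort(dists)[:k]                  # sub-dictionary by similarity
    coef, *_ = np.linalg.lstsq(D_lr[idx].T, lr_patch, rcond=None)
    return coef @ D_hr[idx]

# toy coupled dictionaries: HR atoms and their crudely downsampled LR versions
D_hr = rng.normal(size=(50, 16))                 # 4x4 HR patches, flattened
D_lr = D_hr[:, ::2]                              # 8-dim "low-resolution" atoms
lr = 0.9 * D_lr[3] + 0.1 * D_lr[10]              # LR patch made from two atoms
hr = reconstruct_patch(lr, D_lr, D_hr, k=5)
```

Because the same coefficients are transferred from the low- to the high-resolution atoms, structure learned in the coupled dictionary pair carries the missing high-frequency detail into the output patch.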

Dynamic frame fusion based on the hybrid DSm model is an important problem in information fusion, but traditional combination rules mainly assume a fixed discernment frame (the Shafer model and the free DSm model), corresponding to the static case. A new method for dynamic proportional conflict redistribution rules (dynamic PCR rules) based on the hybrid DSm model is proposed to address the shortcomings of the classical dynamic PCR rules. In the new dynamic PCR rule, combinations involving the empty set are treated as one kind in order to obtain more reasonable results. For the redistribution weights, both the conjunctive Basic belief assignment (BBA) and the conflict redistribution BBA are taken into account to raise the fusion precision. The effectiveness of the revised dynamic PCR rule is studied and simulated in terms of both fusion accuracy and computational cost.

One of the key issues in noisy speech enhancement is to find appropriate statistical distributions that accurately model the clean speech and noise signals. Most existing algorithms employ a single model assumption in the transform domain, which, however, has been shown to be contrary to the facts. To address this problem, the statistical properties of clean speech and several noise signals are analyzed using actual data in the Discrete cosine transform (DCT) domain; the study indicates that the statistics of clean speech DCT coefficients tend to fall somewhere between the Gaussian and the Laplacian distribution. Based on these results, a novel speech enhancement algorithm is proposed using a Gaussian-Laplacian combination model, whose core is a linear combination of Gaussian and Laplacian distributions to model the statistics of clean speech DCT coefficients. The weights of the two distributions in the combination model are adaptively adjusted according to the probability of each hypothesis, estimated by a soft decision technique using Bayes' theorem. Through a number of objective and subjective tests comparing the proposed algorithm with other recent model-based approaches, we find that our algorithm is superior to the related approaches in all testing environments.
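The soft-decision weight update can be sketched for a single DCT coefficient. The equal-prior, equal-variance setup below is an illustrative assumption; the paper's estimator works adaptively on speech data, but the Bayesian form of the weight is the same idea.

```python
import numpy as np

def gaussian_pdf(x, var):
    return np.exp(-x * x / (2 * var)) / np.sqrt(2 * np.pi * var)

def laplacian_pdf(x, var):
    b = np.sqrt(var / 2)  # Laplacian scale matching the same variance
    return np.exp(-np.abs(x) / b) / (2 * b)

def combination_weight(x, var, prior=0.5):
    """Posterior probability of the Gaussian hypothesis for one DCT
    coefficient via Bayes' theorem; the combination model is then
    w*N(0,var) + (1-w)*Laplace(0,var) (sketch of the soft-decision update)."""
    g = prior * gaussian_pdf(x, var)
    l = (1 - prior) * laplacian_pdf(x, var)
    return g / (g + l)

def combined_pdf(x, var, w):
    return w * gaussian_pdf(x, var) + (1 - w) * laplacian_pdf(x, var)

w_small = combination_weight(0.1, var=1.0)  # near zero: Laplacian peak favored
w_mid = combination_weight(1.0, var=1.0)    # moderate value: Gaussian favored
```

Small-magnitude coefficients pull the weight toward the heavier-peaked Laplacian, while mid-range coefficients pull it toward the Gaussian, which is how the combination adapts between the two distributions.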

Reproduction of focused sound sources is a fundamental problem in acoustic signal processing. Based on the cylindrical harmonic expansion of higher order Ambisonics (HOA), a novel method for focused sound source reproduction is proposed. To restrict the near-field distortion caused by the higher order harmonic components of focused sound sources, a Symmetry sigmoid regularization function (SSRF) is designed, which exhibits a more suitable attenuation characteristic. Using the SSRF, we first give the generalized cylindrical harmonic expansion of focused sound sources. The concept of a continuous loudspeaker over a circular array is then utilized to generate the discrete loudspeaker driving function. Based on a wideband error analysis, the matching relationship between the SSRF and the parameter pair (i.e., the operating frequency and the expansion order) is established. Objective evaluation results show that the near-field distortion is significantly reduced by the proposed method, and the reproduction errors of the interior/exterior sound field are evidently lower than those of state-of-the-art techniques. Subjective evaluations also confirm that the proposed approach achieves higher reproduction quality than the reference methods.

A reliable and precise identification of the differentially expressed genes of a tumor is crucial for treating the cancer effectively. The small number of differentially expressed genes in a huge gene expression dataset determines the important role of sparse methods, such as Penalty matrix decomposition (PMD), among feature selection methods. However, these sparse methods share a drawback: they do not take advantage of the known class labels of gene expression data. A novel supervised sparse method named Supervised PMD (SPMD) is proposed by adding the class information into PMD via the total scatter matrix. The idea of our method for selecting the differentially expressed genes is as follows. The total scatter matrix is obtained from the gene expression data with class labels. The obtained total scatter matrix is decomposed by PMD to acquire the sparse vectors. The non-zero items in the sparse vectors are selected as the differentially expressed genes. The Gene ontology (GO) enrichment of the functional annotations of the selected genes is detected by ToppFun. Experiments on synthetic data and two real tumor gene expression datasets show that the proposed SPMD is quite promising for selecting the differentially expressed genes.
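The scatter-then-sparse-decompose pipeline can be sketched on synthetic data. The hard truncation used below is a simplified stand-in for PMD's L1-constrained soft thresholding, and the sparsity level is an assumed parameter; the selection principle, non-zero entries of the sparse factor mark the genes, is as described.

```python
import numpy as np

def total_scatter(X):
    """Total scatter matrix S_t = sum_i (x_i - mean)(x_i - mean)^T computed
    from the expression samples (rows of X)."""
    Xc = X - X.mean(axis=0)
    return Xc.T @ Xc

def sparse_rank1(S, c=3, n_iter=50):
    """Sparse rank-1 factor of S: power iteration with hard truncation to the
    c largest-magnitude entries (simplified stand-in for PMD). Non-zero
    entries mark the selected differentially expressed genes."""
    v = np.ones(S.shape[0]) / np.sqrt(S.shape[0])
    for _ in range(n_iter):
        v = S @ v
        keep = np.argsort(np.abs(v))[-c:]
        mask = np.zeros_like(v)
        mask[keep] = v[keep]
        v = mask / (np.linalg.norm(mask) + 1e-12)
    return v

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 10))     # 30 samples, 10 genes
X[:15, [2, 7]] += 4.0             # genes 2 and 7 differ between the two classes
v = sparse_rank1(total_scatter(X))
selected = set(np.nonzero(v)[0])  # indices of the selected genes
```

The two genes carrying the class difference dominate the scatter matrix, so they survive the truncation and are returned as the differentially expressed set.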

Creating effective features is a critical issue in malware analysis; it requires a proper tradeoff between discriminative power and invariance. Previous studies have shown that it is fairly effective to design features based on the binary code. However, existing binary-based features seldom take into consideration the problem of obfuscation, such as relocated sections, incomplete code, and redundant operations. In this paper, we propose a novel Pairwise rotation invariant co-occurrence local binary pattern (PRICoLBP) feature, and further extend it to incorporate the Term frequency-inverse document frequency (TFIDF) transform. Different from other static analysis techniques, our method not only achieves better linear separability, but also appears to be more resilient to obfuscation. In addition, we evaluate PRICoLBP-TFIDF comprehensively on three datasets from different perspectives, e.g., classification performance, classifier selection, and performance against obfuscation. Furthermore, we compare our PRICoLBP-TFIDF method with other techniques and demonstrate that it achieves quite an efficient and effective tradeoff between discriminative power and invariance.

A physical layer security scheme based on cooperative relays is proposed to address the problem of eavesdropping on the satellite downlink, which is induced by the broadcast nature of the shared communication medium and the long distance of satellite-ground transmissions. By evaluating the outage probability, the relay nodes participating in cooperative communication are divided into a cooperative forwarding group and a cooperative jamming group. The two groups construct a cascaded channel which, combined with a random weighting technique, improves the quality of transmission on the main channel. The cooperative jamming group uses the null space of the main channel to transmit artificial noise, which does not affect the quality of the main channel but reduces the quality of the eavesdropping channel. Simulation results show that the proposed anti-eavesdropping scheme can maximize the quality difference between the main channel and the eavesdropping channels, and thereby may significantly improve the secrecy capacity of satellite downlink systems.
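The null-space artificial noise idea can be verified numerically in a few lines. The 4-antenna Rayleigh-style toy channels below are assumptions purely for illustration; the point is that noise confined to the null space of the main channel vanishes at the legitimate receiver but not at the eavesdropper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy channels: jamming group (4 antennas) to the legitimate ground user,
# and an independent channel to the eavesdropper.
h_main = rng.normal(size=4) + 1j * rng.normal(size=4)
h_eve = rng.normal(size=4) + 1j * rng.normal(size=4)

def null_space_noise(h, rng):
    """Artificial noise confined to the null space of the main channel h, so
    the legitimate receiver sees (almost) nothing while any non-orthogonal
    eavesdropper channel is degraded."""
    # orthonormal basis of the null space of h (treated as a 1x4 matrix)
    _, _, Vh = np.linalg.svd(h.reshape(1, -1))
    null_basis = Vh[1:].conj().T          # 4x3 basis orthogonal to h
    z = rng.normal(size=3) + 1j * rng.normal(size=3)
    return null_basis @ z

w = null_space_noise(h_main, rng)
leak_main = abs(h_main @ w)          # ~0: main channel unaffected
leak_eve = abs(h_eve @ w)            # generally nonzero: eavesdropper jammed
```

Since the eavesdropper's channel is almost surely not orthogonal to the noise subspace, its received interference is nonzero, which is what widens the quality gap between the two channels.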

When information is exchanged and transmitted among group members in a mobile cloud environment, the members may be distributed across different security domains, and the information exchanged may have different secrecy levels. When a person holds some secret information, he may want to share it only with those who have the appropriate security permission level, rather than with all members of the group. Aiming at these needs, we propose a flexible Asymmetric group key agreement (AGKA) protocol in which information exchange and transmission are target-specific. The protocol adopts bilinear mapping and two-way anonymous authentication technology to hide personal identity authentication information, uses storage and computation migration technology to reduce the resource consumption of the mobile terminal, and further proposes a secret-key-factor oriented extraction and combination technique to meet the requirements of multi-level, three-dimensional, complex-space secure information exchange with lightweight computation. Analysis proves that the protocol offers good security and low resource consumption.

By exploiting process variations among unit circuits with the same structures and design parameters during manufacturing, a Physical unclonable function (PUF) generates security keys with the characteristics of uniqueness, randomness, and unclonability. In this paper, a highly reliable multiport PUF scheme is proposed based on the MOSFET Zero temperature coefficient (ZTC) point. It consists of an input register, a deviation-current generating module, an arbiter array, and an obfuscation circuit. By reconfiguring the deviation-current generating module with different input challenges, the PUF circuit updates its keys without physical replacement, and multi-bit keys can be generated in one clock cycle. In TSMC-LP 65 nm CMOS technology, the custom-designed layout of the 64-port reconfigurable PUF occupies 131 μm × 242 μm. Experimental results show that the PUF circuit has good statistical characteristics of uniqueness and randomness. It exhibits a high reliability of 98.2% with respect to temperature variation from -40°C to 125°C and supply voltage variation from 1.08 V to 1.32 V, indicating that it can be reliably and effectively used in the information security field.

We study the problem of defense strategies against the Objective function attack (OFA) in Cognitive networks (CNs), where the network can be operated at its optimal state by adapting the parameters of its objective function. An OFA attacker can disrupt the parameter adaptation by interference, causing some operating parameters of the objective function to deviate from their optimal settings. We first model the interactive process between the OFA attacker and the defense system using differential game theory, and propose a defense strategy by introducing a new metric, namely the threat factor. We then obtain the optimal defense strategy by proving the existence of a saddle point of the proposed model. Moreover, this defense strategy is scale-free, so it can counter a large number of OFAs in a decentralized way. Finally, we conduct extensive simulations to show that the proposed approach is effective.