Fuzzy co-clustering is sensitive to noisy data. To overcome this sensitivity, possibilistic clustering relaxes the constraints in FCM-type fuzzy (co-)clustering. In this paper, we introduce a new possibilistic fuzzy co-clustering algorithm based on the information bottleneck (ibPFCC). The algorithm combines fuzzy co-clustering and possibilistic clustering, and formulates an objective function whose distance term employs information bottleneck theory to measure the distance between a feature data point and a feature cluster centroid. Experiments were conducted on three real-world datasets and one artificial dataset. The results show that ibPFCC outperforms prominent fuzzy (co-)clustering algorithms such as FCM, FCCM, RFCC, and FCCI in terms of accuracy and robustness.

With increasing interest in renewables, more consumers are installing energy storage systems (ESSs) in their backyards, and thus the ESS will play a critical role in the emerging smart grid. Due to its mechanical properties, however, the operational dynamics of an ESS must be well understood before connecting it to the smart grid (and eventually to an IT system). To this end, we investigate its charging and discharging processes in detail. This paper then proposes methods for four types of tests (state-of-charge test, conversion efficiency test, response time test, and ramp rate test) that assess the dynamics of the ESS. The proposed methods capture accurate delay values of the mechanical processes in the ESS, and those values are expected to help design real-time communication systems in smart grids involving ESSs.

One of the most visible developments in Decision Support Systems (DSS) was the emergence of rule-based expert systems. However, despite their success in many sectors, developers of medical rule-based systems have met several critical problems. Firstly, the rules are tied to a clearly stated subject. Secondly, a rule-based system can only learn by updating its rule base, since it requires explicit knowledge of the domain in use. Solutions to these problems have been sought through improved techniques and tools, improved development paradigms, knowledge modeling languages and ontologies, as well as advanced reasoning techniques such as case-based reasoning (CBR), which is well suited to providing decision support in the healthcare setting. However, CBR has some drawbacks, mainly in its interrelated tasks: retrieval and adaptation. For the retrieval task, a major drawback arises when several similar cases, and consequently several solutions, are found; a choice of the best solution must then be made. To overcome these limitations, numerous works on the retrieval task have been conducted, using simple and convenient procedures or by combining CBR with other techniques. In this paper, we provide a combined approach that uses multi-criteria analysis (MCA) to help the traditional retrieval task of CBR choose the best solution. We then integrate this approach into a decision model to support medical decisions. We also present some preliminary results and suggestions to extend our approach.

Many organizations today use patch management systems to uniformly manage software vulnerabilities. However, a patch management system does not guarantee the integrity of a patch while providing it to clients. In this paper, we propose a method to guarantee patch integrity through dual electronic signatures: the primary distribution server applies the first digital signature and the secondary distribution server applies the second digital signature. The dual electronic signatures ensure that the patch is neither forged nor falsified during transmission, so that the client can verify that the patch provided is a legitimate one. The dual electronic signatures can enhance the security of the patch management system, providing a secure environment for clients.
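
The abstract does not specify the signature algorithm; the following is a minimal sketch of the dual-signing idea, assuming Ed25519 via the `cryptography` package purely for illustration.

```python
# Hypothetical sketch of dual signing and verification. Ed25519 and the
# "sign patch, then sign patch + first signature" layering are assumptions
# made here for illustration; the paper's exact scheme may differ.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Primary and secondary distribution servers each hold their own key pair.
primary_key = Ed25519PrivateKey.generate()
secondary_key = Ed25519PrivateKey.generate()

patch = b"... patch binary ..."

# First signature over the patch, second signature over the patch plus the first signature.
sig1 = primary_key.sign(patch)
sig2 = secondary_key.sign(patch + sig1)

def client_verify(patch, sig1, sig2, primary_pub, secondary_pub):
    """Client-side check: both signatures must verify for the patch to be accepted."""
    try:
        primary_pub.verify(sig1, patch)
        secondary_pub.verify(sig2, patch + sig1)
        return True
    except InvalidSignature:
        return False

print(client_verify(patch, sig1, sig2,
                    primary_key.public_key(), secondary_key.public_key()))
```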

MiRNAs are short biological sequences that play a crucial role in almost all important biological processes. MiRNA patterns are sequence segments common to multiple mature miRNA sequences, and they are significant for identifying miRNAs owing to their functional implications. In the proposed approach, primary miRNA patterns are produced from sequence alignment and are then cut into short segment miRNA patterns. From the segment miRNA patterns, candidate miRNA patterns are selected based on estimated probability, and from these, potential miRNA patterns are further selected according to the classification performance between authentic and artificial miRNA sequences. Three parameter settings are suggested: bi-nucleotides are employed to compute the estimated probability of segment miRNA patterns, segment miRNA patterns of length four are used, and the top 1% of them, ranked by estimated probability, are selected as potential miRNA patterns.

The network coding mechanism has attracted much attention because of its enhanced network throughput, a desirable characteristic especially in multi-hop wireless networks with limited link capacity, such as the device-to-device (D2D) communication network of 5G. COPE proposed using XOR-based network coding in two-hop wireless network topologies. For multi-hop wireless networks, the Distributed Coding-Aware Routing (DCAR) mechanism was proposed, which defines the coding conditions for two flows intersecting at an intermediate node and designs a routing metric that improves the coding opportunity by preferring routes with longer queues. Because routes with longer queues may increase delay, DCAR is inefficient for delivering real-time multimedia traffic flows. In this paper, we propose a network coding-aware routing protocol for multi-hop wireless networks that enhances DCAR by considering traffic load distribution and link quality. In this way, we can achieve higher network throughput and lower end-to-end delay at the same time for the proper delivery of time-sensitive data flows. Qualnet-based simulation results show that our proposed scheme outperforms DCAR in terms of throughput and delay.

This paper introduces a new type of determining factor for pseudo-random strings (PRS). The classification depends upon a mathematical property called Finite Induction (FI). FI is similar to a Markov model in that it presents a model of the sequence under consideration and determines the generating rules for this sequence. If these rules obey certain criteria, then we call the sequence generating them an FI PRS. We also consider the relationship of this kind of PRS to Good/de Bruijn graphs and linear feedback shift registers (LFSRs). We show that binary sequences from these special graphs have the FI property. We also show how such FI PRSs can be generated without considering the Hamiltonian cycles of the Good/de Bruijn graphs. The FI PRSs also have maximum Shannon entropy, while sequences from LFSRs do not, nor are such sequences FI random.
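
Since LFSR-generated sequences are one of the points of comparison, here is a minimal sketch of a maximal-length LFSR; the register size, taps, and seed are illustrative choices, not taken from the paper.

```python
def lfsr_sequence(seed=0b0001, steps=15):
    """4-bit Fibonacci LFSR with taps at bits 4 and 3 (polynomial x^4 + x^3 + 1),
    a maximal-length configuration with period 15. Seed, width, and taps are
    illustrative assumptions, not the paper's construction."""
    lfsr = seed
    bits = []
    for _ in range(steps):
        bit = ((lfsr >> 0) ^ (lfsr >> 1)) & 1   # feedback = b0 XOR b1
        bits.append(lfsr & 1)                   # output the low-order bit
        lfsr = (lfsr >> 1) | (bit << 3)         # shift and insert feedback at the top
    return bits

print(lfsr_sequence())   # 15 output bits, after which the state repeats
```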

As interest in big data has increased recently, NoSQL, a solution for storing and processing big data, is attracting attention. NoSQL supports high speed, high availability, and high scalability, but it is limited in areas where data integrity is important because it does not support multi-row transactions. To overcome this drawback, many studies are underway to support multi-row transactions in NoSQL. However, existing approaches have the disadvantage that the number of transactions that can be processed per unit of time is low and performance is degraded. Therefore, in this paper, we design and implement a multi-row transaction system for data integrity in a big data environment based on HBase, a widely used column-based NoSQL store. The multi-row transaction system performs multi-row transactions efficiently by adding columns that manage transaction information to every user table. In addition, it controls the execution, collision, and recovery of multi-row transactions through a transaction manager, and it communicates with HBase through a communication manager so that the information necessary for multi-row transactions can be exchanged. Finally, we performed a comparative performance evaluation with HAcid and Haeinsa, and verified the superiority of the multi-row transaction system developed in this paper.

Since the rapid dissemination of the Internet of Things (IoT) as a new communication paradigm, a number of studies on various applications have been carried out. In particular, interest in smart medical systems is rising. In a smart medical system, a large number of medical devices are distributed in popular areas such as stations and medical centers, and this high density of medical devices can cause serious degradation of communication performance, referred to as the coexistence problem. When the coexistence problem occurs in a smart medical system, reliable transmission of a patient's biological information may not be guaranteed and the patient's life can be jeopardized. Therefore, the coexistence problem in smart medical systems should be resolved. In this paper, we propose a distributed coexistence mitigation scheme for IoT-based smart medical systems that can dynamically avoid interference in coexistence situations and guarantee reliable communication. To evaluate the performance of the proposed scheme, we perform extensive simulations comparing it with the IEEE 802.15.4 MAC protocol, a traditional low-power communication technology.

The concept of the Internet of Things (IoT) enables physical objects or things to be virtually accessible for both consuming and providing services. Undue access from irresponsible activities thus becomes an important issue to address. Maintaining data integrity and the privacy of objects is important from the perspective of security. Privacy can be achieved through various techniques: password authentication, cryptography, and the use of mathematical models to assess the security level of other objects. Individual methods like these are less effective in increasing the security aspect. Comprehensive security schemes, such as frameworks, are considered better, regardless of whether the framework model used is centralized, semi-centralized, or distributed. In this paper, we propose a new semi-centralized security framework that aims to improve privacy in the IoT using trust and reputation parameters. A new algorithm to elect a reputation coordinator, the ConTrust Manager, is proposed in this framework. The framework allows each object to determine which other objects are considered trusted before the communication process is carried out. The proposed framework was evaluated through simulation, which shows that it can be used as an alternative solution for improving security in the IoT.

A combined texture feature extraction approach for texture image retrieval is proposed in this paper. Two kinds of low-level texture features are combined in the approach: one is extracted from singular value decomposition (SVD) of dual-tree complex wavelet transform (DTCWT) coefficients, and the other is extracted from multi-scale local binary patterns (LBPs). The fused SVD-based multi-directional wavelet features and multi-scale LBP features form a feature vector of short dimension. Comparative experiments were conducted on the Brodatz and Vistex datasets. The experimental results show that the proposed method performs better than existing methods in terms of retrieval accuracy and time complexity.
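
As a rough sketch of the multi-scale LBP component only (the SVD/DTCWT part is omitted), the snippet below concatenates uniform-LBP histograms at several radii; the radii, sampling points, and histogram settings are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def multiscale_lbp_descriptor(gray_image, scales=((8, 1), (16, 2), (24, 3))):
    """Concatenate normalized uniform-LBP histograms computed at several radii."""
    histograms = []
    for n_points, radius in scales:
        lbp = local_binary_pattern(gray_image, n_points, radius, method="uniform")
        n_bins = n_points + 2                      # uniform patterns + one "non-uniform" bin
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        histograms.append(hist)
    return np.concatenate(histograms)

# Example on a random image; replace with a real grayscale texture image.
image = (np.random.rand(128, 128) * 255).astype(np.uint8)
descriptor = multiscale_lbp_descriptor(image)
print(descriptor.shape)   # (8+2) + (16+2) + (24+2) = 54 dimensions
```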

To make an unmanned aerial vehicle (UAV) fly in indoor environments, the indoor locations of the UAV are required. One approach to calculating the location of a UAV indoors is enhanced trilateration using one Bluetooth-based beacon and three or more access points (APs). However, the UAV locations calculated by common chord-based trilateration contain errors due to the distance errors of the beacon measured at the multiple APs. This paper proposes a method that corrects the errors that occur when applying common chord-based trilateration to calculate the locations of a UAV. In the experiments, the locations measured using the proposed method in an indoor environment were compared and verified against those measured using common chord-based trilateration. The proposed method improved the accuracy of location measurement by 81.2% compared to common chord-based trilateration.
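
For context, the following is a sketch of ordinary range-based trilateration solved by linearized least squares; it is only the baseline idea, not the paper's common-chord error-correction method, and the AP positions and ranges are made up.

```python
import numpy as np

def trilaterate(anchors, distances):
    """anchors: (n,2) AP positions; distances: (n,) measured ranges to the beacon.
    Subtracting the last circle equation from the others gives a linear system."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x_n, y_n = anchors[-1]
    A = 2 * (anchors[:-1] - anchors[-1])                  # rows: [2(xi - xn), 2(yi - yn)]
    b = (d[-1] ** 2 - d[:-1] ** 2
         + anchors[:-1, 0] ** 2 - x_n ** 2
         + anchors[:-1, 1] ** 2 - y_n ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]              # hypothetical AP layout
true_pos = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) for a in aps]
print(trilaterate(aps, ranges))                           # approximately [3, 4]
```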

In this paper, Atanassov's intuitionistic fuzzy set theory is used to handle the uncertainty of students' knowledge of domain concepts in an e-learning system. Their knowledge of these domain concepts was collected from tests conducted during the learning phase. An Atanassov intuitionistic fuzzy user model is proposed to deal with vagueness in the description of a user's knowledge of domain concepts. The user model uses Atanassov's intuitionistic fuzzy sets for knowledge representation and linguistic rules for updating the user model. The scores obtained by each student were collected in this model, and the decision about the student's knowledge acquisition for each concept, whether completely learned, completely known, partially known, or completely unknown, was placed in the information table. Finally, the proposed scheme has been found to be more appropriate than the fuzzy scheme.

In this paper, we propose a novel feature for recognizing handwritten Odia numerals. Using polygonal approximation, each numeral is segmented into segments of equal pixel counts, with the centroid of the character kept as the origin. Three primitive contour features, namely distance (l), angle (θ), and arc-to-chord ratio (r), are extracted from these segments. These features are fed to a neural classifier to recognize the numerals. For a comparative analysis, other existing features are also evaluated with the same neural classifier. We carried out a simulation on a large dataset and compared the features with respect to recognition accuracy and time requirements. Furthermore, we also applied the feature to numeral recognition in two other languages, Bangla and English. In general, we observed that our proposed contour features outperform the other schemes.

Software today has become an inseparable part of our lives. In order to meet the ever-increasing demands of customers, it has to evolve rapidly and incorporate numerous changes. In this paper, our aim is to study the relationship between object-oriented metrics and the change-proneness attribute of a class. Prediction models based on this study can help in identifying change-prone classes of a software system. We can then focus our testing effort on these change-prone classes to yield better-quality software. Previously, researchers have used statistical methods for predicting change-prone classes, but machine learning methods are rarely used for this purpose. In our study, we evaluate and compare the performance of ten machine learning methods with a statistical method. This evaluation is based on two open-source software systems developed in Java. We also validated the developed prediction models using another software dataset in the same domain (3D modelling). The performance of the prediction models was evaluated using receiver operating characteristic (ROC) analysis. The results indicate that the machine learning methods are on par with the statistical method for predicting change-prone classes. A further analysis showed that the models constructed for one software system can also be used to predict the change-prone nature of classes of another software system in the same domain. This study should help developers perform effective regression testing at low cost and effort. It should also help developers design models that result in fewer change-prone classes, and hence better maintainability.
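
A minimal sketch of this kind of evaluation is shown below: a statistical model and one machine learning model are trained on object-oriented metric vectors and compared by ROC AUC. The metric values, synthetic labels, and the two example learners are placeholders; the paper compares ten learners against a statistical model on real Java systems.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # stand-ins for OO metrics such as CBO, WMC, LOC, DIT
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "statistical (logistic regression)": LogisticRegression(max_iter=1000),
    "machine learning (random forest)": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: ROC AUC = {auc:.3f}")
```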

Structured Query Language (SQL) injection continues to be one of the greatest security risks in the world according to the Open Web Application Security Project's (OWASP) [1] Top 10 Security Vulnerabilities 2013. Its ease of exploitability and severe impact put this attack at the top. As countermeasures become more sophisticated, SQL injection attacks also continue to evolve, thwarting attempts to eliminate this attack completely. The vulnerable data is a source of worry for government and financial institutions. In this paper, a detailed survey of the different types of SQL injection, along with the proposed methods and theories, is presented, together with various tools and their efficiency in intercepting and preventing SQL injection attacks.

Handwriting-based person identification systems use the structural properties of handwriting, as perceived by their designers, as features. In this paper, we present a system that uses as features the structural properties that graphologists and expert handwriting analyzers use for determining a writer's personality traits and for making other assessments. The advantage of these features is that their definition is based on sound historical knowledge (i.e., the knowledge discovered by graphologists, psychiatrists, forensic experts, and experts of other domains in analyzing the relationships between handwritten stroke characteristics and the phenomena that embed individuality in strokes). Hence, each stroke characteristic reflects a personality trait. We measured the effectiveness of these features on a subset of handwritten Devnagari and Latin script datasets from the Center for Pattern Analysis and Recognition (CPAR-2012), written by 100 people, where each person wrote three samples of the Devnagari and Latin text designed for our experiments. The experiment yielded 100% correct identification on the training set. However, we observed 88% and 89% correct identification rates when we experimented with 200 training samples and 100 test samples on handwritten Devnagari and Latin text, respectively. By introducing a majority-voting-based rejection criterion, the identification accuracy increased to 97% on both script sets.

In this work, a discrete cosine transform (DCT)-based dimensionality-reduced feature approach for fingerprint matching is proposed. The DCT is applied to a small region around the core point of the fingerprint image. The performance of the proposed method is evaluated on a small database from Bologna University and two large databases from FVC2000. A dimensionally reduced feature vector is formed using only approximately 19%, 7%, and 6% of the DCT coefficients for the Bologna University and the two FVC2000 databases, respectively. We compared the results of our proposed method with the discrete wavelet transform (DWT) method, the rotated wavelet filters (RWFs) method, and combinations of DWT+RWF and DWT+(HL+LH) subbands of RWF. The proposed method reduces the false acceptance rate from approximately 18% to 4% on DB1 (the Bologna University database), from approximately 29% to 16% on DB2 (FVC2000), and from approximately 26% to 17% on DB3 (FVC2000) compared to the DWT-based feature extraction method.
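
A rough sketch of extracting a reduced DCT feature vector from a square region around the core point follows; the region size, the retained fraction, and the "keep the low-frequency top-left block" selection rule are illustrative assumptions, since the paper's exact coefficient selection is not restated here.

```python
import numpy as np
from scipy.fft import dctn

def dct_feature(region, keep_fraction=0.07):
    """2-D type-II DCT of the core region, keeping only the lowest-frequency coefficients."""
    coeffs = dctn(region, norm="ortho")
    k = max(1, int(round(np.sqrt(keep_fraction) * region.shape[0])))
    return coeffs[:k, :k].ravel()                  # low-frequency k x k block as the feature

core_region = np.random.rand(32, 32)               # stand-in for the crop around the core point
features = dct_feature(core_region, keep_fraction=0.07)
print(features.shape)                               # (64,) here, about 6% of the 1024 coefficients
```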

Certificateless public key cryptography (CL-PKC) is a new benchmark in modern cryptography. It not only simplifies the certificate management problem of PKC, but also avoids the key escrow problem of identity-based cryptosystems (ID-PKC). In this article, we propose a certificateless blind signature protocol based on elliptic curve cryptography (CLB-ECC). The scheme is suitable for wireless communication environments because of its smaller parameter sizes. The proposed scheme is proven secure against attacks by two different kinds of adversaries. CLB-ECC is efficient in terms of computation compared to other existing conventional schemes, and it can withstand forgery attacks, key-only attacks, and known-message attacks. An e-cash framework based on CLB-ECC has also been proposed. As a result, the proposed CLB-ECC scheme seems well suited to real-life applications such as e-shopping and e-voting on handheld devices.

The Journal of Information Processing Systems (JIPS) is the official international journal published by the Korea Information Processing Society. As a leading multidisciplinary journal, JIPS is indexed in ESCI, SCOPUS, EI COMPENDEX, DOI, DBLP, EBSCO, Google Scholar, and CrossRef. Its purpose is to enable researchers and professionals to promote, share, and discuss all major research issues and developments in the field of information processing technologies and other related fields. JIPS publishes diverse papers, including theoretical research contributions presenting new techniques, concepts, or analyses; experience reports; experiments involving the implementation and application of new theories; and tutorials on state-of-the-art technologies related to information processing systems. The subjects covered by this journal include, but are not limited to, topics related to computer systems and theories, multimedia systems and graphics, communication systems and security, and software systems and applications.

This survey paper explores the application of multimodal feedback in automated systems for motor learning. We review the findings of recent studies in this field, using rehabilitation and various motor training scenarios as context. We discuss popular feedback delivery and sensing mechanisms for motion capture and processing in terms of requirements, benefits, and limitations. The selection of modalities is presented by reviewing best-practice approaches for each modality relative to motor task complexity, with example implementations from recent work. We summarize the advantages and disadvantages of several approaches for integrating modalities in terms of fusion and frequency of feedback during motor tasks. Finally, we review the limitations of perceptual bandwidth and provide an evaluation of the information transfer for each modality.

Clustering is an NP-hard problem used to find the relationships between patterns in a given set of patterns. It is an unsupervised technique applied to obtain optimal cluster centers, especially in partition-based clustering algorithms. Cat swarm optimization (CSO) is a recent metaheuristic algorithm that has been applied to various optimization problems and provides better results than other algorithms of a similar type. However, this algorithm suffers from diversity and local-optima problems. To overcome these problems, we propose an improved version of the CSO algorithm that uses opposition-based learning and the Cauchy mutation operator. We apply opposition-based learning to enhance the diversity of the CSO algorithm, and we use the Cauchy mutation operator to prevent the CSO algorithm from becoming trapped in local optima. The performance of our proposed algorithm was tested on several artificial and real datasets and compared with existing methods such as K-means, particle swarm optimization, and CSO. The experimental results show the applicability of our proposed method.
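
A short sketch of the two ingredients added to CSO is given below: opposition-based initialization and a Cauchy mutation step. The full CSO seeking/tracing dynamics are omitted, and the bounds, population size, toy objective, and mutation scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def opposition_based_init(pop_size, dim, lower, upper):
    """Generate a population and its opposite; keep the better half according to
    a fitness function (here, a toy objective with its minimum at 0.5)."""
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    opposite = lower + upper - pop                  # opposition-based candidates
    both = np.vstack([pop, opposite])
    fitness = np.sum((both - 0.5) ** 2, axis=1)
    best_idx = np.argsort(fitness)[:pop_size]
    return both[best_idx]

def cauchy_mutation(position, scale=0.1, lower=0.0, upper=1.0):
    """Perturb a candidate with heavy-tailed Cauchy noise to help escape local optima."""
    mutated = position + scale * rng.standard_cauchy(size=position.shape)
    return np.clip(mutated, lower, upper)

population = opposition_based_init(pop_size=10, dim=2, lower=0.0, upper=1.0)
print(cauchy_mutation(population[0]))
```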

The need for embedded devices to exchange information with each other and with data centers is essential for the advent of the Internet of Things (IoT). Several existing communication protocols are designed for small devices, including the Message Queuing Telemetry Transport (MQTT) protocol and the Constrained Application Protocol (CoAP). However, most existing implementations are convenient for computers or smartphones but do not consider the strict constraints and limitations regarding resource usage, portability, and configuration. In this paper, we report on an industrial research and development project that focuses on the design, implementation, testing, and deployment of an MQTT module. The goal of this project is to develop this module for platforms with minimal RAM, flash code memory, and processing power. This software module should be fully compliant with the MQTT protocol specification, portable, and interoperable with other software stacks. In this paper, we present our abstraction-layer-based approach to the design of the MQTT module, and we discuss the compliance of the implementation with the requirements set, including the MISRA static analysis requirements.
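
The abstraction-layer idea can be sketched as follows: the portable MQTT logic is written against a small transport interface, and each platform supplies its own implementation. The paper's module is written in C for constrained targets; the Python names and the TCP example here are purely hypothetical.

```python
import socket
from abc import ABC, abstractmethod

class Transport(ABC):
    """Platform abstraction layer: the only surface the MQTT core depends on."""
    @abstractmethod
    def connect(self, host: str, port: int) -> None: ...
    @abstractmethod
    def send(self, data: bytes) -> None: ...
    @abstractmethod
    def recv(self, max_bytes: int) -> bytes: ...

class PosixTcpTransport(Transport):
    """One possible port of the abstraction layer, for a POSIX-like host."""
    def connect(self, host, port):
        self.sock = socket.create_connection((host, port))
    def send(self, data):
        self.sock.sendall(data)
    def recv(self, max_bytes):
        return self.sock.recv(max_bytes)

class MqttCore:
    """Portable protocol logic: it never touches sockets or OS APIs directly."""
    def __init__(self, transport: Transport):
        self.transport = transport
```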

With the progress of digital medical imaging techniques, compressing the wide variety of medical images has become necessary. In medical imaging, reversible compression of an image's region of interest (ROI), which is diagnostically relevant, is considered essential. The global compression rate of the image can then also be improved by coding the ROI and the remaining image (called the background) separately. For this purpose, the present work proposes an efficient reversible discrete cosine transform (RDCT)-based embedded image coder designed for lossless ROI coding at very high compression ratios. Motivated by the wavelet structure of the DCT, the proposed rearranged structure couples well with a lossless embedded zerotree wavelet (LEZW) coder, while the background is highly compressed using the set partitioning in hierarchical trees (SPIHT) technique. Coding results show that the performance of the proposed coder is much superior to that of various state-of-the-art still image compression methods.

Nowadays, geographic information systems (GIS) are developed and implemented in many areas, and huge volumes of vector map data have been accessed unlawfully by hackers, pirates, or unauthorized users. For this reason, methods are needed to protect GIS data for storage, multimedia applications, and transmission. In this paper, a selective encryption method based on vertex randomization and a hybrid transform is presented for GIS vector maps. In the proposed algorithm, polylines and polygons are targeted for encryption. Objects are classified in each layer, and all coordinates of the significant objects are encrypted with key sets generated using a chaotic map, before being changed in the DWT and DFT domains. Experimental results verify high visualization efficiency with low complexity and high security performance through random processes.
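
As a minimal sketch of the key-generation step, the snippet below derives a keystream from a chaotic map. The abstract only states that a chaotic map is used; the logistic map, its parameter r = 3.99, and the quantization to bytes are assumptions for illustration.

```python
def logistic_keystream(x0, length, r=3.99, burn_in=100):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and quantize each state to a byte."""
    x = x0
    for _ in range(burn_in):            # discard transient iterations
        x = r * x * (1 - x)
    stream = []
    for _ in range(length):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return stream

# Key bytes that could drive the randomization of vertex coordinates.
print(logistic_keystream(x0=0.4123, length=8))
```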

Near-field source localization algorithms are very sensitive to sensor gain/phase response errors, so it is important to calibrate these errors. Considering a uniform linear array, we propose a blind calibration algorithm that can jointly estimate the directions of arrival and range parameters of the incident signals and the sensor gain/phase responses, without the need for any reference source. They are estimated separately using an iterative approach, without requiring good initial guesses. The ambiguities in the estimation of the 2-D electric angles and the sensor gain/phase responses are also analyzed in this paper. We show that the ambiguities can be remedied by assuming that two sensor phase responses of the array have been calibrated in advance. The behavior of the proposed method is illustrated through simulation experiments. The simulation results show that the convergence rate is fast and the convergence precision is high.

A joint channel estimation and data detection technique for multiple-input multiple-output (MIMO) wireless communication systems is proposed. It combines the least squares (LS) training-based channel estimation (TBCE) scheme with sphere decoding. In this new approach, channel estimation is enhanced with the help of blind symbols, which are selected based on their correctness; the correctness is determined via sphere decoding. The performance of the new scheme is studied through simulation in terms of the bit error rate (BER). The results show that the proposed channel estimation has comparable performance and lower computational complexity than the existing semi-blind channel estimation (SBCE) method.

PCI Express is a widely used system bus technology that connects the processor and peripheral I/O devices, and it is nowadays regarded as a de facto standard in system-area interconnection networks. It has good characteristics in terms of high speed and low power. In addition, PCI Express is becoming a popular interconnection network technology, like Gigabit Ethernet, InfiniBand, and Myrinet, which are extensively used in high-performance computing. In this paper, we designed and implemented an evaluation platform for an interconnection network using PCI Express between two computing nodes. We make use of the non-transparent bridge (NTB) technology of PCI Express to isolate the two subsystems. We constructed a testbed system and evaluated its performance.

State-of-the-art speaker recognition systems may work well for the English language; however, if the same system is used to recognize speakers of different languages, it may yield poor performance. In this work, the decisions of a Gaussian mixture model-universal background model (GMM-UBM) and a learning vector quantization (LVQ) classifier are combined to improve the recognition performance of a multilingual speaker identification system. The difference between these classifiers lies in their modeling techniques: the former is based on a probabilistic approach and the latter on the fine-tuning of neurons. Since the approaches differ, each modeling technique identifies different sets of speakers for the same database. Therefore, the decisions of the classifiers can be combined to improve performance. In this study, multitaper mel-frequency cepstral coefficients (MFCCs) are used as features, and monolingual and cross-lingual speaker identification studies are conducted using NIST-2003 and our own database. The experimental results show that the combined system improves performance by nearly 10% compared with the individual classifiers.

In this paper, we present an approach to transmit data from a source to a destination through a minimal (least-cost) path in a computer network of n nodes. The motivation behind our approach is the problem of finding a minimal path between the source and destination. From the work we have studied, we found that a Steiner tree with bounded Steiner vertices offers a good solution, and a novel algorithm to construct a Steiner tree with bounded Steiner vertices is proposed in this paper. The algorithm finds a path from each source to each destination at minimum cost and with a minimum number of Steiner vertices. We propose both sequential and parallel versions and conducted a comparative study of them based on time complexity, which showed that the parallel implementation is more efficient than the sequential one.
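
For reference, a least-cost path between a single source and destination can be computed with Dijkstra's algorithm, as sketched below; this is only the underlying shortest-path step, not the paper's bounded-Steiner-vertex tree construction, and the example graph is made up.

```python
import heapq

def least_cost_path(graph, source, dest):
    """graph: {node: [(neighbor, cost), ...]}. Returns (cost, path)."""
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dest:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("d", 7)],
         "b": [("d", 2)], "d": []}
print(least_cost_path(graph, "s", "d"))   # (5, ['s', 'a', 'b', 'd'])
```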

Mammogram images are sensitive in nature, and even a minor change in the environment affects their quality. Because of the lack of expert radiologists, interpreting mammogram images is difficult. In this paper, an algorithm is proposed for a computer-aided diagnosis system based on a wavelet-based adaptive sigmoid function. The cascade feed-forward back-propagation technique is used for training and testing. Because of the poor contrast of digital mammogram images, it is difficult to process them directly; thus, the images are first enhanced using the wavelet-based adaptive sigmoid function, and the suspicious regions are then selected for feature extraction. A combination of texture features and gray-level co-occurrence matrix features is extracted and used for training and testing. The system was trained with 150 images, while a total of 100 mammogram images were used for testing. A classification accuracy of more than 95% was obtained with the proposed method.

This paper proposes a color image coding algorithm based on the shape-adaptive all-phase biorthogonal transform (SA-APBT). The algorithm is implemented through four procedures: color space conversion, image segmentation, shape coding, and texture coding. The region of interest (ROI) and the background area are obtained by image segmentation. Shape coding uses chain codes, and the texture of the ROI is coded before the background area. SA-APBT and uniform quantization are adopted in texture coding. Compared with a color image coding algorithm based on the shape-adaptive discrete cosine transform (SA-DCT) at the same bit rates, experimental results on test color images show that the objective quality and subjective appearance of the images reconstructed by the proposed algorithm are better, especially at low bit rates. Moreover, the complexity of the proposed algorithm is reduced because of the uniform quantization.

Wireless sensor networks (WSNs) have become an essential tool for the surveillance of borders and military zones, and specific applications have been developed for this purpose. Surveillance is usually accomplished by deploying nodes randomly, producing heterogeneous topologies. However, the process of identifying all nodes located on the network's outer edge is very long and energy-consuming. Before any other activities on such sensitive networks, the border nodes must be identified by means of specific algorithms. In this paper, a solution based on node selection is proposed to solve the problem of energy and time consumption when detecting border nodes. The mechanism is designed with several starter nodes in order to reduce the time, the number of exchanged packets, and hence the energy consumption. The method consists of three phases: the first detects the triggers that start the mechanism of boundary node (BN) detection, the second detects the whole border, and the third excludes each BN from the routing tables of all its neighbors so that it cannot be used for routing.

Effective identification of the wireless channel in different scenarios or regions can solve the problems of multipath interference in wireless communication. In this paper, different characteristics of the wireless channel, such as the number of multipath components, the time delay, and the delay spread, are extracted from the arrival time and received signal strength to establish a feature vector set for the wireless channel, which is then used to train a backpropagation (BP) neural network to identify different wireless channels. Experimental results show that the proposed algorithm can accurately identify different wireless channels, with an accuracy of up to 97.59%.
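
An illustrative sketch of training a backpropagation network on channel feature vectors is given below; the synthetic data and network size are placeholders, and scikit-learn's MLPClassifier is used here as a stand-in for the paper's BP network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_channels = 200, 3
# Each channel class gets a different mean feature vector (toy features such as
# number of multipath components, time delay, delay spread, and RSS).
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, 4))
               for c in range(n_channels)])
y = np.repeat(np.arange(n_channels), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bp_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
bp_net.fit(X_tr, y_tr)
print("identification accuracy:", bp_net.score(X_te, y_te))
```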

Mobile phones are the most common communication devices in history, and the number of mobile subscribers will increase dramatically in the future. Therefore, determining the location of a mobile station will become more and more difficult. The mobile station must be authenticated to inform the network of its current location whenever the user switches it on or its location changes. The most basic weakness in the GSM authentication protocol is its unilateral authentication process, in which the customer is verified by the system, but the system is not verified by the customer. This creates numerous security issues, including vulnerability to man-in-the-middle attacks, large bandwidth consumption between the VLR and HLR, storage overhead in the VLR, and computation costs in the VLR and HLR. In this paper, we propose a secure authentication mechanism based on a new mobility management method to improve location management in the GSM network, which suffers from a number of drawbacks such as transmission cost and database overload. Numerical analysis is carried out for both the conventional and the modified versions, and the two are compared. The numerical results show that our protocol is more secure and reduces mobility management costs the most in the GSM network.

In recent decades, the vehicular ad hoc network has been a core network technology for providing comfort and security to drivers in vehicular environments. However, emerging applications and services require major changes in the underlying network models and computing, which in turn require new road network planning. Meanwhile, blockchain, widely known as one of the disruptive technologies that has emerged in recent years, is experiencing rapid development and has the potential to revolutionize intelligent transport systems. Blockchain can be used to build an intelligent, secure, distributed, and autonomous transport system; it allows better utilization of the infrastructure and resources of intelligent transport systems, which is particularly effective for crowdsourcing technology. In this paper, we propose a blockchain-based vehicle network architecture for the smart city (Block-VN). Block-VN is a reliable and secure architecture that operates in a distributed way to build a new distributed transport management system. We consider a new network system of vehicles, Block-VN, built on top of these technologies. In addition, we examine how the network of vehicles evolves with paradigms focused on networking and vehicular information. Finally, we discuss service scenarios and design principles for Block-VN.

In software systems, it has been observed that a fault is often caused by an interaction between a small number of input parameters. Even for moderately sized software systems, exhaustive testing is practically impossible due to time or cost constraints. Combinatorial (t-way) testing provides a technique to select a subset of the exhaustive test cases that covers all t-way interactions, without much loss of fault detection capability. In this paper, an approach is proposed to generate 2-way (pairwise) test sets using genetic algorithms. The performance of the algorithm is improved by creating an initial solution using an overlap coefficient similarity matrix. Two mutation strategies have also been modified to improve their efficiency, and the mutation operator is further improved by using a combination of three mutation strategies. A comparative survey of techniques for generating t-way test sets using genetic algorithms was also conducted. It is shown experimentally that the proposed approach generates results faster, achieving higher percentage coverage in fewer generations. Additionally, the size of the mixed covering array was reduced in one of the six benchmark problems examined.
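
The pairwise-coverage idea behind 2-way test generation can be sketched as follows: count how many parameter-value pairs a candidate suite covers, which could serve as a fitness function for such a genetic algorithm. The parameter model and the candidate suite are made-up examples, and the paper's GA operators are not reproduced here.

```python
from itertools import combinations

def required_pairs(parameters):
    """All (param_i, value_i, param_j, value_j) interactions that must be covered."""
    pairs = set()
    for (i, vi), (j, vj) in combinations(enumerate(parameters), 2):
        for a in vi:
            for b in vj:
                pairs.add((i, a, j, b))
    return pairs

def covered_pairs(suite):
    """All interactions actually exercised by the candidate test suite."""
    covered = set()
    for test in suite:
        for i, j in combinations(range(len(test)), 2):
            covered.add((i, test[i], j, test[j]))
    return covered

parameters = [["on", "off"], ["ipv4", "ipv6"], ["tcp", "udp"]]   # 3 parameters, 2 values each
suite = [("on", "ipv4", "tcp"), ("on", "ipv6", "udp"),
         ("off", "ipv4", "udp"), ("off", "ipv6", "tcp")]
need = required_pairs(parameters)
have = covered_pairs(suite) & need
print(f"pairwise coverage: {len(have)}/{len(need)}")   # 12/12 for this 4-test suite
```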

The Journal of Information Processing Systems (JIPS) is the official international journal of the Korea Information Processing Society (KIPS). As a leading and multidisciplinary journal, JIPS is indexed in ESCI (Emerging Sources Citation Index), SCOPUS, EI COMPENDEX, DOI, DBLP, EBSCO, Google Scholar, and CrossRef. As information processing systems continue to progress at a rapid pace, KIPS is committed to providing researchers and other professionals with the academic information and resources they need to keep abreast of these ongoing developments. JIPS aims to be a leading source that enables researchers and professionals all over the world to promote, share, and discuss all of the major research issues and developments in the field of information processing systems and other related fields.

Artificial intelligence, especially deep learning technology, is penetrating the majority of research areas, including the field of bioinformatics. However, deep learning has some limitations, such as the complexity of parameter tuning, architecture design, and so forth. In this study, we analyze these issues and challenges with regard to its applications in bioinformatics, particularly genomic analysis and medical image analytics, and give the corresponding approaches and solutions. Although these solutions are mostly rules of thumb, they can effectively handle the issues connected with training learning machines. We also explore the trends of deep learning technology by examining several directions, such as automation, scalability, individuality, mobility, integration, and intelligence warehousing.

Foraging is a biological process in which a bacterium moves to search for nutrients and avoids harmful substances. This paper proposes a hybrid approach that integrates the bacterial foraging optimization algorithm (BFOA) into a radial basis function neural network, applied to image classification, in order to improve the classification rate and the objective function value. First, the proposed approach is presented and described. Then its performance is studied, with emphasis on the variation of the number of bacteria in the population, the number of reproduction steps, the number of elimination-dispersal steps, and the number of chemotactic steps. Using various values of the BFOA parameters, and after different tests, the proposed hybrid approach is found to be very robust and efficient for the classification of several images.

The rapid growth of smart devices demands enhanced throughput to sustain network connections during mobility. However, traditional wireless network architectures suffer from mobility management issues. In order to resolve these issues, we propose a novel architecture for future wireless access networks based on software-defined networking (SDN) that takes advantage of network function virtualization (NFV). In this paper, a network selection approach (NSA) is introduced for mobility management; it acquires information about the underlying networking devices through the OpenFlow controller, perceives the current network behavior, and then selects an appropriate action or network. Furthermore, mobility-related scenarios and use cases are provided to analyze the implementation aspects of the proposed architecture. The simulation results confirm that the proposed scenarios achieve seamless mobility with enhanced throughput and minimal packet loss compared to the existing IEEE 802.11 wireless network.

Stereo matching, an active research area in computer vision, aims to obtain three-dimensional (3D) information from a stereo image pair captured by a stereo camera. To extract accurate 3D information, a number of studies have examined stereo matching algorithms that employ adaptive support weights. Among them, the adaptive census transform (ACT) algorithm has a relatively strong matching capability. The drawbacks of the ACT, however, are that it produces low matching accuracy at object borders and is vulnerable to noise. To mitigate these drawbacks, this paper proposes and analyzes an improved stereo matching algorithm that not only enhances matching accuracy but is also robust to noise. The proposed algorithm, based on the ACT, adopts the truncated absolute difference and the multiple sparse windows method. The experimental results show that, compared to the ACT, the proposed algorithm reduces the average error rate of depth maps on the Middlebury dataset images by as much as 2% and has strong robustness to noise.
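
Two of the building blocks mentioned above can be sketched as follows: a census transform over a small window and a truncated absolute difference (TAD) cost. The window size and truncation threshold are illustrative, and the full ACT weighting and multiple sparse windows are not reproduced here.

```python
import numpy as np

def census_transform(image, window=3):
    """Encode each pixel as a bit string comparing its neighbors with the center."""
    r = window // 2
    h, w = image.shape
    codes = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(image, r, mode="edge")
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            codes = (codes << np.uint64(1)) | (shifted < image).astype(np.uint64)
    return codes

def truncated_absolute_difference(left, right, tau=20.0):
    """Pixel-wise TAD cost: absolute difference clipped at tau to limit outlier influence."""
    return np.minimum(np.abs(left.astype(float) - right.astype(float)), tau)

left = np.random.randint(0, 256, (64, 64))
right = np.roll(left, 2, axis=1)          # toy "right" view shifted by 2 pixels
print(census_transform(left).dtype, truncated_absolute_difference(left, right).mean())
```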

In this paper, an interference-aware distributed multi-channel MAC (IDMMAC) protocol is proposed for wireless sensor and actor networks (WSANs). A WSAN consists of a huge number of sensors and an ample number of actors; hence, in the IDMMAC protocol, a lightweight channel selection mechanism is proposed to enhance the sensors' lifetime. The IDMMAC protocol divides the beacon interval into two phases (i.e., the ad-hoc traffic indication message (ATIM) window phase and the data transmission phase). When a sensor wants to transmit event information to the actor, it negotiates the channel with the maximum packet reception ratio (PRR) and capacity in the ATIM window with its 1-hop sensors. The channel negotiation takes place via a control channel. To improve the packet delivery ratio of the IDMMAC protocol, each actor selects a backup cluster head (BCH) from its cluster members. The BCH is elected based on its residual energy and node degree. The BCH selection phase takes place whenever an actor wants to perform actions in the event area or leaves the cluster to help a neighboring actor. Furthermore, an interference- and throughput-aware multi-channel MAC protocol is also proposed for actor-actor coordination. An actor selects the channel with minimum interference and maximum throughput among the available channels to communicate with the destination actor. The performance of the proposed IDMMAC protocol is analyzed using standard network parameters, such as packet delivery ratio, end-to-end delay, and energy dissipation in the network. The simulation results indicate that the IDMMAC protocol performs well compared to existing MAC protocols.

The dorsal hand vein biometric system developed here has a main objective and specific targets: to obtain an electronic signature using a secure signature device. In this paper, we present our signature device and its different stages: the extraction of the dorsal veins from images acquired with an infrared device; the representation of the veins, for each identification, as shape descriptors that are invariant to translation, rotation, and scaling, the extracted descriptor vector being the input of the matching step; and the optimization of the decision system settings, namely the choice of the threshold for accepting or rejecting a person and the selection of the most relevant descriptors, so as to minimize both FAR and FRR errors. The final identification decision, based on the descriptors selected by the hybrid binary PSO, gives FAR = 0% and FRR = 0%.

Big data information and pattern analysis have applications in many industrial sectors. To reduce energy consumption effectively, eco-driving methods that reduce the fuel consumption of vehicles have recently come under scrutiny. Using big data on commercial vehicles obtained from digital tachographs (DTGs), it is possible not only to aid traffic safety but also to improve eco-driving. In this study, we estimate fuel consumption efficiency by processing and analyzing DTG big data for commercial vehicles using parallel processing with the MapReduce mechanism. In contrast to the conventional measurement of fuel consumption using the On-Board Diagnostics II (OBD-II) device, we use actual DTG data together with OBD-II fuel consumption data to identify meaningful relationships for calculating fuel efficiency rates. Based on the driving patterns extracted from DTG data, fuel consumption can then be estimated from DTG big data alone.
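
A toy illustration of the MapReduce-style aggregation follows: each DTG record is mapped to a (driving-pattern, fuel-efficiency) pair and then reduced by key to an average per pattern. The record fields and values are made up; a real deployment would run equivalent map and reduce functions on a Hadoop-style cluster.

```python
from collections import defaultdict

records = [
    {"speed": 60, "rpm": 1500, "km_per_liter": 12.1},
    {"speed": 95, "rpm": 2600, "km_per_liter": 8.4},
    {"speed": 62, "rpm": 1550, "km_per_liter": 11.7},
]

def map_record(record):
    """Map phase: derive a coarse driving-pattern key and emit (key, efficiency)."""
    pattern = "cruise" if record["speed"] < 80 else "high-speed"
    return pattern, record["km_per_liter"]

def reduce_by_key(pairs):
    """Reduce phase: average the efficiencies collected under each key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: sum(values) / len(values) for key, values in grouped.items()}

print(reduce_by_key(map(map_record, records)))
# e.g., {'cruise': 11.9, 'high-speed': 8.4}
```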

The selection of subset size is of great importance to the accuracy of digital image correlation (DIC). In traditional DIC, a constant subset size is used for computing the entire image, which overlooks the differences among the local speckle patterns of the image. Besides, it is very laborious to find the optimal global subset size for a speckle image. In this paper, a self-adaptive and bidirectional dynamic subset selection (SBDSS) algorithm is proposed to make the subset sizes vary according to their local speckle patterns, which ensures that every subset size is suitable and optimal. The sum of subset intensity variation is defined as the assessment criterion to quantify the subset information. Both the threshold and the initial guess of the subset size in the SBDSS algorithm are self-adaptive to different images. To analyze the performance of the proposed algorithm, both numerical and laboratory experiments were performed. In the numerical experiments, images with different speckle distributions, deformations, and noise were processed by both traditional DIC and the proposed algorithm. The results demonstrate that the proposed algorithm achieves higher accuracy than traditional DIC. Laboratory experiments performed on a substrate also demonstrate that the proposed algorithm is effective in selecting an appropriate subset size for each point.

In this work, we are interested in extracting areas of interest from satellite images by introducing a MO-TRIBES/OC-SVM approach. The one-class support vector machine (OC-SVM) is based on the estimation of a support that includes the training data; it identifies areas of interest without including other classes of the scene. We propose generating optimal training data using the Multi-Objective TRIBES (MO-TRIBES) algorithm to improve the performance of the OC-SVM. MO-TRIBES is a parameter-free optimization technique that manages the search space in tribes composed of agents, making different behavioral and structural adaptations to minimize the false positive and false negative rates of the OC-SVM. We applied our proposed approach to the extraction of earthquake-affected areas and urban areas. The experimental results and comparisons with different state-of-the-art classifiers confirm the efficiency and robustness of the proposed approach.

A watermark is a signal added to an original signal in order to preserve the copyright of the owner of the digital content. The basic challenge in designing a watermarking system is the dilemma between transparency and robustness: a higher degree of transparency requires a compromise in robustness, and vice versa. In addition, watermarking has so far been treated generically, creating the need for algorithms specialized for particular image processing application domains. Our proposed technique takes the image characteristics into consideration for watermark insertion and optimizes transparency and robustness. It achieved a 99.98% retrieval efficiency for an image blurring attack and withstands other attacks as well; indeed, the proposed technique withstands almost all image processing attacks.

Image restoration has been carried out mostly by texture synthesis for large regions and by inpainting algorithms for small cracks in images. In this paper, we propose a new approach that allows the simultaneous filling-in of different structures and textures by processing in the wavelet domain. A combination of structure inpainting and patch-based texture synthesis, known as patch-based inpainting, is carried out to fill and update the target region. The wavelet transform is used for its very good multiresolution capabilities. The proposed algorithm uses the wavelet subbands to resolve the structure and texture components into a smooth approximation and high-frequency structural details. The subbands are processed separately by prioritized patch-based inpainting, with isophote-energy-driven texture synthesis at its core. The algorithm automatically estimates the wavelet coefficients of the target regions in the various subbands using optimized patches from the surrounding DWT coefficients. The suggested improvement drastically increases execution speed over the existing algorithm, and the proposed patch optimization strategy improves the quality of the fill. The filling-in is done with higher priority given to structures and isophotes arriving at target boundaries. The effectiveness of the algorithm is demonstrated on natural and textured images with varying textural complexity.

Nowadays, most vehicles are equipped with a variety of electronic devices to improve user convenience as well as vehicle performance. In order to interconnect these devices efficiently, the Controller Area Network (CAN) is commonly used. However, CAN requires reconfiguration of the entire network when a new device capable of both transmitting and receiving data is added to the existing network. In addition, since CAN is based on collision avoidance using address priority, it is difficult for a new node to be assigned a high priority, which eventually results in transmission delays across the entire network. Therefore, in this paper we propose a new system component, called the CAN coordinator, and design a new CAN framework capable of supporting plug-and-play functionality. Through experiments, we also show that the proposed framework can improve real-time capability based on plug-and-play functionality.

In this paper, a new iterative algorithm for reconstructing block-sparse signals, called the block backtracking-based adaptive orthogonal matching pursuit (BBAOMP) method, is proposed. Compared with existing methods, the BBAOMP method offers some flexibility between computational complexity and reconstruction quality through its backtracking step. Another outstanding advantage of the BBAOMP algorithm is that it does not require prior knowledge of the signal sparsity. Several experiments illustrate that the BBAOMP algorithm is superior in terms of the probability of exact reconstruction and running time.

Image segmentation is the most important operation in an image processing system; it is located at the junction between the processing and the analysis of images. Unsupervised segmentation aims to automatically separate the image into natural clusters, and because of the complexity of this task, several methods, particularly optimization methods, have been proposed. In our work, we are interested in the shuffled frog-leaping algorithm (SFLA), a memetic metaheuristic inspired by frog populations searching for food in nature. This paper proposes a new approach to unsupervised image segmentation based on the SFLA. It is implemented and applied to different types of images. To validate the performance of our approach, we performed experiments and compared the results with those of the K-means method.

Speech recognition is one of the fascinating fields in computer science. The accuracy of a speech recognition system may be reduced by the noise present in the speech signal. Therefore, noise removal is an essential step in an automatic speech recognition (ASR) system, and this paper proposes a new technique called combined thresholding for noise removal. Feature extraction is the process of converting an acoustic signal into a compact set of valuable parameters. This paper also concentrates on improving mel-frequency cepstral coefficient (MFCC) features by introducing the discrete wavelet packet transform (DWPT) in place of the discrete Fourier transform (DFT) block to provide more efficient signal analysis. Since the resulting feature vectors vary in size, a self-organizing map (SOM) is used to choose the correct feature vector length. As a single classifier does not provide sufficient accuracy, this research proposes an ensemble support vector machine (ESVM) classifier that takes the fixed-length feature vector from the SOM as input; the combination is termed ESVM_SOM. The experimental results show that the proposed methods provide better results than existing methods.

Nowadays, with the development of signal processing techniques, protecting the integrity and authenticity of images has become a topic of great concern. A blind image authentication technology with high tamper detection accuracy under common attacks is urgently needed. In this paper, an improved fragile watermarking method based on the local binary pattern (LBP) is presented for blind tamper localization in images. In this method, a binary watermark is generated by the LBP operator, which is often used in face identification and texture analysis. In order to guarantee the safety of the proposed algorithm, the Arnold transform and a logistic map are used to scramble the authentication watermark. Then, the least significant bits (LSBs) of the original pixels are substituted by the encrypted watermark. Since the authentication data is constructed from the image itself, no original image is needed for tamper detection: the LBP map of the watermarked image is compared with the extracted authentication data to determine whether the image has been tampered with. In comparison with other state-of-the-art schemes, various experiments show that the proposed algorithm achieves better performance in forgery detection and localization under malicious attacks.

The number of information sources available on the web that use ontologies as support continues to increase, and these sources are often heterogeneous and distributed. Ontology alignment is the solution to ensure semantic interoperability. In this paper, we describe a new ontology alignment approach that combines structure-based and reasoning-based approaches in order to discover new semantic correspondences between entities of different ontologies. We used the biblio test of the benchmark series and the anatomy series of the Ontology Alignment Evaluation Initiative (OAEI) 2012 evaluation campaign to evaluate the performance of our approach. We compared our approach successively with the LogMap and YAM++ systems, and we also analyzed its contribution compared to purely structural and semantic methods. The results show that our approach provides good performance; indeed, the results are better than those of the LogMap system in terms of precision, recall, and F-measure. Our approach has also proven to be more relevant than YAM++ for certain types of ontologies, and it significantly improves on the structure-based and reasoning-based methods.

Music emotion is an important component in the field of music information retrieval and computational musicology. This paper proposes an approach to automatic emotion classification based on rough set (RS) theory. In the proposed approach, four different sets of music features are extracted, representing dynamics, rhythm, spectrum, and harmony. From these features, five statistical parameters are considered as attributes, including central moments up to the 4th order of each feature and covariance components of pairs of features. The large number of attributes is controlled by the RS-based approach, in which superfluous attributes are removed to retain the indispensable ones. In addition, the RS-based approach makes it possible to visualize which attributes play a significant role in the generated rules and to determine the strength of each rule for classification. Experiments have been performed to find out which audio features, and which of the statistical parameters derived from them, are important for emotion classification. The resulting indispensable attributes and the usefulness of the covariance components are also discussed. The overall classification accuracy with all statistical parameters is better than that of currently existing methods on a pair of datasets.

Power allocation is an important factor for cognitive radio networks to achieve higher communication capacity and faster equilibrium. This paper considers the problem of allocating power to each cognitive user so as to maximize the capacity of the cognitive system, subject to constraints on the total power of each cognitive user and the interference levels at the primary user. Since this power control problem can be formulated as a mixed-integer nonlinear program equivalent to a variational inequality (VI) problem over a convex polyhedron, which can in turn be transformed into a complementarity problem (CP), we use a modified projection method to solve the CP instead of solving the nonlinear program directly, and give a power allocation algorithm with a subcarrier allocation scheme. Simulation results show that the proposed algorithm performs well and effectively reduces the system power consumption at almost maximum capacity while achieving a Nash equilibrium.

In this paper, a data hiding algorithm using the Discrete Wavelet Transform (DWT) and the Arnold transform is proposed. The secret data is scrambled using the Arnold transform to make it secure. Wavelet subbands of a cover image are obtained using the DWT, and the scrambled secret data is embedded into significant wavelet coefficients of those subbands. The proposed algorithm is robust to a variety of attacks such as JPEG and JPEG2000 compression, image cropping, and median filtering. Experimental results show that the PSNR of the composite image is 1.05 dB higher and the capacity is 25% higher than those of existing algorithms.
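
For illustration, a minimal sketch of the Arnold (cat map) scrambling step on a square array is shown below, with the number of iterations acting as the secret key. This is a generic illustration under the standard definition of the transform, not the authors' full embedding procedure.

```python
import numpy as np

def arnold_scramble(block, iterations):
    """Scramble a square N x N array with the Arnold cat map:
    (x, y) -> ((x + y) mod N, (x + 2*y) mod N)."""
    n = block.shape[0]
    assert block.shape[0] == block.shape[1], "Arnold transform needs a square array"
    out = block
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

if __name__ == "__main__":
    secret = np.arange(64).reshape(8, 8)
    scrambled = arnold_scramble(secret, iterations=3)
    # The map is periodic, so iterating further eventually restores the original
    print(scrambled)
```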

The Journal of Information Processing Systems (JIPS) publishes a broad array of subjects related to information communication technology across prevalent and advanced fields including system, network, architecture, algorithm, application, security, and so forth. As the official international journal published by the Korean Information Processing Society and a prominent, multidisciplinary journal throughout the world, JIPS is indexed in ESCI, SCOPUS, EI COMPENDEX, DOI, DBLP, EBSCO, Google Scholar, and CrossRef. The purpose of JIPS is to provide a prominent, influential forum wherein researchers and professionals gather to promote, share, and discuss all major research issues and developments. Published theoretical and practical articles have contributed to related research areas by presenting new techniques, concepts, or analyses, featuring experience reports, experiments involving the implementation and application of new theories, and tutorials on state-of-the-art technologies related to information processing systems. The subjects covered by this journal include, but are not limited to, topics related to computer systems and theories, multimedia systems and graphics, communication systems and security, and software systems and applications.

Associative and bidirectional associative memories are examples of associative structures studied intensively in the literature. The underlying idea is to realize associative mapping so that the recall processes (one-directional and bidirectional ones) are realized with minimal recall errors. Associative and fuzzy associative memories have been studied in numerous areas yielding efficient applications for image recall and enhancements and fuzzy controllers, which can be regarded as one-directional associative memories. In this study, we revisit and augment the concept of associative memories by offering some new design insights where the corresponding mappings are realized on the basis of a related collection of landmarks (prototypes) over which an associative mapping becomes spanned. In light of the bidirectional character of mappings, we have developed an augmentation of the existing fuzzy clustering (fuzzy c-means, FCM) in the form of a so-called collaborative fuzzy clustering. Here, an interaction in the formation of prototypes is optimized so that the bidirectional recall errors can be minimized. Furthermore, we generalized the mapping into its granular version in which numeric prototypes that are formed through the clustering process are made granular so that the quality of the recall can be quantified. We propose several scenarios in which the allocation of information granularity is aimed at the optimization of the characteristics of recalled results (information granules) that are quantified in terms of coverage and specificity. We also introduce various architectural augmentations of the associative structures.

Multimedia content such as audio, images, and video is a ubiquitous and indispensable part of our daily life and learning. Objective and subjective quality evaluation plays an important role in various multimedia applications. Blind image quality assessment (BIQA) indicates the perceptual quality of a distorted image without considering or using its reference image. Blur is one of the most common image distortions. In this paper, we propose a novel BIQA index for Gaussian blur distortion based on the fact that images with different degrees of blur change differently when subjected to the same additional blur. We describe this discrimination from three aspects: color, edge, and structure. For color, we adopt the color histogram; for edge, we use the edge intensity map, with a saliency map as the weighting function to be consistent with the human visual system (HVS); for structure, we use the structure tensor and the structural similarity (SSIM) index. Numerous experiments on four benchmark databases show that the proposed index is highly consistent with subjective quality assessment.

In this paper, we present a new texture image retrieval method that combines color and texture features extracted from images by a set of multi-resolution multi-direction (MRMD) filters. The chosen MRMD filter set is simple, separates low- and high-frequency information, and provides efficient multi-resolution and multi-direction analysis. The HSV color space is used because it separates into hue, saturation, and value components, which are easily analyzed and exhibit characteristics similar to those of the human visual system. The experiments compare precision versus recall of retrieval and feature vector dimensions. The image sets used are the Corel DB and VisTex DB; the Corel_MR DB and VisTex_MR DB, which are transformed from the two original DBs to contain multi-resolution images; and the Corel_MD DB and VisTex_MD DB, transformed from the two original DBs to contain multi-direction images. According to the experimental results, the proposed method improves upon existing methods in terms of precision and recall of retrieval, while also reducing feature vector dimensions.

In a centralized cloud-controlled environment, decision making and monitoring play a crucial role: the host controller (HC) manages the resources across the hosts in a data center (DC). The HC performs virtual machine (VM) and physical host management, where VM management includes VM creation, monitoring, and migration. If the HC goes down, the services hosted by the various hosts in the DC cannot be accessed from outside the DC. Decentralized VM management avoids this single point of failure by designating one of the hosts in the DC as the HC, which helps keep the DC in a running state. Each host in the DC runs many VMs and has a threshold beyond which it cannot provide service. To stay within this threshold, the hosts in the DC migrate VMs across other hosts. Since the data involved in migration is transmitted in plaintext, an intruder can analyze packet movement and take control of host traffic. Incorporating a security mechanism on the hosts in the DC helps protect the data during migration. This paper discusses an approach for dynamic HC selection, VM selection, and secure VM migration in a cloud environment.

In this study, we propose a new approach to segmenting ground and nonground points obtained from a 3D laser range sensor. The primary aim of this research is to provide a fast and effective method for ground segmentation. In each frame, we divide the point cloud into small groups. All threshold points and start-ground points in each group are then analyzed. To determine threshold points, we rely on three features: gradient, lost threshold points, and abnormalities in the distance between the sensor and a particular threshold point. After a threshold point is determined, a start-ground point is identified by considering the height difference between two consecutive points. All points from a start-ground point to the next threshold point are ground points; other points are nonground. This process is repeated until all points are labelled.

In this paper, an improved zone-based routing protocol for heterogeneous wireless sensor networks is proposed. The proposed protocol fixes the zone size according to the distance from the base station and uses a dynamic clustering technique for advanced nodes, selecting as cluster head the node with the maximum residual energy to transmit the data. In addition, we select an optimal route with minimum energy consumption for normal nodes and conserve energy through state transitions throughout data transmission. Simulation results indicate that the proposed protocol performs better than other algorithms by reducing energy consumption and providing a longer network lifetime and better data packet throughput.

Due to the rapid growth in the amount of data, research on big data processing has been highlighted. For big data processing, CUBRID Shard can support query processing in a parallel way by dividing the database across a number of CUBRID servers. However, CUBRID Shard can answer a user's query only when the query accesses a single CUBRID server rather than multiple servers. To solve this problem, in this paper we propose a CUBRID-based distributed parallel query processing system that can answer a user's query in a parallel and distributed manner. Finally, through a performance evaluation, we show that the proposed system provides 2–3 times better query processing time than the existing CUBRID Shard.

Massive volumes of GPS trajectory data bring challenges to storage and processing. These issues can be addressed by compression algorithms that reduce the size of the trajectory data. A key requirement for a GPS trajectory compression algorithm is to reduce the data size while minimizing the loss of information. The synchronized Euclidean distance (SED) is an important error measure adopted by most existing algorithms. In order to further reduce the SED error, an improved version of the open window time ratio (OPW-TR) algorithm, called local optimum open window time ratio (LO-OPW-TR), is proposed. To make the SED error smaller, anchor points are selected by calculating each point's accumulated synchronized Euclidean distance (ASED). A variety of error metrics are used to evaluate the algorithm. The experimental results show that, at the same compression ratio, the errors of our algorithm are smaller than those of existing algorithms in terms of SED and speed.
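
For reference, the sketch below computes the synchronized Euclidean distance of a dropped point against the segment that would replace it, under the usual definition (compare the original point with the position linearly interpolated at the same timestamp). The point format and names are illustrative assumptions.

```python
import math

def sed(point, seg_start, seg_end):
    """Synchronized Euclidean distance of `point` from the segment
    (seg_start, seg_end). Each point is a tuple (x, y, t); the original
    point is compared with the position interpolated at its own timestamp."""
    x, y, t = point
    x1, y1, t1 = seg_start
    x2, y2, t2 = seg_end
    ratio = 0.0 if t2 == t1 else (t - t1) / (t2 - t1)
    xs = x1 + ratio * (x2 - x1)      # synchronized (time-aligned) position
    ys = y1 + ratio * (y2 - y1)
    return math.hypot(x - xs, y - ys)

if __name__ == "__main__":
    p_start, p_mid, p_end = (0.0, 0.0, 0), (1.0, 2.0, 5), (4.0, 0.0, 10)
    # Error committed if p_mid is dropped and replaced by the segment p_start -> p_end
    print(sed(p_mid, p_start, p_end))
```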

The amount of multimedia traffic over the Internet has been increasing because of the development of networks and mobile devices. Accordingly, studies on multicast, which is used to provide efficient multimedia and video services, have been conducted. In particular, studies on centralized multicast tree construction have attracted attention with the advent of software-defined networking. Among the centralized multicast tree construction algorithms, the group Takahashi and Matsuyama (GTM) algorithm is the most commonly used in multiple multicast tree construction. However, the GTM algorithm considers only the network-cost overhead when constructing multicast trees; it does not consider the temporary service disruption that arises from a link change for users receiving an existing service. Therefore, in this study, we propose a multiple multicast tree construction algorithm that can reduce network cost while avoiding considerable degradation of service quality to users. This is accomplished by considering both network-cost and link-change overhead of users. Experimental results reveal that, compared to the GTM algorithm, the proposed algorithm significantly improves the user-experienced quality of service by substantially reducing the number of link-changed users while only slightly adding to the network-cost overhead.

Cyberbullying has become an emerging issue in recent years, as research has revealed that users generally spend an increasing amount of time on social networks and forums to stay connected with each other. However, a problem arises when cyberbullies are able to reach their victims through these social media platforms. There are different types of cyberbullying and, like traditional bullying, it causes victims to feel overly self-conscious, increases their tendency to self-harm, and generally affects their mental state negatively. Such situations occur due to security issues such as user anonymity and the lack of content restrictions in some social networks or web forums. In this paper, we highlight existing solutions proposed by a number of researchers, namely Intrusion Prevention Systems and Intrusion Detection Systems. However, even with such solutions, cyberbullying still occurs at an alarming rate. We therefore propose an alternative solution that aims to prevent cyberbullying activities at a younger age, i.e., among young children. The application would provide an alternative method of preventing cyberbullying activities among the younger generations in the future.

Cloud computing is an attractive solution that can provide low-cost storage and powerful processing capabilities for government agencies and small and medium-sized enterprises. Yet the confidentiality of information must be considered by any organization migrating to the cloud, which makes research on encryption-based relational database systems that preserve the integrity and confidentiality of data in the cloud an interesting subject. So far there have been various solutions for executing SQL queries on encrypted data in the cloud without prior decryption, in which homomorphic encryption algorithms are generally applied to support queries with aggregate functions or numerical computation. However, existing homomorphic encryption algorithms cannot encrypt floating-point numbers. In this paper, we therefore present a mechanism that enables a trusted party to encrypt floating-point numbers with a homomorphic encryption algorithm and a partially trusted server to perform summation on the ciphertexts without revealing the data itself. In the first step, we encode floating-point numbers to hide the decimal points and the signs. The codes of the floating-point numbers are then encrypted with the homomorphic encryption algorithm and stored as sequences in the cloud. Finally, we use a DoubleListTree data structure to implement the SUM aggregate function and perform some extra processing to complete the summation.
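
Below is a minimal sketch of the encoding idea only, assuming a fixed-point style mapping of floats to non-negative integers so that an additively homomorphic scheme (e.g., Paillier) could sum them; the scale, offset, and function names are illustrative assumptions, not the paper's exact DoubleListTree-based scheme.

```python
SCALE = 10 ** 6          # hides the decimal point (fixed-point scaling)
OFFSET = 10 ** 12        # hides the sign by shifting all values to be positive

def encode(x: float) -> int:
    """Map a float to a non-negative integer suitable for additive
    homomorphic encryption."""
    return int(round(x * SCALE)) + OFFSET

def decode_sum(total: int, n: int) -> float:
    """Recover the sum of n encoded values (each carries one OFFSET)."""
    return (total - n * OFFSET) / SCALE

if __name__ == "__main__":
    values = [3.25, -1.5, 0.125]
    encoded = [encode(v) for v in values]
    # An additively homomorphic scheme would add ciphertexts on the server;
    # here we simply add the plaintext codes to check the encoding.
    print(decode_sum(sum(encoded), len(encoded)))   # -> 1.875
```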

Recently, there has been an increase in the number of hazardous events such as fire accidents. Monitoring systems that rely on human operators can degrade when those operators are fatigued or stressed. Fire alarm boxes are easy to use; however, they are frequently triggered by external factors such as temperature and humidity. We therefore propose an approach to fire detection based on image processing. In this paper, we propose a fire detection method using multi-channel information and gray level co-occurrence matrix (GLCM) image features. The multi-channel information consists of the RGB, YCbCr, and HSV color spaces. Flame color and smoke texture information are used to detect flames and smoke, respectively. The experimental results show that the proposed method outperforms the previous method in terms of fire detection accuracy.
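
As an illustration of the texture part of such a pipeline, the sketch below extracts a few GLCM statistics from a grayscale patch using scikit-image; the chosen distances, angles, and properties are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(gray_patch):
    """A few gray level co-occurrence matrix (GLCM) statistics for a
    uint8 grayscale patch (e.g., a candidate smoke region)."""
    glcm = graycomatrix(gray_patch,
                        distances=[1],
                        angles=[0, np.pi / 2],
                        levels=256,
                        symmetric=True,
                        normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(glcm_texture_features(patch))
```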

Due to the rapid growth and expansion of the Internet, digital multimedia such as images, audio, and video is available to everyone, and anyone can make unauthorized copies of any digital product. Consequently, the owners of such products cannot protect their ownership, a situation that may discourage future improvements in digital media production. Procedures such as cryptography and watermarking techniques have been proposed to protect these products. Watermarking means embedding a message, such as text or an image (called the watermark), into a host such as a text, image, audio, or video file (called the cover). Watermarking can provide security, data authentication, and copyright protection for digital media. In this paper, a new still-image watermarking method is proposed for the purpose of copyright protection. The watermark is embedded in a transform domain: the discrete cosine transform (DCT) is exploited, and the watermark is embedded in coefficients selected according to several criteria. With this procedure, the deterioration of the image is minimized to achieve high invisibility. Unlike traditional techniques, this paper suggests a new method for selecting the best blocks of DCT coefficients. After the best blocks are selected, the best coefficients within those blocks are selected as hosts in which the watermark bits are embedded. The coefficient selection relies on a weighting function that exploits the values and locations of the candidate coefficients. The experimental results show that the proposed method produces good imperceptibility and robustness against different types of attacks.
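
Purely as a generic illustration of embedding a bit into a mid-frequency DCT coefficient of an 8x8 block (the paper's block and coefficient selection criteria are not reproduced here), a minimal sketch using SciPy's DCT is shown below; the coefficient position and strength are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, pos=(3, 4), strength=12.0):
    """Embed a single watermark bit into one 8x8 image block by forcing the
    sign of a mid-frequency DCT coefficient."""
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    coeffs[pos] = strength if bit else -strength
    return idctn(coeffs, norm="ortho")

def extract_bit(block, pos=(3, 4)):
    """Recover the bit from the sign of the same DCT coefficient."""
    return int(dctn(block.astype(np.float64), norm="ortho")[pos] > 0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8)).astype(np.float64)
    marked = embed_bit(block, bit=1)
    print(extract_bit(marked))   # -> 1
```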

For dual-hop multiple-input multiple-output (MIMO) decode-and-forward relaying systems, we propose a selective relaying scheme that uses orthogonal space-time block code (OSTBC) and transmit antenna selection with maximal-ratio combining (TAS/MRC) or vice versa at the first and second hops, respectively. The aim is to achieve an asymptotically identical performance to the dual-hop relaying system with only TAS/MRC, while requiring lower feedback overhead. In particular, we give the selection criteria based on the antenna configurations and the average channel powers for the first and second hops, assuming Rayleigh fading channels. Also, the numerical results are shown for the outage performance comparison between the dual-hop DF relaying systems with the proposed scheme, only TAS/MRC, and only OSTBC.

Mobile nodes cannot always connect to each other in delay tolerant networks (DTNs). Many DTN routing protocols that favor multi-hop forwarding have been proposed to solve this problem, but they also lead to an intolerable delivery cost, so designing an overhead-efficient routing protocol that performs well in delivery ratio while keeping the delivery cost low is valuable. Therefore, we utilize the small-world property and propose a new delivery metric called multi-probability to design our relay node selection principles, under which nodes with lower delivery predictability can also be selected as relay nodes if one of their history nodes has higher delivery predictability. We can thus find more potential relay nodes and reduce the forwarding overhead of successfully delivered messages with our proposed algorithm, called HESnW. We also apply a new message copy allocation scheme to optimize the routing performance. Compared to existing routing algorithms, simulation results show that HESnW reduces the delivery cost while also obtaining a rather high delivery ratio.

In this paper, we analyze a recently proposed semi-fragile watermarking scheme based on local binary pattern (LBP) operators and note that it has a fundamental design flaw. In that scheme, a binary watermark is embedded into image blocks by modifying the neighborhood pixels according to the LBP pattern. However, different image blocks may have the same LBP pattern, which can lead to false detections in the watermark extraction process; in other words, one can modify the host image intentionally without affecting its watermark message. In addition, there is no encryption process before watermark embedding, which poses another potential security problem. To illustrate these weaknesses, two special copy-paste attacks are proposed in this paper, and several experiments are conducted to prove their effectiveness. To solve these problems, an improved semi-fragile watermarking scheme based on LBP operators is presented. In the watermark embedding process, the central pixel value of each block is taken into account, and the Arnold transform is adopted to guarantee the security of the watermark. Experimental results show that the improved watermarking scheme overcomes the above defects and locates tampered regions effectively.

In this paper, we propose an opportunistic non-orthogonal multiple access (NOMA)-based cooperative relaying system (CRS) with channel state information (CSI) available at the source, where CSI for the source-to-destination and source-to-relay links is used for opportunistic transmission. Using this CSI, the source instantaneously chooses between direct transmission and cooperative NOMA transmission. We provide an asymptotic expression for the average achievable rate of the opportunistic NOMA-based CRS under Rayleigh fading channels. We verify the asymptotic analysis through Monte Carlo simulations and compare the average achievable rates of the opportunistic NOMA-based CRS and the conventional one for various channel powers and power allocation coefficients used for NOMA.

The Journal of Information Processing Systems (JIPS) publishes a broad array of subjects related to information communication technology in a wide variety of prevalent and advanced fields, including systems, networks, architecture, algorithms, applications, security, and so forth. As the official international journal published by the Korean Information Processing Society and a prominent, multidisciplinary journal in the world, JIPS is indexed in ESCI, SCOPUS, EI COMPENDEX, DOI, DBLP, EBSCO, Google Scholar, and CrossRef. The purpose of JIPS is to provide a prominent, influential forum where researchers and professionals can come together to promote, share, and discuss all major research issues and developments. Published theoretical and practical articles contribute to their related research areas by presenting new techniques, concepts, or analyses, and feature experience reports, experiments involving the implementation and application of new theories, and tutorials on state-of-the-art technologies related to information processing systems. The subjects covered by this journal include, but are not limited to, topics related to computer systems and theories, multimedia systems and graphics, communication systems and security, and software systems and applications.

The significant advances in information and communication technologies are changing the way information is accessed. The Internet is a very important source of information and influences the development of other media. Furthermore, the growth of digital content is a major problem for academic digital libraries, so similar tools can be applied in this setting to provide users with access to information. Given its importance, we have reviewed and analyzed several proposals that improve the dissemination of information in university digital libraries and promote access to information of interest. These proposals adapt a user's access to information according to his or her needs and preferences. As seen in the literature, one of the techniques with the best results is the application of recommender systems: tools whose objective is to evaluate and filter the vast amount of digital information accessible online in order to help users access information. In particular, we focus on the analysis of fuzzy linguistic recommender systems, i.e., recommender systems that use fuzzy linguistic modeling tools to manage the user's preferences and the uncertainty of the system in a qualitative way. Thus, in this work, we analyze some proposals based on fuzzy linguistic recommender systems that help researchers, students, and teachers access resources of interest and thereby improve and complement the services provided by academic digital libraries.

Recently, the importance of big data has been emphasized with the development of smartphones and web/social network services. As a result, MapReduce, which can process big data efficiently, has received worldwide attention because of its excellent scalability and stability. Since big data is large in volume, is created quickly, and has varied properties, it is often more efficient to process big data summary information than the big data itself. The wavelet histogram, a typical data summarization technique, can generate optimal summary information without losing information about the original data. Therefore, systems that apply wavelet histogram generation techniques on top of MapReduce have been actively studied. However, existing work has the disadvantage that generation is slow, because the wavelet histogram is produced through one or more MapReduce jobs, and there is a high possibility that the error of the data restored from the wavelet histogram becomes large. In contrast, the MapReduce-based wavelet histogram generation system developed in this paper produces the wavelet histogram through a single MapReduce job, so the generation speed can be greatly increased. In addition, since the wavelet histogram is generated according to an error bound specified by the user, the error of the restored data can be controlled. Finally, we verify the efficiency of the developed system through a performance evaluation.
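
To make the idea concrete, the sketch below computes the Haar wavelet decomposition of a frequency histogram and keeps only the largest coefficients, which is the usual way a wavelet histogram summarizes data; the top-k thresholding rule is a simple illustration, not the paper's error-bound-driven MapReduce implementation.

```python
import numpy as np

def haar_decompose(hist):
    """Full Haar wavelet decomposition of a histogram whose length is a
    power of two. Returns [coarsest average, details from coarse to fine]."""
    data = np.asarray(hist, dtype=float)
    details = []
    while len(data) > 1:
        avg = (data[0::2] + data[1::2]) / 2.0
        det = (data[0::2] - data[1::2]) / 2.0
        details.append(det)
        data = avg
    return np.concatenate([data] + details[::-1])

def haar_reconstruct(coeffs):
    """Inverse of haar_decompose."""
    coeffs = np.asarray(coeffs, dtype=float)
    data, pos = coeffs[:1], 1
    while pos < len(coeffs):
        det = coeffs[pos:pos + len(data)]
        up = np.empty(2 * len(data))
        up[0::2] = data + det
        up[1::2] = data - det
        data, pos = up, pos + len(det)
    return data

if __name__ == "__main__":
    hist = np.array([2, 2, 0, 2, 3, 5, 4, 4], dtype=float)
    coeffs = haar_decompose(hist)
    # Keep only the 4 largest-magnitude coefficients (a crude summary)
    keep = np.argsort(np.abs(coeffs))[::-1][:4]
    summary = np.zeros_like(coeffs)
    summary[keep] = coeffs[keep]
    print(haar_reconstruct(summary))   # approximate reconstruction of hist
```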

In order to address the undetected probability of multiple targets in ultra-wideband (UWB) through-the-wall radar imaging (TWRI), a time-delay and amplitude modified back projection (BP) algorithm is proposed. The refraction point is found using Fermat's principle in the presence of a wall, and the time delay is correctly compensated. On this basis, the transmission loss of the electromagnetic wave, the absorption loss of the refracted wave, and the diffusion loss of the spherical wave are analyzed in detail. Amplitude compensation is derived and tested on a model with a single-layer wall. The simulation results obtained by the finite difference time domain (FDTD) method show that it is effective in increasing the scattering intensity of the targets behind the wall, and that compensation for the diffusion loss of the spherical wave plays a major role. Additionally, a two-layer wall model is simulated, and the computing time and imaging quality are compared between the single-layer and two-layer wall models. The results illustrate the performance of the time-delay and amplitude-modified BP algorithm with multiple targets and multiple-layer walls in UWB TWRI.

In this paper, we propose a transliteration approach based on semantic information (i.e., language origin and gender), which is automatically learnt from the person name, aiming to transliterate person names from Uyghur into Chinese. The proposed approach integrates semantic scores (i.e., performance on language origin and gender detection) with a general transliteration model and generates a semantic knowledge-based model that can produce the best candidate transliteration results. In the experiments, we use datasets that contain person names of different language origins: Uyghur and Chinese. The results show that the proposed semantic transliteration model substantially outperforms the general transliteration model, greatly improves the mean reciprocal rank (MRR) on both datasets, and aids in developing more efficient transliteration for named entities.

The burgeoning distribution of smartphone web applications across various mobile environments is increasingly drawing attention to the performance of mobile applications implemented in JavaScript and HTML5 (Hyper Text Markup Language 5). If the application has a simple processing structure, the problem is benign; however, browser loads are becoming more burdensome as the amount of JavaScript processing continues to increase, and the processing time and capacity of JavaScript in current mobile browsers are limited. As a solution, the Web Worker was designed to provide multi-threading, but it cannot guarantee the computing ability of a native application on a mobile device and is not sufficient to improve processing speed. The method proposed in this research overcomes the resource limitations of the mobile client and achieves native-application-level performance by providing a high-performance computing service: it offloads the JavaScript processing of a mobile device to a cloud-based server. A performance evaluation revealed the proposed approach to be up to 6 times faster in computing speed than the existing mobile browser's JavaScript processing, and 3 to 6 times faster than the Web Worker. In addition, memory usage was also lower than with the existing technology.

This paper proposes an automatic method to summarize Bangla news documents. In the proposed approach, pronoun replacement is applied for the first time to minimize dangling pronouns in the summary. After pronoun replacement, sentences are ranked using term frequency, sentence frequency, numerical figures, and title words. If two sentences have at least 60% cosine similarity, the frequency of the longer sentence is increased and the shorter sentence is removed to eliminate redundancy. Moreover, the first sentence is always included in the summary if it contains any title word. In Bangla text, numerical figures can be presented both in words and in digits in a variety of forms; all of these forms are identified to assess the importance of sentences. We use a rule-based system in this approach together with a hidden Markov model and a Markov chain model. To derive the rules, we analyzed 3,000 Bangla news documents and studied several Bangla grammar books. A series of experiments is performed on 200 Bangla news documents and 600 summaries (3 summaries for each document). The evaluation results demonstrate the effectiveness of the proposed technique over four recent methods.
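
A minimal sketch of the redundancy-removal step described above (drop the shorter of two sentences whose term-frequency vectors have cosine similarity of at least 0.6) is shown below; the whitespace tokenizer and the ordering details are simplified assumptions.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity of two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def remove_redundant(sentences, threshold=0.6):
    """Keep a sentence only if it is not >= threshold similar to a longer
    sentence that has already been kept."""
    vectors = [Counter(s.lower().split()) for s in sentences]
    # Consider longer sentences first so that the shorter duplicate is dropped
    order = sorted(range(len(sentences)), key=lambda i: -len(sentences[i]))
    kept = []
    for i in order:
        if all(cosine(vectors[i], vectors[j]) < threshold for j in kept):
            kept.append(i)
    return [sentences[i] for i in sorted(kept)]

if __name__ == "__main__":
    sents = ["the prime minister opened the new bridge today",
             "the prime minister opened the bridge",
             "heavy rain is expected tomorrow"]
    print(remove_redundant(sents))
```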

Cross-lingual query expansion is usually based on relationships among monolingual words, whereas a bilingual comparable corpus contains relationships among bilingual words. This paper therefore proposes a query expansion method based on these bilingual relationships. First, word vectors that characterize the bilingual words are trained on a Chinese–Thai bilingual comparable corpus. Then, the correlation between Chinese query words and Thai words is computed from these word vectors, and Thai candidate expansion terms are selected according to the correlation values. Next, multiple groups of Thai query expansion sentences are built from the Thai candidate expansion words based on the Chinese query sentence. Finally, the optimal sentence is selected and used to perform the Thai query expansion. Experimental results show that the proposed cross-lingual query expansion method can effectively improve the accuracy of Chinese–Thai cross-language information retrieval.

This paper introduces a new algorithm that renders motion blur using triangular motion paths. A triangle occupies a set of pixels when moving from a position in the start of a frame to another position in the end of a frame. This is a motion path of a moving triangle. For a given pixel, we use a motion path of each moving triangle to find a range of time that this moving triangle is visible to the camera. Then, we sort visible time ranges in the depth-time dimensions and use bitwise operations to solve the occlusion problem. Thereafter, we compute an average color of each moving triangle based on its visible time range. Finally, we accumulate an average color of each moving triangle in the front-to-back order to produce the final pixel color. Thus, our algorithm performs shading after the visibility test and renders motion blur in real time.

Traditional text similarity measures based on word frequency vectors ignore the semantic relationships between words, which, together with the high dimensionality and sparsity of document vectors, has become an obstacle to text similarity calculation. To address these problems, an improved singular value decomposition is used to reduce the dimensionality of the text representation model and remove noise. The optimal number of singular values is analyzed, and the semantic relevance between words can be calculated in the constructed semantic space. An inverted index construction algorithm and similarity definitions between vectors are proposed to compute the similarity between two documents at the semantic level. Experimental results on benchmark corpora demonstrate that the proposed method improves the F-measure.
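
For orientation, the sketch below follows the common latent semantic analysis recipe (TF-IDF vectors reduced by truncated SVD, then cosine similarity in the reduced space) using scikit-learn; the number of components and the toy corpus are illustrative assumptions, and the paper's inverted-index construction is not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "a cat was sitting on a mat",
    "stock markets fell sharply today",
]

# Word-frequency (TF-IDF) document vectors: high-dimensional and sparse
tfidf = TfidfVectorizer().fit_transform(docs)

# Truncated SVD projects the documents into a low-dimensional semantic space
svd = TruncatedSVD(n_components=2, random_state=0)
semantic = svd.fit_transform(tfidf)

# Document similarity computed in the semantic space rather than on raw terms
print(cosine_similarity(semantic))
```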

Virtual reality is a virtual space constructed by a computer that gives users the opportunity to indirectly experience situations they have not experienced in real life by realizing information for virtual environments. Various studies have been conducted to realize virtual reality, in which the user interface is a major factor in maximizing the sense of immersion and usability. However, most existing methods have disadvantages, such as high cost or restriction of the user's physical activity due to special devices attached to the body. This paper proposes a new type of interface that enables users to apply their intentions and actions to the virtual space directly without special devices, and test content using the new system is introduced. Users can interact with the virtual space by throwing an object into the space; to support this, moving-object detectors are built using infrared sensors. In addition, users can control the virtual space with their own postures. The method can heighten interest and concentration, increasing the sense of reality and immersion and maximizing the user's physical experience.

Link prediction in weighted networks is a challenging issue in complex network analysis. Unsupervised methods based on local structure are widely used for this predictive task. However, the results are still far from satisfactory, as most of the literature neglects two important points: common neighbors exert different influence on potential links, and the weights associated with links in the local structure also differ. In this paper, we adapt an effective link prediction model, the local naive Bayes model, to the weighted scenario to address this issue. Correspondingly, we propose a weighted local naive Bayes (WLNB) probabilistic link prediction framework. The main contribution is that a weighted clustering coefficient is incorporated, allowing our model to infer the weighted contribution in the prediction stage. In addition, WLNB can be applied on top of several classic similarity metrics. We evaluate WLNB on different kinds of real-world weighted datasets. Experimental results show that our proposed approach performs better, in terms of AUC and precision, than several alternative methods for link prediction in weighted complex networks.

The round robin algorithm is regarded as one of the most efficient and effective CPU scheduling techniques in computing. It centres on the processing time required for a CPU to execute available jobs. Although there are other CPU scheduling algorithms based on processing time which use different criteria, the round robin algorithm has gained much popularity due to its optimal time-shared environment. The effectiveness of this algorithm depends strongly on the choice of time quantum. This paper presents a new effective round robin CPU scheduling algorithm. The effectiveness here lies in the fact that the proposed algorithm depends on a dynamically allocated time quantum in each round. Its performance is compared with both traditional and enhanced round robin algorithms, and the findings demonstrate an improved performance in terms of average waiting time, average turnaround time and context switching.
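
The abstract does not specify how the time quantum is recomputed each round; the sketch below is a minimal round robin simulator in which, as one plausible choice, the quantum of each round is set to the mean of the remaining burst times. The quantum rule and process data are illustrative assumptions, not the authors' algorithm.

```python
from collections import deque

def dynamic_rr(bursts):
    """Round robin scheduling where the time quantum is recomputed at the
    start of every round as the mean of the remaining burst times
    (an illustrative choice of dynamic quantum). All jobs arrive at t = 0.
    Returns (average waiting time, average turnaround time, context switches)."""
    remaining = dict(enumerate(bursts))
    completion = {}
    ready = deque(remaining)            # process ids in arrival order
    time, switches = 0, 0
    while ready:
        quantum = max(1, round(sum(remaining[p] for p in ready) / len(ready)))
        for _ in range(len(ready)):     # one full round over the ready queue
            pid = ready.popleft()
            run = min(quantum, remaining[pid])
            time += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                completion[pid] = time
            else:
                ready.append(pid)
            switches += 1
    turnaround = [completion[p] for p in range(len(bursts))]
    waiting = [turnaround[p] - bursts[p] for p in range(len(bursts))]
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n, switches - 1

if __name__ == "__main__":
    print(dynamic_rr([24, 3, 3]))   # (avg wait, avg turnaround, context switches)
```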

In this paper, we propose a new traffic information service model that collects traffic information sensed by individual vehicles in real time using a smart device, and which enables drivers to share traffic information on all roads in real time through an application installed on the smart device. In particular, when the driver requests traffic information for a specific area, the proposed driver-personalized service model provides him/her with traffic information on the driving directions in advance by predicting the driving directions of the vehicle based on learning from the driving records of each driver. To do this, we propose a traffic information management model to process and manage, in real time, the large amount of online-generated traffic information and traffic information requests produced by each vehicle. We also propose a road node-based indexing technique to efficiently store and manage the location-based traffic information provided by each vehicle. Finally, we propose a driving learning and prediction model based on the hidden Markov model to predict the driving directions of each driver from the driver's driving records. We analyze the traffic information processing performance of the proposed model and the accuracy of the driving prediction model using traffic information collected from actual driving vehicles across the entire area of Seoul, as well as driving records and experimental data.

The extraction of influential people from their respective domains has attracted the attention of the research community in recent years. This study introduces a novel interaction strength metric for retrieving the most influential users in an online social network. The interaction strength is measured by three factors, namely retweet strength, commenting intensity, and mentioning density. In this article, we design a novel algorithm called IPRank that considers communications from the perspectives of both followers and followees in order to mine and rank the most influential people based on the proposed interaction strength metric. We conducted extensive experiments to evaluate the strength and rank of each user in a micro-blog network. The comparative analysis validates that IPRank discovers highly ranked people in terms of interaction strength, whereas the prior algorithm placed some weakly influential people at high ranks. The proposed model uncovers influential people because it includes a novel interaction strength metric, which improves the results significantly compared with the prior algorithm.

Social networking services (SNSs) such as Twitter, MySpace, and Facebook have become progressively significant, with billions of users. Alongside this growth, however, comes an increase in security threats such as the cross-site scripting (XSS) threat. Recently, a few approaches have been proposed to detect XSS attacks on SNSs. However, due to certain recent features of SNS webpages, such as JavaScript and AJAX, the existing approaches are not efficient in combating XSS attacks on SNSs. In this paper, we propose a machine learning-based approach to detecting XSS attacks on SNSs. In our approach, the detection of an XSS attack is performed based on three sets of features: URL, webpage, and SNS features. A dataset is prepared by collecting 1,000 SNS webpages and extracting the features from these webpages. Ten different machine learning classifiers are used on the prepared dataset to classify webpages into two categories: XSS or non-XSS. To validate the efficiency of the proposed approach, we evaluated and compared it with other existing approaches. The evaluation results show that our approach attains better performance in the SNS environment, recording the highest accuracy of 0.972 and the lowest false positive rate of 0.87.

The Journal of Information Processing Systems (JIPS) publishes a wide range of topics related to a wide variety of advanced information and communication technologies, including systems, networks, architectures, algorithms, applications, and security. JIPS is the official international journal published by the Korea Information Processing Society and is the world's leading academic journal indexed by ESCI, SCOPUS, EI COMPENDEX, DOI, DBLP, EBSCO, Google Scholar, and CrossRef. The purpose of JIPS is to provide an outstanding, influential forum where researchers and experts gather to promote, share, and discuss crucial research issues and developments. The published theoretical and practical articles contribute to the relevant research area by presenting cutting-edge techniques related to information processing including new theories, approaches, concepts, analysis, functional experience reports, implementations, and applications. Topics covered in this journal include, but are not limited to, computer systems and theory, multimedia systems and graphics, communication systems and security, software systems, and applications.

In this paper, the autoencoder (AE) is used as the building block, and greedy layer-by-layer unsupervised pre-training is used to construct a stacked autoencoder (SAE) that extracts abstract features from the original input data. These features are then used as the input of a logistic regression (LR) model, which predicts the click-through rate (CTR) of a user on an advertisement in its contextual environment. Experiments show that, compared with the logistic regression and support vector regression models commonly used for advertising CTR prediction in industry, the SAE-LR model achieves a considerably higher AUC value. With the improved accuracy of advertising CTR prediction, enterprises can understand the needs of their customers accurately, which promotes multi-path development with high efficiency and low cost under the conditions of internet finance.

This paper aims to extract an ObjectProperty-UsageMethod relation, in particular the HerbalMedicinalProperty-UsageMethod relation of the herb-plant object, as a semantic relation between two related sets, a herbal-medicinal-property concept set and a usage-method concept set, from several web documents. This HerbalMedicinalProperty-UsageMethod relation benefits people by providing alternative treatment/solution knowledge for health problems. The research addresses three main problems: how to determine an EDU (where an EDU is an elementary discourse unit, or a simple sentence/clause) with a medicinal-property/usage-method concept; how to determine the usage-method boundary; and how to determine the HerbalMedicinalProperty-UsageMethod relation between the two related sets. We propose using N-Word-Co on the verb phrase with the medicinal-property/usage-method concept to solve the first and second problems, where the N-Word-Co size is determined by learning with maximum entropy, support vector machine, and naïve Bayes models. We also apply naïve Bayes to solve the third problem of determining the HerbalMedicinalProperty-UsageMethod relation, with N-Word-Co elements as features. The research results provide high precision in HerbalMedicinalProperty-UsageMethod relation extraction.

A variety of medical service applications in the field of the Internet of Things (IoT) are being studied. Segmentation is important to identify meaningful regions in images and is also required in 3D images. Previous methods have been based on gray value and shape. The Visible Korean dataset consists of serially sectioned high-resolution color images. Unlike computed tomography or magnetic resonance images, automatic segmentation of color images is difficult because detecting an object’s boundaries in colored images is very difficult compared to grayscale images. Therefore, skilled anatomists usually segment color images manually or semi-automatically. We present an out-of-core 3D segmentation method for large-scale image datasets. Our method can segment significant regions in the coronal and sagittal planes, as well as the axial plane, to produce a 3D image. Our system verifies the result interactively with a multi-planar reconstruction view and a 3D view. Our system can be used to train unskilled anatomists and medical students. It is also possible for a skilled anatomist to segment an image remotely since it is difficult to transfer such large amounts of data.

The satisfiability problem is a core problem in artificial intelligence (AI), and how to improve the efficiency of algorithms for solving it has received wide attention. The IER (Improved Extension Rule) algorithm is based on the extension rule, and its efficiency is affected by the number of atoms and the number of clauses; DPLL rules are helpful for reducing these numbers. A complete algorithm, CIER (Complete Improved Extension Rule), based on the splitting rule and the extension rule, is therefore proposed in this paper to improve efficiency. First, CIER reduces the scale of a clause set with DPLL rules. Then, the clause set is split into a group of small clause sets. Finally, the satisfiability of the original clause set is obtained from the satisfiability of these small clause sets. A strategy, MOAMD (maximum occurrences and maximum difference), is given for CIER. With this strategy, a better ordering of atoms can be obtained, which reduces both the number of small clause sets and their scale, making CIER more efficient.

In many real-world applications, information is organized and represented with a graph structure, which is often used to represent various ubiquitous networks such as the World Wide Web, social networks, and protein-protein interaction networks. In particular, similarity evaluation between graphs is a challenging issue in many fields such as graph searching, pattern discovery, neuroscience, and chemical compound exploration. Some algorithms based on vertex or edge properties have been proposed to address this issue; however, these algorithms do not take both vertex and edge similarities into account. To this end, this paper pioneers a novel approach to similarity evaluation between graphs based on formal concept analysis. The approach is able to characterize the relationships between nodes and further reveal the similarity between graphs; its highlight is that it takes vertices and edges into account simultaneously. The proposed algorithm is evaluated using a case study to validate its effectiveness in detecting and measuring the similarity between graphs.

In the past decades, various image regularization methods have been introduced. Among them, the total variation model has drawn much attention because of its low computational complexity and well-understood mathematical behavior. However, regularization parameter estimation for the total variation model is still an open problem. To deal with this problem, a novel adaptive regularization parameter selection scheme is proposed in this paper based on the local spectral response, which can locally select the regularization parameters in a content-aware way and thereby adaptively adjust the weights between the two terms of the total variation model. Experimental results on simulated and real noisy images show the good performance of the proposed method in terms of visual improvement and peak signal-to-noise ratio.

Intrusion detection systems (IDSs) are crucial given the overwhelming increase of attacks on computing infrastructure. An IDS intelligently detects malicious activity and predicts future attack patterns based on classification analysis using machine learning and data mining techniques. This paper thoroughly evaluates classifier ensembles for IDSs in IEEE 802.11 wireless networks. Two ensemble techniques, voting and stacking, are employed to combine three base classifiers: decision tree (DT), random forest (RF), and support vector machine (SVM). We use the area under the ROC curve (AUC) as the performance metric. Finally, we conduct two statistical significance tests to evaluate the performance differences among the classifiers.
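
A minimal sketch of combining DT, RF, and SVM with voting and stacking using scikit-learn is given below; the toy data, hyperparameters, and use of predicted probabilities for AUC are illustrative assumptions rather than the paper's experimental setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base = [("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0))]

for name, model in [("voting", VotingClassifier(estimators=base, voting="soft")),
                    ("stacking", StackingClassifier(estimators=base))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```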

The pitch tracking of music has been researched for several decades. Several improvements are possible by creating a good t-distribution and using the instantaneous robust algorithm for pitch tracking framework to detect pitch accurately. This article shows how to detect the pitch of music using an improved detection method that applies a statistical approach; this approach produces a pitch track, i.e., a sequence of frequency bin numbers, which is used to create an index that offers useful features for comparing similar songs. The pitch frequency spectrum is extracted using a modified instantaneous robust algorithm for pitch tracking (IRAPT) as a base, combined with the statistical method. The pitch detection algorithm was implemented, and the percentage of matching performance on Thai classical music was assessed in order to test its accuracy. We used the longest common subsequence to compare the similarities of pitch sequence alignments in the music. The experimental results show that the retrieval accuracy for Thai classical music using the t-distribution of the instantaneous robust algorithm for pitch tracking (t-IRAPT) is 99.01% within the top five ranking, with the shortest query sample being five seconds long.

A new medical materials scheduling system and its modeling method for complex rescue operations are presented. Unlike other similar systems, both the BeiDou Satellite Communication System (BSCS) and a Special Fiber-optic Communication Network (SFCN) are first used to collect the rescue requirements and the location information of the disaster areas. All these messages are then displayed in a dedicated medical software terminal, after which bipartite graph models are used to compute the optimal scheduling of medical materials. Finally, the results are transmitted back via the BSCS and the SFCN to provide fast guidance for the medical rescue. Three issues are addressed: the Kuhn-Munkres algorithm is used to obtain the optimal matching for the single-drug scheduling issue, a spectral clustering-based method is employed to calculate the optimal distribution for the multiple-drug scheduling issue, and a similarity metric over the neighboring matrix is used to estimate the backup-scheme selection for medical materials. Many simulation experiments and applications have proved the correctness of the proposed technique and system.
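
As a point of reference for the single-drug matching step, the Kuhn-Munkres (Hungarian) assignment can be computed with SciPy as sketched below; the cost matrix, which would encode distances or delivery costs between depots and rescue sites, is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost of sending drug stock i to disaster area j
# (e.g., travel time in hours); real costs would come from the rescue data.
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])

# Kuhn-Munkres / Hungarian algorithm: minimum-cost one-to-one assignment
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"depot {i} -> area {j} (cost {cost[i, j]})")
print("total cost:", cost[rows, cols].sum())
```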

As technologies related to sensor networks emerge and the use of GeoSensors increases along with the development of Internet of Things (IoT) technology, spatial query processing systems that efficiently process spatial sensor data are being actively studied. However, existing spatial query processing systems do not support a spatial-temporal data type or spatial-temporal operators for processing spatial-temporal sensor data, and are therefore inadequate for processing spatial-temporal sensor data such as GeoSensor data. Accordingly, this paper develops a spatial-temporal query processing system for efficient spatial-temporal query processing of sensor data in a sensor network. Lastly, this paper verifies the utility of the system through a scenario and shows, through an assessment of processing time and memory usage, that its performance is better than that of existing systems.

Related to the maximum vector problem, a skyline query is to discover dominating tuples from a set of tuples, where each tuple describes an object (such as a hotel) in several dimensions (such as the price and the distance to the beach). A tuple, an instance of an object, dominates another tuple if it is equally good or better in all dimensions and better in at least one dimension. Traditionally, skyline queries are defined upon single-instance data, that is, upon objects each of which is associated with a single instance. However, in some cases an object is associated not with a single instance but with multiple instances. For example, on a review website, many users assign scores to a product or a service, and each user's score is an instance of the object representing the product or the service. Such data is an example of multi-instance data. Unlike most (if not all) other work, which considers the traditional setting, we consider skyline queries defined upon multi-instance data. We define the dominance calculation and propose an algorithm to reduce its computational cost. We use synthetic and real data to evaluate the proposed methods, and the results demonstrate their utility.
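
For the traditional single-instance setting that the multi-instance definition generalizes, a minimal sketch of the dominance test and a naive skyline computation is given below (smaller is assumed better in every dimension); it is an illustration of the basic concept, not the paper's multi-instance algorithm.

```python
def dominates(a, b):
    """True if tuple a dominates tuple b: a is at least as good (here, as small)
    in every dimension and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(tuples):
    """Naive skyline: keep the tuples that are dominated by no other tuple."""
    return [t for t in tuples
            if not any(dominates(u, t) for u in tuples if u is not t)]

if __name__ == "__main__":
    # (price, distance to the beach) for some hotels; lower is better in both
    hotels = [(120, 2.0), (90, 3.5), (150, 0.5), (125, 2.5), (200, 2.5)]
    print(skyline(hotels))   # -> [(120, 2.0), (90, 3.5), (150, 0.5)]
```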

This paper presents a scalable multiple-camera collaboration strategy for active tracking applications in large areas. The proposed approach is based on a distributed mechanism but emulates the master-slave mechanism. The master and slave cameras are not fixed designations but are adaptively determined depending on object dynamics and density distribution, and the number of cameras emulating the master is not fixed. The collaboration among the cameras utilizes global and local sectors in which the visual correspondences among different cameras are determined. The proposed method combines the local information to construct the global information needed for emulating master-slave operations. Based on the global information, the load of active tracking operations is balanced to maximize active tracking coverage of highly dynamic objects. The dynamics of all objects visible in the local camera views are estimated for effective coverage scheduling of the cameras. The active tracking synchronization timing is chosen to maximize the overall monitoring time for general surveillance operations while minimizing active tracking misses. Real-time simulation results demonstrate the effectiveness of the proposed method.

Wireless sensor networks for forest monitoring are typically deployed in fields where manual intervention is not easily possible. An interesting approach to extending the lifetime of sensor nodes is the use of energy harvested from the environment. Design constraints are application-dependent and based on the monitored environment in which the energy harvesting takes place. To reduce energy consumption, we design a power management scheme that uses dynamic duty cycle scheduling at the network layer to plan node duty time. The dynamic duty cycle scheduling is realized based on a tier structure in which the network is organized concentrically around the sink node. In addition, the multiple paths preserved in the tier structure can be used to deliver residual packets when a path failure occurs. Experimental results show that the proposed method achieves better performance.

A prototype selection method chooses a small set of training points from a whole set of class data. As the data size increases, the selected prototypes play a significant role in covering class regions and learning a discrimination rule. This paper discusses methods for selecting prototypes in a classification framework. We formulate the prototype selection problem as a set covering optimization problem in which the sets are constructed using a distance metric and the predefined classes. This formulation restricts attention to prototypes within each class, without considering points from other classes. A training point becomes a prototype by checking the number of its neighbors and whether it has been preselected. In this setting, we propose a greedy algorithm that chooses the most relevant points for preserving the class-dominant regions. The proposed method is simple to implement, has no parameters to tune, and achieves better or comparable results on both artificial and real-world problems.
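
The following Python sketch illustrates a greedy, set-cover style prototype selection in the spirit described above: each candidate point covers the same-class points within a given radius, and points are picked greedily until every training point is covered. The radius-based coverage rule and the parameter values are assumptions made for illustration, not the paper's exact criterion.

```python
import numpy as np

def greedy_prototypes(X, y, radius):
    """Greedy set-cover style prototype selection (illustrative sketch).

    Each training point "covers" the same-class points lying within `radius`.
    Points are picked greedily by how many still-uncovered points they cover,
    until every training point is covered.
    """
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    covers = (dists <= radius) & (y[:, None] == y[None, :])   # i covers j
    uncovered = np.ones(len(X), dtype=bool)
    prototypes = []
    while uncovered.any():
        gain = (covers & uncovered[None, :]).sum(axis=1)      # newly covered points
        best = int(gain.argmax())                             # always >= 1 while points remain
        prototypes.append(best)
        uncovered &= ~covers[best]
    return prototypes

X = np.random.rand(60, 2)
y = np.repeat([0, 1, 2], 20)
print(greedy_prototypes(X, y, radius=0.25))
```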

A dynamic thermal rating (DTR) system is an effective method to improve the capacity of existing overhead lines. According to the methodology based on the CIGRE (International Council on Large Electric Systems) standard, ampacity values under steady-state heat balance can be calculated from ambient environmental conditions. In this study, a simulation analysis of the relations between parameters and ampacity is described as a functional dependence, which can provide an effective basis for the design and research of overhead transmission lines. The simulation of ampacity variation in different rating scales, which are determined from real-time meteorological data and conductor state parameters, is described in this paper. To test the performance of DTR in different rating scales, capacity improvement and risk level are presented. The experimental results show that the capacity of the transmission line using DTR is significantly improved, with a low probability of risk. The information in this study has important reference value for the operation and management of the power grid.

In this paper, a texture feature extraction method using the local energy and local correlation of Gabor transformed images is proposed and applied to an image retrieval system. The Gabor wavelet is known to be similar to the response of the human visual system. The outputs of the Gabor transformation are robust to variations in object size and illumination. Due to such advantages, it has been actively studied in various fields such as image retrieval, classification, analysis, etc. In this paper, in order to fully exploit the superior aspects of the Gabor wavelet, local energy and local correlation features are extracted from Gabor transformed images and then applied to an image retrieval system. Some experiments are conducted to compare the performance of the proposed method with those of the conventional Gabor method and the popular rotation-invariant uniform local binary pattern (RULBP) method in terms of precision vs. recall. The Mahalanobis distance is used to measure the similarity between a query image and a database (DB) image. Experimental results for the Corel DB and VisTex DB show that the proposed method is superior to the conventional Gabor method. The proposed method also yields precision and recall 6.58% and 3.66% higher on average in the Corel DB, respectively, and 4.87% and 3.37% higher on average in the VisTex DB, respectively, than the popular RULBP method.
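
For reference, a minimal sketch of the Mahalanobis similarity measurement mentioned above, assuming the inverse covariance matrix is estimated from the database feature vectors; the array shapes and feature dimensionality are illustrative.

```python
import numpy as np

def mahalanobis(query_feat, db_feat, inv_cov):
    """Mahalanobis distance between a query feature vector and a DB feature vector."""
    d = query_feat - db_feat
    return float(np.sqrt(d @ inv_cov @ d))

# the inverse covariance is typically estimated from the whole feature database
feats = np.random.rand(500, 48)          # e.g., local energy + local correlation features
cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])   # small ridge for stability
inv_cov = np.linalg.inv(cov)
print(mahalanobis(feats[0], feats[1], inv_cov))
```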

For the text categorization task, selecting distinctive text features is important due to the high dimensionality of the feature space. Reducing the dimension of the feature space decreases processing time and increases accuracy. In this study, we introduce a novel statistical feature selection approach for text categorization. This approach measures the term distribution across all documents in the collection, the term distribution within a certain category, and the term distribution in a certain class relative to other classes. The experimental results show that the proposed method is superior to traditional feature selection methods.

Mobility arises naturally in Internet of Things networks, since the location of mobile objects, e.g., mobile agents, mobile software, mobile things, or users with wireless hardware, changes as they move. Tracking their current location is essential to mobile computing. To overcome the scalability problem, hierarchical architectures of location databases have been proposed. These architectures become effective when location updates and lookups for mobile objects are localized. However, the network signaling costs and the number of database operations increase, particularly when the scale of the architecture and the number of databases grow to accommodate a great number of objects. This disadvantage can be alleviated by a location caching scheme that exploits the spatial and temporal locality in location lookups. In this paper, we propose a hierarchical location caching scheme, which adapts the existing location caching scheme to a hierarchical architecture of location databases. The performance analysis indicates that the adjustment of the scheme's thresholds has an impact on cost reduction in the proposed scheme.

The Journal of Information Processing Systems (JIPS) is one of the journals published by the Korea Information Processing Society (KIPS), and it publishes papers from a wide variety of advanced research fields including systems, applications, networks, architecture, algorithms, security, and so forth. The journal is indexed in ESCI, SCOPUS, EI COMPENDEX, DOI, DBLP, EBSCO, Google Scholar, and CrossRef. There are four divisions: Computer System and Theory, Multimedia Systems and Graphics, Communication Systems and Security, and Information Systems and Application.

Quorum-based algorithms are widely used for solving several problems in mobile ad hoc networks (MANETs) and wireless sensor networks (WSNs). Several quorum-based protocols have been proposed for multi-hop ad hoc networks, each with its own pros and cons. The quorum-based protocol QEC (or QPS) was the first study of asynchronous sleep scheduling protocols. At the time, most of the proposed protocols were non-adaptive. Nowadays, however, adaptive quorum-based protocols have gained increasing attention, because protocols that can change their quorum size adaptively with network conditions are needed. In this paper, we first introduce the most popular quorum systems and explain quorum system properties and their performance criteria. Then, we present a comparative and comprehensive survey of non-adaptive and adaptive quorum-based protocols, which are discussed in depth. We also compare different quorum systems in terms of the expected quorum overlap size (EQOS) and active ratio. Finally, we summarize the pros and cons of current adaptive and non-adaptive quorum-based protocols.

Recently, developments in the smart home field have provided a range of services for installing and maintaining smart home appliances in a user's residential environment. However, conventional systems are not convenient enough, because users have to select a device and operate it manually on their own. In this paper, we propose a system that sets the priority of the devices selected by the user and proceeds with the task accordingly. When a user selects a device, the system recommends an optimal device associated with it. The system compares and sets the priority of each device, carrying out the tasks one by one according to the set priorities. Therefore, the proposed system is expected to provide users with increased convenience and more efficient task management.

Gene identification is at the center of genomic studies. Although the first phase of the Encyclopedia of DNA Elements (ENCODE) project has been claimed to be complete, the annotation of the functional elements is far from being so. Computational methods in gene identification continue to play important roles in this area and other relevant issues. So far, a lot of work has been performed in this area, and a plethora of computational methods and avenues have been developed. Many review papers have summarized these methods and other related work. However, most of them focus on the methodologies from a particular aspect or perspective. Different from these existing bodies of research, this paper aims to comprehensively summarize the mainstream computational methods in gene identification and tries to provide a short but concise technical reference for future studies. Moreover, this review sheds light on the emerging trends and cutting-edge techniques that are believed to be capable of leading research in this field in the future.

In this paper, we propose a method to achieve improved number plate detection on mobile devices by applying a multiple convolutional neural network (CNN) approach. First, we performed car detection with a supervised CNN verifier, and then we applied the detected car regions to the next supervised CNN verifier for number plate detection. In the final step, the detected number plate regions were verified through optical character recognition by another CNN verifier. Since mobile devices are limited in computation power, we propose a fast method to recognize number plates. We expect it to be used in the field of intelligent transportation systems.

The Active Appearance Model (AAM) is a class of deformable models which, in the segmentation process, integrates a priori knowledge of the shape, texture, and deformation of the structures studied. In its sequential form, this model is computationally intensive and operates on large data sets. This paper presents another framework to implement the standard version of the AAM. We suggest a distributed and parallel approach justified by the characteristics of the model and its potential for parallelism. We introduce a schema for the representation of the overall model and we study the operations that can be parallelized. This approach is intended to exploit the benefits of parallelism in the area of advanced image processing.

Organizations in some industries are still hesitant to adopt the Enterprise Resource Planning (ERP) system due to its high risk of failure. This study examined how industry classification affects the successful implementation of the ERP system. To achieve this goal, we reinvestigated the existing ERP Success Model that was developed by Chung with data from various industry sectors, since Chung validated the model only in the engineering and construction industries. In order to test whether the Chung model is applicable outside the engineering and construction industries, the relationships between the ERP success indicators and the critical success factors in the Chung model and those in the sample data collected from ten different industry sectors were compared and investigated. The ten industry sectors were selected based on the Global Industry Classification Standard (GICS). We found that the impact of success factors on the success of implementing an ERP system varied across industry sectors. This means that the success of ERP system implementation can be industry-specific. Thus, industry classification should be considered as another factor to help IT decision makers or top management avoid ERP system failures when they plan to implement a new ERP system.

This paper presents a complete method for vehicle detection and tracking in a fixed camera setting based on computer vision. Vehicle detection is performed based on Scale Invariant Feature Transform (SIFT) feature matching. With SIFT feature detection and matching, the geometric relation between two images is estimated. Then, the previous image is aligned with the current image so that moving vehicles can be detected by analyzing the difference image of the two aligned images. Vehicle tracking is also performed based on SIFT feature matching. To reduce time consumption while maintaining high tracking accuracy, the detected candidate vehicle in the current image is matched with the vehicle samples in the tracking sample set, which contains all of the vehicles detected in previous images. Most remarkably, the management of vehicle entries and exits is realized based on SIFT feature matching with an efficient update mechanism for the tracking sample set. The method is proposed for highway traffic environments where there are no non-automotive vehicles or pedestrians, as these would interfere with the results.

System Architecture Evolution (SAE) with Long Term Evolution (LTE) has been used as the key technology for the next generation mobile networks. To support mobility in the LTE/SAE-based mobile networks, the Proxy Mobile IPv6 (PMIP), in which the Mobile Access Gateway (MAG) of the PMIP is deployed at the Serving Gateway (S-GW) of LTE/SAE and the Local Mobility Anchor (LMA) of PMIP is employed at the PDN Gateway (P-GW) of LTE/SAE, is being considered. In the meantime, the Host Identity Protocol (HIP) and the Locator Identifier Separation Protocol (LISP) have recently been proposed with the identifier-locator separation principle, and they can be used for mobility management over global-scale networks. In this paper, we discuss how to provide inter-domain mobility management over PMIP-based LTE/SAE networks by investigating three possible scenarios: mobile IP with PMIP (denoted by MIP-PMIP-LTE/SAE), HIP with PMIP (denoted by HIP-PMIP-LTE/SAE), and LISP with PMIP (denoted by LISP-PMIP-LTE/SAE). For performance analysis of the candidate inter-domain mobility management schemes, we analyzed the traffic overhead at a central agent and the total transmission delay required for control and data packet delivery. From the numerical results, we can see that HIP-PMIP-LTE/SAE and LISP-PMIP-LTE/SAE are preferred to MIP-PMIP-LTE/SAE in terms of traffic overhead, whereas LISP-PMIP-LTE/SAE is preferred to HIP-PMIP-LTE/SAE and MIP-PMIP-LTE/SAE from the viewpoint of total transmission delay.

Global value numbering (GVN) is a method for detecting equivalent expressions in programs. Most of the GVN algorithms concentrate on detecting equalities among variables and hence, are limited in their ability to identify value-based redundancies. In this paper, we suggest improvements by which the efficient GVN algorithm by Gulwani and Necula (2007) can be made to detect expression equivalences that are required for identifying value-based redundancies. The basic idea for doing so is to use an anticipability-based Join algorithm to compute more precise equivalence information at join points. We provide a proof of correctness of the improved algorithm and show that its running time is a polynomial in the number of expressions in the program.

Cloud computing is a distributed computing model that still has a lot of drawbacks and faces difficulties. Many new innovative and emerging techniques take advantage of its features. In this paper, we explore the security threats to and risk assessments for cloud computing, attack mitigation frameworks, and risk-based dynamic access control for cloud computing. Common security threats to cloud computing have been explored, and these threats are addressed through acceptable measures via governance and effective risk management using a tailored security risk approach. Most existing Threat and Risk Assessment (TRA) schemes for cloud services use a converse thinking approach to develop theoretical solutions for minimizing the risk of security breaches at a minimal cost. In our study, we propose an improved Attack-Defense Tree mechanism, designated as iADTree, for solving the TRA problem in cloud computing environments.

Due to the proliferation of data being exchanged and the increase of dependency on this data for critical decision-making, it has become imperative to ensure the trustworthiness of the data at the receiving end in order to obtain reliable results. Data provenance, the derivation history of data, is a useful tool for evaluating the trustworthiness of data. Various frameworks have been proposed to evaluate the trustworthiness of data based on data provenance. In this paper, we briefly review a history of these frameworks for evaluating the trustworthiness of data and present an overview of some prominent state-of-the-art evaluation frameworks. Moreover, we provide a comparative analysis of two key frameworks by evaluating various aspects in an executional environment. Our analysis points to various open research issues and provides an understanding of the functionalities of the frameworks that are used to evaluate the trustworthiness of data.

In 2004, Yang et al. proposed a threshold proxy signature scheme that efficiently reduced the computational complexity of previous schemes. In 2009, Hu and Zhang presented some security leakages of Yang et al.'s scheme and proposed an improvement to eliminate the security leakages that had been pointed out. In this paper, we point out that both Yang et al.'s and Hu and Zhang's schemes still have security weaknesses: they cannot resist warrant attacks, in which an adversary can forge valid proxy signatures by changing the warrant. We also propose two secure improvements for these schemes.

The exceptional development of electronic device technology, the miniaturization of mobile devices, and the development of telecommunication technology have made it possible to monitor human biometric data anywhere and anytime by using different types of wearable or embedded sensors. In daily life, mobile devices can collect wireless body area network (WBAN) data, and the location data collected along with it is also important for disease analysis. In order to efficiently analyze WBAN data, including location information, and to support medical analysis services, we propose a geohash-based spatial index method for a location-aware WBAN data monitoring system on a NoSQL database system, which uses an R-tree-based global tree to organize the real-time location data of a patient and a B-tree-based local tree to manage historical data. This spatial index method supports a cloud-based location-aware WBAN data monitoring system. In order to evaluate the proposed method, we built a system that supports JavaScript Object Notation (JSON) and Binary JSON (BSON) document data on mobile gateway devices. The proposed spatial index method can efficiently process location-based queries for medical signal monitoring. To evaluate our index method, we simulated a small system with the proposed index method on MongoDB, a document-based NoSQL database system, and evaluated its performance.
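
The abstract does not give the authors' exact geohash construction, but a standard geohash encoder, shown below as a hedged sketch, illustrates how a latitude/longitude pair could be turned into a prefix-searchable index key for location-aware queries.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=8):
    """Encode a (lat, lon) pair as a geohash string of the given precision."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits, ch, even = 0, 0, True
    geohash = []
    while len(geohash) < precision:
        if even:                      # longitude bit
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                ch = (ch << 1) | 1
                lon_lo = mid
            else:
                ch = ch << 1
                lon_hi = mid
        else:                         # latitude bit
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                ch = (ch << 1) | 1
                lat_lo = mid
            else:
                ch = ch << 1
                lat_hi = mid
        even = not even
        bits += 1
        if bits == 5:                 # every 5 bits form one base32 character
            geohash.append(BASE32[ch])
            bits, ch = 0, 0
    return "".join(geohash)

# e.g., an index key for a patient location near Seoul
print(geohash_encode(37.5665, 126.9780))
```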

A primary task in wireless sensor networks (WSNs) is data collection. The main objective of this task is to collect sensor readings from sensor fields at predetermined sinks using routing protocols, without conducting network processing at intermediate nodes; many research studies using a static sink have shown this to be inefficient. The major drawback is that sensor nodes near a data sink tend to dissipate more energy than those far away, due to their role as relay nodes. Recently, novel WSN architectures based on mobile sinks and mobile relay nodes, which are able to move inside the region of a deployed WSN, have been developed; most research works on mobile WSNs mainly exploit mobility to reduce and balance energy consumption and to enhance communication reliability among sensor nodes. Our main purpose in this paper is to propose a solution to the problem of deploying mobile data collectors for alleviating the high traffic load and the resulting bottleneck in a sink's vicinity, which are caused by static approaches. For this reason, several WSNs based on mobile elements have been proposed. We studied two key issues in WSN mobility: the impact of the mobile element (sink or relay nodes) and the impact of the mobility model on WSN performance expressed in terms of energy efficiency and reliability. We conducted an extensive set of simulation experiments. The results obtained reveal that the collection approach based on relay nodes and the stochastic mobility model perform better.

Spectrum sensing is an essential function that enables cognitive radio technology to explore spectral holes and access them resourcefully without any harmful interference to licensed users. Spectrum sensing done by a single node is highly affected by fading and shadowing; thus, cooperative spectrum sensing was introduced to overcome this. Currently, the advancements in multiple antennas have given a new dimension to cognitive radio research. In this paper, we propose a multiple energy detector for cooperative spectrum sensing schemes based on evidence theory. We also propose a reporting mechanism for multiple energy detectors. With our proposed system, we show that a multiple energy detector using a cooperative spectrum sensing scheme based on evidence theory increases the reliability of the system, which ultimately improves spectrum sensing and reduces the reporting time. We also show the probability of error for the proposed system in simulation. Our simulation results show that our proposed system outperforms the conventional energy detector system.

TCS_SHA-3 is a family of four cryptographic hash functions that are covered by a United States patent (US 2009/0262925). The digest sizes are 224, 256, 384 and 512 bits. The hash functions use bijective functions in place of the standard compression functions. In this paper we describe first and second preimage attacks on the full hash functions. The second preimage attack requires negligible time and the first preimage attack requires O(2^36) time. In addition to these attacks, we also present a negligible time second preimage attack on a strengthened variant of the TCS_SHA-3. All the attacks have negligible memory requirements. To the best of our knowledge, there is no prior cryptanalysis of any member of the TCS_SHA-3 family in the literature.

To provide effective communication in Wireless Mesh Networks (WMNs), several algorithms have been proposed. Since the possibility of numerous failures always exists during communication, resiliency has proved to be an important aspect of WMNs for recovering from these failures. Resiliency, in general, concerns the maintenance of reliability and availability in the network. Several types of resiliency-based routing algorithms have been proposed, e.g., Resilient Multicast and ROMER. Resilient Multicast establishes two-node disjoint paths, and ROMER uses a credit-based approach to provide resiliency in the network. However, these approaches have some disadvantages in terms of network throughput and network congestion. Previously, the Buffer Based Routing (BBR) approach was proposed to overcome these disadvantages, and we showed earlier that BBR is more efficient with respect to throughput, network performance, and reliability. In this paper, we consider node/link failure issues and the corresponding performance of BBR. To this end, we propose the Resilient Packet Transmission (RPT) algorithm as a remedy for BBR during such failures. Further, we present a comparative performance analysis of previous approaches and our proposed approach. Network throughput, network congestion, and resiliency against node/link failure are the particular performance metrics examined over different-sized WMNs.

Image compression is an essential technique for saving time and storage space for the gigantic amount of data generated by images. This paper introduces an adaptive source-mapping scheme that greatly improves bit-level lossless grayscale image compression. In the proposed mapping scheme, the frequency of occurrence of each symbol in the original image is computed. These symbols are then sorted in descending order according to their frequencies. Based on this order, each symbol is replaced by an 8-bit weighted fixed-length code. This replacement generates an equivalent binary source with longer runs of identical symbols (0s or 1s). Different experiments using Lempel-Ziv lossless image compression algorithms have been conducted on the generated binary source. Results show that the newly proposed mapping scheme achieves dramatic improvements in compression ratios.
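
A possible reading of the mapping step is sketched below: symbols are ranked by frequency and re-coded with 8-bit codes ordered by Hamming weight, so that frequent symbols yield long runs of identical bits. The exact weighting rule used in the paper is an assumption here; only the frequency ranking and fixed-length replacement follow the abstract.

```python
import numpy as np

def weighted_code_mapping(image_bytes):
    """Re-code image symbols with 8-bit codes ordered by Hamming weight (sketch).

    Symbols are ranked by frequency; the most frequent symbol receives the
    8-bit code with the fewest 1-bits, so the remapped bit stream contains
    longer runs of identical bits. Keeping the mapping makes recovery lossless.
    """
    arr = np.frombuffer(image_bytes, dtype=np.uint8)
    symbols, counts = np.unique(arr, return_counts=True)
    by_freq = symbols[np.argsort(-counts)]                    # most frequent first
    codes = sorted(range(256), key=lambda c: (bin(c).count("1"), c))
    mapping = {int(s): codes[i] for i, s in enumerate(by_freq)}
    remapped = bytes(mapping[int(b)] for b in arr)
    return remapped, mapping

data = bytes([200, 200, 200, 17, 17, 3])
remapped, mapping = weighted_code_mapping(data)
print(list(remapped))   # [0, 0, 0, 1, 1, 2]
```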

Dynamic textures are videos that exhibit a stationary property with respect to time (i.e., they have patterns that repeat themselves over a large number of frames). These patterns can easily be tracked by a linear dynamic system. In this paper, a model t...

The recent advent of increasingly affordable and powerful 3D scanning devices capable of capturing high resolution range data about real-world objects and environments has fueled research into effective 3D surface reconstruction techniques for rendering the raw point cloud data produced by many of these devices into a form that would make it usable in a variety of application domains. This paper, therefore, provides an overview of the existing literature on surface reconstruction from 3D point clouds. It explains some of the basic surface reconstruction concepts, describes the various factors used to evaluate surface reconstruction methods, highlights some commonly encountered issues in dealing with the raw 3D point cloud data and delineates the tradeoffs between data resolution/accuracy and processing speed. It also categorizes the various techniques for this task and briefly analyzes their empirical evaluation results demarcating their advantages and disadvantages. The paper concludes with a cross-comparison of methods which have been evaluated on the same benchmark data sets along with a discussion of the overall trends reported in the literature. The objective is to provide an overview of the state of the art on surface reconstruction from point cloud data in order to facilitate and inspire further research in this area.

In order to considerably reduce the ambiguity rate, we propose in this article a disambiguation approach that is based on the selection of the right diacritics at different analysis levels. This hybrid approach combines a linguistic approach with a multi-criteria decision one and could be considered as an alternative choice to solve the morpho-lexical ambiguity problem regardless of the diacritics rate of the processed text. As to its evaluation, we tried the disambiguation on the online Alkhalil morphological analyzer (the proposed approach can be used on any morphological analyzer of the Arabic language) and obtained encouraging results with an F-measure of more than 80%.

The Joint Bayesian (JB) method has been used in most state-of-the-art methods for face verification. However, since the publication of the original JB method in 2012, no improved verification method has been proposed. A lot of studies on face verification have focused on extracting good features to improve the performance in the challenging Labeled Faces in the Wild (LFW) database. In this paper, we propose an improved version of the JB method, called the two-dimensional Joint Bayesian (2D-JB) method. It is very simple but effective in both the training and test phases. We separated two symmetric terms from the three terms of the JB log likelihood ratio function. Using the two terms as a two-dimensional vector, we learned a decision line to classify same and not-same cases. Our experimental results show that the proposed 2D-JB method significantly outperforms the original JB method by more than 1% in the LFW database.

The aim of this paper is to examine the effectiveness of combining three popular tools used in pattern recognition, which are the Active Appearance Model (AAM), the two-dimensional discrete cosine transform (2D-DCT), and Kernel Fisher Analysis (KFA), for face recognition across age variations. For this purpose, we first used AAM to generate an AAM-based face representation; then, we applied 2D-DCT to get the descriptor of the image; and finally, we used a multiclass KFA for dimension reduction. Classification was made through a K-nearest neighbor classifier, based on Euclidean distance. Our experimental results on face images, which were obtained from the publicly available FG-NET face database, showed that the proposed descriptor worked satisfactorily for both face identification and verification across age progression.

In this paper, we propose a framework that attempts to incorporate landmarks into a segment-based Mandarin speech recognition system. In this method, landmarks provide boundary information and phonetic class information, and the information is used to direct the decoding process. To prove the validity of this method, two kinds of landmarks that can be reliably detected are used to direct the decoding process of a segment model (SM) based Mandarin LVCSR (large vocabulary continuous speech recognition) system. The results of our experiment show that about 30% decoding time can be saved without an obvious decrease in recognition accuracy. Thus, the potential of our method is demonstrated.

Despite the convenience brought by the advances in web and Internet technology, users are increasingly being exposed to the danger of various types of cyber attacks. In particular, recent studies have shown that today's cyber attacks usually occur on the web via malware distribution and the stealing of personal information. A drive-by download is a kind of web-based attack for malware distribution. Researchers have proposed various methods for detecting a drive-by download attack effectively. However, existing methods have limitations against recent evasion techniques, including JavaScript obfuscation, hiding, and dynamic code evaluation. In this paper, we propose an emulation-based malicious webpage detection method. Based on our study of the limitations of the existing methods and the state-of-the-art evasion techniques, we introduce four features that can detect malware distribution networks and apply them to the proposed method. Our performance evaluation using a URL scan engine provided by VirusTotal shows that the proposed method detects malicious webpages more precisely than existing solutions.

This research presents battery discharge rate models for the energy consumption of mobile phone batteries based on machine learning, taking into account three usage patterns of the phone: the standby state, video playing, and web browsing. We present the experimental design methodology for data collection, preprocessing, model construction, and parameter selection. The data was collected on the HTC One X hardware platform. We considered various setting factors, such as Bluetooth, brightness, 3G, GPS, Wi-Fi, and Sync. The battery levels for each possible state vector were measured, and then we constructed the battery prediction model using different regression functions based on the collected data. The accuracy of the models constructed using the multi-layer perceptron (MLP) and the support vector machine (SVM) was compared using varying kernel functions, and various parameters for the MLP and SVM were considered. Prediction accuracy was measured by the mean absolute error (MAE) and the root mean squared error (RMSE). The experiments showed that the MLP with linear regression performs well overall, while the SVM with the polynomial kernel function based on linear regression gives a low MAE and RMSE. As a result, we were able to demonstrate how to apply the derived model to predict the remaining battery charge.
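
The sketch below illustrates, with synthetic data and scikit-learn, how MLP and SVM regressors could be compared using MAE and RMSE as described above; the feature layout, model settings, and data are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error

# hypothetical data: rows are setting vectors (Bluetooth, brightness, 3G, GPS, Wi-Fi, Sync)
# and the target is the measured battery discharge rate
rng = np.random.default_rng(0)
X = rng.random((300, 6))
y = 0.5 * X[:, 1] + 0.2 * X[:, 4] + 0.05 * rng.standard_normal(300)

models = {
    "MLP": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "SVM (poly kernel)": SVR(kernel="poly", degree=2),
}
for name, model in models.items():
    model.fit(X[:200], y[:200])                 # train on the first 200 samples
    pred = model.predict(X[200:])               # evaluate on the remaining 100
    mae = mean_absolute_error(y[200:], pred)
    rmse = np.sqrt(mean_squared_error(y[200:], pred))
    print(f"{name}: MAE={mae:.4f}, RMSE={rmse:.4f}")
```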

This paper presents the constituent-based approach for aligning bilingual multiword expressions, such as noun phrases, by considering the relationship not only between source expressions and their target translation equivalents but also between the expressions and the constituents of the target equivalents. We only considered the compositional preferences of multiword expressions and not their idiomatic usages, because our multiword identification method focuses on their collocational or compositional preferences. In our experimental results, the constituent-based approach showed much better performance than the general method for extracting bilingual multiword expressions. For future work, we will examine the scoring method of the constituent-based approach to achieve the best performance. Moreover, we will extend the target entries in the evaluation dictionaries by considering their synonyms.

This paper proposes a novel framework for 3D face verification using dimensionality reduction based on highly distinctive local features in the presence of illumination and expression variations. The histograms of efficient local descriptors are used to distinctively represent the facial images. For this purpose, different local descriptors are evaluated: Local Binary Patterns (LBP), Three-Patch Local Binary Patterns (TPLBP), Four-Patch Local Binary Patterns (FPLBP), Binarized Statistical Image Features (BSIF), and Local Phase Quantization (LPQ). Furthermore, experiments on the combinations of the four local descriptors at the feature level using simple histogram concatenation are provided. The performance of the proposed approach is evaluated with different dimensionality reduction algorithms: Principal Component Analysis (PCA), Orthogonal Locality Preserving Projection (OLPP), and the combined PCA+EFM (Enhanced Fisher linear discriminant Model). Finally, a multi-class Support Vector Machine (SVM) is used as a classifier to carry out the verification between imposters and customers. The proposed method has been tested on the CASIA-3D face database, and the experimental results show that our method achieves high verification performance.

As interest in the Internet increases, related technologies are also quickly progressing. As smart devices become more widely used, interest is growing in how to use the future Internet to resolve the fundamental issues of transmission quality and security. The future Internet is being studied to improve the limits of existing Internet structures and to reflect new requirements. In particular, research on providing more reliable communication to connect the Internet to various services is in demand. In this paper, we analyze the security threats caused by malicious activities in the future Internet and propose a human behavior analysis-based security service model for malware detection and intrusion prevention to provide more reliable communication. Our proposed service model provides highly reliable services by responding to security threats, detecting various malware intrusions, and performing protocol authentication based on human behavior.

Considering the diversity of video copy transformations, a multi-feature video copy detection algorithm based on the Speeded-Up Robust Features (SURF) local descriptor is proposed in this paper. Coarse video copy detection is done by an ordinal measure (OM) algorithm after the video is preprocessed. If the matching result is greater than the specified threshold, fine video copy detection is done based on the SURF descriptor, and a box filter is used to extract the integral video. In order to improve video copy detection speed, the Hessian matrix trace of the SURF descriptor is used for pre-matching, and dimensionality reduction is applied to the traditional SURF feature vector for video matching. Our experimental results indicate that video copy detection precision and recall are greatly improved compared with traditional algorithms, that our proposed multiple-feature algorithm has good robustness and discrimination accuracy, and that video detection speed is also improved.
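
As an illustration of the coarse detection stage, the sketch below computes a simple ordinal measure (a rank matrix of block-average intensities) and a normalized rank distance between two frames; the grid size and the normalization constant are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def ordinal_measure(frame, grid=3):
    """Rank vector of block-average intensities (ordinal measure, OM)."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    means = np.array([[frame[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                       for j in range(grid)] for i in range(grid)]).ravel()
    return means.argsort().argsort()             # rank of each block

def om_distance(frame_a, frame_b, grid=3):
    """Normalized rank distance used for coarse copy matching (illustrative)."""
    ra, rb = ordinal_measure(frame_a, grid), ordinal_measure(frame_b, grid)
    n = grid * grid
    return np.abs(ra - rb).sum() / (n * n / 2)   # 0.0 for identical rank patterns

a = np.random.randint(0, 256, (120, 160)).astype(float)
print(om_distance(a, a))   # 0.0 for identical frames
```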

The Virtual Local Area Network (VLAN) has long been used in campus and enterprise networks as the most popular network virtualization solution. Due to the benefits and advantages achieved by using VLANs, network operators and administrators have been using them for constructing their networks up until now and have even extended them to manage the networking in cloud computing systems. However, VLAN configuration is a complex, tedious, time-consuming, and error-prone process. Since Software Defined Networking (SDN) features centralized network management and network programmability, it is a promising solution for handling the aforementioned challenges in VLAN management. In this paper, we first introduce a new architecture for campus and enterprise networks by leveraging SDN and OpenFlow. Next, we design and implement an application for easily managing and flexibly troubleshooting the VLANs in this architecture. This application supports both static VLAN and dynamic VLAN configurations. In addition, we discuss the hybrid-mode operation, in which packet processing involves both the OpenFlow control plane and the traditional control plane. By deploying a real test-bed prototype, we illustrate how our system works and then evaluate the network latency in dynamic VLAN operation.

Data hiding is a wide field that helps secure network communications. Many data hiding researchers work on improving aspects such as capacity, stego file quality, or robustness. In this paper, we use an audio file as a cover and propose a reversible steganographic method that modifies the sample values using a modulus function so that the remainder of a given sample value equals the secret bit to be embedded. In addition, we use a location map that records these modified sample values, because reversible data hiding must exactly recover both the secret message and the original audio file from the stego file. The experimental results, measured by a correlation algorithm, show that this method is able to retrieve exactly the same secret message and audio file. Moreover, it achieves a significant improvement in capacity, since each sample value carries a secret bit, while the quality, measured by the peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), Pearson correlation coefficient (PCC), and Similarity Index Modulation (SIM), remains relatively high.
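
A minimal sketch of the modulus-based embedding idea follows, assuming a modulus of 2 (sample parity) and a boolean location map for exact recovery; the paper's actual modulus rule and map encoding may differ.

```python
import numpy as np

def embed(samples, bits):
    """Embed one secret bit per sample via the parity (modulus-2) of the sample value.

    A location map records which samples were modified so that the original
    audio can be restored exactly (reversibility).
    """
    stego = samples.copy()
    location_map = np.zeros(len(bits), dtype=bool)
    for i, bit in enumerate(bits):
        if stego[i] % 2 != bit:        # remainder must equal the secret bit
            stego[i] += 1
            location_map[i] = True     # remember the modification
    return stego, location_map

def extract(stego, location_map):
    """Recover the secret bits and restore the original samples."""
    bits = [int(s % 2) for s in stego[:len(location_map)]]
    restored = stego.copy()
    restored[:len(location_map)][location_map] -= 1   # undo the modification
    return bits, restored

audio = np.array([1000, 1001, 998, 997, 1002], dtype=np.int32)
secret = [1, 1, 0, 0, 1]
stego, lmap = embed(audio, secret)
bits, restored = extract(stego, lmap)
print(bits == secret, np.array_equal(restored, audio))   # True True
```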

Cloud computing is a new style of computing in which dynamically scalable and reconfigurable resources are provided as a service over the Internet. The MapReduce framework is currently the most dominant programming model in cloud computing. It is necessary to protect the integrity of MapReduce data processing services. Malicious workers, who can be divided into collusive workers and non-collusive workers, try to generate bad results in order to attack cloud computing. Therefore, efficiently detecting malicious workers is very important, as existing solutions are not effective enough in defeating malicious behavior. In this paper, we propose a security protection framework to detect malicious workers and ensure computation integrity in the map phase of MapReduce. Our simulation results show that our proposed security protection framework can efficiently detect both collusive and non-collusive workers and guarantee high computation accuracy.

The Journal of Information Processing Systems (JIPS) is the official international journal of the Korea Information Processing Society and has become a leading journal in the various areas of information processing technology in Korea; it is indexed in ESCI, SCOPUS, EI COMPENDEX, DOI, DBLP, EBSCO, Google Scholar, and CrossRef. This rapid growth is reflected in paper submissions: the number of papers submitted in 2016 was about 15 times higher than in 2013. Thus, the acceptance rate has been decreasing, which means that we have been publishing outstanding papers selected under high competition.

In conventional clustering algorithms, an object can be assigned to only one group. However, this is sometimes not the case in reality: there are cases where the data do not belong to a single group. In contrast, fuzzy clustering takes into consideration the degree of fuzzy membership of each pixel relative to the different classes. In order to overcome some shortcomings of traditional clustering methods, such as slow convergence and sensitivity to initialization values, we use the Harmony Search algorithm. It is a population-based metaheuristic algorithm imitating the musical improvisation process. The major thrust of this algorithm lies in its ability to integrate the key components of population-based methods and local search-based methods in a simple optimization model. In this paper, we propose a new unsupervised clustering method called the Fuzzy Harmony Search-Fourier Transform (FHS-FT). It hybridizes fuzzy clustering with the harmony search algorithm to strengthen the exploitation process and further improve the generated solutions, while the Fourier transform is used to increase the size of the image data. The results show that the proposed method is able to provide viable solutions compared to previous work.

We studied the current state of the art of Smart TV, along with its challenges and drawbacks; in particular, we discussed the lack of an end-to-end solution. We then illustrated the differences between Smart TV and IPTV from the network service provider's point of view. Unlike IPTV, the viewers of Smart TV's over-the-top (OTT) services can be global, such as foreign nationals in a country or viewers with special viewing preferences, and such viewers are sparsely distributed. The existing TV service deployment models over the Internet are not suitable for such viewers because they are based on content popularity; hence, we propose a community-based service deployment methodology with proactive content caching on rendezvous points (RPs). In our proposal, RPs are intermediate nodes responsible for caching, routing, and decision making. The viewer community formation is based on geographical locations and the similarity of their interests. The idea of using context information for proactive caching is itself not new, but we combined it with the in-network caching mechanism of the content centric network (CCN) architecture. We gauge the performance improvement achieved by the community model. The results show that, when the total number of requests is the same, our model can achieve significantly better performance, especially for sparsely distributed communities.

High-dimensional space is the biggest problem when the classification process is carried out, because computation takes longer and the associated costs are high. In this research, the facial space generated from homogeneous and non-homogeneous polynomials is proposed to extract the facial image features. The homogeneous and non-homogeneous polynomial-based eigenspaces provide an alternative appearance-based feature extraction method for handling non-linear features. The kernel trick is used to complete the matrix computation on the homogeneous and non-homogeneous polynomials. The weights and projections of the new feature space of the proposed method were evaluated using three face image databases, i.e., YALE, ORL, and UoB. The experimental results produced the highest recognition rates of 94.44%, 97.5%, and 94% for YALE, ORL, and UoB, respectively. The results show that the proposed method produces higher recognition rates than other methods, such as Eigenface, Fisherface, Laplacianfaces, and O-Laplacianfaces.
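
For clarity, the homogeneous and non-homogeneous polynomial kernels mentioned above can be written as K(x, y) = (x·y)^d and K(x, y) = (x·y + c)^d, respectively; the sketch below computes the corresponding Gram matrices, with the degree, constant, and data shapes chosen only for illustration.

```python
import numpy as np

def homogeneous_poly_kernel(X, Y, degree=2):
    """K(x, y) = (x . y)^d  -- homogeneous polynomial kernel (Gram matrix)."""
    return (X @ Y.T) ** degree

def nonhomogeneous_poly_kernel(X, Y, degree=2, c=1.0):
    """K(x, y) = (x . y + c)^d  -- non-homogeneous polynomial kernel (Gram matrix)."""
    return (X @ Y.T + c) ** degree

# kernel trick: the Gram matrix replaces an explicit high-dimensional feature map
faces = np.random.rand(40, 10304)      # e.g., 40 flattened 92x112 face images
K = nonhomogeneous_poly_kernel(faces, faces)
print(K.shape)                          # (40, 40)
```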

As mobile augmented reality technologies spread these days, many users want to produce the augmented reality (AR) contents they need by themselves. To keep pace with such needs, we have developed a mobile AR contents builder (hereafter referred to as MARB) that enables the user to easily connect a natural marker and a virtual object with various interaction events used to manipulate the virtual object in a mobile environment, so that users can simply produce AR content using the natural photos and virtual objects that they select. MARB consists of five major modules: a target manager, a virtual object manager, an AR accessory manager, an AR content manager, and an AR viewer. The target manager, virtual object manager, and AR accessory manager register and manage natural target markers, various virtual objects, and content accessories (such as various decorating images), respectively. The AR content manager defines a connection between a target and a virtual object, enabling various interactions for the desired functions such as translation/rotation/scaling of the virtual object, playing music, etc. The AR viewer augments various virtual objects (such as 2D images, 3D models, and video clips) on the pertinent target. MARB has been developed as a mobile application (app) so that AR contents can be created simply on mobile smart devices without switching to a PC environment for authoring. In this paper, we present the detailed organization and applications of MARB. It is expected that MARB will enable ordinary users to produce diverse mobile AR contents for various purposes with ease and contribute to expanding the mobile AR market based on the spread of a variety of AR contents.

Due to the block-based discrete cosine transform (BDCT), JPEG compressed images usually exhibit blocking artifacts. When the bit rates are very low, blocking artifacts seriously affect the image's visual quality. A bilateral filter preserves edges while smoothing images, so we propose an adaptive-weighted bilateral filter based on this property. In this paper, an image-deblocking scheme using this kind of adaptive-weighted bilateral filter is proposed to remove and reduce blocking artifacts. Two parameters of the proposed bilateral filter are adaptively weighted so that it can avoid over-blurring unsmooth regions while eliminating blocking artifacts in smooth regions. This is achieved in two ways: by using local entropy to control the level of filtering of each single pixel within the image, and by using an improved blind image quality assessment (BIQA) to control the strength of filtering for different images whose blocking artifacts differ. Our experimental results show that the proposed image-deblocking scheme performs well at eliminating blocking artifacts and can avoid the over-blurring of unsmooth regions.

The real-time detection of malware remains an open issue, since most existing approaches to malware categorization focus on improving accuracy rather than detection time. Therefore, finding a proper balance between these two characteristics is very important, especially for such sensitive systems. In this paper, we present a fast portable executable (PE) malware detection system, which is based on the analysis of the set of Application Programming Interfaces (APIs) called by a program and some technical PE features (TPFs). We use an efficient feature selection method, which first selects the most relevant APIs and TPFs using the chi-square (KHI²) measure, and then uses the Phi (φ) coefficient to classify the features into different subsets based on their relevance. We evaluated our method using different classifiers trained on different combinations of feature subsets. We obtained very satisfying results with more than 98% accuracy. Our system is adequate for real-time detection, since it is able to categorize a file (malware or benign) in 0.09 seconds.
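
The chi-square selection stage can be sketched as follows with scikit-learn; only this first stage is shown, the Phi-coefficient grouping step is omitted, and the feature matrix, label vector, and number of retained features are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

# hypothetical binary feature matrix: rows = PE files, columns = called APIs / technical PE features
X = np.random.randint(0, 2, size=(200, 500))
y = np.random.randint(0, 2, size=200)          # 0 = benign, 1 = malware

# keep the features with the highest chi-square score with respect to the class label
selector = SelectKBest(chi2, k=50).fit(X, y)
X_reduced = selector.transform(X)
print(X_reduced.shape)                          # (200, 50)
```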

The vehicle license plate recognition (VLPR) system analyzes and monitors the speed of vehicles, vehicle theft, violations of traffic rules, illegal parking, etc., on the motorway. The VLPR consists of three major parts: license plate detection (LPD), license plate character segmentation (LPCS), and license plate character recognition (LPCR). This paper presents an efficient method for the LPCS and LPCR of Korean vehicle license plates (LPs). LP tilt adjustment is a very important process in LPCS, and the Radon transform is used to correct the tilt of the LP. The global threshold segmentation method is used to segment LP characters from the two different types of Korean LPs, the single-row LP (SRLP) and the double-row LP (DRLP). The cross-correlation matching method is used for LPCR. Our experimental results show that the proposed methods for LPCS and LPCR can be easily implemented, and they achieved 99.35% and 99.85% segmentation and recognition accuracy rates, respectively, for Korean LPs.

Today's modern world requires a digital watermarking technique that takes the redundancy of an image into consideration when embedding a watermark. The novel algorithm used in this paper takes into consideration the redundancies of the spatial domain and the wavelet domain when embedding a watermark. In addition, the cryptography-based secret key makes the algorithm difficult to hack and helps protect ownership. The watermarking is blind, as it does not require the original image. A few coefficient matrices and secret keys are essential to retrieve the original watermark, which makes it robust against various intentional attacks. The proposed technique resolves the challenge of optimizing transparency and robustness using a Canny-based edge detector technique. The improvement in the transparency of the cover image can be seen in the computed PSNR value, which is 44.20 dB.
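
For reference, the PSNR figure quoted above is computed as 10·log10(peak² / MSE); a minimal sketch follows, with synthetic images used only for illustration.

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak signal-to-noise ratio (dB) between the cover and the watermarked image."""
    mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

cover = np.random.randint(0, 256, (512, 512))
stego = np.clip(cover + np.random.randint(-2, 3, cover.shape), 0, 255)   # small perturbation
print(f"{psnr(cover, stego):.2f} dB")
```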

Our approach makes it possible to capture an expert's knowledge as business rules by using an agent-based platform. The objective of our approach is to allow experts to manage the daily evolution of business domains without having to rely on a technician, and to allow them to be involved in and participate in the development of the application to accomplish the daily tasks of their work. The manipulation of an expert's knowledge therefore creates a need for information security and other associated technologies, and the notion of cryptography has emerged as a basic concept in business rules modeling. The purpose of this paper is to present a cryptographic-algorithm-based approach to integrate the security aspect into business rules modeling. We propose integrating an agent-based approach into the framework. This solution utilizes a security agent with domain ontology. This agent applies an encryption/decryption algorithm to guarantee the confidentiality, authenticity, and integrity of the most important rules. To increase the security of these rules, we used hybrid cryptography in order to take advantage of both symmetric and asymmetric algorithms. We performed some experiments to find the best encryption algorithm, which provides improvements in terms of response time, memory space, and security.
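
The hybrid (symmetric plus asymmetric) protection of rules can be sketched as follows with the Python cryptography package: a symmetric key encrypts the rule text and an RSA key wraps that symmetric key. The key sizes, padding choices, and the sample rule are assumptions for illustration, not the paper's exact configuration.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# asymmetric key pair held by the security agent
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# symmetric key encrypts the business rule (fast, suitable for bulk data)
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"IF credit_score < 600 THEN reject application")

# asymmetric key wraps the symmetric key (confidential key exchange)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(sym_key, oaep)

# decryption: unwrap the symmetric key, then decrypt the rule
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
print(plaintext)
```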

With the rapid development of both ubiquitous computing and the mobile Internet, big data technology is gradually penetrating various applications, such as smart traffic, smart city, and smart medical. In particular, smart medical, which is one core part of a smart city, is changing the medical structure; specifically, it is improving treatment planning for various diseases. Since the multiple treatment plans generated from smart medical have their own unique treatment costs, pollution effects, side effects for patients, and so on, determining a sustainable strategy for treatment planning is becoming very critical in smart medical. From a sustainability point of view, this paper first presents a three-dimensional evaluation model for representing the raw medical data and then proposes a sustainable strategy for treatment planning based on the representation model. Finally, a case study on treatment planning for a group of "computer autism" patients is presented to demonstrate the feasibility and usability of the proposed strategy.

The performance issues of screening large database compounds and multiple query compounds in virtual screening highlight a common concern in chemoinformatics applications. This study investigates these problems by choosing group fusion as a pilot model and presents efficient parallel solutions on parallel platforms, specifically the multi-core architecture of the CPU and the many-core architecture of the graphics processing unit (GPU). A study of sequential group fusion and a proposed design of parallel CUDA group fusion are presented in this paper. The design involves solving the two important stages of group fusion, namely similarity search and fusion (MAX rule), while addressing embarrassingly parallel and parallel reduction models. The sequential, optimized sequential, and parallel OpenMP versions of group fusion were implemented and evaluated. The outcome of the analysis from these three different design approaches influenced the design of the parallel CUDA version in order to optimize it and achieve high computational intensity. The proposed parallel CUDA version performed better than the sequential and parallel OpenMP versions in terms of both execution time and speedup; it was 5-10x faster, as both the similarity search and fusion (MAX) stages had been CUDA-optimized.

The latest research on the image-based fingerprint matching approaches indicates that they are less complex than the minutiae-based approaches when it comes to dealing with low quality images. Most of the approaches in the literature are not robust to fingerprint rotation and translation. In this paper, we develop a robust fingerprint matching system by extracting the circular region of interest (ROI) of a radius of 50 pixels centered at the core point. Maximizing their orientation correlation aligns two fingerprints that are to be matched. The modified Euclidean distance computed between the extracted orientation features of the sample and query images is used for matching. Extensive experiments were conducted over four benchmark fingerprint datasets of FVC2002 and two other proprietary databases of RFVC 2002 and the AITDB. The experimental results show the superiority of our proposed method over the well-known image-based approaches in the literature.
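
A simplified fragment of the matching pipeline described above is sketched below: a circular ROI of radius 50 pixels is cut around the core point, and a Euclidean distance is computed between the orientation features inside the two ROIs. The alignment step that maximizes orientation correlation is omitted, and the orientation maps and core points here are synthetic placeholders.

```python
import numpy as np

def circular_roi_mask(shape, center, radius=50):
    """Boolean mask selecting the circular ROI around the detected core point."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

def roi_distance(orient_a, orient_b, center_a, center_b, radius=50):
    """Euclidean distance between orientation features inside the two ROIs (simplified)."""
    mask_a = circular_roi_mask(orient_a.shape, center_a, radius)
    mask_b = circular_roi_mask(orient_b.shape, center_b, radius)
    fa, fb = orient_a[mask_a], orient_b[mask_b]
    n = min(len(fa), len(fb))               # ROIs near image borders may be clipped
    return float(np.linalg.norm(fa[:n] - fb[:n]))

sample = np.random.rand(300, 300) * np.pi    # hypothetical orientation map of the enrolled print
query = np.random.rand(300, 300) * np.pi     # hypothetical orientation map of the query print
print(roi_distance(sample, query, (150, 150), (148, 152)))
```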

In the medical field, many efforts have been made to develop and improve the Hospital Information System (HIS), including the Electronic Medical Record (EMR), Order Communication System (OCS), and Picture Archiving and Communication System (PACS). However, the materials generated and used in the medical field have various types and forms. Current HISs store and manage them separately in different systems, even though they relate to each other and contain redundant data. These systems are not helpful, particularly in emergencies where medical experts cannot check all of the clinical materials within the golden time. Therefore, in this paper, we propose a process to build an integrated data model for the medical information currently stored in various HISs. The proposed data model integrates this vast information by focusing on medical images, since they are the most important materials for diagnosis and treatment. Moreover, the model is disease-specific, to reflect the fact that medical information and clinical materials, including images, differ by disease. Two case studies show the feasibility and usefulness of our proposed data model by building models for two diseases, acute myocardial infarction (AMI) and ischemic stroke.

Existing sparsification methods for digital images struggle to achieve an adequate number of zero and near-zero coefficients. The method proposed in this paper, known as Discrete Rajan Transform Sparsification, overcomes this inadequacy. An attempt has been made to compare the simulation results for benchmark images obtained by various popular existing techniques and to analyze them from different aspects. With the help of the Discrete Rajan Transform algorithm, both lossless and lossy sparse representations are obtained. We divided an image into 8×8-sized blocks and applied the Discrete Rajan Transform algorithm to each block to get a more sparsified spectrum. The image was reconstructed from the transformed output of the Discrete Rajan Transform algorithm with an acceptable peak signal-to-noise ratio. The performance of the Discrete Rajan Transform in providing sparsity was compared with the results provided by the Discrete Fourier Transform, the Discrete Cosine Transform, and the Discrete Wavelet Transform by means of the Degree of Sparsity. The simulation results showed that the Discrete Rajan Transform provides better sparsification compared to the other methods.

In watermarking schemes, the discrete wavelet transform (DWT) is broadly used because its separation of frequency components is very useful. Moreover, LU decomposition has little influence on the visual quality of the watermark. Hence, in this paper, a novel blind watermarking algorithm based on LU decomposition and the DWT is presented for the copyright protection of digital images. In this algorithm, the DWT is first applied to the color host image. Then, the horizontal, vertical, and diagonal high-frequency components are extracted from the wavelet domain, and the sub-images are divided into 4×4 non-overlapping image blocks. Next, LU decomposition is applied to each sub-block. Finally, the color image watermark is transformed by Arnold permutation and then inserted into the upper triangular matrix. The experimental results imply that this algorithm has good invisibility and is robust to a certain degree against different attacks, such as contrast adjustment, JPEG compression, salt-and-pepper noise, cropping, and Gaussian noise.
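
The block-wise embedding can be sketched as below with pywt and scipy: a high-frequency sub-band is obtained by DWT, a 4×4 block is LU-decomposed, and one watermark bit is embedded in the upper triangular matrix. The quantization rule applied to U[0, 0] is an assumption made for illustration; the abstract does not specify the exact embedding rule.

```python
import numpy as np
import pywt
from scipy.linalg import lu

def embed_bit_in_block(block, bit, step=8.0):
    """Embed one watermark bit in a 4x4 block via its LU decomposition (sketch)."""
    P, L, U = lu(block)                    # block = P @ L @ U
    q = np.round(U[0, 0] / step)
    if int(q) % 2 != bit:                  # force the parity of the quantized value
        q += 1
    U[0, 0] = q * step
    return P @ L @ U                        # rebuild the slightly modified block

# host image -> DWT -> take a high-frequency sub-band -> 4x4 blocks
host = np.random.rand(64, 64) * 255
cA, (cH, cV, cD) = pywt.dwt2(host, "haar")
block = cH[:4, :4]
watermarked_block = embed_bit_in_block(block, bit=1)
print(np.abs(watermarked_block - block).max())   # distortion introduced by the embedded bit
```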

This paper presents a new combined forecasting method that is guided by the soft set theory (CFBSS) to predict business failures with different sample sizes. The proposed method combines both qualitative analysis and quantitative analysis to improve forecasting performance. We considered an expert system (ES), logistic regression (LR), and support vector machine (SVM) as forecasting components whose weights are determined by the receiver operating characteristic (ROC) curve. The proposed procedure was applied to real data sets from Chinese listed firms. For performance comparison, single ES, LR, and SVM methods, the combined forecasting method based on equal weights (CFBEWs), the combined forecasting method based on neural networks (CFBNNs), and the combined forecasting method based on rough sets and the D-S theory (CFBRSDS) were also included in the empirical experiment. CFBSS obtains the highest forecasting accuracy and the second-best forecasting stability. The empirical results demonstrate the superior forecasting performance of our method in terms of accuracy and stability.

To resolve the problems of Poisson/impulse noise, blurriness, and sharpness in degraded X-ray images, a novel and efficient enhancement algorithm based on X-ray image fusion using a discrete wavelet transform is proposed in this paper. The proposed algorithm consists of two basic steps. First, it applies boundary division techniques to detect pixels corrupted by Poisson and impulse noise and then uses the Wiener filter approach to restore those corrupted pixels. Second, it applies a sharpening technique to the same degraded X-ray image. Thus, it obtains two source X-ray images, each of which preserves its own enhancement effects. The details and approximations of these source X-ray images are fused via different fusion rules in the wavelet domain. The experimental results show that the proposed algorithm successfully combines the merits of the Wiener filter and sharpening, achieving significant proficiency in enhancing degraded X-ray images affected by Poisson noise and blurriness while preserving edge details.

In cognitive radio networks (CRNs), the performance of the transmission control protocol (TCP) at the secondary user (SU) severely drops due to the mistriggering of congestion control. The mistrigger is caused by the long disruption induced by the transmission of the primary user. In this paper, we propose a cross-layer approach, called the CR-aware scheme, that enhances TCP performance at the SU. The scheme is a sender-side addition to the standard TCP (i.e., TCP-NewReno), and utilizes an explicit cross-layer signal delivered from the physical (or link) layer; the signal indicates the detection of the primary transmission (i.e., the transmission of the primary user). We evaluated our scheme by implementing it on a software radio platform, the Universal Software Radio Peripheral (USRP), where many parts of the lower-layer operations (i.e., operations in the link or physical layer) run as user processes. In our implementation, we ran our CR-aware scheme over IEEE 802.15.4. Furthermore, for the purpose of comparison, we implemented a selective ACK-based local recovery scheme that helps TCP isolate congestive loss from random loss in a wireless section.

Acute myocardial infarction (AMI) is one of the three emergency diseases that require urgent diagnosis and treatment within the golden hour. Due to the nature of the disease, it is important to identify the status of the coronary artery in AMI. Therefore, multi-modal medical images, which can effectively show the status of the coronary artery, have been widely used to diagnose AMI. However, the legacy system has provided multi-modal medical images as flat and unstructured data. It lacks semantic information between the multi-modal images, which are distributed and stored individually. If we could see the status of the coronary artery at a glance by integrating the core information extracted from multi-modal medical images, the time for diagnosis and treatment would be reduced. In this paper, we analyze semantic relations between multi-modal medical images based on coronary anatomy for AMI. First, we selected coronary arteriogram, coronary angiography, and echocardiography as the representative medical images for AMI and extracted semantic features from each of them. We then analyzed the semantic relations between them and defined the convergence data model for AMI. As a result, we show that the data model can present core information from multi-modal medical images and enables intuitive diagnosis through a unified view of AMI.

This paper aims to assess the feasibility of a new and less-focused type of online sociability (the watching network) as a useful information source for personalized recommendations. In this paper, we recommend scientific articles of interest by using the shared interests between target users and their watching connections. Our recommendations are based on one typical social bookmarking system, CiteULike. The watching network-based recommendations, which use a much smaller amount of user data, produce suggestions that are as good as those of the conventional Collaborative Filtering technique. The results demonstrate that the watching network is a useful information source and a feasible foundation for information personalization. Furthermore, the watching network is substitutable for the anonymous peers of Collaborative Filtering recommendations. This study shows the expandability of social network-based recommendations to this new type of online social network.

While Modern Standard Arabic is the formal spoken and written language of the Arab world, dialects are the major communication mode for everyday life. Therefore, identifying a speaker’s dialect is critical in the Arabic-speaking world for speech processing tasks, such as automatic speech recognition or identification. In this paper, we examine two approaches that reduce the Universal Background Model (UBM) in an automatic dialect identification system across the following five Arabic Maghreb dialects: Moroccan, Tunisian, and the three dialects of the western (Oranian), central (Algiersian), and eastern (Constantinian) regions of Algeria. We applied our approaches to the Maghreb dialect detection domain, which contains a collection of 10-second utterances, and we compared the performance precision gained on the dialect samples by a baseline GMM-UBM system against that of our own improved GMM-UBM system, which uses a Reduced UBM algorithm. Our experiments show that our approaches significantly improve identification performance over purely acoustic features, with an identification rate of 80.49%.

Continuous multi-interval prediction (CMIP) is used to continuously predict the trend of a data stream based on various intervals simultaneously. The continuous integrated hierarchical temporal memory (CIHTM) network performs well in CMIP. However, it is not suitable for CMIP in real-time mode, especially when the number of prediction intervals is increased. In this paper, we propose a real-time integrated hierarchical temporal memory (RIHTM) network by introducing a new type of node, called a Zeta1FirstSpecializedQueueNode (ZFSQNode), for the real-time continuous multi-interval prediction (RCMIP) of data streams. The ZFSQNode is constructed by using a specialized circular queue (sQUEUE) together with the modules of the original hierarchical temporal memory (HTM) nodes. By using the simple structure and easy operation characteristics of the sQUEUE, the entire prediction operations are integrated in the ZFSQNode. In particular, we employed only one ZFSQNode in each level of the RIHTM network during the prediction stage to generate prediction results for different intervals. The RIHTM network efficiently reduces the response time. Our performance evaluation showed that the RIHTM satisfactorily predicts the trend of data streams with multiple intervals in real-time mode.

In evaluating the performance of a dual-hop wireless link, the effects of large- and small-scale fading have to be considered. To overcome this fading effect, several schemes, such as multiple-input multiple-output (MIMO) with orthogonal space time block codes (OSTBC), different combining schemes at the relay and receiving end, and orthogonal frequency division multiplexing (OFDM), are used in both the transmitting and the relay links. In this paper, we first compare the performance of a two-hop wireless link under different combinations of space diversity in the first and second hop for the amplify-and-forward (AF) case. Our second task is to incorporate the weak signal of a direct link; by applying the channel model of two random variables (one for the direct link and another for the relayed link), we get a very impressive result at a low signal-to-noise ratio (SNR) that is comparable with other models at a higher SNR. Our third task is to bring three other schemes under the two-hop wireless link: the use of transmit antenna selection (TAS) on both links with a weak direct link, the distributed Alamouti scheme in the two-hop link, and a single relay antenna with OFDM sub-carriers. Finally, all of the schemes mentioned above are compared to select the best possible model. The main finding of the paper is as follows: the use of MIMO on both hops with TAS applied on both links plus a weak direct link, and full-rate OFDM with a sub-carrier for each individual link, provide better results compared to the other models.

For different reasons, many viewers like to watch a summary of a film without wasting their time. Traditionally, video footage was analyzed manually to produce a summary, but this costs a considerable amount of work time. Therefore, it has become urgent to propose a tool for automatic video summarization. Automatic video summarization aims at extracting all of the important moments in which viewers might be interested. The summarization criteria can differ from one video to another. This paper presents how the emotional dimensions obtained from real viewers can be used as an important input for computing which parts are the most interesting in the total running time of a film. Our results, which are based on the lab experiments that were carried out, are significant and promising.

This paper presents a memory-efficient tree-based anti-collision protocol to identify memoryless RFID (Radio Frequency Identification) tags that may be attached to products. The proposed deterministic scheme utilizes two bit arrays instead of a stack or queue and requires only O(n) space, which is better than the earlier schemes that use at least O(n²) space, where n is the length of a tag ID in bits. Also, the size n of each bit array is independent of the number of tags to identify. Our simulation results show that our bit array scheme consumes much less memory space than the earlier schemes utilizing a queue or stack.

Along with the evolution of the Internet and its new emerging services, the quantity and impact of attacks have been continuously increasing. Currently, the technical capability required to mount an attack has tended to decrease. On the contrary, hacking tools are evolving to become more powerful, yet simple, comprehensive, and accessible to the public. In this work, network penetration testing and auditing of the Redhat operating system (OS), one of the most popular OSs for Internet applications, are highlighted. Several types of attacks from different sides and new attack methods were attempted, such as scanning for reconnaissance, guessing the password, gaining privileged access, and flooding the victim machine to decrease availability. Some analyses of network auditing and forensics from the victim server are also presented in this paper. Our proposed system aims to confirm whether the target is hackable or not, and we expect it to be used as a reference for practitioners to protect their systems from cyber-attacks.

Smart grids propose new solutions for electricity consumers as a means to help them use energy in an efficient way. In this paper, we consider the demand-side management issue that exists for a group of consumers (houses) that are equipped with renewable energy (wind turbines) and storage units (battery), and we try to find the optimal scheduling for their home appliances, in order to reduce their electricity bills. Our simulation results prove the effectiveness of our approach, as they show a significant reduction in electricity costs when using renewable energy and battery storage.

The Remote Device Management (RDM) protocol is used to manage the devices in lighting control networks. RDM provides bi-directional communications between a controller and many lighting devices over the DMX512-A network. In RDM, lighting devices are discovered using a simple binary search scheme based on the 48-bit unique ID (UID) of each device. However, the existing binary search scheme tends to require a large delay in the device discovery process. In this paper, we propose a novel partition-based discovery scheme for fast device discovery in RDM. In the proposed scheme, all devices are divided into several partitions as per the device UID, and the controller performs device discovery for each partition by configuring a response timer that each device will use. From numerical simulations, we can see that there is an optimal number of partitions that minimizes the device discovery time for a given number of devices in the proposed scheme, and also that the proposed partition-based scheme can reduce the device discovery time, as compared to the existing binary search scheme.
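The partitioning idea can be sketched as follows, with the timer formula and slot length as illustrative assumptions: the 48-bit UID space is split into K equal partitions, each device derives its partition index from its own UID, and devices in the polled partition reply after a delay derived from their position inside the partition.

UID_SPACE = 1 << 48     # 48-bit UID space defined by RDM

def partition_of(uid, k):
    """Partition index of a device, derived from its 48-bit UID."""
    return uid * k // UID_SPACE

def response_delay(uid, k, slot_ms=2.0):
    """Illustrative response timer: position of the UID inside its partition."""
    width = UID_SPACE // k
    return (uid % width) / width * slot_ms

if __name__ == "__main__":
    import random
    random.seed(7)
    devices = [random.getrandbits(48) for _ in range(8)]
    k = 4
    for p in range(k):
        members = [u for u in devices if partition_of(u, k) == p]
        print(f"partition {p}: "
              + ", ".join(f"{u:012x} (+{response_delay(u, k):.2f} ms)" for u in members))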

This paper presents the applications of Kriging spatial interpolation methods for meteorological variables, including temperature and relative humidity, in regions of Vietnam. Three types of interpolation methods are used, namely: Ordinary Kriging, Universal Kriging, and Universal Kriging plus Digital Elevation Model correction. The input meteorological data was collected from 98 ground weather stations throughout Vietnam and the outputs were interpolated temperature and relative humidity gridded fields, along with their error maps. The experimental results showed that the Universal Kriging plus Digital Elevation Model correction method outperformed the two other methods when applied to temperature. The interpolation effectiveness of Ordinary Kriging and Universal Kriging was almost the same when applied to both temperature and relative humidity.
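A minimal sketch of the interpolation step is given below, assuming the pykrige package and synthetic station data; the Digital Elevation Model correction applied in the paper is not shown.

import numpy as np
from pykrige.ok import OrdinaryKriging
from pykrige.uk import UniversalKriging

rng = np.random.default_rng(3)
lon = rng.uniform(102.0, 110.0, 98)             # 98 synthetic "stations"
lat = rng.uniform(8.0, 23.5, 98)
temp = 30.0 - 0.6 * (lat - 8.0) + rng.normal(0, 0.5, 98)

grid_lon = np.linspace(102.0, 110.0, 50)
grid_lat = np.linspace(8.0, 23.5, 80)

# Ordinary Kriging: constant (unknown) mean.
ok = OrdinaryKriging(lon, lat, temp, variogram_model="spherical")
ok_field, ok_var = ok.execute("grid", grid_lon, grid_lat)

# Universal Kriging: regional linear drift in the coordinates.
uk = UniversalKriging(lon, lat, temp, variogram_model="spherical",
                      drift_terms=["regional_linear"])
uk_field, uk_var = uk.execute("grid", grid_lon, grid_lat)

print("OK grid:", ok_field.shape, "UK grid:", uk_field.shape)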

The use of mobile agents for collaborative processing in wireless sensor networks has gained considerable attention. In particular, mobile agents are used for data aggregation to exploit redundant and correlated data. The efficiency of agent-based data aggregation depends on the agent migration scheme. However, in general, most of the proposed schemes are centralized approach-based schemes, where the sink node determines the migration paths for the agents before dispatching them into the sensor network. The main limitation of such schemes is that they need global network topology information for deriving the migration paths of the agents, which incurs additional communication overhead, since each node has a very limited communication range. In addition, a centralized approach does not provide fault-tolerant and adaptive migration paths. In order to solve such problems, we propose a distributed approach-based scheme for determining the migration path of the agents, where at each hop the local information is used to decide the migration of the agents. In addition, we also propose a local repair mechanism for dealing with faulty nodes. The simulation results show that the proposed scheme performs better than existing schemes in the presence of faulty nodes within the networks, and manages to report the aggregated data to the sink faster.

The inactive student rate is becoming a major problem in most open universities worldwide. In Indonesia, roughly 36% of students were found to be inactive in 2005. Data mining has been successfully employed to solve problems in many domains, including education. We propose a method for preventing inactive students by mining knowledge from student record systems with several state-of-the-art ensemble methods, such as Bagging, AdaBoost, Random Subspace, Random Forest, and Rotation Forest. The most influential attributes, including demographic attributes (marital status and employment), that affect whether a student becomes inactive were successfully identified. The complexity and accuracy of the classification techniques were also compared, and the experimental results show that Rotation Forest, with a decision tree as the base classifier, delivers the best performance among the compared classifiers.
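A minimal sketch of such an ensemble comparison is shown below, assuming scikit-learn (version 1.2 or later, where the base learner is passed as the estimator parameter) and a synthetic stand-in for the student records; Rotation Forest is not part of scikit-learn and is therefore omitted from the sketch.

from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: roughly 36% of the samples belong to the "inactive" class.
X, y = make_classification(n_samples=2000, n_features=15, n_informative=8,
                           weights=[0.64, 0.36], random_state=0)

base = DecisionTreeClassifier(max_depth=8, random_state=0)
models = {
    "Bagging": BaggingClassifier(estimator=base, n_estimators=50, random_state=0),
    "AdaBoost": AdaBoostClassifier(estimator=base, n_estimators=50, random_state=0),
    # Random Subspace emulated by sampling features without bootstrapping samples.
    "Random Subspace": BaggingClassifier(estimator=base, n_estimators=50,
                                         max_features=0.5, bootstrap=False,
                                         random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=50, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:15s} accuracy: {scores.mean():.3f}")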

At present, electronic commerce credit scoring models are overly simple, and a credit-brushing phenomenon has emerged in E-commerce. This phenomenon affects the judgment of consumers and hinders the rapid development of E-commerce. In this paper, an E-commerce credit evaluation model that uses a Gaussian density function is put forward. Through a density test and an analysis of the anomalies in E-commerce credit ratings, the abnormal points in the credit scores can be found; these points are then recalculated by a nonlinear credit scoring algorithm. The model can thus effectively improve the current E-commerce credit score and enhance its accuracy.
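The density-test idea can be sketched as follows: a Gaussian density is fitted to the observed credit scores, low-density scores are flagged as abnormal, and a simple nonlinear re-scoring stands in for the paper's nonlinear credit scoring algorithm, which is not specified here; the data and the density threshold are assumptions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
scores = np.concatenate([rng.normal(70, 8, 480),     # ordinary sellers
                         rng.normal(99, 0.5, 20)])   # suspiciously perfect scores

mu, sigma = scores.mean(), scores.std(ddof=1)
density = norm.pdf(scores, mu, sigma)
threshold = np.quantile(density, 0.05)               # flag the 5% least likely scores
abnormal = density < threshold

# Nonlinear re-scoring: squash abnormal scores back toward the bulk of the data.
adjusted = scores.copy()
adjusted[abnormal] = mu + sigma * np.tanh((scores[abnormal] - mu) / sigma)

print("flagged as abnormal:", int(abnormal.sum()))
print(f"mean score before/after adjustment: {scores.mean():.2f} / {adjusted.mean():.2f}")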

Fuzzy Formal Concept Analysis (FCA) is a mathematical tool for the effective representation of imprecise and vague knowledge. However, with a large number of formal concepts from a fuzzy context, the task of knowledge representation becomes complex. Hence, knowledge reduction is an important issue in FCA with a fuzzy setting. The purpose of this current study is to address this issue by proposing a method that computes the corresponding crisp order for the fuzzy relation in a given fuzzy formal context. The formal context obtained using the proposed method provides fewer concepts when compared to the original fuzzy context. The resultant lattice structure is a reduced form of its corresponding fuzzy concept lattice and preserves the specialized and generalized concepts, as well as stability. This study also shows a step-by-step demonstration of the proposed method and its application.

In this paper, we present a virtual laboratory platform (VLP), named Mercury, that allows students to carry out practical work (PW) on different aspects of mobile wireless sensor networks (WSNs). Our choice of WSNs is motivated mainly by the real experiments needed in most courses about WSNs. These experiments require an expensive investment and a lot of nodes in the classroom. To illustrate our study, we propose a course related to an energy-efficient and safe weighted clustering algorithm. This algorithm, coupled with suitable routing protocols, aims to maintain a stable clustering structure, to prevent most routing attacks on sensor networks, and to guarantee energy savings in order to extend the lifespan of the network. It also offers a better performance in terms of the number of re-affiliations. The platform presented here aims at showing the feasibility, the flexibility, and the reduced cost of such a realization. We demonstrate the performance of the proposed algorithms, which contribute to familiarizing learners with the field of WSNs.

Web shells are programs that are written for a specific purpose in Web scripting languages, such as PHP, ASP, ASP.NET, JSP, PERL-CGI, etc. Web shells provide a means to communicate with the server’s operating system via the interpreter of the web scripting languages. Hence, web shells can execute OS specific commands over HTTP. Usually, web attacks by malicious users are made by uploading one of these web shells to compromise the target web servers. Though there have been several approaches to detect such malicious web shells, no standard dataset has been built to compare various web shell detection techniques. In this paper, we present a collection of web shell files, WebSHArk 1.0, as a standard dataset for current and future studies in malicious web shell detection. To provide baseline results for future studies and for the improvement of current tools, we also present some benchmark results by scanning the WebSHArk dataset directory with three web shell scanning tools that are publicly available on the Internet. The WebSHArk 1.0 dataset is only available upon request via email to one of the authors, due to security and legal issues.

Cryptography aims at transmitting secure data over an unsecure network in coded form so that only the intended recipient can analyze it. Communication through messages, emails, or various other modes requires high security so as to maintain the confidentiality of the content. This paper deals with IDEA’s shortcoming of generating weak keys. If these keys are used for encryption and decryption, the ciphertext corresponding to a given plaintext may be easily predicted. By applying the genetic approach, which is a well-known optimization technique, to the weak keys, we obtained a definite solution for converting the weaker keys into stronger ones. The chances of generating a weak key in IDEA are very rare, but if one is produced, it could lead to a huge risk of attacks being made on the key, as well as on the information. Hence, measures have been taken to safeguard the key and to ensure the privacy of information.

In this paper, we propose a maximum entropy-based model, which can mathematically explain the bio-molecular event extraction problem. The proposed model generates an event table, which can represent the relationship between an event trigger and its arguments. The complex sentences with distinctive event structures can be also represented by the event table. Previous approaches intuitively designed a pipeline system, which sequentially performs trigger detection and arguments recognition, and thus, did not clearly explain the relationship between identified triggers and arguments. On the other hand, the proposed model generates an event table that can represent triggers, their arguments, and their relationships. The desired events can be easily extracted from the event table. Experimental results show that the proposed model can cover 91.36% of events in the training dataset and that it can achieve a 50.44% recall in the test dataset by using the event table.

In this paper, we propose a novel block cipher mode of operation, which is known as the counter chain (CC) mode. The proposed CC mode integrates the cipher block chaining (CBC) block cipher mode of operation with the counter (CTR) mode in a consistent fashion. In the CC mode, the confidentiality and authenticity of data are assured by the CBC mode, while speed is achieved through the CTR mode. The proposed mode of operation overcomes the parallelization deficiency of the CBC mode and the chaining dependency of the counter mode. Experimental results indicate that the proposed CC mode achieves the encryption speed of the CTR mode, which is exceptionally faster than the encryption speed of the CBC mode. Moreover, our proposed CC mode provides better security over the CBC mode. In summary, the proposed CC block cipher mode of operation takes the advantages of both the Counter mode and the CBC mode, while avoiding their shortcomings.

This paper presents the infectious watermarking model (IWM) for the protection of video contents, which is based on biological virus modeling by the infectious route and procedure. Our infectious watermarking is designed as a new protection paradigm for video contents, regarding the hidden watermark for video protection as an infectious virus, the video content as the host, and the codec as the contagion medium. We used pathogen, mutant, and contagion as the infectious watermark and defined the techniques of infectious watermark generation and authentication, kernel-based infectious watermarking, and content-based infectious watermarking. We experimented with our watermarking model by using existing watermarking methods as the kernel-based infectious watermarking and content-based infectious watermarking media, and verified the practical applications of our model based on these experiments.

Beacon scheduling is considered to be one of the most significant challenges for energy-efficient Low-Rate Wireless Personal Area Network (LR-WPAN) multi-hop networks. The emerging new standard, IEEE802.15.4e, contains a distributed beacon scheduling functionality that utilizes a specific bitmap and multi-superframe structure. However, this new standard does not provide a critical recipe for superframe duration (SD) allocation in beacon scheduling. Therefore, in this paper, we first introduce three different SD allocation approaches, LSB first, MSB first, and random. Via experiments we show that IEEE802.15.4e DSME beacon scheduling performs differently for different SD allocation schemes. Based on our experimental results we propose an adaptive SD allocation (ASDA) algorithm. It utilizes a single indicator, a distributed neighboring slot incrementer (DNSI). The experimental results demonstrate that the ASDA has a superior performance over other methods from the viewpoint of resource efficiency.

The stream cipher Salsa20 and its reduced versions are among the fastest stream ciphers available today. However, Salsa20/7 is broken and Salsa20/12 is not as safe as before. Therefore, Salsa20 must completely perform all of the four rounds of encryption to achieve a good diffusion in order to resist the known attacks. In this paper, a new variant of Salsa20 that uses the chaos theory and that can achieve diffusion faster than the original Salsa20 is presented. The method has been tested and benchmarked with the original Salsa20 with a series of tests. Most of the tests show that the proposed chaotic Salsa of two rounds is faster than the original four rounds of Salsa20/4, but it offers the same diffusion level.

This paper describes the Tezpur University dataset of online handwritten Assamese characters. The online data acquisition process involves the capturing of data as the text is written on a digitizer with an electronic pen. A sensor picks up the pen-tip movements, as well as pen-up/pen-down switching. The dataset contains 8,235 isolated online handwritten Assamese characters. Preliminary results on the classification of online handwritten Assamese characters using the above dataset are presented in this paper. The use of the support vector machine classifier and the classification accuracy for three different feature vectors are explored in our research.

It is widely accepted that single carrier frequency division multiple access (SC-FDMA) is an excellent candidate for broadband wireless systems. Channel estimation is one of the key challenges in SC-FDMA, since accurate channel estimation can significantly improve equalization at the receiver and, consequently, enhance the communication performance. In this paper, we study the application of compressive sensing for sparse channel estimation in an SC-FDMA system. By skillfully designing pilots and their patterns, and by taking advantage of the sparsity of the channel impulse response, the proposed system realizes channel estimation at a low cost. Simulation results show that it can achieve significantly improved performance in a frequency-selective fading sparse channel with fewer pilots.
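A minimal sketch of recovering a sparse channel impulse response from a few pilot observations is given below, with orthogonal matching pursuit from scikit-learn standing in for the recovery algorithm; the paper's pilot design is not reproduced, and the pilot positions here are simply drawn at random.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(8)
N, L, n_taps, n_pilots = 256, 32, 4, 24       # FFT size, CIR length, sparsity, pilots

h = np.zeros(L)
h[rng.choice(L, n_taps, replace=False)] = rng.normal(size=n_taps)   # sparse channel

F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(L)) / N)  # DFT sub-matrix
pilots = np.sort(rng.choice(N, n_pilots, replace=False))
A = F[pilots]                                  # pilot-tone measurement matrix
y = A @ h                                      # noiseless pilot observations

# Stack real and imaginary parts so a real-valued solver can be used.
A_ri = np.vstack([A.real, A.imag])
y_ri = np.concatenate([y.real, y.imag])

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_taps, fit_intercept=False)
omp.fit(A_ri, y_ri)
h_hat = omp.coef_

print("relative reconstruction error:", np.linalg.norm(h_hat - h) / np.linalg.norm(h))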

Recently, there has been an increasing demand for high data rate services, and several multiuser multiple-input multiple-output (MU-MIMO) techniques have been introduced to meet these demands. Among these techniques, vector perturbation combined with linear precoding techniques, such as zero-forcing and minimum mean-square error, has been proven to be efficient in reducing the transmit power and, hence, performs close to the optimum algorithm. In this paper, we review several fixed-complexity vector perturbation techniques and investigate their performance under both perfect and imperfect channel knowledge at the transmitter. We also investigate the combination of block diagonalization with vector perturbation and outline its merits.

The localization of multi-agents, such as people, animals, or robots, is a requirement for accomplishing several tasks. Especially in the case of multi-robotic applications, localization is the process of determining the positions of robots and targets in an unknown environment. Many sensors, like GPS, lasers, and cameras, are utilized in the localization process. However, these sensors require a large amount of computational resources to process complex algorithms, because the process requires environmental mapping. Currently, combinations of multi-robots or swarm robots and sensor networks, acting as mobile sensor nodes, have become widely available in indoor and outdoor environments. They allow for a type of efficient global localization that demands a relatively low amount of computational resources and is independent of specific environmental features. However, the inherent instability of the wireless signal does not allow it to be directly used for very accurate position estimation, which makes the localization process of a swarm robotics system difficult. Furthermore, these swarm systems are usually highly decentralized, which makes it hard to synthesize and access global maps and can decrease their flexibility. In this paper, a simple pyramid RAM-based Neural Network architecture is proposed to improve the localization process of mobile sensor nodes in indoor environments. Our approach uses the capabilities of learning and generalization to reduce the effect of incorrect information and to increase the accuracy of the agent’s position. The results show that the simple pyramid RAM-based Neural Network approach requires few computational resources, responds quickly to every change in the environmental situation, and enables the mobile sensor nodes to finish several tasks, especially localization processes, in real time.

Most traditional database systems exploit a record-oriented model, where the attributes of a record are placed contiguously on a hard disk to achieve high-performance writes. However, for read-mostly data warehouse systems, the column-oriented database has become a more appropriate model because of its superior read performance. Today, flash memory is largely recognized as the preferred storage medium for high-speed database systems. In this paper, we introduce a column-oriented database model based on flash memory and then propose a new column-aware flash indexing scheme for high-speed column-oriented data warehouse systems. Our index management scheme, which uses an enhanced B+-Tree, achieves superior search performance by indexing an embedded segment and packing the unused space in internal and leaf nodes. Based on the performance results of two test databases, we concluded that the column-aware flash index management outperforms the traditional scheme in terms of mixed operation throughput and response time.

We present a secure and robust image watermarking scheme that uses combined reversible DWT-DCT-SVD transformations to increase integrity, authentication, and confidentiality. The proposed scheme uses two different kinds of watermarking images: a reversible watermark, W1, which is used for verification (ensuring integrity and authentication aspects); and a second one, W2, which is defined by a logo image that provides confidentiality. Our proposed scheme is shown to be robust, while its performances are evaluated with respect to the peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), normalized cross-correlation (NCC), and running time. The robustness of the scheme is also evaluated against different attacks, including a compression attack and Salt & Pepper attack.

This paper aims to present a supervised classification algorithm based on data fusion for the segmentation of textured images. The feature extraction method we used is based on the discrete wavelet transform (DWT). In the segmentation stage, the estimated feature vector of each pixel is sent to the support vector machine (SVM) classifier for initial labeling. To obtain a more accurate segmentation result, two strategies based on information fusion were used. We first integrated decision-level fusion strategies by combining decisions made by the SVM classifier within a sliding window. In the second strategy, the fuzzy set theory and rules based on probability theory were used to combine the scores obtained by the SVM over a sliding window. Finally, the performance of the proposed segmentation algorithm was demonstrated on a variety of synthetic and real images, showing that the proposed data fusion method improves the classification accuracy compared to applying the SVM classifier alone. The results revealed that the overall accuracy of SVM classification of textured images is 88%, while our fusion methodology obtained an accuracy of up to 96%, depending on the size of the database.
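The decision-level fusion strategy can be sketched as follows, assuming scikit-learn and SciPy: per-pixel SVM labels are smoothed by a majority vote within a sliding window. The DWT feature extraction and the fuzzy/probabilistic score-fusion rules are not reproduced; random features stand in for the wavelet features.

import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

def majority_vote(label_map, n_classes, window=5):
    """Replace each pixel label by the most frequent label in its window."""
    votes = np.stack([uniform_filter((label_map == c).astype(float), size=window)
                      for c in range(n_classes)])
    return np.argmax(votes, axis=0)

rng = np.random.default_rng(5)
h, w, n_feat, n_classes = 64, 64, 6, 3
true_labels = (np.arange(h)[:, None] * n_classes // h) * np.ones((1, w), dtype=int)
features = true_labels[..., None] + rng.normal(0, 2.0, size=(h, w, n_feat))

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(features.reshape(-1, n_feat), true_labels.ravel())

initial = clf.predict(features.reshape(-1, n_feat)).reshape(h, w)
fused = majority_vote(initial, n_classes, window=5)

print("accuracy before fusion:", np.mean(initial == true_labels))
print("accuracy after fusion: ", np.mean(fused == true_labels))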

Recognition systems for scanned or printed music scores that have been implemented on personal computers have received attention from numerous scientists and have achieved significant results over many years. A modern trend with music scores being captured and played directly on mobile devices has become more interesting to researchers. The limitation of resources and the effects of illumination, distortion, and inclination on input images are still challenges to these recognition systems. In this paper, we introduce a novel approach for recognizing music scores captured by mobile cameras. To reduce the complexity, as well as the computational time of the system, we grouped all of the symbols extracted from music scores into ten main classes. We then applied each major class to SVM to classify the musical symbols separately. The experimental results showed that our proposed method could be applied to real time applications and that its performance is competitive with other methods.

In linguistics, stemming is the operation of reducing words to their more general form, which is called the ‘stem’. Stemming is an important step in information retrieval systems, natural language processing, and text mining. Information retrieval systems are evaluated by metrics like precision and recall, and the fundamental superiority of one information retrieval system over another is measured by them. Stemmers decrease the size of the indexed file, increase the speed of information retrieval systems, and improve the performance of these systems by boosting precision and recall. There are few Persian stemmers and most of them work based on morphological rules. In this paper we carefully study Persian stemmers, which are classified into three main classes: structural stemmers, lookup table stemmers, and statistical stemmers. We describe the algorithms of each class carefully and present the weaknesses and strengths of each Persian stemmer. We also propose some metrics for comparing and evaluating the stemmers.

IEEE 802.11p is a standard MAC protocol for wireless access in vehicular environments (WAVEs). If a packet collision happens when a safety message is sent out, IEEE 802.11p chooses a random back-off counter value in a fixed-size contention window. However, depending on the random choice of back-off counter value, it is still possible that less important messages are sent out first while more important messages are delayed longer until sent out. In this paper, we present a new scheme for safety message scheduling, called the enhanced message priority mechanism (EMPM). It consists of the following two components: the benefit-value algorithm, which calculates the priority of the messages depending on the speed, deceleration, and message lifetime; and the back-off counter selection algorithm, which chooses the non-uniform back-off counter value in order to reduce the collision probability and to enhance the throughput of the highly beneficial messages. Numerical results show that the EMPM can significantly improve the throughput and delay of messages with high benefits when compared with existing MAC protocols. Consequently, the EMPM can provide better QoS support for the more important and urgent messages.

High Efficiency Video Coding (HEVC) is the most recent video codec standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goal of this newly introduced standard is to cater to high-resolution video in low-bandwidth environments with a higher compression ratio. This paper provides a performance comparison between the HEVC and H.264/AVC video compression standards in terms of objective quality, delay, and complexity in the broadcasting environment. The experimental investigation was carried out using six test sequences in the random access configuration of the HEVC test model (HM), the HEVC reference software. This was also carried out in similar configuration settings of the Joint Scalable Video Module (JSVM), the official scalable H.264/AVC reference implementation, running in a single layer mode. According to the results obtained, the HM achieves more than double the compression ratio compared to that of JSVM and delivers the same video quality at half the bitrate. Yet, the HM encodes two times slower (at most) than JSVM. Hence, it can be concluded that the application scenarios of HM and JSVM should be judiciously selected considering the availability of system resources. For instance, HM is not suitable for low delay applications, but it can be used effectively in low bandwidth environments.

In this paper, a novel robust medical image watermarking scheme is proposed. In traditional methods, the added watermark may alter the host medical image in an irreversible manner and may mask subtle details. Consequently, we propose a method for medical image copyright protection that may remedy this problem by embedding the watermark without modifying the original host image. The proposed method is based on the visual cryptography concept and the dominant blocks of wavelet coefficients. The logic behind using the dominant blocks map is that local features, such as contours or edges, are unique to each image. The experimental results show that the proposed method can withstand several image processing attacks such as cropping, filtering, compression, etc.

Wireless Video Sensor Networks (WVSNs) have become a leading solution in many important applications, such as disaster recovery. By using WVSNs in disaster scenarios, the main goal is to achieve a successful immediate response, including search, location, and rescue operations. Achieving such an objective in the presence of obstacles and the risk of sensor damage caused by disasters is a challenging task. In this paper, we propose a fault tolerance model of WVSNs for efficient post-disaster management in order to assist rescue and preparedness operations. To get an overview of the monitored area, we used video sensors with a rotation capability that enables them to switch to the best direction for getting better multimedia coverage of the disaster area, while minimizing the effect of occlusions. By constructing different cover sets based on the field of view redundancy, we can provide robust fault tolerance to the network. We demonstrate the benefits of our proposal through simulation in terms of reliability and high coverage.

Text in images is one of the most important cues for understanding a scene. In this paper, we propose a novel approach based on interest points to localize text in natural scene images. The main ideas of this approach are as follows: first we used interest point detection techniques, which extract the corner points of characters and center points of edge connected components, to select candidate regions. Second, these candidate regions were verified by using tensor voting, which is capable of extracting perceptual structures from noisy data. Finally, area, orientation, and aspect ratio were used to filter out non-text regions. The proposed method was tested on the ICDAR 2003 dataset and images of wine labels. The experiment results show the validity of this approach.

Event detection based on using features from a static grid can give poor results from the viewpoint of two main aspects: the position of the camera and the position of the event that is occurring in the scene. The former causes problems when training and test events are at different distances from the camera to the actual position of the event. The latter can be a source of problems when training events take place in any position in the scene, and the test events take place in a position different from the training events. Both issues degrade the accuracy of the static grid method. Therefore, this work proposes a method called a dynamic grid for event detection, which can tackle both aspects of the problem. In our experiment, we used the dynamic grid method to detect four types of event patterns: implosion, explosion, two-way, and one-way using a Multimedia Analysis and Discovery (MAD) pedestrian dataset. The experimental results show that the proposed method can detect the four types of event patterns with high accuracy. Additionally, the performance of the proposed method is better than the static grid method and the proposed method achieves higher accuracy than the previous method regarding the aforementioned aspects.

This paper presents the applications of spatial interpolation and assimilation methods for satellite and ground meteorological data, including temperature, relative humidity, and precipitation, in regions of Vietnam. In this work, Universal Kriging is used for spatially interpolating the ground data, and its interpolated results are assimilated with the corresponding satellite data to obtain better gridded data. The input meteorological data was collected from 98 ground weather stations located all over Vietnam, whereas the satellite data consists of the MODIS Atmospheric Profiles product (MOD07), the ASTER Global Digital Elevation Map (ASTER DEM), and the Tropical Rainfall Measuring Mission (TRMM) data over six years. The outputs are gridded fields of temperature, relative humidity, and precipitation. The empirical results were evaluated using the root mean square error (RMSE) and the mean percent error (MPE), which illustrate that Universal Kriging interpolation obtains higher accuracy than other forms of Kriging, whereas the assimilation for precipitation gradually reduces the RMSE and significantly reduces the MPE. It also reveals that the accuracy of temperature and humidity is not significantly improved by assimilation because of poor MODIS retrievals due to cloud contamination.
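The two evaluation metrics can be written compactly as below; the exact MPE definition is an assumption (the mean of the percent errors of the interpolated values relative to the station observations).

import numpy as np

def rmse(observed, predicted):
    """Root mean square error between observations and predictions."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

def mpe(observed, predicted):
    """Mean percent error of the predictions relative to the observations."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.mean((predicted - observed) / observed) * 100.0)

if __name__ == "__main__":
    obs = [26.1, 24.8, 27.3, 25.0]       # e.g., station temperatures (degrees C)
    pred = [25.7, 25.2, 27.9, 24.6]      # interpolated/assimilated values
    print(f"RMSE = {rmse(obs, pred):.3f}, MPE = {mpe(obs, pred):.2f}%")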

Edit distance metrics are widely used for many applications, such as string comparison and spelling error correction. Hamming distance is a metric for two equal-length strings, and Damerau-Levenshtein distance is a well-known metric for making spelling corrections through string-to-string comparison. Previous distance metrics seem to be appropriate for alphabetic languages like English and other European languages. However, the conventional edit distance criterion is not the best method for agglutinative languages like Korean. The reason is that two or more letter units make up a Korean character, which is called a syllable. This mechanism of syllable-based word construction in the Korean language causes an edit distance calculation to be inefficient. As such, we have explored a new edit distance method by using consonant normalization and a normalization factor.
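The motivation can be illustrated with a short sketch: a Korean syllable packs two or three jamo (letter units) into one character, so a syllable-level edit distance treats a one-jamo difference and a completely different syllable the same way, whereas a jamo-level distance separates them. The decomposition below uses the standard Unicode Hangul syllable arithmetic; the paper's consonant normalization factor itself is not reproduced.

HANGUL_BASE, N_MEDIAL, N_FINAL = 0xAC00, 21, 28

def to_jamo(text):
    """Decompose precomposed Hangul syllables into initial/medial/final jamo indices."""
    out = []
    for ch in text:
        code = ord(ch) - HANGUL_BASE
        if 0 <= code < 11172:                       # a precomposed Hangul syllable
            initial = code // (N_MEDIAL * N_FINAL)
            medial = (code % (N_MEDIAL * N_FINAL)) // N_FINAL
            final = code % N_FINAL
            out += [("I", initial), ("M", medial)] + ([("F", final)] if final else [])
        else:
            out.append(("C", ch))                   # non-Hangul characters kept as-is
    return out

def edit_distance(a, b):
    """Plain Levenshtein distance over arbitrary sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

if __name__ == "__main__":
    pairs = [("안녕", "안냥"),    # second syllables differ in a single vowel jamo
             ("안녕", "안뮤")]    # second syllables are completely different
    for a, b in pairs:
        print(a, "vs", b,
              "| syllable-level:", edit_distance(list(a), list(b)),
              "| jamo-level:", edit_distance(to_jamo(a), to_jamo(b)))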

Generally, the wireless network gives priority to handover calls instead of new calls to maintain its quality of service (QoS). Because of this QoS provisioning, a call admission control (CAC) scheme is essential for the suitable management of the limited radio resources of wireless networks to uphold different factors, such as the new call blocking probability, handover call dropping probability, channel utilization, etc. Designing an optimal CAC scheme is still a challenging task due to the number of factors to be considered, such as the new call blocking probability, handover call dropping probability, channel utilization, traffic rate, etc. Among existing CAC schemes, such as fixed guard band (FGB), fractional guard channel (FGC), limited fractional channel (LFC), and uniform fractional channel (UFC), the LFC scheme is optimal considering the new call blocking and handover call dropping probabilities. However, this scheme does not consider channel utilization. In this paper, a CAC scheme termed uniform fractional band (UFB) is proposed to overcome the limitations of the existing schemes. This scheme is oriented by priority and non-priority guard channels with a set of fractional channels instead of fractionizing the total channels like the FGC and UFC schemes. These fractional channels in the UFB scheme accept new calls with a predefined uniform acceptance factor and assist the network in utilizing more channels. The mathematical models, operational benefits, and limitations of the existing CAC schemes are also discussed. Subsequently, we prepared a comparative study between the existing and proposed schemes in terms of the aforementioned QoS-related factors. The numerical results we have obtained so far show that the proposed UFB scheme is an optimal CAC scheme in terms of QoS and resource utilization as compared to the existing schemes.

The region of interest (ROI) is the most informative part of a medical image and has mostly been used as the major part of a watermark. The selection of ROIs of various shapes has been reported in region-based watermarking techniques. In region-based watermarking schemes, the image region of non-interest (RONI) is the second most important part of the image and is mostly used for watermark encapsulation. In online healthcare systems, wrongly selecting the ROI, so that some important portions of the image are not part of it, can create problems at the destination. This paper discusses making the complete original medical image available at the destination by using the whole image as a watermark for authentication, tamper localization, and lossless recovery (WITALLOR). The WITALLOR watermarking scheme ensures the security of the complete image without ROI selection at the source, as compared to the other region-based watermarking techniques. The complete image is compressed using the Lempel-Ziv-Welch (LZW) lossless compression technique to get the watermark in a reduced number of bits. The number of bits is reduced to one that can be completely encapsulated into the image. The watermark is randomly encapsulated in the least significant bits (LSBs) of the image without regard to the ROI and RONI, keeping the perceptual degradation of the image negligible. After communication, the watermark is retrieved, decompressed, and used for authentication of the whole image, tamper detection, localization, and lossless recovery. The WITALLOR scheme is capable of detecting and recovering from any number of tampered regions at any part of the image. The complete authentic image gives the opportunity to conduct an image-based analysis of the medical problem without restriction to a fixed ROI.
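A minimal sketch of the embed/extract steps is given below, with zlib standing in for the LZW compression and a shared PRNG seed determining the pseudorandom LSB positions; the tamper localization and lossless recovery logic is not reproduced.

import zlib
import numpy as np

def embed(image, payload, seed=42):
    """Scatter the bits of a compressed payload over pseudorandom pixel LSBs."""
    bits = np.unpackbits(np.frombuffer(zlib.compress(payload), dtype=np.uint8))
    flat = image.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    pos = np.random.default_rng(seed).permutation(flat.size)[:bits.size]
    flat[pos] = (flat[pos] & 0xFE) | bits          # overwrite least significant bits
    return flat.reshape(image.shape), bits.size

def extract(image, n_bits, seed=42):
    """Recover and decompress the payload using the same seed."""
    flat = image.flatten()
    pos = np.random.default_rng(seed).permutation(flat.size)[:n_bits]
    data = np.packbits(flat[pos] & 1).tobytes()
    return zlib.decompress(data)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    host = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
    payload = b"whole-image watermark stand-in " * 8
    marked, n_bits = embed(host, payload)
    print("recovered OK:", extract(marked, n_bits) == payload)
    print("changed pixels:", int(np.sum(marked != host)))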

A mobile terminal can expect a number of handoffs within its call duration. During a mobile call, when a mobile node moves from one cell to another, it should connect to another access point within its range. In case its own network no longer provides support, it must change over to another base station. When moving to another network, quality of service parameters need to be considered. In our study, we have used the Markov decision process approach for a seamless handoff, as it gives the optimum results for selecting a network when compared to other multiple attribute decision making processes. We used the network cost function for selecting the network for handoff, and the connection reward function, which is based on the values of the quality of service parameters. We also examined the constant bit rate and the transmission control protocol packet delivery ratio. We used the policy iteration algorithm for determining the optimal policy. Our enhanced handoff algorithm outperforms other previous multiple attribute decision making methods.
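Policy iteration itself can be sketched on a tiny, hypothetical handoff MDP as follows; the states, transition probabilities, and rewards below are illustrative assumptions and not the paper's model.

import numpy as np

states = ["WLAN", "Cellular"]
actions = ["stay", "switch"]
gamma = 0.9

# P[a][s, s']: probability of ending up in network s' after action a in state s.
P = {
    "stay":   np.array([[0.90, 0.10],     # WLAN coverage may still be lost
                        [0.05, 0.95]]),
    "switch": np.array([[0.10, 0.90],
                        [0.90, 0.10]]),
}
# R[a][s]: illustrative QoS connection reward minus a handoff cost of 2 for switching.
R = {"stay": np.array([5.0, 3.0]), "switch": np.array([3.0 - 2.0, 5.0 - 2.0])}

def policy_iteration(P, R, gamma, n_states):
    policy = np.zeros(n_states, dtype=int)            # start with "stay" everywhere
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = np.array([P[actions[policy[s]]][s] for s in range(n_states)])
        R_pi = np.array([R[actions[policy[s]]][s] for s in range(n_states)])
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily with respect to V.
        Q = np.array([[R[a][s] + gamma * P[a][s] @ V for a in actions]
                      for s in range(n_states)])
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

policy, V = policy_iteration(P, R, gamma, len(states))
for s, name in enumerate(states):
    print(f"in {name}: {actions[policy[s]]}, value = {V[s]:.2f}")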

In Jøsang’s subjective logic, the fusion operator is not able to fuse three or more opinions at a time, and it cannot consider the effect of time factors on fusion. Also, the base rate (a) and the non-informative prior weight (C) cannot change dynamically. In this paper, we propose an Improved Subjective Logic Model with Evidence Driven (ISLM-ED) that expands and enriches the subjective logic theory. It includes a multi-agent unified fusion operator and a dynamic function for the base rate (a) and the non-informative prior weight (C) driven by changes in the evidence. The multi-agent unified fusion operator not only satisfies the commutative and associative laws but is also consistent with researchers' cognitive rules. A strict mathematical proof is given in this paper. Finally, through simulation experiments, the results show that the ISLM-ED is more reasonable and effective and that it can be better adapted to a changing environment.

Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays. However, real-world applications of face recognition are still challenging. In this paper, we propose using the combination of the Affine Scale Invariant Feature Transform (SIFT) and Probabilistic Similarity for face recognition under a large viewpoint change. Affine SIFT is an extension of the SIFT algorithm that detects affine invariant local descriptors. Affine SIFT generates a series of different viewpoints using affine transformations. In this way, it allows for a viewpoint difference between the gallery face and the probe face. However, the human face is not planar, as it contains significant 3D depth. Affine SIFT does not work well for significant changes in pose. To complement this, we combined it with probabilistic similarity, which obtains the log likelihood between the probe and gallery faces based on the sum of squared differences (SSD) distribution in an offline learning process. Our experimental results show that our framework achieves noticeably better recognition accuracy than the other algorithms compared on the FERET database.

Recent technological advances provide the opportunity to use large amounts of multimedia data from a multitude of sensors with different modalities (e.g., video, text) for the detection and characterization of criminal activity. Their integration can compensate for sensor and modality deficiencies by using data from other available sensors and modalities. However, building such an integrated system at the scale of neighborhood and cities is challenging due to the large amount of data to be considered and the need to ensure a short response time to potential criminal activity. In this paper, we present a system that enables multi-modal data collection at scale and automates the detection of events of interest for the surveillance and reconnaissance of criminal activity. The proposed system showcases novel analytical tools that fuse multimedia data streams to automatically detect and identify specific criminal events and activities. More specifically, the system detects and analyzes series of incidents (an incident is an occurrence or artifact relevant to a criminal activity extracted from a single media stream) in the spatiotemporal domain to extract events (actual instances of criminal events) while cross-referencing multimodal media streams and incidents in time and space to provide a comprehensive view to a human operator while avoiding information overload. We present several case studies that demonstrate how the proposed system can provide law enforcement personnel with forensic and real time tools to identify and track potential criminal activity.

Recently, novel application areas in digital geometry processing, such as simulation, dynamics, and medical surgery simulations, have necessitated the representation of not only the surface data but also the interior volume data of a given 3D object. In this paper, we present an efficient framework for the shape approximation of spherical solid objects based on trivariate B-splines. To do this, we first constructed a smooth correspondence between a given object and a unit solid cube by computing their harmonic mapping. We set the unit solid cube as a rectilinear parametric domain for trivariate B-splines and utilized the mapping to approximate the given object with B-splines in a coarse-to-fine manner. Specifically, our framework provides user controllability of the shape approximations, based on the control of the boundary condition of the harmonic parameterization and the level of B-spline fitting. Experimental results showed that our method is efficient enough to compute trivariate B-splines for several models, each of whose topology is identical to a solid sphere.

Resource sharing is a major advantage of distributed computing. However, a distributed computing system may have some physical or virtual resource that may be accessible by only a single process at a time. The mutual exclusion issue is to ensure that no more than one process at a time is allowed to access some shared resource. This article proposes a token-based mutual exclusion algorithm for clustered mobile ad hoc networks (MANETs). The mechanism adopted to handle token passing at the inter-cluster level is different from that at the intra-cluster level. This makes our algorithm message efficient and thus suitable for MANETs. In the interest of efficiency, we implemented a centralized token passing scheme at the intra-cluster level. Centralized schemes are inherently failure prone. Thus, we have presented an intra-cluster token passing scheme that is able to tolerate a failure. In order to enhance reliability, we applied a distributed token circulation scheme at the inter-cluster level. More importantly, the message complexity of the proposed algorithm is independent of N, which is the total number of nodes in the system. Also, under a heavy load, it turns out to be inversely proportional to n, which is the (average) number of nodes per cluster. We substantiated our claim with a correctness proof, complexity analysis, and simulation results. In the end, we present a simple approach to make our protocol fault tolerant.

Automatic segmentation of foreground text from the background in degraded document images is essential for the smooth reading of the document content and for machine recognition tasks. In this paper, we present a novel approach to the binarization of degraded document images. The proposed method uses a new local contrast feature extracted based on the stroke width of the text. First, a pre-processing method is carried out for noise removal. Text boundary detection is then performed on the image constructed from the contrast feature. Local estimation then follows to extract the text from the background. Finally, a refinement procedure is applied to the binarized image as a post-processing step to improve the quality of the final results. Experiments and comparisons on extracting text from degraded handwritten and machine-printed document images against some well-known binarization algorithms demonstrate the effectiveness of the proposed method.

The free distance of (n, k, l) convolutional codes has some connection with the memory length, which depends not only on l but also on k. To efficiently obtain a large memory length, we have constructed a new class of (2k, k, l) convolutional codes from (2k, k) block codes and (2, 1, l) convolutional codes, and its encoder and generating function are also given in this paper. With the help of some matrix modules, we designed a single-structure Viterbi decoder with a parallel capability, obtained a unified and efficient decoding model for (2k, k, l) convolutional codes, and then give a detailed description of the decoding process. By observing the survivor path memory in a matrix viewer and testing the role of the max module, we ran a simulation with (2k, k, l) convolutional codes. The results show that many of them are better than conventional (2, 1, l) convolutional codes.

Widely spreading smart devices have become an important information sharing channel in everyday life. In particular, social networking services (SNS) are the hub for content creation and sharing. Users post their contents on SNS servers and receive contents of interest. Contents are delivered in either pull or push mode. Regarding delivery, cost and wait time are two important factors to be minimized, but they are in a trade-off relationship. The Push-N-scheme (PNS) and the timeout-based push scheme (TPS) have been proposed for content delivery. PNS has an advantage in cost over TPS, whereas TPS has an edge in terms of the wait time over PNS. We propose a hybrid push scheme combining PNS and TPS, called the push-N-scheme with timeout (PNT), to balance the cost and the wait time. We evaluate PNT through simulations, with the results showing that PNT is effective in balancing PNS and TPS.
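The hybrid delivery logic, as described, can be sketched as follows: contents are buffered and pushed either when N items have accumulated (the PNS condition) or when the timeout expires (the TPS condition), whichever comes first; the class and parameter names are illustrative.

import time

class PushNWithTimeout:
    def __init__(self, n=3, timeout=5.0, send=print):
        self.n, self.timeout, self.send = n, timeout, send
        self.buffer, self.first_arrival = [], None

    def post(self, content, now=None):
        """Buffer new content and push if either condition is already met."""
        now = time.monotonic() if now is None else now
        if not self.buffer:
            self.first_arrival = now
        self.buffer.append(content)
        self._maybe_push(now)

    def tick(self, now=None):
        """Called periodically so that the timeout condition can fire."""
        self._maybe_push(time.monotonic() if now is None else now)

    def _maybe_push(self, now):
        if not self.buffer:
            return
        if len(self.buffer) >= self.n or now - self.first_arrival >= self.timeout:
            self.send(list(self.buffer))          # one push for the whole batch
            self.buffer.clear()

if __name__ == "__main__":
    pnt = PushNWithTimeout(n=3, timeout=5.0)
    pnt.post("post-1", now=0.0)
    pnt.post("post-2", now=1.0)
    pnt.tick(now=6.0)       # timeout fires before the third post arrives
    pnt.post("a", now=7.0); pnt.post("b", now=7.5); pnt.post("c", now=8.0)   # N fires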

In recent years, there has been growing interest in secure pairing, which refers to the establishment of a secure communication channel between two mobile devices. There are a number of descriptions of the various types of out-of-band (OOB) channels, through which authentication data can be transferred under a user’s control and involvement. However, none have become widely used due to their lack of adaptability to the variety of mobile devices. In this paper, we introduce a new OOB channel that uses accelerometer-based gesture input. The gesture-based OOB channel is suitable for all kinds of mobile devices, including input/output-constrained devices, as the accelerometer is small and incurs only a small computational overhead. We implemented and evaluated the channel using an Apple iPhone handset. The results demonstrate that the channel is viable, with completion times and error rates that are comparable with other OOB channels.

Over the last couple of years, the traditional VANET (Vehicular Ad Hoc NETwork) has evolved into VANET-based clouds. From the VANET standpoint, applications became richer by virtue of the boom in automotive telematics and infotainment technologies. Nevertheless, the research community and industry are concerned about the under-utilization of the rich computation, communication, and storage resources in middle and high-end vehicles. This phenomenon became the driving force for the birth of VANET-based clouds. In this paper, we envision a novel application layer of VANET-based clouds based on the cooperation of the moving cars on the road, called CaaS (Cooperation as a Service). CaaS is divided into TIaaS (Traffic Information as a Service), WaaS (Warning as a Service), and IfaaS (Infotainment as a Service); note, however, that this work focuses only on TIaaS and WaaS. TIaaS provides vehicular nodes, more precisely subscribers, with fine-grained traffic information constructed by the CDM (Cloud Decision Module) as a result of the cooperation of the vehicles on the roads in the form of mobility vectors. WaaS, on the other hand, provides subscribers with warning messages in case of hazardous situations on the road. Communication between the cloud infrastructure and the vehicles is done through GTs (Gateway Terminals), which are physically realized through RSUs (Road-Side Units) and vehicles with 4G Internet access. These GTs forward the coarse-grained cooperation from vehicles to the cloud, and fine-grained traffic information and warnings from the cloud to vehicles (subscribers), in a secure, privacy-aware fashion. In our proposed scheme, privacy is conditionally preserved: the location and identity of the cooperators are protected by leveraging modified location-based encryption and, in case of any dispute, the node is subject to revocation. To the best of our knowledge, our proposed scheme is the first effort to offload the extended traffic view construction and warning message dissemination functions to the cloud.

In recent literature on traffic scheduling, the combination of the two-dimensional discrete-time Markov chain (DTMC) and the Markov modulated Poisson process (MMPP) is used to analyze the capacity of VoIP traffic in a cognitive radio system. The performance of the cognitive radio system depends solely on the accuracy of the spectrum sensing techniques, the minimization of false alarms, and the scheduling of traffic channels. In this paper, we emphasize only the scheduling of traffic channels (i.e., traffic handling techniques for the primary user [PU] and the secondary user [SU]). We consider the following three traffic models: the cross-layer analytical model, M/G/1(m) traffic, and the IEEE 802.16e/m scheduling approach, to evaluate the performance of the VoIP services of the cognitive radio system in terms of blocking probability and throughput.

Information always comes with security and risk problems. There is a saying that, “The tall tree catches much wind,” and the risks from cloud services will certainly be more varied and more severe. Nowadays, handling these risks is no longer just a technology problem, and a good deal of literature that focuses on risk or security management and frameworks in information systems has already been published. This paper analyzes the causal risk factors in cloud environments through critical success factors, from a business perspective. We then integrate these critical success factors into a business model for information security by mapping them onto 10 principles related to cloud risks. Thus, we are able to identify which aspects should be given more consideration in the actual transactions of cloud services, and to construct a business-level, general risk control model for cloud computing.

Recently, many large organizations have multiple data sources (MDS) distributed over different branches of an interstate company. Local pattern analysis has become an effective strategy for MDS mining in national and international organizations. It consists of mining different datasets to obtain frequent patterns, which are forwarded to a centralized place for global pattern analysis. Various synthesizing models [2,3,4,5,6,7,8,26] have been proposed to build global patterns from the forwarded patterns. It is desirable that the rules synthesized from such forwarded patterns closely match the mono-mining results (i.e., the results that would be obtained if all of the databases were put together and mined as one). When a pattern is present at a site but fails to satisfy the minimum support threshold, it is not allowed to take part in the pattern synthesizing process. This process can therefore lose some interesting patterns that could help the decision maker make the right decision. In such situations, we propose applying a probabilistic model in the synthesizing process. An adequate choice of probabilistic model can improve the quality of the patterns that are discovered. In this paper, we perform a comprehensive study of various probabilistic models that can be applied in the synthesizing process, and we choose and improve one of them to ameliorate the synthesizing results. Finally, experiments on a public database are presented to demonstrate the efficiency of our proposed synthesizing method.

This paper focuses on improving the performance of orthogonal frequency division multiplexing (OFDM) in Rayleigh fading environments. The proposed technique uses a previously published method, based on ordered sub-carrier selection, that has been shown to improve OFDM performance in independent fading. We then propose a simple non-iterative method, also based on ordered sub-carrier selection, for finding the optimal bit-loading allocation. We compared both of these algorithms to an optimal bit-loading solution to determine their effectiveness in a correlated fading environment, where the correlated fading was simulated using the JTC channel models. Our intent was not to create an optimal solution, but to create a low complexity solution that can be used in a wireless environment in which the channel conditions change rapidly and that requires a simple algorithm for fast bit loading.
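
The core of ordered sub-carrier selection can be illustrated with a small sketch: rank sub-carriers by channel gain and load bits onto the strongest ones first. The greedy bit-loading rule below is a placeholder assumption, not the paper's optimal allocation.

```python
import numpy as np

def ordered_subcarrier_selection(channel_gains, total_bits, max_bits_per_carrier=6):
    """Illustrative sketch: rank sub-carriers by |H|^2 and assign bits to the
    strongest ones first. The greedy rule here is a simple placeholder."""
    gains = np.asarray(channel_gains, dtype=float)
    order = np.argsort(gains)[::-1]              # strongest sub-carriers first
    allocation = np.zeros(gains.size, dtype=int)
    remaining = total_bits
    for idx in order:
        if remaining <= 0:
            break
        bits = min(max_bits_per_carrier, remaining)
        allocation[idx] = bits
        remaining -= bits
    return allocation

# Example: Rayleigh-faded gains for 16 sub-carriers, 40 bits per OFDM symbol
h = (np.random.randn(16) + 1j * np.random.randn(16)) / np.sqrt(2)
print(ordered_subcarrier_selection(np.abs(h) ** 2, total_bits=40))
```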

The realization of Wireless Multimedia Sensor Networks (WMSNs) has been fostered by the availability of low cost, low power CMOS devices. However, the transmission of bulk video data requires adequate bandwidth, which cannot be guaranteed by single path communication on an intrinsically low resourced sensor network. Moreover, distortion or artifacts in the video data and adherence to a delay threshold add to the challenge. In this paper, we propose a two-stage Quality of Service (QoS) guaranteeing scheme called Prioritized Multipath WMSN (PMW) for transmitting H.264 encoded video. Multipath selection based on QoS metrics is done in the first stage, while the second stage further prioritizes the paths for sending H.264 encoded video frames on the best available path. PMW uses two composite metrics comprising hop count, path energy, BER, and end-to-end delay. A color-coded network maintenance and failure recovery scheme has also been proposed, using (a) a smart greedy mode, (b) a walking back mode, and (c) path switchover. Moreover, feedback-controlled adaptive video encoding can smartly tune the encoding parameters based on the perceived video quality. Computer simulation using OPNET validates that the proposed scheme significantly outperforms conventional approaches in terms of perceived video quality and delay.
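
A minimal sketch of how a composite QoS metric over hop count, path energy, BER, and end-to-end delay could rank candidate paths is given below; the weights, normalization, and field names are assumptions, not the exact metrics defined in PMW.

```python
def composite_path_score(path, weights=(0.25, 0.25, 0.25, 0.25)):
    """Illustrative scoring of a candidate path from QoS metrics already
    normalized to [0, 1]; higher is better. Weights are assumptions."""
    w_hop, w_energy, w_ber, w_delay = weights
    return (w_hop    * (1.0 - path["hop_count_norm"])   # fewer hops is better
          + w_energy * path["residual_energy_norm"]     # more energy is better
          + w_ber    * (1.0 - path["ber_norm"])         # lower BER is better
          + w_delay  * (1.0 - path["delay_norm"]))      # lower delay is better

def prioritize_paths(paths):
    """Return candidate paths sorted best-first; the top path would carry the
    most important H.264 frames (e.g., I-frames)."""
    return sorted(paths, key=composite_path_score, reverse=True)
```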

Load balancing is a major benefit of any distributed system, and task duplication and migration methodologies are employed to facilitate it. As this paper deals with dependent tasks modeled as a directed acyclic graph (DAG), we use duplication. Task duplication reduces the overall schedule length of the DAG along with balancing the load. This paper proposes a new task duplication algorithm applied at the time tasks are assigned to the various processors. To evaluate the performance of the proposed algorithm, a simulation was carried out in the NetBeans IDE, in which a mesh topology distributed system was simulated. For task duplication, the overall schedule length of the DAG is the main parameter that determines the performance of a duplication algorithm. After obtaining the results, we compared our performance with arbitrary task assignment and with the CAWF and HEFT-TD algorithms. Additionally, we compared the complexity of the proposed algorithm with Duplication Based Bottom Up scheduling (DBUS) and Heterogeneous Earliest Finish Time with Task Duplication (HEFT-TD).

In this paper, we discuss optical encryption and decryption over wireless communication channels. In wireless communication systems, the wireless channel introduces noise and fading effects to the transmitted information. Optical encryption techniques such as double-random-phase encryption (DRPE) are used for encrypting the transmitted data. When the encrypted data is transmitted, the information may be lost or distorted by factors such as channel noise and propagation fading. Thus, using digital modulation and maximum likelihood (ML) detection, the noise and fading effects are mitigated, and the encrypted data can be reliably estimated at the receiver. To the best of our knowledge, this is the first report that considers the wireless channel characteristics of an optical encryption technique.
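
For reference, a minimal double-random-phase encryption sketch in NumPy is shown below; it covers only the optical encryption and decryption step and omits the wireless channel, modulation, and ML detection discussed in the paper.

```python
import numpy as np

def drpe_encrypt(img, phase1, phase2):
    """Minimal DRPE sketch: multiply by a random phase mask in the spatial
    domain and another in the Fourier domain."""
    field = img * np.exp(2j * np.pi * phase1)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(2j * np.pi * phase2))

def drpe_decrypt(cipher, phase1, phase2):
    """Inverse of the encoding above using the conjugate phase masks."""
    field = np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2))
    return np.abs(field * np.exp(-2j * np.pi * phase1))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                      # stand-in for an input image
p1, p2 = rng.random((64, 64)), rng.random((64, 64))
cipher = drpe_encrypt(img, p1, p2)
assert np.allclose(drpe_decrypt(cipher, p1, p2), img)
```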

New IIR digital differintegrators (differentiator and integrator) with very small absolute relative errors are presented in this paper. The digital integrator is designed by interpolating some of the existing integrators, with the optimum value of the interpolation ratio obtained through linear programming optimization. Subsequently, by appropriately modifying the transfer function of the proposed integrator, a new digital differentiator is obtained. Simulation results demonstrate that the proposed differintegrators are more accurate approximations of the ideal ones than the existing differintegrators. Furthermore, the proposed differentiator has been tested in an image processing application. Edges characterize boundaries, and edge detection is therefore a problem of fundamental importance in image processing. For comparison purposes, the Prewitt, Sobel, Roberts, Canny, Laplacian of Gaussian (LOG), and zero-cross operators were used and their results are displayed. The results of edge detection by some of the existing differentiators are also provided. The simulation results show the superiority of the proposed approach over existing ones.

Cloud computing has increasingly been drawing attention these days, and every major IT company is hurrying to capture a share of what promises to be a huge market in the future. At the same time, information is always associated with security and risk problems. Nowadays, the handling of these risks is no longer just a technology problem, and a good deal of literature focuses on risk or security management and frameworks in information systems. In this paper, we examine the specific business meaning of the BMIS model and try to apply and leverage this model for cloud risk. Through a previous study, we select and determine the causal risk factors in cloud services, which are also known as CSFs (Critical Success Factors) in information management. Subsequently, we distribute all selected CSFs into the BMIS model by mapping them to ten principles of cloud risk. Finally, by using leverage points, we try to leverage the model factors with the aim of producing a resource-optimized, dynamic, general risk control business model for cloud service providers.

A new morphed steganographic algorithm is proposed in this paper. Image security is a challenging problem these days. Steganography is a method of hiding secret data in cover media. Least Significant Bit (LSB) substitution is a standard steganographic method, but it has some limitations: limited capacity to hide data, poor stego image quality, and poor imperceptibility. The proposed algorithm focuses on these limitations, using the morphing concept for image steganography to overcome them. PSNR and standard deviation are considered as measures to improve stego image quality and to select the morphed image, respectively. Stego keys are generated during the morphed steganographic embedding and extracting process and are used to embed and extract the secret image. Experimental results, based on hiding capacity and PSNR, are presented in this paper. Our research contributes towards creating an improved steganographic method using image morphing. The experimental results indicate that the proposed algorithm achieves an increase in hiding capacity, stego image quality, and imperceptibility, and they were compared with state-of-the-art steganographic methods.
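
For context, the sketch below shows the baseline LSB embedding that the proposed morphing-based method improves upon; it is not the proposed algorithm itself.

```python
import numpy as np

def lsb_embed(cover, secret_bits):
    """Baseline LSB embedding for reference: write one secret bit into the
    least significant bit of each cover pixel."""
    flat = cover.flatten().astype(np.uint8)
    if len(secret_bits) > flat.size:
        raise ValueError("secret too large for this cover image")
    flat[:len(secret_bits)] = (flat[:len(secret_bits)] & 0xFE) | secret_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Recover the first n_bits embedded by lsb_embed."""
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
bits = np.random.randint(0, 2, 16).astype(np.uint8)
stego = lsb_embed(cover, bits)
assert np.array_equal(lsb_extract(stego, 16), bits)
```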

The advancement of the knowledge society has enabled the social network community (SNC) to be perceived as another space for learning, where individuals produce, share, and apply content in self-directed ways. The content generated within social networks provides information of value for the participants in real time. Thus, this study proposes the social network community activity-based content model (SoACo Model), which takes SNC-based activities and embodies them within learning objects. The SoACo Model consists of content objects, aggregation levels, and information models. Content objects are composed of relationship-building elements, including real-time, changeable activities such as making friends, and participation-activity elements such as “Liking” specific content. Aggregation levels apply one of three granularity levels, considering the reusability of elements: activity assets, real-time changeable learning objects, and content. The SoACo Model is meaningful because it transforms SNC-based activities into learning objects for learning and teaching activities, and it can be applied to learning management systems since it organizes activities -- such as tweets from Twitter -- depending on the teacher’s intention.

Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. Many skin color detection algorithms extract human skin color regions using a thresholding technique, since it is simple and fast to compute. The efficiency of each color space depends on its robustness to changes in lighting and its ability to distinguish skin color pixels in images that have a complex background. For more accurate skin detection, we propose a new threshold based on the RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. It then separates the Y channel, which represents the intensity of the color model, from the U and V channels to eliminate the effects of luminance. After that, the threshold values are selected by testing the boundary of skin colors with the help of the color histogram. Finally, the threshold is applied to the input image to extract skin parts. The detected skin regions were quantitatively compared to the actual skin parts in the input images to measure the accuracy, and the results of our threshold were compared to the results of others' thresholds to prove the efficiency of our approach. The results of the experiment show that the proposed threshold is more robust to complex backgrounds and lighting conditions than the others.
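
A minimal sketch of an RGB-plus-YUV threshold skin detector is shown below; the conversion follows the standard YUV formulas, but the specific threshold values are placeholders, since the boundaries selected in the paper are not given in the abstract.

```python
import numpy as np

def detect_skin(rgb):
    """Illustrative RGB+YUV threshold skin detector. The threshold values
    below are placeholders, not the ones selected in the paper."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)          # chrominance only; Y is dropped to reduce
    v = 0.877 * (r - y)          # sensitivity to lighting
    rgb_rule = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    yuv_rule = (u > -40) & (u < 10) & (v > 5) & (v < 80)
    return rgb_rule & yuv_rule   # boolean mask of candidate skin pixels
```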

Various time synchronization protocols for Wireless Sensor Networks (WSNs) have been developed, since time synchronization is important in many time-critical WSN applications. Aside from synchronization accuracy, the energy constraint should also be considered seriously for time synchronization protocols in WSNs, which typically operate in limited-power environments. This paper analyzes prominent WSN time synchronization protocols in terms of power consumption and tests them by simulation. In the analysis and simulation tests, each protocol shows different performance in terms of power consumption. This result is helpful in choosing or developing an appropriate time synchronization protocol that meets the requirements of synchronization accuracy and power consumption (or network lifetime) for a specific WSN application.

When analyzing default predictions of real estate companies, the number of non-defaulted cases always greatly exceeds the defaulted ones, which creates the two-class imbalance problem and lowers the ability of prediction models to distinguish the default samples. In order to avoid this sample selection bias and to improve the prediction model, this paper applies a minority sample generation approach to create new minority samples. Logistic regression, support vector machine (SVM) classification, and neural network (NN) classification trained on the imbalanced dataset were used as benchmarks against the same single prediction models trained on a balanced dataset corrected by the minority sample generation approach. Instead of using prediction-oriented tests and overall accuracy, the true positive rate (TPR), the true negative rate (TNR), the G-mean, and the F-score are used to measure the performance of default prediction models on imbalanced datasets. In this paper, we describe an empirical experiment that used a sample of 14 defaulted and 315 non-defaulted listed real estate companies in China and report that most single prediction models with a balanced dataset generated better results than with the imbalanced dataset.
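
The evaluation measures named above can be computed directly from a confusion matrix, as in this small sketch (the example counts are illustrative, not the paper's results):

```python
import math

def imbalance_metrics(tp, fn, tn, fp):
    """Compute TPR, TNR, G-mean, and F-score from a confusion matrix, taking
    the default (minority) class as the positive class."""
    tpr = tp / (tp + fn) if tp + fn else 0.0        # true positive rate (recall)
    tnr = tn / (tn + fp) if tn + fp else 0.0        # true negative rate
    precision = tp / (tp + fp) if tp + fp else 0.0
    g_mean = math.sqrt(tpr * tnr)
    f_score = (2 * precision * tpr / (precision + tpr)) if precision + tpr else 0.0
    return {"TPR": tpr, "TNR": tnr, "G-mean": g_mean, "F-score": f_score}

# e.g., 10 of 14 defaults caught, 300 of 315 non-defaults correctly rejected
print(imbalance_metrics(tp=10, fn=4, tn=300, fp=15))
```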

The accuracy of training-based activity recognition depends on the training procedure and the extent to which the training dataset comprehensively represents the activity and its varieties. Additionally, training incurs substantial cost and effort in the process of collecting training data. To address these limitations, we have developed a training-free activity recognition approach based on a fuzzy logic algorithm that utilizes a generic activity model and associated activity semantic knowledge. The approach is validated through experimentation with real activity datasets. Results show that the fuzzy logic based algorithms exhibit comparable or better accuracy than other training-based approaches.

Internet accessibility has been growing due to the diffusion of smartphones in today’s society. As a result, people can generate data anywhere and are confronted with the challenge of processing large amounts of data. Since the appearance of the relational database management system (RDBMS), most information systems have been built on it. An RDBMS uses foreign keys to avoid data duplication, and database transactions have the atomicity, consistency, isolation, and durability (ACID) properties, which ensure that data integrity and processing results are managed reliably. The characteristic of the RDBMS is thus high data reliability; however, this results in performance degradation. Meanwhile, some information systems require high performance rather than high reliability. In this case, if we only consider performance, the use of NoSQL provides many advantages: open source software based NoSQL can reduce the ever-increasing maintenance cost of an information system, and NoSQL also has the great advantage of being easy to use. Therefore, in this study, we show that leveraging NoSQL ensures higher performance than an RDBMS by applying NoSQL to database systems originally implemented with an RDBMS.

Service-oriented computing offers efficient solutions for executing complex applications in an acceptable amount of time. These solutions provide important computing and storage resources, but they are too difficult for individual users to handle; in fact, service-oriented architectures are usually sophisticated in terms of design, specification, and deployment. On the other hand, workflow management systems provide frameworks that help users manage cooperative and interdependent processes in a user-friendly manner. In this paper, we propose a workflow-based approach to take full advantage of new service-oriented architectures while taking into account users’ skills and the internal complexity of their applications. To this end, we define a novel framework named JASMIN, which is responsible for managing service-oriented workflows on distributed systems. JASMIN has two main components: unified modeling language (UML) to specify workflow models, and business process execution language (BPEL) to generate and compose Web services. In order to cover both workflow and service concepts, we describe in this paper a refinement of UML activity diagrams and present a set of rules for mapping UML activity diagrams into BPEL specifications.

The femtocell overlaid cellular network (FOCN) has been used to enhance the capacity of existing cellular systems. To obtain the desired system performance, both cross-tier interference and co-tier interference in an FOCN need to be managed. This paper proposes an interference management scheme that adaptively constructs a femtocell cluster, which is a group of femtocell base stations that share the same frequency band. The performance evaluation shows that the proposed scheme can enhance the performance of the macrocell tier and maintain a signal to interference-plus-noise ratio above the outage level for about 99% of femtocell users.

Temporal medical data is often collected during patient treatments that require personal analysis. Each observation recorded in the temporal medical data is associated with measurements and treatment times. A major problem in the analysis of temporal medical data is missing values, which are caused, for example, by patients dropping out of a study before completion. Therefore, the imputation of missing data is an important pre-processing step and can provide useful information before the data is mined. For each patient and each variable, this imputation replaces the missing data with a value drawn from an estimated distribution of that variable. In this paper, we propose a new method, called Newton’s finite divided difference polynomial interpolation with condition order degree, for dealing with missing values in temporal medical data related to obesity. We compared the new imputation method with three existing subspace estimation techniques: the k-nearest neighbor, local least squares, and natural cubic spline approaches. The performance of each approach was then evaluated using the normalized root mean square error and statistical significance tests. The experimental results demonstrate that the proposed method provides the best fit with the smallest error and is more accurate than the other methods.
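
The building block of the proposed imputation, Newton's finite divided difference interpolation, can be sketched as follows; the paper's "condition order degree" rule for choosing the polynomial degree is not reproduced here.

```python
def newton_divided_difference(x, y, x_new):
    """Standard Newton finite divided-difference interpolation: build the
    coefficient table in place, then evaluate the Newton form at x_new."""
    n = len(x)
    coef = list(y)
    for j in range(1, n):                       # divided-difference table
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - j])
    result = coef[-1]                           # Horner-like evaluation
    for i in range(n - 2, -1, -1):
        result = result * (x_new - x[i]) + coef[i]
    return result

# impute a missing weekly measurement from neighbouring observations
print(newton_divided_difference([1, 2, 4, 5], [70.1, 69.8, 69.0, 68.7], 3))
```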

Request-oriented sensor networks have stricter requirements than conventional event-driven or periodic report models. Therefore, in this paper we propose a minimum energy data aggregation (MEDA) scheme, which meets the requirements of request-oriented sensor networks by exploiting a low power real-time scheduler, on-demand time synchronization, a variable response frame structure, and adaptive retransmission. In addition, we introduce a test bed consisting of a number of MEDA prototypes, which support near real-time bidirectional sensor networks. The experimental results also demonstrate that MEDA guarantees deterministic aggregation time, enables minimum energy operation, and provides a reliable data aggregation service.

In this paper, we investigate the use of seasonal autoregressive integrated moving average (SARIMA) time series models for fault detection in semiconductor etch equipment data. The derivative dynamic time warping algorithm was employed for the synchronization of the data. The models were generated using a set of data from healthy runs, and the established models were compared with the experimental runs to find the faulty runs. We show that SARIMA modeling can detect faults in etch tool data from the semiconductor industry with an accuracy of 80% and 90% using parameter-wise error computation and step-wise error computation, respectively. We found that SARIMA is useful for detecting incipient faults in semiconductor fabrication.
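
A minimal sketch of this kind of SARIMA-based fault detection, assuming the statsmodels library and illustrative model orders (neither is specified by the paper), might look like this:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_healthy_model(healthy_series, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)):
    """Fit a SARIMA model on a sensor trace from a healthy run; the orders
    here are placeholders, not the paper's selected model."""
    return SARIMAX(healthy_series, order=order, seasonal_order=seasonal_order).fit(disp=False)

def is_faulty(model, test_series, threshold):
    """Flag a run as faulty when its mean squared deviation from the
    healthy-model forecast exceeds a chosen threshold."""
    pred = model.get_forecast(steps=len(test_series)).predicted_mean
    error = np.mean((np.asarray(test_series) - np.asarray(pred)) ** 2)
    return error > threshold, error
```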

An extreme learning machine (ELM) is a recently proposed learning algorithm for a single hidden layer feedforward neural network. In this paper we study an ensemble of ELMs built with a bagging algorithm for facial expression recognition (FER). Facial expression analysis is widely used in the behavioral interpretation of emotions, in cognitive science, and in social interactions. This paper presents a method for FER based on histogram of oriented gradients (HOG) features using an ELM ensemble. First, the HOG features are extracted from the face image by dividing it into a number of small cells. A bagging algorithm is then used to construct many different bags of training data, each of which is used to train a separate ELM. To recognize the expression of an input face image, its HOG features are fed to each trained ELM and the results are combined using a majority voting scheme. The ELM ensemble with bagging improves the generalization capability of the network significantly. Two available facial expression datasets (JAFFE and CK+) were used to evaluate the performance of the proposed classification system. Even though the performance of an individual ELM was lower, the ELM ensemble using the bagging algorithm improved the recognition performance significantly.
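
The ELM component can be sketched in a few lines: random input weights and biases, with output weights solved analytically by a pseudo-inverse. HOG extraction and the bagging ensemble are omitted here; the hidden-layer size and activation are assumptions.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, output weights
    obtained by least squares (pseudo-inverse)."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))  # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                 # random biases
        H = np.tanh(X @ self.W + self.b)                             # hidden activations
        self.beta = np.linalg.pinv(H) @ Y                            # analytic output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Majority voting over several ELMs trained on bootstrap samples of the HOG
# features would form the bagging ensemble described in the abstract.
```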

In this paper, the average symbol error rate (SER) and throughput are studied for joint spectrum sensing and data transmission in a cognitive relay network under an optimal power allocation strategy. In this investigation, the main component in calculating the secondary throughput is the inclusion of spatial false alarms in addition to the conventional false alarms. It is shown that there exists an optimal secondary power amplification factor at which the SER has a minimum value, while the throughput has a maximum value. We performed a Monte-Carlo simulation to validate the analytical results.

Influence maximization is the problem of finding a small subset of nodes in a social network such that, by targeting this set, one maximizes the expected spread of influence in the network. To improve the efficiency of the KK_Greedy algorithm proposed by Kempe et al., we propose two improved algorithms, Lv_NewGreedy and Lv_CELF. By combining the advantages of these two algorithms, we further propose a mixed algorithm, Lv_MixedGreedy. We conducted experiments on two synthetic datasets and show that our improved algorithms achieve an influence spread matching their benchmark algorithms while running faster.
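
For orientation, the baseline greedy selection in the spirit of Kempe et al. under the independent cascade model is sketched below; the Lv_NewGreedy and Lv_CELF speed-ups proposed in the paper are not reproduced.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One Monte-Carlo run of the independent cascade model.
    graph: dict mapping node -> list of neighbours."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.1, runs=200):
    """Baseline greedy seed selection: repeatedly add the node with the
    largest estimated marginal spread."""
    seeds = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = sum(simulate_ic(graph, seeds + [v], p) for _ in range(runs)) / runs
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds
```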

In today’s world, communication, the sharing of information, and money transactions can all be conducted via the Internet, but it is important that these things are done by the actual person. An intruder can access user information via several means, so several precautionary measures have to be taken to avoid such instances. The purpose of this paper is to introduce the idea of a one-time password (OTP), which makes unauthorized access difficult for unauthorized users. An OTP can be implemented using smart cards, time-based tokens, and short message service, but hardware-based methodologies require maintenance costs and can be misplaced. Therefore, a quick response (QR) code technique and a personal assurance message have been added along with the OTP authentication.
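
As a generic illustration of OTP generation (a standard HOTP-style construction, not necessarily the one used in this paper), consider:

```python
import hmac, hashlib, struct

def generate_otp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Standard HMAC-based OTP generation in the style of RFC 4226; shown as
    a generic example, not the paper's specific OTP construction."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(generate_otp(b"shared-secret", counter=42))     # six-digit code
```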

In this paper we present research results on computing-intensive applications using modern high performance architectures, from the perspective of high computational needs. Computing-intensive applications are an important family of applications in the distributed computing domain, and they have been studied using different distributed computing paradigms and infrastructures. Such applications are distinguished by their demanding need for CPU computing, independently of the amount of data associated with the problem instance. Among computing-intensive applications are simulation-based applications, which aim to maximize system resources for processing large computations. In this work, we consider an application that simulates scheduling and resource allocation in a Grid computing system using Genetic Algorithms. In such an application, a rather large number of simulations is needed to extract meaningful statistical results about the behavior of the simulation results. We study the performance of Oracle Grid Engine for this application running on a cluster with high computing capacity. Several scenarios were generated to measure the response time and queuing time under different workloads and numbers of nodes in the cluster.

In the cursive handwriting recognition process, script trajectory segmentation and modeling are important tasks for large or open lexicon contexts, and they become more complicated in multi-writer applications. In this paper, we present a system for Arabic online handwriting modeling based on grapheme segmentation and the extraction of geometric features. The main contribution consists of adapting Fourier descriptors to model the open trajectories of the segmented graphemes. To segment the trajectory of the handwriting, the system first detects its baseline by checking combined geometric and logical conditions. The detected baseline is then used as a topological reference for the extraction of particular points that delimit the graphemes’ trajectories. Each segmented grapheme is then represented by a set of relevant geometric features that include the vector of Fourier descriptors for trajectory shape modeling, normalized metric parameters that model the grapheme dimensions, its position with respect to the baseline, and codes for the description of its associated diacritics.
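
The standard Fourier-descriptor computation for a 2-D trajectory is sketched below; the paper's specific adaptation to open grapheme trajectories is not reproduced, and the normalization shown is the usual closed-contour convention.

```python
import numpy as np

def fourier_descriptors(x, y, n_coeffs=10):
    """Standard Fourier descriptors of a 2-D trajectory: treat the points as
    a complex signal, keep the low-frequency coefficient magnitudes, and
    normalize for translation and scale."""
    z = np.asarray(x, dtype=float) + 1j * np.asarray(y, dtype=float)
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                        # drop DC term -> translation invariance
    norm = np.abs(coeffs[1]) or 1.0        # scale invariance
    return np.abs(coeffs[1:n_coeffs + 1]) / norm
```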

One-dimensional arrays with subscripts formed by induction variables appear quite frequently in real programs. For most well-known data dependence testing methods, checking whether integer-valued solutions exist for one-dimensional arrays with references created by induction variables is very difficult. The I test, which is a refined combination of the GCD and Banerjee tests, is an efficient and precise data dependence testing technique for determining whether integer-valued solutions exist for one-dimensional arrays with constant bounds and single increments. In this paper, the non-continuous I test, an extension of the I test, is proposed to determine whether there are integer-valued solutions for one-dimensional arrays with constant bounds and non-singular increments. Experiments with benchmarks cited from Livermore and Vector Loop reveal that there are definitive results for 67 pairs of one-dimensional arrays that were tested.

It is difficult for mobile learners to maintain a high level of concentration when learning content for more than an hour while they are on the move. Despite this attention span issue, many m-learning systems still provide their mobile learners with the same content once used in e-learning systems. This has called for an investigation to identify the characteristics suitable for the m-learning environment. With this in mind, we conducted a survey to determine the requirements for developing more suitable m-learning content. Based on the results of the survey, we developed a content model comprised of two types: a segment type and a supplement type. In addition, we implemented a prototype system of the content model for Apple iPhones and Android smartphones in order to conduct a feasibility study of the model’s application.

Biometric performance improvement is a challenging task. In this paper, a hierarchical fusion strategy for a multimodal biometric system is presented. This strategy relies on a combination of several biometric traits using a multi-level biometric fusion hierarchy. The multi-level biometric fusion includes a pre-classification fusion with optimal feature selection and a post-classification fusion based on the similarity of the maximum matching scores. The proposed solution enhances biometric recognition performance through suitable feature selection and reduction, such as principal component analysis (PCA) and linear discriminant analysis (LDA), since not all of the feature vector components contribute to the performance improvement.

To enhance the utilization of the traffic channels of a network, a channel or a group of channels is allocated to a user group instead of allocating a radio channel to an individual user. The idea behind this is the statistical distribution of traffic arrival rates and the service time for an individual user or a group of users. In this paper, we derive the blocking probability and throughput of a subscriber station of Worldwide Interoperability for Microwave Access (WiMAX) by considering both connection-level and packet-level traffic under a complete partition scheme. The main contribution of the paper is to incorporate a traffic shaping scheme on the incoming turbulent traffic. Hence, we also analyze the impact of the drain rate of the buffer on the blocking probability and throughput.

To facilitate the implementation of a wide variety of context-aware applications based on mobile devices, a general-purpose context-aware framework that applications can call is needed. The context-aware framework is middleware that performs sensing, reasoning, and retrieving based on a knowledge base. The knowledge base must systematically represent the information required for the behavior of the context-aware framework, such as context information and reasoning information, and it must provide functions for storage and retrieval. To date, research on the representation of context information has been carried out, but studies on a unified representation of the knowledge base have seen little progress. This study defines the knowledge base as unified context information and proposes UniOWL, which can represent it effectively. UniOWL is based on OWL and represents the information that is necessary for the operation of the context-aware framework. Therefore, UniOWL greatly facilitates the implementation of the knowledge base of a context-aware framework.

The performance of edge detection often relies on its ability to correctly determine the dissimilarities of connected pixels. For grayscale images, the dissimilarity of two pixels is estimated by the scalar difference of their intensities; for color images, this is done by using the vector difference (color distance) of the three color components. A color distance is typically measured as the Euclidean distance in the RGB color space. However, the RGB space is not suitable for edge detection, since its color components do not coincide with the information human perception uses to separate objects from backgrounds. In this paper, we propose a novel method for color edge detection that takes advantage of the HSV color space and the Mahalanobis distance. The HSV space models colors in a manner similar to human perception. The Mahalanobis distance considers the hue, saturation, and lightness independently and gives them different degrees of contribution to the measurement of color distances. Therefore, our method is robust against changes in lightness compared to previous approaches. Furthermore, we introduce a noise-resistant technique for determining image gradients. Various experiments on simulated and real-world images show that our approach outperforms several existing methods, especially when the images vary in lightness or are corrupted by noise.
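
A minimal sketch of a Mahalanobis color distance between HSV pixels, with the covariance estimated from a local neighbourhood, is shown below; the paper's particular per-channel weighting is not reproduced, and the circular wrap-around of hue is ignored for simplicity.

```python
import numpy as np

def mahalanobis_color_distance(p, q, cov):
    """Mahalanobis distance between two HSV pixels given a channel covariance
    matrix (e.g., estimated from a local neighbourhood)."""
    diff = np.asarray(p, dtype=float) - np.asarray(q, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# neighbourhood of HSV pixels -> covariance, then distance of two pixels
patch = np.random.rand(25, 3)                          # stand-in HSV samples
cov = np.cov(patch, rowvar=False) + 1e-6 * np.eye(3)   # regularized covariance
print(mahalanobis_color_distance(patch[0], patch[1], cov))
```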

The performance of a cognitive radio network (CRN) depends solely on how precisely the secondary users can sense the presence or absence of primary users. The incorporation of spatial false alarms makes deriving the probability of a correct decision a cumbersome task. Previous literature performed this task for a received signal under a Normal probability density function. In this paper we enhance the previous work by including the impact of carrier frequency, the gains of the antennas on both sides, and the antenna heights, so as to observe the robustness against noise and interference and to make the correct detection decision. Three small-scale fading channels (Rayleigh, Normal, and Weibull) were considered to capture the real scenario of a CRN in an urban area. The incorporation of maximal-ratio combining and selection combining with a varying number of receive antennas has also been studied in order to achieve the correct spectral sensing decision, so as to serve the cognitive users. Finally, we apply the above concepts to a traffic model of the CRN, which we base on a two-dimensional state transition chain.

Nowadays mobile users use a popular service called Location-Based Services (LBS). LBS is very helpful for a mobile user in finding various Points of Interest (POIs) in their vicinity. To get these services, users must provide their personal information, such as user identity or current location, which severely risks the location privacy of the user. Many researchers are developing schemes that enable a user to use LBS anonymously, but these approaches have some limitations (i.e., either the privacy protection mechanism is weak or the cost of the solution is too high). We therefore present a robust scheme for mobile users that allows them to use LBS anonymously. Our scheme involves a client side application that interacts with an untrusted LBS server to find the nearest POI for a service required by the user. The scheme is not only efficient in its approach, but is also very practical with respect to the computations that are done on a client’s resource constrained device. With our scheme, not only can a client use LBS anonymously without any trusted third party, but the server’s database also remains completely secure from the client. We performed experiments by developing and testing an Android-based client side smartphone application to support our argument.

In the past few years, distributed hash table (DHT)-based P2P systems have been proven to be a promising way to manage decentralized index information and provide efficient lookup services. However, the skewness o

The paper presents a fuzzy-based impulse noise filter for both gray scale and color images. The proposed approach is based on the technique of boundary discriminative noise detection. The algorithm is a multi-step process comprising detection, filtering, and color correction stages. The detection procedure classifies pixels as corrupted or uncorrupted by computing decision boundaries, which are fuzzified to improve the outputs obtained. In the case of color images, a correction term is added by examining the interactions between the color components for further improvement. Quantitative and qualitative analysis, performed on standard gray scale and color images, shows improved performance of the proposed technique over existing state-of-the-art algorithms in terms of Peak Signal to Noise Ratio (PSNR) and color difference metrics. The analysis proves the applicability of the proposed algorithm to random-valued impulse noise.

Prostate cancer is one of the most frequent cancers in men and is a major cause of mortality in most countries. Many diagnostic and treatment procedures for prostate disease require accurate detection of prostate boundaries in transrectal ultrasound (TRUS) images, which is a challenging and difficult task due to weak prostate boundaries, speckle noise, and the short range of gray levels. In this paper, a method for automatic prostate segmentation in TRUS images using Gabor feature extraction and a snake-like contour is presented. The method involves preprocessing, Gabor feature extraction, training, and prostate segmentation. Speckle reduction in the preprocessing step is achieved using a stick filter, and a top-hat transform is implemented for smoothing the contour. A Gabor filter bank for the extraction of rotation-invariant texture features is implemented, and a support vector machine (SVM) is used in the training step to characterize prostate and non-prostate features. Finally, the boundary of the prostate is extracted by the snake-like contour algorithm. A number of experiments were conducted to validate this method, and the results show that the new algorithm extracts the prostate boundary with less than 10.2% error relative to the boundary provided manually by experts.

Hindi is the most widely spoken language in India, with more than 300 million speakers. As there is no separation between the characters of texts written in Hindi as there is in English, Optical Character Recognition (OCR) systems developed for the Hindi language have a very poor recognition rate. In this paper we propose an OCR for printed Hindi text in Devanagari script that uses an Artificial Neural Network (ANN) to improve the recognition rate. One of the major reasons for the poor recognition rate is error in character segmentation. The presence of touching characters in scanned documents further complicates the segmentation process, creating a major problem when designing an effective character segmentation technique. Preprocessing, character segmentation, feature extraction, and finally classification and recognition are the major steps followed by a general OCR.
The preprocessing tasks considered in the paper are the conversion of gray scale images to binary images, image rectification, and segmentation of the document's textual contents into paragraphs, lines, words, and then basic symbols. The basic symbols, obtained as the fundamental units of the segmentation process, are recognized by the neural classifier.
In this work, three feature extraction techniques: histogram of projection based on mean distance, histogram of projection based on pixel value, and vertical zero crossing, have been used to improve the rate of recognition. These feature extraction techniques are powerful enough to extract features of even distorted characters/symbols. For the development of the neural classifier, a back-propagation neural network with two hidden layers is used. The classifier is trained and tested on printed Hindi texts, and a correct recognition rate of approximately 90% is achieved.

In this paper we propose a method to detect human faces in color images. Many existing systems use a window-based classifier that scans the entire image for the presence of a human face, and such systems suffer from scale variation, pose variation, illumination changes, etc. Here, we propose a lighting-insensitive face detection method based on the edge and skin tone information of the input color image. First, image enhancement is performed, especially if the image was acquired under unconstrained illumination conditions. Next, skin segmentation in YCbCr and RGB space is conducted, and the result is refined using the skin tone percentage index method. The edges of the input image are combined with the skin tone image to separate all non-face regions from candidate faces. Candidate verification using primitive shape features of the face is applied to decide which of the candidate regions correspond to a face. The advantage of the proposed method is that it can detect faces of different sizes, in different poses, and with different expressions under unconstrained illumination conditions.

There are many recommendation systems available to provide users with personalized services. Among them, the one most frequently used in electronic commerce is collaborative filtering, a technique that filters customer information to prepare profiles and recommends products that are expected to be preferred by other users, based on such information profiles. Collaborative filtering systems, however, have by their nature both technical issues, such as sparsity, scalability, and transparency, and security issues in the collection of the information that becomes the basis for preparing the profiles. In this paper, we suggest a movie recommendation system based on the selection of optimal personal propensity variables and the use of a secure collaborative filtering system, in order to provide a solution to the sparsity and scalability issues. At the same time, we adopt "push attack" principles to deal with the security vulnerability of collaborative filtering systems. Furthermore, we assess the system's applicability by using the open database MovieLens, and present a personal propensity framework for improving the performance of recommender systems. We successfully produce a movie recommendation system through the selection of optimal personalization factors and the realization of a safe collaborative filtering system.

In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to such areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eyes, nose, and mouth areas) using Haar-feature based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs.-rest SVM, which is a popular multi-classification method, is employed with the Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.

The confinement problem was first noted four decades ago. Since then, a huge amount of effort has been spent on defining and mitigating the problem. The evolution of technologies from traditional operating systems to mobile and cloud computing brings about new security challenges, so it is perhaps timely to review the work that has been done. We discuss the foundational principles from classical works, as well as the efforts towards solving the confinement problem in three domains: operating systems, mobile computing, and cloud computing. While common issues exist across all three domains, unique challenges arise for each of them, which we discuss.

In this paper, we analyze a technique for building a high-availability (HA) cluster system. We propose what we have termed the ‘Selective Replication Manager (SRM),’ which improves throughput and reduces the latency of disk devices by means of the Distributed Replicated Block Device (DRBD), which is integrated into recent Linux kernels (version 2.6.33 or higher) while still providing HA and failover capabilities. The proposed technique can be applied to any disk replication and database system with little customization and with a reasonably low performance overhead. We demonstrate that this approach using SRM increases the disk replication speed by 17% and reduces latency by 7%, as compared to the existing DRBD solution. This approach represents a good effort to increase HA with a minimum amount of risk and cost in terms of commodity hardware.

Ranking thousands of web documents so that they are matched in response to a user query is a challenging task. For this purpose, search engines use different ranking mechanisms on apparently related resultant web documents to decide the order in which the documents should be displayed. Existing ranking mechanisms decide the order of a web page based on the amount and popularity of the links pointing to it and emerging from it. Sometimes search engines place less relevant documents in the top positions in response to a user query, so there is a strong need to improve the ranking strategy. In this paper, a novel ranking mechanism is proposed to rank web documents that considers both the HTML structure of a page and the contextual senses of the keywords present within it and its back-links. The approach has been tested on datasets of URLs and their back-links in relation to different topics. The experimental results show that the overall search results, in response to user queries, are improved. The ordering of the links obtained is compared with the ordering produced by the PageRank score, and the results show that the proposed mechanism places more contextually related web pages in the top order, as compared to the PageRank score.

Maintenance Access Hatches are used to ensure urban safety and aesthetics while facilitating the management of power lines, telecommunication lines, and gas pipes. Such facilities necessitate affordable and effective surveillance. In this paper, we propose a FiCHS (Fixed Cluster head centralized Hierarchical Static clustering) routing protocol that is suitable for underground maintenance hatches using WSN (Wireless Sensor Network) technology. FiCHS is compared with three other protocols, LEACH, LEACH-C, and a simplified LEACH, based on an ns-2 simulation. FiCHS was observed to exhibit the highest levels of power and data transfer efficiency.

A new secure network communication technique designed for mobile wireless services is presented in this paper. Its network services are mobile, distributed, seamless, and secure. We focus on the security of the scheme and achieve anonymity and reliability by using cryptographic techniques such as blind signatures and electronic coins. The question we address in this paper is, “What is the best way to protect the privacy and anonymity of users of mobile wireless networks, especially in practical applications like e-commerce?” The new scheme is a flexible solution that answers this question: it efficiently protects users' privacy and anonymity in mobile wireless networks and supports various applications. It is employed to implement a secure e-auction as an example, in order to show its advantages in practical network applications.

A speaker’s intentions can be represented by domain actions (domain-independent speech act and domain-dependent concept sequence pairs). Therefore, it is essential that domain actions be determined when implementing dialogue systems because a dialogue system should determine users’ intentions from their utterances and should create counterpart intentions to the users’ intentions. In this paper, a neural network model is proposed for classifying a user’s domain actions and planning a system’s domain actions. An integrated neural network model is proposed for simultaneously determining user and system domain actions using the same framework. The proposed model performed better than previous non-integrated models in an experiment using a goal-oriented dialogue corpus. This result shows that the proposed integration method contributes to improving domain action determination performance.

Recently, a pixel-chaotic-shuffle (PCS) method was proposed by Huang et al. for encrypting color images using multiple chaotic systems, such as the Henon, Lorenz, Chua, and Rossler systems, all of which have good encryption performance. The authors claimed that their PCS encryption method offers high confidentiality. However, the security analysis of the PCS method against the chosen-plaintext attack (CPA) and known-plaintext attack (KPA) performed by Solak et al. successfully breaks the PCS encryption scheme without knowledge of the secret key. In this paper we present an improved shuffling pattern for the plaintext image bits to make the cryptosystem proposed by Huang et al. resistant to chosen-plaintext and known-plaintext attacks. Modifications to the existing PCS encryption method are proposed to improve its security against the potential attacks described above. The Number of Pixel Change Rate (NPCR), Unified Average Changed Intensity (UACI), information entropy, and correlation coefficient analyses are performed to evaluate the statistical performance of the modified PCS method. The simulation analysis reveals that the modified PCS method has better statistical features and is more resistant to attacks than Huang et al.’s PCS method.
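
The NPCR and UACI measures mentioned above follow standard definitions and can be computed as in this sketch:

```python
import numpy as np

def npcr_uaci(img1, img2):
    """Number of Pixel Change Rate and Unified Average Changed Intensity
    between two cipher images of the same shape (8-bit pixels)."""
    a = img1.astype(float)
    b = img2.astype(float)
    npcr = np.mean(a != b) * 100.0                  # % of differing pixels
    uaci = np.mean(np.abs(a - b) / 255.0) * 100.0   # average intensity change
    return npcr, uaci
```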

Cloud storage is provided as a service in order to keep pace with the increasing use of digital information. It can be used to store data via networks and various devices and is easy to access. Unlike existing removable storage, cloud storage can be used by many users because it has no storage capacity limit and does not require a physical storage medium. Cloud storage reliability has become a topic of importance, as many users employ it for saving great volumes of data. For protection against unethical administrators and attackers, a variety of cryptographic systems, such as searchable encryption and proxy re-encryption, are being applied to cloud storage systems. However, the existing searchable encryption technology is inconvenient to use in a cloud storage environment where users upload their data, because this data is shared with others as necessary and the users with whom the data is shared change frequently. In this paper, we propose a searchable re-encryption scheme in which a user can safely share data with others by generating a searchable encryption index and then re-encrypting it.

In a parallel processing system, Multi-stage Interconnection Networks (MINs) play a vital role in making the network reliable and cost effective. The MIN is an important architectural component of a multiprocessor system and has had a significant impact in the field of communication. Optical Multi-stage Interconnection Networks (OMINs) are the advanced version of MINs, and their main problem is crosstalk. This paper presents (1) the Destination Based Modified Omega Network (DBMON) and (2) the Destination Based Scheduling Algorithm (DBSA). DBSA schedules the source and corresponding destination addresses for message transmission, and these scheduled addresses are passed through DBMON. Furthermore, the performance of DBMON is compared with the Crosstalk-Free Modified Omega Network (CFMON), which also minimizes crosstalk in a minimum number of passes. Results show that DBMON is better than CFMON in terms of the average number of passes and execution time. DBSA can transmit all the messages in only two passes from any source to any destination through DBMON and without crosstalk. This network is a modified form of the original omega network. Crosstalk minimization is the main objective of the proposed algorithm and network.

This paper presents an approach for improving the use of the VP-tree in video indexing and searching. A vantage-point tree, or VP-tree, is one of the metric space-based indexing methods used in multimedia database searches and data retrieval. Instead of relying on the Euclidean distance alone as a measure of the search space, the proposed approach focuses on the triangle inequality for compressing the search range, which improves the search performance. A test using 10,000 video files shows that this method reduced the search time by 5-12%, as compared to the existing method that uses the AESA algorithm.
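
The triangle-inequality pruning at the heart of a VP-tree search can be sketched as below; this is the generic VP-tree idea under an assumed node layout, not the paper's specific range-compression scheme.

```python
def vp_search(node, query, dist, tau=float("inf"), best=None):
    """Minimal VP-tree nearest-neighbour search. A branch is skipped when the
    triangle inequality guarantees it holds no point closer than the current
    best distance tau: |d(q, vp) - mu| > tau.
    node: dict with keys 'vp', 'mu', 'inside', 'outside' (subtrees or None)."""
    if node is None:
        return tau, best
    d = dist(query, node["vp"])
    if d < tau:
        tau, best = d, node["vp"]
    near, far = (("inside", "outside") if d < node["mu"] else ("outside", "inside"))
    tau, best = vp_search(node[near], query, dist, tau, best)   # likely branch first
    if abs(d - node["mu"]) < tau:                               # prune the other branch?
        tau, best = vp_search(node[far], query, dist, tau, best)
    return tau, best
```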