Structured Query Language (SQL) injection continues to be one of the greatest security risks in the world according to the Open Web Application Security Project's (OWASP) [1] Top 10 Security Vulnerabilities 2013. Its ease of exploitability and severe impact put this attack at the top of the list. As countermeasures become more sophisticated, SQL injection attacks continue to evolve, thwarting attempts to eliminate them completely. The vulnerability of their data is a source of worry for government and financial institutions. In this paper, a detailed survey of the different types of SQL injection is presented, along with proposed methods and theories and various tools and their efficiency in intercepting and preventing SQL attacks.
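As a hedged illustration of the attack class surveyed here, the following sketch contrasts a query built by string concatenation with a parameterized query; the table, columns, and inputs are hypothetical, not taken from the survey.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation: the input "' OR '1'='1" bypasses the check.
    query = ("SELECT * FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Parameterized query: the driver treats inputs as data, not SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

print(login_vulnerable("alice", "' OR '1'='1"))  # returns a row: injected
print(login_safe("alice", "' OR '1'='1"))        # returns []: blocked
```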

Nowadays, geographic information systems (GIS) are developed and implemented in many areas, and a huge volume of vector map data has been accessed unlawfully by hackers, pirates, and unauthorized users. For this reason, we need methods that protect GIS data during storage, use in multimedia applications, and transmission. In this paper, a selective encryption method based on vertex randomization and a hybrid transform in the GIS vector map is presented. The proposed algorithm targets polylines and polygons for encryption. Objects are classified in each layer, and all coordinates of the significant objects are encrypted with key sets generated by a chaotic map before being transformed in the DWT and DFT domains. Experimental results verify that the method provides highly efficient visualization with low complexity and high security through its random processes.
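A minimal sketch of the chaotic-map idea, assuming a standard logistic map as the key generator; the paper's exact parameters, randomization step, and transform stage are not reproduced, and the function names are hypothetical.

```python
import numpy as np

def logistic_keystream(seed, r, n):
    # Logistic map x_{k+1} = r * x_k * (1 - x_k); chaotic for r near 4.
    x, keys = seed, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        keys[i] = x
    return keys

def encrypt_vertices(coords, seed=0.3141, r=3.9999):
    # Perturb each coordinate by a key value; the same seed/r decrypts.
    keys = logistic_keystream(seed, r, coords.size).reshape(coords.shape)
    return coords + keys

polyline = np.array([[127.01, 37.52], [127.03, 37.55], [127.05, 37.60]])
cipher = encrypt_vertices(polyline)
print(cipher - polyline)  # the keystream; subtracting it recovers the data
```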

Big data information and pattern analysis have applications in many industrial sectors. To reduce energy consumption effectively, eco-driving methods that reduce the fuel consumption of vehicles have recently come under scrutiny. Using big data on commercial vehicles obtained from digital tachographs (DTGs), it is possible not only to aid traffic safety but also to improve eco-driving. In this study, we estimate fuel consumption efficiency by processing and analyzing DTG big data for commercial vehicles, using parallel processing with the MapReduce mechanism. In contrast to the conventional measurement of fuel consumption with an On-Board Diagnostics II (OBD-II) device, we use actual DTG data together with OBD-II fuel consumption data to identify meaningful relationships for calculating fuel efficiency rates. Based on the driving patterns extracted from DTG data, fuel consumption can then be estimated from DTG big data alone.
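A hedged sketch of the MapReduce step, assuming simplified DTG records; the field names, the speed-band grouping key, and the efficiency measure are illustrative assumptions, not the paper's actual pipeline.

```python
from collections import defaultdict

# Simplified DTG records: (vehicle_id, speed_kmh, fuel_liters, distance_km)
records = [("truck1", 62, 0.9, 10), ("truck1", 80, 1.2, 10),
           ("truck2", 55, 0.7, 10), ("truck2", 95, 1.5, 10)]

def mapper(record):
    vid, speed, fuel, dist = record
    # Key by a coarse driving-pattern bucket (speed band).
    band = "eco" if speed <= 70 else "aggressive"
    yield (vid, band), (fuel, dist)

def reducer(key, values):
    fuel = sum(f for f, _ in values)
    dist = sum(d for _, d in values)
    return key, dist / fuel  # km per liter for this vehicle and band

groups = defaultdict(list)
for rec in records:               # map phase
    for k, v in mapper(rec):
        groups[k].append(v)
for k in groups:                  # reduce phase
    print(reducer(k, groups[k]))
```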

In this work, we are interested in the extraction of areas of interest from satellite images by introducing a MO-TRIBES/OC-SVM approach. The One-Class Support Vector Machine (OC-SVM) is based on the estimation of a support that encloses the training data: it identifies areas of interest without including other classes from the scene. We propose generating optimal training data using the Multi-Objective TRIBES (MO-TRIBES) to improve the performance of the OC-SVM. MO-TRIBES is a parameter-free optimization technique that organizes the search space into tribes composed of agents. It makes different behavioral and structural adaptations to minimize the false positive and false negative rates of the OC-SVM. We have applied our proposed approach to the extraction of earthquake-affected and urban areas. The experimental results and comparisons with different state-of-the-art classifiers confirm the efficiency and robustness of the proposed approach.
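For the OC-SVM component, a minimal sketch using scikit-learn's OneClassSVM on synthetic two-dimensional features; the MO-TRIBES optimization of the training set is not reproduced, and the nu and gamma values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Training pixels from the class of interest (e.g., an urban-area feature).
X_train = rng.normal(loc=[0.6, 0.4], scale=0.05, size=(200, 2))
# Scene pixels: some from the class of interest, some from other classes.
X_scene = np.vstack([rng.normal([0.6, 0.4], 0.05, (50, 2)),
                     rng.normal([0.1, 0.9], 0.05, (50, 2))])

# nu upper-bounds the fraction of training errors / support vectors.
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
labels = clf.predict(X_scene)  # +1 inside the support, -1 outside
print("detected as area of interest:", int((labels == 1).sum()))
```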

A watermark is a signal added to an original signal in order to preserve the copyright of the owner of the digital content. The basic challenge in designing a watermarking system is the dilemma between transparency and robustness: a higher rate of transparency requires a compromise in robustness, and vice versa. Also, until now, watermarking has been generalized, resulting in the need for specialized algorithms for specialized image processing application domains. Our proposed technique takes the image characteristics into consideration for watermark insertion, and it optimizes transparency and robustness. It achieved a 99.98% retrieval efficiency for an image blurring attack and withstands other attacks. Our proposed technique counters almost all of the image processing attacks.

Recently, the importance of big data has been emphasized along with the development of smartphones and web/SNS services. As a result, MapReduce, which can efficiently process big data, is receiving worldwide attention because of its excellent scalability and stability. Since big data is characterized by large volume, fast creation speed, and varied properties, it is often more efficient to process big data summary information than the big data itself. The wavelet histogram, a typical data summarization technique, can generate optimal summary information without losing the information of the original data. Therefore, systems applying MapReduce-based wavelet histogram generation have been actively studied. However, existing approaches have the disadvantage of slow generation, because the wavelet histogram is produced through multiple MapReduce jobs, and there is a high possibility that the error of the data restored from the wavelet histogram becomes large. In contrast, the MapReduce-based wavelet histogram generation system developed in this paper produces the wavelet histogram in a single MapReduce job, so the generation speed can be greatly increased. In addition, since the wavelet histogram is generated under an error bound specified by the user, the error of the data restored from the wavelet histogram can be controlled. Finally, we verified the efficiency of the developed wavelet histogram generation system through a performance evaluation.
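A minimal sketch of the underlying Haar wavelet transform on a frequency histogram, with thresholding of small coefficients as the summarization step; the single-job MapReduce pipeline and the user-specified error-bound control are not reproduced, and the threshold below is an illustrative assumption.

```python
def haar_decompose(hist):
    # Full Haar decomposition of a histogram whose length is a power of 2.
    coeffs, data = [], list(hist)
    while len(data) > 1:
        avgs = [(data[i] + data[i + 1]) / 2 for i in range(0, len(data), 2)]
        dets = [(data[i] - data[i + 1]) / 2 for i in range(0, len(data), 2)]
        coeffs = dets + coeffs   # coarser-level details go in front
        data = avgs
    return data + coeffs         # overall average followed by details

def haar_reconstruct(coeffs):
    data, rest = coeffs[:1], coeffs[1:]
    while rest:
        dets, rest = rest[:len(data)], rest[len(data):]
        data = [v for a, d in zip(data, dets) for v in (a + d, a - d)]
    return data

hist = [2, 2, 0, 2, 3, 5, 4, 4]
c = haar_decompose(hist)
# Keep only the larger-magnitude coefficients as the summary.
summary = [v if abs(v) >= 0.75 else 0 for v in c]
print(haar_reconstruct(summary))  # approximate histogram from the summary
```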

The burgeoning distribution of smartphone web applications based on various mobile environments is increasingly focusing attention on the performance of mobile applications implemented with JavaScript and HTML5 (Hyper Text Markup Language 5). If application software has a simple functional processing structure, the problem is benign. However, browser loads are becoming more burdensome as the amount of JavaScript processing continues to increase, and the processing time and capacity of JavaScript in current mobile browsers are limited. As a solution, the Web Worker was designed to implement multi-threading. However, it cannot guarantee the computing ability of a native application on mobile devices and is not sufficient to improve processing speed. The method proposed in this research overcomes the resource limitations of a mobile client and guarantees performance comparable to native application software by providing a high-computing service: it offloads the JavaScript processing of a mobile device onto a cloud-based computing server. A performance evaluation experiment revealed the proposed algorithm to be up to 6 times faster in computing speed than the existing mobile browser's JavaScript processing, and 3 to 6 times faster than the Web Worker. In addition, memory usage was also lower than with the existing technology.

Social networking services (SNSs) such as Twitter, MySpace, and Facebook have become progressively significant, with their billions of users. Alongside this growth, however, is an increase in security threats such as cross-site scripting (XSS). Recently, a few approaches have been proposed to detect XSS attacks on SNSs. Due to certain recent features of SNS webpages, such as JavaScript and AJAX, however, the existing approaches are not efficient at combating XSS attacks on SNSs. In this paper, we propose a machine learning-based approach to detecting XSS attacks on SNSs. In our approach, the detection of an XSS attack is performed based on three classes of features: URL, webpage, and SNS features. A dataset is prepared by collecting 1,000 SNS webpages and extracting the features from these webpages. Ten different machine learning classifiers are used on the prepared dataset to classify webpages into two categories: XSS or non-XSS. To validate the efficiency of the proposed approach, we evaluated and compared it with other existing approaches. The evaluation results show that our approach attains better performance in the SNS environment, recording the highest accuracy of 0.972 and the lowest false positive rate of 0.87.
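A hedged sketch of the classification step, with a few illustrative (hypothetical) URL and webpage features standing in for the paper's full feature set; the tiny dataset and the choice of a random forest are assumptions for demonstration only.

```python
import re
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(url, html):
    # Illustrative features only; the paper's actual feature set is richer.
    return [len(url),
            url.count("%"),                         # URL-encoding density
            len(re.findall(r"<script", html, re.I)),
            html.lower().count("eval("),
            html.lower().count("document.cookie")]

pages = [("http://sns.example/profile?id=7", "<p>hello</p>", 0),
         ("http://sns.example/q=%3Cscript%3E",
          "<script>eval(document.cookie)</script>", 1)]

X = np.array([extract_features(u, h) for u, h, _ in pages])
y = np.array([label for _, _, label in pages])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X))  # 1 = XSS, 0 = non-XSS
```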

A primary task in wireless sensor networks (WSNs) is data collection. The main objective of this task is to collect sensor readings from sensor fields at predetermined sinks using routing protocols, without conducting in-network processing at intermediate nodes. Many research studies using a static sink have shown this approach to be inefficient: its major drawback is that sensor nodes near the data sink dissipate more energy than those farther away, due to their role as relay nodes. Recently, novel WSN architectures based on mobile sinks and mobile relay nodes, which are able to move inside the region of a deployed WSN, have been developed; most research works on mobile WSNs mainly exploit mobility to reduce and balance energy consumption and to enhance communication reliability among sensor nodes. Our main purpose in this paper is to propose a solution to the problem of deploying mobile data collectors to alleviate the high traffic load, and the resulting bottleneck in a sink's vicinity, caused by static approaches. We studied two key issues in WSN mobility: the impact of the mobile element (sink or relay nodes) and the impact of the mobility model on WSN performance, expressed in terms of energy efficiency and reliability. We conducted an extensive set of simulation experiments, and the results obtained reveal that the collection approach based on relay nodes and the stochastic mobility model perform better.

This research presents battery discharge rate models for the energy consumption of mobile phone batteries, based on machine learning and taking into account three usage patterns of the phone: the standby state, video playing, and web browsing. We present the experimental design methodology for collecting data, preprocessing, model construction, and parameter selection. The data was collected on the HTC One X hardware platform. We considered various setting factors, such as Bluetooth, brightness, 3G, GPS, Wi-Fi, and Sync. The battery levels for each possible state vector were measured, and we then constructed the battery prediction model using different regression functions based on the collected data. The accuracy of the models constructed using the multi-layer perceptron (MLP) and the support vector machine (SVM) was compared using varying kernel functions, and various parameters for the MLP and SVM were considered. Prediction quality was measured by the mean absolute error (MAE) and the root mean squared error (RMSE). The experiments showed that the MLP with linear regression performs well overall, while the SVM with the polynomial kernel function based on linear regression gives a low MAE and RMSE. As a result, we were able to demonstrate how to apply the derived model to predict the remaining battery charge.
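A minimal sketch of the model-comparison step with scikit-learn, assuming a toy state vector of setting factors and a synthetic target; the real dataset and tuned parameters come from the paper's HTC One X measurements and are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(1)
# Toy state vectors: [bluetooth, brightness, 3g, gps, wifi, sync]
X = rng.integers(0, 2, size=(200, 6)).astype(float)
y = 5 + X @ np.array([1.5, 3.0, 2.0, 2.5, 1.0, 0.5]) + rng.normal(0, 0.3, 200)

models = {"MLP": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0),
          "SVM-poly": SVR(kernel="poly", degree=2, C=10.0)}
for name, model in models.items():
    pred = model.fit(X[:150], y[:150]).predict(X[150:])
    mae = mean_absolute_error(y[150:], pred)
    rmse = mean_squared_error(y[150:], pred) ** 0.5
    print(f"{name}: MAE={mae:.3f} RMSE={rmse:.3f}")
```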

The use of mobile agents for collaborative processing in wireless sensor networks has gained considerable attention, particularly when mobile agents are used for data aggregation to exploit redundant and correlated data. The efficiency of agent-based data aggregation depends on the agent migration scheme. In general, however, most of the proposed schemes are centralized, in that the sink node determines the migration paths for the agents before dispatching them into the sensor network. The main limitation of such schemes is that they need global network topology information to derive the agents' migration paths, which incurs additional communication overhead, since each node has a very limited communication range. In addition, a centralized approach does not provide fault-tolerant and adaptive migration paths. To solve these problems, we propose a distributed scheme that determines the migration paths of the agents hop by hop, using local information at each hop to decide an agent's next migration. We also propose a local repair mechanism for dealing with faulty nodes. The simulation results show that the proposed scheme performs better than existing schemes in the presence of faulty nodes within the network, and manages to report the aggregated data to the sink faster.

In this paper, we present a virtual laboratory platform (VLP), named Mercury, that allows students to carry out practical work (PW) on different aspects of mobile wireless sensor networks (WSNs). Our choice of WSNs is motivated mainly by the real experiments needed in most courses about WSNs; these experiments require an expensive investment and a large number of nodes in the classroom. To illustrate our study, we propose a course related to an energy-efficient and safe weighted clustering algorithm. This algorithm, coupled with suitable routing protocols, aims to maintain a stable clustering structure, to prevent most routing attacks on sensor networks, and to guarantee energy savings that extend the lifespan of the network. It also offers better performance in terms of the number of re-affiliations. The platform presented here aims at showing the feasibility, flexibility, and reduced cost of such a realization. We demonstrate the performance of the proposed algorithms, which contribute to familiarizing learners with the field of WSNs.

Beacon scheduling is considered to be one of the most significant challenges for energy-efficient Low-Rate Wireless Personal Area Network (LR-WPAN) multi-hop networks. The emerging standard, IEEE 802.15.4e, contains a distributed beacon scheduling functionality that utilizes a specific bitmap and multi-superframe structure. However, this new standard does not provide a critical recipe for superframe duration (SD) allocation in beacon scheduling. Therefore, in this paper, we first introduce three different SD allocation approaches: LSB first, MSB first, and random. Via experiments, we show that IEEE 802.15.4e DSME beacon scheduling performs differently under different SD allocation schemes. Based on our experimental results, we propose an adaptive SD allocation (ASDA) algorithm that utilizes a single indicator, a distributed neighboring slot incrementer (DNSI). The experimental results demonstrate that ASDA has superior performance over the other methods from the viewpoint of resource efficiency.
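A hedged sketch of the three SD-allocation policies over a neighborhood beacon bitmap, assuming a simplified 16-slot multi-superframe; the bitmap layout and policy details are illustrative readings of "LSB first", "MSB first", and "random", not the standard's exact procedure.

```python
import random

def allocate_sd(bitmap, policy, rng=random.Random(0)):
    # bitmap[i] == 1 means superframe-duration slot i is already taken.
    free = [i for i, used in enumerate(bitmap) if not used]
    if not free:
        return None                      # no slot available
    if policy == "LSB":
        return free[0]                   # lowest-indexed free slot first
    if policy == "MSB":
        return free[-1]                  # highest-indexed free slot first
    return rng.choice(free)              # random free slot

neighborhood_bitmap = [1, 1, 0, 0, 1, 0, 0, 0,
                       0, 0, 0, 0, 0, 0, 0, 1]
for policy in ("LSB", "MSB", "random"):
    print(policy, "->", allocate_sd(neighborhood_bitmap, policy))
```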

High Efficiency Video Coding (HEVC) is the most recent video codec standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goal of this newly introduced standard is to cater to high-resolution video in low-bandwidth environments with a higher compression ratio. This paper provides a performance comparison between the HEVC and H.264/AVC video compression standards in terms of objective quality, delay, and complexity in the broadcasting environment. The experimental investigation was carried out using six test sequences in the random access configuration of the HEVC test model (HM), the HEVC reference software, and in similar configuration settings of the Joint Scalable Video Module (JSVM), the official scalable H.264/AVC reference implementation, running in single-layer mode. According to the results obtained, the HM achieves more than double the compression ratio of the JSVM, delivering the same video quality at half the bitrate; however, the HM encodes at most two times slower than the JSVM. Hence, the application scenarios of the HM and the JSVM should be judiciously selected considering the availability of system resources: for instance, the HM is not suitable for low-delay applications, but it can be used effectively in low-bandwidth environments.

Resource sharing is a major advantage of distributed computing. However, a distributed computing system may have physical or virtual resources that are accessible by only a single process at a time. The mutual exclusion problem is to ensure that no more than one process at a time is allowed to access such a shared resource. This article proposes a token-based mutual exclusion algorithm for clustered mobile ad hoc networks (MANETs). The mechanism adopted to handle token passing at the inter-cluster level is different from that at the intra-cluster level, which makes our algorithm message efficient and thus suitable for MANETs. In the interest of efficiency, we implemented a centralized token passing scheme at the intra-cluster level; since centralized schemes are inherently failure-prone, we present an intra-cluster token passing scheme that is able to tolerate a failure. To enhance reliability, we applied a distributed token circulation scheme at the inter-cluster level. More importantly, the message complexity of the proposed algorithm is independent of N, the total number of nodes in the system, and under a heavy load it turns out to be inversely proportional to n, the average number of nodes per cluster. We substantiate our claims with a correctness proof, complexity analysis, and simulation results. Finally, we present a simple approach to make our protocol fault tolerant.

Recently, many large organizations have had multiple data sources (MDSs) distributed over different branches of an interstate company. Local pattern analysis has become an effective strategy for MDS mining in national and international organizations. It consists of mining different datasets to obtain frequent patterns, which are forwarded to a centralized place for global pattern analysis. Various synthesizing models [2,3,4,5,6,7,8,26] have been proposed to build global patterns from the forwarded patterns. It is desired that the rules synthesized from such forwarded patterns closely match the mono-mining results (i.e., the results that would be obtained if all of the databases were put together and mined as one). When a pattern is present at a site but fails to satisfy the minimum support threshold value, it is not allowed to take part in the pattern synthesizing process; this process can therefore lose some interesting patterns that could help the decision maker make the right decision. For such situations, we propose the application of a probabilistic model in the synthesizing process, since an adequate choice of probabilistic model can improve the quality of the discovered patterns. In this paper, we perform a comprehensive study of the various probabilistic models that can be applied in the synthesizing process, and we choose and improve one of them to ameliorate the synthesizing results. Finally, experiments on public databases are presented in order to evaluate the efficiency of our proposed synthesizing method.

Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. Many skin color detection algorithms extract human skin color regions based on the thresholding technique, since it is simple and fast to compute. The efficiency of each color space depends on its robustness to changes in lighting and its ability to distinguish skin color pixels in images that have a complex background. For more accurate skin detection, we propose a new threshold based on the RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. It then separates the Y channel, which represents the intensity of the color model, from the U and V channels, to eliminate the effects of luminance. After that, the threshold values are selected based on testing the boundaries of skin colors with the help of the color histogram. Finally, the threshold is applied to the input image to extract the skin parts. The detected skin regions were quantitatively compared to the actual skin parts in the input images to measure the accuracy, and the results of our threshold were compared to the results of others' thresholds to prove the efficiency of our approach. The experimental results show that the proposed threshold is more robust in dealing with complex backgrounds and lighting conditions than the others.
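A minimal sketch of the RGB-to-YUV conversion and U/V-only thresholding, using the standard BT.601 conversion; the threshold bounds below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def rgb_to_yuv(img):
    # BT.601 conversion; img is H x W x 3 with float RGB in [0, 255].
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)            # chrominance components,
    v = 0.877 * (r - y)            # independent of luminance
    return y, u, v

def skin_mask(img, u_range=(-30, 5), v_range=(10, 60)):
    # Threshold only U and V so the mask is insensitive to brightness Y.
    _, u, v = rgb_to_yuv(img.astype(float))
    return ((u >= u_range[0]) & (u <= u_range[1]) &
            (v >= v_range[0]) & (v <= v_range[1]))

img = np.array([[[224, 172, 140], [40, 60, 200]]], dtype=np.uint8)
print(skin_mask(img))  # [[ True False]] for this toy skin/non-skin pair
```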

Various time synchronization protocols for wireless sensor networks (WSNs) have been developed, since time synchronization is important in many time-critical WSN applications. Aside from synchronization accuracy, the energy constraint should also be considered seriously for time synchronization protocols in WSNs, which typically have limited power environments. This paper analyzes prominent WSN time synchronization protocols in terms of power consumption and tests them by simulation. In the analysis and simulation tests, each protocol shows different performance in terms of power consumption. These results are helpful in choosing or developing an appropriate time synchronization protocol that meets the requirements of synchronization accuracy and power consumption (or network lifetime) for a specific WSN application.

Influence maximization is the problem of finding a small subset of nodes in a social network such that, by targeting this set, one maximizes the expected spread of influence in the network. To improve the efficiency of the KK_Greedy algorithm proposed by Kempe et al., we propose two improved algorithms, Lv_NewGreedy and Lv_CELF. By combining the advantages of these two algorithms, we propose a mixed algorithm, Lv_MixedGreedy. We conducted experiments on two synthetic datasets and show that our improved algorithms match the influence of their benchmark algorithms while being faster.
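A hedged sketch of the baseline greedy framework these algorithms improve on, using Monte Carlo simulation of the independent cascade model on a toy graph; the graph and propagation probability are illustrative, and none of the Lv_ optimizations are reproduced.

```python
import random

# Toy directed graph: node -> list of neighbors it can influence.
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
P = 0.3  # uniform propagation probability (independent cascade)

def cascade(seeds, rng):
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph[node]:
            if nbr not in active and rng.random() < P:
                active.add(nbr)
                frontier.append(nbr)
    return len(active)

def expected_spread(seeds, runs=2000, seed=0):
    rng = random.Random(seed)
    return sum(cascade(seeds, rng) for _ in range(runs)) / runs

def greedy(k):
    seeds = []
    for _ in range(k):  # add the node with the largest marginal gain
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: expected_spread(seeds + [n]))
        seeds.append(best)
    return seeds

print(greedy(2))  # greedily selected seed set of size 2
```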

Hindi is the most widely spoken language in India, with more than 300 million speakers. As there is no separation between the characters of texts written in Hindi as there is in English, the Optical Character Recognition (OCR) systems developed for the Hindi language have very poor recognition rates. In this paper we propose an OCR for printed Hindi text in Devanagari script that uses an Artificial Neural Network (ANN) to improve recognition efficiency. One of the major reasons for the poor recognition rate is error in character segmentation; the presence of touching characters in scanned documents further complicates the segmentation process, creating a major problem when designing an effective character segmentation technique. Preprocessing, character segmentation, feature extraction, and, finally, classification and recognition are the major steps followed by a general OCR. The preprocessing tasks considered in this paper are the conversion of gray-scale images to binary images, image rectification, and segmentation of the document's textual contents into paragraphs, lines, words, and then basic symbols. The basic symbols, obtained as the fundamental units of the segmentation process, are recognized by the neural classifier. In this work, three feature extraction techniques (the histogram of projection based on mean distance, the histogram of projection based on pixel value, and vertical zero crossing) have been used to improve the rate of recognition. These feature extraction techniques are powerful enough to extract features from even distorted characters/symbols. For the neural classifier, a back-propagation neural network with two hidden layers is used; the classifier is trained and tested on printed Hindi texts, and a correct recognition rate of approximately 90% is achieved.
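A minimal sketch of two of the named feature families on a binary glyph image: the histogram of projection (ink counts per row and column) and vertical zero crossings; the toy glyph is illustrative, and the paper's mean-distance variant is not reproduced.

```python
import numpy as np

# Toy 5x5 binary glyph (1 = ink), standing in for a segmented symbol.
glyph = np.array([[0, 1, 1, 1, 0],
                  [0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 1, 1, 1, 0]])

# Histogram of projection: ink counts per row and per column.
h_proj = glyph.sum(axis=1)   # [3, 1, 1, 1, 3]
v_proj = glyph.sum(axis=0)   # [0, 2, 5, 2, 0]

# Vertical zero crossings: 0->1 / 1->0 transitions down each column.
zero_cross = np.abs(np.diff(glyph, axis=0)).sum(axis=0)  # per column

features = np.concatenate([h_proj, v_proj, zero_cross])
print(features)  # a fixed-length feature vector fed to the classifier
```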

Maintenance access hatches are used to ensure urban safety and aesthetics while facilitating the management of power lines, telecommunication lines, and gas pipes. Such facilities necessitate affordable and effective surveillance. In this paper, we propose FiCHS (Fixed Cluster-head centralized Hierarchical Static clustering), a routing protocol that is suitable for underground maintenance hatches using WSN (Wireless Sensor Network) technology. FiCHS is compared with three other protocols, LEACH, LEACH-C, and a simplified LEACH, based on an ns-2 simulation, and is observed to exhibit the highest levels of power and data transfer efficiency.

One of the obstacles preventing the Zigbee protocol from being widely used is the excessive power consumption of Zigbee devices in applications with low bandwidth and low power requirements. This paper proposes a protocol that resolves the power efficiency problem. The proposed protocol reduces the power consumption of Zigbee devices in beacon-enabled networks without increasing the time taken by Zigbee peripherals to communicate with their coordinator. It utilizes a beacon control mechanism called a “sleep pattern,” which is updated based on previous event statistics and determines exactly when Zigbee peripherals wake up or sleep. A simulation of the proposed protocol using realistic parameters and an experiment using commercial products yielded similar results, demonstrating that the protocol may be a solution for reducing the power consumption of Zigbee devices.

In order to make cognitive radio systems a practical technology that can be deployed in real-world scenarios, the core Software Defined Radio (SDR) systems must meet the stringent requirements of the target application, especially in terms of performance and energy consumption on mobile platforms. In this paper we present a feasibility study of hardware acceleration as an energy-efficient implementation for SDR. We identified the amplifier function from the Software Communication Architecture (SCA) for hardware acceleration, since it is one of the most frequently called functions and requires intensive floating-point computation. We then used a Virtex5 Field-Programmable Gate Array (FPGA) to compare compiler floating-point support with on-chip floating-point support. By enabling the on-chip floating-point unit (FPU), we obtained up to a 2X speedup and a 50% reduction in overall energy, with an increase in power consumption of no more than 0.68%. This demonstrates the feasibility of the proposed approach.

Shuffling is an effective method for building a publicly verifiable mix network to implement verifiable anonymous channels, which can be used for important cryptographic applications like electronic voting and electronic cash. One shuffling scheme, by Groth, is claimed to be secure and efficient; however, its soundness has not been formally proven. An attack against the soundness of this shuffling scheme is presented in this paper. Such an attack compromises the soundness of any mix network based on it. Two new shuffling protocols are designed on the basis of Groth's shuffling and batch verification techniques. The first new protocol is not completely sound, but its soundness is formally analyzed, so it can be applied to build a mix network with formally proven soundness. The second new protocol is completely sound, and so is more convenient to apply. The formal analysis in this paper guarantees that both new shuffling protocols can be employed to build mix networks with formally provable soundness, and both prevent the attack against soundness in Groth's scheme. Both new shuffling protocols are very efficient, as batch-verification-based efficiency-improving mechanisms have been adopted, and the second protocol is even simpler and more elegant than the first, as it is based on a novel batch cryptographic technique.

Task-based programming is becoming the method of choice for extracting the desired performance from multi-core chips: it expresses a program in terms of lightweight logical tasks rather than heavyweight threads. Intel Threading Building Blocks (TBB) is a task-based parallel programming paradigm for multi-core processors. The performance gain of this paradigm depends to a great extent on the efficiency of its parallel constructs, and the parallel overheads incurred by these constructs determine the ability to create large-scale parallel programs, especially in the case of fine-grained parallelism. This paper presents a study of TBB parallelization overheads. For this purpose, a TBB micro-benchmark suite called TBBench has been developed. We use TBBench to evaluate the parallelization overheads of TBB on different multi-core machines and with different compilers, and we report in detail on the relative overheads and analyze the results.

PVSS stands for publicly verifiable secret sharing. In PVSS, a dealer shares a secret among multiple shareholders: the dealer encrypts the shares using the shareholders' encryption algorithms and publicly proves that the encrypted shares are valid. Most of the existing PVSS schemes do not employ ElGamal encryption to encrypt the shares; instead, they usually employ other encryption algorithms, like RSA encryption and Paillier encryption. Those encryption algorithms do not allow the shareholders' encryption algorithms to employ the same decryption modulus. As a result, PVSS based on those encryption algorithms must employ additional range proofs to guarantee the validity of the shares obtained by the shareholders. Although the shareholders can employ ElGamal encryptions with the same decryption modulus in PVSS, such that the range proof can be avoided, there are only two PVSS schemes based on ElGamal encryption, and both have drawbacks. One of them employs a costly repeating-proof mechanism, which needs to repeat the dealer's proof at least scores of times to achieve satisfactory soundness. The other requires that the dealer know the discrete logarithm of the secret to be shared, which weakens its generality so that it cannot be employed in many applications. A new PVSS scheme based on ElGamal encryption is proposed in this paper. It employs the same decryption modulus for all the shareholders' ElGamal encryption algorithms, so it does not need any range proof. Moreover, it is a general PVSS technique without any special limitation. Finally, an encryption-improving technique is proposed to achieve very high efficiency in the new PVSS scheme: it needs only a number of exponentiations in large cyclic groups that is linear in the number of shareholders, while all the existing PVSS schemes need at least a number of exponentiations that is linear in the square of the number of shareholders.
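A minimal sketch of the property the scheme exploits: ElGamal encryption in which all shareholders use the same modulus. The toy 16-bit prime is for readability only; real deployments use large, safe primes, and the scheme's proofs of share validity are not reproduced here.

```python
import random

# Toy shared group parameters: every shareholder uses the same p and g.
p = 65521      # small prime for illustration only
g = 3

def keygen(rng):
    x = rng.randrange(2, p - 1)        # private key
    return x, pow(g, x, p)             # (private x, public y = g^x mod p)

def encrypt(y, m, rng):
    k = rng.randrange(2, p - 1)        # ephemeral randomness
    return pow(g, k, p), (m * pow(y, k, p)) % p

def decrypt(x, c1, c2):
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 / c1^x via Fermat

rng = random.Random(7)
x1, y1 = keygen(rng)                   # shareholder 1
x2, y2 = keygen(rng)                   # shareholder 2, same modulus p
share1, share2 = 1234, 5678            # dealer's shares (group elements)
print(decrypt(x1, *encrypt(y1, share1, rng)) == share1)  # True
print(decrypt(x2, *encrypt(y2, share2, rng)) == share2)  # True
```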

For disaster exploration and surveillance applications, this paper presents a novel application of a multi-robot system based on a WSN and evaluates the resulting multi-hop communication in an uncertain and unknown subterranean tunnel. A Primary-Scout Multi-Robot System (PS-MRS) is proposed. A chain topology in a subterranean environment was implemented using a trimmed ZigBee 2006 protocol stack to build the multi-hop communication network, and the ZigBee IC-CC2530 modular circuit was adapted by mounting it on the PS-MRS. A physical experiment based on the strategy of the PS-MRS was used to evaluate the efficiency of multi-hop communication and to realize the delivery of data packets in an unknown and uncertain underground laboratory environment.

With the increasing demand for real-time applications in wireless sensor networks (WSNs), real-time critical events demand efficient quality-of-service (QoS)-based routing for data delivery from the network infrastructure. Designing such QoS-based routing protocols to meet the reliability and delay guarantees of critical events while preserving energy efficiency is a challenging task. Considerable research has focused on developing robust, energy-efficient, QoS-based routing protocols. In this paper, we present the state of the research by summarizing the published work on QoS-based routing protocols and by highlighting the QoS issues that are being addressed. The performance of QoS-based routing protocols such as SAR, MMSPEED, MCMP, MCBR, and EQSR has also been compared using ns-2 for various parameters.

A designated verifier signature is a special type of digital signature that convinces a designated verifier that the signer has signed a message, in such a way that the designated verifier cannot transfer the signature to a third party. A strong designated verifier signature scheme enhances the privacy of the signer, such that no one but the designated verifier can verify the signer's signatures. In this paper we present two generic frameworks for constructing strong designated verifier signature schemes, one from any secure ring signature scheme and one from any deniable one-pass authenticated key exchange protocol. Compared with similar protocols, the instantiations of our constructions achieve improved efficiency.

Vehicular networks are a promising application of mobile ad hoc networks. In this paper, we introduce an efficient broadcast technique, called CB-S (Cell Broadcast for Streets), for vehicular networks with occlusions such as skyscrapers. In this environment, the road network is fragmented into cells such that nodes in a cell can communicate with any node within a two-cell distance. Each mobile node is equipped with a GPS (Global Positioning System) unit and a map of the cells. The cell map has information about the cells, including their identifiers and the coordinates of the upper-right and lower-left corners of each cell. CB-S has the following desirable property: a message is broadcast by rebroadcasting it from every other cell in the terrain, which allows CB-S to achieve efficient performance. Our simulation results indicate that messages always reach all nodes in the wireless network. This perfect coverage is achieved with minimal overhead; that is, CB-S uses a low number of nodes to disseminate the data packets as quickly as probabilistically possible, which gives it the advantage of low delay. To show these benefits, we give simulation results comparing CB-S with four other broadcast techniques. In practice, CB-S can be used for information dissemination, or to reduce the high cost of destination discovery in routing protocols. By also specifying the radius of the affected zone, CB-S is more efficient when broadcasting to a subset of the nodes is desirable.
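A hedged sketch of the cell-map lookup a node might perform, assuming rectangular cells defined by their lower-left and upper-right corners as the abstract describes; the cell table and function name are hypothetical.

```python
# Hypothetical cell map: id -> (lower-left, upper-right) corner coordinates.
cell_map = {"A1": ((0.0, 0.0), (100.0, 100.0)),
            "A2": ((100.0, 0.0), (200.0, 100.0)),
            "B1": ((0.0, 100.0), (100.0, 200.0))}

def locate_cell(x, y):
    # Find the cell whose rectangle contains the GPS position (x, y).
    for cid, ((x0, y0), (x1, y1)) in cell_map.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return cid
    return None

print(locate_cell(150.0, 40.0))   # 'A2': used to decide on rebroadcasting
```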

In this article, the current technological state of OPC (Openness, Productivity, and Collaboration; formerly “OLE for Process Control”) standards and the problem statement for these OPC standards are discussed. The development of an OPC client-server framework for monitoring and control systems is introduced, using the new OPC Unified Architecture (UA) specifications, Service Oriented Architecture (SOA), web services, XML, etc. The developed framework minimizes the effort developers spend learning new techniques and allows system architects and designers to perform dependency analysis in the development of monitoring and control applications. The potential application areas of the proposed framework, and the redundancy strategies for increasing the efficiency and reliability of the system, are also presented according to the initial results from a system developed with Visual Studio 2008 and the OPC UA SDK.

Vote validity proof and verification is an efficiency bottleneck and privacy drawback in homomorphic e-voting. The existing vote validity proof technique is inefficient and only achieves honest-verifier zero knowledge. In this paper, an efficient proof and verification technique is proposed to guarantee vote validity in homomorphic e-voting. The new proof technique is mainly based on hash function operations and needs only a very small number of costly public key cryptographic operations. It can handle untrusted verifiers and achieves stronger zero-knowledge privacy. As a result, the efficiency and privacy of homomorphic e-voting applications will be significantly improved.

Due to the recent advancement of networking and high-performance computing technologies, scientists can easily access large-scale data captured by scientific measurement devices through a network and use huge computational power harnessed on the Internet for their analyses of scientific data. However, visualization technology, which plays a critically important role in helping scientists intuitively understand the analysis results of such scientific data, has not been fully developed to seamlessly benefit from recent high-performance and networking technologies. One such visualization technology is SAGE (Scalable Adaptive Graphics Environment), which allows people to build an arbitrarily sized tiled display wall and is expected to be applied to scientific research. In this paper, we present a multi-application controller for SAGE, which we have developed in the hope that it will help scientists efficiently perform scientific research requiring high-performance computing and visualization. The evaluation in this paper indicates that our system increases the efficiency of completing a comparison task among multiple datasets.

Most organizations use a performance appraisal system to evaluate the effectiveness and efficiency of their employees. In evaluating staff performance, performance appraisal usually involves awarding numerical values or linguistic labels to employees' performance. These values and labels are used to represent each staff member's achievement through reasoning incorporated in arithmetical or statistical methods. However, staff performance appraisal may involve judgments based on imprecise data, especially when one person (the superior) tries to interpret another person's (his or her subordinate's) performance. Thus, the scores awarded by the appraiser are only approximations. From a fuzzy logic perspective, the performance of the appraisee involves the measurement of his or her ability, competence, and skills, which are actually fuzzy concepts that can be captured in fuzzy terms. Accordingly, a fuzzy approach can be used to handle this imprecise and uncertain information. The performance appraisal system can therefore be examined using a fuzzy logic approach, which is what this study carries out. The study utilized a cascaded fuzzy inference system to generate the performance qualities of some university non-teaching staff based on specific performance appraisal criteria.

In the area of secure communication, the three-party authenticated key exchange protocol is an important cryptographic technique. In this protocol, two clients share a human-memorable password with a trusted server, with which the two users can generate a secure session key. At the same time, the protocol should resist all types of password guessing attacks. Recently, the STPKE' protocol was proposed by Kim and Choi. In the current study, an undetectable online password guessing attack on the STPKE' protocol is presented, and an alternative protocol that overcomes such attacks is proposed. The results show that the proposed protocol can resist undetectable online password guessing attacks. Additionally, it achieves the same security level with fewer random numbers and without XOR operations. The computational efficiency is improved by approximately 30% for problems of approximately 2048 bits in size. The proposed protocol achieves better performance efficiency and withstands password guessing attacks. The results show that the proposed protocol is secure, efficient, and practical.

Energy efficiency is a key design issue for improving the lifetime of underwater sensor networks (UWSNs), which consist of sensor nodes equipped with small batteries of limited energy. In this paper, we apply a hexagon tessellation with an ideal cell size to deploy the underwater sensor nodes of a two-dimensional UWSN. In this setting, we propose an enhanced hybrid transmission method that forwards data packets in a mixed way, using location-dependent direct transmission or uniform multi-hop forwarding. To select between direct transmission and uniform multi-hop forwarding, the proposed method applies a threshold annulus, defined in terms of the distance between the cluster head node and the base station (BS). Our simulation results show that the proposed method enhances energy efficiency compared with the existing multi-hop forwarding methods and hybrid transmission methods.

In this paper, we present a fine-grained localization algorithm for wireless sensor networks using a mobile beacon node. The algorithm is based on distance measurement using RSSI. The beacon node is equipped with a GPS sender and an RF (radio frequency) transmitter, and each stationary sensor node is equipped with an RF receiver. The beacon node periodically broadcasts its location information, and the stationary sensor nodes record the broadcast positions as beacon points. A sensor node's location is computed by measuring the distances to the beacon points using RSSI. Our proposed localization scheme is evaluated using OPNET 8.1 and compared with Ssu's and Yu's localization schemes. The results show that our localization scheme outperforms the other two schemes in terms of energy efficiency (overhead) and accuracy.
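A minimal sketch of RSSI-based ranging with the standard log-distance path-loss model, followed by a linearized least-squares position fix from three beacon points; all constants are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# Log-distance path-loss model: RSSI(d) = A - 10*n*log10(d).
A, n = -40.0, 2.0       # RSSI at 1 m and path-loss exponent (assumed)

def rssi_to_distance(rssi):
    return 10 ** ((A - rssi) / (10 * n))

def trilaterate(beacons, dists):
    # Linearize ||p - b_i||^2 = d_i^2 against the last beacon's equation.
    (x0, y0), d0 = beacons[-1], dists[-1]
    rows, rhs = [], []
    for (xi, yi), di in zip(beacons[:-1], dists[:-1]):
        rows.append([2 * (x0 - xi), 2 * (y0 - yi)])
        rhs.append(di**2 - d0**2 - xi**2 - yi**2 + x0**2 + y0**2)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([4.0, 3.0])
rssi = [A - 10 * n * np.log10(np.linalg.norm(true_pos - b)) for b in beacons]
print(trilaterate(beacons, [rssi_to_distance(r) for r in rssi]))  # ~[4, 3]
```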

The interconnection of mobile devices in urban environments can open up many vistas for collaboration and content-based services. This requires setting up a network in an urban environment that not only provides the necessary services to the user, but also ensures that the network is secure and energy efficient. In this paper, we propose a secure, energy-efficient, dynamic routing protocol for heterogeneous wireless sensor networks in urban environments. Each node makes a decision based on various parameters, such as longevity, distance, and battery power, which measure node and link quality, to decide the next hop in the route. This ensures that the total load is distributed evenly while conserving the energy of battery-constrained nodes. The protocol also maintains a trusted population for each node through a Dynamic Trust Factor (DTF), which ensures secure communication in the environment by gradually isolating malicious nodes. The results obtained show that the proposed protocol, when compared with another energy-efficient protocol (MMBCR) and a widely accepted protocol (DSR), gives far better results in terms of energy efficiency. Similarly, it also outdoes a secure protocol (QDV) when it comes to detecting malicious nodes in the network.

The Internet explosion and the increase in crucial web applications, such as e-banking and e-commerce, make network security tools essential. One such tool is the intrusion detection system, which can be classified, based on its detection approach, as signature-based or anomaly-based. Even though intrusion detection systems are well defined, their cooperation with each other to detect attacks needs to be addressed. Consequently, a new architecture that allows them to cooperate in detecting attacks is proposed. The architecture uses software agents to provide scalability and distributability, and works in two modes: learning and detection. During the learning mode, it generates a profile for each individual system using a fuzzy data mining algorithm; during the detection mode, each system uses FuzzyJess to match network traffic against its profile. The architecture was tested against a standard data set produced by the MIT Lincoln Laboratory, and the primary results show its efficiency and capability to detect attacks. Finally, two new methods, the memory-window and memoryless-window methods, were developed for extracting useful parameters from raw packets. These parameters are used as detection metrics.

Spatio-temporal patterns extracted from the historical trajectories of moving objects reveal important knowledge about movement behavior for high-quality LBS services. Existing approaches transform trajectories into sequences of location symbols and derive frequent subsequences by applying conventional sequential pattern mining algorithms. However, spatio-temporal correlations may be lost due to inappropriate approximations of spatial and temporal properties, and an inefficient description of temporal information decreases both mining efficiency and the interpretability of the patterns. In this paper, we address the problem of mining spatio-temporal patterns from trajectory data. We provide a formal statement of the efficient representation of spatio-temporal movements and propose a new approach to discovering spatio-temporal patterns in trajectory data. The proposed method first finds meaningful spatio-temporal regions and then extracts frequent spatio-temporal patterns from the sequences of these regions using a prefix-projection approach. Our experimental analysis shows that the proposed method improves mining performance and derives more intuitive patterns.

The core services in a cloud computing environment are SaaS (Software as a Service), PaaS (Platform as a Service), and IaaS (Infrastructure as a Service). Among these three core services, server virtualization belongs to IaaS and is a service technology for reducing server maintenance expenses. Normally, the primary purpose of server virtualization is to build and maintain a new, well-functioning server rather than to use several existing servers, and to improve various aspects of system performance. Often, this presents an issue, in that expenses might need to be increased in order to build a new server. This study intends to use a grid service architecture for a form of server virtualization that utilizes the existing servers rather than introducing a new one. More specifically, the proposed system enhances system performance and reduces the corresponding expenses by adopting a scheduling algorithm among the distributed servers and the constituents of grid computing, thereby supporting the server virtualization service. Furthermore, the proposed server virtualization system minimizes power consumption by adopting sleep servers, subsidized servers, and the grid infrastructure. The power maintenance expenses for the sleep servers are lowered by utilizing the ACPI (Advanced Configuration & Power Interface) standards, with the purpose of overcoming the limits of server performance.

Software evolution is an ongoing process carried out with the aim of extending base applications, either to add new functionalities or to adapt software to changing environments. This brings about the need for estimating and determining the overall impact of changes to a software system. In the last few decades, many change/impact analysis techniques have been developed to identify the consequences of making changes to software systems. In this paper, we propose a new approach to change/impact analysis that classifies changes based on (a) the nature and (b) the extent of change propagation. The impact set produced consists of two dimensions of information: (a) the statements affected by change propagation and (b) percentages, i.e., the share of statements affected in each category and in the overall system. We also propose an algorithm for classifying the type of change. To establish confidence in its effectiveness and efficiency, we illustrate the technique with the help of an example. The results of our analysis are promising toward achieving the aim of the proposed endeavor to enhance change classification. The proposed dynamic technique for estimating impact sets and their percentage of impact will help software maintainers perform selective regression testing by analyzing impact sets with regard to the nature of changes and change dependency.

In this paper we propose a Differentiated Services Based Admission Control and Routing Algorithm for IPv6 (ACMRA). Because the basic DiffServ architecture lacks an admission control mechanism, the injection of more QoS-sensitive traffic into the network can cause congestion at the core of the network. ACMRA combines the admission control phase with the route-finding phase, and our routing protocol has been designed to work alongside DiffServ-based networks. ACMRA constructs label switched paths in order to provide rigorous QoS provisioning. We have conducted extensive simulations to validate the effectiveness and efficiency of our proposed admission control and routing algorithm. The simulation results show that ACMRA provides an excellent packet delivery ratio, reduces the control packets' overhead, and makes use of the resources present on multiple paths to the destination network, while almost every admitted flow complies with its Service Level Agreement.

To solve the general problems surrounding the application of genetic algorithms to stereo matching, two measures are proposed. Firstly, the strategy of simplified population-based incremental learning (PBIL) is adopted to reduce memory consumption and search inefficiency, and a scheme for controlling the distance of neighbors for disparity smoothness is inserted to obtain wide-area consistency of disparities. In addition, an alternative version of the proposed algorithm, without the use of a probability vector, is also presented for simpler set-ups. Secondly, because programmable graphics hardware (the GPU) consists of multiple multi-processors and offers powerful parallelism that can perform operations in parallel at low cost, a model of the proposed algorithm that runs on the GPU is presented for the first time in order to decrease the running time further. The algorithms are implemented on both the CPU and the GPU and are evaluated by experiments. The experimental results show that the proposed algorithm offers better performance than traditional BMA methods with a deliberate relaxation, and than its modified version, in terms of both running speed and stability. The comparison of computation times on the GPU and the CPU shows that the former achieves a greater speed-up, and the speed-up grows with image size.
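A minimal sketch of the simplified PBIL loop the first measure builds on, applied to a generic binary optimization problem; the fitness function and learning rate are illustrative assumptions, not the stereo-matching energy used in the paper.

```python
import random

rng = random.Random(0)
BITS, POP, LR = 16, 30, 0.1
target = [rng.randint(0, 1) for _ in range(BITS)]    # toy optimum

def fitness(bits):          # stand-in for the stereo-matching energy
    return sum(b == t for b, t in zip(bits, target))

prob = [0.5] * BITS         # PBIL probability vector, one entry per bit
for gen in range(60):
    # Sample a population from the probability vector.
    pop = [[int(rng.random() < p) for p in prob] for _ in range(POP)]
    best = max(pop, key=fitness)
    # Shift the vector toward the best individual (no crossover needed).
    prob = [(1 - LR) * p + LR * b for p, b in zip(prob, best)]

print(fitness(max(pop, key=fitness)), "of", BITS)    # approaches BITS
```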

We introduce a novel high-level security metrics objective taxonomization model for software-intensive systems. The model systematizes and organizes security metrics development activities. It focuses on the security level and security performance of technical systems, while taking into account the alignment of metrics objectives with different business and other management goals. The model emphasizes the roles of security-enforcing mechanisms, the overall security quality of the system under investigation, and secure system lifecycle, project, and business management. Security correctness, effectiveness, and efficiency are seen as the fundamental measurement objectives, determining the directions for more detailed security metrics development. Integration of the proposed model with risk-driven security metrics development approaches is also discussed.

Grid environments provide a mechanism for sharing heterogeneous resources among nodes. Because of the similarity between grid environments and P2P networks, the structures of P2P networks can be adapted to enhance scalability and efficiency in deploying and searching for services. In this paper, we present a membership management scheme based on a hierarchical ring, which constructs P2P-like Grid environments. The proposed approach uses only a limited number of connections, reducing communication cost. It also keeps only local membership information, which leads to a further reduction in management cost. This paper analyzes the performance of the approach by simulation and compares it with other approaches.

In this paper, we propose TASL-MAC, a medium-access control (MAC) protocol for wireless sensor networks. In wireless sensor networks, sensor nodes are usually deployed in special environments, are assigned long-term work, and are supported by limited batteries. As such, reducing energy consumption is the primary concern with regard to wireless sensor networks; at the same time, reducing the latency of multi-hop data transmission is also very important. In the existing research, sensor nodes are expected to be switched to sleep mode in order to reduce energy consumption. However, the existing proposals tend to assign the sensors a fixed sleep/listen schedule, which causes unnecessary idle listening and conspicuous transmission latency, given the diversity of the traffic load in the network. TASL-MAC is designed to dynamically adjust the duty listening time based on the traffic load. This protocol enables a node to satisfy the application's requirements with a proper data transfer rate, and it can achieve much greater power efficiency by prolonging the nodes' sleeping time when the traffic load of the network decreases. We evaluated our implementation of TASL-MAC in NS-2. The evaluation results indicate that our proposal explicitly reduces packet delivery latency and significantly prolongs the lifetime of the entire network when traffic is low.

To achieve security in wireless sensor networks (WSNs), it is important to be able to encrypt the messages sent among sensor nodes. We propose a new cryptographic key management protocol, which is based on a clustering scheme but does not depend on probabilistic keys. The protocol increases the efficiency of key management since, before the keys are distributed at bootstrap, the use of public keys shared among nodes eliminates the processes of sending and receiving keys among the sensors. Also, to safely find any compromised nodes in the network, it solves safety problems by applying the functions of a lightweight attack-detection mechanism.

Grid computing is an emerging technology that enables global resource sharing. In Korea, the K*Grid provides an extremely powerful research environment to both industry and academia. As part of the K*Grid project, together with the Korea Institute of Science and Technology Information and a number of domestic universities, we have constructed a supercomputer Grid testbed that connects several types of supercomputers based on the Globus toolkit. To achieve efficient networking in this Grid testbed, we propose a novel method of available bandwidth measurement, called Decoupled Capacity measurement with Initial Gap (DCIG), using packet trains. DCIG can improve network efficiency by selecting the best path among several candidates. Simulation results show that DCIG outperforms previous work in terms of accuracy and the required measurement time. We also define a new XML schema for DCIG requests/responses, based on the schema defined by the Global Grid Forum (GGF) Network Measurement Working Group (NM-WG).

In the distributed replicating server model, the provision of replicated services improves the performance of the provided service and efficiency for clients. An efficiently designed server selection algorithm decreases the retrieval time for replicated data. In this paper, we define a system model that selects and connects to the replicated server providing optimal service, using server-side downstream measurement, and we propose a corresponding server selection algorithm.

In software systems, it has been observed that a fault is often caused by an interaction between a small number of input parameters. Even for moderately sized software systems, exhaustive testing is practically impossible due to time or cost constraints. Combinatorial (t-way) testing provides a technique for selecting a subset of the exhaustive test cases that covers all of the t-way interactions, without much loss of fault detection capability. In this paper, an approach is proposed to generate 2-way (pairwise) test sets using genetic algorithms. The performance of the algorithm is improved by creating the initial solution using the overlap coefficient (a similarity matrix). Two mutation strategies have also been modified to improve their efficiency, and the mutation operator is further improved by using a combination of three mutation strategies. A comparative survey of techniques for generating t-way test sets using genetic algorithms was also conducted. It is shown experimentally that the proposed approach generates results faster, achieving higher percentage coverage in fewer generations. Additionally, the size of the mixed covering arrays was reduced in one of the six benchmark problems examined.
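A hedged sketch of the pairwise-coverage objective a GA individual would be scored against: it counts how many of the required 2-way parameter-value pairs a candidate test set covers. The parameter model and test suite are illustrative, and the GA operators themselves are not reproduced.

```python
from itertools import combinations, product

# Toy parameter model: 3 parameters with their possible values.
params = {"os": ["linux", "win"], "db": ["pg", "my"], "net": ["v4", "v6"]}
names = list(params)

# All 2-way (pairwise) interactions that must be covered.
required = {((a, va), (b, vb))
            for a, b in combinations(names, 2)
            for va, vb in product(params[a], params[b])}

def coverage(tests):
    # Fitness for a GA individual: fraction of required pairs covered.
    covered = {((a, t[a]), (b, t[b]))
               for t in tests for a, b in combinations(names, 2)}
    return len(covered & required) / len(required)

suite = [{"os": "linux", "db": "pg", "net": "v4"},
         {"os": "linux", "db": "my", "net": "v6"},
         {"os": "win",   "db": "pg", "net": "v6"},
         {"os": "win",   "db": "my", "net": "v4"}]
print(coverage(suite))  # 1.0: four tests cover all 12 required pairs
```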

ABOUT THE SOCIETY

Ever since information processing became one of the most important industries in the country, computing professionals have encountered a growing number of challenges. Along with scholars and colleagues in related fields, they have gathered together at a variety of forums and meetings over the last few decades to share their knowledge, experiences, and the outcomes of their research. These exchanges led to the founding of the Korea Information Processing Society (KIPS) on January 15, 1993. The KIPS was registered as an incorporated association under the Ministry of Science, ICT and Future Planning of the government of the Republic of Korea. The main purpose of the KIPS is to improve our society by achieving the highest capability possible in the domain of information technology. As such, it focuses on close collaboration with the nation's industry, academic, and research communities to foster technological innovation, to enhance its members' careers, and to promote the advanced information processing industry.