
Special Issue Information

Dear Colleagues,

In 2011 we published a Special Issue on “Complex Networks” and now I have been asked to serve as Guest Editor of a collection of new contributions on the subject. I gladly accepted this new challenge, and offer colleagues a new occasion for research on these exciting topics.

As we know, symmetry in a system means the invariance of its elements under a transformation. For network structures, symmetry means invariance of the adjacency of nodes under permutations of the node set. Graph isomorphism is an equivalence relation on the set of graphs, and therefore partitions the class of all graphs into equivalence classes. The underlying idea of isomorphism is that some objects have the same structure if we omit the individual characteristics of their components. A set of graphs isomorphic to each other is usually called an isomorphism class of graphs. An automorphism of a graph G is an isomorphism from G onto itself. The family of all automorphisms of a graph G forms a permutation group, whose inner operation is the composition of permutations; it is called the automorphism group of G and denoted Aut(G). Conversely, every group may be represented as the automorphism group of a connected graph. The automorphism group is an algebraic invariant of a graph. Thus, an automorphism of a graph is a form of symmetry in which the graph is mapped onto itself while preserving the edge-node connectivity. We speak of a graph invariant, or graph property, when it depends only on the abstract structure and not on a particular representation of the graph, such as a labeling or drawing. A graph property may therefore be defined as a property preserved under all possible isomorphisms of the graph: it is a property of the graph itself, not of any representation of it. Properties also differ in character, being either qualitative or quantitative. From a strictly mathematical viewpoint, a graph property can be interpreted as a class of graphs: the class of all graphs that satisfy certain conditions in common.
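For small graphs, the automorphism group described above can be computed by brute force. The sketch below (function and variable names are ours, purely for illustration) enumerates the vertex permutations that preserve adjacency:

```python
from itertools import permutations

def automorphisms(n, edges):
    """Return all permutations of {0..n-1} preserving adjacency (brute force)."""
    edge_set = {frozenset(e) for e in edges}
    autos = []
    for p in permutations(range(n)):
        if {frozenset((p[u], p[v])) for u, v in edge_set} == edge_set:
            autos.append(p)
    return autos

# Path P3 (0-1-2): only the identity and the end-swap preserve adjacency.
print(len(automorphisms(3, [(0, 1), (1, 2)])))                   # 2
# Cycle C4: its automorphism group is the dihedral group of order 8.
print(len(automorphisms(4, [(0, 1), (1, 2), (2, 3), (3, 0)])))   # 8
```

This factorial-time enumeration is only feasible for toy graphs, but it makes the "invariance of adjacency under permutations of the node set" definition concrete.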

Here, we need to analyze closely interrelated concepts regarding graphs, such as their degrees of symmetry or asymmetry, their entropies, and so on. These notions apply to the study of many types of systems, and in particular to the analysis of Complex Networks. A system can be defined as any set of components functioning together as a whole. A systemic point of view allows us to isolate a part of the world and focus on those aspects that interact more closely than others. Network Science is a new scientific field that analyzes the interconnections among diverse networks arising, for instance, in Physics, Engineering, Biology, Semantics, and so on. We may distinguish four structural models when describing Complex Systems by Complex Networks, i.e., using Graph Theory: Regular Networks, Random Networks, Small-World Networks, and Scale-Free Networks. However, it is also possible to introduce new variants, according to new measures.

Complex Networks are everywhere. Many phenomena in nature can be modeled as networks, and the topologies of very different networks may be remarkably similar: many are rooted in a power law, with a scale-free structure. How can very different systems share the same underlying topological features? Searching for the hidden laws of these networks, and modeling and characterizing them, are current lines of research.

Symmetry and asymmetry may be considered (on graphs and networks in general) as two sides of the same coin, but such a dichotomous classification lacks the necessary, realistic grades in between. Thus, it is convenient to introduce "shaded regions", modulating degrees of symmetry. Parallel versions of different mathematical fields, adapted to degrees of truth, are advancing: the basic idea is that an element need not totally belong, or totally fail to belong, to a set, but can belong more or less, i.e., to some degree. This signifies a change of paradigm, adapting mathematics to features of the real world, and it has produced new tools and fields, such as Fuzzy Measure Theory, which generalizes classical Measure Theory. We wish to dedicate this Special Issue to measures of symmetry, closely related to measures of information and entropy.
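One simple way to grade symmetry rather than treat it as all-or-nothing, in the spirit described above, is to normalize the size of the automorphism group. The normalization log|Aut(G)| / log(n!) used below is our illustrative choice, not a measure proposed in this issue:

```python
import math
from itertools import permutations

def symmetry_degree(n, edges):
    """Grade symmetry in [0, 1] as log|Aut(G)| / log(n!), by brute force.

    0 means an asymmetric graph (only the identity automorphism);
    1 means every vertex permutation is an automorphism (e.g., K_n)."""
    edge_set = {frozenset(e) for e in edges}
    count = sum(
        1 for p in permutations(range(n))
        if {frozenset((p[u], p[v])) for u, v in edge_set} == edge_set
    )
    return math.log(count) / math.log(math.factorial(n))

print(symmetry_degree(3, [(0, 1), (1, 2), (2, 0)]))      # K3 -> 1.0, fully symmetric
print(round(symmetry_degree(3, [(0, 1), (1, 2)]), 3))    # P3 -> log 2 / log 6, a partial degree
```

The point is only that symmetry can be made a matter of degree, just as membership is in fuzzy set theory.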

Contributions are invited on all aspects of symmetry measures, as applied to all complex networks and systems. Pure mathematical treatments that are applicable to such concepts are welcome. Possible themes include, but are not limited to:

Symmetry and Asymmetry
Near Symmetry
Fuzzy Symmetry
Fuzzy Optimization
Combinatorial Optimization
Complex Networks
Clustering
Preferential Attachment
Graph Theory
Entropy Measures
Information Theory
Chirality
Similarity
Complexity Theory
Symmetry as a new and very important bridge between sciences and humanities

Prof. Dr. Angel Garrido
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

The goal of this paper is to compare and analyze the forecasting performance of two artificial neural network models, the multi-layer perceptron (MLP) and the deep neural network (DNN), and to conduct an experimental investigation driven by data flow rather than economic flow. In this paper, we go beyond simple predictions and conduct research based on the merits and data of each model, so that we can forecast the most efficient outcomes, with fewer errors, on the basis of an analytical methodology. In particular, we focus on comparing the two neural network (NN) models to identify which of them performs better. Predictability and accuracy were found to be superior in the DNN model, while the MLP model was found to be highly correlated and accessible. The major purpose of this study is to analyze the performance of MLP and DNN through a practical approach based on an artificial neural network stock forecasting method. Although we do not limit ourselves to the S&P 500 (Standard & Poor's 500 index), so as to observe the proper flow of capital across regions, we first measured S&P data for 100 months (i.e., 407 weeks) and found the following. First, the traditional artificial neural network (ANN) model predicts well according to the specificity of each model and the depth of its layers, and is sensitive to the index data. Second, comparing the two models, the DNN model showed better data accessibility and prediction accuracy than the MLP, with lower error rates on both the weekly and the monthly data. Third, the difference in the prediction accuracy of the two models is not statistically significant. However, these results are correlated with each other and are considered robust, because the error rates remain low across the various prediction-accuracy measurement methodologies used.

This study proposes a fuel consumption estimation system and method with lower cost. On-board units report vehicle speed, and user devices send fuel information to a data analysis server. The data analysis server can then use the proposed fuel consumption estimation method to estimate fuel consumption from driver behaviours, without fuel sensors, for cost savings. The proposed estimation method is based on a genetic algorithm, which generates gene sequences and uses crossover and mutation to retrieve an adaptable gene sequence. The adaptable gene sequence serves as the set of fuel consumption values corresponding to the pattern of driver behaviour. Practical experimental results indicated that the accuracy of the proposed fuel consumption estimation method was about 95.87%.
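The crossover-and-mutation loop the abstract describes can be sketched generically. The toy OneMax fitness below merely stands in for the paper's driver-behaviour-based fitness, and every parameter value is an illustrative assumption:

```python
import random

random.seed(42)

def evolve(fitness, length=12, pop_size=40, generations=100,
           crossover_rate=0.9, mutation_rate=0.02):
    """Minimal genetic algorithm: tournament selection, one-point crossover, bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = [best[:]]                          # elitism: keep the best so far
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
            child = a[:]
            if random.random() < crossover_rate:     # one-point crossover
                cut = random.randrange(1, length)
                child = a[:cut] + b[cut:]
            for i in range(length):                  # bit-flip mutation
                if random.random() < mutation_rate:
                    child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
        best = max(pop, key=fitness)
    return best

# Toy fitness (count of ones) stands in for matching the observed fuel pattern.
best = evolve(sum)
print(sum(best))
```

In the paper's setting, the fitness would instead score how well a candidate gene sequence reproduces the reported fuel consumption for a given driver-behaviour pattern.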

The balanced hypercube network, which is a novel interconnection network for parallel computation and data processing, is a newly-invented variant of the hypercube. The particular feature of the balanced hypercube is that each processor has its own backup processor and they are connected to the same neighbors. The balanced hypercube is a bipartite graph with bipartition V0 ∪ V1. It is known that each edge is on a Hamiltonian cycle of the balanced hypercube. In this paper, we prove that, for an arbitrary edge e in the balanced hypercube, there exists a Hamiltonian path between any two vertices x ∈ V0 and y ∈ V1 in different partite sets passing through e, with e ≠ xy. This result improves some known results.
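For the smallest case the claim can be checked exhaustively. The sketch below is our own illustration (BH1, the smallest balanced hypercube, is the 4-cycle): it searches for a Hamiltonian path between two vertices in different partite sets that passes through a prescribed edge:

```python
from itertools import permutations

def has_ham_path_through(n, edges, x, y, e):
    """Brute-force check: is there a Hamiltonian path from x to y using edge e?"""
    edge_set = {frozenset(edge) for edge in edges}
    e = frozenset(e)
    for p in permutations(range(n)):
        if p[0] != x or p[-1] != y:
            continue
        path_edges = {frozenset((p[i], p[i + 1])) for i in range(n - 1)}
        if path_edges <= edge_set and e in path_edges:
            return True
    return False

# BH1 is the 4-cycle, with partite sets {0, 2} and {1, 3}.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_ham_path_through(4, cycle, x=2, y=3, e=(0, 1)))  # True: the path 2-1-0-3
print(has_ham_path_through(4, cycle, x=0, y=1, e=(0, 1)))  # False: only 0-3-2-1 exists, which avoids e
```

The second call also illustrates why the theorem needs the condition e ≠ xy.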

For passive radar detection systems, radar waveform recognition is an important research area. In this paper, we explore an automatic radar waveform recognition system to detect, track and locate low probability of intercept (LPI) radars. The system can classify (but not identify) 12 kinds of signals, including binary phase shift keying (BPSK) (Barker-code modulated), linear frequency modulation (LFM), Costas codes, Frank code, P1–P4 codes and T1–T4 codes, at a low signal-to-noise ratio (SNR). It is one of the most extensive classification systems in the open literature. A hybrid classifier is proposed, which includes two relatively independent subsidiary networks, a convolutional neural network (CNN) and an Elman neural network (ENN). We determine the parameters of the architecture to make the networks more effective. Specifically, we focus on how the networks are designed, what the best set of features for classification is, and what the best classification strategy is. In particular, we propose several key features for the classifier based on the Choi–Williams time-frequency distribution (CWD). Finally, the recognition system is simulated with experimental data. The experiments show an overall successful recognition ratio of 94.5% at an SNR of −2 dB.

Device-to-device (D2D) communications bring significant improvements in spectral efficiency by underlaying cellular networks. However, they also create a more severe interference environment for cellular users, especially users in deep fading or shadowing. In this paper, we investigate a relay-based communication scheme in cellular systems, where D2D communications are exploited to aid cellular downlink transmissions by acting as relay nodes underlaying the cellular network. We model two-antenna infrastructure relays employed for D2D relaying, in which the D2D transmitter is able to transmit and receive signals simultaneously over the same frequency band. We then propose an efficient power allocation algorithm for the base station (BS) and the D2D relay to reduce the loopback interference that is inherent to the two-antenna infrastructure in full-duplex (FD) mode, and we derive the optimal power allocation in closed form under independent power constraints. Simulation results show that the algorithm reduces the power consumption of the D2D relay to the greatest extent while guaranteeing cellular users' minimum transmit rate. Moreover, it outperforms the existing half-duplex (HD) relay mode in terms of the achievable rate of D2D.

Background: An accurate and automatic computer-aided multi-class decision support system is created to classify magnetic resonance imaging (MRI) scans of the human brain as normal, Alzheimer, AIDS, cerebral calcinosis, glioma, or metastatic, helping radiologists to diagnose disease in brain MRIs. Methods: The performance of the proposed system is validated using benchmark MRI datasets (OASIS and Harvard) of 310 patients. Master features of the images are extracted using a fast discrete wavelet transform (DWT); these discriminative features are then further analysed by principal component analysis (PCA). Different subset sizes of principal feature vectors are provided to five different decision models. The classification models include the J48 decision tree, k-nearest neighbour (kNN), random forest (RF), and least-squares support vector machines (LS-SVM) with polynomial and radial basis kernels. Results: The RF-based classifier outperformed all compared decision models and achieved an average accuracy of 96% with 4% standard deviation, and an area under the receiver operating characteristic (ROC) curve of 99%. LS-SVM (RBF) also showed promising results (i.e., 89% accuracy) when the smallest number of principal features was used. Furthermore, the performance of each classifier across different subset sizes of principal features remained in the range of 80%–96% for most performance metrics. Conclusion: The presented medical decision support system demonstrates proof of potential for accurate multi-class classification of brain abnormalities; therefore, it has the potential to be used as a diagnostic tool by medical practitioners.

Inspired by the generalized entropies for graphs, a class of generalized degree-based graph entropies is proposed, using known information-theoretic measures to characterize the structure of complex networks. The new entropies depend on assigning to a network a probability distribution based on its degrees. In this paper, some extremal properties of the generalized degree-based graph entropies, obtained by using degree powers, are proved. Moreover, the relationships among the entropies are studied. Finally, numerical results are presented to illustrate the features of the new entropies.
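As a concrete illustration of a degree-based graph entropy (the simplest member of this family; the generalized versions in the paper use degree powers), one can assign each vertex the probability d_i / Σ d_j and take the Shannon entropy:

```python
import math

def degree_entropy(degrees):
    """Shannon entropy (in bits) of the degree-based distribution p_i = d_i / sum(d)."""
    total = sum(degrees)
    return -sum((d / total) * math.log2(d / total) for d in degrees if d > 0)

# 4-cycle: all degrees equal, so the entropy is maximal, log2(4) = 2 bits.
print(degree_entropy([2, 2, 2, 2]))        # 2.0
# Star S3: one hub of degree 3 and three leaves, a less uniform distribution.
print(round(degree_entropy([3, 1, 1, 1]), 3))
```

Regular graphs maximize this entropy; highly heterogeneous degree sequences lower it, which is what makes such measures useful for characterizing network structure.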

Deformable objects have changeable shapes, and they require a different matching algorithm from that used for rigid objects. This paper proposes a fast and robust deformable object matching algorithm. First, robust feature points are selected, using a statistical characteristic, from the feature points produced by the extraction method. Next, matching pairs are composed by matching the feature points of two images. Rapid clustering is then performed using a BST (binary search tree) built on the geometric similarity between the matching pairs. Finally, the matching of the two images is determined after verifying the suitability of the composed cluster. An experiment with five different image sets with deformable objects confirmed the superior robustness and independence of the proposed algorithm, which demonstrated up to 60 times faster matching speed compared with conventional deformable object matching algorithms.

This study proposes gaze-based hand interaction, which helps improve the user's immersion in the production of virtual reality content for the mobile platform, and analyzes its efficiency through an experiment using a questionnaire. First, three-dimensional interactive content is produced for use in the proposed interaction experiment, presenting an experiential environment that gives users a high sense of immersion in the mobile virtual reality environment. This content is designed to induce the tension and concentration of users in line with the immersive virtual reality environment. Additionally, a gaze-based hand interaction method, of the kind mainly used for the entry of mobile virtual reality content, is proposed as a design method for immersive mobile virtual reality environments. The user satisfaction level of the immersive environment provided by the proposed gaze-based hand interaction is analyzed through experiments, in comparison with the general method that uses gaze only. Furthermore, a detailed analysis divides the effects of the proposed interaction method on users' psychology into positive factors, such as immersion and interest, and negative factors, such as virtual reality (VR) sickness and dizziness. In this process, a new direction is proposed for improving user immersion in the production of mobile platform virtual reality content.

The Zagreb eccentricity indices are the eccentricity reformulation of the Zagreb indices. Let H be a simple graph. The first Zagreb eccentricity index, E1(H), is defined as the sum of the squares of the eccentricities of the vertices, i.e., E1(H) = ∑_{u∈V(H)} ε_H(u)². The second Zagreb eccentricity index, E2(H), is the sum of the products of the eccentricities of adjacent vertices, i.e., E2(H) = ∑_{uv∈E(H)} ε_H(u)ε_H(v). The thorny graph of a graph H is obtained by attaching thorns, i.e., vertices of degree one, to every vertex of H. In this paper, we find closed formulas for the first and second Zagreb eccentricity indices of different well-known classes of thorny graphs.
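The two indices are straightforward to compute for small graphs. This sketch (our own, with BFS-based eccentricities; names are illustrative) reproduces the definitions above:

```python
from collections import deque

def eccentricities(n, edges):
    """Eccentricity of each vertex via BFS (the graph is assumed connected)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    ecc = []
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        ecc.append(max(dist.values()))
    return ecc

def zagreb_ecc(n, edges):
    ecc = eccentricities(n, edges)
    e1 = sum(e * e for e in ecc)                 # E1(H): sum of squared eccentricities
    e2 = sum(ecc[u] * ecc[v] for u, v in edges)  # E2(H): sum over edges of products
    return e1, e2

# Path P3 (0-1-2): eccentricities (2, 1, 2), so E1 = 9 and E2 = 2*1 + 1*2 = 4.
print(zagreb_ecc(3, [(0, 1), (1, 2)]))  # (9, 4)
```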

This paper presents a segmentation-based stereo matching algorithm using an adaptive multi-cost approach, which is exploited to obtain accurate disparity maps. The main contribution is to integrate the appealing properties of the multi-cost approach into the segmentation-based framework. Firstly, the reference image is segmented using the mean-shift algorithm. Secondly, the initial disparity of each segment is estimated by an adaptive multi-cost method, which consists of a novel multi-cost function and an adaptive support window cost aggregation strategy. The multi-cost function increases the robustness of the initial raw matching cost calculation, and the adaptive window reduces matching ambiguity effectively. Thirdly, an iterative outlier suppression and disparity plane parameter fitting algorithm is designed to estimate the disparity plane parameters. Lastly, an energy function is formulated in the segment domain, and the optimal plane label is approximated by belief propagation. The experimental results on the Middlebury stereo datasets, along with synthesized and real-world stereo images, demonstrate the effectiveness of the proposed approach.

Collaborative spectrum sensing (CSS) was envisioned to improve the reliability of spectrum sensing in centralized cognitive radio networks (CRNs). However, secondary users' (SUs') changeable environment and ease of compromise make CSS vulnerable to security threats, which mislead the global decision making and degrade the overall performance. A popular attack on CSS is the so-called spectrum sensing data falsification (SSDF) attack, in which malicious cognitive users (MUs) send false sensing results to the fusion center, significantly degrading detection accuracy. In this paper, a comprehensive reputation-based security mechanism against dynamic SSDF attacks for CRNs is proposed. In the mechanism, the reliability of SUs in collaborative sensing is measured by comprehensive reputation values that reflect the SUs' current and historical sensing behaviors. Meanwhile, a punishment strategy is presented to revise the reputation, in which a reward factor and a penalty factor are introduced to encourage SUs to engage in positive and honest sensing activities. The whole mechanism focuses on continuously ensuring the correctness of the global decision. Specifically, the proposed security scheme can effectively alleviate the effect of users' malicious behaviors on network decision making, which contributes greatly to enhancing the fairness and robustness of CRNs. Considering that the attack strategies adopted by MUs have been gradually transforming from simple, fixed, and isolated behaviors into complex, dynamic, and cryptic ones, we introduce two dynamic behavior patterns (true to false and then to true (TFT), and false to true and then to false (FTF)) to further validate the effectiveness of our proposed defense mechanism. Extensive simulation results verify the rationality and validity of the proposed mechanism.
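The reward/penalty reputation revision can be sketched abstractly. The update rule and all numeric factors below are illustrative assumptions of ours, not the paper's actual mechanism:

```python
def update_reputation(rep, correct, reward=0.1, penalty=0.3, lo=0.0, hi=1.0):
    """One round of reputation revision: honest sensing is rewarded mildly,
    false reports are punished harder (penalty > reward discourages SSDF)."""
    rep = rep + reward if correct else rep - penalty
    return min(hi, max(lo, rep))        # clamp to the valid reputation range

# An honest user and a TFT-style attacker (alternating true/false), starting equal.
honest, attacker = 0.5, 0.5
for round_correct in [True, True, True, True]:
    honest = update_reputation(honest, round_correct)
for round_correct in [True, False, True, False]:
    attacker = update_reputation(attacker, round_correct)
print(honest, attacker)   # the honest user's reputation climbs; the attacker's sinks
```

Making the penalty larger than the reward is what prevents a dynamic attacker from recovering reputation by occasionally reporting truthfully.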

Several factors may influence children’s lifestyle. The main purpose of this study is to introduce a children’s lifestyle index framework and model it based on structural equation modeling (SEM) with Maximum likelihood (ML) and Bayesian predictors. This framework includes parental socioeconomic status, household food security, parental lifestyle, and children’s lifestyle. The sample for this study involves 452 volunteer Chinese families with children 7–12 years old. The experimental results are compared in terms of root mean square error, coefficient of determination, mean absolute error, and mean absolute percentage error metrics. An analysis of the proposed causal model suggests there are multiple significant interconnections among the variables of interest. According to both Bayesian and ML techniques, the proposed framework illustrates that parental socioeconomic status and parental lifestyle strongly impact children’s lifestyle. The impact of household food security on children’s lifestyle is rejected. However, there is a strong relationship between household food security and both parental socioeconomic status and parental lifestyle. Moreover, the outputs illustrate that the Bayesian prediction model has a good fit with the data, unlike the ML approach. The reasons for this discrepancy between ML and Bayesian prediction are debated and potential advantages and caveats with the application of the Bayesian approach in future studies are discussed.

There is much uncertainty and fuzziness in the quality attributes or quality parameters of a manufacturing process, so the traditional quality control chart can be difficult to apply. This paper proposes a fuzzy control chart in which the plotted data are obtained by transforming expert scores into fuzzy numbers. Two types of nonconformity judgment rules, necessity and possibility measurement rules, are proposed. Through graphical analysis, a nonconformity judging method is proposed that assesses conformity directly from the shape features of the fuzzy control chart. For four different widely used membership functions, control levels were analyzed and compared by observing the gaps between the upper and lower control limits. The results of a case study validate the feasibility and reliability of the proposed approach.
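For triangular fuzzy numbers, the possibility and necessity that a fuzzy score exceeds a control limit have simple closed forms. The sketch below uses the standard possibility-theory definitions; the paper's exact measurement rules may differ:

```python
def possibility_geq(tfn, limit):
    """Pos(X >= limit) for a triangular fuzzy number tfn = (a, b, c)."""
    a, b, c = tfn
    if limit <= b:
        return 1.0
    if limit >= c:
        return 0.0
    return (c - limit) / (c - b)

def necessity_geq(tfn, limit):
    """Nec(X >= limit) = 1 - Pos(X < limit)."""
    a, b, c = tfn
    if limit <= a:
        return 1.0
    if limit >= b:
        return 0.0
    return 1 - (limit - a) / (b - a)

score = (2.0, 5.0, 8.0)              # an expert score as a triangular fuzzy number
print(possibility_geq(score, 6.0))   # 2/3: the score possibly exceeds the limit
print(necessity_geq(score, 6.0))     # 0.0: but it does not necessarily exceed it
print(necessity_geq(score, 3.0))     # 2/3: here exceeding the limit is largely certain
```

Necessity never exceeds possibility, which is why a necessity-based nonconformity rule is the stricter of the two.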

Topological indices and polynomials predict properties such as boiling point, fracture toughness, and heat of formation of different materials, and thus spare us extra experimental burden. In this article we compute many topological indices for the family of circulant graphs. First, we give a general closed form of the M-polynomial of this family and recover many degree-based topological indices from it. We also compute Zagreb indices and Zagreb polynomials of this family. Our results extend many existing results.
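Degree-based indices of circulant graphs are easy to verify numerically, since C_n(S) is regular. The sketch below (function names ours) builds C_n(S) and computes the first Zagreb index as a representative degree-based index:

```python
def circulant_edges(n, S):
    """Edges of the circulant graph C_n(S): vertex i is joined to i ± s (mod n)."""
    return {frozenset((i, (i + s) % n)) for i in range(n) for s in S}

def first_zagreb(n, edges):
    """M1(G) = sum of squared vertex degrees, a classic degree-based index."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(d * d for d in deg)

# C6({1, 2}) is 4-regular, so M1 = 6 * 4^2 = 96.
print(first_zagreb(6, circulant_edges(6, [1, 2])))  # 96
```

Regularity is what makes the closed forms tractable: every degree-based sum collapses to (number of vertices or edges) times a constant term.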

Brain tumor segmentation in magnetic resonance imaging (MRI) is considered a complex procedure because of the variability of tumor shapes and the complexity of determining the tumor location, size, and texture. Manual tumor segmentation is a time-consuming task highly prone to human error. Hence, this study proposes an automated method that can identify tumor slices and segment the tumor across all image slices in volumetric MRI brain scans. First, a set of algorithms in the pre-processing stage is used to clean and standardize the collected data. A modified gray-level co-occurrence matrix and Analysis of Variance (ANOVA) are employed for feature extraction and feature selection, respectively. A multi-layer perceptron neural network is adopted as a classifier, and a bounding 3D-box-based genetic algorithm is used to identify the location of pathological tissues in the MRI slices. Finally, the 3D active contour without edge is applied to segment the brain tumors in volumetric MRI scans. The experimental dataset consists of 165 patient images collected from the MRI Unit of Al-Kadhimiya Teaching Hospital in Iraq. Results of the tumor segmentation achieved an accuracy of 89% ± 4.7% compared with manual processes.
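A gray-level co-occurrence matrix of the kind used in the feature-extraction stage can be illustrated in a few lines. This is a plain, unmodified GLCM on a toy 4-level image (the paper uses a modified variant):

```python
def glcm(image, offset=(0, 1), levels=4):
    """Gray-level co-occurrence matrix: counts of gray-level pairs (i, j)
    occurring at the given pixel offset (default: horizontal neighbors)."""
    dr, dc = offset
    rows, cols = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

img = [[0, 0, 1],
       [1, 2, 2],
       [3, 3, 3]]
for row in glcm(img):
    print(row)
```

Texture features such as contrast, energy, and homogeneity are then computed as weighted sums over this matrix.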

This paper first analyzes the one-dimensional Gabor function and expands it to two dimensions. The two-dimensional Gabor function generates the two-dimensional Gabor wavelet through scale stretching and rotation. The two-dimensional Gabor wavelet transform is then employed to extract image feature information. On the basis of the back propagation (BP) neural network model, an intelligent image detection model combining the Gabor wavelet and the neural network is built, with human face detection adopted as an example. Results suggest that, although there are complex textures and illumination variations in the images of the AT&T face database, the detection accuracy rate of the proposed method can reach above 0.93. In addition, extensive simulations based on the Yale and extended Yale B datasets further verify the effectiveness of the proposed method.
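The two-dimensional Gabor function itself is compact enough to write out directly. The parameterization below (scale σ, orientation θ, wavelength λ, aspect ratio γ) is one common convention, with illustrative default values of our choosing:

```python
import math

def gabor2d(x, y, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a 2-D Gabor function: a Gaussian envelope times a cosine
    carrier, in coordinates rotated by theta."""
    xr = x * math.cos(theta) + y * math.sin(theta)    # rotate into the filter frame
    yr = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr) / (2 * sigma * sigma))
    return envelope * math.cos(2 * math.pi * xr / lam)

print(gabor2d(0.0, 0.0))            # 1.0: envelope and carrier both peak at the origin
print(round(gabor2d(2.0, 0.0), 4))  # negative: the carrier flips sign half a wavelength away
```

A filter bank is obtained by sampling this function on a grid for several (σ, θ, λ) combinations, which is the "scale stretching and rotation" the abstract refers to.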

Quality function deployment (QFD) is a widely used quality system tool for translating customer requirements (CRs) into the engineering design requirements (DRs) of products or services. The conventional QFD analysis, however, has been criticized as having some limitations such as in the assessment of relationships between CRs and DRs, the determination of CR weights and the prioritization of DRs. This paper aims to develop a new hybrid group decision-making model based on hesitant 2-tuple linguistic term sets and an extended QUALIFLEX (qualitative flexible multiple criteria method) approach for handling QFD problems with incomplete weight information. First, hesitant linguistic term sets are combined with interval 2-tuple linguistic variables to express various uncertainties in the assessment information of QFD team members. Borrowing the idea of grey relational analysis (GRA), a multiple objective optimization model is constructed to determine the relative weights of CRs. Then, an extended QUALIFLEX approach with an inclusion comparison method is suggested to determine the ranking of the DRs identified in QFD. Finally, an analysis of a market segment selection problem is conducted to demonstrate and validate the proposed QFD approach.

In order to address the uncertainty of existing noise models and the complexity and changeability of the edges and textures of low-resolution document images, this paper presents a projection onto convex sets (POCS) algorithm based on text features. The proposed method preserves edge details and smooths noise in text images by adding text features as constraints to the original POCS algorithm and converting the fixed threshold to an adaptive one. In this paper, the optimized scale invariant feature transform (SIFT) algorithm was used for the registration of consecutive frames, and the image was then reconstructed under the improved POCS theoretical framework. Experimental results showed that the algorithm significantly smooths noise and eliminates noise caused by the shadows of lines. The lines of the reconstructed text are smoother and the stroke contours are clearer, which largely eliminates text edge jitter and enhances the resolution of the document image text.
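The alternating-projection idea behind POCS can be illustrated with a toy one-dimensional sketch: the estimate is projected onto a data-consistency set (block averages must match the low-resolution observation) and then onto a bound constraint standing in for the paper's adaptive text-feature constraint. The downsampling model and the [0, 1] bounds are assumptions for illustration.

```python
import numpy as np

def project_data_consistency(hr, lr, factor):
    """Project the high-res estimate onto the convex set of signals whose
    block-average downsampling equals the low-res observation."""
    hr = hr.copy()
    for i, v in enumerate(lr):
        block = slice(i * factor, (i + 1) * factor)
        # shift the block so its mean matches the observed low-res sample
        hr[block] += v - hr[block].mean()
    return hr

def pocs_reconstruct(lr, factor=2, iters=20):
    """Alternate projections: data consistency, then an amplitude bound."""
    hr = np.repeat(lr, factor).astype(float)   # initial upsampled guess
    for _ in range(iters):
        hr = project_data_consistency(hr, lr, factor)
        hr = np.clip(hr, 0.0, 1.0)             # convex amplitude set
    return hr
```

Each projection maps onto a convex set, so the iteration converges to a point consistent with all constraints when the sets intersect.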

User interactions in online social networks (OSNs) enable the spread of information and enhance the information dissemination process, but at the same time they exacerbate the information overload problem. In this paper, we propose a social content recommendation method based on spatial-temporal aware controlled information diffusion modeling in OSNs. Users interact more frequently when they are close to each other geographically, have similar behaviors, and fall into similar demographic categories. Considering these facts, we propose multicriteria-based social ties relationship and temporal-aware probabilistic information diffusion modeling for controlled information spread maximization in OSNs. The proposed social ties relationship modeling takes into account user spatial information, content trust, opinion similarity, and demographics. We suggest a ranking algorithm that considers the user ties strength with friends and friends-of-friends to rank users in OSNs and select highly influential injection nodes. These nodes are able to improve social content recommendations, minimize information diffusion time, and maximize information spread. Furthermore, the proposed temporal-aware probabilistic diffusion process categorizes the nodes and diffuses the recommended content to only those users who are highly influential and can enhance information dissemination. The experimental results show the effectiveness of the proposed scheme.
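The multicriteria tie-strength and ranking step can be sketched as a weighted combination of normalized criterion scores. The weights and the data layout below are illustrative assumptions, not the paper's calibrated model.

```python
def tie_strength(spatial, trust, opinion, demographic,
                 weights=(0.3, 0.3, 0.2, 0.2)):
    """Multicriteria tie strength: weighted sum of scores in [0, 1] for
    spatial proximity, content trust, opinion similarity and demographics."""
    scores = (spatial, trust, opinion, demographic)
    return sum(w * s for w, s in zip(weights, scores))

def rank_injection_nodes(users, top_k=2):
    """Rank users by total tie strength to friends and friends-of-friends,
    and pick the top-k as content injection nodes."""
    totals = {u: sum(tie_strength(*c) for c in criteria)
              for u, criteria in users.items()}
    return sorted(totals, key=totals.get, reverse=True)[:top_k]
```

Here `users` maps each user to one criteria tuple per (direct or two-hop) neighbor; the selected nodes would then seed the diffusion process.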

As face recognition technology has developed, it has become widely used in various applications such as door access control, intelligent surveillance, and mobile phone security. One of its applications is its adoption in TV environments to supply viewers with intelligent services and greater convenience. In a TV environment, in-plane rotation of a viewer’s face frequently occurs because he or she may watch the TV from a lying position, which degrades the accuracy of face recognition. Nevertheless, there has been little previous research dealing with this problem. Therefore, we propose a new fuzzy system–based face detection algorithm that is robust to in-plane rotation based on the symmetrical characteristics of a face. Experimental results on two databases, including one open database, show that our method outperforms previous methods.
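The symmetry idea can be illustrated by scoring left-right mirror symmetry and searching over candidate rotations; the coarse 90-degree search below is an illustrative stand-in for the paper's fuzzy-system-based handling of fine in-plane rotation.

```python
import numpy as np

def symmetry_score(img):
    """Left-right mirror symmetry score: negative mean absolute difference
    between the image and its horizontal flip (higher = more symmetric)."""
    return -np.mean(np.abs(img - img[:, ::-1]))

def best_upright(img):
    """Choose the 90-degree rotation that maximizes facial mirror symmetry,
    a coarse stand-in for a fine in-plane rotation search."""
    return max(range(4), key=lambda k: symmetry_score(np.rot90(img, k)))
```

An upright face is approximately symmetric about its vertical midline, so the rotation restoring that symmetry is the estimated correction.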

As an increasing number of people purchase goods and services online, micropayment systems are becoming particularly important for mobile and electronic commerce. We have designed and developed such a system called M&E-NetPay (Mobile and Electronic NetPay). With open interoperability and mobility, M&E-NetPay uses web services to connect brokers and vendors, providing secure, flexible and reliable credit services over the Internet. In particular, M&E-NetPay makes use of a secure, inexpensive and debit-based off-line protocol that allows vendors to interact only with customers after validating coins. The design of the architecture and protocol of M&E-NetPay is presented, together with the implementation of its prototype in ringtone and wallpaper sites. To validate our system, we have conducted evaluations of its performance, usability and heuristics. Furthermore, we compare our system to CORBA-based (Common Object Request Broker Architecture) off-line micro-payment systems. The results demonstrate that the .NET-based M&E-NetPay outperforms the CORBA-based system in terms of performance and user satisfaction.
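NetPay-family micropayment systems are typically built on PayWord-style hash-chain e-coins, which is what makes off-line, vendor-side coin validation cheap (no broker round-trip per payment). Assuming that design, a minimal sketch:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Build a PayWord-style coin chain with w[i] = H(w[i+1]): index 0 is
    the root the vendor learns from the broker, 1..n are spendable coins."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    chain.reverse()          # chain[0] = H^n(seed), ..., chain[n] = seed
    return chain

def vendor_verify(last_known: bytes, coin: bytes, k: int = 1) -> bool:
    """Off-line validation: hashing the new coin k times (spending k coins
    at once) must reproduce the last coin the vendor accepted."""
    x = coin
    for _ in range(k):
        x = h(x)
    return x == last_known
```

The customer reveals successive preimages as payments; the vendor only ever needs the previously accepted value and a hash function.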

An efficient surface area evaluation method is introduced by using smooth surface reconstruction for three-dimensional scanned human body data. Surface area evaluations for various body parts are compared with the results from the traditional alginate-based method, and quite high similarity between the two results is obtained. We expect that our surface area evaluation method can be an alternative to measuring surface area by the cumbersome alginate method.
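On a reconstructed triangle mesh, the surface area evaluation reduces to summing triangle areas, each half the norm of the cross product of two edge vectors. A minimal sketch (the smooth-reconstruction step itself is omitted):

```python
import numpy as np

def mesh_area(vertices, triangles):
    """Total surface area of a triangle mesh: for each triangle (i, j, k),
    half the norm of the cross product of its two edge vectors."""
    V = np.asarray(vertices, dtype=float)
    area = 0.0
    for i, j, k in triangles:
        area += 0.5 * np.linalg.norm(np.cross(V[j] - V[i], V[k] - V[i]))
    return area
```

Restricting the triangle list to the faces of one body part yields the per-part areas compared against the alginate measurements.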

In this paper, a modified GrabCut algorithm is proposed using a clustering technique to reduce image noise. GrabCut is an image segmentation method based on GraphCut that starts with a user-specified bounding box around the object to be segmented. In the modified version, the original image is first filtered with a median filter to reduce noise, and the image quantized with the K-means algorithm is then fed to the normal GrabCut method for object segmentation. This new process considerably improves object segmentation performance and yields more exact segmentation results than the standard method.
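The preprocessing pipeline described above (median filtering followed by K-means quantization) can be sketched in a few lines. The tiny gray-level K-means below is an illustrative stand-in for full color-space clustering, and the GrabCut step itself is omitted:

```python
import numpy as np

def median_filter(img, size=3):
    """Simple 2-D median filter (borders handled by reflection padding)."""
    pad = size // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.median(p[r:r + size, c:c + size])
    return out

def kmeans_quantize(img, k=3, iters=10, seed=0):
    """Quantize gray levels with a tiny K-means: every pixel is replaced by
    its nearest cluster centroid, flattening texture before segmentation."""
    rng = np.random.default_rng(seed)
    x = img.reshape(-1).astype(float)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers[labels].reshape(img.shape)
```

The quantized image then replaces the original as input to the usual GrabCut iteration.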

Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, Backtracking Search Algorithm (BSA), a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.
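Score-level fusion, one of the levels combined above, can be sketched as min-max normalization of each modality followed by a weighted sum. The weight and threshold below are illustrative; the paper selects such weights with BSA.

```python
def min_max_normalize(scores):
    """Map raw match scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def weighted_sum_fusion(face_scores, iris_scores, w=0.6):
    """Score-level fusion: normalize each modality's match scores, then
    combine them with a weighted sum (w weights the face modality)."""
    f = min_max_normalize(face_scores)
    i = min_max_normalize(iris_scores)
    return [w * a + (1 - w) * b for a, b in zip(f, i)]

def verify(fused, threshold=0.5):
    """Accept a claimed identity when the fused score clears the threshold."""
    return [s >= threshold for s in fused]
```

Normalization matters because face and iris matchers output scores on different scales; without it, one modality would dominate the sum.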

Frequent graph mining has been proposed to find interesting patterns (i.e., frequent sub-graphs) in databases composed of graph transaction data, which can effectively express complex and large data in the real world, and various applications of graph mining have been suggested. Traditional graph pattern mining methods use a single minimum support threshold to decide whether or not mined patterns are interesting. However, a single fixed threshold cannot capture valuable characteristics of graphs such as graph sizes and the features of graph elements. For this reason, in this paper, we propose a novel graph mining algorithm that can consider multiple minimum support constraints according to the types of graph elements, as well as changeable minimum support conditions depending on the lengths of graph patterns. In addition, the proposed algorithm performs mining operations more efficiently because it minimizes duplicated operations and computational overheads by considering the symmetry features of graphs. Experimental results provided in this paper demonstrate that the proposed algorithm outperforms previous mining approaches in terms of pattern generation, runtime and memory usage.
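The multiple minimum support idea can be sketched as follows: each element label carries its own minimum support, a pattern's threshold is the smallest among its labels, and an optional relaxation lowers the threshold for longer patterns. The relaxation rule here is an illustrative stand-in for the paper's length-dependent conditions.

```python
def pattern_minsup(pattern_labels, mis, length_relax=0.0):
    """Minimum support of a pattern under multiple minimum supports: the
    smallest MIS among its element labels, relaxed for longer patterns."""
    base = min(mis[l] for l in pattern_labels)
    return max(0.0, base - length_relax * (len(pattern_labels) - 1))

def frequent_patterns(patterns, support, mis, length_relax=0.0):
    """Keep the patterns whose observed support meets their own
    label- and length-dependent minimum support."""
    return [p for p in patterns
            if support[p] >= pattern_minsup(p, mis, length_relax)]
```

Rare but important labels can thus be given a low MIS so the patterns containing them survive, while frequent labels keep a strict threshold.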