Sample records for metric-based rules

In this paper, we study ruled surfaces in 3-dimensional almost contact metric manifolds using the surface theory defined by Gök [Surfaces theory in contact geometry, PhD thesis (2010)]. We also use the theory of curves based on the cross product defined by Camcı. In this study, we obtain the distribution parameters of the ruled surface, and then some results and theorems are presented for special cases. Moreover, some relationships between the asymptotic curve and the striction line of the base curve of the ruled surface are established.

For many national parks and wilderness areas with special air quality protections (Class I areas) in the western United States (U.S.), wildfire smoke and dust events can have a large impact on visibility. The U.S. Environmental Protection Agency's (EPA) 1999 Regional Haze Rule used the 20% haziest days to track visibility changes over time even if they are dominated by smoke or dust. Visibility on the 20% haziest days has remained constant or degraded over the last 16 yr at some Class I areas despite widespread emission reductions from anthropogenic sources. To better track visibility changes specifically associated with anthropogenic pollution sources rather than natural sources, the EPA has revised the Regional Haze Rule to track visibility on the 20% most anthropogenically impaired (hereafter, most impaired) days rather than the haziest days. To support the implementation of this revised requirement, the EPA has proposed (but not finalized) a recommended metric for characterizing the anthropogenic and natural portions of the daily extinction budget at each site. This metric selects the 20% most impaired days based on these portions using a "delta deciview" approach to quantify the deciview scale impact of anthropogenic light extinction. Using this metric, sulfate and nitrate make up the majority of the anthropogenic extinction in 2015 on these days, with natural extinction largely made up of organic carbon mass in the eastern U.S. and a combination of organic carbon mass, dust components, and sea salt in the western U.S. For sites in the western U.S., the seasonality of days selected as the 20% most impaired is different than the seasonality of the 20% haziest days, with many more winter and spring days selected. Applying this new metric to the 2000-2015 period across sites representing Class I areas results in substantial changes in the calculated visibility trend for the northern Rockies and southwest U.S., but little change for the eastern U.S. Changing the
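The deciview haze index underlying this metric has a standard definition, dv = 10 ln(b_ext/10) with extinction in inverse megameters; as an illustrative sketch (the per-day natural/anthropogenic extinction splits below are hypothetical, and the EPA's full day-selection procedure has more steps), the "delta deciview" ranking of the most impaired days could be computed as:

```python
import math

def deciview(b_ext):
    """Convert total light extinction b_ext (inverse megameters, Mm^-1)
    to the deciview haze index: dv = 10 * ln(b_ext / 10)."""
    return 10.0 * math.log(b_ext / 10.0)

def delta_deciview(b_natural, b_anthro):
    """Deciview-scale impact of anthropogenic extinction: the haze index
    with anthropogenic extinction included minus the natural-only index."""
    return deciview(b_natural + b_anthro) - deciview(b_natural)

def most_impaired_days(days, frac=0.2):
    """Select the fraction of days with the largest delta-deciview
    impairment. `days` is a list of (label, b_natural, b_anthro) tuples."""
    ranked = sorted(days, key=lambda d: delta_deciview(d[1], d[2]), reverse=True)
    n = max(1, round(frac * len(days)))
    return ranked[:n]
```

Because the ranking uses the anthropogenic delta rather than total extinction, a smoke-dominated day with huge natural extinction can rank below a moderately hazy day whose extinction is mostly anthropogenic, which is exactly the behavior the revised rule is after.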

Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests show that even this primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially as the rulebase is developed further.

Much of the world's critical infrastructure is at risk from attack through electronic networks connected to control systems. Security metrics are important because they provide the basis for management decisions that affect the protection of the infrastructure. A cyber security technical metric is the security-relevant output from an explicit mathematical model that makes use of objective measurements of a technical object. A specific set of technical security metrics is proposed for use by the operators of control systems. Our proposed metrics are based on seven security ideals associated with seven corresponding abstract dimensions of security. We have defined at least one metric for each of the seven ideals. Each metric is a measure of how nearly the associated ideal has been achieved. These seven ideals provide a useful structure for further metrics development. A case study shows how the proposed metrics can be applied to an operational control system.

Within the last few years, a host of value-based metrics like EVA, MVA, TBR, CFROI, and TSR have evolved. This paper attempts to analyze the validity and applicability of EVA and the Balanced Scorecard for Internet-based organizations. Despite the collapse of the dot-com model, firms engaged in e-commerce continue to struggle to find new ways to account for customer base, technology, employees, knowledge, etc., as part of the value of the firm. While some metrics, like the Balanced Scorecard, are geared towards internal use, others, like EVA, are for external use. Value-based metrics are used for performing internal audits as well as comparing firms against one another, and can also be effectively utilized by individuals outside the firm looking to determine whether the firm is creating value for its stakeholders.

Reaction rules and event processing technologies play a key role in making business and IT/Internet infrastructures more agile and active. While event processing is concerned with detecting events from large event clouds or streams in near real-time, reaction rules are concerned with the invocation of actions in response to events and actionable situations. They state the conditions under which actions must be taken. In recent decades, various reaction rule and event processing approaches have been developed, for the most part advanced separately. In this paper we survey reaction rule approaches and rule-based event processing systems and languages.

The accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

The number of active liquid rocket engine and solid rocket motor development programs has severely declined since the "space race" of the 1950s and 1960s. This downward trend has been exacerbated by the retirement of the Space Shuttle, the transition from the Constellation Program to the Space Launch System (SLS), and similar activity in DoD programs. In addition, with consolidation in the industry, the rocket propulsion industrial base (RPIB) is under stress. To improve the "health" of the RPIB, we need to understand the current condition of the RPIB, how this compares to past history, and the trend of RPIB health. This drives the need for a concise set of "metrics" analogous to the basic data a physician uses to determine the state of health of his patients: easy to measure and collect, usable to focus on problem areas and develop preventative measures, and such that the trend is often more useful than the actual data point. The RPIB is the nation's capability to conceive, design, develop, manufacture, test, and support missions using liquid rocket engines and solid rocket motors that are critical to its national security, economic health and growth, and future scientific needs. The RPIB encompasses US government, academic, and commercial (including industry primes and their supplier base) research, development, test, evaluation, and manufacturing capabilities and facilities. The RPIB includes the skilled workforce, related intellectual property, engineering and support services, and supply chain operations and management. This definition touches the five main segments of the U.S. RPIB as categorized by the USG: defense, intelligence community, civil government, academia, and commercial sector.

In this study, we evaluate established and newly developed metrics for predicting glare using data from three different research studies. The evaluation covers two different targets: 1. How well does the user's perception of glare magnitude correlate with the prediction of the glare metrics? 2. How well do the glare metrics describe the subjects' disturbance by glare? We applied Spearman correlations, logistic regressions and an accuracy evaluation based on an ROC analysis. The results show that five of the twelve investigated metrics fail at least one of the statistical tests. The other seven metrics (CGI, modified DGI, DGP, Ev, average luminance of the image Lavg, UGP and UGR) pass all statistical tests. DGP, CGI, DGI_mod and UGP have the largest AUC and might be slightly more robust. The accuracy of the predictions of the aforementioned seven metrics for the disturbance by glare lies...
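The statistical machinery named in this record (Spearman rank correlation and ROC-based accuracy) is standard; a minimal self-contained sketch, with hypothetical metric scores and disturbance labels standing in for the study's data, might look like:

```python
def rank(xs):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

def auc(scores, disturbed):
    """ROC AUC: probability that a randomly chosen 'disturbed' response
    outscores a 'not disturbed' one (ties count half)."""
    pos = [s for s, d in zip(scores, disturbed) if d]
    neg = [s for s, d in zip(scores, disturbed) if not d]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A metric whose AUC is near 1.0 separates disturbed from undisturbed subjects almost perfectly, which is the sense in which the abstract calls the top metrics "more robust."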

In this paper we address the approximate minimization problem of Markov Chains (MCs) from a behavioral metric-based perspective. Specifically, given a finite MC and a positive integer k, we are looking for an MC with at most k states having minimal distance to the original. The metric considered...

We address the behavioral metric-based approximate minimization problem of Markov Chains (MCs), i.e., given a finite MC and a positive integer k, we are interested in finding a k-state MC of minimal distance to the original. By considering as metric the bisimilarity distance of Desharnais et al...
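The bisimilarity distance of Desharnais et al. is defined as a fixed point over couplings of transition distributions and is not reproduced in this record; purely as an illustration of one of its ingredients, the one-step total-variation distance between the transition distributions of two states can be computed as:

```python
def tv_distance(p, q):
    """Total-variation distance between two finite probability
    distributions given as dicts mapping successor states to
    probabilities: half the L1 distance."""
    states = set(p) | set(q)
    return 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in states)
```

The full behavioral metric discounts and iterates this kind of comparison over successor states; this one-step quantity is only a crude upper-bound ingredient, not the metric used in the paper.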

1. Rationale for a Test Program Based on User Performance Metrics ; 2. Roberson and Associates Test Program ; 3. Status of, and Revisions to, the Roberson and Associates Test Program ; 4. Comparison of Roberson and DOT/Volpe Programs

Business rules and business processes are essential artifacts in defining the requirements of a software system. Business processes capture business behavior, while rules connect processes and thus control processes and business behavior. Traditionally, rules are scattered inside application code. This approach makes it very difficult to change rules and shortens the life cycle of the software system. Because rules change more quickly than the application itself, it is desirable to externalize the rules and move them outside the application. This paper analyzes and evaluates three well-known business rules approaches. It also outlines some critical factors that have to be taken into account in the decision to introduce business rules facilities in a software system. Based on the concept of explicit manipulation of business rules in a software system, the need for a general approach based on business rules is discussed.

The article suggests a new look at trading platforms, which is called "metrics." Metrics are a way to look at the point of sale largely from the buyer's side. The buyer enters the store and makes buying decisions based on factors that the seller often does not consider, or considers only in part, because he "does not see" them, since he is not a buyer. The article proposes a classification of retailers, metrics and a methodology for their determination, and presents the results of an audit of retailers in St. Petersburg using the proposed methodology.

Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
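The abstract does not give the exact energy formulas; a minimal sketch, under the assumption that one energy term is the kinetic energy accumulated along a sampled tool-tip trajectory and that normalization divides by the energy of an ideal reference performance, could be:

```python
def kinetic_energy_series(positions, dt, mass=1.0):
    """Total kinetic energy accumulated along a sampled tool-tip
    trajectory: positions is a list of (x, y, z) samples taken at
    interval dt; velocity is estimated by finite differences."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        v2 = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) / dt ** 2
        total += 0.5 * mass * v2
    return total

def normalized_metric(trainee_positions, ideal_positions, dt):
    """Divide the trainee's energy by that of the ideal performance,
    so 1.0 means 'as efficient as ideal' and larger values indicate
    wasted motion."""
    return (kinetic_energy_series(trainee_positions, dt)
            / kinetic_energy_series(ideal_positions, dt))
```

In the study, several such normalized values (from different energy and work terms) form the feature vector fed to the SVM and NN classifiers.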

A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour-difference-based metrics, gamut-based metrics, memory-based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

Distance metric learning algorithms have been widely applied to medical diagnosis and have exhibited strengths in classification problems. The k-nearest neighbour (KNN) classifier is an efficient method which treats each feature equally. Large margin nearest neighbour classification (LMNN) improves the accuracy of KNN by learning a global distance metric, but does not consider the locality of data distributions. In this paper, we propose a new distance metric algorithm adopting the cosine metric and LMNN, named COS-SUBLMNN, which takes more care of local features of the data to overcome this shortcoming of LMNN and improve classification accuracy. The proposed methodology is verified on CVD patient vectors derived from real-world medical data. The experimental results show that our method provides higher accuracy than KNN and LMNN, which demonstrates the effectiveness of the risk predictive model of CVDs based on COS-SUBLMNN.
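COS-SUBLMNN itself learns a local metric and is not specified in this record; as a baseline illustration of the cosine-metric nearest-neighbour classification it builds on (all vectors and labels below are hypothetical):

```python
import math
from collections import Counter

def cosine_distance(u, v):
    """1 minus cosine similarity; 0 for parallel vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote of its k nearest training
    neighbours under cosine distance."""
    neighbours = sorted(zip(train_x, train_y),
                        key=lambda p: cosine_distance(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```

What LMNN-style methods add on top of this baseline is a learned linear transformation of the feature space so that same-class neighbours move closer and differently labeled points are pushed outside a margin.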

In clinical practice and research, a variety of clinical data collection tools are used to collect information on people's functioning for clinical practice, research and national health information systems. Reporting on ICF-based common metrics enables standardized documentation of functioning information in national health information systems. The objective of this methodological note on applying the ICF in rehabilitation is to demonstrate how to report functioning information collected with a data collection tool on ICF-based common metrics. We first specify the requirements for the standardized reporting of functioning information. Secondly, we introduce the methods needed for transforming functioning data to ICF-based common metrics. Finally, we provide an example. The requirements for standardized reporting are as follows: 1) having a common conceptual framework to enable content comparability between any health information; and 2) a measurement framework so that scores between two or more clinical data collection tools can be directly compared. The methods needed to achieve these requirements are the ICF Linking Rules and the Rasch measurement model. Using data collected incorporating the 36-item Short Form Health Survey (SF-36), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the Stroke Impact Scale 3.0 (SIS 3.0), the application of standardized reporting based on common metrics is demonstrated. A subset of items from the three tools linked to common chapters of the ICF (d4 Mobility, d5 Self-care and d6 Domestic life) were entered as "super items" into the Rasch model. Good fit was achieved with no residual local dependency and a unidimensional metric. A transformation table allows for comparison between scales, and between a scale and the reporting common metric. Being able to report functioning information collected with commonly used clinical data collection tools with ICF-based common metrics enables clinicians

The paper presents a set of algorithms for the conversion of rulebases between priority-based and constraint-based representations. Inspired by research in precedential reasoning in law, such algorithms can be used for the analysis of a rulebase, and for the study of the impact of the introduction

Continuing advances in real-time computational capabilities will support enhanced levels of smart automation and AI-based decision-aiding systems in the nuclear power plant (NPP) control room of the future. To support development of these aids, we describe in this paper a research tool, and more specifically, a quantitative metric, to assess the impact of proposed automation/aiding concepts in a manner that can account for a number of interlinked factors in the control room environment. In particular, we describe a cognitive operator/plant model that serves as a framework for integrating the operator's information-processing capabilities with his procedural knowledge, to provide insight as to how situations are assessed by the operator, decisions made, procedures executed, and communications conducted. Our focus is on the situation assessment (SA) behavior of the operator, the development of a quantitative metric reflecting overall operator awareness, and the use of this metric in evaluating automation/aiding options. We describe the results of a model-based simulation of a selected emergency scenario, and metric-based evaluation of a range of contemplated NPP control room automation/aiding options. The results demonstrate the feasibility of model-based analysis of contemplated control room enhancements, and highlight the need for empirical validation.

The evaluation of scholarly communication items is now largely a matter of expert opinion or metrics derived from citation data. Both approaches can fail to take into account the myriad of factors that shape scholarly impact. Usage data has emerged as a promising complement to existing methods of assessment, but the formal groundwork to reliably and validly apply usage-based metrics of scholarly impact is lacking. The Andrew W. Mellon Foundation-funded MESUR project constitutes a systematic effort to define, validate and cross-validate a range of usage-based metrics of scholarly impact by creating a semantic model of the scholarly communication process. The constructed model will serve as the basis for creating a large-scale semantic network that seamlessly relates citation, bibliographic and usage data from a variety of sources. A subsequent program that uses the established semantic network as a reference data set will determine the characteristics and semantics of a variety of usage-based metrics of scholarly impact. This paper outlines the architecture and methodology adopted by the MESUR project and its future direction.

In this report, we show the process of information integration. We specifically discuss the language used for integration. We show that integration consists of two phases, the schema mapping phase and the data integration phase. We formally define transformation rules, conversion, evolution and

Calculations are made of the approximate hazard due to peak normal accelerations of an airplane flying through a simulated vertical wind field associated with a convective frontal system. The calculations are based on a hazard metric developed from a systematic application of a generic math model to 1-cosine discrete gusts of various amplitudes and gust lengths. The math model simulates the three-degree-of-freedom longitudinal rigid-body response to vertical gusts and includes (1) fuselage flexibility, (2) the lag in the downwash from the wing to the tail, (3) gradual lift effects, (4) a simplified autopilot, and (5) motion of an unrestrained passenger in the rear cabin. Airplane and passenger response contours are calculated for a matrix of gust amplitudes and gust lengths. The airplane response contours are used to develop an approximate hazard metric of peak normal accelerations as a function of gust amplitude and gust length. The hazard metric is then applied to a two-dimensional simulated vertical wind field of a convective frontal system. The variations of the hazard metric with gust length and airplane heading are demonstrated.

This paper presents a new and general concept, PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It makes use of the selectivity characteristic of the HVS (Human Visual System), which pays more attention to certain areas/regions of the visual signal due to one or more of the following factors: salient features in image/video, cues from domain knowledge, and association of other media (e.g., speech or audio). PQSM is an array whose elements represent the relative perceptual-quality significance levels for the corresponding areas/regions of images or video. Due to its generality, PQSM can be incorporated into any visual distortion metric: to improve the effectiveness and/or efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show the scheme can improve the performance of current image/video distortion metrics.
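The paper's three-stage PQSM estimation is not detailed in this record; a minimal sketch of the general idea of weighting a distortion measure by a significance map (here an assumed simplification: a PQSM-weighted mean squared error over 2D grayscale arrays) could be:

```python
def pqsm_weighted_mse(ref, test, pqsm):
    """Mean squared error where each pixel's error is scaled by its
    perceptual-quality significance weight. Normalizing by the total
    weight makes the result reduce to plain MSE for a uniform map."""
    total_w = sum(sum(row) for row in pqsm)
    err = 0.0
    for r_row, t_row, w_row in zip(ref, test, pqsm):
        for r, t, w in zip(r_row, t_row, w_row):
            err += w * (r - t) ** 2
    return err / total_w
```

With such a weighting, the same pixel error contributes more to the score when it falls in a perceptually significant region (e.g., a face) than in the background, which is the behavior PQSM is designed to capture.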

Vehicle change in velocity (delta-v) is a widely used crash severity metric used to estimate occupant injury risk. Despite its widespread use, delta-v has several limitations. Of most concern, delta-v is a vehicle-based metric which does not consider the crash pulse or the performance of occupant restraints, e.g. seatbelts and airbags. Such criticisms have prompted the search for alternative impact severity metrics based upon vehicle kinematics. The purpose of this study was to assess the ability of the occupant impact velocity (OIV), acceleration severity index (ASI), vehicle pulse index (VPI), and maximum delta-v to predict serious injury in real-world crashes. The study was based on the analysis of event data recorders (EDRs) downloaded from the National Automotive Sampling System / Crashworthiness Data System (NASS-CDS) 2000-2013 cases. All vehicles in the sample were GM passenger cars and light trucks involved in a frontal collision. Rollover crashes were excluded. Vehicles were restricted to single-event crashes that caused an airbag deployment. All EDR data were checked for a successful, completed recording of the event and a complete crash pulse. The maximum abbreviated injury scale (MAIS) was used to describe occupant injury outcome. Drivers were categorized into either a non-seriously injured group (MAIS2-) or a seriously injured group (MAIS3+), based on the severity of any injuries to the thorax, abdomen, and spine. ASI and OIV were calculated according to the Manual for Assessing Safety Hardware. VPI was calculated according to ISO/TR 12353-3, with vehicle-specific parameters determined from U.S. New Car Assessment Program crash tests. Using binary logistic regression, the cumulative probability of injury risk was determined for each metric and assessed for statistical significance, goodness-of-fit, and prediction accuracy. The dataset included 102,744 vehicles. A Wald chi-square test showed each vehicle-based crash severity metric
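The study's binary logistic regressions can be sketched as a one-feature model fitted by gradient ascent on the log-likelihood; the metric values and injury labels below are hypothetical standardized severity scores, and a real analysis would use a standard statistics package rather than this hand-rolled fit:

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit P(MAIS3+ | x) = 1 / (1 + exp(-(b0 + b1*x))) by gradient
    ascent on the log-likelihood; returns (intercept, slope)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)        # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def injury_risk(b0, b1, x):
    """Predicted probability of serious (MAIS3+) injury at metric value x."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
```

Comparing metrics then amounts to fitting one such curve per metric (delta-v, OIV, ASI, VPI) and judging each by significance, goodness-of-fit, and prediction accuracy.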

Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification: exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).
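The two sequential-optimization orders compared in this record can be illustrated as lexicographic selection over (length, coverage) pairs; the rule names and values below are hypothetical, and the paper's dynamic-programming machinery for enumerating optimal rules is not reproduced here:

```python
def best_rule(rules, order="length+coverage"):
    """Pick the best rule from (name, length, coverage) triples.
    'length+coverage': minimize length, break ties by maximum coverage;
    'coverage+length': maximize coverage, break ties by minimum length."""
    if order == "length+coverage":
        key = lambda r: (r[1], -r[2])
    elif order == "coverage+length":
        key = lambda r: (-r[2], r[1])
    else:
        raise ValueError(order)
    return min(rules, key=key)[0]
```

The two orders can disagree: a short rule with modest coverage wins under length+coverage, while a longer but far more general rule wins under coverage+length, which is why the paper compares classifiers built under each order.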

Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, so far, all these elements have been taken into consideration independently in the development of image and video quality metrics; we therefore propose an approach that blends them together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on the probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that all the existing fractal approaches are defined only for gray-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for the user-perceived video quality degradation, and we validate them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by unanimously accepted metrics and subjective tests.
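The paper's probabilistic fractal algorithm for colour images is not reproduced in this record; as a simplified illustration of the underlying idea, a classic box-counting estimate of fractal dimension for a binary pixel set (the colour extension would count occupancy in RGB cubes instead of 2D boxes):

```python
import math

def box_count_dimension(pixels, size):
    """Estimate the fractal dimension of a set of (x, y) pixel
    coordinates inside a size x size image: count occupied boxes at
    halving scales and fit the slope of log N(s) versus log(1/s)."""
    logs = []
    s = size // 2
    while s >= 1:
        boxes = {(x // s, y // s) for x, y in pixels}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
        s //= 2
    # least-squares slope of log N against log(1/s)
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    num = sum((a - mx) * (b - my) for a, b in logs)
    den = sum((a - mx) ** 2 for a, _ in logs)
    return num / den
```

A filled region yields a dimension near 2 and a thin curve near 1; the paper's hypothesis is that compression artifacts shift such complexity measures in proportion to perceived quality loss.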

Rulebases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rulebases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rulebase. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rulebases, enabling their acquisition and management.

This project is a continuation of my specialization project, which was focused on studying theoretical concepts related to the case-based reasoning method, the rule-based reasoning method and their integration. The integration of rule-based and case-based reasoning methods has shown a substantial improvement in performance over the individual methods. Verdande Technology As wants to try integrating the rule-based reasoning method with an existing case-based system. This project focu...

A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on, e.g., value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
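The Bayes' theorem step mentioned for recalculating availability figures is standard; a minimal sketch, with hypothetical observation likelihoods, could be:

```python
def bayes_update(prior_avail, p_obs_given_avail, p_obs_given_failed):
    """Recalculate a component's availability probability after an
    observation, via Bayes' theorem:
    P(avail | obs) = P(obs | avail) * P(avail) / P(obs)."""
    p_obs = (p_obs_given_avail * prior_avail
             + p_obs_given_failed * (1.0 - prior_avail))
    return p_obs_given_avail * prior_avail / p_obs
```

For example, with a prior availability of 0.9 and a passing self-test seen with probability 0.95 if the component is available but 0.2 if it has failed (both numbers hypothetical), the posterior availability rises to about 0.977.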

Introduction: We define a collection of metrics for describing and comparing sets of terms in controlled and uncontrolled indexing languages and then show how these metrics can be used to characterize a set of languages spanning folksonomies, ontologies and thesauri. Method: Metrics for term set characterization and comparison were identified and…

This describes a knowledge base (KB) partitioning approach to solving the problem of real-time performance when using the CLIPS AI shell with large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The Expert System advanced development project (ADP-2302) has two main objectives. The first is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impact during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification & validation task. Partitioning the KB reduces rule interaction complexity overall. Reduced interaction simplifies the V&V testing necessary by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply for that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests of real-time performance for specific machines under real-time operating systems have not been completed but are planned as part of the analysis process to validate the design.

The issue of plagiarism, claiming credit for work that is not one's own, rightly continues to cause concern in the academic community. An analysis is presented that shows the effects that may arise from metrics-based assessments of research, when credit for an author's outputs (chiefly publications) is given to an institution that did not support the research but which subsequently employs the author. The incentives for what is termed here "institutional plagiarism" are demonstrated with reference to the UK Research Assessment Exercise, in which submitting units of assessment are shown in some instances to derive around twice the credit for papers produced elsewhere by new recruits, compared to papers produced 'in-house'.

Web users have clearly expressed a wish to receive personalized services. Personalization means tailoring services directly to the immediate requirements of the user. However, the current Web Services System provides no features supporting this, such as personalization of services or intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery and construction across general Web documents and Semantic Web documents in a Web Services System. This system performs matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks the search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.

The concept of horizontal and vertical rulebases is introduced. This method enables designers to identify the main behaviors of a system and describe them as a first approximation. The rules that describe the system in the first stage are called the horizontal rulebase. In the second stage, the designer modulates the obtained surface by describing the changes needed on the first surface to handle the real behaviors of the system. The rules used in the second stage are called the vertical rulebase. Horizontal...

Under the sponsorship of IHI and EPRI, a rule-based screening system has been developed that can be used by utility engineers to determine which deterioration mechanisms are acting on specific LWR components, and to evaluate the efficacy of an age-related deterioration management program. The screening system was developed using the rule-based shell NEXPERT, which provides traceability to the data sources used in the logic development. The system addresses all the deterioration mechanisms of specific metals encountered in either BWRs or PWRs. Deterioration mechanisms are listed with reasons why they may occur during the design life of LWRs, considering the plant environment, manufacturing process, service history, material chemical composition, etc. of components in a specific location of an LWR. To quickly eliminate inactive deterioration mechanisms from the evaluation, a tier structure is applied to the rules. The reasons why deterioration will occur are extracted automatically by backward chaining. To reduce the amount of user input, plant environmental data are stored in files as default environmental data. (author)
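Backward chaining over tiered rules, as used to extract the reasons a mechanism may occur, can be illustrated with a minimal sketch; the deterioration rules shown are simplified stand-ins, not the NEXPERT knowledge base:

```python
def prove(goal, rules, facts):
    """Backward chaining: a goal holds if it is a known fact, or if some
    rule concludes it and all of that rule's premises can be proven."""
    if goal in facts:
        return True
    for conclusion, premises in rules:
        if conclusion == goal and all(prove(p, rules, facts) for p in premises):
            return True
    return False

# Hypothetical deterioration logic: IGSCC is flagged only when material,
# environment, and stress conditions are all present (illustrative only).
rules = [
    ("igscc-possible", ["sensitized-material", "oxidizing-water", "tensile-stress"]),
    ("sensitized-material", ["stainless-steel", "weld-heat-affected-zone"]),
]
facts = {"stainless-steel", "weld-heat-affected-zone",
         "oxidizing-water", "tensile-stress"}
print(prove("igscc-possible", rules, facts))  # True
```

The tier structure in the actual system serves the same purpose as the early `return False` here: mechanisms whose preconditions fail are pruned without evaluating deeper rules.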

predictors (P < .001) could explain 27% of the variation of citations. Highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles (9/12 or 75% of highly tweeted articles were highly cited, while only 3/43 or 7% of less-tweeted articles were highly cited; rate ratio 0.75/0.07 = 10.75, 95% confidence interval, 3.4–33.6). Top-cited articles can be predicted from top-tweeted articles with 93% specificity and 75% sensitivity. Conclusions Tweets can predict highly cited articles within the first 3 days of article publication. Social media activity either increases citations or reflects the underlying qualities of the article that also predict citations, but the true use of these metrics is to measure the distinct concept of social impact. Social impact measures based on tweets are proposed to complement traditional citation metrics. The proposed twimpact factor may be a useful and timely metric to measure uptake of research findings and to filter research findings resonating with the public in real time. PMID:22173204
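The reported sensitivity, specificity, and rate ratio can be re-derived from the 2x2 counts given in the abstract:

```python
# Reconstructing the 2x2 table implied by the abstract
# (positive = highly cited, test = highly tweeted):
tp, fp = 9, 3    # highly tweeted: 9 highly cited, 3 not
fn, tn = 3, 40   # less tweeted:   3 highly cited, 40 not

sensitivity = tp / (tp + fn)                      # 9/12 = 0.75
specificity = tn / (tn + fp)                      # 40/43, about 0.93
rate_ratio = (tp / (tp + fp)) / (fn / (fn + tn))  # 0.75 / 0.07, about 10.75
print(sensitivity, specificity, rate_ratio)
```

The 10.75 rate ratio is what the abstract rounds to "11 times more likely".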

This paper proposes the use of a rule-based computer programming language as a standard for the expression of rules, arguing that the adoption of a standard would enable researchers to communicate about rules in a consistent and significant way. Focusing on the formal equivalence of artificial intelligence (AI) programming to different types of…

Previous studies on rule learning show a bias in favor of act-based rules, which prohibit intentionally producing an outcome but not merely allowing the outcome. Nichols, Kumar, Lopez, Ayars, and Chan (2016) found that exposure to a single sample violation in which an agent intentionally causes the outcome was sufficient for participants to infer that the rule was act-based. One explanation is that people have an innate bias to think rules are act-based. We suggest an alternative empiricist account: since most rules that people learn are act-based, people form an overhypothesis (Goodman, 1955) that rules are typically act-based. We report three studies that indicate that people can use information about violations to form overhypotheses about rules. In study 1, participants learned either three "consequence-based" rules that prohibited allowing an outcome or three "act-based" rules that prohibited producing the outcome; in a subsequent learning task, we found that participants who had learned three consequence-based rules were more likely to think that the new rule prohibited allowing an outcome. In study 2, we presented participants with either 1 consequence-based rule or 3 consequence-based rules, and we found that those exposed to 3 such rules were more likely to think that a new rule was also consequence-based. Thus, in both studies, it seems that learning 3 consequence-based rules generates an overhypothesis to expect new rules to be consequence-based. In a final study, we used a more subtle manipulation. We exposed participants to examples of act-based or accident-based (strict liability) laws and then had them learn a novel rule. We found that participants who were exposed to the accident-based laws were more likely to think a new rule was accident-based. The fact that participants' bias for act-based rules can be shaped by evidence from other rules supports the idea that the bias for act-based rules might be acquired as an overhypothesis from the

Business rules and business processes are essential artifacts in defining the requirements of a software system. Business processes capture business behavior, while rules connect processes and thus control processes and business behavior. Traditionally, rules are scattered inside application code. This approach makes it very difficult to change rules and shortens the life cycle of the software system. Because rules change more quickly than the application itself, it is desirable to externalize...

This paper addresses the evaluation of image quality in the context of wireless systems using feature-based objective metrics. The considered metrics comprise a weighted combination of feature values that are used to quantify the extent to which the related artifacts are present in a processed image. In view of imaging applications in mobile radio and wireless communication systems, reduced-reference objective quality metrics are investigated for quantifying user-perceived quality. The exa...
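A feature-based objective metric of this kind reduces, in the simplest case, to a weighted sum of per-artifact feature values; the feature names and weights below are purely illustrative, not the metrics investigated in the paper:

```python
def feature_quality(features, weights):
    """Reduced-reference style score: a weighted combination of feature
    values, each quantifying how strongly one artifact (blocking, blur,
    ringing, ...) is present in the processed image."""
    return sum(weights[name] * value for name, value in features.items())

features = {"blocking": 0.2, "blur": 0.5, "ringing": 0.1}  # toy measurements
weights  = {"blocking": 0.4, "blur": 0.4, "ringing": 0.2}  # toy calibration
score = feature_quality(features, weights)
print(score)
```

In practice the weights would be fit against subjective quality scores for the target wireless channel conditions.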

Cataloguing is sometimes regarded as a rule-bound, production-based activity that offers little scope for professional judgement and decision-making. In reality, cataloguing involves challenging decisions that can have significant service and financial impacts. The current environment for cataloguing is a maelstrom of changing demands and competing visions for the future. With information-seekers turning en masse to Google and their behaviour receiving greater attention, library vendors are offering "discovery layer" products to replace traditional OPACs, and cataloguers are examining and debating a transformed version of their descriptive cataloguing rules (Resource Description and Access, or RDA). In his "Perceptions of the future of cataloging: Is the sky really falling?" (2009), Ivey provides a good summary of this environment. At the same time, myriad new metadata formats and schema are being developed and applied for digital collections in libraries and other institutions. In today's libraries, cataloguing is no longer limited to management of traditional AACR and MARC-based metadata for traditional library collections. And like their parent institutions, libraries cannot ignore growing pressures to demonstrate accountability and the tangible value provided by their services. More than ever, research and an evidence-based approach can help guide cataloguing decision-making.

The U.S. Nuclear Regulatory Commission has begun a transition from 'process-oriented' to 'results-oriented' regulations. The maintenance rule is a results-oriented rule that mandates consideration of risk and plant performance. The Maintenance Rule allows licensees to devise the most effective and efficient means of achieving the results described in the rule including the use of Probabilistic Risk (or Safety) Assessments. The NRC staff conducted a series of site visits to evaluate implementation of the Rule. Conclusions from the site visits indicated that the results-oriented Maintenance Rule can be successfully implemented and enforced. (author)

Connection setup on various computer networks is now achieved by GMPLS. This technology is based on the source-routing approach, which requires the source node to store metric information for the entire network prior to computing a route. Thus all metric information must be distributed to all network nodes and kept up to date. However, as metric information becomes more diverse and generalized, it is hard to keep all of it current due to the huge update overhead. Emerging network services and applications require the network to support diverse metrics for achieving various communication qualities. Increasing the number of metrics supported by the network causes excessive processing of metric update messages. To reduce the number of metric update messages, another scheme is required. This paper proposes a connection setup scheme that uses flooding-based signaling rather than the distribution of metric information. The proposed scheme requires only the flooding of signaling messages carrying the requested metric information; no routing protocol is required. Evaluations confirm that the proposed scheme achieves connection establishment without excessive overhead. Our analysis shows that the proposed scheme greatly reduces the number of control messages compared to the conventional scheme, while their blocking probabilities are comparable.
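The flooding-based setup idea can be sketched as a breadth-first flood in which the signaling message itself carries the requested metric (here a delay bound), so no node needs a network-wide metric database; the topology and tie-breaking below are invented for illustration:

```python
from collections import deque

def flood_setup(graph, src, dst, max_delay):
    """Flood a setup message hop by hop; each copy accumulates the link
    metric and is dropped once the requested bound is exceeded.
    graph: node -> list of (neighbor, link_delay)."""
    queue = deque([(src, 0, [src])])
    seen = {src: 0}          # best accumulated delay per node (prunes copies)
    best = None
    while queue:
        node, delay, path = queue.popleft()
        if node == dst:
            if best is None or delay < best[0]:
                best = (delay, path)
            continue
        for nbr, d in graph.get(node, []):
            nd = delay + d
            if nd <= max_delay and nd < seen.get(nbr, float("inf")):
                seen[nbr] = nd
                queue.append((nbr, nd, path + [nbr]))
    return best  # (delay, path) of the established connection, or None

net = {"A": [("B", 2), ("C", 5)], "B": [("D", 2)], "C": [("D", 1)], "D": []}
print(flood_setup(net, "A", "D", max_delay=5))  # (4, ['A', 'B', 'D'])
```

Pruning by `seen` is what keeps the control-message count bounded, mirroring the paper's claim that flooding avoids excessive overhead.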

The ideas introduced in this book explore the relationships among rule-based systems, machine learning and big data. Rule-based systems are seen as a special type of expert system, which can be built by using expert knowledge or by learning from real data. The book focuses on the development and evaluation of rule-based systems in terms of accuracy, efficiency and interpretability. In particular, a unified framework for building rule-based systems, which consists of the operations of rule generation, rule simplification and rule representation, is presented. Each of these operations is detailed using specific methods or techniques. In addition, this book also presents some ensemble learning frameworks for building ensemble rule-based systems.

This thesis studies the extraction of embedded business rules, using the idioms of the framework in use to identify them. Embedded business rules exist as source code in the software system, and knowledge about them may get lost. Extraction of those business rules could make them accessible

Rules, whether in the form of norms, taboos or laws, regulate and coordinate human life. Some rules, however, are arbitrary and adhering to them can be personally costly. Rigidly sticking to such rules can be considered maladaptive. Here, we test whether, at the neurobiological level, (mal)adaptive

A policy analysis of Florida's 10-factor Performance-Based Funding system for state universities. The focus of the article is on the system of performance metrics developed by the state Board of Governors and their impact on institutions and their missions. The paper also discusses problems and issues with the metrics, their ongoing evolution, and…

Explores electronic journal usage statistics and develops three metrics and three benchmarks based on those metrics. Topics include earlier work that assessed the value of print journals and was modified for the electronic format; the evaluation of potential purchases; and implications for standards development, including the need for content…

As web applications become more complex and the number of internet users rises, so does the need to optimize the use of hardware supporting these applications. Optimization can be achieved with microservices, as they offer several advantages compared to the monolithic approach, such as better utilization of resources, scalability and isolation of different parts of an application. Another important part is collecting metrics, since they can be used for analysis and debugging as well as the ba...

The work highlights the most important principles of software reliability management (SRM). The SRM concept forms a basis for developing a method to improve requirements correctness. The method assumes that complicated requirements contain more actual and potential design faults/defects. It applies a new metric to evaluate requirements complexity and a double-sorting technique to evaluate the priority and complexity of each particular requirement. The method improves requirements correctness by enabling the identification of a larger number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.

On-line vehicle dispatching rules are widely used in many facilities such as warehouses to control vehicles' movements. Single-attribute dispatching rules, which dispatch vehicles based on only one parameter, are commonly used. However, multi-attribute dispatching rules prove to be

The state of cyber security has begun to attract more attention and interest outside the community of computer security experts. Cyber security is not a single problem, but rather a group of highly different problems involving different sets of threats. A fuzzy rule-based system for cyber security consists of a rule depository and a mechanism for accessing and running the rules. The depository is usually constructed with a collection of related rule sets. The aim of this study is to...

This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
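Under the usual definitions of these accuracy-ratio metrics (bias as the median of ln(prediction/observation), and median symmetric accuracy as 100(exp(median |ln(prediction/observation)|) - 1)), a minimal sketch is:

```python
import math
from statistics import median

def median_log_accuracy_ratio(pred, obs):
    """Median of Q = ln(prediction/observation); 0 means no bias."""
    return median(math.log(p / o) for p, o in zip(pred, obs))

def median_symmetric_accuracy(pred, obs):
    """100 * (exp(median |ln(pred/obs)|) - 1): a percentage-error-like
    measure that treats over- and under-prediction symmetrically."""
    m = median(abs(math.log(p / o)) for p, o in zip(pred, obs))
    return 100 * (math.exp(m) - 1)

obs  = [1.0, 2.0, 4.0]
pred = [2.0, 2.0, 2.0]   # over-prediction, exact, under-prediction
print(median_log_accuracy_ratio(pred, obs))  # 0.0: the errors cancel, no bias
print(median_symmetric_accuracy(pred, obs))  # about 100 (median factor-2 error)
```

Unlike MAPE, a factor-2 over-prediction and a factor-2 under-prediction contribute equally here, which is the symmetry the report recommends.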

Along with the development of human civilization in Indonesia, the change and reform of Islamic inheritance law to conform to local conditions and culture cannot be denied. The distribution of inheritance in Indonesia can be done automatically by storing the rules of Islamic inheritance law in an expert system. In this study, we analyze the knowledge of experts in Islamic inheritance in Indonesia and represent it in the form of rules using rule-based Forward Chaining (FC) and Davis-Putnam-Logemann-Loveland (DPLL) algorithms. By hybridizing the FC and DPLL algorithms, the rules of Islamic inheritance law in Indonesia are clearly defined and measured. The rules were conceptually validated by experts in Islamic law and informatics. The results revealed that generally all rules were ready for use in an expert system.
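A minimal forward-chaining sketch shows the rule-firing loop at the heart of such an expert system; the inheritance rules below are invented placeholders, not the validated Indonesian rulebase:

```python
def forward_chain(rules, facts):
    """Forward chaining: repeatedly fire any rule whose premises all hold,
    adding its conclusion, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative stand-in rules only (NOT the actual inheritance rulebase):
rules = [
    (["deceased-has-son"], "daughter-shares-residue"),
    (["deceased-has-no-son", "one-daughter"], "daughter-gets-half"),
]
derived = forward_chain(rules, {"deceased-has-no-son", "one-daughter"})
print("daughter-gets-half" in derived)  # True
```

The DPLL component in the paper would additionally check that the asserted case facts are mutually consistent before chaining.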

National Aeronautics and Space Administration — APRIL 2016 NOTE: Principal Investigator moved to Rice University in mid-2015. Project continues at Rice with the same title (Evidence-based Metrics Toolkit for...

In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent, the consequent, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once to create the attribute index of each attribute. All metric values used to evaluate an association rule then no longer require scanning the database, but acquire data only by means of the attribute indices. The paper views association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It does not require a user-specified minimum support and minimum confidence anymore, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and time consumption.
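The attribute-index idea can be sketched as a single pass that maps each item to the set of record ids containing it, after which support and confidence are computed from set intersections rather than database rescans; the tiny database below is illustrative:

```python
def build_index(transactions):
    """One scan of the database: map each attribute value to the set of
    record ids that contain it. All later rule metrics use only this index."""
    index = {}
    for tid, items in enumerate(transactions):
        for item in items:
            index.setdefault(item, set()).add(tid)
    return index

def support(index, itemset, n):
    """Support of an itemset via intersection of its attribute indices."""
    rows = set.intersection(*(index[i] for i in itemset))
    return len(rows) / n

db = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]
idx = build_index(db)
sup_ab = support(idx, ["a", "b"], len(db))     # {a,b} appears in rows 0 and 2
conf = sup_ab / support(idx, ["a"], len(db))   # confidence of rule a -> b
print(sup_ab, conf)
```

Each candidate rule evaluated by the evolutionary search costs only a few set intersections, which is where the reduction in comparisons comes from.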

A C++ class, called Tripod, was created as a tool to assist with the development of rule-based decision support systems. The Tripod class contains data structures for the rule-base and member functions for operating on the data. The rule-base is defined by three ASCII files. These files are translated by a preprocessor into a single file that is located when a rule-base object is instantiated. The Tripod class was tested as part of a prototype decision support system (DSS) for winter highway maintenance in the Intermountain West. The DSS is composed of two principal modules: the main program, called the wrapper, and a Tripod rule-base object. The wrapper is a procedural module that interfaces with remote sensors and an external meteorological database. The rule-base contains the logic for advising an inexperienced user and for assisting with the decision-making process.

The inspection and maintenance of bridges of all types is critical to public safety and often critical to the economy of a region. Recent advanced sensor technologies provide accurate and easy-to-deploy means for structural health monitoring and, if the critical locations are known a priori, these locations can be monitored by direct measurements. However, for today's complex civil infrastructure, the critical locations are numerous and often difficult to identify. This paper presents an innovative framework for structural monitoring at arbitrary locations on the structure, combining computational models and limited physical sensor information. The use of multi-metric measurements is advocated to improve the accuracy of the approach. A numerical example is provided to illustrate the proposed hybrid monitoring framework, particularly focusing on fatigue life assessment of steel structures.

Tuberculosis (TB) is a disease caused by the bacterium Mycobacterium tuberculosis. It usually spreads through the air and attacks bodies with low immunity, such as patients with Human Immunodeficiency Virus (HIV). This work focuses on finding close association rules, a promising technique in data mining, within TB data. The proposed method first normalizes raw data from medical records, which include categorical, nominal and continuous attributes, and then determines association rules from the normalized data with different support and confidence thresholds. The association rules are applied to a real data set containing medical records of patients with TB obtained from a state hospital. The determined rules describe close associations between symptoms; for example, the occurrence of sputum is closely associated with blood cough and HIV.
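Computing the confidence of a symptom-to-symptom rule over normalized records can be sketched as follows; the records are toy data, not the hospital dataset used in the study:

```python
def confidence(records, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent over normalized
    records, i.e. the fraction of antecedent cases that also show the
    consequent. Symptom names here are illustrative placeholders."""
    with_a = [r for r in records if antecedent <= r]
    if not with_a:
        return 0.0
    return sum(1 for r in with_a if consequent <= r) / len(with_a)

records = [
    {"sputum", "blood_cough", "hiv"},
    {"sputum", "blood_cough"},
    {"sputum"},
    {"fever"},
]
print(confidence(records, {"sputum"}, {"blood_cough"}))  # 2 of 3 sputum cases
```

Rules exceeding chosen support and confidence thresholds would then be reported as "close" associations.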

Traditional human factors design involves the development of human factors requirements based on a desire to accommodate a certain percentage of the intended user population. As the product is developed human factors evaluation involves comparison between the resulting design and the specifications. Sometimes performance metrics are involved that allow leniency in the design requirements given that the human performance result is satisfactory. Clearly such approaches may work but they give rise to uncertainty and negotiation. An alternative approach is to adopt human factors design rules that articulate a range of each design continuum over which there are varying outcome expectations and interactions with other variables, including time. These rules are based on a consensus of human factors specialists, designers, managers and customers. The International Space Station faces exactly this challenge in interior volume control, which is based on anthropometric, performance and subjective preference criteria. This paper describes the traditional approach and then proposes a rule-based alternative. The proposed rules involve spatial, temporal and importance dimensions. If successful this rule-based concept could be applied to many traditional human factors design variables and could lead to a more effective and efficient contribution of human factors input to the design process.

Int. J. of Electric and Hybrid Vehicles (IJEHV), The highest control layer of a (hybrid) vehicular drive train is termed the Energy Management Strategy (EMS). In this paper an overview of different control methods is given and a new rule-based EMS is introduced based on the combination of Rule-Based

Precision-Recall is one of the main metrics for evaluating content-based image retrieval techniques. However, it does not provide an ample perception of the properties of an image dataset immersed in a metric space. In this work, we describe an alternative metric named H-Metric, which is determined along a sequence of controlled modifications in the image dataset. The process is named homogenization and works by altering the homogeneity characteristics of the classes of the images. The result is a process that measures how hard it is to deal with a set of images with respect to content-based retrieval, offering support in the task of analyzing configurations of distance functions and feature extractors.
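For reference, the baseline Precision-Recall computation that H-Metric is positioned against can be sketched as:

```python
def precision_recall(retrieved, relevant):
    """Standard retrieval metrics: precision is the fraction of retrieved
    images that are relevant; recall is the fraction of relevant images
    that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

# Toy query result: image ids 2 and 4 are correct hits.
p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 5, 6, 7, 8])
print(p, r)  # 0.5 and about 0.33
```

H-Metric's point is that these per-query numbers say little about the dataset's geometry in the metric space, which the homogenization procedure probes directly.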

Sustainability measurement in economics involves evaluation of environmental and economic impact in an integrated manner. In this study, system level economic data are combined with environmental impact from a life cycle assessment (LCA) of a common product. We are exploring a costing approach that captures traditional costs but also incorporates externality costs to provide a convenient, easily interpretable metric. Green Net Value Added (GNVA) is a type of full cost accounting that incorporates total revenue, the cost of materials and services, depreciation, and environmental externalities. Two, but not all, of the potential environmental impacts calculated by the standard LCIA method (TRACI) could be converted to externality cost values. We compute externality costs disaggregated by upstream sectors, full cost, and GNVA to evaluate the relative sustainability of Bounty® paper towels manufactured at two production facilities. We found that the longer running, more established line had a higher GNVA than the newer line. The dominant factors contributing to externality costs are calculated to come from the stationary sources in the supply chain: electricity generation (27-35%), refineries (20-21%), and pulp and paper making (15-23%). Health related externalities from Particulate Matter (PM2.5) and Carbon Dioxide equivalent (CO2e) emissions appear largely driven by electricity usage and emissions by the facilities, followed by pulp processing and transport. Supply
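The GNVA computation described (total revenue minus the cost of materials and services, depreciation, and externality costs) can be sketched with invented figures; these numbers are illustrative, not the Bounty data:

```python
def green_net_value_added(revenue, materials_and_services, depreciation,
                          externality_costs):
    """GNVA: full cost accounting that nets out conventional costs plus
    monetized environmental externalities from total revenue."""
    return (revenue - materials_and_services - depreciation
            - sum(externality_costs.values()))

gnva = green_net_value_added(
    revenue=1000.0,
    materials_and_services=600.0,
    depreciation=100.0,
    externality_costs={"PM2.5": 40.0, "CO2e": 60.0},  # monetized impacts
)
print(gnva)  # 200.0
```

A conventional net value added would stop before the externality term; the gap between the two is the monetized environmental burden.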

Guided weapons, are a potent threat to both air and surface platforms; to protect the platform, Countermeasures are often used to disrupt the operation of the tracking system. Development of effective techniques to defeat the guidance sensors is a complex activity. The countermeasure often responds to the behaviour of a responsive sensor system, creating a "closed loop" interaction. Performance assessment is difficult, and determining that enough knowledge exists to make a case that a platform is adequately protected is challenging. A set of metrics known as Countermeasure Confidence Levels (CCL) is described. These set out a measure of confidence in prediction of Countermeasure performance. The CCL scale provides, for the first time, a method to determine whether enough evidence exists to support development activity and introduction to operational service. Application of the CCL scale to development of a hypothetical countermeasure is described. This tracks how the countermeasure is matured from initial concept to in-service application. The purpose of each stage is described, together with a description of what work is likely to be needed. This will involve timely use of analysis, simulation, laboratory work and field testing. The use of the CCL scale at key decision points is described. These include procurement decision points, and entry-to-service decisions. Each stage requires collection of evidence of effectiveness. Completeness of the available evidence can be assessed, and duplication can be avoided. Read-across between concepts, weapon systems and platforms can be addressed and the impact of technology insertion can be assessed.

This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search for rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form, A→βγ (where each of β and γ is either a terminal or a nonterminal symbol), to Extended Chomsky Normal Form, which also includes rules of the form A→B. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than by the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than with the minimum set search, and for some CFLs the system can find no appropriate grammar by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
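Bottom-up parsing of positive samples, on which Synapse's rule generation rests, can be illustrated with a plain CYK recogniser for CNF grammars; this is a generic sketch, and the extended form with unit rules A→B used by Synapse is omitted here:

```python
def cyk(word, rules, start="S"):
    """CYK recogniser for a grammar in Chomsky Normal Form:
    rules are (lhs, (B, C)) or (lhs, (terminal,))."""
    n = len(word)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][i] = {lhs for lhs, rhs in rules if rhs == (ch,)}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):        # split the span at position k
                for lhs, rhs in rules:
                    if (len(rhs) == 2 and rhs[0] in table[i][k]
                            and rhs[1] in table[k + 1][j]):
                        table[i][j].add(lhs)
    return start in table[0][n - 1]

# a^n b^n grammar in CNF: S -> A T | A B, T -> S B, A -> a, B -> b
rules = [("S", ("A", "T")), ("S", ("A", "B")), ("T", ("S", "B")),
         ("A", ("a",)), ("B", ("b",))]
print(cyk("aabb", rules))  # True
print(cyk("abab", rules))  # False
```

Synapse's "bridging" step asks the converse question: given a table that fails to cover a positive string, which minimal new rules would complete the derivation tree.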

In the last third of the 20th century, fuzzy logic has risen from a mathematical concept to an applicable approach in soft computing. Today, fuzzy logic is used in control systems for various applications, such as washing machines, train-brake systems, automobile automatic gear, and so forth. The approach of optical implementation of fuzzy inferencing was given by the authors in previous papers, giving an extra emphasis to applications with two dominant inputs. In this paper the authors introduce a real-time optical rule generator for the dual-input fuzzy-inference engine. The paper briefly goes over the dual-input optical implementation of fuzzy-logic inferencing. Then, the concept of constructing a set of rules from given data is discussed. Next, the authors show ways to implement this procedure optically. The discussion is accompanied by an example that illustrates the transformation from raw data into fuzzy set rules.

Biometric traits, such as fingerprints, faces and signatures, have been employed in bio-cryptosystems to secure cryptographic keys within digital security schemes. Reliable implementations of these systems employ error correction codes formulated as simple distance thresholds, although they may not effectively model the complex variability of behavioral biometrics like signatures. In this paper, a Global-Local Distance Metric (GLDM) framework is proposed to learn cost-effective distance metrics, which reduce within-class variability and augment between-class variability, so that the simple error correction thresholds of bio-cryptosystems provide high classification accuracy. First, a large number of samples from a development dataset are used to train a global distance metric that differentiates within-class from between-class samples of the population. Then, once user-specific samples are available for enrollment, the global metric is tuned to a local user-specific one. Proof-of-concept experiments on two reference offline signature databases confirm the viability of the proposed approach. Distance metrics are produced based on concise signature representations consisting of about 20 features and a single prototype. A signature-based bio-cryptosystem is designed using the produced metrics and has shown average classification error rates of about 7% and 17% for the PUCPR and GPDS-300 databases, respectively. This level of performance is comparable to that obtained with complex state-of-the-art classifiers.

We study a new approach to regression analysis. We propose a new rule-based regression model using the theoretical framework of belief functions. For this purpose we use the recently proposed Evidential c-means (ECM) to derive rule-based models solely from data. ECM allocates, for each

A fuzzy rule-based controller is applied to lateral guidance of a vehicle for an automated highway system. The fuzzy rules, based on human drivers' experiences, are developed to track the center of a lane in the presence of external disturbances and over a range of vehicle operating conditions.

We present a novel approach and implementation for analysing weighted timed automata (WTA) with respect to the weighted metric temporal logic (WMTL≤). Based on a stochastic semantics of WTAs, we apply statistical model checking (SMC) to estimate and test probabilities of satisfaction ...

Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
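A two-rule model of this kind can be evaluated sequentially, cue by cue. The sketch below is a minimal fast-and-frugal decision list in Python; the cue names and cut-offs are hypothetical illustrations, not the actual rules derived from the Penninx et al. (2011) data.

```python
# Sketch of a fast-and-frugal two-rule classifier in the spirit of the
# RuleFit-derived model described above. Cue names and cut-offs are
# hypothetical illustrations, not the published rules.

def predict_chronic_course(severity_score, duration_months):
    """Sequentially evaluate at most two cues and return a binary prediction."""
    # Rule 1: very high baseline severity -> predict chronic course immediately.
    if severity_score >= 30:
        return True
    # Rule 2: otherwise, long symptom duration -> predict chronic course.
    if duration_months >= 24:
        return True
    return False

print(predict_chronic_course(35, 3))   # first cue fires
print(predict_chronic_course(10, 30))  # second cue fires
print(predict_chronic_course(10, 3))   # neither fires
```

Because the rules are checked in order, a prediction is often made after inspecting a single cue, which is what makes fast-and-frugal trees cheap to apply in practice.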

Purpose: The purpose of this work is to explore the usefulness of the gamma passing rate metric for per-patient, pretreatment dose QA and to validate a novel patient-dose/DVH-based method and its accuracy and correlation. Specifically, the following are analyzed: (1) correlations between gamma passing rates for three 3D dosimeter detector geometries and clinically relevant patient DVH-based metrics; (2) correlations between gamma passing rates of whole patient dose grids and DVH-based metrics; (3) correlations between gamma passing rates filtered by region of interest (ROI) and DVH-based metrics; and (4) the capability of a novel software algorithm that estimates corrected patient dose/DVH based on conventional phantom QA data. Methods: Ninety-six unique 'imperfect' step-and-shoot IMRT plans were generated by applying four different types of errors to 24 clinical Head/Neck patients. The 3D patient doses as well as the dose to a cylindrical QA phantom were then recalculated using an error-free beam model to serve as a simulated measurement for comparison. Resulting deviations between the planned and simulated measured DVH-based metrics were generated, as were gamma passing rates for a variety of difference/distance criteria covering dose-in-phantom comparisons and dose-in-patient comparisons, with the in-patient results calculated both over the whole grid and per ROI volume. Finally, patient dose and DVH were predicted using the conventional per-beam planar data as input into a commercial 'planned dose perturbation' (PDP) algorithm, and the results of these predicted DVH-based metrics were compared to the known values. Results: A range of weak to moderate correlations were found between clinically relevant patient DVH metrics (CTV-D95, parotid D_mean, spinal cord D1cc, and larynx D_mean) and both 3D detector and 3D patient gamma passing rates (3%/3 mm, 2%/2 mm) for dose-in-phantom along with dose-in-patient for both the whole patient volume and filtered per ROI. There was ...
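For readers unfamiliar with the gamma passing rate, a minimal 1D sketch is given below; real QA compares 3D dose grids, and while the 3%/3 mm criterion matches the abstract, the dose profiles here are invented.

```python
# Illustrative 1D gamma-index sketch of the passing-rate metric discussed
# above. Each reference point searches the evaluated profile for the best
# combined dose-difference / distance-to-agreement score.

def gamma_pass_rate(reference, evaluated, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Fraction of reference points with gamma <= 1 (global dose normalization)."""
    d_max = max(reference)
    passed = 0
    for i, d_ref in enumerate(reference):
        best = float('inf')
        for j, d_eval in enumerate(evaluated):
            dd = (d_eval - d_ref) / (dose_tol * d_max)   # dose-difference term
            dr = (j - i) * spacing_mm / dist_tol_mm      # distance-to-agreement term
            best = min(best, (dd * dd + dr * dr) ** 0.5)
        if best <= 1.0:
            passed += 1
    return passed / len(reference)

ref  = [10, 30, 60, 90, 100, 90, 60, 30, 10]   # planned profile (arbitrary units)
meas = [10, 31, 62, 91, 102, 88, 59, 30, 10]   # simulated measurement
print(gamma_pass_rate(ref, meas, spacing_mm=1.0))
```

The point the abstract makes is that this passing rate, even when high, may correlate only weakly with the DVH metrics clinicians actually care about.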

As more and more neuroanatomical data are made available through efforts such as NeuroMorpho.Org and FlyCircuit.org, the need to develop computational tools to facilitate automatic knowledge discovery from such large datasets becomes more urgent. One fundamental question is how best to compare neuron structures, for instance to organize and classify large collections of neurons. We aim to develop a flexible yet powerful framework to support comparison and classification of large collections of neuron structures efficiently. Specifically, we propose to use a topological persistence-based feature vectorization framework. Existing methods to vectorize a neuron (i.e., convert a neuron to a feature vector so as to support efficient comparison and/or searching) typically rely on statistics or summaries of morphometric information, such as the average or maximum local torque angle or partition asymmetry. These simple summaries have limited power in encoding global tree structures. Based on the concept of topological persistence recently developed in the field of computational topology, we vectorize each neuron structure into a simple yet informative summary. In particular, each type of information of interest can be represented as a descriptor function defined on the neuron tree, which is then mapped to a simple persistence signature. Our framework can encode both local and global tree structure, as well as other information of interest (electrophysiological or dynamical measures), by considering multiple descriptor functions on the neuron. The resulting persistence-based signature is potentially more informative than simple statistical summaries (such as the average/mean/max of morphometric quantities). Indeed, we show that using a certain descriptor function will give a persistence-based signature containing strictly more information than the classical Sholl analysis. At the same time, our framework retains the efficiency associated with treating neurons as ...

This paper proposes a methodology based on system connections to calculate system complexity. Two case studies are proposed: the dining Chinese philosophers' problem and a distribution center. Both are modeled using the theory of Discrete Event Systems, and simulations in different contexts were performed in order to measure their complexities. The obtained results present (i) the static complexity as a limiting factor for the dynamic complexity, (ii) the lowest cost in terms of complexity for each unit of measure of the system performance, and (iii) the output sensitivity to the input parameters. The associated complexity and performance measures aggregate knowledge about the system.

The objective of this study was to develop and assess the feasibility of utilizing consensus-based penalty metrics for the purpose of critical structure and organ at risk (OAR) contouring quality assurance and improvement. A Delphi study was conducted to obtain consensus on contouring penalty metrics to assess trainee-generated OAR contours. Voxel-based penalty metric equations were used to score regions of discordance between trainee and expert contour sets. The utility of these penalty metric scores for objective feedback on contouring quality was assessed using cases prepared for weekly radiation oncology trainee treatment planning rounds. In two Delphi rounds, six radiation oncology specialists reached agreement on clinical importance/impact and organ radiosensitivity as the two primary criteria for the creation of the Critical Structure Inter-comparison of Segmentation (CriSIS) penalty functions. Linear/quadratic penalty scoring functions (for over- and under-contouring) with one of four levels of severity (none, low, moderate and high) were assigned to each of 20 OARs in order to generate a CriSIS score when new OAR contours are compared with reference/expert standards. Six cases (central nervous system, head and neck, gastrointestinal, genitourinary, gynaecological and thoracic) were then used to validate 18 OAR metrics through comparison of trainee and expert contour sets using the consensus-derived CriSIS functions. For 14 OARs, there was an improvement in CriSIS score post-educational intervention. The use of consensus-based contouring penalty metrics to provide quantitative information for contouring improvement is feasible.
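A minimal sketch of such a voxel-based penalty is shown below; the severity weight and the linear/quadratic penalty shapes follow the description above, but the particular values and the toy contours are illustrative assumptions, not the consensus CriSIS functions.

```python
# Sketch of a voxel-based contouring penalty in the spirit of the CriSIS
# functions: over- and under-contoured voxels are counted against an expert
# reference, then scored with a linear or quadratic penalty.

def crisis_score(trainee_voxels, expert_voxels, severity=1.0, quadratic=False):
    trainee, expert = set(trainee_voxels), set(expert_voxels)
    over = len(trainee - expert)    # voxels contoured but not in the reference
    under = len(expert - trainee)   # reference voxels the trainee missed
    discordant = over + under
    penalty = discordant ** 2 if quadratic else discordant
    return severity * penalty

expert  = {(x, y) for x in range(4) for y in range(4)}   # 16-voxel reference
trainee = {(x, y) for x in range(5) for y in range(3)}   # shifted trainee contour
print(crisis_score(trainee, expert))                  # linear penalty
print(crisis_score(trainee, expert, quadratic=True))  # quadratic penalty
```

The quadratic variant penalizes large regions of discordance disproportionately, which matches the intent of weighting highly radiosensitive organs more severely.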

We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator's estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of the tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) than traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable and robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up to this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered.

The concept of horizontal and vertical rulebases is introduced. This method enables designers to look for the main behaviors of a system and describe them with greater approximation. The rules which describe the system in the first stage are called the horizontal rulebase. In the second stage, the designer modulates the obtained surface by describing the changes needed on the first surface to handle the real behaviors of the system. The rules used in the second stage are called the vertical rulebase. The horizontal and vertical rulebases method plays a major role in easing the extraction of the optimum control surface, using far fewer rules than traditional fuzzy systems. This research involves control of a highly nonlinear system that is difficult to model with classical methods. As a case study for testing the proposed method under real conditions, the designed controller is applied to a steaming room with uncertain data and variable parameters. A comparison among PID, a traditional fuzzy counterpart, and the proposed system shows that the proposed system outperforms PID and traditional fuzzy systems in terms of the number of valve switchings and better surface following. The evaluations were done both with model simulation and DSP implementation.

The International Maritime Organisation (IMO) has recommended a method called formal safety assessment (FSA) for future development of rules and regulations. The FSA method has been applied in a pilot research project for the development of risk-based rules and functional requirements for systems and components for offshore crane systems. This paper reports some developments in the project. A method for estimating target reliability for the risk-control options (safety functions) by means of the cost/benefit decision criterion has been developed in the project and is presented in this paper. Finally, a structure for risk-based rules is proposed and presented.

The term 'Technology Base' is commonly used, but what does it mean? Is there a common understanding of the components that comprise a technology base? Does a formal process exist to assess the health of a given technology base? These are important questions, the relevance of which is even more pressing given the USDOE/NNSA initiatives to strengthen the safeguards technology base through investments in research and development and human capital development. Accordingly, the authors will establish a high-level framework to define and understand what comprises a technology base. Potential goal-driven metrics to assess the health of a technology base will also be explored, such as linear demographics and resource availability, in the hope that they can be used to better understand and improve the health of the U.S. safeguards technology base. Finally, through the identification of such metrics, the authors will offer suggestions and highlight choices for addressing potential shortfalls.

The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records were collected from 35 participants exposed to 8 relevant TV Super Bowl commercials. Correlations between the neurophysiological metrics, ad recall, ad liking, the ACE metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between the neuroscience metrics and both self-reported ad effectiveness and the direct number of views on the YouTube channel. In addition, an artificial neural network based on the neuroscience metrics classifies ads (82.9% average accuracy) and estimates the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article to examine whether these measures could actually be used for predicting views for advertising on YouTube.

Graduate medical education (GME) in the United States is financed by contributions from both federal and state entities that total over $15 billion annually. Within institutions, these funds are distributed with limited transparency to achieve ill-defined outcomes. To address this, the Institute of Medicine convened a committee on the governance and financing of GME to recommend finance reform that would promote a physician training system that meets society's current and future needs. The resulting report provided several recommendations regarding the oversight and mechanisms of GME funding, including implementation of performance-based GME payments, but did not provide specific details about the content and development of metrics for these payments. To initiate a national conversation about performance-based GME funding, the authors asked: What should GME be held accountable for in exchange for public funding? In answer to this question, the authors propose 17 potential performance-based metrics for GME funding that could inform future funding decisions. Eight of the metrics are described as exemplars to add context and to help readers obtain a deeper understanding of the inherent complexities of performance-based GME funding. The authors also describe considerations and precautions for metric implementation.

Multi-atlas based image segmentation sees unprecedented opportunities but also demanding challenges in the big data era. Relevant atlas selection before label fusion plays a crucial role in reducing potential performance loss from heterogeneous data quality and high computation cost from extensive data. This paper starts by investigating the image similarity metric (termed ‘surrogate’), an alternative to the inaccessible geometric agreement metric (termed ‘oracle’) in atlas relevance assessment, and probes into the problem of how to select the ‘most-relevant’ atlases and how many such atlases to incorporate. We propose an inference model to relate the surrogate and the oracle geometric agreement metrics. Based on this model, we quantify the behavior of the surrogates in mimicking oracle metrics for atlas relevance ordering. Finally, analytical insights on the choice of fusion set size are presented from a probabilistic perspective, with the integrated goal of including the most relevant atlases and excluding the irrelevant ones. Empirical evidence and performance assessment are provided based on prostate and corpus callosum segmentation.

Real-time hydropower reservoir operation is a continuous decision-making process of determining the water level of a reservoir or the volume of water released from it. The hydropower operation is usually based on operating policies and rules defined and decided upon in strategic planning. This paper presents a fuzzy rule-based model for the operation of hydropower reservoirs. The proposed fuzzy rule-based model presents a set of suitable operating rules for release from the reservoir based on ideal or target storage levels. The model operates on an 'if-then' principle, in which the 'if' is a vector of fuzzy premises and the 'then' is a vector of fuzzy consequences. In this paper, reservoir storage, inflow, and period are used as premises and the release as the consequence. The steps involved in the development of the model include construction of membership functions for the inflow, storage and release, formulation of fuzzy rules, implication, aggregation and defuzzification. The knowledge base required for the formulation of the fuzzy rules is obtained from a stochastic dynamic programming (SDP) model with a steady-state policy. The proposed model is applied to the hydropower operation of the 'Dez' reservoir in Iran and the results are presented and compared with those of the SDP model. The results indicate the ability of the method to solve hydropower reservoir operation problems.
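The 'if-then' structure above can be sketched as a small fuzzy rule base. The example below uses a Sugeno-style weighted average rather than the full Mamdani implication/aggregation/defuzzification pipeline of the paper, and the membership breakpoints and release values are invented, not those of the Dez model.

```python
# Minimal fuzzy 'if-then' sketch of reservoir operating rules: fuzzy
# premises on storage and inflow (triangular memberships), crisp release
# consequences combined by a weighted average.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def release(storage, inflow):
    # Premise memberships (storage and inflow normalized to 0..1).
    low_storage  = tri(storage, -0.5, 0.0, 0.6)
    high_storage = tri(storage,  0.4, 1.0, 1.5)
    low_inflow   = tri(inflow,  -0.5, 0.0, 0.6)
    high_inflow  = tri(inflow,   0.4, 1.0, 1.5)
    # Rules: 'if storage is X and inflow is Y then release Z' ('and' = min).
    rules = [
        (min(low_storage,  low_inflow),  10.0),   # conserve water
        (min(low_storage,  high_inflow), 40.0),
        (min(high_storage, low_inflow),  60.0),
        (min(high_storage, high_inflow), 90.0),   # release heavily
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(release(0.9, 0.9))  # near-full reservoir, high inflow -> large release
print(release(0.1, 0.1))  # near-empty reservoir, low inflow -> small release
```

In the paper the consequences are themselves fuzzy sets and the rule base is learned from the SDP policy; this sketch only illustrates the premise/consequence mechanics.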

Artifact-metrics is an automated method of authenticating artifacts based on a measurable intrinsic characteristic. Intrinsic characteristics, such as microscopic random patterns made during the manufacturing process, are very difficult to copy. Since the fiber distribution of paper is random, a transmitted light image of the distribution can be used for artifact-metrics. Little is known about the individuality of the transmitted light image, although it is an important requirement for intrinsic-characteristic artifact-metrics. Measuring individuality requires that the intrinsic characteristic of each artifact differ significantly, since sufficient individuality can make an artifact-metric system highly resistant to brute force attack. Here we investigate the influence of paper category, matching size of the sample, and image resolution on the individuality of a transmitted light image of paper through a matching test using those images. More concretely, we evaluate FMR/FNMR curves by calculating similarity scores, with matches using correlation coefficients between pairs of scanner input images, and evaluate the individuality of paper by way of the estimated EER with a probabilistic measure, through a matching method based on line segments that can localize the influence of rotation gaps of a sample in the case of a large matching size. As a result, we found that the transmitted light image of paper has sufficient individuality.
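The similarity score used in the matching test (a correlation coefficient between pairs of scanner images) can be sketched as follows; the tiny 'images' are illustrative lists of pixel intensities, not real scans.

```python
# Sketch of the matching-test similarity score: a normalized correlation
# coefficient between two grey-level patches. Genuine pairs (two scans of
# the same sheet) should score near 1, impostor pairs much lower.

def corr_coeff(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

sheet  = [120, 80, 200, 150, 90, 60, 170, 110]   # fibre pattern of one sheet
rescan = [118, 83, 197, 152, 88, 62, 171, 108]   # same sheet, second scan
other  = [90, 160, 75, 130, 210, 95, 140, 65]    # a different sheet
print(corr_coeff(sheet, rescan))  # genuine pair: close to 1
print(corr_coeff(sheet, other))   # impostor pair: much lower
```

Sweeping a decision threshold over such scores for many genuine and impostor pairs is what produces the FMR/FNMR curves and the EER estimate mentioned above.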

Progressive Edge Growth (PEG) constructions are usually based on optimizing the distance metric by various methods. In this work, however, the distance metric is replaced by a different one, namely the betweenness centrality metric, which was shown to enhance routing performance in wireless mesh networks. A new type of PEG construction for Low-Density Parity-Check (LDPC) codes is introduced based on the betweenness centrality metric borrowed from social networks terminology, given that the bipartite graph describing the LDPC code is analogous to a network of nodes. The algorithm is very efficient in filling edges on the bipartite graph, adding its connections in an edge-by-edge manner. The smallest graph size the new code could construct surpasses those obtained from a modified PEG algorithm, the RandPEG algorithm. To the best of the authors' knowledge, this paper produces the best regular LDPC column-weight-two graphs. In addition, the technique proves to be competitive in terms of error-correcting performance. When compared to MacKay, PEG and other recent modified-PEG codes, the algorithm gives better performance at high SNR due to its particular edge and local graph properties.

A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high-accuracy classifier. Hence, classification techniques are very useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

Emergence is a common phenomenon, and it is also a general and important concept in complex dynamic systems like artificial societies. Usually, artificial societies are used to assist in resolving complex social issues (e.g., emergency management, intelligent transportation systems) with the aid of computer science. The level of an emergence may have an effect on decision making, and the occurrence and degree of an emergence are generally perceived by human observers. However, due to the ambiguity and inaccuracy of human observers, proposing a quantitative method to measure emergences in artificial societies is a meaningful and challenging task. This article mainly concentrates upon three kinds of emergence in artificial societies: emergence of attribution, emergence of behavior, and emergence of structure. Based on information entropy, three metrics are proposed to measure emergences in a quantitative way. Meanwhile, the correctness of these metrics has been verified through three case studies (the spread of an infectious influenza, a dynamic microblog network, and a flock of birds) with several experimental simulations on the NetLogo platform. The experimental results confirm that these metrics increase with the rising degree of emergence. In addition, this article also discusses the limitations and extended applications of these metrics.
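As an illustration of an entropy-based metric of this kind, the sketch below computes the Shannon entropy of an attribute distribution across agents; here a more ordered (emergent) population yields lower entropy, which is one simple convention, whereas the paper defines its metrics so that they increase with the degree of emergence. The infection-state values are invented.

```python
# Shannon entropy of an agent attribute distribution, a building block for
# the emergence-of-attribution metric described above.

import math
from collections import Counter

def shannon_entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

disordered = ['S', 'I', 'R', 'S', 'I', 'R', 'S', 'I']   # mixed infection states
ordered    = ['I', 'I', 'I', 'I', 'I', 'I', 'I', 'S']   # epidemic has emerged
print(shannon_entropy(disordered))  # higher: attribute values spread out
print(shannon_entropy(ordered))     # lower: population has self-organized
```

Tracking such an entropy over simulation time gives a quantitative signal for when the population's attribute distribution becomes ordered.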

... development and application of an expert system to diagnose influenza under uncertainty. The recently developed generic belief rule-based inference methodology using the evidential reasoning approach (RIMER) is employed to develop this expert system, termed the Belief Rule-Based Expert System (BRBES). The RIMER approach can handle different types of uncertainties, both in knowledge representation and in inference procedures. The knowledge base of this system was constructed using records of real patient data along with consultation with the influenza specialists of Bangladesh. Practical case ...

We identify three different levels of correlation (pair-wise relative ordering, network-wide ranking and linear regression) that could be assessed between a computationally-light centrality metric and a computationally-heavy centrality metric for real-world networks. The Kendall's concordance-based correlation measure can be used to quantitatively assess how well we could consider the relative ordering of two vertices vi and vj with respect to a computationally-light centrality metric as the relative ordering of the same two vertices with respect to a computationally-heavy centrality metric. We hypothesize that the pair-wise relative ordering (concordance-based) assessment of the correlation between centrality metrics is the strictest of the three levels of correlation, and claim that the Kendall's concordance-based correlation coefficient will be lower than the correlation coefficients observed with the more relaxed levels of correlation (the linear regression-based Pearson's product-moment correlation coefficient and the network-wide ranking-based Spearman's correlation coefficient). We validate our hypothesis by evaluating the three correlation coefficients between two sets of centrality metrics: the computationally-light degree and local clustering coefficient complement-based degree centrality metrics, and the computationally-heavy eigenvector centrality, betweenness centrality and closeness centrality metrics, for a diverse collection of 50 real-world networks.
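The three correlation levels can be computed with short standard-library routines (assuming no ties in the data); the vertex scores below are invented stand-ins for a light metric (degree) and a heavy metric, arranged so that the Kendall coefficient comes out lowest, consistent with the hypothesis.

```python
# Stdlib sketch of the three correlation levels: Kendall (pair-wise relative
# ordering), Spearman (network-wide ranking) and Pearson (linear regression).
# Assumes no tied scores.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def ranks(x):
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):           # network-wide ranking level
    return pearson(ranks(x), ranks(y))

def kendall(x, y):            # pair-wise relative ordering (concordance) level
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

light = [1, 2, 3, 4, 5]            # e.g., degree of five vertices
heavy = [0.1, 0.3, 0.2, 0.6, 0.9]  # e.g., a heavy centrality, one pair swapped
print(kendall(light, heavy), spearman(light, heavy), pearson(light, heavy))
```

A single swapped pair costs Kendall two out of ten pair comparisons but moves the rank vectors (and hence Spearman and Pearson) much less, which is the intuition behind calling concordance the strictest level.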

Security is one of the major challenges in open networks. There are many types of attacks, some of which follow fixed patterns and some of which frequently change their patterns, and it is difficult to find a malicious attack that does not have any fixed pattern. Distributed Denial of Service (DDoS) attacks like botnets are used to slow down system performance. To address such problems, a Collaborative Network Security Management System (CNSMS) is proposed along with association rule mining. The CNSMS consists of a collaborative Unified Threat Management (UTM), a cloud-based security centre and a traffic prober. The traffic prober captures the internet traffic and passes it to the collaborative UTM, which analyses the traffic to determine whether it contains any malicious attack. If any security event occurs, it is reported to the cloud-based security centre. The security centre generates security rules based on association rule mining and distributes them to the network. The cloud-based security centre is used to store the huge amount of traffic, the associated logs and the security rules generated. Feedback is evaluated and invalid rules are eliminated to improve system efficiency.
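The rule-generation step can be sketched as a toy support/confidence miner over security-event logs; the event names and thresholds below are invented for illustration and do not reflect the CNSMS implementation.

```python
# Toy association-rule miner: keep rules 'antecedent -> consequent' whose
# support and confidence exceed thresholds, over a log of security events.

from itertools import combinations

def mine_rules(transactions, min_support=0.4, min_confidence=0.7):
    n = len(transactions)
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    rules = []
    for a, b in combinations(sorted(items), 2):
        for ant, con in (({a}, {b}), ({b}, {a})):
            s = support(ant | con)
            if s >= min_support and support(ant) > 0:
                conf = s / support(ant)
                if conf >= min_confidence:
                    rules.append((tuple(ant), tuple(con), round(conf, 2)))
    return rules

logs = [
    {'port_scan', 'syn_flood'},
    {'port_scan', 'syn_flood', 'dns_probe'},
    {'port_scan', 'syn_flood'},
    {'dns_probe'},
    {'port_scan'},
]
for rule in mine_rules(logs):
    print(rule)
```

Rules that later prove invalid on fresh traffic would be pruned, mirroring the feedback loop described above.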

Orchard navigation using sensor-based localization and flexible mission management facilitates successful missions independent of the Global Positioning System (GPS). This is especially important while driving between tight tree rows where GPS coverage is poor. This paper suggests localization ...

Markov chains (MCs) are well known and widely applied in dependability and performability analysis of safety-critical systems because of their flexible representation of system component dependencies and synchronization. There are a few roadblocks to greater application of MCs: accounting for additional system components increases the model state space and complicates analysis, and a non-numerically sophisticated user may find it difficult to decide among the variety of numerical methods to determine the most suitable and accurate one for their application. Thus obtaining highly accurate and trusted modeling results becomes a nontrivial task. In this paper, we present a metric-based approach for selection of the applicable solution approach, based on analysis of an MC's stiffness, decomposability, sparsity and fragmentedness. Using this selection procedure the modeler can verify earlier obtained results. The presented approach was implemented in the utility MSMC, which supports MC construction, metric-based analysis, shaping of recommendations and model solution. The model can be exported to well-known off-the-shelf mathematical packages for verification. The paper presents a case study of an industrial NPP I and C system, manufactured by RPC Radiy, shows the application of the metric-based approach and the MSMC tool for dependability and safety analysis of the RTS, and describes the procedure of results verification.
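Two of the metrics named above can be sketched directly on a continuous-time Markov chain generator matrix: sparsity as the fraction of zero off-diagonal entries, and a simple stiffness indicator as the ratio of the largest to the smallest nonzero transition rate. The rate matrix below is an invented repairable-component example, not the RPC Radiy model.

```python
# Sketch of two MC model metrics used for solver selection: sparsity of the
# generator matrix and a stiffness ratio of transition rates.

def mc_metrics(Q):
    n = len(Q)
    off_diag = [Q[i][j] for i in range(n) for j in range(n) if i != j]
    zeros = sum(1 for r in off_diag if r == 0.0)
    sparsity = zeros / len(off_diag)
    rates = [abs(r) for r in off_diag if r != 0.0]
    stiffness_ratio = max(rates) / min(rates)
    return sparsity, stiffness_ratio

# Repairable component: fast failure detection (rate 100/h) combined with
# slow repair (rate 0.01/h) makes the chain stiff.
Q = [
    [-1.0,    1.0,    0.0],
    [ 0.0, -100.0,  100.0],
    [ 0.01,   0.0,   -0.01],
]
sparsity, ratio = mc_metrics(Q)
print(sparsity, ratio)
```

A large stiffness ratio suggests implicit or stiff ODE solvers for transient analysis, while high sparsity favors sparse iterative methods, which is the kind of recommendation-shaping the MSMC utility automates.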

The usage of Web services has increased in recent years, so it is important to select the right type of Web service at the project design stage. The most common implementations are based on the SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) styles. Maintainability of REST and SOAP Web services has become an important issue as the popularity of Web services grows. Choosing the right approach is not an easy decision, since it is influenced by development requirements and maintenance considerations. In the present research, we compare SOAP- and REST-based Web services using software evaluation metrics. To achieve this aim, a systematic literature review is conducted to compare REST and SOAP Web services in terms of software evaluation metrics.

In recent years, various real-life applications such as talking books, gadgets and humanoid robots have drawn attention to research in expressive speech synthesis. Speech synthesis is widely used in various applications; however, there is a growing need for expressive speech synthesis, especially for communication and robotics. In this paper, global and local rules are developed to convert neutral speech to storytelling-style speech for the Malay language. To generate the rules, modifications of prosodic parameters such as pitch, intensity, duration, tempo and pauses are considered. The modification of prosodic parameters is examined by performing prosodic analysis on stories collected from an experienced female and a male storyteller. The global and local rules are applied at the sentence level and synthesized using HNM. Subjective tests are conducted to evaluate the quality of the synthesized storytelling speech under both rules, based on naturalness, intelligibility, and similarity to the original storytelling speech. The results showed that the global rules give better results than the local rules.

Abstract Background In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or, more generally, to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness we provide an explanation of the entire HNauty algorithm. Conclusions Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for
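For the special case where the component hierarchy is a tree, canonical labels can be assigned by recursively sorting subtrees. This is far simpler than HNauty (which handles general graphs with multiple edge types) and is shown only to convey the idea of canonical labeling; the Lck component names follow the abstract, while the internal states are made up.

```python
def canonical(node):
    # Canonical string for a hierarchical component tree: sort each
    # component's children recursively so isomorphic trees get equal labels.
    name, state, children = node
    return f"{name}[{state}](" + ",".join(sorted(canonical(c) for c in children)) + ")"

# Two orderings of the same hypothetical Lck hierarchy.
lck_a = ("Lck", "active",
         [("SH2", "free", []), ("SH3", "bound", []), ("kinase", "open", [])])
lck_b = ("Lck", "active",
         [("kinase", "open", []), ("SH3", "bound", []), ("SH2", "free", [])])
```

`canonical(lck_a)` and `canonical(lck_b)` coincide, so the two representations are recognized as the same structure.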

Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language is discussed, especially a rule-based expert system using the ART-Ada (Automated Reasoning Tool for Ada) system shell. NASA Lewis was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphics user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The rules were written in the ART-Ada development environment and converted to Ada source code. The graphics interface was developed with the Transportable Application Environment (TAE) Plus, which generates Ada source code to control graphics images. SMART-Ada communicates with a remote host to obtain either simulated or real data. The Ada source code generated with ART-Ada, TAE Plus, and the communications code was incorporated into an Ada expert system that reads data from a power distribution test bed, applies the rules to determine whether a fault exists, and graphically displays it on the screen. The main objective, to conduct a beta test of the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

New challenges have emerged along with 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), with its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers' attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. But existing assessment metrics do not render human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using an autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.
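The core of such a metric, AR prediction followed by a reconstruction-error map, can be sketched as below. This is a simplification (each interior pixel is predicted from its 8 neighbors with a single global least-squares model, whereas the paper uses local descriptions and saliency weighting), and the input image is random stand-in data.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((32, 32))   # stand-in for a DIBR-synthesized image

def ar_error_map(image):
    # Predict each interior pixel from its 8 neighbors via least squares;
    # the per-pixel absolute prediction error serves as a (simplified)
    # geometric-distortion map.
    H, W = image.shape
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    X = np.stack(
        [image[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx].ravel() for dy, dx in offs],
        axis=1)
    y = image[1:-1, 1:-1].ravel()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.abs(X @ w - y).reshape(H - 2, W - 2)

err = ar_error_map(img)
```

Regions where the AR model fails to predict (high `err`) would be candidates for DIBR-induced geometric distortion.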

The United States and Canada currently use exposure-based metrics to protect vegetation from O3. Using 5 years (1999-2003) of co-measured O3, meteorology and growth response, we have developed exposure-based regression models that predict Populus tremuloides growth change within the North American ambient air quality context. The models comprised the growing-season fourth-highest daily maximum 8-h average O3 concentration, growing degree days, and wind speed. They had high statistical significance and high goodness of fit, include 95% confidence intervals for tree growth change, and are simple to use. Averaged across a wide range of clonal sensitivity, historical 2001-2003 growth change over most of the 26 M ha P. tremuloides distribution was estimated to have ranged from no impact (0%) to strong negative impacts (-31%). With four aspen clones responding negatively (one responded positively) to O3, the growing-season fourth-highest daily maximum 8-h average O3 concentration performed much better than the growing-season SUM06, AOT40 or maximum 1-h average O3 concentration metrics as a single indicator of aspen stem cross-sectional area growth. - A new exposure-based metric approach to predict O3 risk to North American aspen forests has been developed
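The headline metric, the growing-season fourth-highest daily maximum 8-h average O3 concentration, can be computed from an hourly record as follows; the diurnal O3 series here is synthetic and for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical growing-season record: 120 days of hourly O3 (ppb) with a
# diurnal cycle plus noise.
hours = 120 * 24
hourly = 35 + 15 * np.sin(np.linspace(0, 120 * 2 * np.pi, hours)) \
         + rng.normal(0, 5, hours)

def fourth_highest_dmax8(o3_hourly, hours_per_day=24):
    # Trailing 8-h running means, then the maximum per day, then the
    # 4th-highest of those daily maxima.
    run8 = np.convolve(o3_hourly, np.ones(8) / 8, mode="valid")
    run8 = run8[: (len(run8) // hours_per_day) * hours_per_day]  # whole days
    daily_max = run8.reshape(-1, hours_per_day).max(axis=1)
    return float(np.sort(daily_max)[-4])

metric = fourth_highest_dmax8(hourly)
```

(Regulatory definitions of the 8-h window alignment vary slightly; a trailing window is used here for simplicity.)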

Seasonal and interannual variability in snow cover affects socio-environmental systems including water resources, forest ecology, freshwater and terrestrial habitat, and winter recreation. We have developed two new seasonal snow metrics: snow cover frequency (SCF) and snow disappearance date (SDD). These metrics are calculated at 500-m resolution using NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data (MOD10A1). SCF is the number of times snow is observed in a pixel over the user-defined observation period. SDD is the last date of observed snow in a water year. These pixel-level metrics are calculated rapidly and globally in the Google Earth Engine cloud-based environment. SCF and SDD can be interactively visualized in a map-based interface, allowing users to explore spatial and temporal snow cover patterns from 2000 to the present. These metrics are especially valuable in regions where snow data are sparse or non-existent. We have used these metrics in several ongoing projects. When SCF was linked with a simple hydrologic model in the La Laguna watershed in northern Chile, it successfully predicted summer low flows with a Nash-Sutcliffe value of 0.86. SCF has also been used to help explain changes in Dall sheep populations in Alaska, where sheep populations are negatively impacted by late snow cover and low snowline elevation during the spring lambing season. In forest management, SCF and SDD appear to be valuable predictors of post-wildfire vegetation growth. We see a positive relationship between winter SCF and subsequent summer greening for several years post-fire. For western US winter recreation, we are exploring trends in SDD and SCF for regions where snow sports are economically important. In a world with declining snowpacks and increasing uncertainty, these metrics extend across elevations and fill data gaps to provide valuable information for decision-making. SCF and SDD are being produced so that anyone with Internet access and a Google
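For a single pixel, both metrics reduce to simple reductions over a boolean time series; a minimal sketch with made-up daily observations for one water year:

```python
# Hypothetical daily snow observations for one pixel over a 365-day water
# year (True = snow observed); the values are invented for illustration.
snow = [True] * 100 + [False, True, True, False] + [False] * 261

def snow_cover_frequency(obs):
    # SCF: number of snow-covered observations in the period.
    return sum(obs)

def snow_disappearance_date(obs):
    # SDD: 1-based day-of-water-year of the last observed snow, or None.
    last = None
    for day, seen in enumerate(obs, start=1):
        if seen:
            last = day
    return last

scf = snow_cover_frequency(snow)
sdd = snow_disappearance_date(snow)
```

The real product computes these per 500-m MODIS pixel in Earth Engine; the logic per pixel is the same.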

Software size is one of the most important inputs of many software cost and effort estimation models. Early estimation of software plays an important role at the time of project inception. An accurate estimate of software size is, therefore, crucial for planning, managing, and controlling software development projects dealing with the development of software games. However, software size is unavailable during the early phases of software development. This research determines the utility of the CK (Chidamber and Kemerer) metrics, a well-known suite of object-oriented metrics, in estimating the size of software applications using information from their UML (Unified Modeling Language) class diagrams. This work focuses on a small subset dealing with board-based software games. Almost sixty games written in object-oriented programming languages were downloaded from open source repositories, analyzed and used to calibrate a regression-based size estimation model. Forward stepwise MLR (Multiple Linear Regression) was used for model fitting. The model thus obtained was assessed using a variety of accuracy measures, such as MMRE (Mean Magnitude of Relative Error), prediction at level x (PRED(x)) and MdMRE (Median Magnitude of Relative Error), and validated using K-fold cross validation. The accuracy of this model was also compared with an existing model tailored for size estimation of board games. Based on a small subset of desktop games developed in various object-oriented languages, we obtained a model using CK metrics and forward stepwise multiple linear regression with reasonable estimation accuracy, as indicated by the value of the coefficient of determination (R2 = 0.756). Comparison results indicate that the existing size estimation model outperforms the model derived using CK metrics in terms of accuracy of prediction. (author)
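The pipeline above, CK metrics in, linear regression, MMRE/PRED(x) out, can be sketched on synthetic data. All metric values, coefficients and noise below are invented, and plain least squares with all predictors stands in for forward stepwise selection.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60
# Synthetic per-game CK-style metrics -- illustrative only.
X = np.column_stack([
    rng.integers(5, 60, n),   # WMC: weighted methods per class
    rng.integers(1, 6, n),    # DIT: depth of inheritance tree
    rng.integers(0, 20, n),   # CBO: coupling between objects
]).astype(float)
size_loc = X @ np.array([12.0, 40.0, 8.0]) + 200 + rng.normal(0, 50, n)

# Ordinary least squares with an intercept.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, size_loc, rcond=None)

mre = np.abs(A @ coef - size_loc) / size_loc
mmre = float(mre.mean())               # Mean Magnitude of Relative Error
pred25 = float(np.mean(mre <= 0.25))   # PRED(0.25)
```

A common rule of thumb judges a model acceptable when MMRE ≤ 0.25 and PRED(0.25) ≥ 0.75.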

Though Information Retrieval (IR) in big data has been an active field of research for the past few years, the popularity of native languages presents a unique challenge in big data information retrieval systems. There is a need to retrieve information that is present in English and display it in the native language for users. This aim of cross-language information retrieval is complicated by unique features of native languages such as morphology, compound word formation, word spelling variations, ambiguity, word synonyms and influence from other languages. To overcome some of these issues, the native language is modeled using a grammar rule-based approach in this work. The advantage of this approach is that the native language is modeled and its unique features are encoded using a set of inference rules. This rule base, coupled with the customized ontological system, shows considerable potential and is found to show better precision and recall.

The software reliability of the TELLERFAST ATM software is analyzed using two metric-based software reliability analysis methods: a state-transition-diagram-based method and a test-coverage-based method. The procedures for the software reliability analysis using the two methods, and the analysis results, are provided in this report. It is found that the two methods complement each other, and further research on combining the two methods, so that software reliability analysis benefits from this complementary effect, is therefore recommended

Enhancing reliable online transactions with an intelligent rule-based fraud detection technique. ... These are in a bid to reduce, amongst other things, the cost of production and also dissuade the poor handling of Nigerian currency. The CBN pronouncement has necessitated the upsurge in transactions completed with credit ...

We consider the problem of ranking sets of objects, the members of which are mutually compatible. Assuming that each object is either good or bad, we axiomatically characterize three cardinality-based rules which arise naturally in this dichotomous setting. They are what we call the symmetric
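One natural cardinality-based rule in this dichotomous setting ranks a set by its number of good members minus its number of bad members. The sketch below illustrates that style of rule; it is an example in the spirit of the abstract, not necessarily one of the three rules axiomatized there, and the object labels are made up.

```python
GOOD = {"a", "b", "c"}   # hypothetical dichotomous classification of objects
BAD = {"x", "y"}

def net_score(s):
    # Rank a set by (# good members) - (# bad members).
    return sum(o in GOOD for o in s) - sum(o in BAD for o in s)

def ranks_weakly_above(s, t):
    # Cardinality-based comparison: only the counts matter, not identities.
    return net_score(s) >= net_score(t)
```

Because only cardinalities enter, swapping one good object for another never changes a set's rank, which is the symmetry such rules exhibit.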

In late 1983, the Electric Power Research Institute (EPRI) began a program to encourage and stimulate the development of artificial intelligence (AI) applications for the nuclear industry. Development of a rule-based emergency action level classification system prototype is discussed. The paper describes both the full prototype currently under development and the completed, simplified prototype

This paper introduces basic concepts of rule-based test generation with mind maps, and reports experiences learned from industrial application of this technique in the domain of smart card testing by Giesecke & Devrient GmbH over the last years. It describes the formalization of test selection criteria used by our test generator, our test generation architecture and our test generation framework.

Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can make an effective tradeoff between the Riemannian geometry structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.

Network embedding, which encodes all vertices in a network as a set of numerical vectors in accordance with its local and global structures, has drawn widespread attention. Network embedding not only learns significant features of a network, such as clustering and link prediction, but also learns the latent vector representation of the nodes, which provides theoretical support for a variety of applications such as visualization, link prediction, node classification, and recommendation. As the latest progress in this research, several algorithms based on random walks have been devised. Although those algorithms have drawn much attention for their high learning efficiency and accuracy, there is still a lack of theoretical explanation, and the transparency of those algorithms has been doubted. Here, we propose an approach based on the open-flow network model to reveal the underlying flow structure and its hidden metric space for different random walk strategies on networks. We show that the essence of embedding based on random walks is the latent metric structure defined on the open-flow network. This not only deepens our understanding of random-walk-based embedding algorithms but also helps in finding new potential applications in network embedding.
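The random-walk strategies discussed above generate, DeepWalk-style, a corpus of vertex sequences that is then fed to an embedding learner. A minimal uniform-walk generator (toy graph, arbitrary parameters):

```python
import random

# Toy undirected graph as adjacency lists.
graph = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3],
}

def random_walks(g, walks_per_node=10, walk_len=5, seed=7):
    # Uniform random walks; biased strategies (e.g. node2vec-style) would
    # replace rng.choice with a weighted transition rule.
    rng = random.Random(seed)
    corpus = []
    for _ in range(walks_per_node):
        for start in g:
            walk = [start]
            while len(walk) < walk_len:
                walk.append(rng.choice(g[walk[-1]]))
            corpus.append(walk)
    return corpus

corpus = random_walks(graph)
```

In the open-flow view of the paper, the empirical transition flows of such walks define the latent metric structure that the embedding recovers.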

Artificial neural networks increasingly involve spiking dynamics to permit greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware. However, both spiking neural networks and neuromorphic hardware have historically found difficulties in implementing efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP), which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes. Approaches that relate learning features to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. Based on the standard STDP algorithm with modifications and approximations, a new rule, called STDP TTS (Time-To-Spike), relates the membrane potential to the Long Term Potentiation (LTP) part of the basic STDP rule. Meanwhile, we use the standard STDP rule for the Long Term Depression (LTD) part of the algorithm. We show that on the basis of the membrane potential it is possible to make a statistical prediction of the time needed by the neuron to reach the threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike. In our system these approximations allow efficient memory access, reducing the overall computational time and the memory bandwidth required. The improvements presented here are significant for real-time applications such as those for which the SpiNNaker system has been designed. We present simulation results that show the efficacy of this algorithm using one or more input patterns repeated over the whole time of the simulation. On-chip results show that
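The standard pair-based STDP rule that the TTS variant modifies can be sketched as follows. The amplitudes and time constants are illustrative defaults, not SpiNNaker's; the TTS modification, triggering the LTP branch at pre-spike time from a membrane-potential-based prediction of the post-spike time, is noted only in the comment.

```python
import math

A_PLUS, A_MINUS = 0.05, 0.06      # LTP/LTD amplitudes (illustrative)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants in ms (illustrative)

def stdp_dw(t_pre, t_post):
    # Pair-based STDP weight change. In the TTS variant, the LTP branch
    # would instead use a predicted t_post derived from the membrane
    # potential when the pre-synaptic spike arrives; here both branches
    # use actual spike times.
    dt = t_post - t_pre
    if dt >= 0:                                  # pre before post -> LTP
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)   # post before pre -> LTD
```

The exponential windows mean tightly paired spikes produce the largest weight changes, which is what makes the time-to-spike prediction useful on hardware.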

The present paper proposes a methodology for analyzing metrics related to electronic business. The draft optimization models include KPIs that can highlight the specifics of the business, provided they are integrated using learning-based techniques. Having established the most important, high-impact elements of the business, the models should ultimately capture the links between them by automating business flows. Human resources will increasingly collaborate with the optimization models, which will translate into high-quality decisions followed by increased profitability.

Purpose The categorical definition of response assessed via the Response Evaluation Criteria in Solid Tumors has documented limitations. We sought to identify alternative metrics for tumor response that improve prediction of overall survival. Experimental Design Individual patient data from three North Central Cancer Treatment Group trials (N0026, n=117; N9741, n=1109; N9841, n=332) were used. Continuous metrics of tumor size based on longitudinal tumor measurements were considered in addition to a trichotomized response (TriTR: Response vs. Stable vs. Progression). Cox proportional hazards models, adjusted for treatment arm and baseline tumor burden, were used to assess the impact of the metrics on subsequent overall survival, using a landmark analysis approach at 12, 16 and 24 weeks post-baseline. Model discrimination was evaluated using the concordance (c) index. Results The overall best response rates for the three trials were 26%, 45%, and 25%, respectively. While nearly all metrics were statistically significantly associated with overall survival at the different landmark time points, the c-indices for the traditional response metrics ranged from 0.59-0.65; for the continuous metrics from 0.60-0.66; and for the TriTR metrics from 0.64-0.69. The c-indices for TriTR at 12 weeks were comparable to those at 16 and 24 weeks. Conclusions Continuous tumor-measurement-based metrics provided no predictive improvement over traditional response-based metrics or TriTR; TriTR had better predictive ability than best response or confirmed response. If confirmed, TriTR represents a promising endpoint for future Phase II trials. PMID:21880789
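Model discrimination above is measured with the concordance (c) index. A minimal version for uncensored data, shown only to convey the idea (the c-index used in survival analysis also handles censoring, which is omitted here):

```python
from itertools import combinations

def c_index(risk, time):
    # Fraction of comparable pairs in which the higher-risk subject
    # experiences the event first. Simplified: no censoring, ties skipped.
    conc = total = 0
    for i, j in combinations(range(len(time)), 2):
        if time[i] == time[j] or risk[i] == risk[j]:
            continue
        total += 1
        earlier, later = (i, j) if time[i] < time[j] else (j, i)
        conc += risk[earlier] > risk[later]
    return conc / total
```

A c-index of 0.5 corresponds to chance; the 0.64-0.69 range reported for TriTR indicates modest but consistent discrimination.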

Taiwan is a populated island with a majority of residents settled in the western plains, where soils are suitable for rice cultivation. Rice is not only the most important commodity, but also plays a critical role in agricultural and food marketing. Information on rice production is thus important for policymakers to devise timely plans for ensuring sustainable socioeconomic development. Because rice fields in Taiwan are generally small, and crop monitoring requires information on crop phenology matched to the spatiotemporal resolution of satellite data, this study used Landsat-MODIS fusion data for rice yield modeling in Taiwan. We processed the data for the first crop (Feb-Mar to Jun-Jul) and the second crop (Aug-Sep to Nov-Dec) in 2014 through five main steps: (1) data pre-processing to account for geometric and radiometric errors of Landsat data, (2) Landsat-MODIS data fusion using the spatial-temporal adaptive reflectance fusion model, (3) construction of the smooth time-series enhanced vegetation index 2 (EVI2), (4) rice yield modeling using EVI2-based crop phenology metrics, and (5) error verification. The fusion results, assessed by a comparison between EVI2 derived from the fusion image and that from the reference Landsat image, indicated close agreement between the two datasets (R2 > 0.8). We analysed the smooth EVI2 curves to extract phenology metrics, or phenological variables, for the establishment of rice yield models. The results indicated that the established yield models significantly explained more than 70% of the variability in the data (p-value 0.8) in both cases. The root mean square error (RMSE) and mean absolute error (MAE) used to measure the model accuracy revealed the consistency between the estimated yields and the government's yield statistics. This study demonstrates the advantages of using EVI2-based phenology metrics (derived from Landsat-MODIS fusion data) for rice yield estimation in Taiwan prior to the harvest period.

In sufficiently dense bacterial populations (>40% bacteria by volume), unusual collective swimming behaviors have been consistently observed, resembling von Karman vortex streets. The source of these collective swimming behaviors has yet to be fully determined, and as of yet no research has been conducted that would define whether this behavior derives predominantly from the properties of the surrounding media, or is an emergent behavior resulting from the "rules" governing the behavior of individual bacteria. The goal of this research is to ascertain whether it is possible to design a simulation that can replicate the qualitative behavior of densely packed bacterial populations using only behavioral rules to govern the actions of each bacterium, with the physical properties of the media neglected. The results of the simulation will address whether it is possible for the system's overall behavior to be driven exclusively by these rule-based dynamics. To examine this, the behavioral simulation was written in MATLAB on a fixed grid and updated sequentially with the bacterial behavior, including randomized tumbling, gathering and perceptual sub-functions. If the simulation is successful, it will serve as confirmation that it is possible to generate these qualitatively vortex-like behaviors without a specific physical medium (that the phenomenon arises in emergent fashion from behavioral rules), or as evidence that the observed behavior requires some specific set of physical parameters.
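A rule-based update of the kind described, behavioral rules on a fixed grid with no physical media, can be reduced to a run-and-tumble sketch like the one below (the original work used MATLAB; Python is used here, and the tumble probability, grid size and periodic boundaries are arbitrary choices standing in for the gathering and perceptual sub-functions).

```python
import random

# Minimal run-and-tumble walker on a fixed periodic grid: keep heading
# with probability (1 - p_tumble), otherwise tumble to a random direction.
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def step(pos, heading, rng, p_tumble=0.2, size=50):
    if rng.random() < p_tumble:
        heading = rng.choice(DIRS)
    x, y = pos
    dx, dy = heading
    return ((x + dx) % size, (y + dy) % size), heading

rng = random.Random(1)
pos, heading = (25, 25), (1, 0)
traj = [pos]
for _ in range(100):
    pos, heading = step(pos, heading, rng)
    traj.append(pos)
```

A full simulation would run many such walkers with interaction rules and inspect the velocity field for vortex-like structure.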

Rule-based modeling provides a means to represent cell signaling systems in a way that captures site-specific details of molecular interactions. For rule-based models to be more widely understood and (re)used, conventions for model visualization and annotation are needed. We have developed the concepts of an extended contact map and a model guide for illustrating and annotating rule-based models. An extended contact map represents the scope of a model by providing an illustration of each molecule, molecular component, direct physical interaction, post-translational modification, and enzyme-substrate relationship considered in a model. A map can also illustrate allosteric effects, structural relationships among molecular components, and compartmental locations of molecules. A model guide associates elements of a contact map with annotation and elements of an underlying model, which may be fully or partially specified. A guide can also serve to document the biological knowledge upon which a model is based. We provide examples of a map and guide for a published rule-based model that characterizes early events in IgE receptor (FcεRI) signaling. We also provide examples of how to visualize a variety of processes that are common in cell signaling systems but not considered in the example model, such as ubiquitination. An extended contact map and an associated guide can document knowledge of a cell signaling system in a form that is visual as well as executable. As a tool for model annotation, a map and guide can communicate the content of a model clearly and with precision, even for large models. PMID:21647530

Smart automation in industries has become very important as it can improve the reliability and efficiency of systems. The use of smart technologies in agriculture has increased over the years to ensure and control crop production and address food security. However, it is important to use proper irrigation systems to avoid water wastage and overfeeding of the plants. In this paper, a Smart Rule-based Automated Fertilization and Irrigation System is proposed and evaluated. We propose a rule-based decision making algorithm to monitor and control the food supply to the plant and the soil quality. A built-in alert system is also used to update the farmer via text message. The system is developed and evaluated using real hardware.
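The rule-based decision logic could look like the sketch below; the moisture and nutrient thresholds, units and action names are invented for illustration and would need calibration for a real crop and sensor set.

```python
def irrigation_action(soil_moisture_pct, nutrient_ppm):
    # Illustrative decision rules; thresholds are assumptions, not from
    # the paper's deployment.
    if soil_moisture_pct < 30 and nutrient_ppm < 150:
        return "irrigate_and_fertilize"
    if soil_moisture_pct < 30:
        return "irrigate"
    if nutrient_ppm < 150:
        return "fertilize"
    return "idle"
```

Any non-idle action could additionally trigger the text-message alert mentioned in the abstract.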

In this paper some problems arising at the interface between two different areas, Decision Support Systems and Fuzzy Sets and Systems, are considered. The Model-Base Management System of a Decision Support System which involves some fuzziness is considered, and in that context questions about managing the fuzziness in some optimisation models, and about using fuzzy rules for terminating conventional algorithms, are presented, discussed and analyzed. Finally, for the concrete case of the Travelling Salesman Problem, and as an illustration of determining, managing and using such fuzzy rules, a new algorithm that is easy to implement in the Model-Base Management System of any oriented Decision Support System is shown.
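A fuzzy termination rule of the kind discussed, "stop when recent improvement is small to a sufficient degree", might be sketched as follows for a descending objective such as TSP tour length. The membership function, window and thresholds are assumptions, not the paper's algorithm.

```python
def mu_small(improvement, eps=1e-3, scale=1e-2):
    # Membership degree of "improvement is small": 1 at or below eps,
    # 0 at or above scale, linear in between.
    if improvement <= eps:
        return 1.0
    if improvement >= scale:
        return 0.0
    return (scale - improvement) / (scale - eps)

def should_stop(history, alpha=0.8, window=3):
    # Fuzzy rule: stop when the last `window` improvements are all
    # "small" to at least degree alpha.
    if len(history) < window + 1:
        return False
    imps = [history[i] - history[i + 1] for i in range(-window - 1, -1)]
    return min(mu_small(i) for i in imps) >= alpha
```

Compared with a crisp threshold, the membership degree lets the Model-Base Management System trade off solution quality against computation time smoothly.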

Purpose: To improve the efficiency of atlas-based segmentation without compromising accuracy, and to demonstrate the validity of the proposed method on an MRI-based prostate segmentation application. Methods: Accurate and efficient automatic structure segmentation is an important task in medical image processing. Atlas-based methods, as the state of the art, provide good segmentation at the cost of a large number of computationally intensive nonrigid registrations for anatomical sites/structures that are subject to deformation. In this study, the authors propose to utilize a combination of global, regional, and local metrics to improve accuracy yet significantly reduce the number of required nonrigid registrations. The authors first perform an affine registration to minimize the global mean squared error (gMSE) to coarsely align each atlas image to the target. Subsequently, a target-specific regional MSE (rMSE), demonstrated to be a good surrogate for the Dice similarity coefficient (DSC), is used to select a relevant subset from the training atlases. Only within this subset are nonrigid registrations performed between the training images and the target image, to minimize a weighted combination of gMSE and rMSE. Finally, structure labels are propagated from the selected training samples to the target via the estimated deformation fields, and label fusion is performed based on a weighted combination of rMSE and local MSE (lMSE) discrepancy, with proper total-variation-based spatial regularization. Results: The proposed method was applied to a public database of 30 prostate MR images with expert-segmented structures. The authors’ method, utilizing only eight nonrigid registrations, achieved a performance with a median/mean DSC of over 0.87/0.86, outperforming the state-of-the-art full-fledged atlas-based segmentation approach, whose median/mean DSC was 0.84/0.82 when applied to the same data set. Conclusions: The proposed method requires a fixed number of nonrigid
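The atlas-selection step (ranking affinely pre-aligned atlases by a target-specific regional MSE and keeping only the best candidates for nonrigid registration) can be sketched as below. The function names and the use of plain intensity MSE over a binary region mask are simplifying assumptions for illustration.

```python
import numpy as np

def mse(a, b, mask=None):
    """Mean squared error, optionally restricted to a region mask (rMSE)."""
    d = (np.asarray(a, float) - np.asarray(b, float)) ** 2
    return float(d[mask].mean()) if mask is not None else float(d.mean())

def select_atlases(target, atlases, region_mask, k):
    """Rank affinely pre-aligned atlas images by target-specific regional
    MSE and keep the indices of the k most similar, so that expensive
    nonrigid registration runs only on this subset."""
    scores = [mse(target, a, region_mask) for a in atlases]
    return sorted(range(len(atlases)), key=lambda i: scores[i])[:k]
```

Only the selected subset then proceeds to the weighted gMSE/rMSE nonrigid registration described in the abstract.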

A symmetric Kullback-Leibler metric-based tracking system, capable of tracking moving targets, is presented for a bionic spherical parallel mechanism to minimize a tracking error function and simulate the smooth pursuit of human eyes. More specifically, we propose a real-time moving-target tracking algorithm which utilizes spatial histograms under the symmetric Kullback-Leibler metric. In the proposed algorithm, the key spatial histograms are extracted and incorporated into a particle filtering framework. Once the target is identified, an image-based control scheme is implemented to drive the bionic spherical parallel mechanism such that the identified target is tracked at the center of the captured images. Meanwhile, the robot motion information is fed forward to develop an adaptive smooth tracking controller inspired by the vestibulo-ocular reflex mechanism. The proposed tracking system is designed to make the robot track dynamic objects when the robot travels through traversable terrains, especially bumpy environments. Experimental results under the violent attitude variations that arise in such bumpy environments demonstrate the effectiveness and robustness of our bio-inspired tracking system, using a bionic spherical parallel mechanism inspired by head-eye coordination.
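The core similarity measure, the symmetric Kullback-Leibler divergence between two (spatial) histograms, can be sketched as follows; the epsilon smoothing is an assumption to keep the logarithm finite for empty bins.

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two histograms:
    D(p||q) + D(q||p), after normalizing both to probability vectors."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

In a particle-filter tracker, each particle's candidate-region histogram would be scored against the target histogram with this metric, with lower values indicating a better match.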

Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate, although the true value is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves, without additional hardware. In this work, the qualities of accurately reconstructed images are identified from point spread functions. As these qualities are similar to those of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated on three-dimensional volumetric images of experimental, dielectrically heterogeneous phantoms. Starting from a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
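One standard focal quality metric is gradient energy, which scores sharper images higher; a parameter search can then pick the dielectric-property estimate whose reconstruction maximizes it. The choice of gradient energy here, and the `reconstruct` callback, are illustrative assumptions, not the paper's specific metric.

```python
import numpy as np

def gradient_energy(img):
    """Gradient-energy focus measure: sum of squared intensity gradients.
    Sharper (better-focused) reconstructions score higher."""
    gx, gy = np.gradient(np.asarray(img, float))
    return float(np.sum(gx ** 2 + gy ** 2))

def best_permittivity(reconstruct, candidates):
    """Exhaustive parameter search: return the average-permittivity
    candidate whose reconstructed image is the most focused.
    `reconstruct` is a user-supplied imaging routine (hypothetical here)."""
    return max(candidates, key=lambda eps: gradient_energy(reconstruct(eps)))
```

More sophisticated searches (e.g., golden-section) would replace the exhaustive `max`, but the fitness-function role of the focus metric is the same.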

Recognition of emotions is still an unresolved challenge, which could be helpful to improve current human-machine interfaces. Recently, nonlinear analysis of some physiological signals has been shown to play a more relevant role in this context than traditional linear exploration. Thus, the present work introduces for the first time the application of three recent entropy-based metrics: sample entropy (SE), quadratic SE (QSE) and distribution entropy (DE), to discern between emotional states of calm and negative stress (also called distress). In the last few years, distress has received growing attention because it is a common negative factor in the modern lifestyle of people in developed countries and, moreover, it may lead to serious mental and physical health problems. Precisely, 279 segments of 32-channel electroencephalographic (EEG) recordings from 32 subjects elicited to be calm or negatively stressed have been analyzed. Results show that QSE is the first single metric presented to date with the ability to identify negative stress. Indeed, this metric reported a discriminant ability of around 70%, which is only slightly lower than that obtained by some previous works. Nonetheless, those previous works obtained discriminant models from dozens or even hundreds of features, using advanced classifiers, to yield diagnostic accuracies of about 80%. Moreover, in agreement with previous neuroanatomical findings, QSE also revealed notable differences across all brain regions in the neural activation triggered by the two considered emotions. Consequently, given these results, as well as the easy interpretation of QSE, this work opens a new standpoint in the detection of emotional distress, which may yield new insights into the brain's behavior under this negative emotion.
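Sample entropy, the base metric underlying QSE, can be sketched as below: it is the negative log ratio of the counts of template matches of length m+1 and m under a Chebyshev tolerance. This is a compact illustrative implementation, not the study's code; the defaults m = 2 and r = 0.2 x SD are conventional choices.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D signal: -ln(A/B), where B and A count
    template matches of length m and m+1 (self-matches excluded),
    two templates matching when their Chebyshev distance <= r * std."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        n = 0
        for i in range(len(templ)):
            d = np.abs(templ - templ[i]).max(axis=1)   # Chebyshev distance
            n += int(np.sum(d <= tol)) - 1             # exclude self-match
        return n

    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")
```

Regular signals yield values near zero, while irregular signals score higher, which is the property exploited to separate calm from distress.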

Rule induction is a practical approach to knowledge discovery. Provided that a problem is well formulated, rule induction is able to return the knowledge that addresses the goal of this problem as if-then rules. The primary goals of knowledge discovery are prediction and description. The rule-format knowledge representation is easily understandable, enabling users to make decisions. This paper presents the potential of rule induction for energy efficiency. In particular, three rule induct...

Storm-based stream processing is widely used for real-time large-scale distributed processing. Knowing the run-time status and ensuring performance is critical to providing the expected dependability for some applications, e.g., continuous video processing for security surveillance. The granularity of existing scheduling strategies is too coarse to achieve good performance, and they mainly consider network resources while ignoring computing resources when scheduling. In this paper, we propose Healthcare4Storm, a framework that derives Storm insights from Storm metrics to gain knowledge of the health status of an application, ultimately producing smart scheduling decisions. It takes into account both network and computing resources and conducts scheduling at a fine-grained level, using tuples instead of topologies. A comprehensive evaluation shows that the proposed framework performs well and can improve the dependability of Storm-based applications.

The intrusion detection system (IDS) is an important network security tool for securing computer and network systems. It is able to detect and monitor network traffic data. Snort IDS is an open-source network security tool. It can search and match rules against network traffic data in order to detect attacks and generate an alert. However, the Snort IDS can detect only known attacks. Therefore, we have proposed a procedure for improving Snort IDS rules, based on the association-rules data mining technique, for detection of network probe attacks. We employed the MIT-DARPA 1999 data set for the experimental evaluation. Since behavior-pattern traffic data are both normal and abnormal, the abnormal behavior data are detected by way of the Snort IDS. The experimental results showed that the proposed Snort IDS rules, based on data-mining detection of network probe attacks, proved more efficient than the original Snort IDS rules, as well as the icmp.rules and icmp-info.rules of Snort IDS. The suitable parameters for the proposed Snort IDS rules are as follows: Min_sup set to 10% and Min_conf set to 100%, with eight variable attributes applied. As more suitable parameters are applied, higher accuracy is achieved.
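The support/confidence rule mining underlying this approach can be sketched as a toy miner over sets of traffic attributes. This brute-force enumeration is an illustrative stand-in for a real Apriori implementation; the attribute names in the usage example are hypothetical.

```python
from itertools import combinations

def mine_rules(transactions, min_sup=0.1, min_conf=1.0):
    """Toy association-rule miner: emit rules (antecedent -> consequent)
    whose support and confidence clear the thresholds, e.g. the paper's
    Min_sup = 10% and Min_conf = 100%."""
    n = len(transactions)

    def support(items):
        return sum(items <= t for t in transactions) / n

    items = sorted({i for t in transactions for i in t})
    rules = []
    for size in range(2, len(items) + 1):
        for iset in combinations(items, size):
            s = support(set(iset))
            if s < min_sup:
                continue  # itemset too rare to produce rules
            for k in range(1, size):
                for ante in combinations(iset, k):
                    conf = s / support(set(ante))
                    if conf >= min_conf:
                        rules.append((set(ante), set(iset) - set(ante)))
    return rules
```

Rules mined this way from labeled probe traffic would then be translated into Snort rule syntax.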

Aphasia diagnosis is a particularly challenging medical diagnostic task due to linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, a large number of imprecise measurements, and natural diversity and subjectivity in test subjects as well as in the opinions of the experts who diagnose the disease. To efficiently address this diagnostic process, a hierarchical fuzzy rule-based structure is proposed here that considers the effect of different features of aphasia through statistical analysis in its construction. This approach can be efficient for diagnosis of aphasia, and possibly other medical diagnostic applications, due to its fuzzy and hierarchical reasoning construction. Initially, the symptoms of the disease, each of which consists of different features, are analyzed statistically. The statistical parameters measured from the training set are then used to define the membership functions and the fuzzy rules. The resulting two-layered fuzzy rule-based system is then compared with a back-propagating feed-forward neural network for diagnosis of four aphasia types: anomic, Broca, global and Wernicke. In order to reduce the number of required inputs, the technique is applied and compared on both comprehensive and spontaneous speech tests. Statistical t-test analysis confirms that the proposed approach uses fewer aphasia features while also presenting a significant improvement in accuracy.

Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages.

The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision, one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to the induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data which handles ordinal evaluations of objects on the considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.

Video-based facial expression recognition has become increasingly important for many applications in the real world. Although numerous efforts have been made for the single-sequence case, balancing the complex distribution of intra- and inter-class variations between sequences has remained a great difficulty in this area. We propose the adaptive (N+M)-tuplet clusters loss function and optimize it together with the softmax loss in the training phase. The variations introduced by personal attributes are alleviated using similarity measurements of multiple samples in the feature space, with far fewer comparisons than conventional deep metric learning approaches, which enables metric calculations for large-scale data applications (e.g., videos). Both the spatial and temporal relations are well explored by a unified framework that consists of an Inception-ResNet network with long short-term memory and a two-branch fully connected layer structure. Our proposed method has been evaluated on three well-known databases, and the experimental results show that our method outperforms many state-of-the-art approaches.

METRIC is an architecture for a simple but powerful Reduced Instruction Set Computer (RISC). Its speed comes from the simultaneous processing of several instruction streams, with instructions from the various streams being dispatched into METRIC's execution pipeline as they become available for execution. The pipeline is thus kept full, with a mix of instructions for several contexts in execution at the same time. True parallel programming is supported within a single execution unit, the METRIC Context Unit. METRIC's architecture provides for expansion through the addition of multiple Context Units and of specialized Functional Units. The architecture thus spans a range of size and performance from a single-chip microcomputer up through large and powerful multiprocessors. This research concentrates on the specification of the METRIC Context Unit at the architectural level. Performance tradeoffs made during METRIC's design are discussed, and projections of METRIC's performance are made based on simulation studies.

To assess the biogeophysical impacts of land cover/land use change (LCLUC) on surface temperature, two observation-based metrics and their applicability in climate modeling were explored in this study. Both metrics were developed based on the surface energy balance, and provided insight into the contribution of different aspects of land surface change (such as albedo, surface roughness, net radiation and surface heat fluxes) to changing climate. A revision of the first metric, the intrinsic biophysical mechanism, can be used to distinguish the direct and indirect effects of LCLUC on surface temperature. The other, a decomposed temperature metric, gives a straightforward depiction of the separate contributions of all components of the surface energy balance. These two metrics capture well the observed and model-simulated surface temperature changes in response to LCLUC. Results from paired FLUXNET sites and land surface model sensitivity experiments indicate that surface roughness effects usually dominate the direct biogeophysical feedback of LCLUC, while other effects play a secondary role. However, coupled climate model experiments show that these direct effects can be attenuated by large-scale atmospheric changes (indirect feedbacks). When applied to real-time transient LCLUC experiments, the metrics also demonstrate usefulness for assessing the performance of climate models and quantifying land–atmosphere interactions in response to LCLUC. (letter)

Arabic dialects have been widely used instead of Modern Standard Arabic for many years in many fields. The presence of dialects in any language is a big challenge. Dialects add a new set of variational dimensions in fields like natural language processing, information retrieval, and even Arabic chatting between different Arab nationals. Spoken dialects have no standard morphology, phonology, or lexicon like Modern Standard Arabic. Hence, the objective of this paper is to describe a procedure, or algorithm, by which a stem for the Arabian Gulf dialect can be determined. The algorithm is rule-based: special rules are created to remove the suffixes and prefixes of dialect words, and the algorithm also applies rules related to word size and the relation between adjacent letters. The algorithm was tested on a number of words and gave a good correct-stem ratio. The algorithm was also compared with two Modern Standard Arabic algorithms. The results showed that the Modern Standard Arabic stemmers performed poorly with the Arabic Gulf dialect, and our algorithm performed poorly when applied to Modern Standard Arabic words.
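The shape of such a rule-based stemmer can be sketched as follows. The transliterated affix lists and the minimum-stem-length rule are hypothetical examples standing in for the paper's actual Gulf-dialect rules.

```python
# Illustrative rule-based stemmer. PREFIXES/SUFFIXES are hypothetical
# transliterated affixes, not the paper's actual rule set.
PREFIXES = ["al", "wa", "bi"]
SUFFIXES = ["at", "in", "ha"]

def stem(word, min_stem=3):
    """Strip one known prefix and one known suffix, subject to a
    word-size rule keeping the remaining stem at least min_stem long."""
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= min_stem:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= min_stem:
            word = word[:-len(s)]
            break
    return word
```

The size check illustrates why short words survive intact: stripping an apparent affix from a three-letter word would destroy the stem.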

An assumption is made on the generative nature of the verbal thought process, based on an analogy between language use and verbal thought. A procedure is then presented for acquiring the set of generative rules from a given set of concept strings, leading to an efficient representation of verbal knowledge. The non-terminal symbols derived in the acquisition process are found to correspond to concepts and superordinate concepts in the human process of verbal thought. The validity of the formulation and the efficiency of the knowledge representation are demonstrated by an example in which knowledge of the biological properties of animals is reorganized into a set of generative rules. The process of inductive inference is then defined as a generalization of the acquired knowledge, and the principle of maximum simplicity of rules is proposed as a possible criterion for such generalization. The proposal is also tested by an example in which only a small part of a systematic body of knowledge is utilized to make inferences about the unknown parts of the system. 6 references.

The relationship between fault phenomena and fault causes is always nonlinear, which influences the accuracy of fault location, and neural networks are effective in dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. First, the structure of the BP neural network is built and the learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods for the node function and the membership function are also given. Simulation results confirm the effectiveness of this method.

How to effectively retrieve desired 3D models with simple queries is a long-standing problem in the computer vision community. The model-based approach is quite straightforward but nontrivial, since people do not always have the desired 3D query model at hand. Recently, wide-screen electronic devices have become prevalent in our daily lives, which makes sketch-based 3D shape retrieval a promising candidate due to its simplicity and efficiency. The main challenge of the sketch-based approach is the huge modality gap between sketches and 3D shapes. In this paper, we propose a novel deep correlated holistic metric learning (DCHML) method to mitigate the discrepancy between the sketch and 3D shape domains. The proposed DCHML trains two distinct deep neural networks (one for each domain) jointly, learning two deep nonlinear transformations that map features from both domains into a new feature space. The proposed loss, comprising a discriminative loss and a correlation loss, aims to increase the discrimination of features within each domain as well as the correlation between different domains. In the new feature space, the discriminative loss minimizes the intra-class distance of the deep transformed features and maximizes the inter-class distance to a large margin within each domain, while the correlation loss focuses on mitigating the distribution discrepancy across domains. Different from existing deep metric learning methods, which apply a loss only at the output layer, our proposed DCHML is trained with losses at both the hidden layer and the output layer to further improve performance, by encouraging features in the hidden layer to also have the desired properties. Our proposed method is evaluated on three benchmarks, including the 3D Shape Retrieval Contest 2013, 2014, and 2016 benchmarks, and the experimental results demonstrate its superiority over state-of-the-art methods.

The structure and constraints of MANETs negatively influence QoS performance; moreover, the main routing protocols proposed generally operate with flat routing. Hence, this structure gives bad QoS results when the network becomes larger and denser. To solve this problem we use one of the most popular methods, namely clustering. The present paper falls within the framework of research to improve QoS in MANETs. In this paper we propose a new clustering algorithm based on a new mobility metric and k-medoids to distribute the nodes into several clusters. Intuitively, our algorithm can give good results in terms of cluster stability, and can also extend the lifetime of the cluster head.
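A plain k-medoids loop parameterized by an arbitrary distance function (here standing in for the paper's mobility metric) can be sketched as below. The PAM-style update, iteration count, and seeding are illustrative assumptions.

```python
import random

def k_medoids(points, k, dist, iters=20, seed=0):
    """Bare-bones k-medoids: `dist` can be any metric, e.g. a mobility
    metric between MANET nodes. Returns sorted medoid indices."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    for _ in range(iters):
        # assign each point to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            clusters[min(medoids, key=lambda m: dist(p, points[m]))].append(i)
        # move each medoid to the member minimizing total in-cluster distance
        new = [min(c, key=lambda i: sum(dist(points[i], points[j]) for j in c))
               for c in clusters.values() if c]
        if sorted(new) == sorted(medoids):
            break
        medoids = new
    return sorted(medoids)
```

In the clustering scheme described, the medoids would play the role of cluster heads, and a mobility-aware `dist` keeps slowly moving, central nodes in that role.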

The K-nearest neighbors (KNN) algorithm is a common algorithm used for classification, and also a subroutine in various complicated machine learning tasks. In this paper, we present a quantum algorithm (QKNN) for implementing this algorithm based on the Hamming distance metric. We put forward a quantum circuit for computing the Hamming distance between the testing sample and each feature vector in the training set. Taking advantage of this method, we realize a good analogue of the classical KNN algorithm by setting a distance threshold value t to select the k nearest neighbors. As a result, QKNN achieves O(n³) performance, which depends only on the dimension of the feature vectors, together with high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
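The classical procedure that QKNN emulates can be sketched as follows: rank training vectors by Hamming distance to the test vector and take a majority vote over the k nearest labels. This is the standard classical algorithm, shown for reference, not the quantum circuit itself.

```python
from collections import Counter

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def knn_hamming(train, test_vec, k=3):
    """Classical KNN under the Hamming metric: `train` is a list of
    (bit_string, label) pairs; returns the majority label of the
    k nearest training vectors."""
    neighbors = sorted(train, key=lambda item: hamming(item[0], test_vec))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]
```

The quantum version replaces the explicit distance loop with a circuit that computes all Hamming distances in superposition and thresholds them at t.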

The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GAs) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GAs have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GAs. GAs are then related to a class of rule-based machine learning systems called learning classifier systems (LCSs). An LCS implements a low-level production system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GAs in ecological search problems that arise in neural and fuzzy systems.
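The basic GA mechanics referred to above (selection, crossover, mutation over bit strings) can be sketched in a few lines. The operator choices (2-way tournament, one-point crossover) and the OneMax toy fitness in the usage line are illustrative assumptions.

```python
import random

def ga_maximize(fitness, n_bits=16, pop=30, gens=60, pmut=0.02, seed=1):
    """Bare-bones genetic algorithm: tournament selection, one-point
    crossover, and bit-flip mutation, maximizing `fitness` over bit lists."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        def pick():  # 2-way tournament selection
            a, b = rng.sample(P, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)                       # crossover
            child = [b ^ (rng.random() < pmut)                   # mutation
                     for b in p1[:cut] + p2[cut:]]
            nxt.append(child)
        P = nxt
    return max(P, key=fitness)

# OneMax toy problem: fitness is the count of 1-bits.
best = ga_maximize(fitness=sum)
```

In an LCS, the same loop acts on rule populations, with rule strength playing the role of fitness.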

Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is critical. The proposed work deals with how image quality can be improved by introducing an image fusion technique at the sensor level. The resulting images, after applying the decision rule-based image fusion technique, are evaluated and analyzed in terms of their entropy levels and root-mean-square error.
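The two evaluation measures named above can be sketched as follows; the 8-bit gray-level histogram assumption is illustrative.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram,
    assuming 8-bit intensities; higher entropy = more information."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def rmse(fused, reference):
    """Root-mean-square error of a fused image against a reference."""
    return float(np.sqrt(np.mean((np.asarray(fused, float)
                                  - np.asarray(reference, float)) ** 2)))
```

A successful fusion rule should raise the entropy of the fused fingerprint image while keeping RMSE against the reference low.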

New sensitive and reliable methods for assessing alterations in regional lung structure and function are critically important for the investigation and treatment of pulmonary diseases. Accurate identification of the airway tree will provide an assessment of airway structure and will provide a means by which multiple volumetric images of the lung at the same lung volume over time can be used to assess regional parenchymal changes. The authors describe a novel rule-based method for the segmentation of airway trees from three-dimensional (3-D) sets of computed tomography (CT) images, and its validation. The presented method takes advantage of a priori anatomical knowledge about pulmonary airway and vascular trees and their interrelationships. The method is based on a combination of 3-D seeded region growing that is used to identify large airways, rule-based two-dimensional (2-D) segmentation of individual CT slices to identify probable locations of smaller diameter airways, and merging of airway regions across the 3-D set of slices, resulting in a tree-like airway structure. The method was validated in 40 3-mm-thick CT sections from five data sets of canine lungs scanned via electron beam CT in vivo with lung volume held at a constant pressure. The method's performance was compared with that of the conventional 3-D region growing method, and the method substantially outperformed this existing conventional approach to airway tree detection.
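The first stage, 3-D seeded region growing over the large airways, can be sketched as a breadth-first flood fill over 6-connected voxels. The simple intensity threshold (air is dark on CT) stands in for the paper's actual growing criteria.

```python
from collections import deque

def region_grow(volume, seed, threshold):
    """Minimal 3-D seeded region growing: starting from a seed voxel,
    absorb 6-connected neighbors whose intensity is below `threshold`
    (airway lumen appears dark, i.e. low HU, on CT)."""
    dims = (len(volume), len(volume[0]), len(volume[0][0]))
    region, queue = {seed}, deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)):
            v = (z + dz, y + dy, x + dx)
            if (all(0 <= c < d for c, d in zip(v, dims)) and v not in region
                    and volume[v[0]][v[1]][v[2]] < threshold):
                region.add(v)
                queue.append(v)
    return region
```

The rule-based 2-D stage then supplies candidate small-airway regions that this grown tree alone would miss.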

Under the trend of increasing installed wind power capacity, intelligent fault diagnosis of wind turbines is of great significance to the safe and efficient operation of wind farms. Based on knowledge of wind turbine fault diagnosis, this paper builds an expert-system diagnostic knowledge base using confidence production rules and an expert-system self-learning method. The fault diagnosis expert system for wind turbines is developed on the Visual Studio 2013 platform, using the C# language and ADO.NET technology to access the database. The purpose of this paper is to realize on-line diagnosis of wind turbine faults through human-computer interaction, and to improve the diagnostic capability of the system through continuous improvement of the knowledge base.

Constraint-based data cleaning captures data violations against a set of rules called data quality rules. The rules consist of integrity constraints and data edits. Structurally, they are similar: each rule contains a left-hand side and a right-hand side. Previous research proposed a data repair algorithm for integrity constraint violations. That algorithm uses an undirected hypergraph to represent rule violations. Nevertheless, it cannot be applied to data edits because of their different rule characteristics. This study proposes GraDit, a repair algorithm for data edits rules. First, we use a bipartite directed hypergraph as the model representation of the overall defined rules. This representation is used to capture the interaction between violated rules and clean rules. On the other hand, we propose an undirected graph as the violation representation. Our experimental study showed that the algorithm with the undirected graph as the violation representation model gave better data quality than the algorithm with the undirected hypergraph as the representation model.

Human exposure to air pollution in many studies is represented by ambient concentrations from space-time kriging of observed values. Space-time kriging techniques based on a limited number of ambient monitors may fail to capture the concentration from local sources. Further, because people spend more time indoors, using ambient concentration to represent exposure may cause error. To quantify the associated exposure error, we computed a series of six different hourly-based exposure metrics at 16,095 Census blocks of three counties in North Carolina for CO, NOx, PM2.5, and elemental carbon (EC) during 2012. These metrics include the ambient background concentration from space-time ordinary kriging (STOK), the ambient on-road concentration from the Research LINE source dispersion model (R-LINE), a hybrid concentration combining STOK and R-LINE, and their associated indoor concentrations from an indoor infiltration mass-balance model. Using the hybrid-based indoor concentration as the standard, the comparison showed that outdoor STOK metrics yielded large errors at both the population level (67% to 93%) and the individual level (average bias between −10% and 95%). For pollutants with a significant contribution from on-road emissions (EC and NOx), the on-road based indoor metric performs best at the population level (error less than 52%). At the individual level, however, the STOK-based indoor concentration performs best (average bias below 30%). For PM2.5, due to the relatively low contribution from on-road emissions (7%), the STOK-based indoor metric performs best at both the population level (error below 40%) and the individual level (error below 25%). The results of the study will help future epidemiology studies select appropriate exposure metrics and reduce potential bias in exposure characterization.

Maritime domain operators/analysts have a mandate to be aware of all that is happening within their areas of responsibility. This mandate derives from the needs to defend sovereignty, protect infrastructures, counter terrorism, detect illegal activities, etc., and it has become more challenging in the past decade, as commercial shipping turned into a potential threat. In particular, a huge portion of the data and information made available to the operators/analysts is mundane, from maritime platforms going about normal, legitimate activities, and it is very challenging for them to detect and identify the non-mundane. To achieve such anomaly detection, they must establish numerous relevant situational facts from a variety of sensor data streams. Unfortunately, many of the facts of interest just cannot be observed; the operators/analysts thus use their knowledge of the maritime domain and their reasoning faculties to infer these facts. As they are often overwhelmed by the large amount of data and information, automated reasoning tools could be used to support them by inferring the necessary facts, ultimately providing indications and warning on a small number of anomalous events worthy of their attention. Along this line of thought, this paper describes a proof-of-concept prototype of a rule-based expert system implementing automated rule-based reasoning in support of maritime anomaly detection.

The second edition of this textbook provides a fully updated approach to fuzzy sets and systems that can model uncertainty, i.e., “type-2” fuzzy sets and systems. The author demonstrates how to overcome the limitations of classical fuzzy sets and systems, enabling a wide range of applications from time-series forecasting to knowledge mining to control. In this new edition, a bottom-up approach is presented that begins by introducing classical (type-1) fuzzy sets and systems, and then explains how they can be modified to handle uncertainty. The author covers fuzzy rule-based systems – from type-1 to interval type-2 to general type-2 – in one volume. For hands-on experience, the book provides information on accessing MATLAB and Java software to complement the content. The book features a full suite of classroom material. Presents fully updated material on new breakthroughs in human-inspired rule-based techniques for handling real-world uncertainties; Allows those already familiar with type-1 fuzzy se...

association rule mining algorithm for extracting both association rules and member- .... The disadvantage of this work is in considering the generalization at each ... If the new attribute is entered, the generalization process does not consider the ...

An interoperation study, WellnessRules, is described, where rules about wellness opportunities are created by participants in rule languages such as Prolog and N3, and translated within a wellness community using RuleML/XML. The wellness rules are centered around participants, as profiles, encoding knowledge about their activities conditional on the season, the time-of-day, the weather, etc. This distributed knowledge base extends FOAF profiles with a vocabulary and rules about wellness group networking. The communication between participants is organized through Rule Responder, permitting wellness-profile translation and distributed querying across engines. WellnessRules interoperates between rules and queries in the relational (Datalog) paradigm of the pure-Prolog subset of POSL and in the frame (F-logic) paradigm of N3. An evaluation of Rule Responder instantiated for WellnessRules revealed acceptable Web response times.

In the context of the Semantic Web, many ontology-related operations, e.g. ontology ranking, segmentation, alignment, articulation, reuse, evaluation, can be boiled down to one fundamental operation: computing the similarity and/or dissimilarity among ontological entities, and in some cases among ontologies themselves. In this paper, we review standard metrics for computing distance measures and we propose a series of semantic metrics. We give a formal account of semantic metrics drawn from a...

Often the information present in a spatial knowledge base is represented at a different level of granularity and abstraction than the query constraints. For querying ontologies containing spatial information, the precise relationships between spatial entities have to be specified in the basic graph pattern of a SPARQL query, which can result in long and complex queries. We present a novel approach to help users intuitively write SPARQL queries to query spatial data, rather than relying on knowledge of the ontology structure. Our framework rewrites queries, using transformation rules to exploit part-whole relations between geographical entities, to address the mismatches between query constraints and the knowledge base. Our experiments were performed on entirely third-party datasets and queries. Evaluations were performed on the Geonames dataset using questions from the National Geographic Bee serialized into SPARQL, and on the British Administrative Geography Ontology using questions from a popular trivia website. These experiments demonstrate high precision in retrieval of results and ease in writing queries.

Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rulebase as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-based system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

Fault detection has a vital role in the process industry to enhance productivity, efficiency, and safety, and to avoid expensive maintenance. This paper proposes an innovative multivariate fault detection method that can be used for monitoring nonlinear processes. The proposed method merges the advantages of nonlinear projection to latent structures (NLPLS) modeling with those of the Hellinger distance (HD) metric to identify abnormal changes in highly correlated multivariate data. Specifically, the HD is used to quantify the dissimilarity between the current NLPLS-based residual distribution and a reference probability distribution obtained using fault-free data. Furthermore, to further enhance the robustness of the method to measurement noise and reduce false alarms due to modeling errors, wavelet-based multiscale filtering of residuals is used before the application of the HD-based monitoring scheme. The performance of the developed NLPLS-HD fault detection technique is illustrated using simulated plug flow reactor data. The results show that the proposed method provides favorable performance for detection of faults compared to the conventional NLPLS method.
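For two discrete probability distributions p and q, the Hellinger distance is H(p, q) = (1/√2)·‖√p − √q‖₂, which is bounded between 0 (identical distributions) and 1 (disjoint support). A minimal sketch, independent of the paper's NLPLS residual modeling:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions
    given as equal-length sequences of nonnegative weights summing to 1."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2.0)
```

In a monitoring scheme, H would be computed between a histogram of recent residuals and the fault-free reference histogram, with an alarm raised above a threshold.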

Conclusions: Limit setting is associated with greater SV. Collaborative rule setting may be effective for managing boys' game-console use. More research is needed to understand rule-based parenting practices.

This paper presents Porgy – an interactive visual environment for rule-based modelling of biochemical systems. We model molecules and molecule interactions as port graphs and port graph rewrite rules, respectively. We use rewriting strategies to control which rules to apply, and where and when to apply them. Our main contributions to rule-based modelling of biochemical systems lie in the strategy language and the associated visual and interactive features offered by Porgy. These features faci...

The derivative-based trapezoid rule for the Riemann-Stieltjes integral is presented, which uses derivative values at the two endpoints. This kind of quadrature rule obtains an increase of two orders of precision over the trapezoid rule for the Riemann-Stieltjes integral, and the error term is investigated. Finally, the soundness of the generalization of the derivative-based trapezoid rule to the Riemann-Stieltjes integral is demonstrated.
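For the ordinary Riemann integral, the analogous derivative-based (corrected) trapezoid rule is ∫ₐᵇ f(x) dx ≈ (b−a)(f(a)+f(b))/2 − (b−a)²(f′(b)−f′(a))/12, which raises the degree of precision from 1 to 3 (exact for cubics). A sketch of that classical special case (the paper's Riemann-Stieltjes generalization is not reproduced here):

```python
def corrected_trapezoid(f, df, a, b):
    """Derivative-based (corrected) trapezoid rule on [a, b]:
    uses endpoint values of f and of its derivative df."""
    h = b - a
    return h * (f(a) + f(b)) / 2.0 - h ** 2 * (df(b) - df(a)) / 12.0
```

For f(x) = x³ on [0, 1] this returns exactly 1/4, whereas the plain trapezoid rule gives 1/2.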

We investigate a novel way of robust face image feature extraction by adopting methods based on Unsupervised Linear Subspace Learning to extract a small number of good features. Firstly, the face image is divided into blocks of a specified size, and then we propose and extract a pooled Histogram of Oriented Gradients (pHOG) over each block. Secondly, an improved Earth Mover's Distance (EMD) metric is adopted to measure the dissimilarity between blocks of one face image and the corresponding blocks from the rest of the face images. Thirdly, considering the limitations of the original Locality Preserving Projections (LPP), we propose Block Structure LPP (BSLPP), which effectively preserves the structural information of face images. Finally, an adjacency graph is constructed and a small number of good features of a face image are obtained by methods based on Unsupervised Linear Subspace Learning. A series of experiments has been conducted on several well-known face databases to evaluate the effectiveness of the proposed algorithm. In addition, we construct versions of the AR and Extended Yale B face databases with noise, geometric distortion, slight translation, and slight rotation, and we verify the robustness of the proposed algorithm when faced with a certain degree of these disturbances.
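For one-dimensional histograms defined on the same bins with equal total mass, the EMD reduces to the accumulated absolute difference of the running sums of the two histograms. A minimal sketch of that base case (the paper's improved EMD variant is not reproduced here):

```python
def emd_1d(p, q):
    """1-D earth mover's distance between two histograms on the same bins,
    assuming equal total mass: sum of absolute cumulative differences."""
    emd, cum = 0.0, 0.0
    for a, b in zip(p, q):
        cum += a - b       # running surplus of mass in p vs q
        emd += abs(cum)    # cost of carrying that surplus one bin over
    return emd
```

Moving all mass from the first bin to the third costs 2 (two one-bin moves), which distance-insensitive measures such as chi-squared would not distinguish from a one-bin move.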

The paper presents a novel visual quality metric for lossy compressed video quality assessment. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large number of video sequence/subjective quality score pairs. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset with comparison to existing approaches.

Success of Cloud computing requires that both customers and providers can be confident that signed Service Level Agreements (SLA) are supporting their respective business activities to their best extent. Currently used SLAs fail to provide such confidence, especially when providers outsource resources to other providers. These resource providers typically support very simple metrics, or metrics that hinder an efficient exploitation of their resources. In this paper, we propose a re...

The article seeks to map managers' decisions in order to understand how franchisors have chosen system characteristics, and which metrics have been adopted to measure performance. Through 15 interviews with Brazilian franchises, it was confirmed that revenue is the main metric used by national franchises to measure performance, although other indicators are also used in a complementary way. In addition, two other factors were cited by the interviewees a...

The paper describes the design and implementation of an independent, third-party contract monitoring service called the Contract Compliance Checker (CCC). The CCC is provided with the specification of the contract in force, and is capable of observing and logging the relevant business-to-business (B2B) interaction events, in order to determine whether the actions of the business partners are consistent with the contract. A contract specification language called EROP (for Events, Rights, Obligations and Prohibitions) has been developed for the CCC based on business rules; it provides constructs to specify which rights, obligations, and prohibitions become active and inactive after the occurrence of events related to the execution of business operations. The system has been designed to work with B2B industry standards such as ebXML and RosettaNet.

We present an object identification methodology applied in a navigation assistance for visually impaired (NAVI) system. The NAVI has a single board processing system (SBPS), a digital video camera mounted headgear, and a pair of stereo earphones. The captured image from the camera is processed by the SBPS to generate a specially structured stereo sound suitable for vision impaired people in understanding the presence of objects/obstacles in front of them. The image processing stage is designed to identify the objects in the captured image. Edge detection and edge-linking procedures are applied in the processing of image. A concept of object preference is included in the image processing scheme and this concept is realized using a fuzzy-rulebase. The blind users are trained with the stereo sound produced by NAVI for achieving a collision-free autonomous navigation.

Rules are utilized to assist in the monitoring process that is required in activities such as disease management and customer relationship management. These rules are specified according to the application's best practices. Most research efforts emphasize the specification and execution of these rules. Few research efforts focus on managing these rules as one object that has a management life-cycle. This paper presents our manipulation and query language, developed to facilitate the maintenance of this object during its life-cycle and to query the information contained in it. This language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.

When vein segments are implanted into the arterial system for use in arterial bypass grafting, adaptation to the higher pressure and flow of the arterial system is accomplished through wall thickening and expansion. These early remodeling events have been found to be closely coupled to the local hemodynamic forces, such as shear stress and wall tension, and are believed to be the foundation for later vein graft failure. To further our mechanistic understanding of the cellular and extracellular interactions that lead to global changes in tissue architecture, a rule-based modeling method is developed through the application of basic rules of behavior for these molecular and cellular activities. In the current method, smooth muscle cells (SMC), extracellular matrix (ECM), and monocytes are selected as the three components that occupy the elements of a grid system that comprises the developing vein graft intima. The probabilities of the cellular behaviors are developed based on data extracted from in vivo experiments. At each time step, the various probabilities are computed and applied to the SMC and ECM elements to determine their next physical state and behavior. One- and two-dimensional models are developed to test and validate the computational approach. The importance of monocyte infiltration, and its associated effect in augmenting extracellular matrix deposition, was evaluated and found to be an important component in model development. Final model validation is performed using an independent set of experiments, where model predictions of intimal growth are evaluated against experimental data obtained from the complex geometry and shear stress patterns offered by a mid-graft focal stenosis; simulation results show good agreement with the experimental data.
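The rule-based grid update described above can be sketched as a one-dimensional probabilistic cellular model. The probabilities and element names below are illustrative placeholders, not the values fitted from the in vivo experiments:

```python
import random

# Illustrative per-step probabilities (NOT the paper's fitted values).
P_SMC_DIVIDE = 0.05   # chance an SMC fills an empty neighbour by division
P_ECM_DEPOSIT = 0.10  # chance an SMC deposits ECM into an empty neighbour

def step(grid, rng):
    """One synchronous update of a 1-D intima grid whose elements are
    'SMC', 'ECM', or None (empty): each SMC may fill its empty
    right-hand neighbour with a new SMC or with ECM."""
    new = list(grid)
    for i, cell in enumerate(grid):
        if cell == "SMC" and i + 1 < len(grid) and grid[i + 1] is None:
            r = rng.random()
            if r < P_SMC_DIVIDE:
                new[i + 1] = "SMC"
            elif r < P_SMC_DIVIDE + P_ECM_DEPOSIT:
                new[i + 1] = "ECM"
    return new
```

Iterating `step` grows the occupied region of the grid, a crude analogue of intimal thickening; monocyte-driven augmentation of ECM deposition would enter as a state-dependent increase of the deposition probability.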

Grid resilience is a concept related to a power system's ability to continue operating and delivering power even in the event that low probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend upon the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows for both outputs from system models and for historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. This document describes the grid resilience metrics and analysis methods. Demonstration of the metrics and methods is shown through a set of illustrative use cases.

This paper aims to develop new, resilience type metrics for long-term water resources management under uncertain climate change and population growth. Resilience is defined here as the ability of a water resources management system to 'bounce back', i.e. absorb and then recover from a water deficit event, restoring the normal system operation. Ten alternative metrics are proposed and analysed addressing a range of different resilience aspects including duration, magnitude, frequency and volume of related water deficit events. The metrics were analysed on a real-world case study of the Bristol Water supply system in the UK and compared with current practice. The analyses included an examination of metrics' sensitivity and correlation, as well as a detailed examination into the behaviour of metrics during water deficit periods. The results obtained suggest that multiple metrics which cover different aspects of resilience should be used simultaneously when assessing the resilience of a water resources management system, leading to a more complete understanding of resilience compared with current practice approaches. It was also observed that calculating the total duration of a water deficit period provided a clearer and more consistent indication of system performance compared to splitting the deficit periods into the time to reach and time to recover from the worst deficit events.
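Several of the duration, magnitude, and volume aspects mentioned above can be read off per water-deficit event once the supply and demand series are aligned. A sketch under the assumption of discrete time series (function and variable names are illustrative, not the paper's ten metrics):

```python
def deficit_events(supply, demand):
    """Split aligned supply/demand series into water-deficit events.
    Returns one (duration, peak deficit, total deficit volume) tuple
    per contiguous run of time steps where demand exceeds supply."""
    events, dur, peak, vol = [], 0, 0.0, 0.0
    for s, d in zip(supply, demand):
        deficit = max(0.0, d - s)
        if deficit > 0:
            dur += 1
            peak = max(peak, deficit)
            vol += deficit
        elif dur:
            events.append((dur, peak, vol))
            dur, peak, vol = 0, 0.0, 0.0
    if dur:
        events.append((dur, peak, vol))
    return events
```

Resilience metrics such as maximum event duration or mean event volume then follow by aggregating over the returned tuples.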

The assumptions used in the design basis of process equipment have always been as much art as science. The usually imprecise boundaries of the equipment's operational envelope provide opportunities for two major improvements in the operations and maintenance (O and M) of process machinery: (a) the actual versus intended machine environment can be understood and brought into much better alignment, and (b) the end goal can define O and M strategies in terms of life cycle and economic management of plant assets. Scientists at the Pacific Northwest National Laboratory (PNNL) have performed experiments aimed at understanding and controlling aging of both safety-specific nuclear plant components and the infrastructure that supports essential plant processes. In this paper we examine the development of aging precursor metrics and their correlation with degradation rate and projected machinery failure. Degradation-specific correlations have been developed at PNNL that will allow accurate physics-based diagnostic and prognostic determinations to be derived from a new view of condition-based maintenance. This view, founded in root cause analysis, is focused on quantifying the primary stressor(s) responsible for degradation in the component of interest and formulating a deterministic relationship between the stressor intensity and the resulting degradation rate. This precursive relationship between the performance, degradation, and underlying stressor set is used to gain a first-principles approach to prognostic determinations. A holistic infrastructure approach, as applied through a condition-based maintenance framework, will allow intelligent, automated diagnostic and prognostic programming to provide O and M practitioners with an understanding of the condition of their machinery today and an assurance of its operational state tomorrow.

Over the last quarter of a century, two types of fuzzy rule-based (FRB) systems have dominated, namely the Mamdani and Takagi-Sugeno types. Both use the same type of scalar fuzzy sets defined per input variable in their antecedent part, which are aggregated at the inference stage by t-norms or co-norms representing logical AND/OR operations. In this paper, we propose a significantly simplified alternative that defines the antecedent part of FRB systems by data Clouds and density distribution. This new type of FRB system goes further in conceptual and computational simplification while preserving the best features (flexibility, modularity, and human intelligibility) of its predecessors. The proposed concept offers an alternative non-parametric form of the rule antecedents, which fully reflects the real data distribution and does not require any explicit aggregation operations or scalar membership functions to be imposed. Instead, it derives the fuzzy membership of a particular data sample to a Cloud from the density distribution of the data associated with that Cloud. Contrast this with clustering, a parametric data-space decomposition/partitioning in which the fuzzy membership to a cluster is measured by the distance to the cluster centre/prototype, ignoring all the data that form that cluster or approximating their distribution. The proposed new approach takes into account fully and exactly the spatial distribution and similarity of all the real data through an innovative and much simplified form of the antecedent part. In this paper, we provide several numerical examples aiming to illustrate the concept.
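A density-derived Cloud membership can be written without any parametric membership function, for example as a Cauchy-type local density equal to the inverse of one plus the mean squared distance from the sample to all points of the Cloud. The following is a sketch of that idea under our own naming, not the paper's exact formulation:

```python
def cloud_membership(x, cloud):
    """Cauchy-type local density of sample x with respect to a data Cloud:
    D(x) = 1 / (1 + mean squared Euclidean distance from x to the
    Cloud's points). Higher density = stronger membership."""
    msd = sum(sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in cloud) / len(cloud)
    return 1.0 / (1.0 + msd)
```

Unlike a distance to a single prototype, this value depends on every sample in the Cloud, so it tracks the actual data distribution rather than an assumed shape.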

Optimization of coefficients in monetary policy rules is performed on the basis of a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to inadequate results and pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule allow volatility, estimated on Russian data for 2001-2012, to be decreased by about 20%. The degree of exchange rate flexibility parameter was found to be low...

Software birthmark is a unique quality of software used to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing software without proper permission, as mentioned in the desired license agreement. The estimation of a birthmark can play a key role in understanding its effectiveness. In this paper, a new technique is presented to evaluate and estimate a software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, concepts from soft computing such as probabilistic and fuzzy computing have been taken into account, and fuzzy logic is used to estimate the properties of the birthmark. The proposed fuzzy rule-based technique is validated through a case study, and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363

Business rules are statements that express (certain parts of) a business policy, defining business terms and defining or constraining the operations of an enterprise, in a declarative manner. Since these rules define and constrain the interaction among business agents in the course of business

This paper proposes an approach for a more effective definition of cellular automata transition rules for landscape change modeling using an advanced spatial metrics analysis. This approach considers a four-stage methodology based on: (i) the search for the appropriate spatial metrics with minimal correlations; (ii) the selection of the appropriate neighborhood size; (iii) the selection of the appropriate technique for spatial metrics application; and (iv) the analysis of the contribution level of each spatial metric for joint use. The case study uses an initial set of 7 spatial metrics of which 4 are selected for modeling. Results show a better model performance when compared to modeling without any spatial metrics or with the initial set of 7 metrics.
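Stage (i), searching for spatial metrics with minimal correlations, can be sketched as a greedy filter over a priority-ordered metric list. The metric names, correlation values, and threshold below are illustrative, not those of the case study:

```python
def select_uncorrelated(metrics, corr, threshold=0.7):
    """Greedily keep metrics (in the given priority order) whose absolute
    pairwise correlation with every already-kept metric stays below the
    threshold. `corr` maps frozenset({a, b}) -> correlation coefficient."""
    kept = []
    for m in metrics:
        if all(abs(corr[frozenset((m, k))]) < threshold for k in kept):
            kept.append(m)
    return kept
```

Applied to an initial set of 7 landscape metrics, such a filter would retain a reduced subset (4 in the case study) for use in the transition rules.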

Neoliberalism extols the ability of unregulated markets to optimise human relations. Yet, as David Graeber has recently illustrated, it is paradoxically built on rigorous systems of rules, metrics and managers. The potential transition to a market-based tuition and research-funding model for higher education in Australia has, not surprisingly,…

This study explores a method to classify seven tropical rainforest tree species from full-range (400–2,500 nm) hyperspectral data acquired at tissue (leaf and bark), pixel, and crown scales using laboratory and airborne sensors. Metrics that respond to vegetation chemistry and structure were derived using narrowband indices, derivative- and absorption-based techniques, and spectral mixture analysis. We then used the Random Forests tree-based classifier to discriminate species with minimally-correlated, importance-ranked metrics. At all scales, best overall accuracies were achieved with metrics derived from all four techniques that targeted chemical and structural properties across the visible to shortwave infrared spectrum (400–2,500 nm). For tissue spectra, overall accuracies were 86.8% for leaves, 74.2% for bark, and 84.9% for leaves plus bark. Variation in tissue metrics was best explained by an axis of red absorption related to photosynthetic leaves and an axis distinguishing bark water and other chemical absorption features. Overall accuracies for individual tree crowns were 71.5% for pixel spectra, 70.6% for crown-mean spectra, and 87.4% for a pixel-majority technique. At pixel and crown scales, tree structure and phenology at the time of image acquisition were important factors that determined species spectral separability.

Objective To develop new standardized eye tracking based measures and metrics for infants' gaze dynamics in the face-distractor competition paradigm. Method Eye tracking data were collected from two samples of healthy 7-month-old infants (total n = 45), as well as one sample of 5-month-old infants (n = 22), in a paradigm with a picture of a face or a non-face pattern as a central stimulus and a geometric shape as a lateral stimulus. The data were analyzed by using conventional measures of infants' initial disengagement from the central to the lateral stimulus (i.e., saccadic reaction time and probability) and, additionally, novel measures reflecting infants' gaze dynamics after the initial disengagement (i.e., cumulative allocation of attention to the central vs. peripheral stimulus). Results The results showed that the initial saccade away from the centrally presented stimulus is followed by a rapid re-engagement of attention with the central stimulus, leading to cumulative preference for the central stimulus over the lateral stimulus over time. This pattern tended to be stronger for salient facial expressions as compared to non-face patterns, was replicable across two independent samples of 7-month-old infants, and differentiated between 7- and 5-month-old infants. Conclusion The results suggest that eye tracking based assessments of infants' cumulative preference for faces over time can be readily parameterized and standardized, and may provide valuable techniques for future studies examining normative developmental changes in preference for social signals. Significance Standardized measures of early developing face preferences may have potential to become surrogate biomarkers of neurocognitive and social development. PMID:24845102

Scientists at the Pacific Northwest National Laboratory (PNNL) have examined the necessity for understanding and controlling the aging process of both safety-specific plant components and the infrastructure that supports these processes. In this paper we examine the preliminary development of aging precursor metrics and their correlation with degradation rate and projected machine failure. Degradation-specific correlations are currently being developed at PNNL that will allow accurate physics-based diagnostic and prognostic determinations to be derived from a new view of condition-based maintenance. This view, founded in root cause analysis, is focused on quantifying the primary stressor(s) responsible for degradation in the component of interest. The derivative relationship between the performance, degradation and the underlying stressor set is used to gain a first-principles approach to prognostic determinations. The assumptions used for the design basis of process equipment have always been as much art as science and for this reason have been misused or relegated into obscurity in all but the nuclear industry. The ability to successfully link degradation and expected equipment life to stressor intensity level is valuable in that it quantifies the degree of machine stress for a given production level. This allows two major improvements in the O and M of process machinery: (1) the actual versus intended machine environment can be understood and brought into much better alignment, and (2) the end goal can define operations and maintenance strategies in terms of life cycle and economic management of plant assets. A holistic infrastructure approach, as applied through a CBM framework, will allow intelligent, automated diagnostic and prognostic programs to provide O and M practitioners with an understanding of the condition of their machinery today and an assurance of its operational state tomorrow.

People reason about real-estate prices both in terms of general rules and in terms of analogies to similar cases. We propose to empirically test which mode of reasoning fits the data better. To this end, we develop the statistical techniques required for the estimation of the case-based model. It is hypothesized that case-based reasoning will have relatively more explanatory power in databases of rental apartments, whereas rule-based reasoning will have a relative advantage in sales data. We ...

Purpose: Task Group 204 introduced effective diameter (ED) as the patient size metric used to correlate size-specific dose estimates. However, this size metric fails to account for patient attenuation properties, and it has been suggested that it be replaced by an attenuation-based size metric, water equivalent diameter (D_W). The purpose of this study is to investigate different size metrics, effective diameter and water equivalent diameter, in combination with regional descriptions of scanner output, to establish the most appropriate size metric to be used as a predictor for organ dose in tube current modulated CT exams. Methods: 101 thoracic and 82 abdomen/pelvis scans from clinically indicated CT exams were collected retrospectively from a multidetector row CT (Sensation 64, Siemens Healthcare) with Institutional Review Board approval to generate voxelized patient models. Fully irradiated organs (lung and breasts in thoracic scans; liver, kidneys, and spleen in abdominal scans) were segmented and used as tally regions in Monte Carlo simulations for reporting organ dose. Along with image data, raw projection data were collected to obtain tube current information for simulating tube current modulation scans using Monte Carlo methods. Additionally, previously described patient size metrics [ED, D_W, and approximated water equivalent diameter (D_Wa)] were calculated for each patient and reported in three different ways: a single value averaged over the entire scan, a single value averaged over the region of interest, and a single value from a location in the middle of the scan volume. Organ doses were normalized by an appropriate mAs-weighted CTDI_vol to reflect regional variation of tube current. Linear regression analysis was used to evaluate the correlations between normalized organ doses and each size metric. Results: For the abdominal organs, the correlations between normalized organ dose and size metric were overall slightly higher for all three
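Water equivalent diameter is commonly computed from the CT numbers inside the patient contour as A_w = Σ(HU/1000 + 1)·A_pixel and D_W = 2·√(A_w/π). A sketch under that convention (a general formulation, not necessarily the exact procedure of this study):

```python
import math

def water_equivalent_diameter(hu_values, pixel_area_mm2):
    """Water equivalent diameter (mm) from the CT numbers (HU) of the
    pixels inside the patient contour on one axial slice:
    A_w = sum(HU/1000 + 1) * A_pixel,  D_W = 2 * sqrt(A_w / pi)."""
    a_w = sum(hu / 1000.0 + 1.0 for hu in hu_values) * pixel_area_mm2
    return 2.0 * math.sqrt(a_w / math.pi)
```

For a region of pure water (HU = 0) the formula reduces to the diameter of a circle with the same area, so D_W and ED coincide; for lung or bone the attenuation weighting shifts D_W away from the geometric size.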

Agricultural subsurface drainage changes the field hydrology and potentially the amount of water available to the crop by altering the flow path and the rate and timing of water removal. Evapotranspiration (ET) is normally among the largest components of the field water budget, and the changes in ET from the introduction of subsurface drainage are likely to have a greater influence on the overall water yield (surface runoff plus subsurface drainage) from subsurface drained (TD) fields compared to fields without subsurface drainage (UD). To test this hypothesis, we examined the impact of subsurface drainage on ET at two sites located in the Upper Midwest (North Dakota-Site 1 and South Dakota-Site 2) using the Landsat imagery-based METRIC (Mapping Evapotranspiration at high Resolution with Internalized Calibration) model. Site 1 was planted with corn (Zea mays L.) and soybean (Glycine max L.) during the 2009 and 2010 growing seasons, respectively. Site 2 was planted with corn for the 2013 growing season. During the corn growing seasons (2009 and 2013), differences between the total ET from TD and UD fields were less than 5 mm. For the soybean year (2010), ET from the UD field was 10% (53 mm) greater than that from the TD field. During the peak ET period from June to September for all study years, ET differences from TD and UD fields were within 15 mm (<3%). Overall, differences between daily ET from TD and UD fields were not statistically significant (p > 0.05) and showed no consistent relationship.

Correlation of data within electronic health records is necessary for implementation of various clinical decision support functions, including patient summarization. A key type of correlation is linking medications to clinical problems; while some databases of problem-medication links are available, they are not robust and depend on problems and medications being encoded in particular terminologies. Crowdsourcing represents one approach to generating robust knowledge bases across a variety of terminologies, but more sophisticated approaches are necessary to improve accuracy and reduce manual data review requirements. We sought to develop and evaluate a clinician reputation metric to facilitate the identification of appropriate problem-medication pairs through crowdsourcing without requiring extensive manual review. We retrieved medications from our clinical data warehouse that had been prescribed and manually linked to one or more problems by clinicians during e-prescribing between June 1, 2010 and May 31, 2011. We identified measures likely to be associated with the percentage of accurate problem-medication links made by clinicians. Using logistic regression, we created a metric for identifying clinicians who had made greater than or equal to 95% appropriate links. We evaluated the accuracy of the approach by comparing links made by those physicians identified as having appropriate links to a previously manually validated subset of problem-medication pairs. Of 867 clinicians who asserted a total of 237,748 problem-medication links during the study period, 125 had a reputation metric that predicted the percentage of appropriate links greater than or equal to 95%. These clinicians asserted a total of 2464 linked problem-medication pairs (983 distinct pairs). Compared to a previously validated set of problem-medication pairs, the reputation metric achieved a specificity of 99.5% and marginally improved the sensitivity of previously described knowledge bases. A
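
The reputation metric above is, in essence, a fitted logistic model thresholded at the 95% level. A minimal sketch of that selection step, with entirely hypothetical features and coefficients (the paper's actual predictors are not reproduced in this record):

```python
import math

def reputation_score(features, weights, bias):
    """Logistic model mapping clinician-level features to the predicted
    proportion of appropriate problem-medication links.
    Features, weights, and bias here are purely illustrative."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted coefficients for two features
# (e.g., log link volume and years of e-prescribing use).
weights, bias = [1.2, 2.0], -2.0

clinicians = {
    "clin_A": [3.1, 0.9],   # high-volume, experienced
    "clin_B": [0.2, 0.1],   # low-volume, new user
}
# Keep only clinicians whose predicted proportion is >= 95%.
trusted = {cid for cid, x in clinicians.items()
           if reputation_score(x, weights, bias) >= 0.95}
print(sorted(trusted))  # -> ['clin_A']
```

Links asserted by the `trusted` subset then seed the knowledge base without per-pair manual review.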

The Internet of Things (IoT) is a hot issue in recent years. It accumulates large amounts of data from IoT users, and mining useful knowledge from these data is a great challenge. Classification is an effective strategy that can predict the needs of users in IoT. However, many traditional rule-based classifiers cannot guarantee that all instances are covered by at least two classification rules, so these algorithms cannot achieve high accuracy on some datasets. In this paper, we propose a new rule-based classification, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P induces two different rule sets, A and B. Every instance in the training set is covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we prune the length of the rules in rule set B. Our experimental results indicate that CDCR-P is not only feasible but also achieves high accuracy. PMID:24511304

Customers are the most important assets of most companies, such that customer equity has been used as a proxy for shareholder value. However, linking customer metrics to shareholder value without considering debt and non-operating assets ignores their effects on relative changes in customer equity

A modified evidence combination rule with a combination parameter λ is proposed to solve some problems in Dempster-Shafer (D-S) theory by considering the correlation and complementarity among the pieces of evidence, as well as the size and intersection of the subsets in each piece of evidence. It can produce reasonable results even when the evidence is conflicting. Applying this rule to a real infrared/millimetre-wave fusion system yields a satisfactory result.
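
For context, the classical Dempster rule that the modified rule builds on can be sketched as follows; the λ-parameterized redistribution of conflict is not reproduced (its exact form is not given in this record), so the sketch shows only the baseline combination and the conflict mass K that the modification targets:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster combination of two mass functions whose focal
    elements are frozensets. Returns the fused masses and the conflict K."""
    fused, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            fused[inter] = fused.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: v / (1.0 - conflict) for a, v in fused.items()}, conflict

# Zadeh's classic highly-conflicting example: the baseline rule puts all
# mass on the barely-supported hypothesis C, which motivates modified rules.
A, B, C = frozenset({"a"}), frozenset({"b"}), frozenset({"c"})
fused, k = dempster_combine({A: 0.9, C: 0.1}, {B: 0.9, C: 0.1})
print(round(fused[C], 6), round(k, 2))  # -> 1.0 0.99
```

With K = 0.99, nearly all the joint mass is conflict, yet the normalization forces certainty onto C; this is exactly the pathology a λ-weighted combination is meant to soften.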

We describe the construction of light-cone sum rules (LCSRs) for exclusive $B$-meson decays into light energetic hadrons from correlation functions within soft-collinear effective theory (SCET). As an example, we consider the SCET sum rule for the $B \\to \\pi$ transition form factor at large recoil, including radiative corrections from hard-collinear loop diagrams at first order in the strong coupling constant.

...be causing the infection (.2) [RULE633]. (The student asks, "Does the patient have a fever?") ... MYCIN never needed to inquire about whether ... Of the remaining clauses, we classified most as restrictions, and the one or two that remained constituted the key factor(s) of the rule. ... Infection is bacterial (KEY FACTOR); 4) Petechial is one of the types of rash which the patient has (RESTRICTION); 5) Purpuric is not one of the types

The need for communication between people is something rooted in our nature as human beings, but the way people communicate has completely changed since the smartphone and the Internet appeared. On the other hand, knowing someone's personality is really difficult, something we achieve only after working on communication skills with others. These two points motivated my TFG, whose aim is to predict human personality by collecting information from smartphones, using Big Data and Machi...

Growing numbers of users, and the many access control policies that involve many different resource attributes in service-oriented environments, bring various problems in protecting resources. This paper analyzes the relationships of resource attributes to user attributes in all policies, and proposes a general attribute- and rule-based role-based access control (GAR-RBAC) model to meet these security needs. The model can dynamically assign users to roles via rules to accommodate growing numbers of users. These rules use different attribute expressions and permissions as part of the authorization constraints, and are defined by analyzing the relations of resource attributes to user attributes in the many access policies defined by the enterprise. The model is a general access control model, can support many access control policies, and can also be applied more widely to services. The paper also describes how to use the GAR-RBAC model in Web service environments.
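
A toy sketch of the rule-driven user-role assignment idea (attribute predicates mapping to roles); the attribute names, roles, and rule encoding are invented for illustration and are not from the GAR-RBAC paper:

```python
def assign_roles(user_attrs, rules):
    """Dynamically assign a user to roles: each rule pairs an attribute
    expression (here a predicate) with the role it grants."""
    return {role for predicate, role in rules if predicate(user_attrs)}

# Hypothetical enterprise rules derived from resource/user attribute analysis.
rules = [
    (lambda u: u.get("department") == "finance", "finance_reader"),
    (lambda u: u.get("clearance", 0) >= 3,       "auditor"),
]

user = {"department": "finance", "clearance": 3}
print(sorted(assign_roles(user, rules)))  # -> ['auditor', 'finance_reader']
```

Because roles are derived from rules at request time rather than stored per user, adding users never requires editing role membership tables.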

Ranking of association rules is currently an interesting topic in data mining and bioinformatics. The huge number of rules over items (or genes) produced by association rule mining (ARM) algorithms confuses the decision maker. In this article, we propose a weighted rule-mining technique, RANWAR (rank-based weighted association rule mining), to rank the rules using two novel rule-interestingness measures, viz., rank-based weighted condensed support (wcs) and weighted condensed confidence (wcc), to bypass the problem. These measures depend on the rank of the items (genes): using the rank, we assign a weight to each item. RANWAR generates far fewer frequent itemsets than state-of-the-art association rule mining algorithms, and thus reduces the execution time of the algorithm. We run RANWAR on gene expression and methylation datasets. The genes of the top rules are biologically validated by Gene Ontology (GO) and KEGG pathway analyses. Many top-ranked rules extracted by RANWAR that hold poor ranks in traditional Apriori are highly biologically significant to the related diseases. Finally, the top rules evolved from RANWAR that are not found by Apriori are reported.
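
The core idea, weighting support by item rank, can be sketched as below. The exact weighting and condensation formulas used by RANWAR are not given in this record, so the linear rank weights and the averaged-weight support here only illustrate the principle:

```python
def rank_weights(ranked_genes):
    """Assign each gene a weight from its rank: rank 1 (most important)
    gets the largest weight. RANWAR's actual scheme may differ; this
    linear mapping is only illustrative."""
    n = len(ranked_genes)
    return {g: (n - i) / n for i, g in enumerate(ranked_genes)}

def weighted_support(itemset, transactions, w):
    """Rank-weighted support: each supporting transaction contributes the
    mean weight of the itemset's genes instead of a flat count of 1."""
    itemset = set(itemset)
    avg_w = sum(w[g] for g in itemset) / len(itemset)
    hits = sum(1 for t in transactions if itemset <= t)
    return avg_w * hits / len(transactions)

genes = ["TP53", "BRCA1", "EGFR", "KRAS"]     # ranked list (illustrative)
w = rank_weights(genes)
tx = [{"TP53", "BRCA1"}, {"TP53", "EGFR"}, {"TP53", "BRCA1", "KRAS"}]
print(round(weighted_support({"TP53", "BRCA1"}, tx, w), 4))  # -> 0.5833
```

An itemset of low-ranked genes thus needs more supporting transactions to clear the same threshold, which is what prunes the itemset explosion.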

Purpose: With the ever-growing data of heterogeneous quality, relevance assessment of atlases becomes increasingly critical for multi-atlas-based image segmentation. However, there is no universally recognized best relevance metric, and even a standard to compare candidates remains elusive. This study, for the first time, designs a quantification to assess relevance metrics’ quality, based on a novel perspective of the metric as a surrogate for inferring the inaccessible oracle geometric agreement. Methods: We first develop an inference model to relate surrogate metrics in image space to the underlying oracle relevance metric in segmentation label space, with a monotonically non-decreasing function subject to random perturbations. Subsequently, we investigate model parameters to reveal key contributing factors to surrogates’ ability in prognosticating the oracle relevance value, for the specific task of atlas selection. Finally, we design an effective contrast-to-noise ratio (eCNR) to quantify surrogates’ quality based on insights from these analyses and empirical observations. Results: The inference model was specialized to a linear function with normally distributed perturbations, with the surrogate metric exemplified by several widely used image similarity metrics, i.e., MSD/NCC/(N)MI. Surrogates’ behaviors in selecting the most relevant atlases were assessed under varying eCNR, showing that surrogates with high eCNR dominated those with low eCNR in retaining the most relevant atlases. In an end-to-end validation, NCC/(N)MI with eCNR of 0.12 resulted in statistically better segmentation, with mean DSC of about 0.85 and first and third quartiles of (0.83, 0.89), than MSD with eCNR of 0.10, mean DSC of 0.84, and first and third quartiles of (0.81, 0.89). Conclusion: The designed eCNR is capable of characterizing surrogate metrics’ quality in prognosticating the oracle relevance value. It has been demonstrated to be

Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables that are often different from, or complementary to, those found by similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes that genes used together within a rule-based machine learning model to classify the samples might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in the context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods, and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process. The

The move from cash accounting to accrual accounting, or from rule-based to principle-based accounting, by many governments is part of ongoing efforts to promote a more business-like and performance-focused public sector. Using questionnaire responses from preparers of financial statements of public universities in Malaysia, this study examines the implementation challenges and benefits of principle-based accounting. Results from these responses suggest that most respondents perceived signific...

Similarity between objects plays an important role in both human cognitive processes and artificial systems for recognition and categorization. How to appropriately measure such similarities for a given task is crucial to the performance of many machine learning, pattern recognition and data mining methods. This book is devoted to metric learning, a set of techniques to automatically learn similarity and distance functions from data that has attracted a lot of interest in machine learning and related fields in the past ten years. In this book, we provide a thorough review of the metric learnin

Background: Similarity of sequences is a key mathematical notion for classification and phylogenetic studies in biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov complexity, and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets, yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM, and mostly at a qualitative level; no comparison among UCD, NCD and CD is available; and no comparison of USM with existing methods, both based on alignments and not, seems to be available. Results: We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to molecular biology. This offers the first systematic and quantitative experimental assessment of this methodology, naturally complementing the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC

Design criteria for carbon-based ultracapacitors have been determined for specified energy and power requirements, using the geometry of the components and such material properties as density, porosity and conductivity as parameters, while also considering chemical compatibility. This analysis shows that the weights of active and inactive components of the capacitor structure must be carefully balanced for maximum energy and power density. When applied to nonaqueous electrolytes, the design rules for a 5 Wh/kg device call for porous carbon with a specific capacitance of about 30 F/cu cm. This performance is not achievable with pure, electrostatic double layer capacitance. Double layer capacitance is only 5 to 30% of that observed in aqueous electrolyte. Tests also showed that nonaqueous electrolytes have a diminished capability to access micropores in activated carbon, in one case yielding a capacitance of less than 1 F/cu cm for carbon that had 100 F/cu cm in aqueous electrolyte. With negative results on nonaqueous electrolytes dominating the present study, the obvious conclusion is to concentrate on aqueous systems. Only aqueous double layer capacitors offer adequate electrostatic charging characteristics, which are the basis for high power performance. There are many opportunities for further advancing aqueous double layer capacitors, one being the use of highly activated carbon films, as opposed to powders, fibers and foams. While the manufacture of carbon films is still costly, and while the energy and power density of the resulting devices may not meet the optimistic goals that have been proposed, this technology could produce true double layer capacitors with significantly improved performance and large commercial potential.

We explore influences on unlisted companies when Portugal moved from a code-law, rules-based accounting system to a principles-based accounting system of adapted International Financial Reporting Standards (IFRS). Institutionalisation of the new principles-based system was generally facilitated by a socio-economic and political context that increasingly supported IFRS logic. This helped central actors gain political opportunity, mobilise important allies, and accommodate major protagonists. The preparedness of unlisted companies to adopt the new IFRS-based accounting system voluntarily was explained by their desire to maintain social legitimacy. However, it was affected negatively by the embeddedness of rule-based practices in the ‘old’ prevailing institutional logic.

In recent years, some studies have been carried out on the landscape analysis of urban thermal patterns. With the prevalence of thermal landscape analysis, a key problem has emerged: how to classify the thermal landscape into thermal patches. Current research uses different methods of thermal landscape classification, such as the standard deviation (SD) method and the R method. To find out the differences, a comparative study was carried out in Xiamen using a 20-year winter time series of Landsat images. After the retrieval of land surface temperature (LST), the thermal landscape was classified using the two methods separately. Then landscape metrics, 6 at the class level and 14 at the landscape level, were calculated and analyzed using Fragstats 3.3. We found that: (1) at the class level, all the metrics with the SD method were evened out and did not show an obvious trend along with the process of urbanization, while the R method did; (2) at the landscape level, 6 of the 14 metrics showed similar trends, 5 differed at local turning points of the curve, and 3 differed completely in the shape of their curves; (3) when examined with visual interpretation, the SD method tended to exaggerate urban heat island effects more than the R method.

The main objective of designing a rule-based expert system using a fuzzy logic approach is to predict and forecast the risk level of cardiac patients to avoid sudden death. In this proposed system, uncertainty is captured using a rule base, and classification using fuzzy c-means clustering is discussed to overcome the risk level, ...

...and performance of a rule-based Dutch-to-Afrikaans converter, with the aim of transforming Dutch text so that it looks more like Afrikaans text (even though it might not be a good Dutch translation). The rules we used are based on systematic orthographic...

In this paper, we apply cellular automata rules, which can be given by a truth table, to human memory. We design each memory as a tracking survey mode that keeps the most recent three opinions. Each cellular automata rule, as a personal mechanism, gives the final ruling in one time period based on the data stored in one's memory. The key focus of the paper is to study the evolution of people's attitudes to the same question. Based on a great many empirical observations from computer simulations, all the rules can be classified into 20 groups. We highlight the fact that the phenomenon shown by some rules belonging to the same group will be altered within several steps by other rules in different groups. It is truly amazing that, when compared with the history of presidential voting in America, the eras of important events in America's history coincide with the simulation results obtained by our model.
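
The truth-table mechanism is the same one used for elementary cellular automata: with a memory of the three most recent binary opinions, a rule is just an 8-entry lookup table (256 possible rules). A minimal sketch, with an XOR-style rule chosen arbitrarily for illustration:

```python
def step(memory, rule):
    """One synchronous update: each agent's next opinion is the truth-table
    entry indexed by its own three most recent opinions (a 3-bit number);
    the memory then slides forward to keep only the most recent three."""
    opinions = []
    for i, mem in enumerate(memory):
        idx = mem[0] * 4 + mem[1] * 2 + mem[2]   # encode 3-bit memory
        o = rule[idx]
        opinions.append(o)
        memory[i] = mem[1:] + [o]                # drop oldest opinion
    return opinions

# One of the 256 possible rules: next opinion = oldest XOR newest opinion.
rule = [(idx >> 2) ^ (idx & 1) for idx in range(8)]

memory = [[0, 1, 1], [1, 1, 1]]   # two agents' opinion histories
print(step(memory, rule))  # -> [1, 0]
```

Iterating `step` over many agents and rules is all the simulation machinery the paper's group classification requires.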

This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
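
The binarization and rule-creation steps can be sketched as below for the special case of two-item rules; the full Lattice/Apriori machinery in the paper generalizes this to longer antecedents, and the sensor names and thresholds are invented for illustration:

```python
from itertools import combinations

def binarize(records, threshold):
    """Step 1: translate numeric sensor readings into 1/0 item flags."""
    return [{k for k, v in r.items() if v >= threshold} for r in records]

def pair_rules(transactions, min_sup, min_conf):
    """Step 2: create IF-THEN rules from frequent item pairs; a full
    Lattice/Apriori implementation generalises this to longer rules."""
    n = len(transactions)
    items = sorted(set().union(*transactions))
    support = lambda iset: sum(1 for t in transactions if iset <= t) / n
    rules = []
    for a, b in combinations(items, 2):
        for X, Y in [({a}, {b}), ({b}, {a})]:
            s = support(X | Y)
            if s >= min_sup and support(X) > 0 and s / support(X) >= min_conf:
                rules.append((X, Y, s, s / support(X)))
    return rules

records = [{"vibration": 0.9, "overheat": 0.8},
           {"vibration": 0.9, "overheat": 0.7},
           {"vibration": 0.1, "overheat": 0.9}]
tx = binarize(records, 0.5)
for X, Y, s, c in pair_rules(tx, min_sup=0.5, min_conf=0.9):
    print(f"IF {sorted(X)} THEN {sorted(Y)} (sup={s:.2f}, conf={c:.2f})")
```

The resulting IF-THEN structures are what the paper's third step renders visually for users.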

In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant (i) to achieve good coverage and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done in simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles.
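
The social potential fields approach can be sketched as a pairwise force law with short-range repulsion and longer-range attraction; the constants and exponents below are illustrative, not the paper's tuning:

```python
def spf_force(r, c_rep=1.0, c_att=0.2, sig_rep=3.0, sig_att=1.0):
    """Social-potential-fields force between two nodes at distance r:
    short-range repulsion (spread out) plus longer-range attraction
    (stay connected). Constants and exponents are illustrative only."""
    return -c_rep / r ** sig_rep + c_att / r ** sig_att

# Nodes repel when close and attract when far; the zero crossing is the
# preferred inter-node spacing that balances coverage and connectivity.
print(spf_force(1.0) < 0, spf_force(5.0) > 0)  # -> True True
```

Each node sums these pairwise forces over its neighbors and moves along the resultant, which is why the method is reactive rather than a global deployment optimization.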

Coordination can be deemed the result of interindividual interaction among natural gregarious animal groups. However, revealing the underlying interaction rules and decision-making strategies governing highly coordinated motion in bird flocks is still a long-standing challenge. Based on analysis of high spatial-temporal resolution GPS data of three pigeon flocks, we extract the hidden interaction principle by using a newly emerging machine learning method, namely sparse Bayesian learning. It is observed that the interaction probability has an inflection point at a pairwise distance of 3-4 m, closer than the average maximum interindividual distance, after which it decays strictly with rising pairwise metric distance. Significantly, the density of the spatial neighbor distribution is strongly anisotropic, with an evident lack of interactions along an individual's velocity. Thus, it is found that in small-sized bird flocks, individuals reciprocally cooperate with a varying number of neighbors in metric space and tend to interact with closer time-varying neighbors, rather than interacting with a fixed number of topological ones. Finally, extensive numerical investigation is conducted to verify both the revealed interaction and decision-making principle during circular flights of pigeon flocks.

In this paper we present a framework for selecting the proper instructional strategy for a given teaching material based on its attributes. The new approach is based on a flexible design by means of generic rules. The framework was adapted in an Intelligent Tutoring System to teach Modern Standard Arabic to adult English-speaking learners, with no prior knowledge of Arabic required.

Food frequency questionnaires (FFQs) are well established in the nutrition field, but there remain important questions around how to develop online tools in a way that can facilitate wider uptake. Also, FFQ user acceptance and evaluation have not been investigated extensively. This paper presents a Web-based graphical food frequency assessment system that addresses challenges of reproducibility, scalability, mobile friendliness, security, and usability and also presents the utilization metrics and user feedback from a deployment study. The application design employs a single-page application Web architecture with back-end services (database, authentication, and authorization) provided by Google Firebase's free plan. Its design and responsiveness take advantage of the Bootstrap framework. The FFQ was deployed in Kuwait as part of the EatWellQ8 study during 2016. The EatWellQ8 FFQ contains 146 food items (including drinks). Participants were recruited in Kuwait without financial incentive. Completion time was based on browser timestamps and usability was measured using the System Usability Scale (SUS), scoring between 0 and 100. Products with a SUS higher than 70 are considered to be good. A total of 235 participants created accounts in the system, and 163 completed the FFQ. Of those 163 participants, 142 reported their gender (93 female, 49 male) and 144 reported their date of birth (mean age of 35 years, range from 18-65 years). The mean completion time for all FFQs (n=163), excluding periods of interruption, was 14.2 minutes (95% CI 13.3-15.1 minutes). Female participants (n=93) completed in 14.1 minutes (95% CI 12.9-15.3 minutes) and male participants (n=49) completed in 14.3 minutes (95% CI 12.6-15.9 minutes). Participants using laptops or desktops (n=69) completed the FFQ in an average of 13.9 minutes (95% CI 12.6-15.1 minutes) and participants using smartphones or tablets (n=91) completed in an average of 14.5 minutes (95% CI 13.2-15.8 minutes). The median SUS
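
The SUS scoring used above follows the standard Brooke formula (odd items contribute score−1, even items 5−score, and the sum is scaled by 2.5); a minimal sketch with made-up responses:

```python
def sus_score(responses):
    """System Usability Scale: 10 Likert items scored 1-5. Odd items
    contribute (score - 1), even items (5 - score); the sum is scaled
    by 2.5 onto a 0-100 range."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A fairly positive respondent (made-up answers, not study data):
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # -> 80.0
```

The alternating treatment of odd and even items reflects the questionnaire's alternating positive/negative item wording, so a score above 70 indicates good usability regardless of item polarity.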

In recent years, sensors have become popular, and the Home Energy Management System (HEMS) plays an important role in saving energy without decreasing QoL (Quality of Life). Currently, many rule-based HEMSs have been proposed, and almost all of them assume "IF-THEN" rules. The Rete algorithm is a typical pattern matching algorithm for IF-THEN rules. We have proposed a rule-based Home Energy Management System (HEMS) using the Rete algorithm. In the proposed system, rules for managing energy are processed by smart taps in the network, so the loads of processing rules and collecting data are distributed to the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing rules based on the Rete algorithm. In this paper, we evaluate the proposed system by simulation. In the simulation environment, rules are processed by the smart tap that relates to the action part of each rule. In addition, we implemented the proposed system as a HEMS using smart taps.
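
For contrast with the Rete approach, the naive way to run IF-THEN rules re-checks every rule against all facts on each cycle; Rete instead caches partial matches and propagates only fact changes. A toy matcher (rules and facts invented for illustration):

```python
# Naive IF-THEN matcher: every rule is re-checked against the full fact set
# on each cycle. The Rete algorithm avoids this re-evaluation by caching
# partial matches and propagating only fact *changes* through a network.
rules = [
    ({"room_empty", "light_on"}, "turn_light_off"),
    ({"temp_high", "ac_off"}, "turn_ac_on"),
]

def match(facts, rules):
    """Fire every rule whose conditions are a subset of the current facts."""
    return [action for conds, action in rules if conds <= facts]

facts = {"room_empty", "light_on", "temp_high"}
print(match(facts, rules))  # -> ['turn_light_off']
```

In the paper's setting, each smart tap would hold only the rules whose action parts it controls, so the matching (and the sensor data it needs) stays local to the tap.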

Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V{sub 10Gy} (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QM{sub clin} − QM{sub pred}, and a coefficient of determination, R{sup 2}. For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are stratified based on
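
The two geometric QMs named above have standard definitions that we assume here: the Paddick conformity index, and a gradient measure taken as the difference of equivalent-sphere radii of the half-prescription and prescription isodose volumes (the study's exact GM convention may differ):

```python
import math

def paddick_ci(tv, piv, tv_piv):
    """Paddick conformity index from target volume (TV), prescription
    isodose volume (PIV), and their intersection TV_PIV (all in cc)."""
    return tv_piv ** 2 / (tv * piv)

def gradient_measure(piv, piv_half):
    """Gradient measure (cm): difference of equivalent-sphere radii of the
    half-prescription and prescription isodose volumes."""
    radius = lambda v: (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0)
    return radius(piv_half) - radius(piv)

# Illustrative SRS plan volumes (cc), not taken from the study:
print(round(paddick_ci(2.0, 2.4, 1.9), 3))   # -> 0.752
print(round(gradient_measure(2.4, 9.0), 2))  # -> 0.46
```

A CI near 1 and a small GM indicate a conformal plan with a steep dose fall-off, which is what the knowledge-based models predict per plan category.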

Climate models are a key tool for investigating the impacts of projected future climate conditions on regional hydrologic systems. However, there is a considerable mismatch in spatial resolution between GCMs and regional applications, particularly in regions characterized by complex terrain such as the Korean peninsula. Therefore, a downscaling procedure is essential for assessing regional impacts of climate change. Numerous statistical downscaling methods have been used, mainly due to their computational efficiency and simplicity. In this study, four statistical downscaling methods [Bias-Correction/Spatial Disaggregation (BCSD), Bias-Correction/Constructed Analogue (BCCA), Multivariate Adaptive Constructed Analogs (MACA), and Bias-Correction/Climate Imprint (BCCI)] are applied to downscale the latest Climate Forecast System Reanalysis (CFSR) data to stations for precipitation, maximum temperature, and minimum temperature over South Korea. Using a split-sampling scheme, all methods are calibrated with observational station data for the 19 years from 1973 to 1991 and tested for the recent 19 years from 1992 to 2010. To assess the skill of the downscaling methods, we construct a comprehensive suite of performance metrics that measure the ability to reproduce temporal correlation, distributions, spatial correlation, and extreme events. In addition, we employ the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to identify robust statistical downscaling methods based on the performance metrics for each season. The results show that downscaling skill is considerably affected by the skill of CFSR, and all methods lead to large improvements on all performance metrics. According to the seasonal performance metrics evaluated, when TOPSIS is applied, MACA is identified as the most reliable and robust method for all variables and seasons. Note that this result is derived from CFSR output, which is regarded as near-perfect climate data in climate studies. Therefore, the
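The TOPSIS ranking step can be sketched as follows; the score matrix and weights below are illustrative, not values from the study:

```python
# Minimal TOPSIS sketch for ranking downscaling methods by several
# benefit-type performance metrics (higher is better). The score matrix
# and weights are illustrative, not from the study.
import math

def topsis(matrix, weights):
    """Rows are alternatives, columns are criteria; returns closeness scores."""
    n_crit = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # positive ideal solution
    anti = [min(col) for col in zip(*v)]    # negative ideal solution
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]

# Hypothetical seasonal metric scores for three methods:
scores = topsis([[0.9, 0.8], [0.6, 0.5], [0.7, 0.9]], weights=[0.5, 0.5])
print(scores.index(max(scores)))  # 0: the first method ranks best
```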

Summary Objectives We aimed to determine the characteristics of quantitative metrics for nursing narratives documented in electronic nursing records and their association with hospital admission traits and diagnoses in a large data set not limited to specific patient events or hypotheses. Methods We collected 135,406,873 electronic, structured coded nursing narratives from 231,494 hospital admissions of patients discharged between 2008 and 2012 at a tertiary teaching institution that routinely uses an electronic health records system. The standardized number of nursing narratives (i.e., the total number of nursing narratives divided by the length of the hospital stay) was suggested to integrate the frequency and quantity of nursing documentation. Results The standardized number of nursing narratives was higher for patients aged 70 years (median = 30.2 narratives/day, interquartile range [IQR] = 24.0–39.4 narratives/day), long (8 days) hospital stays (median = 34.6 narratives/day, IQR = 27.2–43.5 narratives/day), and hospital deaths (median = 59.1 narratives/day, IQR = 47.0–74.8 narratives/day). The standardized number of narratives was higher in “pregnancy, childbirth, and puerperium” (median = 46.5, IQR = 39.0–54.7) and “diseases of the circulatory system” admissions (median = 35.7, IQR = 29.0–43.4). Conclusions Diverse hospital admissions can be consistently described with nursing-document-derived metrics for similar hospital admissions and diagnoses. Some areas of hospital admissions may have consistently increasing volumes of nursing documentation across years. Usability of electronic nursing document metrics for evaluating healthcare requires multiple aspects of hospital admissions to be considered. PMID:27901174
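The standardized number of narratives is a simple ratio, and cohorts are then summarized with medians; a sketch with hypothetical admissions:

```python
# Sketch of the standardized number of nursing narratives: total
# narrative count divided by length of stay. Example values hypothetical.
import statistics

def standardized_narratives(total_narratives, length_of_stay_days):
    return total_narratives / length_of_stay_days

# Summarize a cohort with the median, as in the abstract above:
cohort = [standardized_narratives(n, d)
          for n, d in [(240, 8), (90, 3), (350, 10), (160, 4)]]
print(statistics.median(cohort))  # 32.5 narratives/day
```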

Objective. For brain-machine interfaces (BMI) to be used in activities of daily living by paralyzed individuals, the BMI should be as autonomous as possible. One of the challenges is how feedback is extracted and utilized in the BMI. Our long-term goal is to create autonomous BMIs that can utilize evaluative feedback from the brain to update the decoding algorithm and use it intelligently to adapt the decoder. In this study, we show how to extract the necessary evaluative feedback from a biologically realistic (synthetic) source, use both the quantity and the quality of the feedback, and incorporate that feedback information into a reinforcement learning (RL) controller architecture to maximize its performance. Approach. Motivated by the perception-action-reward cycle (PARC) in the brain, which links reward to cognitive decision making and goal-directed behavior, we used a reward-based RL architecture, Actor-Critic RL, as the model. Instead of using an error signal towards building an autonomous BMI, we envision using a reward signal from the nucleus accumbens (NAcc), which plays a key role in linking reward to motor behaviors. To deal with the complexity and non-stationarity of biological reward signals, we used a confidence metric to indicate the degree of feedback accuracy. This confidence was added to the Actor's weight update equation in the RL controller architecture. If the confidence was high (>0.2), the BMI decoder used this feedback to update its parameters. However, when the confidence was low, the BMI decoder ignored the feedback and did not update its parameters. The range between high and low confidence was termed the 'ambiguous' region. When the feedback was within this region, the BMI decoder updated its weights at a lower rate than when fully confident, as determined by the confidence. We used two biologically realistic models to generate synthetic data for MI (Izhikevich
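A minimal sketch of the confidence-gated actor update described above: low-confidence feedback is ignored, fully trusted feedback updates at the full rate, and the "ambiguous" region scales the step by the confidence itself. Only the 0.2 cut appears in the abstract; the 0.8 upper threshold and the exact update form are illustrative assumptions:

```python
# Confidence-gated actor weight update (sketch). Thresholds and the
# update form are illustrative assumptions, not the paper's equations.

LOW, HIGH = 0.2, 0.8  # 0.2 from the abstract; 0.8 is an assumed upper cut

def actor_update(weight, lr, td_error, gradient, confidence):
    if confidence < LOW:
        return weight                          # ignore unreliable feedback
    scale = 1.0 if confidence >= HIGH else confidence  # damp ambiguous feedback
    return weight + lr * scale * td_error * gradient

print(actor_update(1.0, 0.1, 0.5, 1.0, 0.1))  # 1.0  (feedback ignored)
print(actor_update(1.0, 0.1, 0.5, 1.0, 0.9))  # 1.05 (full update)
```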

Traditional Learning Management Systems are installed on a single server, where learning materials and user data are kept. To increase performance, the Learning Management System can be installed on multiple servers; learning materials and user data can be distributed across these servers, yielding a Distributed Learning Management System. This paper proposes a prototype of a recommendation system based on association rules for a Distributed Learning Management System. Information from LMS databases is analyzed using distributed data mining algorithms in order to extract association rules. The extracted rules are then used as inference rules to provide personalized recommendations. The quality of the provided recommendations is improved because the rules used to make the inferences are more accurate, since they aggregate knowledge from all e-Learning systems included in the Distributed Learning Management System.
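As a sketch, mined association rules can serve directly as inference rules for recommendations; the rule set and learner history below are hypothetical:

```python
# Mined association rules used as inference rules for recommendations.
# The rule set and learner history are hypothetical.

rules = [
    # (antecedent items, consequent item, rule confidence)
    ({"intro_python"}, "data_structures", 0.8),
    ({"data_structures", "algorithms"}, "graph_theory", 0.7),
]

def recommend(history, rules, min_conf=0.6):
    """Suggest consequents of rules whose antecedents the learner has seen."""
    seen = set(history)
    recs = [(consequent, conf)
            for antecedent, consequent, conf in rules
            if antecedent <= seen and consequent not in seen and conf >= min_conf]
    return sorted(recs, key=lambda rc: -rc[1])  # most confident first

print(recommend({"intro_python"}, rules))  # [('data_structures', 0.8)]
```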

A rule-based topology software system providing a highly flexible and fast procedure to enforce integrity in spatial relationships among datasets is presented. This improved topology rule system is built over the spatial extension Jaspa. Both projects are open source, freely available software developed by the corresponding author of this paper. Currently, there is no spatial DBMS that implements a rule-based topology engine (in the sense that the topology rules are designed and performed in the spatial backend). If the topology rules are applied in the frontend (as in many desktop GIS programs), ArcGIS is the most advanced solution. The system presented in this paper has several major advantages over the ArcGIS approach: it can be extended with new topology rules, it has a much wider set of rules, and it can mix feature attributes with topology rules as filters. In addition, the topology rule system can work with various DBMSs, including PostgreSQL, H2 or Oracle, and the logic is performed in the spatial backend. The proposed topology system allows users to check the complex spatial relationships among features (from one or several spatial layers) that are required by complex cartographic datasets, such as the data specifications proposed by INSPIRE in Europe and the Land Administration Domain Model (LADM) for cadastral data.

In recent automated and integrated manufacturing, so-called intelligence skill is becoming more and more important, and its efficient transfer to next-generation engineers is one of the urgent issues. In this paper, we propose a new approach that avoids costly OJT (on-the-job training): the combined use of a domain ontology, a rule ontology and a rule-based system. Intelligence skill can be decomposed into pieces of simple engineering rules. A rule ontology consists of these engineering rules as primitives and the semantic relations among them. A domain ontology consists of the technical terms in the engineering rules and the semantic relations among them. A rule ontology helps novices get the total picture of the intelligence skill, and a domain ontology helps them understand the exact meanings of the engineering rules. A rule-based system helps domain experts externalize their tacit intelligence skill to ontologies and also helps novices internalize it. As a case study, we applied our proposal to an actual job at a remote control and maintenance office of hydroelectric power stations in Tokyo Electric Power Co., Inc. We also conducted an evaluation experiment for this case study, and the result supports our proposal.

In April 1978, a meeting of senior metrication officers, convened by the Commonwealth Science Council of the Commonwealth Secretariat, was held in London. The participants were drawn from Australia, Bangladesh, Britain, Canada, Ghana, Guyana, India, Jamaica, Papua New Guinea, Solomon Islands and Trinidad and Tobago. Among other things, the meeting resolved to develop a set of guidelines to assist countries changing to SI and to compile such guidelines in the form of a working manual.

We propose new metrics to assist global sensitivity analysis (GSA) of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on the main features of the probability density function (pdf) of a target model output, y. These include the expected value of y, the spread around the mean, and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer, and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics when the original system model is replaced by a gPCE. Our results indicate that the construction of a surrogate model with an increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of model complexity reduction, model calibration, design of experiments, uncertainty quantification and risk assessment.
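As a sketch of a moment-based sensitivity metric in the spirit of this approach (a mean-shift index on a toy model; not the paper's exact estimators):

```python
# Toy moment-based sensitivity sketch: how much conditioning on one
# parameter shifts the mean of the output pdf. The model, grid and
# index definition are illustrative, not the paper's estimators.
import statistics

def toy_model(x1, x2):
    return x1 ** 2 + 0.1 * x2   # x1 dominates the output

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
mu = statistics.mean(toy_model(a, b) for a in grid for b in grid)

def mean_shift(fix_x1):
    """Average |E[y | parameter fixed] - E[y]| over the grid of fixed values."""
    shifts = []
    for v in grid:
        cond = ([toy_model(v, b) for b in grid] if fix_x1
                else [toy_model(a, v) for a in grid])
        shifts.append(abs(statistics.mean(cond) - mu))
    return statistics.mean(shifts)

print(mean_shift(True) > mean_shift(False))  # True: x1 drives the mean
```

Analogous indices can be built for the variance and higher moments by replacing the conditional mean with conditional variance, skewness, or kurtosis.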

Distance metrics play significant roles in spatial modeling tasks, such as flood inundation (Tucker and Hancock 2010), stream extraction (Stanislawski et al. 2015), power line routing (Kiessling et al. 2003) and analysis of surface pollutants such as nitrogen (Harms et al. 2009). Avalanche risk is based on slope, aspect, and curvature, all directly computed from distance metrics (Gutiérrez 2012). Distance metrics anchor variogram analysis, kernel estimation, and spatial interpolation (Cressie 1993). Several approaches are employed to measure distance. Planar metrics measure straight-line distance between two points ("as the crow flies") and are simple and intuitive, but suffer from uncertainties. Planar metrics assume that Digital Elevation Model (DEM) pixels are rigid and flat, like tiny facets of ceramic tile approximating a continuous terrain surface. In truth, terrain can bend, twist and undulate within each pixel. Working with Light Detection and Ranging (lidar) data or High Resolution Topography to achieve precise measurements presents challenges, as filtering can eliminate or distort significant features (Passalacqua et al. 2015). The current availability of lidar data is far from comprehensive in developed nations, and non-existent in many rural and undeveloped regions. Notwithstanding computational advances, distance estimation on DEMs has never been systematically assessed, due to assumptions that improvements are so small that surface adjustment is unwarranted. For individual pixels, inaccuracies may be small, but additive effects can propagate dramatically, especially in regional models (e.g., disaster evacuation) or global models (e.g., sea level rise) where pixels span dozens to hundreds of kilometers (Usery et al. 2003). Such models are increasingly common, lending compelling reasons to understand shortcomings in the use of planar distance metrics. Researchers have studied curvature-based terrain modeling. Jenny et al. (2011) use curvature to generate
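The additive effect of ignoring within-pixel relief can be illustrated with a toy comparison between planar distance and a surface distance accumulated along an elevation profile (elevations and cell size hypothetical):

```python
# Toy comparison of planar vs. surface distance on a DEM profile.
# Elevations and cell size are hypothetical.
import math

def planar_distance(dx_m, dy_m):
    """Straight-line ("as the crow flies") distance."""
    return math.hypot(dx_m, dy_m)

def surface_distance(elevations, cell_size_m):
    """Accumulate 3-D step lengths along a 1-D elevation profile."""
    return sum(math.hypot(cell_size_m, elevations[i + 1] - elevations[i])
               for i in range(len(elevations) - 1))

profile = [100.0, 105.0, 103.0, 110.0]         # metres; 30 m cells
print(planar_distance(90.0, 0.0))               # 90.0
print(surface_distance(profile, 30.0) > 90.0)   # True: relief adds length
```

Per cell the difference is small, but summed over pixels spanning kilometers it compounds, which is the concern raised above.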

This article introduces XChange, a rule-based reactive language for the Web. Stressing application scenarios, it first argues that high-level reactive languages are needed for both Web and Semantic Web applications. Then, it discusses technologies and paradigms relevant to high-level reactive languages for the (Semantic) Web. Finally, it presents the Event-Condition-Action rules of XChange.

In this paper, a self-learning RuleBase for command following in dynamical systems is presented. The learning is accomplished through reinforcement learning using an associative memory called SAM. The main advantage of SAM is that it is a function approximator with explicit storage of training samples. A learning algorithm patterned after dynamic programming is proposed. Two artificially created, unstable dynamical systems are used for testing, and the RuleBase is used to generate feedback control to improve the command-following ability of the otherwise uncontrolled systems. The numerical results are very encouraging. The controlled systems exhibit more stable behavior and a better capability to follow reference commands. The rules resulting from the reinforcement learning are explicitly stored, and they can be modified or augmented by human experts. Due to the overlapping storage scheme of SAM, the stored rules are similar to fuzzy rules.

The steel rule plays an important role in quantity transmission. However, the traditional verification method of the steel rule, based on manual operation and reading, brings about low precision and low efficiency. A machine vision based verification system for the steel rule is designed with reference to JJG 1-1999, Verification Regulation of Steel Rule [1]. What differentiates this system is that it uses a new calibration method of pixel equivalent and decontaminates the surface of the steel rule. Experiments show that these two methods fully meet the requirements of the verification system. Measuring results strongly prove that these methods not only meet the precision of the verification regulation, but also improve the reliability and efficiency of the verification system.

The paper presents how an assessment system is implemented to evaluate the quality of the IT&C audit process. Theoretical and practical issues are presented, together with a brief presentation of the metrics and indicators developed in previous research. The implementation process of an indicator system is highlighted and linked to specifications stated in international standards regarding the measurement process. Also, the effects of an assessment system on the quality of the IT&C audit process are emphasized, to demonstrate the importance of such an assessment system. Improving audit process quality is an iterative process consisting of repetitive improvements based on objective measures established from analytical models of the indicators.

Mobile robots are used across many domains, from personal care to agriculture. Working in dynamic open-ended environments puts high constraints on the robot perception system, which is critical for the safety of the system as a whole. To achieve the required safety levels, the perception system needs to be certified, but no specific standards exist for computer vision systems, and the concept of safe vision systems remains largely unexplored. In this paper we present a novel domain-specific language that allows the programmer to express image quality detection rules for enforcing safety constraints...

This paper uses gap metric analysis to derive robustness and performance margins for feedback linearising controllers. Distinct from previous robustness analysis, it incorporates the case of output unstructured uncertainties, and is shown to yield general stability conditions which can be applied to both stable and unstable plants. It then expands on existing feedback linearising control schemes by introducing a more general robust feedback linearising control design which classifies the system nonlinearity into stable and unstable components and cancels only the unstable plant nonlinearities. This is done in order to preserve the stabilising action of the inherently stabilising nonlinearities. Robustness and performance margins are derived for this control scheme, and are expressed in terms of bounds on the plant nonlinearities and the accuracy of the cancellation of the unstable plant nonlinearity by the controller. Case studies then confirm reduced conservatism compared with standard methods.

Rules are a common symbolic model of knowledge. Rule-based systems share roots in cognitive science and artificial intelligence. In the former, they are mostly used in cognitive architectures; in the latter, they are developed in several domains, including knowledge engineering and machine learning. This paper aims to give an overview of these issues, with a focus on the current research perspective of artificial intelligence. Moreover, in this setting we discuss our results in the design of rule-based systems and their applications in context-aware and business intelligence systems.

In order to extract decision rules for fault diagnosis from incomplete historical test records, for knowledge-based damage assessment of helicopter power train structures, a method that can directly extract the optimal generalized decision rules from incomplete information based on granular computing (GrC) is proposed. Based on semantic analysis of unknown attribute values, the granule is extended to handle incomplete information. The maximum characteristic granule (MCG) is defined based on the characteristic relation, and the MCG is used to construct the resolution function matrix. The optimal generalized decision rule is introduced, and with the basic equivalent forms of propositional logic, the rules are extracted and reduced from the incomplete information table. Combined with a fault diagnosis example of a power train, the application approach of the method is presented, and the validity of this method for knowledge acquisition is demonstrated.

the considered algorithms extract from a given decision table efficiently some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory

This paper reflects the final phase of the EPSRC project, and the PhD work of Marjanovic, on rule-based control in naturally ventilated buildings. Marjanovic is the second author. Eftekhari was her PhD supervisor.

A series of constant load and temperature creep rupture tests and varying load and/or temperature creep rupture tests was carried out on a nickel-base heat-resistant alloy, Hastelloy XR, which was developed for applications in the High-Temperature Engineering Test Reactor, at temperatures ranging from 850 to 1000 °C, in order to examine the applicability of the conventional creep damage rules, i.e., the life fraction rule, the strain fraction rule and their mixed rule. The life fraction rule showed the best applicability of the three criteria. The good applicability of the rule was considered to result from the fact that the creep strength of Hastelloy XR was not strongly affected by changes in chemical composition and/or microstructure during exposure to the high-temperature simulated HTGR helium environment. In conclusion, the life fraction rule is applicable in the engineering design of high-temperature components made of Hastelloy XR.
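The life fraction rule itself is simple to state in code: damage from each load/temperature block is the time at that condition divided by the rupture life at that condition, and failure is predicted as the sum approaches 1. A minimal sketch with illustrative numbers:

```python
# Life fraction rule (sketch): cumulative creep damage as the sum of
# time-at-condition over rupture life at that condition. The times and
# rupture lives below are illustrative, not test data from the study.

def life_fraction(blocks):
    """blocks: iterable of (time_hours, rupture_life_hours) pairs."""
    return sum(t / t_rupture for t, t_rupture in blocks)

blocks = [(500.0, 2000.0), (100.0, 400.0)]  # two operating conditions
print(life_fraction(blocks))  # 0.5 -> half the creep life consumed
```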

Aging infrastructure, population growth, and urbanization are demanding new approaches to management of all components of the urban water cycle, including stormwater. Traditionally, urban stormwater infrastructure was designed to capture and convey rainfall-induced runoff out of a city through a network of curbs, gutters, drains, and pipes, also known as grey infrastructure. These systems were planned with a single purpose and designed under the assumption of hydrologic stationarity, a notion that no longer holds true in the face of a changing climate. One solution gaining momentum around the world is green infrastructure (GI). Beyond stormwater quality improvement and quantity reduction (or technical benefits), GI solutions offer many environmental, economic, and social benefits. Yet many practical barriers have prevented the widespread adoption of these systems worldwide. At the center of these challenges is the inability of stakeholders to know how to monitor, measure, and assess the multi-sector performance of GI systems. Traditional grey infrastructure projects require different monitoring strategies than natural systems; there are no overarching policies on how best to design GI monitoring and evaluation systems and measure performance. Previous studies have attempted to quantify the performance of GI, mostly using one evaluation method on a specific case study. We use a case study approach to address these knowledge gaps and develop a conceptual model of how to evaluate the performance of GI through the lens of financing. First, we examined many different case studies of successfully implemented GI around the world. Then we narrowed in on 10 exemplary case studies. For each case study, we determined which performance method the project developer used, such as LCA, TBL, Low Impact Design Assessment (LIDA) and others. Then, we determined which performance metrics were used to determine success and what data were needed to calculate those metrics. Finally, we

Traditional vegetation maps capture the horizontal distribution of various vegetation properties, for example, type, species and age/senescence, across a landscape. Ecologists have long known, however, that many important forest properties, for example, interior microclimate, carbon capacity, biomass and habitat suitability, are also dependent on the vertical arrangement of branches and leaves within tree canopies. The objective of this study was to use a digital elevation model (DEM) along with tree canopy-structure metrics derived from a lidar survey conducted using the Experimental Advanced Airborne Research Lidar (EAARL) to capture a three-dimensional view of vegetation communities in the Barataria Preserve unit of Jean Lafitte National Historical Park and Preserve, Louisiana. The EAARL instrument is a raster-scanning, full waveform-resolving, small-footprint, green-wavelength (532-nanometer) lidar system designed to map coastal bathymetry, topography and vegetation structure simultaneously. An unsupervised clustering procedure was then applied to the 3-dimensional-based metrics and DEM to produce a vegetation map based on the vertical structure of the park's vegetation, which includes a flotant marsh, scrub-shrub wetland, bottomland hardwood forest, and baldcypress-tupelo swamp forest. This study was completed in collaboration with the National Park Service Inventory and Monitoring Program's Gulf Coast Network. The methods presented herein are intended to be used as part of a cost-effective monitoring tool to capture change in park resources.

A fuzzy rule-based expert system is developed for evaluating intellectual capital. A fuzzy linguistic approach assists managers in understanding and evaluating the level of each intellectual capital item. The proposed fuzzy rule-based expert system applies fuzzy linguistic variables to express the level of qualitative evaluation and the criteria of experts. The feasibility of the proposed model is demonstrated by the results of an intellectual capital performance evaluation for a sample company.

In this paper, we propose a learning rule based on the back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward propagation, backward propagation, and weight updates in hardware, is helpful for implementing power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron net...
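A rough sketch of such a hardware-constrained update, assuming weights snapped to equally spaced conductance levels with clamping; the step size and bounds are illustrative, not parameters from the paper:

```python
# BP weight update constrained to discrete, limited conductance levels,
# as a hardware synapse would be. Step size and weight bounds are
# illustrative assumptions.

def quantized_update(weight, gradient, lr, step=1/64, w_min=-1.0, w_max=1.0):
    target = weight - lr * gradient       # ideal continuous update
    level = round(target / step)          # snap to nearest device level
    return min(w_max, max(w_min, level * step))  # limited conductance range

w = quantized_update(0.5, gradient=0.4, lr=0.1)
print(w)  # 0.453125: the continuous target 0.46 snapped to 29/64
```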

In traditional force fields (FFs), van der Waals (VDW) interactions have usually been described by Lennard-Jones potentials, and conventional combination rules for the parameters of VDW cross-term interactions were developed for Lennard-Jones based FFs. Here, we report that Morse potentials are a better function for describing VDW interactions calculated by highly precise quantum mechanics methods. A new set of combination rules was developed for Morse-based FFs, in which VDW interactions are described by Morse potentials. The new set of combination rules has been verified by comparing the second virial coefficients of 11 noble gas mixtures. For all of the mixed binaries considered in this work, the combination rules work very well and are superior to all three other existing sets of combination rules reported in the literature. We further used the Morse-based FF with these combination rules to simulate the adsorption isotherms of CH4 at 298 K in four covalent-organic frameworks (COFs). The overall agreement is excellent, which supports further application of this new set of combination rules in more realistic simulation systems.
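A sketch of the Morse form and a hypothetical Lorentz/Berthelot-style combination for the cross-term parameters (the paper's actual combination rules may differ):

```python
# Morse potential for a VDW pair, plus a hypothetical cross-term
# combination. The combination shown is Lorentz/Berthelot-style and is
# an illustrative assumption, not the rules derived in the paper.
import math

def morse(r, d_e, alpha, r_e):
    """Well depth d_e, stiffness alpha, equilibrium separation r_e."""
    return d_e * ((1.0 - math.exp(-alpha * (r - r_e))) ** 2 - 1.0)

def combine(d1, d2, a1, a2, r1, r2):
    """Cross-term parameters from the two pure-species parameter sets."""
    return math.sqrt(d1 * d2), 0.5 * (a1 + a2), 0.5 * (r1 + r2)

d12, a12, r12 = combine(0.2, 0.45, 1.5, 1.7, 3.4, 3.6)
print(morse(r12, d12, a12, r12))  # the well minimum: -d12
```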

Climate change is affecting hydrological variables and consequently is impacting water resources management. Historical strategies are no longer applicable under climate change. Therefore, adaptive management, especially adaptive operating rules for reservoirs, has been developed to mitigate the possible adverse effects of climate change. However, to date, adaptive operating rules are generally based on future projections involving uncertainties under climate change, while ignoring historical information. To address this, we propose an approach for deriving adaptive operating rules considering both historical information and future projections, namely historical and future operating rules (HAFOR). A robustness index was developed by comparing benefits from HAFOR with benefits from conventional operating rules (COR). For both historical and future streamflow series, maximization of both average benefits and the robustness index was employed as the objective, and four trade-offs were implemented to solve the multi-objective problem. Based on the integrated objective, the simulation-based optimization method was used to optimize the parameters of HAFOR. Using the Dongwushi Reservoir in China as a case study, HAFOR was demonstrated to be an effective and robust method for developing adaptive operating rules under the uncertain changing environment. Compared with historical or projected future operating rules (HOR or FPOR), HAFOR can reduce the uncertainty and increase the robustness of future projections, especially regarding reservoir releases and volumes. HAFOR therefore facilitates adaptive management, given that climate change is difficult to predict accurately.

Technical trading rules have a long history of use by practitioners in financial markets, yet their profitability and efficiency remain controversial. In this paper, we test the performance of more than seven thousand traditional technical trading rules on the Shanghai Securities Composite Index (SSCI) from May 21, 1992 through June 30, 2013 and the China Securities Index 300 (CSI 300) from April 8, 2005 through June 30, 2013, to check whether an effective trading strategy can be found using performance measurements based on return and the Sharpe ratio. To correct for the influence of the data-snooping effect, we adopt the Superior Predictive Ability test to evaluate whether there exists a trading rule that can significantly outperform the benchmark. The result shows that for the SSCI, technical trading rules offer significant profitability, while for the CSI 300, this ability is lost. We further partition the SSCI into two sub-series and find that the efficiency of technical trading in the sub-series that has exactly the same spanning period as the CSI 300 is severely weakened. By testing the trading rules on both indexes with a five-year moving window, we find that during the financial bubble from 2005 to 2007, the effectiveness of technical trading rules is greatly improved. This is consistent with the predictive ability of technical trading rules, which appears when the market is less efficient.
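A single moving-average crossover rule and its Sharpe ratio can be sketched as follows; the price series and window are toy values, and a real test must correct for data snooping as described above:

```python
# One moving-average crossover rule evaluated by Sharpe ratio on a toy
# price series. Prices and window are illustrative; evaluating thousands
# of such rules requires a data-snooping correction, as in the paper.
import statistics

def ma_rule_returns(prices, window):
    """Hold long when price is above its moving average, else stay flat."""
    rets = []
    for t in range(window, len(prices) - 1):
        ma = sum(prices[t - window:t]) / window
        position = 1 if prices[t] > ma else 0
        rets.append(position * (prices[t + 1] / prices[t] - 1.0))
    return rets

def sharpe(returns):
    sd = statistics.stdev(returns)
    return statistics.mean(returns) / sd if sd > 0 else 0.0

prices = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110]
r = ma_rule_returns(prices, window=3)
print(len(r), sharpe(r) > 0)  # 6 rule returns; positive risk-adjusted return
```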

Full Text Available In cloud computing, customers access and consume an enormous amount of services over the web offered by a cloud service provider (CSP). Although one of the services the cloud offers is security-as-a-service, many clients are still reluctant to use services from cloud vendors. A number of solutions, security components, and measurements have opened new scope for the cloud security issue, yet only about 79.2% security effectiveness has been obtained by scientists, researchers, and the cloud-based academic community. To address the cloud security problem, the proposed model, "Quality-based enhancement of user data protection via fuzzy rule-based systems in the cloud environment," helps cloud clients access cloud resources through remote monitoring and management (RMMM); the services currently requested and consumed by cloud users can be better analyzed with a managed service provider (MSP) than with a traditional CSP. Typically, users try to secure their own private data by applying key management and cryptography-based computations, which again leads to security problems. The proposed approach instead provides a good-quality security outcome by making use of fuzzy rule-based systems (constraint and conclusion segments) in the cloud environment. Using this technique, users may obtain an efficient security outcome through simulation with the Apache CloudStack simulator.

Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In CAD schemes, classifiers play a key role in achieving a high lesion detection rate and a low false-positive rate. Although many popular classifiers such as linear discriminant analysis and artificial neural networks have been employed in CAD schemes for the reduction of false positives, a rule-based classifier has probably been the simplest and most frequently used one since the early days of development of various CAD schemes. However, existing rule-based classifiers have major disadvantages that significantly reduce their practicality and credibility: manual design, poor reproducibility, poor evaluation methods such as resubstitution, and a large overtraining effect. An automated rule-based classifier with a minimized overtraining effect can overcome or significantly reduce the extent of these disadvantages. In this study, we developed an 'optimal' method for the selection of cutoff thresholds and a fully automated rule-based classifier. Experimental results with Monte Carlo simulation and a real lung nodule CT data set demonstrated that the automated threshold selection method can completely eliminate the overtraining effect in the selection of cutoff thresholds, and thus can minimize the overall overtraining effect in the constructed rule-based classifier. We believe that this threshold selection method is very useful in the construction of automated rule-based classifiers with a minimized overtraining effect.
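The abstract does not spell out the authors' 'optimal' threshold selection method, but the basic ingredient of a rule-based classifier, choosing a cutoff on a single feature, can be sketched as below. The single-feature training-accuracy criterion here is a hypothetical stand-in for whatever criterion the authors actually optimize.

```python
def best_threshold(values, labels):
    """Pick the cutoff t on one feature that maximizes training accuracy
    for the rule 'positive if value >= t'. Returns (threshold, accuracy)."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(values)):
        # fraction of cases where the rule's verdict matches the label
        acc = sum((v >= t) == bool(y) for v, y in zip(values, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

Picking the threshold on the training data alone is exactly where the overtraining effect the abstract warns about creeps in, which is why the authors validate with Monte Carlo simulation and an independent CT data set.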

Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice which can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems is time and labor intensive, since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both the time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms. Further, these strategies are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.

An expert-system type of knowledge rulebase has been developed for the input parameters used by the particle beam transport program TRACE 3-D. The goal has been to provide the program's user with adequate on-screen information to allow an initial problem setup with minimal "off-line" calculations. The focus of this work has been on developing rules for the parameters which define the beam line transport elements. Ten global parameters, the particle mass and charge, beam energy, etc., are used to provide "expert" estimates of lower and upper limits for each of the transport element parameters. For example, the limits for the field strength of the quadrupole element are based on a water-cooled, iron-core electromagnet with dimensions derived from practical engineering constraints, and the upper limit for the effective length is scaled with the particle momentum so that initially parallel trajectories do not cross the axis inside the magnet. Limits for the quadrupole doublet and triplet parameters incorporate these rules and additional rules based on stable FODO lattices and bidirectional focusing requirements. The structure of the rulebase is outlined, and examples for the quadrupole singlet, doublet and triplet are described. The rulebase has been implemented within the Shell for Particle Accelerator Related Codes (SPARC) graphical user interface (GUI).

This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by a sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; and (v) as expected, the rule-based model holds more inventory than the optimization model.
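Sample average approximation replaces the expectation over random paint defects with an average over sampled scenarios, then optimizes the first-stage decision against that average. A minimal sketch follows, with a hypothetical cost structure (linear holding cost per spare, linear shortage cost per uncovered defect) standing in for the paper's full two-stage formulation.

```python
def saa_spare_vehicles(scenarios, holding_cost, shortage_cost, max_spares):
    """SAA sketch: choose the spare-vehicle count x that minimizes the
    average of first-stage holding cost plus second-stage shortage cost
    over sampled defect scenarios (defective vehicles per shift)."""
    def avg_cost(x):
        total = 0.0
        for d in scenarios:
            short = max(d - x, 0)  # defects the spares cannot cover
            total += holding_cost * x + shortage_cost * short
        return total / len(scenarios)
    # enumerate candidate first-stage decisions; pick the cheapest on average
    return min(range(max_spares + 1), key=avg_cost)
```

With a high shortage penalty the optimum covers the worst sampled scenario; with no penalty it holds nothing, mirroring insight (v) that rule-based recovery tends toward holding more inventory.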

Full Text Available Rice plants can be attacked by various kinds of diseases, which can often be determined from their symptoms. However, identifying the exact type of disease requires an agricultural expert's opinion, while the number of agricultural experts is limited and there are too many problems to be solved at the same time. This makes a system with the capability of an expert necessary. Such a system must contain the knowledge of rice plant diseases and symptoms that an agricultural expert has. This research designs a web-based expert system using rule-based reasoning. The rules are modified from the forward chaining and backward chaining inference methods in order to help farmers diagnose rice plant diseases. The web-based rice plant disease diagnosis expert system has the advantage of being easy to access and use. With its web-based features, it is expected that farmers can access the expert system anywhere to diagnose rice diseases.

Full Text Available This study proposes a hedging rule model which is composed of a two-period reservoir operation model considering damage depth and a hedging rule parameter optimization model. The former solves the hedging rules based on a given period's water supply weighting factor and carryover storage target, while the latter optimizes the weighting factor and carryover storage target based on the hedging rules. The coupled model gives the optimal period's water supply weighting factor and carryover storage target to guide release. The conclusions of this study are as follows: (1) the water supply weighting factor and carryover storage target have a direct impact on the three elements of the hedging rule; (2) after optimization by the simulation and optimization models, the parameters can guide reservoirs to supply water reasonably; and (3) to verify the utility of the hedging rule, the Heiquan reservoir is used as a case study, and a particle swarm optimization algorithm with a simulation model is adopted to optimize the parameters. The results show that the proposed hedging rule can improve the operating performance of the water supply reservoir.
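The general shape of a two-period hedging rule can be sketched as follows. The abstract does not give the model's functional form, so this is an assumed linear rationing scheme: when available water cannot cover both the current demand and the carryover storage target, the shortage is split between the two periods according to the current period's water supply weighting factor.

```python
def hedging_release(storage, inflow, demand, carryover_target, weight):
    """Two-period hedging sketch. `weight` is the current period's water
    supply weighting factor (higher -> favor the current release over
    the carryover storage target)."""
    available = storage + inflow
    if available >= demand + carryover_target:
        return float(demand)               # no hedging needed
    # ration the shortage between the two periods by the weighting factor
    shortage = demand + carryover_target - available
    release = demand - (1.0 - weight) * shortage
    return max(0.0, min(release, float(available)))
```

A weight of 1.0 meets current demand at the expense of carryover; a weight of 0.0 protects the carryover target fully, which is the trade-off the parameter optimization model tunes.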

Full Text Available Introduction: This study examines the use of the number of night-time sleep disturbances as a health-based metric to assess the cost effectiveness of rail noise mitigation strategies for situations in which high-intensity noises dominate, such as freight train pass-bys and wheel squeal. Materials and Methods: Twenty residential properties adjacent to the existing and proposed rail tracks in a noise catchment area of the Epping to Thornleigh Third Track project were used as a case study. Awakening probabilities were calculated for individuals awakening 1, 3 and 5 times a night when subjected to 10 independent freight train pass-by noise events, using internal maximum sound pressure levels (LAFmax). Results: Awakenings were predicted using a random intercept multivariate logistic regression model. With source mitigation in place, the majority of the residents were still predicted to be awoken at least once per night (median 88.0%), although substantial reductions in the median probabilities of awakening three and five times per night, from 50.9 to 29.4% and 9.2 to 2.7%, respectively, were predicted. This resulted in a cost-effectiveness estimate of 7.6–8.8 fewer people being awoken at least three times per night per A$1 million spent on noise barriers. Conclusion: The study demonstrates that an easily understood metric can readily be used to assist in making decisions related to noise mitigation for large-scale transport projects.
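The study itself predicts awakenings with a random-intercept logistic regression, but under the simplifying assumption of independent events with a common per-event awakening probability, the "awoken at least k times in a night of 10 pass-bys" metric reduces to a binomial tail, sketched here:

```python
from math import comb

def p_awakened_at_least(k, p_event, n_events=10):
    """Probability of being awoken at least k times given n independent
    noise events, each awakening the sleeper with probability p_event.
    Simplification: the real model allows per-person random intercepts."""
    return sum(comb(n_events, m) * p_event**m * (1 - p_event)**(n_events - m)
               for m in range(k, n_events + 1))
```

This shows why "at least once" stays high even after mitigation: lowering the per-event probability shrinks the 3- and 5-awakening tails much faster than the 1-awakening tail.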

Full Text Available Bacterial cells are protected from osmotic and environmental stresses by an exoskeleton-like polymeric structure called peptidoglycan (PG) or the murein sacculus. This structure is fundamental for bacterial viability, and thus the mechanisms underlying cell wall assembly and its modulation serve as targets for many of our most successful antibiotics. It is therefore now more important than ever to understand the genetics and structural chemistry of bacterial cell walls in order to find new and effective methods of blocking them for the treatment of disease. In recent decades, liquid chromatography and mass spectrometry have been demonstrated to provide the resolution and sensitivity required to characterize the fine chemical structure of PG. However, the large data sets that these instruments can produce today are difficult to handle without a proper data analysis workflow. Here, we present PG-metrics, a chemometric-based pipeline that allows fast and easy classification of bacteria according to their muropeptide chromatographic profiles and identification of the underlying PG chemical variability between, e.g., bacterial species, growth conditions and mutant libraries. The pipeline is successfully validated here using PG samples from different bacterial species and mutants in cell wall proteins. The results clearly demonstrate that the PG-metrics pipeline is a valuable bioanalytical tool that can lead us to cell wall classification and biomarker discovery.

Full Text Available With the urgent demand for automatic management of large numbers of high-resolution remote sensing images, content-based high-resolution remote sensing image retrieval (CB-HRRS-IR) has attracted much research interest. Accordingly, this paper proposes a novel high-resolution remote sensing image retrieval approach via multiple feature representation and collaborative affinity metric fusion (IRMFRCAMF). In IRMFRCAMF, we design four unsupervised convolutional neural networks with different layers to generate four types of unsupervised features, from the fine level to the coarse level. In addition to these four types of unsupervised features, we also implement four traditional feature descriptors: local binary patterns (LBP), the gray-level co-occurrence matrix (GLCM), maximal response 8 (MR8), and the scale-invariant feature transform (SIFT). In order to fully incorporate the complementary information among the multiple features of one image and the mutual information across auxiliary images in the image dataset, this paper advocates collaborative affinity metric fusion to measure the similarity between images. The performance of high-resolution remote sensing image retrieval is evaluated on two public datasets, the UC Merced (UCM) dataset and the Wuhan University (WH) dataset. A large number of experiments show that our proposed IRMFRCAMF can significantly outperform state-of-the-art approaches.

Multi-objective operation of reservoirs is increasingly asked to consider environmental flow to support ecosystem health. The Indicators of Hydrologic Alteration (IHA) are widely used to describe environmental flow regimes, but few studies have explicitly formulated them into optimization models, making it difficult to direct reservoir release. In an attempt to weigh the benefit of environmental flow against economic achievement, a two-objective reservoir optimization model is developed in which all 33 hydrologic parameters of the IHA are explicitly formulated as constraints. The economic benefit is defined by hydropower production (HP), while the environmental flow benefit is transformed into an Eco-Index (EI) that combines 5 of the 33 IHA parameters chosen by principal component analysis. Five scenarios (A to E) with different constraints are tested and solved by nonlinear programming. The case study of the Jing Hong reservoir, located in the upstream Mekong basin, China, shows: 1. A Pareto frontier is formed by maximizing only the HP objective in scenario A and only the EI objective in scenario B. 2. Scenario D, using IHA parameters as constraints, obtains the best combined economic and ecological benefits. 3. A sensitive weight coefficient is found in scenario E, but the trade-offs between the HP and EI objectives are not within the Pareto frontier. 4. When the fraction of reservoir utilizable capacity reaches 0.8, both HP and EI attain acceptable values. Finally, to make this model more conveniently applicable to everyday practice, a simplified operation rule curve is extracted.

While obesity is considered a prognostic factor in colorectal cancer (CRC), there is increasing evidence that it is not simply body mass index (BMI) alone but specifically abdominal fat distribution that matters. As part of the ColoCare study, this study measured the distribution of adipose tissue compartments in CRC patients and aimed to identify the body metric that best correlates with these measurements as a useful proxy for adipose tissue distribution. In 120 newly diagnosed CRC patients who underwent multidetector computed tomography (CT), densitometric quantification of total (TFA), visceral (VFA), intraperitoneal (IFA), retroperitoneal (RFA), and subcutaneous fat area (SFA), as well as the M. erector spinae and psoas, was performed to test the association with gender, age, tumor stage, metabolic equivalents, BMI, waist-to-height ratio (WHtR) and waist-to-hip ratio (WHR). VFA was 28.8% higher in men (p_VFA < 0.0001) and 30.5% higher in patients older than 61 years (p_VFA < 0.0001). WHtR correlated best with all adipose tissue compartments (r_VFA = 0.69, r_TFA = 0.84, p < 0.0001) and with the visceral-to-subcutaneous fat ratio (VFR, r_VFR = 0.22, p < 0.05). Patients with tumor stages III/IV showed significantly lower overall adipose tissue than those with stages I/II. Increased M. erector spinae mass was inversely correlated with all compartments. Densitometric quantification on CT is a highly reproducible and reliable method to show fat distribution across adipose tissue compartments. This distribution might be best reflected by WHtR, rather than by BMI or WHR. (orig.)

Full Text Available The stabilization of dynamic switched control systems is addressed through an operator-based formulation. It is assumed that the controlled object and the controller are described by sequences of closed operator pairs (L, C) on a Hilbert space H of the input and output spaces, and stabilization is related to the existence of a bounded, admissible inverse of the resulting input-output operator. The technical mechanism used to obtain the results is the appropriate use of the fact that closed operators that are sufficiently close to bounded operators, in terms of the gap metric, are also bounded. That approach is followed for the operators describing the input-output relations in switched feedback control systems so as to guarantee closed-loop stabilization.

Full Text Available One of the problems in completing criminal cases is the difficulty of estimating when the settlement of a case file will be accomplished, caused by the number of case files handled and changes in detention time. Fast and accurate information is therefore needed. This research aims to develop a monitoring system for the tracking and tracing of scheduling rules using the rule-based expert systems method with 17 rules, supported by radio frequency identification (RFID) technology, in the form of a computer application. Based on the output of the system, an analysis is performed on the criminal case settlement process with a set of IF-THEN rules. The RFID reader reads the data of case files through radio wave signals emitted by the antenna toward an active tag attached to the criminal case file. The system is designed to monitor the tracking and tracing of RFID-based scheduling rules in real time and was built as a computer application in accordance with the system design. This study found no failures in the reading of active tags by the RFID reader to detect criminal case files that had been examined. Many case files were handled in three different locations, the constabulary, the prosecutor, and the judges of the district court, and RFID was able to identify them simultaneously. RFID thus strongly supports the implementation of rule-based expert systems for real-time monitoring of criminal case completion.

The rule/plan motor cognition (RPMC) paradigm elicits visually indistinguishable motor outputs resulting from either plan- or rule-based action-selection, using a combination of essentially interchangeable stimuli. Previous implementations of the RPMC paradigm have used pantomimed movements to compare plan- vs. rule-based action-selection. In the present work we attempt to determine the generalizability of previous RPMC findings to real object interaction by means of a grasp-to-rotate task. In the plan task, participants had to use prospective planning to achieve a comfortable post-handle-rotation hand posture. The rule task used implementation intentions (if-then rules) leading to the same comfortable end-state. In Experiment A, we compare the RPMC performance of 16 healthy participants in pantomime and real object conditions of the experiment, within-subjects. Higher processing efficiency of rule- vs. plan-based action-selection was supported by diffusion model analysis. Results show a significant response-time increase in the pantomime condition compared to the real object condition and a greater response-time advantage of rule-based vs. plan-based actions in the pantomime compared to the real object condition. In Experiment B, 24 healthy participants performed the real object RPMC task in a task-switching vs. a blocked condition. Results indicate that plan-based action-selection leads to longer response times and less efficient information processing than rule-based action-selection, in line with previous RPMC findings derived from the pantomime action-mode. Particularly in the task-switching mode, responses were faster in the rule than in the plan task, suggesting a modulating influence of cognitive load. Overall, results suggest an advantage of rule-based over plan-based action-selection, whereby differential mechanisms appear to be involved depending on the action-mode. We propose that cognitive load is a factor that modulates this advantage.

Full Text Available A new method for combining semisoft thresholding rules during wavelet-based compression of images with multiplicative noise is suggested. The method chooses the best thresholding rule and the threshold value using proposed criteria which provide the best nonlinear approximations and take quantization errors into consideration. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration in the sense of criteria such as PSNR, SSIM, etc.
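A semisoft (firm) thresholding rule interpolates between hard and soft thresholding using two cutoffs t1 < t2: coefficients below t1 are zeroed, coefficients above t2 pass unchanged, and those in between are shrunk linearly. A standard sketch of the shrinkage function applied to each wavelet coefficient:

```python
def semisoft_threshold(x, t1, t2):
    """Semisoft (firm) shrinkage of one wavelet coefficient:
    kill |x| <= t1, keep |x| >= t2, shrink linearly in between (t1 < t2)."""
    ax = abs(x)
    if ax <= t1:
        return 0.0
    if ax >= t2:
        return float(x)
    sign = 1.0 if x > 0 else -1.0
    return sign * t2 * (ax - t1) / (t2 - t1)
```

Setting t1 = t2 recovers hard thresholding in the limit, while letting t2 grow large approaches soft thresholding; the method described above searches over such rules and threshold values under its approximation and quantization criteria.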

This Final rule changes TRICARE's current regulatory provision for inpatient hospital claims priced under the DRG-based payment system. Claims are currently priced by using the rates and weights that are in effect on a beneficiary's date of admission. This Final rule changes that provision to price such claims by using the rates and weights that are in effect on a beneficiary's date of discharge.
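The substance of the change is simply which date indexes the rate-and-weight tables when a claim is priced. A minimal sketch with a hypothetical data layout (effective periods mapped to per-DRG rates; real TRICARE pricing involves many more factors):

```python
from datetime import date

def rates_in_effect(rate_periods, day):
    """Return the DRG rate table whose effective period contains `day`."""
    for start, end, table in rate_periods:
        if start <= day <= end:
            return table
    raise ValueError("no rate table in effect on %s" % day)

def price_claim(rate_periods, admission, discharge, drg):
    """Under the revised provision, price with the table in effect on the
    beneficiary's discharge date rather than the admission date."""
    return rates_in_effect(rate_periods, discharge)[drg]
```

For a stay spanning a fiscal-year boundary, the admission-date and discharge-date conventions select different tables, which is the only behavioral difference the Final rule introduces.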

Woody plants play a major role in the resilience of drylands and in people's livelihoods. However, due to their scattered distribution, quantifying and monitoring woody cover over space and time is challenging. We develop a phenology-driven model and train/validate MODIS (MCD43A4, 500 m) derived metrics with 178 ground observations from Niger, Senegal and Mali to estimate woody cover trends from 2000 to 2014 over the entire Sahel. The annual woody cover estimation at the 500 m scale is fairly accurate, with an RMSE of 4.3 (woody cover %) and r² = 0.74. Over the 15-year period we observed an average increase of 1.7 (±5.0) woody cover (%) with large spatial differences: no clear change can be observed in densely populated areas (0.2 ± 4.2), whereas a positive change is seen in sparsely populated areas (2.1 ± 5.2). Woody cover is generally stable in cropland areas (0.9 ± 4.6), reflecting the protective management of parkland trees by the farmers. Positive changes are observed in savannas (2.5 ± 5.4) and woodland areas (3.9 ± 7.3). The major pattern of woody cover change reveals strong increases in the sparsely populated Sahel zones of eastern Senegal, western Mali and central Chad, but a decreasing trend is observed in the densely populated western parts of Senegal, northern Nigeria, Sudan and southwestern Niger. This decrease is often local and limited to woodlands, an indication of ongoing expansion of cultivated areas and selective logging. We show that an overall positive trend is found in areas of low anthropogenic pressure, demonstrating the potential of these ecosystems to provide services such as carbon storage, if not over-utilized. Taken together, our results provide an unprecedented synthesis of woody cover dynamics in the Sahel, and point to land use and human population density as important drivers, although these only partially and locally offset a general post-drought increase.

Full Text Available The purpose of this study was to investigate the usage of electronic journals at Ferdowsi University, Iran, based on e-metrics. The paper also aimed to emphasize cost-benefit analysis and the correlation between journal impact factors and usage data. In this study, the experience of the Ferdowsi University library with licensing and usage of electronic resources was evaluated by providing a cost-benefit analysis based on the cost and usage statistics of electronic resources. Vendor-provided data were also compared with local usage data. The usage data were collected by tracking web-based access locally and by collecting vendor-provided usage data. The data sources were one year of vendor-supplied e-resource usage data from Ebsco, Elsevier, Proquest, Emerald, Oxford and Springer, and local usage data collected from the Ferdowsi University web server. The study found that actual usage values differ between vendor-provided data and local usage data. Elsevier had the highest usage in searches, sessions and downloads. Statistics also showed that a small number of journals account for a significant amount of use, while the majority of journals were used less frequently and some were never used at all. The users preferred the PDF format over HTML. The data in the subject profile suggested that the provided e-resources were best suited to certain subjects. There was no correlation between impact factor and electronic journal use. Monitoring the usage of e-resources has gained increasing importance for acquisition policy and budget decisions. The article provides information about local metrics for the six surveyed vendors/publishers, e.g. usage trends, requests per package, and cost per use as related to the scientific specialties of the university.

Mobile learning (M-learning) gives many learners the advantages of both traditional learning and e-learning. Currently, web-based mobile-learning systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem which causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a web-based mobile-learning system collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. Therefore, this paper focuses on a new data-mining algorithm, combining the advantages of genetic algorithms and simulated annealing, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a parallel genetic algorithm and a simulated annealing algorithm designed specifically for discovering association rules. Moreover, analysis and experiments are presented to show that the proposed method is superior to the Apriori algorithm in this mobile-learning system.
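The core measures that association rule mining optimizes, support and confidence, can be computed directly; a minimal sketch over learner-profile transactions (the attribute names are hypothetical, not from the paper):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the association rule antecedent -> consequent.
    Each transaction is a set of attribute values for one learner."""
    a, c = frozenset(antecedent), frozenset(consequent)
    n = len(transactions)
    n_a = sum(a <= t for t in transactions)        # transactions containing A
    n_ac = sum((a | c) <= t for t in transactions) # containing A and C
    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence
```

Rule explosion arises because every frequent itemset spawns many candidate rules passing minimum support/confidence cuts, which is what the genetic simulated-annealing search described above is meant to tame.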

This paper studies ways of modularizing transformation definitions in current rule-based model transformation languages. Two scenarios are shown in which the modular units are identified on the basis of the relations between source and target metamodels and on the basis of generic transformations.

Full Text Available In this paper, we propose a hybrid system to predict corporate bankruptcy. The whole procedure consists of the following four stages: first, sequential forward selection was used to extract the most important features; second, a rule-based model was chosen to fit the given dataset, since it can present physical meaning; third, a genetic ant colony algorithm (GACA) was introduced, and a fitness-scaling strategy and a chaotic operator were incorporated with GACA, forming a new algorithm, fitness-scaling chaotic GACA (FSCGACA), which was used to seek the optimal parameters of the rule-based model; and finally, the stratified K-fold cross-validation technique was used to enhance the generalization of the model. Simulation experiments on data of 1000 corporations collected from 2006 to 2009 demonstrated that the proposed model was effective. It selected the 5 most important factors as "net income to stock broker's equality," "quick ratio," "retained earnings to total assets," "stockholders' equity to total assets," and "financial expenses to sales." The total misclassification error of the proposed FSCGACA was only 7.9%, outperforming the genetic algorithm (GA), the ant colony algorithm (ACA), and GACA. The average computation time of the model is 2.02 s.

Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
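The key idea above, one local rule implying many concrete reactions that all inherit its rate law, can be illustrated with a toy expansion. This is a deliberately minimal sketch, far simpler than the specialized model-specification languages the text mentions: a binding rule names two site types, and every pair of molecule types carrying free instances of those sites yields a reaction at the rule's rate.

```python
from itertools import product

def expand_rule(molecules, rule):
    """Expand one binding rule into the concrete reactions it implies.
    A rule is (site_a, site_b, rate): any molecule with site_a may bind
    any other molecule with site_b; every generated reaction inherits
    the rule's single rate constant (the coarse graining described above)."""
    site_a, site_b, rate = rule
    reactions = []
    for m1, m2 in product(molecules, repeat=2):
        if m1 is not m2 and site_a in m1["sites"] and site_b in m2["sites"]:
            reactions.append((m1["name"], m2["name"], rate))
    return reactions
```

With many molecule types and modification states, the expanded reaction list grows combinatorially while the rule set stays fixed, which is exactly why implicit network specification keeps rule-based models concise.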

To demonstrate and compare the application of different genetic programming (GP) based intelligent methodologies for the construction of rule-based systems in two medical domains: the diagnosis of aphasia's subtypes and the classification of pap-smear examinations. Past data represented (a) successful diagnoses of aphasia's subtypes obtained from collaborating medical experts through a free interview per patient, and (b) smears (images of cells) correctly classified by cyto-technologists after staining using the Papanicolaou method. Initially a hybrid approach is proposed, which combines standard genetic programming and heuristic hierarchical crisp rule-base construction. Then, genetic programming for the production of crisp rule-based systems is attempted. Finally, another hybrid intelligent model is constructed, based on a grammar-driven genetic programming system for the generation of fuzzy rule-based systems. Results denote the effectiveness of the proposed systems, which are also compared, in terms of efficiency, accuracy and comprehensibility, to an inductive machine learning approach and to a standard genetic programming symbolic expression approach. The proposed GP-based intelligent methodologies are able to produce accurate and comprehensible results for medical experts, performing competitively with other intelligent approaches. The aim of the authors was the production of accurate but also sensible decision rules that could potentially help medical doctors to extract conclusions, even at the expense of a higher classification score.

Measurement confounding due to socioeconomic differences between world regions may bias the estimation of countries' happiness and global inequality. The potential implications of this bias have not been researched. In this study, the consequential validity of the Happy Planet Index 2012 as an indicator of global inequality is evaluated from the Rasch measurement perspective. Differential Item Functioning (DIF) by world region and bias in the estimated magnitude of inequalities were analyzed. The recalculated measure showed a good fit to Rasch model assumptions. The original index underestimated relative inequalities between world regions by 20%. DIF had no effect on relative measures but affected absolute measures by overestimating world average happiness and underestimating its variance. These findings suggest measurement confounding by unmeasured characteristics. Metric disadvantages must be adjusted for to make fair comparisons. Public policy decisions based on biased estimations could have relevant negative consequences on people's health and well-being by not focusing efforts on truly vulnerable populations.

A powerful tool for investigating speech perception is the use of speech intelligibility prediction models. Recently, a model was presented, termed the correlation-based speech-based envelope power spectrum model (sEPSMcorr) [1], based on the auditory processing of the multi-resolution speech-based Envel...

We develop numerical methods for approximating Ricci-flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one-parameter family of quintics. We also suggest several ways to extend our results.

Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, in the framework of Earth observation systems able to provide military governments with metric images from space. This leadership allowed Alcatel to propose for the export market, within a French collaboration framework, a complete space-based system for metric observation.

Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors of up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift errors (3-degree uniform shift). 2D and 3D gamma evaluation were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases, evaluating, with the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation was more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, indicating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans were high, the DVHs showed significant differences between the original plan and the error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, where a conventional gamma-based pre-treatment QA might not necessarily detect
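As a rough illustration of the gamma evaluation mentioned above, a minimal 1-D global gamma index with 3%/3 mm criteria might look like the sketch below. The function, profiles, and grid spacing are hypothetical; clinical tools such as SNC Patient and 3DVH operate on 2-D/3-D dose distributions with interpolation, but the per-point minimization over combined dose difference and distance-to-agreement is the same idea.

```python
import math

def gamma_1d(ref, meas, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Global 1-D gamma index per measured point (3%/3 mm criteria).

    ref, meas: dose profiles on the same grid; dose_tol is a fraction
    of the reference maximum (global normalization). A point passes
    when its gamma value is <= 1.
    """
    d_max = max(ref)
    gammas = []
    for i, dm in enumerate(meas):
        best = float("inf")
        for j, dr in enumerate(ref):
            dist = (i - j) * spacing_mm           # distance-to-agreement term
            ddose = dm - dr                       # dose-difference term
            g2 = (dist / dta_mm) ** 2 + (ddose / (dose_tol * d_max)) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

ref = [0.0, 0.5, 1.0, 0.5, 0.0]        # arbitrary reference profile
meas = [0.0, 0.5, 1.02, 0.5, 0.0]      # 2% error at the peak
gammas = gamma_1d(ref, meas, spacing_mm=1.0)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
assert pass_rate == 1.0  # a 2% point error passes 3%/3 mm
```

This also illustrates the abstract's caveat: a plan can show a 100% gamma pass rate while still carrying dose deviations that matter for DVH-based metrics.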

Motivation: Biological systems are complex and challenging to model and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. Results: We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof of concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although examples are given using specific implementations, the proposed techniques can be applied to rule-based models in general. Availability and implementation: The annotation ontology for rule-based models can be found at http

Multiresolution segmentation and rule-based classification techniques are used to classify objects from very high-resolution satellite images of urban areas. Custom rules are developed using different spectral, geometric, and textural features at five scale parameters, which yield varying classification accuracy. Principal component analysis is used to select the most important features out of a total of 207 different features. In particular, seven different object types are considered for classification. The overall classification accuracy achieved for the rule-based method is 95.55% and 98.95% for seven and five classes, respectively. Other classifiers that do not use rules perform at 84.17% and 97.3% accuracy for seven and five classes, respectively. The results show coarse segmentation at higher scale parameters and fine segmentation at lower scale parameters. The major contribution of this research is the development of rule sets and the identification of major features for satellite image classification, where the rule sets are transferable and the parameters are tunable for different types of imagery. Additionally, the individual object-wise classification and principal component analysis help to identify the required object from an arbitrary number of objects within images, given ground truth data for training.

In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have revived and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule-base. This significantly simplifies the translation process to neural network expert systems from conventional rule-based systems. Results comparing the performance of the proposed approach based on neural networks vs. the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
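The rule-to-network translation described above can be illustrated with a toy rule: each conjunction or disjunction maps to a threshold unit whose weights and thresholds are fixed by the rule structure, so no training is involved. The specific rule, weights, and unit encoding below are invented for illustration and are not the paper's translation scheme.

```python
def threshold_unit(weights, threshold, inputs):
    """McCulloch-Pitts unit: outputs 1 iff the weighted sum reaches threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Hypothetical rule: IF high_temp AND (vibration OR NOT coolant_ok) THEN alarm.
# Layer 1 encodes the disjunction (negation via a -1 weight), layer 2 the
# conjunction; all weights and thresholds are fixed, not learned.
def alarm(high_temp, vibration, coolant_ok):
    or_not = threshold_unit([1, -1], 0, [vibration, coolant_ok])
    return threshold_unit([1, 1], 2, [high_temp, or_not])

assert alarm(1, 1, 1) == 1   # hot and vibrating: alarm
assert alarm(1, 0, 0) == 1   # hot and coolant lost: alarm
assert alarm(1, 0, 1) == 0   # hot but otherwise nominal: no alarm
assert alarm(0, 1, 0) == 0   # not hot: no alarm
```

Because the rule structure is compiled directly into fixed weights, evaluation is a constant number of weighted sums, which is what makes this attractive for the time-critical applications the abstract mentions.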

Rule-based category learning was examined in 4- to 11-year-olds and adults. Participants were asked to learn a set of novel perceptual categories in a classification learning task. Categorization performance improved with age, with younger children showing the strongest rule-based deficit relative to older children and adults. Model-based analyses provided insight regarding the type of strategy being used to solve the categorization task, demonstrating that the use of the task-appropriate strategy increased with age. When children and adults who identified the correct categorization rule were compared, the performance deficit was no longer evident. Executive functions were also measured. While both working memory and inhibitory control were related to rule-based categorization and improved with age, working memory specifically was found to marginally mediate the age-related improvements in categorization. When analyses focused only on the sample of children, results showed that working memory ability and inhibitory control were associated with categorization performance and strategy use. The current findings track changes in categorization performance across childhood, demonstrating at which points performance begins to mature and resemble that of adults. Additionally, the findings highlight the potential role that working memory and inhibitory control may play in rule-based category learning.

Agile software development has evolved into an increasingly mature software development approach and has been applied successfully in many software vendors’ development departments. In this position paper, we address the broader agile service development. Based on method engineering principles we

of relevant pressure peaks at the various recording levels. Until now, this selection has been performed entirely by rule-based systems, requiring each pressure deflection to fit within predefined rigid numerical limits in order to be detected. However, due to great variations in the shapes of the pressure...... curves generated by muscular contractions, rule-based criteria do not always select the pressure events most relevant for further analysis. We have therefore been searching for a new concept for automatic event recognition. The present study describes a new system, based on the method of neurocomputing.......79-0.99 and accuracies of 0.89-0.98, depending on the recording level within the esophageal lumen. The neural networks often recognized peaks that clearly represented true contractions but that had been rejected by a rule-based system. We conclude that neural networks have potentials for automatic detections...

The recently developed extended belief rule-based inference methodology (RIMER+) recognizes the need to model the different types of information and uncertainty that usually coexist in real environments. A home setting with sensors located in different rooms and on different appliances can be considered a particularly relevant example of such an environment, which brings a range of challenges for sensor-based activity recognition. Although RIMER+ has been designed as a generic decision model that could be applied in a wide range of situations, this paper discusses how this methodology can be adapted to recognize human activities using binary sensors within smart environments. The evaluation of RIMER+ against other state-of-the-art classifiers in terms of accuracy, efficiency and applicability produced significantly favourable results, especially in situations of input data incompleteness, which demonstrates the potential of this methodology and underpins further research on the topic.

This paper proposes an enhanced rule-based web scanner in order to achieve better accuracy in detecting web vulnerabilities than existing tools, which have a relatively high false alarm rate when web pages are installed in unconventional directory paths. Using the proposed matching method based on a similarity score, the proposed scheme can determine whether two pages have the same vulnerabilities or not. With this method, the proposed scheme is able to figure out whether target web pages are vulnerable by comparing them to web pages that are known to have vulnerabilities. We show that the proposed scanner reduces the false alarm rate by 12% compared to an existing well-known scanner through performance evaluation via various experiments. The proposed scheme is especially helpful in detecting vulnerabilities of web applications derived from well-known open-source web applications after small customizations, which happens frequently in many small-sized companies.
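The abstract does not specify the similarity score, but the matching idea can be sketched with a simple Jaccard similarity over page tokens. Everything below (the scoring function, the threshold, and the page snippets) is an illustrative assumption, not the paper's method.

```python
import re

def similarity(page_a: str, page_b: str) -> float:
    """Jaccard similarity over word tokens: |A & B| / |A | B|.

    A stand-in for the paper's matching score; identical pages score 1.0,
    pages with no shared tokens score 0.0.
    """
    ta = set(re.findall(r"\w+", page_a.lower()))
    tb = set(re.findall(r"\w+", page_b.lower()))
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

# Hypothetical signature of a page known to be vulnerable
KNOWN_VULNERABLE = "<form action='login.php'><input name='user'><input name='pass'>"
THRESHOLD = 0.8  # hypothetical cut-off

def looks_vulnerable(target_page: str) -> bool:
    """Flag a target page if it closely matches a known-vulnerable page."""
    return similarity(target_page, KNOWN_VULNERABLE) >= THRESHOLD

assert looks_vulnerable(KNOWN_VULNERABLE)
```

Matching on content similarity rather than fixed paths is what lets such a scanner recognize a lightly customized copy of a known-vulnerable open-source page even when it lives in an unconventional directory.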

Post-fire response from vegetation is determined by the intensity and timing of fires as well as the nature of local biomes. Though field-based studies focusing on selected study sites have helped to explain the mechanisms of post-fire response, there is a need to extend the analysis to a broader spatial extent with the assistance of remotely sensed imagery of fires and vegetation. Pheno-metrics, a series of variables on the growing cycle extracted from basic satellite measurements of vegetation coverage, translate basic remote sensing measurements such as NDVI into the language of phenology and fire ecology in a quantitative form. In this study, we analyzed the rate of biomass removal after ignition and the speed of post-fire recovery in California protected areas from 2000 to 2014 with USGS MTBS fire data and USGS eMODIS pheno-metrics. The NDVI drop caused by fire showed that the aboveground biomass of evergreen forest was removed much more slowly than that of shrubland because of its higher moisture level and greater fuel density. In addition, the above two major land cover types experienced a greatly weakened immediate post-fire growing season, featuring a later start and peak of season, a shorter length of season, and a lower start and peak of NDVI. Such weakening was highly correlated with burn severity, and also influenced by the season of fire and the land cover type, according to our modeling between the anomalies of pheno-metrics and the differenced normalized burn ratio (dNBR). The influence generally decayed over time, but can remain high within the first 5 years after fire, mostly because of the introduction of exotic species where the native species were missing. Location-specific variables are necessary to better address the variance within the same fire and improve the outcomes of models. This study can help ecologists in validating theories of post-fire vegetation response mechanisms and assist local fire managers in post-fire vegetation recovery.

A rule-based control is presented for desalination plants operating under variable, renewable power availability. The control algorithm is based on two sets of rules: first, a list that prioritizes the reverse osmosis (RO) units of the plant is created, based on the current state and the expected water demand; second, the available energy is dispatched to these units following this prioritized list. The selected strategy is tested on a specific case study: a reverse osmosis plant designed for the production of desalinated water powered by wind and wave energy. Simulation results illustrate the correct performance of the plant under this control.
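A minimal sketch of the two-rule scheme (prioritize, then dispatch) might look like this. The unit names, power ratings, and the fill-remaining-capacity policy are assumptions for illustration, not the paper's controller.

```python
def dispatch(units, available_kw):
    """Switch on RO units in priority order until the energy budget is spent.

    units: list of (name, power_kw) pairs, already sorted by priority
    (e.g., by current state and expected water demand).
    Returns the names of the units to run in this control step.
    """
    running, remaining = [], available_kw
    for name, power_kw in units:
        if power_kw <= remaining:        # rule 2: dispatch down the list
            running.append(name)
            remaining -= power_kw
    return running

# Hypothetical prioritized list (rule 1 would produce this ordering)
prioritized = [("RO-1", 40.0), ("RO-2", 40.0), ("RO-3", 25.0)]
assert dispatch(prioritized, available_kw=70.0) == ["RO-1", "RO-3"]
```

Note the design choice in the example: when a high-priority unit does not fit the remaining budget, the loop still considers lower-priority units, so leftover renewable energy is not wasted; a stricter policy could instead stop at the first unit that does not fit.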

conditions of uncertainty. The Belief Rule-Based Inference Methodology Using the Evidential Reasoning (RIMER) approach was adopted to develop this expert system; which is named the Belief Rule-Based Expert System (BRBES). The system can handle various types of uncertainty in knowledge representation...... and inference procedures. The knowledge base of this system was constructed by using real patient data and expert opinion. Practical case studies were used to validate the system. The system-generated results are more effective and reliable in terms of accuracy than the results generated by a manual system....

To provide efficient services to end-users, managing variability and commonality among the features of a product line is a challenge for industry and researchers. Feature modeling provides great service in dealing with variability and commonality among the features of a product line. Cardinality-based service feature diagrams changed the basic framework of service feature diagrams by adding constraints to them, which makes service specifications more flexible; but apart from this variation in selection, third-party services may also need to be customizable. To control variability, cardinality-based service feature diagrams provide high-level visual notations. For specifying variability, the use of cardinality-based service feature diagrams raises the problem of matching a required feature diagram against the set of provided diagrams.

This paper presents a general passivity-based interaction controller design approach that utilizes combined energy- and power-based safety norms to assert the safety of domestic robots. Since these robots are expected to co-habit the same environment with a human user, analysing and ensuring their

depends on that animal’s resource specialization, mobility, and life history strategies (Jeanneret et al. 2003a, b; Jennings & Pocock 2009). The knowledge necessary to define the biodiversity contribution of agricultural lands is specialized, dispersed, and nuanced, and thus not readily accessible. Given access to clearly defined biodiversity tradeoffs between alternative agricultural practices, landowners, land managers and farm operators could collectively enhance the conservation and economic value of agricultural landscapes. Therefore, Field to Market: The Keystone Alliance for Sustainable Agriculture and The Nature Conservancy jointly funded a pilot project to develop a biodiversity metric to integrate into Field to Market’s existing sustainability calculator, The Fieldprint Calculator (http://www.fieldtomarket.org/). Field to Market: The Keystone Alliance for Sustainable Agriculture is an alliance among producers, agribusinesses, food companies, and conservation organizations seeking to create sustainable outcomes for agriculture. The Fieldprint Calculator supports the Keystone Alliance’s vision to achieve safe, accessible, and nutritious food, fiber and fuel in thriving ecosystems to meet the needs of 9 billion people in 2050. In support of this same vision, our project provides proof-of-concept for an outcome-based biodiversity metric for Field to Market to quantify biodiversity impacts of commercial row crop production on terrestrial vertebrate richness. Little research exists examining the impacts of alternative commercial agricultural practices on overall terrestrial biodiversity (McLaughlin & Mineau 1995). Instead, most studies compare organic versus conventional practices (e.g. Freemark & Kirk 2001; Wickramasinghe et al. 2004), and most studies focus on flora, avian, or invertebrate communities (Jeanneret et al. 2003a; Maes et al. 2008; Pollard & Relton 1970). Therefore, we used an expert-knowledge-based approach to develop a metric that predicts

Current research on image quality assessment tends to include visual attention in objective metrics to further enhance their performance. A variety of computational models of visual attention are implemented in different metrics, but their accuracy in representing human visual attention is not fully

In joint timing and carrier offset estimation algorithms for Time Division Duplexing (TDD) OFDM systems, different timing metrics are proposed to determine the beginning of a burst or symbol. In this contribution we investigated the different timing metrics in order to establish their impact on the

Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error

This paper studies ways of modularizing transformation definitions in current rule-based model transformation languages. Two scenarios are shown in which the modular units are identified on the basis of relations between source and target metamodels and on the basis of generic transformation

This work considers the software implementation of a rule induction method based on a sequential covering algorithm. Such an approach allows us to develop a clinical decision support system. The project is implemented in the NetBeans IDE using Java classes.

Two strategies for control of nitrogen removal in an alternating activated sludge plant are compared. One is based on simple model predictions determining the cycle length at the beginning of each cycle. The other is based on simple rules relating present ammonia and nitrate concentrations. Both ...

We present DEVELOP-FPS, a software tool specially designed for the development of First Person Shooter (FPS) players controlled by rule-based scripts. DEVELOP-FPS may be used by FPS developers to create, debug, maintain and compare rule-based player behaviours, providing a set of useful functionalities: (i) easy preparation of the right scenarios for game debugging and testing; (ii) control of the game execution: users can stop and resume the game at any instant, monitoring and controlling every player in the game, including each player's state and rule-base activation, and issuing commands to control their behaviour; and (iii) automatic execution of a given number of game runs to collect data in order to evaluate and compare the players' performance across a sufficient number of similar experiments.

A methodology has been developed for probability-based standards for low-pressure piping systems that are attached to the reactor coolant loops of advanced light water reactors (ALWRs) which could experience reactor coolant loop temperatures and pressures because of multiple isolation valve failures. This accident condition is called an intersystem loss-of-coolant accident (ISLOCA). The methodology was applied to various sizes of carbon and stainless steel piping designed to advanced boiling water reactor (ABWR) temperatures and pressures

Field trees are an integral part of the farmed parkland landscape in West Africa and provide multiple benefits to the local environment and livelihoods. While field trees have received increasing interest in the context of strengthening resilience to climate variability and change, the actual extent of farmed parkland and spatial patterns of tree cover are largely unknown. We used the rule-based predictive modeling tool Cubist® to estimate field tree cover in the west-central agricultural region of Senegal. A collection of rules and associated multiple linear regression models was constructed from (1) a reference dataset of percent tree cover derived from very high spatial resolution data (2 m OrbView) as the dependent variable, and (2) ten years of 10-day 250 m Moderate Resolution Imaging Spectrometer (MODIS) Normalized Difference Vegetation Index (NDVI) composites and derived phenological metrics as independent variables. Correlation coefficients between modeled and reference percent tree cover of 0.88 and 0.77 were achieved for training and validation data, respectively, with absolute mean errors of 1.07 and 1.03 percent tree cover. The resulting map shows a west-east gradient from high tree cover in the peri-urban areas of horticulture and arboriculture to low tree cover in the more sparsely populated eastern part of the study area. A comparison of current (2000s) tree cover along this gradient with historic cover as seen on Corona images reveals dynamics of change but also areas of remarkable stability of field tree cover since 1968. The proposed modeling approach can help to identify locations of high and low tree cover in dryland environments and guide ground studies and management interventions aimed at promoting the integration of field trees into agricultural systems.
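Cubist's general prediction scheme, a collection of rules each pairing conditions with a multiple linear regression model, can be sketched as below. The rules, feature names, and coefficients are invented for illustration and do not come from the study; real Cubist also smooths predictions with neighbors and committees.

```python
def cubist_predict(rules, features):
    """Cubist-style prediction: each rule pairs a set of conditions with a
    linear model; the predictions of all matching rules are averaged."""
    preds = []
    for conditions, (intercept, coeffs) in rules:
        if all(cond(features) for cond in conditions):
            preds.append(intercept + sum(c * features[k] for k, c in coeffs.items()))
    return sum(preds) / len(preds) if preds else None

# Two hypothetical rules mapping an NDVI phenology metric to percent tree cover
rules = [
    ([lambda f: f["ndvi_peak"] >= 0.5], (5.0, {"ndvi_peak": 30.0})),
    ([lambda f: f["ndvi_peak"] < 0.5],  (1.0, {"ndvi_peak": 10.0})),
]
assert cubist_predict(rules, {"ndvi_peak": 0.5}) == 20.0  # 5.0 + 30.0 * 0.5
```

The appeal for mapping applications like the one above is that each rule is an interpretable piecewise-linear relationship between the phenological metrics and tree cover, rather than an opaque regression surface.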

Present air quality standards to protect vegetation from ozone are based on measured concentrations (i.e., exposure) rather than on plant uptake rates (or dose). Some familiar cumulative exposure-based indices include SUM06, AOT40, and W126. However, plant injury is more closely related to dose, or more appropriately to effective dose, than to exposure. This study develops and applies a simple model for estimating effective ozone dose that combines the plant canopy's rate of stomatal ozone uptake with the plant's defense against ozone uptake. Here the plant defense is explicitly parameterized as a function of gross photosynthesis, and the model is applied using eddy covariance (ozone and CO2) flux data obtained at a vineyard site in the San Joaquin Valley during the California Ozone Deposition Experiment (CODE91). With the ultimate intention of applying these concepts using prognostic models and remotely sensed data, the pathways for ozone deposition are parameterized (as much as possible) in terms of canopy LAI and the surface friction velocity. Results indicate that (1) the daily maximum potential for plant injury (based on effective dose) tends to coincide with the daily peak in ozone mixing ratio (ppbV), (2) there are potentially significant differences between ozone metrics based on dose (no plant defense) and effective dose, and (3) nocturnal conductance can contribute significantly to the potential for plant ozone injury.

In 2002, Regulation (EC) 178 of the European Parliament and of the Council stated that, in order to achieve the general objective of a high level of protection of human health and life, food law shall be based on risk analysis. However, Commission Regulation No 2073/2005 on microbiological criteria for foodstuffs requires that food business operators ensure that foodstuffs comply with the relevant microbiological criteria. Such criteria define the acceptability of a product, a batch of foodstuffs or a process, based on the absence, presence or number of micro-organisms, and/or on the quantity of their toxins/metabolites, per unit(s) of mass, volume, area or batch. The same Regulation describes a food safety criterion as a means of defining the acceptability of a product or a batch of foodstuff applicable to products placed on the market; moreover, it defines a process hygiene criterion as a means of indicating the acceptable functioning of the production process. Neither food safety criteria nor process hygiene criteria are based on risk analysis. On the contrary, the metrics formulated by the Codex Alimentarius Commission in 2004, named Food Safety Objective (FSO) and Performance Objective (PO), are risk-based and fit the indications of Regulation 178/2002. The main aims of this review are to illustrate the key differences between microbiological criteria and the risk-based metrics defined by the Codex Alimentarius Commission and to explore the opportunity and the possibility of implementing future European Regulations that include PO and FSO as parameters supporting microbiological criteria. This review also clarifies the implications of defining an appropriate level of human protection, how to establish FSO and PO, and how to implement them in practice, linked to each other through quantitative risk assessment models. The contents of this review should clarify the context for application of the results collected during the EU-funded project named BASELINE (www

The topic of this study is word stress, more specifically the relation between rules of stress and destressing within the framework of metrical phonology. Our claims will be largely based on in-depth analyses of two word stress systems: those of English and Dutch. We intend to offer a

Despite real changes in the workplace and the negative consequences of prevailing hierarchical structures with rigid management systems, little attention has yet been paid to shifting management modes to accommodate the dynamics of the external environment, particularly when a firm’s operating environment demands a high degree of flexibility. Building on the resource-based view as a basis for competitive advantage, we posit that differences in the stability of an organization’s environment and the degree of managerial control explain variations in the management mode used in firms. Unlike other studies which mainly focus on either the dynamics of the external environment or management control, we have developed a theoretical model combining both streams of research, in a context frame to describe under what conditions firms engage in rules-based, change-based, engagement-based and capability-based management modes. To test our theoretical framework, we conducted a survey with 54 firms in various industries and nations on how their organizations cope with a dynamic environment and what management style they used in response. Our study reveals that the appropriate mode can be determined by analyzing purpose, motivation, knowledge and information, as well as the degree of complexity, volatility and uncertainty the firm is exposed to. With our framework, we attempt to advance the understanding of when organizations should adapt their management style to the changing business environment.

Mental disorder is a change of mental or behavioral pattern that causes suffering and impairs the ability to function in ordinary life. In psychopathology, the assessment methods of mental disorder contain various types of uncertainties associated with signs and symptoms. This study identifies ... to ignorance, incompleteness, and randomness. So, a belief rule-based expert system (BRBES) has been designed and developed with the capability of handling the uncertainties mentioned. Evidential reasoning works as the inference engine and the belief rule base as the knowledge representation schema ...

In this paper the metrics-based results of the Dst part of the 2008-2009 GEM Metrics Challenge are reported. The Metrics Challenge asked modelers to submit results for 4 geomagnetic storm events and 5 different types of observations that can be modeled by statistical, climatological, or physics-based (e.g. MHD) models of the magnetosphere-ionosphere system. We present the results of over 25 model settings that were run at the Community Coordinated Modeling Center (CCMC) and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations we use comparisons of one-hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of one-minute model data with the one-minute Dst index calculated by the United States Geological Survey (USGS).
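Comparisons of this kind reduce to a few standard operations: binning one-minute model output into hourly averages and scoring the result against the observed index. A minimal sketch follows; the function names and the prediction-efficiency form are generic illustrations, not the Challenge's official metric definitions.

```python
import math

def hourly_average(minute_values):
    """Average one-minute model output into hourly bins (length assumed a multiple of 60)."""
    return [sum(minute_values[i:i + 60]) / 60.0
            for i in range(0, len(minute_values), 60)]

def rmse(model, obs):
    """Root-mean-square error between paired model and observed series."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def prediction_efficiency(model, obs):
    # PE = 1 - MSE / Var(obs); 1 is a perfect forecast, <= 0 is no better
    # than always predicting the observed mean
    mean_o = sum(obs) / len(obs)
    mse = sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)
    var = sum((o - mean_o) ** 2 for o in obs) / len(obs)
    return 1.0 - mse / var
```

Scoring hourly-averaged output against the hourly Kyoto index and one-minute output against the USGS index then uses the same two scoring functions on differently binned inputs.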

A speech intelligibility prediction model is proposed that combines the auditory processing front end of the multi-resolution speech-based envelope power spectrum model [mr-sEPSM; Jørgensen, Ewert, and Dau (2013). J. Acoust. Soc. Am. 134(1), 436–446] with a correlation back end inspired by the sh...

There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most of the conventional methods are not sufficiently able to provide good forecasting performance due to the non-linearity, uncertainty, and complexity of the data. Artificial intelligence techniques are successfully used in modeling air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare the prediction performance. Data on five variables: energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.
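A fuzzy inference system of the kind described can be sketched compactly as a zero-order Sugeno model: fuzzify the input, fire the rules, and take the weighted average of the rule consequents. The membership functions, the single input variable, and the rule consequents below are invented for illustration; a real FIS for this problem would use all five variables and expert-tuned rules.

```python
def mf_low(x):
    """Shoulder membership: fully LOW below 20, fading out by 60."""
    if x <= 20: return 1.0
    if x >= 60: return 0.0
    return (60 - x) / 40.0

def mf_high(x):
    """Shoulder membership: fully HIGH above 80, fading in from 40."""
    if x >= 80: return 1.0
    if x <= 40: return 0.0
    return (x - 40) / 40.0

def sugeno_co2(energy_use):
    # Two illustrative zero-order Sugeno rules (consequents are constants):
    #   IF energy use is LOW  THEN CO2 = 2.0
    #   IF energy use is HIGH THEN CO2 = 9.0
    w_lo, w_hi = mf_low(energy_use), mf_high(energy_use)
    if w_lo + w_hi == 0:
        return 0.0
    return (w_lo * 2.0 + w_hi * 9.0) / (w_lo + w_hi)
```

ANFIS differs from this fixed FIS in that the membership parameters and consequents are fitted to data by gradient descent and least squares, which is why it typically predicts better.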

Vehicular Ad hoc NETworks (VANETs) represent a particular mobile technology that permits communication among vehicles, offering security and comfort. Nowadays, distributed mobile wireless computing is becoming a very important communications paradigm, due to its flexibility to adapt to different mobile applications. VANETs are a practical example of data exchange among real mobile nodes. To enable communications within an ad-hoc network, characterized by continuous node movements, routing protocols are needed to react to frequent changes in network topology. In this paper, the attention is focused mainly on the network layer of VANETs, proposing a novel approach to reduce the interference level during mobile transmission, based on the multi-channel nature of the IEEE 802.11p (1609.4) standard. In this work a new routing protocol based on the Distance Vector algorithm is presented to reduce the end-to-end delay and to increase packet delivery ratio (PDR) and throughput in VANETs. A new metric is also proposed, based on the maximization of the average Signal-to-Interference Ratio (SIR) level and the link duration probability between two VANET nodes. In order to relieve the effects of the co-channel interference perceived by mobile nodes, transmission channels are switched on the basis of a periodical SIR evaluation. A network simulator has been used for implementing and testing the proposed idea.
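The proposed metric combines two quantities per link: average SIR and link duration probability. A minimal sketch of how such a metric might be computed is below; the 30 dB normalization cap and the weighting factor alpha are my own illustrative assumptions, not values from the paper.

```python
import math

def sir_db(signal_mw, interferers_mw):
    """Signal-to-Interference Ratio in dB for one link (powers in mW)."""
    return 10.0 * math.log10(signal_mw / sum(interferers_mw))

def link_metric(avg_sir_db, p_duration, alpha=0.5):
    # Weighted combination of normalized average SIR and link duration
    # probability; a route would then maximize the sum (or min) of link metrics.
    sir_norm = min(max(avg_sir_db, 0.0), 30.0) / 30.0
    return alpha * sir_norm + (1 - alpha) * p_duration
```

Periodic channel switching then amounts to re-evaluating `sir_db` per channel and moving to the channel whose metric is highest.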

The use of fault-tolerant EP architectures has induced growing constraints, whose influence on reliability-based performance metrics is no longer negligible. To face the growing influence of simultaneous failures, this thesis proposes, for safety-related functions, a new assessment method for reliability, based on better accounting for time aspects. This report introduces the concept of information and uses it to interpret the failure modes of a safety-related function as the direct result of the initiation and propagation of erroneous information down to the actuator level. The main idea is to distinguish the appearance and disappearance of erroneous states, which can be defined as intrinsically dependent on hardware characteristics and maintenance policies, from their possible activation, constrained through architectural choices and leading to the failure of the safety-related function. This approach is based, at a low level, on deterministic SED models of the architecture and uses non-homogeneous Markov chains to depict the time evolution of error probabilities. (author)

The challenges related to the problem of assessing the actual diversity level and evaluating the safety of diversity-oriented NPP I and C (instrumentation and control) systems are analyzed. There are risks of inaccurate assessment and problems of insufficiently decreasing the probability of common cause failures (CCFs). The CCF probability of safety-critical systems may be essentially decreased by applying several different types of diversity (multi-diversity). Different diversity types of FPGA-based NPP I and C systems, and the general approach and stages of diversity and safety assessment as a whole, are described. The objectives of the report are: (a) analysis of the challenges caused by use of the diversity approach in NPP I and C systems in the context of FPGA and other modern technologies; (b) development of a multi-version NPP I and C systems assessment technique and tool based on a check-list and metric-oriented approach; (c) a case study of the technique: assessment of a multi-version FPGA-based NPP I and C system developed using the Radiy TM Platform. (author)

Highlights: • Multilevel Flow Model represents system knowledge as a domain map in expert system. • Rule-based fault diagnostic expert system can identify root cause via a causal chain. • Rule-based fault diagnostic expert system can be used for fault simulation training. - Abstract: This paper presents the development and implementation of a real-time rule-based diagnostic platform. The knowledge is acquired from domain experts and textbooks and the design of the fault diagnosis expert system was performed in the following ways: (i) establishing of corresponding classes and instances to build the domain map, (ii) creating of generic fault models based on events, and (iii) building of diagnostic reasoning based on rules. Knowledge representation is a complicated issue of expert systems. One highlight of this paper is that the Multilevel Flow Model has been used to represent the knowledge, which composes the domain map within the expert system as well as providing a concise description of the system. The developed platform is illustrated using the pressure safety system of a pressurized water reactor as an example of the simulation test bed; the platform is developed using the commercial and industrially validated software G2. The emulation test was conducted and it has been proven that the fault diagnosis expert system can identify the faults correctly and in a timely way; this system can be used as a simulation-based training tool to assist operators to make better decisions.

Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language, specifically a rule-based expert system using an ART-Ada (Automated Reasoning Tool for Ada) system shell, is discussed. The NASA Lewis Research Center was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphics user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

Energy lies at the backbone of any advanced society and constitutes an essential prerequisite for economic growth, social order and national defense. However there is an Achilles heel to today's energy and technology relationship; namely a precarious intimacy between energy and the fiscal, social, and technical systems it supports. Recently, widespread and persistent disruptions in energy systems have highlighted the extent of this dependence and the vulnerability of increasingly optimized systems to changing conditions. Resilience is an emerging concept that offers to reconcile considerations of performance under dynamic environments and across multiple time frames by supplementing traditionally static system performance measures to consider behaviors under changing conditions and complex interactions among physical, information and human domains. This paper identifies metrics useful to implement guidance for energy-related planning, design, investment, and operation. Recommendations are presented using a matrix format to provide a structured and comprehensive framework of metrics relevant to a system's energy resilience. The study synthesizes previously proposed metrics and emergent resilience literature to provide a multi-dimensional model intended for use by leaders and practitioners as they transform our energy posture from one of stasis and reaction to one that is proactive and which fosters sustainable growth. - Highlights: • Resilience is the ability of a system to recover from adversity. • There is a need for methods to quantify and measure system resilience. • We developed a matrix-based approach to generate energy resilience metrics. • These metrics can be used in energy planning, system design, and operations

Basic personality traits are believed to be expressed in, and predictable from, smart phone data. We investigate the extent of this predictability using data (n = 636) from the Copenhagen Network Study, which to our knowledge is the most extensive study concerning smartphone usage and personality traits. Based on phone usage patterns, earlier studies have reported surprisingly high predictability of all Big Five personality traits. We predict personality trait tertiles (low, medium, high) from a set of behavioral variables extracted from the data, and find that only extraversion can be predicted ...

A Medical Expert System (MES) was developed which uses a Reduced Rule Base to diagnose cancer risk according to the symptoms in an individual. A total of 13 symptoms were used. With the new MES, the reduced rules are checked instead of all possibilities (2^13 = 8192 different combinations). By checking reduced rules, results are found more quickly. The method of two-level simplification of Boolean functions was used to obtain the Reduced Rule Base. Thanks to the developed application, with dynamic numbers of inputs and outputs on different platforms, anyone can easily test their own cancer risk. More accurate results were obtained by considering all the possibilities related to cancer. Thirteen different risk factors were used to determine the type of cancer. The truth table produced in our study has 13 inputs and 4 outputs. The Boolean function minimization method is used to obtain fewer cases by simplifying logical functions. Cancer can be diagnosed quickly thanks to the control of the simplified 4 output functions. Diagnosis made with the 4 output values obtained using the Reduced Rule Base was found to be quicker than diagnosis made by screening all 2^13 = 8192 possibilities. With the improved MES, more probabilities were added to the process and more accurate diagnostic results were obtained. As a result of the simplification process, a 100% diagnosis speed gain was obtained in breast and renal cancer diagnosis, and a 99% gain in cervical and lung cancer diagnosis. With Boolean function minimization, fewer rules are evaluated instead of a large number of rules. Reducing the number of rules allows the designed system to work more efficiently, saves time, and makes it easier to transfer the rules to the designed expert system. Interfaces were developed on different software platforms to enable users to test the accuracy of the application. Anyone is able to assess their cancer risk using the determinative risk factors. Thereby
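The gain from Boolean minimization can be illustrated with a small sketch: after two-level simplification, each rule becomes a product term with don't-care positions, and a 13-bit symptom vector is matched against the handful of reduced terms rather than all 8192 truth-table rows. The patterns below are invented placeholders, not clinical rules.

```python
# Each reduced rule is a pattern over the 13 symptom bits:
# '1' = symptom present, '0' = absent, '-' = don't care.
REDUCED_RULES = {                      # illustrative patterns only
    "breast":   ["1-1----------"],
    "cervical": ["-11--------0-"],
}

def matches(pattern, symptoms):
    return all(p == "-" or p == s for p, s in zip(pattern, symptoms))

def diagnose(symptoms):
    """Check a 13-character symptom string against the reduced rules only,
    instead of scanning all 2**13 = 8192 truth-table rows."""
    return [cancer for cancer, patterns in REDUCED_RULES.items()
            if any(matches(p, symptoms) for p in patterns)]
```

Each don't-care position in a pattern covers two truth-table rows, so a single reduced term with k don't-cares replaces 2^k rows of the original table.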

The need for continuous quality improvement has made evaluation increasingly complex. This research aims to develop a quality assurance evaluation system using a rule-based system to monitor the quality of higher education. The research begins by documenting the daily activity of a study program, consisting of lecturer data, research data, service data, staff data, student data, and infrastructure data, into a database. The data were evaluated using a rule-based system, adopting the rules on quality standards for study programs of the National Accreditation Board for Higher Education as the knowledge base. The evaluation process was carried out using the forward chaining method, matching the existing data to the knowledge base to determine the quality status of each quality standard. The recommendation process was carried out using the backward chaining method, matching the quality status results to the desired projection of quality status to determine the nearest achievable target. The result of the research is a quality assurance evaluation system with a rule-based system that is capable of producing output in the form of an internal evaluation report and a recommendation system that can be used to monitor the quality of higher education.
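The forward chaining step described here can be sketched in a few lines: facts from the database are matched against condition-action rules, and each rule that fires assigns a quality status to its standard. The standards, attribute names, and thresholds below are invented placeholders, not the accreditation board's actual criteria.

```python
# Minimal forward-chaining sketch. Each rule pairs a set of conditions
# (attribute -> predicate) with a (standard, verdict) conclusion.
RULES = [
    ({"lecturers_phd_ratio": lambda v: v >= 0.4}, ("staffing", "good")),
    ({"lecturers_phd_ratio": lambda v: v < 0.4},  ("staffing", "needs_improvement")),
    ({"papers_per_lecturer": lambda v: v >= 1.0}, ("research", "good")),
]

def forward_chain(facts):
    """Match known facts against every rule; fired rules set quality status."""
    status = {}
    for conditions, (standard, verdict) in RULES:
        if all(k in facts and test(facts[k]) for k, test in conditions.items()):
            status[standard] = verdict
    return status
```

Backward chaining would run the same rule base in reverse: starting from a desired verdict, it collects the conditions that must hold and reports the gap between them and the current facts.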

In response to the increasing frequency and economic damages of natural disasters globally, disaster risk management has evolved to incorporate risk assessments that are multi-dimensional, integrated and metric-based. This is to support knowledge-based decision making and hence sustainable risk reduction. In Malawi and most of Sub-Saharan Africa (SSA), however, flood risk studies remain focussed on understanding causation, impacts, perceptions and coping and adaptation measures. Using the IPCC Framework, this study has quantified and profiled risk to flooding of rural, subsistence communities in the Lower Shire Valley, Malawi. Flood risk was obtained by integrating hazard and vulnerability. Flood hazard was characterised in terms of flood depth and inundation area obtained through hydraulic modelling in the valley with LISFLOOD-FP, while the vulnerability was indexed through analysis of exposure, susceptibility and capacity that were linked to social, economic, environmental and physical perspectives. Data on these were collected through structured interviews of the communities. The implementation of the entire analysis within GIS enabled the visualisation of spatial variability in flood risk in the valley. The results show predominantly medium levels in hazardousness, vulnerability and risk. The vulnerability is dominated by a high to very high susceptibility. Economic and physical capacities tend to be predominantly low but social capacity is significantly high, resulting in overall medium levels of capacity-induced vulnerability. Exposure manifests as medium. The vulnerability and risk showed marginal spatial variability. The paper concludes with recommendations on how these outcomes could inform policy interventions in the Valley.
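The hazard-vulnerability integration described can be sketched numerically. The multiplicative forms and classification thresholds below are illustrative assumptions, not the study's actual index formulas; the intent is only to show how exposure, susceptibility, and capacity combine into a classifiable risk score.

```python
def vulnerability(exposure, susceptibility, capacity):
    # All indices scaled to [0, 1]; higher capacity reduces vulnerability.
    # The multiplicative form is an illustrative assumption.
    return exposure * susceptibility * (1.0 - capacity)

def risk_level(hazard, vuln, thresholds=(0.2, 0.6)):
    """Classify hazard x vulnerability into low / medium / high bands."""
    r = hazard * vuln
    lo, hi = thresholds
    return "low" if r < lo else ("medium" if r < hi else "high")
```

Computed per GIS cell, such a score yields the kind of spatial risk map the study visualises.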

The Southern Ocean is central to the climate's response to increasing levels of atmospheric greenhouse gases as it ventilates a large fraction of the global ocean volume. Global coupled climate models and earth system models, however, vary widely in their simulations of the Southern Ocean and its role in, and response to, the ongoing anthropogenic forcing. Due to its complex water-mass structure and dynamics, Southern Ocean carbon and heat uptake depend on a combination of winds, eddies, mixing, buoyancy fluxes and topography. Understanding how the ocean carries heat and carbon into its interior and how the observed wind changes are affecting this uptake is essential to accurately projecting transient climate sensitivity. Observationally-based metrics are critical for discerning processes and mechanisms, and for validating and comparing climate models. As the community shifts toward Earth system models with explicit carbon simulations, more direct observations of important biogeochemical parameters, like those obtained from the biogeochemically-sensored floats that are part of the Southern Ocean Carbon and Climate Observations and Modeling project, are essential. One goal of future observing systems should be to create observationally-based benchmarks that will lead to reducing uncertainties in climate projections, and especially uncertainties related to oceanic heat and carbon uptake.

Nowadays, how to evaluate image quality reasonably is a basic and challenging problem. The existing no-reference evaluation methods cannot accurately reflect human visual perception of image quality. In this paper, we propose an efficient general-purpose no-reference image quality assessment (NRIQA) method based on visual perception, which effectively integrates human visual characteristics into the NRIQA field. First, a novel algorithm for salient region extraction is presented: two characteristic maps, texture and edge, of the original image are added to the Itti model. Because the normalized luminance coefficients of natural images obey a generalized Gaussian probability distribution, we utilize this characteristic to extract statistical features in the regions of interest (ROI) and regions of non-interest, respectively. Then, the extracted features are fused to form the input used to establish the support vector regression (SVR) model. Finally, the IQA model obtained by training is used to predict the quality of the image. Experimental results show that this method has good predictive ability, and the evaluation effect is better than existing classical algorithms. Moreover, the predicted results are more consistent with human subjective perception, accurately reflecting human visual perception of image quality.

A menu of psychomotor and mental acuity tests was refined. Field applications of such a battery are, for example, a study of the effects of toxic agents or exotic environments on performance readiness, or the determination of fitness for duty. The key requirement of these tasks is that they be suitable for repeated-measures applications, and so questions of stability and reliability are a continuing, central focus of this work. After the initial (practice) session, seven replications of 14 microcomputer-based performance tests (32 measures) were completed by 37 subjects. Each test in the battery had previously been shown to stabilize in less than five 90-second administrations and to possess retest reliabilities greater than r = 0.707 for three minutes of testing. However, all the tests had never been administered together as a battery and they had never been self-administered. In order to provide predictive validity for intelligence measurement, the Wechsler Adult Intelligence Scale-Revised and the Wonderlic Personnel Test were obtained on the same subjects.

The BioQuaRT project within the European Metrology Research Programme aims at correlating ion track structure characteristics with the biological effects of radiation and develops measurement and simulation techniques for determining ion track structure on different length scales from about 2 nm to about 10 μm. Within this framework, we investigate methods to translate track-structure quantities derived on a nanometer scale to macroscopic dimensions. Input data sets were generated by simulations of ion tracks of protons and carbon ions in liquid water using the Geant4 Monte Carlo toolkit with the Geant4-DNA processes. Based on the energy transfer points - recorded with nanometer resolution - we investigated parametrizations of overall properties of ion track structure. Three different track structure parametrizations have been developed, using the distances to the 10 next neighbouring ionizations, the radial energy distribution, and ionization cluster size distributions. These parametrizations of nanometer-scale track structure form a basis for deriving biologically relevant mean values, which are essential in the clinical situation where each voxel is exposed to a mixed radiation field. (authors)
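The first of the three parametrizations, distances to the 10 nearest neighbouring ionizations, is a plain nearest-neighbour computation over the recorded energy-transfer points. A brute-force sketch (adequate for small point sets; a real analysis over millions of points would use a spatial index such as a k-d tree):

```python
import math

def knn_distances(points, k=10):
    """For each energy-transfer point (x, y, z), return the sorted distances
    to its k nearest neighbours within the same track."""
    out = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        out.append(d[:k])
    return out
```

Histogramming these per-point distance lists over a whole track gives the kind of distribution that can then be compared across ion species and energies.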

Objective – To develop a set of generic outcome-based performance measures for Irish hospital libraries.
Methods – Various models and frameworks of performance measurement were used as a theoretical paradigm to link the impact of library services directly with measurable healthcare objectives and outcomes. Strategic objectives were identified, mapped to performance indicators, and finally translated into response choices to a single-question online survey for distribution via email.
Results – The set of performance indicators represents an impact assessment tool which is easy to administer across a variety of healthcare settings. In using a model directly aligned with the mission and goals of the organization, and linked to core activities and operations in an accountable way, the indicators can also be used as a channel through which to implement action, change, and improvement.
Conclusion – The indicators can be adopted at a local and potentially a national level, as both a tool for advocacy and to assess and improve service delivery at a macro level. To overcome the constraints posed by necessary simplifications, substantial further research is needed by hospital libraries to develop more sophisticated and meaningful measures of impact to further aid decision making at a micro level.

A self-learning methodology for building the rule base of a fuzzy logic controller (FLC) is presented and verified, aiming to add intelligent characteristics to fuzzy logic control systems. The methodology is a simplified version of those presented in today's literature. Some aspects are intentionally ignored, since they rarely appear in control system engineering, and a SISO process is considered here. The fuzzy inference system obtained is a table-based Takagi-Sugeno type. The system's desired performance is defined by a reference model, and rules are extracted from recorded data after the correct control actions are learned. The presented algorithm is tested by constructing the rule base of a fuzzy controller for a DC drive application. The system's performance and the method's viability are analyzed.

We propose a fuzzy rule-based system to map representations of the emotional state of an animated agent onto muscle contraction values for the appropriate facial expressions. Our implementation pays special attention to the way in which continuous changes in the intensity of emotions can be

RATIONALE: The prediction rules for the evaluation of the acid-base status in patients with chronic respiratory acidosis, derived primarily from an experimental canine model, suggest that complete compensation should not occur. This appears to contradict frequent observations of normal or near-normal pH levels in patients with chronic hypercapnia.

We demonstrate, compare and discuss the application of two genetic programming methodologies for the construction of rule-based systems in two medical domains: the diagnosis of Aphasia's subtypes and the classification of Pap-Smear Test examinations. The first approach consists of a scheme...

There is an optimum pressure for the normal operation of nuclear power plant reactors, and thresholds that must be respected during transients, which makes the pressurizer an important control mechanism. Inside a pressurizer there are heaters and a shower. Through their actuation levels, they control the vapor pressure inside the pressurizer and, consequently, inside the primary circuit. Therefore, the control of the pressurizer consists in controlling the actuation levels of the heaters and of the shower. In the present work this function is implemented through a fuzzy controller. Besides being an efficient way of exerting control, this approach presents the possibility of extracting knowledge of how this control is being made. A fuzzy controller consists basically of an inference machine and a rule base, the latter constructed with specialized knowledge. In some circumstances, however, this knowledge is not accurate and may lead to inefficient results. With the development of artificial intelligence techniques, methods were found to substitute for specialists by simulating their knowledge. Genetic programming is an evolutionary algorithm particularly efficient in manipulating rule base structures. In this work genetic programming was used as a substitute for the specialist. The goal is to test whether an irrational object, a computer, is capable, by itself, of finding a rule base that reproduces a pre-established actuation-level profile. The result is positive: a fuzzy rule base presenting an insignificant error was discovered, a remarkable result that proves the efficiency of the approach. (author)

We study sentiment analysis beyond the typical granularity of polarity and instead use Plutchik's wheel of emotions model. We introduce RBEM-Emo as an extension to the Rule-Based Emission Model algorithm to deduce such emotions from human-written messages. We evaluate our approach on two different

User evaluations of resources on a recommender system are usually limited, which causes an extremely sparse user rating matrix and greatly reduces the accuracy of personalized recommendation, especially for new users or new items. This paper presents a recommendation method based on rating prediction using causal association rules.…

textabstractTo study whether probabilistic selection by the use of a nomogram could improve patient selection for active surveillance (AS) compared to the various sets of rule-based AS inclusion criteria currently used. We studied Dutch and Swedish patients participating in the European Randomized

We manually designed rules for a backchannel (BC) prediction model based on pitch and pause information. In short, the model predicts a BC when there is a pause of a certain length that is preceded by a falling or rising pitch. This model was validated against the Dutch IFADV Corpus in a

Both in the US and in Europe anti money laundering policy switched from a rule- to a risk-based reporting system in order to avoid over-reporting by the private sector. However, reporting increased in most countries, while the quality of information decreased. Governments drowned in data because

, known as the Belief Rule-Based Expert System (BRBES) and implemented in the local e-government of Bangladesh. The results have been compared with a recently developed method of evaluating e-government, and it is demonstrated that the results of the BRBES are more accurate and reliable. The BRBES can...

Although full-text articles are provided by the publishers in electronic formats, it remains a challenge to find related work beyond the title and abstract context. Identifying related articles based on their abstract is indeed a good starting point; this process is straightforward and does not consume as many resources as full-text based similarity would require. However, further analyses may require in-depth understanding of the full content. Two articles with highly related abstracts can be substantially different regarding the full content. How similarity differs when considering title-and-abstract versus full-text and which semantic similarity metric provides better results when dealing with full-text articles are the main issues addressed in this manuscript. We have benchmarked three similarity metrics - BM25, PMRA, and Cosine, in order to determine which one performs best when using concept-based annotations on full-text documents. We also evaluated variations in similarity values based on title-and-abstract against those relying on full-text. Our test dataset comprises the Genomics track article collection from the 2005 Text Retrieval Conference. Initially, we used an entity recognition software to semantically annotate titles and abstracts as well as full-text with concepts defined in the Unified Medical Language System (UMLS®). For each article, we created a document profile, i.e., a set of identified concepts, term frequency, and inverse document frequency; we then applied various similarity metrics to those document profiles. We considered correlation, precision, recall, and F1 in order to determine which similarity metric performs best with concept-based annotations. For those full-text articles available in PubMed Central Open Access (PMC-OA), we also performed dispersion analyses in order to understand how similarity varies when considering full-text articles. We have found that the PubMed Related Articles similarity metric is the most suitable for
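The document profiles described (concept, term frequency, inverse document frequency) and the cosine variant of the comparison reduce to a short computation. This is a generic tf-idf/cosine sketch, not the paper's exact weighting; the UMLS concept names and document-frequency values are invented for illustration.

```python
import math
from collections import Counter

def tfidf_profile(concepts, df, n_docs):
    """Document profile: concept -> tf * idf, given a document-frequency map."""
    tf = Counter(concepts)
    return {c: f * math.log(n_docs / df[c]) for c, f in tf.items()}

def cosine(p, q):
    """Cosine similarity between two sparse concept profiles."""
    dot = sum(w * q.get(c, 0.0) for c, w in p.items())
    norm = (math.sqrt(sum(w * w for w in p.values()))
            * math.sqrt(sum(w * w for w in q.values())))
    return dot / norm if norm else 0.0
```

Comparing the profile built from title-and-abstract annotations against the profile built from full-text annotations of the same article is exactly the dispersion question the study examines.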

In this paper, the problem of extracting complex decision rules in simple decision systems over ontological graphs is considered. The extracted rules are consistent with a dominance principle similar to that applied in the dominance-based rough set approach (DRSA). In our study, we propose to use a heuristic algorithm, utilizing the ant-based clustering approach, to search the semantic spaces of concepts represented by means of ontological graphs. Concepts included in the semantic spaces are values of attributes describing objects in simple decision systems

Fuzzy rule optimization is a challenging step in the development of a fuzzy model. A simple two-input fuzzy model may have thousands of combinations of fuzzy rules when it deals with a large number of input variations. Intuitive, trial-and-error determination of fuzzy rules is very difficult. This paper addresses the problem of optimizing fuzzy rules using a genetic algorithm to compensate for illumination effects in face recognition. Since uneven illumination contributes negative effects to the performance of face recognition, those effects must be compensated. We have developed a novel algorithm based on a reflectance model to compensate for the effect of illumination in human face recognition. We build a pair of models from a single image and reason about those models using fuzzy logic. The fuzzy rules are then optimized using a genetic algorithm. This approach spends less computation cost while still keeping high performance. Based on the experimental results, we show that our algorithm is feasible for recognizing a desired person under variable lighting conditions with faster computation time. Keywords: Face recognition, harsh illumination, reflectance model, fuzzy, genetic algorithm

This paper is concerned with the problem of multilevel association rule mining for bridge resource management (BRM), which was announced by the IMO in 2010. The goal of this paper is to mine the association rules among the items of BRM and vessel accidents. However, because only indirect data can be collected, which alone seems useless for analyzing the relationship between the items of BRM and the accidents, cross-level association rules need to be studied to build the relation between the indirect data and the items of BRM. In this paper, firstly, a cross-level coding scheme for mining the multilevel association rules is proposed. Secondly, we execute an immune genetic algorithm with the coding scheme for analyzing BRM. Thirdly, based on basic maritime investigation reports, some important association rules among the items of BRM are mined and studied. Finally, according to the results of the analysis, we provide suggestions for the work of seafarer training, assessment, and management.

Most simulations of neuroplasticity in memristors, which are potentially used to develop artificial synapses, are confined to the basic biological Hebbian rules. However, these simplistic rules can potentially induce excessive excitation/inhibition, and even the collapse of neural activities, because they neglect the long-term homeostasis involved in realistic neural networks. Here, we develop organic CuPc-based memristors whose excitatory and inhibitory conductivities can implement both Hebbian rules and homeostatic plasticity, complementary to Hebbian patterns and conducive to long-term homeostasis. In another adaptive situation for homeostasis, in thicker samples, the overall excitement under periodic moderate stimuli tends to decrease and is recovered under intense inputs. Interestingly, the prototypes can be equipped with bio-inspired habituation and sensitization functions outperforming conventional simplified algorithms. The two functions mutually regulate each other to obtain homeostasis. Therefore, we develop a novel, versatile memristor with advanced synaptic homeostasis for comprehensive neural functions.

An automatic control rod operation method using rule-based control is proposed. Its features are as follows: (1) a production system to recognize plant events, determine control actions, and realize fast inference (fast selection of a suitable production rule); (2) use of the fuzzy control technique to determine quantitative control variables. The method's performance was evaluated by simulation tests of automatic control rod operation at a BWR plant start-up. The results were as follows: (1) the performance, in terms of stabilization of controlled variables and the time required for reactor start-up, was superior to that of other methods such as PID control and program control; (2) the processing time to select and interpret a suitable production rule, which was the same as that required for event recognition or determination of a control action, was short enough (below 1 s) for real-time control. These results show that the method is effective for automatic control rod operation. (author)

In this article, we report results of an experimental study conducted with volunteer subjects playing a touch-screen game with two unique difficulty levels. Subjects have knowledge about the rules of both game levels, but only sufficient playing experience with the easy level of the game, making them vulnerable at the difficult level. Several behavioral metrics associated with subjects' playing the game are studied in order to assess subjects' mental-workload changes induced by their vulnerability. Specifically, these metrics are calculated based on subjects' finger kinematics and decision-making times, which are then compared with baseline metrics, namely, performance metrics pertaining to how well the game is played and a physiological metric called pNN50 extracted from heart rate measurements. In balanced experiments and supported by comparisons with baseline metrics, it is found that some of the studied behavioral metrics have the potential to be used to infer subjects' mental-workload changes across different levels of the game. These metrics, which are decoupled from task specifics, relate to subjects' ability to develop strategies to play the game, and hence have the advantage of offering insight into subjects' task-load and vulnerability assessment across various experimental settings.

developed generic belief rule-based inference methodology using evidential reasoning (RIMER) acts as the inference engine of this BRBES, while the belief rule base serves as the knowledge representation schema. The knowledge base of the system is constructed by using real patient data and expert opinion from...

Recent years have seen a significant increase of the human influence on natural systems at both the global and local scales. Accurately modeling the human component and its interaction with the natural environment is key to characterizing the real system dynamics and anticipating potential future changes to hydrological regimes. Modern distributed, physically-based hydrological models are able to describe hydrological processes with a high level of detail and high spatiotemporal resolution. Yet they lack sophistication in the behavioral component, and human decisions are usually described by very simplistic rules, which might underperform in reproducing the catchment dynamics. In the case of water reservoir operators, these simplistic rules usually consist of target-level rule curves, which represent the average historical level trajectory. Whilst these rules can reasonably reproduce the average seasonal water volume shifts due to the reservoirs' operation, they cannot properly represent the peculiar conditions which influence the actual reservoirs' operation, e.g., variations in energy price or water demand, or dry or wet meteorological conditions. Moreover, target-level rule curves are not suitable for exploring the water system response to changing climate and socio-economic contexts, because they assume business-as-usual operation. In this work, we quantitatively assess how the inclusion of adaptive reservoir operating rules in physically-based hydrological models contributes to the proper representation of the hydrological regime at the catchment scale. In particular, we contrast target-level rule curves and detailed optimization-based behavioral models. We first perform the comparison on past observational records, showing that target-level rule curves underperform in representing the hydrological regime over multiple time scales (e.g., weekly, seasonal, inter-annual). Then, we compare how future hydrological changes are affected by the two modeling
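A target-level rule curve of the kind contrasted above can be sketched as a simple lookup of the average historical level for the current month. The function, the target values, and the units below are hypothetical illustrations, not taken from the study:

```python
# Hypothetical monthly target levels (metres above datum): hold more
# water in summer, as an average historical trajectory would.
TARGET_LEVEL = {m: 318.0 for m in range(1, 13)}
TARGET_LEVEL.update({6: 320.0, 7: 320.5, 8: 320.0})

def rule_curve_release(month, current_level, inflow, surface_area=1.0):
    """Business-as-usual release rule: pass the inflow through plus any
    storage held above this month's target level (volumes per time step)."""
    surplus_volume = (current_level - TARGET_LEVEL[month]) * surface_area
    return max(0.0, inflow + surplus_volume)
```

Because such a rule always steers toward the same seasonal trajectory, it cannot react to energy prices, demand changes, or droughts, which is precisely the limitation the optimization-based behavioral models are meant to address.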

Analog I&C systems have been replaced by digital I&C systems because the digital systems have many potential benefits for nuclear power plants in terms of operational and safety performance. For example, digital systems are essentially free of drift, have higher data handling and storage capabilities, and provide improved performance through accuracy and computational capabilities. In addition, analog replacement parts have become more difficult to obtain since they are obsolete and discontinued. There are, however, challenges to the introduction of digital technology into nuclear power plants, because digital systems are more complex than analog systems and their operation and failure modes are different. In particular, software, which can be the core of functionality in digital systems, does not wear out physically like hardware, and its failure modes are not yet clearly defined. Thus, research to develop methodologies for software reliability assessment is still proceeding in safety-critical areas such as nuclear systems, aerospace, and medical devices. Among these, a software metric-based methodology has been considered for the digital I&C systems of Korean nuclear power plants. The advantages and limitations of that methodology are identified, and requirements for its application to digital I&C systems are considered in this study

In this work we study metrics of quantum states, which are natural generalizations of the usual trace metric and Bures metric. Some useful properties of the metrics are proved, such as the joint convexity and contractivity under quantum operations. Our result has a potential application in studying the geometry of quantum states as well as the entanglement detection.

The paper describes a portion of the work aimed at developing an integrated, knowledge-based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ which is used to build and modify an object-oriented knowledge base. The ORL was designed in such a way as to be easily integrated with other representation schemes that could effectively reason with the object base. Specifically, the integration of the ORL with the rule-based system C Language Integrated Production System (CLIPS), developed at the NASA Johnson Space Center, will be discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are comprised of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects. Data is inherited through an object network via the relationship links. Together, the two schemes complement each other in that the object-oriented approach efficiently handles problem data while the rule-based knowledge is used to simulate the reasoning process. Alone, the object-based knowledge is little more than an object-oriented data storage scheme; however, the CLIPS inference engine adds the mechanism to directly and automatically reason with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base with complete access to all the functionality of the ORL from rules.

Sequence recognition through base-pairing is essential for DNA repair and gene regulation, but the basic rules governing this process remain elusive. In particular, the kinetics of annealing between two imperfectly matched strands is not well characterized, despite its potential importance in nucleic acid-based biotechnologies and gene silencing. Here we use single-molecule fluorescence to visualize the multiple annealing and melting reactions of two untethered strands inside a porous vesicle, allowing us to precisely quantify the annealing and melting rates. The data as a function of mismatch position suggest that seven contiguous base pairs are needed for rapid annealing of DNA and RNA. This phenomenological rule of seven may underlie the requirement for seven nucleotides of complementarity to seed gene silencing by small noncoding RNA and may help guide performance improvement in DNA- and RNA-based bio- and nanotechnologies, in which off-target effects can be detrimental.

Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) traditionally has focused on a unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost and emissions based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics were also superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization generally ranged from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.

In this paper, we discuss recent results about generalized metric spaces and fixed point theory. We introduce the notion of $\eta$-cone metric spaces, give some topological properties and prove some fixed point theorems for contractive type maps on these spaces. In particular, we show that these $\eta$-cone metric spaces are natural generalizations of both cone metric spaces and metric type spaces.

Plenty of XML documents exist in the petroleum engineering field, but traditional XML integration solutions cannot provide semantic query, which leads to low data-use efficiency. In light of the WeXML (oil & gas well XML) data semantic integration and query requirements, this paper proposes a semantic integration method based on extraction rules and ontology mapping. The method first defines a series of extraction rules with which elements and properties of the WeXML Schema are mapped to classes and properties in the WeOWL ontology, respectively; second, an algorithm is used to transform WeXML documents into WeOWL instances. Because WeOWL provides limited semantics, ontology mappings between the two ontologies are then built to explain the classes and properties of the global ontology with terms of WeOWL, and semantic query based on a global domain concept model is provided. By constructing a WeXML data semantic integration prototype system, the proposed transformation rules, transfer algorithm, and mapping rules are tested.

The purpose of an Intelligent Transport System (ITS) is mainly to increase driving safety and efficiency. Data exchange is an important way to achieve this purpose. On-demand data exchange is especially useful to help a driver avoid emergent events. In order to handle data exchange under dynamic situations, a rule-based data transfer protocol is proposed in this paper. A set of rules is designed according to the principle of request-forward-reply (RFR); that is, the rules are used to determine the timing of data broadcasting, forwarding, and replying automatically. Two typical situations are used to demonstrate the operation of the rules. One is the front view of a driver occluded by other vehicles; the other is a traffic jam. The proposed protocol is flexible and extensible for unforeseen situations. Three simulation tools were also implemented to demonstrate the feasibility of the protocol and measure network transmission under a high density of vehicles. The simulation results show that the rule-based protocol is efficient at data exchange and increases driving safety.

Pelvic fractures are serious injuries resulting in high mortality and morbidity. The objective of this study is to develop and validate local pelvic anatomical, cross-section-based injury risk metrics for a finite element (FE) model of the human body. Cross-sectional instrumentation was implemented in the pelvic region of the Global Human Body Models Consortium (GHBMC M50-O) 50th percentile detailed male FE model (v4.3). In total, 25 lateral impact FE simulations were performed using input data from cadaveric lateral impact tests performed by Bouquet et al. The experimental force-time data were scaled using five normalization techniques, which were evaluated using log rank, Wilcoxon rank sum, and CORrelation and Analysis (CORA) testing. Survival analyses with a Weibull distribution were performed on the experimental peak forces (scaled and unscaled) and the simulation test data to generate injury risk curves (IRCs) for total pelvic injury. Additionally, IRCs were developed for regional injury using cross-sectional forces from the simulation results and the injuries documented in the experimental autopsies. These regional IRCs were also evaluated using receiver operating characteristic (ROC) curve analysis. Based on the results of all the evaluation methods, the Equal Stress Equal Velocity (ESEV) and ESEV using effective mass (ESEV-EM) scaling techniques performed best. The simulation IRC shows slight underprediction of injury in comparison to these scaled experimental data curves. However, this difference was determined not to be statistically significant. Additionally, the ROC curve analysis showed moderate predictive power for all regional IRCs.

This protocol outlines large-scale reconstructions of neurons combined with the use of independent and unbiased clustering analyses to create a comprehensive survey of the morphological characteristics observed among a selective neuronal population. Combination of these techniques constitutes a novel approach for the collection and analysis of neuroanatomical data. Together, these techniques enable large-scale, and therefore more comprehensive, sampling of selective neuronal populations and establish unbiased quantitative methods for describing morphologically unique neuronal classes within a population. The protocol outlines the use of modified rabies virus to selectively label neurons. G-deleted rabies virus acts like a retrograde tracer following stereotaxic injection into a target brain structure of interest and serves as a vehicle for the delivery and expression of EGFP in neurons. Large numbers of neurons are infected using this technique and express GFP throughout their dendrites, producing "Golgi-like" complete fills of individual neurons. Accordingly, the virus-mediated retrograde tracing method improves upon traditional dye-based retrograde tracing techniques by producing complete intracellular fills. Individual well-isolated neurons spanning all regions of the brain area under study are selected for reconstruction in order to obtain a representative sample of neurons. The protocol outlines procedures to reconstruct cell bodies and complete dendritic arborization patterns of labeled neurons spanning multiple tissue sections. Morphological data, including positions of each neuron within the brain structure, are extracted for further analysis. Standard programming functions were utilized to perform independent cluster analyses and cluster evaluations based on morphological metrics. To verify the utility of these analyses, statistical evaluation of a cluster analysis performed on 160 neurons reconstructed in the thalamic reticular nucleus of the thalamus

The Princeton Beta Experiment (PBX) neutral beams have been routinely operated under automatic computer control. A major upgrade of the computer configuration was undertaken to coincide with the PBX machine modification. The primary tasks included in the computer control system are data acquisition, waveform reduction, automatic control and data storage. The portion of the system which will remain intact is the rule-based approach to automatic control. Increased computational and storage capability will allow the expansion of the knowledge base previously used. The hardware configuration supported by the PBX Neutral Beam (XNB) software includes a dedicated Microvax with five CAMAC crates and four process controllers. The control algorithms are rule-based and goal-driven. The automatic control system raises ion source electrical parameters to selected energy goals and maintains these levels until new goals are requested or faults are detected

Assessing the quality of e-learning courses to measure the success of e-learning systems in online learning is essential; the assessment can be used to improve education. This study analyzes the quality of e-learning courses on the website www.kulon.undip.ac.id using a questionnaire with questions based on the variables of ISO 9126. A Likert-scale assessment was used with a web application. A rule-based reasoning method is used to assess the quality of the e-learning. A case study was conducted on four e-learning courses with 133 respondents as users of the e-learning courses. The results show good quality values for each of the e-learning courses tested. In addition, each e-learning course has different advantages depending on certain variables. Keywords: E-Learning, Rule-Base, Questionnaire, Likert, Measuring.

This paper demonstrates two methodologies for the construction of rule-based systems in medical decision making. The first approach consists of a method combining genetic programming and heuristic hierarchical rule-base construction. The second model is composed of a strongly-typed genetic...

The fields of antibody engineering, enzyme optimization and pathway construction rely increasingly on screening complex variant DNA libraries. These highly diverse libraries allow researchers to sample a maximized sequence space and, therefore, more rapidly identify proteins with significantly improved activity. The current state of the art in synthetic biology allows for libraries with billions of variants, pushing the limits of researchers' ability to qualify libraries for screening by measuring the traditional quality metrics of fidelity and diversity of variants. Instead, when screening variant libraries, researchers typically use a generic, and often insufficient, oversampling rate based on a common rule-of-thumb. We have developed methods to calculate a library-specific oversampling metric, based on fidelity, diversity, and representation of variants, which informs researchers, prior to screening the library, of the amount of oversampling required to ensure that the desired fraction of variant molecules will be sampled. To derive this oversampling metric, we developed a novel alignment tool to efficiently measure frequency counts of individual nucleotide variant positions using next-generation sequencing data. Next, we apply a method based on the "coupon collector" problem in probability theory to construct a curve of upper-bound estimates of the sampling size required for any desired variant coverage. The calculated oversampling metric will guide researchers to maximize their efficiency in using highly variant libraries.
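A coupon-collector-style estimate of the required screening depth can be sketched as follows, under the strong simplifying assumption (which the library-specific metric above avoids) that all variants are equally abundant; the function names are hypothetical:

```python
import math

def oversampling_factor(target_coverage):
    """Clones to screen per distinct variant so that the expected fraction
    of variants observed reaches target_coverage.  With m uniform draws
    from v variants, E[fraction seen] = 1 - (1 - 1/v)**m ~= 1 - exp(-m/v),
    so the required ratio is m/v = ln(1 / (1 - target_coverage))."""
    return math.log(1.0 / (1.0 - target_coverage))

def clones_to_screen(num_variants, target_coverage):
    """Upper-level screening budget for a library of num_variants variants."""
    return math.ceil(num_variants * oversampling_factor(target_coverage))
```

For a 95% target coverage this gives ln 20 ≈ 3.0 clones per variant, close to the generic 3x rule-of-thumb mentioned above; the point of a library-specific metric is to correct this estimate for actual fidelity and abundance skew.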

Survival is considered an important indicator of the quality of cancer care, but the validity of different methodologies to measure comparative survival rates is less well understood. We explored whether the National Cancer Data Base (NCDB) could serve as a source of unadjusted and risk-adjusted cancer survival data and whether these data could be used as quality indicators for individual hospitals or in the aggregate by hospital type. The NCDB, an aggregate of > 1,500 hospital cancer registries, was queried to analyze unadjusted and risk-adjusted hazards of death for patients with stage III breast cancer (n = 116,787) and stage IIIB or IV non-small-cell lung cancer (n = 252,392). Data were analyzed at the individual hospital level and by hospital type. At the hospital level, after risk adjustment, few hospitals had comparative risk-adjusted survival rates that were statistically better or worse. By hospital type, National Cancer Institute-designated comprehensive cancer centers had risk-adjusted survival ratios that were statistically significantly better than those of academic cancer centers and community hospitals. Using the NCDB as the data source, survival rates for patients with stage III breast cancer and stage IIIB or IV non-small-cell lung cancer were statistically better at National Cancer Institute-designated comprehensive cancer centers when compared with other hospital types. Compared with academic hospitals, risk-adjusted survival was lower in community hospitals. At the individual hospital level, after risk adjustment, few hospitals were shown to have statistically better or worse survival, suggesting that, using NCDB data, survival may not be a good metric to determine relative quality of cancer care at this level.

The study of cellular signalling pathways and their deregulation in disease states, such as cancer, is a large and extremely complex task. Indeed, these systems involve many parts and processes but are studied piecewise and their literatures and data are consequently fragmented, distributed and sometimes—at least apparently—inconsistent. This makes it extremely difficult to build significant explanatory models with the result that effects in these systems that are brought about by many interacting factors are poorly understood. The rule-based approach to modelling has shown some promise for the representation of the highly combinatorial systems typically found in signalling where many of the proteins are composed of multiple binding domains, capable of simultaneous interactions, and/or peptide motifs controlled by post-translational modifications. However, the rule-based approach requires highly detailed information about the precise conditions for each and every interaction which is rarely available from any one single source. Rather, these conditions must be painstakingly inferred and curated, by hand, from information contained in many papers—each of which contains only part of the story. In this paper, we introduce a graph-based meta-model, attuned to the representation of cellular signalling networks, which aims to ease this massive cognitive burden on the rule-based curation process. This meta-model is a generalization of that used by Kappa and BNGL which allows for the flexible representation of knowledge at various levels of granularity. In particular, it allows us to deal with information which has either too little, or too much, detail with respect to the strict rule-based meta-model. Our approach provides a basis for the gradual aggregation of fragmented biological knowledge extracted from the literature into an instance of the meta-model from which we can define an automated translation into executable Kappa programs.

In this work, I present two projects that both contribute to the aim of discovering how intelligence manifests in the brain. The first project is a method for analyzing recorded neural signals, which takes the form of a convolution-based metric on neural membrane potential recordings. Relying only on integral and algebraic operations, the metric compares the timing and number of spikes within recordings as well as the recordings' subthreshold features: summarizing differences in these with a single "distance" between the recordings. Like van Rossum's (2001) metric for spike trains, the metric is based on a convolution operation that it performs on the input data. The kernel used for the convolution is carefully chosen such that it produces a desirable frequency space response and, unlike van Rossum's kernel, causes the metric to be first order both in differences between nearby spike times and in differences between same-time membrane potential values: an important trait. The second project is a combinatorial syntax method for connectionist semantic network encoding. Combinatorial syntax has been a point on which those who support a symbol-processing view of intelligent processing and those who favor a connectionist view have had difficulty seeing eye-to-eye. Symbol-processing theorists have persuasively argued that combinatorial syntax is necessary for certain intelligent mental operations, such as reasoning by analogy. Connectionists have focused on the versatility and adaptability offered by self-organizing networks of simple processing units. With this project, I show that there is a way to reconcile the two perspectives and to ascribe a combinatorial syntax to a connectionist network. The critical principle is to interpret nodes, or units, in the connectionist network as bound integrations of the interpretations for nodes that they share links with. Nodes need not correspond exactly to neurons and may correspond instead to distributed sets, or assemblies, of
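The cited van Rossum (2001) spike-train metric, which the recording metric builds on, can be evaluated analytically because the L2 inner product of two causal exponential kernels has a closed form. The sketch below covers only the spike-timing part of such a metric, not the subthreshold features the new metric adds:

```python
import math

def van_rossum_distance(spikes_a, spikes_b, tau=1.0):
    """Van Rossum (2001) spike-train distance: each train is filtered
    with a causal exponential kernel exp(-t/tau) and the L2 distance
    between the filtered signals is evaluated analytically.  Each pair
    of spikes at times x, y contributes (tau/2) * exp(-|x - y| / tau)
    to the integral of the product of the two filtered signals."""
    def corr(xs, ys):
        return (tau / 2.0) * sum(math.exp(-abs(x - y) / tau)
                                 for x in xs for y in ys)
    d_squared = (corr(spikes_a, spikes_a) + corr(spikes_b, spikes_b)
                 - 2.0 * corr(spikes_a, spikes_b)) / tau
    return math.sqrt(max(d_squared, 0.0))
```

With this normalization, identical trains are at distance 0 and two well-separated single spikes are at distance 1, and the distance grows smoothly with the timing offset between nearby spikes.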

Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.

Association rule mining is an important topic in the domain of data mining and knowledge discovery. Several papers have presented interestingness measure methods; the most typical are Support, Confidence, Lift, Improve, and so forth. But their limitations are obvious: no objective criterion, lack of a statistical basis, inability to define negative relationships, and so forth. This paper proposes three new methods, Bi-lift, Bi-improve, and Bi-confidence, corresponding to Lift, Improve, and Confidence, respectively. Then, on the basis of a utility function and the executing cost of rules, we propose an interestingness function based on profit (IFBP) considering subjective preferences and characteristics of the specific application object. Finally, a novel measure framework is proposed to improve the traditional one through experimental analysis. In conclusion, the new methods and measure framework are superior to the traditional ones in the aspects of objective criterion, comprehensive definition, and practical application.
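The classical baselines that the proposed Bi-lift, Bi-improve, and Bi-confidence refine can be computed directly from a transaction list. A minimal sketch of the standard definitions (the Bi-* variants and IFBP themselves are not reproduced here):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Classical interestingness measures for an association rule A -> B:
    support(A->B) = P(A and B), confidence = P(B | A),
    lift = P(A and B) / (P(A) * P(B))."""
    n = len(transactions)
    a, b = set(antecedent), set(consequent)
    n_a = sum(1 for t in transactions if a <= set(t))
    n_b = sum(1 for t in transactions if b <= set(t))
    n_ab = sum(1 for t in transactions if (a | b) <= set(t))
    support = n_ab / n
    confidence = n_ab / n_a if n_a else 0.0
    lift = (n_ab * n) / (n_a * n_b) if n_a and n_b else 0.0
    return support, confidence, lift
```

For example, over the four baskets `[["bread", "milk"], ["bread"], ["milk"], ["bread", "milk"]]`, the rule bread -> milk has support 0.5, confidence 2/3, and lift 8/9 (slightly below 1, i.e. a weak negative association).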

A rule-based system is very efficient for ensuring that drug stock remains available by utilizing Radio Frequency Identification (RFID) as an automatic input means. This method can ensure that the stock of drugs remains available by analyzing the needs of drug users. The research data was the amount of drug usage in a hospital over 1 year. The data was processed using ABC classification to identify drugs with fast, medium and slow movement. For each classification result, a rule-based algorithm was applied to determine safety stock and Reorder Point (ROP). This research yielded safety stock and ROP values that vary depending on the class of each drug. Validation was done by comparing the calculation of safety stock and reorder point done manually and by the system; the mean deviation value for safety stock was 0.03 and for ROP was 0.08.
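The abstract does not state the exact formulas used. For illustration only, a common textbook formulation of safety stock and reorder point under normally distributed daily demand:

```python
import math

def safety_stock(z, sigma_daily_demand, lead_time_days):
    """Safety stock under a normal-demand assumption:
    z (service-level factor) * daily demand std. dev. * sqrt(lead time)."""
    return z * sigma_daily_demand * math.sqrt(lead_time_days)

def reorder_point(avg_daily_demand, lead_time_days, ss):
    """ROP = expected demand over the lead time + safety stock."""
    return avg_daily_demand * lead_time_days + ss
```

For a hypothetical fast-moving (class A) drug with average demand 10 units/day, standard deviation 4 units/day, a 4-day lead time and z = 1.65 (roughly a 95% service level), this gives a safety stock of 13.2 units and an ROP of 53.2 units.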

Information and communication technologies have advanced significantly and at a fast pace in their performance and pervasiveness. Knowledge has become a significant asset for organizations, which need to deal with large amounts of data and information to produce valuable knowledge. Dealing with knowledge is becoming a pivotal activity for organizations in the new economy. One of the options for achieving the goal of knowledge management is the use of rule-based systems. This kind of approach offers a new opportunity for expert-system technology. Modern languages and cheap computing allow the implementation of concurrent systems for dealing with huge volumes of information in organizations. The present work proposes the use of contemporary programming elements, such as easy-to-exploit threading, when implementing rule-based treatment of huge data volumes.

A novel first-principles-based expert system is proposed for on-line detection and identification of faulty component candidates during incipient off-normal process operations. The system performs function-oriented diagnostics and can be reused for diagnosing single-component failures in different processes and different plants through the provision of the appropriate process schematics information. The function-oriented and process-independent diagnostic features of the proposed expert system are achieved by constructing a knowledge base containing three distinct types of information: qualitative balance equation rules, a functional classification of process components, and the process piping and instrumentation diagram. The various types of qualitative balance equation rules for processes utilizing single-phase liquids are derived, and their usage is illustrated through simulation results of a realistic process in a nuclear power plant.

Water resources systems are mostly operated using a set of pre-defined rules that usually respond to historical and institutional reasons rather than to an optimal allocation in terms of water use or economic benefits. These operating policies are commonly expressed as hedging rules, pack rules or zone-based operations, and simulation models can be used to test their performance under a wide range of hydrological and/or socio-economic hypotheses. Despite the high degree of acceptance and testing that these models have achieved, the actual operation of water resources systems rarely follows the pre-defined rules all the time, with consequent uncertainty about the system performance. Real-world reservoir operation is very complex, affected by input uncertainty (imprecision in forecast inflow, seepage and evaporation losses, etc.) and filtered by the reservoir operator's experience and natural risk-aversion, while considering the different physical and legal/institutional constraints in order to meet the different demands and system requirements. The aim of this work is to present a fuzzy logic approach to derive and assess the historical operation of a system. This framework uses a fuzzy rule-based system to reproduce pre-defined rules and also to match as closely as possible the actual decisions made by managers. Once built, the fuzzy rule-based system can be integrated in a water resources management model, making it possible to assess the system performance at the basin scale. The case study of the Mijares basin (eastern Spain) is used to illustrate the method. A reservoir operating curve regulates the two main reservoir releases (operated in a conjunctive way) with the purpose of guaranteeing a high reliability of supply to the traditional irrigation districts with higher priority (more senior demands that funded the reservoir construction). A fuzzy rule-based system has been created to reproduce the operating curve's performance, defining the system state (total
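As an illustration of the kind of fuzzy rule-based inference described (not the Mijares system's actual rules), the following zero-order Sugeno sketch uses two hypothetical rules that map a storage fraction to a release fraction:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_release(storage_frac):
    """Zero-order Sugeno inference with two hypothetical rules:
    IF storage is LOW  THEN release 20% of demand
    IF storage is HIGH THEN release 100% of demand.
    Output: weighted average of the rule consequents."""
    low = tri(storage_frac, -0.5, 0.0, 0.7)
    high = tri(storage_frac, 0.3, 1.0, 1.5)
    w = low + high
    return (low * 0.2 + high * 1.0) / w if w else 0.0
```

An empty reservoir fires only the LOW rule (release fraction 0.2), a full one fires only HIGH (1.0), and intermediate storages blend the two rules smoothly, which is what allows a fuzzy system to approximate an operator's soft decision boundaries.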

This study aims to determine an effective method of measuring the efficiency of promotional strategies for tourist destinations. Complicating factors that influence promotional efficiency (PE), such as promotional activities (PA), destination attribute (DA), and destination image (DI), make it difficult to evaluate the effectiveness of PE. This study develops a rule-based decision support mechanism using fuzzy set theory and the Analytic Hierarchy Process (AHP) to evaluate the effectiveness o...

We present Depfix, an open-source system for automatic post-editing of phrase-based machine translation outputs. Depfix employs a range of natural language processing tools to obtain analyses of the input sentences, and uses a set of rules to correct common or serious errors in machine translation outputs. Depfix is currently implemented only for the English-to-Czech translation direction, but extending it to other languages is planned.

A new knowledge representation system called LPS is presented. The global control structure of LPS is rule-based, but the local representational structure is schema-oriented. The present version of LPS was designed to increase the understandability of the representation while keeping time efficiency reasonable. Among the implemented facilities of LPS, pattern matching through slot-networks and meta-actions are described in particular detail.

Microarray and beadchip are two of the most efficient techniques for measuring gene expression and methylation data in bioinformatics. Biclustering deals with the simultaneous clustering of genes and samples. In this article, we propose a computational rule mining framework, StatBicRM (i.e., statistical biclustering-based rule mining), to identify special types of rules and potential biomarkers using integrated approaches of statistical and binary inclusion-maximal biclustering techniques on biological datasets. At first, a novel statistical strategy is utilized to eliminate insignificant/low-significant/redundant genes in such a way that the significance level satisfies the data distribution property (viz., either normal distribution or non-normal distribution). The data is then discretized and post-discretized, consecutively. Thereafter, the biclustering technique is applied to identify maximal frequent closed homogeneous itemsets. Corresponding special types of rules are then extracted from the selected itemsets. Our proposed rule mining method performs better than other rule mining algorithms as it generates maximal frequent closed homogeneous itemsets instead of frequent itemsets. Thus, it saves elapsed time and can work on big datasets. Pathway and Gene Ontology analyses are conducted on the genes of the evolved rules using the DAVID database. Frequency analysis of the genes appearing in the evolved rules is performed to determine potential biomarkers. Furthermore, we also classify the data to know how accurately the evolved rules are able to describe the remaining test (unknown) data. Subsequently, we also compare the average classification accuracy, and other related factors, with other rule-based classifiers. Statistical significance tests are also performed for verifying the statistical relevance of the comparative results. Here, each of the other rule mining methods or rule-based classifiers also starts with the same post-discretized data.

Inappropriate selection of medicines will lead to medicine stock-outs, which has an impact on medical services and economic value in a hospital. The importance of an appropriate medicine selection process requires an automated way to select medicines based on the development of the patient's illness. In this study, we analyzed patient prescriptions to identify the relationship between the disease and the medicine used by the physician in treating the patient's illness. The analytical framework includes: (1) patient prescription data collection, (2) applying k-means clustering to classify the top 10 diseases, (3) applying the Apriori algorithm to find association rules based on support, confidence and lift values. In tests on patient prescription datasets from 2015-2016, the application of the k-means algorithm for the clustering of the 10 dominant diseases significantly affects the confidence and support values of all association rules of the Apriori algorithm, making it more consistent in finding association rules between diseases and related medicines. The support, confidence and lift values of a disease and its related medicines can be used as recommendations for appropriate medicine selection. Based on the disease progression conditions in the hospital, medicine procurement can thus be made more optimal.

A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language BioNetGen, and its spatial extension SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models.

Constraint Handling Rules (CHR) is an extension to Prolog which opens up a spectrum of hypotheses-based reasoning in logic programs without additional interpretation overhead. Abduction with integrity constraints is one example of hypotheses-based reasoning which can be implemented directly in Prolog and CHR with a straightforward use of available and efficiently implemented facilities. The present paper clarifies the semantic foundations for this way of doing abduction in CHR and Prolog, as well as other examples of hypotheses-based reasoning that are possible, including assumptive logic...

Conflict management in Dempster-Shafer theory (D-S theory) is a hot topic in information fusion. In this paper, a novel weighted evidence combination rule based on evidence distance and an uncertainty measure is proposed. The proposed approach consists of two steps. First, the weight is determined based on the evidence distance. Then, the weight value obtained in the first step is modified by taking advantage of the uncertainty measure. Our proposed method can efficiently handle highly conflicting evidence with better convergence performance. A numerical example and an application based on sensor fusion in fault diagnosis are given to demonstrate the efficiency of the proposed method.
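The distance-based weighting is the paper's contribution; the classical Dempster rule of combination that it modifies can be sketched as follows, with focal elements represented as frozensets:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose keys
    are frozenset focal elements; the mass assigned to conflicting
    (empty-intersection) pairs is renormalised away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass lost to contradictory evidence
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}
```

With highly conflicting sources, `conflict` approaches 1 and the renormalisation magnifies small residual agreements, which is exactly the pathology that weighted combination rules such as the one proposed here aim to tame.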

A content-based image retrieval (CBIR) method for T1-weighted contrast-enhanced MRI (CE-MRI) images of brain tumors is presented for diagnosis aid. The method is thoroughly evaluated on a large image dataset. Using the tumor region as a query, the authors' CBIR system attempts to retrieve tumors of the same pathological category. Aside from commonly used features such as intensity, texture, and shape, the authors use a margin information descriptor (MID), which is capable of describing the characteristics of tissue surrounding a tumor, for representing image contents. In addition, the authors designed a distance metric learning algorithm called Maximum mean average Precision Projection (MPP) to maximize the smooth approximated mean average precision (mAP) to optimize retrieval performance. The effectiveness of the MID and MPP algorithms was evaluated using a brain CE-MRI dataset consisting of 3108 2D scans acquired from 235 patients with three categories of brain tumors (meningioma, glioma, and pituitary tumor). By combining MID and other features, the mAP of retrieval increased by more than 6% with the learned distance metrics. The distance metric learned by MPP significantly outperformed the other two existing distance metric learning methods in terms of mAP. The CBIR system using the proposed strategies achieved a mAP of 87.3% and a precision of 89.3% when the top 10 images were returned by the system. Compared with the scale-invariant feature transform, the MID, which uses the intensity profile as descriptor, achieves better retrieval performance. Incorporating tumor margin information represented by MID with the distance metric learned by the MPP algorithm can substantially improve the retrieval performance for brain tumors in CE-MRI.

For transcriptomic analysis, there are numerous microarray-based genomic data, especially those generated for cancer research. The typical analysis measures the difference between a cancer sample-group and a matched control group for each transcript or gene. Association rule mining is used to discover interesting item sets through a rule-based methodology, and thus has advantages in finding causal-effect relationships between the transcripts. In this work, we introduce two new rule-based similarity measures, the weighted rank-based Jaccard and Cosine measures, and then propose a novel computational framework to detect condensed gene co-expression modules (ConGEMs) through an association rule-based learning system and the weighted similarity scores. In practice, the list of evolved condensed markers, which consists of both singular and complex markers in nature, depends on the corresponding condensed gene sets in either the antecedent or the consequent of the rules of the resultant modules. In our evaluation, these markers could be supported by literature evidence, KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway and Gene Ontology annotations. Specifically, we preliminarily identified differentially expressed genes using an empirical Bayes test. A recently developed algorithm, RANWAR, was then utilized to determine the association rules from these genes. Based on that, we computed the integrated similarity scores of these rule-based similarity measures between each rule-pair, and the resultant scores were used for clustering to identify the co-expressed rule-modules. We applied our method to a gene expression dataset for lung squamous cell carcinoma and a genome methylation dataset for uterine cervical carcinogenesis. Our proposed module discovery method produced better results than the traditional gene-module discovery measures. In summary, our proposed rule-based method is useful for exploring biomarker modules from transcriptomic data.
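A minimal sketch of a rank-weighted Jaccard similarity between the gene sets of two rules can illustrate the idea. The 1/rank weighting below is a hypothetical choice for illustration only, not necessarily the paper's exact weighted rank-based measure:

```python
def weighted_jaccard(genes_a, genes_b, rank):
    """Illustrative rank-weighted Jaccard similarity between the gene
    sets of two association rules. Each gene g contributes weight
    1/rank[g], so highly ranked (low-rank-number) genes dominate the
    score; the actual ConGEMs weighting may differ."""
    a, b = set(genes_a), set(genes_b)
    inter = sum(1.0 / rank[g] for g in a & b)
    union = sum(1.0 / rank[g] for g in a | b)
    return inter / union if union else 0.0
```

Pairwise scores like this, computed over every rule-pair, form the similarity matrix that is then clustered to obtain co-expressed rule-modules.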

Making accurate judgments is a core human competence and a prerequisite for success in many areas of life. Plenty of evidence exists that people can employ different judgment strategies to solve identical judgment problems. In categorization, it has been demonstrated that similarity-based and rule-based strategies are associated with activity in different brain regions. Building on this research, the present work tests whether solving two identical judgment problems recruits different neural substrates depending on people's judgment strategies. Combining cognitive modeling of judgment strategies at the behavioral level with functional magnetic resonance imaging (fMRI), we compare brain activity when using two archetypal judgment strategies: a similarity-based exemplar strategy and a rule-based heuristic strategy. Using an exemplar-based strategy should recruit areas involved in long-term memory processes to a larger extent than a heuristic strategy. In contrast, using a heuristic strategy should recruit areas involved in the application of rules to a larger extent than an exemplar-based strategy. Largely consistent with our hypotheses, we found that using an exemplar-based strategy led to relatively higher BOLD activity in the anterior prefrontal and inferior parietal cortex, presumably related to retrieval and selective attention processes. In contrast, using a heuristic strategy led to relatively higher activity in areas in the dorsolateral prefrontal and the temporal-parietal cortex associated with cognitive control and information integration. Thus, even when people solve identical judgment problems, different neural substrates can be recruited depending on the judgment strategy involved.

A classification rule set, comprising features and decision rules, is important for land cover classification. The selection of features and decision rules is commonly based on an iterative trial-and-error approach in GEOBIA; however, this is time-consuming and has poor versatility. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets effectively, overcoming the iterative trial-and-error approach. Human knowledge is used to address the shortcoming of existing machine learning methods, namely insufficient use of prior knowledge, and to improve the versatility of the rule sets. A two-step workflow is introduced: firstly, an initial rule is built based on Random Forest and a CART decision tree; secondly, the initial rule is analysed and validated based on human knowledge, where we use a statistical confidence interval to determine its threshold. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are static features for different land cover classes.

In this paper, a technique based on image pyramids and the Bayes rule for reducing noise effects in unsupervised change detection is proposed. By using a Gaussian pyramid to process two multitemporal images respectively, two image pyramids are constructed. Difference pyramid images are obtained by point-by-point subtraction between the same-level images of the two pyramids. By resizing all difference pyramid images to the size of the original multitemporal image and then taking the point-by-point product among them, a map similar to the difference image is obtained. The difference image itself is generated by point-by-point subtraction between the two multitemporal images directly. Finally, the Bayes rule is used to distinguish the changed pixels. Both synthetic and real data sets are used to evaluate the performance of the proposed technique. Experimental results show that the map from the proposed technique is more robust to noise than the difference image.

In commerce, businesses use branding to differentiate their product and service offerings from those of their competitors. The brand incorporates a set of product or service features that are associated with that particular brand name and identifies the product/service segmentation in the market. This study proposes a new data mining approach, rough set-based association rule induction, implemented on a brand trust evaluation model. It is also presented as one way to deal with data uncertainty when analysing ratio-scale data, while creating predictive if-then rules that generalise data values to the retail region. As such, this study uses the analysis of the algorithms to find alcoholic beverage brand trust recall. Finally, a discussion and conclusions are presented for further managerial implications.

Data from an audit relating to transfusion decisions during intermediate or major surgery were analysed to determine the strengths of certain factors in the decision making process. The analysis, using orthogonal search-based rule extraction (OSRE) from a trained neural network, demonstrated that the risk of tissue hypoxia (ROTH) assessed using a 100-mm visual analogue scale, the haemoglobin value (Hb) and the presence or absence of on-going haemorrhage (OGH) were able to reproduce the transfusion decisions with a joint specificity of 0.96 and sensitivity of 0.93 and a positive predictive value of 0.9. The rules indicating transfusion were: 1. ROTH > 32 mm and Hb 13 mm and Hb 38 mm, Hb < 102 g x l(-1) and OGH; 4. Hb < 78 g x l(-1).

The Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) eliminated the flawed Sustainable Growth Rate (SGR) formula - a longstanding crucial issue of concern for health care providers and Medicare beneficiaries. MACRA also included a quality improvement program entitled "The Merit-Based Incentive Payment System," or MIPS. The proposed rule of MIPS sought to streamline existing federal quality efforts and therefore linked 4 distinct programs into one. Three existing programs - meaningful use (MU), the Physician Quality Reporting System (PQRS), and the value-based payment (VBP) system - were merged, with the addition of the Clinical Improvement Activity category. The proposed rule also changed the name of MU to Advancing Care Information, or ACI. ACI contributes 25% of the composite score of the four programs; PQRS contributes 50%; the VBP system, which deals with resource use or cost, contributes 10%; and the newest category, Improvement Activities or IA, contributes 15%. The proposed rule also created what it called a design incentive that drives movement to delivery system reform principles with the inclusion of Advanced Alternative Payment Models (APMs). Following the release of the proposed rule, the medical community, as well as Congress, provided substantial input to the Centers for Medicare and Medicaid Services (CMS), expressing their concern. The American Society of Interventional Pain Physicians (ASIPP) focused on 3 important aspects: delay the implementation, provide a 3-month performance period, and provide the ability to submit meaningful quality measures in a timely and economic manner. The final rule accepted many of the comments from various organizations, including several of those specifically emphasized by ASIPP, with acceptance of a 3-month reporting period, as well as the ability to submit non-MIPS measures to improve real quality and make the system meaningful. CMS also provided a mechanism for

We present a model for Charguéraud and Pottier's type and capability system including both frame and anti-frame rules. The model is a possible worlds model based on the operational semantics and step-indexed heap relations, and the worlds are constructed as a recursively defined predicate on a recursively defined metric space. We also extend...

for Charguéraud and Pottier's type and capability system including frame and anti-frame rules, based on the operational semantics and step-indexed heap relations. The worlds are constructed as a recursively defined predicate on a recursively defined metric space, which provides a considerably simpler...

The computer-aided detection of cardiac arrhythmias is still a crucial application in medical technologies. Rule-based systems (RBS) ensure a high level of transparency and interpretability of the obtained results, facilitating the cardiologists' diagnosis and reducing its uncertainty. In this research article, we have realized a classification and automatic recognition of cardiac arrhythmias by using XML rules that represent the cardiologist's knowledge. Thirteen experiments with different knowledge bases were realized to improve the performance of the method in the detection of 13 cardiac arrhythmias. In the first 12 experiments, we designed a specialized knowledge base for each cardiac arrhythmia, containing just one arrhythmia detection rule. In the last experiment, we applied a knowledge base containing the rules of 12 arrhythmias. For the experiments, we used an international data set with 279 features and 452 records characterizing the 12 leads of the ECG signal and social information of patients. The data sets were constructed and published at Bilkent University of Ankara, Turkey. In addition, the second version of the self-developed software "XMLRULE" was used; the software can infer more than one class and facilitates the interpretability of the obtained results. The first 12 experiments gave 82.80% correct detection as the mean over all experiments; the results were between 19% and 100%, with a low rate in just one experiment. In the last experiment, in which all arrhythmias are considered, the correct detection rate was 38.33%, with 90.55% sensitivity and 46.24% specificity. These results clearly show that a good choice of the classification model is very beneficial in terms of performance. The obtained results were better than the published results with other computational methods for mono-class detection, but worse in multi-class detection. The RBS is the most transparent method for

Analysis of the Arabic language has become a necessity because of its big evolution; we propose in this paper a rule-based root extraction method for Arabic text to solve some weaknesses found in previous research works. Our approach is divided into a preprocessing phase, in which we proceed to the tokenization of the text and format it by removing any punctuation, diacritics and non-letter characters; a treatment phase based on the elimination of several sets of affixes (diacritics, prefixes, and suffixes) and on the application of several patterns; and a check phase that verifies whether the extracted root is correct, by searching for the result in root dictionaries.

The advantages provided by photonic computing have been well documented. An optical arithmetic processor has to take full advantage of the massive parallelism in optical signals. Such a processor, using the Modified Signed-Digit (MSD) number representation, is presented here based on symbolic substitution logic. Higher-order symbolic substitution rules are formulated for the addition operation, which is carried out in just two steps. Based on the addition operation, the other arithmetic operations (subtraction, multiplication and division) are implemented. Finally, the usefulness of this MSD system is studied.
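The two-step, carry-free character of MSD addition can be sketched in software. The digit-rewriting rules below are one standard MSD rule set with a one-position lookahead; they are not necessarily the exact substitution rules of this paper, whose optical implementation performs the same rewriting in parallel.

```python
# Sketch of carry-free two-step MSD addition (digits in {-1, 0, 1}, least
# significant digit first). Stage 1 rewrites each digit pair into a transfer
# t and interim sum w using a one-position lookahead so that stage 2
# (s_i = w_i + t_{i-1}) never overflows; hence exactly two parallel steps.

def msd_add(a, b):
    n = max(len(a), len(b)) + 1
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    t, w = [0] * n, [0] * n
    for i in range(n):
        x = a[i] + b[i]
        lower = a[i - 1] + b[i - 1] if i > 0 else 0
        if x == 2:    t[i], w[i] = 1, 0
        elif x == 1:  t[i], w[i] = (1, -1) if lower >= 0 else (0, 1)
        elif x == -1: t[i], w[i] = (0, -1) if lower >= 0 else (-1, 1)
        elif x == -2: t[i], w[i] = -1, 0
    # stage 2: digit-parallel addition, no carry propagation by construction
    return [w[i] + (t[i - 1] if i > 0 else 0) for i in range(n)] + [t[n - 1]]

def msd_to_int(d):
    return sum(digit * 2**i for i, digit in enumerate(d))

a, b = [1, -1, 1], [1, 0, 1]      # 3 and 5, LSD first
print(msd_to_int(msd_add(a, b)))  # 8
```

The lookahead guarantees that an interim digit w_i = 1 can only meet a transfer t_{i-1} ≤ 0 (and w_i = -1 only t_{i-1} ≥ 0), which is why the second stage is a purely local, constant-time operation regardless of word length.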

One hundred and twenty-four high school students were randomly assigned to four groups: 33 subjects memorized the rule statement before, 29 subjects memorized the rule statement during, and 30 subjects memorized the rule statement after instruction in rule application skills. Thirty-two subjects were not required to memorize rule statements.…

The opportunistic replacement of multiple Life-Limited Parts (LLPs) is a problem widely existing in industry. The replacement strategy for LLPs has a great impact on the total maintenance cost of much equipment. This article focuses on finding a quick and effective algorithm for this problem. To improve algorithm efficiency, six reduction rules are suggested from the perspectives of solution feasibility, determination of the replacement of LLPs, determination of the maintenance occasion, and solution optimality. Based on these six reduction rules, a search algorithm is proposed that can identify one or several optimal solutions. A numerical experiment shows that the six reduction rules are effective, and that the time consumed by the algorithm is less than 38 s if the total life of the equipment is shorter than 55000 and the number of LLPs is less than 11. A specific case shows that the algorithm obtains, within 10 s, optimal solutions that are much better than the result of the traditional method, and that it can provide support for determining the to-be-replaced LLPs when defining the maintenance workscope of an aircraft engine. The algorithm is therefore applicable to engineering applications concerning opportunistic replacement of multiple LLPs in aircraft engines.

The QCD sum rule was developed about thirty years ago and has been used up to the present to calculate various physical quantities of hadrons. Conventional analyses, however, have had to assume a 'pole + continuum' form for the spectral function, so the method runs into difficulties when this assumption is not satisfied. To avoid this difficulty, an analysis making use of the maximum entropy method (MEM) has been developed by the present author. It is reported here how far this new method can be successfully applied. In the first section, the general features of the QCD sum rule are introduced. In section 2, it is discussed why analysis by the QCD sum rule based on the MEM is so effective. In section 3, the MEM analysis process is described: in subsection 3.1 the likelihood function and prior probability are considered, and in subsection 3.2 numerical analyses are taken up. In section 4, some applications are described, starting with ρ mesons, then charmonium at finite temperature, and finally recent developments; some figures of the spectral functions are shown. In section 5, a summary of the present analysis method and an outlook are given. (S. Funahashi)

In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos, using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.

Acute administration of N-methyl-D-aspartate receptor (NMDAR) antagonists in healthy humans and animals produces working memory deficits similar to those observed in schizophrenia. However, it is unclear whether they also lead to altered low-frequency (…) rule-based prosaccade and antisaccade working memory task, both before and after systemic injections of a subanesthetic dose (…) delay periods and inter-trial intervals. It also increased task-related alpha-band activities, likely reflecting compromised attention. Beta-band oscillations may be especially relevant to working memory processes, as stronger beta power weakly but significantly predicted shorter saccadic reaction times. Also in the beta band, ketamine reduced the performance-related oscillation as well as the rule information encoded in the spectral power. Ketamine also reduced rule information in the spike-field phase consistency at almost all frequencies up to 60 Hz. Our findings support NMDAR antagonists in non-human primates as a meaningful model for altered neural oscillations and synchrony, which reflect a disorganized network underlying the working memory deficits in schizophrenia. SIGNIFICANCE STATEMENT Low doses of ketamine, an NMDA receptor blocker, produce working memory deficits similar to those observed in schizophrenia. In the LPFC, a key brain region for working memory, we found that ketamine altered neural oscillatory activities in ways similar to those that differentiate schizophrenic patients from healthy subjects, during both task and non-task periods. Ketamine induced stronger gamma (30-60 Hz) and weaker beta (13-30 Hz) oscillations, reflecting local hyperactivity and reduced long-range communication. Furthermore, ketamine reduced performance-related oscillatory activities, as well as the rule information encoded in the oscillations and in the synchrony between single-cell activities and oscillations. The ketamine model helps link the molecular and cellular basis of neural oscillatory changes to the working…

In recent years, the solar photovoltaic (PV) system has presented itself as one of the main solutions to the electricity poverty plaguing the majority of buildings in rural communities with solar energy potential. However, the stochasticity of solar PV power output, owing to vagaries in weather conditions, is a major challenge in the deployment of these systems. This study investigates an approach for maximizing the benefits of a Stand-Alone Photovoltaic-Battery (SAPVB) system via techniques that provide for optimum energy harvesting and management. A rule-based load management scheme is developed and tested for a residential building. The approach allows load prioritizing and shifting based on certain rules. To achieve this, the residential loads are classified into Critical Loads (CLs) and Uncritical Loads (ULs). The CLs are given higher priority and are therefore allowed to operate at their scheduled time, while the ULs, being of lower priority, can be shifted to a time when there is enough electric power generation from the PV arrays rather than being operated at the time period set by the user. Four scenarios were created to give insight into the applicability of the proposed rule-based load management scheme. The results revealed that when the load management technique is not utilized, as in scenario 1 (base case), the percentage satisfaction of the critical and uncritical loads by the PV system is 49.8% and 23.7%, respectively. With the implementation of the load management scheme, the percentage satisfaction of the loads (CLs, ULs) is (93.8%, 74.2%), (90.9%, 70.1%) and (87.2%, 65.4%) for scenarios 2, 3 and 4, respectively.
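The CL/UL prioritizing-and-shifting rules can be sketched as a toy scheduler; the hourly PV forecast, load powers and the "shift to the hour with most PV surplus" rule are illustrative assumptions, not the actual rules or data of the study.

```python
# Toy sketch of the rule-based scheme: critical loads (CLs) keep their
# scheduled hour; uncritical loads (ULs) are shifted to the hour with the
# largest remaining PV surplus. All power values and profiles are made up.

pv_forecast_kw = [0, 0, 1.2, 3.0, 4.5, 4.0, 2.0, 0.3]  # one value per hour

loads = [
    {"name": "fridge", "kw": 0.3, "hour": 1, "critical": True},
    {"name": "pump",   "kw": 1.0, "hour": 0, "critical": False},
    {"name": "washer", "kw": 1.5, "hour": 2, "critical": False},
]

def schedule(loads, pv):
    surplus = list(pv)
    plan = {}
    # rule 1: critical loads keep their slot regardless of PV availability
    for ld in (l for l in loads if l["critical"]):
        plan[ld["name"]] = ld["hour"]
        surplus[ld["hour"]] -= ld["kw"]
    # rule 2: each uncritical load moves to the hour with most surplus
    for ld in (l for l in loads if not l["critical"]):
        best = max(range(len(surplus)), key=lambda h: surplus[h])
        plan[ld["name"]] = best
        surplus[best] -= ld["kw"]
    return plan

print(schedule(loads, pv_forecast_kw))
```

With these numbers the pump and washer migrate from the early, zero-generation hours to the midday PV peak, which is exactly the kind of shift that raises the percentage satisfaction figures reported above.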

Multiagent systems and data mining have recently attracted considerable attention in the field of computing. Reinforcement learning is the most commonly used learning process for multiagent systems. However, it still has some drawbacks, including modeling other learning agents present in the domain as part of the state of the environment, and the fact that some states are experienced much less than others, or some state-action pairs are never visited during the learning phase. Further, before the learning process is complete, an agent cannot exhibit a certain behavior even in states that may have been experienced sufficiently. In this study, we propose a novel multiagent learning approach to handle these problems. Our approach is based on utilizing the mining process for modular cooperative learning systems. It incorporates fuzziness and online analytical processing (OLAP) based mining to effectively process the information reported by agents. First, we describe a fuzzy data cube OLAP architecture which facilitates effective storage and processing of the state information reported by agents. This way, the action of another agent, even one outside the visual environment of the agent under consideration, can simply be predicted by extracting online association rules, a well-known data mining technique, from the constructed data cube. Second, we present a new action selection model, which is also based on association rule mining. Finally, we generalize insufficiently experienced states by mining multilevel association rules from the proposed fuzzy data cube. Experimental results obtained on two different versions of a well-known pursuit domain show the robustness and effectiveness of the proposed fuzzy OLAP mining based modular learning approach. Finally, we tested the scalability of the approach presented in this paper and compared it with our previous work on modular-fuzzy Q-learning and ordinary Q-learning.

Rule-based systems constitute a powerful tool for the specification of knowledge in the design and implementation of knowledge-based systems. They also provide a universal programming paradigm for domains such as intelligent control, decision support, situation classification and operational knowledge encoding. In order to assure safe and reliable performance, such systems should satisfy certain formal requirements, including completeness and consistency. This paper addresses the analysis and verification of selected properties of a class of such systems in a systematic way. A uniform, tabular scheme of single-level rule-based systems is considered. Such systems can be applied as a generalized form of databases for the specification of data patterns (unconditional knowledge), or can be used for defining attributive decision tables (conditional knowledge in the form of rules). They can also serve as lower-level components of hierarchical, multi-level control and decision support knowledge-based systems. An algebraic knowledge representation paradigm using an extended tabular representation, similar to relational database tables, is presented, and algebraic bases for system analysis, verification and design support are outlined.
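The completeness and consistency checks named above can be sketched for a tiny attributive decision table; the attribute domains, rules and enumeration-based checking are illustrative (the paper's algebraic method works symbolically rather than by enumeration).

```python
# Sketch of completeness/consistency checking for a single-level attributive
# rule table: completeness = every combination of attribute values is covered
# by some rule; consistency = no two overlapping rules prescribe different
# decisions. Domains and rules below are invented for illustration.
from itertools import product

DOMAINS = {"temp": ["low", "high"], "pressure": ["low", "high"]}

# each rule: (precondition dict, decision); omitted attributes match anything
RULES = [
    ({"temp": "low"}, "heat"),
    ({"temp": "high", "pressure": "low"}, "vent"),
    ({"temp": "high", "pressure": "high"}, "shutdown"),
]

def covers(pre, state):
    return all(state[a] == v for a, v in pre.items())

def all_states():
    names = list(DOMAINS)
    return [dict(zip(names, vals))
            for vals in product(*(DOMAINS[n] for n in names))]

def uncovered_states():
    return [s for s in all_states()
            if not any(covers(pre, s) for pre, _ in RULES)]

def conflicts():
    out = []
    for s in all_states():
        decisions = {d for pre, d in RULES if covers(pre, s)}
        if len(decisions) > 1:
            out.append((s, decisions))
    return out

print(uncovered_states())  # [] -> table is complete
print(conflicts())         # [] -> table is consistent
```

Deleting the last rule would make `uncovered_states()` report the (high, high) input, which is precisely the kind of gap the formal verification is meant to expose before deployment.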

Healthy older adults typically perform worse than younger adults at rule-based category learning, but better than patients with Alzheimer's or Parkinson's disease. To further investigate aging's effect on rule-based category learning, we monitored event-related potentials (ERPs) while younger and neuropsychologically typical older adults performed a visual category-learning task with a rule-based category structure and trial-by-trial feedback. Using these procedures, we previously identified ERPs sensitive to categorization strategy and accuracy in young participants. In addition, previous studies have demonstrated the importance of neural processing in the prefrontal cortex and the medial temporal lobe for this task. In this study, older adults showed lower accuracy and longer response times than younger adults, but there were two distinct subgroups of older adults. One subgroup showed near-chance performance throughout the procedure, never categorizing accurately. The other subgroup reached asymptotic accuracy that was equivalent to that in younger adults, although they categorized more slowly. These two subgroups were further distinguished via ERPs. Consistent with the compensation theory of cognitive aging, older adults who successfully learned showed larger frontal ERPs when compared with younger adults. Recruitment of prefrontal resources may have improved performance while slowing response times. Additionally, correlations of feedback-locked P300 amplitudes with category-learning accuracy differentiated successful younger and older adults. Overall, the results suggest that the ability to adapt one's behavior in response to feedback during learning varies across older individuals, and that the failure of some to adapt their behavior may reflect inadequate engagement of prefrontal cortex. PMID:26422522

There is a need for a fast, easy and accurate classification system for oil palm fruit ripeness. Such a system will be invaluable to farmers and plantation managers who need to sell their oil palm fresh fruit bunches (FFB) to the mill, as it will help avoid disputes. In this paper, a new approach was developed, an expert rule-based system built on the image processing results of three different oil palm FFB regions of interest (ROIs), namely ROI1 (300x300 pixels), ROI2 (50x50 pixels) and ROI3 (100x100 pixels). The best rule-based ROIs for statistical colour feature extraction with the k-nearest neighbours (KNN) classifier, at 94%, were chosen, and the ROIs whose results matched or exceeded this rule-based outcome, such as the ROIs of statistical colour feature extraction with an artificial neural network (ANN) classifier at 94%, were selected for the further FFB ripeness inspection system.
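The statistical-colour-feature + KNN pipeline can be sketched as follows; the per-ROI feature values (mean RGB) and labels are invented for illustration, not measurements from the study.

```python
# Minimal sketch of the colour-feature + KNN step: each ROI is summarized by
# mean RGB values and classified by majority vote among the k nearest
# training samples. All feature values here are illustrative.
from collections import Counter
import math

train = [  # (mean_R, mean_G, mean_B) per ROI, with a ripeness label
    ((180, 60, 30), "ripe"), ((190, 70, 35), "ripe"),
    ((90, 40, 120), "unripe"), ((85, 45, 110), "unripe"),
]

def knn_classify(feature, train, k=3):
    nearest = sorted(train, key=lambda s: math.dist(feature, s[0]))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

print(knn_classify((175, 65, 40), train))  # 'ripe'
```

In practice the feature vector would also include higher-order statistics (variance, skewness) per colour channel and per ROI size, which is what the compared 94%-accuracy configurations vary.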

The evaluation of exposure to extremely low-frequency (ELF) magnetic fields using broadband measurement techniques gives satisfactory results when the field has essentially a single frequency. Nevertheless, magnetic fields are in most cases distorted by harmonic components. This work analyses the harmonic components of the ELF magnetic field in an outdoor urban context and compares the evaluation of the exposure based on broadband measurements with that based on spectral analysis. The multiple frequency rule of the International Commission on Non-ionizing Radiation Protection (ICNIRP) regulatory guidelines was applied. With the 1998 ICNIRP guideline, harmonics dominated the exposure with a 55 % contribution. With the 2010 ICNIRP guideline, however, the primary frequency dominated the exposure with a 78 % contribution. Values of the exposure based on spectral analysis were significantly higher than those based on broadband measurements. Hence, it is clearly necessary to determine the harmonic components of the ELF magnetic field to assess exposure in urban contexts. (authors)
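The ICNIRP multiple frequency rule applied in this work sums the exposure quotients of all spectral components and requires the sum not to exceed unity. The sketch below uses invented reference levels; the real RL(f) curves come from the ICNIRP 1998 and 2010 guidelines.

```python
# Sketch of the ICNIRP multiple frequency rule: sum over spectral components
# of B_j / RL(f_j); the exposure is compliant if the sum is <= 1. Reference
# levels below are placeholders, NOT the actual ICNIRP values.

# measured magnetic flux density per harmonic, in microtesla (illustrative)
components_uT = {50: 2.0, 150: 0.6, 250: 0.3}

# placeholder frequency-dependent reference levels RL(f), in microtesla
reference_uT = {50: 100.0, 150: 33.3, 250: 20.0}

def exposure_index(components, reference):
    """Sum of B_j / RL(f_j); compliant if the result is <= 1."""
    return sum(b / reference[f] for f, b in components.items())

ei = exposure_index(components_uT, reference_uT)
print(f"exposure index = {ei:.3f}, compliant = {ei <= 1.0}")
```

Because RL(f) decreases with frequency in the ELF range, harmonics contribute disproportionately to the index relative to their amplitude, which is why the spectral evaluation exceeds the broadband one in the study.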

We present the concept of typelets, a specification technique for dynamic graphical user interfaces (GUIs) based on types. The technique is implemented in a dialect of ML, called MLFi (MLFi is a derivative of OCaml, extended by LexiFi with extensions targeted at the financial industry), which...... specification language allows layout programmers (e.g., end-users) to reorganize layouts in a type-safe way without being allowed to alter the rule machinery. The resulting framework is highly flexible and allows for creating highly maintainable modules. It is used with success in the context of SimCorp's high...

In this paper a new approach to the formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.

The Core Protection Calculator System (CPCS) was developed by Combustion Engineering Company to initiate a reactor trip under the circumstances of certain transients. The major function of the CPCS is to generate contact outputs for the Departure from Nucleate Boiling Ratio (DNBR) trip and the Local Power Density (LPD) trip. In the CPCS, however, the trip causes cannot be identified; only the trip status is displayed, so it may take much time and effort for the plant operator to analyse the trip causes. Therefore, the Cause Analysis System for the CPCS (CASCPCS) has been developed using a rule-based deduction method to aid operators in nuclear power plants.

Developments in technology have increased the importance of safety management in working life. These developments have also increased the investment in, and reliance on, humans in work systems. Here we face this problem: can we forecast the accidents that workers may face, and prevent them by taking the necessary precautions? In this study, we aimed to develop a rule-based system to forecast occupational accidents in coming periods at the departments of facilities in hazardous work systems. The validity of the developed system was demonstrated by putting it into practice in hazardous work systems in the manufacturing industry.

Understanding the causes of rainfall anomalies in the West African Sahel in order to effectively predict drought events remains a challenge. The physical mechanisms that influence precipitation in this region are complex, uncertain, and imprecise in nature. Fuzzy logic techniques are known to be highly efficient in modeling such dynamics. This paper attempts to forecast meteorological drought in western Niger using fuzzy rule-based modeling techniques. The 3-month scale standardized precipitation index (SPI-3) of four rainfall stations was used as the predictand. Monthly data on the southern oscillation index (SOI), South Atlantic sea surface temperature (SST), relative humidity (RH), and Atlantic sea level pressure (SLP), sourced from the National Oceanic and Atmospheric Administration (NOAA), were used as predictors. Fuzzy rules and membership functions were generated using a fuzzy c-means clustering approach, expert decision, and literature review. For a minimum lead time of 1 month, the model has a coefficient of determination R² between 0.80 and 0.88, a mean square error (MSE) below 0.17, and a Nash-Sutcliffe efficiency (NSE) ranging between 0.79 and 0.87. The empirical frequency distributions of the predicted and observed drought classes are equal at the 99% confidence level based on a two-sample t test. Results also revealed a discrepancy in the influence of SOI and SLP on drought occurrence at the four stations, while the effects of SST and RH are space independent, both being significantly correlated (at α …). The fuzzy rule-based forecast model shows better forecast skills.
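The fuzzy rule-based forecasting idea can be sketched with two toy rules; the predictor ranges, membership functions, rules and output classes are illustrative assumptions only, not those of the actual model (which derives them from fuzzy c-means clustering and expert knowledge).

```python
# Toy sketch of fuzzy rule-based inference: triangular membership functions
# fuzzify the predictors, rules fire with min-AND strength, and the class
# with the strongest firing wins. Everything below is illustrative.

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_spi_class(soi, sst_anom):
    low_soi  = tri(soi, -3.0, -1.5, 0.0)
    warm_sst = tri(sst_anom, 0.0, 1.0, 2.0)
    high_soi = tri(soi, 0.0, 1.5, 3.0)
    cool_sst = tri(sst_anom, -2.0, -1.0, 0.0)
    rules = {
        "drought": min(low_soi, warm_sst),   # R1: low SOI AND warm SST
        "wet":     min(high_soi, cool_sst),  # R2: high SOI AND cool SST
    }
    label = max(rules, key=rules.get)
    return label, rules[label]

print(predict_spi_class(soi=-1.2, sst_anom=0.8))
```

A production model would carry many such rules (one per fuzzy cluster), defuzzify into a continuous SPI-3 value, and be validated against the R², MSE and NSE figures reported above.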

It is commonly assumed that belief-based reasoning is fast and automatic, whereas rule-based reasoning is slower and more effortful. Dual-Process theories of reasoning rely on this speed-asymmetry explanation to account for a number of reasoning phenomena, such as base-rate neglect and belief-bias. The goal of the current study was to test this…

This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimal user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied to a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers in the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC has better generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

Bioindicators, based on a large variety of organisms, have been increasingly used in the assessment of the status of aquatic systems. In marine coastal waters, seagrasses have shown great potential as bioindicator organisms, probably due to both their environmental sensitivity and the large amount of knowledge available. However, as far as we are aware, only little attention has been paid to euryhaline species suitable for biomonitoring both transitional and marine waters. With the aim of contributing to this expanding field and providing new and useful tools for managers, we develop here a multi-bioindicator index based on the seagrass Cymodocea nodosa. We first compiled from the literature a suite of 54 candidate metrics, i.e., measurable attributes of the concerned organism or community that adequately reflect properties of the environment, obtained from C. nodosa and its associated ecosystem and putatively responding to environmental deterioration. We then evaluated them empirically, obtaining a complete dataset on these metrics along a gradient of anthropogenic disturbance. Using this dataset, we selected the metrics to construct the index using, successively: (i) ANOVA, to assess their capacity to discriminate among sites of different environmental conditions; (ii) PCA, to check the existence of a common pattern among the metrics reflecting the environmental gradient; and (iii) feasibility and cost-effectiveness criteria. Finally, 10 metrics (out of the 54 tested), encompassing from the physiological (δ15N, δ34S, % N, % P content of rhizomes), through the individual (shoot size) and the population (root weight ratio), to the community (epiphyte load) organisation levels, together with some metallic pollution descriptors (Cd, Cu and Zn content of rhizomes), were retained and integrated into a single index (CYMOX) using the scores of the sites on the first axis of a PCA. These scores were reduced to a 0-1 (Ecological Quality Ratio) scale by referring the values to the…
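The final rescaling step of such an index can be sketched as min-max normalization of the first-axis PCA scores to the 0-1 Ecological Quality Ratio range; the site scores below are invented, and the choice of reference and worst-case anchors is an assumption for illustration.

```python
# Sketch of the last CYMOX-style step: site scores on the first PCA axis are
# rescaled to a 0-1 Ecological Quality Ratio (EQR) by min-max normalization
# against reference (best) and worst-case scores. Scores are illustrative.

pca_axis1_scores = {"site_A": 2.1, "site_B": 0.4, "site_C": -1.8}

def to_eqr(scores, worst=None, best=None):
    vals = scores.values()
    worst = min(vals) if worst is None else worst
    best = max(vals) if best is None else best
    return {site: (s - worst) / (best - worst) for site, s in scores.items()}

print(to_eqr(pca_axis1_scores))
# site_A -> 1.0 (reference condition), site_C -> 0.0 (most degraded)
```

Anchoring `worst` and `best` to fixed reference conditions rather than the observed minimum and maximum would make EQR values comparable across monitoring campaigns, which is the usual practice in Water Framework Directive indices.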

Up-to-date research in metric diffusion along compact foliations is presented in this book. Beginning with fundamentals from optimal transportation theory and the theory of foliations, the book moves on to cover the Wasserstein distance, the Kantorovich duality theorem, and the metrization of the weak topology by the Wasserstein distance. Metric diffusion is defined, the topology of the metric space is studied, and the limits of diffused metrics along compact foliations are discussed. Essentials on foliations, holonomy, heat diffusion, and compact foliations are detailed, and vital technical lemmas are proved to aid understanding. Graduate students and researchers in the geometry, topology and dynamics of foliations and laminations will find this supplement useful, as it presents facts about metric diffusion along non-compact foliations and provides a full description of the limit for metrics diffused along a foliation with at least one compact leaf in dimension two.

Aimed toward researchers and graduate students familiar with elements of functional analysis, linear algebra, and general topology; this book contains a general study of modulars, modular spaces, and metric modular spaces. Modulars may be thought of as generalized velocity fields and serve two important purposes: generate metric spaces in a unified manner and provide a weaker convergence, the modular convergence, whose topology is non-metrizable in general. Metric modular spaces are extensions of metric spaces, metric linear spaces, and classical modular linear spaces. The topics covered include the classification of modulars, metrizability of modular spaces, modular transforms and duality between modular spaces, metric and modular topologies. Applications illustrated in this book include: the description of superposition operators acting in modular spaces, the existence of regular selections of set-valued mappings, new interpretations of spaces of Lipschitzian and absolutely continuous mappings, the existe...

Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high-dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
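The blending idea behind Shepard-type interpolation can be sketched as follows. This is plain inverse-distance Shepard with constant local models; the modified Shepard method of the abstract attaches local linear or quadratic models to each node, and the paper's contribution additionally replaces the Euclidean distance with per-node anisotropic metrics and error-based weights, none of which are reproduced here.

```python
# Minimal sketch of Shepard interpolation: node values are blended with
# inverse-distance weights. In the modified Shepard method each node would
# carry a local polynomial model instead of a constant, and the distance
# could be a learned, node-specific (anisotropic) metric.
import math

nodes = [((0.0, 0.0), 1.0), ((1.0, 0.0), 2.0), ((0.0, 1.0), 3.0)]

def shepard(x, nodes, p=2):
    weights, total = [], 0.0
    for xi, fi in nodes:
        d = math.dist(x, xi)
        if d == 0:
            return fi                  # query exactly on a node
        w = d ** -p
        weights.append((w, fi))
        total += w
    return sum(w * fi for w, fi in weights) / total

print(round(shepard((0.5, 0.5), nodes), 3))  # 2.0 (equidistant from all nodes)
```

Because the weights decay as d^-p, far-away nodes contribute little, which is what lets local approximations dominate near their own node and keeps the scheme usable with modest numbers of function evaluations.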

Energy budget models for five marine ecosystems were compared to identify differences and similarities in trophic and community structure. We examined the Gulf of Maine and Georges Bank in the northwest Atlantic Ocean, the combined Norwegian/Barents Seas in the northeast Atlantic Ocean, and the eastern Bering Sea and the Gulf of Alaska in the northeast Pacific Ocean. Comparable energy budgets were constructed for each ecosystem by aggregating information for similar species groups into consistent functional groups. Several ecosystem indices (e.g., functional group production, consumption and biomass ratios, cumulative biomass, food web macrodescriptors, and network metrics) were compared for each ecosystem. The comparative approach clearly identified data gaps for each ecosystem, an important outcome of this work. Commonalities across the ecosystems included overall high primary production and energy flow at low trophic levels, high production and consumption by carnivorous zooplankton, and similar proportions of apex predator to lower trophic level biomass. Major differences included distinct biomass ratios of pelagic to demersal fish, ranging from highest in the combined Norwegian/Barents ecosystem to lowest in the Alaskan systems, and notable differences in primary production per unit area, highest in the Alaskan and Georges Bank/Gulf of Maine ecosystems, and lowest in the Norwegian ecosystems. While comparing a disparate group of organisms across a wide range of marine ecosystems is challenging, this work demonstrates that standardized metrics both elucidate properties common to marine ecosystems and identify key distinctions useful for fisheries management.

In order to ensure optimal quality of experience toward end users during video streaming, automatic video quality assessment becomes an important field-of-interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield comp...

Variability in the representation of the decision criterion is assumed in many category-learning models, yet few studies have directly examined its impact. On each trial, criterial noise should result in drift in the criterion and will negatively impact categorization accuracy, particularly in rule-based categorization tasks, where learning depends on the maintenance and manipulation of decision criteria. In three experiments, we tested this hypothesis and examined the impact of working memory on slowing the drift rate. In Experiment 1, we examined the effect of drift by inserting a 5-sec delay between the categorization response and the delivery of corrective feedback, and working memory demand was manipulated by varying the number of decision criteria to be learned. Delayed feedback adversely affected performance, but only when working memory demand was high. In Experiment 2, we built on a classic finding in the absolute identification literature and demonstrated that distributing the criteria across multiple dimensions decreases the impact of drift during the delay. In Experiment 3, we confirmed that the effect of drift during the delay is moderated by working memory. These results provide important insights into the interplay between criterial noise and working memory, as well as providing important constraints for models of rule-based category learning.

It is shown that gradually devitrified Co-based nonmagnetostrictive metallic glass is an excellent model material with which to verify Louis Néel's theory of the Rayleigh rule. In the course of his calculations, Néel showed that the parameter p = bHc/a (where Hc is the coercivity, and a and b are the coefficients of the quadratic polynomial expressing the Rayleigh rule) is expected to range between 0.6 (hard magnets) and 1.6 (soft magnets). However, the experimental values of this parameter reported in the literature for a number of mono- and poly-crystalline magnets are much greater than those expected from the theory presented by Néel (in some cases even by two orders of magnitude). The measurements, performed for a series of Co-based metallic glass samples annealed at gradually increasing temperatures to produce nanocrystalline structures with differentiated density and size of the crystallites, have shown that the calculated values of the parameter p fall within the range expected from Néel's theory.

Full Text Available Non-rigid multi-modal image registration plays an important role in medical image processing and analysis. Existing image registration methods based on similarity metrics such as mutual information (MI) and the sum of squared differences (SSD) cannot simultaneously achieve high registration accuracy and high registration efficiency. To address this problem, we propose a novel two-phase non-rigid multi-modal image registration method by combining Weber local descriptor (WLD) based similarity metrics with the normalized mutual information (NMI) using the diffeomorphic free-form deformation (FFD) model. The first phase aims at recovering the large deformation component using the WLD-based non-local SSD (wldNSSD) or weighted structural similarity (wldWSSIM). Based on the output of the former phase, the second phase is focused on obtaining accurate transformation parameters related to the small deformation using the NMI. Extensive experiments on T1-, T2- and PD-weighted MR images demonstrate that the proposed wldNSSD-NMI or wldWSSIM-NMI method outperforms registration methods based on the NMI, the conditional mutual information (CMI), the SSD on entropy images (ESSD) and the ESSD-NMI in terms of registration accuracy and computation efficiency.

National Aeronautics and Space Administration — This chapter presents several performance metrics for offline evaluation of prognostics algorithms. A brief overview of different methods employed for performance...

Full Text Available Various kinds of metrics used for the quantitative evaluation of scholarly journals are reviewed. The impact factor and related metrics, including the immediacy index and the aggregate impact factor, which are provided by the Journal Citation Reports, are explained in detail. The Eigenfactor score and the article influence score are also reviewed. In addition, journal metrics such as CiteScore, Source Normalized Impact per Paper, SCImago Journal Rank, the h-index, and the g-index are discussed. The limitations and problems that these metrics have are pointed out. We should be cautious not to rely too much on these quantitative measures when evaluating journals or researchers.
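As a concrete example of the best-known of these metrics: the two-year impact factor is citations received in a given year to items published in the two preceding years, divided by the number of citable items published in those two years. The journal figures below are invented for illustration.

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor as reported in the Journal
    Citation Reports: citations in year Y to items from years Y-1
    and Y-2, divided by the citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# hypothetical journal: 600 citations in 2015 to its 2013-2014 output
# of 150 citable items
print(impact_factor(600, 150))  # -> 4.0
```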

Full Text Available By means of a qualitative analysis, this research examines whether a principles-based system, or a version of it mixed with the rules-based system, is appropriate when applied in Romania - an emergent country - taking into account the mentalities, the traditions, and other cultural elements that were typical of a rules-based system. We support the statement that, even if certain contextual variables are common to other developed countries, their environments significantly differ. To be effective, financial reporting must reflect the context of the firm in which it is functioning. The research has a deductive approach based on the analysis of the cultural factors and their influence over recent years. For Romania, a lower degree of accounting professionalism associated with a low level of ambiguity tolerance is argued. For the stage analysed in this study (after the year 2005), professional reasoning - a proxy for accounting professional behaviour - took into consideration the fiscal and legal requirements rather than accounting principles and judgments. The research suggests that Romanian accounting practice and professionals are not fully prepared for a principles-based system environment, which is associated with the ability to find undisclosed events, face ambiguity, identify inferred relationships and use intuition, as well as to work with uncertainty. We therefore reach the conclusion that in Romania institutional amendments affecting professional expertise would be needed. The accounting regulations must be chosen with great caution and must answer and/or be adjusted, even if the process is thereby delayed, to national values, the behaviour of companies, and individual expertise and beliefs. Secondly, the benefits of applying accounting reasoning in this country may be enhanced through a better understanding of its content and through practical exercise. Here regulatory bodies may intervene by organizing professional training programs and acting

Plant phenology, a sensitive indicator of climate change, influences vegetation-atmosphere interactions by changing the carbon and water cycles from local to global scales. Camera-based phenological observations of the color changes of the vegetation canopy throughout the growing season have become popular in recent years. However, the linkages between camera phenological metrics and leaf biochemical, biophysical, and spectral properties are elusive. We measured key leaf properties including chlorophyll concentration and leaf reflectance on a weekly basis from June to November 2011 in a white oak forest on the island of Martha's Vineyard, Massachusetts, USA. Concurrently, we used a digital camera to automatically acquire daily pictures of the tree canopies. We found that there was a mismatch between the camera-based phenological metric for the canopy greenness (green chromatic coordinate, gcc) and the total chlorophyll and carotenoids concentration and leaf mass per area during late spring/early summer. The seasonal peak of gcc is approximately 20 days earlier than the peak of the total chlorophyll concentration. During the fall, both canopy and leaf redness were significantly correlated with the vegetation index for anthocyanin concentration, opening a new window to quantify vegetation senescence remotely. Satellite- and camera-based vegetation indices agreed well, suggesting that camera-based observations can be used as the ground validation for satellites. Using the high-temporal resolution dataset of leaf biochemical, biophysical, and spectral properties, our results show the strengths and potential uncertainties to use canopy color as the proxy of ecosystem functioning.
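The camera greenness metric discussed above has a simple definition. This sketch computes the green chromatic coordinate for a single pixel or canopy-averaged digital-number triple; the DN values are invented for illustration.

```python
def green_chromatic_coordinate(r, g, b):
    """gcc = G / (R + G + B): the share of the green channel in the
    total digital number, the standard camera-based greenness index."""
    total = r + g + b
    return g / total if total else 0.0

# hypothetical mid-summer canopy pixel
print(round(green_chromatic_coordinate(90, 140, 70), 3))
```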

Full Text Available This paper presents and discusses various metrics proposed for evaluation of polyphonic sound event detection systems used in realistic situations where there are typically multiple sound sources active simultaneously. The system output in this case contains overlapping events, marked as multiple sounds detected as being active at the same time. The polyphonic system output requires a suitable procedure for evaluation against a reference. Metrics from neighboring fields such as speech recognition and speaker diarization can be used, but they need to be partially redefined to deal with the overlapping events. We present a review of the most common metrics in the field and the way they are adapted and interpreted in the polyphonic case. We discuss segment-based and event-based definitions of each metric and explain the consequences of instance-based and class-based averaging using a case study. In parallel, we provide a toolbox containing implementations of presented metrics.
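A minimal sketch of the segment-based evaluation idea described above, assuming system output and reference are both reduced to per-segment sets of active classes (overlapping events are simply multiple labels in one segment). The event classes and activity patterns are invented; the real toolbox also covers event-based and class-based variants.

```python
def segment_f1(reference, estimated):
    """Segment-based F-score for polyphonic sound event detection.
    `reference` and `estimated` map segment index -> set of active
    event classes. Counts true positives, false positives, and
    false negatives per segment, then combines them instance-wise."""
    tp = fp = fn = 0
    for seg in set(reference) | set(estimated):
        ref = reference.get(seg, set())
        est = estimated.get(seg, set())
        tp += len(ref & est)
        fp += len(est - ref)
        fn += len(ref - est)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

ref = {0: {"speech"}, 1: {"speech", "car"}, 2: {"car"}}
est = {0: {"speech"}, 1: {"speech"}, 2: {"car", "bird"}}
print(segment_f1(ref, est))  # -> 0.75 (3 TP, 1 FP, 1 FN)
```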

This paper presents the CONCERT framework, a push/filter information consumption paradigm, based on a rule-based semantic contextual information system for tourism. CONCERT suggests a specific insight of the notion of context from a human mobility perspective. It focuses on the particular characteristics and requirements of travellers and addresses the drawbacks found in other approaches. Additionally, CONCERT suggests the use of digital broadcasting as push communication technology, whereby tourism information is disseminated to mobile devices. This information is then automatically filtered by a network of ontologies and offered to tourists on the screen. The results obtained in the experiments carried out show evidence that the information disseminated through digital broadcasting can be manipulated by the network of ontologies, providing contextualized information that produces user satisfaction. PMID:22778584

Full Text Available Recommender systems capable of discovering patterns in stock price movements and generating stock recommendations based on the patterns thus discovered can significantly supplement the decision-making process of a stock trader. Such recommender systems are of great significance to a layperson who wishes to profit by stock trading even while not possessing the skill or expertise of a seasoned trader. A genetic algorithm optimized Symbolic Aggregate approXimation (SAX–Apriori based stock trading recommender system, which can mine temporal association rules from the stock price data set to generate stock trading recommendations, is presented in this article. The proposed system is validated on 12 different data sets. The results indicate that the proposed system significantly outperforms the passive buy-and-hold strategy, offering scope for a layperson to successfully invest in capital markets.
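The SAX step of such a system can be sketched as follows: z-normalize the price series, average it into segments (piecewise aggregate approximation), and map each segment mean to a letter via Gaussian breakpoints. This is a generic SAX sketch with a 3-letter alphabet, not the paper's GA-optimized variant, and the input series is invented.

```python
import statistics

def sax(series, n_segments, breakpoints=(-0.43, 0.43)):
    """Minimal SAX: z-normalize, piecewise-aggregate into n_segments
    means, then map each mean to 'a'/'b'/'c' using the Gaussian
    equiprobable breakpoints for a 3-symbol alphabet."""
    mu = statistics.mean(series)
    sd = statistics.pstdev(series) or 1.0
    z = [(x - mu) / sd for x in series]
    step = len(z) / n_segments
    symbols = []
    for i in range(n_segments):
        seg = z[int(i * step):int((i + 1) * step)]
        m = sum(seg) / len(seg)
        symbols.append("abc"[sum(m > b for b in breakpoints)])
    return "".join(symbols)

# a toy "price" series: low, a spike, then low again
print(sax([1, 1, 2, 8, 9, 9, 2, 1], 4))  # -> "abca"
```

Association rules such as "abc -> a" would then be mined over these symbol strings with Apriori, which is the combination the recommender system builds on.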

RFID technology tracks the flow of physical items and goods in supply chains to help users detect inefficiencies, such as shipment delays, theft, or inventory problems. An inevitable consequence, however, is that it generates huge numbers of events. To exploit these large amounts of data, the Supply Chain Visualizer increases supply-chain visibility by analyzing RFID data, using a mix of automated analysis techniques and human effort. The tool's core concepts include rule-based analysis techniques and a map-based representation interface. With these features, it lets users visualize...

The paper discusses existing approaches to plating process modeling aimed at decreasing the unevenness of the coating thickness distribution on the plated surface. However, these approaches do not take into account the experience, knowledge, and intuition of the decision-makers when searching for the optimal conditions of the electroplating technological process. An original approach to the search for optimal conditions for applying electroplated coatings, which uses a rule-based model of knowledge and allows one to reduce the uneven product thickness distribution, is proposed. The block diagrams of a conventional control system of a galvanic process, as well as of a system based on the production model of knowledge, are considered. It is shown that the fuzzy production model of knowledge in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.

A new type of detector which we call the Catalytic Activity Aerosol Monitor (CAAM) was investigated towards its capability to detect traces of commonly used industrial catalysts in ambient air in quasi real time. Its metric is defined as the catalytic activity concentration (CAC) expressed per volume of sampled workplace air. We thus propose a new metric which expresses the presence of nanoparticles in terms of their functionality - in this case a functionality of potential relevance for damaging effects - rather than their number, surface, or mass concentration in workplace air. The CAAM samples a few micrograms of known or anticipated airborne catalyst material onto a filter first and then initiates a chemical reaction which is specific to that catalyst. The concentration of specific gases is recorded using an IR sensor, thereby giving the desired catalytic activity. Due to a miniaturization effort, the laboratory prototype is compact and portable. Sensitivity and linearity of the CAAM response were investigated with catalytically active palladium and nickel nano-aerosols of known mass concentration and precisely adjustable primary particle size in the range of 3-30 nm. With the miniature IR sensor, the smallest detectable particle mass was found to be in the range of a few micrograms, giving estimated sampling times on the order of minutes for workplace aerosol concentrations typically reported in the literature. Tests were also performed in the presence of inert background aerosols of SiO2, TiO2, and Al2O3. It was found that the active material is detectable via its catalytic activity even when the particles are attached to a non-active background aerosol.

Full Text Available Abstract Background Spinal pain is a common and often disabling problem. The research on various treatments for spinal pain has, for the most part, suggested that while several interventions have demonstrated mild to moderate short-term benefit, no single treatment has a major impact on either pain or disability. There is great need for more accurate diagnosis in patients with spinal pain. In a previous paper, the theoretical model of a diagnosis-based clinical decision rule was presented. The approach is designed to provide the clinician with a strategy for arriving at a specific working diagnosis from which treatment decisions can be made. It is based on three questions of diagnosis. In the current paper, the literature on the reliability and validity of the assessment procedures that are included in the diagnosis-based clinical decision rule is presented. Methods The databases of Medline, Cinahl, Embase and MANTIS were searched for studies that evaluated the reliability and validity of clinic-based diagnostic procedures for patients with spinal pain that have relevance for questions 2 (which investigates characteristics of the pain source) and 3 (which investigates perpetuating factors of the pain experience). In addition, the reference lists of identified papers and the authors' libraries were searched. Results A total of 1769 articles were retrieved, of which 138 were deemed relevant. Fifty-one studies related to reliability and 76 related to validity. One study evaluated both reliability and validity. Conclusion Regarding some aspects of the DBCDR, there are a number of studies that allow the clinician to have a reasonable degree of confidence in his or her findings. This is particularly true for centralization signs, neurodynamic signs and psychological perpetuating factors. There are other aspects of the DBCDR in which a lesser degree of confidence is warranted, and in which further research is needed.

Full Text Available A computer-aided detection (CAD) system for lung nodules plays an important role in the diagnosis of lung cancer. In this paper, an improved intelligent recognition method for lung nodules in HRCT, combining rule-based and cost-sensitive support vector machine (C-SVM) classifiers, is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part-solid and nonsolid). This method consists of several steps. Firstly, segmentation of regions of interest (ROIs), including pulmonary parenchyma and lung nodule candidates, is a difficult task. On one side, the presence of noise lowers the visibility of low-contrast objects. On the other side, different types of nodules, including small nodules, nodules connecting to vasculature or other structures, and part-solid or nonsolid nodules, are complex, noisy, have weak edges or are difficult to delineate. In order to overcome the difficulties of obvious boundary leaks and the slow evolution speed in segmentation of weak edges, an overall segmentation method is proposed: the lung parenchyma is extracted based on threshold and morphologic segmentation methods; image denoising and enhancement is realized by a nonlinear anisotropic diffusion filtering (NADF) method; candidate pulmonary nodules are segmented by an improved C-V level set method, in which the segmentation result of an EM-based fuzzy threshold method is used as the initial contour of the active contour model and a constrained energy term is added to the PDE of the level set function. Then, lung nodules are classified by using intelligent classifiers combining rules and C-SVM. Rule-based classification is first used to remove easily dismissible non-nodule objects; C-SVM classification is then used to further classify nodule candidates and reduce the number of false positive (FP) objects. In order to increase the efficiency of the SVM, an improved training method is used to train the SVM, which uses the grid search method to search for the optimal parameters

Nearly one-third of surgical residents will enter into academic development during their surgical residency by dedicating time to a research fellowship for 1-3 y. Major interest lies in understanding how laboratory residents' surgical skills are affected by minimal clinical exposure during academic development. A widely held concern is that the time away from clinical exposure results in surgical skills decay. This study examines the impact of the academic development years on residents' operative performance. We hypothesize that the use of repeated, annual assessments may result in learning even without individual feedback on participants simulated performance. Surgical performance data were collected from laboratory residents (postgraduate years 2-5) during the summers of 2014, 2015, and 2016. Residents had 15 min to complete a shortened, simulated laparoscopic ventral hernia repair procedure. Final hernia repair skins from all participants were scored using a previously validated checklist. An analysis of variance test compared the mean performance scores of repeat participants to those of first time participants. Twenty-seven (37% female) laboratory residents provided 2-year assessment data over the 3-year span of the study. Second time performance revealed improvement from a mean score of 14 (standard error = 1.0) in the first year to 17.2 (SD = 0.9) in the second year, (F[1, 52] = 5.6, P = 0.022). Detailed analysis demonstrated improvement in performance for 3 grading criteria that were considered to be rule-based errors. There was no improvement in operative strategy errors. Analysis of longitudinal performance of laboratory residents shows higher scores for repeat participants in the category of rule-based errors. These findings suggest that laboratory residents can learn from rule-based mistakes when provided with annual performance-based assessments. This benefit was not seen with operative strategy errors and has important implications for

The increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades. Increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Due to inconsistent training sites and training samples, traditional pixel-based image classification methods cannot achieve a comparable result across different organizations. Meanwhile, object-oriented image classification techniques show great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to fulfill this requirement. Firstly, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Secondly, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to determine the appropriate parameters; using the top-down region merge algorithm starting from the single-pixel level, the optimal texture segmentation scale for the different types of features was confirmed. Then, each segmented object is used as the classification unit to calculate spectral information such as the Mean, Maximum, Minimum, Brightness and Normalized values. Spatial features such as the Area, Length, Tightness and Shape rule of the image object, and texture features such as the Mean, Variance and Entropy of image objects, are used as classification features of training samples. Based on the reference images and the sampling points of the on-the-spot investigation, typical training samples are selected uniformly and randomly for each type of ground object. The ranges of values of the spectral, texture and spatial characteristics of each type of feature in each feature layer are used to create the decision tree repository. Finally, with the help of high resolution reference images, the

The quay container crane hoisting motor is a complex system whose running status evolves over the long term according to identifiable rules, and these rules can be exploited. Through association rule analysis, this paper introduces a similarity measure for association rules and applies it to status identification of the quay container crane hoisting motor. Finally, the approach is validated by an example: some rules change by amplitudes so small that regular monitoring does not easily detect them, yet it is precisely these small changes that lead to mechanical failure. Therefore, using changes in association rules to monitor the motor status has strong practical significance.

New techniques are now available for use in the protection of the environment. One of these techniques is the use of an expert system for predicting groundwater pollution potential. The Groundwater Pollution Expert System (GWPES) rules are a collection of principles and procedures used to encode the understanding of groundwater pollution prediction. The rules of the groundwater pollution expert system, in the form of questions, choices, radio-boxes, slide rules, buttons or frames, are translated into IF-THEN rules. The rules, including variables, types, domains and descriptions, were used by means of the wxCLIPS (C Language Integrated Production System) expert system shell. (author)

As U.S. regional electricity markets continue to refine their market structures, designs and rules of operation in various ways, two critical issues are emerging. First, although much experience has been gained and costly and valuable lessons have been learned, there is still a lack of a systematic platform for evaluation of the impact of a new market design from both engineering and economic points of view. Second, the transition from a monopoly paradigm characterized by a guaranteed rate of return to a competitive market created various unfamiliar financial risks for various market participants, especially for the Investor Owned Utilities (IOUs) and Independent Power Producers (IPPs). This dissertation uses agent-based simulation methods to tackle the market rules evaluation and financial risk management problems. The California energy crisis in 2000-01 showed what could happen to an electricity market if it did not go through a comprehensive and rigorous testing before its implementation. Due to the complexity of the market structure, strategic interaction between the participants, and the underlying physics, it is difficult to fully evaluate the implications of potential changes to market rules. This dissertation presents a flexible and integrative method to assess market designs through agent-based simulations. Realistic simulation scenarios on a 225-bus system are constructed for evaluation of the proposed PJM-like market power mitigation rules of the California electricity market. Simulation results show that in the absence of market power mitigation, generation company (GenCo) agents facilitated by Q-learning are able to exploit the market flaws and make significantly higher profits relative to the competitive benchmark. The incorporation of PJM-like local market power mitigation rules is shown to be effective in suppressing the exercise of market power. The importance of financial risk management is exemplified by the recent financial crisis. In this
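The learning scheme attributed to the GenCo agents above is standard tabular Q-learning. A single update step can be sketched as follows; the state and action names, reward, and rate constants are all hypothetical, not taken from the 225-bus simulation.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q is a dict of dicts: state -> {action: value}."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

# hypothetical two-state market: bidding high in low demand earned profit 10
Q = {"low_demand": {"bid_high": 0.0}, "high_demand": {"bid_high": 1.0}}
print(q_update(Q, "low_demand", "bid_high", 10.0, "high_demand"))  # -> 1.095
```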

Full Text Available In the tanker industry, there are many uncertain conditions that tanker companies have to deal with, for example, the global financial crisis and economic recession, increases in bunker fuel prices, and global climate change. Such conditions have forced tanker companies to change tanker speeds from full speed to slow speed, extra slow speed and super slow speed. Given such conditions, the objective of this paper is to present a methodology for determining the vessel speeds of tankers that minimize the cost of the vessels. The four levels of vessel speed in the tanker industry are investigated while incorporating a number of uncertain conditions. This is done by developing a scientific model using a rule-based Bayesian reasoning method. The proposed model has produced 96 rules that can be used as guidance in the decision-making process. These results help tanker companies to determine the appropriate vessel speed to be used in a dynamic operational environment.
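The Bayesian reasoning at the core of such a model can be illustrated with a single discrete Bayes update over the four speed levels named above. The prior and likelihood numbers here are invented for illustration; they are not taken from the paper's 96-rule model.

```python
def bayes_update(prior, likelihood):
    """Discrete Bayes rule: posterior is proportional to
    prior * likelihood, normalized to sum to 1."""
    post = {k: prior[k] * likelihood[k] for k in prior}
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

# hypothetical prior over the four speed policies
prior = {"full": 0.4, "slow": 0.3, "extra slow": 0.2, "super slow": 0.1}
# hypothetical likelihood of observing "high bunker fuel price"
# under each speed policy
lik = {"full": 0.1, "slow": 0.3, "extra slow": 0.6, "super slow": 0.8}

posterior = bayes_update(prior, lik)
print(max(posterior, key=posterior.get))  # the recommended speed level
```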

In order to solve the problem that traditional association rule mining algorithms can no longer meet the mining needs of large amounts of data in terms of efficiency and scalability, the FP-Growth algorithm is taken as an example and parallelized based on the Hadoop framework and the MapReduce model. On this basis, it is improved using the transaction-reduction method to further enhance the algorithm's mining efficiency. The experiment, which consists of verification of the parallel mining results, a comparison of efficiency between the serial and parallel versions, and the relationships between mining time and node number and between mining time and data amount, is carried out on a Hadoop cluster. Experiments show that the parallelized FP-Growth algorithm is able to accurately mine frequent item sets, with better performance and scalability. It can better meet the requirements of big data mining and efficiently mine frequent item sets and association rules from large datasets.
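The MapReduce counting pattern that such a parallelization distributes across nodes can be sketched in plain Python. Note this shows Apriori-style candidate counting rather than FP-tree construction itself; the transactions and support threshold are invented.

```python
from collections import Counter
from itertools import combinations

def map_phase(transactions, k):
    """Map step: emit (candidate k-itemset, 1) for every transaction.
    In a real Hadoop job these pairs would be shuffled across nodes."""
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            yield itemset, 1

def reduce_phase(pairs, min_support):
    """Reduce step: sum the counts per candidate itemset and keep
    only the itemsets meeting the minimum support threshold."""
    counts = Counter()
    for itemset, n in pairs:
        counts[itemset] += n
    return {i: c for i, c in counts.items() if c >= min_support}

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(reduce_phase(map_phase(txns, 2), min_support=3))
```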

Robust techniques for automatic or semi-automatic segmentation of objects in single photon emission computed tomography (SPECT) are still the subject of development. This paper describes a threshold based method which uses empirical rules derived from analysis of computer simulated images of a large number of objects. The use of simulation allowed the factors affecting the threshold which correctly segmented objects to be investigated systematically. Rules could then be derived from these data to define the threshold in any particular context. The technique operated iteratively and calculated local context sensitive thresholds along radial profiles from the centre of gravity of the object. It was evaluated in a further series of simulated objects and in human studies, and compared to the use of a global fixed threshold. The method was capable of improving accuracy of segmentation and volume assessment compared to the global threshold technique. The improvements were greater for small volumes, shapes with large surface area to volume ratio, variable surrounding activity and non-uniform distributions. The method was applied successfully to simulated objects and human studies and is considered to be a significant advance on global fixed threshold techniques. (author)

In today’s ever changing environment organizations need flexibility and agility to be able to deal with changes. Flexibility is necessary to adapt to changes in their environment, whilst agility is needed to rapidly respond to changing customer demands. In this paper a mechanism based on the

This paper aims at defining a set of privacy metrics (quantitative and qualitative) for the relation between a privacy protector and an information gatherer. The aims of such metrics are: - to allow one to assess and compare different user scenarios and their differences; for

Full Text Available Based on data mining research, this paper studies a data mining method based on the genetic algorithm. The genetic algorithm is briefly introduced, and two important theoretical foundations of the genetic algorithm, the schema (template) theorem and the implicit parallelism principle, are also discussed. The paper focuses on the application of genetic algorithms to association rule mining. For the association rule mining method, it proposes improvements to the structure of the genetic algorithm's fitness function and to the data encoding; in particular, building on earlier studies, an improved adaptive Pc, Pm scheme is applied to the genetic algorithm, thereby improving the efficiency of the algorithm. Finally, a genetic algorithm based association rule mining algorithm is presented, applied to data mining in a sea water samples database, and proven effective.
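One common form of the adaptive Pc, Pm idea (in the spirit of Srinivas and Patnaik's classic scheme; the paper's exact formula is not given here, so this is an assumption) lowers the crossover and mutation rates for above-average individuals, to preserve good solutions, and keeps the full rates for below-average ones:

```python
def adaptive_rates(f_parent, f_avg, f_max, pc_hi=0.9, pm_hi=0.1):
    """Adaptive crossover (Pc) and mutation (Pm) probabilities.
    Above-average individuals get rates scaled down toward 0 as
    their fitness approaches the population maximum; below-average
    individuals keep the full rates. Constants are illustrative."""
    if f_max == f_avg:                  # degenerate population
        return pc_hi, pm_hi
    if f_parent >= f_avg:
        scale = (f_max - f_parent) / (f_max - f_avg)
        return pc_hi * scale, pm_hi * scale
    return pc_hi, pm_hi

print(adaptive_rates(7.5, 5.0, 10.0))  # -> (0.45, 0.05)
print(adaptive_rates(3.0, 5.0, 10.0))  # -> (0.9, 0.1)
```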

In order to overcome the problems of poor understandability of the pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of ELM and Ant-miner algorithm are respectively introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. And finally, a set of classification rules are obtained by IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted from an example sample set generated by the trained ELM-based transient stability assessment model by using IAM algorithm. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system--the southern power system of Hebei province.

In this work, we derive the universal characteristic metrics set for authentication models based on security, usability and design issues. We then compute the probability of the occurrence of each characteristic metrics in some single factor and multifactor authentication models in order to determine the effectiveness of these ...

Full Text Available Australian Rules Football, governed by the Australian Football League (AFL is the most popular winter sport played in Australia. Like North American team based leagues such as the NFL, NBA and NHL, the AFL uses a draft system for rookie players to join a team's list. The existing method of allocating draft selections in the AFL is simply based on the reverse order of each team's finishing position for that season, with teams winning less than or equal to 5 regular season matches obtaining an additional early round priority draft pick. Much criticism has been levelled at the existing system since it rewards losing teams and does not encourage poorly performing teams to win matches once their season is effectively over. We propose a probability-based system that allocates a score based on teams that win 'unimportant' matches (akin to Carl Morris' definition of importance. We base the calculation of 'unimportance' on the likelihood of a team making the final eight following each round of the season. We then investigate a variety of approaches based on the 'unimportance' measure to derive a score for 'unimportant' and unlikely wins. We explore derivatives of this system, compare past draft picks with those obtained under our system, and discuss the attractiveness of teams knowing the draft reward for winning each match in a season

The main approaches used in the development of the Safety Guide (SG) "Recommendations on the structure and content of the manual for the management of beyond-design-basis accidents, including severe accidents" (BDBA MG) are described. The manual was developed taking into account the provisions of the current IAEA standards relevant to this area, as well as the specifics of the Russian nuclear power industry. In the draft SG, three types of personnel behavior are considered: based on skills, on rules, and on knowledge. When developing the BDBA MG, it is recommended to give priority to the knowledge-based approach. At the same time, well-designed and well-rehearsed activities may be performed on the basis of rules and skills (for example, using step-by-step procedures). The draft SG provides for a unified organizational structure for managing beyond-design-basis accidents, both at the stage of preventing severe core damage and at the stage of managing a severe accident. The SG examines in detail the order of management of beyond-design-basis accidents for both of these stages.

We consider a generalized version of the shift design problem, where shifts are created to cover a multiskilled demand and fit the parameters of the workforce. We present a collection of constraints and objectives for the generalized shift design problem. A local search solution framework with multiple neighborhoods and a loosely coupled rule engine based on simulated annealing is presented. Computational experiments on real-life data from various airport ground handling organizations show the performance and flexibility of the proposed algorithm.
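The acceptance rule at the heart of such a simulated-annealing local search is compact: improving neighbour moves are always accepted, worsening ones with a temperature-dependent probability. The move costs and temperatures below are toy values, not the paper's engine.

```python
# Hedged sketch of the Metropolis acceptance step used by simulated annealing.
import math
import random

def accept(delta, temperature, rng):
    """Accept improving moves (delta <= 0) always; worsening moves with
    probability exp(-delta / temperature)."""
    return delta <= 0 or rng.random() < math.exp(-delta / temperature)

rng = random.Random(0)
print(accept(-2.0, 5.0, rng))    # improving move: always accepted
print(accept(500.0, 0.01, rng))  # huge worsening at low temperature: rejected
```

A full shift design solver would wrap this in a cooling schedule over neighbourhood moves (swap shift, grow shift, reassign skill), with the rule engine vetoing moves that violate workforce constraints.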

A great number of associative memory models have been proposed to realize information storage and retrieval inspired by the human brain in the last few years. However, there is still much room for improvement in those models. In this paper, we extend a binary pattern associative memory model to accomplish real-world image recognition. The learning process is based on the fundamental Hebb rule, and retrieval is implemented by a normalized dot-product operation. Our proposed model not only fulfills rapid memory storage and retrieval of visual information but also supports incremental learning without destroying previously learned information. Experimental results demonstrate that our model outperforms the existing Self-Organizing Incremental Neural Network (SOINN) and Back-Propagation Neural Network (BPNN) in recognition accuracy and time efficiency.
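The two mechanisms named in the abstract, Hebbian storage and normalized dot-product retrieval, can be sketched for binary patterns. This toy version omits the incremental-learning machinery and simply compares a probe against the stored patterns directly.

```python
# Minimal sketch of Hebbian storage + normalized dot-product retrieval.
import math

def store(patterns):
    """Hebb rule: weight w[i][j] accumulates co-activation of units i and j."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                w[i][j] += p[i] * p[j]
    return w

def retrieve(patterns, probe):
    """Return the stored pattern most similar to the probe (cosine similarity)."""
    def sim(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm
    return max(patterns, key=lambda p: sim(p, probe))

stored = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
weights = store(stored)   # symmetric Hebbian weight matrix
noisy = [1, 0, 1, 1]      # corrupted copy of the first pattern
print(retrieve(stored, noisy))
```

Even with one flipped bit, the normalized dot product still ranks the original pattern highest, which is the error-correcting behaviour associative memories are built for.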

The immune system comprises numerous components that interact with one another to give rise to phenotypic behaviors that are sometimes unexpected. Agent-based modeling (ABM) and cellular automata (CA) belong to a class of discrete mathematical approaches in which autonomous entities detect local information and act over time according to logical rules. The power of this approach lies in the emergence of behavior that arises from interactions between agents, which would otherwise be impossible to know a priori. Recent work exploring the immune system with ABM and CA has revealed novel insights into immunological processes. Here, we summarize these applications to immunology and, particularly, how ABM can help formulate hypotheses that might drive further experimental investigations of disease mechanisms.
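The defining property of a CA, each cell reading only its local neighbourhood and applying a fixed logical rule per time step, fits in a few lines. The "infection spreads to neighbours" rule below is a toy for illustration, not any published immune model.

```python
# Hedged one-dimensional cellular-automaton sketch.

def step(cells):
    """cells: string of 'S' (susceptible) / 'I' (infected) states."""
    out = []
    for i, c in enumerate(cells):
        left = cells[i - 1] if i > 0 else "S"
        right = cells[i + 1] if i < len(cells) - 1 else "S"
        # Logical rule: a cell becomes/stays infected if it or a neighbour is.
        out.append("I" if c == "I" or "I" in (left, right) else "S")
    return "".join(out)

state = "SSSISSS"
for _ in range(2):
    state = step(state)
print(state)  # the infected front expands one cell per step in each direction
```

The emergent behaviour (a spreading front) is not written anywhere in the rule itself, which is exactly the point the abstract makes about emergence from local interactions.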

This study aims to determine an effective method of measuring the efficiency of promotional strategies for tourist destinations. Complicating factors that influence promotional efficiency (PE), such as promotional activities (PA), destination attributes (DA), and destination image (DI), make it difficult to evaluate the effectiveness of PE. This study develops a rule-based decision support mechanism using fuzzy set theory and the Analytic Hierarchy Process (AHP) to evaluate the effectiveness of promotional strategies. Additionally, a statistical analysis is conducted using SPSS (Statistical Package for the Social Sciences) to confirm the results of the fuzzy AHP analysis. This study finds that government policy is the most important factor for PE and that service staff (internal beauty) is more important than tourism infrastructure (external beauty) in terms of customer satisfaction and long-term strategy in PE. With respect to DI, experts are concerned first with tourist perceived value, second with tourist satisfaction, and finally with tourist loyalty.
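The AHP step behind such an analysis reduces a pairwise-comparison matrix over factors to priority weights; the common column-normalization approximation of the principal eigenvector is shown below. The matrix values and factor names are illustrative, not the study's data.

```python
# Hedged sketch of AHP priority weights via column normalization.

def ahp_weights(matrix):
    """Approximate AHP priorities: normalize each column, then average rows."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Hypothetical factors: government policy, service staff, infrastructure.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(pairwise)
print([round(x, 3) for x in w])
```

The weights sum to one and, with this matrix, rank government policy first, mirroring the ordering the abstract reports. A fuzzy AHP would replace the crisp entries with fuzzy numbers before this step.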

The integration of a rule-based expert system for generating screen displays for controlling and monitoring instrumentation under the Experimental Physics and Industrial Control System (EPICS) is presented. The expert system is implemented using CLIPS, an expert system shell from the Software Technology Branch at Lyndon B. Johnson Space Center. The user selects the hardware input and output to be displayed and the expert system constructs a graphical control screen appropriate for the data. Such a system provides a method for implementing a common look and feel for displays created by several different users and reduces the amount of time required to create displays for new hardware configurations. Users are able to modify the displays as needed using the EPICS display editor tool.
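The core of the display-generation idea is a rule base mapping hardware record types to widget choices, so every user's screen gets the same look and feel. The record types and widget names below are invented examples, not the actual EPICS/CLIPS rule set.

```python
# Hedged sketch: rules mapping record types to display widgets.

RULES = {
    "analog_input": "strip_chart",
    "analog_output": "slider",
    "binary_input": "status_lamp",
    "binary_output": "toggle_button",
}

def build_screen(records):
    """records: list of (name, record_type) -> list of (name, widget) specs,
    with a text readout as the fallback for unknown record types."""
    return [(name, RULES.get(rtype, "text_readout")) for name, rtype in records]

screen = build_screen([("beam_current", "analog_input"),
                       ("shutter", "binary_output"),
                       ("vacuum_note", "string")])
print(screen)
```

A production system like the one described would add layout rules and emit display files the EPICS editor can then refine by hand.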

Purpose: To evaluate methods of pretreatment IMRT analysis, using real measurements performed with a commercial 2D detector array, for clinical relevance and accuracy by comparing clinical DVH parameters. Methods: We divided the work into two parts. The first part consisted of six in-phantom tests aimed at studying the sensitivity of the different analysis methods. Beam fluences, 3D dose distribution, and DVH of an unaltered original plan were compared to those of the delivered plan, in which an error had been intentionally introduced. The second part consisted of comparing gamma analysis with DVH metrics for 17 patient plans from various sites. Beam fluences were measured with the MapCHECK 2 detector, per-beam planar analysis was performed with the MapCHECK software, and 3D gamma analysis and the DVH evaluation were performed using 3DVH software. Results: In a per-beam gamma analysis, some of the tests yielded false positives or false negatives. However, the 3DVH software correctly described the DVH of the plan which included the error. The measured DVH from the plan with the controlled error agreed with the planned DVH within 2% dose or 2% volume. We also found that a gamma criterion of 3%/3 mm was too lax to detect some of the forced errors. Global analysis masked some problems, while local analysis magnified irrelevant errors at low doses. Small hotspots were missed by all metrics due to the spatial resolution of the detector panel. DVH analysis for patient plans revealed small differences between treatment plan calculations and 3DVH results, with the exception of very small volume structures such as the eyes and the lenses. Target coverage (D98 and D95) of the measured plan was systematically lower than that predicted by the treatment planning system, while other DVH characteristics varied depending on the parameter and organ. Conclusions: We found no correlation between the gamma index and the clinical impact of a discrepancy for any of the gamma index evaluation
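The gamma index the abstract evaluates combines a dose-difference term and a distance-to-agreement term; a point passes when the minimum combined disagreement is at most 1. A minimal 1-D sketch under a 3%/3 mm global criterion is shown below; real tools such as 3DVH operate on 2-D/3-D dose grids, and the profiles here are toy data.

```python
# Hedged 1-D sketch of the gamma index (global normalization).
import math

def gamma_1d(ref, meas, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Per-point gamma of a measured profile against a reference profile."""
    d_max = max(ref)
    gammas = []
    for i, dm in enumerate(meas):
        best = float("inf")
        for j, dr in enumerate(ref):
            dd = (dm - dr) / (dose_tol * d_max)      # dose-difference term
            dx = (i - j) * spacing_mm / dist_tol_mm  # distance-to-agreement term
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

ref = [0.0, 0.5, 1.0, 0.5, 0.0]
meas = [0.0, 0.52, 1.02, 0.5, 0.0]
g = gamma_1d(ref, meas, spacing_mm=1.0)
pass_rate = sum(x <= 1.0 for x in g) / len(g)
print(pass_rate)
```

Switching `dose_tol * d_max` to `dose_tol * dr` gives the local analysis the abstract contrasts with global analysis, which is exactly why local analysis magnifies errors at low doses.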

The objective of this study was to investigate the feasibility of physiological metrics such as ECG-derived heart rate and EEG-derived cognitive workload and engagement as potential predictors of performance on different training tasks. An unsupervised approach based on a self-organizing neural network (NN) was utilized to model cognitive state changes over time. The feature vector comprised EEG-engagement, EEG-workload, and heart rate metrics, all self-normalized to account for individual differences. During the competitive training process, a linear topology was developed in which feature vectors similar to each other activated the same NN nodes. The NN model was trained and auto-validated on combat marksmanship training data from 51 participants who were required to make deadly-force decisions in challenging combat scenarios. The trained NN model was cross-validated using 10-fold cross-validation. It was also validated on a golf study in which an additional 22 participants were asked to complete 10 sessions of 10 putts each. Temporal sequences of the activated nodes for both studies followed the same pattern of changes, demonstrating the generalization capabilities of the approach. Most node transitions were local, but important events typically caused significant changes in the physiological metrics, as evidenced by larger state changes. This was investigated by calculating a transition score as the sum of subsequent state transitions between the activated NN nodes. Correlation analysis demonstrated statistically significant correlations between the transition scores and subjects' performances in both studies. This paper explored the hypothesis that temporal sequences of physiological changes comprise the discriminative patterns for performance prediction. These physiological markers could be utilized in future training improvement systems (e.g., through neurofeedback) and applied across a variety of training environments.
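Because the self-organizing topology is linear, consecutively activated nodes have integer indices, and the transition score reduces to summing the sizes of the index jumps. The node sequences below are invented for illustration.

```python
# Hedged sketch of the transition score over a 1-D node topology.

def transition_score(active_nodes):
    """Sum of absolute index changes between consecutively activated nodes."""
    return sum(abs(b - a) for a, b in zip(active_nodes, active_nodes[1:]))

calm_run = [3, 3, 4, 4, 3, 4]    # mostly local transitions
stressful = [3, 3, 9, 2, 8, 3]   # large state changes around key events
print(transition_score(calm_run), transition_score(stressful))
```

Larger scores flag sequences with big physiological state jumps, which is the quantity the study correlates with task performance.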

Collaborative working environments for distance education aim at convenience and adaptation to our technologically advanced societies. To achieve this revolutionary new way of learning, environments must allow the different participants to communicate and coordinate with each other in a productive manner. Productivity and efficiency are obtained through synchronized communication between the coordinating partners, which means that multiple users can execute an experiment simultaneously. Within this process, coordination can be accomplished through voice communication and chat tools. In recent times, multi-user environments have been successfully applied in many applications such as air traffic control systems, team-oriented military systems, text chat tools, and multi-player games. Thus, understanding the ideas and techniques behind these systems can be of great significance for contributing newer ideas to collaborative e-learning environments. However, many problems still exist in distance learning and tele-education, such as students not finding proper assistance while performing a remote experiment; the students become overwhelmed and the experiment fails. In this paper, we discuss a solution that enables students to obtain automated help from either a human tutor or a rule-based e-tutor (an embedded rule-based system) for the purpose of student support in complex remote experimental environments. The technical implementation of the system can be realized using Microsoft .NET, which offers a complete integrated development environment (IDE) with a wide collection of products and technologies. Once the system is developed, groups of students are independently able to coordinate and execute the experiment at any time and from any place, organizing the work between them productively.

This report presents the Omega Version 2.2 (Ωs) rule-based computer program for identifying material deteriorations in the metallic structures, systems, and components of LWR nuclear power units. The basis of Ωs is that understanding what material deteriorations might occur as a function of service life is fundamental to: (1) the development and optimization of preventive maintenance programs, (2) ensuring that current maintenance programs recognize applicable degradations, and (3) demonstrating the adequacy of deterioration management to safety regulatory authorities. The system was developed to assist utility engineers in determining which aging degradation mechanisms are acting on specific components. Direction is also provided to extend this system to manage deterioration and evaluate the efficacy of existing age-related degradation mitigation programs. The system can provide support for justification for continued operation and license renewal, and it provides traceability to the data sources used in the logic development. A tiered approach is used to quickly isolate potential age-related degradation for components in a particular location. A potential degradation mechanism is then screened by additional rules to establish its plausibility. Ωs includes a user-friendly interface and provides default environmental data and materials in the event they are unknown to the user. Ωs produces a report, with references, that validates the elimination of a degradation mechanism from further consideration or the determination that a specific degradation mechanism is acting on a specific material. This report also describes logic for identifying deterioration caused by intrusions and inspection-based deteriorations, along with future plans to program and integrate these features with Ωs.
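The tiered screening logic can be sketched as two rule layers: a first tier proposes candidate degradation mechanisms for a material/environment pair, and a second tier screens each candidate for plausibility. The rule entries and thresholds below are illustrative only, not Ωs's validated knowledge base.

```python
# Hedged sketch of tiered rule-based degradation screening.

CANDIDATES = {
    ("stainless_steel", "high_temp_water"): ["IGSCC", "fatigue"],
    ("carbon_steel", "high_temp_water"): ["FAC", "fatigue"],
}

def screen(material, environment, temperature_c):
    """Tier 1: look up candidate mechanisms; Tier 2: keep only the
    plausible ones (toy rule: IGSCC requires elevated temperature)."""
    mechanisms = CANDIDATES.get((material, environment), [])
    return [m for m in mechanisms if m != "IGSCC" or temperature_c > 150]

print(screen("stainless_steel", "high_temp_water", 100))
print(screen("stainless_steel", "high_temp_water", 288))
```

The real system would also record which rule eliminated each mechanism, supporting the referenced justification reports the abstract mentions.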

This paper presents part of our research work concerned with the realisation of an Intelligent Vehicle and the technologies required for its routing, navigation, and control. An automated driver prototype has been developed using a self-organising fuzzy rule-based system (POPFNN-CRI(S)) to model and subsequently emulate human driving expertise. The ability of fuzzy logic to represent vague information using linguistic variables makes it a powerful tool for developing rule-based control systems when an exact working model is not available, as is the case for any vehicle-driving task. Designing a fuzzy system, however, is a complex endeavour, due to the need to define the variables and their associated fuzzy sets and to determine a suitable rulebase. Many efforts have thus been devoted to automating this process, yielding the development of learning and optimisation techniques. One of them is the family of POP-FNNs, or Pseudo-Outer Product Fuzzy Neural Networks (TVR, AARS(S), AARS(NS), CRI, Yager). These generic self-organising neural networks developed at the Intelligent Systems Laboratory (ISL/NTU) are based on formal fuzzy mathematical theory and are able to objectively extract a fuzzy rulebase from training data. In this application, a driving simulator has been developed that integrates a detailed model of the car dynamics, complete with engine characteristics and environmental parameters, and an OpenGL-based 3D simulation interface coupled with a driving wheel and accelerator/brake pedals. The simulator has been used on various road scenarios to record, from a human pilot, driving data consisting of steering and speed control actions associated with road features. Specifically, the POPFNN-CRI(S) system is used to cluster the data and extract a fuzzy rulebase modelling the human driving behaviour. Finally, the effectiveness of the generated rulebase has been validated using the simulator in autopilot mode.

The holographic principle (HP) conjectures that the maximum number of degrees of freedom of any realistic physical system is proportional to the system's boundary area. The HP has its roots in the study of black holes and has recently been applied to cosmological solutions. In this article we apply the HP to spherically symmetric static space-times. We find that any regular spherically symmetric object saturating the HP is subject to tight constraints on the (interior) metric, energy density, temperature, and entropy density. Whenever gravity can be described by a metric theory, gravity is macroscopically scale invariant, and the laws of thermodynamics hold locally and globally, the (interior) metric of a regular holographic object is uniquely determined up to a constant factor and the interior matter state must follow well-defined scaling relations. When the metric theory of gravity is general relativity, the interior matter has an overall string equation of state (EOS) and a unique total energy density. Thus the holographic metric derived in this article can serve as a simple interior 4D realization of Mathur's string fuzzball proposal. Some properties of the holographic metric and its possible experimental verification are discussed. The geodesics of the holographic metric describe an isotropically expanding (or contracting) universe with a nearly homogeneous matter distribution within the local Hubble volume. Due to the overall string EOS the active gravitational mass density is zero, resulting in a coasting expansion with Ht = 1, which is compatible with recent GRB data.
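For reference, the bound this abstract builds on is the standard holographic entropy bound (a textbook statement, not quoted from the paper itself): for a region with boundary area $A$,

```latex
S \;\le\; \frac{k_B\, A}{4\,\ell_P^{2}}, \qquad \ell_P^{2} = \frac{G\hbar}{c^{3}},
```

so the admissible number of degrees of freedom scales with the boundary area rather than the volume; "saturating the HP" means attaining this bound with equality.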

We analyzed the spike discharge patterns of two types of neurons in the rodent peripheral gustatory system, Na specialists (NS) and acid generalists (AG), in response to lingual stimulation with NaCl, acetic acid, and mixtures of the two stimuli. Previous computational investigations found that both spike rate and spike timing contribute to taste quality coding. These studies used commonly accepted computational methods, but they do not provide a consistent statistical evaluation of spike trains. In this paper, we adopted a new computational framework that treats each spike train as an individual data point for computing summary statistics such as mean and variance in the spike train space. We found that these statistical summaries properly characterized the firing patterns (e.g., template and variability) and quantified the differences between NS and AG neurons. The same framework was also used to assess the discrimination performance of NS and AG neurons and to remove spontaneous background activity, or "noise", from the spike train responses. The results indicated that the new metric system provided the desired decoding performance, and noise removal improved stimulus classification accuracy, especially for neurons with high spontaneous rates. In summary, this new method naturally conducts statistical analysis and neural decoding under one consistent framework, and the results demonstrated that individual peripheral gustatory neurons generate a unique and reliable firing pattern during sensory stimulation and that this pattern can be reliably decoded.
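Treating each spike train as a point in a metric space requires a distance between spike trains. One widely used choice, shown here as an illustration, is the Victor-Purpura edit distance; the paper's exact metric may differ, and the spike times below are toy data.

```python
# Hedged sketch of the Victor-Purpura spike-train edit distance:
# cost 1 to insert/delete a spike, q*|dt| to shift one in time,
# computed by dynamic programming over the two sorted spike lists.

def vp_distance(s1, s2, q=1.0):
    n, m = len(s1), len(s2)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i            # delete all spikes of s1
    for j in range(1, m + 1):
        d[0][j] = j            # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + q * abs(s1[i - 1] - s2[j - 1]))
    return d[n][m]

a = [0.1, 0.5, 0.9]
b = [0.12, 0.5, 0.9]
print(vp_distance(a, b))
```

With such a distance in hand, the "mean" of a set of spike trains can be taken as the member minimizing the summed distance to the others, which is the kind of summary statistic the framework computes.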

OSHA is confirming the effective date of its direct final rule that revises a number of standards for general industry that refer to national consensus standards. The direct final rule states that it would become effective on March 13, 2008 unless OSHA receives significant adverse comment on these revisions by January 14, 2008. OSHA received no adverse comments by that date and, therefore, is confirming that the rule will become effective on March 13, 2008.

In this direct final rule, the Agency is removing several references to consensus standards that have requirements that duplicate, or are comparable to, other OSHA rules; this action includes correcting a paragraph citation in one of these OSHA rules. The Agency also is removing a reference to American Welding Society standard A3.0-1969 ("Terms and Definitions") in its general-industry welding standards. This rulemaking is a continuation of OSHA's ongoing effort to update references to consensus and industry standards used throughout its rules.

In flexible automated manufacturing, robots can perform routine operations as well as recover from atypical events, provided that process-relevant information is available to the robot controller. Real-time vision is among the most versatile sensing tools, yet the reliability of machine-based scene interpretation can be questionable. The effort described here is focused on the development of machine-based vision methods to support autonomous nuclear fuel manufacturing operations in hot cells. This thesis presents a method to efficiently recognize 3D objects from 2D images based on feature-based indexing. Object recognition is the identification of correspondences between parts of a current scene and stored views of known objects, using chains of segments or indexing vectors. To create indexed object models, characteristic model image features are extracted during preprocessing. Feature vectors representing model object contours are acquired from several points of view around each object and stored. Recognition is the process of matching stored views with features or patterns detected in a test scene. Two sets of algorithms were developed: one for preprocessing and indexed database creation, and one for pattern searching and matching during recognition. At recognition time, those indexing vectors with the highest match probability are retrieved from the model image database using a nearest neighbor search algorithm. The nearest neighbor search predicts the best possible match candidates. Extended searches are guided by a search strategy that employs knowledge-base (KB) selection criteria. The knowledge-based system simplifies the recognition process and minimizes the number of iterations and memory usage. Novel contributions include the use of a feature-based indexing data structure together with a knowledge base. Both components improve the efficiency of the recognition process by improved structuring of the database of object features and by reducing database size.
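The indexing-plus-retrieval loop can be sketched directly: model views are stored as feature vectors keyed by object, and recognition retrieves the nearest stored vector for a scene feature. The vectors below are toy contour descriptors and the object names are invented, not the thesis's actual features.

```python
# Hedged sketch of feature-based indexing with nearest-neighbour retrieval.
import math

def build_index(model_views):
    """model_views: dict object_name -> list of feature vectors (one per view)."""
    return [(name, vec) for name, views in model_views.items() for vec in views]

def nearest(index, query):
    """Linear nearest-neighbour search; a k-d tree would replace this at scale."""
    return min(index, key=lambda entry: math.dist(entry[1], query))[0]

index = build_index({
    "fuel_rod": [[1.0, 0.2, 0.1], [0.9, 0.25, 0.12]],
    "end_cap": [[0.1, 0.8, 0.9], [0.15, 0.75, 0.95]],
})
print(nearest(index, [0.95, 0.22, 0.11]))
```

The knowledge-base layer described in the thesis would sit on top of this, pruning which index entries are even considered for a given hot-cell location.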

The scarcity of fuel oil in Indonesia often occurs due to delays in delivery caused by natural factors or transportation constraints. The aim of this research is to develop an online, real-time fuel distribution monitoring system using the rule-based reasoning method and radio frequency identification (RFID) technology. The rule-based reasoning method serves as the reasoning model used for monitoring distribution and determining rule-based safety stock. The monitoring program runs as a web-based computer application. RFID technology uses radio waves as the identification medium and serves as a system for tracking and gathering information from objects automatically. The research data consist of records of delayed fuel distribution from the fuel terminal to consumers. The monitoring technique uses the departure time, the estimated time of arrival, and the route travelled by a fuel tanker, stored on an attached RFID tag. Monitoring is carried out by RFID readers connected online at each gas station or other designated position, with a case study in Semarang. The results cover the statuses reported by the rule-based reasoning: timely and on the correct route, timely but off route, late and on the correct route, late and off route, and tanker lost. The monitoring system is also used to determine warehouse safety stock, with the safety stock value determined by rules based on the warehouse stock condition.
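The status rules listed in the abstract combine two binary conditions (timeliness and route adherence) with a lost-tanker fallback. A minimal sketch follows; the read-recency threshold and status wording are invented stand-ins for the system's actual rules.

```python
# Hedged sketch of the delivery-status rule base.

def delivery_status(on_time, on_route, minutes_since_last_read):
    """Derive a status from timeliness, route adherence and RFID read recency
    (the 120-minute threshold is a toy value)."""
    if minutes_since_last_read > 120:
        return "tanker lost"
    timeliness = "timely" if on_time else "late"
    path = "on the correct route" if on_route else "off route"
    return f"{timeliness} and {path}"

print(delivery_status(True, True, 10))
print(delivery_status(False, False, 30))
print(delivery_status(True, True, 300))
```

The same rule-base style extends to the safety-stock side: a rule fires when warehouse stock drops below a computed threshold and raises a reorder status.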

A Malay compound noun is a form of word that exists when two or more words are combined into a single syntactic unit with a specific meaning. A compound noun acts as one unit and is spelled as separate words unless it is an established compound written as one word. The basic characteristics of compound nouns can be seen in Malay sentences through the frequency of the words in the text itself. Extraction of compound nouns is thus significant for subsequent research such as text summarization, grammar checking, sentiment analysis, machine translation, and word categorization. Many research efforts have proposed extracting Malay compound nouns using linguistic approaches. Most of the existing methods address the extraction of bigram noun+noun compounds; however, the results still leave problems to be solved. This paper explores a linguistic method for extracting compound nouns from a standard Malay corpus. A standard dataset is used to provide a common platform for evaluating research on the recognition of compound nouns in Malay sentences. An improvement in the effectiveness of compound noun extraction is needed because otherwise the results can be compromised. This study therefore proposes a modification of the linguistic approach in order to enhance compound noun extraction. Several pre-processing steps are involved, including normalization, tokenization, and tagging. The first step using the linguistic approach in this study is Part-of-Speech (POS) tagging. Finally, we describe several rules and modify them to capture the most relevant relation between the first and second words, helping to resolve the identified problems. The effectiveness of the relations used in our study is measured using recall, precision, and F1-score. Comparison with baseline values is essential because it shows whether there has been an improvement.
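The pipeline described, POS tagging followed by a noun+noun bigram rule and precision/recall/F1 scoring, can be sketched as follows. The tags and the tiny gold set are invented for illustration; a real system would use a trained Malay POS tagger and a full annotated corpus.

```python
# Hedged sketch of rule-based bigram compound-noun extraction + F1 scoring.

def extract_compounds(tagged):
    """tagged: list of (word, pos). Rule: a noun immediately followed by a noun."""
    return [(w1, w2) for (w1, t1), (w2, t2) in zip(tagged, tagged[1:])
            if t1 == "NOUN" and t2 == "NOUN"]

def f1(predicted, gold):
    tp = len(set(predicted) & set(gold))
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0

tagged = [("kapal", "NOUN"), ("terbang", "NOUN"), ("itu", "DET"),
          ("besar", "ADJ"), ("air", "NOUN"), ("mata", "NOUN")]
gold = [("kapal", "terbang"), ("air", "mata")]
print(f1(extract_compounds(tagged), gold))
```

The modifications the paper proposes amount to refining the bigram rule (which tag pairs and word relations are admitted), while the F1 harness stays the same for comparing against the baseline.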

Purpose: Webinars have become an evolving tool with greater or lesser success in reaching health care providers (HCPs). This study seeks to assess best practices and metrics for success in webinar deployment for optimal global reach. Methods: Webinars have been developed and launched to reach practicing health care providers in the field of radiation oncology and radiosurgery. One such webinar was launched in early February 2016. “Multiple Brain Metastases & Volumetric Modulated Arc Radiosurgery: Refining the Single-Isocenter Technique to Benefit Surgeons and Patients”, presented by Drs. Fiveash and Thomas from UAB, was submitted to and accredited by the Institute for Medical Education as qualifying for CME, as well as by the MDCB for educational credit for dosimetrists, in order to encourage participation. MedicalPhysicsWeb was chosen as the platform to inform attendees about the webinar. Further, IME accredited the activity for 1 AMA PRA Category 1 credit for physicians and medical physicists. The program was qualified by the ABR as meeting the criteria for self-assessment towards fulfilling MOC requirements. Free SAMs credits were underwritten by an educational grant from Varian Medical Systems. Results: The webinar in question attracted 992 pre-registrants from 66 countries. Outside the US and Canada, 11 were from the Americas, 32 from Europe, and 9 from the Middle East and Africa; Australasia and the Indian subcontinent represented the remaining 14 countries. Pre-registrants included 423 Medical Physicists, 225 Medical Dosimetrists, 24 Radiation Therapists, 66 Radiation Oncologists, and other HCPs. Conclusion: The effectiveness of CME and SAM-CME programs such as this can be gauged by the high rate of respondents who state an intention to change practice habits, a primary goal of continuing medical education and self-assessment. This webinar succeeded in being the most successful webinar on Medical Physics Web as measured by pre-registration, participation and