Sample records for bien phu fault

This thesis documents the hand-woven textiles produced by the Phu Thai ethnic group living in Savannakhet Province, Laos. The various stages of textile production and the uses of textiles in Phu Thai society, especially as identity markers, are also examined. Textiles of neighboring groups are also investigated to see how knowledge of textile technology, types, and aesthetics is transferred between the Phu Thai and other ethnicities, specifically the Lao and Katang. The study's field research ...

The objective was to characterize the antibacterial action of six combinations of PHU-AgNO3 synthesized at the 'Petru Poni' Institute of Macromolecular Chemistry, Iasi, Romania. The advantages of Ag nanoparticles are durability, heat resistance, and low toxicity. Silver has long been known for its antibacterial qualities and has been used in medicine in topical treatment.

Green tea is a popular consumption product in Vietnam. In particular, tea originating from Tan Cuong, Thai Nguyen has long been known for its better quality compared with tea from other regions of the country. The study aims at comparing and finding out whether the difference between tea from Thai Nguyen and Phu Tho can be discerned by sensory tasting. Two products picked from Tan Cuong, Thai Nguyen province and two others from Phu Ho district, Phu Tho were evaluated by a panel of twelve judg...

We have previously found a positive selection for promoter mutations in Pseudomonas aeruginosa DK2 leading to increased expression of the phu (Pseudomonas heme utilization) system. By mimicking conditions of the CF airways in vitro, we experimentally demonstrated that increased expression of phuR confers a growth advantage in the presence of hemoglobin, thus suggesting that P. aeruginosa evolves towards iron acquisition from hemoglobin...

Biodegradation of POPs contaminants in soil collected from Phu Tho province and Nghe An province was carried out. The process comprises treating the soil, which contains anaerobic and aerobic microbes capable of transforming lindane and DDT into harmless material, under successive anaerobic and aerobic steps. Significant biodegradation of POPs contaminants occurred in these tests, but some toxic organic compounds remained. (author)

Full Text Available Craft tourism is attracting many domestic and foreign tourists. In recent years in Vietnam, craft villages have exploited their potential in the tourism industry. For many different reasons, tourism activities have not yet reached expectations or realized their potential for tourism development. This paper reviews their current status, tourism potential and limitations, and then formulates recommendations for tourism development on Phu Quoc island. The data for this paper come from two sources. Secondary data were collected from the literature and journals. Primary data came from interviews with village owners, related authorities, tourists, tourism companies, etc., and the results serve as guidelines to develop the tourism industry and its management.

The volatile constituents isolated from roots and rhizomes of Valeriana phu L. were investigated by GC and GC/MS (EI) analysis. The roots and rhizomes yielded 0.64% (v/w) essential oil on a dry weight basis. From the oil, 70 compounds could be identified, with a valerenal isomer (11.3%), valerianol ...

Glass in the form of ornaments and decorative objects has been found in Thailand dating back several hundred years. The mosaic glass used in this work was a single piece excavated at the Phu Khao Thong archaeological site in Ranong Province, southern Thailand. Micro-beam X-ray fluorescence spectroscopy (μ-XRF) based on synchrotron radiation was first carried out to analyze its elemental composition and distribution. Scanning electron microscopy coupled with energy-dispersive X-ray spectroscopy (SEM-EDS) and PIXE were also used to characterize the composition. The main composition of this mosaic glass sample found in Thailand was a lead-based silicate glass. The coloration was produced by transition metals, especially iron, copper and manganese. Although the glass looked similar, its main composition differed from that of glass from Persia and South Asia, especially in lead content. Nevertheless, it demonstrates the long-distance trade or exchange networks of ancient times.

Quiet-day variations in the Earth's magnetic field (the Sq current system) are compared and contrasted for the Asian, African and American sectors using a new dataset from Vietnam. This is the first presentation of the variation of the Earth's magnetic field (Sq), during solar cycle 23, at Phu Thuy, Vietnam (geographic latitude 21.03° N, longitude 105.95° E). The Phu Thuy observatory is located below the crest of the equatorial fountain in the Asian longitude sector of the Northern Hemisphere. The morphology of the Sq daily variation is presented as a function of solar cycle and season. The diurnal variation at Phu Thuy is compared to those obtained at different magnetic observatories around the world to highlight the characteristics of the Phu Thuy observations; other longitude sectors show different patterns. At Phu Thuy the solar cycle variation of the amplitude of the daily variation of the X component is correlated with the F10.7 cm solar radio flux (~0.74). This correlation factor is greater than that obtained at two observatories located at the same magnetic latitude in other longitude sectors: Tamanrasset in the African sector (~0.42, geographic latitude ~22.79°) and San Juan in the American sector (~0.03, geographic latitude ~18.38°). At Phu Thuy, the Sq field exhibits an equinoctial and a diurnal asymmetry: - The seasonal variation of the monthly mean of the X component exhibits the well-known semiannual pattern with two equinox maxima, but the X component is larger in spring than in autumn. Depending on the phase of the sunspot cycle, the maximum amplitude of the X component varies in spring from 30 nT to 75 nT and in autumn from 20 nT to 60 nT. The maximum amplitude of the X component exhibits roughly the same variation in both solstices, varying from about 20 nT to 50 nT, depending on the position within the solar cycle. - In all seasons, the mean equinoctial diurnal Y component has a morning maximum larger than the afternoon maximum.
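
The ~0.74 figure quoted above is an ordinary correlation coefficient between the daily-variation amplitude of the X component and the F10.7 index over the solar cycle. Below is a minimal, hedged sketch of how such a coefficient can be computed from two series; the values are invented for illustration and are not the Phu Thuy data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equally long series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    x -= x.mean()
    y -= y.mean()
    return float(np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2)))

# Illustrative monthly values only: amplitude of the Sq daily variation in X (nT)
# and the F10.7 cm radio flux (sfu); the real study uses Phu Thuy data for cycle 23.
x_amplitude = [32, 38, 45, 55, 63, 70, 74, 68, 60, 52, 41, 35]
f107        = [75, 90, 110, 140, 170, 200, 210, 195, 175, 150, 120, 95]

print(f"r = {pearson_r(x_amplitude, f107):.2f}")
```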

Full Text Available The purpose of this article is to propose a conceptual green practices model in the tourism industry of Phu Quoc island (a destination in Vietnam). The model is developed with the purpose of providing direction for researchers to empirically examine relationships among demographic variables, innovation characteristics, performance expectancy, social influence, facilitating conditions and effort expectancy, funding availability and environment, and business performance. This study uses secondary research data collected from different sources such as books, journals, research papers and other online and print media (publications on the subject). The main method used in this study is content review and analysis. The author suggests that an empirical study should be done to confirm whether the relationships among variables exist or need to be changed to adapt to the current conditions of the destination in order to increase business performance. This model is expected to contribute to the theory of tourism and to be applied to Phu Quoc island.

District 9 is a new urban development area, well known as an industrial city on the outskirts of Ho Chi Minh City. This area has special characteristics that differ from the central downtown. Because District 9's housing stock is new, it is easier to orient a particular architectural style in accordance with climatic and other conditions. However, at present, these new residential areas are growing with no clear management of architectural form. This leads to shortcomings such as increased electricity use, contributions to climate change and the urban heat island effect, and higher energy costs. These problems will be difficult to overcome in the future if they do not receive proper attention now. By using a combination of methods such as data collection, case study analysis, GIS, and research by design, this research focuses on analyzing the row houses in Tang Nhon Phu A Ward of District 9 and tries to propose solutions and management criteria to the local government. The analysis results ensure that the conclusions reflect the realities of the situation and become the basis for solutions proposed to address the existing problems. The results of the research may become an applied research platform for related research topics.

The purpose of this thesis is to upgrade the uranium ore at Phu Wiang district. Because of the fine grain size and high degree of dissemination of uranium in the ore, resulting in practically complete envelopment of the uranium minerals by the gangue minerals, the ore must first undergo digestion in order to expose the uranium minerals. After digestion, 0.05 N sodium hydroxide was added to the ore and the mixture was fed into an agitator provided with baffles and two specially designed propellers. Due to the 'push-pull' motion of the propellers, a zone of especially high turbulence was created between them. In this region a higher concentration of uranium develops, and the concentrated uranium ore was regularly drawn off for further analysis. It was found that, using mineral of 100 mesh grain size and 0.0187% uranium content, a concentration of up to 0.063% uranium (an upgrading by a factor of more than three) was achieved with the above method. The uranium content was analyzed with a 3'' x 3'' NaI(Tl) detector and a 1024-channel MCA.

Tropical forests are the most important and largest sink for stocking CO2 from the atmosphere, and their loss may be one of the main sources of carbon emissions, global warming and climate change in recent decades. There are two main objectives of this study. The first is to establish a relationship between above-ground biomass and vegetation indices, and the other is to evaluate above-ground biomass and carbon sequestration for evergreen forest areas in Phu Hin Rong Kla National Park, Thailand. A random sampling design was applied for calculating the above-ground biomass at stand level in the selected area using the Brown and Tsutsumi allometric equations. Landsat 7 ETM+ data from February 2009 were used. A Support Vector Machine (SVM) was applied for identifying evergreen forest area. Forty-three vegetation indices and image transformations were tested to find the best correlation with forest stand biomass. Regression analysis was used to investigate the relationship between biomass at stand level and digital data from the satellite image. TM51, derived using the Tsutsumi allometric equation, showed the highest correlation with stand biomass; the Normalized Difference Vegetation Index (NDVI) did not give the best correlation in this study. The best biomass estimation model was based on TM51 and ND71 (R2 = 0.658). The totals of above-ground biomass and carbon sequestration were 112,062,010 tonnes and 56,031,005 tonnes, respectively. The results of this study should be useful for understanding terrestrial carbon dynamics and global climate change. (author)
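
The estimation model quoted above (TM51 and ND71, R2 = 0.658) is a two-predictor least-squares regression; the sketch below shows that fitting step under assumptions. The predictor names follow the abstract, but the plot values are invented, and the actual band formulas behind TM51 and ND71 are not reproduced here.

```python
import numpy as np

# Illustrative plot-level data: stand biomass (t/ha) with the two predictors
# named in the abstract, TM51 and ND71 (values invented for illustration).
tm51    = np.array([0.21, 0.34, 0.28, 0.45, 0.52, 0.39, 0.61, 0.48])
nd71    = np.array([0.12, 0.25, 0.19, 0.33, 0.41, 0.27, 0.52, 0.36])
biomass = np.array([95, 140, 120, 180, 210, 160, 250, 190])

# Ordinary least squares: biomass = b0 + b1*TM51 + b2*ND71
X = np.column_stack([np.ones_like(tm51), tm51, nd71])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)

pred = X @ coef
ss_res = np.sum((biomass - pred) ** 2)
ss_tot = np.sum((biomass - biomass.mean()) ** 2)
print("coefficients:", coef, " R^2 =", 1 - ss_res / ss_tot)
```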

Full Text Available The purposes of this study were to examine the population characteristics of hog deer released into the wild, namely density, age structure, sex ratio, recruitment rate, threats to hog deer, carrying capacity and inter-specific relationships, as well as to assess the population viability over time. In this study, direct observation was used to study the hog deer population characteristics, and population density was estimated from the pellet-group count method. The Vortex program was used to analyze the population viability. Results showed that the population density of hog deer at Thung Ka Mung (TKM) in Phu Khieo Wildlife Sanctuary (PKWS) was 2.03-2.04 individuals/hectare (SD = 1.25). The population structure showed that the average herd size was 9.57 individuals. Hog deer in TKM preferred to stay in a group (91.5%) rather than being solitary (8.5%). The sex ratio for males to females was 54.64:100, and for females to fawns was 100:26.18. The annual recruitment rate was 16.98%. Their predators were Asian wild dogs, Burmese pythons, Asiatic jackals, leopard cats and clouded leopards. The mortality rate of the existing hog deer in TKM during the study period was 18.1%. Habitat sharing assessed by camera traps revealed 4 ungulate species: sambar deer, barking deer, wild boar, and elephant, with relative abundances of 28.41%, 7.38%, 4.70%, and 2.01% respectively. Fifty-year simulation modeling using population viability analysis indicated the sustainability of this population. The hog deer population in the simulations did not exhibit sensitivity to an increase or decrease in carrying capacity. Habitat management should be carried out continuously in the TKM area, which is the main habitat for hog deer in PKWS.

This research measures the profitability and technical efficiency of Black tiger shrimp farms and White leg shrimp farms in Song Cau district, Phu Yen province, Vietnam. Cross-sectional data from 62 Black tiger shrimp samples and 88 White leg shrimp samples were used to compare the two production systems. The profitability analysis shows that White leg shrimp farms achieved an average profit per hectare of 78,883,209 VND ($3,944.16), which was approximately 4 times as much as Black tiger shrimp f...

The concentration of rare earth elements such as La, Ce, Pr, Nd and Gd, and of other elements such as Ca, Fe, U and Th, in Yen Phu rare earth ore and intermediate products from the flotation and hydrometallurgical process was determined using Si-PIN detector X-ray fluorescence spectrometry. The precision and accuracy of the quantitative analysis were tested with standard reference materials and by comparative analysis with different analytical methods. The analytical procedures were set up and applied for the determination of rare earth and other elements in Yen Phu rare earth ore and intermediate products from the flotation and hydrometallurgical process with high precision and accuracy. (author)

National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
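
The abstract does not give the distance formula, but with synchronized clocks at the two ends a standard two-terminal calculation locates the fault from the difference in arrival times of the fault transient. The sketch below illustrates that calculation only; the line length, propagation speed and timestamps are illustrative, not values from the patent.

```python
def fault_distance(line_length_km, wave_speed_km_per_s, t_master_s, t_remote_s):
    """
    Two-terminal fault location: a disturbance at distance d from the master
    reaches the master at d/v and the remote at (L - d)/v, so
    d = (L + v * (t_master - t_remote)) / 2.
    """
    return (line_length_km + wave_speed_km_per_s * (t_master_s - t_remote_s)) / 2.0

# Illustrative values: 120 km line, wave speed ~2.9e5 km/s (close to light speed),
# master sees the transient 50 microseconds before the remote unit.
d = fault_distance(120.0, 2.9e5, 0.0, 50e-6)
print(f"estimated fault distance from master: {d:.1f} km")
```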

The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to ...
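
As a rough illustration of the propagation reasoning described for Draphys (identifying all components affected by a fault source), the sketch below propagates a hypothesized fault through a toy functional-dependency model. This is not Draphys itself; the component names and dependencies are invented.

```python
from collections import deque

# A toy functional-dependency model (purely illustrative, not the Draphys models):
# each component lists the downstream components that depend on its output.
DEPENDS_ON_ME = {
    "engine_1":           ["hydraulic_pump_1", "generator_1"],
    "hydraulic_pump_1":   ["hydraulic_system_A"],
    "hydraulic_system_A": ["flight_controls_left"],
    "generator_1":        ["electrical_bus_1"],
}

def affected_components(source):
    """Breadth-first propagation from a hypothesized fault source."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in DEPENDS_ON_ME.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {source}

print(sorted(affected_components("engine_1")))
```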

We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By ...

Full Text Available This article presents a synthesis of the research conducted by the author as a thesis for the MSc in Management at EAFIT University. The research focuses on identifying and understanding the attributes that characterize brands of goods recognized as luxury, by applying the research methodology known as content analysis to a convenience sample of websites of such brands. The research is situated in the field of marketing and uses its various concepts to determine both the frame of reference and the characterization of the attributes of luxury-goods brands.

Full Text Available Health is a field in which the notions of the commons or of global public goods are particularly relevant. It is necessary, however, to specify which goods are in question. For instance, French law contains notions that can qualify the right to care (as an inalienable right) or medicine (as an object of intellectual property as well as of 'traditional' property). An international law of health goods could likewise be based on a right of use, for each person, of the molecule that is the same in the generic drug and in the branded one. A procedure of equivalent access to products, patented or not, could be implemented as a matter of principle from the moment of discovery, in order to arrive rapidly at fair prices.

Fault detection and isolation (FDI) of parametric faults in dynamic systems will be considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis will be investigated with respect to different information levels from the external inputs to the systems. These ...

Fault Graphs are the natural evolutionary step over a traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts, such as path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut-sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability
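
The evaluation described above rests on graph concepts such as path sets and reachability; a minimal reachability sketch on a small failure-oriented directed graph (cycles allowed) is shown below. The node names are invented, and the sketch omits the AND/OR logic and cut-set extraction of real Fault Graph codes.

```python
def reachable(graph, start):
    """All nodes reachable from `start` in a directed graph given as adjacency lists."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

# Illustrative failure-oriented graph traced from a P&ID-like layout; cycles allowed.
graph = {
    "pump_fails":      ["low_flow"],
    "valve_stuck":     ["low_flow"],
    "low_flow":        ["low_pressure"],
    "low_pressure":    ["pump_cavitation"],
    "pump_cavitation": ["pump_fails"],   # a feedback cycle
}

print(sorted(reachable(graph, "valve_stuck")))
```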

This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
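
As a small illustration of the quantitative evaluation step described in the handbook, the sketch below computes a top-event probability from minimal cut sets of independent basic events using the common rare-event approximation; the event names, probabilities and cut sets are invented.

```python
from math import prod

# Illustrative basic-event probabilities (independent events assumed).
p = {"pump_A_fails": 1e-3, "pump_B_fails": 1e-3, "valve_fails": 5e-4, "power_loss": 2e-4}

# Minimal cut sets: any one of these sets failing together fails the system.
cut_sets = [
    {"pump_A_fails", "pump_B_fails"},   # redundant pumps (AND)
    {"valve_fails"},                    # single-point failure (OR branch)
    {"power_loss"},
]

# Rare-event approximation: P(top) is roughly the sum over cut sets of the
# product of the basic-event probabilities in each cut set.
p_top = sum(prod(p[e] for e in cs) for cs in cut_sets)
print(f"P(top event) ~ {p_top:.2e}")
```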

An architecture for fault tolerant feedback controllers based on the Youla parameterization is suggested. It is shown that the Youla parameterization will give a residual vector directly in connection with the fault diagnosis part of the fault tolerant feedback controller. It turns out that there is a separation between the feedback controller and the fault tolerant part. The closed loop feedback properties are handled by the nominal feedback controller and the fault tolerant part is handled by the design of the Youla parameter. The design of the fault tolerant part will not affect the design of the nominal feedback controller.

A fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, an essential part of safety design, operation design and support of chemical/production processes. Process models are constructed and integrated with fault models, which are formulated in a qualitative manner using fault semantic networks (FSN). Trend analysis techniques are used to map real-time and simulated quantitative data into qualitative fault models for better decision support and tuning of the FSN. The design of the proposed fault simulator is described and applied to an experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design as well as to support recovery and shutdown operations and disaster management.

Iowa State University GIS Support and Research Facility — This fault coverage locates and identifies all currently known/interpreted fault zones in Iowa, that demonstrate offset of geologic units in exposure or subsurface...

... UAVs or Organic Air Vehicles. The approach of this effort was to analyze fault management requirements of formation flight for fleets of UAVs, and develop a layered fault management architecture which demonstrates significant...

The problem of fault detection and isolation of parametric faults is considered in this paper. Parametric faults are associated with internal parameter variations in the dynamical system. A fault detection and isolation method for parametric faults is formulated...

Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

Deformation along faults in the shallow crust is a research effort of both structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

Different aspects of fault detection and fault isolation in closed-loop systems are considered. It is shown that using the standard setup known from feedback control, it is possible to formulate fault diagnosis problems based on a performance index in this general standard setup. It is also shown...

Full Text Available The administration agent was a complex figure in colonial Peru. This article aims to discuss that complexity, focusing its analysis on the ministers of Lima's Audiencia during the seventeenth century. The concepts of greed (codicia) and public good (bien público) will be considered in that context, posing some ideas concerning the peculiar way of «selling» judicial posts, known as beneficio. The activities of some of the Audiencia's ministers in Lima will be discussed, trying to offer more references on their role in viceregal society, as well as on their contemporaries' perception of them.

A fault tolerant control (FTC) architecture based on active fault diagnosis (AFD) and the YJBK (Youla, Jabr, Bongiorno and Kucera) parameterization is applied in this paper. Based on the FTC architecture, fault tolerant control of uncertain systems with slowly varying parametric faults is investigated. Conditions are given for closed-loop stability in case of false alarms or missing fault detection/isolation.

Described is an operational system that enables the user, through an intelligent graphics terminal, to construct, modify, analyze, and store fault trees. With this system, complex engineering designs can be analyzed. This paper discusses the system and its capabilities. Included is a brief discussion of fault tree analysis, which represents an aspect of reliability and safety modeling

Faults grow via a sympathetic increase in their displacement and length (isolated fault model), or by rapid length establishment and subsequent displacement accrual (constant-length fault model). To test the significance and applicability of these two models, we use time-series displacement (D) and length (L) data extracted for faults from nature and experiments. We document a range of fault behaviours, from sympathetic D-L fault growth (isolated growth) to sub-vertical D-L growth trajectorie...

Leaky faults provide a flow path for fluids to move underground. It is very important to characterize such faults in various engineering projects. The purpose of this work is to develop mathematical solutions for this characterization. The flow of water in an aquifer system and the flow of air in the unsaturated fault-rock system were studied. If the leaky fault cuts through two aquifers, characterization of the fault can be achieved by pumping water from one of the aquifers, which are assumed to be horizontal and of uniform thickness. Analytical solutions have been developed for two cases of either a negligibly small or a significantly large drawdown in the unpumped aquifer. Some practical methods for using these solutions are presented. 45 refs., 72 figs., 11 tabs
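
The paper's leaky-fault solutions are not reproduced in the abstract; as a reference point for the kind of pumping-test analysis involved, the sketch below evaluates the classical Theis drawdown for a single confined aquifer (a standard textbook result, not the fault-specific solution, and all parameter values are illustrative).

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1, i.e. the Theis well function W(u)

def theis_drawdown(Q, T, S, r, t):
    """
    Theis (1935) drawdown s(r, t) for a fully penetrating well in a confined aquifer:
        u = r^2 * S / (4 * T * t),   s = Q / (4 * pi * T) * W(u)
    Q [m^3/s], T [m^2/s], S [-], r [m], t [s].
    """
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Illustrative values: pumping 10 L/s, T = 5e-3 m^2/s, S = 1e-4, observation well at 50 m.
for hours in (1, 10, 100):
    s = theis_drawdown(Q=0.01, T=5e-3, S=1e-4, r=50.0, t=hours * 3600.0)
    print(f"t = {hours:5d} h  drawdown = {s:.3f} m")
```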

A fault detecting apparatus and method are provided for use with an active solar system. The apparatus provides an indication as to whether one or more predetermined faults have occurred in the solar system. The apparatus includes a plurality of sensors, each sensor being used in determining whether a predetermined condition is present. The outputs of the sensors are combined in a pre-established manner in accordance with the kind of predetermined faults to be detected. Indicators communicate with the outputs generated by combining the sensor outputs to give the user of the solar system and the apparatus an indication as to whether a predetermined fault has occurred. Upon detection and indication of any predetermined fault, the user can take appropriate corrective action so that the overall reliability and efficiency of the active solar system are increased.
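
The apparatus described above amounts to combining binary sensor conditions with fixed logic for each kind of fault; a minimal sketch of that combination step is shown below. The sensor names and fault rules are invented for illustration and are not taken from the patent.

```python
# Each fault indicator is a boolean combination of sensor conditions (illustrative rules only).
def detect_faults(sensors):
    return {
        "no_circulation":  sensors["pump_commanded_on"] and not sensors["flow_detected"],
        "overtemperature": sensors["collector_temp_high"] and sensors["storage_temp_high"],
        "sensor_conflict": sensors["collector_temp_high"] and sensors["freeze_warning"],
    }

readings = {
    "pump_commanded_on": True,
    "flow_detected": False,
    "collector_temp_high": True,
    "storage_temp_high": False,
    "freeze_warning": False,
}
for fault, present in detect_faults(readings).items():
    print(f"{fault:15s}: {'FAULT' if present else 'ok'}")
```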

This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management, and as such is considered as a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics to estimate and measure the effectiveness of fault management are similar to those of classical control loops in being divided into two major classes: state estimation, and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and probabilistically summed to determine the effectiveness of these fault management control loops to preserve the relevant system goals that they are intended to protect.
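
As a hedged illustration of the "probabilistically summed" effectiveness described here, the sketch below weights per-failure-mode detection, isolation and response probabilities by the probability of each failure. The structure and all numbers are invented; the paper's actual metric definitions are more detailed.

```python
# Illustrative failure modes: probability of occurrence and, for the fault-management
# loop covering each one, probabilities of successful detection, isolation and response
# (all values invented for illustration).
failure_modes = [
    {"name": "sensor_drift",  "p_fail": 0.010, "p_detect": 0.95, "p_isolate": 0.90, "p_respond": 0.98},
    {"name": "valve_stuck",   "p_fail": 0.004, "p_detect": 0.99, "p_isolate": 0.85, "p_respond": 0.95},
    {"name": "software_hang", "p_fail": 0.002, "p_detect": 0.90, "p_isolate": 0.80, "p_respond": 0.99},
]

# Overall effectiveness: probability-weighted chance that a failure, if it occurs,
# is detected, isolated, and successfully responded to.
total_p = sum(m["p_fail"] for m in failure_modes)
effectiveness = sum(
    m["p_fail"] * m["p_detect"] * m["p_isolate"] * m["p_respond"] for m in failure_modes
) / total_p
print(f"probability-weighted fault-management effectiveness: {effectiveness:.3f}")
```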

In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...

In the 1970s researchers noticed that radioactive particles produced by elements naturally present in packaging material could cause bits to flip in sensitive areas of electronic chips. Research into the effect of cosmic rays on semiconductors, an area of particular interest in the aerospace industry, led to methods of hardening electronic devices designed for harsh environments. Ultimately various mechanisms for fault creation and propagation were discovered, and in particular it was noted that many cryptographic algorithms succumb to so-called fault attacks. Preventing fault attacks without

An active fault diagnosis (AFD) method will be considered in this paper in connection with a Fault Tolerant Control (FTC) architecture based on the YJBK parameterization of all stabilizing controllers. The architecture consists of a fault diagnosis (FD) part and a controller reconfiguration (CR) part. The FTC architecture can be applied for additive faults, parametric faults, and for system structural changes. Only parametric faults will be considered in this paper. The main focus in this paper is on the use of the new approach of active fault diagnosis in connection with FTC. The active fault diagnosis approach is based on including an auxiliary input in the system. A fault signature matrix is introduced in connection with AFD, given as the transfer function from the auxiliary input to the residual output. This can be considered as a generalization of the passive fault diagnosis case, where...

The determination of rock friction at seismic slip rates (about 1 m s^-1) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (~300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s^-1. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.

Full Text Available During mapping of the already completed Razdrto-Senožeče section of motorway and geologic surveying of construction operations on the trunk road between Razdrto and Vipava in the northwestern part of the External Dinarides, on the southwestern slope of Mt. Nanos, called Rebrnice, a steep NW-SE striking fault was recognized, situated between the Predjama and Raša faults. The fault was named the Vipava fault after the town of Vipava. An analysis of subrecent gravitational slips at Rebrnice indicates that they were probably associated with the activity of this fault. Unpublished results of a repeated levelling line along the regional road passing across the Vipava fault zone suggest its possible present activity. It would be meaningful to verify this by appropriate geodetic measurements, and to study the actual gravitational slips at Rebrnice. The association between tectonics and gravitational slips in this and in similar extreme cases in the Alps and Dinarides points to the need for comprehensive study of geologic processes.

In this paper, we investigated the various fault features of the Iyo fault and depicted fault lines on a detailed topographic map. The results of this paper are summarized as follows: 1) Distinct evidence of right-lateral movement is continuously discernible along the Iyo fault. 2) Active fault traces are remarkably linear, suggesting that the angle of the fault plane is high. 3) The Iyo fault can be divided into four segments by jogs between left-stepping traces. 4) The mean slip rate is 1.3 ～ ...

Active fault isolation of parametric faults in closed-loop MIMO systems is considered in this paper. The fault isolation consists of two steps. The first step is group-wise fault isolation. Here, a group of faults is isolated from other possible faults in the system. The group-wise fault isolation is based directly on the input/output signals applied for the fault detection. It is guaranteed that the fault group includes the fault that had occurred in the system. The second step is individual fault isolation within the fault group. Both types of isolation are obtained by applying dedicated...

Full Text Available A new fault-relevant KPCA algorithm is proposed. Then the fault detection approach is proposed based on the fault-relevant KPCA algorithm. The proposed method further decomposes both the KPCA principal space and residual space into two subspaces. Compared with traditional statistical techniques, the fault subspace is separated based on the fault-relevant influence. This method can find fault-relevant principal directions and principal components of systematic subspace and residual subspace for process monitoring. The proposed monitoring approach is applied to Tennessee Eastman process and penicillin fermentation process. The simulation results show the effectiveness of the proposed method.
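
The fault-relevant subspace separation is specific to the paper, but the underlying KPCA monitoring machinery is standard. Below is a hedged, minimal kernel-PCA T2 monitoring sketch with an RBF kernel and an empirical control limit; it implements plain KPCA only, not the fault-relevant decomposition, and the data are synthetic.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """RBF kernel matrix between rows of A and rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def fit_kpca(X, gamma, n_comp):
    """Fit plain kernel PCA; returns what is needed to score new samples."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one           # centering in feature space
    lam, V = np.linalg.eigh(Kc)
    lam, V = lam[::-1][:n_comp], V[:, ::-1][:, :n_comp]  # largest eigenpairs
    alphas = V / np.sqrt(lam)                            # normalized dual coefficients
    return {"X": X, "K": K, "alphas": alphas, "lam": lam, "gamma": gamma, "one": one}

def t2_statistic(model, Xnew):
    """Hotelling-style T2 of new samples in the retained KPCA subspace."""
    k = rbf_kernel(Xnew, model["X"], model["gamma"])
    n = model["X"].shape[0]
    one_new = np.full((Xnew.shape[0], n), 1.0 / n)
    kc = k - one_new @ model["K"] - k @ model["one"] + one_new @ model["K"] @ model["one"]
    scores = kc @ model["alphas"]
    return np.sum(scores**2 / model["lam"], axis=1)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))                      # normal operating data (synthetic)
model = fit_kpca(X_train, gamma=0.5, n_comp=5)
limit = np.percentile(t2_statistic(model, X_train), 99)  # empirical 99% control limit

X_fault = rng.normal(size=(5, 4)) + np.array([3, 0, 0, 0])   # shifted "faulty" samples
print("T2:", np.round(t2_statistic(model, X_fault), 2), " limit:", round(limit, 2))
```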

Suggestions are made concerning the method of fault tree analysis and the use of certain symbols in the examination of system failures. The purpose of the fault tree analysis is to find logical connections of component or subsystem failures leading to undesirable occurrences. The results of these examinations are part of the system assessment concerning operation and safety. The objectives of the analysis are: systematic identification of all possible failure combinations (causes) leading to a specific undesirable occurrence, and finding of reliability parameters such as the frequency of failure combinations, the frequency of the undesirable occurrence, or the non-availability of the system when required. The fault tree analysis provides a clear and reconstructable documentation of the examination. (orig./HP) [de

Computer hardware fault administration is carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying the location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.

For many years, most computer architects have pursued one primary goal: performance. Architects have translated the ever-increasing abundance of ever-faster transistors provided by Moore's law into remarkable increases in performance. Recently, however, the bounty provided by Moore's law has been accompanied by several challenges that have arisen as devices have become smaller, including a decrease in dependability due to physical faults. In this book, we focus on the dependability challenge and the fault tolerance solutions that architects are developing to overcome it. The two main purposes

In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

In this updated edition of a previous wind turbine fault detection and fault tolerant control challenge, we present a more sophisticated wind turbine model and updated fault scenarios to enhance the realism of the challenge and therefore the value of the solutions. This paper describes...

Pilots are asked to manage faults during flight operations. This leads to the training question of the type and depth of system knowledge required to respond to these faults. Based on discussions with multiple airline operators, there is agreement th...

Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected from the Yangsan fault system. ESR dates from this fault system range from 870 to 240 ka. Results of this research suggest that long-term cyclic fault activity continued into the Pleistocene.
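
The age equation stated here is simply the equivalent dose divided by the dose rate; a tiny worked sketch, including the grain-size plateau check described in the abstract, is given below. All numbers are illustrative and are not the Yangsan measurements.

```python
# ESR age = equivalent dose / dose rate.
# Illustrative equivalent doses (Gy) per grain-size fraction and an assumed dose rate (Gy/ka).
dose_rate_gy_per_ka = 3.2
equivalent_dose_gy = {"25-45 um": 880, "45-75 um": 890, "75-150 um": 870, "150-250 um": 1450}

ages_ka = {size: de / dose_rate_gy_per_ka for size, de in equivalent_dose_gy.items()}
for size, age in ages_ka.items():
    print(f"{size:>10s}: {age:6.0f} ka")
# The three finest fractions cluster (a 'plateau'), so they are taken as fully reset;
# the coarsest fraction retains an inherited signal and is excluded from the age estimate.
```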

This book is a comprehensive, structured approach to fault diagnosis strategy. The different fault types, signal processing techniques, and loss characterisation are addressed in the book. This is essential reading for work with induction motors for transportation and energy.

An elementary, engineering oriented introduction to fault tree analysis is presented. The basic concepts, techniques and applications of fault tree analysis, FTA, are described. The two major steps of FTA are identified as (1) the construction of the fault tree and (2) its evaluation. The evaluation of the fault tree can be qualitative or quantitative depending upon the scope, extensiveness and use of the analysis. The advantages, limitations and usefulness of FTA are discussed

In recent years the wind turbine industry has focused on optimizing the cost of energy. One of the important factors in this is to increase the reliability of the wind turbines. Advanced fault detection, isolation and accommodation are important tools in this process. Clearly most faults are dealt with ... scenarios. This benchmark model is used in an international competition dealing with wind farm fault detection and isolation and fault tolerant control.

An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.

Three major areas that are considered in the development of an overall maintenance scheme of computer equipment are described. The areas of concern related to fault isolation techniques are: the programmer (or user), company and its policies, and the manufacturer of the equipment.

This thesis considered the development of fault tolerant control systems. The focus was on the category of automated processes that do not necessarily comprise a high number of identical sensors and actuators to maintain safe operation, but still have a potential for improving immunity to component...

Beyond the study of historical surface faulting events, this work investigates the possibility, in specific cases, of identifying pre-historical events whose memory survives in myths and legends. The myths of many famous sacred places of the ancient world contain relevant telluric references: "sacred" earthquakes, openings to the Underworld and/or chthonic dragons. Given the strong correspondence with local geological evidence, these myths may be considered as describing natural phenomena. It has been possible in this way to shed light on the geologic origin of famous myths (Piccardi, 1999, 2000 and 2001). Interdisciplinary research reveals that the origin of several ancient sanctuaries may be linked in particular to peculiar geological phenomena observed on local active faults (such as ground shaking and coseismic surface ruptures, gas and flame emissions, and strong underground rumblings). In many of these sanctuaries the sacred area lies directly above the active fault. In a few cases, faulting has also affected the archaeological relics, right through the main temple (e.g. Delphi, Cnidus, Hierapolis of Phrygia). As such, the arrangement of the cult site and the content of the related myths suggest that specific points along the trace of active faults have been noticed in the past and worshiped as special 'sacred' places, most likely interpreted as Hades' Doors. The mythological stratification of most of these sanctuaries dates back to prehistory and points to a common derivation from the cult of the Mother Goddess (the Lady of the Doors), which was widespread since at least 25,000 BC. The cult itself was later reconverted into various different divinities, while the 'sacred doors' of the Great Goddess and/or the dragons (offspring of Mother Earth and generally regarded as Keepers of the Doors) persisted in more recent mythologies. Piccardi L., 1999: The "Footprints" of the Archangel: Evidence of Early-Medieval Surface Faulting at Monte Sant'Angelo (Gargano, Italy

The LAMPF accelerator is presently producing 800-MeV proton beams at 0.5 mA average current. Machine protection for such a high-intensity accelerator requires a fast shutdown mechanism, which can turn off the beam within a few microseconds of the occurrence of a machine fault. The resulting beam unloading transients cause the rf systems to exceed control loop tolerances and consequently generate multiple fault indications for identification by the control computer. The problem is to isolate the primary fault or cause of beam shutdown while disregarding as many as 50 secondary fault indications that occur as a result of beam shutdown. The LAMPF First-Fault Identifier (FFI) for fast transient faults is operational and has proven capable of first-fault identification. The FFI design utilized features of the Fast Protection System that were previously implemented for beam chopping and rf power conservation. No software changes were required
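
Stripped to its essence, the first-fault problem described here is to time-order the fault indications captured around a shutdown and report the earliest one, discarding the secondary indications that follow. The toy sketch below shows only that selection step; channel names and timestamps are invented.

```python
# Fault indications captured around a fast beam shutdown: (timestamp in microseconds, channel).
indications = [
    (12.0, "rf_module_07_phase_error"),
    (11.2, "beam_current_interlock"),
    (10.4, "rf_module_03_reflected_power"),   # earliest -> candidate primary fault
    (13.5, "rf_module_11_amplitude_error"),
]

primary = min(indications, key=lambda rec: rec[0])
secondary = [ch for t, ch in sorted(indications) if ch != primary[1]]
print("primary fault:", primary[1])
print("secondary indications ignored:", secondary)
```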

Tests, Diagnosis and Fault Treatment were chosen as the guiding themes of the conference. However, the scope of the conference also included reliability, availability, safety and security issues in software and hardware systems. The following sessions were organized for the conference, which was complemented by an industrial presentation: Keynote Address, Reconfiguration and Recovery, System Level Diagnosis, Voting and Agreement, Testing, Fault-Tolerant Circuits, Array Testing, Modelling, Applied Fault Tolerance, Fault-Tolerant Arrays and Systems, Interconnection Networks, Fault-Tolerant Software. One paper has been indexed separately in the database. (orig./HP)

The types of fault rocks, the microstructural characteristics of fault tectonites and their relationship with uranium mineralization in the uranium-productive granite area are discussed. According to a synthetic analysis of the nature of stress, the extent of cracking and the microstructural characteristics of the fault rocks, they can be classified into five groups and sixteen subgroups. The author especially emphasizes the control of the cataclasite and fault breccia groups over uranium mineralization in the uranium-productive granite area. It is considered that more effective study should be made of the macrostructure and microstructure of fault rocks. This is of important practical significance in uranium exploration

The difference similitude matrix (DSM) method is effective in reducing an information system, with a high reduction rate and high validity. We use the DSM method to analyze the fault data of computer networks and obtain fault diagnosis rules. By discretizing the relative values of the fault data, we obtain the information system of the fault data. The DSM method reduces the information system and yields the diagnosis rules. Simulation with an actual scenario shows that fault diagnosis based on DSM can obtain a small number of effective rules.

Full Text Available This paper deals with the construction of the notion of Architectural Heritage in Portugal between 1825 and 1880, a period chosen because it corresponds to the definition and consolidation of the romantic preservation project. Starting from the problem generated by the integration into the property of the Nation of a voluminous mass of movable and immovable properties, the so-called «national properties», which until then had mostly been held by the Religious Orders and which included a high percentage of what is today considered the national heritage, we try to understand not only the evolution of the heritage framework, but also the discourses on the subject, the legislation promulgated and the actual practices of definition and safeguarding of this set of buildings which, in this period, with the development of heritage consciousness, were turned into National Monuments.

In 2001, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired a number of development and research efforts on improving the reliability of driver code. Today Linux is used in a much wider range of environments, provides a much wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? Are drivers still a major problem? To answer these questions, we have transported the experiments of Chou et al. to Linux versions 2.6.0 to 2.6.33, released between late 2003 and early 2010. We find that Linux has more than doubled in size during this period, but that the number of faults per line of code has been...

The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

The Bien Hoa airbase (southern Vietnam) is known as one of the Agent Orange hotspots that were seriously contaminated by Agent Orange/dioxin during the Vietnam War. Hundreds of samples including soil, sediment and fish were collected at the Bien Hoa Agent Orange hotspot for assessment of the environmental contamination caused by polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs). The toxicity equivalency quotient (TEQ) concentration of PCDD/Fs in soil and sediment varied from 7.6 to 962,000 and 17 to 4860 pg/g dry wt, respectively, implying very high contamination of PCDD/Fs in several areas. PCDD/F levels in fish ranged between 1.8 and 288 pg/g TEQ wet wt and were generally higher than advisory guidelines for food consumption. 2,3,7,8-Tetrachlorodibenzo-p-dioxin (2,3,7,8-TCDD) contributed 66-99 % of the TEQ for most of the samples, suggesting 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) from Agent Orange as the major source of the contamination. Vertical transport of PCDD/Fs was observed in the soil column, with TEQ levels above 1000 pg/g dry wt (the Vietnamese limit requiring remediation, TCVN 8183:2009) even at a depth of 1.8 m. The vertical transport of PCDD/Fs has probably mainly taken place during the "Ranch Hand" defoliant spray activities due to leaks and spills of phenoxy herbicides and solvents. The congener patterns suggest that transport of PCDD/Fs by weathering processes has led to their redistribution in the low-land areas. Also, an estimate for the total volume of contaminated soil requiring remediation to meet Vietnamese regulatory limits is provided.
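
The TEQ values quoted here follow the usual toxic-equivalency convention, TEQ = sum of (congener concentration x toxic equivalency factor). The sketch below shows that sum for an abbreviated, illustrative congener list; the concentrations are invented and only a small subset of TEF values is included.

```python
# Toxic equivalency: TEQ = sum over congeners of concentration * TEF.
# Concentrations in pg/g dry weight (illustrative) and an illustrative subset of TEF values.
concentrations = {"2,3,7,8-TCDD": 850.0, "1,2,3,7,8-PeCDD": 40.0, "2,3,7,8-TCDF": 120.0}
tef = {"2,3,7,8-TCDD": 1.0, "1,2,3,7,8-PeCDD": 1.0, "2,3,7,8-TCDF": 0.1}

teq = sum(concentrations[c] * tef[c] for c in concentrations)
print(f"TEQ = {teq:.1f} pg TEQ/g dry wt")
```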

Past movement on faults can be dated by measuring the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from the surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below a critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of these faults occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. The results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene.
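
A minimal numerical sketch of the dating relation described above, age = equivalent dose / dose rate, evaluated per grain-size fraction. All doses and the dose rate are invented for illustration; the flat segment at the finest fractions stands in for the plateau that is read as the age of the last fault movement.

def esr_age_ka(equivalent_dose_gy, dose_rate_gy_per_ka):
    """Age (ka) from the ratio of equivalent dose to dose rate."""
    return equivalent_dose_gy / dose_rate_gy_per_ka

# grain size (um) -> (equivalent dose in Gy, dose rate in Gy/ka); hypothetical values
fractions = {25: (1110.0, 3.0), 45: (1113.0, 3.0), 75: (1122.0, 3.0),
             150: (1900.0, 3.0), 250: (2800.0, 3.0)}

for size, (de, dr) in sorted(fractions.items()):
    print(f"{size:>4} um: {esr_age_ka(de, dr):6.0f} ka")
# The nearly constant ages of the finest fractions (~370 ka here) define the plateau;
# coarser grains were only partially reset and give older apparent ages.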

Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

This "Special Section on Real-Time Fault Diagnosis and Fault-Tolerant Control" of the IEEE Transactions on Industrial Electronics is motivated to provide a forum for academic and industrial communities to report recent theoretic/application results in real-time monitoring, diagnosis, and fault-tolerant design, and exchange the ideas about the emerging research direction in this field. Twenty-three papers were eventually selected through a strict peer-reviewed procedure, which represent the mo...

We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

The Wilshire fault is a potentially seismogenic, blind thrust fault inferred to underlie and cause the Wilshire arch, a Quaternary fold in the Hollywood area, just west of downtown Los Angeles, California. Two inverse models, based on the Wilshire arch, allow us to estimate the location and slip rate of the Wilshire fault, which may be illuminated by a zone of microearthquakes. A fault-bend fold model indicates a reverse-slip rate of 1.5-1.9 mm/yr, whereas a three-dimensional elastic-dislocation model indicates a right-reverse slip rate of 2.6-3.2 mm/yr. The Wilshire fault is a previously unrecognized seismic hazard directly beneath Hollywood and Beverly Hills, distinct from the faults under the nearby Santa Monica Mountains.

Faults in automated processes will often cause undesired reactions and shut-down of a controlled plant, and the consequences could be damage to the plant, to personnel or to the environment. Fault-tolerant control is the term for a set of recent techniques developed to increase plant availability and reduce the risk of safety hazards. Its aim is to prevent simple faults from developing into serious failures. Fault-tolerant control merges several disciplines to achieve this goal, including on-line fault diagnosis, automatic condition assessment and calculation of remedial actions when a fault is detected. The envelope of possible remedial actions is wide. This paper introduces tools to analyze and explore the structure and other fundamental properties of an automated system such that any redundancy in the process can be fully utilized to enhance safety and availability.

Cloud computing has become a prevalent on-demand service on the internet for storing, managing and processing data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, a fault tolerance mechanism is required to abstract faults away from users. We have proposed a fault-tolerant architecture that combines proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.
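
A hedged sketch of the two ingredients named above, assuming nothing about the proposed architecture itself: a proactive health probe that steers work away from a node before it fails, and a reactive retry that masks a fault after it occurs. All names (probe, execute, the node labels) are illustrative.

import random

def probe(node):
    """Proactive check: stand-in for a heartbeat / resource inspection."""
    return random.random() > 0.2           # pretend 80% of probes report a healthy node

def execute(task, node):
    """Stand-in for the real work; fails randomly to exercise the reactive path."""
    if random.random() < 0.3:
        raise RuntimeError(f"{node} crashed while running {task}")
    return f"{task} completed on {node}"

def run_with_fault_tolerance(task, nodes):
    for node in nodes:
        if not probe(node):                # proactive: skip nodes that already look unhealthy
            continue
        try:
            return execute(task, node)
        except RuntimeError:               # reactive: mask the failure and try the next replica
            continue
    raise RuntimeError(f"{task} failed on every node")

print(run_with_fault_tolerance("index-build", ["node-a", "node-b", "node-c"]))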

The PV Fault Detection Tool project plans to demonstrate that the FDT can (a) detect catastrophic and degradation faults and (b) identify the type of fault. This will be accomplished by collecting fault signatures using different instruments and integrating this information to establish a logical controller for detecting, diagnosing and classifying each fault.

A fault current limiter (FCL) includes a series of high-permeability posts that collectively define a core for the FCL. A DC coil, for the purpose of saturating a portion of the high-permeability posts, surrounds the complete structure outside of an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in such a manner that the senses of the magnetic field produced by each AC coil in the corresponding high-permeability core are opposing. Insulation barriers between phases improve the dielectric withstand properties of the dielectric medium.

Development of dependable systems relies on the ability of the system to determine and respond to off-nominal system behavior. Specification and development of these fault management capabilities must be done in a structured and principled manner to improve our understanding of these systems, and to make significant gains in dependability (safety, reliability and availability). Prior work has described a fundamental taxonomy and theory of System Health Management (SHM), and of its operational subset, Fault Management (FM). This conceptual foundation provides a basis for developing a framework to design and implement FM strategies that protect mission objectives and account for system design limitations. Selection of an SHM strategy has implications for the functions required to perform the strategy, and it places constraints on the set of possible design solutions. The framework developed in this paper provides a rigorous and principled approach to classifying SHM strategies, as well as methods for determining and implementing SHM strategies. An illustrative example is used to describe the application of the framework and the resulting benefits to system and FM design and dependability.

An improved accelerometer is introduced. It comprises a transducer responsive to vibration in machinery, which produces an electrical signal related to the magnitude and frequency of the vibration, and a decoding circuit responsive to the transducer signal, which produces a first fault signal and processes it to produce a second fault signal in which ground-shift effects are nullified.

In this paper, we show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set ... the faults have occurred. The last step is a fault isolation (FI) of the faults occurring in a specific fault set, i.e. equivalent to the standard FI step. A simple example demonstrates how to turn the algebraic necessary and sufficient conditions into explicit algorithms for designing filter banks, which ...

We carried out ESR dating of fault rocks collected near the nuclear reactor. The Upcheon fault zone is exposed close to the Ulzin nuclear reactor. The space-time pattern of fault activity on the Upcheon fault deduced from ESR dating of fault gouge can be summarised as follows: this fault zone was reactivated between fault breccia derived from Cretaceous sandstone and Tertiary volcanic-sedimentary rocks about 2 Ma, 1.5 Ma and 1 Ma ago. After those movements, the Upcheon fault was reactivated between Cretaceous sandstone and the fault breccia zone about 800 ka ago. This fault zone was reactivated again between fault breccia derived from Cretaceous sandstone and Tertiary volcanic-sedimentary rocks about 650 ka and after 125 ka ago. These data suggest that the long-term (200-500 k.y.) cyclic fault activity of the Upcheon fault zone continued into the Pleistocene. In the Ulzin area, ESR dates from the NW- and EW-trending faults range from 800 ka to 600 ka; NE- and EW-trending faults were reactivated between about 200 ka and 300 ka ago. On the other hand, ESR dates of the NS-trending fault are about 400 ka and 50 ka. The results of this research suggest that fault activity near the Ulzin nuclear reactor continued into the Pleistocene. One ESR date near the Youngkwang nuclear reactor is 200 ka.

During non-severe fault conditions, crowbar protection is not activated and the rotor windings of a doubly-fed induction generator (DFIG) are excited by the AC/DC/AC converter. Meanwhile, under asymmetrical fault conditions, the electrical variables oscillate at twice the grid frequency in the synchronous dq frame. In engineering practice, notch filters are usually used to extract the positive- and negative-sequence components. In these cases, the dynamic response of the rotor-side converter (RSC) and the notch filters have a large influence on the fault current characteristics of the DFIG. In this paper, the influence of the notch filters on the proportional-integral (PI) parameters is discussed and simplified calculation models of the rotor current are established. Then, the dynamic performance of the stator flux linkage under asymmetrical fault conditions is analyzed. On this basis, the fault characteristics of the stator current under asymmetrical fault conditions are studied and the corresponding analytical expressions of the stator fault current are obtained. Finally, digital simulation results validate the analytical results. The research results are helpful for meeting the requirements of practical short-circuit calculations and for the construction of relaying protection for power grids with penetration of DFIGs.
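
To make the sequence-extraction step concrete, the sketch below applies a notch filter centred at twice an assumed 50 Hz grid frequency to a synthetic dq-frame current containing the 100 Hz ripple that a negative-sequence component produces. The signal, the filter Q, and all numerical values are illustrative and are not taken from the paper.

import numpy as np
from scipy import signal

fs, f_grid = 10_000, 50.0                               # sample rate and grid frequency (Hz)
t = np.arange(0.0, 0.2, 1.0 / fs)
i_d = 1.0 + 0.4 * np.cos(2 * np.pi * 2 * f_grid * t)    # dq current: DC term + 2f ripple

b, a = signal.iirnotch(w0=2 * f_grid, Q=30.0, fs=fs)    # notch centred at 100 Hz
i_d_filtered = signal.filtfilt(b, a, i_d)               # ~ positive-sequence (DC) part

print(f"ripple std before: {i_d.std():.3f}, after: {i_d_filtered.std():.3f}")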

In this article I study, analyze and classify the variant readings and the different layers of authorial intervention that appear in the manuscript of Lope de Vega's La buena guarda and La encomienda bien guardada. I contend that the second version of the work should be edited, a possibility thus far neglected by modern editors.

An arc fault detection system for use on ungrounded or high-resistance-grounded power distribution systems is provided which can be retrofitted outside electrical switchboard circuits having limited space constraints. The system includes a differential current relay that senses a current differential between current flowing from secondary windings located in a current transformer coupled to a power supply side of a switchboard, and a total current induced in secondary windings coupled to a load side of the switchboard. When such a current differential is experienced, a current travels through an operating coil of the differential current relay, which in turn opens an upstream circuit breaker located between the switchboard and a power supply to remove the supply of power to the switchboard.

Probabilistic safety analysis (PSA) is the process by which the probability (or frequency of occurrence) of reactor fault conditions which could lead to unacceptable consequences is assessed. The basic objective of a PSA is to allow a judgement to be made as to whether or not the principal probabilistic requirement is satisfied. It also gives insights into the reliability of the plant which can be used to identify possible improvements. This is explained in the article. The scope of a PSA and the PSA performed by the National Nuclear Corporation (NNC) for the Heysham II and Torness AGRs and Sizewell-B PWR are discussed. The NNC methods for hazards, common cause failure and operator error are mentioned. (UK)

To constrain the age of neotectonic fault movement, Rb-Sr, K-Ar, U-series disequilibrium, C-14 and Be-10 methods were applied to fault gouges, fracture infillings and sediments from the Malbang, Ipsil and Wonwonsa faults in the Ulsan fault zone, the Yangsan fault in the Yeongdeog area, and the southeastern coastal area. Rb-Sr and K-Ar data imply that fault movement in the Ulsan fault zone initiated at around 30 Ma, and a preliminary dating result for the Yangsan fault in the Yeongdeog area is around 70 Ma. K-Ar and U-series disequilibrium dating results for fracture infillings in the Ipsil fault are consistent with reported ESR ages. Radiocarbon ages of Quaternary sediments from the Jeongjari area are discordant with the stratigraphic sequence. Carbon isotope data indicate a difference in sedimentary environment for those samples. Be-10 dating results for the Suryum fault area are consistent with reported OSL results.

This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association, it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential of the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of "not capable" reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coastal Fault Province and the related association of the "not capable" conclusion.

The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.

The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

For a fault classification model based on the extreme learning machine (ELM), the diagnosis accuracy and stability for rolling bearings are greatly influenced by a critical parameter, the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine is proposed to determine this tunable parameter. First, the vibration signals are measured and decomposed into different fault feature modes by variational mode decomposition. The fault features of each mode are then assembled into a high-dimensional feature vector set using permutation entropy. Second, the ELM output function is expressed through the inner product of a Gaussian kernel function to adaptively determine the number of hidden-layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and the ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability.
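
One ingredient of the feature-extraction step described above is permutation entropy; the sketch below computes the standard Bandt-Pompe estimate on synthetic vibration signals. The embedding order, delay, and the signals themselves are illustrative choices, not values from the paper.

import math
from itertools import permutations

import numpy as np

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1D signal."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(np.argsort(window))] += 1       # ordinal pattern of the window
    probs = [c / n for c in counts.values() if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order)) if normalize else h

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
healthy = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(t.size)
faulty = healthy + 0.5 * rng.standard_normal(t.size)   # extra broadband noise mimics a defect

print(f"PE healthy: {permutation_entropy(healthy):.3f}")
print(f"PE faulty:  {permutation_entropy(faulty):.3f}")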

Feedback control systems are vulnerable to faults in control loop sensors and actuators, because feedback actions may cause abrupt responses and process damage when faults occur.

... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Fault. 17.3 Section 17.3 Foreign Relations...) § 17.3 Fault. A recipient of an overpayment is without fault if he or she performed no act of... agency may have been at fault in initiating an overpayment will not necessarily relieve the individual...

An active fault diagnosis method for parametric or multiplicative faults is proposed. The method periodically adds a term to the controller that for a short period of time renders the system unstable if a fault has occurred, which facilitates rapid fault detection. An illustrative example is given....

Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use because there must be some guarantee of dependability. Some fault classification exists for classical systems, and is used to define faults. When dependability is at stake, such fault classification may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that

CLEFIA is a new 128-bit block cipher recently proposed by Sony Corporation. The fundamental structure of CLEFIA is a generalized Feistel structure consisting of four data lines. In this paper, the strength of CLEFIA against differential fault attack is explored. Our attack adopts a byte-oriented model of random faults. By randomly inducing a one-byte fault in one round, four bytes of faults can be obtained simultaneously in the next round, which efficiently reduces the total number of fault inductions needed in the attack. After attacking the last several rounds of the encryption, the original secret key can be recovered through analysis of the key schedule. The data-complexity analysis and experiments show that only about 18 faulty ciphertexts are needed to recover the entire 128-bit secret key, and about 54 faulty ciphertexts for 192/256-bit keys.
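
For readers unfamiliar with the fault model, the sketch below illustrates only the byte-oriented random-fault assumption, namely corrupting one randomly chosen byte of an intermediate state; the cipher rounds and the actual key-recovery analysis are deliberately omitted.

import os
import random

def inject_random_byte_fault(state: bytes) -> bytes:
    """Return a copy of a state with one randomly chosen byte replaced by a random value."""
    faulty = bytearray(state)
    pos = random.randrange(len(faulty))
    new = faulty[pos]
    while new == faulty[pos]:              # ensure the fault actually changes the byte
        new = random.randrange(256)
    faulty[pos] = new
    return bytes(faulty)

state = os.urandom(16)                     # stand-in for a 128-bit intermediate state
faulty_state = inject_random_byte_fault(state)
diff = [i for i in range(len(state)) if state[i] != faulty_state[i]]
print("fault injected at byte position(s):", diff)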

Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model of Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. as captured by the adversary-based faulty-memory RAM of Finocchi and Italiano. However, current fault-tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty-memory RAM, and we assume that both memory and disk are unreliable. We show a lower bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance, as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where ...

A method determines a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

Faults in the Earth's crust occur over a large range of scales, from microscale through mesoscale to large basin-scale faults. Frequently, deformation associated with faulting is not limited to the fault plane alone, but is combined with continuous near-field deformation in the wall rock, a phenomenon generally called fault drag. The correct interpretation and recognition of fault drag is fundamental for the reconstruction of the fault history and the determination of fault kinematics, as well as for prediction in areas of limited exposure or beyond comprehensive seismic resolution. Based on fault analyses derived from 3D visualization of natural examples of fault drag, the importance of fault geometry for the deformation of marker horizons around faults is investigated. The complex 3D structural models presented here are based on a combination of geophysical datasets and geological fieldwork. For an outcrop-scale example of fault drag in the hanging wall of a normal fault at St. Margarethen, Burgenland, Austria, data from ground-penetrating radar (GPR) measurements, detailed mapping and terrestrial laser scanning were used to construct a high-resolution structural model of the fault plane, the deformed marker horizons and the associated secondary faults. In order to obtain geometrical information about the largely unexposed master fault surface, a standard listric balancing dip-domain technique was employed. The results indicate that a listric shape can be excluded for this normal fault, as the constructed fault has a geologically meaningless shape, cutting upsection into the sedimentary strata. This kinematic modeling result is additionally supported by the observation of deformed horizons in the footwall of the structure. Alternatively, a planar fault model with reverse drag of markers in the hanging wall and footwall is proposed. Deformation around basin-scale normal faults: a second part of this thesis investigates a large-scale normal fault

OBJECTIVE: In this field trip we collect passive data in order to (1) convert the passive recordings to surface waves and (2) locate the Qademah fault using surface-wave migration. INTRODUCTION: In this field trip we collected passive data for several days. These data will be used to extract surface waves by interferometry, which will then be compared to active-source seismic data collected at the same location. A total of 288 receivers are used in a 3D layout with 5 m inline intervals and 10 m crossline intervals: 12 lines with 24 receivers on each line. You will need to download the file rec_times.mat; for each record it contains the field record number, the record day, month, hour, minute and second, and the record length. P.S. 1: All files have been converted from the original format (SEG-2) to MATLAB format. P.S. 2: Overlaps between records (10 to 1.5 sec.) have already been removed from these files.
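
If the record metadata are to be read in Python rather than MATLAB, something along the following lines should work; the exact variable names stored inside rec_times.mat are not documented here, so the sketch only lists the available keys and shows a hypothetical access.

from scipy.io import loadmat

meta = loadmat("rec_times.mat")
print([k for k in meta if not k.startswith("__")])   # list the stored variable names
# Once the variable name is known it can be accessed directly, e.g.:
# rec_times = meta["rec_times"]                      # hypothetical variable name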

UK NIREX, the body with responsibility for finding an acceptable strategy for deposition of radioactive waste has given the impression throughout its recent public consultation that the problem of nuclear waste is one of public and political acceptability, rather than one of a technical nature. However the results of the consultation process show that it has no mandate from the British public to develop a single, national, deep repository for the burial of radioactive waste. There is considerable opposition to this method of managing radioactive waste and suspicion of the claims by NIREX concerning the supposed integrity and safety of this deep burial option. This report gives substance to those suspicions and details the significant areas of uncertainty in the concept of effective geological containment of hazardous radioactive elements, which remain dangerous for tens of thousands of years. Because the science of geology is essentially retrospective rather than predictive, NIREX's plans for a single, national, deep 'repository' depend heavily upon a wide range of assumptions about the geological and hydrogeological regimes in certain areas of the UK. This report demonstrates that these assumptions are based on a limited understanding of UK geology and on unvalidated and simplistic theoretical models of geological processes, the performance of which can never be directly tested over the long time-scales involved. NIREX's proposals offer no guarantees for the safe and effective containment of radioactivity. They are deeply flawed. This report exposes the faults. (author)

A fault-tolerant actuator module, in a single containment shell, containing two actuator subsystems that are either asymmetrically or symmetrically laid out is provided. Fault tolerance in the actuators of the present invention is achieved by the employment of dual sets of equal resources. Dual resources are integrated into single modules, with each having the external appearance and functionality of a single set of resources.

An algebraic approach is given for the design of a static residual weighting factor in connection with fault detection. A complete parameterization is given of the weighting factor which will minimize a given performance index.

Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of a fault through the process, to test fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers, both for continuous processes described by analytical models and for discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant contro...

High-resolution aeromagnetic surveys are now an industry standard and they commonly detect anomalies that are attributed to faults within sedimentary basins. However, detailed studies identifying geologic sources of magnetic anomalies in sedimentary environments are rare in the literature. Opportunities to study these sources have come from well-exposed sedimentary basins of the Rio Grande rift in New Mexico and Colorado. High-resolution aeromagnetic data from these areas reveal numerous, curvilinear, low-amplitude (2–15 nT at 100-m terrain clearance) anomalies that consistently correspond to intrasedimentary normal faults (Figure 1). Detailed geophysical and rock-property studies provide evidence for the magnetic sources at several exposures of these faults in the central Rio Grande rift (summarized in Grauch and Hudson, 2007, and Hudson et al., 2008). A key result is that the aeromagnetic anomalies arise from the juxtaposition of magnetically differing strata at the faults as opposed to chemical processes acting at the fault zone. The studies also provide (1) guidelines for understanding and estimating the geophysical parameters controlling aeromagnetic anomalies at faulted strata (Grauch and Hudson), and (2) observations on key geologic factors that are favorable for developing similar sedimentary sources of aeromagnetic anomalies elsewhere (Hudson et al.).

A passive current limiting device and isolator is particularly adapted for use at high power levels for limiting excessive currents in a circuit in a fault condition such as an electrical short. The current limiting device comprises a magnetic core wound with two magnetically opposed, parallel connected coils of copper, a high temperature superconductor or other electrically conducting material, and a fault element connected in series with one of the coils. Under normal operating conditions, the magnetic flux density produced by the two coils cancel each other. Under a fault condition, the fault element is triggered to cause an imbalance in the magnetic flux density between the two coils which results in an increase in the impedance in the coils. While the fault element may be a separate current limiter, switch, fuse, bimetal strip or the like, it preferably is a superconductor current limiter conducting one-half of the current load compared to the same limiter wired to carry the total current of the circuit. The major voltage during a fault condition is in the coils wound on the common core in a preferred embodiment.

Recent deformation processes taking place in real time are analyzed on the basis of data on fault zones collected by long-term, detailed geodetic surveys using field methods and satellite monitoring. A new category of recent crustal movements is described and termed parametrically induced tectonic strain in fault zones. It is shown that in fault zones located in seismically active and aseismic regions, super-intensive displacements of the crust (5 to 7 cm per year, i.e. (5 to 7)·10^-5 per year) occur due to very small external impacts of natural or technogenic/industrial origin. The spatial discreteness of anomalous deformation processes is established along the strike of the regional Rechitsky fault in the Pripyat basin. It is concluded that recent anomalous activity of fault zones needs to be taken into account when defining regional regularities of geodynamic processes on the basis of real-time measurements. The paper presents results of analyses of data collected by long-term (20 to 50 years) geodetic surveys in the highly seismically active regions of Kopetdag, Kamchatka and California. Instrumental geodetic measurements of recent vertical and horizontal displacements in fault zones show deformations that 'paradoxically' deviate from the inherited movements of past geological periods. In terms of recent geodynamics, the 'paradoxes' of high and low strain velocities are related to the reliable empirical fact of extremely high local deformation velocities in fault zones (about 10^-5 per year and above), which occur against a background of slow regional deformations whose velocities are lower by 2 to 3 orders of magnitude. Very low average annual velocities of horizontal deformation are recorded in the seismic regions of Kopetdag and Kamchatka and in the San Andreas fault zone; they amount to only 3 to 5 amplitudes of the Earth tidal deformations per year. A 'fault

Regardless of the mission type, deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind that of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: (1) regeneration of stress irregularities in an earthquake; (2) the concentration of stress drop in an earthquake into asperities; (3) starting and stopping of earthquake slip at fault junctions; and (4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction, and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. The energy absorbed at the junction, equal to the confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ, where P is the confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the
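
As a quick sanity check of the ratio quoted above, the lines below evaluate P/µ at 10 km depth using representative crustal values; the density and shear modulus are assumed typical figures, not numbers taken from the paper.

rho = 2700.0      # crustal density, kg/m^3 (assumed typical value)
g = 9.8           # gravitational acceleration, m/s^2
depth = 10e3      # depth, m
mu = 30e9         # shear modulus, Pa (assumed typical crustal value)

P = rho * g * depth                                     # lithostatic confining pressure
print(f"P = {P / 1e6:.0f} MPa, P/mu = {P / mu:.3f}")    # ~0.01, as quoted above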

Fault analysis in solar photovoltaic (PV) arrays is a fundamental task for increasing reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on the fault current is discussed and shown, at times, to prevent the fault-current protection devices from tripping. A small-scale experimental PV benchmark system has been developed at Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance conditions. The other is a fault evolution in a PV array during the night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" or "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
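
To illustrate the current-limiting behaviour invoked above, the sketch below evaluates a simplified single-diode string model (series and shunt resistances ignored); all parameter values are representative module numbers chosen for illustration, not data from the thesis.

import numpy as np

I_ph, I_0 = 8.5, 1e-9           # photocurrent and diode saturation current (A)
n, N_s, V_t = 1.3, 60, 0.0258   # ideality factor, cells in series, thermal voltage (V)

def string_current(v):
    """Terminal current of one module at voltage v (ideal single-diode model)."""
    return I_ph - I_0 * (np.exp(v / (n * N_s * V_t)) - 1.0)

v_mpp, v_fault = 38.0, 0.0      # roughly the maximum-power-point voltage vs. a hard short
print(f"normal operating current ~ {string_current(v_mpp):.2f} A")
print(f"short-circuit/fault current ~ {string_current(v_fault):.2f} A")
# The fault current barely exceeds the normal operating current, so a series fuse
# rated with the usual margin above I_sc may never carry enough current to clear.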

This report presents the results of a study commissioned by the Department for Business, Enterprise and Industry (BERR; formerly the Department of Trade and Industry) into the application of fault current limiters in the UK. The study reviewed the current state of fault current limiter (FCL) technology and regulatory position in relation to all types of current limiters. It identified significant research and development work with respect to medium voltage FCLs and a move to high voltage. Appropriate FCL technologies being developed include: solid state breakers; superconducting FCLs (including superconducting transformers); magnetic FCLs; and active network controllers. Commercialisation of these products depends on successful field tests and experience, plus material development in the case of high temperature superconducting FCL technologies. The report describes FCL techniques, the current state of FCL technologies, practical applications and future outlook for FCL technologies, distribution fault level analysis and an outline methodology for assessing the materiality of the fault level problem. A roadmap is presented that provides an 'action agenda' to advance the fault level issues associated with low carbon networks.

Methods for generating repair checklists on the basis of fault tree logic and probabilistic importance are presented. A one-step-ahead optimization procedure, based on the concept of component criticality, minimizing the expected time to diagnose system failure is outlined. Options available to the operator of a nuclear power plant when system fault conditions occur are addressed. A low-pressure emergency core cooling injection system, a standby safeguard system of a pressurized water reactor power plant, is chosen as an example illustrating the methods presented

Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However, the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging, which is only capable of resolving features on a decametre scale horizontally and a metre scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales can allow smaller faults to be lost in a resolution gap, which could mean potential pathways are left unseen. Here the focus is upon PFS within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay, where PFS were first identified, and it approaches the seafloor within the Outer Thames Estuary. This allows the direct analysis of PFS surface expressions via high-resolution 1 m bathymetric imaging in combination with high-resolution seismic imaging. Using these datasets, surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12 m and the largest 612 m in length. The displacements over these faults, established from both bathymetric and seismic imaging, range from 30 cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector, where it lies approximately 1 km below the seafloor. These have typical PFS attributes, with lengths of hundreds of metres to kilometres and throws of tens of metres, a magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns within both locations indicate that the smaller faults exist within the typical PFS structure but are

The design and reliability of four fault-tolerant architectures that may be used in nuclear power plant control systems were evaluated. Two architectures are variations of triple-modular-redundant (TMR) systems, and two are variations of dual redundant systems. The evaluation includes a review of methods of implementing fault-tolerant control, the importance of automatic recovery from failures, methods of self-testing diagnostics, block diagrams of typical fault-tolerant controllers, review of fault-tolerant controllers operating in nuclear power plants, and fault tree reliability analyses of fault-tolerant systems

Fault detection and fault isolation for in-service decision support systems for marine surface vehicles will be presented in this paper. The stochastic wave elevation and the associated ship responses are modeled in the frequency domain. The paper takes as an example fault isolation of a containe... ... to the quality of decisions given to navigators.

A general architecture for fault-tolerant control is proposed. The architecture is based on the (primary) YJBK parameterization of all stabilizing compensators and uses the dual YJBK parameterization to quantify the performance of the fault-tolerant system. The approach suggested can be applied ... degradation in the sense of guaranteed degraded performance. A number of fault diagnosis problems, fault-tolerant control problems, and feedback control with fault rejection problems are formulated/considered, mainly from a fault modeling point of view. The method is illustrated on a servo example including ...

This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include fault diagnosis (fault estimation, FE) for systems with model uncertainties, FE for systems with parametric faults, and FE for a class of nonlinear systems.

Probabilistic Risk Assessment (PRA) techniques are utilized in the nuclear industry to perform safety analyses of complex defense-in-depth systems. A major effort in PRA development is fault tree construction. The Integrated Fault Tree Environment (IFTREE) is an interactive, graphics-based tool for fault tree design. IFTREE provides integrated building, editing, and analysis features on a personal workstation. The design philosophy of IFTREE is presented, and the interface is described. IFTREE utilizes a unique rule-based solution algorithm founded on artificial intelligence (AI) techniques. The impact of the AI approach on the program design is stressed. IFTREE has been developed to handle the design and maintenance of full-size living PRAs and is currently in use.
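
For readers unfamiliar with what a fault tree solver computes, the sketch below rolls up a tiny hypothetical tree of independent basic events through AND/OR gates to a top-event probability; this is textbook fault-tree arithmetic, not IFTREE's rule-based algorithm, and every event name and probability is invented.

def or_gate(*probs):
    """P(at least one event) for independent basic events."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def and_gate(*probs):
    """P(all events) for independent basic events."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Hypothetical basic-event probabilities
pump_fails, valve_fails = 1e-3, 5e-4
offsite_power_lost, backup_power_lost = 1e-2, 5e-2

no_power = and_gate(offsite_power_lost, backup_power_lost)   # both supplies must fail
top_event = or_gate(pump_fails, valve_fails, no_power)       # any branch fails the system
print(f"P(top event) = {top_event:.3e}")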

Satellite laser ranging techniques are used to monitor the broad motion of the tectonic plates comprising the San Andreas fault system. The San Andreas Fault Experiment (SAFE) has progressed through upgrades made to the laser system hardware and improvements in the modeling capabilities of the spaceborne laser targets. Of special note is the 1976 launch of the Laser Geodynamics Satellite (LAGEOS), NASA's only completely dedicated laser satellite. The results of plate motion projected onto this 896 km measured line over the past eleven years are summarized and intercompared.

New geophysical and fault kinematic studies indicate that late Cenozoic basin development in the Mormon Point area of Death Valley, California, was accommodated by fault rotations. Three of six fault segments recognized at Mormon Point are now inactive and have been rotated to low dips during extension. The remaining three segments are now active and moderately to steeply dipping. From the geophysical data, one active segment appears to offset the low-angle faults in the subsurface of Death Valley.

Anisotropic Magnetoresistance angle sensors are widely used in automotive applications considered to be safety-critical applications. Therefore dependability is an important requirement and fault-tolerant strategies must be used to guarantee the correct operation of the sensors even in case of

... and variations in Mohr-Coulomb parameters including internal friction. Using SDEM modelling, we have mapped the propagation of the tip-line of the fault, as well as the evolution of the fold geometry across sedimentary layers of contrasting rheological parameters, as a function of the increased offset ... Understanding the dynamics and kinematics of fault-propagation folding is important for evaluating the associated hydrocarbon play, for accomplishing reliable section balancing (structural reconstruction), and for assessing seismic hazards. Accordingly, the deformation style of fault-propagation ... a precise indication of when faults develop and hence also the sequential evolution of secondary faults. Here we focus on the generation of a fault-propagated fold with a reverse sense of motion at the master fault, varying only the dip of the master fault and the mechanical behaviour of the deformed ...

In this study, at the request of the Ministry of Education, Culture, Sports, Science and Technology, we carried out two sets of active fault investigations in the offshore extensions of the Kikugawa fault and the Nishiyama fault. Based on those results, we want to clarify the following matters about both active faults: (1) the fault continuity between land and sea; (2) the length of each active fault; (3) the division into segments; and (4) the activity characteristics. In this investigation, we carried out a digital single-channel seismic reflection survey over the whole area of both active faults. In addition, a high-resolution multichannel seismic reflection survey was carried out to image the detailed structure of the shallow strata. Furthermore, vibrocoring was carried out to obtain information on sedimentation ages. The reflection profiles of both active faults were extremely clear. Characteristics of strike-slip faulting, such as flower structures and the dispersion of the active fault into strands, were recognized. In addition, age analysis of the strata showed that the Holocene sediment cover on the continental shelf in this sea area is extremely thin. This investigation confirmed that the Kikugawa fault extends further offshore than indicated by existing research. In addition, the width of the active fault zone appears to increase toward the offshore as the strands disperse. At present, we think that the Kikugawa fault can be divided into several segments based on the distribution of these strands. For the Nishiyama fault, reflection profiles showing the existence of the active fault were acquired in the sea between Ooshima and Kyushu. From this result and existing topographic research on Ooshima, it is thought that the Nishiyama fault and the Ooshima offshore active fault form a continuous structure. As for the Ooshima offshore active fault, the uplifted side changes along strike, and its direction changes too. Therefore, we

... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Fault. 29.522 Section 29.522 Money... Overpayments § 29.522 Fault. (a) General rule. A debtor is considered to be at fault if he or she, or any other... requirement. (3) The following factors may affect the decision as to whether the debtor is or is not at fault...

In this paper we consider the problem of fault estimation and accommodation for discrete time piecewise linear systems. A robust fault estimator is designed to estimate the fault such that the estimation error converges to zero and H∞ performance of the fault estimation is minimized. Then, the es...

An apparatus and program product determine a nodal fault along the boundary, or face, of a computing cell. Nodes on adjacent cell boundaries communicate with each other, and the communications are analyzed to determine if a node or connection is faulty.

The paper focuses on the responsibility arising from registered financial results. The analysis of this responsibility presupposes its evaluation and a determination of the role of fault in the formation of negative results. The search for efficiency in this whole process is justified by an understanding of the mechanisms that regulate the behavior of economic actors.

The fault detection and isolation (FDI) problem in connection with Proportional Integral (PI) Observers is considered in this paper. A compact formulation of the FDI design problem using PI observers is given. An analysis of the FDI design problem is derived with respect to the time domain...

This paper considers the problem of fault detection and isolation using a zero or almost-zero threshold. A number of different fault detection and isolation problems using exact or almost exact disturbance decoupling are formulated. Solvability conditions are given for the formulated design problems....... The l-step delayed fault detection problem is also considered for discrete-time systems....

... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 831.1402 Section 831.1402...) RETIREMENT Standards for Waiver of Overpayments § 831.1402 Fault. A recipient of an overpayment is without fault if he/she performed no act of commission or omission which resulted in the overpayment. The fact...

The results of a perturbed gamma-gamma angular correlations experiment on In-111 implanted into a properly cut single crystal of copper show that the defect known in the literature as "stacking fault" is not a planar faulted loop but a stacking fault tetrahedron with a size of 10-50 Angstrom.

... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Fault. 255.11 Section 255.11 Employees... § 255.11 Fault. (a) Before recovery of an overpayment may be waived, it must be determined that the overpaid individual was without fault in causing the overpayment. If recovery is sought from other than the...

... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Fault. 845.302 Section 845.302... EMPLOYEES RETIREMENT SYSTEM-DEBT COLLECTION Standards for Waiver of Overpayments § 845.302 Fault. A recipient of an overpayment is without fault if he or she performed no act of commission or omission that...

The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin and distribution of clays in fault rocks is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that formation of fault rocks at those locations was governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, and potentially influence both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of this authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control the phenomenon. © 2010 Elsevier Ltd.

The increasing size and complexity of software systems makes it hard to prevent or remove all possible faults. Faults that remain in the system can eventually lead to a system failure. Fault tolerance techniques are introduced for enabling systems to recover and continue operation when they are

We are applying machine learning (ML) techniques to continuous acoustic emission (AE) data from laboratory earthquake experiments. Our goal is to apply explicit ML methods to this acoustic data (the AE) in order to infer frictional properties of a laboratory fault. The experiment is a double direct shear apparatus comprised of fault blocks surrounding fault gouge composed of glass beads or quartz powder. Fault characteristics are recorded, including shear stress, applied load (bulk friction = shear stress/normal load) and shear velocity. The raw acoustic signal is continuously recorded. We rely on explicit decision tree approaches (Random Forest and Gradient Boosted Trees) that allow us to identify important features linked to the fault friction. A training procedure that employs both the AE and the recorded shear stress from the experiment is first conducted. Then, testing takes place on data the algorithm has never seen before, using only the continuous AE signal. We find that these methods provide rich information regarding frictional processes during slip (Rouet-Leduc et al., 2017a; Hulbert et al., 2017). In addition, similar machine learning approaches predict failure times, as well as slip magnitudes in some cases. We find that these methods work for both stick-slip and slow-slip experiments, for periodic slip and for aperiodic slip. We also derive a fundamental relationship between the AE and the friction describing the frictional behavior of any earthquake slip cycle in a given experiment (Rouet-Leduc et al., 2017b). Our goal is to ultimately scale these approaches to Earth geophysical data to probe fault friction. References: Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros, C. Humphreys and P. A. Johnson, Machine learning predicts laboratory earthquakes, in review (2017), https://arxiv.org/abs/1702.05774. Rouet-Leduc, B. et al., Friction Laws Derived From the Acoustic Emissions of a Laboratory Fault by Machine Learning (2017), AGU Fall Meeting Session S025.
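A minimal sketch of the kind of decision-tree regression workflow the abstract describes: statistical features computed on windows of a continuous AE trace are mapped to the measured shear stress with a random forest. The feature set, window length, and file names are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch: predicting fault shear stress from windowed AE statistics
# with a random forest; features and file names are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def window_features(ae_signal, window=4096):
    """Split the AE trace into windows and compute simple statistical features."""
    n = len(ae_signal) // window
    segs = ae_signal[: n * window].reshape(n, window)
    return np.column_stack([
        segs.var(axis=1),                                   # energy proxy
        np.abs(segs).max(axis=1),                           # peak amplitude
        ((segs[:, :-1] * segs[:, 1:]) < 0).mean(axis=1),    # zero-crossing rate
        np.percentile(np.abs(segs), 90, axis=1),            # high amplitude quantile
    ])

# ae and shear_stress are assumed to be pre-aligned arrays from the experiment.
ae = np.load("ae_trace.npy")                # hypothetical file names
shear_stress = np.load("shear_stress.npy")

X = window_features(ae)
y = shear_stress[: len(X)]                  # one stress value per AE window (assumed alignment)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)  # keep temporal order
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```

Keeping the train/test split unshuffled mimics testing on data the algorithm has never seen, as described above.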

This book provides recent theoretical developments in and practical applications of fault diagnosis and fault tolerant control for complex dynamical systems, including uncertain systems, linear and nonlinear systems. Combining adaptive control technique with other control methodologies, it investigates the problems of fault diagnosis and fault tolerant control for uncertain dynamic systems with or without time delay. As such, the book provides readers a solid understanding of fault diagnosis and fault tolerant control based on adaptive control technology. Given its depth and breadth, it is well suited for undergraduate and graduate courses on linear system theory, nonlinear system theory, fault diagnosis and fault tolerant control techniques. Further, it can be used as a reference source for academic research on fault diagnosis and fault tolerant control, and for postgraduates in the field of control theory and engineering.

compared to USAF standards.43 McCarty was concerned about airlift transporting unnecessary items like champagne and ice that would normally move by... ice and champagne to Dien Bien Phu.44 Further confirmation exists in the fact that upon his promotion to brigadier general, de Castries' new rank... and congratulatory bottle of champagne were airdropped to him but fell instead into Viet Minh hands.45 Certainly not all airlifts into Dien Bien Phu

Full Text Available The design of a robust fault estimation observer is studied for linear multiagent systems subject to incipient faults. By considering the fact that incipient faults are in low-frequency domain, the fault estimation of such faults is proposed for discrete-time multiagent systems based on finite-frequency technique. Moreover, using the decomposition design, an equivalent conclusion is given. Simulation results of a numerical example are presented to demonstrate the effectiveness of the proposed techniques.

face, passing through the alluvial-colluvial fan at location 2. The gentle warping of the surface was completely modified because of severe cultivation practice. Therefore, it was difficult to confirm it in field. To the south ... scarp has been modified by present day farming. At location 5 near Wandhay village, an active fault trace ...

In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator faults is investigated. The actuator fault is assumed to have no traditional affine appearance of the system state variables and control input. The useful property of the basis function of the radial basis function neural network (NN), which will be used in the design of the fault tolerant controller, is explored. Based on the analysis of the design of normal and passive fault tolerant controllers, by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with fault alarm is proposed. Compared with results in the literature, the fault-tolerant control scheme can minimize the time delay between fault occurrence and accommodation, called the time delay due to fault diagnosis, and reduce the adverse effect on system performance. In addition, the FTC scheme combines the advantages of a passive fault-tolerant control scheme with the properties of a traditional active fault-tolerant control scheme. Furthermore, the fault-tolerant control scheme requires no additional fault detection and isolation model, which is necessary in the traditional active fault-tolerant control scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques.

Managing faults and their resultant failures is a fundamental and critical part of developing and operating aerospace systems. Yet, recent studies have shown that the engineering "discipline" required to manage faults is not widely recognized nor evenly practiced within the NASA community. Attempts to simply name this discipline in recent years has been fraught with controversy among members of the Integrated Systems Health Management (ISHM), Fault Management (FM), Fault Protection (FP), Hazard Analysis (HA), and Aborts communities. Approaches to managing space system faults typically are unique to each organization, with little commonality in the architectures, processes and practices across the industry.

This work addresses the issue of design optimization for fault-tolerant hard real-time systems. In particular, our focus is on the handling of transient faults using both checkpointing with rollback recovery and active replication. Fault-tolerant schedules are generated based on a conditional...... process graph representation. The formulated system synthesis approaches decide the assignment of fault-tolerance policies to processes, the optimal placement of checkpoints and the mapping of processes to processors, such that multiple transient faults are tolerated, transparency requirements...
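As a minimal sketch (not the paper's synthesis algorithm), the checkpointing-with-rollback mechanism mentioned above can be illustrated as follows; the fault model, retry policy, and parameter values are assumptions for illustration only.

```python
# Minimal sketch of checkpointing with rollback recovery under transient faults;
# the fault model and re-execution policy are illustrative assumptions.
import random

def run_with_checkpoints(n_steps, checkpoint_every, fault_rate=0.05, max_retries=10):
    state, checkpoint, retries, step = 0, 0, 0, 0
    while step < n_steps:
        if random.random() < fault_rate:                       # transient fault detected
            state = checkpoint                                 # roll back to last checkpoint
            step = (step // checkpoint_every) * checkpoint_every
            retries += 1
            if retries > max_retries:
                raise RuntimeError("too many transient faults")
            continue
        state += 1                                             # useful computation this step
        step += 1
        if step % checkpoint_every == 0:
            checkpoint = state                                 # save a recovery point
    return state, retries

print(run_with_checkpoints(n_steps=100, checkpoint_every=10))
```

The checkpoint interval trades re-execution time against checkpointing overhead, which is exactly the kind of placement decision the synthesis approach above optimizes.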

the applicability of the presented methods. The theoretical results are illustrated by two running examples which are used throughout the book. The book addresses engineering students, engineers in industry and researchers who wish to get a survey over the variety of approaches to process diagnosis and fault......The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process...

Computer code CAT for the automatic construction of fault trees is briefly described. Code CAT makes possible simple modelling of components using decision tables, accelerates the fault tree construction process, constructs fault trees of different complexity, and is capable of harmonized co-operation with the programs PREP and KITT 1,2 for fault tree analysis. The efficiency of program CAT, and thus the accuracy and completeness of the fault trees constructed, significantly depends on the compilation and sophistication of the decision tables. Currently, program CAT is used in co-operation with the programs PREP and KITT 1,2 in reliability analyses of nuclear power plant systems. (B.S.)

The focus in this paper is on active fault detection (AFD) for MIMO systems with parametric faults. The problem of designing auxiliary inputs with respect to detection of parametric faults is investigated. An analysis of the design of auxiliary inputs is given based on analytic transfer functions...... from auxiliary input to residual outputs. The analysis is based on a singular value decomposition of these transfer functions. Based on this analysis, it is possible to design the auxiliary input as well as the associated residual vector with respect to every single parametric fault in the system...... such that it is possible to detect these faults....

Deformation associated with normal fault earthquakes and geologic structures provides insights into the seismic cycle as it unfolds over time scales from seconds to millions of years. Improved understanding of normal faulting will lead to more accurate seismic hazard assessments and prediction of associated structures. High-precision aftershock locations for the 1995 Kozani-Grevena earthquake (Mw 6.5), Greece, image a segmented master fault and antithetic faults. This three-dimensional fault geometry is typical of normal fault systems mapped from outcrop or interpreted from reflection seismic data and illustrates the importance of incorporating three-dimensional fault geometry in mechanical models. Subsurface fault slip associated with the Kozani-Grevena and 1999 Hector Mine (Mw 7.1) earthquakes is modeled using a new method for slip inversion on three-dimensional fault surfaces. Incorporation of three-dimensional fault geometry improves the fit to the geodetic data while honoring aftershock distributions and surface ruptures. GPS surveying of deformed bedding surfaces associated with normal faulting in the western Grand Canyon reveals patterns of deformation that are similar to those observed by satellite radar interferometry (InSAR) for the Kozani-Grevena earthquake, with a prominent down-warp in the hanging wall and a lesser up-warp in the footwall. However, deformation associated with the Kozani-Grevena earthquake extends ~20 km from the fault surface trace, while the folds in the western Grand Canyon only extend 500 m into the footwall and 1500 m into the hanging wall. A comparison of mechanical and kinematic models illustrates advantages of mechanical models in exploring normal faulting processes, including incorporation of both deformation and causative forces, and the opportunity to incorporate more complex fault geometry and constitutive properties. Elastic models with antithetic or synthetic faults or joints in association with a master

Full Text Available Structural characteristics of fault rocks distributed within major fault zones provide basic information in understanding the physical aspects of faulting. Mesoscopic structural observations of the drilled cores from Taiwan Chelungpu-fault Drilling Project Hole-A are reported in this article to describe and reveal the distribution of fault rocks within the Chelungpu Fault System.

Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone
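For readers unfamiliar with the shale gouge ratio mentioned above, it is conventionally computed as the clay-weighted thickness of the beds that have slipped past a point on the fault, expressed as a percentage of the throw. The sketch below uses hypothetical bed data purely for illustration.

```python
# Illustrative sketch of the standard shale gouge ratio (SGR) calculation;
# bed thicknesses and clay fractions are hypothetical.
def shale_gouge_ratio(beds, throw):
    """beds: list of (thickness_m, clay_fraction) for the interval equal to the throw."""
    assert abs(sum(t for t, _ in beds) - throw) < 1e-6, "bed thicknesses must sum to the throw"
    return 100.0 * sum(t * vsh for t, vsh in beds) / throw

# Example: 30 m throw with alternating sand (low clay) and shale (high clay) beds.
beds = [(10.0, 0.15), (5.0, 0.80), (10.0, 0.20), (5.0, 0.75)]
print(f"SGR = {shale_gouge_ratio(beds, throw=30.0):.1f} %")   # higher SGR -> better seal
```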

The report for long-term evaluation of active faults was published by the Headquarters for Earthquake Research Promotion in November 2010. After the occurrence of the 2011 Tohoku-oki earthquake, the safety review guide with regard to the geology and ground of a site was revised by the Nuclear Safety Commission in March 2012 to incorporate scientific knowledge of that earthquake. The Nuclear Regulation Authority, established in September 2012, is newly planning the New Safety Design Standard related to Earthquakes and Tsunamis of Light Water Nuclear Power Reactor Facilities. With respect to those guides and standards, our investigations for developing methods of evaluating active faults are as follows. (1) For better evaluation of offshore fault activity, we proposed a workflow to date marine terraces (indicators of offshore fault activity) over the last 400,000 years. We also developed fault-related fold analysis for evaluating blind faults. (2) To clarify the activity of active faults without overlying strata, we carried out color analysis of fault gouge and classified the activity into timescales of thousands of years and tens of thousands of years. (3) To reduce uncertainties in fault activity and earthquake frequency, we compiled the survey data and possible errors. (4) For improving seismic hazard analysis, we compiled the fault activity of the Yunotake and Itozawa faults, induced by the 2011 Tohoku-oki earthquake. (author)

Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from the surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Ulzin nuclear reactor. ESR signals of quartz grains separated from fault rocks collected from the E-W trending fault are saturated. This indicates that the last movement of these faults occurred before the Quaternary period. ESR dates from the NW-trending faults range from 300 ka to 700 ka. On the other hand, the ESR date of the N-S trending fault is about 50 ka. The results of this research suggest that long-term cyclic fault activity near the Ulzin nuclear reactor continued into the Pleistocene.
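Written out, the age relation stated above is simply the equivalent dose divided by the dose rate; the numerical values below are purely illustrative and are not taken from the study.

```latex
% ESR age relation; numerical values are illustrative only.
t \;=\; \frac{D_E}{\dot{D}}
\qquad \text{e.g.} \qquad
t \;=\; \frac{1.5\ \mathrm{kGy}}{3\ \mathrm{Gy/ka}} \;=\; 500\ \mathrm{ka}
```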

Engineering systems are vulnerable to different kinds of faults. Faults may compromise safety, cause sub-optimal operation and degrade performance, if they do not prevent the whole system from functioning altogether. Fault tolerant control (FTC) methods ensure that the system performance remains within...... with feedback control. Fault recoverability provides important and useful information which could be used in analysis and design. However, computing fault recoverability is numerically expensive. In this paper, a new approach for computation of fault recoverability for bilinear systems is proposed...... approach for computation of fault recoverability is proposed which reduces the computational burden significantly. The proposed results are used for an electro-hydraulic drive to reveal the redundant actuating capabilities in the system....

Two active fault diagnosis methods for additive or parametric faults are proposed. Both methods are based on controller reconfiguration rather than on requiring an exogenous excitation signal, as it is otherwise common in active fault diagnosis. For the first method, it is assumed that the system...... considered is controlled by an observer-based controller. The method is then based on a number of alternate observers, each designed to be sensitive to one or more additive faults. Periodically, the observer part of the controller is changed into the sequence of fault sensitive observers. This is done...... in a way that guarantees the continuity of transition and global stability using a recent result on observer parameterization. An illustrative example inspired by a field study of a drag racing vehicle is given. For the second method, an active fault diagnosis method for parametric faults is proposed...

In wind power applications, different asymmetrical types of grid fault can be categorized after the Y/d transformer, and the positive and negative components of a single-phase fault, phase-to-phase fault, and two-phase fault can be summarized. Due to the newly introduced negative and even...... the natural component of the Doubly-Fed Induction Generator (DFIG) stator flux during the fault period, their effects on the rotor voltage can be investigated. It is concluded that the phase-to-phase fault is the worst scenario due to its largest introduction of negative stator flux. Afterwards......, the capability of a 2 MW DFIG to ride through asymmetrical grid faults can be estimated for the existing design of the power electronics converter. Finally, a control scheme aimed at improving the DFIG capability is proposed and the simulation results validate its feasibility....

The areas bounding the Troms-Finnmark Fault Complex are affected by complex tectonic evolution. In this work, the history of fault growth, reactivation, and inversion of major faults in the Troms-Finnmark Fault Complex and the Ringvassøy Loppa Fault Complex is interpreted from three-dimensional seismic data, structural maps and fault displacement plots. Our results reveal eight normal faults bounding rotated fault blocks in the Troms-Finnmark Fault Complex. Both the throw-depth and displacement-distance plots show that the faults exhibit complex configurations of lateral and vertical segmentation with varied profiles. Some of the faults were reactivated by dip-linkages during the Late Jurassic and exhibit polycyclic fault growth, including radial, syn-sedimentary, and hybrid propagation. Localised positive inversion is the main mechanism of fault reactivation occurring at the Troms-Finnmark Fault Complex. The observed structural styles include folds associated with extensional faults, folded growth wedges and inverted depocentres. Localised inversion was intermittent with rifting during the Middle Jurassic-Early Cretaceous at the boundaries of the Troms-Finnmark Fault Complex to the Finnmark Platform. Additionally, tectonic inversion was more intense at the boundaries of the two fault complexes, affecting Middle Triassic to Early Cretaceous strata. Our study shows that localised folding is either a product of compressional forces or of lateral movements in the Troms-Finnmark Fault Complex. Regional stresses due to the uplift in the Loppa High and halokinesis in the Tromsø Basin are likely additional causes of inversion in the Troms-Finnmark Fault Complex.

Within the framework of electric power market liberalization, DC networks have many advantages compared to AC ones, but their protection requires new systems. Superconducting fault current limiters, once the critical current is exceeded, limit the fault current to a preset value lower than the theoretical short-circuit current. For these applications, coated conductors offer excellent opportunities. We worked on the implementation of these materials and built a test bench. We carried out limiting experiments to estimate the quench homogeneity for various short-circuit parameters. An important point is the temperature measurement by sensors deposited on the ribbon; the results are in good agreement with the theoretical models. Improved quench behaviour for temperatures close to the critical temperature has been confirmed. Our results allow a better understanding of the limitation mechanisms of coated conductors. (author)

The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains is California's Central Valley. Along the foothills in the right hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour

In this work, the Fuzzy Signed Digraph (FSD) method which has been researched for the fault diagnosis of industrial process plant systems is improved and applied to the fault diagnosis of the Kori-2 nuclear power plant pressurizer. A method for spurious faults elimination is also suggested and applied to the fault diagnosis. By using these methods, we could diagnose the multi-faults of the pressurizer and could also eliminate the spurious faults of the pressurizer caused by other subsystems. Besides the multi-fault diagnosis and system-wide diagnosis capabilities, the proposed method has many merits such as real-time diagnosis capability, independency of fault pattern, direct use of sensor values, and transparency of the fault propagation to the operators. (Author)

Accretionary sandbox experiments provide a rich environment for investigating the processes of fault development. These experiments engage students because 1) they enable direct observation of fault growth, which is impossible in the crust (type 1 physical model), 2) they are not only representational but can also be manipulated (type 2 physical model), 3) they can be used to test hypotheses (type 3 physical model) and 4) they resemble experiments performed by structural geology researchers around the world. The structural geology courses at UMass Amherst utilize a series of accretionary sandboxes experiments where students first watch a video of an experiment and then perform a group experiment. The experiments motivate discussions of what conditions they would change and what outcomes they would expect from these changes; hypothesis development. These discussions inevitably lead to calculations of the scaling relationships between model and crustal fault growth and provide insight into the crustal processes represented within the dry sand. Sketching of the experiments has been shown to be a very effective assessment method as the students reveal which features they are analyzing. Another approach used at UMass is to set up a forensic experiment. The experiment is set up with spatially varying basal friction before the meeting and students must figure out what the basal conditions are through the experiment. This experiment leads to discussions of equilibrium and force balance within the accretionary wedge. Displacement fields can be captured throughout the experiment using inexpensive digital image correlation techniques to foster quantitative analysis of the experiments.

The purpose of this study is to develop a fault detection and diagnosis scheme that can monitor process faults and instrument faults of a steam generator. The suggested scheme consists of a Kalman filter and two bias estimators. The method for detecting process and instrument faults in a steam generator uses a mean test on the residual sequence of a Kalman filter, designed for the unfailed system, to make a fault decision. Once a fault is detected, the two bias estimators are driven to estimate the fault and to discriminate between process faults and instrument faults. For process faults, the fault diagnosis of outlet temperature, feed-water heater and main steam control valve is considered. For instrument faults, the fault diagnosis of the steam generator's three instruments is considered. Computer simulation tests show that prompt on-line fault detection and diagnosis can be performed very successfully. (Author)
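A hedged sketch of the residual mean test described above: a Kalman filter designed for the unfailed plant produces an innovation (residual) sequence, and a windowed mean test on that sequence flags a fault. The scalar plant model, noise levels, and threshold are assumptions for illustration, not the steam generator model of the study.

```python
# Sketch of fault detection by a mean test on Kalman filter residuals;
# the scalar model x[k+1] = a x[k] + w, y[k] = c x[k] + v is an assumption.
import numpy as np

def kalman_residuals(y, a=0.95, c=1.0, q=0.01, r=0.04):
    """Run a 1-D Kalman filter over measurements y and return normalised innovations."""
    x_hat, p = 0.0, 1.0
    innov = []
    for yk in y:
        x_hat, p = a * x_hat, a * p * a + q       # predict
        e = yk - c * x_hat                        # innovation (residual)
        s = c * p * c + r                         # innovation variance
        k = p * c / s                             # Kalman gain
        x_hat, p = x_hat + k * e, (1 - k * c) * p # update
        innov.append(e / np.sqrt(s))              # ~ N(0,1) when fault-free
    return np.array(innov)

def mean_test(innov, window=50, alpha=3.0):
    """Flag a fault when the windowed mean of normalised residuals drifts from zero."""
    means = np.convolve(innov, np.ones(window) / window, mode="valid")
    return np.abs(means) > alpha / np.sqrt(window)   # 3-sigma threshold on the window mean

# y would be the measured steam generator output; it is left as an assumed array here.
```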

Six hundred and eighty earthquakes causing significant damage have been recorded since the 7th century in Japan. It is important to recognize faults that will be, or are expected to be, active in the future in order to help reduce earthquake damage, estimate earthquake damage insurance and site nuclear facilities. Such faults are called 'active faults' in Japan, the definition of which is a fault that has moved intermittently for at least several hundred thousand years and is expected to continue to do so in the future. Scientific research on active faults has been ongoing since the 1930s. Many results indicate that major earthquakes and fault movements in shallow crustal regions in Japan have occurred repeatedly at existing active fault zones. After the 1995 Southern Hyogo Prefecture Earthquake, 98 active fault zones were selected for fundamental survey, with the purpose of efficiently conducting active fault surveys under the 'Plans for Fundamental Seismic Survey and Observation' by the Headquarters for Earthquake Research Promotion, which was attached to the Prime Minister's office of Japan. Forty-two administrative divisions for earthquake disaster prevention have investigated the distribution and history of fault activity of 80 active fault zones. Although earthquake prediction is difficult, the behaviour of major active faults in Japan is becoming recognised. The Japan Nuclear Cycle Development Institute (JNC) submitted a report titled 'H12: Project to Establish the Scientific and Technical Basis for HLW Disposal in Japan' to the Atomic Energy Commission (AEC) of Japan for official review. The Guidelines, which were defined by the AEC, require the H12 Project to confirm the basic technical feasibility of safe HLW disposal in Japan. In this report the important issues relating to fault activity are described, namely to understand the characteristics of current fault movements and the spatial extent and magnitude of the effects caused by these movements, and to

The energy crisis and environmental challenges have driven industry towards more energy-efficient solutions. With nearly 60% of electricity consumed by various electric machines in the industry sector, advancement in the efficiency of the electric drive system is of vital importance. The adjustable speed drive system (ASDS) provides excellent speed regulation and dynamic performance as well as dramatically improved system efficiency compared with conventional motors without electronic drives. Industry has witnessed tremendous growth in ASDS applications, not only as a driving force but also as an electric auxiliary system replacing bulky and low-efficiency auxiliary hydraulic and mechanical systems. With the vast penetration of ASDS, its fault tolerant operation capability is more widely recognized as an important feature of drive performance, especially for aerospace and automotive applications and other industrial drive applications demanding high reliability. The Switched Reluctance Machine (SRM), a low-cost, highly reliable electric machine with fault tolerant operation capability, has drawn substantial attention in the past three decades. Nevertheless, the SRM is not free of faults. Certain faults such as converter faults, sensor faults, winding shorts, eccentricity and position sensor faults are commonly shared among all ASDS. In this dissertation, a thorough understanding of various faults and their influence on the transient and steady state performance of the SRM is developed via simulation and experimental study, providing necessary knowledge for fault detection and post-fault management. Lumped parameter models are established for fast real-time simulation and drive control. Based on the behavior of the faults, a fault detection scheme is developed for the purpose of fast and reliable fault diagnosis. In order to improve the SRM power and torque capacity under faults, maximum torque per ampere excitation is conceptualized and validated through theoretical analysis and

Full Text Available In this paper, a fault-tolerant control (FTC) scheme is proposed for actuator faults, which is built upon tube-based model predictive control (MPC) as well as set-based fault detection and isolation (FDI). In the class of MPC techniques, tube-based MPC can effectively deal with system constraints and uncertainties with relatively low computational complexity compared with other robust MPC techniques such as min-max MPC. Set-based FDI, generally considering the worst case of uncertainties, can robustly detect and isolate actuator faults. In the proposed FTC scheme, fault detection (FD) is passive, using invariant sets, while fault isolation (FI) is active, by means of MPC and tubes. The active FI method proposed in this paper is implemented by making use of the constraint-handling ability of MPC to manipulate the bounds of inputs.

Full Text Available Public goods having different proprieties produce different problems of collective action. This hypothesis has been discussed from different analytical perspectives and we examine their arguments, aiming to deepen our theoretical understanding about the logic of collective action. In this paper we look for similarities and differences regarding their use of concepts and expected results. Simultaneously, we examine the link between different public goods and the interdependent behavior of individuals. To this purpose, we also revisit the connections between production functions and consumption types in the case of public goods, as a mean to explore the potential development of more elaborated models in the theory of collective action, and suggest that more empirical contrast of these models is needed. Finally, we argue the importance for theoretical discussions to include a discussion about the impact of perception in the collective action models, considering systematic deviations to rational choice, and conclude that no single theory may address all collective action problems, but systematic patterns of behavior may be identified and empirically confirmed.

A fault diagnosis system for a molten carbonate fuel cell (MCFC) stack is proposed in this paper. It is composed of a fuzzy neural network (FNN) and a fault diagnosis element. The FNN can deal with expert knowledge and experimental data efficiently, and it also has the ability to approximate any smooth system. The FNN is used to identify the fault diagnosis model of the MCFC stack. The fuzzy fault decision element diagnoses the state of the MCFC generating system, normal or faulty, and decides the type of the fault based on the outputs of the FNN model and the MCFC system. Some simulation experiment results are demonstrated in this paper.

Detection and isolation of parametric faults in closed-loop systems is considered in this paper. A major problem is that a feedback controller will in general reduce the effects of variations in the system, including parametric faults, on the controlled output. Parametric...... faults can be detected and isolated using active methods, where an auxiliary input is applied. Using active methods for the diagnosis of parametric faults in closed-loop systems, the amplitude of the applied auxiliary input needs to be increased to be able to detect and isolate the faults in a reasonable......-parameterization (after Youla, Jabr, Bongiorno and Kucera) for the controller, it is possible to modify the feedback controller with a minor effect on the closed-loop performance in the fault-free case and at the same time optimize the detection and isolation in a faulty case. Controller modification in connection...

A setup for active fault diagnosis (AFD) of parametric faults in dynamic systems is formulated in this paper. It is shown that it is possible to use the same setup for both open loop systems, closed loop systems based on a nominal feedback controller as well as for closed loop systems based...... on a reconfigured feedback controller. This will make the proposed AFD approach very useful in connection with fault tolerant control (FTC). The setup will make it possible to let the fault diagnosis part of the fault tolerant controller remain unchanged after a change in the feedback controller. The setup for AFD...... is based on the YJBK (after Youla, Jabr, Bongiorno and Kucera) parameterization of all stabilizing feedback controllers and the dual YJBK parameterization. It is shown that the AFD is based directly on the dual YJBK transfer function matrix. This matrix will be named the fault signature matrix when...

The Naruto-South fault is situated about 1000 m south of the Naruto fault, part of the Median Tectonic Line active fault system in the eastern part of Shikoku. We investigated the fault topography and subsurface geology of this fault by interpretation of large-scale aerial photographs, collection of borehole data and a Geo-Slicer survey. The results obtained are as follows: 1) The Naruto-South fault runs across the Yoshino River deltaic plain for at least 2.5 km with a fault scarplet. The Naruto-South fault is o...

Investigations are being carried out on the use of superconductors for fault current limiting applications. A number of computer programs are being developed to predict the behavior of different 'resistive' fault current limiter designs under a variety of fault conditions. The programs achieve solution by iterative methods based around real measured data rather than theoretical models, in order to achieve accuracy at high current densities. (orig.) 5 refs.

Full Text Available In this paper a fuzzy logic based fault diagnosis system for a deaerator in a power plant unit is presented. The system parameters are obtained using the linearised state-space deaerator model. The fuzzy inference system is created and a rule base is evaluated relating the parameters to the type and severity of the faults. These rules are fired for specific changes in system parameters and the faults are diagnosed.
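Purely as an illustrative sketch of the rule-firing idea described above (not the cited system's rule base): triangular memberships turn parameter deviations into degrees of membership, and min-combined rules assign a severity to each fault hypothesis. The breakpoints and rules are hypothetical.

```python
# Illustrative two-rule fuzzy inference step; membership breakpoints and rules are hypothetical.
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(level_dev, pressure_dev):
    # Rule 1: IF level deviation is large AND pressure deviation is large THEN valve fault.
    valve_fault = min(tri(level_dev, 0.2, 0.6, 1.0), tri(pressure_dev, 0.2, 0.6, 1.0))
    # Rule 2: IF level deviation is large AND pressure deviation is small THEN level-sensor fault.
    sensor_fault = min(tri(level_dev, 0.2, 0.6, 1.0), tri(pressure_dev, -0.2, 0.0, 0.3))
    return {"valve fault": valve_fault, "level-sensor fault": sensor_fault}

print(diagnose(level_dev=0.55, pressure_dev=0.05))
```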

Objective: Is the Qademah fault that was detected in 2010 the main fault? We collected a long 2D profile, 526 m, in which the fault detected in 2010 lies at around 300 m. Layout: We collected 264 CSGs, each with 264 receivers. The shot and receiver interval is 2 m. We also collected an extra 48 CSGs with offsets of 528 to 622 m and a shot interval of 2 m. The receivers are the same as in the main survey.

Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
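A hedged sketch of the envelope-spectrum comparison mentioned above, using the Hilbert transform to extract the vibration envelope before taking its spectrum; the sampling rate and signal source are assumptions.

```python
# Sketch of envelope spectrum computation for bearing vibration signals.
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(vibration, fs):
    """Return frequencies and amplitudes of the envelope spectrum of a vibration signal."""
    envelope = np.abs(hilbert(vibration))            # amplitude envelope
    envelope -= envelope.mean()                      # remove DC before the FFT
    spec = np.abs(np.fft.rfft(envelope)) / len(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spec

# A localized defect tends to show sharp peaks at the bearing defect frequency and its
# harmonics, whereas a distributed fault spreads energy over many envelope-spectrum lines,
# which is the distinction the diagnostic procedure above exploits.
```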

A class of fault detection and identification (FDI) methods for bias-type actuator and sensor faults is explored in detail from the point of view of fault identifiability. The methods use state augmentation along with banks of Kalman-Bucy filters for fault detection, fault pattern determination, and fault value estimation. A complete characterization of conditions for identifiability of bias-type actuator faults, sensor faults, and simultaneous actuator and sensor faults is presented. It is shown that FDI of simultaneous actuator and sensor faults is not possible using these methods when all sensors have unknown biases. The fault identifiability conditions are demonstrated via numerical examples. The analytical and numerical results indicate that caution must be exercised to ensure fault identifiability for different fault patterns when using such methods.

Full Text Available A design scheme that integrates fault reconfiguration and fault-tolerant position control is proposed for a nonlinear servo system with friction. Analysis of the non-linear friction torque and fault in the system is used to guide design of a sliding mode position controller. A sliding mode observer is designed to achieve fault reconfiguration based on the equivalence principle. Thus, active fault-tolerant position control of the system can be realized. A real-time simulation experiment is performed on a hardware-in-loop simulation platform. The results show that the system reconfigures well for both incipient and abrupt faults. Under the fault-tolerant control mechanism, the output signal for the system position can rapidly track given values without being influenced by faults.

Full Text Available This paper introduces a novel thruster fault diagnosis and accommodation system for open-frame underwater vehicles with abrupt faults. The proposed system consists of two subsystems: a fault diagnosis subsystem and a fault accommodation subsystem. In the fault diagnosis subsystem an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network is used to realize on-line fault identification and computation of the weighting matrix. The fault accommodation subsystem uses a control algorithm based on the weighted pseudo-inverse to solve the control allocation problem. To illustrate the effectiveness of the proposed method, a simulation example under multiple uncertain abrupt faults is given in the paper.
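A minimal sketch of control allocation by weighted pseudo-inverse, the mechanism named above: the weighting matrix penalizes faulty thrusters so the demanded generalized force is redistributed to healthy ones. The thruster configuration matrix B, the weights W, and the demanded force tau are illustrative assumptions, not the paper's vehicle model.

```python
# Weighted pseudo-inverse control allocation: solve min u^T W u  s.t.  B u = tau.
import numpy as np

def weighted_pseudo_inverse_allocation(B, W, tau):
    """u = W^-1 B^T (B W^-1 B^T)^-1 tau, the minimum-weighted-effort allocation."""
    W_inv = np.linalg.inv(W)
    return W_inv @ B.T @ np.linalg.solve(B @ W_inv @ B.T, tau)

# Four thrusters producing surge force and yaw moment; thruster 3 is degraded,
# so its weight is increased to discourage its use.
B = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.5, -0.5, 0.5, -0.5]])
W = np.diag([1.0, 1.0, 5.0, 1.0])
tau = np.array([10.0, 1.0])
u = weighted_pseudo_inverse_allocation(B, W, tau)
print(u, B @ u)     # B @ u reproduces the demanded tau
```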

In this paper, a new method for quantitative security risk assessment of complex systems is presented, combining fault-tree analysis, traditionally used in reliability analysis, with the recently introduced Attack-tree analysis, proposed for the study of malicious attack patterns. The combined use of fault trees and attack trees helps the analyst to effectively face the security challenges posed by the introduction of modern ICT technologies in the control systems of critical infrastructures. The proposed approach allows considering the interaction of malicious deliberate acts with random failures. Formal definitions of fault tree and attack tree are provided and a mathematical model for the calculation of system fault probabilities is presented.
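As a minimal sketch (not the paper's combined fault/attack-tree model), the top-event probability of a fault tree with independent basic events is evaluated by composing AND and OR gates; the event probabilities and tree structure below are hypothetical.

```python
# Toy fault tree evaluation with independent basic events; numbers are hypothetical.
def or_gate(*p):   # at least one input event occurs
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):  # all input events occur
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Top = (pump fails OR valve stuck) AND (operator error OR alarm failure)
p_top = and_gate(or_gate(1e-3, 5e-4), or_gate(1e-2, 2e-3))
print(f"P(top event) = {p_top:.2e}")
```

An attack tree can be composed with the same gate logic, with basic-event probabilities replaced by estimated likelihoods of attacker actions, which is the combination the method above formalizes.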

This book intends to provide the readers a good understanding on how to achieve Fault Tolerant Control goal of Hybrid Systems. The book can be used as a reference for the academic research on Fault Tolerant Control and Hybrid Systems or used in Ph.D. study of control theory and engineering. The knowledge background for this monograph would be some undergraduate and graduate courses on Fault Diagnosis and Fault Tolerant Control theory, linear system theory, nonlinear system theory, Hybrid Systems theory and Discrete Event System theory. (orig.)

Soil radon levels have been measured across the Amer fault, which is located near the volcanic region of La Garrotxa, Spain. Both passive (LR-115, time-integrating) and active (Clipperton II, time-resolved) detectors have been used in a survey in which 27 measurement points were selected in five lines perpendicular to the Amer fault in the village area of Amer. The averaged results show an influence of the distance to the fault on the mean soil radon values. The dynamic results show a very clear seasonal effect on the soil radon levels. The results obtained support the hypothesis that the fault is still active

Linlin Li addresses the analysis and design issues of observer-based FD and FTC for nonlinear systems. The author analyses the existence conditions for the nonlinear observer-based FD systems to gain a deeper insight into the construction of FD systems. Aided by the T-S fuzzy technique, she recommends different design schemes, among them the L_inf/L_2 type of FD systems. The derived FD and FTC approaches are verified by two benchmark processes. Contents: Overview of FD and FTC Technology; Configuration of Nonlinear Observer-Based FD Systems; Design of L2 Nonlinear Observer-Based FD Systems; Design of Weighted Fuzzy Observer-Based FD Systems; FTC Configurations for Nonlinear Systems; Application to Benchmark Processes. Target Groups: Researchers and students in the field of engineering with a focus on fault diagnosis and fault-tolerant control. The Author: Dr. Linlin Li completed her dissertation under the supervision of Prof. Steven X. Ding at the Faculty of Engineering, University of Duisburg-Essen, Germany.

Complex systems always have high-level reliability and safety requirements, and so does their diagnosis. As a great number of fault tree models are acquired during the design and operation phases, a fault diagnosis method which combines fault tree analysis with knowledge-based technology has been proposed. A prototype of the fault diagnosis software has been realized and applied to a mobile LIDAR system. (authors)

The dilatational strains associated with vertical faults embedded in a horizontal plate are examined in the framework of fault kinematics and simple displacement boundary conditions. Using boundary element methods, a sequence of examples of dilatational strain fields associated with commonly occurring strike-slip fault zone features (bends, offsets, finite rupture lengths, and nonuniform slip distributions) is derived. The combinations of these strain fields are then used to examine the Parkfield region of the San Andreas fault system in central California.

We investigate the problem of quantum searching on a noisy quantum computer. Taking a fault-ignorant approach, we analyze quantum algorithms that solve the task for various different noise strengths, which are possibly unknown beforehand. We prove lower bounds on the runtime of such algorithms and thereby find that the quadratic speedup is necessarily lost (in our noise models). However, for low but constant noise levels the algorithms we provide (based on Grover's algorithm) still outperform the best noiseless classical search algorithm. (paper)

In this paper, we propose a new method for passive fault-tolerant control of discrete-time piecewise affine systems. Actuator faults are considered. A reliable piecewise linear quadratic regulator (LQR) state feedback is designed such that it can tolerate actuator faults. A sufficient condition f...... is illustrated on a numerical example and a two-degree-of-freedom helicopter....
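For context, a standard discrete-time LQR state feedback (for a single affine mode) can be computed from the discrete algebraic Riccati equation as sketched below; the system matrices and weights are assumptions, not the paper's helicopter model, and the reliability conditions of the paper are not reproduced here.

```python
# Sketch of a discrete-time LQR state feedback; model and weights are illustrative.
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Return the gain K such that u = -K x minimizes the discrete-time quadratic cost."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, 0.1 s sample time
B = np.array([[0.005], [0.1]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```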

Full Text Available This paper presents a rectifier fault diagnosis method based on wavelet packet analysis to improve the reliability of a fault tolerant four-phase doubly fed brushless starter generator (DFBLSG) system. The system components and the fault tolerant principle of the highly reliable DFBLSG are given, and the common faults of the rectifier are analyzed. The wavelet packet transform fault detection/identification algorithm is introduced in detail. Fault tolerant performance and output voltage experiments were done to gather the energy characteristics with a voltage sensor. The signal is analyzed with 5-layer wavelet packets, and the energy eigenvalue of each frequency band is obtained. Meanwhile, an energy-eigenvalue tolerance was introduced to improve the diagnostic accuracy. With the wavelet packet fault diagnosis, the fault tolerant four-phase DFBLSG can detect the usual open-circuit fault and operate in fault tolerant mode if there is a fault. The results indicate that the fault analysis techniques in this paper are accurate and effective.
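A hedged sketch of the 5-layer wavelet packet energy feature extraction described above, using PyWavelets; the wavelet choice and normalisation are assumptions, not necessarily those of the paper.

```python
# Wavelet packet band-energy features for a sampled voltage signal.
import numpy as np
import pywt

def wavelet_packet_energy(signal, wavelet="db4", level=5):
    """Return the normalised energy in each frequency band at the given decomposition level."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    energies = np.array([np.sum(np.square(node.data))
                         for node in wp.get_level(level, order="freq")])
    return energies / energies.sum()      # energy eigenvalues of the 2**level bands

# A healthy rectifier and an open-circuit fault redistribute this energy vector in
# characteristic ways, which is what the diagnosis above thresholds against.
```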

This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed through three steps. A cooperative output regulation (COR) controller is designed based on the internal mode principle when there are no sensor faults. A sufficient condition on the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators are proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis for multi-vehicle systems, the presented decentralised fault diagnosis scheme reduces the communication and computation load by only using the information of the vehicle itself. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, the simulation results demonstrate the effectiveness of the FTCOR controller.

This paper focuses on the longitudinal control of an Airbus passenger aircraft in the presence of elevator jamming faults. In particular, in this paper, we address permanent and temporary actuator jamming faults using a novel reconfigurable fault-tolerant predictive control design. Due to their

An accurate fault modeling and troubleshooting methodology is required to aid in making risk-informed decisions related to design and operational activities of current and future generation of CANDU designs. This paper presents fault modeling approach using Fault Semantic Network (FSN) methodology with risk estimation. Its application is demonstrated using a case study of Bruce B zone-control level oscillations. (author)

tolerant control of wind turbines using a benchmark model. In this paper, the fault diagnosis scheme is improved and integrated with a fault accommodation scheme which enables and disables the individual pitch algorithm based on the fault detection. In this way, the blade and tower loads are not increased...

This paper discusses human spaceflight fault management operations. Fault detection and response capabilities available in current US human spaceflight programs Space Shuttle and International Space Station are described while emphasizing system design impacts on operational techniques and constraints. Preflight and inflight processes along with products used to anticipate, mitigate and respond to failures are introduced. Examples of operational products used to support failure responses are presented. Possible improvements in the state of the art, as well as prioritization and success criteria for their implementation are proposed. This paper describes how the architecture of a command and control system impacts operations in areas such as the required fault response times, automated vs. manual fault responses, use of workarounds, etc. The architecture includes the use of redundancy at the system and software function level, software capabilities, use of intelligent or autonomous systems, number and severity of software defects, etc. This in turn drives which Caution and Warning (C&W) events should be annunciated, C&W event classification, operator display designs, crew training, flight control team training, and procedure development. Other factors impacting operations are the complexity of a system, skills needed to understand and operate a system, and the use of commonality vs. optimized solutions for software and responses. Fault detection, annunciation, safing responses, and recovery capabilities are explored using real examples to uncover underlying philosophies and constraints. These factors directly impact operations in that the crew and flight control team need to understand what happened, why it happened, what the system is doing, and what, if any, corrective actions they need to perform. If a fault results in multiple C&W events, or if several faults occur simultaneously, the root cause(s) of the fault(s), as well as their vehicle-wide impacts, must be

The purpose of this analysis is to evaluate potential effects of fault displacement on emplacement drifts, including drip shields and waste packages emplaced in emplacement drifts. The output from this analysis not only provides data for the evaluation of long-term drift stability but also supports the Engineered Barrier System (EBS) process model report (PMR) and the Disruptive Events Report currently under development. The primary scope of this analysis includes (1) examining fault displacement effects in terms of induced stresses and displacements in the rock mass surrounding an emplacement drift and (2) predicting fault displacement effects on the drip shield and waste package. The magnitude of the fault displacement analyzed in this analysis bounds the mean fault displacement corresponding to an annual frequency of exceedance of 10^-5 adopted for the preclosure period of the repository, and also supports the postclosure performance assessment. This analysis is performed following the development plan prepared for analyzing effects of fault displacement on emplacement drifts (CRWMS M and O 2000). The analysis will begin with the identification and preparation of requirements, criteria, and inputs. A literature survey on accommodating fault displacements encountered in underground structures such as buried oil and gas pipelines will be conducted. For a given fault displacement, the least favorable scenario in terms of the spatial relation of a fault to an emplacement drift is chosen, and the analysis is then performed analytically. Based on the analysis results, conclusions are made regarding the effects and consequences of fault displacement on emplacement drifts. Specifically, the analysis will discuss loads which can be induced by fault displacement on emplacement drifts, drip shields and/or waste packages during the postclosure period

Nuclear as well as non-nuclear organisations have shown a growing interest in the field of reliability analysis during the past few years. This calls for the development of powerful, state-of-the-art methods and computer codes for performing such analysis on complex systems. In this report an interactive, computer-aided approach is discussed, based on the well-known fault tree technique. The time-consuming and difficult task of manually constructing a system model (one or more fault trees) is replaced by an efficient interactive procedure in which the flexibility and the learning process inherent to the manual approach are combined with the accuracy in modelling and the speed of the fully automatic approach. The method presented is based upon the use of a library containing component models. The possibility of setting up a standard library of models of general use and the link with a data collection system are discussed. The method has been implemented in the CAFTS-SALP software package, which is described briefly in the report

Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

In order to identify and segment active faults, the literature of structural geology, paleoseismology, and geophysical exploration was investigated. The existing structural geological criteria for segmenting active faults were examined. These are mostly based on normal fault systems; thus, additional criteria are needed for application to different types of fault systems. The definition of a seismogenic fault, the characteristics of fault activity, the criteria and study results of fault segmentation, the relationship between segmented fault length and maximum displacement, and the estimation of the seismic risk of segmented faults were examined in the paleoseismic studies. The earthquake history, such as the dynamic pattern of faults, the return period, and the magnitude of the maximum earthquake originated by fault activity, can be revealed by such studies. It is confirmed through various case studies that numerous geophysical exploration methods, including electrical resistivity, land seismic, marine seismic, ground-penetrating radar, magnetic, and gravity surveys, have been efficiently applied to the recognition and segmentation of active faults.

A model for the relation between density and length of oxidation-induced stacking faults on damaged silicon surfaces is proposed, based on interactions of stacking faults with dislocations and neighboring stacking faults. The model agrees with experiments.

The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may not be completely appropriate to explain shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, recalling the range of the most frequent dip angles for active reverse faults that occur in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three different configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly-spaced precuts. To test the repeatability of the processes and to have a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation (D.I.C.) method, in order to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly-formed faults and whether, and at what stage, the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability we marked the position of the fault tips and the fold profiles for every successive step of deformation. Then we compared

The modular multilevel converter (MMC) is attractive for medium- or high-power applications because of the advantages of its high modularity, availability, and high power quality. Fault-tolerant operation is one of the important issues for the MMC. This paper proposes a fault-tolerant approach for the MMC under submodule (SM) faults. The characteristic of the MMC with arms containing different numbers of healthy SMs under faults is analyzed. Based on this characteristic, the proposed approach can effectively keep the MMC operating as normal under SM faults. It can effectively improve the MMC...

The Curiosity Rover, currently operating on Mars, contains flight software onboard to autonomously handle aspects of system fault protection. Over 1000 monitors and 39 responses are present in the flight software. Orchestrating these behaviors is the flight software's fault protection engine. In this paper, we discuss the engine's design, responsibilities, and present some lessons learned for future missions.
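The minimal Python sketch below illustrates the monitor-and-response pattern that such an engine orchestrates. It is an illustrative toy, not the Curiosity flight software; the monitor name, response name, telemetry field, and threshold are all hypothetical.

```python
# Toy monitor/response fault-protection engine (illustrative only).
from typing import Callable, Dict, List, Tuple

Telemetry = Dict[str, float]

class FaultProtectionEngine:
    def __init__(self) -> None:
        self._monitors: List[Tuple[str, Callable[[Telemetry], bool], str]] = []
        self._responses: Dict[str, Callable[[], None]] = {}

    def add_monitor(self, name: str, check: Callable[[Telemetry], bool], response: str) -> None:
        self._monitors.append((name, check, response))

    def add_response(self, name: str, action: Callable[[], None]) -> None:
        self._responses[name] = action

    def step(self, telemetry: Telemetry) -> None:
        """Run every monitor once; trigger the mapped response for any monitor that trips."""
        for name, check, response in self._monitors:
            if check(telemetry):
                print(f"monitor tripped: {name} -> response {response}")
                self._responses[response]()

# Example: one thermal monitor mapped onto a hypothetical safing response.
engine = FaultProtectionEngine()
engine.add_response("ENTER_SAFE_MODE", lambda: print("shedding loads, pointing to safe attitude"))
engine.add_monitor("battery_overtemp", lambda t: t["battery_temp_C"] > 45.0, "ENTER_SAFE_MODE")
engine.step({"battery_temp_C": 47.2})
```

A flight implementation must additionally arbitrate among many concurrently tripped monitors, latch and prioritize responses, and sequence recovery actions, which is the role of the engine discussed in the paper.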

The fault detection process is modelled as a disturbance attenuation problem. The solution to this problem is found via differential game theory, leading to an H∞ filter which bounds the transmission of all exogenous signals save the fault to be detected. For a general class of linear systems which includes some time-varying systems, it is shown that this transmission bound can be taken to zero by simultaneously bringing the sensor noise weighting to zero. Thus, in the limit, a complete transmission block can be achieved, making the game filter into a fault detection filter. When we specialize this result to time-invariant systems, it is found that the detection filter attained in the limit is identical to the well known Beard-Jones Fault Detection Filter. That is, all fault inputs other than the one to be detected (the "nuisance faults") are restricted to an invariant subspace which is unobservable to a projection on the output. For time-invariant systems, it is also shown that in the limit, the order of the state-space and the game filter can be reduced by factoring out the invariant subspace. The result is a lower dimensional filter which can observe only the fault to be detected. A reduced-order filter can also be generated for time-varying systems, though the computational overhead may be intensive. An example given at the end of the paper demonstrates the effectiveness of the filter as a tool for fault detection and identification.
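For orientation, one common way to write the attenuation requirement on the residual r is the generic H∞/H− pair of bounds below, where d collects the nuisance inputs (disturbances, sensor noise, and the non-targeted faults) and f is the fault to be detected. This is a standard illustrative formulation, not necessarily the paper's exact differential-game criterion.

```latex
% Generic H-infinity / H-minus bounds on the residual generator (illustrative only)
\[
\|G_{rd}\|_{\infty} \;=\; \sup_{\|d\|_2 \neq 0} \frac{\|r\|_2}{\|d\|_2} \;\le\; \gamma,
\qquad
\|G_{rf}\|_{-} \;=\; \inf_{\|f\|_2 \neq 0} \frac{\|r\|_2}{\|f\|_2} \;\ge\; \beta .
\]
```

Letting γ → 0 while keeping β > 0 corresponds to the limiting complete transmission block described above, in which only the targeted fault reaches the residual.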

An analytical method to describe fault tree diagrams in terms of their modular compositions is developed. Fault tree structures are characterized by recursively relating the top tree event to all its basic component inputs through a set of equations defining each of the modules of the fault tree. It is shown that such a modular description is an extremely valuable tool for making a quantitative analysis of fault trees. The modularization methodology has been implemented in the PL-MOD computer code, written in the PL/1 language, which is capable of modularizing fault trees containing replicated components and replicated modular gates. PL-MOD can in addition handle mutually exclusive inputs and explicit higher-order symmetric (k-out-of-n) gates. The step-by-step modularization of fault trees performed by PL-MOD is demonstrated, and it is shown how this procedure is only made possible through an extensive use of the list processing tools available in PL/1. A number of nuclear reactor safety system fault trees were analyzed. PL-MOD performed the modularization and evaluation of the modular occurrence probabilities and Vesely-Fussell importance measures for these systems very efficiently. In particular, its execution time for the modularization of a PWR High Pressure Injection System reduced fault tree was 25 times faster than that necessary to generate its equivalent minimal cut-set description using MOCUS, a code considered to be fast by present standards.
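A minimal Python sketch of quantification in this modular spirit, not PL-MOD itself: each module is evaluated independently, its occurrence probability is passed to its parent, and a k-out-of-n gate is handled explicitly. All gate structures and probabilities are hypothetical.

```python
# Modular fault-tree quantification with independent events (illustrative sketch).
from itertools import combinations
from math import prod

def p_and(ps):             # all inputs must occur
    return prod(ps)

def p_or(ps):              # at least one input occurs
    return 1.0 - prod(1.0 - p for p in ps)

def p_k_of_n(k, ps):       # at least k of the n independent inputs occur
    n = len(ps)
    total = 0.0
    for m in range(k, n + 1):
        for idx in combinations(range(n), m):
            total += prod(ps[i] if i in idx else 1.0 - ps[i] for i in range(n))
    return total

# Module A: redundant sensors, 2-out-of-3 must fail.
module_a = p_k_of_n(2, [1e-2, 1e-2, 1e-2])
# Module B: pump OR its power supply fails.
module_b = p_or([2e-3, 1e-3])
# Top event: both modules fail.
top = p_and([module_a, module_b])
print(f"module A = {module_a:.3e}, module B = {module_b:.3e}, top = {top:.3e}")
```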

Future mobility requires sound solutions in the field of fault tolerance in real-time applications, among which is Cooperative Adaptive Cruise Control (CACC). This control system cannot rely on the driver as a backup and is constantly active, and it is therefore more susceptible to the occurrence of faults.

The Knitting, Lace and Net Industry Training Board has developed a training innovation called fault diagnosis training. The entire training process concentrates on teaching based on the experiences of troubleshooters or any other employees whose main tasks involve fault diagnosis and rectification.

Gearboxes are widely used in industrial and military applications. Due to high service loads, harsh operating conditions or inevitable fatigue, faults may develop in gears. If gear faults cannot be detected early, the health of the gearbox will continue to degrade, perhaps causing heavy economic loss or even catastrophe. Early fault detection and diagnosis allows properly scheduled shutdowns to prevent catastrophic failure and consequently results in safer operation and greater cost reduction. Recently, many studies have been done to develop gearbox dynamic models with faults, aiming to understand the gear fault generation mechanism and then develop effective fault detection and diagnosis methods. This paper focuses on dynamics-based gearbox fault modeling, detection and diagnosis. The state of the art and the remaining challenges are reviewed and discussed. This detailed literature review limits research results to the following fundamental yet key aspects: gear mesh stiffness evaluation, gearbox damage modeling and fault diagnosis techniques, gearbox transmission path modeling and method validation. In the end, a summary and some research prospects are presented.

Fault detection in supermarket refrigeration systems is an important topic due to both economic and food safety reasons. If faults can be detected and diagnosed before the system drifts outside the specified operational envelope, service costs can be reduced and in extreme cases the costly discar...

An electro-mechanical position servo is introduced as a benchmark for model-based Fault Detection and Identification (FDI).

The design of fault detectors for fault detection and isolation (FDI) in dynamic systems is considered in this paper from a norm based point of view. An analysis of norm based threshold selection is given based on different formulations of FDI problems. Both the nominal FDI problem as well...

A Naive Fault Tree (NFT) accepts a single value or a range of values for each basic event and returns values for the top event. This addresses the dependence of commonly used Fault Trees (FT) on precise data, which makes them prone to data-quality concerns and limits their area of application. This paper extends

A major threat in highly dependable integrated systems built in high-end process nodes, e.g. in avionics, is the no-failures-found (NFF) problem. One category of NFFs is the intermittent resistive fault, often originating from bad (e.g. via- or TSV-based) interconnections. This paper will show the impact of these faults

Here, an innovative model-based fault-detection approach for early detection of shading of PV modules and of faults on the direct current (DC) side of PV systems is proposed. This approach combines the flexibility and simplicity of a one-diode model
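A minimal Python sketch of the single-diode PV model that such approaches build on, together with a crude residual check against a measured operating point. The module parameters, the operating point, and the residual threshold are hypothetical and are not taken from the paper.

```python
# One-diode PV model and a naive residual-based check (illustrative sketch).
import numpy as np
from scipy.optimize import brentq

# Hypothetical module parameters (roughly STC-like values)
I_ph = 8.0       # photo-generated current [A]
I_0  = 1e-9      # diode saturation current [A]
n_f  = 1.3       # diode ideality factor
N_s  = 60        # cells in series
R_s  = 0.2       # series resistance [ohm]
R_sh = 300.0     # shunt resistance [ohm]
V_t  = 0.02585   # thermal voltage kT/q at ~25 C [V]

def current_at(voltage: float) -> float:
    """Solve the implicit equation I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh."""
    f = lambda i: (I_ph
                   - I_0 * (np.exp((voltage + i * R_s) / (n_f * N_s * V_t)) - 1.0)
                   - (voltage + i * R_s) / R_sh
                   - i)
    return brentq(f, -2.0, I_ph + 2.0)

# Compare the measured DC current against the model prediction at the same voltage.
v_meas, i_meas = 30.0, 5.1            # hypothetical operating point
i_model = current_at(v_meas)
residual = i_model - i_meas
print(f"model current {i_model:.2f} A, residual {residual:.2f} A")
if residual > 1.0:                    # hypothetical threshold
    print("possible shading or DC-side fault")
```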

Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.

Fault diagnosis of parametric faults in closed-loop uncertain systems by using an auxiliary input vector is considered in this paper, i.e. active fault diagnosis (AFD). The active fault diagnosis is based directly on the so-called fault signature matrix, related to the YJBK (Youla, Jabr, Bongiorno, and Kucera) parameterization. Conditions are given for exact detection and isolation of parametric faults in closed-loop uncertain systems.

Elevated pore pressure can lead to slip reactivation on pre-existing fractures and faults when the Coulomb failure point is reached. From a static point of view, the reactivation of a fault subjected to a background stress (τ0) is a function of the peak strength of the fault, i.e. the quasi-static effective friction coefficient (µeff). However, this theory is valid only when the entire fault is affected by fluid pressure, which is not the case in nature or during human-induced seismicity. In this study, we present new results on the influence of the injection rate on the stability of faults. Experiments were conducted on a saw-cut sample of Westerly granite. The experimental fault was 8 cm in length. Injections were conducted through a 2 mm diameter hole reaching the fault surface. Experiments were conducted at fluid pressure injection rates spanning four orders of magnitude (from 1 MPa/minute to 1 GPa/minute), in a fault system subjected to 50 and 100 MPa confining pressure. Our results show that the peak fluid pressure leading to slip depends on the injection rate. The faster the injection rate, the larger the peak fluid pressure leading to instability. Wave velocity surveys across the fault highlighted that decreasing the injection rate leads to an increase in the size of the fluid pressure perturbation. Our results demonstrate that the stability of the fault is not only a function of the fluid pressure required to reach the failure criterion, but is mainly a function of the ratio between the length of the fault affected by fluid pressure and the total fault length. In addition, we show that the slip rate increases with the background effective stress and with the intensity of the fluid pressure perturbation, i.e. with the excess shear stress acting on the part of the fault perturbed by the fluid injection. Our results suggest that crustal faults can be reactivated by local high fluid overpressures. These results could explain the "large" magnitude human-induced earthquakes
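For reference, the quasi-static Coulomb criterion invoked above can be sketched in a few lines of Python; the stress values, friction coefficient, and cohesion below are hypothetical. The paper's point is precisely that this static view becomes insufficient when only part of the fault feels the pressure perturbation.

```python
# Quasi-static Coulomb reactivation check (illustrative sketch, hypothetical numbers).
def coulomb_ratio(shear_stress, normal_stress, pore_pressure, mu_eff=0.6, cohesion=0.0):
    """Return tau / (C + mu_eff * (sigma_n - p)); a ratio >= 1 predicts slip."""
    strength = cohesion + mu_eff * (normal_stress - pore_pressure)
    return shear_stress / strength

tau, sigma_n = 30.0, 80.0          # MPa, background stress state resolved on the fault
for p in (0.0, 20.0, 35.0):        # increasing injected fluid pressure [MPa]
    r = coulomb_ratio(tau, sigma_n, p)
    print(f"p = {p:5.1f} MPa -> tau/strength = {r:.2f} {'(slip)' if r >= 1 else ''}")
```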

Local Earthquake Tomography (LET) is a useful tool for imaging lateral heterogeneities in the upper crust. The pattern of P- and S-wave velocity anomalies, in relation to the seismicity distribution along active fault zones, can shed light on the existence of discrete seismogenic patches. Recent tomographic studies in well monitored seismic areas have shown that the regions with large seismic moment release generally correspond to high velocity zones (HVZs). In this paper, we discuss the relationship between the seismogenic behavior of faults and the velocity structure of fault zones as inferred from seismic tomography. First, we review some recent tomographic studies of active strike-slip faults. We show examples from different segments of the San Andreas fault system (Parkfield, Loma Prieta), where detailed studies have been carried out in recent years. We also show two applications of LET to thrust faults (Coalinga, Friuli). Then, we focus on the Irpinia normal fault zone (South-Central Italy), where a Ms = 6.9 earthquake occurred in 1980 and many thousands of aftershock travel time data are available. We find that earthquake hypocenters concentrate in HVZs, whereas low velocity zones (LVZs) appear to be relatively aseismic. The main HVZs along which the mainshock rupture has propagated may correspond to velocity-weakening fault regions, whereas the LVZs are probably related to weak materials undergoing stable slip (velocity strengthening). A correlation exists between this HVZ and the area with larger coseismic slip along the fault, according to both surface evidence (a fault scarp as high as 1 m) and strong ground motion waveform modeling. Smaller-wavelength, low-velocity anomalies detected along the fault may be the expression of velocity-strengthening sections, where aseismic slip occurs. According to our results, the rupture at the nucleation depth (~10-12 km) is continuous for the whole fault length (~30 km), whereas at shallow depth

The fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. In this paper, we first assume there is a time lag between fault detection and fault correction. Thus, removal of a fault is performed after the fault is detected. In addition, the detection process and correction process are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed for software managers to optimize the allocation of the limited resources with respect to the reliability criteria. Furthermore, the release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.

A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to realize the fault identification of the thruster. The fault accommodation unit is based on direct calculations of moment, and the result of fault identification is used to find the solution of the control allocation problem. The approach addresses continuous fault identification for the underwater vehicle (UV). Results from the experiment are provided to illustrate the performance of the proposed method in an uncertain, continuous faulty situation.

Fault detection for automotive semi-active shock absorbers is a challenge due to the non-linear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is the modeling of the fault, which has been shown to be of multiplicative nature. Many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using a commercial vehicle model that was experimentally validated. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
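A minimal sketch of the parameter-identification route in Python: scalar recursive least squares (RLS) tracks an assumed linear damping coefficient, and a multiplicative fault shows up as a drift of the estimate. The damper model, noise levels, and fault size are hypothetical and far simpler than the experimentally validated vehicle model used in the paper.

```python
# RLS tracking of a damping coefficient; a multiplicative fault appears as a drift (sketch).
import numpy as np

rng = np.random.default_rng(0)
c_true = 1200.0                      # healthy damping coefficient [N s/m]
n = 400
velocity = rng.normal(0.0, 0.3, n)   # relative damper velocity [m/s]
force = c_true * velocity + rng.normal(0.0, 5.0, n)
force[200:] *= 0.6                   # multiplicative fault: 40 % loss of damping force

theta, P, lam = 0.0, 1e6, 0.98       # scalar RLS with forgetting factor lam
estimates = []
for k in range(n):
    phi = velocity[k]
    K = P * phi / (lam + phi * P * phi)
    theta = theta + K * (force[k] - phi * theta)
    P = (P - K * phi * P) / lam
    estimates.append(theta)

print(f"estimate before fault ~ {np.mean(estimates[150:200]):.0f} N s/m")
print(f"estimate after  fault ~ {np.mean(estimates[350:]):.0f} N s/m")
# A simple detector would flag a fault when the estimate leaves a band around c_true.
```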

This book addresses fault detection and isolation topics from a computational perspective. Unlike most existing literature, it bridges the gap between the existing well-developed theoretical results and the realm of reliable computational synthesis procedures. The model-based approach to fault detection and diagnosis has been the subject of ongoing research for the past few decades. While the theoretical aspects of fault diagnosis on the basis of linear models are well understood, most of the computational methods proposed for the synthesis of fault detection and isolation filters are not satisfactory from a numerical standpoint. Several features make this book unique in the fault detection literature: Solution of standard synthesis problems in the most general setting, for both continuous- and discrete-time systems, regardless of whether they are proper or not; consequently, the proposed synthesis procedures can solve a specific problem whenever a solution exists Emphasis on the best numerical algorithms to ...

One of the key prerequisites for a scalable, effective and efficient sensor network is the utilization of low-cost, low-overhead and highly resilient fault-inference techniques. To this end, we propose an intelligent agent system with a problem solving capability to address the issue of fault inference in sensor network environments. The intelligent agent system is designed and implemented at the base-station side. The core of the agent system – the problem solver – implements a fault-detection inference engine which harnesses the Expectation Maximization (EM) algorithm to estimate the fault probabilities of sensor nodes. To validate the correctness and effectiveness of the intelligent agent system, a set of experiments in a wireless sensor testbed was conducted. The experimental results show that our intelligent agent system is able to precisely estimate the fault probability of sensor nodes.
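As a hedged illustration of how EM can yield a fault probability from sensor data, the sketch below fits a two-component Gaussian mixture to synthetic residuals (readings minus a reference) and reads the learned weight of the broad component as the estimated fault probability. This is a generic illustration, not the paper's agent system or its exact probabilistic model.

```python
# Two-component EM on synthetic residuals; the mixture weight estimates the fault fraction.
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 0.5, 900)          # residuals of correct readings
faulty = rng.normal(0.0, 8.0, 100)           # residuals of faulty readings
residuals = np.concatenate([healthy, faulty])

w, var_h, var_f = 0.5, 1.0, 25.0             # initial guesses: weight and variances
for _ in range(100):
    # E-step: responsibility of the "faulty" component for each residual
    pf = w * np.exp(-residuals**2 / (2 * var_f)) / np.sqrt(2 * np.pi * var_f)
    ph = (1 - w) * np.exp(-residuals**2 / (2 * var_h)) / np.sqrt(2 * np.pi * var_h)
    r = pf / (pf + ph)
    # M-step: update the weight and the two variances
    w = r.mean()
    var_f = np.sum(r * residuals**2) / np.sum(r)
    var_h = np.sum((1 - r) * residuals**2) / np.sum(1 - r)

print(f"estimated fault probability ~ {w:.2f} (true fraction 0.10)")
```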

A modular and scalable Matrix Fault Current Limiter (MFCL) that functions as a "variable impedance" device in an electric power network, using components made of superconducting and non-superconducting electrically conductive materials. The matrix fault current limiter comprises a fault current limiter module that includes a superconductor which is electrically coupled in parallel with a trigger coil, wherein the trigger coil is magnetically coupled to the superconductor. The current surge during a fault within the electrical power network will cause the superconductor to transition to its resistive state and also generate a uniform magnetic field in the trigger coil, simultaneously limiting the voltage developed across the superconductor. This results in fast and uniform quenching of the superconductors and significantly reduces the burnout risk associated with the non-uniformity often existing within the volume of superconductor materials. The fault current limiter modules may be electrically coupled together to form various "n" (rows) × "m" (columns) matrix configurations.

A document describes a methodology for designing fault-protection (FP) software for autonomous spacecraft. The methodology embodies and extends established engineering practices in the technical discipline of Fault Detection, Diagnosis, Mitigation, and Recovery; and has been successfully implemented in the Deep Impact Spacecraft, a NASA Discovery mission. Based on established concepts of Fault Monitors and Responses, this FP methodology extends the notion of Opinion, Symptom, Alarm (aka Fault), and Response with numerous new notions, sub-notions, software constructs, and logic and timing gates. For example, Monitor generates a RawOpinion, which graduates into Opinion, categorized into no-opinion, acceptable, or unacceptable opinion. RaiseSymptom, ForceSymptom, and ClearSymptom govern the establishment and then mapping to an Alarm (aka Fault). Local Response is distinguished from FP System Response. A 1-to-n and n-to-1 mapping is established among Monitors, Symptoms, and Responses. Responses are categorized by device versus by function. Responses operate in tiers, where the early tiers attempt to resolve the Fault in a localized step-by-step fashion, relegating more system-level response to later tier(s). Recovery actions are gated by epoch recovery timing, enabling strategy, urgency, MaxRetry gate, hardware availability, hazardous versus ordinary fault, and many other priority gates. This methodology is systematic, logical, and uses multiple linked tables, parameter files, and recovery command sequences. The credibility of the FP design is proven via a "top-down" fault-tree analysis and a "bottom-up" functional fault-modes-and-effects analysis. Via this process, the mitigation and recovery strategy(s) per Fault Containment Region scope (width versus depth) the FP architecture.

The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

This paper presents the design of a reconfigurable avionics system based on modern Static Random Access Memory (SRAM)-based Field Programmable Gate Array (FPGA) to be used in future generations of nano satellites. A major concern in satellite systems, and especially nano satellites, is to build robust systems with low-power consumption profiles. The system is designed to be flexible by providing the capability of reconfiguring itself based on its orbital position. As Single Event Upsets (SEU) do not have the same severity and intensity in all orbital locations, having their maximum at the South Atlantic Anomaly (SAA) and the polar cusps, the system does not have to be fully protected all the time in its orbit. An acceptable level of protection against high-energy cosmic rays and charged particles roaming in space is provided within the majority of the orbit through software fault tolerance. Checkpointing and rollback, together with control-flow assertions, are used for that level of protection. In the minority part of the orbit where severe SEUs are expected to exist, a reconfiguration of the system FPGA is initiated in which the processor systems are triplicated and protection through Triple Modular Redundancy (TMR) with feedback is provided. This technique of reconfiguring the system as per the level of the threat expected from SEU-induced faults helps in reducing the average dynamic power consumption of the system to one-third of its maximum. This technique can be viewed as smart protection through system reconfiguration. The system is built on the commercial version of the Xilinx Virtex-5 (XC5VLX50) FPGA on bulk silicon with 324 I/Os. Simulations of orbit SEU rates were carried out using the SPENVIS web-based software package.
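A minimal sketch of the TMR voting used in the high-radiation portion of the orbit, written in plain Python on integer outputs for illustration; in the actual design the triplicated processors and the voter live in the FPGA fabric.

```python
# Majority voter for triple modular redundancy (illustrative sketch).
from collections import Counter
from typing import Sequence, Tuple

def tmr_vote(replica_outputs: Sequence[int]) -> Tuple[int, bool]:
    """Majority-vote three replica outputs; also report whether a disagreement was masked.
    (If all three replicas disagree there is no majority; a real voter would signal an error.)"""
    assert len(replica_outputs) == 3
    value, votes = Counter(replica_outputs).most_common(1)[0]
    return value, votes < 3

# An SEU flips a bit in replica 1; the fault is masked and reported for corrective action.
outputs = [0x3A, 0x3A ^ 0x04, 0x3A]
value, masked_fault = tmr_vote(outputs)
print(f"voted value = {value:#x}, masked fault detected = {masked_fault}")
```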

An international workshop entitled "GONAF: A deep Geophysical Observatory at the North Anatolian Fault" was held 23–27 April 2007 in Istanbul, Turkey. The aim of this workshop was to refine plans for a deep drilling project at the North Anatolian Fault Zone (NAFZ) in northwestern Turkey. The current drilling target is located in the Marmara Sea, offshore the megacity of Istanbul, in the direct vicinity of the main branch of the North Anatolian Fault on the Prince Islands (Figs. 1 and 2). The NAFZ represents a 1600-km-long plate boundary that slips at an average rate of 20–30 mm·yr⁻¹ (McClusky et al., 2000). It has developed in the framework of the northward moving Arabian plate and the Hellenic subduction zone, where the African lithosphere is subducting below the Aegean. Comparison of long-term slip rates with Holocene and GPS-derived slip rates indicates an increasing westward movement of the Anatolian plate with respect to stable Eurasia. During the twentieth century, the NAFZ ruptured over 900 km of its length. A series of large earthquakes starting in 1939 near Erzincan in Eastern Anatolia propagated westward towards the Istanbul-Marmara region in northwestern Turkey, which today represents a seismic gap along a ≥100-km-long segment below the Sea of Marmara. This segment has not ruptured since 1766 and, if locked, may have accumulated a slip deficit of 4–5 m. It is believed to be capable of generating two M≥7.4 earthquakes within the next decades (Hubert-Ferrari et al., 2000); however, it could even rupture in a single large event (Le Pichon et al., 1999).

High resolution strain and tilt recordings were made in the near-field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10⁻⁸), with borehole dilatometers (resolution 10⁻¹⁰) and a 3-component borehole strainmeter (resolution 10⁻⁹). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure.

In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system, which includes many young faults. These faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and for assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper because of the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in the western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation as the fault dip affects estimations of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvector of the observed or calculated gravity gradient tensor on a profile and investigating its properties through numerical simulations. From numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body, and the dip of the maximum eigenvector closely follows the dip of the normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of the reverse fault. It was shown that the eigenvector of the gravity gradient tensor for estimating fault dips is determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result that corresponded to conventional fault dip estimations by geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
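A minimal numerical sketch of the eigenvector idea on a 2-D profile (x horizontal, z down): the symmetric gravity gradient tensor at one point is diagonalized, and the orientations of the maximum and minimum eigenvectors are read off as dip-like angles toward the high-density and low-density sides, respectively. The tensor values below are hypothetical, not survey data.

```python
# Eigen-analysis of a 2x2 gravity gradient tensor on a profile (illustrative sketch).
import numpy as np

# Hypothetical gravity gradient tensor at one profile point [Eotvos]
T = np.array([[ 35.0, -20.0],
              [-20.0, -35.0]])      # [[T_xx, T_xz], [T_zx, T_zz]], symmetric

eigvals, eigvecs = np.linalg.eigh(T)        # ascending eigenvalues, orthonormal eigenvectors
v_max = eigvecs[:, np.argmax(eigvals)]      # eigenvector of the maximum eigenvalue
v_min = eigvecs[:, np.argmin(eigvals)]

dip_max = np.degrees(np.arctan2(abs(v_max[1]), abs(v_max[0])))
dip_min = np.degrees(np.arctan2(abs(v_min[1]), abs(v_min[0])))
print(f"maximum-eigenvector dip ~ {dip_max:.1f} deg (points toward the high-density body)")
print(f"minimum-eigenvector dip ~ {dip_min:.1f} deg (points toward the low-density body)")
```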

This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

Medical body sensors can be implanted in or attached to the human body to monitor the physiological parameters of patients all the time. Inaccurate data due to sensor faults or incorrect placement on the body will seriously influence clinicians' diagnosis; therefore, detecting sensor data faults has been widely researched in recent years. Most of the typical approaches to sensor fault detection in the medical area ignore the fact that the physiological indexes of patients are not changing synchronously at the same time, and fault values mixed with abnormal physiological data due to illness make it difficult to determine true faults. Based on these facts, we propose a Data Fault Detection mechanism in Medical sensor networks (DFD-M). Its mechanism includes: (1) use of a dynamic-local outlier factor (D-LOF) algorithm to identify outlying sensed data vectors; (2) use of a linear regression model based on trapezoidal fuzzy numbers to predict which readings in the outlying data vector are suspected to be faulty; (3) the proposal of a novel judgment criterion of fault state according to the prediction values. The simulation results demonstrate the efficiency and superiority of DFD-M.
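As a hedged illustration of step (1), the sketch below screens sensed data vectors with the standard local outlier factor from scikit-learn. It uses plain LOF on synthetic vital-sign vectors, not the paper's dynamic D-LOF variant.

```python
# LOF screening of sensed data vectors (illustrative sketch with synthetic data).
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(7)
# Columns: heart rate [bpm], SpO2 [%], body temperature [C]
normal = np.column_stack([rng.normal(75, 5, 200),
                          rng.normal(97, 1, 200),
                          rng.normal(36.8, 0.2, 200)])
faulty = np.array([[75.0, 40.0, 36.8],     # implausible SpO2 -> likely sensor fault
                   [220.0, 97.0, 36.7]])   # implausible heart rate
data = np.vstack([normal, faulty])

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(data)             # -1 marks outlying data vectors
print("outlying vectors:", np.where(labels == -1)[0])
```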

This paper discusses the use of fault tree analysis to identify those areas of nuclear fuel cycle facilities which must be protected to prevent acts of sabotage that could lead to significant release of radioactive material. By proper manipulation of the fault trees for a plant, an analyst can identify vital areas in a manner consistent with regulatory definitions. This paper discusses the general procedures used in the analysis of any nuclear facility. In addition, a structured, generic approach to the development of the fault trees for nuclear power reactors is presented along with selected results of the application of the generic approach to several plants.

The application of concatenated codes to fault tolerant quantum computing is discussed. We have previously shown that for quantum memories and quantum communication, a state can be transmitted with error ε provided each gate has error at most cε. We show how this can be used with Shor's fault tolerant operations to reduce the accuracy requirements when maintaining states not currently participating in the computation. Viewing Shor's fault tolerant operations as a method for reducing the error of operations, we give a concatenated implementation which promises to propagate the reduction hierarchically. This has the potential of reducing the accuracy requirements in long computations.

This paper presents a test benchmark model for the evaluation of fault detection and accommodation schemes. This benchmark model deals with the wind turbine on a system level, and it includes sensor, actuator, and system faults, namely faults in the pitch system, the drive train, the generator, and the converter system. Since it is a system-level model, the converter and pitch system models are simplified because these are controlled by internal controllers working at higher frequencies than the system model. The model represents a three-bladed pitch-controlled variable-speed wind turbine with a nominal power...

This article realizes nonlinear Fault Detection and Isolation for actuators, given that there is no measurement of the states in the actuators. The Fault Detection and Isolation of the actuators is instead based on angular velocity measurement of the spacecraft and knowledge about the dynamics of the satellite. The algorithms presented in this paper are based on a geometric approach to achieve nonlinear Fault Detection and Isolation. The proposed algorithms are tested in a simulation study, and the pros and cons of the algorithms are discussed.

The design of fault detectors for fault detection and isolation (FDI) in dynamic systems is considered from a norm based point of view. An analysis of norm based threshold selection is given based on different formulations of FDI problems. Both the nominal FDI problem as well as the uncertain FDI problem are considered. Based on this analysis, a performance index based on norms of the involved transfer functions is given. The performance index also allows us to optimize the structure of the fault detection filter directly.
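A minimal sketch of norm-based residual evaluation against a fixed threshold, on a synthetic residual with an arbitrarily chosen threshold value; the paper's contribution lies in how such thresholds and the detection filter structure are derived from the norms of the involved transfer functions.

```python
# Sliding-window norm of a residual compared with a threshold (illustrative sketch).
import numpy as np

rng = np.random.default_rng(3)
n = 600
residual = rng.normal(0.0, 0.1, n)       # fault-free part: disturbance and noise only
residual[400:] += 0.5                     # additive fault from sample 400 on

window = 50                               # evaluation window length
# J(k): RMS (scaled 2-norm) of the residual over a sliding window
J = np.array([np.linalg.norm(residual[k - window:k]) / np.sqrt(window)
              for k in range(window, n)])

J_th = 0.2                                # threshold chosen above the fault-free level of J
alarms = np.where(J > J_th)[0] + window
print(f"first alarm at sample {alarms[0] if alarms.size else 'none'}")
```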

This paper discusses the problem of designing fault tolerant compensators that stabilize a given system both in the nominal situation and in the situation where one of the sensors or one of the actuators has failed. It is shown that such compensators always exist, provided that the system is detectable from each output and that it is stabilizable. The proof of this result is constructive, and a worked example shows how to design a fault tolerant compensator for a simple, yet challenging system. A family of second order systems is described that requires fault tolerant compensators of arbitrarily...

This paper demonstrates fault diagnosis on unmanned underwater vehicles (UUV) based on analysis of the structure of the nonlinear dynamics. Residuals are generated using different approaches in structural analysis, followed by statistical change detection. Hypothesis testing thresholds are made signal-based to cope with non-ideal properties seen in real data. Detection of both sensor and thruster failures is demonstrated. Isolation is performed using the residual signature of detected faults, and the change detection algorithm is used to assess the severity of faults by estimating their magnitude...
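A minimal sketch of the statistical change detection step: a generic two-sided CUSUM test on a synthetic residual, followed by a crude post-alarm estimate of the fault magnitude. This is illustrative only, not the paper's algorithm, and the tuning parameters are hypothetical.

```python
# Two-sided CUSUM change detection with a crude magnitude estimate (illustrative sketch).
import numpy as np

rng = np.random.default_rng(5)
n = 500
residual = rng.normal(0.0, 1.0, n)
residual[300:] += 2.0                  # a thruster/sensor fault shifts the residual mean

drift, threshold = 0.5, 8.0            # CUSUM tuning parameters (hypothetical)
g_pos = g_neg = 0.0
alarm_at = None
for k, r in enumerate(residual):
    g_pos = max(0.0, g_pos + r - drift)
    g_neg = max(0.0, g_neg - r - drift)
    if max(g_pos, g_neg) > threshold:
        alarm_at = k
        break

if alarm_at is not None:
    # crude fault magnitude estimate: mean residual just after the alarm
    magnitude = residual[alarm_at:alarm_at + 50].mean()
    print(f"change detected at sample {alarm_at}, estimated magnitude ~ {magnitude:.2f}")
```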

The subject of the proposed research is fault-related folding and ground deformation. The results are relevant to oil-producing structures throughout the world, to understanding of damage that has been observed along and near earthquake ruptures, and to earthquake-producing structures in California and other tectonically active areas. The objectives of the proposed research were to provide both a unified mechanical infrastructure for studies of fault-related folding and to present the results in computer programs that have graphical user interfaces (GUIs) so that structural geologists and geophysicists can model a wide variety of fault-related folds (FaRFs).

The Median Tectonic Line fault zone (hereinafter MTLFZ) is the longest and most active fault zone in Japan. The MTLFZ is a 400-km-long, trench-parallel, right-lateral strike-slip fault accommodating the lateral slip component of the Philippine Sea plate's oblique subduction beneath the Eurasian plate [Fitch, 1972; Yeats, 1996]. Complex fault geometry evolves along the MTLFZ, and the geomorphic and geological characteristics show a remarkable change along its length. Extensional step-overs and pull-apart basins develop in the western part of the MTLFZ, and a pop-up structure develops in the eastern part; the fault zone thus displays "scissoring" fault properties. Two main factors produce these scissoring fault properties along the MTLFZ: the regional stress condition and a preexisting fault. The direction of σ1 rotates anticlockwise from N170°E [Famin et al., 2014] in the eastern Shikoku to Kinki areas, to N100°E [Research Group for Crustal Stress in Western Japan, 1980] in central Shikoku, to N85°E [Onishi et al., 2016] in western Shikoku. According to this rotation of the principal stress directions, the western and eastern parts of the MTLFZ are in transtensional and compressional regimes, respectively. The MTLFZ formed as a terrane boundary in the Cretaceous and has evolved over a long active history. The fault style has changed variously, between left-lateral, thrust, normal and right-lateral. Where a preexisting fault is present, the rupture does not completely conform to Anderson's theory for a newly formed fault, as the theory would require either purely dip-slip motion on a 45°-dipping fault or strike-slip motion on a vertical fault. The fault rupture of the 2013 Balochistan earthquake in Pakistan is a rare example of large strike-slip reactivation on a relatively low-angle dipping fault (thrust fault), though strike-slip faults generally have vertical planes [Avouac et al., 2014]. In this presentation, we first show deep subsurface

A new method is developed to determine on-fault magnitude distributions within a complex and connected multi-fault system. A binary integer programming (BIP) method is used to distribute earthquakes from a 10 kyr synthetic regional catalog, with a minimum magnitude threshold of 6.0 and Gutenberg-Richter (G-R) parameters (a- and b-values) estimated from historical data. Each earthquake in the synthetic catalog can occur on any fault and at any location. In the multi-fault system, earthquake ruptures are allowed to branch or jump from one fault to another. The objective is to minimize the slip-rate misfit relative to target slip rates for each of the faults in the system. Maximum and minimum slip-rate estimates around the target slip rate are used as explicit constraints. An implicit constraint is that an earthquake can only be located on a fault (or series of connected faults) if it is long enough to contain that earthquake. The method is demonstrated in the San Francisco Bay area, using UCERF3 faults and slip-rates. We also invoke the same assumptions regarding background seismicity, coupling, and fault connectivity as in UCERF3. Using the preferred regional G-R a-value, which may be suppressed by the 1906 earthquake, the BIP problem is deemed infeasible when faults are not connected. Using connected faults, however, a solution is found in which there is a surprising diversity of magnitude distributions among faults. In particular, the optimal magnitude distribution for earthquakes that participate along the Peninsula section of the San Andreas fault indicates a deficit of magnitudes in the M6.0- 7.0 range. For the Rodgers Creek-Hayward fault combination, there is a deficit in the M6.0- 6.6 range. Rather than solving this as an optimization problem, we can set the objective function to zero and solve this as a constraint problem. Among the solutions to the constraint problem is one that admits many more earthquakes in the deficit magnitude ranges for both faults
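The sketch below illustrates the binary-integer-programming idea on a toy problem, using the PuLP modeling library (an assumed dependency, not the authors' code): a few synthetic earthquakes are assigned to two hypothetical faults so that the implied moment release matches each fault's slip-rate moment budget as closely as possible, subject to the constraint that a fault must be long enough to contain the rupture. Every number is made up for illustration and does not reflect UCERF3 data.

```python
# Toy BIP assignment of earthquakes to faults (illustrative sketch; requires PuLP).
import pulp

MU = 3e10                       # rigidity [Pa]
T = 10_000 * 3.15e7             # catalog duration [s] (10 kyr)

# fault -> (target slip rate [m/s], fault area [m^2], longest containable rupture [km])
faults = {"A": (8e-3 / 3.15e7, 30e3 * 12e3, 30.0),
          "B": (3e-3 / 3.15e7, 20e3 * 12e3, 20.0)}
# event id -> (seismic moment [N m], rupture length [km])
quakes = {1: (1.3e18, 8.0), 2: (4.0e18, 15.0), 3: (1.3e19, 25.0), 4: (1.3e18, 8.0)}

prob = pulp.LpProblem("on_fault_magnitude_distribution", pulp.LpMinimize)
x = {(i, f): pulp.LpVariable(f"x_{i}_{f}", cat="Binary") for i in quakes for f in faults}
err = {f: pulp.LpVariable(f"err_{f}", lowBound=0) for f in faults}   # relative misfit

for i, (_, length) in quakes.items():
    prob += pulp.lpSum(x[i, f] for f in faults) == 1        # every event is placed somewhere
    for f, (_, _, max_len) in faults.items():
        if length > max_len:                                  # fault must be able to contain it
            prob += x[i, f] == 0

for f, (slip_rate, area, _) in faults.items():
    budget = slip_rate * MU * area * T                        # target moment over the catalog
    released = pulp.lpSum(x[i, f] * quakes[i][0] for i in quakes)
    prob += released - budget <= err[f] * budget              # linearized |relative misfit|
    prob += budget - released <= err[f] * budget

prob += pulp.lpSum(err.values())                              # objective: total relative misfit
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for i in quakes:
    fault = next(f for f in faults if x[i, f].value() > 0.5)
    print(f"earthquake {i} -> fault {fault}")
```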

Data-driven Design of Fault Diagnosis and Fault-tolerant Control Systems presents basic statistical process monitoring, fault diagnosis, and control methods, and introduces advanced data-driven schemes for the design of fault diagnosis and fault-tolerant control systems catering to the needs of dynamic industrial processes. With ever increasing demands for reliability, availability and safety in technical processes and assets, process monitoring and fault-tolerance have become important issues surrounding the design of automatic control systems. This text shows the reader how, thanks to the rapid development of information technology, key techniques of data-driven and statistical process monitoring and control can now become widely used in industrial practice to address these issues. To allow for self-contained study and facilitate implementation in real applications, important mathematical and control theoretical knowledge and tools are included in this book. Major schemes are presented in algorithm form and...

For the large scale and complicated structure of networked control systems, time-varying sensor faults could inevitably occur when the system works in a poor environment. A guaranteed cost fault-tolerant controller for such networked control systems with time-varying sensor faults is designed in this paper. Based on the time delay of the network transmission environment, the networked control systems with sensor faults are modeled as a discrete-time system with uncertain parameters, and the model is related to the boundary values of the sensor faults. Moreover, using Lyapunov stability theory and the linear matrix inequality (LMI) approach, the guaranteed cost fault-tolerant controller is verified to render such networked control systems asymptotically stable. Finally, simulations are included to demonstrate the theoretical results.
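As a hedged illustration of the LMI machinery underlying such designs, the sketch below checks a discrete-time Lyapunov LMI for a given closed loop using cvxpy (an assumed dependency). It only verifies stability of a hypothetical closed-loop matrix; the paper's guaranteed-cost synthesis involves additional decision variables and fault-bound terms.

```python
# Discrete-time Lyapunov LMI feasibility check with cvxpy (illustrative sketch).
import cvxpy as cp
import numpy as np

# Hypothetical closed-loop discrete-time system x[k+1] = A_cl x[k]
A_cl = np.array([[0.90, 0.20],
                 [0.00, 0.70]])

n = A_cl.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A_cl.T @ P @ A_cl - P << -eps * np.eye(n)]   # Lyapunov decrease condition

problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)
print("LMI feasible (closed loop asymptotically stable):", problem.status == cp.OPTIMAL)
```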

This paper is concerned with the fault detection (FD) problem in finite frequency domain for continuous-time Takagi-Sugeno fuzzy systems with sensor faults. Some finite-frequency performance indices are initially introduced to measure the fault/reference input sensitivity and disturbance robustness. Based on these performance indices, an effective FD scheme is then presented such that the generated residual is designed to be sensitive to both fault and reference input for faulty cases, while robust against the reference input for fault-free case. As the additional reference input sensitivity for faulty cases is considered, it is shown that the proposed method improves the existing FD techniques and achieves a better FD performance. The theory is supported by simulation results related to the detection of sensor faults in a tunnel-diode circuit.

The Death Valley-Furnace Creek fault system, in California and Nevada, has a variety of impressive late Quaternary neotectonic features that record a long history of recurrent earthquake-induced faulting. Although no neotectonic features of unequivocal historical age are known, paleoseismic features from multiple late Quaternary events of surface faulting are well developed throughout the length of the system. Comparison of scarp heights to amount of horizontal offset of stream channels and the relationships of both scarps and channels to the ages of different geomorphic surfaces demonstrate that Quaternary faulting along the northwest-trending Furnace Creek fault zone is predominantly right lateral, whereas that along the north-trending Death Valley fault zone is predominantly normal. These observations are compatible with tectonic models of Death Valley as a northwest-trending pull-apart basin.

Geodetic imaging techniques enable researchers to "see" details of fault rupture that cannot be captured by complementary tools such as seismology and field studies, thus providing increasingly detailed information about surface strain, slip kinematics, and how an earthquake may be transcribed into the geological record. For example, the recent Haiti, Sierra El Mayor, and Nepal earthquakes illustrate the fundamental role of geodetic observations in recording blind ruptures where purely geological and seismological studies provided incomplete views of rupture kinematics. Traditional earthquake hazard analyses typically rely on sparse paleoseismic observations and incomplete mapping, simple assumptions of slip kinematics from Andersonian faulting, and earthquake analogs to characterize the probabilities of forthcoming ruptures and the severity of ground accelerations. Spatially dense geodetic observations in turn help to identify where these prevailing assumptions regarding fault behavior break down and highlight new and unexpected kinematic slip behavior. Here, we focus on three key contributions of space geodetic observations to the analysis of co-seismic deformation: identifying near-surface co-seismic slip where no easily recognized fault rupture exists; discerning non-Andersonian faulting styles; and quantifying distributed, off-fault deformation. The 2013 Balochistan strike-slip earthquake in Pakistan illuminates how space geodesy precisely images non-Andersonian behavior and off-fault deformation. Through analysis of high-resolution optical imagery and DEMs, evidence emerges that a single fault may slip as both a strike-slip and a dip-slip fault across multiple seismic cycles. These observations likewise enable us to quantify on-fault deformation, which accounts for ~72% of the displacements in this earthquake. Nonetheless, the spatial distribution of on- and off-fault deformation in this event is highly spatially variable, a complicating factor for comparisons

The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations on the related hazard, similarly to other countries (e.g. USA). More specific guidelines on the management of areas affected by active and capable faults (i.e. faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria used to assess fault capability for the Italian territory. As for the chronology, the review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, the detailed analysis of the large amount of work dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g. fault plane exposures) or on indirect investigations (geophysical data) are not sufficient, or are even unreliable, for defining the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area in which active and capable faults can first be mapped based on such a classical but still effective methodological approach is the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). Time

The most common fault type in electrical distribution networks is the single phase to earth fault. According to earlier studies, for instance in the Nordic countries, about 80 % of all faults are of this type. To develop protection and fault location systems, it is important to obtain real case data of disturbances and faults which occur in the networks. For example, the earth fault initial transients can be used for earth fault location. The aim of this project was to collect and analyze real case data of earth fault disturbances in medium voltage distribution networks (20 kV). Therefore, data of fault occurrences were recorded at two substations, of which one has an unearthed and the other a compensated neutral, measured as follows: (a) the phase currents and neutral current for each line in the case of low fault resistance; (b) the phase voltages and neutral voltage from the voltage measuring bay in the case of low fault resistance; (c) the neutral voltage and the 50 Hz components at the substation in the case of high fault resistance. In addition, the basic data of the fault occurrences were collected (data of the line, fault location, cause and so on). The data will be used in the development work of fault location and earth fault protection systems.

Active fault diagnosis (AFD) of parametric faults is considered in connection with closed loop feedback systems. AFD involves auxiliary signals applied on the closed loop system. A fault signature matrix is introduced in connection with AFD and it is shown that if a limited number of faults can...

One important way that an architecture impacts fault tolerance is by making it easy or hard to implement measures that improve fault tolerance. Many such measures are described as fault tolerance tactics. We studied how various fault tolerance tactics can be implemented in the best-known

This paper presents a model-based fault detection algorithm for a specific fault scenario of the ADDSAFE project. The fault considered is the disconnection of a control surface from its hydraulic actuator. Detecting this type of fault as fast as possible helps to operate an aircraft more cost

A power converter is needed in almost all kinds of renewable energy systems and drive systems. It is used both for controlling the renewable source and for interfacing with the load, which can be grid-connected or working in standalone mode. Further, it drives the motors efficiently. Increasing...... efforts have been put into making these systems better in terms of reliability in order to achieve high power source availability, reduce the cost of energy and also increase the reliability of overall systems. Among the components used in power converters, power device and capacitor faults occur most...... frequently. Therefore, it is important to monitor power device and capacitor faults to increase the reliability of power electronics. In this chapter, the diagnosis methods for power device faults will be discussed by dividing them into open- and short-circuit faults. Then, the condition monitoring methods...

Full Text Available In this paper, we present a new reset tree-based scheme to protect cryptographic hardware against optical fault injection attacks. As one of the most powerful invasive attacks on cryptographic hardware, optical fault attacks cause semiconductors to misbehave by injecting high-energy light into a decapped integrated circuit. The contaminated result from the affected chip is then used to reveal secret information, such as a key, from the cryptographic hardware. Since the advent of such attacks, various countermeasures have been proposed. Although most of these countermeasures are strong, there is still the possibility of attack. In this paper, we present a novel optical fault detection scheme that utilizes the buffers on a circuit’s reset signal tree as a fault detection sensor. To evaluate our proposal, we model radiation-induced currents into circuit components and perform a SPICE simulation. The proposed scheme is expected to be used as a supplemental security tool.
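
For illustration, radiation- or laser-induced transient currents of the kind injected in such circuit-level simulations are commonly modeled as a double-exponential current pulse at the struck node. The sketch below uses that textbook model only; the peak current and time constants are hypothetical and are not taken from the paper.

    import numpy as np

    def injected_current(t, i0=1e-3, tau_rise=50e-12, tau_fall=200e-12):
        # Double-exponential pulse often used to model a radiation-induced
        # transient current at a circuit node (all parameter values hypothetical).
        return i0 * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

    t = np.linspace(0.0, 2e-9, 401)   # 0 to 2 ns
    i = injected_current(t)
    print(f"peak injected current ~ {i.max()*1e3:.3f} mA at t = {t[i.argmax()]*1e12:.0f} ps")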

In this paper a model of an induction motor affected by stator faults is presented. Two different types of faults are considered: disconnection of a supply phase, and inter-turn and turn-turn short circuits inside the stator. The output of the derived model is compared to real measurements from a specially designed induction motor. With this motor it is possible to simulate both terminal disconnections and inter-turn and turn-turn short circuits. The results show good agreement between the measurements and the simulated signals obtained from the model. In the tests focus...

When the failure probability of a system is extremely small or necessary statistical data from the system is scarce, it is very difficult or impossible to evaluate its reliability and safety with conventional fault tree analysis (FTA) techniques. New techniques are needed to predict and diagnose such a system's failures and evaluate its reliability and safety. In this paper, we first provide a concise overview of FTA. Then, based on the posbist reliability theory, event failure behavior is characterized in the context of possibility measures and the structure function of the posbist fault tree of a coherent system is defined. In addition, we define the AND operator and the OR operator based on the minimal cut of a posbist fault tree. Finally, a model of posbist fault tree analysis (posbist FTA) of coherent systems is presented. The use of the model for quantitative analysis is demonstrated with a real-life safety system
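
As background to the gate operators discussed above, the sketch below contrasts conventional probabilistic AND/OR gate evaluation with the min/max operators typically used for possibility (posbist-style) measures. The paper defines its operators via minimal cut sets, so this is only an illustrative approximation, and the basic-event values are made up.

    from functools import reduce

    def prob_and(ps):   # conventional FTA: independent basic events
        return reduce(lambda acc, p: acc * p, ps, 1.0)

    def prob_or(ps):
        return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), ps, 1.0)

    def poss_and(ps):   # possibility-measure analogue: minimum of the inputs
        return min(ps)

    def poss_or(ps):    # possibility-measure analogue: maximum of the inputs
        return max(ps)

    # Top event = (A AND B) OR C, with hypothetical basic-event values.
    a, b, c = 0.01, 0.02, 0.005
    print("probabilistic top event :", prob_or([prob_and([a, b]), c]))
    print("possibilistic top event :", poss_or([poss_and([a, b]), c]))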

Subduction faults accumulate stress during long periods of time and release this stress suddenly, during earthquakes, when it reaches a threshold. This threshold, the shear strength, controls the occurrence and magnitude of earthquakes. We consider a 3-D model to derive an analytical expression for how the shear strength depends on the fault geometry, the convergence obliquity, frictional properties, and the stress field orientation. We then use estimates of these different parameters in Japan to infer the distribution of shear strength along a subduction fault. We show that the 2011 Mw9.0 Tohoku earthquake ruptured a fault portion characterized by unusually small variations in static shear strength. This observation is consistent with the hypothesis that large earthquakes preferentially rupture regions with relatively homogeneous shear strength. With increasing constraints on the different parameters at play, our approach could, in the future, help identify favorable locations for large earthquakes.

A district heating substation is a demanding process for fault diagnosis. The process is nonlinear, load conditions of the district heating network change unpredictably, and the standard instrumentation is designed only for control and local monitoring purposes, not for automated diagnosis. Extra instrumentation means additional cost, which is usually not acceptable to consumers. That is why not all conventional methods are applicable in this environment. The paper presents five different approaches to fault diagnosis. While developing the methods, various pragmatic aspects and robustness had to be considered in order to achieve practical solutions. The presented methods are: classification of faults using performance indexing, static and physical modelling of process equipment, energy balance of the process, interactive fault tree reasoning, and statistical tests. The methods are applied to a control valve, a heat exchanger, a mud separating device and the whole process. The developed methods are verified in practice using simulation or field tests. (orig.) (25 refs.)
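
One of the listed approaches, the energy balance of the process, can be illustrated by a simple residual check between the heat released on the primary side and the heat absorbed on the secondary side of the substation heat exchanger. A minimal sketch follows; the variable names, readings and the 5 % alarm threshold are hypothetical.

    CP_WATER = 4.19e3  # J/(kg*K), specific heat of water

    def heat_flow(mass_flow_kg_s, t_in_c, t_out_c):
        # Thermal power transferred by one side of the heat exchanger.
        return mass_flow_kg_s * CP_WATER * (t_in_c - t_out_c)

    def energy_balance_residual(primary, secondary):
        # Relative mismatch between heat released on the primary side and heat
        # absorbed on the secondary side; a large value suggests a fault
        # (e.g. fouling, leakage or a failed sensor).
        q_prim = heat_flow(*primary)     # (flow, T_supply, T_return)
        q_sec = -heat_flow(*secondary)   # secondary side is heated, so T_out > T_in
        return abs(q_prim - q_sec) / max(abs(q_prim), 1e-9)

    primary = (1.2, 95.0, 60.0)    # kg/s, degC supply, degC return (hypothetical readings)
    secondary = (1.5, 45.0, 70.0)  # kg/s, degC in, degC out
    r = energy_balance_residual(primary, secondary)
    print(f"relative energy-balance residual: {r:.2%}",
          "-> possible fault" if r > 0.05 else "-> OK")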

A test system and protocol were developed for testing high-impedance fault (HIF) detection devices. A technique was established for point-by-point addition of fault and load currents, and the resultant signal was used for testing the performance of the devices in detecting HIFs in the presence of load current. The system used digitized data from recorded faults and normal currents to generate analog test signals for high-impedance fault detection relays. A test apparatus was built with a 10 kHz bandwidth and a playback duration of 30 minutes on 6 output channels for testing purposes. Three devices which have recently become available were tested and their performance was evaluated based on their respective test results.
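
The point-by-point addition of fault and load currents described above amounts to a sample-wise sum of two digitized records before playback. A minimal sketch, with synthetic waveforms standing in for the recorded data and an assumed sampling rate and system frequency, is shown below.

    import numpy as np

    fs = 10_000  # Hz, assumed sampling rate matching the ~10 kHz bandwidth above
    t = np.arange(0.0, 0.2, 1.0 / fs)

    # Hypothetical stand-ins for digitized records: a 60 Hz load current and a
    # small, erratic high-impedance fault current.
    load_current = 120.0 * np.sin(2 * np.pi * 60 * t)
    rng = np.random.default_rng(0)
    fault_current = 5.0 * np.sin(2 * np.pi * 60 * t + 0.3) * (1 + 0.5 * rng.standard_normal(t.size))

    # Point-by-point addition produces the composite test signal fed to the relay under test.
    test_signal = load_current + fault_current
    print("composite RMS:", round(float(np.sqrt(np.mean(test_signal**2))), 2), "A")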

This work presents a new method for real time welding fault detection in industry based on Linear Discriminant Analysis (LDA). A set of parameters was calculated from one second blocks of electrical data recorded during welding and based on control data from reference welds under good conditions, as well as faulty welds. Optimised linear combinations of the parameters were determined with LDA and tested with independent data. Short arc welds in overlap joints were studied with various power sources, shielding gases, wire diameters, and process geometries. Out-of-position faults were investigated. Application of LDA fault detection to a broad range of welding procedures was investigated using a similarity measure based on Principal Component Analysis. The measure determines which reference data are most similar to a given industrial procedure and the appropriate LDA weights are then employed. Overall, results show that Linear Discriminant Analysis gives an effective and consistent performance in real-time welding fault detection.
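
A minimal sketch of the LDA classification step, assuming scikit-learn is available and using randomly generated stand-ins for the per-second electrical feature blocks; the actual welding features, class labels and LDA weights from the study are not reproduced here.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)

    # Hypothetical feature blocks: 200 one-second windows x 6 electrical parameters,
    # labelled 0 for reference (good) welds and 1 for faulty welds.
    X_good = rng.normal(0.0, 1.0, size=(100, 6))
    X_bad = rng.normal(0.8, 1.2, size=(100, 6))
    X = np.vstack([X_good, X_bad])
    y = np.array([0] * 100 + [1] * 100)

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)

    # Independent test window (hypothetical) scored as it would be in real time.
    x_new = rng.normal(0.7, 1.1, size=(1, 6))
    print("predicted class:", int(lda.predict(x_new)[0]),
          "fault probability:", round(float(lda.predict_proba(x_new)[0, 1]), 3))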

This paper presents a framework for fault tolerant controllers (FTC) that includes input saturation. The controller architecture known from FTC, based on the Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization, is extended to handle input saturation. Applying this controller architecture in connection with faulty systems including input saturation gives an additional YJBK transfer function related to the input saturation. In the fault-free case, this additional YJBK transfer function can be applied directly for optimizing the feedback loop around the input saturation. In the faulty case, the design problem is a mixed design problem involving both parametric faults and input saturation.

It is stated that subsurface radon emanation monitored in shallow dry holes along an active segment of the San Andreas fault in central California shows spatially coherent large temporal variations that seem to be correlated with local seismicity. (author)

.... A graphical user interface (GUI) coupled to the processor displays each query in accordance with the hierarchical order thereof. The GUI simultaneously displays identification of the various subsystems having a relationship with the data type experiencing the data fault.

This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating for the actuator bias fault, the partial loss of effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance simultaneously. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of multiple robot-arm cooperative control system is developed for real-time verification. Experiments on the networked robot-arms are conducted and the results confirm the benefits and the effectiveness of the proposed distributed fault-tolerant control algorithms.

Power transformers are important equipment in power plants and substations and constitute an important hub linking power transmission and distribution in power systems. Their performance directly affects the quality, reliability and stability of the power system. This paper first summarizes power transformer faults into five parts according to fault type, then divides power transformer faults into three stages along the time dimension and uses DGA routine analysis and infrared diagnostic criteria to assess the running state of a power transformer. Finally, according to the needs of power transformer fault diagnosis, a dendritic tree of power transformer faults is constructed by stepwise refinement from the general to the specific.

Pore fluid pressure in a fault zone can be altered by natural processes (e.g., mineral dehydration and thermal pressurization) and industrial operations involving subsurface fluid injection and extraction for the development of energy and water resources. However, the effect of pore pressure change on the stability and slip motion of a preexisting geologic fault remains poorly understood; yet, it is critical for the assessment of seismic hazard. Here, we develop a micromechanical model to investigate the effect of pore pressure on fault slip behavior. The model couples fluid flow on the network of pores with mechanical deformation of the skeleton of solid grains. Pore fluid exerts pressure force onto the grains, the motion of which is solved using the discrete element method. We conceptualize the fault zone as a gouge layer sandwiched between two blocks. We study fault stability in the presence of a pressure discontinuity across the gouge layer and compare it with the case of continuous (homogeneous) pore pressure. We focus on the onset of shear failure in the gouge layer and reproduce conditions where the failure plane is parallel to the fault. We show that when the pressure is discontinuous across the fault, the onset of slip occurs on the side with the higher pore pressure, and that this onset is controlled by the maximum pressure on both sides of the fault. The results shed new light on the use of the effective stress principle and the Coulomb failure criterion in evaluating the stability of a complex fault zone.
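
The two criteria named at the end of the abstract can be written compactly. In their standard form (which the paper re-examines for a pressure discontinuity across the gouge layer), slip initiates where the Coulomb condition on the effective normal stress is first met:

    \sigma_n' = \sigma_n - p, \qquad |\tau| \;\ge\; c + \mu\,\sigma_n' = c + \mu\,(\sigma_n - p)

so, for a given total normal stress and shear stress, the side of the gouge layer carrying the higher pore pressure p reaches the criterion first, consistent with the result described above.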

The San Andreas fault forms a dominant component of the transform boundary between the Pacific and the North American plate. The density and strength of the complex accretionary margin is very heterogeneous. Based on the density structure of the lithosphere in the SW United States, we utilize a 3D finite element thermomechanical, viscoplastic model (Underworld2) to simulate deformation in the San Andreas Fault system. The purpose of the model is to examine the role of the big bend in the existing geometry; in particular, the big bend of the fault is an initial condition of our model. We first test the strength of the fault by comparing the surface principal stresses from our numerical model with the in situ tectonic stress. The best-fit model is one with an extremely weak fault (low friction coefficient) and a Great Valley block whose lower crust is denser (by about 200 kg/m3) than the surrounding blocks. In contrast, the Mojave block has been found by other geophysical surveys to have lost its mafic lower crust. Our model indicates strong strain localization at the boundary between the two blocks, which is an analogue for the Garlock fault. High-density lower crust material of the Great Valley tends to under-thrust beneath the Transverse Range near the big bend. This motion is likely to rotate the fault plane from the initial vertical direction to dip to the southwest. For the straight section, north of the big bend, the fault is nearly vertical. The geometry of the fault plane is consistent with field observations.

Full Text Available The purpose of this paper is to present a novel fault-tolerant tracking control (FTC) strategy with robust fault estimation and compensation for simultaneous actuator and sensor faults. Within the framework of fault-tolerant control, developing an FTC design method for wind turbines that can tolerate simultaneous pitch actuator and pitch sensor faults with bounded first time derivatives is a challenge. The paper's key contribution is a descriptor sliding mode method: by introducing an auxiliary descriptor state vector composed of the system state vector, the actuator fault vector, and the sensor fault vector, a novel augmented descriptor system is established, with which the system state can be estimated and the faults reconstructed by designing a descriptor sliding mode observer. Using LMI-based optimization, stability conditions for the estimation error dynamics are established to support the determination of the design parameters. With this estimation, a fault-tolerant controller is designed so that the system's stability can be maintained. The effectiveness of the design strategy is verified by implementing the controller on the National Renewable Energy Laboratory's 5-MW nonlinear, high-fidelity wind turbine model (FAST) and simulating it in MATLAB/Simulink.

This report provides the information needed to use the FTDRAW (Fault Tree Draw) code, which is designed for drawing a fault tree. The FTDRAW code has several optional functions, such as an overview output of a fault tree, fault tree output with English descriptions, fault tree output with Japanese descriptions and summary tree output. Inputs for the FTDRAW code are component failure rate information and gate information, which are filed by an execution of the FTA-J (Fault Tree Analysis-JAERI) code system, and option control data. Using the FTDRAW code, we can efficiently obtain drawings of fault trees that are easy to read. (author)

This book describes the state-of-the-art in energy efficient, fault-tolerant embedded systems. It covers the entire product lifecycle of electronic systems design, analysis and testing and includes discussion of both circuit and system-level approaches. Readers will be enabled to meet the conflicting design objectives of energy efficiency and fault-tolerance for reliability, given the up-to-date techniques presented.

This paper proposes a novel model for a Surrogate Object based paradigm in a mobile grid environment for achieving fault tolerance. Basically, the Mobile Grid Computing Model focuses on service composition and resource sharing processes. In order to increase the performance of the system, fault recovery plays a vital role. In our proposed system, a Surrogate Object based Checkpoint Recovery Model is introduced for the recovery point. This checkpoint recovery model depends on the Surrogate Object and the Fau...

Existing techniques and methodologies for fault diagnosis are surveyed. The techniques run the gamut from theoretical artificial intelligence work to conventional software engineering applications. They are shown to define a spectrum of implementation alternatives where tradeoffs determine their position on the spectrum. Various tradeoffs include execution time limitations and memory requirements of the algorithms as well as their effectiveness in addressing the fault diagnosis problem.

In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of computer systems of medium voltage distribution network automation is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA systems, is studied. Then the integration of substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed

The purpose of this article is to investigate the robustness to model uncertainties of observer-based fault detection and isolation. The approach is designed with a straightforward dynamic and the observer....

Most commercially produced integrated circuits are incapable of tolerating manufacturing defects. The area and function of the circuits are thus limited by the probability of faults occurring within the circuit. This thesis examines techniques for using redundancy in memory circuits to provide fault tolerance and to increase storage capacity. A hierarchical memory architecture using multiple Hamming codes is introduced and analysed to determine its resistance to manufa...

Full Text Available This paper presents a basin simulator designed to better take faults into account, either as conduits or as barriers to fluid flow. It computes hydrocarbon generation, fluid flow and heat transfer on the 4D (space and time) geometry obtained by 3D volume restoration. Contrary to classical basin simulators, this calculator does not require a structured mesh based on vertical pillars nor a multi-block structure associated with the fault network. The mesh follows the sediments during the evolution of the basin. It deforms continuously with respect to time to account for sedimentation, erosion, compaction and kinematic displacements. The simulation domain is structured in layers, in order to handle the corresponding heterogeneities properly and to follow the sedimentation processes (thickening of the layers). In each layer, the mesh is unstructured: it may include several types of cells such as tetrahedra, hexahedra, pyramids, prisms, etc. However, a mesh composed mainly of hexahedra is preferred, as they are well suited to the layered structure of the basin. Faults are handled as internal boundaries across which the mesh is non-matching. Different models are proposed for fault behavior, such as impervious fault, flow across the fault or conductive fault. The calculator is based on a cell-centered Finite Volume discretisation, which ensures conservation of physical quantities (mass of fluid, heat) at a discrete level and which accounts properly for heterogeneities. The numerical scheme handles the non-matching meshes and guarantees appropriate connection of cells across faults. Results on a synthetic basin demonstrate the capabilities of this new simulator.

In this chapter, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of computer systems of medium voltage distribution network automation is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA systems, is studied. Then the integration of substation telecontrol system and computerized relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed

Nowadays, case-based fault diagnostic (CBFD) systems have become important and widely applied problem solving technologies. They are based on the assumption that "similar faults have similar diagnoses". On the other hand, CBFD systems still suffer from some limitations. Common ones are: (1) the failure of CBFD to provide the needed diagnosis for new faults that have no similar cases in the case library; (2) limited memory as the number of stored cases in the library increases. The proposed research incorporates a neural network into the case-based system to enable the system to diagnose all the faults. Neural networks have proved their success in classification and diagnosis problems. The suggested system uses the neural network to diagnose the new faults (cases) that cannot be diagnosed by the traditional CBR diagnostic system. In addition, the proposed system can use another neural network to control adding and deleting cases in the library in order to manage the size of the case library. The suggested system improved the performance of the case-based fault diagnostic system when applied to the motor rolling bearing as a case study.

This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

Faults in photovoltaic (PV) systems, which can result in energy loss, system shutdown or even serious safety breaches, are often difficult to avoid. Fault detection in such systems is imperative to improve their reliability, productivity, safety and efficiency. Here, an innovative model-based fault-detection approach for early detection of shading of PV modules and faults on the direct current (DC) side of PV systems is proposed. This approach combines the flexibility and simplicity of a one-diode model with the extended capacity of an exponentially weighted moving average (EWMA) control chart to detect incipient changes in a PV system. The one-diode model, which is easily calibrated due to its limited calibration parameters, is used to predict the healthy PV array's maximum power coordinates of current, voltage and power using measured temperatures and irradiances. Residuals, which capture the difference between the measurements and the predictions of the one-diode model, are generated and used as fault indicators. Then, the EWMA monitoring chart is applied to the uncorrelated residuals obtained from the one-diode model to detect and identify the type of fault. Actual data from the grid-connected PV system installed at the Renewable Energy Development Center, Algeria, are used to assess the performance of the proposed approach. Results show that the proposed approach successfully monitors the DC side of PV systems and detects temporary shading.
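
A minimal sketch of the EWMA monitoring step applied to the model residuals, assuming the chart is calibrated on a fault-free window; the smoothing constant, control-limit width and residual values are hypothetical, and the one-diode prediction itself is not reproduced here.

    import numpy as np

    def ewma_alarms(residuals, lam=0.2, L=3.0):
        # EWMA control chart: z_t = lam*r_t + (1-lam)*z_{t-1}; an alarm is raised
        # when z_t leaves the band mu0 +/- L*sigma0*sqrt(lam/(2-lam)).
        r = np.asarray(residuals, dtype=float)
        mu0, sigma0 = r[:50].mean(), r[:50].std(ddof=1)  # calibrated on assumed fault-free window
        limit = L * sigma0 * np.sqrt(lam / (2.0 - lam))
        z, alarms = mu0, []
        for rt in r:
            z = lam * rt + (1.0 - lam) * z
            alarms.append(abs(z - mu0) > limit)
        return np.array(alarms)

    rng = np.random.default_rng(2)
    # A mean shift in the residuals stands in for an incipient shading/DC-side fault.
    res = np.concatenate([rng.normal(0, 1, 100), rng.normal(2.5, 1, 30)])
    print("first alarm at sample:", int(np.argmax(ewma_alarms(res))))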

Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), in this work a simulation model for fault injection is developed to estimate the dependability of a digital system in the operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. The fault locations cover all registers and memory cells. The fault distribution over locations is chosen randomly based on a uniform probability distribution. Using this model, we have predicted the reliability and masking effect of application software in a digital system, the Interposing Logic System (ILS), in a nuclear power plant. We considered four software operational profiles. From the results it was found that the software masking effect on hardware faults should be properly considered to predict system dependability accurately in the operational phase, because the masking effect took different values according to the operational profile.
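
For illustration, single bit-flip and stuck-at-x injection into a register word can be sketched as below (in Python rather than VHDL). The register names, word width and uniform sampling mirror the description above, but all concrete values are made up.

    import random

    WIDTH = 32  # assumed register width

    def flip_bit(value, bit):
        # Single bit-flip fault: invert one bit of the stored word.
        return value ^ (1 << bit)

    def stuck_at(value, bit, level):
        # Stuck-at-0 / stuck-at-1 fault: force one bit to a fixed level (0 or 1).
        return (value & ~(1 << bit)) | (level << bit)

    random.seed(3)
    registers = {name: random.getrandbits(WIDTH) for name in ("R0", "R1", "PC", "MEM[0x10]")}

    # Fault location chosen uniformly over all registers/cells and bit positions.
    target = random.choice(list(registers))
    bit = random.randrange(WIDTH)
    faulty = flip_bit(registers[target], bit)
    print(f"injected bit-flip into {target}, bit {bit}: "
          f"{registers[target]:#010x} -> {faulty:#010x}")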

of failures and lower the reliability of the MMC-HVDC system. Therefore, research on the fault diagnosis and fault-tolerant control of MMC-HVDC system is of great significance in order to enhance the reliability of the system. This paper provides a comprehensive review of fault diagnosis and fault handling...

This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

In this paper a basic idea of a fault-tolerant monitoring and decision support system will be explained. Fault detection is an important part of the fault-tolerant design for in-service monitoring and decision support systems for ships. In the paper, a virtual example of fault detection...... will be presented for a containership with a real decision support system onboard. All possible faults can be simulated and detected using residuals and the generalized likelihood ratio (GLR) algorithm....
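
For reference, a common textbook form of the generalized likelihood ratio (GLR) statistic for detecting a jump of unknown magnitude in the mean of Gaussian residuals r_i with variance \sigma^2 is

    g_k \;=\; \max_{1 \le j \le k} \; \frac{1}{2\,\sigma^{2}\,(k-j+1)} \Bigl( \sum_{i=j}^{k} r_i \Bigr)^{2}

with an alarm raised when g_k exceeds a chosen threshold; the exact variant used in the onboard system described above may differ.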

Full Text Available The process of economic reform that the Chinese economy has experienced is one of the most important phenomena in the evolution of the world economy over the past thirty years and will undoubtedly remain so in the future. Within this process, the opening to the outside world that the Chinese government began in 1978 has placed the country in a position of utmost importance within international trade and financial flows. In this context, the objective of this paper is to carry out a general analysis of the evolution of China's foreign trade in recent decades, paying particular attention to its exports of goods and emphasizing the changes experienced in recent years.

Highlights: • We developed a fault-weighted quantification method for fault detection coverage. • The method was applied to a specific digital reactor protection system. • The unavailability of the module differed by a factor of about 20 from the traditional method. • Several experimental tests can be effectively prioritized using this method. - Abstract: One of the most outstanding features of a digital I&C system is the use of fault-tolerant techniques. With an awareness of the importance of quantifying the fault detection coverage of fault-tolerant techniques, several studies related to the fault injection method were developed and employed to quantify fault detection coverage. In the fault injection method, each injected fault has a different importance because the frequency of realization of each injected fault is different. However, there have been no previous studies addressing the importance and weighting factor of each injected fault. In this work, a new method for allocating a weight to each injected fault using failure mode and effect analysis data is proposed. For application, the fault-weighted quantification method has also been applied to a specific digital reactor protection system to quantify the fault detection coverage. One of the major findings in the application was that the unavailability of a specific module in a digital I&C system may be estimated to be about 20 times smaller than the real value when the traditional method is used. The other finding was that the importance of the experimental cases can also be classified. Therefore, this method is expected not only to provide an accurate quantification procedure for fault detection coverage by weighting the injected faults, but also to contribute to an effective fault injection experiment by sorting the importance of the failure categories.
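
A minimal sketch of the fault-weighted coverage idea: each injected fault receives a weight derived from its FMEA-based occurrence frequency, and coverage is the weighted fraction of injected faults that the fault-tolerant technique detects. The weights, detection outcomes and weighting rule below are hypothetical.

    def weighted_coverage(faults):
        # faults: list of (fmea_frequency_weight, detected) pairs, one per injected fault.
        total = sum(w for w, _ in faults)
        detected = sum(w for w, d in faults if d)
        return detected / total

    # Hypothetical injection campaign: frequently realized faults carry more weight than rare ones.
    campaign = [(0.40, True), (0.25, True), (0.20, False), (0.10, True), (0.05, False)]
    unweighted = sum(d for _, d in campaign) / len(campaign)
    print(f"unweighted coverage: {unweighted:.2f}, "
          f"fault-weighted coverage: {weighted_coverage(campaign):.2f}")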

Sandbox experiments and field surveys were performed to investigate fault system evolution and fault-related deformation of the ground surface, the Quaternary deposits and rocks. The summary of the results is shown below. 1) In the case of strike-slip faulting, the basic fault sequence runs from early en echelon faults and pressure ridges to a linear trough. The fault systems associated with the 2000 western Tottori earthquake show an en echelon pattern that characterizes the early stage of wrench tectonics; therefore no thoroughgoing surface faulting was found above the rupture as defined by the main shock and aftershocks. 2) Low-angle and high-angle reverse faults commonly migrate basinward with time. With increasing normal fault displacement in bedrock, a normal fault develops within the range after a reverse fault has formed along the range front. 3) The horizontal distance of surface rupture from the bedrock fault, normalized by the height of the Quaternary deposits, agrees well with those of model tests. 4) An upward-widening damage zone, where secondary fractures develop, forms in the hanging wall side of the high-angle reverse fault at the Kamioka mine. (author)

Fault-tolerant control of current sensors is studied in this paper to improve the reliability of a doubly fed induction generator (DFIG). A fault-tolerant control system for current sensors is presented for the DFIG, which consists of a new current observer and an improved current sensor fault...... detection algorithm; the observer, the fault detection algorithm and the fault-tolerant control system are investigated by simulation. The results indicate that the outputs of the observer and the sensor are highly coherent. The fault detection algorithm can efficiently detect both soft and hard faults in current sensors, and the fault-tolerant control...

This report contains data from research undertaken by the author on the Hope Fault from 2000-2004. This report provides an opportunity to include data that was additional to or newer than work that was published in other places. New results from studies along the Hurunui section of the Hope Fault, additional to that published in Langridge and Berryman (2005) are presented here. This data includes tabulated data of fault location and description measurements, a graphical representation of this data in diagrammatic form along the length of the fault and new radiocarbon dates from the current EQC funded project. The new data show that the Hurunui section of the Hope Fault has the capability to yield further data on fault slip rate, earthquake displacements, and paleoseismicity. New results from studies at the Greenburn Stream paleoseismic site additional to that published in Langridge et al. (2003) are presented here. This includes a new log of the deepened west wall of Trench 2, a log of the west wall of Trench 1, and new radiocarbon dates from the second phase of dating undertaken at the Greenburn Stream site. The new data show that this site has the capability to yield further data on the paleoseismicity of the Conway segment of the Hope Fault. Through a detailed analysis of all three logged walls at the site and the new radiocarbon dates, it may, in combination with data from the nearby Clarence Reserve site of Pope (1994), be possible to develop a good record of the last 5 events on the Conway segment. (author). 12 refs., 12 figs

Deployment of temporary seismic stations after the 2011 Mineral, Virginia (USA), earthquake produced a well-recorded aftershock sequence. The majority of aftershocks are in a tabular cluster that delineates the previously unknown Quail fault zone. Quail fault zone aftershocks range from ~3 to 8 km in depth and are in a 1-km-thick zone striking ~036° and dipping ~50°SE, consistent with a 028°, 50°SE main-shock nodal plane having mostly reverse slip. This cluster extends ~10 km along strike. The Quail fault zone projects to the surface in gneiss of the Ordovician Chopawamsic Formation just southeast of the Ordovician–Silurian Ellisville Granodiorite pluton tail. The following three clusters of shallow (<3 km) aftershocks illuminate other faults. (1) An elongate cluster of early aftershocks, ~10 km east of the Quail fault zone, extends 8 km from Fredericks Hall, strikes ~035°–039°, and appears to be roughly vertical. The Fredericks Hall fault may be a strand or splay of the older Lakeside fault zone, which to the south spans a width of several kilometers. (2) A cluster of later aftershocks ~3 km northeast of Cuckoo delineates a fault near the eastern contact of the Ordovician Quantico Formation. (3) An elongate cluster of late aftershocks ~1 km northwest of the Quail fault zone aftershock cluster delineates the northwest fault (described herein), which is temporally distinct, dips more steeply, and has a more northeastward strike. Some aftershock-illuminated faults coincide with preexisting units or structures evident from radiometric anomalies, suggesting tectonic inheritance or reactivation.

The research summarized in this report focuses on the dependability of computer systems. It addresses several complementary, theoretical as well as experimental, issues that are grouped into four topics. The first topic concerns the definition of efficient methods that aim to assist the users in the construction and validation of complex dependability analysis and evaluation models. The second topic deals with the modeling of reliability and availability growth that mainly result from the progressive removal of design faults. A method is also defined to support the application of software reliability evaluation studies in an industrial context. The third topic deals with the development and experimentation of a new approach for the quantitative evaluation of operational security. This approach aims to assist the system administrators in the monitoring of operational security when modifications that are likely to introduce new vulnerabilities occur in the system configuration, the applications, the user behavior, etc. Finally, the fourth topic addresses: a) the definition of a development model focused on the production of dependable systems, and b) the development of assessment criteria to obtain justified confidence that a system will achieve, during its operation and up to its decommissioning, its dependability objectives. (author) [fr]

At low confining pressure, sliding on saw cuts in granite is stable but at high pressure it is unstable. The pressure at which the transition takes place increases if the thickness of the crushed material between the sliding surfaces is increased. This experimental result suggests that on natural faults the stability of sliding may be affected by the width of the fault zone. © 1976.

We introduce here the concept of Geotribology as an approach to study friction, wear, and lubrication of geological systems. Methods of geotribology are applied here to characterize the friction and wear associated with slip along experimental faults composed of brittle rocks. The wear in these faults is dominated by brittle fracturing, plucking, scratching and fragmentation at asperities of all scales, including 'effective asperities' that develop and evolve during the slip. We derived a theoretical model for the rate of wear based on the observation that the dynamic strength of brittle materials is proportional to the product of load stress and loading period. In a slipping fault, the loading period of an asperity is inversely proportional to the slip velocity, and our derivations indicate that the wear-rate is proportional to the ratio of [shear-stress/slip-velocity]. By incorporating the rock hardness data into the model, we demonstrate that a single, universal function fits wear data of hundreds of experiments with granitic, carbonate and sandstone faults. In the next step, we demonstrate that the dynamic frictional strength of experimental faults is well explained in terms of the tribological parameter PV factor (= normal-stress · slip-velocity). This factor successfully delineates weakening and strengthening regimes of carbonate and granitic faults. Finally, our analysis revealed a puzzling observation that wear-rate and frictional strength have strikingly different dependencies on the loading conditions of normal-stress and slip-velocity; we discuss sources for this difference. We found that utilization of tribological tools in fault slip analyses leads to effective and insightful results.
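
Restating the two relations from the abstract in symbols, with W the wear volume, \tau the shear stress, V the slip velocity and \sigma_n the normal stress (the proportionality constant absorbs the rock-hardness dependence mentioned above):

    \frac{dW}{dt} \;\propto\; \frac{\tau}{V}, \qquad PV \;=\; \sigma_n \cdot V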

The Confirmatory Drilling Project is the final investigation under the Pen Branch Fault Program, initiated to determine the capability of the Pen Branch fault (PBF) to release seismic energy. This investigation focused on a small zone over the fault where previously collected seismic reflection data had indicated the fault deforms the subsurface at 150 msec (with reference to an 80 m reference datum). Eighteen drill holes, 2 to basement and the others to 300 ft, were arranged in a scatter pattern over the fault. To adequately define the configuration of the layers deformed by the fault, boreholes were spaced over a zone of 800 ft, north to south. The closely spaced data were to confirm or refute the existence of flat-lying reflectors observed in seismic reflection data and to enable the authors to identify and correlate lithologic layers with seismic reflection data. Results suggest that deformation by the fault in sediments 300 ft deep and shallower is subtle. Corroboration of the geologic interpretation with the seismic reflection profile is ongoing, but preliminary results indicate that specific reflectors can be assigned to lithologic layers. A large-amplitude package of reflections below a flat-lying continuous reflection at 40 msec can be correlated with a lithology that corresponds to carbonate sediments in the geologic cross-section. Further, the data also show that a geologic layer as shallow as 30 ft can be traced on these seismic data over the same subsurface distance where the geologic cross-section shows corresponding continuity. The subsurface structure is thus corroborated by both methods at this study site.

This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship with two distinct modes of anticlinal folding that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. Using 3D seismic reflection data and new

Active faulting in the upper plate of the Hikurangi subduction zone, North Island, New Zealand, represents a significant seismic hazard that is not yet well understood. In northern Wairarapa, the geometry and kinematics of active faults, and the Quaternary and historical surface-rupture record, have not previously been studied in detail. We present the results of mapping and paleoseismicity studies on faults in the northern Wairarapa region to document the characteristics of active faults and the timing of earthquakes. We focus on evidence for surface rupture in the 1855 Wairarapa (Mw 8.2) and 1934 Pahiatua (Mw 7.4) earthquakes, two of New Zealand's largest historical earthquakes. The Dreyers Rock, Alfredton, Saunders Road, Waitawhiti, and Waipukaka faults form a northeast-trending, east-stepping array of faults. Detailed mapping of offset geomorphic features shows the rupture lengths vary from c. 7 to 20 km and single-event displacements range from 3 to 7 m, suggesting the faults are capable of generating M > 7 earthquakes. Trenching results show that two earthquakes have occurred on the Alfredton Fault since c. 2900 cal. BP. The most recent event probably occurred during the 1855 Wairarapa earthquake as slip propagated northward from the Wairarapa Fault and across a 6 km wide step. Waipukaka Fault trenches show that at least three surface-rupturing earthquakes have occurred since 8290-7880 cal. BP. Analysis of stratigraphic and historical evidence suggests the most recent rupture occurred during the 1934 Pahiatua earthquake. Estimates of slip rates provided by these data suggest that a larger component of strike slip than previously suspected is occurring within the upper plate and that the faults accommodate a significant proportion of the dextral component of oblique subduction. Assessment of seismic hazard is difficult because the known fault scarp lengths appear too short to have accommodated the estimated single-event displacements. Faults in the region are

One of the main objectives of this DOE-sponsored project was to reduce customer outage time. Fault location, prediction, and protection are the most important aspects of fault management for the reduction of outage time. In the past most of the research and development on power system faults in these areas has focused on transmission systems, and it is not until recently with deregulation and competition that research on power system faults has begun to focus on the unique aspects of distribution systems. This project was planned with three Phases, approximately one year per phase. The first phase of the project involved an assessment of the state-of-the-art in fault location, prediction, and detection as well as the design, lab testing, and field installation of the advanced protection system on the SCE Circuit of the Future located north of San Bernardino, CA. The new feeder automation scheme, with vacuum fault interrupters, will limit the number of customers affected by the fault. Depending on the fault location, the substation breaker might not even trip. Through the use of fast communications (fiber) the fault locations can be determined and the proper fault interrupting switches opened automatically. With knowledge of circuit loadings at the time of the fault, ties to other circuits can be closed automatically to restore all customers except the faulted section. This new automation scheme limits outage time and increases reliability for customers. The second phase of the project involved the selection, modeling, testing and installation of a fault current limiter on the Circuit of the Future. While this project did not pay for the installation and testing of the fault current limiter, it did perform the evaluation of the fault current limiter and its impacts on the protection system of the Circuit of the Future. After investigation of several fault current limiters, the Zenergy superconducting, saturable core fault current limiter was selected for

New high-voltage (HV) substations are fed by transmission lines. The position of these lines necessitates earthing design to ensure safety compliance of the system. Conductive structures such as steel or concrete poles are widely used in HV transmission mains. The earth potential rise (EPR) generated by a fault at the substation could result in an unsafe condition. This article discusses EPR due to a substation fault. The pole EPR under a substation fault is assessed with and without consideration of the mutual impedance. Determination of the split factor with and without the mutual impedance of the line is also discussed. Furthermore, a simplified formula to compute the pole grid current under a substation fault is included. The article also introduces the n factor, which determines the number of poles that require earthing assessment under a substation fault. A case study is shown.

Today, fault diagnosis of rotating machinery based on vibration analysis is an effective method for designing predictive maintenance programs. In this method, the vibration level of the turbines is monitored; if it exceeds the allowable limit, the vibration data are analyzed and growing faults are detected. Because of the high complexity of the monitored system, however, interpretation of the measured data is difficult. Designing fault diagnostic expert systems that draw on the technical experience and knowledge of experts therefore appears to be the best solution. In this paper, several common turbine faults are first studied, and then the application of neural networks to interpret the vibration data for fault diagnosis is explained.
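
A minimal sketch of the approach described, training a small neural network on features extracted from vibration records, might look like the following; the feature set, class labels, and synthetic data are illustrative stand-ins, not the paper's implementation.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def spectral_features(signal):
        """Toy feature vector: RMS level plus energy in four frequency bands."""
        spec = np.abs(np.fft.rfft(signal))
        bands = np.array_split(spec, 4)
        return np.array([np.sqrt(np.mean(signal ** 2))] + [b.sum() for b in bands])

    # Synthetic stand-ins for measured turbine vibration records, labelled
    # 0 = normal, 1 = unbalance, 2 = misalignment (labels are illustrative).
    X = np.array([spectral_features(rng.normal(scale=1.0 + k, size=1024))
                  for k in range(3) for _ in range(30)])
    y = np.repeat([0, 1, 2], 30)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print(clf.score(X, y))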

In this paper, a method for fault location in power distribution networks is presented. The proposed method uses an artificial neural network. In order to train the neural network, a series of specific characteristics are extracted from the fault signals recorded in the relay. These characteristics...... components of the sequences as well as the three-phase signals can be obtained using statistics to extract the hidden features inside them and present them separately to train the neural network. Also, since the inputs obtained for training the neural network strongly depend on the fault angle, fault...... resistance, and fault location, the training data should be selected such that these differences are properly represented, so that the neural network does not face any issues in identification. Therefore, selecting the signal processing function, the data spectrum and, subsequently, the statistical parameters...
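
Although the abstract is truncated, the idea of summarizing sequence components of the recorded three-phase signals with statistics can be sketched. The symmetrical-component transform below is standard, while the particular statistics are assumed for illustration only.

    import numpy as np

    A = np.exp(2j * np.pi / 3)            # 120-degree rotation operator
    T = np.array([[1, 1, 1],
                  [1, A, A**2],
                  [1, A**2, A]]) / 3.0    # phase -> (zero, positive, negative) sequence

    def feature_vector(va, vb, vc):
        """Summary statistics of the sequence components of recorded phasors.
        The chosen statistics (mean, std, max of magnitude) are illustrative,
        not the paper's exact feature set."""
        seq = T @ np.vstack([va, vb, vc])          # 3 x N complex samples
        mags = np.abs(seq)
        return np.concatenate([mags.mean(axis=1), mags.std(axis=1), mags.max(axis=1)])

    # Toy phasor records standing in for relay recordings (balanced case).
    n = 256
    va = np.ones(n); vb = A**2 * np.ones(n); vc = A * np.ones(n)
    print(feature_vector(va, vb, vc).round(3))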

This paper presents an impact analysis of the fault impedance, in terms of its magnitude and angle, on voltage sags caused by faults. Symmetrical and asymmetrical faults are simulated, on transmission and distribution lines, using frequency-domain fault simulation software called ANAFAS. Voltage sags are monitored at buses where sensitive end-users are connected. In order to overcome some intrinsic limitations of this software concerning its automatic execution for several cases, a computational tool was developed in the Java programming language. This solution allows the automatic simulation of cases including the effect of the fault position, the fault type, and the fault impedance itself. The main conclusion is that the magnitude and angle of the fault impedance can have a significant influence on voltage sags, depending on the fault characteristics. (author)
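
The dependence of the retained voltage on fault impedance can be illustrated with the classic voltage-divider sag model; this is a generic sketch, not the ANAFAS tool or the authors' Java code, and the impedance values are illustrative.

    import numpy as np

    def sag_voltage(z_source, z_line_to_fault, z_fault):
        """Retained voltage at the monitored bus during a fault, using the
        classic voltage-divider sag model (per-unit pre-fault voltage = 1)."""
        return (z_line_to_fault + z_fault) / (z_source + z_line_to_fault + z_fault)

    z_s = 0.1 + 0.8j        # source impedance, per unit (illustrative)
    z_l = 0.05 + 0.4j       # impedance from bus to fault point (illustrative)
    for zf in [0, 0.05, 0.1 + 0.1j]:   # fault impedance: magnitude and angle both vary
        v = sag_voltage(z_s, z_l, zf)
        print(f"Zf={zf}: |V|={abs(v):.2f} pu, angle={np.angle(v, deg=True):.1f} deg")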

Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, and the results are as follows. One- and two-dimensional electrical surveys clearly revealed that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri lies at a seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of preexisting faults. A trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m of displacement per event; the latest event occurred between 14,000 and 25,000 yrs BP. The seismic survey showed the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsannamdo may be a segment boundary.

We locate the concealed Carlsberg Fault zone along a 12-km-long trace in the Copenhagen city centre by seismic refraction, reflection and fan profiling. The Carlsberg Fault is located in a NNW-SSE striking fault system in the border zone between the Danish Basin and the Baltic Shield. Recent...... earthquakes indicate that this area is tectonically active. A seismic refraction study across the Carlsberg Fault shows that the fault zone is a low-velocity zone and marks a change in seismic velocity structure. A normal incidence reflection seismic section shows a coincident flower-like structure. We have...... the fault zone. The fault zone is a shadow zone to shots detonated outside the fault zone. Finite-difference wavefield modelling supports the interpretations of the fan recordings. Our fan recording approach facilitates cost-efficient mapping of fault zones in densely urbanized areas where seismic normal......

This dissertation presents theoretical and practical results concerning the use of fault injection as a means for testing fault tolerance in the framework of the experimental dependability validation of computer systems. The dissertation first presents the state of the art of published work on fault injection, encompassing both hardware (fault simulation, physical fault injection) and software (mutation testing) issues. Next, the major attributes of fault injection (faults and their activation, experimental readouts and measures) are characterized taking into account: i) the abstraction levels used to represent the system during the various phases of its development (analytical, empirical and physical models), and ii) the validation objectives (verification and evaluation). An evaluation method is subsequently proposed that combines the analytical modeling approaches (Monte Carlo simulations, closed-form expressions, Markov chains) used for the representation of the fault occurrence process and the experimental fault injection approaches (fault simulation and physical injection) characterizing the error processing and fault treatment provided by the fault tolerance mechanisms. An experimental tool - MESSALINE - is then defined and presented. This tool enables physical faults to be injected in a hardware and software prototype of the system to be validated. Finally, the application of MESSALINE for testing two fault-tolerant systems possessing very dissimilar features and the utilization of the experimental results obtained - both as design feedback and for the evaluation of dependability measures - are used to illustrate the relevance of the method. (author)
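
A generic fault-injection campaign loop of the kind such experiments rely on is sketched below to illustrate typical readouts (coverage, detection latency); it does not reproduce the MESSALINE tool's interface, and the probabilities are illustrative stand-ins for a real target system.

    import random

    def run_campaign(n_experiments, detection_prob=0.9, max_latency=50):
        """Toy fault-injection campaign: each experiment injects one fault,
        runs the workload, and records whether the fault-tolerance mechanisms
        detected it and after how long. Parameters are illustrative."""
        detected, latencies = 0, []
        for _ in range(n_experiments):
            if random.random() < detection_prob:     # error detected by the FT mechanisms
                detected += 1
                latencies.append(random.uniform(0, max_latency))   # latency in ms
        coverage = detected / n_experiments
        mean_latency = sum(latencies) / len(latencies) if latencies else float("nan")
        return coverage, mean_latency

    random.seed(1)
    print(run_campaign(1000))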

Full Text Available A fault is a planar fracture or discontinuity in a volume of rock across which there has been significant displacement as a result of earth movement. Large faults within the Earth's crust result from the action of plate tectonic forces, with the largest forming the boundaries between the plates; energy release associated with rapid movement on active faults is the cause of most earthquakes. The relationship between nonuniform dislocation and gravity changes was studied within the theoretical framework of a differential fault. Simulated observation values were adopted to deduce the gravity changes with the asymmetric-fault model and the Okada model, respectively. In the nonuniform model, the fault dislocation increases continuously from zero at the two end points toward the middle of the fault, whereas in the Okada model the dislocation is constant over the fault length. Numerical simulation experiments for the activities of a strike-slip fault, a dip-slip fault and an extension fault were carried out, respectively, and show that both the gravity contours and the gravity variation values are consistent whichever of the two models is adopted. The apparent difference lies in that the values at the end points are 17.97% for the strike-slip fault, 25.58% for the dip-slip fault, and 24.73% for the extension fault.
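
The contrast drawn between the two dislocation models can be shown directly: constant slip over the fault length (Okada-type) versus slip rising continuously from zero at the two end points to a maximum in the middle. The taper function and numbers below are illustrative assumptions.

    import numpy as np

    L = 40.0                      # fault length, km (illustrative)
    x = np.linspace(0, L, 9)      # positions along strike
    u_max = 2.0                   # peak dislocation, m (illustrative)

    u_okada = np.full_like(x, u_max)            # constant slip over the fault
    u_tapered = u_max * np.sin(np.pi * x / L)   # zero at both end points, rising
                                                # continuously toward the middle
    for xi, a, b in zip(x, u_okada, u_tapered):
        print(f"x={xi:5.1f} km  constant={a:.2f} m  tapered={b:.2f} m")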

The rate dependence of frictional processes in faults in quartzofeldspathic crust is proposed to change at c. 300°C, because above this temperature asperity deformation can be accommodated by crystal plastic processes. As a consequence, the real fault contact area increases and the fault becomes velocity strengthening. Conversely, faults at lower temperatures are velocity weakening and therefore prone to earthquake slip. We have investigated whether dislocation processes are important around faults in quartzites on seismic timescales, by inducing fault slip on a saw cut surface in novaculite blocks. Deformation was carried out at 450°C and 600°C in a Griggs apparatus. Slip rates of 8.3 × 10^-7 m s^-1 allowed total slip, u, of 0.5 mm to be achieved in c. 10 minutes. Failure occurred at peak differential stresses of ~1.7 GPa and 1.4 GPa respectively, followed by significant weakening. Structures of the novaculite within and surrounding the fault surface were examined using EBSD, FIB-SEM and TEM to elucidate changes to their dislocation substructure. In the sample deformed at 450°C, a ~50 μm thick layer of amorphous / non-crystalline silica was developed on the saw-cut surface during deformation. Rare clasts of the wall rock are preserved within this material. The surrounding sample is mostly composed of equant quartz grains of 5-10 μm diameter that lack a preferred orientation, contain very few intercrystalline dislocations, and are divided by organised high angle grain boundaries. After deformation, most quartz grains within the sample retain their starting microstructure. However, within ~10 μm of the sliding surface, dislocations are more common, and these are arranged into elongated, tangled zones (subgrain boundaries?). Microfractures are also observed. These microstructures are characteristic of deformation accommodated by low temperature plasticity. Our preliminary observations suggest that dislocation processes may be able to accommodate some deformation around fault

This study aims to assess the evolution of the blind Puente Hills thrust fault system (PHT) by determining its age of initiation, lateral propagation history, and changes in slip rate over time. The PHT presents one of the largest seismic hazards in the United States, given its location beneath downtown Los Angeles. The PHT is comprised of three fault segments: the Los Angeles (LA), Santa Fe Springs (SFS), and Coyote Hills (CH). The LA and SFS segments are characterized by growth stratigraphy where folds formed by uplift on the fault segments have been continually buried by sediment from the Los Angeles and San Gabriel rivers. The CH segment has developed topography and is characterized by onlapping growth stratigraphy. This depositional setting gives us the unique opportunity to measure uplift on the LA and SFS fault segments, and minimum uplift on the CH fault segment, as the difference in sediment thicknesses across the buried folds. We utilize depth converted oil industry seismic reflection data to image the fold geometries. Identifying time-correlative stratigraphic markers for slip rate determination in the basin has been a problem for researchers in the past, however, as the faunal assemblages observed in wells are time-transgressive by nature. To overcome this, we utilize the sequence stratigraphic model and well picks of Ponti et al. (2007) as a basis for mapping time-correlative sequence boundaries throughout our industry seismic reflection data from the present to the Pleistocene. From the Pleistocene to Miocene we identify additional sequence boundaries in our seismic reflection data from imaged sequence geometries and by correlating industry well formation tops. The sequence and formation top picks are then used to build 3-dimensional surfaces in the modeling program Gocad. From these surfaces we measure the change in thicknesses across the folds to obtain uplift rates between each sequence boundary. Our results show three distinct phases of

The Trans-Mexican Volcanic Belt (TMVB) is, geologically speaking, one of the most active and representative zones of Mexico. Research carried out in this area provides stratigraphic, seismologic and historical evidence of its recent activity during the Quaternary (Martinez and Nieto, 1990). Specifically, the Morelia-Acambay fault system (MAFS) consists of a series of normal faults of dominant direction E-W, ENE-WSW and NE-SW which cuts the center-west of the Trans-Mexican Volcanic Belt. This fault system appeared during the early Miocene, although the north-south oriented structures are older and have been related to the tectonism inherited from the "Basin and Range" system, but were reactivated by the east-west faults. It is believed that the activity of these faults has contributed to the creation and evolution of elongated lacustrine depressions such as Chapala, Zacapu, Cuitzeo, Maravatio and Acambay, as well as to the location of the monogenetic volcanoes that make up the Michoacan-Guanajuato volcanic field (MGVF) and tend to align in the direction of the dominant stress of the MAFS. In historical times, different segments of the MAFS have been the epicenter of earthquakes of moderate to strong magnitude, such as the events of 1858 in Patzcuaro, 1912 in Acambay, 1979 in Maravatio and 2007 in Morelia, among others. Detailed and semi-detailed analyses through a GIS platform, based on the vector files and 1:50 000 scale thematic charts from the INEGI database, have made it possible to highlight the influence of the MAFS segments on the morphology of the landscape and to identify other structures related to the movement of the existing faults, such as fractures, alignments and collapses, in the zone extending from the northwest of Morelia in Michoacán to the east of Acambay, Estado de México. This analysis suggests that the fault segments have normal displacement with a left-lateral component. In addition it can be

Fault Management is a critical aspect of deep-space missions. For the purposes of this paper, fault management is defined as the ability of a system to detect, isolate, and mitigate events that impact, or have the potential to impact, nominal mission operations. The fault management capabilities are commonly distributed across flight and ground subsystems, impacting hardware, software, and mission operations designs. The National Aeronautics and Space Administration (NASA) Discovery & New Frontiers (D&NF) Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns due to underestimating the complexity and support requirements for fault management. As a result of this and other recent experiences, the NASA Science Mission Directorate (SMD) Planetary Science Division (PSD) commissioned a workshop to bring together invited participants across government, industry, academia to assess the state of the art in fault management practice and research, identify current and potential issues, and make recommendations for addressing these issues. The workshop was held in New Orleans in April of 2008. The workshop concluded that fault management is not being limited by technology, but rather by a lack of emphasis and discipline in both the engineering and programmatic dimensions. Some of the areas cited in the findings include different, conflicting, and changing institutional goals and risk postures; unclear ownership of end-to-end fault management engineering; inadequate understanding of the impact of mission-level requirements on fault management complexity; and practices, processes, and

The fault tree technique has become a standard tool for the analysis of the safety and reliability of complex systems. In spite of the costs, which may be high for a complete and detailed analysis of a complex plant, the fault tree technique is popular and its benefits are fully recognized. Codes for automatic fault tree construction have therefore been developed, but their applications have mostly been restricted to simple academic examples and rarely concern complex, real-world systems. In this paper an interactive approach to fault tree construction is presented. The aim is not to replace the analyst, but to offer him an intelligent tool which can assist him in modeling complex systems. Using the CAFTS method, the analyst interactively constructs a fault tree in two phases: (1) in the first phase he generates an overall failure logic structure of the system, the macrofault tree; in this phase, CAFTS features an expert system approach to assist the analyst, making use of a knowledge base containing generic rules on the behavior of subsystems and components; (2) in the second phase the macrofault tree is further refined and transformed into a fully detailed and quantified fault tree. In this phase a library of plant-specific component failure models is used
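
A minimal representation of the kind of failure-logic structure built in the first phase is sketched below; the gate/leaf encoding and the component names are hypothetical and do not reflect the CAFTS knowledge base.

    # Minimal fault-tree representation: a gate is ("AND"|"OR", [children]),
    # a leaf is a basic-event name. Component names are hypothetical.
    def top_event(node, failed):
        if isinstance(node, str):                 # basic event
            return node in failed
        gate, children = node
        results = [top_event(c, failed) for c in children]
        return all(results) if gate == "AND" else any(results)

    tree = ("OR", [
        ("AND", ["pump_A_fails", "pump_B_fails"]),   # loss of redundant pumps
        "control_power_lost",
    ])
    print(top_event(tree, failed={"pump_A_fails"}))                     # False
    print(top_event(tree, failed={"pump_A_fails", "pump_B_fails"}))     # True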

Full Text Available Process safety analysis, which includes qualitative fault event identification, the relative frequency and event probability functions, as well as consequence analysis, was performed on an allyl chloride plant. An event tree for fault diagnosis and cognitive reliability analysis, as well as a troubleshooting system, were developed. Fuzzy inductive reasoning illustrated the advantages compared to crisp inductive reasoning. A qualitative model forecast the future behavior of the system in the case of accident detection and then compared it with the actual measured data. A cognitive model of the incident scenario, including qualitative and quantitative information through fuzzy logic, was derived as a fault locator for an allyl chloride plant. The obtained results showed the successful application of cognitive dispersion modeling to process safety analysis. A fuzzy inductive reasoner showed good performance in discriminating between different types of malfunctions. This fault locator allowed risk analysis and the construction of a fault tolerant system. This study is the first report in the literature showing the cognitive reliability analysis method.

The HERP report for long-term evaluation of active faults and the NSC safety review guide with regard to the geology and ground of a site were published in Nov. 2010 and Dec. 2010, respectively. With respect to those reports, our investigation is as follows: (1) For assessment of seismic hazard, we estimated seismic sources around NPPs based on information on tectonic geomorphology, earthquake distribution and subsurface geology. (2) For evaluation of the activity of blind faults, we calculated the slip rate on the 2008 Iwate-Miyagi Nairiku Earthquake fault, using information on late Quaternary fluvial terraces. (3) To evaluate the magnitude of earthquakes whose sources are difficult to identify, we proposed a new method for calculation of the seismogenic layer thickness. (4) To clarify the activities of active faults without superstratum, we carried out color analysis of fault gouge and divided the activities into those on the order of thousands of years and those on the order of tens of thousands of years. (5) For improving the chronology of sediments, we detected new widespread cryptotephras using mineral chemistry and developed late Quaternary cryptotephrostratigraphy around NPPs. (author)

1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

Full Text Available Because historical catalogs generally span only a few repetition intervals of major earthquakes, they do not provide much constraint on how regularly earthquakes recur. In order to obtain better recurrence statistics and long-term probability estimates for events M ≥ 6 on the San Andreas fault, we apply a seismicity model to this fault. The model is based on the concept of fault segmentation and the physics of static dislocations which allow for stress transfer between segments. Constraints are provided by geological and seismological observations of segment lengths, characteristic magnitudes and long-term slip rates. Segment parameters slightly modified from the Working Group on California Earthquake Probabilities allow us to reproduce observed seismicity over four orders of magnitude. The model yields quite irregular earthquake recurrence patterns. Only the largest events (M ≥ 7.5) are quasi-periodic; small events cluster. Both the average recurrence time and the aperiodicity are also a function of position along the fault. The model results are consistent with paleoseismic data for the San Andreas fault as well as a global set of historical and paleoseismic recurrence data. Thus irregular earthquake recurrence resulting from segment interaction is consistent with a large range of observations.
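
The distinction between quasi-periodic and clustered recurrence can be quantified with the aperiodicity (coefficient of variation) of inter-event times; the sketch below uses synthetic interval lists, not the model's output.

    import numpy as np

    def aperiodicity(recurrence_intervals):
        """Coefficient of variation of inter-event times: near 0 for
        quasi-periodic sequences, near 1 for Poissonian, >1 for clustering."""
        t = np.asarray(recurrence_intervals, dtype=float)
        return t.std() / t.mean()

    quasi_periodic = [150, 160, 145, 155, 150]        # years, synthetic
    clustered      = [20, 15, 400, 25, 10, 380]       # years, synthetic
    print(aperiodicity(quasi_periodic))   # small
    print(aperiodicity(clustered))        # large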

Full Text Available A new fault injection and Gini concordance based method has been developed for fault severity analysis of multibody mechanical systems concerning their dynamic properties. Fault tree analysis (FTA) is employed to roughly identify the faults that need to be considered. According to the constitution of the mechanical system, the dynamic properties can be obtained by solving the equations that include many types of faults, which are injected by using the fault injection technique. Then, the Gini concordance is used to measure the correspondence between the performance with faults and under normal operation, thereby providing useful hints for severity ranking in subsystems for reliability design. One numerical example and a series of experiments are provided to illustrate the application of the new method. The results indicate that the proposed method can accurately model the faults and obtain the correct information on fault severity. Some strategies are also proposed for reliability improvement of the spacecraft solar array.

It is crucial to develop a direct dating method of fault gouges for the assessment of recent fault activity in terms of site evaluation for nuclear power plants. This method would be useful in regions without Late Pleistocene overlying sediments. In order to estimate the age of the latest fault slip event, it is necessary to use fault gouges which have experienced high frictional heating sufficient for age resetting. It is said that frictional heating is higher in deeper depths, because frictional heating generated by fault movement is determined depending on the shear stress. Therefore, we should determine the reliable depth of age resetting, as it is likely that fault gouges from the ground surface have been dated to be older than the actual age of the latest fault movement due to incomplete resetting. In this project, we target the Nojima fault which triggered the 1995 Kobe earthquake in Japan. Samples are collected from various depths (300-1,500m) by trenching and drilling to investigate age resetting conditions and depth using several methods including electron spin resonance (ESR) and optical stimulated luminescence (OSL), which are applicable to ages later than the Late Pleistocene. The preliminary results by the ESR method show approx. 1.1 Ma1) at the ground surface and 0.15-0.28 Ma2) at 388 m depth, respectively. These results indicate that samples from deeper depths preserve a younger age. In contrast, the OSL method dated approx. 2,200 yr1) at the ground surface. Although further consideration is still needed as there is a large margin of error, this result indicates that the age resetting depth of OSL is relatively shallow due to the high thermosensitivity of OSL compare to ESR. In the future, we plan to carry out further investigation for dating fault gouges from various depths up to approx. 1,500 m to verify the use of these direct dating methods.1) Kyoto University, 2017. FY27 Commissioned for the disaster presentation on nuclear facilities (Drilling

Authigenic illite can form synkinematically during slip events along brittle faults. In addition it can also crystallize as a result of fluid flow and associated mineral alteration processes in hydrothermal environments. K-Ar dating of illite-bearing fault rocks has recently become a common tool to constrain the timing of fault activity. However, to fully interpret the derived age spectra in terms of deformation ages, a careful investigation of the fault deformation history and architecture at the outcrop-scale, ideally followed by a detailed mineralogical analysis of the illite-forming processes at the micro-scale, are indispensable. Here we integrate this methodological approach by presenting microstructural observations from the host rock immediately adjacent to dated fault gouges from two sites located in the Rolvsnes granodiorite (Bømlo, western Norway). This granodiorite experienced multiple episodes of brittle faulting and fluid-induced alteration, starting in the Mid Ordovician (Scheiber et al., 2016). Fault gouges are predominantly associated with normal faults accommodating mainly E-W extension. K-Ar dating of illites separated from representative fault gouges constrains deformation and alteration due to fluid ingress from the Permian to the Cretaceous, with a cluster of ages for the finest (middle Jurassic. At site one, high-resolution thin section structural mapping reveals a complex deformation history characterized by several coexisting types of calcite veins and seven different generations of cataclasite, two of which contain a significant amount of authigenic and undoubtedly deformation-related illite. At site two, fluid ingress along and adjoining the fault core induced pervasive alteration of the host granodiorite. Quartz is crosscut by calcite veinlets whereas plagioclase, K-feldspar and biotite are almost completely replaced by the main alteration products kaolin, quartz and illite. Illite-bearing micro-domains were physically separated by

Mineralogical characterizations of fault clay and associated strata in the fault zone were made using field study and analytical methods. The mineral composition and color of the fault clay and rock in the fracture zone differ from those of the bed rocks. The fault clay is mainly composed of smectite with minor zeolite such as laumontite and stilbite, together with halloysite and illite. Illite and halloysite grow on the surface of the smectite, while laumontite and stilbite result from precipitation or alteration of Ca-rich bed rock. According to the results of the mineralogical study at Ipsil, Wangsan, Gaegok, Yugyeori and Gacheon in the Gyeongju area, detailed research on the microstructure of the fault clay makes it possible to predict the age of fault activity.

The main objective of this research is to extract fault features from sensor faults and process faults by using advanced fault detection and isolation (FDI) algorithms. A tank system that shares some characteristics with a NASA testbed at Stennis Space Center was used to verify our proposed algorithms. First, a generic tank system was modeled. Second, a mathematical model suitable for FDI was derived for the tank system. Third, a new and general FDI procedure was designed to distinguish process faults from sensor faults. Extensive simulations clearly demonstrated the advantages of the new design.
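
A toy version of residual-based FDI for a single tank illustrates the general idea of flagging faults by checking measurements against a model; the mass-balance model, threshold, and data below are assumptions, not the testbed's.

    def tank_fdi(level_meas, inflow, outflow, dt=1.0, area=1.0, threshold=0.05):
        """Toy residual generator for a single tank (mass balance
        d(level)/dt = (inflow - outflow) / area). Flags a fault when the
        one-step prediction error exceeds a threshold. Parameters illustrative."""
        residuals, flags = [], []
        for k in range(1, len(level_meas)):
            predicted = level_meas[k - 1] + dt * (inflow[k - 1] - outflow[k - 1]) / area
            r = level_meas[k] - predicted
            residuals.append(r)
            flags.append(abs(r) > threshold)
        return residuals, flags

    # Synthetic data: a leak (unmodelled outflow) starts at sample 5.
    inflow = [1.0] * 10
    outflow = [1.0] * 10
    level = [2.0] * 5 + [2.0 - 0.2 * k for k in range(1, 6)]
    print(tank_fdi(level, inflow, outflow)[1])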

Availability is one of the key performance indicators of LHC operation, being directly correlated with integrated luminosity production. An effective tool for availability tracking is a necessity to ensure a coherent capture of fault information and relevant dependencies on operational modes and beam parameters. At the beginning of LHC Run 2 in 2015, the Accelerator Fault Tracking (AFT) tool was deployed at CERN to track faults or events affecting LHC operation. Information derived from the AFT is crucial for the identification of areas to improve LHC availability, and hence LHC physics production. For the 2015 run, the AFT has been used by members of the CERN Availability Working Group, LHC Machine coordinators and equipment owners to identify the main contributors to downtime and to understand the evolution of LHC availability throughout the year. In this paper the 2015 experience with the AFT for availability tracking is summarised and an overview of the first results as well as an outlook to future develo...

This paper focuses on active fault detection (AFD) for parametric faults in closed-loop systems. The auxiliary input applied for fault detection will also disturb the external output and consequently reduce the performance of the controller. Therefore, only small auxiliary inputs are used...... with the result that the detection and isolation time can be long. In this paper it is shown that this problem can be handled by modifying the feedback controller. By applying the YJBK parameterization (after Youla, Jabr, Bongiorno and Kucera) to the controller, it is possible to modify...... the frequency for the auxiliary input is selected. This makes it possible to apply an auxiliary input with a reduced amplitude. An example is included to show the results....
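
Since this abstract and the fault tolerant control abstract below both rely on the YJBK parameterization, a brief reminder of its structure may help. The statement below is a common textbook form, written under an assumed right coprime factorization of the plant; notation and sign conventions may differ from the papers'.

    % A common statement of the YJBK parameterization (notation may differ from
    % the papers'). G = N M^{-1} is a right coprime factorization of the plant,
    % and (U, V) solve the associated Bezout identity.
    K(Q) = (U + M Q)\,(V + N Q)^{-1}, \qquad Q \ \text{stable},

so the free stable parameter Q can be adjusted on-line, for example to shape the auxiliary excitation for active fault detection or to switch controller behaviour, without losing closed-loop stability.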

A widely accepted explanation for the geometry of thrust faults is that initial failures occur on deeply buried planes of weak rock and that thrust faults propagate toward the surface along a staircase trajectory. We propose an alternative model that applies Gretener's beam-failure mechanism to a multilayered sequence. Invoking compatibility conditions, which demand that a thrust propagate both upsection and downsection, we suggest that ramps form first, at shallow levels, and are subsequently connected by flat faults. This hypothesis also explains the formation of many minor structures associated with thrusts, such as backthrusts, wedge structures, pop-ups, and duplexes, and provides a unified conceptual framework in which to evaluate field observations.

Analytical expressions are derived for the stress field caused by a rectangular dislocating fault of an arbitrary dip in a semi-infinite elastic medium for the case of unequal Lame constants. The results of computations for the stress fields on the ground surface of an inclined strike-slip and an inclined dip-slip fault are represented by contour maps. The effects of Poisson Ratio of the medium, the dip angle, upper and lower boundaries of the faults on the stress field at surface have been discussed. As an application, the contour maps for shear stress and hydrostatic stress of near fields of the Tonghai (1970), Haicheng (1975) and Tangshan (1976) earthquakes have been calculated and compared with the spatial distributions of strong aftershocks of these earthquakes. It is found that most of the strong aftershocks are distributed in the regions of tensional stress, where the hydrostatic stress is positive.

In this paper, analytical expressions for the stress field produced by a rectangular dislocating fault of an arbitrary dip in a semi-infinite elastic medium for the case of unequal Lame constants are derived. The results of computations for the stress fields on the ground surface of an inclined strike-slip and an inclined dip-slip fault are represented by contour maps. The effects of the Poisson Ratio of the medium, the dip angle, and the upper and lower boundaries of the faults on the stress field at the surface have been discussed. As an application, the contour maps for shear stress and hydrostatic stress of near fields of the Tonghai (1970), Haicheng (1975) and Tangshan (1976) earthquakes have been calculated and compared with the spatial distributions of strong aftershocks of these earthquakes. It is found that most of the strong aftershocks are distributed in the regions of tensional stress where the hydrostatic stress is positive.

This paper describes a concept for fault tolerant controllers (FTC) based on the YJBK (after Youla, Jabr, Bongiorno and Kucera) parameterization. This controller architecture allows the controller to be changed on-line in the case of faults in the system. In the described FTC concept, a safe mode...... controller is applied as the basic feedback controller. A controller for normal operation with high performance is obtained by including certain YJBK parameters (transfer functions) in the controller. This will allow a fast switch from normal operation to safe mode operation in case of critical faults...... in the system. The described FTC architecture allows the different feedback controllers to apply different sets of sensors and actuators....

Fault Tree Analysis Techniques have been used to assess the safety system of the ZED-2 Research Reactor at the Chalk River Nuclear Laboratories. This turned out to be a strong test of the techniques involved. The resulting fault tree was large and because of inter-links in the system structure the tree was not modularized. In addition, comprehensive documentation was required. After a brief overview of the reactor and the analysis, this paper concentrates on the computer tools that made the job work. Two types of tools were needed; text editing and forms management capability for large volumes of component and system data, and the fault tree codes themselves. The solutions (and failures) are discussed along with the tools we are already developing for the next analysis

The Dead Sea transform is an active plate boundary connecting the Red Sea seafloor spreading system to the Arabian-Eurasian continental collision zone. Its geology and geophysics provide a natural laboratory for investigation of the surficial, crustal and mantle processes occurring along transtensional and transpressional transform fault domains on a lithospheric scale and related to continental breakup. There have been many detailed and disciplinary studies of the Dead Sea transform fault zone during the last 20 years and this book brings them together. This book is an updated comprehensive coverage of the knowledge, based on recent studies of the tectonics, structure, geophysics, volcanism, active tectonics, sedimentology and paleo- and modern climate of the Dead Sea transform fault zone. It puts together all this new information and knowledge in a coherent fashion.

Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M ...) earthquakes and the angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance ≤2 km) tend to have very similar focal mechanisms, often identical to within the 1-sigma uncertainty of ~25°. This observed similarity implies that in small volumes of crust, while faults of many orientations may or may not be present, only similarly oriented fault planes produce earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2- to 50-km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

We have designed and tested a superconducting fault detector (SFD) for a 22.9 kV superconductor triggered fault current limiters (STFCLs) using Au/YBCO thin films. The SFD is to detect a fault and commutate the current from the primary path to the secondary path of the STFCL. First, quench characteristics of the Au/YBCO thin films were investigated for various faults having different fault duration. The rated voltage of the Au/YBCO thin films was determined from the results, considering the stability of the Au/YBCO elements. Second, the recovery time to superconductivity after quench was measured in each fault case. In addition, the dependence of the recovery characteristics on numbers and dimension of Au/YBCO elements were investigated. Based on the results, a SFD was designed, fabricated and tested. The SFD successfully detected a fault current and carried out the line commutation. Its recovery time was confirmed to be less than 0.5 s, satisfying the reclosing scheme in the Korea Electric Power Corporation (KEPCO)'s power grid

Based on a distributed method of bit-error-rate (BER) monitoring, a novel multi-link faults restoration algorithm is proposed for dynamic optical networks. The concept of fuzzy fault set (FFS) is first introduced for multi-link faults localization, which includes all possible optical equipment or fiber links with a membership describing the possibility of faults. Such a set is characterized by a membership function which assigns each object a grade of membership ranging from zero to one. OSPF protocol extension is designed for the BER information flooding in the network. The BER information can be correlated to link faults through FFS. Based on the BER information and FFS, multi-link faults localization mechanism and restoration algorithm are implemented and experimentally demonstrated on a GMPLS enabled optical network testbed with 40 wavelengths in each fiber link. Experimental results show that the novel localization mechanism has better performance compared with the extended limited perimeter vector matching (LVM) protocol and the restoration algorithm can improve the restoration success rate under multi-link faults scenario.
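
The fuzzy fault set idea of grading each link by how strongly its measured BER points to a fault can be sketched as follows; the membership function shape and thresholds are illustrative assumptions, not the paper's.

    import math

    def membership(ber, ber_nominal=1e-12, ber_fault=1e-6):
        """Grade in [0, 1] describing how strongly a link's measured BER points
        to a fault. Log-linear ramp between nominal and clearly-faulty BER;
        the shape and thresholds are illustrative only."""
        lo, hi = math.log10(ber_nominal), math.log10(ber_fault)
        x = (math.log10(ber) - lo) / (hi - lo)
        return min(1.0, max(0.0, x))

    # Fuzzy fault set: every monitored link with a non-zero membership grade.
    measured = {"link_1-2": 1e-12, "link_2-3": 3e-9, "link_3-4": 2e-6}
    ffs = {link: round(membership(b), 2) for link, b in measured.items()
           if membership(b) > 0}
    print(ffs)   # links graded by how likely they are to be at fault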

A statistical analysis was completed on the rupture data of 29 historical strike-slip earthquakes across the world. The purpose of this study is to examine the effects of fault steps on the rupture termination of these events. The results show good correlations between the type and length of steps with the seismic rupture and a poor correlation between the step number and seismic rupture. For different magnitude intervals, the smallest widths of the fault steps (Lt) that can terminate the rupture propagation are variable: Lt = 3 km for Ms 6.5-6.9, Lt = 4 km for Ms 7.0-7.5, Lt = 6 km for Ms 7.5-8.0, and Lt = 8 km for Ms 8.0-8.5. The dilational fault step is easier to rupture through than the compression fault step. The smallest widths of the fault step for the rupture arrest can be used as an indicator to judge the scale of the rupture termination of seismic faults. This is helpful for research on fault segmentation, as well as estimating the magnitude of potential earthquakes, and is thus of significance for the assessment of seismic risks.
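
The reported thresholds can be encoded as a simple lookup for judging whether a step of given width is likely to arrest rupture at a given magnitude; the values are taken directly from the abstract, and the rule itself is only a rough screening aid.

    # Smallest step width Lt (km) able to terminate rupture, per magnitude
    # interval, as reported in the abstract.
    LT_BY_MAGNITUDE = [
        ((6.5, 6.9), 3.0),
        ((7.0, 7.5), 4.0),
        ((7.5, 8.0), 6.0),
        ((8.0, 8.5), 8.0),
    ]

    def step_can_arrest(ms, step_width_km):
        for (lo, hi), lt in LT_BY_MAGNITUDE:
            if lo <= ms <= hi:
                return step_width_km >= lt
        return None   # outside the studied magnitude range

    print(step_can_arrest(7.2, 5.0))   # True: 5 km exceeds the 4 km threshold
    print(step_can_arrest(8.2, 5.0))   # False: below the 8 km threshold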

Faults strengthen or heal with time in stationary contact and this healing may be an essential ingredient for the generation of earthquakes. In the laboratory, healing is thought to be the result of thermally activated mechanisms that weld together micrometre-sized asperity contacts on the fault surface, but the relationship between laboratory measures of fault healing and the seismically observable properties of earthquakes is at present not well defined. Here we report on laboratory experiments and seismological observations that show how the spectral properties of earthquakes vary as a function of fault healing time. In the laboratory, we find that increased healing causes a disproportionately large amount of high-frequency seismic radiation to be produced during fault rupture. We observe a similar connection between earthquake spectra and recurrence time for repeating earthquake sequences on natural faults. Healing rates depend on pressure, temperature and mineralogy, so the connection between seismicity and healing may help to explain recent observations of large megathrust earthquakes which indicate that energetic, high-frequency seismic radiation originates from locations that are distinct from the geodetically inferred locations of large-amplitude fault slip

A fault classification method is proposed which has been applied to an electric vehicle. Potential faults in the different subsystems that can affect the vehicle directional stability were collected in a failure mode and effect analysis. Similar driveline faults were grouped together if they resembled each other with respect to their influence on the vehicle dynamic behaviour. The faults were physically modelled in a simulation environment before they were induced in a detailed vehicle model under normal driving conditions. A special focus was placed on faults in the driveline of electric vehicles employing in-wheel motors of the permanent magnet type. Several failures caused by mechanical and other faults were analysed as well. The fault classification method consists of a controllability ranking developed according to the functional safety standard ISO 26262. The controllability of a fault was determined with three parameters covering the influence of the longitudinal, lateral and yaw motion of the vehicle. The simulation results were analysed and the faults were classified according to their controllability using the proposed method. It was shown that the controllability decreased specifically with increasing lateral acceleration and increasing speed. The results for the electric driveline faults show that this trend cannot be generalised for all the faults, as the controllability deteriorated for some faults during manoeuvres with low lateral acceleration and low speed. The proposed method is generic and can be applied to various other types of road vehicles and faults.

Studies on fault diagnosis of rotating machinery have been carried out to obtain the machinery condition in two ways. First is a classical approach based on signal processing and analysis using vibration and acoustic signals. Second is to use artificial intelligence techniques to classify machinery conditions into normal or one of the pre-determined fault conditions. The Support Vector Machine (SVM) is well known as an intelligent classifier with robust generalization ability. In this study, a two-step approach is proposed to predict fault types and fault sizes of rotating machinery in nuclear power plants using the multi-class SVM technique. The model firstly classifies normal and 12 fault types and then identifies their sizes in case of predicting any faults. The time and frequency domain features are extracted from the measured vibration signals and used as input to the SVM. A test rig is used to simulate normal and the well-known 12 artificial fault conditions with three to six fault sizes of rotating machinery. The application results to the test data show that the present method can estimate fault types as well as fault sizes with high accuracy for bearing- and shaft-related faults and misalignment. Further research, however, is required to identify fault size in case of unbalance, rubbing, looseness, and coupling-related faults.
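
A compact sketch of the two-step scheme, first classifying fault type and then estimating size with per-type classifiers, is shown below using scikit-learn SVMs; the features and labels are synthetic stand-ins, not the paper's test-rig data.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for time- and frequency-domain vibration features.
    # Type labels: 0 = normal, 1 and 2 = fault types; size labels 0..2.
    y_type = np.repeat([0, 1, 2], 60)
    y_size = np.tile(np.repeat([0, 1, 2], 20), 3)
    X = rng.normal(size=(180, 6)) + 3.0 * y_type[:, None] + 0.8 * y_size[:, None]

    type_clf = SVC(kernel="rbf").fit(X, y_type)
    # One size classifier per fault type, trained only on that type's samples.
    size_clf = {t: SVC(kernel="rbf").fit(X[y_type == t], y_size[y_type == t])
                for t in (1, 2)}

    def diagnose(x):
        t = int(type_clf.predict([x])[0])
        if t == 0:
            return "normal", None
        return f"fault type {t}", int(size_clf[t].predict([x])[0])

    print(diagnose(X[70]))   # a sample drawn from fault type 1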

The increased penetration of wind power will increase the impact of wind power on the grid and thereby increase the importance of a clear guidance concerning the requirements on the protection system of the wind power units and the grid protection in connection to wind power units. The protection system should be able to satisfy the grid connection requirements, set by the TSO (Transmission System Operator) and the grid owners, as well as the general safety and security requirements, such as; personal safety, operational security and economic insurance, i.e. an insurance against economic losses. Vindforsk has appointed Gothia Power AB to perform a study concerning the fault clearance function in connection to wind power installations. The study is divided into two parts; Part 1: The first stage of the project handled the present praxis for the protection, including investigation of legal requirements, operational requirement and personal safety requirement applicable to wind power applications. Proposals for protection requirement for wind power units and the connecting grid are given. Basically 'normal' fault clearance requirements regarding speed, selectivity and redundancy can be used also in applications in connection to wind power. Part 2: The second part of the project results in a guideline for design of protection systems in connection to wind power. In this report mainly part 2 is covered. The main focus is given to clearance of faults in the grid connecting the wind power plants. Regarding internal faults and critical operation states within the wind power plant, a short discussion of feasible protection functions is given. Some critical fault cases in the grid have been identified and discussed: - Undetected islanding and failure of reclosing. There can be a risk of undetected island operation. In such cases it is recommended to use controlled autoreclosing in the vicinity of wind power generation. - Unwanted disconnection of a healthy feeder

The event tree - fault tree approach to modeling failures of nuclear plants, as well as of other complex facilities, is clearly dominant now. This approach implies modeling an object in the form of a unidirectional logical graph - a tree, i.e. a graph without circular logic. However, real nuclear plants intrinsically exhibit quite a few logical loops (circular logic), especially where electrical systems are involved. This paper shows the incorrectness of the existing practice of breaking circular logic by eliminating part of the logical dependencies, and puts forward a formal algorithm which enables the analyst to correctly model the failure of a complex object involving logical dependencies between systems and components in the form of a fault tree. (author)

The potential for general application of Fault Tree Analysis to commercial products appears attractive based not only on the successful extension from the aerospace safety technology to the nuclear reactor reliability and availability technology, but also because combinatorial hazards are common to commercial operations and therefore lend themselves readily to evaluation by Fault Tree Analysis. It appears reasonable to conclude that the technique has application within the commercial industrial community where the occurrence of a specified consequence or final event would be of sufficient concern to management to justify such a rigorous analysis as an aid to decision making. (U.S.)

The integrated design of control and fault detection is studied. The result of the analysis is that it is possible to separate the design of the controller and the filter for fault detection in the case where the nominal model can be assumed to be fairly accurate. In the uncertain case, however......, the design of the filter and the controller cannot be separated when an optimal design is desired. For systems with significant uncertainties, there turns out to be a fundamental trade-off between the performance in the control loop and the performance in the filter....

Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of the travelling-wave tube (TWT) cathode is a common transmitter fault. In this dissertation, a model based on a set of key TWT parameters is proposed. By choosing proper parameters and applying an adaptive neural network training model, this method, combined with the analytic hierarchy process (AHP), provides a useful reference for the overall health assessment of TWT transmitters.

A method for driving a neutral point clamped three-level inverter is provided. In one exemplary embodiment, DC current is received at a neutral point-clamped three-level inverter. The inverter has a plurality of nodes including first, second and third output nodes. The inverter also has a plurality of switches. Faults are checked for in the inverter and predetermined switches are automatically activated responsive to a detected fault such that three-phase electrical power is provided at the output nodes.

For certain fundamental problems in fault detection and identification, the necessary and sufficient conditions for their solvability are derived. These conditions are weaker than the ones found in the literature, since we do not assume any particular structure for the residual generator....

The fault diagnosis problem for dynamic power systems is treated: the nonlinear dynamic model, based on differential algebraic equations, is transformed with reduced index into a simple dynamic model. Two nonlinear observers are used to generate the fault signals for comparison purposes, one of them being an extended Kalman estimator and the other a new extended Kalman filter with moving horizon, with a convergence study based on the choice of the covariance matrices of the system and measurement noises. The paper illustrates a simulation study applied to the IEEE 3-bus test system.

We apply the concept of "Highly Optimized Tolerance" (HOT) for the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy; whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and associated seismic response pattern within the HOT concept, and outline fundamental differences between this novel interpretation to more orthodox viewpoints like the criticality concept. The discussion is flanked by numerical simulations of a

We present a novel experimental approach devised to test the hydro-mechanical behaviour of different structural elements of carbonate fault rocks during experimental re-activation. Experimentally faulted core plugs were subject to triaxial tests under water saturated conditions simulating depletion processes in reservoirs. Different fault zone structural elements were created by shearing initially intact travertine blocks (nominal size: 240 × 110 × 150 mm) to a maximum displacement of 20 and 120 mm under different normal stresses. Meso- and microstructural features of these samples and the thickness to displacement ratio characteristics of their deformation zones allowed them to be classified as experimentally created damage zones (displacement of 20 mm) and fault cores (displacement of 120 mm). Following direct shear testing, cylindrical plugs with a diameter of 38 mm were drilled across the slip surface to be re-activated in a conventional triaxial configuration, monitoring the permeability and frictional behaviour of the samples as a function of applied stress. All re-activation experiments on faulted plugs showed a consistent frictional response consisting of an initial fast hardening followed by apparent yield up to a friction coefficient of approximately 0.6 attained at around 2 mm of displacement. Permeability in the re-activation experiments shows exponential decay with increasing mean effective stress. The rate of permeability decline with mean effective stress is higher in the fault core plugs than in the simulated damage zone ones. It can be concluded that the presence of gouge in un-cemented carbonate faults results in their sealing character and that leakage cannot be achieved by renewed movement on the fault plane alone, at least not within the range of slip measurable with our apparatus (i.e. approximately 7 mm of cumulative displacement). Additionally, it is shown that under sub-seismic slip rates re-activated carbonate faults remain strong and no frictional
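
The reported exponential decay of permeability with mean effective stress corresponds to fitting k(s) = k0 * exp(-a * s); the sketch below performs a log-linear least-squares fit on synthetic data points, which merely stand in for the measured values.

    import numpy as np

    # Synthetic (mean effective stress [MPa], permeability [m^2]) pairs standing
    # in for the measured re-activation data; k(s) = k0 * exp(-a * s) is assumed.
    stress = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
    perm = np.array([8e-15, 5.5e-15, 2.6e-15, 1.2e-15, 6e-16])

    slope, intercept = np.polyfit(stress, np.log(perm), 1)   # log-linear fit
    print(f"k0 ~ {np.exp(intercept):.2e} m^2, decay rate ~ {-slope:.3f} per MPa")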

Fault strength is a fundamental property of seismogenic zones, and its temporal changes can increase or decrease the likelihood of failure and the ultimate triggering of seismic events. Although changes in fault strength have been suggested to explain various phenomena, such as the remote triggering of seismicity, there has been no means of actually monitoring this important property in situ. Here we argue that approximately 20 years of observations (1987-2008) of the Parkfield area on the San Andreas fault have revealed a means of monitoring fault strength. We have identified two occasions where long-term changes in fault strength were most probably induced remotely by large seismic events, namely the 2004 magnitude (M) 9.1 Sumatra-Andaman earthquake and the earlier 1992 M = 7.3 Landers earthquake. In both cases, the change had two manifestations: temporal variations in the properties of seismic scatterers, probably reflecting the stress-induced migration of fluids, and systematic temporal variations in the characteristics of repeating-earthquake sequences that are most consistent with changes in fault strength. In the case of the 1992 Landers earthquake, a period of reduced strength probably triggered the 1993 Parkfield aseismic transient as well as the accompanying cluster of four M > 4 earthquakes at Parkfield. The fault-strength changes produced by the distant 2004 Sumatra-Andaman earthquake are especially important, as they suggest that the very largest earthquakes may have a global influence on the strength of the Earth's fault systems. As such a perturbation would bring many fault zones closer to failure, it should lead to temporal clustering of global seismicity. This hypothesis seems to be supported by the unusually high number of M ≥ 8 earthquakes occurring in the few years following the 2004 Sumatra-Andaman earthquake.

This review focuses on faults in the Modular Multilevel Converter (MMC) for use in high voltage direct current (HVDC) systems by analyzing the vulnerable spots and failure mechanisms from device level to system level and illustrating the control and protection methods under failure conditions. At the beginning, several typical topologies of MMC-HVDC systems are presented. Then fault types such as capacitor voltage unbalance and unbalance between the upper and lower arm voltages are analyzed, and the corresponding fault detection and diagnosis approaches are explained. In addition, more attention is dedicated to control...
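
One of the fault types named above, capacitor voltage unbalance, is commonly detected by comparing each submodule capacitor voltage against the arm average; a minimal sketch of that check is given below. The tolerance and voltages are illustrative assumptions, not values from the review.

    def unbalanced_submodules(cap_voltages_kv, tolerance=0.05):
        # Flag submodules whose capacitor voltage deviates from the arm average
        # by more than the given relative tolerance (an assumed 5% here).
        avg = sum(cap_voltages_kv) / len(cap_voltages_kv)
        return [i for i, v in enumerate(cap_voltages_kv) if abs(v - avg) > tolerance * avg]

    # Submodule 3 has drifted low and is flagged for diagnosis.
    print(unbalanced_submodules([1.62, 1.60, 1.61, 1.38, 1.63]))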

Patterns in fault slip rates through time and space are examined across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38 and 39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and 10Be surface exposure dating, mean fault slip rates are defined, and by utilizing markers of different ages (generally ~20 ka and ~150 ka), rates through time and interactions among multiple faults are examined over 10^4-10^5 year timescales. At each site for which data are available for the last ~150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~20 ky and ~150 ky timescales): 0.3 ± 0.1 mm year^-1 (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm year^-1 along the West Fork of the Carson River at Woodfords. Data permit rates that are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm year^-1 at the Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm year^-1). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin is indicative of a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and the Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection
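
The mean slip rates quoted above follow from the ratio of marker offset to exposure age. The back-of-envelope sketch below shows the arithmetic and a crude uncertainty range; the offset and age values are placeholders, not the study's measurements.

    # Mean slip rate from a displaced geomorphic marker: rate = offset / age.
    # Offsets and ages below are placeholders, not the data from this study.
    offset_m, offset_err_m = 6.0, 1.0        # fault offset of a moraine crest (m)
    age_ka, age_err_ka = 20.0, 2.0           # 10Be surface exposure age (ka)

    rate = offset_m / age_ka                 # m/ka is numerically equal to mm/yr
    rate_min = (offset_m - offset_err_m) / (age_ka + age_err_ka)
    rate_max = (offset_m + offset_err_m) / (age_ka - age_err_ka)
    print(f"slip rate ~{rate:.2f} mm/yr (range {rate_min:.2f}-{rate_max:.2f} mm/yr)")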

National Aeronautics and Space Administration — Sensor faults continue to be a major hurdle for systems health management to reach its full potential. At the same time, few recorded instances of sensor faults...

National Aeronautics and Space Administration — Most fault adaptive control research addresses the preservation of system stability or functionality in the presence of a specific failure (fault). This paper...

Full Text Available For the improvement of the APR1400 Diverse Protection System (DPS) design, the Advanced DPS (ADPS) has recently been developed to enhance the fault tolerance capability of the system. Major fault masking features of the ADPS compared with the APR1400 DPS are the changes to the channel configuration and reactor trip actuation equipment. To minimize fault occurrences within the ADPS, and to mitigate the consequences of common-cause failures (CCF) within the safety I&C systems, several fault avoidance design features have been applied in the ADPS. The fault avoidance design features include changes to the system software classification, communication methods, equipment platform, MMI equipment, etc. In addition, fault detection, location, containment, and recovery processes have been incorporated in the ADPS design. Therefore, it is expected that the ADPS can provide an enhanced fault tolerance capability against possible faults within the system and its input/output equipment, and against the CCF of safety systems.

Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a

Full Text Available A novel distributed fault detection strategy for a class of nonlinear stochastic systems is presented. Different from existing design procedures for fault detection, a novel fault detection observer, which consists of a nonlinear fault detection filter and a consensus filter, is proposed to detect faults in the nonlinear stochastic systems. Firstly, the outputs of the nonlinear stochastic systems act as inputs of a consensus filter. Secondly, a nonlinear fault detection filter is constructed to provide estimates of unmeasurable system states and residual signals using the outputs of the consensus filter. The stability of the consensus filter is rigorously analyzed. Meanwhile, the design procedure of the nonlinear fault detection filter is given in terms of linear matrix inequalities (LMIs). Taking the influence of the system stochastic noises into consideration, an outstanding feature of the proposed scheme is that false alarms can be reduced dramatically. Finally, simulation results are provided to show the feasibility and effectiveness of the proposed fault detection approach.
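
The core residual idea can be illustrated with a much simpler observer than the LMI-designed filter of the paper: estimate the output, form the residual as the estimation error, and declare a fault when the residual exceeds a threshold. The system, gain and threshold below are illustrative assumptions.

    import numpy as np

    # Toy scalar system dx/dt = a*x + fault, measured with noise; a simple observer
    # tracks the output and a fault is declared when the residual exceeds a threshold.
    rng = np.random.default_rng(0)
    a, L_gain, threshold, dt = -0.5, 2.0, 0.5, 0.01
    x, x_hat = 0.0, 0.0
    for k in range(2000):
        t = k * dt
        fault = 2.0 if t > 10.0 else 0.0                      # additive fault at t = 10 s
        x += dt * (a * x + fault) + 0.02 * rng.standard_normal()
        y = x + 0.02 * rng.standard_normal()                  # noisy measurement
        x_hat += dt * (a * x_hat + L_gain * (y - x_hat))      # observer update
        if abs(y - x_hat) > threshold:
            print(f"fault flagged at t = {t:.2f} s")
            break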

Full Text Available Fault detection and isolation in a complex system are research hotspots and frontier problems in the reliability engineering field. Fault identification can be regarded as a procedure of extracting key characteristics from massive failure data, then classifying and identifying fault samples. In this paper, based on the fundamentals of feature extraction for the fault coefficient, we discuss the fault coefficient feature in complex electrical engineering in detail. For general fault types in a complex power system, even in the presence of strong white Gaussian interference, the fault coefficient feature remains accurate and reliable. The results of a comparative analysis of noise influence also demonstrate the strong anti-interference ability and great redundancy of the fault coefficient feature in complex electrical engineering.

This paper presents a comprehensive survey on transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms on three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when set on a transmission line, create certain problems for line fault locators and, therefore, fault location on series-compensated lines is discussed. The paper reports the work carried out on adaptive fault location algorithms aiming at achieving better fault location accuracy. Work associated with fault location on power system networks, although limited, is also summarized. Additionally, the nonstandard high-frequency-related fault location techniques based on wavelet transform are discussed. Finally, the paper highlights the area for future research. PMID:24701191

Nearly all aspects of earthquake rupture are controlled by the friction along the fault that progressively increases with tectonic forcing but in general cannot be directly measured. We show that fault friction can be determined at any time, from the continuous seismic signal. In a classic laboratory experiment of repeating earthquakes, we find that the seismic signal follows a specific pattern with respect to fault friction, allowing us to determine the fault's position within its failure cycle. Using machine learning, we show that instantaneous statistical characteristics of the seismic signal are a fingerprint of the fault zone shear stress and frictional state. Further analysis of this fingerprint leads to a simple equation of state quantitatively relating the seismic signal power and the friction on the fault. These results show that fault zone frictional characteristics and the state of stress in the surroundings of the fault can be inferred from seismic waves, at least in the laboratory.
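
In spirit, the analysis maps windowed statistics of the continuous signal to the instantaneous shear stress. The sketch below reproduces only the flavour of that workflow on synthetic data with a generic regressor; it is not the authors' pipeline, features or data.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic "acoustic" windows whose amplitude scales with a synthetic stress level;
    # simple windowed statistics are used as features to regress that stress.
    rng = np.random.default_rng(1)
    n_windows, window_len = 500, 256
    stress = rng.uniform(0.0, 1.0, n_windows)
    signals = rng.standard_normal((n_windows, window_len)) * (0.1 + stress[:, None])

    features = np.column_stack([
        signals.var(axis=1),                              # signal power
        np.abs(signals).mean(axis=1),                     # mean absolute amplitude
        np.percentile(np.abs(signals), 95, axis=1),       # high-amplitude tail
    ])

    X_tr, X_te, y_tr, y_te = train_test_split(features, stress, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out windows:", round(model.score(X_te, y_te), 3))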

Full Text Available Faults in engineering systems are difficult to avoid and may result in serious consequences. Effective fault detection and diagnosis can improve system reliability and avoid expensive maintenance. In this paper, a fuzzy-system-based fault detection scheme for a permanent magnet synchronous generator is proposed. The sequence current components, namely the positive and negative sequence currents, are used as fault indicators and given as inputs to the fuzzy fault detector. A fuzzy inference system is created and a rule base is evaluated, relating the sequence current components to the type of fault. These rules are fired for specific changes in the sequence current components and the faults are detected. The feasibility of the proposed scheme for the permanent magnet synchronous generator is demonstrated for different types of fault under various operating conditions using MATLAB/Simulink.
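
The rule-firing idea can be sketched without the Simulink model: triangular membership functions over the negative-sequence current magnitude map a per-unit value to degrees of "healthy", "incipient" and "fault". The breakpoints below are illustrative assumptions, not the paper's rule base.

    def tri(x, a, b, c):
        # Triangular membership function with support [a, c] and peak at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fault_memberships(i_neg_pu):
        # Degrees of membership for the negative-sequence current (per unit);
        # breakpoints are assumed values for illustration only.
        return {
            "healthy":   tri(i_neg_pu, -0.05, 0.0, 0.10),
            "incipient": tri(i_neg_pu, 0.05, 0.20, 0.40),
            "fault":     tri(i_neg_pu, 0.30, 0.60, 1.20),
        }

    for i2 in (0.02, 0.25, 0.70):
        print(i2, fault_memberships(i2))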

Data observed from environmental and engineering processes are usually noisy and correlated in time, which makes fault detection more difficult, as the presence of noise degrades detection quality. Multiscale representation of data using

Based on fault diagnosis and fault-tolerant technologies, the mine-hoist active fault-tolerant control system (MAFCS) is presented with corresponding strategies, comprising the fault diagnosis module (FDM), the dynamic library (DL) and the fault-tolerant control model (FCM). When a fault is identified at some sensor by the FDM, the FCM reconfigures the state of the MAFCS by calling the parameters from the sub-libraries in the DL, in order to ensure the reliability and safety of the mine hoist. The simulation results show that the MAFCS has a certain degree of intelligence, adopting the corresponding control strategies according to different fault modes even when there is a considerable difference between the real data and the prior fault modes. 7 refs., 5 figs., 1 tab.

Radon emanation was sampled at five locations in a limestone quarry area using CR-39 SSNTDs. Radon levels in the soil air at four different well-known traceable fault planes were measured along a traverse line perpendicular to each of these faults. Radon levels at the faults were higher by a factor of 3-10 than away from the faults, although some sites have broader shoulders than others. The method was then applied along a fifth, inferred fault zone. The results show an anomalous radon level at the sampled station near the fault zone, with a value three times higher than background. This study draws its importance from the fact that in Jordan many cities and villages have been established over intensively faulted land. Our study also has considerable implications for future radon mapping. Moreover, radon gas proves to be a good tool for the detection of fault zones.

A number of states have adopted profiler-based systems to automatically measure faulting in jointed concrete pavements. However, little published work exists which documents the validation process used for such automated faulting systems. This p...

Full Text Available Strike-slip faults may be traced along thousands of kilometers, e.g., the San Andreas Fault (USA) or the North Anatolian Fault (Turkey). A closer look at such continental-scale strike-slip faults reveals localized complexities in fault geometry, associated with fault segmentation, secondary faults and a change of related hazards. The North Anatolian Fault displays such complexities near the mega-city Istanbul, a place where earthquake risks are high but secondary processes are not well understood. In this paper, long-term persistent scatterer interferometry (PSI) analysis of synthetic aperture radar (SAR) data time series was used to precisely identify the surface deformation pattern associated with the faulting complexity at the prominent bend of the North Anatolian Fault near Istanbul. We elaborate the relevance of local faulting activity and estimate the fault status (slip rate and locking depth) for the first time using satellite SAR interferometry (InSAR). The studied NW-SE-oriented fault on land is subject to strike-slip movement at a mean slip rate of ~5.0 mm/year and a shallow locking depth of <1.0 km, and is thought to be directly interacting with the main fault branch, with important implications for tectonic coupling. Our results provide the first geodetic evidence of the segmentation of a major crustal fault with structural complexity and associated multi-hazards near the inhabited regions of Istanbul, with similarities to other major strike-slip faults that display changes in fault traces and mechanisms.
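
The two quantities estimated above, slip rate and locking depth, are the parameters of the standard elastic screw-dislocation (arctangent) model of interseismic strike-slip deformation. The sketch below only evaluates that forward model with the reported values; the InSAR processing and the actual fitting are not reproduced.

    import numpy as np

    def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
        # Savage-Burford screw dislocation: v(x) = (s / pi) * arctan(x / D),
        # with x the distance from the fault trace and D the locking depth.
        return (slip_rate_mm_yr / np.pi) * np.arctan(np.asarray(x_km) / locking_depth_km)

    x = np.linspace(-20.0, 20.0, 9)          # distance from the fault trace (km)
    v = interseismic_velocity(x, slip_rate_mm_yr=5.0, locking_depth_km=1.0)
    for xi, vi in zip(x, v):
        print(f"{xi:6.1f} km  {vi:5.2f} mm/yr")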

We examined the distribution of fault rock and damage zone structures in sandstone and shale along the Moab fault, a basin-scale normal fault with nearly 1 km (0.62 mi) of throw, in southeast Utah. We find that fault rock and damage zone structures vary along strike and dip. Variations are related to changes in fault geometry, fault slip, lithology, and the mechanism of faulting. In sandstone, we differentiated two structural assemblages: (1) deformation bands, zones of deformation bands, and polished slip surfaces and (2) joints, sheared joints, and breccia. These structural assemblages result from the deformation band-based mechanism and the joint-based mechanism, respectively. Along the Moab fault, where both types of structures are present, joint-based deformation is always younger. Where shale is juxtaposed against the fault, a third faulting mechanism, smearing of shale by ductile deformation and associated shale fault rocks, occurs. Based on the knowledge of these three mechanisms, we projected the distribution of their structural products in three dimensions along idealized fault surfaces and evaluated the potential effect on fluid and hydrocarbon flow. We contend that these mechanisms could be used to facilitate predictions of fault and damage zone structures and their permeability from limited data sets. Copyright © 2005 by The American Association of Petroleum Geologists.

The southern San Andreas fault has not experienced a large earthquake for approximately 300 years, yet the previous five earthquakes occurred at ~180-year intervals. Large strike-slip faults are often segmented by lateral stepover zones. Movement on smaller faults within a stepover zone could perturb the main fault segments and potentially trigger a large earthquake. The southern San Andreas fault terminates in an extensional stepover zone beneath the Salton Sea—a lake that has experienced periodic flooding and desiccation since the late Holocene. Here we reconstruct the magnitude and timing of fault activity beneath the Salton Sea over several earthquake cycles. We observe coincident timing between flooding events, stepover fault displacement and ruptures on the San Andreas fault. Using Coulomb stress models, we show that the combined effect of lake loading, stepover fault movement and increased pore pressure could increase stress on the southern San Andreas fault to levels sufficient to induce failure. We conclude that rupture of the stepover faults, caused by periodic flooding of the palaeo-Salton Sea and by tectonic forcing, had the potential to trigger earthquake rupture on the southern San Andreas fault. Extensional stepover zones are highly susceptible to rapid stress loading and thus the Salton Sea may be a nucleation point for large ruptures on the southern San Andreas fault.
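
The stress argument combines a shear stress change, a normal stress change and a pore-pressure change in the usual Coulomb failure stress form, ΔCFS = Δτ + μ(Δσn + Δp). A toy evaluation with illustrative numbers (not the paper's modelled values) is given below.

    def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, d_pore_pressure_mpa, mu=0.6):
        # Coulomb failure stress change on a receiver fault; the normal stress change
        # is taken positive for unclamping. The friction coefficient is a generic assumption.
        return d_tau_mpa + mu * (d_sigma_n_mpa + d_pore_pressure_mpa)

    # Illustrative contributions from stepover slip, lake loading and a pore-pressure rise.
    d_cfs = coulomb_stress_change(d_tau_mpa=0.05, d_sigma_n_mpa=0.02, d_pore_pressure_mpa=0.03)
    print(f"delta CFS ~ {d_cfs:.3f} MPa")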

Fault tolerant controls have the ability to be resilient to simple faults in control loop components.

Full Text Available Historical data show that like the North Anatolian fault zone, which was delineated by a series of earthquakes during this century from east to west, so was the conjugate Eastern Anatolian fault zone delineated from the northeast to the southwest by a succession of large earthquakes in earlier times, with a major event at its junction with the Dead Sea fault system. This event was associated with surface faulting and occurred in a region seismically quiescent for nearly two centuries.

A fault current limiting system for direct current circuits and for pulsed power circuits. In these circuits, a current source biases a diode that is in series with the circuit's transmission line. If the fault current in a circuit exceeds the current from the source biasing the diode open, the diode ceases conducting and routes the fault current through the current source and an inductor. This limits the rate of rise and the peak value of the fault current.

This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines and related content to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally the main objective of system analysis is to assess the reliability of the systems modeled in the Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to system unavailability. This should include contributions due to mechanical failures of the components, common-cause failures (CCFs), human errors and outages for testing and maintenance. This document identifies and describes the definitions and general procedures of FTA and the essential and basic guidelines for revising the fault trees. Accordingly, the guidelines for FTA will be able to guide the FTA to the level of capability category II of the ASME PRA standard.
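
As a sketch of the AND/OR failure logic described above, the snippet below propagates independent basic-event probabilities through a small two-level fault tree; the events and numbers are illustrative, not part of the Ulchin 3 and 4 model.

    from math import prod

    def and_gate(*p):
        # All inputs must fail (independent events).
        return prod(p)

    def or_gate(*p):
        # At least one input fails (independent events).
        return 1.0 - prod(1.0 - x for x in p)

    pump_a, pump_b = 1e-3, 1e-3       # independent failures of two redundant trains
    ccf = 1e-4                        # common-cause failure of both trains
    operator_error = 5e-3

    trains_fail = or_gate(and_gate(pump_a, pump_b), ccf)
    top_event = or_gate(trains_fail, operator_error)
    print(f"top event probability ~ {top_event:.2e}")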

Offline testing is essential to ensure good manufacturing quality. However, for permanent or transient faults that occur during the use of an integrated circuit in an application, an online integrated test is needed as well. This procedure should ensure the detection and possibly the correction or masking of these faults. This requirement of self-correction is sometimes necessary, especially in critical applications that require high security such as automotive, space or biomedical applications. We propose a fault-tolerant design for analogue and mixed-signal complementary metal oxide semiconductor (CMOS) circuits based on quiescent supply current (IDDQ) testing. A defect can cause an increase in current consumption. The IDDQ testing technique is based on the measurement of the power supply current to distinguish between functional and failed circuits. The technique has been an effective testing method for detecting physical defects such as gate-oxide shorts, floating gates (opens) and bridging defects in CMOS integrated circuits. An architecture called a BICS (Built-In Current Sensor) is used for monitoring the supply current (IDDQ) of the connected integrated circuit. If the measured current is not within the normal range, a defect is signalled and the system switches connection from the defective to a functional integrated circuit. The fault-tolerant technique is composed essentially of a double-mirror built-in current sensor, allowing the detection of abnormal current consumption, and blocks allowing the connection to redundant circuits if a defect occurs. SPICE simulations are performed to validate the proposed design.
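
The decision the BICS supports can be stated in a few lines: if the monitored quiescent current leaves its normal band, signal a defect and switch to the redundant circuit. The band and example currents below are illustrative assumptions.

    NORMAL_IDDQ_RANGE_UA = (0.1, 5.0)       # assumed healthy quiescent current band (uA)

    def select_circuit(measured_iddq_ua, primary="primary", spare="redundant"):
        # Keep the primary circuit while IDDQ stays in range; otherwise mask the
        # defect by switching the connection to the redundant circuit.
        low, high = NORMAL_IDDQ_RANGE_UA
        return primary if low <= measured_iddq_ua <= high else spare

    print(select_circuit(2.3))     # healthy current -> primary
    print(select_circuit(85.0))    # elevated IDDQ (e.g. gate-oxide short) -> redundant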

The use of fault tree analysis techniques to systematically identify (1) the sabotage events which can lead to the release of significant quantities of radioactive materials, (2) the areas of the nuclear power plant in which the sabotage events can be accomplished, and (3) the areas of the plant which must be protected to assure that a release does not occur is discussed.

An apparatus, program product and method checks for nodal faults in a group of nodes comprising a center node and all adjacent nodes. The center node concurrently communicates with the immediately adjacent nodes in three dimensions. The communications are analyzed to determine a presence of a faulty node or connection.
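
A hedged sketch of the center-node check: a node at integer coordinates in a 3D torus tests the links to its six axis neighbours and reports the ones that fail. The ping callable is a hypothetical stand-in for the machine's actual communication test.

    def neighbours(node, dims):
        # Six axis neighbours of a node in a 3D torus of the given dimensions.
        (x, y, z), (dx, dy, dz) = node, dims
        return [((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
                (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
                (x, y, (z + 1) % dz), (x, y, (z - 1) % dz)]

    def check_center_node(node, dims, ping):
        # Return the adjacent nodes whose link test failed.
        return [n for n in neighbours(node, dims) if not ping(node, n)]

    # Example with a fake ping that marks one link as broken.
    broken = {((1, 1, 1), (2, 1, 1))}
    print(check_center_node((1, 1, 1), (4, 4, 4), lambda a, b: (a, b) not in broken))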

An apparatus, program product and method for detecting nodal faults may simultaneously cause designated nodes of a cell to communicate with all nodes adjacent to each of the designated nodes. Furthermore, all nodes along the axes of the designated nodes are made to communicate with their adjacent nodes, and the communications are analyzed to determine if a node or connection is faulty.