Probabilistic risk assessment

Probabilistic risk assessment (PRA) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity (such as an airliner or a nuclear power plant) or, for example, the effects of stressors on the environment (probabilistic environmental risk assessment, PERA).[1]

Risk in a PRA is defined as a feasible detrimental outcome of an activity or action. In a PRA, risk is characterized by two quantities:

the magnitude (severity) of the possible adverse consequence(s), and

the likelihood (probability) of occurrence of each consequence.

Consequences are expressed numerically (e.g., the number of people potentially hurt or killed) and their likelihoods of occurrence are expressed as probabilities or frequencies (i.e., the number of occurrences or the probability of occurrence per unit time). The total risk is the expected loss: the sum of the products of the consequences multiplied by their probabilities.
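The expected-loss definition above can be sketched in a few lines of code. The scenario names, consequence magnitudes, and frequencies below are invented for illustration; a real PRA would derive them from event-tree and fault-tree analysis.

```python
# Sketch of total risk as expected loss, using hypothetical scenario data.
# Each scenario pairs an adverse consequence (here, a fatality count) with
# its estimated frequency of occurrence per year.
scenarios = [
    # (name, consequence magnitude, frequency per year) -- assumed values
    ("small release", 2.0, 1e-3),
    ("large release", 100.0, 1e-6),
]

# Total risk = sum over scenarios of (consequence x frequency)
total_risk = sum(consequence * freq for _, consequence, freq in scenarios)
print(total_risk)  # expected loss per year
```

Note that the rare, high-consequence scenario contributes far less to this sum than the frequent, low-consequence one, which is exactly the sensitivity-to-assumptions issue raised below: halving or doubling the assumed frequency of the large release changes its contribution proportionally.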

The spectrum of risks across classes of events is also of concern, and is usually controlled in licensing processes – it would be of concern if rare but high-consequence events were found to dominate the overall risk, particularly as these risk assessments are very sensitive to assumptions (how rare is a high-consequence event?).

Probabilistic Risk Assessment usually answers three basic questions:

What can go wrong with the studied technological entity or stressor, or what are the initiators or initiating events (undesirable starting events) that lead to adverse consequence(s)?

What and how severe are the potential detriments, or the adverse consequences that the technological entity (or the ecological system in the case of a PERA) may be eventually subjected to as a result of the occurrence of the initiator?

How likely are these undesirable consequences to occur, or what are their probabilities or frequencies?

Answering these questions requires special but often very important analysis tools such as human reliability analysis (HRA) and common-cause failure analysis (CCF). HRA deals with methods for modeling human error, while CCF deals with methods for evaluating the effect of inter-system and intra-system dependencies, which tend to cause simultaneous failures and thus a significant increase in overall risk.
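A minimal sketch of why common-cause failures increase risk so sharply, using the beta-factor model (one standard CCF model, though the source does not name a specific one; the parameter values are assumed): a fraction beta of each component's failure probability is attributed to a shared cause that fails all redundant components at once.

```python
# Beta-factor common-cause failure sketch (illustrative numbers only).
# Two redundant components, each with total failure probability p.
p = 1e-3      # total failure probability of one component (assumed)
beta = 0.1    # fraction of failures due to a common cause (assumed)

p_indep = (1 - beta) * p   # independent part of each component's failures
p_ccf = beta * p           # common-cause part, which fails both together

# A 1-out-of-2 system fails if both components fail independently,
# or if the common cause occurs.
p_system = p_indep ** 2 + p_ccf
print(p_system)
```

With these assumed numbers the common-cause term (1e-4) dominates the independent term (about 8.1e-7) by two orders of magnitude, so treating the components as independent would understate the system failure probability by roughly a factor of 100.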


One possible point of objection concerns the uncertainties associated with a PSA. A probabilistic safety assessment (PSA) is often reported without an associated uncertainty, even though in metrology any measurement should be accompanied by a statement of measurement uncertainty, and in the same way any mean frequency for a random variable should be examined together with the dispersion within the underlying data set.

For example, without specifying an uncertainty level, the Japanese regulatory body, the Nuclear Safety Commission, issued a restrictive safety goal in terms of qualitative health objectives in 2003, requiring that individual fatality risk should not exceed 10−6 per year. This was then translated into safety goals for nuclear power plants:[2]

for reactors of type BWR-4:

Core Damage Frequency (CDF): 1.6 × 10−7 per year

Containment Failure Frequency (CFF): 1.2 × 10−8 per year

for reactors of type BWR-5:

CDF: 2.4 × 10−8 per year

CFF: 5.5 × 10−9 per year
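Goals of this kind can serve as simple acceptance checks on a plant's PSA results. A sketch, in which the safety-goal values are taken from the figures above but the plant estimates are invented for illustration:

```python
# Comparing hypothetical PSA estimates against the BWR-4 safety goals above.
goals = {"CDF": 1.6e-7, "CFF": 1.2e-8}       # per year, from the cited goals
estimates = {"CDF": 9.0e-8, "CFF": 2.0e-8}   # hypothetical plant results

for metric, goal in goals.items():
    verdict = "meets" if estimates[metric] <= goal else "exceeds"
    print(f"{metric}: {estimates[metric]:.1e}/yr vs goal {goal:.1e}/yr "
          f"-> {verdict} goal")
```

As the objection in the text suggests, a point estimate alone is a weak basis for such a pass/fail check; reporting the dispersion of the estimate (for example, a 95th-percentile value) alongside the mean would make the comparison far more meaningful.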

The second point is a possible lack of design provisions to prevent and mitigate catastrophic events, that is, events with the lowest probability of occurrence, the greatest magnitude of impact,[2] and the lowest degree of uncertainty about their magnitude. Cost-effectiveness considerations in setting the factor of safety contribute to undervaluing, or entirely ignoring, this type of remote safety risk factor. Designers choose whether the system is to be dimensioned and positioned for the mean or for the minimum level of probability-risk (with the related costs of safety measures), so that it is resilient and robust with respect to the chosen value.