TECH 482/535
Failure Mode and Effects Analysis (FMEA / FMECA)

This method was developed in the 1950s by reliability engineers to determine problems that could arise from malfunctions of military systems. Failure mode and effects analysis is a procedure by which each potential failure mode in a system is analyzed to determine its effect on the system and to classify it according to its severity. When the FMEA is extended by a criticality analysis, the technique is called failure mode and effects criticality analysis (FMECA). Failure mode and effects analysis has gained wide acceptance in the aerospace and military industries. In fact, the technique has been adapted into other forms, such as misuse mode and effects analysis.

Discussion and Conclusion

The three techniques outlined above require only the employment of "hardware familiar" personnel. However, FMEA tends to be more labor intensive, as the failure of each individual component in the system has to be considered. A point to note is that these qualitative techniques can be used in the design stage as well as the operational stage of a system. All the techniques mentioned above have seen wide usage in nuclear and chemical processing plants. In fact, FMEA is one of the most documented techniques in use; it has been used by Intel and National Semiconductor to improve the reliability of their products. Preliminary risk analysis has seen application in safety analysis both in industry and on offshore platforms. HAZOP, on the other hand, has been widely used in the chemical industries for detailed failure and effect studies of the piping and instrumentation layout.

Tree Based Techniques

In this section, fault-tree analysis (FTA), event-tree analysis (ETA), cause-consequence analysis (CCA), the management oversight risk tree (MORT) and the safety management organization review technique (SMORT) will be discussed.
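As a minimal illustration of the tree-based idea, the probability of a top event can be computed from AND/OR gate logic over basic-event failure probabilities. The gate structure and probability values below are hypothetical, and independence of the basic events is assumed:

```python
# Minimal fault-tree evaluation sketch (hypothetical example).
# TOP = OR(pump_fails, AND(valve_stuck, sensor_fails))
# Assumes statistically independent basic events.

def p_and(*ps):
    """Probability that all independent events occur."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """Probability that at least one independent event occurs."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Illustrative basic-event failure probabilities (assumed values).
pump_fails = 1e-3
valve_stuck = 5e-3
sensor_fails = 2e-2

top = p_or(pump_fails, p_and(valve_stuck, sensor_fails))
print(f"P(top event) = {top:.6e}")
```

The same two gate functions compose to evaluate deeper trees, which is the quantification step that becomes possible once operational failure data are available.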
Dynamic Event Logic Analytical Methodology

The dynamic event logic analytical methodology (DYLAM) provides an integrated framework to treat time, process variables and system behavior explicitly [13]. A DYLAM analysis usually comprises the following procedures: (a) component modeling, (b) system equation resolution algorithms, (c) setting of TOP conditions and (d) event sequence generation and analysis. DYLAM is useful for describing dynamic incident scenarios and for the reliability assessment of systems whose mission is defined in terms of process variables to be kept within certain limits over time [19]. The technique can also be used to identify system behavior and thus serve as a design tool for implementing protections and operator procedures. It is important to note that a system-specific DYLAM simulator must be created to analyze each particular problem. Furthermore, input data such as the probabilities of a component being in a certain state at transient initiation, the independence of such probabilities, transition rates between different states, and conditional probability matrices for dependencies among states and process variables must be provided to run the DYLAM package. An application of DYLAM to a reservoir problem is given in the literature.

Dynamic Event Tree Analysis Method

The dynamic event tree analysis method (DETAM) is an approach that treats the time-dependent evolution of plant hardware states, process variable values, and operator states over the course of a scenario. In general, a dynamic event tree is an event tree in which branching is allowed at different points in time. The approach is defined by a set of five characteristics: (a) the branching set, (b) the set of variables defining the system state, (c) branching rules, (d) sequence expansion rules and (e) quantification tools. The branching set refers to the set of variables that determine the space of possible branches at any node in the tree.
Branching rules, on the other hand, determine when a branching should take place (e.g., at a constant time step). The sequence expansion rules are used to limit the number of sequences.
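These characteristics can be sketched as a small tree-expansion routine. The two-state component model, the transition probabilities, branching at every time step, and the probability cutoff used as the sequence expansion rule are all illustrative assumptions, not values from the text:

```python
# Sketch of dynamic event-tree expansion: branching occurs at a constant
# time step, and a probability cutoff serves as the sequence expansion rule.
from dataclasses import dataclass

@dataclass
class Sequence:
    states: tuple   # hardware state at each branching time
    prob: float     # probability of this sequence

# Branching set: each time step the component stays "ok" or fails
# (illustrative transition probabilities).
TRANSITIONS = {
    "ok": [("ok", 0.95), ("failed", 0.05)],
    "failed": [("failed", 1.0)],  # failed is absorbing
}

def expand(n_steps, cutoff=1e-3):
    """Branch at every time step; drop sequences below the cutoff."""
    frontier = [Sequence(states=("ok",), prob=1.0)]
    for _ in range(n_steps):
        nxt = []
        for seq in frontier:
            for state, p in TRANSITIONS[seq.states[-1]]:
                prob = seq.prob * p
                if prob >= cutoff:  # sequence expansion rule
                    nxt.append(Sequence(seq.states + (state,), prob))
        frontier = nxt
    return frontier

sequences = expand(3)
for s in sequences:
    print(" -> ".join(s.states), f"p = {s.prob:.4f}")
```

Without the cutoff the tree grows exponentially with the number of branching times, which is exactly the computer-resource drawback noted in the discussion below.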
Discussion and Conclusion

The techniques discussed above address the deficiencies found in fault/event tree methodologies when analyzing dynamic scenarios. However, there are also limitations to their usage. The digraph and GO techniques model system behavior but deal only to a limited extent with changes in model structure over time. Markov modeling, on the other hand, requires the explicit identification of possible system states and the transitions between them. This is a problem because it is difficult to envision the entire set of possible states prior to scenario development. DYLAM and DETAM solve this problem through the use of implicit state-transition definitions. The drawbacks of these implicit techniques are implementation-oriented. The large tree structures generated by the DYLAM and DETAM approaches require large computer resources. A second problem is that the implicit methodologies may require a considerable amount of analyst effort in data gathering and model construction.

Conclusions

A total of 13 risk analysis techniques were reviewed in the discussion above. Qualitative methodologies, though lacking the ability to account for dependencies between events, are effective in identifying potential hazards and failures within a system. The tree-based techniques address this deficiency by taking the dependencies between events into consideration. The probability of occurrence of the undesired event can also be quantified when operational data are available. However, no one has yet attempted to quantify the undesired top event in a MORT tree. Currently, research on DYLAM and DETAM studies accident scenarios by treating time, process variables, system behavior and operator actions within an integrated framework. These techniques address the inadequate modeling of conditions affecting control system actions and operator behavior in fault/event trees (e.g. the behavior of plant process variables, or previous decisions by the operating crew). However, the drawbacks of these techniques are the requirement for large computer resources and extensive data collection. With the development of more efficient algorithms and more powerful computers, such methodologies should see wide application.