Phased Mission Analysis Using the Cause-Consequence Diagram Method

by

Gintare Vyzaite

Master's Thesis

Submitted in partial fulfilment

of the requirements for the award of

Master of Philosophy

of Loughborough University

September 2007


ABSTRACT

Most reliability analysis techniques and tools assume that a system used for a mission consists of a single phase. However, multiple phases are natural in many missions. A system that can be modelled as a mission consisting of a sequence of phases is called a phased mission system. In this case, for successful completion of each phase the system may have to meet different requirements. System failure during any phase will result in mission failure. Fault tree analysis, binary decision diagrams and Markov techniques have been used to model phased missions.

The cause-consequence diagram method is an alternative technique capable of modelling all system outcomes (success and failure) in one logic diagram. The structure of the diagram has been shown to have advantageous features in both its representation of the system failure logic and its subsequent quantification; which can be applied to phased mission analysis.

The work developed outlines the use of the cause-consequence diagram method for systems undergoing non-repairable phased missions. Methods for the construction of the cause-consequence diagram for such systems are considered. The disjoint nature of the resulting diagram structure can be utilised in the later quantification process. The similarity with the binary decision diagram method enables the use of efficient and accurate solution routines. The method is illustrated by applying it to an example non-repairable phased mission system: an aircraft flight. The technique is computationally efficient and the work presented here shows that it is superior to the binary decision diagram. The work is extended to systems that can have multiple faults (i.e. minor faults, which would allow the system to progress to the next phase, and major faults, which would cause system failure).

ACKNOWLEDGEMENTS

I would like to thank my supervisor Dr. Sarah Dunnett and my director of research Prof. John Andrews for their guidance over these years. Their input was invaluable.

I would also like to thank the colleagues and academics at the Department of Aeronautical and Automotive Engineering for their help, advice and support in my project.

My deepest thanks go to my family, without whose support and encouragement during my studies I wouldn't be here today. Thank you for believing in me.

Finally, the greatest thank you goes to Carlo, who was always next to me.

CONTENTS

1. Introduction
2. Fault Tree Analysis
   2.1 Introduction
   2.2 Construction Of a Fault Tree
   2.3 Qualitative Analysis
      2.3.1 Rules Of Boolean Algebra
   2.4 Fault Tree Quantification
      2.4.1 Top event probability
      2.4.2 Shannon's decomposition formula
      2.4.3 Inclusion-Exclusion Formula
      2.4.4 Upper and lower bounds for system unavailability
      2.4.5 Minimal cut set upper bound
      2.4.6 Top event frequency
      2.4.7 Approximation of the system unconditional failure intensity
      2.4.8 Expected number of system failures
   2.5 Example
   2.6 Importance measures
      2.6.1 Deterministic measures
         2.6.1.1 Structural measure of importance
      2.6.2 Probabilistic measures (System Availability)
         2.6.2.1 Birnbaum's measure of importance
         2.6.2.2 Criticality measure of importance
         2.6.2.3 Fussell-Vesely measure of importance
         2.6.2.4 Fussell-Vesely measure of minimal cut set importance
      2.6.3 Probabilistic measures (System Reliability)
         2.6.3.3 Barlow-Proschan measure of minimal cut set importance
   2.7 Summary
3. Binary Decision Diagrams
   3.1 Introduction
   3.2 Description of the BDD
   3.3 Construction of the BDD
      3.3.1 Construction of the BDD using the structure function
      3.3.2 Construction of the BDD using the If-Then-Else approach
   3.4 Qualitative Analysis of the BDD
      3.4.1 Minimisation
   3.5 Quantitative Analysis of the BDD
   3.6 Summary
4. Cause-Consequence Diagrams
   4.1 Introduction
   4.2 Cause-Consequence Diagram Method
   4.3 Symbols for the cause-consequence diagram
   4.4 Construction rules
      4.4.1 The cause diagram method
      4.4.2 The consequence diagram method
      4.4.3 Rules for dependent failure events
         4.4.3.1 Common failure events
         4.4.3.2 Inconsistent failure events
   4.5 Quantitative analysis
      4.5.1 Quantitative analysis of a system containing independent failure events
      4.5.2 Quantitative analysis of a system containing dependent failure events
   4.6 Example
      4.6.1 System quantification
   4.7 Applications of Cause-Consequence Diagram Method
   4.8 Summary
5. Ordering Schemes
   5.1 Introduction
   5.2 Structural Ordering Schemes
      5.2.1 Top-Down Ordering
      5.2.2 Modified Top-Down Ordering
      5.2.3 Depth-First Ordering
      5.2.4 Modified Depth-First Ordering
      5.2.5 Modified Priority Depth-First Ordering
      5.2.6 Depth-First with Number of Leaves
   5.3 Weighted Ordering Schemes
      5.3.1 Non-Dynamic Top-Down Weights
      5.3.2 Dynamic Top-Down Weighted Ordering
      5.3.3 Bottom-Up Weights
      5.3.4 Event Criticality
   5.4 Summary
6. Review of Phased Mission Analysis Methods
   6.1 Introduction
   6.2 Analysis of phased mission systems
   6.3 Methods for phased mission analysis
      6.3.1 Non-repairable systems
         6.3.1.1 Basic event transformation and cut set cancellation
         6.3.1.2 Approximate methods for mission unreliability
         6.3.1.3 Expected number of failures
         6.3.1.4 Reliability of periodic, coherent, binary systems
            6.3.1.4.1 Lower bound systems and periodic systems
            6.3.1.4.2 Reliability bounds for periodic systems
         6.3.1.5 Generalized intersection and union concept
            6.3.1.5.1 Generalized intersection and union concept
            6.3.1.5.2 Inclusion-exclusion principle
            6.3.1.5.3 Methodology of mission unreliability calculation
         6.3.1.6 Method of Lee and Hong
         6.3.1.7 Phased mission system analysis using Boolean algebraic methods
            6.3.1.7.2 Example
            6.3.1.7.3 Sum of disjoint products and its phased extension
         6.3.1.8 A BDD-based algorithm for reliability analysis of phased mission systems
            6.3.1.8.1 BDD algorithm for phased mission system
         6.3.1.9 Imperfect coverage
         6.3.1.10 Other methods
      6.3.2 Repairable systems
         6.3.2.1 Markov approach for reliability evaluation
            6.3.2.1.1 Deterministic mission phase change time
            6.3.2.1.2 Random mission phase change time
         6.3.2.2 A non-homogeneous Markov model
         6.3.2.3 Discrete-state continuous-time Markov model
         6.3.2.4 Fault tree approach
   6.4 Summary
7. Phased Mission Analysis using the Cause-Consequence Diagram Method
   7.1 Cause-Consequence Analysis
   7.2 Phased Mission Analysis Using Cause-Consequence Diagram
   7.3 Cause-Consequence Diagram Construction Methods
      7.3.1 Method 1
      7.3.2 Qualitative Analysis
      7.3.3 Quantitative Analysis
      7.3.4 Method 2
   7.4 Program description
   7.5 The use of ordering schemes in construction of CCD and BDD
   7.6 Analysis of phased mission system with multiple faults
      7.6.1 Example
      7.6.2 Qualitative and Quantitative analysis
   7.7 Discussion
8. Modelling Aircraft Flight using the Cause-Consequence Analysis
   8.1 Introduction
      8.2.1 Phase 1
      8.2.2 Phase 2
      8.2.3 Phase 3
      8.2.4 Phase 4
      8.2.5 Phase 5
      8.2.6 Phase 6
   8.3 Construction of the Cause-Consequence Diagram
   8.4 Analysis of the Cause-Consequence Diagram
      8.4.1 Qualitative analysis
      8.4.2 Quantitative analysis
   8.5 Conclusions
9. Modularisation of Phased Mission Systems
   9.1 Introduction
   9.2 The Linear-Time Algorithm
   9.3 Modularisation for Phased Mission Systems
      9.3.1 Modularisation of Each Phase of a Phased Mission System
      9.3.2 Modularisation of a Phased Mission System as a Whole
         9.3.2.1 Qualitative and Quantitative Analysis
   9.4 Discussion
10. Conclusions and Future Work
   10.1 Conclusions
   10.2 Recommendations for future work

NOMENCLATURE

C_i          Existence of minimal cut set i
c_k          Event that component c_k works in phase k, given that it was functioning through all previous phases
E_i(t)       Expected number of times the system enters state i
ENF(t0, t1)  Expected number of failures in the time interval t0 to t1
F(t)         Cumulative failure distribution
f(t)         Failure probability density function
G_i(q(t))    Criticality function for component i
I_I^BP       Barlow-Proschan measure of initiator importance
I_E^BP       Barlow-Proschan measure of enabler importance
I^CM         Criticality measure of importance
I^FV         Fussell-Vesely measure of component importance
I^FVM        Fussell-Vesely measure of minimal cut set importance
I^ST         Structural measure of importance
P(T)         Event probability
p_ij         Component i availability in phase j
Q            Phased mission system unreliability
Qsys(t)      System unavailability function
P(C_i)       Minimal cut set unavailability
q(t)         Component unavailability
w_i          Weight of basic event i
R            System reliability
U            Performance state indicator
W(0, t)      Expected number of system failures in time t
w(t)         Rate of failure
Ws(t)        System failure intensity
x_i(t)       Binary indicator variable for component states
λ            Component failure rate
μ_i(t)       Component detection/repair rate
φ(x)         System structure function
ρ(x)         Binary indicator function for each minimal cut set
τ            Inspection interval
θ_i          Occurrence of minimal cut set i

1. Introduction

Many systems perform a mission which can be divided into consecutive time periods - phases. In each phase, the system needs to perform a specific task. The system configuration, the phase duration, and the failure rates of components often vary from phase to phase. Burdick et al. [1] describe a phased mission as a task to be performed by a system during the execution of which the system is altered such that the logic model changes at specified times. Thus, during a phased mission, time periods (phases) occur in which either the system configuration, the system failure characteristics, or both are distinct from those of any immediately succeeding phase. The most important aim of phased mission analysis is to calculate exactly, or obtain bounds for, the mission unreliability. This is defined as the probability that the system fails to function successfully in at least one phase. Estimating the mission unreliability by the product of the phase unreliabilities results in inaccuracies, since basic events are shared among the logic models of the various phases, which are therefore not independent. Esary and Ziehms [2] used a fault tree method for the analysis of phased missions for non-repairable systems. They introduced the basic event transformation and cut set cancellation techniques. However, due to cut set cancellation, the method proposed by Esary and Ziehms was unable to calculate the probability of failure of each phase, only that of the whole mission. La Band and Andrews [3] introduced a new method based on non-coherent fault trees that determines the probability of failure of each phase in addition to the whole mission unreliability. The method combines the causes of success of previous phases with the causes of failure of the phase being considered, to allow both qualitative and quantitative analysis of both phase failure and mission failure.
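The inaccuracy of the product approximation can be seen in a small numerical sketch. The component unavailabilities below are illustrative assumptions, not data from this thesis: a two-phase mission shares component A between phases, where phase 1 requires A and B and phase 2 requires only A.

```python
# Two-phase, non-repairable mission with a shared component A.
# a1, a2: probability A fails during phase 1, during phase 2 (disjoint events)
# b1:     probability B fails during phase 1
a1, a2, b1 = 0.05, 0.05, 0.10   # illustrative values

# Exact mission unreliability: the mission fails if B fails in phase 1
# or A fails at any point before the end of phase 2.
exact = 1 - (1 - a1 - a2) * (1 - b1)

# Naive estimate: product of the marginal phase unreliabilities.
# Phase 2's logic model also contains A, so A's phase-1 failure is counted twice.
q_phase1 = 1 - (1 - a1) * (1 - b1)   # phase 1 fails: A or B fails in phase 1
q_phase2 = a1 + a2                   # phase 2 fails: A failed by end of phase 2
product_estimate = 1 - (1 - q_phase1) * (1 - q_phase2)

print(exact, product_estimate)       # the product overestimates the exact value
```

The transformations of Esary and Ziehms [2] exist precisely to remove this double counting of shared basic events.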

Zang, Sun and Trivedi [4] proposed an algorithm for the analysis of phased mission systems based on binary decision diagrams (BDDs). Such diagrams give a representation of the system failure logic which is in a format more effective for analysis than that of a fault tree. As such, BDDs offer efficient mathematical manipulation, but are difficult to construct directly from the system definition and hence are generally obtained by converting from a fault tree. The method proposed by [4] only determines the unreliability of the whole mission. A BDD methodology was also applied by La Band and Andrews [3] to evaluate the probability of failure of each phase in the mission.

As both of these methods have their own drawbacks, another method for evaluating system reliability/unreliability was introduced by Nielsen [5]. The cause-consequence diagram method was developed at Risø Laboratories, Denmark, as a graphical tool for analysing relevant accidents in a complex nuclear power plant. The method presents the logical connections between the causes of an undesired (critical) event and the consequences of such an event if one or more mitigating provisions fail. As all consequence sequences are investigated, the method can assist in identifying system outcomes which may not have been investigated at the design stage. Ridley and Andrews [6] noticed that, for some types of system, the final cause-consequence diagram has an identical structure to that of the BDD. They noted, however, that the cause-consequence diagram was more concise due to the automatic extraction of common independent sub-modules. As the cause-consequence diagram can be obtained directly from the system description, there is no need to develop and convert from a fault tree to a BDD. They also noted that, as the BDD is a more efficient tool than the fault tree method, the cause-consequence diagram formulation can be advantageous. The cause-consequence diagram also has the capability to model the failure of each phase, in addition to the whole mission, in one diagram.

In this work cause-consequence analysis is applied to phased missions. Fault tree analysis is reviewed in Chapter 2 and some approximation techniques used to evaluate system reliability/unreliability and importance measures are described. Chapter 3 reviews binary decision diagrams. The binary decision diagram method can suffer if the order in which components are considered is not well chosen, as this results in an increase in the size of the resulting diagram. This decreases the efficiency of the method and hence many schemes have been devised to obtain the most efficient order of components. Some of the possible ordering schemes are discussed in Chapter 5. In Chapter 4 the cause-consequence diagram method is described and reviewed. Phased mission systems are described in Chapter 6, which outlines the different methods available for evaluating a phased mission system. The purpose of this work was to apply the cause-consequence diagram method to phased mission systems and this is presented in Chapter 7. Two methods for constructing a cause-consequence diagram for a phased mission system are presented. The following chapter gives the construction and analysis of the cause-consequence diagram of a specific phased mission system - an aircraft flight. The full system is too complex for the scope of this thesis, therefore a simplified version was used. Chapter 9 goes on to describe a modularisation technique that can be applied to cause-consequence diagrams. The conclusions of the work and some suggestions for future work are given in Chapter 10.

2. Fault Tree Analysis

2.1 Introduction

There are two main types of modelling tool used for reliability analysis: inductive, or forward, analysis, and deductive, or backward, analysis. An inductive analysis starts at component level and proceeds forward, identifying the possible consequences. Fault tree analysis is an example of deductive analysis, where the process starts at a possible consequence and works backwards to identify all possible causes. It provides a diagrammatic description of system failure in terms of the failure of its components.

2.2 Construction Of a Fault Tree

The first step in the construction of a fault tree is to determine a system failure mode. The system failure mode is termed the 'top event' and the fault tree is developed in branches below this event showing its causes. It is important that the definition of the top event is neither too broad nor too narrow to produce the results required. If the system has more than one failure mode, multiple fault trees would be constructed to represent each mode.

There are two basic elements used in fault tree construction - 'gates' and 'events'. Events can be classified as intermediate or basic. Intermediate events can be developed further and are represented by rectangles in the fault tree diagram. Basic events are represented by circles and cannot be developed any further. Basic events are usually component failures or human errors. These symbols are shown in Table 2.1.

The gates either allow or inhibit the passage of fault logic up through the tree and show the relationships between the 'events' needed for occurrence of a higher event. The development of a fault tree involves use of Boolean expressions represented by logical operations 'Or', 'And', 'Not'. These expressions are represented by gates


Symbol        Meaning
[rectangle]   Intermediate event, further developed by a gate. It indicates that the event is capable of being broken down. This is the only symbol that will have a logic gate and input events below it.
[circle]      Basic event. These symbols are found at the bottom of fault trees and require no further development or breakdown.

Table 2.1: Event symbols

'Or', 'And' and 'Not' in a fault tree, respectively. Another gate used in fault tree construction is a 'k out of N' gate, also called 'Vote' gate, which can be expressed as a combination of 'Or' and 'And' gates. This gate allows the flow of logic through the tree if at least k out of N inputs occur. The symbols used to represent these gates are shown in Table 2.2. There are other gates but the ones shown are those most commonly adopted. Before analysis can be performed on any fault tree all gates must be expressed in terms of the 'And', 'Or' and 'Not' gates.

Symbol       Name    Relation
[OR gate]    OR      Output event occurs if at least one of the input events occurs
[AND gate]   AND     Output event occurs if all input events occur
[VOTE gate]  VOTE    Output event occurs if at least k out of N possible inputs occur
[NOT gate]   NOT     Output event occurs if the input event does not

Table 2.2: Gate symbols

Once a top event has been determined, it is developed by asking 'what could cause this?'. Hence the immediate, necessary and sufficient causes for its occurrence are determined. In this way, events in the tree are continually redefined in terms of lower resolution events. This process is terminated when basic events are reached.

A system whose failure modes are expressed only in terms of component failures is referred to as a 'coherent' system. A coherent fault tree will have only 'Or' and 'And' gates. If failure modes in the system are expressed in terms of component failures and successes, the system is called 'non-coherent'. Non-coherent fault trees also have 'Not' gates.

There are two types of analysis which can be performed once the fault tree is constructed:

• Qualitative analysis
• Quantitative analysis

2.3 Qualitative Analysis

Qualitative analysis involves the identification of the combinations of component states which cause the system to fail. For coherent fault trees these combinations are called cut sets or minimal cut sets and involve only component failures [7]. In the case of non-coherent fault trees (when 'Not' logic is involved), the combinations of basic events that would cause system failure are called implicants. The minimal sets of implicants are called prime implicants.

The definition of a cut set is:

A cut set is a collection of basic events whose presence will cause the top event to occur.

System failure, however, does not necessarily need the failure of all the components in a cut set, but for any system the largest cut set will consist of all component failures. Generally only lists of component failures which are necessary and sufficient to cause system failure are of interest; hence the importance of the minimal cut sets.

A cut set is said to be minimal if it cannot be reduced further while still ensuring the occurrence of the top event.

Minimal cut sets are sometimes called the minimal failure modes of a system.


Two fault trees are logically equivalent if they have the same minimal cut sets. The order of a minimal cut set is the number of components within the set. The lowest order minimal cut sets contribute most to system failure, as fewer component failures are needed to cause system failure.

In order to determine the minimal cut sets from a fault tree, Boolean logic

expressions for the top event must be transformed to a sum-of-products form.

This can be achieved using a top-down or bottom-up approach. The top down approach would start with the top event and then gradually substitutes gates with their inputs using Boolean expressions until the expression for the top event consists only of basic events. The bottom-up approach begins at the bottom of the fault tree and works upwards to the top event. Both of these methods are straightforward to apply and involve the expansion of Boolean expressions. The (liiference bet.ween these two approaches is in which end of the fault tree is used to initiate the expansion process. The following laws of Boolean algebra are used to simplify and to remove

redundancies in the expressions obtained. In Boolean algebra, '.' is used to represent

'And' and

'+'

represents 'Or'.

2.3.1 Rules Of Boolean Algebra

1. Commutative laws:
   A + B = B + A
   A.B = B.A

2. Associative laws:
   (A + B) + C = A + (B + C)
   (A.B).C = A.(B.C)

3. Distributive laws:
   A + (B.C) = (A + B).(A + C)
   A.(B + C) = A.B + A.C

4. Identities:
   A + 0 = A
   A + 1 = 1
   A.0 = 0
   A.1 = A

5. Idempotent laws:
   A + A = A
   A.A = A

6. Absorption laws:
   A + A.B = A
   A.(A + B) = A

7. Complementation (where A' denotes the complement of A):
   A + A' = 1
   A.A' = 0
   (A')' = A

8. De Morgan's laws:
   (A + B)' = A'.B'
   (A.B)' = A' + B'

Laws 5 and 6 enable the removal of redundancies in expressions: law 5 removes repeated cut sets and repeated events within each cut set and law 6 removes non-minimal cut sets.
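As a sketch of how laws 5 and 6 remove these redundancies, the following Python fragment (an illustrative helper, not part of the thesis) treats each product term in a sum-of-products expression as a set of basic events:

```python
def minimise(terms):
    """Return the minimal cut sets of a sum-of-products expression.

    Representing each product term as a set applies the idempotent law
    (law 5): repeated events within a term and repeated terms vanish.
    The absorption law (law 6) then removes any term containing another
    term as a proper subset, i.e. the non-minimal cut sets.
    """
    unique = {frozenset(t) for t in terms}                  # law 5
    minimal = [t for t in unique
               if not any(o < t for o in unique)]           # law 6
    return sorted(minimal, key=lambda t: (len(t), sorted(t)))

# A + A.B + B.B.C + C + C.D  ->  minimal cut sets {A} and {C}
print(minimise([{"A"}, {"A", "B"}, {"B", "B", "C"}, {"C"}, {"C", "D"}]))
```

This mirrors the final reduction step of both the top-down and bottom-up expansions described above.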

2.4 Fault Tree Quantification

Quantitative analysis of the fault tree allows the calculation of a number of parameters, which are used to assess the system. The top event probability and frequency are used together with the expected number of occurrences of the top event and event importance measures to gain a full understanding of the system.

Quantitative analysis is based on a probabilistic method known as 'Kinetic Tree Theory' introduced by Vesely [8]. The underlying assumption of the Kinetic Tree Theory is that all basic events in the tree structure occur independently of one another.

2.4.1 Top event probability

Each system is assumed to exist in one of two states - working or failed. The state of the system will be a function of the state of each component in the system.

Each component is also assumed to exist in one of two states - working or failed. For the ith component the binary indicator variable xi is defined to be:

   xi = 1 if component i is failed
   xi = 0 if component i is working

where i = 1, 2, ..., n, and n is the number of components in the system.

The system structure function is defined as:

   φ(x) = 1 - ∏(i=1..Nc) (1 - ρi(x)) = 1 if the system is failed, 0 if the system is working    (2.1)

where ρi(x) is the binary indicator function for each minimal cut set Ci, i = 1, ..., Nc:

   ρi(x) = ∏(j ∈ Ci) xj = 1 if cut set Ci exists, 0 if cut set Ci does not exist    (2.2)

The probability of the top event is given by the expected value of the system structure function:

   Qsys(t) = E[φ(x)]    (2.3)

If each minimal cut set is independent (there are no common events between any cut sets), then:

   φ[E(x)] = E[φ(x)]    (2.4)

Hence the expected value of the structure function for a fault tree without repeated events would be calculated by substituting the probability of failure of each component into the structure function.

However, minimal cut sets are not usually independent, and in this case the full expansion of the structure function is needed. For example, if there are two minimal cut sets C1 = {x1, x2} and C2 = {x2, x3}, then the structure function is given by:

   φ(x) = 1 - (1 - x1.x2)(1 - x2.x3)
        = 1 - (1 - x1.x2 - x2.x3 + x1.x2.x2.x3)    (2.5)

After reduction of the indicator variables (i.e. xi = xi²) the following result is obtained:

   φ(x) = x1.x2 + x2.x3 - x1.x2.x3    (2.6)

The probability of the top event T for the fault tree with 2 minimal cut sets given earlier is described by the expected value of the expanded and reduced structure function:

   Qsys(t) = q1(t).q2(t) + q2(t).q3(t) - q1(t).q2(t).q3(t)    (2.7)

where qi(t) is the unavailability of component i.
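The expectation in equations 2.1-2.7 can be checked numerically by enumerating all component states. The sketch below (with assumed unavailabilities q1 = 0.1, q2 = 0.2, q3 = 0.3, chosen for illustration) reproduces equation 2.7 for the two cut sets C1 = {1, 2} and C2 = {2, 3}:

```python
from itertools import product

def phi(x, cut_sets):
    # structure function (2.1): 1 if any minimal cut set exists (2.2)
    return int(any(all(x[j] for j in c) for c in cut_sets))

def q_sys(q, cut_sets):
    # Qsys(t) = E[phi(x)] (2.3): sum phi over all 2^n states, each
    # weighted by its probability (components assumed independent)
    comps = sorted(q)
    total = 0.0
    for states in product((0, 1), repeat=len(comps)):
        x = dict(zip(comps, states))
        weight = 1.0
        for c in comps:
            weight *= q[c] if x[c] else 1.0 - q[c]
        total += weight * phi(x, cut_sets)
    return total

q = {1: 0.1, 2: 0.2, 3: 0.3}
expanded = q[1]*q[2] + q[2]*q[3] - q[1]*q[2]*q[3]        # equation (2.7)
assert abs(q_sys(q, [{1, 2}, {2, 3}]) - expanded) < 1e-12
```

Full enumeration grows as 2^n, which is exactly why the decomposition and expansion techniques of the following sections matter for larger trees.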

An alternative, more efficient way to deal with repeated events is to use Shannon's decomposition formula.

2.4.2 Shannon's decomposition formula

According to Shannon's formula, a Boolean function f(x), where x = (x1, ..., xn), can be expressed as:

   f(x) = xi.f(1i, x) + (1 - xi).f(0i, x)    (2.8)

where f(1i, x) represents f(x) with component xi failed and f(0i, x) represents f(x) with component xi working. f(1i, x) and f(0i, x) are known as the residues of f(x) with respect to xi.

The structure function is pivoted around the most repeated variable using Shannon's formula. This is continued until no repeated events are left in the residues. Applying Shannon's formula to the structure function given in (2.5) and pivoting around variable x2 gives:

   φ(x) = x2.[1 - (1 - x1)(1 - x3)] + (1 - x2).[0]
        = x2.[1 - (1 - x1)(1 - x3)]

The probability of the top event is then given by:

   Qsys(t) = E[φ(x)] = P(x2).[1 - (1 - P(x1))(1 - P(x3))]    (2.9)
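Shannon's decomposition lends itself to a simple recursive evaluation of the top event probability from the minimal cut sets. The routine below is an illustrative sketch (the function name and cut-set representation are assumptions), and it is essentially how a BDD evaluates the same expression:

```python
def top_probability(cut_sets, q):
    """Top event probability by repeated Shannon decomposition (2.8).

    Each call pivots on the most repeated variable x: the first residue
    has x failed (x removed from the cut sets containing it), the second
    has x working (those cut sets deleted).
    """
    cut_sets = [frozenset(c) for c in cut_sets]
    if any(not c for c in cut_sets):
        return 1.0                     # an emptied cut set: top event certain
    if not cut_sets:
        return 0.0                     # no cut sets left: top event impossible
    counts = {}
    for c in cut_sets:
        for v in c:
            counts[v] = counts.get(v, 0) + 1
    x = max(counts, key=counts.get)                 # most repeated variable
    res1 = [c - {x} for c in cut_sets]              # residue f(1_x, .)
    res0 = [c for c in cut_sets if x not in c]      # residue f(0_x, .)
    return q[x] * top_probability(res1, q) + (1 - q[x]) * top_probability(res0, q)

# same two-cut-set example: pivoting on x2 reproduces equation (2.9)
print(top_probability([{1, 2}, {2, 3}], {1: 0.1, 2: 0.2, 3: 0.3}))
```

The result is exact for independent basic events because each pivot conditions on a single component state, so the residues never mix failed and working values of the same variable.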

An alternative approach to the structure function method to obtain the top event probability is to use the inclusion-exclusion formula.

2.4.3 Inclusion-Exclusion Formula

This approach is suitable whether basic events are repeated or not. The top event T occurs if at least one cut set exists. This gives the following Boolean expression for T:

   T = C1 + C2 + ... + CNc = ∪(i=1..Nc) Ci

Expanding this expression gives the inclusion-exclusion expansion:

   P(T) = Σ(i=1..Nc) P(Ci) - Σ(i=2..Nc) Σ(j=1..i-1) P(Ci ∩ Cj) + ... + (-1)^(Nc-1) P(C1 ∩ C2 ∩ ... ∩ CNc)    (2.10)
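Equation 2.10 can be evaluated mechanically for independent basic events, since the probability of an intersection of cut sets is the product of the unavailabilities of the union of their events. A brief sketch (the helper name is an assumption):

```python
from itertools import combinations
from math import prod

def p_top(cut_sets, q):
    """Exact top event probability by the inclusion-exclusion expansion (2.10)."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        sign = (-1) ** (k - 1)
        for group in combinations(cut_sets, k):
            events = frozenset().union(*group)   # events needed for Ci1 ∩ ... ∩ Cik
            total += sign * prod(q[e] for e in events)
    return total

# two minimal cut sets {1,2} and {2,3}: q1*q2 + q2*q3 - q1*q2*q3
print(p_top([{1, 2}, {2, 3}], {1: 0.1, 2: 0.2, 3: 0.3}))
```

The loop visits 2^Nc - 1 cut set combinations, which illustrates why the expansion quickly becomes impractical as Nc grows.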

If the number of minimal cut sets, Nc, is large, the expression (2.10) becomes tedious and time-consuming to calculate. In such situations its calculation may be impractical and hence approximations are used.

2.4.4 Upper and lower bounds for system unavailability

Taking the first two terms of the inclusion-exclusion expansion gives the following:

   Σ(i=1..Nc) P(Ci) - Σ(i=2..Nc) Σ(j=1..i-1) P(Ci ∩ Cj)  ≤  Qsys(t)  ≤  Σ(i=1..Nc) P(Ci)
              (lower bound)                                  (exact)      (upper bound)

The upper bound of the top event probability is known as the rare event approximation since it is accurate if the component failure events are rare.

2.4.5 Minimal cut set upper bound

A more accurate upper bound is the minimal cut set upper bound.

As

   Qsys(t) = P(system failure)
           = P(at least one minimal cut set occurs)
           = 1 - P(no minimal cut set occurs)    (2.11)

and

   P(no minimal cut set occurs) ≥ ∏(i=1..Nc) P(minimal cut set i does not occur),

the following is correct:

   Qsys(t) ≤ 1 - ∏(i=1..Nc) (1 - P(Ci))    (2.12)

It can be shown that

   Qsys(t)  ≤  1 - ∏(i=1..Nc) (1 - P(Ci))  ≤  Σ(i=1..Nc) P(Ci)
    (exact)     (minimal cut set upper bound)   (rare event approximation)
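For the same two-cut-set example used earlier (with the assumed unavailabilities q1 = 0.1, q2 = 0.2, q3 = 0.3), the bounds can be compared numerically; a sketch:

```python
from math import prod

q = {1: 0.1, 2: 0.2, 3: 0.3}
c1, c2 = {1, 2}, {2, 3}

p1 = prod(q[e] for e in c1)          # P(C1) = q1*q2
p2 = prod(q[e] for e in c2)          # P(C2) = q2*q3
p12 = prod(q[e] for e in c1 | c2)    # P(C1 ∩ C2) = q1*q2*q3

lower = p1 + p2 - p12                        # first two terms of (2.10)
mcs_upper = 1 - (1 - p1) * (1 - p2)          # minimal cut set upper bound (2.12)
rare_event = p1 + p2                         # rare event approximation

print(lower, mcs_upper, rare_event)          # ordered: lower <= mcs_upper <= rare_event
```

With only two minimal cut sets the two-term lower bound coincides with the exact value, while the minimal cut set upper bound sits between the exact result and the cruder rare event approximation.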

2.4.6 Top event frequency

The top event frequency or system failure intensity ws(t) is defined as the probability that the top event occurs at t per unit time. Therefore ws(t)dt is the probability that the top event occurs in the time interval [t, t + dt). For the top event to occur between t and t + dt, all the minimal cut sets must not exist at t and then one or more minimal cut sets occur during t to t + dt. It is assumed that dt is so small that only one component fails in this time. More than one minimal cut set can occur in a small time element dt since component failure events can be common to more than one minimal cut set. This can be expressed as:

   ws(t)dt = P(A ∩ ∪(i=1..Nc) θi)    (2.13)

where A is the event that all minimal cut sets do not exist at time t and ∪(i=1..Nc) θi is the event that one or more minimal cut sets occur in time t to t + dt.

As P(A) = 1 - P(A'), equation 2.13 can be written as:

   ws(t)dt = P(∪ θi) - P(A' ∩ ∪ θi)    (2.14)

where A' means that at least one minimal cut set exists at t.

The first term of equation 2.14 is the contribution from the occurrence of at least one minimal cut set in the small time element dt, and the second term is a correction term representing the contribution of minimal cut sets occurring while other minimal cut sets already exist (i.e. the system is already failed). Denoting these terms by ws(1)(t)dt and ws(2)(t)dt respectively gives:

   ws(t)dt = ws(1)(t)dt - ws(2)(t)dt    (2.15)

The terms on the right-hand side of equation 2.14 can be expanded using the inclusion-exclusion principle, but as this is computationally intensive, approximations may be used.

2.4.7 Approximation of the system unconditional failure intensity

From equation 2.15:

   ws(t)dt ≤ ws(1)(t)dt    (2.16)

and hence there is an upper bound wSMAX(t) for ws(t):

   wSMAX(t) = ws(1)(t)    (2.17)

If the component failures are rare events then the minimal cut set failures will also be rare events. The second term of equation 2.15, ws(2)(t)dt, requires minimal cut sets to exist and occur at the same time. When component failures are rare this occurrence rate is also very small and hence ws(t) ≈ wSMAX(t).

As

   ws(1)(t)dt = P(∪(i=1..Nc) θi)    (2.18)

results in a series expansion, it can be truncated after the first term to give the rare event approximation:

   wSMAX(t)dt ≤ Σ(i=1..Nc) P(θi) ≤ Σ(i=1..Nc) wθi(t)dt    (2.19)

where P(θi) is the probability of the occurrence of minimal cut set i and wθi(t) is the unconditional failure intensity of minimal cut set i.

2.4.8 Expected number of system failures

The expected number of system failures in time t, W(0, t), is given by the integral of the system failure intensity over the interval [0, t):

   W(0, t) = ∫(0..t) ws(u) du    (2.20)

The expected number of system failures is an upper bound for system unreliability:

   F(t) ≤ W(0, t)
   (unreliability ≤ expected number of system failures)

If system failure is rare, this upper bound is a close approximation.

2.5 Example

To illustrate the use of fault tree analysis consider the example shown in Figure 2.1. The top-down approach is demonstrated using this example fault tree.

Figure 2.1: Example fault tree

In the top-down approach the starting point is the top event. Then it is expanded by substituting each gate in the expression by events appearing lower down in the fault tree and simplifying the expression until it has only basic component failures.

The top event in Figure 2.1 has an 'Or' gate with two inputs:

   Top = Gate1 + Gate2

Gate 1 is an 'And' gate with two input events, A and B:

   Gate1 = A.B
   Top = A.B + Gate2

Gate 2 is an 'Or' gate with two inputs, C and Gate3:

   Gate2 = C + Gate3
   Top = A.B + C + Gate3

Gate 3 is an 'And' gate with two input events, B and D:

   Gate3 = B.D

Hence, the following expression for the Top event is obtained:

   Top = A.B + C + B.D

This is the minimal disjunctive form of the logic equation, each term of which is a minimal cut set. This fault tree therefore has three minimal cut sets, one of order one and two of order two: {C}, {A, B}, {B, D}.

The probability of the top event using inclusion-exclusion would be calculated as follows:

P(Top) = P(A·B) + P(C) + P(B·D) − P(A·B·C) − P(A·B·D) − P(C·B·D) + P(A·B·C·D)

The minimal cut set bound (see equation 2.12) for this system would be:

P(Top) ≤ 1 − (1 − P(C))(1 − P(A·B))(1 − P(B·D))

The rare event approximation (equation 2.11) is:

P(Top) ≈ P(A·B) + P(C) + P(B·D)
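The three results can be compared numerically. The sketch below evaluates the exact inclusion-exclusion expansion, the minimal cut set bound and the rare event approximation for Top = A·B + C + B·D; the component failure probabilities are assumed purely for illustration.

```python
# Numerical comparison of the three quantification formulas for the
# example Top = A.B + C + B.D with assumed component probabilities.
qA, qB, qC, qD = 0.05, 0.10, 0.02, 0.08

# exact top event probability by inclusion-exclusion over the cut sets
exact = (qA*qB + qC + qB*qD
         - qA*qB*qC - qA*qB*qD - qC*qB*qD
         + qA*qB*qC*qD)

# minimal cut set (upper) bound, equation 2.12
mcs_bound = 1 - (1 - qC) * (1 - qA*qB) * (1 - qB*qD)

# rare event approximation, equation 2.11
rare_event = qA*qB + qC + qB*qD

print(exact, mcs_bound, rare_event)   # exact <= mcs_bound <= rare_event
```

With rare component failures the three values are close, which is why the approximations are usable in practice.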

2.6 Importance measures

An importance analysis is a sensitivity analysis which identifies weak areas of the system and can be very valuable at the design stage. For each component its

importance measure signifies the role that it plays in either causing or contributing

to the occurrence of the top event. This allows components or cut sets to be ranked according to the extent of their contribution to the occurrence of the top event.

Importance measures can be categorised as deterministic or probabilistic. Probabilistic measures can also be categorised into those dealing with system availability assessment and those concerned with system reliability assessment.

2.6.1 Deterministic measures

Deterministic measures assess the importance of a component to the system operation without considering the component's probability of occurrence. One such measure is the structural measure of importance.

2.6.1.1 Structural measure of importance

The structural measure of importance for a component i is defined by equation 2.21:

I_i^st = (number of critical system states for component i) / (total number of states for the (n − 1) remaining components, 2^(n−1))   (2.21)

A system state for component i will be described as a critical state if failure of

component i causes the system to go from a working to a failed state.
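The measure can be computed by enumerating the 2^(n−1) states of the remaining components. The sketch below does this for the Section 2.5 example Top = A·B + C + B·D; only the structure function comes from the text, the code itself is illustrative.

```python
from itertools import product

# Structural importance for the example system Top = A.B + C + B.D
comps = ['A', 'B', 'C', 'D']

def phi(s):
    return (s['A'] and s['B']) or s['C'] or (s['B'] and s['D'])

def structural_importance(i):
    others = [c for c in comps if c != i]
    critical = 0
    for vals in product([0, 1], repeat=len(others)):
        s = dict(zip(others, vals))
        s[i] = 1
        failed = phi(s)              # system state with component i failed
        s[i] = 0
        working = phi(s)             # same state with component i working
        if failed and not working:   # state is critical for component i
            critical += 1
    return critical / 2 ** len(others)

for c in comps:
    print(c, structural_importance(c))   # C: 5/8, B: 3/8, A and D: 1/8
```

The ranking C > B > A = D reflects that C is a single-component cut set while A and D each appear in only one two-component cut set.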

2.6.2 Probabilistic measures (System Availability)

Probabilistic measures are generally of more use than deterministic measures in reliability problems as they take into account the component's probability of failure.

2.6.2.1 Birnbaum's measure of importance

Birnbaum's measure of importance is also known as the criticality function. The criticality function for a component i, G_i(q(t)), is defined as the probability that the system is in a critical system state for component i.


1. G_i(q(t)) = Q(1_i, q(t)) − Q(0_i, q(t))   (2.22)

where Q(q(t)) is the probability that the system fails, (1_i, q) = (q_1, ..., q_{i−1}, 1, q_{i+1}, ..., q_n) and (0_i, q) = (q_1, ..., q_{i−1}, 0, q_{i+1}, ..., q_n).

This expression gives the probability that the system fails with component i failed minus the probability that the system fails with component i working. So, this gives the probability that the system fails only if component i fails.

2. G_i(q(t)) = ∂Q(q(t)) / ∂q_i   (2.23)

This defines the criticality function as a partial derivative, which is the same as the first expression 2.22 as:

∂Q(q)/∂q_i = [Q(1_i, q(t)) − Q(0_i, q(t))] / (1 − 0)   (2.24)
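Equation 2.22 can be sketched directly for the Section 2.5 example: Q is evaluated exactly by enumerating component states, and G_i is the difference Q(1_i, q) − Q(0_i, q). The failure probabilities below are assumed values for illustration only.

```python
from itertools import product

# Birnbaum's measure for the example system Top = A.B + C + B.D
q = {'A': 0.05, 'B': 0.10, 'C': 0.02, 'D': 0.08}

def phi(s):
    return (s['A'] and s['B']) or s['C'] or (s['B'] and s['D'])

def Q(qv):
    # exact system unavailability by enumerating all component states
    total = 0.0
    for vals in product([0, 1], repeat=len(qv)):
        s = dict(zip(qv, vals))
        p = 1.0
        for c, v in s.items():
            p *= qv[c] if v else 1 - qv[c]
        total += p * phi(s)
    return total

def birnbaum(i):
    # G_i(q) = Q(1_i, q) - Q(0_i, q), equation 2.22
    hi = dict(q); hi[i] = 1.0
    lo = dict(q); lo[i] = 0.0
    return Q(hi) - Q(lo)

for c in q:
    print(c, birnbaum(c))   # C dominates: failure of C is almost always critical
```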

2.6.2.2 Criticality measure of importance

The criticality measure of importance is defined as the probability that the

system is in a critical state for component i, and i has failed (weighted by the

system unavailability Qsys):

I_i^CM = G_i(q(t))·q_i(t) / Q_sys(q(t))   (2.25)

2.6.2.3 Fussell-Vesely measure of importance

This measure of importance is defined as the probability of the union of the minimal cut sets containing component i given that the system has failed:

I_i^FV = P( ⋃_{k: i ∈ C_k} C_k ) / Q_sys(q(t))   (2.26)

The importance rankings produced by the Fussell-Vesely method are very similar to those produced by the criticality measure of importance (2.25).


2.6.2.4 Fussell-Vesely measure of minimal cut set importance

This measure provides a similar function to the previously defined importance measures for components except that the minimal cut sets are themselves ranked. The importance measure is defined as the probability of occurrence of cut set i given that the system has failed:

I_{C_i}^FV = P(C_i) / Q_sys(q(t))   (2.27)
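Both availability-based measures can be built on the same machinery, as sketched below for the Section 2.5 example (failure probabilities assumed for illustration): the criticality measure weights Birnbaum's measure by q_i / Q_sys (equation 2.25), and Fussell-Vesely applies inclusion-exclusion to the minimal cut sets containing component i (equation 2.26).

```python
from itertools import product, combinations

# Criticality and Fussell-Vesely measures for Top = A.B + C + B.D
q = {'A': 0.05, 'B': 0.10, 'C': 0.02, 'D': 0.08}
cut_sets = [{'A', 'B'}, {'C'}, {'B', 'D'}]

def phi(s):
    return (s['A'] and s['B']) or s['C'] or (s['B'] and s['D'])

def Q(qv):
    # exact system unavailability by enumerating all component states
    total = 0.0
    for vals in product([0, 1], repeat=len(qv)):
        s = dict(zip(qv, vals))
        p = 1.0
        for c, v in s.items():
            p *= qv[c] if v else 1 - qv[c]
        total += p * phi(s)
    return total

Qsys = Q(q)

def criticality(i):
    # equation 2.25: Birnbaum's measure weighted by q_i / Qsys
    hi = dict(q); hi[i] = 1.0
    lo = dict(q); lo[i] = 0.0
    return (Q(hi) - Q(lo)) * q[i] / Qsys

def prob_union(sets):
    # inclusion-exclusion for a union of minimal cut sets
    total = 0.0
    for r in range(1, len(sets) + 1):
        for combo in combinations(sets, r):
            p = 1.0
            for c in set().union(*combo):
                p *= q[c]
            total += (-1) ** (r + 1) * p
    return total

def fussell_vesely(i):
    # equation 2.26: P(union of cut sets containing i) / Qsys
    return prob_union([cs for cs in cut_sets if i in cs]) / Qsys

for c in q:
    print(c, criticality(c), fussell_vesely(c))
```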

2.6.3 Probabilistic measures (System reliability)

Probabilistic measures for system reliability are appropriate for systems where the interval reliability is being assessed and the sequence in which components fail matters. The sequence of failure can be described with the use of enabling and initiating events. This is of particular use when analysing safety protection systems.

For example, if a hazardous event occurs after the protection system has failed, this would result in a dangerous system failure. However, if the protection system was working when the hazardous event occurred, but failed later, then it would shut down the system and a dangerous situation would be avoided. So, in this example, the hazardous event is an initiator, as it would result in a system failure only if the enabling event has already occurred. If the initiating event occurs first, then the safety system would respond as required and danger would be avoided. Initiating and enabling events are defined as follows:

Initiating events perturb system variables and place a demand on control/protective systems to respond.

Enabling events are inactive control/protective systems which permit initiating events to cause the top event.

All probabilistic measures for system reliability are weighted according to the

expected number of system failures, W (0, t).

2.6.3.1 Barlow-Proschan measure of initiator importance

The Barlow-Proschan measure of initiator importance is the probability that the initiating event i causes the system failure over the interval [0, t). It is defined in terms of the criticality function and the unconditional failure intensity of the component:

I_i^BP = ∫₀ᵗ [Q(1_i, q(t')) − Q(0_i, q(t'))] w_i(t') dt' / W(0, t)   (2.28)

2.6.3.2 Sequential contributory measure of enabler importance

The sequential contributory measure of enabler importance is the probability that enabling event i permits an initiating event to cause system failure over the interval [0, t). The failure of the enabler i is only a factor when it is contained in the same minimal cut set as the initiating event j:

I_i^SC = [ Σ_{j≠i, i and j ∈ C_k for some k} ∫₀ᵗ G_{ij}(q(t')) q_i(t') w_j(t') dt' ] / W(0, t)   (2.29)

where G_{ij} is the probability that the system is in a critical state for the pair of components i and j. This expression is only an approximation.

2.6.3.3 Barlow-Proschan measure of minimal cut set importance

This measure of cut set importance is the probability that a minimal cut set i causes the system failure in the interval [0, t) given that the system has failed:

I_{C_i}^BP = [ Σ_{j∈C_i} ∫₀ᵗ [1 − Q(0_j, 1_{i−{j}}, q(t'))] Π_{k∈C_i, k≠j} q_k(t') w_j(t') dt' ] / W(0, t)   (2.30)

where j is each initiating event in the minimal cut set C_i.

2.7 Summary

Fault tree analysis is an important and frequently used technique for quantifying system performance. It gives a diagrammatic representation of the causes of system failure and also provides a means for system quantification. Performing analysis upon large fault trees (quantitative or qualitative) may be computationally intensive, so approximations are needed for some parameters, which leads to a loss of accuracy.


3.1 Introduction

Fault trees described in the previous chapter are a good way to represent the logic of the system. However, if the fault tree is large, then performing analysis on it can be computationally expensive. Approximations are needed for many parameters and that would result in loss of accuracy. A more accurate and efficient way to perform these calculations is to use the Binary Decision Diagram technique.

Binary Decision Diagrams (BDDs) were introduced by Lee [9], who used them to represent switching circuits. They were further studied by Akers [10], who defined a digital function in terms of a diagram which told the user the output value of the function by examining the values of its inputs. BDDs were first applied to reliability and, more specifically, to fault tree analysis in the 1980s by Schneeweiss [11]. The use of BDDs in reliability analysis was developed further by Rauzy [12], who suggested that they could provide an alternative technique for performing fault tree analysis.

The BDD method first converts a fault tree to a binary decision diagram which can then be used for analysis. In order to do this, an order in which components are considered must be chosen. The BDD represents the Boolean equation for the top event, which is much easier to analyse than a fault tree. The method allows for quantitative and qualitative analysis of the fault tree. The advantage of this method compared to fault tree analysis is that exact solutions can be calculated efficiently without the need for approximations.

3.2 Description of the BDD

A BDD is a directed acyclic graph. According to Rauzy [12], BDDs have two important features:


• the graphs are compacted by sharing equivalent subgraphs;

• the results of operations performed on BDD are memorised and thus a job is never performed twice.

A BDD is composed of terminal and non-terminal nodes (vertices), connected by branches. The non-terminal nodes encode basic events and the terminal nodes correspond to the final state of the system. An example of a BDD is shown in Figure 3.1.

Figure 3.1: Example of a Binary Decision Diagram

A non-terminal node of a BDD has two outgoing branches: if the basic event represented by the non-terminal node occurs, then the diagram is further developed following the left-hand branch (the '1' branch), and if the basic event does not occur the diagram is developed on the right-hand branch (the '0' branch). In the following work, all left branches of a BDD will represent '1' branches and all right branches will represent '0' branches. The size of the BDD is usually measured by the number of non-terminal nodes. Terminal nodes have the value 1 if the top event occurs (i.e. the system fails) or 0 if the top event does not occur (i.e. the system does not fail).

All paths through the diagram start at the root vertex, the top node, and proceed to a terminal node marking the end of the path. A path terminating in node '1' gives a cut set of the fault tree. Only nodes lying on the '1' branches of the path are included in the cut set.


3.3 Construction of the BDD

3.3.1 Construction of the BDD using the structure function

One method to construct a BDD from a fault tree is to use the structure function φ(x) of the system. The order in which components will be considered in the construction process is important, as it can significantly influence the size of the BDD. Once an order of components is determined, values of 1 and 0 are substituted for each component in the structure function according to the chosen ordering. To illustrate the process the fault tree shown in Figure 3.2 is used.

Figure 3.2: Fault Tree Example

This fault tree has four minimal cut sets:

1. {A, C}

2. {A, D}

3. {B,C}

4. {B,D}

which gives the following structure function:

φ(x) = 1 − (1 − x_A·x_C)(1 − x_A·x_D)(1 − x_B·x_C)(1 − x_B·x_D)   (3.1)

Using the top-down, left-right ordering scheme (simply ordering the variables as they are encountered on a top-down, left-right traversal of the fault tree) the component order would be:

A < B < C < D

Figure 3.3: Binary Decision Diagram for the Fault Tree shown in Figure 3.2

This means that basic event A is considered first, then basic event B, then C and finally basic event D. The first node (root vertex) represents basic event A. The result of the left-hand branch is obtained by substituting the value 1 into the structure function for each x_A and the result for the right-hand branch is obtained by substituting the value 0:

x_A = 1:  φ(x) = 1 − (1 − x_C)(1 − x_D)(1 − x_B·x_C)(1 − x_B·x_D)   (3.2)

x_A = 0:  φ(x) = 1 − (1 − x_B·x_C)(1 − x_B·x_D)

Other basic events are considered in the same way until the terminal nodes are reached. The resulting BDD is shown in Figure 3.3.

The resulting BDD is not in its most efficient form and although it will generate cut sets, these are not minimal. A BDD can be made more efficient by applying

collapsing operations. These can be applied to equivalent nodes where, from

Friedman and Supowit [13], two nodes of a BDD are equivalent if they both are: • terminal nodes with the same value, or

• non-terminal nodes having the same label and their left sons are equivalent and their right sons are equivalent.


The son of a node is the node to which either the '1' or '0' branch leads. The following 'collapsing' operations can be used to reduce the size of a BDD:

1. If two sons of node A are equivalent, then delete node A and direct all of its

incoming branches to its left son.

2. If nodes A and B are equivalent, then delete node B and direct all of its

incoming branches to A.

The above operations can be used to reduce the BDD shown in Figure 3.3. Operation 1 can be applied to node F2 as both its sons are equivalent. This results in the incoming branch from node F1 being directed to the left son of F2, node F4. Therefore, nodes F2, F5 and F8 are deleted. Then operation 2 can be applied to equivalent nodes F4 and F6. Following the rule, node F6 is deleted and the incoming branch from node F3 is directed to node F4. The resulting BDD is shown in Figure 3.4.

Figure 3.4: Reduced BDD from Figure 3.3

The reduced BDD is much smaller than the original. It has four non-terminal nodes compared with nine in the original. It must be noted that this reduction does not change the logic of the BDD.
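The construction and the two collapsing operations can be sketched in Python. The code below (an illustrative sketch, not the thesis algorithm) repeatedly applies Shannon expansion to f = (A ∨ B) ∧ (C ∨ D), the logic of Figure 3.2, in the order A < B < C < D; operation 1 fires when both sons are equal, and operation 2 is realised through a 'unique table' that shares equivalent nodes.

```python
# Illustrative sketch: BDD construction by repeated Shannon expansion for
# f = (A or B) and (C or D), the fault tree of Figure 3.2, with the
# assumed ordering A < B < C < D.
ORDER = ['A', 'B', 'C', 'D']

def f(s):
    return (s['A'] or s['B']) and (s['C'] or s['D'])

unique = {}   # unique table: shares equivalent nodes (collapsing operation 2)

def mk(var, hi, lo):
    if hi == lo:                     # collapsing operation 1: equal sons
        return hi
    key = (var, hi, lo)
    return unique.setdefault(key, key)

def build(level, assign):
    if level == len(ORDER):
        return int(bool(f(assign)))  # terminal vertex 1 or 0
    var = ORDER[level]
    hi = build(level + 1, {**assign, var: 1})   # '1' branch: var = 1
    lo = build(level + 1, {**assign, var: 0})   # '0' branch: var = 0
    return mk(var, hi, lo)

root = build(0, {})
print(root, len(unique))   # 4 non-terminal nodes, as in Figure 3.4
```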

3.3.2 Construction of the BDD using the If-Then-Else Approach

The if-then-else (ite) method for constructing BDDs was developed by Rauzy [12]. It is derived from Shannon's formula given in equation 2.8:

f(x) = x_1·f_1(x) + x̄_1·f_2(x)   (3.3)

where f_1 represents f(x) with x_1 = 1 and f_2 represents f(x) with x_1 = 0. Functions f_1 and f_2 are one order less than f(x).

Each non-terminal node in the BDD has an ite structure of the form ite(x_1, f_1, f_2), where x_1 is a Boolean variable and f_1 and f_2 are logic functions. This means: if x_1 fails then consider f_1, else consider f_2. In the BDD structure f_1 would be at the end of the '1' branch of the node x_1 and f_2 would be at the end of the '0' branch. The structure is represented in Figure 3.5.

Figure 3.5: ite structure for component x_1

Variable ordering must be chosen before construction of the BDD. Then each basic event x_i is assigned the ite structure ite(x_i, 1, 0). The following rules are then used for manipulation of ite structures:

If J = ite(x, f_1, f_2) and H = ite(y, g_1, g_2), then:

1. x < y (x appears before y in the variable ordering):

J * H = ite(x, f_1 * H, f_2 * H)   (3.4)

2. x = y:

J * H = ite(x, f_1 * g_1, f_2 * g_2)   (3.5)

where * corresponds to a Boolean operation 'And' or 'Or'.

To simplify the results the following properties are also used:

1 + H = 1        1 · H = H
0 + H = H        0 · H = 0   (3.6)


Figure 3.6: Fault Tree Example

Using the top-down, left-right ordering strategy the variable order is C < A < B. The ite structures for each basic event are:

A = ite(A, 1, 0)    B = ite(B, 1, 0)    C = ite(C, 1, 0)

Gate G1 can be expressed (using rule 1 in (3.4) and (3.6)) as:

G1 = A · B = ite(A, 1, 0) · ite(B, 1, 0) = ite(A, ite(B, 1, 0), 0)

The ite structure for the event Top is given (using rule 1 (3.4) and (3.6)) by:

Top = C + G1 = ite(C, 1, 0) + ite(A, ite(B, 1, 0), 0) = ite(C, 1, ite(A, ite(B, 1, 0), 0))   (3.7)

To construct the BDD from 3.7, the '1' and '0' branches are considered for each variable in turn. For example, C is the first basic event in the variable ordering and it is encoded in the root node of the BDD structure. At the end of the '1' branch is a terminal node 1 and the structure ite(A, ite(B, 1, 0), 0) is at the end of the '0' branch. Basic event A is considered next and is encoded in the node at the end of the right-hand branch (the '0' branch). Its left-hand branch ('1' branch) will end in the structure ite(B, 1, 0), while its right-hand ('0') branch will terminate in a terminal node 0. The process is repeated once more for the basic event B. The resulting BDD is shown in Figure 3.7.


Figure 3.7: Binary Decision Diagram for Fault Tree shown in Figure 3.6
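A minimal sketch of the ite calculus for this example: a node is a tuple (variable, f_1, f_2), terminals are the integers 1 and 0, rules 3.4 and 3.5 select on the variable ordering, and the identities 3.6 terminate the recursion. The tuple representation is assumed purely for illustration.

```python
# Sketch of the ite calculus with the ordering C < A < B from the text.
ORDER = {'C': 0, 'A': 1, 'B': 2}

def apply_op(op, J, H):
    # identities (3.6): 1 + H = 1, 0 + H = H, 1.H = H, 0.H = 0
    if J in (0, 1):
        return (1 if J == 1 else H) if op == 'or' else (H if J == 1 else 0)
    if H in (0, 1):
        return apply_op(op, H, J)    # 'And' and 'Or' are commutative
    x, f1, f2 = J
    y, g1, g2 = H
    if ORDER[x] < ORDER[y]:          # rule 1, equation (3.4)
        return (x, apply_op(op, f1, H), apply_op(op, f2, H))
    if ORDER[x] > ORDER[y]:
        return (y, apply_op(op, J, g1), apply_op(op, J, g2))
    return (x, apply_op(op, f1, g1), apply_op(op, f2, g2))   # rule 2, (3.5)

A, B, C = ('A', 1, 0), ('B', 1, 0), ('C', 1, 0)
G1 = apply_op('and', A, B)       # ite(A, ite(B, 1, 0), 0)
Top = apply_op('or', C, G1)
print(Top)    # ('C', 1, ('A', ('B', 1, 0), 0)), matching equation 3.7
```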

3.4 Qualitative Analysis of the BDD

Each path from the root node of a BDD to a terminal node '1' defines a solution of the Boolean function f(x) [12]. Only nodes lying on the '1' branches of the path are included in the cut set. For the BDD shown in Figure 3.7 the cut sets are:

1. {C}

2. {A, B}

These are also minimal cut sets, but the BDD does not always produce a list of minimal cut sets. To obtain minimal cut sets the BDD can be minimised or the list of cut sets can be reduced using Boolean algebra rules.
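Reading the cut sets off a BDD is a simple traversal, sketched below for the Figure 3.7 diagram encoded as nested tuples (an assumed representation): each path to terminal '1' contributes the variables passed on their '1' branches.

```python
# Sketch: cut sets from the Figure 3.7 BDD, encoded as nested tuples
# (variable, '1'-son, '0'-son) - an assumed representation.
bdd = ('C', 1, ('A', ('B', 1, 0), 0))

def cut_sets(node, path=()):
    if node == 1:                 # terminal '1': the path is a cut set
        return [set(path)]
    if node == 0:                 # terminal '0': no cut set on this path
        return []
    var, one, zero = node
    # only variables passed on their '1' branches enter the cut set
    return cut_sets(one, path + (var,)) + cut_sets(zero, path)

print(cut_sets(bdd))   # the cut sets {C} and {A, B}
```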

3.4.1 Minimisation

Only if a BDD is in its minimal form will the cut sets produced from it be minimal. A minimisation process for a BDD, developed by Rauzy [12], is applied to its ite form and creates a new BDD which defines all minimal cut sets of the fault tree. All shared nodes must be expanded before minimisation.

Let f be the Boolean function of the BDD. If σ is a solution of f, then a path exists from the root of the BDD to terminal node '1' which defines a solution δ of f such that δ is included in σ.

Consider any node in the BDD, the output of which is represented by the function F where:

F = ite(x, G, H)   (3.8)

If δ is a minimal solution of G, then the union {δ} ∪ {x} is a solution of F. In addition, if δ is not a minimal solution of H, then a solution of F smaller than {δ} ∪ {x} does not exist and {δ} ∪ {x} is minimal. The set of all minimal solutions of F will also include the minimal solutions of H:

Sol_min(F) = {σ : σ = {δ} ∪ {x}} ∪ Sol_min(H)   (3.9)

Figure 3.8: Example BDD for minimisation

This algorithm can be applied to the BDD in Figure 3.8. This BDD would produce these cut sets:

1. {A, B, C}

2. {A, C}

3. {A, B, D}

As this BDD is not minimal, it does not generate minimal cut sets. The first cut set is redundant as it contains the second cut set as a subset. To minimise the BDD, each node is considered in turn:

F1 = ite(A, F2, 0) - F2 does not contain any paths that are included in the '0' branch, as this leads to a terminal vertex.

F2 = ite(B, F3, F4) - Event 'C' is included in a path on both the '1' branch (F3) and the '0' branch (F4). Therefore, 'C' is removed from the '1' branch as this will be a non-terminal son of F2. This is done by replacing the terminal '1' vertex with a terminal '0' vertex.

F3 = ite(C, 1, F5) - F5 does not contain any paths that are included in the '1' branch, as it leads to a terminal vertex.

F4 = ite(C, 1, 0) - Both the '1' and '0' branches are terminal.

F5 = ite(D, 1, 0) - Both the '1' and '0' branches are terminal.

The minimised BDD is shown in Figure 3.9.

This BDD produces the following minimal cut sets:

1. {A, C}

2. {A,B,D}

The minimised BDD does not produce the redundant cut set {A, B, C}.

This technique can only be used to obtain the minimal cut sets as it destroys the structure function form of the BDD and hence the minimised BDD must not be used for quantification.
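The same result can be obtained on the cut set list itself by Boolean absorption (X + X·Y = X), deleting any cut set that contains another as a subset. The sketch below applies this to the cut sets read from the non-minimal BDD of Figure 3.8; it is a list-based alternative for illustration, not Rauzy's BDD minimisation algorithm.

```python
# Sketch: minimising a cut set list by Boolean absorption, applied to the
# cut sets read from the non-minimal BDD of Figure 3.8.
cuts = [{'A', 'B', 'C'}, {'A', 'C'}, {'A', 'B', 'D'}]

def minimise(cut_sets):
    minimal = []
    for cs in sorted(cut_sets, key=len):       # smallest sets first
        if not any(m <= cs for m in minimal):  # cs absorbs a kept set?
            minimal.append(cs)
    return minimal

print(minimise(cuts))   # the minimal cut sets {A, C} and {A, B, D}
```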


Figure 3.9: The minimised BDD

3.5 Quantitative Analysis of the BDD

The probability of the root event can be expressed from the BDD as the sum of the probabilities of the disjoint paths that lead from the root node to any terminal node '1'. For quantitative analysis, nodes lying on the '0' branches of a path are included as well: for a component i that lies on a '0' branch the probability used is q̄_i = 1 − q_i. For the BDD shown in Figure 3.7 the paths to consider would be:

1. C

2. C̄·A·B

so that P(Top) = q_C + (1 − q_C)·q_A·q_B.
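The path summation can be sketched directly on a tuple encoding of the Figure 3.7 BDD (the component probabilities are assumed for illustration): each node contributes q_i down its '1' branch and (1 − q_i) down its '0' branch, and the disjoint paths sum to the top event probability.

```python
# Sketch: quantifying the Figure 3.7 BDD by summing disjoint path
# probabilities; component probabilities are assumed for illustration.
q = {'C': 0.02, 'A': 0.05, 'B': 0.10}
bdd = ('C', 1, ('A', ('B', 1, 0), 0))

def top_probability(node):
    if node in (0, 1):
        return float(node)
    var, one, zero = node
    # q on the '1' branch, (1 - q) on the '0' branch
    return q[var] * top_probability(one) + (1 - q[var]) * top_probability(zero)

p = top_probability(bdd)
print(p)   # q_C + (1 - q_C)*q_A*q_B = 0.02 + 0.98*0.005 = 0.0249
```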

3.6 Summary

The BDD technique is useful to identify the minimal cut sets of a fault tree and to calculate the exact probability of the top event. The difficulty with the technique is the conversion of a fault tree to a BDD as variable ordering can significantly influence the size of the resulting BDD. However, for large systems the BDD method allows more accurate analysis than is possible to achieve using traditional methods, i.e. fault tree analysis.


4.1 Introduction

The purpose of risk analysis is to assess probabilities of accidents and evaluate

their consequences. Techniques adopted (such as fault tree analysis, Markov

analysis, etc.) are incapable of identifying all possible causes and consequences of a critical event.

The cause-consequence diagram method, which was developed at RISØ Laboratories, Denmark, by Nielsen [5] in 1971, is a method which presents the logical connections between the causes of an undesired (critical) event and the consequences of such an event, if one or more preventing/limiting provisions fail [14, 15]. It was initially developed as a graphical tool for analysing relevant accidents in a complex nuclear power plant. It has subsequently been applied to various industrial systems [16]. EDF (Electricité de France) also applied the method to the reliability study of safety-related systems in nuclear power plants and the method was found to be advantageous compared with other methods previously adopted, essentially for certain mechanical systems [17].

In developing the method Nielsen noted that a given accident may be characterised by a 'cause', by a sequence of events in which the time between the occurrence of individual events can be an important parameter, and finally by the consequences of the accident. The method should therefore be able to determine all possible causes and consequences that some critical event may lead to if one or more limiting provisions fail. Nielsen [5] states that the method should also provide a basis for the determination of the probabilities of any single consequence.

The principal difference between fault trees and cause-consequence diagrams is that the cause-consequence diagram retains information about the order in which the components in the system are called upon [18] and is able to model not only the causes of system failure but also the consequences. Event trees are usually used to map


the developments from the initiating event to the set of all possible outcomes, but not to determine the causes of the failure. By combining both the causes and the consequences of the critical event, cause-consequence diagrams also provide a way for easy quantification, as the logic is very similar to binary decision diagrams. Nielsen and others [19] noted that, compared with event trees, the cause-consequence diagram gives a better representation of event sequences and the conditions under which these events can take place. The cause-consequence diagram has the benefit of the use of simple, comprehensible symbols that facilitate communication between different people in the development and commissioning of the system.

4.2 Cause-Consequence Diagram Method

The main principle of the cause-consequence diagram technique is based on the occurrence of a critical event, which for example may be an event involving the failure of components or subsystems, that is likely to produce undesired consequences. Once a critical event has been identified, all relevant causes of it and its potential consequences are developed using two conventional reliability analysis methods - fault tree analysis and event tree analysis [6].

The 'cause' part of the diagram (cause searching) is a fault tree. Fault tree analysis is used to describe the causes of an undesired event. The construction of the tree begins with the definition of the top event (the critical event). Then the causes are indicated and connected with the top event using logical gates 'And' and 'Or' and this procedure is iterated until all causes are fully developed.

The 'consequence' part of the diagram (consequence searching) is an event tree (event-sequential diagram) showing the consequences that a critical event may lead to if one or more preventing/limiting systems do not function as supposed. The event tree method is used to identify the various paths that the system could take, following the critical event, depending on whether certain subsystems or components function correctly or not.

With a combination of fault tree, representing causes of the critical event, and event tree, listing all possible consequences, the logical connection between the causes of a critical event and its consequences can be established. Compared with fault tree analysis, the cause-consequence diagram method gives a simpler representation of event sequences and the conditions under which these events can


take place [19].

The relationship between the two reliability methods is shown in Figure 4.1.

Figure 4.1: Cause-consequence diagram structure

4.3 Symbols for the cause-consequence diagram

The symbols for construction of the cause-consequence diagram are listed in Table 4.1. The symbols for the cause part are the same as those used for the fault tree method. For the consequence part new symbols were developed [5, 17, 6].

The main symbol used in the construction of the consequence diagram is the

decision box. The decision box was proposed by Nielsen and is an identical

representation of the 'YES - NO' branches of an event tree structure. The connection point between the cause and consequence diagrams is the NO branch of the decision boxes, as the failure causes of the system, represented by a decision box, are developed using fault tree analysis. Nielsen notes the importance of the delay symbol. The delay symbol is used in constructing consequence diagrams for systems where time delay is important, as knowledge of this may help the analyst to differentiate the different outcomes of the system.

To illustrate a typical cause-consequence diagram the simple system for lighting a lamp can be used (Figure 4.2) [17, 22]. A cause-consequence diagram for this system is represented in Figure 4.3. The initiating (critical) event is 'operator depresses button'. The causes of the bulb not being alight can be that the battery fails to produce power (BAT), the bulb has blown (B) or the fuse is broken (F). Two consequences are considered: there is no light (NL) or the bulb is alight (L).
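The quantification of the lamp diagram can be sketched as follows. The failure probabilities are assumed purely for illustration, and the three causes are treated as independent inputs to an 'Or' gate feeding the NO branch of the decision box.

```python
# Sketch of quantifying the lamp cause-consequence diagram. The failure
# probabilities below are assumed; BAT, B and F are treated as independent
# inputs to an 'Or' gate on the NO branch of the decision box.
q = {'BAT': 0.01, 'B': 0.05, 'F': 0.02}

# NO branch: the fault tree BAT + B + F fails the decision box
p_no_light = 1 - (1 - q['BAT']) * (1 - q['B']) * (1 - q['F'])
p_light = 1 - p_no_light   # YES branch: the bulb is alight

print(p_no_light, p_light)   # the two consequence probabilities sum to 1
```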


Table 4.1: Symbols used for the cause-consequence diagram

Symbols for the cause diagram:

• AND gate: allows the causality to pass up the tree if at any time all inputs to the gate occur.

• OR gate: allows causality to pass up through the tree if at any time at least one input to the gate occurs.

Symbols for the consequence diagram:

• Decision box: represents the functionality of a component/system. The NO box represents failure to perform correctly, the probability of which is obtained via a fault tree or a single component probability q_i.

• Fault tree arrow: represents the number of the fault tree structure that corresponds to the decision box.

• Initiator triangle: represents the initiating event for a sequence, where λ indicates the rate of occurrence.

• Time delay: indicates that the time starts from the time at which the delay symbol is entered and continues up to the end of the time interval in the delay symbol.

• OR gate symbol: used to simplify the cause-consequence diagram when more than one decision box enters the same decision box or consequence box.

• Existence box: represents a component existing in a certain state.

• Consequence box: represents the outcome event due to a particular sequence of events.
