
EI TECHNICAL: Safety

Quantifying human reliability in risk assessments
Jamie Henderson and David Embrey, from Human Reliability Associates, provide an
overview of the new EI Technical human factors publication Guidance on quantified
human reliability analysis (QHRA).

Following the Buncefield accident in
2005, operators of bulk petroleum
storage facilities in the UK were
requested to provide greater assurance
of their overfill protection systems by
risk assessing them using the layers of
protection analysis (LOPA) technique. A
subsequent review of LOPAs1 indicated
a recurring problem with the use of
human error probabilities (HEPs)
without an adequate consideration of
the conditions that influence these
probabilities in the scenario under consideration. It is obvious that the error
probability will be affected by a
number of factors (eg time pressure,
quality of procedures, equipment
design and operating culture) that are
likely to be specific to the situation
being evaluated. Using an HEP from a
database or table without considering
the task context can therefore lead to
inaccurate results in applications such
as quantified risk assessment (QRA),
LOPA and safety integrity level (SIL)
determination studies.
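To illustrate why this matters, here is a minimal, hypothetical LOPA-style calculation (all frequencies and probabilities below are invented for illustration, not taken from the guidance) showing how the choice of HEP propagates into the assessed event frequency:

```python
# Illustrative LOPA-style arithmetic: the mitigated event frequency is the
# initiating event frequency multiplied by the probability of failure on
# demand (PFD) of each independent protection layer. All figures are
# hypothetical and for illustration only.

def mitigated_frequency(initiating_freq_per_yr, layer_pfds):
    """Multiply the initiating event frequency by each layer's PFD."""
    freq = initiating_freq_per_yr
    for pfd in layer_pfds:
        freq *= pfd
    return freq

# Hypothetical overfill scenario: one layer is an operator responding to a
# high-level alarm. Taking a generic HEP of 0.01 for that action...
optimistic = mitigated_frequency(0.1, [0.01, 0.01])

# ...versus the same action under time pressure with poor procedures,
# where the HEP might be nearer 0.1:
realistic = mitigated_frequency(0.1, [0.1, 0.01])

print(optimistic, realistic)  # the context-blind figure is 10x lower
```

An order-of-magnitude error in one HEP translates directly into an order-of-magnitude error in the assessed scenario frequency, which is why the task context behind each HEP needs scrutiny.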
Human reliability analysis (HRA) techniques are available to support the
development of HEPs and, in some cases,
their integration into QRAs. However,
without a basic understanding of human
factors issues, and the strengths, weaknesses and limitations of the
techniques, their use can lead to wildly
pessimistic or optimistic results.
Using funding from its Technical Partners and other sponsors, the Energy Institute's (EI) SILs/LOPAs Working Group commissioned Human Reliability Associates to develop guidance in this area. The aim is to reduce instances of poorly conceived or executed analyses.
The guidance provides an overview of
important practical considerations,
worked examples and supporting
checklists, to assist with commissioning
and reviewing HRAs.2

HRA techniques
HRA techniques are designed to support
the assessment and minimisation of
risks associated with human failures.
They have both qualitative (eg task
analysis, failure identification) and
quantitative (eg human error estimation) components. The guidance focuses
primarily on quantification, but illustrates the importance of the associated
qualitative analyses that can have a significant impact on the numerical results.
Further EI guidance on qualitative
analysis is also available.3 There are a large number of available HRA techniques that address quantification: one review identified 72 different methods.4


The respective merits of HRA techniques
are not addressed in the new guidance,
since this information, and more
detailed discussion of the concept of
HRA, are available elsewhere.4,5,6
Attempts to quantify the probability
of human failures have a long history.
Early efforts treated people like any
other component in a reliability assessment (eg what is the probability of an
operator failing to respond to an
alarm?). Because these assessments
required probabilities as inputs, there
was a requirement to develop HEP databases. However, very few industries were prepared to invest in the effort required to collect the data, which led to the widespread use
of generic data contained in tools such
as THERP (technique for human error
rate prediction). In fact, the data contained in THERP and other popular
quantification techniques such as
HEART (human error assessment and
reduction technique) are actually

derived primarily from laboratory-based studies on human performance.

1. Preparation and problem definition
2. Task analysis
3. Failure identification
4. Modelling
5. Quantification
6. Impact assessment
7. Failure reduction
8. Review

Table 1: Generic HRA process

Figure 1: Examples of the potential impact of human failures on an event sequence

PETROLEUM REVIEW NOVEMBER 2012
As it became recognised that people, and consequently HEPs, are significantly influenced by a wide range of environmental factors, techniques were developed to modify baseline generic HEPs to take account of these contextual factors (eg time pressure, distractions, quality of training and quality of the human-machine interface), known as performance influencing factors (PIFs) or
performance shaping factors (PSFs). A
parallel strand of development was in the
use of expert judgement techniques, such
as paired comparisons and absolute probability judgement, to derive HEPs. Other techniques, such as SLIM (success likelihood index method), used a combination of inputs from human factors specialists and subject matter experts to develop a context-specific set of PIFs/PSFs. These were then used to derive an index, which could be converted to an HEP based on the quality of these factors in the situation being assessed.
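As a rough sketch of the SLIM idea (the PIFs, weights, ratings and the calibration HEPs below are all invented for illustration), the weighted PIF ratings form a success likelihood index that is converted to an HEP via a log-linear calibration against reference tasks with agreed HEPs:

```python
import math

# Sketch of the SLIM calculation. The PIFs, weights, ratings and the two
# calibration tasks' HEPs are all invented for illustration.

def sli(weights, ratings):
    """Success likelihood index: weighted sum of PIF quality ratings (0-1)."""
    return sum(w * r for w, r in zip(weights, ratings))

def calibrate(sli_good, hep_good, sli_poor, hep_poor):
    """Fit log10(HEP) = a * SLI + b through two reference tasks."""
    a = (math.log10(hep_good) - math.log10(hep_poor)) / (sli_good - sli_poor)
    b = math.log10(hep_good) - a * sli_good
    return a, b

weights = [0.4, 0.3, 0.3]       # eg time pressure, procedures, interface
ratings = [0.5, 0.8, 0.6]       # assessed PIF quality for the task in hand

# Anchor the scale with two reference tasks whose HEPs are agreed.
a, b = calibrate(sli(weights, [0.9, 0.9, 0.9]), 1e-4,
                 sli(weights, [0.2, 0.2, 0.2]), 1e-1)

hep = 10 ** (a * sli(weights, ratings) + b)
print(f"SLI = {sli(weights, ratings):.2f}, HEP ~ {hep:.1e}")
```

The point of the log-linear form is that the derived number is only as good as the calibration tasks and the judged ratings, which is why SLIM depends on both human factors specialists and subject matter experts.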
Despite well-known issues with their application, and more recent attempts to develop new techniques that address these issues, THERP and HEART remain the most widely used.
Whilst the quantification of HEPs may
be problematic, the importance of the
human contribution to overall system
risk cannot be overstated. For example,
the bow-tie diagram in Figure 1 shows
how different human failures can affect
the initiation (left hand side), mitigation
and escalation (right hand side) of a
hypothetical event.

Practical issues
The EI guidance2 provides an overview
of some of the most important factors
that can undermine the validity of an
HRA. These include:
Expert judgement
Every HRA technique requires some degree of expert judgement in deciding which factors influence the likelihood of error in the situation being assessed, and whether these are adequately addressed in the quantification technique. A well-developed understanding of the task and operating environment is therefore essential, and any HRA report must include a documented record of all assumptions made during the analysis. In particular, this must provide a justification for any HEPs that have been imported from an external source, such as a database. It may also be useful, in interpreting the results, to demonstrate the potential impact of changes to these assumptions on the final outcome.
Commentary: Identifying failures
Using a set of guidewords to identify potential deviations is a common approach to this stage of the analysis. However, this can be a time-consuming and potentially complex process. There are some steps that can be taken to reduce this complexity, and simplify the subsequent modelling stage:
- Be clear about the consequences of identified failures. If the outcomes of concern are specified at the project preparation stage, then some failures will result in consequences that can be excluded (eg production and occupational safety issues).
- Document opportunities for recovery of the failure before consequences are realised (eg planned checks).
- Identify existing control measures designed to prevent or mitigate the consequences of the identified failures.
- Group failures with similar outcomes together. For example, not doing something at all may be equivalent, in terms of outcome, to doing it too late. Care should be taken here, however, as whilst the outcomes may be the same, the reasons for the failures may be different.

Table 2: Example of commentary from the guidance (Step 3 Failure identification)

Impact of task context upon HEPs
As discussed previously, human performance is highly dependent upon prevailing task conditions. For example, a simple task, under optimal laboratory conditions, might have a failure probability of 0.0001 (ie once in 10,000 times). However, this probability could easily be degraded to 0.1 (ie once in 10 times) if the person is subject to PIFs such as high levels of stress or distractions. There are very few HEP databases that specify the contextual factors that were present when the data were collected. Instead, the usual approach has been to take data from other sources, such as laboratory experiments, and modify these HEPs to suit specific situations.
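A HEART-style adjustment illustrates this modification step; the nominal HEP, the multipliers and the assessed proportions below are hypothetical examples rather than values from the published HEART tables:

```python
# HEART-style scaling of a nominal HEP by error-producing conditions (EPCs).
# The multipliers and assessed proportions are hypothetical; a real study
# would take them from the published HEART tables.

def adjusted_hep(nominal_hep, epcs):
    """epcs: (max_multiplier, assessed_proportion 0-1) pairs; each scales
    the HEP by (multiplier - 1) * proportion + 1."""
    hep = nominal_hep
    for multiplier, proportion in epcs:
        hep *= (multiplier - 1.0) * proportion + 1.0
    return min(hep, 1.0)  # a probability cannot exceed 1

nominal = 0.0001  # simple task under near-ideal conditions
# Hypothetical assessments: time shortage (x11 at worst, judged 60%
# present) and distraction (x6 at worst, judged 50% present).
hep = adjusted_hep(nominal, [(11, 0.6), (6, 0.5)])
print(f"{hep:.1e}")  # roughly 2.5e-03: a 25-fold degradation
```

Even with invented numbers, the shape of the calculation makes the article's point: the assessed proportions of the contextual factors, which rest on judgement, dominate the final HEP.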
Sources of data in HRA techniques
Depending on the HRA technique used, it can be difficult to establish the exact source of the base HEP data. It might be
from operating experience, experimental research, simulator studies,
expert judgement, or some combination
of these sources. This has implications
for the ability of the analyst to determine the relevance of the data source to
the situation under consideration.

Qualitative modelling
Some HRA techniques, in addition to
HEP estimation, provide the opportunity
to consider and model the impact of PIFs
upon safety critical tasks. This means
that, whilst the generated HEP may be treated with scepticism, the analysis provides useful insights into the factors affecting task performance and how these can be improved. For example, factors such as the quality of communication and equipment layout may be identified as the PIFs having the greatest influence over the HEP and, accordingly, these factors can be prioritised when considering where resources should be applied in order to achieve an improved level of reliability.
Whilst in practice it may be difficult to establish the absolute probability of failure, an analyst can use appropriate HRA techniques to establish which factors have the greatest relative impact on the probability of failure.

Checklist 3: Reviewing HRA outputs (each item is answered Yes/No; the related guidance step is given in brackets)

3.1 Was a task analysis developed? (Step 2 Task analysis)
3.2 Did the development of the task analysis involve task experts (ie people with experience of the task)? (Step 2 Task analysis)
3.3 Did the task analysis involve a walkthrough of the operating environment? (Step 2 Task analysis)
3.4 Is there a record of the inputs to the task analysis (eg operator experience, operating instructions, piping and instrumentation diagrams, work control documentation)? (Step 2 Task analysis)
3.5 Was a formal identification process used to identify all important failures? (Step 3 Failure identification)
3.6 Does the analysis represent all obvious errors in the scenario, or explain why the analysis has been limited to a sub-set of possible failures? (Step 3 Failure identification)

Table 3: Extract from Checklist 3 Reviewing HRA outputs

Guidance structure
The guidance that Human Reliability Associates has developed for the EI2 takes a generic HRA process as its starting point (see Table 1). Each stage is described, alongside a discussion of relevant potential pitfalls and commentaries regarding important practical considerations. For example, Table 2 addresses issues related to the failure identification stage of the process.


In addition, to support organisations commissioning or reviewing HRA analyses, three checklists are provided:
Checklist 1: Deciding whether to undertake HRA.
Checklist 2: Preparing to undertake HRA.
Checklist 3: Reviewing HRA outputs.
The checklist items are related to the
stages of the HRA process set out in
Table 1. A short, illustrative extract from
Checklist 3 is provided in Table 3.
The guidance also includes full
worked examples, along with associated
commentary related to the checklist
items, to further illustrate common
issues with HRA analyses.
The use of HEPs and associated HRA
techniques is a difficult area. The aim of
the EI guidance is to better equip
non-specialists thinking of undertaking,
or charged with commissioning, HRAs.
In many cases, a qualitative HRA may be
more appropriate than a quantitative
analysis. Any proposed analysis should
be mindful of the pitfalls set out in
the guidance. Moreover, the limitations
of the outputs should be clearly
communicated to the final user of the
analysis.

References
1. Health & Safety Executive, Research
Report RR716: A review of Layers of
Protection Analysis (LOPA) analyses of
overfill of fuel storage tanks, HSE
Books, 2009.
2. Energy Institute, Guidance on quantified human reliability analysis (QHRA),
2012.
3. Energy Institute, Guidance on human
factors safety critical task analysis, 2011.
www.energyinst.org/scta
4. Health & Safety Executive, Research
Report RR679: Review of human reliability assessment methods, HSE Books,
2009.
5. Embrey, D E, Human reliability
assessment, in Human factors for engineers, Sandom, C and Harvey R S (eds),
ISBN 0 86341 329 3, Institute of
Electrical Engineers Publishing, London,
2004.
6. Kirwan, B, A guide to practical
human reliability assessment, London:
Taylor & Francis, 1994.
Guidance on quantified human reliability analysis (QHRA), ISBN 978 0 85293
635 1, September 2012, is freely available from www.energyinst.org/qhra
