
2010 
Few systems operate completely independently of humans, so any study of system risk or reliability requires analysing the potential for failure arising from human activities in operating and managing the system. Human reliability analysis (HRA) emerged in the 1960s with the aim of modelling the likelihood and consequences of human error. Initially, it treated humans like any other component in the system: they could fail, and the consequences of their failure were examined by tracing the effects through a fault tree. Thus, to conduct an HRA, one had to assess the probabilities of various operator errors, whether errors of omission or errors of commission. First-generation HRA methods may have brought some sophistication to this assessment, but in essence that is all they did. Over the years, methods have been developed that recognise, on the one hand, the human potential to recover from a failure and, on the other, the effects of stress and organisational culture on the likelihood of errors. But no method has yet been developed that incorporates all our understanding of individual, team and organisational behaviour into an overall assessment of system risk or reliability.
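The fault-tree treatment described above can be sketched numerically: basic events (operator errors, equipment failures) are assigned probabilities and combined through AND/OR gates, with a recovery factor discounting errors the operator catches in time. The sketch below is purely illustrative; all probabilities and event names are hypothetical assumptions, not values from the article.

```python
# Illustrative first-generation-style HRA fault-tree calculation.
# All basic-event probabilities below are hypothetical placeholders.

def p_and(*probs):
    """AND gate: all independent basic events must occur."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    """OR gate: at least one independent basic event occurs."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# Hypothetical basic-event probabilities
p_omission = 1e-3     # operator omits a required step
p_commission = 5e-4   # operator performs a wrong action
p_recovery = 0.9      # operator recovers the error in time
p_hardware = 1e-4     # independent equipment failure

# Unrecovered human error: an error occurs AND recovery fails
p_human = p_and(p_or(p_omission, p_commission), 1.0 - p_recovery)

# Top event: system fails if the hardware fails OR an
# unrecovered human error propagates through the system
p_top = p_or(p_hardware, p_human)
print(f"Top-event probability: {p_top:.3e}")
```

Treating the operator as just another component, as here, is exactly what later HRA methods moved beyond by modelling stress, culture, and recovery in richer ways than a single fixed recovery probability.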