The RESPOND Programme
Funded by NIHR

Human Factors Perspective

By Mark Sujan

Mark is a Human Factors Consultant and researcher in Resilience Engineering. He is a Co-Investigator on the RESPOND Programme.

Turning “Failure to Rescue” on its head: How Resilience Engineering is embedded in RESPOND

The RESPOND project is concerned with the management of patients who deteriorate following emergency surgery. The management of deterioration is an important topic that has been considered at both national and international levels, producing a diverse set of recommendations. Among the most important of these is the use of early warning scores, such as the National Early Warning Score (NEWS) developed by the Royal College of Physicians. The thinking behind early warning scores is that deterioration frequently announces itself in the patient's vital signs, sometimes up to 24 hours before it fully sets in. And, so, if we monitor the patient's vital signs, such as respiration rate, heart rate and blood pressure, then we can spot deterioration early and get a head start. NEWS is a simple scoring tool: vital signs are monitored and assessed, and if one or several of them are abnormal, appropriate steps can be taken early.
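The mechanics of an early warning score can be sketched in a few lines of code. The bands and point values below are simplified for illustration only and are not the official NEWS tables; the authoritative thresholds are published by the Royal College of Physicians and should be consulted for any clinical use.

```python
# Illustrative sketch of an early-warning-score calculation in the spirit
# of NEWS. Each vital sign earns points depending on how far it departs
# from the normal range; the points are summed into a single score.
# NOTE: the bands below are simplified placeholders, not the official tables.

def band_score(value, bands):
    """Return the points for the band containing `value`.
    `bands` is a list of (upper_inclusive_limit, points) in ascending
    order; the last entry uses float('inf') as a catch-all."""
    for upper, points in bands:
        if value <= upper:
            return points
    raise ValueError("bands must end with a catch-all upper limit")

# Simplified, illustrative bands (not the official NEWS thresholds):
RESP_RATE = [(8, 3), (11, 1), (20, 0), (24, 2), (float("inf"), 3)]
HEART_RATE = [(40, 3), (50, 1), (90, 0), (110, 1), (130, 2), (float("inf"), 3)]
SYSTOLIC_BP = [(90, 3), (100, 2), (110, 1), (219, 0), (float("inf"), 3)]

def early_warning_score(resp_rate, heart_rate, systolic_bp):
    """Sum the per-parameter points; a higher total suggests greater
    risk of deterioration and prompts earlier review and escalation."""
    return (band_score(resp_rate, RESP_RATE)
            + band_score(heart_rate, HEART_RATE)
            + band_score(systolic_bp, SYSTOLIC_BP))

# Normal observations score 0; abnormal ones accumulate points.
print(early_warning_score(16, 72, 120))   # -> 0
print(early_warning_score(24, 115, 95))   # -> 2 + 2 + 2 = 6
```

The design mirrors the paper chart: each parameter is scored independently against its own bands, so a single markedly abnormal sign, or several mildly abnormal ones, both raise the total and trigger a response.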

Interestingly, studies have shown that organisations with the same rate of post-surgical complications can have significantly different mortality rates. That is, arguably, down to the fact that some organisations simply manage deterioration better than others. And, so, some years back a new metric was introduced, "Failure to Rescue", which denotes the rate of death following complications. This Failure to Rescue rate is influenced by whether deterioration is recognised in a timely fashion, whether it is appropriately communicated and escalated, and whether suitable action is taken.
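The metric itself is simple arithmetic. A minimal sketch, assuming the common definition of Failure to Rescue as deaths among patients who developed a complication divided by the number of patients with a complication (exact inclusion criteria vary between published definitions):

```python
# Illustrative Failure to Rescue calculation: the denominator is patients
# who developed a complication, not all patients, which is what separates
# this metric from a raw complication rate.

def failure_to_rescue_rate(deaths_after_complication, patients_with_complication):
    """Proportion of patients with a complication who died."""
    if patients_with_complication == 0:
        raise ValueError("no patients with complications recorded")
    return deaths_after_complication / patients_with_complication

# Two hypothetical hospitals with the same complication rate (100 cases
# each) can still differ sharply in how well they rescue those patients:
print(failure_to_rescue_rate(5, 100))    # -> 0.05
print(failure_to_rescue_rate(15, 100))   # -> 0.15
```

This is why the metric shifts attention from preventing complications to managing them once they occur.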

There have been many studies over the years that have looked at why departments and organisations fail to rescue patients.  There’s quite a long list of potential failures, including:  

  • failure to notice that a patient is unwell, 
  • failure to measure vital signs, 
  • failure to calculate the trigger score, 
  • failure to inform senior doctors, and 
  • failure to arrange a suitable care management plan. 

Suggestions for improvement frequently focus on organisational issues: staffing levels (having more nurses, for example), education and training, the use of standardised communication protocols, and the usability of equipment. And it's probably fair to say that significant progress has been made over the years. However, the management of deterioration remains problematic, and interventions such as early warning scores have often not delivered the improvements that people anticipated.

And, so, this is really the starting point for the RESPOND project.  In the past, people have looked at Failure to Rescue, i.e., how the management of deterioration fails.  But there is another way of looking at this problem of course, and that is to try to understand how the management of deterioration usually succeeds.  This, in a way, is the premise of Resilience Engineering.  

Resilience has been defined as "the intrinsic ability of a system or organization to adjust its functioning prior to, during, or following changes, disturbances, and opportunities so that it can sustain required operations under both expected and unexpected conditions" (Hollnagel et al., 2015). The Resilience Engineering perspective suggests that imperfect conditions are ever-present in complex modern healthcare systems, full of inevitable tensions, contradictions and competing priorities. But, of course, we don't fail to rescue patients on a daily basis. Most of the time we succeed in managing deterioration well. Some of this might be down to the safeguards that have been designed into the system. But, from a Resilience Engineering perspective, it might also be down to certain factors, resilient forms of behaviour, that help us stay in control.

In the Resilience Engineering literature, these resilient forms of behaviour have been called resilience abilities or resilience potentials. In essence, this is about people and organisations responding to different situations, monitoring what's going on, anticipating, for example, when and where extra resources might be needed, and creating actionable learning from past events. That is our premise, and using an approach called the Functional Resonance Analysis Method (FRAM) we want to understand better how clinicians manage deterioration even though things are chaotic and far from perfect.

Using FRAM, we identified many situations where people adapt their behaviour and where they resolve tensions and make trade-offs. We then linked these behaviours conceptually to resilience abilities. So, we found that the management of deterioration works because, for example, people closely monitor what is going on, actively look for signs of deterioration, and build an awareness and an overview of the situation. They described how they can change their ways of working when they need more people, how they can pull in people from elsewhere, and how they can pre-alert intensive care even before they are needed; and so on.

So, what have we learned so far? This type of analysis allowed us to understand and to represent everyday work: how people make it work even though there are always the usual disturbances, such as lack of resources. And in the first instance we can reflect that back to the people doing the work and those managing it, which is quite useful in itself. For example, junior doctors often get blamed for not escalating quickly enough to senior doctors; but, on the other hand, they are expected to demonstrate that they can manage patients independently, and they are very aware of the high levels of workload of everyone around them. So, they need to make very difficult trade-off decisions. Just reflecting this back, to allow people to think about it explicitly, can be helpful.

It is probably fair to say that this type of analysis does not provide simple, quick fixes. If we compare this analysis with other, more traditional approaches, we can see that they are complementary; the focus is different. For example, a traditional analysis would look at how people might fail to rescue patients, and at how we can introduce education and training, standardised communication protocols, better tools and equipment, and improvements to the work environment. This can be quite useful, and the use of early warning scores is one such example. But such interventions don't always work as well in practice as imagined. Hence, the focus in this analysis is more on understanding how people and organisations monitor, anticipate, respond and learn within the messy reality of everyday clinical work.