Article written by Nippin Anand PhD MSc Master Mariner FNI
In January 2015, the pure car and truck carrier (PCTC) Hoegh Osaka developed a severe list on departure from Southampton and was left stranded outside the port for more than 19 days. The official investigation revealed how decision making became the victim of production pressures. The vessel sailed from port without accurately determining her stability condition on completion of cargo operations; it was routine practice to leave this task until the vessel was out at sea, a practice that appears to be common within the PCTC industry. The weights of the cargoes declared at the time of loading were significantly different from the actual weights, a practice that extends even beyond the PCTC industry. The port captain never felt the need to involve the chief officer in the preparation of the stowage plan. The chief officer, on the other hand, did not feel he had the authority to question the pre-stowage plan.
The preventive actions that followed the accident should not surprise anyone. A volley of plans, presentations and questionnaires was sent off to the entire fleet, reinforcing the importance of compliance with procedures and checklists and warning the crew against being influenced by perceived commercial pressure. But will these actions actually do anything to improve safety?
Safety management system
During the 1980s, a number of very serious accidents, including the capsize of the Herald of Free Enterprise, led to the introduction of the International Safety Management (ISM) Code, based on the principles of the ISO standards and taking a structured, systematic and documented approach to the management of safety and quality. A key requirement of the Code was for every organisation to formally adopt a safety management system (SMS).
And what exactly is the purpose of the SMS? It can be illustrated using the ‘Swiss cheese’ model of accident prevention, in which several slices of cheese are lined up against each other. The slices represent organisational barriers intended to prevent accidents. These typically include crew competence and training, emergency preparedness, maintenance of safety equipment, analysis and reporting of accidents, documentation control, and effective control and monitoring from the shore side.
The holes in the cheese represent noncompliances – instances where rules, regulations and procedures were not followed. When an accident happens, the conventional explanation is that there was a hole in a barrier because rules and procedures were not followed. The purpose of the SMS is to ensure the systematic identification, detection and follow-up of noncompliances so that the organisation is better prepared to manage safety risks.
If the capsize of the Herald of Free Enterprise led to the introduction of the safety management system, the stranding of the Hoegh Osaka has surely reaffirmed that we need more of it. More rules, procedures and checklists to plug those holes! But the existing approach to safety management has proven deeply flawed and dangerously misleading. For the rest of this article I will set out some myths about safety management systems, and then debunk them by offering a new view of the safety management system.
Light bulbs and the myth of compliance
Light bulbs either light up or they do not. There is no middle way to determine whether they work or not. The underlying philosophy of a safety management system is similar to the working of a light bulb – bimodal and absolute (yes or no), with the sole aim of establishing whether rules, regulations and industry standards have been complied with or not.
But if we apply this bimodal approach to the Hoegh Osaka, there are not many instances of noncompliance. The vessel complied with all the statutory requirements and was manned by competent crew who were adequately rested at the time of the accident. The loading computer program was ‘approved’ and would have worked accurately if the correct cargo weights had been fed into it. The remote gauges for tank soundings were not operational at the time of the accident – but this was not necessarily a noncompliance, as long as tank soundings could be obtained manually. It appears that, in the absence of compliance risks, the company regarded rectifying the fault in the remote gauges as a low priority. The official accident report stated: ‘In light of the low priority given by the company to repairing the gauges, a similar low priority was assumed by Hoegh Osaka’s chief officer, who resorted to estimating ballast tank quantities.’ A defective ballast sounding system that was otherwise compliant with regulations was encouraging ‘unsafe practices’ on board.
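To see why the declared weights matter so much, consider how a loading computer arrives at the stability condition. As a purely illustrative sketch (the notation below is standard naval architecture, not figures taken from the official report), the program derives the vertical centre of gravity from the declared weights and their stowage heights, and the metacentric height from that:

\[ KG = \frac{\sum_i w_i z_i}{\sum_i w_i}, \qquad GM = KM - KG \]

where each \(w_i\) is a declared cargo weight, \(z_i\) its vertical centre above the keel, and \(KM\) the height of the metacentre. If cargo stowed high in the ship is declared lighter than it really is, the computed \(KG\) sits lower than the true \(KG\), and the program reports a GM – a margin of stability – that the ship does not actually have. An ‘approved’ program can only be as accurate as the figures fed into it.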
Crossing the red line
At a maritime symposium on ‘safety culture’, the importance of following rules and procedures came up, as one would expect. An elderly gentleman stood up and said, ‘Ladies and gentlemen, we are in the business of transporting flammables; remember, you must never cross the red line’. What he meant was that workers should under no circumstances dare to breach rules and procedures. The crew on board the Hoegh Osaka had crossed the red line on numerous occasions. The Master did not hold a pre-load meeting with the deck crew and officers. The chief officer underestimated the importance of accurately calculating the stability condition before departure. Instructions on the use of the loading computer were not part of the chief officer’s familiarisation checks. The heavy lift cargoes were not secured in accordance with the CSS Code 2011, IMO Resolution A.489(XII), IMO Resolution A.533(13), IMO Resolution A.581(14) as amended by MSC.1/Circ.1355, and the MCA publication Roll-on/Roll-off Ships: Stowage and Securing of Vehicles – Code of Practice (and add to this a raft of regulations, circulars and industry standards that even experts specialising in cargo securing plans may be unaware of).
All of this would have played its part in the accident. But pick a routine cargo operation on any PCTC and the chances are you will find an even more comprehensive list of rule violations. A seafarer with whom I recently discussed this issue put it plainly: ‘If you go to this level of detail, you will find problems in everything I do’. A dozen experts analysing split-second decisions made under intense production pressure will no doubt establish numerous instances where rules were violated and procedures were not followed. But is this approach really effective in managing safety?
The proceduralisation of everything
The accident investigation report into the Hoegh Osaka found that there were a total of 213 checks to be completed by the chief officer for cargo operations alone. This exemplifies a ‘rotten onion’ style of management: one where multiple layers of procedures and checklists cover up the core issues. These procedures (referred to as ‘objective evidence’ in the language of the safety management system) make it extremely difficult for an outsider (ie a regulator) to gain insight into the core practices and culture of an organisation. I am reminded of a fire damper that was found fully corroded and inoperable during a survey, despite the maintenance management plans indicating fully operational firefighting systems in ‘excellent condition’. No amount of processes, procedures and checklists can solve core problems of this nature. If anything, they only make core issues more inaccessible.
To a large extent the problem lies in how safety audits are conducted. The auditor finds a few non-conformances and the company addresses them by adding a set of procedures and half a dozen checks to the SMS. The quest for paperwork to prove safety generates even more paperwork for managing safety. Everything from starting the main engines to switching on the kettle is ‘proceduralised’ and ‘risk assessed’, and the safety management system eventually becomes a monster. There is very little foresight or thinking in this mundane ‘check-do’ process.
The Hoegh Osaka’s two hundred or so checks for cargo operations alone are a true reflection of this contagion. While the company was busy creating the checks, the chief officer was busy ticking the boxes, the Master was too busy to verify the checks, the regulator was kept busy assessing the checks and the investigator was busy counting them – and beneath all these layers of protection, the safety management system was drifting into failure. The imaginary world of procedures and checks had drifted too far away from the real world of practice.
‘No blame’ myth
Most accident investigation reports and safety audits start by stating that the purpose of the exercise is not to apportion blame or find fault with individuals. In practice, this is far from achievable with our current approach to managing safety. Within the 83-page Hoegh Osaka incident report, the term ‘chief officer’ appears 132 times and ‘Master’ 89 times. By contrast, the organisation responsible for the safety management system appears only 60 times. Of the two dozen conclusions drawn in the report, 16 are centred on the vessel and the crew on board. It is obvious that the focus of the report remains on the behaviour and actions of those closest to the scene of the accident. Research in accident studies describes this tendency to focus excessively on the actions of those physically closest to the accident as ‘proximity bias’.
It is interesting to see how a highly ambiguous and uncertain situation is captured and presented as a structured and systematic report. In an attempt to present an official narrative, the report offers a one-sided construct of the entire ordeal. The problem, it suggests, begins with an erroneous stability condition and ends with an extremely tender vessel that develops a severe list just after departing from the port. There is no mention in the report of the last safety audit, management visit on board, charterer’s inspection, or QHSE plans and reports. How could so many entities have missed so many unsafe practices that were so common on board? The voices from the control room and the wheelhouse are missing. The inability to calculate the final stability condition prior to departure is considered a ‘drift from fundamental principles of seamanship’, but there is not much rigour in such statements. Under these circumstances, how can we preach the mantras of ‘no blame’, ‘just culture’ and ‘safety first’ to anyone involved in an accident?
A new view of the safety management system
Having summed up the four popular myths of the safety management system, we are left with several questions to answer.
• First, we place so much faith in compliance with regulations in managing safety – but is compliance really as straightforward (yes or no) as it appears at face value? And if not, can we still make effective use of compliance in managing safety?
• Second, can we think beyond the punitive language of ‘rule violations’ in managing safety?
• Third, can we ever genuinely manage safety without shaming and blaming our workers?
• And finally, if excessive procedures and checklists are taking us away from our core problems, what can we do to bridge this gap?
The answers will offer an alternative approach to the safety management system (and hopefully debunk some myths).
Purposeful compliance
Technology moves far faster than our ability to control and regulate it. When compliance with ‘rigid’ regulations conflicts with operations, owners may seek ‘alternative compliance’ through risk assessments, waivers and exemptions, or even threaten to transfer their vessels to ‘business friendly’ flag states. What appears a matter of absolutes on the surface is in fact imperfect, convoluted, interpretive and open to abuse. Many high-risk industries have recognised the limitations of compliance with rules and regulations and instead require a duty of care and responsibility from operators, even where this means taking measures beyond compliance.
(Of course, this approach is not without its own problems.)
In the case of the Hoegh Osaka, it surely made sense to use all the available codes, circulars and IMO resolutions to verify compliance with cargo securing when compiling the accident report – except that this was done in hindsight and with ample time (the official report took more than a year to publish). The knowledge surely existed when the vessel was being loaded, but could it ever be applied as a means of preventing accidents, rather than merely identifying noncompliances in the wake of one? This is an important question to ask when designing and implementing a safety management system. Compliance must have a meaning and a purpose, not be something demanded for its own sake.
Approximate adjustments
In many societies, even the thought of breaching the rules can be intimidating (just as in other societies it is a way of life). After all, rules and procedures are there to assist us. It is unthinkable for many of us that a vessel could ever sail from port without obtaining final stability calculations. And how could the chief officer tick off checks that were never really carried out? Why, despite clear instructions in the SMS, were tank soundings not obtained on a daily basis? Is this really a case of unreliable seafarers ‘falsifying records’ and crossing red lines? Far from being unsafe practice and a drift away from seamanship, this is exactly how work gets done. Had the chief officer diligently followed the rules and performed all of the two hundred or so checks, the vessel might not have departed from the port on time. There is a reason why, in many countries, ‘working to rule’ is a deliberate form of protest: strict compliance with every rule slows work to a crawl.
When the chief officer was selective in following the checklist, it could well be that he was applying seamanship (using his professional judgement, prioritising and making adjustments when faced with time constraints) rather than ‘drifting away’ from it. What we consider ‘red lines crossed’ are approximate adjustments required to succeed at all levels within the organisation. These adjustments are approximate because we cannot write precise rules and procedures for every single task; because procedures demand resources (ample time, the right competencies) that may not always be available; and because procedures are underspecified, using terms such as ‘apply good seamanship’ that do not spell out what is expected of the individual in a given situation. Approximate adjustments have to be made to get the job done. This is how we succeed in everyday work despite demanding deadlines and budget constraints.
The equivalence of success and failure
Do we always need someone to blame in the wake of an accident, or is there an alternative? Let us examine the fine details of the Hoegh Osaka accident: a last-minute change that made Southampton the first call in the port rotation rather than the last; a historical trend of guessing ballast quantities rather than obtaining actual tank soundings; a routine practice of declaring less than the actual weights in cargo manifests; a metacentric height (GM) marginally short of the required stability standards; a mere 0.6 metres of bow trim that led to a high rate of turn; and a righting moment that brought the vessel back upright when she heeled while turning at 10 knots, but which became insufficient at just two extra knots at the next turn in the channel. Note the dynamic nature of certain variables and how routine practices and approximate adjustments came together. Where is the root cause of the accident? This shows how approximate adjustments and routine practices can combine to produce disproportionate, non-linear outcomes.
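The sensitivity to those two extra knots is not mysterious. As a purely illustrative sketch (a textbook approximation, not figures from the official report), the steady heel angle a ship develops in a turn can be estimated as

\[ \tan\theta \approx \frac{v^{2}\,(KG - d/2)}{g\,R\,GM} \]

where \(v\) is the ship’s speed, \(R\) the radius of the turn, \(KG\) the height of the centre of gravity above the keel, \(d\) the draught, \(g\) the acceleration due to gravity and \(GM\) the metacentric height. Because the heeling moment grows with the square of the speed, increasing from 10 to 12 knots raises it by roughly \((12/10)^{2}\), or about 44 per cent, while a marginal GM leaves very little righting moment in reserve. Small, routine adjustments interact in ways that are anything but linear.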
Change any one of those variables and there is a good chance that the Hoegh Osaka would have safely exited the channel, just as most PCTCs and many other merchant vessels do each day. None of us would have noticed the ‘deep rooted’ problems so pervasive within the industry. On the contrary, the management would have rewarded the employees in their next performance review. Who would not wish for a workforce that could balance safety with quality so well? Is this not how competitive organisations are meant to operate in an aggressive market? It is hard, then, to explain why we should blame people who exhibit a ‘can do’ attitude and are willing to go that ‘extra mile’. Granted, there are negligent behaviours and unsafe practices – but the boundaries between success and failure have become blurred.
Business is safety
It does not make much sense to react to ‘unsafe practices’ by replacing a handful of seafarers and introducing more checks, controls and barriers. When something goes wrong, it has usually gone well many, many times before. That is why people do it! So without understanding why it was done in this way and why it went well, we have no hope of understanding why it went wrong. It pays to observe a successful routine operation with an open mind.
Recall the last-minute changes to the port schedule of the Hoegh Osaka. This is a common problem for many ships (it was also an issue in the case of the Herald of Free Enterprise). We should therefore begin by looking at what usually and normally happens in such cases. How do crew members adjust to last-minute changes to port schedules? When time is limited, how does the crew meet deadlines and still get the job done? How does the vessel manage to depart on time despite arriving late in port?
Pay close attention to how crucial decisions are made on the basis of incomplete, sometimes incomprehensible, information and poorly written procedures. Observe how work is performed when not all crew members are adequately experienced in handling key operations.
Find out how shortcomings in apparently certified equipment are compensated for in everyday work. It is here that we start to appreciate human performance. It is here that we realise the need to remove the unnecessary checks and barriers that impede rather than facilitate decision making. It is also here that we start to realise that we cannot write procedures and checks for every conceivable situation. And it is here that procedures and checklists start to mesh with the messy world of operations. Here lies an opportunity to genuinely promote a ‘no blame’ culture and reduce the administrative burdens that are helping neither safety nor businesses.
After more than two decades of futile attempts to implement a ‘structured, systematic and documented approach’ to managing safety, it should be clear that no such thing exists in practice. The case discussed here was chosen not because it was unique or a one-off; it simply serves as a recent example, available in the public domain, that exposes the fatal fallacy we call the safety management system. Perhaps the time has come to leave behind the light bulbs, red lines and rotten onions and embrace a new view of the safety management system. Safety is not a crime against business. Business is safety.
First published in Seaways. Re-printed by kind permission of The Nautical Institute.
FURTHER READING:
Safety-I and Safety-II: The Past and Future of Safety Management, Erik Hollnagel
The views expressed by the author in this feature article may not be the views of the organisation that the author represents.