STRAIGHT TRACK : Intercraft Communications for Reality-Based Rails

 


 
 

Blame it on the Pin Puller

 

http://submissions.miracd.com/trb2004                          

On the LEVELS of Error in Railroad Operations:  Beyond “Blame It on the Pin Puller”

 

Frederick C. Gamst

University of Massachusetts, Boston

University of Wyoming

5419 Ridge Road

Cheyenne, WY 82009-4527  

(307) 632-8561 tel.

(307) 632-8673 fax

fcgamst@aol.com                  [7497 words, from Abstract, on]

 

Abstract

 

The customary focus of internal and external investigations of human error on US railroads is the individual.  For undistorted understanding of errors, and resulting accidents, we must understand the power structure in the hierarchy of levels of error.  At the highest level, a state society and its culture(s) generate human errors.  Below this are the errors from legislation, its executive enforcement including by regulatory agencies, and their judicial interpretations.  Next, we reach the level of error from organizations, in actions and inactions of managers.  At the bottom of the levels of error causation are the team and, then, the individual whose error is ordinarily not in isolation but shaped by errors on the higher levels. When supra-individual error remains uncountered, then, efforts to reduce error frequency in a workplace are largely ineffective.

 

Keywords:  Human Error, Societal Levels, Organizations, Regulation, Railroads

 

INTRODUCTION

 

To consider who is involved in an error-caused accident delves into the very heart of the logic of cause and effect in our sociocultural system.  When we contemplate who made the human error resulting in an accident, according to the nature of the event, we can range from the individual to the all-embracing society.   This paper focuses empirically on US railroads.  The conceptualizations have wider applicability.

Only rarely do accidents have just one cause, as US railroad investigatory practice has ordained for 174 years.  They usually have multiple causes.  Cause can be quite intricate, even regarding events not appearing intricate.  Nevertheless, many managers, researchers, consultants, and government regulators of railroads mistakenly attempt to learn the cause (a false singularity) of a rail accident.

For decades, we have known the investigator must, "forego the temptation to place the burden of accident prevention on the individual worker" (1).  It is all too human to fashion a so-called human-factors outlook for blaming the scapegoat at the bottom of the chain of command and control.  In US railroading, the tradition followed is to blame it on the "pin puller" (switchman), etc.  And it is an easier, less problematic task to find a simplistic, singular cause rather than an intricately interwoven one, often reaching up the chain of command.  Railroads use adversarial investigations directed toward finding a rules violator so that he/she can be disciplined.  Railroad managers customarily "fix" their employees instead of their behavior-engendering system.

The customary investigatory outlook focused on the behavior of one railroader, or the occurrence of one event, obstructs the search for systemic causes in the encompassing organizational and regulatory bodies of the industry.  Causes include deeds of omission and commission.  Causation might end in a work act but reaches back to failed, managerial practice, rules formulation, training, work organization, and engineering decisions, mostly under government regulation.  An individual's error, then, might involve errors in the overarching system.

Errors could be viewed as usually systemic, with causation(s) ranging across the entire societal system.  An accident having causation at a particular level, however, does not necessarily have causes in a higher level.  Reason explains:  “the commission of unsafe acts is determined by a complex interaction between intrinsic system influences . . . and those arising from the outside world. . . .”  Furthermore, Reason instructs, “systems accidents have their primary origins in fallible decisions by designers and high-level (corporate or plant) managerial decision makers. . . .  Fallible decisions are an inevitable part of the design and management process” (2).

Reason’s comprehensive viewpoint probes beyond the individual and the organization:  “Human-machine mismatches . . . being the result of prior decisions in the upper echelons of the system.  And these, in turn, are shaped by wider regulatory and societal factors” (3).  Before we can understand individual human error, then, we must comprehend all the contributing levels overarching the individual initiating action.  For example, at the Hinton, Alberta head-on collision of a freight into a passenger train, we find cascading errors of the involved Parliament, regulatory organization, railroad organization, and labor organizations (4).

A working definition of human error could be:  A human action or inaction exceeding a prescribed limit of allowable adequacy.  Error may or may not result in a loss.  Loss includes injury and death to humans and damage to property, environment, business and government procedure, and business and government well-being, reputation, and good will. Error could be recovered in time to prevent loss.  My focus is error at work (5).

Error could be divided into those potentially consequential and those inconsequential, that is, having no possible loss.  An initiating event for a consequential error could be a rules violation, whether willfully done as part of collectively accepted local practice (for operational short cuts), willfully done as an idiosyncrasy for individual gain (reducing effort, saving time, increasing remuneration, etc.), ignorantly done (“I did not know that rule”), or inadequately enacted (action too little, too much, too late, too early, out of allowable sequence, ineptly performed, or some combination of these).  An initiating event results in a potentially consequential error when all of the safeguards for a system have been breached.  Accordingly, no one action causes error, but preventing one action in a chain of events could stop performance of an error at the point of control.
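As an illustrative sketch only, the classification above can be expressed as a small data structure; the Python names below are hypothetical and not drawn from any railroad or regulatory source.

```python
from dataclasses import dataclass
from enum import Enum, auto


class InitiatingEvent(Enum):
    """Ways a rules violation or inadequate act can initiate an error
    (categories paraphrased from the text; names are illustrative)."""
    COLLECTIVE_SHORTCUT = auto()    # willful, part of accepted local practice
    IDIOSYNCRATIC_GAIN = auto()     # willful, for individual gain
    IGNORANT = auto()               # "I did not know that rule"
    INADEQUATE_ENACTMENT = auto()   # too little/much, too early/late, inept


@dataclass
class WorkError:
    initiating_event: InitiatingEvent
    safeguards_breached: int        # how many layers of defense failed
    total_safeguards: int

    @property
    def potentially_consequential(self) -> bool:
        # An initiating event becomes potentially consequential only when
        # every safeguard in the system has been breached.
        return self.safeguards_breached >= self.total_safeguards


# Example: a local shortcut that two of three safeguards still contained.
e = WorkError(InitiatingEvent.COLLECTIVE_SHORTCUT, safeguards_breached=2, total_safeguards=3)
print(e.potentially_consequential)   # False: recoverable before any loss
```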

Logically and for results, an analyst needs to ascend the hierarchy of error causation to chart any hierarchical chain of error resulting in an initiating event.  Only in the upper levels of organizations, and higher, do we come to grips with basic kinds of error.  At the highest overarching level are human errors generated by a state society and its culture(s), including the component institutions.  Below this level are the human errors engendered in government by legislation and judicial interpretation, a fleshing out of the legislative skeleton, and by executive enforcement, often through regulatory agencies.  Next, we descend to the level of error from modern organizations.  An organization’s errors stem from the actions and inactions of managers on all levels, from the board of directors to first-line supervisors.  At the bottom of this hierarchy of error causation is the individual actor, sometimes working in a team of co-actors.  His/her error is ordinarily not in isolation but shaped by errors on the higher levels.   
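Again as a sketch only, and assuming the five levels just named, the hierarchy might be represented so that an analyst ascends from the level of an initiating action through every overarching level.

```python
from enum import IntEnum


class ErrorLevel(IntEnum):
    """Levels of error causation, lowest to highest (names illustrative)."""
    INDIVIDUAL = 1
    TEAM = 2
    ORGANIZATION = 3     # managers, from the board to first-line supervisors
    GOVERNMENT = 4       # legislation, regulation, judicial interpretation
    SOCIETY = 5          # the state society and its culture(s)


def overarching_levels(level: ErrorLevel) -> list:
    """Levels an analyst must still examine above the level at which an
    initiating action was observed."""
    return [l for l in ErrorLevel if l > level]


# An error observed at the individual level still implicates every higher level.
for l in overarching_levels(ErrorLevel.INDIVIDUAL):
    print(l.name)
```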

As Reason explains regarding these supra-individual levels of error making:  “Only in the upper levels of the system can we begin to get to grips with the ‘parent’ failure types--the processes that create the downstream ‘problem children’.  If these remain unchanged, then the efforts to improve things at the workplace and worker level will be largely in vain” (3:121).  Since about 1970, the search for causes of accidents has slowly extended outwards in scope and backwards in time, to discover increasingly remote, often multifactorial precursors (6).  The present paper is an intended contribution toward this extension of scope.

 

THE TRADITIONAL FOCUS ON ONE PERSON

 

Problems of the Traditional Approach to Individual Error

 

Folk Beliefs of Individual Culpability

 

Why do we ordinarily insist on placing blame on one individual (“the evildoer”)?  The broad answer lies in the autonomy of the individual as an ancient, profound, Anglo-Saxon cultural value.  We emphasize the separateness of each individual rather than the social connections among group members.  Associated with this central value is the belief that an individual has freedom of will, thought, and action.  Such “freedom,” however, is merely a deep-seated, culture-bound idea.  Because of this belief that humans have “will,” we hold them socially responsible for their acts.  We usually do not recognize the normative constraint of our culture because we generally conform to its comfortable guidelines for sentiment, thought, and acts.  Because freedom of will and action are normatively limited, what, then, are the limits of individual responsibility?

Americans generally believe that a person is a free agent; therefore, they also believe that individual error in performance is entirely volitional, and they hold the individual solely responsible for error consequences.  Moreover, in the management of any organization, it is always conceptually and procedurally easier to “fix” an individual than a complex interaction of organizational policies, plans, rules, supervision, and government regulation gone wrong in a particular alignment of events.

The narrow answer is we cannot easily understand the complexities of social behavior in its hierarchies within a group which, in turn, are nested in a near infinity of overarching, interacting groups.  When an error occurs in systems conceivable only abstractly, such as an organization or a society, the typical person shrugs, comprehends little, and moves on to something more concrete.  (“Yes, but who do we blame?”)  Additionally, failures of omission and commission of organizations leading to hazards have no sensory characteristics.  Being undetectable with the senses, they are abstractions not readily perceived.  Thus we resort to a more understandable, reductionist, lay, folk “psychologizing” of targeting one person.  Hence, we say that the person was, using poor judgment, breaking the rules, heedless, “accident-prone,” or a “foul up.”

 

Moreover, we have an inclination against attempting to find intricate chains of causes of errors.  It is much less arduous, time consuming, and mentally taxing (and less costly to an organization) to facilely blame an "accident" on an individual.  That individual’s encompassing organization thereby absolves itself of any part of the culpability and responsibility to society.

Hale puts blaming the individual in perspective:  “A natural response after an accident is to start looking for someone to blame, someone to pick up the pieces and someone to foot the bill for the damage and the victims. . . .  We seem to attach a high priority to fixing responsibility and blame.  By so doing we localize and encapsulate the events in a way which is understandable and manageable.  Something or someone failed;  if that can be tracked down and put right, we can all sleep soundly in our beds tonight” (6:1).

 

Folk Beliefs about Accidents

 

Buried primordially deep in the culture history of Anglo-Saxon ideas about "accidents" is a bias against attempting to find all causes of such events.  The very word accident, the shorthand label for these ideas in our culture, is fraught with semantic ambiguity and excuse making.  This is despite the word's nearly universal use in the literature on safety.  The Latin root of the word is accidere, "to fall upon," as a tree branch falling by chance upon one's head.  Accidents, however, cannot be summarily dismissed as fortuitous, or having a complete absence of causal chains.  Instead, we must realize that an intricately interwoven chain of events ordinarily underlies a final accident event.   An error enacted at the point of an accident is usually not a solo performance. 

From the perspective of social factors, work accidents and the errors leading to them, beyond being nonfortuitous, may usefully be viewed as errors systematically produced during the work process.  Moreover, work accidents are business problems evading the control of managers and the direction of government regulators.

 

Professional Practice with Individual Culpability

 

Human-factors-caused accidents are usually defined as resulting from a single operator error, such as running through a stop signal or exceeding speed restrictions (e.g., 7).  Most of the literature on human factors concerns the interface of one person or a team with a machine or machines (8).  From the broader perspective of human factors grading into social factors, however, the cause usually lies in the overall social relations and other social factors behind such error, for example, pressures to infract operating rules.  Similarly, an error caused by a railroader falling asleep on duty could stem from not being allowed, by organizational practice, to lay off for sufficient rest.  Or this error could stem from a regulatory law--as governing the railroad industry--providing for a minimum of 8 hours off duty but not, logically, allowing for 8 hours of sleep, say, when a railroader must commute from worksite to quarters and attend to meals, sanitary activities, and bed.  The NTSB has commented on the irregular work schedule of operating railroaders and the crucial “lack of a policy or procedure for removing crewmembers from service when they are not fit for duty because of a lack of sleep, and the inadequacy of Federal rules and regulations that govern hours-of-service” (9).

 

For one of myriad examples of railroads blaming the individual, the Union Pacific, in "UP Online" of August 25, 1997, reported a series of large-scale train collisions.  The UP made the following comment regarding the accidents:  "There have been nine fatalities on the system this year involving the train and engine crafts.  All involved some sort of human failure. . . .   We have to go back and reinforce 'safety is my responsibility.'"

Regarding the collision of a Union Pacific freight train with another near Delia, Kansas, on July 2, 1997 (one in this series of collisions), the NTSB found the engineer of the errant train failed to stop at a stop signal.  This individual error was enabled by management, in its not ensuring redundant safety systems and effective crew resource management.  Further, the FRA and railroad industry contributed by failing to develop aggressively a Positive Train Control (PTC) system (10).

What is meant by human failure "involving the train and engine crafts"?  Does it, correctly, mean failure of all blameworthy humans in the chain of command, control, and other interaction with the train crewmembers?  Or is this statement yet another time-worn, industry attempt to "blame it on the engineer and conductor” and, thereby, not to contemplate any broader, potentially indicting responsibility for a degradation of operating safety? 

Blaming an accident on a railroader, often a victim of the accident, is a practice from invalidated research and managerial programs.  As Howe instructs: “These programs blame workers (the victims of occupational health and safety exposures to hazards) by focusing on worker behavior rather than problems in the system, such as hazards inherent to the work process.  By focusing on workers' 'unsafe acts' as the causes of injuries and illnesses, companies do little to address the root causes of safety and health risks”  (11). 

In this approach regarding worker behavior, we find the traditional focus of railroad safety analysis, labeled man failure.  The approach has Heinrichian conceptual roots:  individual human errors are at the bottom of most work accidents.  This nonsystemic tradition of blaming a rail accident on one employee is a practice constituting an outmoded and discredited managerial philosophy.  Heinrich developed much of the classic, "scientific" approach to behavior-based safety (12).  Centrally, Heinrich held that “a total of 88 per cent of all industrial accidents . . . are caused primarily by the unsafe acts of persons” (12:18). 

The Heinrichian model contributed to simplistic, nonsystemic error theory and reinforced “scientifically” the folk belief in single causes in accidents.  Manuele highlights the underlying flaw in industrial applications of this behavior-based safety, in which allegedly "man-failure is the heart of the problem."  “For years many safety practitioners based their work on Heinrich's theorems, working very hard to overcome ‘man failure,’ believing with great certainty that 88 percent of accidents were primarily caused by unsafe acts of employees.  How sad that we were so much in error” (13).

A great amount of investigative research and managerial activity has been done regarding error.  Mental and biological loads on the individual operator are the focus of investigation.  Such investigation homes in on symptoms rather than underlying causes of accidents. In organizations, individual errors are often symptoms indicating the existence of dormant, precursor conditions in the encompassing system. 

Usually needed is a probing beyond the error of the individual for understanding such failure (14).  Reason usefully divides human errors into active and latent; thus, he opens up a path to the basic causes of most accidents.  As Reason explains:  “It is assumed that the primary systemic origins of latent failure are the fallible decisions taken by top-level plant and corporate managers” (2:201-202).  In active errors, the effects are felt almost immediately whereas, in latent errors, the consequences could remain dormant in the system for some time before manifesting themselves.  The latent errors accrete and combine with other elements to breach a system’s safeguards.  In complex systems, with multi-layered safeguards, it is difficult to detect the accumulated latent errors.  Such concealment of errors is aggravated “when those who design or manage such a system are more inclined to attribute its occasional catastrophic failures to individual fallibility than to intrinsic system weaknesses” (3:55).  Latent errors, then, range far beyond an individual:  they are engendered in a hierarchy of overarching social collectivities.  The latent errors tick away like time bombs in almost all industrial systems and are an unavoidable part of any complex organization and its encompassing society.

More specifically, active errors are associated with operators on what, since my railroading days, has been called the point of control (of work in operations).  Such operators include aircraft pilots, locomotive engineers, switchmen, air traffic controllers, train dispatchers, power plant operators, control room personnel, ships’ officers, transit vehicle operators, and truck and bus drivers.  Latent errors are created by persons removed in time and space from the point of control--from the direct human interface with an operation.  These persons comprise designers including civil, mechanical, electrical, and electronic engineers; high-level and intermediate-level decision makers; fabricating and construction personnel; inspection and maintenance personnel; local managers and first-line supervisors; etc.  Although in accident analysis we are prone to blame an accident on the operator(s) on the point, “Their part is usually that of adding the final garnish to a lethal brew whose ingredients have already been long in the cooking,” Reason notes.  Attempts at locating and neutralizing latent errors will have a much greater positive effect on system safety than customary efforts at minimizing active errors (2:173).  Latent errors were manifest at Hinton, Clapham Junction, Three Mile Island, Chernobyl, Bhopal, and the Challenger and Columbia disasters.
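As a minimal illustration of this active/latent division (the entries below are hypothetical, not drawn from any investigative record), the distinction might be modeled as follows.

```python
from dataclasses import dataclass


@dataclass
class ErrorEvent:
    """Toy model of the active/latent distinction (all entries illustrative)."""
    description: str
    made_by: str              # e.g., "engineer", "designer", "regulator"
    at_point_of_control: bool
    years_dormant: float = 0.0

    @property
    def kind(self) -> str:
        # Active errors are made at the point of control and felt almost
        # immediately; latent errors come from persons removed in time and
        # space and lie dormant until combined with other failures.
        return "active" if self.at_point_of_control else "latent"


errors = [
    ErrorEvent("ran past a stop signal", "engineer", True),
    ErrorEvent("removed a redundant train-stop system", "regulator/designer", False, 20.0),
    ErrorEvent("provided no crew resource management program", "manager", False, 5.0),
]

# The single active error is only the "final garnish"; most entries are latent.
for e in errors:
    print(f"{e.kind:6s}  {e.made_by:20s}  {e.description}")
```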

In all, most active errors of individuals or teams can be viewed as the consequences of latent error from an overarching level.  Identifying an active error is ordinarily only the start of the investigation of cause for an accident.  As Reason concludes:  Latent errors/conditions “arise from strategic and other top level decisions made by governments, regulators, manufacturers, designers and organizational managers” (3:10).  On this subject, Reason summarizes:  “In aviation and elsewhere, human error is one of a long-established list of ‘causes’ used by the press and accident investigators.  But human error is a consequence and not a cause.  Errors . . . are shaped and provoked by upstream workplace and organizational factors” (3:126). 

 

Advantages of a Systemic Approach to Human Error

 

Systemic explanation of error permits organizational learning for safety and efficiency.  Blaming the person on the point of an operation does not afford such learning opportunity.  Hale explains:  “Organizational learning requires that event analysis traces the causal factors and determinants of an event further back into the past than before, and further up the chain of management control.  At each step it needs to ask whether those responsible for hardware, people, rules and procedures, communication and organizational structures had taken suitable decisions to select, prepare, instruct, supervise, monitor and improve them.  Such questions lead into the heart of the safety management system. . .” (6:9).

 

For broad learning, analysis of the roots of an error does not stop with an individual.  A complete root-causes analysis moves as far along the chain of command and responsibility as necessary to include all of the roots.  Root causes (a plural concept) are the most basic reasons for an accident:  their correction should prevent repetition of the accident.
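Purely as a sketch of such tracing, with hypothetical factors loosely patterned on the Delia, Kansas discussion above, a root-causes walk might look like this.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class CausalFactor:
    """Node in a causal chain; 'caused_by' points further up the chain of
    command and responsibility (structure and entries are illustrative)."""
    description: str
    level: str                                      # "individual", "organization", ...
    caused_by: List["CausalFactor"] = field(default_factory=list)


def root_causes(factor: CausalFactor) -> List[CausalFactor]:
    """Walk the whole chain and return every most-basic cause, rather than
    stopping at the first (individual) factor encountered."""
    if not factor.caused_by:
        return [factor]
    roots: List[CausalFactor] = []
    for parent in factor.caused_by:
        roots.extend(root_causes(parent))
    return roots


signal_overrun = CausalFactor(
    "ran past a stop signal", "individual",
    caused_by=[
        CausalFactor("no redundant safety system provided", "organization"),
        CausalFactor("PTC development not aggressively pursued", "regulatory"),
    ],
)

for r in root_causes(signal_overrun):
    print(r.level, "-", r.description)
```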

 

THE LEVELS OF HUMAN ERROR

 

Societal Error

 

In the pursuit of public policy for work and other safety including error reduction, the highest-level question should be, What do the normative systems of a state society prescribe and proscribe regarding it?   Societal values engender greater reaction to 100 deaths, annually, in an airliner crash than to 40,000 in automotive accidents.

No one takes much account of the close-fitting cloak of his/her culture, normatively and comfortably guiding behavior.  This lack of notice includes analysts, who do not recognize the full consequences of their own subcultural biases.  One of our cardinal values is to be "unbiased" and "objective," a cultural impossibility.  That is, an analyst can recognize “objective knowledge” and “truth” only in light of his/her learned expectations and antecedent preconceptions.  As Helmreich and Merritt observe:  “The power of culture often goes unrecognized since it represents ‘the way we do things here’--the natural and unquestioning mode of viewing the world” (15).

Most specialists in error analysis use some manner of cognitive orientation involving various models of human behavior from psychology.  Their aim is to learn how an individual relates to an error he/she makes.   In their defense, as with everyone else, the specialists cannot perceive their own cultural biases. 

 

Governmental Error

 

Governmental regulation of industries preempts private action.  Government thereby prescribes and proscribes safety actions of organizations and their employees.  In the US railroad industry, government has been thus acting since 1893. 

After a catastrophic accident, government must reassure various concerned sectors of the public that the hazard and related errors are both sufficiently known and controlled.  Witness the highly consequential Congressional reaction to the catastrophic collision of a Conrail engine with an Amtrak train at Chase, Maryland.  Some sectors of the public, including the industry under increased post-accident regulation, complain about the restriction of increased regulations, “red tape.”  Other sectors demand more regulation, “there ought to be a law.” 

Government needs valid and reliable statistical data for its assessing, monitoring, and regulating of society.  Regulatory agencies, however, can collect and disseminate accident data that are neither valid nor reliable, thereby precluding nearly all useful studies of error from agency databases.  The FRA accepts FRA cause codings and related descriptions for accidents from the railroads.  These causal reports to the FRA can be obfuscating and even false in stating cause.  Furthermore, the cause ordinarily reflects the railroad and FRA practice of determining cause simplistically as a single individual error.  Problems of obfuscation have long been noted regarding the FRA coding of rail accidents. For one example, the NTSB notes:   “. . . two Safety  Board investigations--Sugar Valley Georgia (August 9, 1990), and Corona, California (November 7, 1990)-- in which fatigue was cited by the Safety Board as a causal factor were not coded in the FRA database as fatigue-related but rather as a failure to comply with signals” (16). 

In another example, on December 19, 2000, a radio-Remote Control Locomotive (RCL) operated for a granary at Blair, Nebraska had an uncontrolled runaway movement with 36 loaded freight cars attached.  A Nebraska Public Service Commission report  found regarding the RCL operator:  "After this brief period and not seeing the [his] train, he [the operator] got back in his truck and headed back to Grant Street" (17).  Thus the operator neither controlled nor had in sight his RCL movement immediately before the runaway derailment onto a Union Pacific main track.  Despite these linked operating events, the UP used the FRA code for the accident as follows:  "M504 Failure by a nonrailroad employee, e.g., industry employee, to control speed of car using hand brake."  The FRA, apparently, took no exception to this coding.

The Transportation Research Board (TRB) Committee for "Review of the Federal Railroad Administration Research and Development Program" in its letter of April 30, 1999 to Jolene M. Molitoris, Administrator, FRA, recommended as follows regarding the lack of utility of the FRA database:  “The committee recommends that the FRA Administrator, in coordination with the Office of Safety and the Office of R&D, take the necessary steps to improve FRA’s data collection so that the multiple contributing factors involved in an accident can be correctly identified and analyzed and the sequence of events characterized. One of the numerous benefits of more accurate and complete accident data would be the ability to conduct R&D in closer balance with actual safety risks.”
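To illustrate the committee's point in the abstract only (the record below is invented, not an FRA filing), single-cause coding discards most of the contributing factors an analyst would need.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AccidentRecord:
    """Toy contrast between a single-code filing and a multi-factor record
    (the code and factors below are invented, not actual FRA data)."""
    single_cause_code: str              # what a flattened database stores
    contributing_factors: List[str]     # what investigators actually found


record = AccidentRecord(
    single_cause_code="failure to comply with signals",
    contributing_factors=[
        "crew fatigue from an irregular work schedule",
        "failure to comply with signals",
        "no procedure for removing unfit crewmembers from service",
    ],
)

# A database keyed on the single code discards most of what an analyst needs
# to characterize the sequence of events.
lost = [f for f in record.contributing_factors if f != record.single_cause_code]
print(f"factors lost to single-cause coding: {len(lost)} of {len(record.contributing_factors)}")
```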

We cannot rely on extant and future databases for railroad accidents until independent analysts have objectively reviewed and purged unreliable data.  The FRA databases have information collected in the first instance by railroad officers who have personal and organizational interests to present, protect, and, at times, fashion data. 

 

Legislative Error

 

Part of the governmental activity for safety is in the legislature.  In the Foisy Commission report on the Hinton two-train collision, the legislature is charged with error (4).   “It is the opinion of the Commission that the legislative and regulatory environment within which the railway system operates . . . is inadequate” (4:5).

Legislation authorizing regulatory action can have positive results for error reduction, in particular areas.  The railroads’ Master Car Builders standardized the Janney automatic car coupler in 1888 (18). Installation was at a snail’s pace on the 1 million freight cars in service, however.  The federal Safety Appliance Act of 1893 required the installation of automatic couplers for all railcars (45 U.S.C. 1-7).  This, and subsequent amendments, lit a fire under the railroads.

A legislature’s selective inaction significantly affects work, transportation, and other safety (19).  Similarly, Congress has shown no functional desire to reduce the carnage from drunken driving on the highways.  In March 1998, after lobbying by beer distributors and restaurant chains, a House panel voice-voted to keep a bill toughening drunken-driving laws from reaching the House floor (20).  Drunk drivers are hit by trains at grade crossings.

 

Judicial Error

Part of the governmental activity for safety is in the courts.  They, however, often seal the records of an accident and error case, thereby depriving the public of the ability to learn from and act on particular hazards and errors.  Judicial procedure becomes paramount, not error management and accident prevention.  The public right to know in a democracy becomes submerged in judicial process.

Furthermore, the courts have a punitive outlook for accident investigation and error causation, giving both a direction and a termination point for inquiry.  The judicial process in accident analysis is a fixing of blame, often with a search for the single, reduced account of what led to the accident.  “It is directed at uncovering truth; something which is seen as objective and of which there is only one version, though what that version is may be disputed.  It stops when the culpable actions are found and does not bother to dig deeper to find out why they were carried out” (6:6-7).  Legal practitioners generally feel that they can find a culprit for almost any accident.

The courts also review appeals regarding, and can overturn, the rulings of regulatory agencies.  This is especially so when regulatory language is imprecise in intent or poorly drafted.  A regulator must satisfy the court that it has considered everything germane.  Thus regulators draft with an eye to court reaction.

 

Regulatory Error

 

Part of the governmental activity for safety is in the executive administration, of which regulatory agencies are a part.  Regulatory agencies can be the source of error and can exacerbate error.  Sometimes error cannot be immediately regulated.

The Chicago line of the former New York Central had an Automatic Train Stop system (ATS), designed to apply air brakes if an engineer failed to acknowledge an audible alarm after passing a more restrictive wayside signal.  The FRA approved removal of the ATS in the 1970s.  At Syracuse, New York, a distracted engineer on an Amtrak train collided with a freight train on this track with downgraded safeguards.  ATS might well have prevented the accident.  Furthermore, the engineer had to go to the back wall of his cab to manipulate an isolation switch.  Here we have an old, higher-level, hazardous design of controls and indicators, located behind an engineer monitoring the route ahead (21).

Regarding this Amtrak-freight train collision, the NTSB reminded the FRA that it had long advocated systems of PTC, to automatically stop a train from running into another train (22).  On the surface, this might seem another case of regulatory error.  Not so, however.  Since November 1997, the FRA has conducted numerous arduous meetings among all the industry stakeholders in PTC, for developing its rulemaking and risk assessments.  Moreover, the costs of PTC systems are so enormous that they cannot be rushed into while the technology is still under development.

Concerning the rupture of a tank car containing hazardous waste at Clymers, Indiana, the NTSB found:  “The U.S. Department of Transportation Hazardous Materials Regulations are deficient because they fail to require the development and implementation of comprehensive, written loading and unloading procedures for hazardous materials” (23).

The boundary between organizational and regulatory error is sometimes difficult to delineate, because the two intertwine.  Thus, for Vaughan, one of three major elements of organizational misconduct is:  “The regulatory environment . . . , which is affected by the relationship between regulators and the organizations which they regulate, frequently minimizing the capacity to control and deter violations, consequently contributing to their occurrence” (24).

Detailed discussion of one kind of error of rail regulators is found in the Foisy report (4:128-130).  The Railway Transport Committee of the Canadian Transport Commission, which regulated the operations of Canadian railroads, had a conflict of interest:  it combined the functions of operating-rule making with the supervision and enforcement of those rules.

A regulatory agency, as an organization, necessarily exists in an environment (a surrounding; all the conditions affecting something).  The regulatory agency must negotiate with the business organizations it regulates.  The regulator cannot be adversarial and must compromise; otherwise it could not function.  A regulated organization has defenses against all outside organizations legitimately or illegitimately probing its nature.  It feeds outside organizations, including regulators, the information it thinks necessary.  Regulators are stretched thin.  They make infrequent visits to sites and request information, which they receive in ideal presentation formats.  Regulators, then, are dependent upon their positive, negotiated social relations with personnel of the organizations regulated.  Sometimes the two rotate between employment as the regulated and the regulator.

At times, the decisions of a regulatory agency result in reactions from various sectors of their environment, including filing lawsuits, testifying at hearings, writing letters, lobbying, demonstrating, picketing, and negotiating.  When the agency eschews making a decision that is politically difficult, these reactions sometimes lessen.  There can always be a need for further deliberation.  The organizational environmental pressures on a regulatory agency can be significant in “establishing” error.

As Vaughan explains about the relations of the regulator and the regulated:  “Situations of mutual dependence are likely to engender continual negotiation and bargaining rather than adversarial tactics, as each party tries to control the other’s use of resources and conserve its own.  To interpret the consequences of this negotiation and bargaining (for example, the ‘slap on the wrist’ sanction or no sanction at all) as regulatory ‘failures’ is to miss the point.  Compromise is an enforcement pattern systematically generated by the structure of inter-organizational regulatory relations“ (25).

Regulation tends to be reactive, against past hazards, rather than proactive, toward future hazards.  Regulation sometimes ignores error patterns brought to the regulators’ attention.  Regulation can at times be inadequate to reduce error patterns.  Sometimes regulation protects the hazard maker (gives a license to err).  Sometimes, regulatory agencies exacerbate known error patterns.  As Reason notes, regulation “can contribute to the breakdown of complex well-defended technologies” (3:157).

After the advent of cabooseless operations, a freight train could no longer be air braked from the rear end, as could be done since the dawn of train air brakes.  An old complex of safeguards against a blocked air-brake pipe (running the length of a train)--built from understandings, rules, practices, and technology--had broken down.  Following ten catastrophic train runaways largely preventable by end-of-train braking, the FRA in 1996 ordered the use of previously extant, two-way End-of-Train Devices (EOTs).  These can activate train air brakes from the rear end, via telemetric radio signal from the engineer at the head end.  In 1989, the NTSB had recommended that the FRA order the railroads to use the EOT (26).

Occasionally, regulatory agencies create new risks allowing new kinds of errors.  Regulators sometimes allow an organization’s conversion of a new gain in safeguards against hazard into an increase in productivity.  Thus the overall amount of hazard remains in equilibrium.  Systems of train detection and rail traffic control have evolved steadily since the 1870s.  The new, planned systems of PTC could lose aspects of both technological protection (e.g., from detection of kinds of broken rail) and institutional operating knowledge with wayside signaling gained since the 1870s (27).

In sum, regulation of industrial safety cannot be merely of the individual, in the form of what Reason labels “‘tokenism’--the focusing of remedial efforts upon preventing the recurrence of specific unsafe acts” (2:206).  The sensible way to deal with these error acts is to work on eliminating their preconditions and to recognize that, whatever safeguards are taken, some unsafe acts will nevertheless occur.  What we necessarily find, at times, are regulated patterned errors and accidents.  Regulation, then, must embrace anti-tokenism.

 

Organizational Error

 

Business Organizational Error

 

Business organizations can be the source of and can exacerbate error.  Learning from the outside about an organization’s errors, however, poses two difficulties. First, if learning about the cause of an accident poses a risk to an organization, information could be withheld.  Second, if the knowledge of the outside investigators is not equal to the insider knowledge of the organization, important information might not be learned (28).

Rather than individual error, lack of sufficient training and evaluation for a locomotive engineer, an unsafe rule for descending steep grades, and refusal to provide a pilot for unfamiliar territory, for example, could be organizational error causes for an excessive-speed derailment of a coal train, as at Bloomington, Maryland (29).   Organizational error is cited for a derailment of a train of hazardous chemicals at Eunice, Louisiana.  The NTSB found the railroad’s “track inspection procedures in use before the derailment  were inadequate. . . .”   It was also regulatory error, however:  “The frequency and type of track inspections routinely performed by the Federal Railroad Administration . .  . were inappropriate given the fact that this was a key route that carried large volumes of hazardous materials” (30).

 

Labor Organizational Error

 

Labor organizations also impact hazard and related error.  The Foisy report found errors from organized labor and its represented railroaders (4). 

Error could be interorganizational, among a railroad and unions, in its genesis.  The Foisy “commission believes that the style of operations and the culture of the ‘railroader’, as it has evolved within CN, creates an environment in which otherwise well motivated and responsible people throughout the company place inadequate priority on safety and, in effect, give tacit acceptance to rules violations that affect the safety of CN’s rail operations.  Within this culture, rules and procedures intended and developed to insure the safe and prudent operations of the system have become ‘background’ and ritual, with the result that CN management and its partner in the definition of work environments and conditions--organized labor--fail to place proper or effective emphasis on safety.”  Moreover, the rest rules of the covering labor agreement "discourage the booking of rest in the away-from-home terminal. . . .  A further discouragement of the booking of rest at the away-from-home terminal is the fear that if rest is booked, the crew will lose its turn on the pool list. . ." (4:5, 87).

 

Team and Individual Error

 

Now, we focus on a switch-engine team engaged in common error-generating activity.  An individual or small team of individuals can volitionally increase errors in work by greatly accelerating the pace of tasks to achieve some desired goal.  For example, a switch-engine crew could “put away the cars” at a much quicker pace than normal to receive an “early quit.”  At the greater speed, an increased probability exists of damaging moving cars.  Also the chance increases that a switchman will slip, possibly across the live rails.  Surely, here, this hustling crew is blameworthy.  Who, however, historically generates such practice and currently allows and rewards it?  The answer is supra-individual--the railroad, monitored by the concerned unions.  This kind of team error has higher-level antecedents.

Individual error gradates from motor tasks (he threw the wrong switch) into judgments (the train dispatcher made the wrong decision).

 

CONCLUSIONS

 

Nothing presented advocates abandoning study and analysis of individual error to promote safety.  The point made is that individual-oriented study alone usually misses the broader error forest of overarching levels because of its focus on individual error trees.

Reducing the active part of the complex of human error chips away at the tip of the accident-causal iceberg.  But because of our penchant for feeling accomplishment by such facile chipping away at individual human error, we continue to do so.  In any event, contemplating the latent errors is arduous, time consuming, and might get one “dehorned.”

Usually, error is not merely blamable on the end-person in a chain of events leading to that error.  By focusing on an individual’s behavior, the correction for error becomes restricted to personal remediation.  Accordingly, in this myopia of individual behavior, error is believed correctable largely by admonishments, discipline, advising, training, and behavior modification, to condition error-free individuals in future situations.  And regarding personal remediations as the correct-all for operator error:  “The evidence from a large number of accident inquiries indicates that bad events are more often the result of error-prone situations and error-prone activities than they are of error-prone people” (2:129).

Quantified data from the common practice of blaming one employee, or one event, for an accident do not provide valid and reliable information for accident and risk assessment.  Above all, for safety assessment of new rail operating technologies, arrangements, and procedures of work, use of past flawed accident data should not compel the well-established railroad practice of simplistically blaming just one employee or one event for an accident.

When pointing at error, we must realize that we usually look through the wrong end of the telescope.  After an accident, investigators and journalists customarily point to individual error as the cause of the event.  Individual error, however, is usually consequence, not cause.  The underlying generators of most error are further up the societal hierarchy.  Fingering the individual at the point of control must be the start of the inquiry into accident cause, not the end.  Human safety-critical errors can be arrayed throughout a societal system.  The higher the system errors occur, the more widespread and repetitive the consequences for safety on the individual level. "Blaming it on the pin puller" is part of a hoary railroad managerial problem and not part of an investigative solution.

 

Despite claims for scientific objectivity, we may view error pragmatically as a variably culturally relative set of ideas and methods for ordering reality.  This ordering allows a sense of security that earthly hazards have been or can be coped with.  As an ordering of reality, error is part of our society’s worldview.  Developed by various groups in society, error consists of sets of learned views, among which individuals can choose opportunistically.  Culture bounds the choices, however.

Error analysis, then, is not a kind of Immaculate Conception, not fouled by the real world.  Because perceptions of and reactions to error are mediated through learned cultural constructions guiding thought, error depicts, assesses, and accesses some part of reality by culturally delimiting it.  One’s learning and its consequent development, by their conceptual inclinations, condition any formulation of error.  At times, error perceptions and reactions are laden with the values of certain groups which are in some way privileged or who want to exert social control regarding an error matter.  In other words, error cannot be fully comprehended apart from society’s norms, values, and political processes including those of control:  “blame it on the pin puller.” 

The goal of scientific investigation is objective knowledge free from bias. But how can any person be free from the biases of his/her continuous enculturation?  Any error investigation, then, is to varying degrees necessarily subjective, from an investigator’s own experiences and expectations and from the experiences and expectations of the producers of concepts and gatherers of information used.  Certain groups, such as railroad managers, with particular outlooks and agendas generate hypotheses and methods for handling error information.  Outlooks and agendas also result in part from a person’s social position in society.  Moreover, the values, norms, and status of the error analyst guide selecting the matters chosen for study, conceptual framework, methodology, and questions to be asked.  In short, error analysts cannot posit procedures mitigating error by venturing that their conclusions flow inevitably from “the hard data.” 

In all, error relates to ideas from politics and political control, especially regarding accountability, responsibility, and, eventually, blame.  Thus, “blame it on the pin puller.”  Why are some errors disregarded or minimized as such, and others reacted to with great regard and amounts of alarm?  The answer is that certain groups select, from their collectively held beliefs, particular errors among others for value-laden reasons germane to them.  That is, genuine errors exist in the real world, but the value systems of various societal subcultures mediate human contemplation, selection, and reaction regarding them.  All positions on error have subjective cultural underpinnings.

If error analysis is to have merit for policymaking, we must recognize not only the physical and procedural but also the political, social, and normative aspects of a technology.  Discussion of error cannot be solely in the “scientific” domain because, in the final analysis, all such discussions are grounded in epistemology, always a cultural reflection.

 

 

REFERENCES

 

1. Johnson, W. G. MORT Safety Assurance Systems. National Safety Council, New York, 1980, p. 42.

2. Reason, J. Human Error.  Cambridge University Press, Cambridge, 1990, pp. 206, 203.

3. Reason, J. Managing the Risks of Organizational Accidents.  Ashgate, Aldershot, 1997, p. 226.

4. Foisy, R. Commission of Inquiry:  Hinton Train Collision.  Report of the Commissioner the Honourable Mr. Justice René P. Foisy.  December 1986.

5. Gamst, F. C. Work, Sociology of.  International Encyclopedia of the Social and Behavioral Sciences :16575-16579.  Pergamon, Oxford, 2001.

6. Hale, A. Introduction:  The Goals of Event Analysis.  In A. Hale, B. Wilpert, and M. Freitag (eds.) After the Event:  From Accident to Organizational Learning.  Pergamon, New York, 1997, pp. 1-10, p. 7.

7. US General Accounting Office Federal Railroad Administration’s New Approach to Railroad Safety. Report GAO/RCED-97-142, p. 33.

8. Woodson, W. E., B. Tillman, and P. Tillman Human Factors Design Handbook, 2d ed.  McGraw Hill, New York. 1992, pp. vi-vii, 729-731.

9.  NTSB Collision between Atchison Topeka and Santa Fe Railway Company Freight Trains ATSF 818 and ATSF 891 on the ATSF Railway, Corona, California, November 7, 1990.  Report NTSB/RAR-91/03.

10. NTSB Collision between Union Pacific Freight Trains MKSNP-01 and ZSEME-29 near Delia, Kansas, July 2, 1997.  Report NTSB/RAR-99/04.

11. Howe, J. A Union Perspective on Behavior-Based Safety.  In G. Swartz (ed.) Safety Culture and Effective Safety Management.  National Safety Council, Chicago, 2000, p. 226.

12. Heinrich, H. W. Industrial Accident Prevention:  A Scientific Approach.  McGraw Hill, New York, 1931, 1942, 1950.

13. Manuele, F. A. On the Practice of Safety. Van Nostrand Reinhold, New York, 1997, pp. 62-63.

14. Cook, R. I., D. D. Woods, and C. Miller A Tale of Two Stories:  Contrasting Views on Patient Safety.  Chicago, 1998.  www.npsf.org/exec/report.html.  Accessed June 10, 2002

15. Helmreich, R. L. and A. C. Merritt Culture at Work in Aviation and Medicine:  National, Organizational and Professional Influences.  Ashgate, Aldershot, 1998, p. 1.           

16. NTSB Safety Report NTSB/SR/99/01, 1999, p. 8.

17. Nebraska Public Service Commission, Memorandum, January 17, 2001, p. 2.

18. Master Car Builders’ Association Report of the Proceedings of the Twenty-second Annual Convention. . . 1888.  New York, 1888, pp. 133-135.

19. Wiener, E. L. Midair Collisions: The Accidents, the Systems, and the Realpolitik. Human Factors, Vol. 22, 1980, pp. 521-533, p. 521.

20. Alvarez, L. Plan to Toughen Drunken Driving Laws Suffers Blow in House.  New York Times 1 April 1998, p. A21.

21. Gamst, F. C. Human Factors Analysis of the Diesel-Electric Locomotive Cab.  Human Factors, Vol. 17, 1975, pp. 150-151.

22. NTSB Rear-End Collision of National Railroad Passenger Corporation Train P286 with CSXT Freight Train Q620 on the CSX Railroad at Syracuse, New York February 5 , 2001.  Report NTSB/RAR-01/04.

23. NTSB Rupture of a Railroad Tank Car Containing Hazardous Waste near Clymers, Indiana February 18, 1999.  Report NTSB/HZM-01/01.

24. Vaughan, D. The Challenger Launch Decision:  Risky Technology, Culture, and Deviance at NASA.  University of Chicago Press, Chicago, 1996, p. 458.

25. Vaughan, D. Autonomy, Interdependence, and Social Control:  NASA and the Space Shuttle Challenger.  Administrative Science Quarterly  Vol. 35, 1990, pp. 225-233, p. 227.

26. NTSB Rear-End Collision of Atchison, Topeka and Santa Fe Railway Freight Train PBHLA1-10 and Union Pacific Railroad Freight train CUWLA-10 near Cajon, California, December 14, 1994.  Report NTSB/RAR-95/04.

27. Light, L. PTC:  A Reality Check.  Railway Age, June 2002, p. 76.

28. Becker, G. Event Analysis and Regulation:  Are We Able to Discover Organizational Factors?  In A. Hale, B. Wilpert, and M. Freitag (eds.) After the Event:  From Accident to Organizational Learning.  Pergamon, New York, 1997, p. 206.

29. NTSB Derailment of CSX Transportation Coal Train V986-26 at Bloomington, Maryland, January 30, 2000.  Report NTSB/RAR-02/02.

30. NTSB Derailment of Union Pacific Railroad Train QFPLI-26 at Eunice, Louisiana, May 27, 2000.  Report NTSB/RAR-02/03.

 

 

 

 

 

           

 

 


 

 

 


