Many of us think in Newtonian terms, writes Sean Brady. We believe that systems can be broken down into individual components, and once these individual components are understood, the overall behaviour of the system can also be understood.

A system is the sum of its parts. But while this may be true for simple systems, it is not the case for so-called complex systems. Complex systems are much more than the sum of their parts because they exhibit complex behaviour.

This behaviour arises not because of individual components in the system, but because of the interactions between these components. These systems exhibit non-linear behaviour – small causes can produce large effects.

They are open systems – they interact with and respond to their environment. And they are adaptive and produce emergent behaviour – behaviour that was not designed into the system.

A complexity or complex systems thinking approach, therefore, overcomes the shortfalls of Newtonian thinking by not only considering a system’s components, but by also considering its interactions and boundaries.

In the case of a construction project, the components of the system are the parties doing the work, and the interactions between them are their formal and informal relationships with one another.

The purpose of this paper is to explore how complex systems thinking can be applied to a construction project, specifically from the perspective of understanding why engineering failures can occur. But complex systems thinking can be applied to most forms of project failure, be they technical, schedule-related or financial.

The paper begins by presenting an engineering failure, in this case the collapse of the Florida International University (FIU) bridge. It then explores Newtonian thinking, before introducing the concept of complex systems thinking. 

The Florida International University bridge(1)

In the early afternoon of March 15, 2018, a partially constructed pedestrian bridge at the Florida International University (FIU) collapsed. It fell 5.6m to the highway below and crushed or partially crushed eight vehicles. Five vehicle occupants died, as well as one worker who was on the bridge at the time of the failure.

In October 2019 the National Transportation Safety Board (NTSB) released its report into the incident, which forms the basis of the discussion that follows(2).

Only the main span of the bridge was in place at the time of the collapse. It was still under construction, but if it had been completed it would have also included a back span crossing the Tamiami Canal, as well as a pylon and steel pipes. The final structure was meant to resemble a cable stayed bridge – see Figure 1. 

The structure was, in fact, not a cable stayed bridge. And in the days following the incident this caused confusion: a true cable stayed bridge requires a pylon and cables to support the span. (The bridge span is supported by the cables, which in turn are supported by the pylon, which then carries the load down into the ground.)

Given that only the main span of the bridge was in place at the time of collapse, questions were asked as to how this span was supported in the absence of the rest of the structure.

The answer was that the bridge was only designed to mimic, as opposed to be, a cable stayed bridge – the main span was intended to span the highway without the assistance of cables; it was a simply supported structure.

The pylon and cables, which in this case were steel pipes, did not carry any load. Instead they had two functions: 1) to improve the bridge’s dynamic performance (they increased the natural frequency of the structure), and 2) to make the bridge look more architecturally impressive. 

This main span was a post-tensioned and reinforced concrete truss structure. The bottom of the structure was a concrete deck, the top was a concrete canopy, and down its centreline were diagonal and vertical truss members connecting the deck to the canopy – see Figure 2. 

The main span was initially constructed off site, and then on the night of March 10, 2018, was moved into position over the eight-lane highway. As part of this move, Members 2 and 11 (see Figure 3) had been post-tensioned, but afterwards, as was planned, this tension was released – it was only required during transport. 

Fast forward five days, to the day of the failure, and Member 11 was being re-tensioned by workers located on top of the bridge canopy. The reason this member was being re-tensioned was that serious cracks had appeared in the bridge.

These cracks were located at the joint where Members 11 and 12 joined the bridge deck. Member 11 was being re-tensioned to ‘pull’ this joint back together and close the cracks.

This re-tensioning would apply significant forces to the bridge, but despite this, the freeway was not completely closed. Traffic flowed freely in six of the eight lanes. What happened next was captured on a video camera mounted in the interior of a pickup truck.

At 43 seconds and 881 milliseconds past 1.46pm (13:46:43.881), the first sign of failure was evident. There appeared to be a blowout of concrete at the Member 11 and 12 deck joint – see Figure 4. This joint failed catastrophically because of the forces being applied by the re-tensioning of Member 11. 

Then, a mere 165 milliseconds later, the canopy fractured at the top of Member 11 and the span began to hinge downwards – see Figure 5. 

The span struck the freeway, with only Members 1, 2, 3 and 4 remaining intact – see Figure 6. 

The time was 13:46:44.310, and the collapse had taken just 429 milliseconds.
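
As a quick check on this timeline, the interval between the timestamps quoted above can be computed directly. The short Python sketch below does just that; the timestamps are those reported in the NTSB timeline, while the variable names are illustrative only.

```python
from datetime import datetime

# Timestamps quoted above (local time, March 15, 2018); variable names are illustrative only.
first_distress  = datetime(2018, 3, 15, 13, 46, 43, 881_000)  # blowout at the Member 11/12 deck joint
canopy_fracture = datetime(2018, 3, 15, 13, 46, 44, 46_000)   # canopy fractures at the top of Member 11
span_on_freeway = datetime(2018, 3, 15, 13, 46, 44, 310_000)  # span strikes the freeway

print((canopy_fracture - first_distress).total_seconds() * 1000)  # 165.0 ms
print((span_on_freeway - first_distress).total_seconds() * 1000)  # 429.0 ms
```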

Cause of failure and peer review

There are many aspects of this failure that we could focus on in this paper, but we will limit our discussion to the cause of the failure and why this cause went undetected and led to the collapse.

Fundamentally, the NTSB investigation would conclude that the bridge’s designer, FIGG, had made multiple design errors. For the purposes of this discussion we will focus on the significant errors made in the design of the Member 11 and 12 to deck joint connection.

The actions in this connection were higher than the designers assumed, and the strength of the key joint was lower than the designers calculated. This led to an overloaded connection that cracked badly, culminating in its explosive failure when tension was reapplied to the post-tensioning rods in Member 11.
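
In structural terms, a connection fails when the applied action (the demand) exceeds its strength (the capacity). The minimal sketch below uses purely hypothetical numbers – not values from the FIU design – to illustrate how underestimating demand and overestimating capacity combine to push a connection that appears safe on paper well past its real limit.

```python
def demand_capacity_ratio(demand_kn: float, capacity_kn: float) -> float:
    """A ratio above 1.0 indicates an overloaded connection."""
    return demand_kn / capacity_kn

# Hypothetical, illustrative numbers only - not values from the FIU bridge design.
assumed = demand_capacity_ratio(demand_kn=800.0, capacity_kn=1000.0)   # as assumed in design: 0.80, appears safe
actual  = demand_capacity_ratio(demand_kn=1100.0, capacity_kn=900.0)   # as actually loaded: ~1.22, overloaded

print(f"assumed ratio: {assumed:.2f}, actual ratio: {actual:.2f}")
```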

We will not explore further why these design errors occurred, but instead we will focus on why they went undetected when the design of the bridge was peer reviewed.

The Florida Department of Transportation (FDOT) required a peer review of the bridge’s design by an independent third party, and it set out its peer review requirements in its Plans Preparation Manual.

These requirements demand that an independent firm – not the firm that designed the bridge – check that the bridge plans are compliant with the relevant design requirements.

The independent firm must be prequalified with the FDOT to undertake this work, and the cost of the review is borne by the design and construct firm.

In theory, this is a robust system to ensure a design is error free, carried out by an independent firm whose expertise has been confirmed by the FDOT. This process, however, failed to identify the serious design errors made in the bridge.

The firm undertaking the peer review, Louis Berger, didn’t check the bridge’s design for all stages of construction, nor did it check the bridge’s connections – and so it failed to identify the design errors in the Member 11 and 12 to deck joint connection. Because these checks were not undertaken, the peer review failed, contributing to the collapse. 

However, while this explains what happened, it doesn’t explain why it happened. To better understand why, it is useful to take a complex systems thinking approach.

Newtonian thinking(3)

Before commencing with a discussion on complex systems thinking, it is useful to examine a form of thinking that many of us are more familiar with, namely Newtonian thinking.

Newtonian thinking assumes that if the behaviour of the individual components of a system is understood, and the system’s initial conditions are known, then the behaviour of the overall system can be both understood and predicted.

In other words, we can view a system in a ‘mechanical’ manner that will produce predictable outcomes. This form of thinking underpins science, appears to be common sense, and for many systems – namely, simple systems – adequately describes their behaviour.

This thinking also underpins investigations into why systems fail: in order to understand why a system failed, all that is required is to identify the component or components that individually failed.

A number of Newtonian thinking assumptions are worthy of discussion. One assumption is that there is a direct link between cause and effect. Dekker says "in the Newtonian vision of the world, everything that happens has a definitive, identifiable cause and a definitive effect. There is symmetry between cause and effect [they are equal but opposite]. The determination of the 'cause' or 'causes' is of course seen as the most important function of investigations when something goes wrong, but it assumes that physical effects can be traced back to physical causes [or a chain of causes-effects] (Leveson, 2002)".(4)

So not only can a link between cause and effect be clearly drawn, but the seriousness of the effect is related to the seriousness of the cause – big failures are due to big causes, small failures to small ones.

Another assumption is reductionism, a concept already introduced above – the system can be broken down into its component parts, both technological and human, and once the behaviour of each component is understood, the behaviour of the system as a whole can also be understood.

The system is the sum of its parts. Dekker succinctly articulates the issue as the "functioning or nonfunctioning of the whole can be explained by the functioning or non-functioning of constituent components".(5)

This Newtonian way of thinking is not, however, limited to the sciences, but permeates many aspects of how we view the world. The extent of Newtonian thinking is highlighted by Boulton, Allen and Bowman (2014): "The assumption that the world, including the social and natural world, can in the main be treated as if it were a machine – predictable, rational, measurable and controllable – is still so prevalent that we cannot overemphasise this point"(6).

I would argue that the manner in which we set up construction projects is Newtonian in nature, despite the evidence that these projects, especially mega projects, often proceed in a manner that would not be described as predictable, rational, measurable and controllable.

Therefore, if these projects do not always behave in a Newtonian manner, then they also don’t necessarily fail in a Newtonian manner either. And to understand how these systems fail, we need to examine the concept of complex systems thinking.

Complex systems thinking(7)

As discussed earlier, some of the key aspects of Newtonian thinking are that there is a clear link between cause and effect, and a reductionist approach is reasonable – in other words, the system can be reduced to its components, and once these components are understood the behaviour of the system can also be understood.

Complex systems thinking stands in stark contrast to Newtonian thinking. While a detailed discussion of complexity is well beyond the scope of this paper, we will discuss some of its key aspects.

Dekker describes how complex behaviour "arises because of the interaction between the components of a system. It asks us to focus not on individual components but on their relationships. The properties of the system emerge as a result of these interactions; they are not contained within individual components"(8). "Complexity is a feature of the system, not of components inside of it"(9,10).

The system interactions, therefore, are not only critical in complex systems, they define them: "The behaviour of the system cannot be reduced to the behaviour of the constituent components. If we wish to study such systems, we have to investigate the system as such. It is at this point that reductionist methods fail"(11).

Put another way, the very act of reducing the system to its components destroys what makes it a system. A useful analogy is that you cannot understand how a forest works by collecting each species of animal and plant and pinning them to a board – this may tell you a lot about individual species, but it will leave you none the wiser with respect to the overall behaviour of the forest. 

And when it comes to the investigation of failure in complex systems, the reductionist assumption in Newtonian thinking has significant limitations. A reductionist investigation typically aims to identify only the individual components that failed, not the interactions that caused the failure.

Such investigations will not identify the interface and relational causes(12). This concept will become key to understanding why the peer review failed to identify the design errors in the FIU bridge.

A second key aspect is that some properties of the system are classed as emergent. In other words, they are not designed into the system, but naturally occur (and emerge) as the system behaves.

Dekker says we "used to say that the whole is more than the sum of its parts. Today we would say that the whole has emergent properties"(13).

As we will see in the case of the FIU bridge, the peer review undertaken by Louis Berger was certainly not consistent with what it was ‘designed’ to be; instead, it emerged from the interactions between Louis Berger and the other parties in the project.

While there is no single definition of what makes a system complex, these systems typically exhibit the following characteristics:

  • Emergence: as discussed above, because of interactions the whole of the system can be more than the sum of its parts. Unanticipated behaviour can emerge as part of these interactions.
  • Non-linearity: complex systems exhibit non-linear behaviour. There is not always a linear relationship between cause and effect – small causes can produce big effects, and combinations of causes can do the same, in some cases because of direct and indirect feedback loops.
  • Open systems: they interact with and respond to their environment, which means they do not tend towards equilibrium.
  • Adaptation and drift: they are adaptive and can reorganise themselves naturally, drifting away from their designed behaviour over time.

A construction project of sufficient size is, therefore, a complex system. The system is an open system that responds to external pressures, it has non-linearity and feedback loops, its interactions and internal relationships between components are complex and critical to understanding the system, and it displays emergent behaviour, particularly with respect to how parties behave relative to one another.

Fundamentally, these systems can fail without any of their components failing – instead the relationships and interfaces between the components can fail(14).

As a result of the interactions, one error or mistake can shatter the system: big effects do not require big causes. Small causes, given the right combinations and interfaces, have the potential to generate big effects because of non-linear behaviour(15).
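
This disproportion between cause and effect can be illustrated numerically. The sketch below is a generic demonstration unrelated to the bridge itself: it uses the logistic map, a textbook example of a non-linear system, to show how a perturbation of one part in a million in the initial conditions grows by orders of magnitude – the butterfly effect described in reference 15.

```python
# A minimal, generic illustration of non-linear sensitivity to initial conditions,
# using the logistic map in its chaotic regime (r = 4).
def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 30) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

baseline  = logistic_trajectory(0.300000)   # original initial condition
perturbed = logistic_trajectory(0.300001)   # a 'small cause': one part in a million

for step in (5, 15, 30):
    print(step, abs(baseline[step] - perturbed[step]))  # the gap typically grows by orders of magnitude
```

The same qualitative behaviour – disproportionate outcomes from small inputs – is what makes the interactions in a complex project so difficult to reason about component by component.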

Given that the interactions between components play a key role in the overall behaviour of a complex system, let us now examine the interactions between the party conducting the peer review, Louis Berger, and the designer of the bridge, FIGG. It is by understanding these interactions that we can better understand why the peer review failed. 

The peer review failure

On January 14, 2016, MCM was engaged under a design-build contract to deliver the bridge. MCM engaged FIGG both to act as engineer of record and to provide bridge design and engineering services for the project.

But in FIGG’s technical proposal to MCM, it stated that it, not an independent firm, would undertake the peer review. As discussed earlier, the peer review requirements demand that an independent firm – not the firm that designed the bridge – check that the bridge plans are compliant with the relevant design requirements.

FIGG’s technical proposal did not fulfil these requirements because it intended to undertake the review itself. Fast forward to a meeting on June 30, 2016, where the FDOT informed FIGG that it could not undertake the peer review – an independent review by an external firm would be required.

FIGG didn’t factor this situation into its fee with MCM, so it put out requests for bids. On July 5, 2016, Louis Berger submitted a bid for the peer review.

The scope of the review was not up for discussion – this was a peer review in accordance with the FDOT’s Plans Preparation Manual – and Louis Berger was required to ensure that the bridge, during all stages of construction and its service life, met the relevant standards. (It not only needed to check that the structure was compliant in its final (service) state, it also needed to check each stage of construction, e.g. casting, transportation, tensioning and detensioning.)

Louis Berger submitted its scope of work, along with a proposed fee of $110,000. But a month later Louis Berger became nervous. On August 10, it wrote to FIGG: “Please note the quote we have is for a very thorough scope and creation of independent models. Please inform FIU [Florida International University] on evaluating bids, as a lesser fee may be associated with less effort/value.”

This email also stated: “We would appreciate an opportunity to respond with a BAFO [Best and Final Offer] if necessary to be fair and level the assumptions.”

The firm was right to be nervous – FIGG had received three bids from consultants and Louis Berger’s was the most expensive at $110,000. The other two firms came in at $85,000 and $63,000.

The NTSB report doesn’t spell out what happened next, but it appears Louis Berger was given the opportunity to come back with, in its words, its best and final offer.

The following day Louis Berger confirmed to FIGG that it had revised its fee from $110,000 down to $61,000. This brought it in lower than the bid of $63,000. But the scope of work remained the same, despite the fee decrease.

Louis Berger also agreed to a reduced timetable of seven weeks for the work, down from the 10 weeks originally proposed, in order to meet FIGG’s requirements.

The contract between FIGG and Louis Berger was dated September 16, 2016, and it specifically stated: “Louis Berger will perform Independent Peer Review for the concrete pedestrian bridge plans in accordance with the project and request for proposal requirements and FDOT Plans Preparation Manual [Chapter 26]”.

Without knowing it, the parties had sown the seeds that would culminate in the collapse 18 months later. As discussed, FIGG made errors in the design of the bridge, which resulted in the joint between Member 11 and the bridge deck having insufficient strength.

This was the joint that blew out on the day of the failure and led to the collapse. As discussed above, Louis Berger’s peer review didn’t identify these design errors for two key reasons.

First, Louis Berger only analysed the structure in its service state. It didn’t analyse its performance during the construction or transportation stages, despite the FDOT requirements stating the review had to be comprehensive. It was during these stages when the serious cracking in the bridge initiated and began to grow.

Second, Louis Berger didn’t analyse the complete structure. It only analysed and checked the design of the structural members; it didn’t check the design of the joints between members.

If you don’t check the joints, you won’t identify design errors in these joints. But if a peer review is required to be comprehensive, the reviewer doesn’t get to pick and choose which parts of a structure to check – a bridge is either compliant in its entirety, for the duration of its construction and service life, or it’s not.

So why did Louis Berger pick and choose what to check? And it’s here that the interactions between Louis Berger and the other parties become important. When asked about the peer review of the bridge’s construction stages, a Louis Berger engineer told investigators: “My model was for the structure as one structure. Doing construction sequence staging analysis was not part of our scope. And again, doing such an analysis requires much more time than what we agreed about [with FIGG]”.

With respect to checking the joints it said: “. . . in the beginning, I suggested to do this kind of analysis, to analyse the connections. I’m talking about the nodes, or the joints to analyse the connections. However, the budget and time to do this actually was not agreed upon with the designer”.

Louis Berger had formally cut its fee – and with it, its budget for the work – which in turn appears to have driven cuts to the scope of work when it came to the practicalities of the review. These cuts were inconsistent with what it had contracted to do for FIGG, and with the comprehensive peer review required by the FDOT.

The story of what happened is straightforward: FIGG made design errors, and these errors were not identified in the peer review. Why it happened, however, is a tangled web of cascading decisions that began with FIGG assuming it could undertake the peer review internally.

Once the FDOT vetoed this idea, cost was thrust into the spotlight as a major consideration, resulting in Louis Berger drastically reducing its fee. This fee cut led, apparently informally, to a reduction in review scope that culminated in the design errors going undetected and the bridge collapsing.

This aspect of the FIU collapse illustrates that while individual components of the system can fail – in this case, Louis Berger didn’t identify the design errors – it was the interactions between Louis Berger and FIGG that played a key role in driving the behaviour that resulted in the failure of the peer review.

Closure

The application of a Newtonian thinking approach to systems that are fundamentally not Newtonian in nature often results in unanticipated failure modes. These failure modes typically are a result of the interactions between the components of the system.

In complex systems these interactions drive behaviour, and, if unaccounted for, create new pathways for failures in a project, be they technical, time, or financial in nature.

A systems thinking approach provides the opportunity to examine these (formal and informal) interactions to ensure they are considered from a risk management perspective. Managing risk in a construction project by only adopting a Newtonian view will inevitably fail to catch the key risks that lie at the heart of these complex projects.

References

1) Portions of this description of the bridge failure have been previously published at https://www.bradyheywood.com.au.

2) The NTSB report can be found at https://www.ntsb.gov/investigations/AccidentReports/Reports/HAR1902.pdf. 

3) Portions of this description of Newtonian thinking have been previously published in the Brady Review (2020), available at https://documents.parliament.qld.gov.au/tableOffice/TabledPapers/2020/5620T197.pdf.

4) Dekker, S, Cilliers, P & Hofmeyr, J-H, 2011, ‘The complexity of failure: Implications of complexity theory for safety investigations’, Safety Science, 49.

5) Dekker et al., 2011, The complexity of failure: Implications of complexity theory for safety investigations 

6) Boulton, Allen & Bowman, 2014, Embracing Complexity: Strategic Perspectives for the Age of Turbulence.

7) Portions of this description of complex systems thinking have been previously published in the Brady Review (2020), available at https://documents.parliament.qld.gov.au/tableOffice/TabledPapers/2020/5620T197.pdf.

8) Dekker et al., 2011, The complexity of failure: Implications of complexity theory for safety investigations.

9) Dekker et al., 2011, The complexity of failure: Implications of complexity theory for safety investigations.

10) At this point it is useful to highlight that a complex system is different to a complicated system. Dekker explains that certain systems ‘may be quite intricate and consist of a huge number of parts, e.g. a jet airliner. Nevertheless, it can be taken apart and put together again. Even if such a system cannot practically be understood completely by a single person, it is understandable and describable in principle. This makes them complicated.’ In other words, a complicated system is reductionist, it can be reduced to, and understood by, understanding its individual components. And while this system may have a large number of components, the manner in which they interact is well understood and predictable – and they do so because they were designed to do so. A complicated system, however, can become a complex system. A jet liner is a complicated system that becomes a complex system when it is placed in service. It is now subject to interaction with outside influences, such as air traffic control, schedule pressures, maintenance issues, human interaction, etc. (Dekker et al., 2011, The complexity of failure: Implications of complexity theory for safety investigations).

11) Dekker et al., 2011, The complexity of failure: Implications of complexity theory for safety investigations. 

12) ‘Yet simplicity and linearity remain the defining characteristics of the theories we use to explain bad events that emerge from this complexity.’ Dekker, S, 2011, Drift into Failure: From Hunting Broken Components to Understanding Complex Systems, Ashgate, Farnham, UK.

13) Dekker, 2011, Drift into Failure: From Hunting Broken Components to Understanding Complex Systems.

14) ‘The [system] accident results from the relationships between components (or software and people running them), not from the workings or dysfunction of any component part.’ (Dekker, 2011, Drift into Failure: From Hunting Broken Components to Understanding Complex Systems.) 

15) This is often referred to as the butterfly effect, a term introduced by Edward Lorenz through his research into weather modelling. Very small changes to initial conditions can produce very large consequences – just like a butterfly flapping its wings in Brazil and ruffling the air can cause a tornado in Texas.

Author: Dr Sean Brady, managing director, Brady Heywood. Email: sbrady@bradyheywood.com.au