
Introduction

“Plan for what is difficult while it is easy, do what is great while it is small”
(Sun Tzu & Griffith 1964)

As global commodity markets continue to experience uncertainty, our focus on increasing certainty in our mining operations is more important than ever. There would be few mining houses not engaged in a robust financial scenario modelling process to understand the impacts of external and internal forces acting on the business. However, the risk profiling applied to financial modelling of our plans generally does not extend to the assumptions made in technical disciplines, such as geotechnical engineering.

Experienced geotechnical engineers in mining know that both too much and too little failure is costly for a mine operation. However, many companies are unsure how to evaluate the financial risks of navigating between conservative and aggressive design selection in their operation so they can ride out the bottom of the cost curve with greater certainty of continuation and delivery of ore. This paper outlines the common areas for error in the estimation of safe design, and proposes a process for evaluating this risk in financial terms.

Geotechnical engineering is anecdotally considered a black art: it can be accurate or precise, rarely both, and sometimes neither. This is largely due to the complexity, variability and uncertainty of the parameters used to calculate stability.

Geotechnical risk is one that tends to blindside operators, and there is a long history of underestimating both the likelihood and the consequences of geotechnical instability, particularly its effect on the capacity of an operation to continue to supply ore to market. Like many black swan events, the devil is in the detail. To understand the level of risk involved in the designs we approve, and the potential for them to be inadequate in their tolerance of the drivers of failure in a rockmass, we must understand how and where our assessments are likely to be incorrect.


Estimating the Cost of Failure

The cost of a wall failure is rarely reported in explicit terms, but basic knowledge of mining costs tells us it can be significant and can even lead to early closure at low-margin sites. Regardless, we still see many events blindside operators because the failure mechanism, while recognised in early modelling, was classified as unlikely to occur in operational risk assessments.

The risk of such a failure emerging is typically addressed with geotechnical controls for managing safety impacts to people and assets, and the residual risk is then deemed “acceptable”. Rarely are the financial implications thought through and mitigated ahead of time.

In June 2018 Australian Mining reported:

“The Super Pit rock falls, which included two hits within less than 12 hours, have caused joint venture partner Newmont Mining to reduce its Australian gold production guidance for 2018….

The two large slips destroyed a ramp and blocked access to part of the pit, cutting its expected production from 700-800k ounces down to a range of 550-600koz.” (Zhao 2018)

With mining costs as low as $1000/oz today, this equates to an operating profit loss in the order of $300M for that year. When we add the costs of cleaning up, potential unplanned cutbacks and the opportunity losses associated with the capital involved in Life of Mine extensions, the impact on the project’s NPV can be more than double the direct cost of the delay in ore recovery.
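As a rough illustration of the arithmetic, consider a minimal sketch in which the gold price is an assumed figure (the realised price is not reported here) and the lost ounces are taken at the top of the reported downgrade:

```python
# Illustrative only: the gold price is an assumption, not a reported figure.
lost_oz = 200_000        # top of the reported downgrade (~150-200koz lost)
gold_price_aud = 2_500   # A$/oz, assumed for illustration
mining_cost_aud = 1_000  # A$/oz, as quoted above

lost_profit = lost_oz * (gold_price_aud - mining_cost_aud)
print(f"Lost operating profit: A${lost_profit / 1e6:.0f}M")  # ~A$300M
```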

If we look further afield from our Australian shores, we can see the example of Bingham Canyon, where a mine landslide damaged access to the pit and several large equipment assets. Clean-up and losses from this geotechnical event were reported to cost between US$700M and US$1B (Petley 2013). The flow-on effects to the mine workers and the impact on the broader economy were significant.

With monitoring technology firmly in place at both events, it is clear that the failures were expected in the short term and were well managed from a people-safety perspective. However, the financial risks realised after the events were significant, and likely not foreseen. It is unknown whether the failure mechanisms were predicted in early modelling, or whether they sat above a FoS of 1.2-1.5.

Geotechnical risk can occur at multiple scales, and each site’s exposure comes with a unique cost profile that can help us understand (without dismissing safety) whether the cost of failure is greater than the opportunity cost of a more conservative design that would eliminate or mitigate the failure.

At a batter scale it is typically easy to make a general assessment of this impact. As we move to multi-batter and inter-ramp wall scale, the prediction of failure, and the costs associated with it, becomes more complex. The cost of a rockfall or batter failure is of consequence only if it cannot be managed safely, or if the event causes a stoppage or slowdown to production. Considerations here would include the cost of mitigating controls, such as wall protection blasting (using presplit, mid-split or trim shots, for example), the need for bunding under walls and along ramps, or increased standoff distances for higher-risk areas.

There are also equipment and labour costs to clean up a failure that is not safely contained. Sites can assess the impact of rockfalls on the operation at the design stage by understanding the relationship between berm width and batter height and its potential for retaining failures (a first-pass rule of thumb is sketched below). Sites can also assess the likelihood of the berm exceeding its capacity by understanding the kinematics of the defects in the rockmass fabric and applying basic cumulative frequency analysis against wall angle.
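One commonly used first-pass check is the modified Ritchie criterion, which relates minimum catch berm width to batter height; a minimal sketch follows, noting that site kinematics and rockfall modelling should govern the final choice:

```python
def min_berm_width_m(batter_height_m: float) -> float:
    """Modified Ritchie criterion: a common first-pass estimate of the
    berm width needed to retain rockfall from a batter of a given height."""
    return 0.2 * batter_height_m + 4.5

for height in (10, 20, 30):
    print(f"{height} m batter -> ~{min_berm_width_m(height):.1f} m berm")
```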

On a much larger scale, an inter-ramp to overall slope failure is likely to have significant costs that could threaten the financial viability of an operation (as in the Bingham Canyon example). If a wall failure occurs across or onto a ramp, the impact on production is likely to be significant. To understand the financial risk involved, one must first have developed an accurate cost model. To understand the time required to ‘clean up’ the failure, an estimate of likely failure size can be taken from modelling. Comparing this to the bulk cubic metres (BCM) or tonnes per day rate of the site will establish how many days of clean-up are required, or give an estimate of the time taken to cut back behind the failure.
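A minimal sketch of that comparison, using hypothetical figures:

```python
# Hypothetical figures: clean-up duration from modelled failure volume.
failure_volume_bcm = 1.2e6        # modelled failure size (BCM)
site_rate_bcm_per_day = 25_000    # site movement rate (BCM/day)

cleanup_days = failure_volume_bcm / site_rate_bcm_per_day
print(f"~{cleanup_days:.0f} days of clean-up")  # ~48 days
```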

To simplify the process of calculating the cost of failure, we propose using an activity-based financial cost model that is factored to capture the impact on free cash flow and life-of-mine extension costs. This will provide guidance on the cost of halting mine activities and/or delaying access to ore while undertaking a cutback. The model does not need to factor in geotechnical information. It should be designed so that it provides guidance on the daily cost of stopping an operation and extending it, allowing the user to extrapolate those costs over subsequent days. Additional capital requirements for cutbacks or remediation can then be factored in.

To fully risk assess a design for the financial impacts of failure, a Marginal Revenue/Marginal Cost approach can assist in a ‘worth it’ scenario analysis. The base case Net Present Value (NPV) of the most conservative design is used as the control. From here we are able to assess both the upside and downside of relatively more aggressive design cases (Figure 1). In Table 1 below we have identified useful parameters for the financial cost model.

Variables:
  - Days in month
  - Month
  - % impacted
  - OT variable cost %

Assumptions:
  - FY yearly production
  - FY monthly production, TMM
  - FY average daily production, TMM
  - FY monthly production, OP ore
  - FY average daily production, OP ore
  - FY OP grade
  - FY OP recovery
  - Commodity price (A$/g, /oz or /t)
  - OP cost ($/t TMM)
  - OT cost ($/mill tonne at variable selection)
  - LOM fixed cost extension, $M/day (e.g. 12 months = $90M)

TABLE 1 – Suggested parameters for the financial cost model
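A minimal sketch of how the Table 1 parameters might be assembled into a daily cost of stoppage; every value below is an illustrative assumption to be replaced with site data:

```python
# Illustrative assumptions only; substitute site values from Table 1.
pct_impacted = 1.0                # fraction of production halted
daily_ore_t = 10_000              # FY average daily production, OP ore (t)
grade_g_per_t = 1.5               # FY OP grade (g/t)
recovery = 0.92                   # FY OP recovery
price_per_g = 90.0                # commodity price (A$/g)
daily_tmm_t = 60_000              # FY average daily production, TMM (t)
op_cost_per_t_tmm = 4.5           # OP cost ($/t TMM)
lom_fixed_per_day = 90e6 / 365    # LOM fixed cost extension ($/day)

# Revenue deferred while stopped, less variable mining cost not incurred,
# plus the fixed cost of extending the life of mine by a day.
daily_revenue = daily_ore_t * grade_g_per_t * recovery * price_per_g * pct_impacted
daily_variable_saved = daily_tmm_t * op_cost_per_t_tmm * pct_impacted
daily_cost_of_stoppage = daily_revenue - daily_variable_saved + lom_fixed_per_day
print(f"Indicative cost of stoppage: A${daily_cost_of_stoppage / 1e6:.2f}M/day")
```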

When assessing the stability of the base case and less conservative designs, the impact on the schedule run-out should provide indicative mine costs of failure. The use of a cost model is effective because it provides an estimate of the cost of both clean-up and cutback, reducing all of these costs to units of time so they can be compared.


As outlined above, comparing the modelled failure size against the site’s BCM/day rate establishes how many days of clean-up are required, or the time taken to cut back behind the failure. In Figure 1 below we can see that the high-risk option is favourable if clean-up or remediation can be completed within 48 days of mining. Further risks to market supply fall outside the scope of this document but may be managed through stockpiling if considered significant.

FIG 1 – Example of Marginal Revenue/Marginal Cost Study
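A sketch of the break-even logic behind Figure 1; the NPVs and daily cost are hypothetical placeholders, chosen here to reproduce the 48-day example:

```python
# Hypothetical values illustrating the break-even logic of Figure 1.
npv_conservative = 500e6         # base case NPV, most conservative design (A$)
npv_aggressive = 560e6           # higher-risk design NPV if no failure occurs (A$)
cost_per_day_of_delay = 1.25e6   # from a daily cost model like the one above (A$/day)

breakeven_days = (npv_aggressive - npv_conservative) / cost_per_day_of_delay
print(f"Higher-risk design is 'worth it' if recovery takes under {breakeven_days:.0f} days")
```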


The Hidden Errors in Geotechnical Assessments for Risk Evaluation

For the purposes of this discussion, geotechnical risk refers to the consequence and likelihood matrix used to risk assess operations when deciding on the risk-to-reward trade-offs an operation is prepared to accept.

When a new mine or cutback is designed, it is typically designed to ‘eliminate’ uncontrolled movement of ground or large-scale instabilities based on known failure mechanisms. More often than not, however, this design rests on a sparsity of data behind the estimation of rockmass strength and performance.

In the authors’ experience, the mining industry uses factors of safety (FoS), typically ranging between 1.2 and 1.5, to address this uncertainty. This is also the recommended guidance from Read & Stacey (2010); however, it is neither a silver bullet nor a guarantee that the wall will be stable. The FoS should be viewed as an indicative number, and when you are given one you should want to know whether it has statistical significance and robust engineering practice behind it.

There are many sources of error in the estimation of a FoS that require sound judgement to navigate. Such sources of error or ambiguity include:

  1. The amount and quality of data that has gone into the calculation
  2. The quality of the samples and the inclusion of failed tests in data analytics
  3. The subjective assessment of field-collected data
  4. The robustness of the statistical analyses
  5. The computational analyses used to determine stability estimates

A FoS value is usually the result of a deterministic analysis, calculated by defining the forces driving failure and the forces resisting it. These forces are estimated from field-related stresses and strains and from rockmass strength data, which are applied to strength criteria such as Hoek-Brown or Mohr-Coulomb.
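In its simplest form, the deterministic FoS is the ratio of capacity to demand along the candidate failure surface:

\[ \mathrm{FoS} = \frac{\sum \text{resisting forces}}{\sum \text{driving forces}} \]

with FoS > 1 indicating stability, and the 1.2-1.5 design range providing a margin for the uncertainties discussed below.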

Both criteria can be derived from laboratory tests for friction and cohesion, unconfined or confined/triaxial compressive strength, tensile strength and Young’s modulus, to name but a few. They may also be estimated from core logging or field mapping applied to a rockmass classification model that provides a generalised, indicative rockmass strength. Further estimation and analysis are required to understand the shear strength of the defects in the rock, which will typically form the path of least resistance to failure. In many analyses, however, these features are ignored unless they are at a large scale.
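For reference, the standard forms of these two criteria are the Mohr-Coulomb shear strength

\[ \tau = c + \sigma_n \tan\phi \]

and the generalised Hoek-Brown criterion

\[ \sigma_1' = \sigma_3' + \sigma_{ci}\left( m_b\,\frac{\sigma_3'}{\sigma_{ci}} + s \right)^{a} \]

where c is the cohesion, φ the friction angle, σn the normal stress on the sliding surface, σci the uniaxial compressive strength of the intact rock, and mb, s and a are rockmass constants typically derived from GSI (Hoek & Brown 2019).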

Care must be taken to ensure that only data that passes the lab test standard is used in the analysis. There is great temptation to use data that has failed a lab test standard, owing to the high cost of testing and the low volume of samples available, or to ignorance on the part of the engineer. Commonly, models are built from singular rockmass and defect parameters derived from a range of results, which may be the mean, the mode, the lower 25th percentile or even as conservative as the 10th percentile, depending on the level of conservatism applied by the engineer. This is common practice but may not reflect a number that is statistically significant.

In statistics, if a sample size is too small (<30 samples) there is a risk the results will not be valid; too large is a waste of time and money; and clustered samples can create bias. Few mines will collect sufficient, and sufficiently spaced, samples for statistical evaluation because the cost of data collection is significant. If you have fewer than 30-40 samples, small sample theory may need to be applied to ensure you are factoring in greater error (Grenon et al 2015). Read (2013) has also qualified a process, largely accepted in the mining geotechnical community, for Target Levels of Data Confidence (TLDC). This requires an understanding of the expected value of reliability, E(R).
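A minimal sketch of the small-sample adjustment: with few results, the Student’s t-distribution widens the confidence interval on a strength parameter relative to a large-sample (normal) assumption. The values below are illustrative only:

```python
import numpy as np
from scipy import stats

# Illustrative UCS results (MPa) from a small laboratory programme.
ucs = np.array([78, 95, 102, 61, 88, 110, 74, 91, 83, 99])

n = len(ucs)
mean, sem = ucs.mean(), stats.sem(ucs)

# Student's t interval inflates the uncertainty for small n (here n = 10).
low, high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sem)
print(f"Mean UCS {mean:.0f} MPa, 95% CI ({low:.0f}, {high:.0f}) MPa from n = {n}")
```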

For more details and further reading on sampling and testing, a good source is the International Society for Rock Mechanics (Ulusay 2014), which has produced guidelines to assist engineers in navigating this uncertainty. Depending on the test, the cost may be $100-$1500 per sample. When you have 5-10 rock types and 5-10 parameters to test for, plus multiple structures, this starts to add up. The cost of drilling a hole can be upwards of $200K, depending on depth and any further in-field testing required, and it would not be unusual for a geotechnical drilling program to cost several million dollars. Cost savings can be achieved by combining this program with a geological drilling program.

There are caveats to this decision: the angled approach and orientation bias of these holes can cause complications in data collection and blind spots to defect orientation. Consideration must be given to the expertise of the geologist in geotechnical data collection, and to how appropriate the hole angle is for good sampling and for avoiding defect blind spots. Where sufficient data has been collected, a Probability of Failure (PoF) approach should be used. This method accounts for all useable data to understand the distribution of parameters and the probability that any data point is more representative than others in estimating the parameter’s capacity.
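A minimal sketch of a PoF estimate via Monte Carlo simulation of a simple dry planar-failure FoS, sampling cohesion and friction angle from assumed distributions; every input below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical input distributions for a dry planar sliding block.
c = rng.normal(25e3, 5e3, n)              # cohesion (Pa)
phi = np.radians(rng.normal(35, 3, n))    # friction angle (rad)
psi = np.radians(35)                      # failure plane dip (rad)
height = 30.0                             # slope height (m)
gamma = 26e3                              # unit weight (N/m^3)

# Geometry per metre of strike: plane area and wedge weight (vertical face).
area = height / np.sin(psi)
weight = 0.5 * gamma * height**2 / np.tan(psi)

# Classic planar FoS: resisting (cohesion + friction) over driving forces.
fos = (c * area + weight * np.cos(psi) * np.tan(phi)) / (weight * np.sin(psi))
pof = np.mean(fos < 1.0)
print(f"Mean FoS {fos.mean():.2f}, PoF {pof:.1%}")
```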

A further trap for young players is modelling a rockmass as homogeneous or ‘intact’ at a significant scale (multi-bench; see Figure 2). This approach will likely overestimate the rockmass strength by ignoring subtle yet pervasive fabrics, such as widely spaced jointing or foliation. Weathered rockmasses frequently host remnant structures that can be difficult to detect but have been the leading cause behind many prominent large-scale failures.

FIG 2 – Scale assessment for GSI model applications (Hoek & Brown 2019)

Armed with good data, a geotechnical engineer can carry out an analysis using the method appropriate to the rockmass: limit equilibrium for more homogeneous or weaker slopes where circular failure is expected, or shear strength reduction in finite element and other numerical modelling where failure is likely to be driven by complex structure, pressure and/or stress. The more complex the rockmass, the more one should be inclined to use advanced numerical modelling codes that can adequately represent its behaviour. This modelling will give the engineer a volume for the expected failure.

At many sites the FoS of a wall is probably well known by management, but its probability of failure is unlikely to have been considered. Large-scale failures are more frequently driven by step-path failure along jointing, faults and fractures in the rockmass. Understanding the variability and fabric of the rockmass itself will assist in determining the risk of large-scale wall failure.


Conclusions

There are many sources of uncertainty in determining a geotechnical design that will allow the optimised extraction of a resource. Relying on a FoS number without understanding the rigour of the process behind the parameter estimation and modelling can lead to unintended consequences. Following industry standards and applying a probability mindset to the methodology will inform a more robust financial risk assessment for the show-stopping instabilities that may occur.

When the geotechnical model is well understood, applying a financial analysis process such as marginal cost/marginal revenue against a base case NPV can assist in understanding how much financial risk a site can realistically tolerate across more and less conservative design options.

We recommend reviewing industry-accepted risk tolerance tables such as those produced by Read & Stacey (2010), Read (2013) and Grenon et al (2015) to better understand how statistical analyses can apply to geotechnical design.


References

Freund, J. E. & Simon, G. A. (1997). Modern Elementary Statistics (Prentice Hall International: New Jersey).

Grenon, M., Boudreau, C., Bruneau, G. & Caumartin, R. (2015). Statistical characterisation of intact rock properties at a Canadian underground mining project, in Underground Design Methods 2015, pp 351-366 (Australian Centre for Geomechanics: Perth).

Hoek, E. & Brown, E. T. (2019). The Hoek-Brown failure criterion and GSI – 2018 edition, Journal of Rock Mechanics and Geotechnical Engineering, 11(3):445-463.

Petley, D. (2013). Is the Bingham Canyon copper mine landslide the most expensive single mass movement in history? The Landslide Blog, AGU. Available from: https://blogs.agu.org/landslideblog/2013/04/16/is-the-bingham-canyon-copper-mine-landslide-the-most-expensive-single-mass-movement-in-history/ [accessed 9 October 2020].

Read, J. & Stacey, P. (2010). Guidelines for Open Pit Slope Design (CSIRO Publishing: Collingwood).

Read, J. (2013). Data gathering, interpretation, reliability and geotechnical models, in Slope Stability 2013, pp 81-90 (Australian Centre for Geomechanics: Perth).

Sun Tzu & Griffith, S. B. (1964). The Art of War (Clarendon Press: Oxford).

Ulusay, R. (ed.) (2014). The ISRM Suggested Methods for Rock Characterization, Testing and Monitoring: 2007-2014 (Springer). Available from: https://www.isrm.net/gca/index.php?id=1151

Zhao, V. (2018). Kalgoorlie Super Pit JV partners reveal rock fall impact, Australian Mining. Available from: https://www.australianmining.com.au/news/kalgoorlie-super-pit-jv-partners-reveal-rock-fall-impact/