
What Good Looks Like: Making the Case for Leading Indicators

It is not the adoption or collection of leading indicators that leads to improvement; it is the actions taken with the information that determine success.

The safety profession has an unhealthy fixation on purely negative measures. OSHA recordable and lost-time injuries spring to mind—both are lagging and, I would suggest, negative indicators. Once they occur, there is nothing to be done but investigate and, hopefully, learn enough to avoid similar incidents in the future. Even these metrics are flawed, though, in that a lack of injuries or incidents does not necessarily equate to a safe workplace. It could simply be a matter of luck.

Many organizations now realize that simply measuring lagging data in the form of incidents and injuries isn't enough. Because of this, safety-conscious companies have begun to adopt leading indicators that attempt to show how the safety process is working. The most common indicators are near-miss reports and work site observations. Near misses, however, are merely incidents that did not reach their full potential and rely on a mishap to occur before being observed and reported. In a mature and effective safety process, leading indicators further up the value chain are used, including observable inputs such as the behaviors and conditions that could lead to the near miss or incident.

The Purpose of Metrics
While safety experts have made a sound case for adopting leading safety indicators instead of relying solely on lagging ones (e.g., injury rates), the reality is that creating and sustaining metrics for leading indicators can be daunting. Lagging indicators share a universal set of metrics driven by regulatory requirements and are used by organizations globally; leading indicator metrics, by contrast, are varied and slow to develop, and their adoption is often glacial, even within a single organization.

As a baseline, a metric is defined as a quantifiable measure that is used to track and assess the status of a specific process. Often a metric is a simple proxy or substitute for a broader and generally more complex process. When implemented correctly, metrics can provide organizations with the following potential benefits:

  • Guide stakeholders on how they are doing and whether they are meeting expectations
  • Indicate where resources need to be proactively focused on the most critical issues
  • Allow for comparisons, either to each other or to an established set of norms, such that deviations or exceptions can be readily spotted
  • Enable organizations to focus on the right things at the right times
  • Create consistent measures within an organization and communicate findings and direction
  • Identify gaps in safety processes and systems
  • Assess leadership and employee engagement

Despite the many potential benefits, there are some caveats and rules that must be established in order to utilize metrics effectively:

  • Like a goal, a good metric must be S.M.A.R.T.—specific, measurable, achievable, realistic, and timely. Lofty and idealistic aspirations belong in vision statements, not metrics.
  • For a metric to work, expectations must be set and clearly communicated. Specifically, criteria are necessary to indicate a "good" range and a "needs improvement" range or set of ranges. Ideally, these ranges have prescribed action items established to drive improvement of the process.
  • Metrics are indicators of a process. Management of the process is the focus, not management of the metrics. Gaming of metrics can and will happen when managing to the metric. In this case, you get what you ask for.
  • Multiple metrics often provide better insight into a complex process than a single metric.
  • Both quantitative and qualitative metrics should be established so that high-level KPIs remain meaningful. For example, the number of safety inspections is not as valuable without also considering the quality of those inspections.
  • Collecting metrics is only part of the process. Acting on the information is necessary to drive improvement.

Which Leading Indicators to Use
There are two primary leading indicators from which most other metrics derive:

  • Inspections—a collection of one or more observations.
  • Observations—a single instance of a behavior or condition (e.g., a worker wearing a hard hat). Observations can be determined to be safe or at risk.

These two primary metrics are the basic building blocks of a more comprehensive safety observation program and concurrently aid in the development of key leading indicators for an organization’s measurement of safety. In conjunction with the use of appropriate safety checklists for hazards and processes within an organization, these metrics, along with their derivative components, can help an organization determine what is safe or what good looks like.
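To make the two building blocks concrete, here is a minimal sketch in Python. The record shapes (`Observation`, `Inspection`) and field names are hypothetical, not drawn from any particular safety software; the point is simply that an inspection aggregates observations, each judged safe or at-risk, from which a percent-safe figure can be derived.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    checklist_item: str  # e.g., "hard hat worn" (hypothetical field)
    safe: bool           # True = safe, False = at-risk finding

@dataclass
class Inspection:
    site: str
    observations: list   # a collection of one or more observations

def percent_safe(inspection):
    """Share of an inspection's observations judged safe (None if empty)."""
    obs = inspection.observations
    if not obs:
        return None
    return sum(o.safe for o in obs) / len(obs)

insp = Inspection("Site A", [
    Observation("hard hat worn", True),
    Observation("standing on top step of ladder", False),
])
print(percent_safe(insp))  # 0.5
```

Derivative metrics—percent safe by site, by crew, or by checklist item—fall out of the same structure once observations are recorded consistently against a checklist.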

1. As inspections increase, incidents go down.
This is the easiest metric to measure, and it is important to promote inspection activity. However, collecting more and more safety inspections, by itself, accomplishes very little. That would be like trying to lose weight by standing on the scale more often. The scale provides information and is necessary to gain insight, but it is simply the first step.

2. The probability of having an incident decreases as the number and diversity of the people performing inspections increase.
Sending the safety team out to conduct more inspections isn't the answer. In order for safety to improve, ownership by the team is essential. This means that everyone in the organization, from leaders to front-line supervisors to workers, has a part to play in identifying hazards, reporting them, and helping to mitigate the risk they pose—both short- and long-term.

3. Too many 100% safe inspections are predictive of higher injury rates.
Typically, a high number of inspections with no at-risk findings is seen on work sites with a relatively higher rate of injury. One would think that as safety efforts improve, fewer at-risk findings would surface. However, as long as humans are involved in the process, error will be present. In addition, as one systemic issue is discovered and addressed, another is likely to surface that was virtually unseen before. Another potential issue with reporting at-risk observations is the negative connotation it can carry within an organization. The opportunity to improve should be seen as a gift rather than an accusation or a curse. Finding and addressing at-risk items allows an organization to learn and grow, while driving continuous improvement overall.
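This warning sign can be checked directly in the data. The sketch below is an assumed illustration, not a prescribed method: each inspection is reduced to a list of safe/at-risk booleans per site, and sites where nearly every inspection reports zero at-risk findings are flagged for a closer look. The 90% threshold is arbitrary and would need calibration against an organization's own history.

```python
from collections import defaultdict

def flag_suspiciously_clean_sites(inspections, threshold=0.9):
    """Flag sites where an unusually high share of inspections report
    zero at-risk findings -- a possible sign of superficial inspecting
    rather than genuinely safe conditions."""
    totals = defaultdict(int)    # inspections per site
    all_safe = defaultdict(int)  # inspections with no at-risk findings
    for site, findings in inspections:  # findings: list of bools, True = safe
        totals[site] += 1
        if all(findings):
            all_safe[site] += 1
    return sorted(s for s in totals if all_safe[s] / totals[s] >= threshold)

data = [
    ("Site A", [True, True]),   # 100% safe
    ("Site A", [True, True]),   # 100% safe again
    ("Site B", [True, False]),  # one at-risk finding
    ("Site B", [True, True]),
]
print(flag_suspiciously_clean_sites(data))  # ['Site A']
```

A flagged site is not proof of a problem; it is a prompt for a conversation about how inspections are being performed there.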

4. Too many at-risk observations are predictive of higher injury rates.
While this metric may seem counter to the previous 100% safe metric, it is a relative measurement. Finding at-risk items is not the problem. Finding the same systemic issue repeatedly can be. As an example, an observer finds someone standing on the top of a ladder during a work site inspection. As a conscientious person, the observer stops the work and makes it safe. The issue is discussed with the worker, a safe resolution is sought, the problem is averted, and the observer moves on. But how many times has this happened? What if the data indicated it happened across the organization many times in the last month? Finding and fixing the issue is a start, but only by addressing the causal factors—why it keeps happening—will improvement be sustained.
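Surfacing that repetition is a simple aggregation exercise. As a minimal sketch—assuming at-risk findings are recorded as consistent checklist labels, which is itself a practical hurdle—a count by finding type separates one-off fixes from recurring, systemic issues:

```python
from collections import Counter

def recurring_at_risk(findings, min_count=3):
    """Count at-risk findings by type; anything repeated at least
    min_count times points to a systemic cause worth investigating,
    not just a one-off to fix in place."""
    counts = Counter(findings)
    return {item: n for item, n in counts.items() if n >= min_count}

month = ["standing on top of ladder"] * 4 + ["missing guardrail"]
print(recurring_at_risk(month))  # {'standing on top of ladder': 4}
```

In this example, the ladder finding recurs often enough to warrant a causal-factor review, while the single guardrail finding does not clear the (arbitrary, illustrative) threshold.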

How Metrics Can Be Used to Drive Improvement
Ideally, the goal is to develop actionable leading indicators. The action should not be to mandate or influence the metric itself, but to elicit constructive conversations, as well as to develop or establish value in the action represented by the metric itself. For example, inspections provide a wealth of leading indicator information from which organizations can derive insight into the differences between work as imagined or expected (e.g., what is defined in a safety and health program) and work as performed (e.g., what occurs in the field). Mandating inspections won’t provide more insight if the observers don’t find value in the process. Additionally, it is hard to manage risk if it is unknown where the risk resides. At-risk findings can help clarify where the risk is, both real and perceived.

To be clear, simply picking from a list of indicators and measuring the results is a lesson in futility. Metrics and expectations are to be established to determine whether a process is in control or effective. The results will indicate where on the spectrum of success your organization lies. From there, action must be taken to adjust the trajectory of the process. The subsequent results then provide insight as to whether the actions proved effective and the trends are tracking in a positive direction.

Conclusion
It is important to remember that it is not the adoption or collection of leading indicators that leads to improvement; it is the actions taken with the information that determine success. It is less about the metrics and more about the conversations and feedback they elicit. When adopting any leading indicator, make sure it is actionable. In driving continuous improvement, it is the frequency and quality of the feedback generated from the findings that determine the level of success.

This article originally appeared in the June 2019 issue of Occupational Health & Safety.
