Have you ever been to a meeting where the presenter displays a slide packed with percentiles, bar charts, and awkwardly colored stoplight diagrams? Did the confusing, overly complex, and poorly designed metrics distract you from the intent of the meeting? Did the experience leave you trying to figure out the difference between orange and yellow on that stoplight diagram, causing you to miss what the presenter was actually trying to convey? Even worse, did the metrics fail to tell the real story or answer the performance-related question you were hoping to have answered?
You’re not alone.
Ineffective metrics can harm rather than help if they are not developed to tell a story and communicate the right information to the right people. Instead, it’s critical to choose the right measurements that answer two high-level, core questions: “Are we doing the right things to prepare?” and “Are we really prepared?” If you give metrics only cursory thought, you’ll experience an all-too-common result: lower engagement from all stakeholders. Luckily, this is avoidable if you invest the time up front to create metrics that matter and use them to drive action within your program.
Why Measure?
Why should you invest time into analyzing planning activities and response/recovery capabilities? For the business continuity or IT disaster recovery program manager, six key reasons often exist:
- Answer fundamental questions about program performance. Metrics help answer questions such as “Can we recover?” At their core, metrics tell a story and deliver meaningful insight to those seeking to understand why the program exists and what capabilities are in place.
- Effect change. To effectively communicate the need to alter a program’s current course of action, a program manager must have the measurables to justify their position. This is especially true if the change requires an increase in resources. The ability to highlight a measure of less-than-optimal performance and drill down into its root cause is paramount in initiating, leading, and managing change.
- Drive continual improvement. Effective metrics don’t just tell a story, they also paint a picture for the future. Metrics provide a window into the organization’s existing shortfalls and gaps and can be used to highlight those areas that either have been improved or need to be improved.
- Drive action. “What gets measured gets done.” While the origin of this quote is debated, what isn’t disputed is the truth behind this statement. Good metrics help paint a picture of what work has been done and what still needs to be done.
- Show maturity over time. As stated previously, metrics should tell a story. Use them to show a trend of gradual or incremental improvement and to demonstrate the value of continued engagement by stakeholders.
- It’s a requirement. For many industries, business continuity is a requirement (e.g., FFIEC) and, increasingly, organizations choose to align to a standard (e.g., ISO 22301). With increasing visibility of disruptive events in the news and social media, it’s also a common customer requirement that an organization maintain a business continuity program. To meet these requirements, regardless of the driver, the proof will be in the metrics.
What Makes a Metric ‘Good’?
Now that we understand the “why” behind metrics, let’s look at what makes a metric “good.” First and foremost, a good metric should be appropriate for the audience receiving it. Put yourself in the recipient’s shoes. What do they want to know, and how can you give them that information in a way they can understand and act on?
Once you have the right information for the right audience, the major obstacle is how to most effectively present it. The following provides a list of considerations for making a metric effective:
- Quantify whenever possible; qualify only if you must. Qualitative metrics are “squishy” and left open to interpretation. Quantitative metrics remove subjectivity and enable more direct discussions.
- Use the language of the business. Communication among different disciplines can lose its effectiveness if too much of it is tied up in jargon. Look for common terms that everyone will understand and communication techniques that are in place in other areas of the business.
- Measure the current state, but also consider what will be important to measure in the future.
- Be clear and concise and show trends whenever possible.
- Where metrics fail to measure up to expectations, enable a discussion to determine why and what can be done to correct the performance issue.
Types of Metrics
Let’s examine the different types of metrics. Quality metrics for a business continuity program fall into one of three categories: activity and compliance metrics, product and service metrics, and goal-oriented metrics.
Activity and Compliance Metrics
Often referred to as key performance indicators (KPIs), activity metrics track the execution of key deliverables in the business continuity lifecycle (e.g., BIAs, plans, exercises). Regulatory or customer requirements often outline the need for specific program elements. By completing these activities and tracking their progress, the program manager can monitor and report on the organization’s progress toward compliance with those requirements. These items also allow other stakeholders to understand what work has been performed to date and what still needs to be executed against requirements, both internal and external.
Activity and compliance metrics should answer the question, “Is everyone involved in the program doing what they are supposed to?” Here are some examples of activity and compliance metrics:
- Number of BIAs completed/updated compared to total expected;
- Number of supplier business continuity assessments completed compared to the total number of “critical” suppliers; and
- Number of tabletop exercises conducted, and post-exercise reports produced and approved.
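To make this concrete, the following is a minimal sketch of how activity and compliance metrics might be computed as simple completion rates. The deliverable names and counts are hypothetical placeholders; a spreadsheet formula would serve equally well.

```python
# Minimal sketch: activity and compliance KPIs as completion rates.
# Deliverable names and counts are hypothetical placeholders.

activity_metrics = {
    # deliverable: (completed, expected)
    "BIAs completed/updated": (42, 50),
    "Supplier BC assessments (critical suppliers)": (18, 25),
    "Tabletop exercises with approved reports": (6, 8),
}

for deliverable, (completed, expected) in activity_metrics.items():
    rate = completed / expected * 100
    print(f"{deliverable}: {completed}/{expected} ({rate:.0f}% complete)")
```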
Product and Service Metrics
Often referred to as key risk indicators (KRIs), product and service metrics measure the ability of the organization’s current-state capabilities to meet continuity requirements. These metrics allow the program manager and other stakeholders to identify where gaps exist in the organization’s ability to recover products, services, and their associated processes within stated downtime tolerances.
Product and service metrics should answer the question, “Is what we are doing effective, and can we recover in line with our risk tolerance and stakeholder expectations?” The following are examples of product and service metrics:
- Product or service recovery capability measured against leadership’s stated downtime tolerance; and
- Business “process” or “activity” recovery capability measured against the requested recovery objective, while identifying potential gaps.
When product and service metrics are tied to the organization’s risk tolerance, they can provide a powerful answer to the question “Can we recover?” that resonates with senior management.
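One way to operationalize this is a simple gap check: compare each product or service’s demonstrated recovery capability (e.g., from exercise results) against leadership’s stated downtime tolerance. The sketch below uses hypothetical service names and hour values.

```python
# Sketch: product and service (KRI-style) gap analysis comparing
# demonstrated recovery time against stated downtime tolerance.
# Service names and hour values are hypothetical.

services = [
    # (service, demonstrated recovery hours, downtime tolerance hours)
    ("Customer portal", 6, 4),
    ("Order fulfillment", 10, 24),
    ("Payroll", 30, 24),
]

for name, demonstrated, tolerance in services:
    gap = demonstrated - tolerance
    if gap <= 0:
        print(f"{name}: {demonstrated}h vs {tolerance}h tolerated -> within tolerance")
    else:
        print(f"{name}: {demonstrated}h vs {tolerance}h tolerated -> GAP of {gap}h")
```

Any service showing a gap becomes an obvious candidate for a corrective action and a root cause discussion.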
Goal-Oriented Metrics
Goal-oriented metrics measure the performance of the program against a set of preset goals, ideally classified into short-, medium-, and long-term timeframes. These metrics are intended to drive continual improvement in the program by setting targets, and they help protect against the stagnation caused by simply executing the same methodology year in and year out. Goal-oriented metrics should answer the question, “Are we improving the effectiveness of our program?” Examples of goal-oriented metrics include:
- Close the top-20 corrective actions this quarter;
- 90 percent of employees complete business continuity awareness training this year; and
- Improvement on a maturity model or conformance assessment.
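Because goal-oriented metrics are targets, tracking them reduces to comparing current progress against the preset goal. A minimal sketch, using the hypothetical examples above:

```python
# Sketch: goal-oriented metrics as progress against preset targets.
# Goal names, current values, and targets are hypothetical.

goals = [
    # (goal, current, target)
    ("Top-20 corrective actions closed this quarter", 14, 20),
    ("Employees completing BC awareness training (%)", 78, 90),
]

for goal, current, target in goals:
    status = "met" if current >= target else "behind"
    print(f"{goal}: {current}/{target} ({status})")
```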
How to Develop Metrics that Matter
Now that we understand the “why” and the “what,” it’s time to start building a scorecard. Below is a step-by-step process that can help you create and track metrics that matter. As you work through the steps, remember that a key consideration for developing good metrics is understanding your audience. By understanding the “who,” you can better build your metrics to fuel engagement.
- Brainstorm activity and compliance metrics. Build a list of potential performance measures and then work to narrow it down.
- Document product and service metrics. Chances are you’ve already engaged leadership to understand the organization’s priorities and gauge their risk appetite. Use this background information to document these priorities, along with maximum downtimes. Think through whether you could answer “yes” or “no” regarding the ability to recover each product or service.
- Include compliance requirements. There may be some requirements that are mandated (e.g., regulatory, contractual, self-imposed). Make sure these are captured in the performance measures you’ve developed.
- Map metrics to stakeholders. Once you have a solid running list of metrics, the next step is to map them to the relevant interested parties and confirm each metric matters to its audience. This step should allow you to refine, add, or eliminate metrics as needed.
- Identify how to collect and track data. The next step is to identify how to collect the requisite information and where to track progress. Many business continuity management software tools can track most activity or product and service metrics, but some information may need to be recorded manually. If you’re not using a business continuity management tool, you may have to create an Excel spreadsheet or database to manage this information (see the sketch after this list).
- Figure out when and how to deliver. Once you have figured out what to track, how to track it, and who to deliver it to, the next step is to make sure that you deliver metrics in the appropriate forum at the right frequency. Map your new metrics to recurring meetings you have scheduled with your sponsor and team, steering committee, and recovery team owners in the business.
- Identify ways to measure progress over time. As a program matures, you will need to show improvement over time or relative changes in program maturity. Some examples of this include measuring adherence to a standard over time, identifying improvements resulting from audits, or using a maturity model to show gradual growth.
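Pulling the last three steps together, the sketch below shows one way to record scorecard snapshots over time without a dedicated BCM tool. The file name, fields, and figures are assumptions; an Excel workbook or a small database table would work the same way.

```python
# Minimal sketch: tracking scorecard snapshots over time without a
# dedicated BCM tool. File name, fields, and figures are hypothetical.
import csv
from pathlib import Path

SCORECARD = Path("bc_scorecard.csv")
FIELDS = ["period", "metric", "value", "target"]

def record(period: str, metric: str, value: float, target: float) -> None:
    """Append one metric snapshot for a reporting period."""
    new_file = not SCORECARD.exists()
    with SCORECARD.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({"period": period, "metric": metric,
                         "value": value, "target": target})

def trend(metric: str) -> list[tuple[str, float]]:
    """Return (period, value) pairs for one metric, in file order."""
    with SCORECARD.open() as f:
        return [(row["period"], float(row["value"]))
                for row in csv.DictReader(f) if row["metric"] == metric]

# Example: show quarter-over-quarter movement of one metric.
record("2024-Q1", "Plans approved (%)", 70, 100)
record("2024-Q2", "Plans approved (%)", 85, 100)
print(trend("Plans approved (%)"))  # [('2024-Q1', 70.0), ('2024-Q2', 85.0)]
```

Recording each reporting period as its own row is what makes the maturity story possible later: the trend for any metric falls out of the same data you already collect.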
Pitfalls to Avoid
Ineffective metrics can cause more harm than good if they are not deliberately developed to fuel engagement. The following is a list of the most common pitfalls to avoid when developing metrics that matter.
- Focusing Solely on KPIs. KPIs are an important aspect of every scorecard, but KPIs in isolation don’t tell a complete story. It’s important to understand how many business continuity plans have been completed and approved, or exercises conducted, but those numbers don’t inform stakeholders about the organization’s response and recovery capabilities or the increased levels of resilience that the business continuity program delivers. This, in turn, can lead to decreased interest and waning engagement from stakeholders.
- Delivering metrics at the wrong time to the wrong people. This is one of the most common issues business continuity program managers face. One size does not fit all for stakeholders and metrics. The metrics you deliver to a customer will differ from those you deliver to a regulator, which will in turn differ from those you deliver to a senior executive. Packaging metrics in a way that communicates effectively to different stakeholders is what drives action and continual improvement.
- Presenting metrics without a root cause analysis. If a metric fails to measure up to expectations, not being prepared to discuss the root cause or tie it to a recommended corrective action will diminish engagement. It will lead stakeholders to question why you measure at all.
- Treating metrics as an end rather than a means to an end. Metrics are a bridge to continual improvement, not a destination in themselves. Actions and change should be the end state of developing meaningful metrics.
Conclusion
As noted at the outset, ineffective metrics can harm rather than help if they are not developed to tell a story and communicate the right information to the right people. Instead, it’s critical to choose the right measurements that answer two high-level, core questions: “Are we doing the right things to prepare?” and “Are we really prepared?”
You can follow all of the guidelines and steps in this article, but how do you know if what you’ve developed is “good?” Examine your scorecard and then ask yourself the following questions:
- Are your recovery capabilities consistent with business continuity targets and requirements?
- Are you doing the things you’re supposed to do to prepare?
- Do your interested parties feel informed to help drive continual improvement?
If what you’ve developed can answer those three questions, then chances are you’ve created metrics that matter, which will, in turn, drive engagement across your stakeholders.